OPTIMA
MATHEMATICAL PROGRAMMING SOCIETY NEWSLETTER
OCTOBER 1996, No. 51
INTERIOR POINT METHODS:
Current Status and Future Directions
Introduction and Synopsis
The purpose of this article is twofold: to provide a synopsis of the major developments in interior point methods for mathematical programming in the last twelve years for the researcher who is unfamiliar with interior points, and to discuss current and future research directions in interior point methods for researchers who have some familiarity with interior point methods. Throughout the article, we provide brief selective guides to the most lucid relevant research literature as a means for the uninitiated to become acquainted (but not overwhelmed) with the major developments in interior point methods.
Interior point methods in mathematical programming have been the largest and
most dramatic area of research in optimization since the development of the simplex method for linear programming. Over the last twelve years, interior point
methods have attracted some of the very best researchers in operations research,
applied mathematics, and computer science. Approximately 2,000 papers have
been written on the subject following the seminal work of Karmarkar [10]. See for
example the netlib electronic library initiated by Eberhard Kranich, and the World
Wide Web interior point archive:
http://www.mcs.anl.gov/home/otc/InteriorPoint/archive.html .
Robert M. Freund¹
M.I.T.
Shinji Mizuno²
The Institute of Statistical Mathematics, Tokyo
This article provides a synopsis of the major developments in interior point methods for mathematical programming in the last twelve years, and discusses current and future research directions in interior point methods, with a brief selective guide to the research literature.
¹ M.I.T. Sloan School of Management, Building E40-149A, Cambridge, MA 02139, USA. email: rfreund@mit.edu
² The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 100, JAPAN. email: mizuno@ism.ac.jp
conference notes 10
book reviews 13
gallimaufry 16
Interior Point Methods Freund/Mizuno
Interior point methods have permanently changed the landscape
of mathematical programming theory, practice, and computation.
Linear programming is no longer synonymous with the celebrated
simplex method, and many researchers now tend to view linear
programming more as a special case of nonlinear programming
due to these developments.
The pedagogy of interior point methods has lagged the research on interior point methods until quite recently, partly because these methods (i) use more advanced mathematical tools than do pivoting/simplex methods, (ii) have mathematical analyses that are typically much more complicated, and (iii) are less amenable to geometric intuition. For most of the last twelve years, educators have struggled with issues of how and where to introduce interior point methods into the curriculum of linear and/or nonlinear programming, and how to cogently exposit interior point methods to students (and fellow researchers). As the research on interior point methods for linear programming has settled down (and the research on interior points for nonlinear programming has heated up), a number of new book projects on linear programming and/or interior point methods have recently appeared on the scene which promise to surmount these pedagogical difficulties. For example, in the last two years alone, the following new book projects on linear programming have been undertaken which contain substantive and rigorous treatments of interior point methods: Linear Programming: A Modern Integrated Analysis by Romesh Saigal (Kluwer, 1995), Interior Point Methods in Mathematical Programming edited by Tamas Terlaky (Kluwer, 1996), Introduction to Linear Optimization by Dimitris Bertsimas and John Tsitsiklis (Athena Scientific, forthcoming), Linear Programming: Foundations and Extensions by Robert Vanderbei (Kluwer, forthcoming), Interior Point Algorithms: Theory and Analysis by Yinyu Ye (John Wiley, forthcoming), Primal-Dual Interior-Point Methods by Stephen Wright (SIAM, forthcoming), and Linear Programming I: An Introduction by George Dantzig and Mukund Thapa (Springer Verlag, forthcoming).
To begin our synopsis of interior point methods for linear programming, we consider the linear programming problem in standard form:

P: minimize c^T x
   s.t. Ax = b
        x ≥ 0,

where x is a vector of n variables, whose standard linear programming dual problem is:

D: maximize b^T y
   s.t. A^T y + s = c
        s ≥ 0.

Given a feasible solution x of P and a feasible solution (y, s) of D, the duality gap is simply c^T x − b^T y = x^T s ≥ 0.
We introduce the following notation, which will be very convenient for manipulating equations, etc. A feasible solution x of P is strictly feasible if x > 0, and a feasible solution (y, s) of D is strictly feasible if s > 0. Let e denote the vector of ones, i.e., e = (1, ..., 1)^T. Suppose that x > 0. Define the matrix X to be the n x n diagonal matrix whose diagonal entries are precisely the components of x. Then Xe = x, and X^{-1}e = (1/x_1, ..., 1/x_n)^T. Also, notice that both X and X^{-1} are positive-definite symmetric matrices.
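As a quick illustration of this notation, the following small NumPy sketch (the array values are arbitrary, chosen only for illustration) verifies that Xe = x, that X^{-1}e = (1/x_1, ..., 1/x_n)^T, and that X is positive definite:

```python
import numpy as np

# Illustrative values only: any strictly positive x works.
x = np.array([2.0, 4.0, 5.0])
X = np.diag(x)                      # the diagonal matrix X built from x
e = np.ones_like(x)                 # the vector of ones

print(X @ e)                        # equals x itself
print(np.linalg.inv(X) @ e)         # equals (1/x_1, ..., 1/x_n)
print(np.linalg.eigvalsh(X).min())  # positive, so X is positive definite
```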
There are many different types of interior point algorithms for
linear programming, with certain common mathematical themes
having to do with the logarithmic barrier function. In the authors'
opinions, most interior point algorithms fall into one of three
main categories: affine scaling methods, potential reduction methods, and central trajectory methods. We now briefly summarize
these three categories.
Affine Scaling Methods. The basic strategy of the affine scaling algorithm is as follows: given a strictly feasible solution x̄ of P, construct a simple local ellipsoidal approximation of the feasible region of P that is centered at x̄. Call this ellipsoid E_x̄. Then, optimize the objective function c^T x over E_x̄, and use the resulting direction with a suitable steplength to define a new algorithmic iterate. The specifics of this strategy are as follows. Given a strictly feasible solution x̄ of P, the Dikin ellipsoid at x̄ is defined as:

E_x̄ = {x ∈ R^n | Ax = b, (x − x̄)^T X̄^{-2} (x − x̄) ≤ 1}.

(It is straightforward to show that E_x̄ is always contained in the feasible region of P whenever x̄ is strictly feasible.) The affine scaling direction at x̄ is then the solution to the following direction-finding problem:

(ADFP_x̄): minimize c^T d
          s.t. Ad = 0
               d^T X̄^{-2} d ≤ 1.
Note that (ADFP_x̄) is a convex program with all linear components except for one convex quadratic constraint. It can be solved directly by forming the associated Karush-Kuhn-Tucker system and then solving an associated linear equation system. One can also write down a closed-form solution algebraically after a little bit of matrix manipulation. Letting d̄ denote the solution to problem (ADFP_x̄), the next iterate of the affine scaling algorithm is obtained by setting x_new = x̄ + αd̄, where the steplength α is chosen by one of several strategies to ensure the strict feasibility of the new iterate x_new while maintaining a suitable improvement in the objective function.
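To make this concrete, here is a minimal NumPy sketch of one affine scaling iteration. It solves the direction-finding problem through the normal equations of its KKT system and steps a fixed fraction of the distance to the boundary (the fraction 0.66 is an arbitrary illustrative choice; practical codes use various steplength rules):

```python
import numpy as np

def affine_scaling_step(A, c, x, step_frac=0.66):
    """One iteration of the primal affine scaling algorithm (a sketch).

    Assumes x is strictly feasible for: minimize c^T x s.t. Ax = b, x >= 0.
    """
    X = np.diag(x)
    AX = A @ X
    # Project the scaled cost X c onto the null space of A X.
    w = np.linalg.solve(AX @ AX.T, AX @ (X @ c))
    u = X @ c - AX.T @ w                  # projected scaled gradient
    if np.linalg.norm(u) < 1e-12:
        return x                          # (near-)optimal already
    d = -X @ (u / np.linalg.norm(u))      # affine scaling direction
    # Step a fixed fraction of the distance to the boundary x + alpha*d >= 0.
    neg = d < 0
    alpha = step_frac * np.min(-x[neg] / d[neg]) if neg.any() else 1.0
    return x + alpha * d
```

On the toy problem minimize x_1 subject to x_1 + x_2 = 1, x ≥ 0, iterating from (0.5, 0.5) drives x_1 toward 0 geometrically while preserving Ax = b and strict positivity.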
The affine scaling algorithm is attractive due to its simplicity and
its good performance in practice. (However, its performance is
quite sensitive to the starting point of the algorithm.) The proof of
convergence of the algorithm in the absence of degeneracy is
fairly straightforward, but under degeneracy, such a proof is surprisingly long and difficult. There have not been any results on
bounds on the efficiency of the algorithm, but it is suspected (for
very good reasons that are beyond the scope of this synopsis) that
the algorithm is exponential time in the worst case.
Some variants/extensions on the basic affine scaling algorithm are the dual affine scaling algorithm (designed to work on the dual problem D), as well as various versions that work simultaneously on the primal and on the dual, using a variety of ellipsoidal constructions in the space of the primal and the dual. Two
comprehensive references on affine scaling methods are Tsuchiya
[19] and the book by Saigal [17].
Potential Reduction Methods. Potential reduction methods typically are designed to find improving solutions to the following optimization problem:

PRP: minimize f(x, y, s) = q ln(c^T x − b^T y) − Σ_{j=1}^{n} ln(x_j)
     s.t. Ax = b
          x > 0,
          A^T y + s = c
          s > 0,
where the objective function f(x, y, s) is called the potential function, and q is a parameter of the potential function. It was this type of problem that Karmarkar introduced in his seminal paper [10]. Notice that the "first part" of the potential function is q times the logarithm of the duality gap, and we would like to drive this part to −∞. The second part of the potential function is the logarithmic barrier function, which is designed to repel feasible solutions from the boundary of the feasible region. The potential function is a surrogate for the goal of reducing the duality gap to zero, and under some mild assumptions regarding the linear program P, one can easily show that the duality gap is bounded from above by a function of the value of the potential function, i.e.,

c^T x − b^T y ≤ C_1 e^{f(x,y,s)/q}

for a constant C_1 that is problem specific.
Now, suppose that the parameter q has been set. In a typical potential reduction method, we have a current iterate (x̄, ȳ, s̄) and we seek new iterate values (x', y', s') with a suitable decrease in the potential function. There are a number of tools that can be used to accomplish this, such as Newton's method, a "partial" Newton's method that only accounts for the Hessian of the second part of the potential function, and projective transformation methods combined with projected steepest descent. In a typical potential reduction algorithm, there is a guaranteed decrease in the potential function f(x, y, s) of at least an amount δ at each iteration, where δ > 0. Then, from the above, the duality gap is therefore decreased by a fixed proportion in at most q/δ iterations. This reasoning is then used to establish an upper bound on the total number of iterations needed to obtain a near-optimal solution within some optimality tolerance ε from some starting point (x^0, y^0, s^0), namely

(q/δ) (ln((c^T x^0 − b^T y^0)/ε) + C_2)

iterations, for a constant C_2 that depends on the problem P and on the starting point (x^0, y^0, s^0). This type of logic underlies most potential reduction algorithms.
Although potential reduction methods do not have the simplicity
of affine scaling methods, they are more attractive than affine scal
ing algorithms for at least two reasons: they have a performance
guarantee, and they always produce dual information and so al
low the user to specify an optimality tolerance to the computer.
Also, with a linesearch of the potential function, they can be
made very efficient in practice.
(Karmarkar's original algorithm [10] used a very specific form of PRP and used the machinery of projective transformations in the algorithm and in the proofs of the algorithm's performance guarantees. Despite their original mystique, projective transformations are not necessary for potential reduction algorithms to work either in theory or in practice. However, in the authors' opinions, the framework of projective transformations is nevertheless of paramount importance, at least conceptually, in the understanding of interior point methods in general.)
There are numerous types of potential reduction methods, some using the potential function above, others using the so-called Tanabe-Todd-Ye symmetric primal-and-dual potential function

g(x, y, s) = q ln(c^T x − b^T y) − Σ_{j=1}^{n} ln(x_j) − Σ_{j=1}^{n} ln(s_j),

which has additional desirable properties that go beyond this brief synopsis. In general the potential reduction methods all aim to drive a potential function to −∞ by a variety of primal, dual, or primal-and-dual algorithmic tools. Almost all potential reduction methods enjoy good to excellent performance guarantees, i.e., complexity bounds. Potential reduction methods have not received much attention in terms of computational testing, due perhaps to early difficulties (which have since been overcome) of applying potential reduction methods in a combined Phase I-Phase II environment. For a comprehensive survey of potential reduction methods, see Anstreicher [4] or Todd [18].
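A small numerical sketch (all values arbitrary, for illustration) shows why the Tanabe-Todd-Ye potential is a surrogate for the duality gap: halving the gap, here by scaling s, lowers the potential by exactly (q − n) ln 2, so any choice q > n forces the gap toward zero as the potential is driven down:

```python
import numpy as np

def tty_potential(x, s, q):
    """Tanabe-Todd-Ye potential g = q ln(x^T s) - sum_j ln(x_j) - sum_j ln(s_j),
    written in terms of the duality gap x^T s = c^T x - b^T y."""
    return q * np.log(x @ s) - np.log(x).sum() - np.log(s).sum()

n = 5
x = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # arbitrary strictly positive iterate
s = np.array([0.2, 0.1, 0.4, 0.05, 0.3])
q = n + np.sqrt(n)                         # a typical parameter choice with q > n

g_before = tty_potential(x, s, q)
g_after = tty_potential(x, 0.5 * s, q)     # duality gap halved
print(g_before - g_after)                  # equals (q - n) * ln 2 > 0
```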
Central Trajectory Methods. Interior point methods based on the central trajectory are by far the most useful in theory, the most used in practice, and, in our judgement, have the most aesthetic qualities. (In fact, one leading researcher has even referred to the central trajectory as "the most important object in modern optimization theory.") The central trajectory of the linear program P is obtained as the solution to an amended version of P, where a parameterized logarithmic barrier term is added to the objective function. Consider the logarithmic barrier problem BP(μ), parameterized by the positive barrier parameter μ:

BP(μ): minimize c^T x − μ Σ_{j=1}^{n} ln(x_j)
       s.t. Ax = b
            x > 0.
The Karush-Kuhn-Tucker conditions for BP(μ) are:

Ax = b, x > 0
c − μX^{-1}e = A^T y.     (1)

If we define s = μX^{-1}e, then we can rewrite these optimality conditions as:

Ax = b, x > 0
A^T y + s = c, s > 0     (2)
XSe − μe = 0.
Let (x(μ), y(μ), s(μ)) denote the solution to system (2) for the given positive parameter μ. Then the set Γ = {(x(μ), y(μ), s(μ)) | μ > 0} is defined to be the central trajectory (also known as the central path) of the linear program P. From the first two equation systems of (2), we see that a solution (x, y, s) along the central trajectory is strictly feasible for the primal and the dual problem, and that the duality gap on the central trajectory is x^T s = e^T XSe = μ e^T e = μn, which follows from the third equation system of (2). Substituting this equation in the third system of (2), we obtain the following equivalent and parameter-free characterization of the central trajectory:
Ax = b, x > 0
A^T y + s = c, s > 0     (3)
XSe − (x^T s / n)e = 0.
The third equation system in (2) or (3) is precisely where the
nonlinearity arises, and in general it is not possible to solve (2) or
(3) in closed form except in trivial cases.
The strategy in most central trajectory methods is to solve for approximate solutions along the central trajectory (2) or (3) for a decreasing sequence of the duality gap (or equivalently, of the barrier parameter μ) that tends to zero in the limit. There are a number of ways to carry out this strategy. For example, for a given value of the duality gap or of the barrier parameter μ, one can choose to approximately optimize BP(μ) or, equivalently, to approximately solve (1), (2), or (3), or to approximately solve some other equivalent characterization of the central trajectory. Also, one can choose a number of ways to approximately solve the system of nonlinear equations under consideration (Newton's method is one obvious choice, as are predictor-corrector methods and other higher-order methods, preconditioned conjugate gradient methods, etc.). Overlayed with all of this is the way in which the numerical linear algebra is implemented. Furthermore, one needs to decide how to measure "approximate" in the approximate solution. Last of all, there is considerable leeway in developing a strategy for reducing the duality gap (or the barrier parameter μ) at each iteration. (For example, aggressively shrinking the duality gap seems like a good idea, but will also increase the number of iterations of Newton's method (or other method) that is used to re-solve (approximately) the new system of nonlinear equations.)
In terms of theoretical performance guarantees, the best central trajectory methods are guaranteed to reduce the duality gap of the iterates by a fixed proportion in O(√n) iterations.
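One Newton step for system (2) at a strictly feasible iterate can be sketched as follows. This is a toy dense implementation via the normal equations with the matrix A D A^T, D = XS^{-1}; production codes organize the linear algebra very differently:

```python
import numpy as np

def newton_step_central_path(A, x, s, mu):
    """Newton direction for system (2) at a strictly feasible iterate:
         A dx = 0,  A^T dy + ds = 0,  S dx + X ds = -(XSe - mu*e).
    Dense normal-equations sketch, not production linear algebra.
    """
    r = x * s - mu                       # centering residual XSe - mu*e
    D = x / s                            # diagonal of X S^{-1}
    dy = np.linalg.solve(A @ (D[:, None] * A.T), A @ (r / s))
    ds = -A.T @ dy
    dx = -(r + x * ds) / s
    return dx, dy, ds
```

A damped step keeping x + αd_x > 0 and s + αd_s > 0 gives the next iterate; repeating while shrinking μ traces the central trajectory.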
A short summary of central trajectory methods is given in Jansen
et al. [9]. More comprehensive treatments of central trajectory
methods are given in den Hertog [8] and the forthcoming book by
Wright [22].
The rest of this article is devoted to a discussion of important current research topics in interior point methods. We discuss the following topics, in order: infeasible interior point methods, computational aspects of interior point methods, semidefinite programming, convex programming and self-concordance, homogeneous and self-dual methods, linear and nonlinear complementarity problems, and theoretical issues related to interior point methods.
Infeasible Interior Point Methods
By definition, interior point methods naturally work on the interior (or relative interior) of a problem's feasible region, and consequently one obvious issue is how an initial feasible interior point can be obtained. Over the years, a number of techniques for handling the "feasibility" or "Phase I" problem have been proposed, including combined Phase I-Phase II methods, shifted-barrier methods, and homogeneous self-dual methods. In practice, methods based on a variation of a relatively simple algorithm, the "primal-dual infeasible-interior-point method," proved to be very successful. The basic method attempts to reduce the feasibility and optimality gaps at each iteration by applying Newton's method to the system (2) or (3) from an initial point (x^0, y^0, s^0) which is not necessarily feasible for either P or D, i.e., Ax^0 ≠ b and/or A^T y^0 + s^0 ≠ c, but which is "interior" in that x^0 > 0 and s^0 > 0. In this sense, the algorithm is a simple variant of the standard central trajectory path-following algorithm, but where the iterates are not necessarily feasible at each iteration. Let (x̄, ȳ, s̄) be an iterate. Then the Newton direction (d_x, d_y, d_s) for the algorithm is derived from the nonlinear equation system (2) and is the solution of the system

A d_x = −(Ax̄ − b)
A^T d_y + d_s = −(A^T ȳ + s̄ − c)     (4)
S̄ d_x + X̄ d_s = −(X̄S̄e − μe).
Of course, if the iterate (x̄, ȳ, s̄) is feasible, which is the case in a standard central trajectory interior point algorithm, then the right-hand sides of the first and second equations of (4) are 0, and consequently the directions d_x and d_s are orthogonal. As it turns out, the orthogonality of d_x and d_s is essential for an "easy" analysis of Newton's method for solving (2), and is lost when the iterate (x̄, ȳ, s̄) is infeasible.
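A minimal dense sketch of the Newton direction for (4), with the primal and dual residuals carried on the right-hand side, also lets one check numerically that d_x^T d_s = 0 exactly when the iterate is feasible (the problem data below are illustrative):

```python
import numpy as np

def infeasible_newton_step(A, b, c, x, y, s, mu):
    """Newton direction for system (4); reduces to the feasible-case step
    when the residuals rp and rd vanish. Dense normal-equations sketch."""
    rp = A @ x - b                 # primal infeasibility  Ax - b
    rd = A.T @ y + s - c           # dual infeasibility    A^T y + s - c
    rc = x * s - mu                # centering residual    XSe - mu*e
    D = x / s
    dy = np.linalg.solve(A @ (D[:, None] * A.T),
                         A @ ((rc - x * rd) / s) - rp)
    ds = -rd - A.T @ dy
    dx = -(rc + x * ds) / s
    return dx, dy, ds
```

Starting from a feasible iterate the computed d_x and d_s are orthogonal; from an infeasible starting point their inner product is generally nonzero, which is precisely what complicates the analysis.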
Although quite successful computationally, the primal-dual infeasible-interior-point method long defied any reasonable theoretical analysis, or even a proof of convergence, until a few years ago, when satisfactory analyses by several researchers emerged. One of the difficulties in the analysis of the algorithm was the lack of foreknowledge of the existence of feasible solutions (interior or not), the existence of the central trajectory, or the existence of optimal LP solutions. In the case when either P or D is infeasible, it is theoretically possible to detect the infeasibility of the problems P and/or D, but such detection mechanisms do not necessarily work well in practice.
To overcome these difficulties, another type of infeasible-interior-point algorithm has been developed. The basic algorithm uses the following variant of the system (3):

Ax = b + θ(Ax^0 − b), x > 0
A^T y + s = c + θ(A^T y^0 + s^0 − c), s > 0     (5)
XSe − θ((x^0)^T s^0 / n)e = 0,

where (x^0, y^0, s^0) is the initiating point of the algorithm (where x^0 > 0 and s^0 > 0, but quite possibly Ax^0 ≠ b and A^T y^0 + s^0 ≠ c), and θ ∈ (0, 1]. Here, the goal is to use Newton's method to solve (5) for a decreasing sequence of values of θ tending to 0.
If ||X^0 S^0 e − ((x^0)^T s^0 / n)e|| is small enough, the point (x^0, y^0, s^0) is a good approximate solution of the system (5) for θ = 1. The set of solutions to the system (5) forms a path parameterized by θ, which does not lie in the feasible region of the problem unless the initial point is feasible. Nevertheless, if P and D are feasible, the path leads to optimal primal and dual solutions as θ goes to 0. If either P and/or D is infeasible, there exists a positive lower bound on θ for which the system (5) has a solution, and the path diverges to infinity as θ approaches this lower bound. By exploiting these and other features of the path, one can develop an infeasible interior point path-following algorithm which either solves P and D or detects infeasibility in a polynomial number of iterations.
The former type of algorithm, based on the Newton system (4), is preferred in practice, probably because it is more effective than the latter method (based on (5)) when the linear program is feasible. The authors believe, however, that the latter type of algorithm is most likely to outperform the former when the underlying linear program is either infeasible or is close to being infeasible. Affine scaling methods and potential reduction methods starting from an infeasible starting point have also been developed, but practical versions of these algorithms have not received very much attention. For a comprehensive summary of infeasible interior point algorithms, see Mizuno [14].
Computational Aspects of Interior Point Methods
for Linear Programming
Much of the initial excitement about interior point methods
stemmed from the rather remarkable computational promise of
the method, as articulated by Karmarkar and others. Twelve years
later, computational aspects of interior point methods are still of
paramount importance. Although neither author is particularly
qualified to comment on computational issues, it is only fair to
briefly discuss key aspects of computation nevertheless.
After much effort in designing and testing various interior point methods for linear programming in the 1980s and early 1990s, the computational picture of interior point methods for linear programming has somewhat settled down. While a suitably customized simplex method implementation enjoys superior performance over interior point methods for most routine applications of linear programming, interior point algorithms are superior to the simplex method for certain important classes of problems. The simplex method tends to perform poorly on large massively degenerate problems, whereas interior point methods are immune to degeneracy (and are aided by it in certain ways), and so one can expect an interior point algorithm to outperform the simplex method to the extent that the underlying linear program has massive degeneracy. This is the case in large scheduling problems, for instance, which give rise to LP relaxations of binary integer programs. Also, because the linear-algebra engine of an interior point iteration works with a Cholesky factorization of the matrix AX^2A^T, interior point algorithms will outperform the simplex method to the extent that the matrix A is conducive to producing relatively sparse Cholesky factors of the matrix AX^2A^T. Such is the case in large staircase multiperiod linear programs, for example. Other than these two general problem types, there are not many other ways to predict which method will be more efficient. The "state-of-the-art" of interior point computation as of the early 1990s was described in the article of Lustig, Marsten, and Shanno [13]; more recent advances (using higher-order methods that are up to 25 percent faster) are described in Andersen et al. [3]. For a comprehensive treatment of computational issues in interior point methods, we recommend the forthcoming book by S. Wright [22].
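The linear-algebra kernel just described can be sketched in a few lines (dense here, with random illustrative data; production codes use sparse Cholesky with fill-reducing orderings and treat dense columns of A specially):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))      # constraint matrix (full row rank here)
x = rng.random(8) + 0.1              # strictly positive iterate
rhs = rng.standard_normal(3)

M = A @ np.diag(x**2) @ A.T          # the matrix A X^2 A^T, positive definite
L = np.linalg.cholesky(M)            # factored once per iteration
y = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # two triangular solves
```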
There are a number of software codes for interior point methods for linear programming, including PCx (by Czyzyk, Mehrotra, and Wright), HOPDM (by Gondzio et al.), BPMPD (by Mészáros), OSL (IBM), CPLEX/Barrier (CPLEX Optimization Inc.), XPRESS-MP (Dash Associates), LOQO (by R. Vanderbei), and LIPSOL (by Y. Zhang). Information on these and other interior point method
codes is updated regularly on the World Wide Web page:
http://www.mcs.anl.gov/home/wright/IPPD/.
Some of these codes are free to the research community, others are
solely commercial, and others are a hybrid.
Computational issues for nonlinear optimization are deferred to the following sections on semidefinite programming and on convex optimization.
Semidefinite Programming
In the opinion of the authors, semidefinite programming (SDP) is the most exciting development in mathematical programming in the 1990s. SDP has applications in traditional convex constrained optimization, as well as in such diverse domains as control theory and combinatorial optimization. Because SDP is solvable via interior point methods, there is the promise that these applications can be solved efficiently in practice as well as in theory. Before defining a semidefinite program, we need to amend our notation. Let S^n denote the set of symmetric n x n matrices, and let S^n_+ denote the set of positive semidefinite n x n matrices. Then S^n_+ is a closed convex cone in R^{n(n+1)/2} of dimension n(n+1)/2. We write "X ⪰ 0" to denote that X is symmetric and positive semidefinite, and we write "X ⪰ Y" to denote that X − Y ⪰ 0 ("⪰" is the Löwner partial ordering on S^n). Here, X is any symmetric matrix, not necessarily a diagonal matrix as denoted earlier. We write "X ≻ 0" to denote that X is symmetric and positive definite, etc. Let M, X ∈ S^n. The linear function M • X is defined as M • X = Σ_{i=1}^{n} Σ_{j=1}^{n} M_{ij} X_{ij}. Then a semidefinite program (SDP) is an optimization problem of the form:

SDP: minimize C • X
     s.t. A_i • X = b_i, i = 1, ..., m
          X ⪰ 0,

where X is an n x n matrix, and the data for the problem are the m symmetric matrices A_1, ..., A_m, the symmetric matrix C, and the m-vector b. Notice that SDP has a linear objective function and linear equality constraints, just like a linear program. However, the standard LP constraint that x is nonnegative is replaced by the constraint that the variable X is symmetric and positive semidefinite. (It is helpful to think of X ⪰ 0 as stating that the vector of eigenvalues of X is nonnegative.) The Lagrange dual of SDP is derived as:

SDD: maximize b^T y
     s.t. Σ_{i=1}^{m} y_i A_i + S = C
          S ⪰ 0.
Given a feasible solution X of SDP and a feasible solution (y, S) of SDD, the duality gap is simply C • X − b^T y = X • S ≥ 0.
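These ingredients are easy to state computationally. A small sketch (with illustrative matrices) of the trace inner product and the eigenvalue characterization of X ⪰ 0:

```python
import numpy as np

def inner(M, X):
    """The matrix inner product M . X = sum_{i,j} M_ij X_ij = trace(M^T X)."""
    return float(np.sum(M * X))

def is_psd(X, tol=1e-10):
    """X is positive semidefinite iff X is symmetric with nonnegative eigenvalues."""
    return bool(np.allclose(X, X.T) and np.linalg.eigvalsh(X).min() >= -tol)

X = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3: X is PSD
Y = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3: Y is not
S = np.eye(2)
```

For feasible X and S, the duality gap X • S is nonnegative because it is an inner product of two positive semidefinite matrices, mirroring x^T s ≥ 0 in LP.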
As stated above, SDP has very wide applications in convex optimization. The types of constraints that can be modeled in the SDP framework include: linear inequalities, convex quadratic inequalities, lower bounds on matrix norms, lower bounds on determinants of symmetric positive semidefinite matrices, lower bounds on the geometric mean of a nonnegative vector, plus many others. Using these and other constructions, the following problems (among many others) can be cast in the form of a semidefinite program: linear programming, optimizing a convex quadratic form subject to convex quadratic inequality constraints, minimizing the volume of an ellipsoid that covers a given set of points and ellipsoids, maximizing the volume of an ellipsoid that is contained in a given polytope, plus a variety of maximum eigenvalue and minimum eigenvalue problems.
SDP also has wide applicability in combinatorial optimization. A number of NP-hard combinatorial optimization problems have convex relaxations that are semidefinite programs. In many instances, the SDP relaxation is very tight in practice, and in certain instances in particular, the SDP relaxation is provably quite tight and can be converted to provably very good feasible solutions with provably good bounds on optimal objective values. Last of all, SDP has wide applications in control theory, where a variety of control and system problems can be cast and solved as instances of SDP.
As it turns out, virtually all of the mathematics, constructions, and even the notation of interior point methods for linear programming extends directly to SDP. This is truly remarkable. (The extension of interior point methods to SDP was developed independently by Alizadeh [1] and Nesterov and Nemirovskii [15] using different frameworks.) For example, the analogous parameterized logarithmic barrier problem BP(μ) for linear programming extends to SDP as:

BSDP(μ): minimize C • X − μ ln(det(X))
         s.t. A_i • X = b_i, i = 1, ..., m,
              X ≻ 0,

where notice that ln(det(X)) replaces the logarithmic barrier function Σ_{j=1}^{n} ln(x_j). The optimality conditions for this problem can be written as:

A_i • X = b_i, i = 1, ..., m, X ≻ 0
Σ_{i=1}^{m} y_i A_i + S = C, S ≻ 0     (6)
XS − μI = 0,
which should be compared with (2). The third equation system of (6) can alternatively be represented in many different equivalent ways, including, for example, (XS + SX)/2 − μI = 0, resulting in many different nonequivalent Newton directions for solving (6). In terms of theoretical performance guarantees, the best central trajectory methods for SDP are guaranteed to reduce the duality gap of the iterates by a fixed proportion in O(√n) iterations (where the variable X is an n x n matrix). This is identical to the theoretical performance guarantee for linear programming, even though the dimension of the variables in SDP is much larger (n(n+1)/2 as opposed to n for linear programming).
There are many very active research areas in semidefinite programming. In the arena of theory, there is research on the geometry and the boundary structure of SDP feasible regions (including notions of degeneracy). There is research related to the computational complexity of SDP, such as decidability questions, certificates of infeasibility, and duality theory. There is active research on SDP relaxations of combinatorial optimization in theory and in practice. As regards interior point methods, there are a host of research issues, mostly involving the development of different interior point algorithms and their properties, including rates of convergence, performance guarantees, etc.
Because SDP has so many applications, and because interior point methods show so much promise, perhaps the most exciting area of research on SDP has to do with computation and implementation of interior point algorithms. Researchers are quite optimistic that interior point methods for SDP will become practical, efficient, and competitive. However, in the research to date, computational issues have arisen that are much more complex than those for linear programming; see for example Alizadeh, Haeberly, and Overton [2]. These computational issues are only beginning to be well understood. They probably stem from a variety of factors, including the fact that the nonlinear conditions in the third equation system of (6) can be represented in very many different ways, resulting in many different Newton directions, see above. Other contributing issues might be the fact that SDP is not guaranteed to have strictly complementary optimal solutions (as is the case in linear programming), and the fact that the Jacobian of the KKT system defining the Newton step can be much more poorly conditioned than is the typical case in linear programming. Furthermore, there are challenges in developing the linear algebra (both symbolic and numeric) for solving SDP by interior point algorithms. Finally, because SDP is such a new field, there is no representative suite of practical problems on which to test algorithms, i.e., there is no equivalent version of the netlib suite of industrial linear programming problems.
A comprehensive survey of semidefinite programming is the article by Vandenberghe and Boyd [20].
Convex Programming and SelfConcordance
Almost immediately after Karmarkar's work appeared, researchers began to explore extensions of interior point methods to general convex optimization problems. Indeed, the nonlinear nature of interior point methods naturally suggested that such extensions were possible. Throughout the 1980s, a number of papers were written that showed that central trajectory methods and potential reduction methods for LP could be generalized to certain types of convex programs with theoretical performance guarantees, under a variety of restrictions (such as smoothness conditions) on the convex functions involved. However, there was no unifying theory or analysis. Then, in an incredible tour de force, Nesterov and Nemirovskii [15] presented a deep and unified theory of interior point methods for all of convex programming based on the notion of self-concordant functions. The intellectual contributions of this one research monograph cannot be overstated, whether it be for its mathematical depth, its implications for the theory of convex optimization and computational complexity, or for its implications for computation. To outline the thrust of this work, consider the following general convex program:

CP: minimize f(x)
    s.t.     gᵢ(x) ≤ 0, i = 1, ..., m,
where gᵢ(x) is convex, i = 1, ..., m, and the objective function f(x) is linear (if not, add a new variable t and a new constraint f(x) ≤ t and declare the new objective function to be "minimize t"). Let D = {x | gᵢ(x) < 0, i = 1, ..., m}, and suppose we have a (convex) barrier function B(x) that goes to infinity as x goes to the boundary of D. Then the barrier problem associated with CP is:

BCP(μ): minimize f(x) + μB(x)
        s.t.     x ∈ D,

and the central trajectory of BCP(μ) is the set of optimal solutions x(μ) to BCP(μ) parameterized by the barrier parameter μ.
Nesterov and Nemirovskii show that Newton's method is a very efficient tool for solving CP by tracing the central trajectory of BCP(μ), when the barrier function B(x) has the property of self-concordance. Their approach is very general: B(x) does not necessarily depend on the way the functions gᵢ(x) are expressed. It just depends on the interior of the underlying feasible region. One of the central ideas in the understanding of self-concordance (and the interior point algorithms derived from it) is the use of the Hessian of B(x) to induce a local norm at x. Let H(x) be the Hessian of B(x) at x. The induced norm at x is defined to be nₓ(v) = √(vᵀH(x)v), which is a quadratic norm using the Hessian H(x) as the quadratic form. Roughly speaking, a function B(x) is a ϑ-self-concordant barrier function with barrier parameter ϑ if B(x) satisfies the following conditions: (i) local changes in the Hessian of B(·) at two points x and y can be bounded by the induced norm at x of (x−y), and (ii) the induced norm of the Newton step at x is no larger than √ϑ. (Stated more colloquially, a function is a ϑ-self-concordant barrier if the Hessian of the function is a relevant tool in bounding the size of the Newton step (by √ϑ) and in measuring changes in the Hessian itself.) Nesterov and Nemirovskii show that when a convex program has a ϑ-self-concordant barrier, then Newton's method improves the accuracy of a given solution of (CP) by at least t digits in O(√ϑ · t) Newton steps.
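The path-following scheme just described can be sketched in a few lines. The following toy example is our own illustration, not taken from the article: it traces the central trajectory of min c·x over [0, 1] using the 2-self-concordant barrier B(x) = −ln(x) − ln(1−x), shrinking μ and re-centering with damped Newton steps.

```python
def barrier_newton_1d(c, mu0=1.0, shrink=0.5, tol=1e-8):
    """Trace the central trajectory of  min c*x  s.t.  0 <= x <= 1,
    using the 2-self-concordant barrier B(x) = -ln(x) - ln(1-x):
    repeatedly shrink the barrier parameter mu and re-center with
    damped Newton steps on f(x) = c*x + mu*B(x)."""
    x, mu = 0.5, mu0
    while mu > tol:
        for _ in range(20):                                 # re-center for this mu
            g = c + mu * (-1.0 / x + 1.0 / (1.0 - x))       # f'(x)
            h = mu * (1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2)  # f''(x) > 0
            step = -g / h
            t = 1.0
            while not (0.0 < x + t * step < 1.0):           # damp to stay interior
                t *= 0.5
            x += t * step
            if abs(g) < 1e-10 * max(1.0, abs(c)):
                break
        mu *= shrink        # move along the trajectory toward the optimum
    return x

x_star = barrier_newton_1d(c=1.0)   # the central path leads toward x = 0
```

Here ϑ = 2, so the O(√ϑ · t) guarantee above applies; the halving of μ is the crudest possible trajectory-following rule and is chosen only for readability.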
At present, ϑ-self-concordant barriers are known for only a few, but very important, classes of convex programs. These include linear and convex quadratically constrained programs (where B(x) = −Σᵢ ln(bᵢ − aᵢᵀx − xᵀQᵢx) and ϑ = m) and semidefinite programming (where B(x) = −ln(det(X)) for the n × n matrix X and ϑ = n), as well as convex programs involving the second-order cone {x | xᵀQx ≤ (cᵀx + d)², cᵀx + d ≥ 0}, and even epigraphs of matrix norms. However, at least in theory, self-concordant barriers can be used to process any convex program efficiently: indeed, Nesterov and Nemirovskii show that every open convex set in Rⁿ possesses a ϑ-self-concordant barrier where ϑ ≤ Cn for some universal constant C. The implications of this truly far-reaching result for the complexity of convex programming are now being explored. Nesterov and Nemirovskii also provide a "barrier calculus" consisting of many simple tools which allow the derivation of self-concordant barriers for complicated convex sets, based on self-concordant barriers for simpler convex sets.
In addition, Nesterov and Nemirovskii also work on the following conic form of convex optimization:

KP: minimize cᵀx
    s.t.     Ax = b
             x ∈ K,

where K is a pointed, closed, convex cone with nonempty interior which possesses a ϑ-self-concordant barrier; their algorithms and performance guarantees apply easily to this case. This elegant form allows for better presentation and also makes it easier to draw parallels (when applicable) among interesting and well-studied special cases of CP and KP, such as linear programming (where K is the nonnegative orthant) and semidefinite programming (where K is the cone of symmetric positive semidefinite matrices).
Finally, researchers such as Güler [7] are demonstrating deep connections between the theory of interior point methods using ϑ-self-concordant barriers and other branches of mathematics, including algebra, complex analysis, and partial differential equations.
At present, computational experience with interior point methods for convex programming is rather limited, except as noted in the case of semidefinite programming. However, researchers are optimistic that at least some of the success of interior point methods for linear and semidefinite programming will be realized for more general convex programs.
Homogeneous and Self-Dual Methods
A linear programming problem is called self-dual if its dual problem is equivalent to the primal problem. Given a linear program P and an initial (possibly infeasible) point (x⁰, y⁰, s⁰) for which x⁰ > 0 and s⁰ > 0, a homogeneous and self-dual interior point method constructs the following artificial linear program HSDP, which is self-dual and almost homogeneous:

HSDP: minimize ((x⁰)ᵀs⁰ + 1)θ
      s.t.      Ax − bτ + b̄θ = 0,
               −Aᵀy + cτ − c̄θ ≥ 0,
                bᵀy − cᵀx + z̄θ ≥ 0,
               −b̄ᵀy + c̄ᵀx − z̄τ = −(x⁰)ᵀs⁰ − 1,
                x ≥ 0, τ ≥ 0,

where

b̄ = b − Ax⁰,  c̄ = c − Aᵀy⁰ − s⁰,  z̄ = cᵀx⁰ + 1 − bᵀy⁰.
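Since HSDP is determined entirely by (A, b, c) and the starting point, its coefficient matrix can be assembled mechanically. The sketch below is our own illustration (the variable ordering (y, x, τ, θ) is an assumption of this sketch, not fixed by the article); it makes the skew-symmetry that underlies self-duality directly checkable.

```python
def hsdp_matrix(A, b, c, x0, y0, s0):
    """Assemble the coefficient matrix of the homogeneous self-dual
    embedding of  min c^T x, Ax = b, x >= 0,  started at (x0, y0, s0).
    Row blocks: the m equations Ax - b*tau + bbar*theta = 0, the n
    inequalities -A^T y + c*tau - cbar*theta >= 0, the gap inequality,
    and the normalizing equation.  Column ordering: (y, x, tau, theta).
    The matrix is skew-symmetric, which is what makes HSDP self-dual."""
    m, n = len(A), len(A[0])
    bbar = [b[i] - sum(A[i][j] * x0[j] for j in range(n)) for i in range(m)]
    cbar = [c[j] - sum(A[i][j] * y0[i] for i in range(m)) - s0[j]
            for j in range(n)]
    zbar = (sum(c[j] * x0[j] for j in range(n)) + 1.0
            - sum(b[i] * y0[i] for i in range(m)))
    N = m + n + 2
    M = [[0.0] * N for _ in range(N)]
    for i in range(m):
        for j in range(n):
            M[i][m + j] = A[i][j]        # A-block
            M[m + j][i] = -A[i][j]       # -A^T block
        M[i][m + n] = -b[i]
        M[i][m + n + 1] = bbar[i]
        M[m + n][i] = b[i]
        M[m + n + 1][i] = -bbar[i]
    for j in range(n):
        M[m + j][m + n] = c[j]
        M[m + j][m + n + 1] = -cbar[j]
        M[m + n][m + j] = -c[j]
        M[m + n + 1][m + j] = cbar[j]
    M[m + n][m + n + 1] = zbar
    M[m + n + 1][m + n] = -zbar
    return M
```

Checking that M[i][j] = −M[j][i] for all i, j reproduces, in code, the observation below that the program is its own dual.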
It is not hard to see that this program is self-dual, because the coefficient matrix is skew-symmetric. Denote the slacks on the second and third sets of constraints by s and κ. Then HSDP has a trivial feasible interior point (x, τ, θ, y, s, κ) = (x⁰, 1, 1, y⁰, s⁰, 1) that can be used to initiate any interior point algorithm for solving HSDP. Since the dual is equivalent to the primal, the optimal value of HSDP is zero. By using a path-following interior point algorithm, one can compute a strictly self-complementary solution (x*, τ*, θ*, y*, s*, κ*) such that θ* = 0, x* + s* > 0, and τ* + κ* > 0. If τ* > 0, then
x*/τ* is an optimal solution of P and (y*/τ*, s*/τ*) is an optimal solution of D. Otherwise, by strict complementarity, κ* > 0, whereby from the third constraint it follows that either cᵀx* < 0 or −bᵀy* < 0. The former case implies the infeasibility of the dual problem D, and the latter case implies the infeasibility of the primal problem P.
The homogeneous and self-dual interior point method possesses the following nice features: (i) it solves a linear program P without any assumption concerning the existence of feasible, interior feasible, or optimal solutions; (ii) it can start at any initial point, feasible or not; (iii) each iteration solves a system of linear equations whose size is almost the same as for standard interior-point algorithms; (iv) if P is feasible, it generates a sequence of iterates that approach feasibility and optimality simultaneously, otherwise it correctly detects infeasibility for at least one of the primal and dual problems; and (v) it solves a problem in polynomial time (O(√n L) iterations) without using any "big M" constants. We point out that the infeasible interior point algorithms presented in the second section do not possess this last feature. Homogeneous and self-dual interior point methods have the promise to be competitive in practice with standard interior point software; see Xu et al. [23].
Homogeneous and self-dual methods can be extended to more general problems, such as linear and nonlinear complementarity problems. We refer readers to Ye et al. [24] for an initial description.
Linear and Nonlinear Complementarity Problems
The standard linear complementarity problem, or LCP, is to find a pair (x, s) of n-dimensional variables that satisfy the linear constraint

s = Mx + q

and the complementarity conditions

(x, s) ≥ 0, xⱼsⱼ = 0, j = 1, ..., n,

where M is an n × n matrix and q ∈ Rⁿ. The optimality conditions for both linear programming and convex quadratic programming can be cast as an instance of LCP, and for this reason LCP is often used as a general model in the development and analysis of interior-point algorithms.
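As an illustration of that reduction, an inequality-form LP pair can be turned into an LCP whose matrix is skew-symmetric, hence positive semidefinite. The sign conventions below are our own choice for this sketch; the article does not fix a particular reduction.

```python
def lp_to_lcp(A, b, c):
    """Cast the LP pair  min c^T x, Ax >= b, x >= 0  and its dual
    max b^T y, A^T y <= c, y >= 0  as an LCP:  find z >= 0 with
    w = Mz + q >= 0 and z_j * w_j = 0, where z = (x, y).  M is
    skew-symmetric, hence positive semidefinite (monotone LCP)."""
    m, n = len(A), len(A[0])
    N = n + m
    M = [[0.0] * N for _ in range(N)]
    for i in range(m):
        for j in range(n):
            M[j][n + i] = -A[i][j]   # upper-right block: -A^T
            M[n + i][j] = A[i][j]    # lower-left block:   A
    q = list(c) + [-bi for bi in b]
    return M, q
```

For the one-dimensional LP min x s.t. x ≥ 1, the optimal primal-dual pair x* = y* = 1 gives z = (1, 1) with w = Mz + q = 0, so complementarity holds trivially.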
While there are several important classes of the LCP, the most important class is the monotone LCP, defined to be those instances for which the set of feasible solutions (x, s) is maximal and monotone in R²ⁿ (equivalently, for which the matrix M is positive semidefinite). Linear and convex quadratic programming problems fall into this class. More generally, instances of LCP are typically classified according to classes of the matrix M, such as P₀-matrices and P*(κ)-matrices (see Kojima et al. [11] for definitions).
Interior point methods for solving LCP have been developed using the following generalization of the central trajectory equation system (2):

s = Mx + q, x > 0, s > 0,
XSe − μe = 0.     (7)

If the matrix M is a P₀-matrix and a feasible interior point exists, then the set of solutions to (7) forms a path (central trajectory) parameterized by μ, leading to a solution of LCP as μ goes to 0, and so one can solve the standard LCP with a P₀-matrix by using a path-following interior point algorithm. This approach extends to infeasible interior point methods, and potential reduction methods for solving LCP have also been proposed by researchers. In the case of the monotone LCP, many interior point algorithms have a polynomial-time performance guarantee. For P*(κ)-matrix LCP, there is also an explicit complexity analysis and performance guarantee. The solution of LCP with P₀-matrices is known to be NP-complete.
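A bare-bones path-following iteration for a feasible monotone LCP can be sketched as follows. This is illustrative only: full Newton re-centering on system (7) with a halved μ, no predictor step, and dense linear algebra, all choices made for readability rather than efficiency.

```python
def solve_linear(J, r):
    """Tiny Gaussian elimination with partial pivoting (dense, square)."""
    n = len(J)
    a = [row[:] + [ri] for row, ri in zip(J, r)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def lcp_path_following(M, q, x, mu=1.0, shrink=0.5, tol=1e-9):
    """Follow the central trajectory (7):  s = Mx + q,  x_i * s_i = mu,
    from a feasible interior starting point x (x > 0 and Mx + q > 0).
    Feasibility s = Mx + q is maintained exactly, so Newton acts on
    the residuals r_i = mu - x_i * s_i with Jacobian S + X M."""
    n = len(q)
    while mu > tol:
        for _ in range(30):
            s = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
            r = [mu - x[i] * s[i] for i in range(n)]
            if max(abs(v) for v in r) < 1e-12:
                break
            J = [[x[i] * M[i][j] + (s[i] if i == j else 0.0)
                  for j in range(n)] for i in range(n)]
            dx = solve_linear(J, r)
            t = 1.0   # damp the step so x and s stay strictly positive
            while (any(x[i] + t * dx[i] <= 0.0 for i in range(n)) or
                   any(sum(M[i][j] * (x[j] + t * dx[j]) for j in range(n))
                       + q[i] <= 0.0 for i in range(n))):
                t *= 0.5
            x = [x[i] + t * dx[i] for i in range(n)]
        mu *= shrink
    return x
```

With M = I (positive semidefinite, so monotone) and q = (−1, 1), the iterates converge to the complementary solution x = (1, 0), s = (0, 1).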
The nonlinear complementarity problem, or NLCP, is the problem of finding (x, s) such that

s = f(x), (x, s) ≥ 0, XSe = 0,

for a given continuous function f(·). If f(·) is monotone, NLCP is also called monotone. The optimality conditions for convex constrained optimization can be cast as an instance of the monotone NLCP. For this class of NLCP, the central trajectory system (7) can be suitably generalized, and so the problem can be solved by path-following interior point methods. Interior point methods for more general classes of NLCP are discussed in Kojima et al. [12].
Recently researchers have become interested in the semidefinite complementarity problem, or SDCP, which is a special case of NLCP arising in the study of semidefinite programming (see the earlier section). Infeasible interior point algorithms have been developed for the monotone instances of SDCP, and SDCP is currently a very active research problem.
Some Theoretical Issues Related to Interior Point Methods
Recently, theoretical research on the complexity of solving linear programming has focused on developing appropriate measures for adequately representing the "size" of an LP instance that are more relevant to computation than traditional measures of "size" such as the dimensions m and n or the bit-size L of a binary representation of an LP instance. In this closing section, we discuss two such measures, namely C(d) of an LP data instance d = (A, b, c), and χ̄_A for the matrix A.
Consider the very general convex optimization problem cast as follows:

P(d): maximize cᵀx
      s.t.     b − Ax ∈ C_Y
               x ∈ C_X,

where C_X and C_Y are closed convex cones, and the data d for the problem is the array d = (A, b, c). Any convex optimization problem can be cast in this form, including LP as a special case. The terminology P(d) emphasizes the dependence of the problem on the data d = (A, b, c). Renegar [16] develops a condition number C(d) for P(d) that is intuitively appealing, arises naturally in considering the problem P(d), is an extension of the traditional condition number for systems of linear equations, and possesses many attractive

geometric and algebraic characteristics. (For example, if P(d) has a feasible solution, then it must have a feasible solution whose norm is no larger than C(d).) We give a rough description of C(d) as follows. Let d = (A, b, c) be the data for P(d) and let Δd = (ΔA, Δb, Δc) be a change in the data. Let ρ(d) be the smallest change Δd needed to make the problem P(d) either infeasible or unbounded. Then C(d) is defined to be ‖d‖/ρ(d). That is, C(d) is a scale-invariant version of the reciprocal of the smallest change in the data d = (A, b, c) needed to cause P(d + Δd) to be ill-behaved. Roughly speaking, Renegar shows that the complexity of an interior point method for solving P(d) is inherently sensitive only to the condition number C(d) of the underlying problem and to the barrier parameter ϑ of the self-concordant barrier for the cones C_X and C_Y, and that the complexity bound on the number of iterations is O(√ϑ(ln(C(d)) + ln(1/ε))) to produce an ε-optimal solution of P(d). Therefore, the interior point algorithm is efficient in a well-defined sense. Not surprisingly, the condition number C(d) is intrinsically involved in a variety of special properties of the central trajectory of a linear program (see Freund and Nunez [5]), and we anticipate that the study of the condition number C(d) will yield much new insight into linear and convex programming in the future.
Another very interesting development, due to Vavasis and Ye [21], is an interior point algorithm for linear programming whose running time depends only on the dimension n and on a certain measure of the matrix A denoted by χ̄_A. Let (A, b, c) be the data for an LP instance, where the data are not restricted to be rational numbers. For the matrix A, define the quantity:

χ̄_A = sup{ ‖Aᵀ(ADAᵀ)⁻¹AD‖ : D is a positive n × n diagonal matrix }.

Then Vavasis and Ye present an interior point algorithm for solving a linear program in at most O(n³·⁵(ln(χ̄_A) + ln(n) + C)) iterations of Newton's method, where C is a universal constant. The significance of this result derives from the fact that the data b and c play no role in the bound on the number of iterations. Put a different way, the efficiency of their algorithm for linear programming depends only on the dimension and on a certain algebraic property of the matrix A embodied in the quantity χ̄_A. This research improves on earlier work by Tardos, by showing that such a dependence of the complexity on A alone holds even when the data are not presumed to be integer (or rational) coefficients.
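The supremum defining the measure above cannot be computed by enumeration, but sampling random positive diagonal matrices D gives a feel for the quantity. The sketch below is our own illustration, not from Vavasis and Ye: it uses the Frobenius norm and random scalings, so it reports only a sampled lower bound, not χ̄_A itself.

```python
import random

def chi_bar_estimate(A, trials=200, seed=0):
    """Sampled lower bound on  sup_D ||A^T (A D A^T)^{-1} A D||  over
    positive diagonal D (the Vavasis-Ye measure chi_bar(A)).  Uses
    random log-uniform diagonal scalings and the Frobenius norm, so
    the returned value is illustrative only, never an upper bound."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])

    def solve(B, rhs):   # Gaussian elimination on the m x m system B w = rhs
        a = [row[:] + [r] for row, r in zip(B, rhs)]
        for k in range(m):
            p = max(range(k, m), key=lambda i: abs(a[i][k]))
            a[k], a[p] = a[p], a[k]
            for i in range(k + 1, m):
                f = a[i][k] / a[k][k]
                for j in range(k, m + 1):
                    a[i][j] -= f * a[k][j]
        w = [0.0] * m
        for i in range(m - 1, -1, -1):
            w[i] = (a[i][m] - sum(a[i][j] * w[j] for j in range(i + 1, m))) / a[i][i]
        return w

    best = 0.0
    for _ in range(trials):
        d = [10.0 ** rng.uniform(-6, 6) for _ in range(n)]   # wildly scaled D
        ADA = [[sum(A[i][k] * d[k] * A[j][k] for k in range(n))
                for j in range(m)] for i in range(m)]
        P = [[0.0] * n for _ in range(n)]
        for col in range(n):                      # build A^T (ADA^T)^{-1} A D
            w = solve(ADA, [A[i][col] * d[col] for i in range(m)])
            for row in range(n):
                P[row][col] = sum(A[i][row] * w[i] for i in range(m))
        fro = sum(v * v for row in P for v in row) ** 0.5
        best = max(best, fro)
    return best
```

For a single-row matrix such as A = [1 1], the sampled values stay between 1 and √2 no matter how extreme the scaling, which hints at why χ̄_A is finite and independent of b and c.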
Acknowledgement. The authors are grateful to Levent Tunçel and Stephen Wright for their input regarding certain aspects of this article. (Of course, the authors alone take responsibility for all opinions stated herein, as well as for all possible errors.)
References
[1] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM Journal on Optimization 5, 13–51, 1995.
[2] F. Alizadeh, J.P. Haeberly, and
M. Overton, Primaldual
interiorpoint methods for
semidefinite programming:
convergence rates, stability
and numerical results, New
York University Computer
Science Dept Report 721, May
1996.
[3] E. Andersen, J. Gondzio, C.
Meszaros, and X. Xu, Imple
mentation of interior point
methods for large scale linear
programming, in Interior point
methods in mathematical
programming, T. Terlaky, ed.,
Kluwer Academic Publisher,
1996.
[4] K. Anstreicher, Potential
Reduction Methods, in
Interior point methods in
mathematical programming, T.
Terlaky, ed., Kluwer Academic
Publishers, 1996.
[5] R.M. Freund and M. Nunez,
Condition measures and
properties of the central
trajectory of a linear program,
submitted to Mathematical
Programming, M.I.T. Sloan
School working paper 388996
MSA, March, 1996.
[6] C. Gonzaga, Path-following methods for linear programming, SIAM Review 34(2), 167–227, 1992.
[7] O. Güler, Barrier functions in
interior point methods,
Mathematics of Operations
Research 21 (1996), to appear.
[8] D. den Hertog, Interior point
approach to linear, quadratic
and convex programming,
algorithms, and Complexity,
Kluwer Publishers, Dordrecht,
The Netherlands, 1994.
[9] B. Jansen, C. Roos, and T.
Terlaky, A short survey on ten
years of interior point
methods, Report 9545, Delft
University of Technology,
Delft, The Netherlands, 1995.
[10] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4, 373–395, 1984.
[11] M. Kojima, N. Megiddo, T.
Noma, and A. Yoshise, A
unified approach to interior
point algorithms for linear
complementarity problems,
Lecture Notes in Computer
Science 538, SpringerVerlag,
Berlin (1991).
[12] M. Kojima, N. Megiddo, and
S. Mizuno, A general frame
work of continuation methods
for complementarity problem,
Mathematics of Operations
Research 18, 945–963, 1994.
[13] I. Lustig, R. Marsten, and D. Shanno, Interior point methods: computational state of the art, ORSA Journal on Computing 6, 1–14, 1994.
[14] S. Mizuno, Infeasibleinterior
point algorithms, in Interior
point methods in mathematical
programming, T. Terlaky, ed.,
Kluwer Academic Publishers,
1996.
[15] Y. Nesterov and A.
Nemirovskii, Interiorpoint
polynomial algorithms in
convex programming, SIAM
Publications, Philadelphia,
U.S.A., 1994.
[16] J. Renegar, Linear program
ming, complexity theory and
elementary functional
analysis, Mathematical
Programming 70, 279–351, 1995.
[17] R. Saigal, Linear Programming:
A Modern Integrated Analysis,
Kluwer Academic Publishers,
Boston, 1995.
[18] M. Todd, Potential reduction
methods in mathematical
programming, Mathematical
Programming B, to appear.
[19] T. Tsuchiya, Affine scaling
algorithm, in Interior point
methods in mathematical
programming, T. Terlaky, ed.,
Kluwer Academic Publishers,
1996.
[20] L. Vandenberghe and S. Boyd,
Semidefinite programming,
SIAM Review 38(1), 49–95, 1996.
[21] S. Vavasis and Y. Ye, A primal
dual interior point method
whose running time depends
only on the constraint matrix,
Mathematical Programming 74, 79–120, 1996.
[22] S. Wright, PrimalDual
Interior Point Algorithms,
SIAM Publications, Philadel
phia, forthcoming.
[23] X. Xu, P.F. Hung, and Y. Ye, A
simplified homogeneous and
selfdual linear programming
algorithm and its implementa
tion, Annals of Operations
Research 62, 151–171, 1996.
[24] Y. Ye, M. Todd, and S.
Mizuno, An O(√n L)-iteration
homogeneous and selfdual
linear programming algo
rithm, Mathematics of
Operations Research 19, 53–67,
1994.
Third International Symposium on Operations Research and its Applications (ISORA '96)
Guilin, China
Dec. 11–13, 1996

Optimal Control: theory, algorithms, and applications
Center for Applied Optimization, University of Florida
Feb. 27–March 1, 1997

Third Workshop on Models and Algorithms for Planning and Scheduling Problems
Cambridge, England
April 7–11, 1997

XVI International Symposium on Mathematical Programming
Lausanne, Switzerland
Aug. 1997

ICM98
Berlin, Germany
August 18–27, 1998
Report on the Fifth International Symposium on
Generalized Convexity
June 1721, 1996, Centre International de Rencontres
Mathematiques (CIRM), MarseilleLuminy
The Fifth International Symposium on Generalized Convexity was held at the Centre International de Rencontres Mathématiques (CIRM) in Marseille-Luminy, June 17–21, 1996. Among others, it was sponsored by the Mathematical Programming Society and the International Scientific Committee of the recently founded Working Group on Generalized Convexity.
About 70 scholars from 15 countries attended a tight schedule of more than 50 talks. The papers covered many aspects of the field, such as fractional programming, various kinds of generalized convex sets and functions, optimality conditions and duality, generalized monotonicity in variational inequalities and equilibrium problems, multiobjective programming, and nonsmooth calculus. An emphasis on the fast-developing theory of generalized monotone operators was noticeable. The bulk of the contributions on generalized convexity were of a theoretical nature. In addition, several authors presented algorithmic results and applications, e.g., in economics and mechanics.
Especially well received were invited lectures by F. Clarke, J.S. Pang and R. Wets, in which generalized convexity was related to topics in neighboring fields such as nonsmooth analysis, mathematical programming/complementarity theory/variational inequalities, and stochastic programming. The conference site (CIRM) provided the opportunity for limitless professional contact among the participants. A tour to nearby Aix-en-Provence and a taste of its world-famous specialties rounded off a rich scientific program.
A selection of refereed papers presented at the symposium will appear in the Proceedings, to be published by Kluwer Academic Publishers. For information, also on the proceedings of the previous four symposia in Vancouver (1980), Canton (1986), Pisa (1988) and Pécs (1992), please contact one of the undersigned. The sixth symposium has been scheduled to take place on the Greek island of Samos in 1999.
Jean-Pierre Crouzeix, C.U.S.T., Université Blaise Pascal, Aubière, France
crouzeix@ucfma.univbpclermont.fr
Siegfried Schaible, Graduate School of Management, University of California, Riverside
schaible@ucrac1.ucr.edu
ICM98
Berlin, Germany
August 18–27, 1998
Nominations for the Fields
Medals and the Rolf
Nevanlinna Prize
The Fields Medals and the Rolf
Nevanlinna Prize will be presented at
the opening ceremony of ICM98 on
August 18, 1998. The International Mathematical Union (IMU) Executive Committee has appointed a Fields Medals and a Nevanlinna Prize Committee to select the awardees.
An individual may contribute to the
selection process by contacting the
Committee of Mathematics of his or
her country. (To find your National
Mathematical Committee, look up the list of IMU member countries on the IMU server (http://elib.zibberlin.de/IMU) and click on your own country.) Click on "Fields Medals and Nevanlinna Prizes" for additional information.
The National Committees may
suggest candidates. The nominations
must be accompanied by a brief
justification and be received no later
than March 31, 1997 by the
Secretary of IMU:
International Mathematical Union
Professor Jacob Palis, Secretary
Estrada Dona Castorina 110, Jardim Botânico, 22460-320 Rio de Janeiro, RJ, Brazil
Fax: (55) (21) 512 4112
email: IMU@IMPA.BR
More information about ICM98 may be found on the ICM98 WWW server (URL: http://elib.zibberlin.de/ICM98). This WWW server also offers an electronic preregistration form for ICM98. If you do not have access to the World Wide Web and would like to subscribe to the ICM98 circular letters, just send an email to icm98@zibberlin.de and put "preliminary preregistration" in the subject line.
The IMU plans to award travel grants to young mathematicians to help them attend the ICM98. These grants are intended primarily for young mathematicians, up to 35 years of age, from developing countries. Candidates should give their name and address and include evidence of research at the postdoctoral level, in addition to a brief curriculum vitae showing date of birth plus a list of publications. The Local Organizing Committee of ICM98 will provide support to the grantees to cover registration, board and lodging.
Applications for travel grants may be
sent directly to the Secretary of the
Union or may be submitted through
the Committees for Mathematics. All
applications should reach the Secre
tary of IMU by January 1, 1998 (ad
dress listed above).
IMU would like to increase the number of travel grants and is seeking donations to its Special Development Fund (SDF) for this purpose. Donations can be sent at any time and in any convertible currency to the following account: IMU Account Number 086265620821, Schweizerische Kreditanstalt, Stadtfiliale Zürich-Rigiplatz, Universitaetstrasse 105, CH-8033 Zürich, Switzerland.
Martin Grötschel
martin.groetschel@zibberlin.de
Third Workshop on Models and Algorithms for Planning and
Scheduling Problems
Cambridge, England
April 7–11, 1997
Following two successful workshops at Lake Como, Italy, in 1993, and in Wernigerode, Germany, in 1995, the Third Workshop on Models and Algorithms for Planning and Scheduling Problems is to be held in Cambridge, England, April 7–11, 1997.
The workshop aims to provide a forum for scientific exchange and coopera
tion in the field of planning, scheduling and related areas. To maintain the
informality of the previous workshops and to encourage discussion and coop
eration, there will be a limit of 80 participants and a single stream of presen
tations. Contributions on any aspect of scheduling are welcome.
Authors presenting papers at the workshop are invited to submit their manu
scripts for possible publication in a special issue of Annals of Operations Re
search. Papers for this special issue will be refereed in the normal way.
Invited Speakers
Y.J. Crama, University of Liège
K.D. Glazebrook, Univ. of Newcastle
D.S. Johnson, AT&T Bell Laboratories
J.K. Lenstra, Tech. Univ. of Eindhoven
D.B. Shmoys, Cornell University
Organizers
M.A. H. Dempster, Cambridge
C.A. Glass, Southampton
C.N. Potts, Southampton
V.A. Strusevich, Greenwich
F. Vanderbeck, Cambridge
R.R. Weber, Cambridge
Persons interested in participating should submit a preregistration form and, if
relevant, an abstract of the intended contribution before Dec. 20, 1996.
A copy of the preregistration form and details about dates, accommodation,
fees, travel, etc., can be obtained by viewing
http://www.statslab.cam.ac.uk/workshop/
or by sending your postal address and a request for a printed workshop bro
chure to secretary@statslab.cam.ac.uk.
The number of preregistrations so far for the 16th International Symposium on Mathematical Programming, to be held in Lausanne, August 24–29, 1997, is now almost 600! The number of invited sessions is also growing fast and is now 117. Next to the parallel sessions there will be more than 20 semi-plenary lectures featuring the state of the art in mathematical programming. Other special events include a SIMPLEX METHOD 50th BIRTHDAY PARTY, and work is under way to establish a new prize for any progress in developing a polynomial time simplex algorithm for linear programming problems or giving a proof that no such algorithm exists. Important news for people from former Communist countries in Eastern Europe is that they can apply for financial aid at their local SOROS foundation. The Symposium Chairman has been told that such applications will be granted. The symposium has a home page at the URL http://dmawww.epfl.ch/roso.mosaic/ismp97. Here you can find the latest symposium news, preregistration forms, and an overview of all sessions organized so far. If you have not preregistered, please do so quickly so that you do not miss important information. More comprehensive news will follow in the next issue of OPTIMA.
On behalf of the Symposium Organizers, KAREN AARDAL
D.R. Fulkerson Prize
Call for Nominations
This is a call for nominations for the D. Ray Fulkerson Prize in Discrete Mathematics of the American Mathematical Society and the Mathematical Programming Society. The Prize will be awarded at the XVIth International Symposium on Mathematical Programming to be held in Lausanne, Switzerland, August 24–29, 1997.
The specifications for the Fulkerson
Prize read: "Papers to be eligible
for the Fulkerson Prize should
have been published in a recog
nized journal during the six calen
dar years preceding the year of
the Congress. The extended period
is in recognition of the fact that the
value of fundamental work cannot
always be immediately assessed.
The prizes will be given for single
papers, not series of papers or
books, and in the event of joint au
thorship the prize will be divided.
The term "Discrete Mathematics"
is intended to include graph
theory, networks, mathematical
programming, applied combinato
rics, and related subjects. While re
search work in these areas is usu
ally not far removed from practi
cal applications, the judging of pa
pers will be based on their math
ematical quality and significance."
So papers eligible for the 1997
prize should have appeared in
one of the six years 1991–1996.
The conclusions of the Fulkerson
Prize Committee (Éva Tardos, chair, Ron Graham, and Ravi Kannan)
will be presented to the two Soci
eties jointly awarding the Prize:
the Mathematical Programming
Society and the American Math
ematical Society.
Please send your nominations (in
cluding reference to the nominated
article and some evaluation) by
January 30, 1997 to the chair of
the Fulkerson Prize Committee:
Eva Tardos, Department of
Computer Science, Cornell
University, Ithaca, NY 14850
email: eva@cs.cornell.edu.
OR-Library Updated
The OR-Library (o.rlibrary@ic.ac.uk) now contains the following problem types:
- multiple depot vehicle routing problem: see the file multivrpinfo
- network flow problem: see the file netflowinfo
- p-hub location problem: see the file phubinfo
- period travelling salesman problem: see the file periodtspinfo
- period vehicle routing problem: see the file periodinfo
- set covering problem: see the file scpinfo
- timetabling problem: see the file tableinfo
FTP access available at mscmga.ms.ic.ac.uk
WWW access available at http://mscmga.ms.ic.ac.uk/
Update of Mathematics Subject Classification
The Mathematical Programming
Society has set up a committee con
sisting of Jose Mario Martinez
(martinez@ime.unicamp.br), Lex
Schrijver (Lex.Schrijver@cwi.nl), and
Mike Todd
(miketodd@CS.Cornell.EDU), chair,
to provide input to Mathematical Reviews and Zentralblatt für Mathematik on their proposed update of the 1991 Mathematics Subject Classification. It is not envisioned that this will be a major update, but it needs to incorporate new areas of research and possibly appropriately reorganize existing areas. The committee expects to concentrate its attention on the organization of 90CXX, mathematical programming, but may also make suggestions on other parts of the classification, such as 65KXX, 68QXX, 05DXX, and 90BXX.
The existing classification can be
found on the world wide web at URL
http://www.ams.org/msc/, or cop
ies may be obtained by email or
regular mail from the committee
members.
We welcome suggestions from the
mathematical programming commu
nity which can be sent by email to
any or all of the committee members
or by regular mail to the address be
low. We would appreciate receiving
input before or during the Interna
tional Symposium on Mathematical
Programming to be held in
Lausanne, Switzerland, Aug. 1997.
Michael J. Todd
School of Operations Research and
Industrial Engineering
Rhodes Hall
Cornell University
Ithaca, NY 148533801 USA
SIAM Announces
Postgraduate
Memberships
SIAM is pleased to announce a new class of membership to be offered in 1997 to current and new members who are recent graduates. "Postgraduate Memberships" are available on a one-time basis for up to three consecutive years immediately after receiving your highest degree. Dues for 1997 are $45.
If you are a current student member and cannot claim student status for 1997, you may consider this category. When you receive your renewal notice for 1997, indicate what degree you have earned, and where, and remit Postgraduate Membership dues.
SIAM reminds current students that a discounted membership class is available to them as well: $20 for the calendar year 1997. Student Members receive membership in one SIAM activity group at no charge.
Please contact SIAM Customer Service
for more information:
service@siam.org
SIAM, 3600 University City Science Center, Philadelphia, PA 19104
215-382-9800; fax 215-386-7999
Call for Nominations for the George B. Dantzig Prize
Nominations are solicited for the George B. Dantzig Prize, administered jointly by the Mathematical Programming Society (MPS) and the Society for Industrial and Applied Mathematics (SIAM). This prize is awarded to one or more individuals for original research which, by virtue of its originality, breadth and depth, is having a major impact on the field of mathematical programming. The contributions eligible for consideration must be publicly available and may address any aspect of mathematical programming in its broadest sense. Strong preference is given to contributions by individuals under 50 years of age. The prize will be presented at the symposium. Past Dantzig Prize recipients have been: M.J.D. Powell and R.T. Rockafellar in 1982, E.L. Johnson and M.W. Padberg in 1985, M.J. Todd in 1988, M. Grötschel and A.S. Nemirovsky in 1991, and C. Lemaréchal and R.J.-B. Wets in 1994.
The prize committee members are: Ellis L. Johnson, Chairman; Jorge Moré; Claude Lemaréchal; and Clovis Gonzaga.
Please send nominations to Ellis L. Johnson, Georgia Institute of Technology, School of Industrial and Systems Engineering, Atlanta, Georgia 30332-0205, U.S.A., or by email to ejohnson@isye.gatech.edu. Nominations are due by November 1, 1996, and should provide a brief one- or two-page description of the nominee's outstanding contributions and, if possible, a current resume including a list of the nominee's publications.
Ellis Johnson
PAGE 12  N° 51  OCTOBER 1996
Global Minimization of
Nonconvex Energy Functions:
Molecular Conformation and
Protein Folding
P.M. Pardalos, D. Shalloway
and G. Xue (Eds.)
DIMACS Series in Discrete Mathematics
and Theoretical Computer Science, Vol. 23,
Providence, R.I.
American Mathematical Society
ISBN 0-8218-0471-5
The book is a collection of 17 papers entirely devoted to the problem of protein folding, which can be described as follows: Given the amino acid sequence of a protein, find the protein's three-dimensional conformation (folding) in a biochemical environment. The basic approach of the entire collection is that the sought folding must minimize an appropriate energy function, and this requires a concerted effort of specialists in biochemistry (to produce an adequate energy function), in optimization (to solve the resulting minimization problem), and in computer science (to create a reliable computational environment for a problem involving a huge number of variables and constraints).
The book is a set of refereed papers based on talks presented at a workshop held at DIMACS (an NSF Science and Technology Center located at Rutgers University, NJ) in March 1995. The papers are arranged in alphabetical order only, and no index is provided.
The collection can be partitioned into three classes: (1) optimization methods for specified energy functions, sometimes with much discussion and substantiation of the physics (9 papers: P. Amara et al., B.W. Church et al., J. Kostrowicki and H.A. Scheraga, C.D. Maranas et al., R. Pachter et al., A.T. Phillips et al., G. Ramachandran and T. Schlick, G.L. Xue et al., and M.M. Zacharias and D.G. Vlachos); (2) computer implementations based on specifics of the problems (4 papers: R.E. Bruccoleri, R.H. Byrd, J. Gu and B. Du, and X. Hu et al.); (3) associated problems (4 papers: H.A. Hauptman, J.J. Moré and Z. Wu, A. Sali et al., and M. Vieth et al.).
Among the methods discussed are such techniques as simulated annealing (P. Amara, J. Ma, and J.E. Straub; B.W. Church, M. Oresic, and D. Shalloway; R. Pachter, Z. Wang, J.A. Lupo, S.B. Fairchild, and B. Sennett; and M.M. Zacharias and D.G. Vlachos) and local search (R.E. Bruccoleri; R.H. Byrd, E. Eskow, A. van der Hoek, R.B. Schnabel, C.S. Shao, and Z. Zou; J. Gu and B. Du; and A.T. Phillips, J.B. Rosen, and V.H. Walke). It should be pointed out that the techniques here are not (and perhaps cannot be) applied as they are, but rather in a much enhanced form and combined with other useful techniques (convex optimization, etc.). In this respect, an original and perhaps most promising direction seems to be the one which involves aggregation of the original model (usually in terms of its local solutions) according to the "cluster" structure of the folding, with optimization subsequently performed on the aggregate model. This idea is employed in many papers, especially those of R.H. Byrd, E. Eskow, A. van der Hoek, R.B. Schnabel, C.S. Shao, and Z. Zou; B.W. Church, M. Oresic, and D. Shalloway; and G.L. Xue, A.J. Zall, and P.M. Pardalos.
The level of presentation and the assumed background of the reader differ considerably across the papers. However, there are several articles oriented toward a mathematician having quite rudimentary knowledge of molecular biology, with careful motivation of the presented models and methods. Among them, the reviewer would recommend the papers of C.D. Maranas, I.P. Androulakis, and C.A. Floudas; R.H. Byrd, E. Eskow, A. van der Hoek, R.B. Schnabel, C.S. Shao, and Z. Zou; J.J. Moré and Z. Wu; G.L. Xue, A.J. Zall, and P.M. Pardalos; and J. Kostrowicki and H.A. Scheraga.
Overall, the book can be considered an important step in the consolidation of a newly emerging discipline which, currently, seems quite a multidisciplinary one.
BORIS MIRKIN AND ILYA MUCHNIK
Global Optimization in Action
J.D. Pinter
Nonconvex Optimization and Its
Applications, Series 6, Kluwer Academic
Publishers, Dordrecht 1996
ISBN 0-7923-3757-3
It is hard to understand why nearly all the research in nonlinear optimization deals with the search for local solutions. Is it the more beautiful theory, is it the closer connections to classical mathematics like analysis or geometry, is it the lower cost of numerical computations, or is it the hope that a local search, under the control of the human brain, enables the user to concentrate the search on local solutions that are almost global ones? Why have there not been published more than two dozen monographs on the global variant? No matter what the reasons are, Pinter's book is an important contribution to making the theoretical and practical fundamentals of multiextremal optimization more accessible and to spreading the global view of optimization.
The goal of the book under review is to introduce the reader to deterministic, gradient-free optimization algorithms which are based on adaptive, sequential partition strategies. Besides this deterministic methodology interweaving the topics of the book, a shorter discussion of adaptive stochastic algorithms is included. The core of the book, however, is a broad spectrum of applications, which originate mainly from joint research of the author with colleagues during the past decade.
The book is subdivided into four parts. Part 1 (Global Optimization: A Brief Review; 38 pages) can be seen as a first look at global optimization and several basic methods. Part 2 (Partition Strategies in Global Optimization: The Continuous and the Lipschitzian Case; 105 pages) contains the theoretical core of the book, that is, adaptive partition theory. In particular, the partition topics cover intervals, boxes, simplices, general convex sets and star sets. Part 3 (Implementation Aspects, Algorithm Modifications and Stochastic Extensions; 100 pages) deals with several algorithmic implementations for Lipschitz functions and also with a few stochastic extensions such as decision making under uncertainty, random-search-based stochastic optimization procedures, and optimization with noise-perturbed function values. Part 4 (Applications; 184 pages) is the focal point of the book and is concerned with existing and prospective applications of continuous and Lipschitz global optimization. Among the many presented, one finds methods to solve nonlinear equations and inequalities, data classification, aggregation of expert opinions, product design, wastewater treatment system design, multiple-point-source river pollution management, risk analysis and management of accidental toxic river pollution, etc.
Pinter's book can be recommended to every reader who needs a far-reaching survey of continuous and Lipschitz optimization. The book is distinguished by its vivid and informative style, the large number of areas that are touched, the lack of long-winded and overly sophisticated expositions, and the optimum balance between theory and practical applications.
H. RATSCHEK
Linear Optimization and
Extensions
Manfred Padberg
Series in Algorithms and Combinatorics 12
Springer, Berlin, 1995
ISBN 354057349
Linear optimization is at the core of operations research, and therefore it is no wonder that there already exist several texts on this subject. However, the new book of Manfred Padberg is not simply a modification of the contents of the previous ones but provides an approach to linear optimization that is strongly motivated by computational practice.
The book consists of ten chapters and three appendices. Chapters 1 through 6 cover the basics of linear optimization, the primal and the dual simplex algorithm, and duality theory. Since Padberg's intent is the solution of large linear programs, he does not introduce the simplex algorithm with the help of the well-known tableau method, but uses only basic linear algebra. This approach gives the reader exact knowledge of the simplex algorithm and of the ingredients of an implementation of the simplex algorithm that solves problem instances larger than typical textbook examples. Moreover, a dynamic simplex algorithm and important aspects of practical computations, such as cycling and data structures, are discussed.
Chapter 7 is a comprehensive outline of the analytical geometry, in particular polyhedral theory, related to linear optimization. The equivalence of the description of polyhedra by linear inequalities and by convex and conical combinations of points is proven. But even in these more theoretical sections, computational aspects play a central role. The double description algorithm, transforming either representation into the other, is outlined. Implementations of this algorithm are a basic tool for the investigation of polyhedra associated with combinatorial optimization problems. The chapter also contains some aspects of the complexity theory of linear optimization and of the geometry of the simplex algorithm.
The currently most important "competitors" of the simplex algorithm, interior point algorithms and their modifications, are the contents of Chapter 8. Mainly projective algorithms are presented, but a description of barrier methods is included.
The ellipsoid method, described in Chapter 9, is mainly of theoretical interest due to the numerical problems arising in its implementation. However, the proof of the equivalence of optimization and separation is an important theoretical aspect of the linear optimization approach to integer optimization problems.
The practical application of linear optimization techniques to integer optimization problems is demonstrated in the last chapter of the book. The solution of the mixed-integer optimization problem by a branch-and-cut algorithm is shown, and the description of the associated polyhedra is discussed.
The three appendices discuss applications of linear programming from financial management, from the control of a refinery, and from the solution of traveling salesman problems. These examples are not the "toy" examples often found in other books but real-world examples motivating the reader to apply linear optimization.
The book contains a series of instructive exercises and can be recommended as a text for a course in linear programming for students with a sufficient knowledge of linear algebra. Owing to the connections of linear optimization to integer and combinatorial optimization, this book should be particularly helpful in a series of lectures on linear and mixed-integer optimization.
Padberg's impressive book deals with solving linear optimization problems on modern computers. Therefore, this book is a useful reference book for everybody working in computational linear and integer programming. Unfortunately, the bibliography can only be found at the end of the book, and almost no references are cited within the text. A list of references at the end of each section or chapter would have been helpful for the reader to delve more deeply into the topics of the book.
Reading this book is a real pleasure. Padberg not only teaches the theory and practice of linear optimization in a clearly structured way; he is also entertaining in motivating examples of linear modeling, in describing the impacts on linear optimization algorithms, and in giving hints for the implementation of the discussed algorithms. He even has stories from Greek mythology, and not just about the achievements of early mathematicians.
STEFAN THIENEL
J.N. Hooker, Resolution and the integrality of satisfiability problems.
S. Mehrotra, Asymptotic convergence in a generalized predictor-corrector method.
M. Verkama, H. Ehtamo and R.P. Hämäläinen, Distributed computation of Pareto solutions in n-player games.
K.C. Kiwiel, Complexity of some
cutting plane methods that use
analytic centers.
G.S.R. Murthy, T. Parthasarathy and M. Sabatini, Lipschitzian Q-matrices are P-matrices.
Z. Páles and V. Zeidan, Generalized Hessian for C^{1,1} functions in infinite-dimensional normed spaces.
S.A. Vavasis and Y. Ye, A primal-dual interior point method whose running time depends only on the constraint matrix.
D. Bienstock, Computational study of a family of mixed-integer quadratic programming problems.
P. Marcotte and D.L. Zhu, Exact and inexact penalty methods for the generalized bilevel programming problem.
T. Wang, R.D.C. Monteiro and
J.S. Pang, An interior point
potential reduction method for
constrained equations.
S.R. Mohan, S.K. Neogy and R. Sridhar, The generalized linear complementarity problem revisited.
A. Caprara and M. Fischetti, {0, 1/2}-Chvátal-Gomory cuts.
R.W. Freund, F. Jarre and S. Schaible, On self-concordant barrier functions for conic hulls and fractional programming.
C.E. Ferreira, A. Martin,
C.C. de Souza, R. Weismantel
and L.A. Wolsey, Formulations
and valid inequalities for the node
capacitated graph partitioning
problem.
M.S. Gowda and R. Sznajder, On
the Lipschitzian properties of
polyhedral multifunctions.
A. Fischer and C. Kanzow, On finite termination of an iterative method for linear complementarity problems.
K. Ando and S. Fujishige, On structures of bisubmodular polyhedra.
T. Rapcsák and T.T. Thang, A class of polynomial variable metric algorithms for linear optimization.
A.B. Levy, Implicit multifunction theorems for the sensitivity analysis of variational conditions.
APPLICATION FOR MEMBERSHIP
I wish to enroll as a member of the Society.
[ ] My subscription is for my personal use and not for the benefit of any library or institution.
[ ] I will pay my membership dues on receipt of your invoice.
[ ] I wish to pay by credit card (Master/Euro or Visa).
CREDIT CARD NUMBER:
EXPIRY DATE:
FAMILY NAME:
MAILING ADDRESS:
TELEFAX:
TEL. NO.:
EMAIL:
Mail to:
The Mathematical Programming Society, Inc.
c/o International Statistical Institute
428 Prinses Beatrixlaan
2270 AZ Voorburg
The Netherlands
Cheques or money orders should be made payable to
The Mathematical Programming Society, Inc., in
one of the currencies listed below.
Dues for 1996, including subscription to the journal Mathematical Programming, are Dfl.105.00 (or $60.00 or DM94.00 or £39.00 or FF326.00 or Sw.Fr.80.00).
Student applications: Dues are one-half the above rates. Have a faculty member verify your student status and send application with dues to the above address.
Faculty verifying status
institution
SIGNATURE
Donald W. Hearn, EDITOR
hearn@ise.ufl.edu
Karen Aardal, FEATURES EDITOR
Utrecht University
Department of Computer Science
P.O. Box 80089
3508 TB Utrecht
The Netherlands
aardal@cs.ruu.nl
Faiz Al-Khayyal, SOFTWARE & COMPUTATION EDITOR
Georgia Tech
Industrial and Systems Engineering
Atlanta, GA 30332-0205
faiz@isye.gatech.edu
Dolf Talman, BOOK REVIEW EDITOR
Department of Econometrics
Tilburg University
P.O. Box 90153
5000 LE Tilburg
The Netherlands
talman@kub.nl
Elsa Drake, DESIGNER
PUBLISHED BY THE
MATHEMATICAL PROGRAMMING SOCIETY &
PUBLICATION SERVICES, UNIVERSITY OF FLORIDA
Journal contents are subject to change by the publisher.
Deadline for the next OPTIMA is November 15, 1996.
UNIVERSITY OF
FLORIDA
Center for Applied Optimization
371 Weil Hall
PO Box 116595
Gainesville, FL 32611-6595 USA
FIRST CLASS MAIL