
OPTIMA 52
DECEMBER 1996
MATHEMATICAL PROGRAMMING SOCIETY NEWSLETTER


Continuous Global Optimization Software: A Brief Review


Abstract
Following a concise introduction to multiextremal mathematical programming
problems and global optimization (GO) strategies, a commented list of software
products for analyzing and solving continuous GO problems is presented.

1. Global Optimization Models and Solution Approaches

A large variety of quantitative decision issues, arising in the sciences, engineering and economics, can be perceived and modelled as a constrained optimization problem. According to this generic description, the best decision - often expressed by a real vector - is sought which satisfies all stated feasibility constraints and minimizes (or maximizes) the value of an objective function. Applying standard mathematical programming notation, we shall consider problems in the general form
(1) min f(x) subject to x in D, D a subset of R^n.
The function f symbolizes the objective(s) in the decision problem, and D denotes the (non-empty) set of feasible decisions. D is usually defined by a finite number of functions; for the purposes of the present discussion, we shall assume that
(2) D = {x in R^n : l <= x <= u; g_j(x) <= 0, j = 1, ..., J}.
In (2), l and u are explicit (finite) bounds, and the g_j are given constraint functions. Postulating now that all functions defined above are continuous, the optimal solution set to problem (1)-(2) is non-empty.
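To make the general form (1)-(2) concrete, the following short Python sketch encodes one small instance with explicit bounds and a single inequality constraint, and checks the feasibility of sampled points. The particular objective, constraint and bounds are illustrative assumptions, not taken from the text.

import numpy as np

# Illustrative instance of the general model (1)-(2):
#   min f(x)  subject to  l <= x <= u  and  g1(x) <= 0.
# The concrete functions below are assumptions chosen for demonstration.

def f(x):
    # A simple multiextremal objective.
    return np.sin(3.0 * x[0]) * np.cos(2.0 * x[1]) + 0.1 * np.dot(x, x)

def g1(x):
    # One inequality constraint, g1(x) <= 0.
    return x[0] + x[1] - 1.5

l = np.array([-2.0, -2.0])   # explicit lower bounds
u = np.array([ 2.0,  2.0])   # explicit upper bounds

def is_feasible(x):
    return np.all(l <= x) and np.all(x <= u) and g1(x) <= 0.0

# Crude illustration only: evaluate f at random feasible points.
rng = np.random.default_rng(0)
best_x, best_f = None, np.inf
for _ in range(10000):
    x = rng.uniform(l, u)
    if is_feasible(x) and f(x) < best_f:
        best_x, best_f = x, f(x)
print("best sampled point:", best_x, "value:", best_f)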
Most typically, it is assumed that the decision problem modelled by (1)-(2) has a unique - locally and, at the same time, also globally - optimal solution. Uniextremality is often implied by the mathematical model structure (for example, by the strict convexity of f, and the convexity of D). This paradigm corresponds to the situation in which one, supposedly, has a sufficiently close initial 'guess' of the feasible region where the optimal solution x* is located. Hence, the global optimality of the solution directly follows, having found the single local optimum of f on D. For example, linear and convex nonlinear programming models - both, in essence, satisfying the mentioned SEE PAGE TWO


János D. Pintér
Pintér Consulting Services (PCS), and Dalhousie University
PCS address: 129 Glenforest Drive, Halifax, NS, Canada B3M 1J2
e-mail: pinter@tuns.ca
http://www.tuns.ca/~pinter/

Keywords: Multiextremal
optimization models; continuous
global optimization; solution
approaches; software review.
AMS Subject Classification:
65K30, 90C05.


conference notes
book reviews
journals
gallimaufry








uniextremality assumption in most practical cases - have been extensively applied in the past decades to formulate and solve an impressive range of decision problems.
Although very important classes of models naturally belong to the above category, there is also a broad variety of problems in which the property of uniextremality cannot be simply postulated or verified. Consider, for instance, the following general problem types:
* nonlinear approximation, including the solution of systems of nonlinear equations and inequalities
* model fitting to empirical data (calibration, parameterization)
* optimized design and operation of complex 'black box' ('oracle') systems, e.g., in diverse engineering contexts
* configuration/arrangement design (e.g., in various data classification, facility location, resource allocation, or scientific modelling contexts)
Such problems - together with numerous other prospective application areas - are discussed by Pintér (1996) and in the extensive list of related references therein. For further applications consult, e.g., Pardalos and Rosen (1987), Törn and Žilinskas (1989), Floudas and Pardalos (1990, 1992), Grossmann (1996), Bomze, Csendes, Horst and Pardalos (1996), or special application-related issues of the Journal of Global Optimization.
The emerging field of Global Optimization (GO) deals with mathematical programming problems in the (possible) presence of multiple local optima. Observe that, typically, the number of local (pseudo)solutions is unknown, and it can be quite large. Furthermore, the quality of the various local and global solutions may differ significantly. In the presence of such structure - often visualized by 'hilly landscapes' corresponding to projections of the objective function into selected subspaces (given by coordinate-pairs of the decision variable x) - GO problems can be extremely difficult. Hence, most classical numerical approaches are, generally speaking, not directly applicable to solve them. For illustration, see Figure 1, which displays a relatively simple composition of trigonometric functions with imbedded polynomial arguments, in just two variables (denoted by x and y).
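A minimal Python sketch of such a landscape is given below; the toy function (trigonometric terms with polynomial arguments) is an illustrative assumption and is not the specific function plotted in Figure 1.

import numpy as np

def multiextremal(x, y):
    # Toy 'hilly landscape': trigonometric terms with polynomial arguments.
    # This is an illustrative assumption, not the function shown in Figure 1.
    return np.sin(x**2 - y) * np.cos(2.0 * x + y**2) + 0.05 * (x**2 + y**2)

# Sample the function on a grid to expose its many local optima.
xs = np.linspace(-3.0, 3.0, 200)
ys = np.linspace(-3.0, 3.0, 200)
X, Y = np.meshgrid(xs, ys)
Z = multiextremal(X, Y)
i, j = np.unravel_index(np.argmin(Z), Z.shape)
print("best grid point:", X[i, j], Y[i, j], "value:", Z[i, j])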
Naturally, under such circumstances, it is essential to use a proper global search strategy. Furthermore, instead of 'exact' solutions, most typically one has to accept diverse numerical approximations to the globally optimal solution (set) and optimum value. Following early sporadic work related to GO (since the late fifties), the present state-of-the-art is characterized by several dozen monographs, a professional journal and at least a few thousand research articles devoted primarily to the subject. A few illustrative references are provided at the end of this brief review.
[Figure 1. A multiextremal function in two variables.]
The most important GO model classes which have been extensively studied include the following examples. (Please recall the general model form (1)-(2), and note that the problem-classes listed below are not necessarily distinct; in fact, several of them are hierarchically contained by more general problem-types listed.)
* Bilinear and biconvex programming (f is bilinear or biconvex, D is convex)
* Combinatorial optimization (problems which have discrete decision variables in f and/or in g can be equivalently reformulated as GO problems in continuous variables)
* Concave minimization (f is concave, D is convex)
* Continuous global optimization (f is continuous, D is compact)
* Differential convex (D.C.) optimization (f and the g_j can all be represented as the difference of two corresponding convex functions)
* Fractional programming (f is the ratio of two real functions, and the g_j are convex)
* Linear and nonlinear complementarity problems (f is the scalar product of two vector functions, D is typically convex)
* Lipschitz optimization (f and g are arbitrary Lipschitz-continuous functions)
* Minimax problems (f is some minimax objective, the maximum is considered over a discrete set or a convex set, D is convex)
* Multilevel optimization (models non-cooperative games, involving hierarchies of decision-makers; their conflicting criteria are aggregated by f, D is typically assumed to be convex)
* Multiobjective programming (e.g., when several conflicting linear objectives are to be optimized over a polyhedron)
* Multiplicative programming (f is the product of several convex functions, and the g_j are convex, or - more generally - multiplicative functions)




* Network problems (f can be taken from several function-classes, including nonconvex ones, and the g are typically linear or convex)
* Parametric nonconvex programming (the feasible region D and/or the objective f may also depend on a parameter vector)
* Quadratic optimization (f is an arbitrary - indefinite - quadratic function; the g are linear or, in the more general case, can be arbitrary quadratic functions)
* Reverse convex programming (at least one of the functions g expresses a reverse convex constraint)
* Separable global optimization (f is an arbitrary nonlinear - in general, nonconvex - separable function, D is typically convex)
* Various other nonlinear programming problems, including, e.g., nonconvex stochastic models (in which the defining functions f, g depend on random factors, possibly in an implicit, 'black box' manner)
For detailed descriptions of most of these model-types and their connections consult, e.g., Horst and Pardalos (1995), with numerous further references.
There are several main classes of algorithmic GO approaches which possess strong theoretical convergence properties and - at least in principle - are straightforward to implement and apply. All such rigorous GO approaches have an inherent computational demand which increases non-polynomially as a function of problem-size, even in the simplest GO instances. It should be emphasized at this point that GO approaches are (or should be) typically complemented by a 'traditional' local optimization phase - at least when numerical efficiency issues are also considered. Global convergence, however, needs to be guaranteed by the global-scope algorithm component, which - theoretically - should be used in a complete, 'exhaustive' fashion. These remarks indicate the significant difficulty of developing robust and efficient GO software.
Without aiming at completeness, several of the most important GO strategies are listed below; for details, consult, for instance, the corresponding works from the list of references. (Note that the listing is not complete, and its items are not necessarily mutually exclusive; some software implementations combine ideas from several approaches.)
* Adaptive partition and search strategies (including, e.g., branch-and-bound algorithms, Bayesian approaches and interval arithmetic based methods) (Forgó, 1988; Ratschek and Rokne, 1988; Mockus, 1989; Neumaier, 1990; Zhigljavsky, 1991; Hansen, 1992; Horst and Pardalos, 1995; Horst and Tuy, 1996; Pintér, 1996; Kearfott, 1996)
* Adaptive stochastic search algorithms (including random search, simulated annealing, evolution and genetic algorithms) (van Laarhoven and Aarts, 1987; Zhigljavsky, 1991; Horst and Pardalos, 1995; Michalewicz, 1996; Pintér, 1996)
* Enumerative strategies (for solving combinatorial problems, or certain 'structured' - e.g., concave - optimization problems) (Forgó, 1988; Horst and Pardalos, 1995; Horst and Tuy, 1996)
* 'Globalized' local search methods (applying a grid search or random search type global phase, and a local search algorithm; see the sketch after this list) (Horst and Pardalos, 1995; Pintér, 1996)
* Heuristic strategies (deflation, tunneling, filled function methods, approximate convex global underestimation, tabu search, etc.) (Horst and Pardalos, 1995; Pintér, 1996)
* Homotopy (parameter continuation) methods and related approaches (including fixed point methods, pivoting algorithms, etc.) (Horst and Pardalos, 1995)
* Passive (simultaneous) strategies (uniform grid search, pure random search) (Zhigljavsky, 1991; Horst and Pardalos, 1995; Pintér, 1996)
* Successive approximation (relaxation) methods (cutting plane, more general cuts, minorant construction approaches, certain nested optimization and decomposition strategies) (Forgó, 1988; Horst and Pardalos, 1995; Pintér, 1996)
* Trajectory methods (differential equation model based, path-following search strategies) (Horst and Pardalos, 1995)
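As a simple illustration of the 'globalized' local search idea mentioned in the list above, the following Python sketch combines a random-sampling global phase with a local refinement phase. The toy objective, the sample size and the use of a quasi-Newton local solver are assumptions for demonstration only.

import numpy as np
from scipy.optimize import minimize

# Toy bound-constrained objective (an assumption for illustration only).
def f(x):
    return np.sin(3.0 * x[0]) * np.cos(2.0 * x[1]) + 0.1 * np.dot(x, x)

lower = np.array([-2.0, -2.0])
upper = np.array([ 2.0,  2.0])
bounds = list(zip(lower, upper))

rng = np.random.default_rng(42)
best_x, best_f = None, np.inf

# Global phase: random sampling of starting points in the box.
for _ in range(50):
    x0 = rng.uniform(lower, upper)
    # Local phase: gradient-based refinement from each sampled point.
    res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
    if res.fun < best_f:
        best_x, best_f = res.x, res.fun

print("approximate global minimizer:", best_x, "value:", best_f)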
In spite of considerable progress related to the rigorous theoretical foundations of GO, software development and 'standardized' use lag behind. The main reason for this is, of course, the inherent numerical difficulty of GO, even in the case of 'simpler' specific instances (such as the indefinite quadratic programming problem). In general, the difficulty of a global optimization problem (GOP) can be expected to increase as some exponential function of the problem dimension n. Consequently, dimensions 100, 50 or even 10 can be considered as 'large', depending on the GOP type investigated. In the remainder of this paper, an illustrative list of software products to solve GOPs is reviewed.


2. GO Software:
Information Sources and Some General Remarks


For the purposes of collecting information for this survey, GO software authors have been asked (mainly by sending e-mail messages, and by placing 'electronic ads' at several prominent mathematical programming sites on the WWW) to submit documentation related to their work. The information - or lack thereof - summarized below is largely based on the responses received. Additional information has been collected from the Internet, from several GO books, and from the Journal of Global Optimization. Note that though in many research publications reference is made to numerical examples, or even to sophisticated specific applications, only such work is reported below which is understood to be a general purpose and legally distributable program system.
For obvious reasons, the present survey is far from being 'complete' in any possible sense; rather, it is an attempt to provide a realistic picture of the state-of-the-art, supported by instances of existing software. This short review is not intended to be either comparative or 'judgemental': one simple reason being that the information received from GO software developers is used 'as is', mostly without the possibility of actual software testing. By the same token, the accuracy of all information cannot be guaranteed either. Further research in this direction - including the preparation of a more comprehensive and detailed survey - is currently in progress.
The software list provided in the next section is simply alphabetical, without categorization. For a more uniform presentation style, abbreviations are associated with all software products listed, even when such names were not given in the documentation available for this survey (existing names were not changed, of course). The descriptions are almost formula-free and extremely concise - due to space restrictions. For the latter reason, we decided not to include important classes of more specific GO approaches and related methodology. In particular - as reflected by the title - pure or mixed integer programming and more general combinatorial optimization algorithms are not discussed here. Furthermore, although most of the available top-of-the-line continuous nonlinear (convex) optimization software can be applied - with good taste and some luck - to analyze GOPs, even the most prominent such systems are excluded from this review. Again, a more detailed survey is planned, appropriately discussing also the program system types mentioned.




The hardware and software platform of the systems reviewed is also shown when such information is available. In order to assist in obtaining additional information, contact persons, their e-mail addresses, ftp and/or WWW sites are listed, whenever known to me. (For brevity, only a few such pointers are provided in each case.)
The reader is assumed to have at least some basic familiarity with the GO approaches mentioned; for related discussions, please consult the references.


3. Short Software Descriptions


αBB A GO Algorithm for General Nonconvex Problems
An implementation of a Branch-and-Bound (B&B) algorithm which is based on the difference of convex functions (D.C.) transformation. Nonconvexities are identified and categorized as of either special or generic structure. Special nonconvex (such as bilinear or univariate concave) terms are convex lower bounded using customized bounding functions. For generic nonconvex terms, convex lower bounding functions are derived by utilizing the parameter α (specified by the user or derived based on theory). αBB solves general unconstrained and constrained problems; it requires MINOS and/or NPSOL for the solution of linear or convex optimization subproblems. (Languages: C and Fortran.) Contact: I.P. Androulakis, C.A. Floudas, C.D. Maranas, http://titan.princeton.edu/.
ANNEAL Simulated Annealing
ANNEAL is based on the core SA approach, including several possibilities for parameter adjustment and a deterministic solution refinement phase. It has been applied to predict complex crystal structures. Workstation implementation. Contact: W. Bollweg <...muenster.de>, H. Maurer, H. Kroll.
ASA CalTech Adaptive Simulated Annealing
ASA was developed to find the global optimum of a continuous nonconvex function over a multidimensional interval (box). This algorithm permits an annealing schedule for 'temperature' decreasing exponentially in annealing time. The introduction of re-annealing also permits adaptation to changing sensitivities in the parameter-space. Some other adaptive options in ASA include self optimize (to find optimal starting conditions) and quenching (to methodically find faster performance that might be useful for large parameter-spaces). (Language: C.) Contact: L. Ingber, http://www.ingber.com/ #ASA CODE.
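The core simulated annealing idea referred to above - an exponentially decreasing 'temperature' with Boltzmann acceptance of uphill moves - can be sketched generically as follows. This is not the ASA code; the objective, neighbourhood and schedule parameters are assumptions.

import math
import random

def anneal(f, x0, lower, upper, n_iter=20000, t0=1.0, decay=3e-4, seed=0):
    """Generic simulated annealing over a box; not the ASA implementation."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best_x, best_f = list(x), fx
    for k in range(n_iter):
        t = t0 * math.exp(-decay * k)          # exponentially decreasing temperature
        cand = [min(max(xi + rng.gauss(0.0, 0.1), lo), hi)
                for xi, lo, hi in zip(x, lower, upper)]
        fc = f(cand)
        # Accept improvements always; accept worse points with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
    return best_x, best_f

# Example use on an assumed toy objective.
f = lambda x: math.sin(3.0 * x[0]) * math.cos(2.0 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)
print(anneal(f, [1.0, 1.0], [-2.0, -2.0], [2.0, 2.0]))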
B&B A Family of B&B Algorithms
This obvious acronym (by the present author) attempts to summarize several B&B type algorithms developed to solve certain structured GOP classes. These include (among others) indefinite quadratic, quasiconvex-concave, and general Lipschitz problems. Workstation implementations. (Language: C.) Contact: R. Horst, M. Nast, N. Thoai.
BARON Branch-And-Reduce Optimization Navigator
Combines interval analysis and duality with enhanced B&B concepts. The BARON modules can handle structured nonconvex problems with up to thousands of constraints and variables. The library of specialized modules includes solvers for numerous specific GOP-classes. (For other, more general problems, underestimation routines need to be provided by the user.) All modules can also solve problems in which some or all of the variables are restricted to integer values. The specialized modules use OSL or MINOS to solve interim subproblems. Workstations, UNIX type operating systems. (Languages: Fortran and GAMS.) Contact: N.V. Sahinidis, http://archimedes.me.uiuc.edu/sigma/baron.html, ftp://aristotle.uiuc.edu.
BGO Bayesian Global Optimization
This program system includes four versions of Bayesian search, clustering, uniform deterministic grid, and pure Monte Carlo search. Bound constraints and more general constraints can be handled. Interactive DOS and UNIX versions are available. (Languages: Fortran and C.) Contact: J. Mockus, L. Mockus.
cGOP Global Optimization Program
Solves structured GOPs which have an objective function of the form a^T x + b^T y + x^T A y + f1(x) + f2(y) with convex f1, f2, and linear constraints. Requires the presence of the commercial codes MINOS and/or CPLEX to solve linear, mixed-integer linear and convex subproblems. cGOP has been used to solve problems involving several hundred variables and constraints. Versions are available for workstations. (Language: C.) Contact: V. Visweswaran, C.A. Floudas, http://titan.princeton.edu/.
CGU Convex Global Underestimator
This approach is designed to generate efficient approximations to the global minimum of a multiextremal function, by fitting a convex function to the set of all known (calculated) local minima. This heuristically attractive strategy requires only the sequential solution of auxiliary LPs and some rather elementary calculations. CGU has been applied to calculate molecular structure predictions, with up to several dozen variables. Implemented on parallel workstations and supercomputers. Contact: K.A. Dill, A.T. Phillips, J.B. Rosen.
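The underlying idea can be illustrated with a short sketch: fit a separable convex quadratic that underestimates all known local minima by solving one LP, and read off its minimizer as a prediction of the region of the global minimum. This is a generic illustration only, not the CGU code; the functional form, the sample data and the tightness criterion are assumptions.

import numpy as np
from scipy.optimize import linprog

# Known local minima (points and objective values); assumed sample data.
X = np.array([[0.2, 1.1], [-1.0, 0.3], [0.9, -0.8], [-0.4, -1.2], [1.5, 1.4]])
f = np.array([1.3, 0.7, 0.9, 0.4, 2.0])
K, n = X.shape

# Fit c(x) = c0 + sum_i a_i x_i + sum_i b_i x_i^2, with b_i >= 0 and
# c(x_k) <= f_k for every known minimum, while maximizing the total fit
# value (i.e., making the underestimator as tight as possible).
# Decision variables are stacked as z = [c0, a_1..a_n, b_1..b_n].
A_ub = np.hstack([np.ones((K, 1)), X, X**2])
obj = -A_ub.sum(axis=0)                      # maximize sum_k c(x_k)
bounds = [(None, None)] * (1 + n) + [(0, None)] * n
res = linprog(obj, A_ub=A_ub, b_ub=f, bounds=bounds)

c0, a, b = res.x[0], res.x[1:1 + n], res.x[1 + n:]
# Minimizer of the separable convex underestimator (where b_i > 0).
x_pred = np.where(b > 1e-9, -a / (2.0 * np.maximum(b, 1e-9)), 0.0)
print("predicted region of the global minimum:", x_pred)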
CRS Controlled Random Search
This is a recently developed variant of a popular class of random search based methods which can be applied under very mild analytical conditions imposed on the GOP. Several other related stochastic search methods have also been developed by this group. Workstation implementations. Contact: M.M. Ali, A. Törn, S. Viitanen.
CURVI Bound-Constrained Global Optimization
Windward Technologies (WTI) develops advanced numerical and visualization software for solving constrained and unconstrained nonlinear optimization problems. One of their solvers, CURVI, is aimed at solving bound-constrained nonlinear programs which have a complicated - possibly multiextremal - objective function. (Language: Fortran.) Contact: T. Aird, http://users.aol.com/WTI/.
DE Differential Evolution Genetic Algorithm for Bound-Constrained GO
DE won third place at the 1st International Contest on Evolutionary Computation on a real-valued function test set. It was the best genetic algorithm approach (the first two places of the contest were won by non-GA algorithms). (Languages: Matlab and C.) Contact: R. Storn, http://http.icsi.berkeley.edu/~storn/code.html.
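For reference, the canonical differential evolution scheme (DE/rand/1/bin: differential mutation, binomial crossover, greedy selection) can be sketched as follows. This is a generic textbook-style sketch, not the distributed DE code, and the toy objective and control parameters are assumptions.

import numpy as np

def differential_evolution(f, lower, upper, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Generic DE/rand/1/bin sketch for bound-constrained minimization."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])      # differential mutation
            mutant = np.clip(mutant, lower, upper)
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example use with an assumed toy objective.
f = lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * np.dot(x, x)
print(differential_evolution(f, np.array([-2.0, -2.0]), np.array([2.0, 2.0])))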




ESA Edge Searching Algorithm
An implementation of an edge search algorithm for finding the global solution of linear reverse convex programs. ESA is based on an efficient search technique and the use of fathoming criteria on the edges of the polytope representing the linear constraints. In addition, the method incorporates several heuristics, including a cutting plane technique which improves the overall performance. Implemented for several UNIX platforms; the TPG Test Problem Generator is also available. (Language: Fortran.) Contact: K. Moshirvaziri.
GA Genetic Algorithms
Genetic algorithms - as a rule - can be applied to GOPs under mild structural requirements. Both general and specific information related to this popular solver class is available from the following sources: A Commented List of Genetic Algorithm Codes, ftp://ftp.germany.eu.net/pub/research/softcomp/ec/faq/www/q201.htm; GA Archive, http://www.aic.nrl.navy.mil/galist/src/. Only a few illustrative examples are listed in the present review.
GAS Genetic Algorithm
Unconstrained and bound-constrained versions are available. For DOS and UNIX operating systems. (Language: C++.) Contact: M. Jelasity, J. Dombi, ftp://ftp.jate.u-szeged.hu/pub/math/optimization/GAS/.
GAucsd Genetic Algorithm
Developed and maintained at the University of California, San Diego. GAucsd was written in C under Unix but should be easy to port to other platforms. The package is accompanied by brief information and a User's Guide. (Language: C.) Contact: nici@ucsd.edu, GAucsd-request@cs.ucsd.edu, ftp://cs.ucsd.edu/pub/GAucsd/.
GENERATOR Genetic Algorithm Solver
This method is aimed at solving a variety of (combinatorial and continuous multiextremal) scientific and engineering optimization problems. It is designed to interact with Excel, which serves as a user interface. (Platform: Excel.) Contact: New Light Industries, http://www.iea.com/~nli/.
GC Global Continuation
GC is a continuation approach to GO applying global smoothing in order to derive a simpler approximation to the original objective function. GC is applied by the authors to distance geometry problems, in the context of molecular chemistry modelling. IBM SP parallel system implementation. Contact: J.J. Moré, Z. Wu.
GENOCOP III Genetic Algorithm for Constrained Problems
Solves general GOPs in the presence of additional constraints and bounds (using quadratic penalty terms). System parameters, domains, and linear inequalities are input via a data file. The objective function and any nonlinear constraints are to be given in appropriate C files. (Language: C.) Contact: Z. Michalewicz, http://www.coe.uncc.edu/~zbyszek/gcreadme.html, ftp://ftp.uncc.edu/coe/evol/genocopIII.tar.Z.
GEODES Minimum-Length Geodesic Computing
Approximating a minimum-length geodesic on a multidimensional manifold, GEODES is differential geometry software. However, it also has potential in the GO context. GEODES includes example manifolds and metrics; it is implemented in Elements (a matrix and function oriented scientific modelling environment) to compute and visualize geodesics on 2D surfaces plotted in 3-space. Portable to various hardware platforms. (Languages: C, C++.) Contact: W.L. Anderson, http://www.netcom.com/~elements/, http://www.netlib.org/ode/geodesic/.
GLO Global and Local Optimizer
GLO is a modular optimization system developed for 'black box' problems in which objective function calculations may take a long time. Its methodology is based on the coupling of global (genetic) and local (variable metric) nonlinear optimization software with scientific applications software. It has been applied to automated engineering design. Besides the modular optimization control system, GLO also has a graphical user interface and includes a pre-processor. Contact: M.J. Murphy, http://www.llnl.gov/glo/09glo.html, M. Brosius.
GLOBAL Multistart with Stochastic Clustering
GLOBAL can be used for the solution of the general bound-constrained GOP which has a (measurable) real objective function. The algorithm is a derivative-free implementation of the clustering stochastic multistart method of Boender et al., supplemented with a quasi-Newton local search routine and with a robust random local search method. Available for UNIX machines, IBM-compatible mainframes and PCs. (Languages: Fortran and C.) Contact: T. Csendes <...szeged.hu>, http://www.inf.u-szeged.hu/~csendes/, ftp://ftp.jate.u-szeged.hu/pub/math/optimization/index.html.
GLOBALIZER An Educational Program System for Global Optimization
Serves for solving univariate GOPs. After stating the problem, the user can choose among various (random search, B&B based, or Bayesian partition based) solver techniques. The software has interactive tutoring capabilities, and provides textual and graphical information. Works on PCs, under MS-DOS. Contact: R.G. Strongin, V.P. Gergel, A.V. Tropichev.
GLOPT Constrained Global Optimization
Solves GOPs with a block-separable objective function subject to bound constraints and block-separable constraints; it finds a nearly globally optimal point that is near to a true local minimizer. GLOPT uses a B&B technique to split the problem recursively into subproblems that are either eliminated or reduced in their size. It includes a new reduction technique for boxes and new ways of generating feasible points of constrained nonlinear programs. The current implementation of GLOPT uses neither derivatives nor simultaneous information about several constraints. (Language: Fortran.) Contact: A. Neumaier, S. Dallwig and H. Schichl.
GOPP Global Optimization of Polynomial Problems using Gröbner Bases
The (local) optimality conditions for polynomial optimization problems lead to polynomial equations, under inequality constraints. Applying recent Gröbner basis techniques, this approach is aimed at finding all solutions to such systems, hence also finding global optima. (Language: Maple.) Contact: K. Hägglöf, P.O. Lindberg, L. Svensson, http://www.optsyst.math.kth.se.




GOT Global Optimization Toolbox
GOT combines random search and local (convex) optimization. DOS and HP-UX versions are available. (Language: Fortran.) Contact: A.V. Kuntsevich.
GSA Generalized Simulated Annealing
GSA is based on the generalized entropy of Tsallis. The algorithm obeys detailed balance conditions and, at low 'temperatures', it reduces to steepest descent. (Note that members of the same research group have been involved in the development of several SA type algorithms.) Contact: J.E. Straub, P. Amara, J. Ma.
IHR Improving Hit-and-Run
IHR is a random search based GO algorithm that can be used to solve both continuous and discrete optimization problems. IHR generates random points in the search domain by choosing a random direction and selecting a point in that direction. Versions have been implemented using different distributions for the random direction, as well as several ways to randomly select points along the search line. The algorithm can also handle inequality constraints and a hierarchy of objective functions. IHR has been used to solve GOPs in various disciplines, such as engineering design. Contact: Z. Zabinsky, ftp://ftp.bart.ieng.washington.edu.
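The basic improving hit-and-run step - pick a uniform random direction, pick a uniform point on the feasible segment through the current iterate, and keep it only if it improves - can be sketched as follows for a box-constrained problem. This illustrates the principle only, not the distributed IHR code; the objective and iteration budget are assumptions.

import numpy as np

def improving_hit_and_run(f, lower, upper, n_iter=5000, seed=0):
    """Generic improving hit-and-run sketch for a box-constrained GOP."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper)
    fx = f(x)
    for _ in range(n_iter):
        d = rng.normal(size=len(lower))
        d /= np.linalg.norm(d)                      # uniform random direction
        # Feasible step range keeping lower <= x + t*d <= upper.
        with np.errstate(divide="ignore"):
            t_lo = np.where(d != 0, (lower - x) / d, -np.inf)
            t_hi = np.where(d != 0, (upper - x) / d, np.inf)
        t_min = np.max(np.minimum(t_lo, t_hi))
        t_max = np.min(np.maximum(t_lo, t_hi))
        t = rng.uniform(t_min, t_max)               # uniform point on the segment
        cand = x + t * d
        fc = f(cand)
        if fc < fx:                                 # keep improving points only
            x, fx = cand, fc
    return x, fx

f = lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * np.dot(x, x)
print(improving_hit_and_run(f, np.array([-2.0, -2.0]), np.array([2.0, 2.0])))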
IMINBIS Interval Arithmetic Based GO
This method applies interval arithmetic techniques to isolate the stationary points of the objective function. Next, a topological characterization is used to separate minima from maxima and saddle points, followed by local minimization (sub)searches to select the global solution. The method has also been applied to 'noisy' problems. Workstation and PC implementations, extensive related research. (Language: Fortran.) Contact: M.N. Vrahatis, D.G. Sotiropoulos, E.C. Triantaphyllou.
INTBIS Global Solver for Polynomial Systems of Equations
Finds all solutions of polynomial systems of equations, with rigorously guaranteed results. The software package INTBIS is ACM-TOMS Algorithm 681; it is available through NETLIB. Distributed with the package are four source code files, sample input and output files, and a brief documentation file. The source files consist of the following: interval arithmetic, stack management, core INTBIS routines, and machine constants. (Language: Fortran.) Contact: R.B. Kearfott, http://interval.usl.edu/kearfott.html, ftp://interval.usl.edu/pub/interval_math/intbis/.
INTOPT 90 Verified (Interval) Global Optimization
Serves for the verified solution of nonlinear systems of equations and unconstrained and bound-and-equality-constrained global optimization. Based on exhaustive search, driven by a local optimizer, epsilon-inflation, interval Newton methods, and interval exclusion principles; uses automatic differentiation. Test results with hundreds of test examples. The underlying interval arithmetic package (ACM TOMS Algorithm 737) is also distributed. Workstation and PC implementations. (Language: Fortran.) Contact: R.B. Kearfott, http://interval.usl.edu/kearfott.html, ftp://interval.usl.edu/pub/interval_math/intbis/.
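The verified interval methods above share a common branch-and-bound skeleton: compute an interval enclosure of f over a box, discard boxes whose lower bound exceeds the incumbent, and bisect the rest. The following one-dimensional Python sketch illustrates this principle for one assumed objective; it hand-codes a natural interval extension and omits outward rounding, so - unlike the packages listed here - its bounds are not formally verified.

import heapq

def f(x):
    # Assumed 1-D test objective (a multiextremal quartic).
    return x**4 - 4.0 * x**2 + x

def f_enclosure(a, b):
    """Natural interval extension of f over [a, b] (no outward rounding)."""
    def even_pow(p):
        lo = 0.0 if a <= 0.0 <= b else min(a**p, b**p)
        return lo, max(a**p, b**p)
    x4_lo, x4_hi = even_pow(4)
    x2_lo, x2_hi = even_pow(2)
    return x4_lo - 4.0 * x2_hi + a, x4_hi - 4.0 * x2_lo + b

def interval_branch_and_bound(a, b, tol=1e-6):
    best = f((a + b) / 2.0)                      # incumbent from the midpoint
    heap = [(f_enclosure(a, b)[0], a, b)]
    while heap:
        lo, a, b = heapq.heappop(heap)
        if lo > best or b - a < tol:             # prune, or stop refining
            continue
        m = (a + b) / 2.0
        best = min(best, f(m))
        for lo_c, hi_c in ((a, m), (m, b)):      # bisect and re-enclose
            enc_lo, _ = f_enclosure(lo_c, hi_c)
            if enc_lo <= best:
                heapq.heappush(heap, (enc_lo, lo_c, hi_c))
    return best

print("approximate global minimum of f on [-3, 3]:",
      interval_branch_and_bound(-3.0, 3.0))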


INTGLO, INTGLOB Integral Global Optimization
These methods solve unconstrained and constrained, as well as discrete, GOPs by the integral method. They also include a discontinuous penalty function approach for constrained problems. Problems with up to one hundred variables have been solved. A set of test problems is also available, including box-constrained or unconstrained, constrained, concave minimization, discrete variable programs and multicriteria programs. For IBM PCs. (Language: Fortran.) Contact: Q. Zheng, D. Zhuang.
ISA Inductive Search Algorithm
ISA won first place at the 1st International Contest in Evolutionary Computation on a real-valued function test-suite. (Language: C++.) Contact: G. Bilchev; information available at http://solon.cma.univie.ac.at/~neum/glopt/testresults.html#bilchev.
LGO Continuous and Lipschitz Optimization
Solves bound-constrained and more general GOPs under mild structural requirements; it can also be applied to 'black box' problems. LGO integrates several global (adaptive partition and random search based) and local (derivative-free conjugate directions type) strategies; these can be activated in interactive or automatic execution modes. The PC version has a menu interface to assist the application development process, includes a concise information/tutoring session, and has visualization capabilities. Also available for workstations. LGO has been applied to problems with up to 100 variables (and can be configured to encompass larger sizes). Accompanied by a User's Guide and sample problems. (Language: Fortran.) Contact: J.D. Pintér, http://www.tuns.ca/~pinter/.
LOPS Lipschitz Optimization Program System
In all approaches listed below, the objective function is defined over an interval. The Lipschitz-continuity of f or f' is also assumed. Problem classes and corresponding available versions include: one-dimensional GOPs (sequential methods with local tuning; PC version, Language: C++); one-dimensional GOPs (parallel solver implementations; Language: Alliant FX/80 parallel Fortran); multi-dimensional GOPs (sequential and parallel algorithms using Peano curves; Language: Alliant FX/80 parallel Fortran). Contact: Y.D. Sergeyev.
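For a univariate objective with known Lipschitz constant L, the classical sequential approach builds the sawtooth lower bound from the evaluated points and always refines the subinterval where that bound is smallest. The following sketch illustrates this principle; it is not LOPS itself, and the test function, the constant L and the iteration count are assumptions.

import math
import bisect

def piyavskii(f, a, b, L, n_iter=100):
    """Sequential 1-D Lipschitz optimization sketch (sawtooth lower bounds)."""
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_iter):
        # Lower-bound minimum on each subinterval [x_i, x_{i+1}]:
        #   value  (f_i + f_{i+1})/2 - L (x_{i+1} - x_i)/2
        #   point  (x_i + x_{i+1})/2 + (f_i - f_{i+1})/(2 L)
        best_i, best_bound = None, math.inf
        for i in range(len(xs) - 1):
            bound = 0.5 * (fs[i] + fs[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if bound < best_bound:
                best_i, best_bound = i, bound
        x_new = (0.5 * (xs[best_i] + xs[best_i + 1])
                 + (fs[best_i] - fs[best_i + 1]) / (2.0 * L))
        j = bisect.bisect(xs, x_new)
        xs.insert(j, x_new)
        fs.insert(j, f(x_new))
    k = min(range(len(xs)), key=lambda i: fs[i])
    return xs[k], fs[k]

# Assumed test function with Lipschitz constant roughly L = 6 on [0, 6].
f = lambda x: math.sin(3.0 * x) + 0.5 * math.cos(5.0 * x) + 0.1 * x
print(piyavskii(f, 0.0, 6.0, L=6.0))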
MAGESTIC Data Fitting by Global Optimization
Automatic global optimization based on a fast modified Gauss-Newton approach combined with Monte Carlo search. MAGESTIC handles calibration model variants (e.g., parameter and error masks for restricted sub-fitting, implicit equation fitting without solving, etc.). Also suitable for use with Lagrange multipliers for constrained optimization. Uses Excel as an interface (under Windows) and for generating graphics. (Platform: Excel.) Contact: Logix Consulting, http://www.lgx.com/magestic.html.
MULTISTART Clustering Algorithm
This widely used approach is based on random search - or some other initial sampling in the feasible set - combined with clustering and local optimization launched from the most 'promising' point(s). Implemented on SUN workstations. Several interesting applications - in combination with simulation models - are related to the analysis of oil resources. (Language: Fortran.) Contact: S. Buitrago.




NETSPEAK General Network Optimization
This is an algebraic modelling language used to specify, solve, and analyze general - linear, but also possibly nonconvex - minimum cost network flow problems. A wide variety of network and network-related topologies (pure networks, networks with side-constraints and/or side variables, generalized networks) can be modelled using NETSPEAK. The language is being developed as a Windows application; it features flexible I/O, robust program control, and intuitive commands. Contact: B.W. Lamar.
PA Packet Annealing
In PA, the Gibbs distribution of the objective function is deterministically 'annealed' by tracing the evolution of a multiple Gaussian packet approximation. The approach has been applied to analyze complex molecular conformation models. IBM PC implementation. Contact: D. Shalloway, B.W. Church, M. Oresic.
PROFIL Interval Branch and Bound Method
Bound constrained interval global optimization, with rigorously guaranteed results. PROFIL is based on BIAS (Basic Interval Arithmetic Subroutines), which provides an interface for interval operations. For PCs and a number of UNIX systems. (Language: C.) Contact: C. Jansson, O. Knueppel, http://www.ti3.tu-harburg.de/Software/PROFILEnglisch.html, ftp://ti3sun.ti3.tu-harburg.de/pub/profil/unix/profopt.tar.Z, ftp://ti3sun.ti3.tu-harburg.de/pub/profil/pc/profopt.tgz.
PVGO Parallel Verified Global Optimization
PVGO is a new parallel method for interval global optimization. Implemented on a Connection Machine CM5. (Language: Pascal-XSC.) Contact: S. Berner.
RSBB Reduced Space Branch-and-Bound Method
RSBB applies variable domain reductions and an underestimating module. Two versions run under Unix; one of these algorithms is described in Chapters 1 and 2 of Grossmann (1996). (Language: both versions in C++; they use external Fortran procedures.) Contact: T. Epperly, t.epperly@ic.ac.uk, http://www.ps.ic.ac.uk/~epperly/index.html.
SA Simulated Annealing
In addition to the algorithm itself, this includes an on-line interactive demonstration, and additional information on C++ classes, random number generation, Monte Carlo methods and Forth. A Nelder-Mead simplex method implementation is also available. (Languages: C, C++, Ada and Forth.) Contact: E. Carter, Taygeta Scientific Inc., http://www.taygeta.com/annealing/simanneal.html.
SAT Global Optimization for Satisfiability Problems
Boolean satisfiability (SAT) problems can be directly transformed into unconstrained GOPs. These, in turn, can be solved by specifically tailored solvers. Workstation implementations. Contact: J. Gu.
SIGMA Stochastic Integration Global Minimization Algorithm
The software package SIGMA is ACM-TOMS Algorithm 667, which appeared in ACM TOMS 14 (1988) 366-380. It also includes the code of several test GOPs. (Language: Fortran.) Contact: F. Aluffi-Pentini, V. Parisi and F. Zirilli, http://www.netlib.no/netlib/toms/667.
SIMANN Simulated Annealing
This program implements the continuous simulated annealing global optimization algorithm described in Corana et al., ACM TOMS 13 (1987) No. 3, 262-280. Algorithm modifications and many details on its use can be found in Goffe et al., J. of Econometrics 60 (1994) No. 1-2, 65-100. (Language: Fortran.) Contact: B. Goffe, http://www.netlib.no/netlib/opt/simann.f.
SOLVEX Solver for Nonlinear Optimization Problems
For solving constrained and unconstrained nonlinear, multiobjective and GOPs. The SOLVEX algorithm libraries include the methods listed below. Unconstrained minimization: Hooke-Jeeves direct search, conjugate gradient method, Shor R-algorithm, Powell-Brent method. General nonlinear programming: penalty functions, Lagrange function method, parameterization method. Global optimization: sequential set covering technique, simulated annealing, clustering algorithm. Multicriteria optimization: convolution methods, including goal programming, direct Pareto approximation. Interactive use is enhanced by a built-in problem editor and graphics capabilities. Contact: M.A. Potapov.
TORUS Stochastic Algorithm for Global Minimization with Constraints
A Monte Carlo algorithm, combined with annealing search principles. The software package TORUS is ACM-TOMS Algorithm 744, which appeared in ACM TOMS 21 (1995) 194-213. It also includes the code of several test GOPs. Contact: F.M. Rabinowitz, http://www.netlib.no/netlib/toms/744.
TRUST Terminal Repeller Unconstrained Subenergy Tunneling
This method formulates the GOP as the solution of a deterministic dynamical system incorporating terminal repellers and a subenergy tunneling function. Benchmark tests comparing this method to other global optimization procedures are presented, with favourable results. The TRUST formulation leads to a simple stopping criterion. In addition, the structure of the equations enables an implementation of the algorithm in analog VLSI hardware (in the spirit of artificial neural networks) for further speed enhancements. Contact: B.C. Cetin, J. Barhen and W. Burdick; TRUST is described in JOTA 77 (1993) No. 1.
TVC Toolbox for Verified Computing
Can be applied to the rigorous solution of nonlinear systems of equations and to general unconstrained and bound-constrained GOPs. TVC is based on interval B&B and interval Newton methods; it also has automatic differentiation capabilities. The toolbox can be used on PCs, workstations and parallel computers. (Languages: PASCAL-XSC, C++.) Test problems have been solved with up to one hundred variables. A driver program and hundreds of test examples are available from the author. Contact: D. Ratz, http://www.uni-karlsruhe.de/~iam, http://ourworld.compuserve.com/homepages/numeriksoftware.
UFO Universal Functional Optimization
Interactive modular system for solving problems and for algorithm development. Several types of GO methods - random search, continuation, clustering, and random search plus local search - can be applied. For PCs. (Language: Fortran.) Contact: L. Luksan, M. Tuma, M. Siska, J. Vlcek and N. Ramesova, ftp://uivt.cas.cz/pub/msdos/ufo.




UNICALC Interval Branch and Bound Algorithm
UNICALC serves for bound-constrained GO; it also accepts inequality and/or equality constraints and decision variables. Contact: A. Semenov; information available at ftp://ftp.iis.nsk.su/pub/ai/unicalc.
VerGO Verified Global Optimization
VerGO is designed for rigorous bound-constrained (and approximate general constrained) GO of a twice continuously differentiable objective function. VerGO features include interval arithmetic, automatic differentiation, a non-convexity test, a monotonicity test, and local optimization. Tested on problems with up to over 30 variables. DOS, OS/2, Linux and workstation versions. (Language: C++.) Contact: R. van Iwaarden, http://www.cs.hope.edu/~rvaniwaa/VerGO/VerGO.html.
VTT Interval Arithmetic Research
The goals of the Interval Arithmetic, Constraint Satisfaction and Probability Project are summarized as follows: development of portable C++ libraries for interval programming tasks; integration of the libraries into Microsoft Excel; application in financial planning software products. (Platforms: C++, Excel.) Contact: S. De Pascale, http://www.vtt.fi/tte/.


4. Acknowledgements


The software review presented here is based to a significant extent on information kindly provided by colleagues working on GO and/or closely related areas. I would like to especially thank Arnold Neumaier and Simon Streltsov for the information collected on their WWW Global Optimization Pages (respectively, http://solon.cma.univie.ac.at/~neum/glopt/ and http://cad.bu.edu/go/). I also wish to thank Faiz Al-Khayyal for his valuable comments on the manuscript.
The space (and time) limitations of this review certainly have made it illusory to include 'all' existing software in this rapidly changing area; omissions are entirely possible but absolutely unintentional. It is planned to continue this work and to provide a more comprehensive and informative picture of the state-of-the-art for the mathematical programming community. Comments and suggestions are most welcome; they will contribute to an 'unabridged' GO software review in the near future.


References


To avoid a superfluously long listing, the reference list is reduced to the most topical journal, and to several GO monographs and handbooks published in the past ten years.
Bomze, I.M., Csendes, T., Horst, R., and Pardalos, P.M., eds. (1996) Developments in Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.
Floudas, C.A. and Pardalos, P.M. (1990) A Collection of Test Problems for Constrained Global Optimization Algorithms. Lecture Notes in Computer Science 455, Springer, Berlin / Heidelberg / New York.
Floudas, C.A. and Pardalos, P.M., eds. (1992) Recent Advances in Global Optimization. Princeton University Press, Princeton.
Forgó, F. (1988) Nonconvex Programming. Akadémiai Kiadó, Budapest.
Grossmann, I.E., ed. (1996) Global Optimization in Engineering Design. Kluwer Academic Publishers, Dordrecht / Boston / London.
Hansen, E.R. (1992) Global Optimization Using Interval Analysis. Marcel Dekker, New York.
Horst, R. and Tuy, H. (1996) Global Optimization - Deterministic Approaches. Springer, Berlin / Heidelberg / New York. (3rd Edn.)
Horst, R. and Pardalos, P.M., eds. (1995) Handbook of Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.
Journal of Global Optimization (published since 1991 by Kluwer Academic Publishers).
Kearfott, R.B. (1996) Rigorous Global Search: Continuous Problems. Kluwer Academic Publishers, Dordrecht / Boston / London.
Michalewicz, Z. (1996) Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin / Heidelberg / New York. (3rd Edn.)
Mockus, J. (1989) Bayesian Approach to Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.
Neumaier, A. (1990) Interval Methods for Systems of Equations. Cambridge University Press, Cambridge.
Pardalos, P.M. and Rosen, J.B. (1987) Constrained Global Optimization: Algorithms and Applications. Lecture Notes in Computer Science 268, Springer, Berlin / Heidelberg / New York.
Pintér, J.D. (1996) Global Optimization in Action. Kluwer Academic Publishers, Dordrecht / Boston / London.
Ratschek, H. and Rokne, J.G. (1988) New Computer Methods for Global Optimization. Ellis Horwood, Chichester.
Törn, A.A. and Žilinskas, A. (1989) Global Optimization. Lecture Notes in Computer Science 350, Springer, Berlin / Heidelberg / New York.
van Laarhoven, P.J.M. and Aarts, E.H.L. (1987) Simulated Annealing: Theory and Applications. Kluwer Academic Publishers, Dordrecht / Boston / London.
Zhigljavsky, A.A. (1991) Theory of Global Random Search. Kluwer Academic Publishers, Dordrecht / Boston / London.




Nominations for the A.W. Tucker Prize
Beale-Orchard-Hays Prize Extended Deadline


The Mathematical Programming Society invites nominations for the A.W. Tucker Prize for an outstanding paper authored by a student. The award will be presented at the International Symposium on Mathematical Programming in Lausanne (August 24-29, 1997). All students, graduate and undergraduate, are eligible. Nominations of students who have not yet received the first university degree are especially welcome. In advance of the Symposium an award committee will screen the nominations and select at most three finalists. The finalists will be invited, but not required, to give oral presentations at a special session of the Symposium. The award committee will select the winner and present the award prior to the conclusion of the Symposium. The members of the committee for the 1997 A.W. Tucker Prize are Kurt Anstreicher, Department of Management Sciences, University of Iowa; Rolf Moehring, Fachbereich Mathematik, Technical University of Berlin; Jorge Nocedal, EECS Department, Northwestern University; Jean-Philippe Vial (Chairman), HEC, ... Studies, University of Geneva; David Williamson, IBM T.J. Watson Research Center, Yorktown Heights.
Eligibility: The paper may concern any aspect of mathematical programming; it may be original research, an exposition or survey, a report on computer routines and computing experiments, or a presentation of a new and interesting application. The paper must be solely authored and completed after January 1994. The paper and the work on which it is based should have been undertaken and completed in conjunction with a degree program.
Nominations: Nominations must be made in writing to the chairman of the award committee as follows:
Jean-Philippe Vial
HEC, ... Studies
University of Geneva
102, Bd Carl-Vogt
CH-1211 Geneva 4
Switzerland
Fax: 41 22 705 81 04
E-mail: jpvial@uni2a.unige.ch


They must be submitted by a faculty member at the institution where the nominee was studying for a degree when the paper was completed. Letters of nomination must be accompanied by a statement that each member of the committee (including the chairman) was sent the following documents: the student's paper; a separate summary of the paper's contributions, written by the nominee, and no more than two pages in length; and a brief biographical sketch of the nominee.
Deadline: Nominations must be sent to the chairman and postmarked no later than February 15, 1997.
Addresses of the other members of the committee:
Prof. Kurt M. Anstreicher
Department of Management Sciences
University of Iowa
Iowa City, IA 52242 USA
E-mail: kanstrei@scout-po.biz.uiowa.edu
Prof. Dr. Rolf H. Moehring
Fachbereich Mathematik, Sekr. 6-1
Technische Universität Berlin
Strasse des 17. Juni 136
10623 Berlin, Germany
Fax: 49 30 314-25191
E-mail: moehring@math.tu-berlin.de
Prof. Jorge Nocedal
Electrical Engineering and Computer Science
Northwestern University
Evanston, IL 60208-3118 USA
Fax: (847) 467-4144
E-mail: nocedal@eecs.nwu.edu
Dr. David P. Williamson
IBM T.J. Watson Research Labs
P.O. Box 218
Yorktown Heights, NY 10598 USA
Fax: (914) 945-3434
E-mail: dpw@watson.ibm.com
The above information is reproduced on the web at the address:
http://dmawww.epfl.ch/roso.mosaic/ismp97/tucker.html


Call for Nominations: Nominations are being sought for the Mathematical Programming Society Beale-Orchard-Hays Prize for Excellence in Computational Mathematical Programming.
Purpose: This award is dedicated to the memory of Martin Beale and William Orchard-Hays, pioneers in computational mathematical programming. To be eligible, a paper or a book must meet the following requirements:
1) It must be on computational mathematical programming. The topics to be considered include:
a) experimental evaluations of one or more mathematical programming algorithms,
b) the development of quality mathematical programming software (i.e. well-documented code capable of obtaining solutions to some important class of MP problems) coupled with documentation of the applications of the software to this class of problems (note: the award would be presented for the paper which describes this work and not for the software itself),
c) the development of a new computational method that improves the state-of-the-art in computer implementations of MP algorithms, coupled with documentation of the experiment which showed the improvement, or
d) the development of new methods for empirical testing of mathematical programming techniques (e.g., development of a new design for computational experiments, identification of new performance measures, methods for reducing the cost of empirical testing).
2) It must have appeared in the open literature.
3) If the paper or book is written in a language other than English, then an English translation must also be included.
4) Papers eligible for the 1997 award must have been published within the years 1993 through 1996.

These requirements are intended as guidelines for the screening committee but are not to be viewed as binding when work of exceptional merit comes close to satisfying them.
Frequency and Amount of the Award: The prize is awarded every three years. The 1997 prize of $1500 and a plaque will be presented in August 1997, at the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, at the Awards Session of the International Symposium on Mathematical Programming sponsored by the Mathematical Programming Society.
Judgement criteria: Nominations will be judged on the following criteria:
1) Magnitude of the contribution to the advancement of computational and experimental mathematical programming.
2) Originality of ideas and methods.
3) Clarity and excellence of exposition.
Nominations: Nominations must be in writing and include the title(s) of the paper(s) or book, the author(s), the place and date of publication, and four copies of the material. Supporting justification and any supplementary materials are welcome but not mandatory. The awards committee reserves the right to request further supporting materials from the nominees.
Nominations should be mailed to:
Professor Robert J. Vanderbei
Dept. of Civ. Eng. and Operations Research
ACE-42 Engineering Quad
Princeton University
Princeton, NJ 08544 USA
The deadline for submission of nominations is January 1, 1997.
This call for nominations can be viewed online by visiting:
http://www.sor.princeton.edu/~rvdb/BOH97.html




Optimal Control: theory, algorithms, and applications
Center for Applied Optimization, University of Florida
Feb. 27 - March 1, 1997

Third Workshop on Models and Algorithms for Planning and Scheduling Problems
Cambridge, England
April 7-11, 1997

5th Twente Workshop on Graphs and Combinatorial Optimization
University of Twente, Enschede, The Netherlands
20-22 May 1997

MPDP-19 Nineteenth Symposium on Mathematical Programming with Data Perturbations
The George Washington University, Washington, DC
May 22-23, 1997

XVI International Symposium on Mathematical Programming
Lausanne, Switzerland
Aug. 1997

ICM98
Berlin, Germany
August 18-27, 1998


5th Twente Workshop on Graphs & Combinatorial Optimization
University of Twente, Enschede, The Netherlands
20-22 May 1997


The Twente Workshop on Graphs and Combinatorial Optimization is organized biennially at the Faculty of Applied Mathematics at the University of Twente. Topics are the theory of graphs and discrete algorithms (both deterministic and random) and their applications in operations research and computer science.
We have attempted to keep a "workshop atmosphere" and thus far have succeeded in scheduling no more than two parallel presentations. Costs have been kept as low as possible in order to make the workshop ...ers.
Prospective speakers are asked to submit an extended abstract of their presentation, which will be refereed by a programming committee. The extended abstract should be at least three but not more than four pages in length and should reach the organizers before 28 February 1997.
The accepted extended abstracts will be collected as a conference volume which will be available at the workshop.
The external program committee members include:

J.A. Bondy (Lyon), R.E. Burkard
(Graz), W.J. Jackson (London),
R.H. Moehring (Berlin), H. Sachs
(Ilmenau), R. Schrader (Cologne),
A. Schrijver (Amsterdam),
C. Thomassen (Copenhagen).
A refereed special issue of Discrete Applied Mathematics will be devoted to the proceedings of the workshop.
If you are interested in participating in the 5th TWENTE WORKSHOP, please pre-register now informally and give your complete postal as well as your e-mail address. Indicate whether you would like to give a presentation of about 30 minutes, naming the subject and/or title if you know it. You should then receive a registration form and more detailed information by December 1996.
U. Faigle, C. Hoede
Faculty of Applied Mathematics
University of Twente
P.O. Box 217, 7500 AE Enschede
The Netherlands
e-mail: faigle@math.utwente.nl; hoede@math.utwente.nl; hunting@math.utwente.nl


SYMPOSIUM NEWS

The number of preregistrations for the 16th International Symposium on
Mathematical Programming, to be held in Lausanne, August 24-29 1997,
is growing nicely! So far 773 people from 54 countries have
preregistered. Some 145 invited sessions have been organized, and 20
speakers have agreed to give plenary or semi-plenary talks.
All people interested in the symposium are encouraged to consult the
homepage at the address http://dmawww.epfl.ch/roso.mosaic/ismp97/.
Here you can find information about the symposium site and the planned
topics, as well as travel information, suggestions regarding
accommodation, and a preregistration form.
People from soft currency countries can apply for financial support at their local SOROS foundation. If there is no such foundation in your home country, you can also apply for funding toward meeting the expenses directly to the organizing committee of the Symposium. In the application you should provide justification for the application and provide a title and an abstract of the presentation you intend to give at the Symposium. The applications should be received by April 30, 1997, and can be sent by mail or e-mail to: Chairman of the Organizing Committee, Professor Th.M. Liebling, Department of Mathematics, EPFL, CH-1015 Lausanne, Switzerland; e-mail: liebling@dma.epfl.ch
On behalf of the Symposium Organizers, Karen Aardal




Financial Support for

INTERNATIONAL CONGRESS OF MATHEMATICIANS

Berlin, 1998




The ICM'98 Organizing Committee has already received quite a number of
requests concerning financial support for participation at the
International Congress of Mathematicians 1998 in Berlin. The Circular
Letter ICM98 CL6 describes how mathematicians from developing countries
can apply for financial help. The local Organizing Committee is currently
making efforts to obtain donations from German industry, government,
foundations and individuals to be able to partially support mathematicians
from Eastern Europe and the independent states of the former Soviet Union.
To secure the participation of as many persons as possible, the local
Organizing Committee will only support local costs in Berlin. Berlin is
very close to Eastern Europe, and it is expected that applicants will find
other means to cover their travel costs.
To handle the applications and manage the financial support, the ICM'98
Organizing Committee has set up a subcommittee, the Committee for Support
of Mathematicians from Eastern Europe (CSMEE).
CSMEE will distribute application forms for grants, as described above,
for mathematicians from Eastern Europe in late summer 1997. These forms
will be made available through the ICM'98 server (http://elib.zib.de/ICM98)
and by e-mail. Applicants will be asked to provide a brief curriculum vitae
(including academic education, degree, professional employment, and a list
of publications).
Applicants should submit their application form before January 1, 1998
to CSMEE at one of the following addresses:


Prof. Dr. H. Kurke
Humboldt-Universitaet, Institut fuer Mathematik
Unter den Linden 6, D-10099 Berlin
Germany
e-mail: kurke@mathematik.hu-berlin.de

Prof. Dr. W. Roemisch
Humboldt-Universitaet, Institut fuer Mathematik
Ziegelstrasse 13A, D-10099 Berlin
Germany
e-mail: romisch@mathematik.hu-berlin.de
All applications will be reviewed.
Further questions concerning financial support for
mathematicians from Eastern Europe to attend
ICM'98 should be directed to Professors Kurke or
Roemisch.
More information about ICM98 can be found on the
ICM98 WWW-server (URL: http://elib.zib-berlin.de/ICM98).
This WWW-server also offers an electronic preregistration
form. If you do not have access to the World Wide Web and
would like to subscribe to the ICM98 circular letters,
just send an e-mail to
icm98@zib-berlin.de
writing
PRELIMINARY PREREGISTRATION
in the SUBJECT line.
MARTIN GROETSCHEL, PRESIDENT OF THE ICM98
ORGANIZING COMMITTEE


Special Issue of Computers &
Operations Research:

Travelling Salesman Problem

Computers & Operations Research will publish a special issue on the
"Travelling Salesman Problem." Papers are sought in the broad area of
the travelling salesman problem and its variations which discuss
computational and/or algorithmic aspects. In particular, we welcome
papers on exact or heuristic algorithms, analysis of heuristics,
domination analysis, problems with special structures, new
applications, etc.

Papers will go through the standard review process. Four copies of the
paper, following the standard guidelines for Computers & Operations
Research, should be sent by March 1997 to:
Dr. Abraham Punnen
Dept. of Mathematics, Statistics & Computer Science
University of New Brunswick
Saint John, New Brunswick
CANADA E2L 4L5


MPDP-19 Nineteenth Symposium on Mathematical Programming with Data Perturbations
May 22-23, 1997, The George Washington University, Washington, DC


The NINETEENTH Symposium on Mathematical Programming with Data
Perturbations will be held at George Washington University's Marvin Center on
22-23 May 1997. The objective is to bring together practitioners who use
mathematical programming optimization models, and deal with questions of
sensitivity analysis, with researchers who are developing techniques applicable to
these problems.
The symposium webpage is:
http://rutcor.rutgers.edu:80/~bisrael/MPDP-19.html
CONTRIBUTED papers in mathematical programming are solicited in the following
areas: 1. Sensitivity and stability analysis and their applications; 2. Solution
methods for problems involving implicitly defined functions; 3. Solution
approximation techniques and error analysis. "CLINICAL" presentations that
describe problems in sensitivity analysis encountered in applications are also
invited.
DEADLINES: 15 March 1997, registration and submission of tentative title and
abstract; 1 May 1997, submission of final abstract for inclusion in the
Symposium Program.
REGISTRATION FEE: $50 USD payable at the meeting.
To REGISTER and/or SUBMIT ABSTRACT please use the electronic form in the URL:
http://rutcor.rutgers.edu:80/~bisrael/MPDP-19.html#form


Or mail to:
MPDP-19
c/o Adi Ben-Israel
RUTCOR Rutgers Center for
Operations Research
P.O. Box 5062
Rutgers University
New Brunswick, NJ 08903-5062, USA
Or email/fax to one of the organizers
listed below:
Adi Ben-Israel
Rutgers University

Tel: +1-908-445-5631
Fax: +1-908-445-5472
Hubertus Th. Jongen
RWTH-Aachen

Tel: +49-241-804540
Fax: +49-241-8888390


Diethard Klatte
University of Zurich

Tel: +41-1-257 3772
Fax: +41-1-252 1162
Doug Ward
Miami University

Tel: +1-513-529-3534
Fax:+1-513-529-1493
Anthony V. Fiacco, General Chairman
Sponsored by the Dept. of Operations
Research & the Institute for
Management Science & Engineering,
School of Engineering & Applied
Science, The George Washington
University, Washington, DC 20052,
USA




reviews


Combinatorial Network Theory

Edited by Ding-Zhu Du and
D. Frank Hsu
Series of Applied Optimization
Kluwer Academic Publishers
Dordrecht, 1996
ISBN 0-7923-3777-8

The design of interconnection networks and related theoretical research
has received a lot of attention in recent years. A network suitable for
parallel computation should satisfy two major requirements. First, a
network should have the computational efficiency to run parallel
algorithms. One of the main overheads incurred during parallel execution
is the time spent on communication between processors. Thus, the ability
of an interconnection network to disseminate information efficiently is
one of its most important properties. Secondly, reliability is another
important requirement: an interconnection network should be fault-tolerant,
so that it can work even if some of its links or nodes fail. This book is
devoted to the theoretical study of the dissemination of information in
interconnection networks and of their reliability.
Cayley graphs provide suitable candidates for interconnection networks.
One of their advantages is their symmetric structure, which makes them
easy to build and simplifies routing and communication algorithms.
Examples of prominent interconnection networks which are Cayley graphs
include the Hypercube, the Butterfly and the Cube-Connected-Cycles
network. The first two chapters of the book are concerned with Cayley
graphs. Since Cayley graphs are defined on groups, group theory provides
a powerful tool in their study.
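(As a reminder of the construction, in standard notation rather than the
book's: for a group G and a generating set S with S = S^{-1} and 1 \notin S,
the Cayley graph \mathrm{Cay}(G,S) has vertex set G and an edge \{g, gs\}
for every g \in G and s \in S. For instance, the n-dimensional hypercube is
\mathrm{Cay}\bigl((\mathbb{Z}_2)^n, \{e_1,\dots,e_n\}\bigr), the Cayley
graph of (\mathbb{Z}_2)^n generated by the unit vectors.)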
An extensive study of the connectivity of Cayley graphs defined on finite
Abelian groups is presented in Chapter 1. Connectivity can be viewed as a
measure of the reliability of a network. Along with the presentation of new
results, it is shown how some well-known results from additive group theory
were rediscovered and applied to graph-theoretical problems such as that of
network connectivity. Unfortunately, this chapter contains many textual
mistakes.


The study of edge and vertex connectivity in Cayley graphs continues in
Chapter 2. In contrast to Chapter 1, most of the proofs in this chapter are
obtained by using atoms of graphs.
De Bruijn and Kautz digraphs and their generalizations are other popular
candidates for interconnection networks. Their popularity is explained by
the fact that these networks have a highly symmetric structure and give
optimal or nearly optimal solutions to the problem of minimizing the
diameter and maximizing the connectivity of a graph with a given number of
nodes and given degree. Connectivity and diameter are important parameters
both for reliability and for the ability of a network to disseminate
information efficiently. Chapter 3 presents an extensive study of the
diameter, connectivity, line-connectivity, super line-connectivity and
Hamiltonian property of de Bruijn and Kautz digraphs and their
generalizations.
In Chapter 4 various properties (in particular, link connectivity) of
extended double loop networks (EDLN) are studied. The class of EDLN
includes such well-known networks as the generalized de Bruijn networks,
the Imase-Itoh networks and double loop networks. This study shows that
certain EDLN are suitable candidates for good interconnection networks.
Dissemination of information in interconnection networks is considered in
Chapter 5. The three main problems of dissemination of information are
broadcasting, accumulation and gossiping. These problems are considered for
all well-known interconnection networks under several communication modes.
The solutions to these problems are given mainly in terms of lower and
upper bounds on communication time. This chapter provides the reader with
many basic proof techniques and ideas in this area, gives a good survey of
known results and formulates many open problems.
The book is aimed at graduate students and researchers and provides the
reader with a deep insight into combinatorial network theory. Some of its
chapters (especially Chapter 5) would also be suitable for undergraduate
students.
E.A. STOHR


State of the Art in Global
Optimization

Computational Methods and
Applications
Edited by C.A. Floudas and
P.M. Pardalos
Nonconvex Optimization and Its
Applications 7
Kluwer Academic Publishers
Dordrecht 1996
ISBN 0-7923-3838-3

This is the seventh volume of an excellent and much needed series,
"Nonconvex Optimization and Its Applications," put together by leaders in
this field. The volume contains 36 invited and thoroughly refereed papers
that were presented at the conference on "State of the Art in Global
Optimization: Computational Methods and Applications." The conference was
organized by C.A. Floudas and P.M. Pardalos and held at Princeton
University, April 28-30, 1995. The conference papers spanned the gamut of
theory, computational implementations and applications of global
optimization.
Among other fine contributions, one finds the following:
* R. Horst and N. van Thoai propose two types of new finite branch and
bound algorithms for the global minimization of separable concave functions
under linear constraints with totally unimodular matrices. The key
observation is that the underlying problems can be viewed as integer
programs. Finiteness can, therefore, be achieved by integral branching.
* K.G. Ramakrishnan, M.G.C. Resende and P.M. Pardalos report on
computational experience with a branch and bound algorithm for Quadratic
Assignment Problems. All problems of dimension n < 15 of QAPLIB are solved.
* R.B. Kearfott reports practical experience with an interval branch and
bound algorithm for equality-constrained optimization.
* T. Van Voorhis and F. Al-Khayyal use range reduction techniques to
accelerate a branch and bound algorithm for quadratically constrained
quadratic programs.

* J.P. Shectman and N.V. Sahinidis prove that exhaustiveness and branching
on the incumbent whenever possible, in tandem, ensure finiteness of
rectangular-subdivision-based branch and bound algorithms for the global
minimization of separable concave programs.








* H. Tuy shows that many important location problems (Weber's problem with
attraction and repulsion, constrained multisource and multifacility
problems and others) can be formulated as d.c. optimization problems in
low-dimensional spaces and describes algorithms for their solution.

Additional articles in the volume are authored by S. Zlobec; E. Novak and
K. Ritter; S. Shi, Q. Zheng and D. Zhuang; B. Ramachandran and J.F. Pekny;
G. Isac; P. Maponi, M.C. Recchioni and F. Zirilli; G.H. Staus, L.T. Biegler
and B.E. Ydstie; V. Visweswaran, C.A. Floudas, M.G. Ierapetritou and E.N.
Pistikopoulos; J. Barhen and V. Protopopescu; D. MacLagan, T. Sturge and
W. Baritompa; K. Holmqvist and A. Migdalas; D.W. Bulger and G.R. Wood;
W. Edmonson, K. Srinivasan, C. Wang and J. Principe; E. Falkenauer; I.P.
Androulakis, V. Visweswaran and C.A. Floudas; T. Qian, Y. Ye and P.M.
Pardalos; A. Torn and S. Viitanen; K.I.M. McKinnon, C. Millar and M.
Mongeau; E. Haddad; J. Shi and Y. Yoshitsugu; I. Garcia and G.T. Herman;
P. Sussner, P.M. Pardalos and G.X. Ritter; J.A. Filar, P.S. Gaertner and
M.A. Janssen; W.F. Eddy and A. Mockus; L. Mockus and G.V. Reklaitis;
A. Lucia and J. Xu; J.R. Banga and W.D. Seider; M. Turkay and I.E.
Grossmann; F. Friedler, J.B. Varga, E. Feher and L.T. Fan; E.S. Fraga.

The above papers cover the theory and algorithms of deterministic global
optimization, stochastic global optimization, branch and bound methods,
interval arithmetic methods, d.c. programming, duality, concave
programming, bilevel programming, integral optimization, decomposition
methods, logic-based algorithms, and trust algorithms.

The book is also very rich in applications in resource allocation, computer
vision, chemical process design, control and optimization, chemical and
phase equilibrium, facility location, climate change dynamic visualization,
batch process scheduling, and process synthesis.

One, therefore, cannot help but fully agree with the editors that "the book
will be a valuable source of information to faculty, students and
researchers in optimization, engineering, mathematics, computer sciences
and related areas."
NIKOLAOS SAHINIDIS


Postoptimal Analyses,
Parametric Programming and
Related Topics

Tomas Gal
de Gruyter
Berlin, 1995
ISBN 3-11-014060-8.

This book is the second edition of a monograph first written in the years
1968-1969 in Czech, translated in 1973 into German and in 1979 into
English. It is divided into two parts. The first is based on illustrative
examples; the second part is an abridged mathematical presentation. Each
chapter ends with a selected bibliography.

This presentation in two parts has the great advantage that the book is
easy to read and understand for a wide audience in the first part and is
mathematically complete in the second part. It is also a good, quite
complete reference on the subject. However, it has two main disadvantages.
The first is probably linked to the origin of the book: there remain some
typographical errors and inconsistencies in the notation. Furthermore, the
references at the end of each chapter often date from before 1970. The
second disadvantage is an omission: the treatment of sensitivity analysis
results in the case of degeneracy. This part is quite poor, and recently
proposed new results should be introduced.

Chapter one is devoted to fixing the basic concepts and notation of linear
programming. It recalls all the very well-known concepts in linear
programming, such as basic variables, reduced costs, dual values, etc. It
also recalls the basics of the Simplex method for solving linear problems.
The dual problem is also presented, together with the dual Simplex method,
and finally the concepts of primal and dual degenerate solutions are
presented.

Chapter two is devoted to suboptimal solutions, redundant constraints and
degeneracy. For suboptimal solutions, the effect of producing a nonoptimal
activity level on the objective value and on the activity levels is derived
from the optimal Simplex tableau. A redundant constraint is defined as a
constraint that does not influence the feasible region. A weakly redundant
constraint has a point in common with the feasible region; a strongly
redundant one does not, which simply means that the associated slack cannot
be driven to zero. This can be easily detected, and the corresponding
constraint can be deleted. A primal degenerate solution is obtained when
some basic variables are equal to zero. This can lead to a degenerate step
in the Simplex algorithm: the basis changes but the algorithm remains at
the same vertex.
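(A tiny illustration, mine rather than the book's: on the feasible region
D = \{x \in \mathbb{R}^2_+ : x_1 + x_2 \le 1\}, the constraint
x_1 + x_2 \le 3 is strongly redundant, since its slack 3 - x_1 - x_2 \ge 2
on all of D, whereas the constraint x_1 \le 1 is weakly redundant: it cuts
off no feasible point, yet it holds with equality at the feasible point
(1,0).)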


Chapter three is devoted to sensitivity analysis with respect to changes in
the right-hand side without a basis exchange. The classical sensitivity
analysis for a single component of the right-hand side is introduced
through a simple example. The same basis remains optimal as long as the
basic variables remain nonnegative; from this one can deduce a range for
the maximal variation of the component. The effect on the optimal basic
variables is given by the corresponding column of the inverse of the basis
matrix, and the effect on the objective function is given by the value of
the corresponding dual variable. Then sensitivity analysis with respect to
several components of the right-hand side depending on a scalar parameter
is examined. Finally, the multiparametric sensitivity analysis case is
considered, i.e. changing several components of the right-hand side
depending on several parameters.
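(In the standard notation, sketched here as a reminder and not quoted from
the book: if B is the optimal basis and the ith right-hand side component
is perturbed to b_i + \delta, the basis remains optimal as long as

x_B(\delta) = B^{-1}(b + \delta e_i) \ge 0,

which yields the critical interval for \delta; inside that interval the
optimal value changes linearly, z^*(\delta) = z^* + y_i^* \delta, where
y_i^* is the optimal dual variable of the ith constraint.)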

Chapter four concerns linear parametric programming with respect to changes
in the right-hand side. This implies, in general, a basis exchange. The
critical values of the parameters are defined as the values for which the
optimal basis changes. First, the case of a single change in the right-hand
side is considered. Then, a change in several components of the right-hand
side is considered. The case where these components change linearly with a
single parameter and the case where they change multilinearly with several
parameters are both considered.

At the end of this chapter, the problem of sensitivity analysis under
primal degeneracy is briefly considered. In fact, sensitivity analysis with
respect to the right-hand side consists of determining the critical
interval in which the same basis remains optimal. If the optimal solution
is degenerate, several optimal bases exist. The critical interval is
defined here as the union of all the critical intervals associated with the
different optimal bases. Note that the analysis concerning the shadow
price, which can differ to the left and to the right, is not very developed
here. For a more complete analysis see, for example, H.P. Williams, Model
Building in Mathematical Programming, John Wiley, 1990. Also, the assertion
of Gal (see page 175) that commercial LP software offering sensitivity
analysis and shadow prices yields false results when primal degeneracy
occurs is not correct. There is one exception that I know of: XPRESS-MP of
Dash Associates, which gives the two (left and right) correct values.

Chapter five concerns sensitivity analysis with respect to changing cost
coefficients without a basis exchange. Note the very bad choice made for
the notation: the market price is denoted by c and the unit cost by p (see
page 211). The cases of a single change in a nonbasic objective coefficient
and in a basic variable objective coefficient are considered. Then the case
of changing several cost coefficients depending on a scalar or multiple
parameters is considered. In each case the critical region of the parameter
(i.e. the region in which the same basis is optimal) is computed.
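(Sketching the standard test behind this, not the book's wording: for a
minimization problem the current basis stays optimal as long as all reduced
costs \bar c_j = c_j - c_B^{\top} B^{-1} A_j remain nonnegative. If the
cost of a nonbasic variable j is changed to c_j + \delta, only its own
reduced cost moves, so the critical region is simply

\delta \ge -\bar c_j,

whereas a change in a basic cost propagates through the dual vector to all
nonbasic reduced costs, which in general gives a two-sided interval.)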

Chapter six is devoted to linear parametric programming with respect to
changes in the cost coefficients with basis exchange. Since the partial
derivative of the optimal value of the objective function with respect to
a cost coefficient is the value of the corresponding primal variable, it is
sufficient to compute the successive optimal solutions as the objective
coefficients vary. This can be done by moving from basis to adjacent basis,
simply by a primal Simplex step. Then the problem of finding the minimal
and maximal values of the parameter such that the problem has a bounded
optimal solution is considered. It is asserted (see example 6-2, page 239)
that the example has no solution for the zero value of the parameter. This
is false: there is no finite optimal solution. The geometric interpretation
(slope change) of a cost coefficient change is then given. Then the case of
changes depending on several parameters is considered. The chapter ends
with the problem of determining the optimal objective cost coefficients
when these coefficients vary homogeneously with a parameter. The
consideration of degeneracy is omitted in this chapter.
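(In symbols, as a reminder of the fact the chapter exploits rather than a
quotation: on the region where a given basis stays optimal,

\partial z^*(c) / \partial c_j = x_j^*,

so along a parametrized cost vector c + \lambda d the optimal value
z^*(c + \lambda d) is a piecewise linear function of \lambda (concave in
\lambda for a minimization problem, convex for a maximization problem),
with breakpoints at the critical parameter values where the optimal basis
changes.)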

Chapter seven is dedicated to sensitivity analysis under simultaneous
changes of the right-hand side and of the cost coefficients. The objective
coefficients and the right-hand side are written here as linear functions
of the same parameter. First, the case of changing the right-hand side and
the cost coefficients with a scalar parameter is considered; then the case
of dependence on a vector of parameters.

Chapter eight is dedicated to sensitivity analysis with respect to the
elements of the technological matrix, often denoted A in linear problems.
The coefficient at the intersection of row i and column j denotes the
consumption of the ith production factor per unit of the jth production and
can be affected by a change in technology. Two small examples in which one
column of A depends on a parameter illustrate the complications which arise
from such changes: the optimal value of the objective function varies
nonlinearly with this parameter! This chapter is certainly one of the most
original, as such results are rarely presented in books and most of the
time not published. I agree totally with the author when he says that,
despite the importance of this sensitivity analysis with respect to matrix
coefficients, papers have rarely been published. He cites as examples his
CORE discussion papers 7013 and 7018 dating from 1970!

However, I regret that the author goes directly to the case where one
entire column varies and does not consider the partial derivatives of the
optimal value of the objective function with respect to a single
coefficient of the matrix A. One can derive from the (more complicated)
formula in the abridged mathematical presentation of chapter 8 the pretty
nice result that this partial derivative is equal to the opposite of the
product of the dual optimal value associated with the row of the
coefficient and the primal optimal value associated with the column of the
coefficient. The degenerate case is also totally omitted. The result,
however, exists: it is forthcoming in Mathematical Programming under the
title 'Generalised derivative of the optimal solution of the objective
function of a linear problem with respect to matrix coefficient' by
D. De Wolf and Y. Smeers.
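(In symbols, my rendering of the result the reviewer states, valid under
nondegeneracy, with x^* and y^* the optimal primal and dual solutions:

\partial z^* / \partial a_{ij} = - y_i^* x_j^*.)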

Chapter nine is dedicated to multicriteria linear programming, i.e. the
problem of maximising several conflicting linear goals over a linear
feasible region. One can define in this framework an efficient solution as
a nondominated solution in the Pareto sense: when trying to improve the
value of one goal, the value of at least one of the remaining objective
functions becomes worse. A method for determining the set of all efficient
solutions is proposed. The reason why multicriteria problems are considered
in this book is the following: the so-called Efficiency Theorem states that
there is a one-to-one correspondence between efficient solutions and
optimal solutions of the homogeneous multiparametric problem defined in
chapter six. Therefore parametric programming can help to compute efficient
solutions of the multicriteria linear problem. Finally, non-essential
objective functions are defined as objective functions that do not affect
the set of efficient solutions. By an efficiency test, one can identify
these non-essential objectives and thus reduce the number of objective
functions to consider.
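(Loosely stated, in the standard scalarized form rather than the book's
exact formulation: a feasible x^* is efficient for the linear multicriteria
problem \max\{Cx : x \in D\} if and only if x^* is optimal for the weighted
problem

\max\{\lambda^{\top} C x : x \in D\}

for some weight vector \lambda > 0; letting \lambda range over the positive
orthant is precisely the homogeneous multiparametric program of chapter
six.)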

Chapter ten indicates possible applications of sensitivity analysis and
linear parametric programming in decision making. Firstly, if for some
reason one wants to change the value of some basic variables, it is
sufficient to change the corresponding right-hand side; changing the value
of some nonbasic variable is equivalent to considering a suboptimal
solution. Secondly, the problem of the inconsistency of the constraints is
considered. For practical applications with thousands of constraints, it
often occurs that the solution set is empty (one artificial variable
remains positive in phase one). A simple way of removing the inconsistency
is to change the value of the right-hand side. Thirdly, the problem of the
redundancy of some linear inequalities is considered. Recall that a
redundant constraint is a constraint that does not affect the feasible
region. One way of detecting such constraints using sensitivity analysis is
to notice that the range of variation of their right-hand sides extends
from minus infinity to plus infinity. In conclusion, sensitivity analysis
helps to compute efficient solutions in multicriteria programming (first
application), to remove inconsistency (second application) and to detect
redundant constraints (third application).
DANIEL DE WOLF


journals


A. von Arnim, R. Schrader and
Y. Wang, The permutahedron of
N-sparse posets
Z.-Q. Luo, J.-S. Pang, D. Ralph
and S.-Q. Wu, Exact penaliza-
tion and stationarity conditions
of mathematical programs with
equilibrium constraints
T. Tsuchiya and R.D.C.
Monteiro, Superlinear conver-
gence of the affine scaling
algorithm



R.J.B. Wets, Challenges in
stochastic programming
E.L. Plambeck, B.-R. Fu, S.M.
Robinson and R. Suri, Sample-
path optimization of convex
stochastic performance func-
tions
S.S. Nielsen and S.A. Zenios, A
stochastic programming model
for funding single premium
deferred annuities
K. Marti, Differentiation
formulas for probability
functions: The transformation
method


P. Kall and J. Mayer, SLP-IOR:
An interactive model manage-
ment system for stochastic linear
programs
G. Infanger and D.P. Morton,
Cut sharing for multistage
stochastic linear programs with
interstage dependency
J.L. Higle and S. Sen, Duality
and statistical tests of optimality
for two stage stochastic pro-
grams
K. Frauendorfer, Barycentric
scenario trees in convex multi-
stage stochastic programming
N.C.P. Edirisinghe and W.T.
Ziemba, Implementing bounds-
based approximations in convex-
concave two-stage stochastic
programming
J.R. Birge, C.J. Donohue, D.F.
Holmes and O.G. Svintsitski, A
parallel implementation of the
nested decomposition algorithm
for multistage stochastic linear
programs


M. Constantino, A cutting plane
approach to capacitated lot-
sizing with start-up costs
H. Yamashita and H. Yabe,
Superlinear and quadratic
convergence of some primal-dual
interior point methods for
constrained optimization
J.-P. Crouzeix and J.A. Ferland,
Criteria for differentiable
generalized monotone maps
T. De Luca, F. Facchinei and C.
Kanzow, A semismooth equation
approach to the solution of
nonlinear complementarity
problems
D. Goeleven, G.E. Stavroulakis
and P.D. Panagiotopoulos,
Solvability theory for a class of
hemivariational inequalities
involving copositive plus
matrices. Applications in robotics
E.C. Sewell, Binary integer
programs with two variables per
inequality
M.D. Grigoriadis and L.G.
Khachiyan, Approximate
minimum-cost multicommodity
flows in Õ(ε⁻²KNM) time


APPLICATION FOR MEMBERSHIP


I wish to enroll as a member of the Society.

My subscription is for my personal use and not for the benefit of any library or institution.

[ ] I will pay my membership dues on receipt of your invoice.

[ ] I wish to pay by credit card (Master/Euro or Visa).


CREDIT CARD
NUMBER:

FAMILY NAME:


EXPIRY DATE:


MAILING ADDRESS:


Mail to:
The Mathematical Programming Society, Inc.
c/o International Statistical Institute
428 Prinses Beatrixlaan
2270 AZ Voorburg
The Netherlands



Cheques or money orders should be made payable to
The Mathematical Programming Society, Inc., in
one of the currencies listed below.
Dues for 1996, including subscription to the journal
Mathematical Programming, are Dfl.105.00 (or
$60.00 or DM94.00 or £39.00 or FF326.00 or
Sw.Fr.80.00).
Student applications: Dues are one-half the above
rates. Have a faculty member verify your student
status and send application with dues to the above address.
Faculty verifying status


institution


TEL.NO.: TELEFAX:

E-MAIL:

SIGNATURE






Donald W. Hearn, EDITOR
hearn@ise.ufl.edu
Karen Aardal, FEATURES EDITOR
Utrecht University
Department of Computer Science
P.O. Box 80089
3508 TB Utrecht
The Netherlands
aardal@cs.ruu.nl
Faiz Al-Khayyal, SOFTWARE & COMPUTATION EDITOR
Georgia Tech
Industrial and Systems Engineering
Atlanta, GA 30332-0205
faiz@isye.gatech.edu
Dolf Talman, BOOK REVIEW EDITOR
Department of Econometrics
Tilburg University
P.O. Box 90153
5000 LE Tilburg
The Netherlands
talman@kub.nl
Elsa Drake, DESIGNER
PUBLISHED BY THE
MATHEMATICAL PROGRAMMING SOCIETY &
GATOR Engineering Publication Services
UNIVERSITY OF FLORIDA

Journal contents are subject to change by the publisher.


Harvey Greenberg has developed a Mathematical
Programming Glossary on the Web. The URL is http://www-
math.cudenver.edu/~hgreenbe/glossary/glossary.html
and it includes links to several bibliographies.
He invites comments and suggestions via a link to his
home page. Other useful URLs are the MPS home page
http://www.caam.rice.edu/~mathprog/ and the home
page for the next Symposium: http://dmawww.epfl.ch/
roso.mosaic/ismp97/welcome.html. Deadline
for the next OPTIMA is Feb. 15, 1997.


O P T I M A
MATHEMATICAL PROGRAMMING SOCIETY

UNIVERSITY OF

FLORIDA
Center for Applied Optimization
371 Weil Hall
PO Box 116595
Gainesville FL 32611-6595 USA


FIRST CLASS MAIL



