OPTIMA

Mathematical Programming Society Newsletter


Katya Scheinberg
Geometry in model-based algorithms for
derivative-free unconstrained optimization

Abstract. Derivative free optimization addresses general nonlinear optimization problems in the cases when obtaining derivative information for the objective and/or the constraint functions is impractical due to computational cost or numerical inaccuracies. Applications of derivative free optimization arise often in engineering design, such as circuit tuning, aircraft configuration, water pipe calibration, oil reservoir modeling, etc. Traditional approaches to derivative free optimization until the late 1990s were based on sampling of the objective function, without any attempt to build models of the function or its derivatives. In the late 90s, model-based trust-region derivative free methods started to gain popularity, pioneered by Powell and further advanced by Conn, Scheinberg and Toint. These methods build linear or quadratic interpolation models of the objective function and hence can exploit some first- and second-order information. In the last several years the general convergence theory for these methods, under reasonable assumptions, was developed by Conn, Scheinberg and Vicente. Moreover, recently Scheinberg and Toint have discovered the "self-correcting" property which helps explain the good performance observed in these methods and have shown convergence under very mild requirements.



1 What is derivative free optimization?
Derivative free optimization is a class of nonlinear optimization
methods which usually comes to mind when one needs to apply
optimization to complex systems. The complexity of those systems
manifests itself in the lack of derivative information (exact or approx-
imate) of the functions under consideration. What usually causes the
lack of derivative information is the fact that the function values are
a result of a black-box simulation process or a physical experiment.
The situation is often aggravated by the high cost of the function
evaluations and the numerical noise in the resulting values. Thus
the use of finite difference derivative approximation is typically pro-
hibitive.
Numerous applications of derivative free optimization can be found in engineering design, geological modeling, finance, manufacturing, biomedical applications and many other fields. As the available computational power grows, simulation processes become routine, and optimization of complex systems becomes possible and desirable. Thus the number of applications of derivative free optimization grows continuously, which partially explains the continuing growth of the field itself. Another reason for the growth of the field is the recent development of relatively sophisticated algorithms and theory which address the specific needs of derivative free problems. Here we will discuss some of the recent developments


in the theory of model based derivative free methods. We would
like to note that the purpose of this article is to focus on the issue
of the maintenance of the geometry of the sample sets in model
based derivative free methods. Since this is not a survey the list of
references is very limited.


2 The role of geometry
When it comes to derivative free optimization, it is clear that most standard optimization approaches do not apply, since they rely on Taylor-type models and hence require derivatives. Instead, various methods of sampling the objective function have been proposed. The most widely used and well-known of them is the Nelder-Mead algorithm [12], popular for its simplicity and effectiveness, but at the same time notorious for its failure to converge even in simple cases.
Roughly speaking, what Nelder-Mead does is the following (for a description and analysis of the method see [9] and [23]): the objective function is evaluated at n + 1 affinely independent points and the point with the worst function value is selected. The worst point is then reflected with respect to the hyperplane formed by the remaining n points. Depending on the function value achieved at this new sample point, the original simplex may be contracted, or the new simplex may be expanded or contracted along a certain direction, and a possible new sample point may be produced and evaluated. The contraction and reflection steps are designed to find progress along a (hopefully) descent direction. In the process the shape of the simplex changes, often adapting itself to the curvature of the objective function. This observed behavior of the Nelder-Mead method is what makes it so often successful in practice. However, it is also the cause of its failure to converge (in theory and in practice): the simplex may change shape until it becomes "flat", and further progress is impossible because the sample points are no longer affinely independent and the sample space may become orthogonal to the gradient direction.



Contents of Issue 79 / May 2009
1 Katya Scheinberg, Geometry in model-based algorithms for derivative-free unconstrained optimization
6 Jorge Nocedal, Finding the middle ground between first and second-order methods
7 Steve Wright, MPS Chair's Column
8 Steve Wright, New Constitution and Bylaws for MPS
8 Constitution of the Mathematical Programming Society
10 Announcement: MPS Election Results
10 Alberto Caprara, Andrea Lodi and Katya Scheinberg, What's new in Optima?
10 Imprint








In the early 90s a new class of derivative free methods emerged: the pattern search methods, suggested by Torczon ([21], [22]). As opposed to the Nelder-Mead method, pattern search methods evaluate the objective function on a pattern of a fixed shape. The pattern can contract or expand, but the shape and, hence, the affine independence of the sample points never change. With this restriction comes the benefit of a global convergence theory, but also the loss of the ability to use curvature information. The pattern search methods, and also a related class of direct search methods [1], have since grown in variety and sophistication, allowing the use of different patterns and incorporating other sampling techniques, including the use of interpolation models. However, the convergence theory essentially relies on the predetermined geometry of a pattern.
In the mid to late 90s some of the more classical methods from the derivative-based literature found their analogue in the derivative free world. Specifically, pioneered by Powell ([13], [14], [15], [17], [18]), and also developed by Conn, Scheinberg and Toint ([10], [4], [5]), a class of trust-region methods based on quadratic interpolation, rather than Taylor models, was introduced. Quadratic interpolation models are built based on sample sets of points, preferably
in reasonably close proximity to the current best iterate. Due to
the expense of the function evaluations, the sample sets typically
consist of past iterates, recent unsuccessful steps and possibly some
additional sample points. It was understood early on that, unlike the
Taylor model, whose accuracy depends entirely on the properties
of the approximated function and the distance to the center of the
Taylor expansion, the interpolation model's quality depends also on
the geometry of the sample set. It was also understood that, if no
special care is taken, the sample set may deteriorate, just as in the
Nelder-Mead algorithm, and produce incorrect or inaccurate mod-
els. It was thus believed that the geometry of the sample set needs to be maintained throughout the progress of the algorithm by means of special "geometry steps".
We will now explain the effect of the geometry on the conver-
gence properties of model-based derivative free methods.


3 Interpolation models and trust-region methods
We consider the unconstrained minimization problem

min_{x ∈ R^n} f(x),    (3.1)
where the first derivatives of the objective function f(x) are as-
sumed to exist and be Lipschitz continuous. However, explicit eval-
uation of these derivatives is assumed to be impossible, either be-
cause they are unavailable or because they are too costly.

3.1 Polynomial interpolation and Lagrange polynomials
Let us consider P_d, the space of polynomials of degree at most d in R^n, and let p_1 = p + 1 be the dimension of this space. One knows that for d = 1, p_1 = n + 1, and that for d = 2, p_1 = (n + 1)(n + 2)/2. A basis Φ = {φ_0(x), φ_1(x), ..., φ_p(x)} of P_d is a set of p_1 polynomials of degree at most d that span P_d. For any such basis Φ, any polynomial m(x) ∈ P_d can be written as

m(x) = Σ_{j=0}^{p} α_j φ_j(x),

where the α_j's are real coefficients. We say that the polynomial m(x) interpolates the function f(x) at a given point y if m(y) = f(y).
Assume now we are given a set Y = {y_0, y_1, ..., y_p} ⊂ R^n of interpolation points, and let m(x) denote a polynomial of degree at most d in R^n that interpolates a given function f(x) at the points in Y. The interpolation polynomial exists and is unique if and only if the set Y is poised. If the set is poised then one can define the basis of Lagrange polynomials ([16]).
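In computational terms, poisedness of Y is simply nonsingularity of the matrix with entries φ_j(y_i). Here is a minimal Python sketch of this check, with an arbitrarily chosen quadratic basis and sample set in R² (both illustrative choices, not taken from the article):

    import numpy as np

    # Y is poised for quadratic interpolation in R^2 (p_1 = 6) exactly when
    # the 6 x 6 matrix V with V[i, j] = phi_j(y_i) is nonsingular.
    phi = lambda y: np.array([1, y[0], y[1], y[0]**2, y[0]*y[1], y[1]**2])
    Y = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
    V = np.array([phi(y) for y in Y])
    print(np.linalg.matrix_rank(V) == 6)   # True: this Y is poised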

Definition 3.1. Given a poised set of interpolation points Y = {y_0, y_1, ..., y_p}, a basis of p_1 = p + 1 polynomials ℓ_j(x), j = 0, ..., p, in P_d is called a basis of Lagrange polynomials if

ℓ_j(y_i) = 1 if i = j,  and  ℓ_j(y_i) = 0 if i ≠ j.


Lagrange polynomials have a number of useful properties. In particular, we are interested in the crucial fact that, if m(x) interpolates f(x) at the points of Y, then, for all x,

m(x) = Σ_{j=0}^{p} f(y_j) ℓ_j(x).    (3.2)

For more details and other properties of Lagrange polynomials see Section 3.2 in [9].
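To make this concrete, here is a minimal numerical sketch (Python with NumPy; the one-dimensional basis, sample set and test function are illustrative choices, not taken from the article) of how a Lagrange basis can be obtained by inverting the interpolation matrix, and of the identity (3.2):

    import numpy as np

    phi = lambda x: np.array([1.0, x, x**2])       # monomial basis of quadratics in R^1
    Y = np.array([0.0, 0.5, 1.0])                  # a poised sample set (p_1 = 3)

    V = np.array([phi(y) for y in Y])              # V[i, j] = phi_j(y_i)
    C = np.linalg.inv(V)                           # column j holds the coefficients
                                                   # of l_j, since V C = I means
                                                   # l_j(y_i) = 1 if i == j, else 0
    l = lambda j, x: phi(x) @ C[:, j]              # evaluate l_j at x

    f = np.exp                                     # any smooth test function
    x = 0.3
    m_x = sum(f(Y[j]) * l(j, x) for j in range(3)) # identity (3.2)
    print(m_x)                                     # the quadratic interpolant of f at x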
From (3.2) it is relatively easy to derive the relation between the values of the Lagrange polynomials and the accuracy of the interpolation at a given point. Specifically,

|m(x) − f(x)| ≤ M Σ_{j=0}^{p} ||y_j − x||² |ℓ_j(x)|,    (3.3)

where M is a constant which depends only on the Lipschitz constant of ∇f(x). See [2] for a comprehensive treatment of interpolation error bounds expressed via Lagrange polynomials.
It is clear from (3.3) that the absolute values of the Lagrange polynomials are the key indicator of the "geometry" of the interpolation set that we discussed above. If we consider now a ball B of radius Δ which contains Y, then, in general, the smaller the maximum absolute value of the Lagrange polynomials on B, the better m(x) approximates f on B. In fact we can see that the bound (3.3) is similar to the Taylor expansion bound. However, unlike in the Taylor expansion case, to obtain better agreement between f(x) and m(x) one has to consider not only a ball of smaller radius, but a new interpolation set Y which fits into such a ball, while the maximum absolute value of the Lagrange polynomials has to remain bounded by the same constant.
We will make use of the following concept (borrowed from [7] and [9]) of Λ-poisedness of an interpolation set.

Definition 3.2. Let Λ > 0 and a set B ⊂ R^n be given. A poised set Y = {y_0, y_1, ..., y_p} is said to be Λ-poised in B if and only if, for the basis of Lagrange polynomials associated with Y, one has that

Λ ≥ max_{j=0,...,p} max_{x∈B} |ℓ_j(x)|.

The following lemma [20] (see also [9]) is central to the use of Lagrange polynomials in the geometry maintenance that we discuss here.

Lemma 3.3. Given a closed bounded domain B, any initial interpolation set Y ⊂ B and a constant Λ > 1, consider the following procedure: find j ∈ {0, ..., p} and a point x ∈ B such that |ℓ_j(x)| > Λ (if such a point exists), and replace y_j by x to obtain a new set Y. Then this procedure terminates after a finite number of iterations with a sample set which is Λ-poised in B.
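A schematic Python rendering of this procedure may help. The maximization of |ℓ_j| over B is left as an abstract subroutine lagrange_max (a hypothetical helper; in practice this subproblem is solved only approximately):

    def improve_geometry(Y, lagrange_max, Lambda):
        # Sketch of the Lemma 3.3 procedure. `lagrange_max(Y, j)` is assumed
        # to return a maximizer x of |l_j| over B together with |l_j(x)|;
        # Lambda > 1 is what guarantees finite termination.
        while True:
            swapped = False
            for j in range(len(Y)):
                x, val = lagrange_max(Y, j)
                if val > Lambda:
                    Y[j] = x      # this swap provably improves the geometry
                    swapped = True
                    break
            if not swapped:
                return Y          # no |l_j| exceeds Lambda: Y is Lambda-poised in B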

3.2 Fully linear and quadratic models
We have observed that an interpolation model based on a Λ-poised sample set provides a Taylor-like approximation of the objective
function. For the purposes of the algorithmic framework we may








want to abstract from the specifics of Lagrange polynomials and in-
terpolation models (we will return to them later).
In [8] and [9] general concepts of fully-linear and fully-quadratic
models were introduced.
Loosely speaking, we call a model m(x) a fully-linear model of f(x) in B(x, Δ) if
- the error between the gradient of the model and the gradient of the function satisfies

||∇f(y) − ∇m(y)|| ≤ κ_eg Δ, ∀y ∈ B(x; Δ),

and
- the error between the model and the function satisfies

|f(y) − m(y)| ≤ κ_ef Δ², ∀y ∈ B(x; Δ),

with constants κ_ef and κ_eg independent of y.
Analogously, we call m(x) fully quadratic in B(x, Δ) if
- the error between the Hessian of the model and the Hessian of the function satisfies

||∇²f(y) − ∇²m(y)|| ≤ κ_eh Δ, ∀y ∈ B(x; Δ),

- the error between the gradient of the model and the gradient of the function satisfies

||∇f(y) − ∇m(y)|| ≤ κ_eg Δ², ∀y ∈ B(x; Δ),

and
- the error between the model and the function satisfies

|f(y) − m(y)| ≤ κ_ef Δ³, ∀y ∈ B(x; Δ),

with constants κ_ef, κ_eg and κ_eh independent of y.
It is then required that there exist an algorithm which, with finite effort, either certifies that a given model is fully-linear (or fully-quadratic) on a given B(x, Δ) and for given constants, or constructs such a model, if it exists.
It is shown in [9] that, by means of Lagrange polynomials and Lemma 3.3 or other similar mechanisms, such algorithms exist for polynomial interpolation.
In the framework that we describe below the abstract concept of
fully-linear and fully-quadratic models is utilized.
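As an illustration (this is not the certification algorithm itself, and the smooth test function is an arbitrary choice), the following Python snippet checks numerically that a linear interpolation model on a well-poised set exhibits the fully-linear scaling, gradient error O(Δ) and function error O(Δ²):

    import numpy as np

    f = lambda x: np.sin(x[0]) + x[1]**2                  # illustrative smooth f
    grad_f = lambda x: np.array([np.cos(x[0]), 2 * x[1]])

    x0 = np.array([0.5, 0.5])
    for Delta in (1.0, 0.1, 0.01):
        # n + 1 = 3 affinely independent points in B(x0, Delta)
        Y = [x0, x0 + [Delta, 0.0], x0 + [0.0, Delta]]
        A = np.array([[1.0, y[0], y[1]] for y in Y])
        c = np.linalg.solve(A, [f(y) for y in Y])         # m(x) = c[0] + g . x
        g = c[1:]
        y = x0 + 0.7 * Delta * np.array([1.0, 1.0]) / np.sqrt(2)  # point in the ball
        print(Delta, abs(f(y) - (c[0] + g @ y)),          # ~ Delta**2
              np.linalg.norm(grad_f(y) - g))              # ~ Delta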

3.3 A trust-region framework
Let m(x) define a local model of the objective function f(x) of (3.1) in the framework of trust-region algorithms. Such algorithms are iterative and build, around an iterate x_k, a model m_k(x_k + s) of the objective function which is assumed to represent this latter function sufficiently well in a "trust region" B(x_k, Δ_k), where Δ_k is known as the radius of the trust region. The model is then minimized (possibly approximately) in B(x_k, Δ_k) to define a trial point x_k^+, and the value f(x_k^+) is then computed. If this value achieves (a fraction of) the reduction from f(x_k) which is anticipated on the basis of the model reduction m_k(x_k) − m_k(x_k^+), then the trial point is accepted as the new iterate, the model is updated and the trust-region radius is possibly increased: this is a "successful" iteration. If, on the contrary, the reduction in the objective function is too small compared to the predicted one, then the trial point is rejected and the trust-region radius is decreased: this is an "unsuccessful" iteration. (See [3] for an extensive coverage of trust-region algorithms.)
Thus we can roughly describe the following trust region algorith-
mic framework.


Algorithm 3.1.
Step 0: Initialization. Choose an initial trust-region radius Δ_0, an initial poised interpolation set Y_0 and a starting point x_0. This interpolation set defines an (at most quadratic) interpolation model m_0 around x_0. Choose appropriate constants for the description that follows.
Step 1: Criticality step. If the current model gradient is much smaller than the radius of the ball containing the sample set and x_k, then recompute a fully-linear model based on another sample set which is closer to x_k, until the radius and the gradient are comparable. Set the trust-region radius to be comparable to the size of the gradient as well.
Step 2: Compute a trial point. Compute x_k^+ such that ||x_k^+ − x_k|| ≤ Δ_k and m_k(x_k^+) is "sufficiently small compared to m_k(x_k)".
Step 3: Evaluate the objective function at the trial point. Compute f(x_k^+) and

ρ_k = [f(x_k) − f(x_k^+)] / [m_k(x_k) − m_k(x_k^+)].

Step 4: Define the next iterate.
Step 4a: Successful iteration. If ρ_k ≥ η, define x_{k+1} = x_k^+ and choose Δ_{k+1} ≥ Δ_k. Obtain Y_{k+1} by exchanging one of the interpolation points with x_k^+.
Step 4b: Unsuccessful iteration. If ρ_k < η, then define x_{k+1} = x_k and reduce Δ_k by a constant factor if m_k(x) is fully-linear; otherwise keep Δ_k the same. Possibly update Y_{k+1} to include x_k^+.
Step 5: Update the sample set and the model. If the model m_k is not fully-linear, then make at least one step of the model-improving algorithm. Increment k by one and go to Step 1.

This algorithmic framework is theoretical and leaves many options open. For instance, what we mean by "sufficiently small compared to m_k(x_k)" (in Step 2) is not specified. The first-order convergence analysis merely requires that

m_k(x_k) − m_k(x_k^+) ≥ κ_c ||g_k|| min [ ||g_k|| / (1 + ||H_k||), Δ_k ],

where we define g_k = ∇m_k(x_k) and H_k = ∇²m_k(x_k), and where κ_c is some constant in (0, 1); this is the condition well known in trust-region analysis under the name of "Cauchy condition".
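For concreteness, here is a short Python sketch of the standard way to satisfy such a condition: minimize the model along the steepest-descent direction within the trust region (the classical Cauchy step; the function and variable names are illustrative):

    import numpy as np

    def cauchy_step(g, H, Delta):
        # Minimize m(x_k + s) = m(x_k) + g.s + 0.5 s.H s over s = -t g
        # with t >= 0 and ||s|| <= Delta.
        gnorm = np.linalg.norm(g)
        t_max = Delta / gnorm                 # step length reaching the boundary
        gHg = g @ H @ g
        if gHg <= 0.0:
            t = t_max                         # nonpositive curvature: go to the boundary
        else:
            t = min(gnorm**2 / gHg, t_max)    # 1-D minimizer, clipped to the region
        s = -t * g
        decrease = -(g @ s + 0.5 * s @ H @ s) # m_k(x_k) - m_k(x_k + s)
        return s, decrease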
The trust-region maintenance is flexible in Step 4, while in Step 5 all that is required is that an algorithm is used which can construct a fully-linear model in a finite number of steps.
Fully-quadratic models can also be used if ∇²f(x) is Lipschitz continuous and second-order conditions are used in Steps 1 and 2 (see [3] and [8] for formal statements and details of the second-order conditions).
In [8] it is shown that an algorithm based on this framework (with some additional flexibility) converges to first-order stationary points in the case of fully-linear models, and to second-order stationary points in the case of fully-quadratic models and second-order conditions in Steps 1 and 2.
The theory provides a foundation for some existing and possible future model-based DFO algorithms, but it strongly depends on the model improvement steps (in Steps 1 and 5), where some (hopefully not many) extra sample points need to be introduced and their function values computed. Although such extra points are also computed in practical implementations of DFO algorithms (such as NEWUOA [19] and DFO [6]), the question still remained how necessary these extra steps are.








4 Is it necessary to consider geometry?
4.1 Geometry-free framework
At the same time as the development of the convergence theory for model-based DFO methods, Fasano, Nocedal and Morales in [11] proposed an implementation of a model-based trust-region method which avoids all geometry considerations entirely. Here is a rough outline of the algorithm they proposed:

Algorithm 4.1.
Step 0: Initialization. Choose an initial trust-region radius Δ_0, an initial poised interpolation set Y_0 and a starting point x_0. This interpolation set defines an (at most quadratic) interpolation model m_0 around x_0. Choose appropriate constants.
Step 1: Compute a trial point. Compute x_k^+ such that ||x_k^+ − x_k|| ≤ Δ_k and m_k(x_k^+) is "sufficiently small compared to m_k(x_k)".
Step 2: Evaluate the objective function at the trial point. Compute f(x_k^+) and

ρ_k = [f(x_k) − f(x_k^+)] / [m_k(x_k) − m_k(x_k^+)].

Step 3: Define the next iterate.
Step 3a: Successful iteration. If ρ_k ≥ η, define x_{k+1} = x_k^+ and choose Δ_{k+1} ≥ Δ_k. Define the new interpolation set Y_{k+1} by including x_k^+ and removing from Y_k the point which is furthest away from x_k^+.
Step 3b: Unsuccessful iteration. If ρ_k < η, then define x_{k+1} = x_k and reduce Δ_k by a constant factor. If x_k^+ is closer to x_k than any of the points in Y_k, then replace the furthest point in Y_k with x_k^+.
Step 4: Update the sample set and the model. If Y_k changed, update the model m_k. Increment k by one and go to Step 1.
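A compact Python sketch of the sample-set updates in Steps 3a and 3b, which are purely distance based, with no poisedness safeguard (names are illustrative):

    import numpy as np

    def update_sample_set(Y, x_plus, x_k, success):
        if success:
            # Step 3a: x_plus enters; the point furthest from it leaves.
            d_plus = [np.linalg.norm(y - x_plus) for y in Y]
            Y[int(np.argmax(d_plus))] = x_plus
            return Y
        # Step 3b: x_plus enters only if it is closer to x_k than the
        # point it would replace (the furthest one).
        d_k = [np.linalg.norm(y - x_k) for y in Y]
        j_far = int(np.argmax(d_k))
        if np.linalg.norm(x_plus - x_k) < d_k[j_far]:
            Y[j_far] = x_plus
        return Y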

This algorithm computes only one sample point per iteration, and each such point is computed in the hope of reducing the objective function, hence it seems the least wasteful in terms of function evaluations. Indeed, the computational results produced by the implementation were quite encouraging. However, we have to note that the results were obtained by using complete quadratic interpolation models, which require (n + 1)(n + 2)/2 sample points. It is possible that even if quadratic models become quite inaccurate they still contain valuable curvature information (see the column by J. Nocedal in this issue).
So do we need to be concerned about the geometry of sample
sets or not?

4.2 Why considering geometry is necessary
In [20] it is shown, by two examples, that some geometry considerations are necessary in order to guarantee global convergence. Below we present one of the examples, on which the algorithm proposed in [11] (and discussed in the previous subsection) produces a nonpoised set of points and converges to a nonstationary point.
Consider the following starting set of interpolation points:

Y_0 = { (11, 1)^T, (11, 0)^T, (10, −1)^T, (10, 1)^T, (10, 0)^T, (9, 0)^T }.

This set is Λ-poised, in a ball of radius 2 around x_0 = (10, 0)^T, with Λ ≤ 2.25. Assume that we are given a function f(x), for x = (x_1, x_2)^T, with the following function values on Y_0:

{121 + a, 121, 100 + a, 100 + a, 100, 81},

for some fixed a > 0. Also assume that along the x_2 = 0 subspace the function f(x) reduces to x_1^2 and has a minimum at x_1 = 0. For instance the simple function

f(x) = x_1^2 + a (x_2^2 + (10 − x_1)^2 x_2)   if x_1 ≤ 10;
f(x) = x_1^2 + a x_2^2                        if x_1 > 10,

has such properties. Note that this function has a discontinuous Hessian; however, ∇f(x) is Lipschitz continuous, so convergence to a first-order stationary point is possible. Also observe that it is possible to construct a function in C² with the same properties as f(x).
Now let us consider a quadratic model based on Y_0. It is easy to see that the model is

m(x) = x_1^2 + a x_2^2.

Choose now a trust region of radius Δ = 2 centered around x_0 = (10, 0)^T.
The iterates produced by the algorithm are shown in Figure 1. In the end the interpolation set is completely aligned with the direction x_2 = 0 and the model degenerates into m(x) = x_1^2. The algorithm then terminates at the point x = (0, 0)^T, which is obtained at the next iteration and which is a nonstationary point of the original function f(x).
We see here that the fact that the gradient of the model converges to zero does not imply that the gradient of the true function does so as well, unless the poisedness of the interpolation set is maintained.
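The example can also be checked numerically. In the Python sketch below, the piecewise definition of f follows the reconstruction given above (an assumed form; the exact expression in [20] may differ in inessential details). The script verifies that the quadratic interpolant on Y_0 is x_1² + a x_2² and that the limit point (0, 0)^T is not stationary:

    import numpy as np

    a = 1.0
    def f(x):
        x1, x2 = x
        if x1 <= 10:                        # assumed reconstruction of the example
            return x1**2 + a * (x2**2 + (10 - x1)**2 * x2)
        return x1**2 + a * x2**2

    Y0 = [(11, 1), (11, 0), (10, -1), (10, 1), (10, 0), (9, 0)]
    phi = lambda y: np.array([1, y[0], y[1], y[0]**2, y[0]*y[1], y[1]**2])
    V = np.array([phi(y) for y in Y0])
    coef = np.linalg.solve(V, [f(y) for y in Y0])
    print(np.round(coef, 8))                # -> [0 0 0 1 0 a]: m(x) = x1^2 + a*x2^2

    eps = 1e-6                              # central differences for grad f at (0, 0)
    g = [(f((eps, 0)) - f((-eps, 0))) / (2 * eps),
         (f((0, eps)) - f((0, -eps))) / (2 * eps)]
    print(g)                                # approx [0, 100*a]: (0, 0) is nonstationary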


5 The new algorithm
It is indeed necessary to consider the geometry of the sample set to guarantee convergence of trust-region model-based DFO methods; however, it turns out that it is not necessary to compute extra sample points unless the gradient of the model becomes small.
The final algorithm that we present here relies on a remarkable "self-correcting" property of the trust-region framework. Recall the bound (3.3). Assume that the trust-region step is not successful; this implies that the error |f(x_k^+) − m_k(x_k^+)| is relatively large (if it were not, then the good agreement between the model reduction and the objective function reduction would have caused the step to be successful). Due to (3.3), |f(x_k^+) − m_k(x_k^+)| can be relatively large only if either one of the ||x_k^+ − y_j|| is relatively large or one of the values |ℓ_j(x_k^+)| is relatively large, which in turn means that replacing one of the y_j's by x_k^+ will improve the sample set (see Lemma 3.3). In [20] this intuition is supported by rigorous derivation and the following algorithm is proposed.

Algorithm 5.1.
Step 0: Initialization. Choose an initial trust-region radius Δ_0, an initial poised interpolation set Y_0 and a starting point x_0. This interpolation set defines an (at most quadratic) interpolation model m_0 around x_0. Choose appropriate constants.
Step 1: Criticality step. If the current model gradient is smaller than some threshold ε_k, then build a fully-linear model based on a sample set which is sufficiently close to x_k (so that the interpolation radius and the gradient are comparable). Set the trust-region radius to be comparable to the size of the gradient as well. Decrease ε_k by a constant factor.
Step 2: Compute a trial point. Compute x_k^+ such that ||x_k^+ − x_k|| ≤ Δ_k and m_k(x_k^+) is "sufficiently small compared to m_k(x_k)".
Step 3: Evaluate the objective function at the trial point. Compute f(x_k^+) and

ρ_k = [f(x_k) − f(x_k^+)] / [m_k(x_k) − m_k(x_k^+)].



















[Figure 1]
Figure 1. From left to right and top to bottom: the successive iterates of the algorithm on the associated models, where the current iterate is marked by a diamond and surrounded by its circular-shaped trust region. The final convergence point is indicated by a star.


Step 4: Define the next iterate.
Step 4a: Successful iteration. If ρ_k ≥ η, define x_{k+1} = x_k^+ and choose Δ_{k+1} ≥ Δ_k. Update the interpolation set to obtain Y_{k+1} by swapping x_k^+ with the point y_{j,k} in Y_k which is either far away or for which |ℓ_{k,j}(x_k^+)| is the largest.
Step 4b: Unsuccessful iteration. If ρ_k < η, then define x_{k+1} = x_k and
(i) if there is a far-away point in Y_k, then replace it with x_k^+;
(ii) if for some j the value |ℓ_{k,j}(x_k^+)| is larger than some fixed Λ > 1, then replace y_{j,k} with x_k^+;
(iii) otherwise reduce Δ_k by a constant factor.
Step 5: Update the sample set and the model. If the interpolation set Y_k changed, then update the model m_k. Increment k by one and go to Step 1.
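A Python sketch of the Step 4b logic follows; the thresholds beta and Lambda and the helper lagrange_abs (assumed to evaluate |ℓ_{k,j}(x)|) are illustrative placeholders:

    import numpy as np

    def unsuccessful_update(Y, x_plus, x_k, Delta_k, lagrange_abs,
                            beta=2.0, Lambda=1.5, shrink=0.5):
        # (i) a "far away" point (further than beta * Delta_k from x_k) is replaced
        d = [np.linalg.norm(y - x_k) for y in Y]
        j = int(np.argmax(d))
        if d[j] > beta * Delta_k:
            Y[j] = x_plus
            return Y, Delta_k
        # (ii) a point whose Lagrange polynomial is large at x_plus is replaced;
        # by (3.3), a large model error forces such a point to exist, which is
        # precisely the self-correcting property
        vals = [lagrange_abs(Y, j, x_plus) for j in range(len(Y))]
        j = int(np.argmax(vals))
        if vals[j] > Lambda:
            Y[j] = x_plus
            return Y, Delta_k
        # (iii) the geometry is already good: only now shrink the trust region
        return Y, shrink * Delta_k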

In [20] the detailed mechanism of Step 4 is derived in a way which guarantees that there can only be a finite number of consecutive unsuccessful steps while the model gradient is bounded away from zero. The proof relies on Lemma 3.3. Using this fact it is then shown (under the usual and reasonable conditions) that this algorithm has at least one limit point which is a first-order stationary point of f(x).
It is important to note that without Step 1 this algorithm would also fail on the example in the previous section, since the outcome of the example does not depend on the order in which interpolation points are removed from the interpolation set.
The maintenance of Lagrange polynomials does not add extra computational cost, since it is the same as the cost of maintaining the quadratic model [16]. This means that, aside from the iterations which invoke Step 1, each iteration of this algorithm is essentially the same in terms of computational cost as a step of the algorithm in [11]. We also note that this new algorithm is in fact the closest theoretically convergent algorithm to the practical implementations in [19] and [6] that exists so far.


It remains to be seen if stronger theoretical results can be ob-
tained for this (or a similar) algorithm.

Katya Scheinberg, Department of Industrial Engineering and Operations Re-
search, Columbia University, New York. katyascheinberg@gmail.com


References
[1] M. A. Abramson and C. Audet. Convergence of mesh adaptive direct search to second-order stationary points. SIAM Journal on Optimization, 17:606-619, 2006.
[2] P. G. Ciarlet and P. A. Raviart. General Lagrange and Hermite interpolation in R^n with applications to finite element methods. Arch. Ration. Mech. Anal., 46:177-199, 1972.
[3] A. R. Conn, N. I. M. Gould, and Ph. L. Toint. Trust-Region Methods. Number 01 in MPS-SIAM Series on Optimization. SIAM, Philadelphia, USA, 2000.
[4] A. R. Conn, K. Scheinberg, and Ph. L. Toint. On the convergence of derivative-free methods for unconstrained optimization. In A. Iserles and M. Buhmann, editors, Approximation Theory and Optimization: Tributes to M. J. D. Powell, pages 83-108, Cambridge, England, 1997. Cambridge University Press.
[5] A. R. Conn, K. Scheinberg, and Ph. L. Toint. Recent progress in unconstrained nonlinear optimization without derivatives. Mathematical Programming, Series B, 79(3):397-414, 1997.
[6] A. R. Conn, K. Scheinberg, and Ph. L. Toint. A derivative free optimization algorithm in practice. Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, Missouri, September 2-4, 1998.
[7] A. R. Conn, K. Scheinberg, and L. N. Vicente. Geometry of interpolation sets in derivative free optimization. Mathematical Programming, 111:141-172, 2008.
[8] A. R. Conn, K. Scheinberg, and L. N. Vicente. Global convergence of general derivative-free trust-region algorithms to first and second order critical points. SIAM Journal on Optimization, (to appear), 2009.
[9] A. R. Conn, K. Scheinberg, and L. N. Vicente. Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia, USA, 2009.
[10] A. R. Conn and Ph. L. Toint. An algorithm using quadratic interpolation for unconstrained derivative free optimization. In G. Di Pillo and F. Giannessi, editors, Nonlinear Optimization and Applications, pages 27-47, New York, 1996. Plenum Publishing.










[11] G. Fasano, J. Nocedal, and J.-L. Morales. On the geometry phase in model-based algorithms for derivative-free optimization. Optimization Methods and Software, (to appear), 2009.
[12] J. A. Nelder and R. Mead. A simplex method for function minimization. Comput. J., 7:308-313, 1965.
[13] M. J. D. Powell. A direct search optimization method that models the objective and constraint functions by linear interpolation. In S. Gomez and J. P. Hennart, editors, Advances in Optimization and Numerical Analysis, Proceedings of the Sixth Workshop on Optimization and Numerical Analysis, Oaxaca, Mexico, volume 275, pages 51-67, Dordrecht, The Netherlands, 1994. Kluwer Academic Publishers.
[14] M. J. D. Powell. A direct search optimization method that models the objective by quadratic interpolation. Presentation at the 5th Stockholm Optimization Days, Stockholm, 1994.
[15] M. J. D. Powell. A quadratic model trust region method for unconstrained minimization without derivatives. Presentation at the International Conference on Nonlinear Programming and Variational Inequalities, Hong Kong, 1998.
[16] M. J. D. Powell. On the Lagrange functions of quadratic models that are defined by interpolation. Optim. Methods Softw., 16:289-309, 2001.
[17] M. J. D. Powell. On the use of quadratic models in unconstrained minimization without derivatives. Technical Report NA2003/03, Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, England, 2003.
[18] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. Mathematical Programming, Series B, 100(1):183-215, 2004.
[19] M. J. D. Powell. Developments of NEWUOA for minimization without derivatives. IMA Journal of Numerical Analysis, 28(4):649-664, 2008.
[20] K. Scheinberg and Ph. L. Toint. Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization. Technical report, submitted, 2009.
[21] V. Torczon. On the convergence of the multidirectional search algorithm. SIAM Journal on Optimization, 1:123-145, 1991.
[22] V. Torczon. On the convergence of pattern search algorithms. SIAM Journal on Optimization, 7:1-25, 1997.
[23] P. Tseng. Fortified-descent simplicial search method: a general approach. SIAM Journal on Optimization, 10:269-288, 1999.





Discussion column


Jorge Nocedal

Finding the middle ground between first

and second-order methods

In the last few years, we have witnessed the emergence of first-
order methods for a variety of nonlinear optimization applications.
The advocacy of first-order methods is, however, in stark contrast
with much of the algorithmic practice of the last 30 years that has
emphasized methods based on quadratic models to achieve faster
convergence. Therefore, it is reasonable to ask whether this shift in
emphasis is well justified.
Several arguments have been advanced in favor of first-order
methods.
1. For very large and data-intensive problems, inexpensive first-
order methods are more efficient than more rapidly convergent,
but more expensive, methods. Furthermore, in cases where ap-
proximate solutions are adequate the benefits of second-order
methods are not realized since the optimization is terminated
early.
2. In some applications, derivatives are not available, and approxi-
mating Hessian matrices is very costly in methods for derivative-
free optimization.


3. For non-smooth problems, quadratic approximations may not be
appropriate.
4. One can establish complexity results for first-order methods on
certain challenging problem classes. To establish similar results
for higher-order methods requires unrealistic assumptions.
5. First-order methods are more than adequate for problems that
contain uncertainty in the data.
Undoubtedly, there are some cases when first-order methods are
the right tool for the job. But in many contexts, some of the argu-
ments given above are not well justified and lead to algorithms that
are unnecessarily slow. The key observation of this note is that sim-
ple quadratic models that do not attempt to accurately approximate
the Newton model often give rise to very attractive algorithms for
many of the situations listed above. Indeed, the judicious use of se-
lective second-order information can bring dramatic savings in com-
puting time.
The article by Katya Scheinberg in this issue deals with derivative-
free optimization, one of the areas in which the use of (low qual-
ity) quadratic models has proved surprisingly effective. Her arti-
cle explores the limits of inaccuracy in model-based methods for
derivative-free optimization, a class of methods pioneered by Powell
and by Scheinberg and her collaborators; see [1]. Numerical experi-
ence has shown that quadratic models give rise to much more effec-
tive methods than linear models, even though they require O(n2) vs
O(n) function values to define the model via interpolation. Further-
more, Powell [10] has recently proposed a framework for updating
quadratic models using only O(n) interpolation points, which al-
lows the method to solve much larger problems than in the past.
Needless to say, such an approach does not aim to generate a good
approximation of the Hessian matrix and there is no hope of ob-
taining superlinear convergence but the method constitutes a su-
perior approach for derivative-free optimization. Some remarkable
numerical experiments by Moré and Wild [8] indicate that Pow-
ell's model-based method is very effective (and clearly outperforms
a leading pattern search method) even for certain classes of nons-
mooth problems. This efficiency is achieved in spite of the very low
accuracy of their quadratic approximations. As Scheinberg discusses
in this issue of Optima, one needs to impose only minimal quality
controls to promote convergence and ensure good performance.
Equally surprising is the recent study by Lewis and Overton [5] on
general-purpose methods for the minimization of locally Lipschitz
nonsmooth functions. They observe that the BFGS quasi-Newton
method is far more effective than more conservative techniques,
such as bundle methods. Since locally Lipschitz functions are differ-
entiable almost everywhere, the BFGS iteration (with an appropri-
ate line search) is normally well defined and is able to approximate
solutions even when they occur at a point of nondifferentiability.
As is the case in Powell's method for derivative-free optimization,
the quadratic models become extremely ill conditioned, but this
does not prevent the methods from moving along fruitful search
directions. Lewis and Overton do not provide convergence results
(except for very simple special cases) but offer good insights; for
example they report that the BFGS matrix often provides a good
approximation of the so-called U and V spaces associated with the
objective function. One cannot yet claim that the BFGS method rep-
resents a general-purpose algorithm for non-smooth optimization
because it typically breaks down close to the solution and is there-
fore unable to provide a certificate of optimality. Nevertheless, the renewed interest in the use of second-order information in non-smooth optimization is yet to be fully developed.
Two of the most popular methods for smooth large-scale opti-
mization, the inexact (or truncated) Newton method and limited
memory BFGS method, are typically implemented so that the rate








of convergence is only linear. They are good examples of algorithms
that fall between first and second-order methods. Let me mention
three specific application areas where significant progress has been
made by designing new methods of this kind.
Rigid body simulations for computer games often lead
to linear complementarity problems (LCP) that must be solved very
quickly because graphics operate at 60 frames per second. Unlike
studio animation movies, computer game animations need not be
of very high quality, and therefore the solution of the linear com-
plementarity problem is terminated quite early. Game developers
typically use the projected Gauss-Seidel (or projected SOR) method
to compute very approximate solutions of the linear complemen-
tarity problems. This prototypical first-order method has been ad-
vocated by the gaming community because the use of second-order
methods is not practical (interestingly, interior-point methods are
not well suited in this case).
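For readers unfamiliar with it, here is a minimal Python sketch of projected Gauss-Seidel for the standard LCP (find z ≥ 0 with w = Mz + q ≥ 0 and z^T w = 0); the fixed sweep budget mimics the per-frame time limit, and all names are illustrative:

    import numpy as np

    def projected_gauss_seidel(M, q, sweeps=10):
        # First-order splitting method: sweep through the variables,
        # solving each 1-D problem exactly and projecting onto z_i >= 0.
        n = len(q)
        z = np.zeros(n)
        for _ in range(sweeps):                        # small fixed budget per frame
            for i in range(n):
                r = q[i] + M[i] @ z - M[i, i] * z[i]   # row residual without z_i
                z[i] = max(0.0, -r / M[i, i])
        return z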
Kocvara and Zowe [3] and Morales et al. [7] have shown that, by interlacing a subspace improvement iteration with the projected Gauss-Seidel iteration, it is possible to compute very accurate solutions in less time. When the LCP is symmetric, the subspace improvement phase amounts to the minimization of an associated quadratic program over a set of active free variables, and [7] shows that it is effective to use exact second-order information on this subspace. The key observation is that the subspace improvement phase greatly accelerates the identification of the optimal constraints; it does not simply provide a higher rate of convergence once this identification has been made. These advances might not be altogether surprising
given that Polyak [9] demonstrated long ago that the gradient pro-
jection method benefits greatly from a subspace minimization phase.
Nevertheless, it is often not straightforward to translate general de-
sign principles from one context to another one.
In fact, essentially the same approach that proved effective in computer game simulations has recently been applied by Wen et al. [11] in compressive sensing applications, and by Feng et al. [2] in the pricing of American options. In these two papers the subspace minimization phase uses an iterative approach (CG or BFGS for compressive sensing, GMRES with an ILU preconditioner for options pricing). Significant speedups are obtained with respect to first-order methods.
It is difficult to explain precisely why a minimal use of second-order information can bring substantial benefits. In the case of BFGS for nonsmooth optimization, approximate Hessians provide much-needed information about the curvature of the function, but also, more importantly, information about changes in the function due to discontinuities in first-order derivatives. All the new methods I have mentioned in this column can be seen as occupying a middle ground between first and second-order methods. This is fertile territory.

Jorge Nocedal, Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, USA. nocedal@eecs.northwestern.edu



References
[1] A. R. Conn, K. Scheinberg, and L. Vicente. Introduction to Derivative-Free Optimization. SIAM, Philadelphia, USA, 2009.
[2] L. Feng, J. L. Morales, J. Nocedal, and V. Linetsky. On the solution of complementarity problems arising in American options pricing. Technical Report OTC 09/02, Northwestern University, 2009.
[3] M. Kocvara and J. Zowe. An iterative two-step algorithm for linear complementarity problems. Numerische Mathematik, 68:95-106, 1994.
[4] C. Lemarechal. Numerical experiments in nonsmooth optimization. In E. A. Nurminski, editor, Progress in Nondifferentiable Optimization, volume 26, pages 61-84, Laxenburg, Austria, 1982. The Interface Foundation of North America.
[5] A. S. Lewis and M. L. Overton. Nonsmooth optimization via BFGS. Technical report, New York University, 2009. Submitted to SIAM Journal on Optimization.
[6] L. Luksan and J. Vlcek. Globally convergent variable metric methods for convex nonsmooth unconstrained optimization. Journal of Optimization Theory and Applications, 102:593-613, 1999.
[7] J. L. Morales, J. Nocedal, and M. Smelyanskiy. An algorithm for the fast solution of linear complementarity problems. Numerische Mathematik, 111(2):251-266, 2008.
[8] J. J. Moré and S. M. Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1):172-191, 2009.
[9] B. T. Polyak. The conjugate gradient method in extremal problems. U.S.S.R. Computational Mathematics and Mathematical Physics, 9:94-112, 1969.
[10] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. Mathematical Programming, 100(1):183-215, 2004.
[11] Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang. A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. Technical Report CAAM TR09-01, Rice University, 2009.





Steve Wright

MPS Chair's Column

April 16, 2009. I welcome the new Optima! As will be evident
by the time you read these words, our society's newsletter is now
being designed and produced under new arrangements. The designer
and typesetter is Christoph Eyrich, who is also in charge of the de-
sign and production of the German Mathematical Society (DMV)
newsletter. These new arrangements represent the next major step
in our drive to renew Optima by publishing it on a regular schedule,
speeding up the production/distribution process, and re-conceiving
the content. I thank the many people in MPS who have contributed
to this process, most especially editor Andrea Lodi.
It's time to recognize the tremendous contributions of Don
Hearn, who was the founding editor of Optima (in 1980) and who
has served continuously as editor and then publisher since that time.
Don and his publication team at U. Florida have played a central role
in sustaining Optima (and thus MPS) over three decades, and we
thank them for their dedicated service.
All past issues of Optima can be found on the MPS web site
http://www.mathprog.org. A column by Michael Held (then Publi-
cations Committee Chair) in Issue 1 (1980) explains the origins of
Optima, which grew out of discussions among the MPS leadership
of the time, including MPS chair Phil Wolfe, George Nemhauser and
Michael Powell.
The new MPS web site went live in January 2009 after an ex-
tended and careful redesign. The new site is much easier to maintain
and (we hope) easier to navigate. Special thanks to webmaster Marc
Pfetsch for his initiative and hard work during this process. If you have optimization-themed photos for inclusion in the album, or any other comments on the site, please contact Marc.
Also newly available on http://www.mathprog.org are the issues of
the COAL Newsletter published between 1979 and 1993. Thanks to
Trond Steihaug for supplying the scans of these newsletters, which
were influential during a key period in algorithm and software devel-
opment for optimization. COAL, the Committee on Algorithms, was formerly a standing committee of MPS.
Mathematical Programming Studies, the predecessor of Mathematical Programming, Series B, which was published in 31 volumes between 1974 and 1987, is now available free to MPS members on
the SpringerLink web site. If you log in with your personal MPS iden-
tifier for SpringerLink, you should have full-text access to all papers.
These volumes contain many influential papers in the development








of our field, along with interesting historical information about lead-
ing figures in mathematical programming, in the dedications of some
volumes. To get to the right page on SpringerLink, you can follow
the link from the mathprog.org web site, or else use the "Find" box
in SpringerLink. Speaking of SpringerLink, it is a good idea to add
the pages for Mathematical Programming, Mathematical Program-
ming Computation, and Mathematical Programming Studies to your
list of saved items on this site, for easy access each time you log
in.
We look forward to the 20th ISMP (August 23-28, 2009; www.ismp2009.org), which returns to Chicago for the first time since the "Zeroth ISMP" in 1949 (sixty years ago!) and the 4th ISMP in 1962.
This symposium will be held at a particularly exciting time for our
field. Optimization is playing a vital role in more and more areas of
science and engineering, and awareness continues to grow of the key
contributions that optimization can make to many interdisciplinary
projects. Our field is revitalized by the new paradigms and formu-
lations that arise in these applications, which are often extremely
challenging because of their size, their complexity, the need for ap-
proximate solutions in real time, and the need to incorporate risk
and uncertainty in the models. Recent failures in economic/financial
systems present us with new opportunities to influence public pol-
icy through better risk models and better algorithms. We face the
challenges of finding credible collaborators in the financial and eco-
nomic fields, and of interacting with decision-makers to identify poli-
cies that are politically and socially feasible, as well as near-optimal
by some measure.
As I write, abstract submission for ISMP 2009 has just closed and
indications are that attendance will be high, despite challenging eco-
nomic times. If you submitted an abstract, please remember to regis-
ter by the author registration deadline of May 29 to ensure that your
talk is scheduled. I also recommend booking accommodation early (see the conference web site for information) and booking early
for the banquet if you wish to attend. The banquet will be held on
Wednesday evening at the Field Museum, Chicago's famous natural
history museum, and tickets are strictly limited.
Committees have been working hard to select winners of the
prizes to be awarded during the ISMP opening ceremony, to be held
on August 23 at Chicago's Orchestra Hall. Please also reserve the
Tuesday evening of ISMP, after sessions conclude, for the MPS busi-
ness meeting. Here, we will have a membership vote on the new MPS
constitution, present the new officers and council, and announce the
location of ISMP 2012.



Steve Wright, MPS Chair

New Constitution and Bylaws for MPS

A few years ago, MPS was advised that its bylaws were not quite in
the standard form expected for non-profit organizations under US
tax law. The bylaws had been drafted with the help of an attorney
when the society was founded in 1970, but amendments in later
years had taken place generally without legal advice. We decided to
do a thorough redrafting of the bylaws, bringing them into line with
current MPS practice, adding a new section to account for ICCOPT,
incorporating the Prize rules, and adding precision in many places.
Our aim was not merely to satisfy the legal requirements but also to
provide a reference document for future MPS officers, editors, and
conference organizers. Naturally, we consulted with the society's at-
torney to ensure that the final document could pass legal muster.
We hope that it will serve the society, with minor amendments and
additions as needed, for at least the next 20 years. The new bylaws


were approved by vote of MPS Council on 23 Feb. 2009, and can be
found on the MPS web site at www.mathprog.org.
We took this opportunity to amend the Society's constitution as
well. The changes are intended to modernize and clarify the doc-
ument. They are minor, but too numerous to be detailed here; I
urge you to read the proposed new version which is printed below
and which can also be found on the web site. Amendments to the
constitution require approval by the full membership of the Society.
Council recommends a vote in favour of the new constitution. A
vote of the membership will be held at the MPS business meeting
during ISMP, where a simple majority will suffice to approve, provid-
ing a quorum is present.
Thanks to all those who contributed to the final versions, es-
pecially David Gay, who worked on various drafts and handled the
communications with our attorney.




Constitution of the Mathematical

Programming Society*


I Name
The society is an international organization to be called "Mathemat-
ical Programming Society, Inc." It will henceforth be referred to as
the Society.

II Objectives
The objectives of the Society are the communication of knowledge
of the theory, applications, and computational aspects of mathemat-
ical programming and related areas and the stimulation of their de-
velopment. To realize these objectives, the Society publishes sev-
eral journals, holds International Symposia and sponsors such other
activities consistent with the objectives as may be directed by the
Council.

III Membership
The membership of the Society consists of individual members and
of corporate members. Members join the Society by application in a
form prescribed by the Council.

IV Council
1. The elected members of the Council of the Society are the
Chair, the Vice-Chair, the Treasurer, and four at-large members.
The Chair of the Executive Committee, the chair of the Publica-
tions Committee, and the Editors-in-Chief of the journals shall be
invited to all Council meetings and shall be included on all Council
correspondence. All must be members of the Society.
2. The Chair chairs the meetings of the Council. The Council
votes by majority of the elected members present, with the Chair
having a casting vote.
3. The Chair will submit a report on the activities of the Society
when he** relinquishes the office. This report will be published in a
journal or newsletter of the Society. The Chair will chair a business
meeting on the occasion of any International Symposium held during
his term of office.
4. The Vice-Chair replaces the Chair whenever the necessity
arises.
5. The Treasurer is responsible for the administration of the
funds of the Society, as directed by the Council. The Treasurer shall
make a financial report to the Society at the International Sympo-
sium held within his term of office.
6. The Editors-in-Chief of the journals are appointed by the
Council subject to the terms of the contract in force with publishers








of the journals. They are responsible for implementing the directives
of the Council, in the organization of the journals, and for carrying
out its policy.
7. At each International Symposium there will be a combined
meeting of the outgoing Council and the incoming Council.
Additional meetings must be held when requested by at least three
members of the Council. The place of such meetings is decided by
the Chair. The Chair makes arrangements for the taking of minutes
at meetings of the Council and business meetings of the Society.
8. The policies of the Council are carried out by the Executive
Committee. The chair of the Executive Committee is appointed by
the Council, following a nomination by the Chair, which the Coun-
cil may approve or disapprove, and thereafter serves until the Chair
nominates a replacement candidate for the office. The chair of the
Executive Committee is responsible for executing the executive di-
rectives of the Council and for advising the Council. The Chair, Vice-
Chair and Treasurer are ex-officio members of the Executive Com-
mittee. The Chair may appoint additional members of the Executive
Committee, as necessary to allow the Executive Committee to carry
out its purpose. Such members serve at the pleasure of the Chair.
9. The Council appoints such other committees as it finds nec-
essary to carry out the business of the Society or to further its
objectives. The Chair and the chair of the Executive Committee are
ex-officio members of all such committees, except for those com-
mittees formed for purposes of determining winners of the Society's
prizes.

V International Symposia
1. International Symposia are sponsored by the Society at inter-
vals of approximately three years. The Chair nominates and the
Council elects the chair of the Organizing Committee and the
chair of the Program Committee of the next International Sym-
posium.



2. Fees for the International Symposium are fixed by the Organiz-
ing Committee, in consultation with the Chair. The Council shall
adopt guidelines regarding the financial obligations between the
Society and the Organizing Committee.

VI Elections
1. In this section, the word "term" is defined to be the period from
the end of one International Symposium to the end of the follow-
ing International Symposium.
2. Elections for the Offices of Chair, Treasurer and the four at-large
members of Council are concluded at least two months prior
to each International Symposium. The elected Chair serves on
Council for the two terms following his election. He is the Chair
from one year after the beginning of the first term until one year
after the beginning of the second term. He takes the office of
Vice-Chair during the remainder of his period of service. The
Treasurer takes office one year after the beginning of the term
following his election and he serves until one year after the be-
ginning of the next term. At-large members of Council serve for
the term following their election. If the office of Chair becomes
vacant, it is filled automatically by the Vice-Chair. The Chair, after
consultation with Council, may appoint a member of the Society
to fill any other office that becomes vacant until the next elec-
tion. No one may serve for more than two consecutive terms as
an elected at-large member of Council.
3. The Chair invites nominations for all elections, giving at least two
months notice through a journal or newsletter of the Society of
the closing date for the receipt of nominations. Candidates must
be individual members of the Society. They may be proposed ei-
ther by Council or by any six individual members of the Society.
No nomination that is in accordance with the constitution may be
refused, provided that the candidate agrees to stand. The Chair
decides the form of the ballot.


20th International Symposium on Mathematical Programming
Chicago, August 23-29, 2009

The 20th International Symposium on Mathematical Programming will take place August 23-29, 2009 in Chicago, Illinois. The meeting will be held at the University of Chicago's Gleacher Center and the Marriott Downtown Chicago Magnificent Mile Hotel. Festivities planned for the conference include the opening session in Chicago's Orchestra Hall, home of the Chicago Symphony Orchestra, the conference banquet at the Field Museum, Chicago's landmark natural history museum, and a celebration of the 60th anniversary of the Zeroth ISMP Symposium.

The plenary and semi-plenary speakers are
- Eddie Anderson, University of Sydney
- Mihai Anitescu, Argonne National Lab
- Stephen Boyd, Stanford University
- Friedrich Eisenbrand, EPFL
- Matteo Fischetti, University of Padova
- Lars Peter Hansen, University of Chicago
- Jong-Shi Pang, University of Illinois at Urbana-Champaign
- Pablo Parrilo, MIT
- Andrzej Ruszczynski, Rutgers
- Martin Skutella, Technische Universitat Berlin
- David Shmoys, Cornell
- Eva Tardos, Cornell
- Paul Tseng, University of Washington
- Shuzhong Zhang, The Chinese University of Hong Kong

Please plan on attending the opening session on Sunday evening, where MPS prizes will be presented. Please also plan to attend the MPS business meeting on Tuesday evening, which will include an announcement of the site of the next ISMP and a vote on the proposed new constitution.
Please keep checking the symposium web site www.ismp2009.org during the coming months, where all developments will be posted. In particular, you can register for the conference through the web site and find out about accommodation options. We urge you to book hotels as soon as possible.
One new feature at this symposium will be the daily newsletter Optima@ISMP, which will contain news about each day's events, interviews with MPS and ISMP personalities, and local information about Chicago.








VII Secretariat
1. The Council is assisted by a Secretariat, which is supervised by
the chair of the Executive Committee and Treasurer.
2. The Secretariat will keep an up-to-date list of members of the
Society and a list of past and present members of the Council,
with an indication of their functions.

VIII Fees
Membership fees are fixed by Council. A member who has not paid his dues before the end of the current year will be deemed to have left the Society.

IX Journals
Journals of the Society are distributed to all members of the Society, free of any charge additional to the membership fee, to their last known address.

X Agents
Council may approve the payments of membership fees, or of subscription fees for the journal, in national currency, to local agents in countries where the Council, in its sole discretion, determines it is difficult for individual members to obtain convertible currency.

XI Other activities
In addition to International Symposia, the Society may sponsor other conferences and seminars. The organization of such sponsored meetings is subject to directives by the Chair.

XII Amendment of the Constitution
If proposed by at least ten individual members of the Society, or by vote of the Council, the constitution may be amended by a majority of individual voters, either at a business meeting of the Society on the occasion of an International Symposium at which a quorum is present, or by a written ballot. Proposals must reach the Chair at least two months before the voting takes place.

XIII Bylaws
1. To carry out the obligation as set forth in this constitution and to conduct the business of the Society, the Council shall adopt bylaws. The bylaws may be adopted, annulled, or amended by an affirmative vote of at least four members of the Council. The bylaws also may be amended by the members of the Society at any business meeting of the Society by a majority vote of those present in person or by proxy, where such meeting was called in whole or in part for that purpose and notice of such proposal was given at least thirty (30) days prior to the date of the meeting. The Council shall have the authority in its sole discretion to interpret the bylaws.
2. Council shall adopt bylaws governing elections designed to promote and maintain international representation of the Council and Executive Committee.

Notes
* MATHEMATICAL PROGRAMMING SOCIETY is a registered trademark of the Mathematical Programming Society, Inc.
** Throughout this document, in accordance with standard English, no assumption about gender is implied by the use of a male pronoun.




Announcement: MPS Election Results

Triennial elections have recently concluded for the Mathematical Programming Society. The elected candidates are as follows:
- Chair: Philippe Toint (University of Namur)
- Treasurer: Juan Meza (Lawrence Berkeley National Laboratory)
- Council Members-at-Large: Jeff Linderoth (University of Wisconsin-Madison), Claudia Sagastizábal (Associação Instituto Nacional de Matemática Pura e Aplicada, Rio de Janeiro), Martin Skutella (Technische Universität Berlin), and Luis Nunes Vicente (Universidade de Coimbra).
The newly elected at-large Council members will be installed at the 20th ISMP this summer, in Chicago, and will hold office August 2009 - August 2012. The new Chair and Treasurer take office in August 2010 and will serve for the following three years. The current Chair, Stephen Wright, will be Vice-Chair during the period August 2010 - August 2012. As is readily apparent, leadership of the Society will continue to be in very good hands.





Alberto Caprara, Andrea Lodi and Katya Scheinberg

What's new in Optima?

Optima was born in 1980 thanks to the idea of Don Hearn, who has continued to ensure its existence with endless dedication and energy. At the beginning of its 30th year of life, the time has come for Optima to move out of its childhood home at the University of Florida. It is moving to its new design and production site in Europe. Needless to say, we are indebted to Don for his years of service, and specifically for his guidance during our own first two years of service as the Optima team, a very short period with respect to his 29 years! Nevertheless, we felt bold enough to take on the decisions necessary for the new production process of the MPS newsletter. We are glad to present Optima 79 as the first issue of the new Optima. With respect to the content we do not present big changes, but we hope the new design, a splash of red and the more accurate LaTeX-based mathematical layout will be appreciated by our readers. We would like to thank here the new designer Christoph Eyrich, and the Optima Committee members Steve Wright, Jon Lee, Harvey Greenberg and Mike Trick for their help in sorting out the future of the newsletter.


IMPRINT
Editor: Andrea Lodi, DEIS University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy. andrea.lodi@unibo.it
Co-Editors: Alberto Caprara, DEIS University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy. acaprara@deis.unibo.it
Katya Scheinberg, Department of Industrial Engineering and Operations Research, Columbia University, 500 W 120th Street, New York, NY 10027. katyascheinberg@gmail.com
Founding Editor: Donald W. Hearn
Published by the Mathematical Programming Society.
Design and typesetting by Christoph Eyrich, Mehringdamm 57 / Hof 3, 10961 Berlin, Germany. optima@0x45.de
Printed by Oktoberdruck AG, Berlin.








MOPTA 2009 (Modeling and Optimization: Theory and Applications), Lehigh University


The conference

MOPTA is for people from both
Discrete and Continuous
Optimization, working on both
theoretical and applied aspects.
Format: Six invited talks and many
more contributed talks, spread over
three days.



The University

Located in the south of Bethlehem,
PA, it is just an hour from New
York City and an hour from
Philadelphia.
Bus connections to Bethlehem are
available from Newark, JFK, La
Guardia, and Philadelphia airports.


Invited speakers

Ravindra K. Ahuja (UFL Gainesville)
Natalia Alexandrov (NASA)
Paul I. Barton (MIT)
John M. Mulvey (Princeton)
Pablo A. Parrilo (MIT)
Robert Weismantel (U. Magdeburg)




Organizing committee

Tamás Terlaky (Chair)
Pietro Belotti
Jitamitra Desai
Imre Pólik
Ted Ralphs
Larry Snyder
Robert H. Storer
Aurélie Thiele


Modeling competition

Organized in collaboration with
AIMMS, this competition challenges
groups of students from around the
world to model and solve a difficult,
real-world Optimization problem.
The finalists will present their work
at MOPTA, where the prize for the
best work will be awarded.


Practical information

June 6: abstract submission deadline
July 1: early registration deadline
Contact: ISE Dept, Lehigh University
200 W Packer Ave.
Bethlehem PA 18015
Phone: 610 758 3865
Email: mopta2009@lehigh.edu


http://mopta.ie.lehigh.edu


Application for Membership
I wish to enroll as a member of the Society. My subscription is for my personal use
and not for the benefit of any library or institution.
☐ I will pay my membership dues on receipt of your invoice.
☐ I wish to pay by credit card (Master/Euro or Visa).


Credit card no.


Expiration date


Family name

Mailing address




Telephone no. Telefax no.

E-mail

Signature


Mail to:
Mathematical Programming Society
3600 University City Science Center
Philadelphia, PA 19104-2688
USA

Cheques or money orders should be made payable to The Mathematical Programming Society, Inc. Dues for 2009, including subscription to the journal Mathematical Programming, are US $85. Dues for retired members are US $40. Student applications: dues are US $20. Have a faculty member verify your student status and send the application with dues to the above address.



Faculty verifying status


Institution


























MPS-SIAM Series on OPTIMIZATION
Philippe Toint, Editor-in-Chief
University of Namur, Belgium

BOOKS IN THE SERIES INCLUDE:

Introduction to Derivative-Free Optimization (NEW)
Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente
List Price $73.00 / SIAM Member Price $51.10 / Code MP08

Linear Programming with MATLAB
Michael C. Ferris, Olvi L. Mangasarian, and Stephen J. Wright
2007 / xii + 266 pages / Softcover / ISBN 978-0-898716-43-6
List Price $45.00 / SIAM Member Price $31.50 / Code MP07

Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization
Hedy Attouch, Giuseppe Buttazzo, and Gérard Michaille
2005 / xii + 634 pages / Softcover / ISBN 978-0-898716-00-9
List Price $140.00 / MPS/SIAM Member Price $98.00 / Code MP06

Applications of Stochastic Programming
Edited by Stein W. Wallace and William T. Ziemba
2005 / xvi + 709 pages / Softcover / ISBN 978-0-898715-55-2
List Price $142.00 / MPS/SIAM Member Price $99.40 / Code MP05

The Sharpest Cut: The Impact of Manfred Padberg and His Work
Edited by Martin Grötschel
2004 / xi + 380 pages / Hardcover / ISBN 978-0-898715-52-1
List Price $106.00 / MPS/SIAM Member Price $74.20 / Code MP04

A Mathematical View of Interior-Point Methods in Convex Optimization
James Renegar
2001 / viii + 117 pages / Softcover / ISBN 978-0-898715-02-6
List Price $47.00 / MPS/SIAM Member Price $32.90 / Code MP03

Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications
Aharon Ben-Tal and Arkadi Nemirovski
2001 / xvi + 488 pages / Softcover / ISBN 978-0-898714-91-3
List Price $121.50 / MPS/SIAM Member Price $85.05 / Code MP02

Trust-Region Methods
A. R. Conn, N. I. M. Gould, and Ph. L. Toint
2000 / xx + 959 pages / Hardcover / ISBN 978-0-898714-60-9
List Price $146.50 / MPS/SIAM Member Price $102.55 / Code MP01

Also of Interest:

Assignment Problems
Rainer Burkard, Mauro Dell'Amico, and Silvano Martello
2009 / xx + 382 pages / Hardcover / ISBN 978-0-898716-63-4
List Price $110.00 / SIAM Member Price $77.00 / Code OT106

YOU ARE INVITED TO CONTRIBUTE
The goal of the MPS-SIAM series is to publish a broad range of titles in the field of optimization and mathematical programming, characterized by the highest scientific quality. If you are interested in submitting a proposal or manuscript for publication in the series or would like additional information, please contact:
Philippe Toint, University of Namur, philippe.toint@fundp.ac.be
or
Sara J. Murphy, Series Acquisitions Editor, SIAM, murphy@siam.org
SIAM publishes quality books with practical implementation at prices affordable to individuals.



Complete information about SIAM and its book program can be found at www.siam.org/books. See summaries, tables of contents, and order online at www.siam.org/catalog.





