Title: Optima
Full Citation
Permanent Link: http://ufdc.ufl.edu/UF00090046/00064
 Material Information
Title: Optima
Series Title: Optima
Physical Description: Serial
Language: English
Creator: Mathematical Programming Society, University of Florida
Publisher: Mathematical Programming Society, University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: October 2000
 Record Information
Bibliographic ID: UF00090046
Volume ID: VID00064
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.


Full Text





OPTIMA Mathematical Programming Society Newsletter


Recent Progress in Submodular Function Minimization

Lisa Fleischer*
August 2, 2000

Last summer, two independent papers [31, 41] gave a positive answer to a question posed almost two decades ago: is there a polynomial time, combinatorial algorithm to minimize a submodular function? This article is an attempt to relate the history and importance of the problem, the difficulties that arose in confronting it, the motivation for the solutions proposed, some consequences of the existence of these new algorithms, and some further challenges.

conference notes 12

*Graduate School of Industrial Administration, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213. This article was written while the author was a faculty member at the Department of Industrial Engineering and Operations Research, Columbia University, New York. Additional support provided by NSF through grant EIA 9973858.

reviews 16 gallimaufry 18

mind sharpener 15





1. Introduction

A submodular function f is a set function defined on all subsets of a discrete set V (denoted 2^V) that satisfies the following inequality for every pair of subsets A, B ⊆ V:

f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B). (1)

Lovász showed that every submodular function has a natural extension to the nonnegative reals that is convex [35]. As with convexity, submodularity is a property of many naturally occurring functions in engineering and economics, and it is preserved under many natural transformations. Submodular functions arise as cut functions in a graph (see Figure 1) and as matroid rank functions, and have applications in areas including game theory, information theory, and graph theory. An in-depth treatment of submodular functions and applications can be found in surveys of Lovász [35], Fujishige [20], Topkis [45], and Frank and Tardos [17].

Examples of Submodular Functions.
The cut function in a graph: Given an undirected graph G on vertex set V, define δ on a subset of vertices A ⊆ V by δ(A) := the number of edges with exactly one endpoint in A. Then δ is a submodular function on 2^V. Similarly, if G is a directed graph, then the function defined by the number of arcs leaving A is a submodular function.
The {s, t}-cut function in a graph: Given a graph G on vertex set V ∪ {s, t}, define δ_s on a subset A ⊆ V by δ_s(A) := the number of edges with exactly one endpoint in A ∪ {s}. Then δ_s is a submodular function on 2^V. The appropriately defined directed version is also submodular.
The rank function of a matroid: A matroid is defined by a ground set M and a set of independent subsets I that satisfy: ∅ ∈ I; J ⊆ I ∈ I implies J ∈ I; and I, J ∈ I with |I| < |J| implies I ∪ {e} ∈ I for some e ∈ J \ I. Then the rank function r, defined on a subset A to be the size of the largest independent set contained in A, is a submodular function.

Figure 1.
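As a minimal illustration of the first example above, the following sketch (the four-vertex graph is a hypothetical example, not taken from the article) verifies inequality (1) exhaustively for an undirected cut function:

```python
from itertools import combinations

# Undirected cut function, as in Figure 1:
# delta(A) := number of edges with exactly one endpoint in A.
V = {0, 1, 2, 3}
E = [(0, 1), (1, 2), (2, 3), (0, 2)]

def delta(A):
    A = set(A)
    return sum(1 for (u, w) in E if (u in A) != (w in A))

def subsets(S):
    S = list(S)
    for r in range(len(S) + 1):
        yield from (set(c) for c in combinations(S, r))

# Exhaustively verify (1): delta(A) + delta(B) >= delta(A|B) + delta(A&B).
for A in subsets(V):
    for B in subsets(V):
        assert delta(A) + delta(B) >= delta(A | B) + delta(A & B)
print("cut function is submodular on all", 2 ** len(V), "subsets")
```

The same exhaustive check works for any candidate set function on a small ground set, which makes it a convenient sanity test.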
Given a submodular function f on V, a minimizer of f is a subset A ⊆ V such that f(A) ≤ f(B) for all subsets B ⊆ V. Submodular function minimization (SFM) is the problem of finding a minimizer of f. As with


convex functions, it is possible to find a minimizer of a submodular function by starting from an arbitrary suboptimal point and following any sequence of feasible steps in improving directions. In addition, for a submodular function f with f(∅) = 0, a minimizer of Lovász's convex extension of f restricted to the 0-1 cube corresponds to a minimizer of f of the same value.
The classic problem of finding a minimum {s, t}-cut in a graph is a special case of submodular function minimization. The minimum {s, t}-cut problem is efficiently solved via combinatorial algorithms invoking the strong duality relation of minimum cut equals maximum flow. For the minimum cut problem, all efficient algorithms use the structure of the graph. Imagine now having only an oracle that, given a set of vertices, returns the value of the associated cut. How could you use this oracle to find the minimum cut? While this problem is easier than general submodular function minimization because it is possible to recover the graph [4], it gives the essence of the problem: given an oracle that returns the value of f on any subset of V, find the minimizing subset.
Another special case of submodular function minimization arises in the field of matroids: find the maximum cardinality common independent set of two matroids defined on the same ground set. This is the matroid intersection problem, of which maximum bipartite matching is a special case. Let M1 = (V, I1) and M2 = (V, I2) be two matroids, where I1 and I2 are the sets of independent sets of M1 and M2, respectively. Denote the rank function of Mi by ri. The matroid intersection theorem of Edmonds says that

max{|J| : J ∈ I1 ∩ I2} = min{r1(A) + r2(V \ A) | A ⊆ V}. (2)

Since r1 + r2 defined by (r1 + r2)(A) := r1(A) + r2(V \ A) is submodular, the cardinality of the maximum sized common independent set can be determined by submodular function minimization.
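A small numeric check of the min-max relation (2), using two hypothetical partition matroids (a set is independent in a partition matroid if it meets each block of the partition at most once); both sides are computed by brute force:

```python
from itertools import combinations

# Ground set and two partition matroids (example data, not from the article).
V = ['a', 'b', 'c', 'd']
partition1 = [{'a', 'b'}, {'c', 'd'}]
partition2 = [{'a', 'c'}, {'b', 'd'}]

def independent(S, partition):
    return all(len(S & block) <= 1 for block in partition)

def rank(A, partition):
    # For a partition matroid, the rank of A is the number of blocks A meets.
    return sum(1 for block in partition if A & block)

def subsets(S):
    S = list(S)
    for r in range(len(S) + 1):
        yield from (set(c) for c in combinations(S, r))

Vset = set(V)
# Left side of (2): size of a largest common independent set.
lhs = max(len(S) for S in subsets(Vset)
          if independent(S, partition1) and independent(S, partition2))
# Right side of (2): min over A of r1(A) + r2(V \ A), a submodular minimization.
rhs = min(rank(A, partition1) + rank(Vset - A, partition2) for A in subsets(Vset))
assert lhs == rhs
print("matroid intersection min-max value:", lhs)
```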
While the minimum {s, t}-cut problem and the matroid intersection problem both have combinatorial polynomial-time algorithms [15, 3], until recently, general submodular function minimization did not. In 1981, Grötschel, Lovász, and Schrijver established the polynomial time equivalence of separation and optimization for combinatorial problems via the ellipsoid method [26]. One of the major implications of this



The Greedy Algorithm and Matroids
The greedy algorithm for submodular functions is a natural extension of the greedy algorithm for matroids. Given a vector c ∈ R^V with nonnegative entries, index the elements of V by decreasing c-value, so that c(v_1) ≥ c(v_2) ≥ ... ≥ c(v_n). This defines a total order < of V by v_1 < v_2 < ... < v_n. Iteratively set x(v_i) to be the maximum possible amount, subject to having maximized the first i − 1 components of x. Mathematically, define <^i := {v_1, ..., v_i}. Then the greedy algorithm sets x(v_i) = f(<^i) − f(<^{i−1}). In other words, greedy sets x(<^i) = f(<^i) for each set <^i. Any set with this property is called x-tight. The submodularity of f implies that the x determined in this manner is an element of P(f). Since these equations are linearly independent, it follows that the greedy algorithm finds an extreme point of P(f). This extreme point x maximizes c·x over all points in P(f). Each extreme point of P(f) is obtained by applying the greedy algorithm to some order < of V [8, 42]. The greedy algorithm for matroids is the special case of this algorithm when f is the rank function of a matroid.
The submodular polyhedron P(f) is a natural extension of the matroid polyhedron, which is the convex hull of incidence vectors of independent sets of a matroid. (The incidence vector χ_A of a set A is defined by χ_A(v) = 1 if v ∈ A and χ_A(v) = 0 if v ∉ A.) While the matroid polyhedron is restricted to the nonnegative orthant and all vertices of the matroid polyhedron are {0,1} vectors, the submodular polyhedron contains the negative orthant in its recession cone, and has vertex coordinates that depend on the values taken by the submodular function. Figure 3 contains an example of a submodular polyhedron on a two-element ground set.

Figure 2.

result was the first polynomial time algorithm for submodular function minimization. Their algorithm for SFM uses the fact that the greedy algorithm (described in Figure 2) maximizes c·x over all vectors x contained in the submodular polyhedron P(f) defined below. In this definition and elsewhere, a vector x ∈ R^V determines a function on 2^V by x(A) := Σ_{v ∈ A} x(v) for A ⊆ V:

P(f) := {x ∈ R^V | x(A) ≤ f(A), ∀ A ⊆ V}.

(Note that P(f) is nonempty only if f(∅) ≥ 0. Since the function obtained by adding the same constant to every value of a submodular function is submodular, we will assume without loss of generality that f(∅) = 0 for the remainder of this article.)
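As a minimal sketch of the greedy algorithm of Figure 2 (the example graph and the chosen order are hypothetical), the following computes an extreme base and checks that it lies in P(f) and is tight at V:

```python
from itertools import combinations

# f: the cut function of a small undirected graph, so f(empty set) = 0.
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
V = [0, 1, 2, 3]

def f(A):
    A = set(A)
    return sum(1 for (u, w) in E if (u in A) != (w in A))

def greedy_base(order):
    # x(v_i) = f({v_1..v_i}) - f({v_1..v_{i-1}}): an extreme base of B(f).
    x, prefix, prev = {}, set(), 0
    for v in order:
        prefix.add(v)
        cur = f(prefix)
        x[v], prev = cur - prev, cur
    return x

def subsets(S):
    S = list(S)
    for r in range(len(S) + 1):
        yield from (set(c) for c in combinations(S, r))

x = greedy_base([2, 0, 3, 1])
# x lies in P(f) (x(A) <= f(A) for every A) and is tight at V: x(V) = f(V).
for A in subsets(V):
    assert sum(x[v] for v in A) <= f(A)
assert sum(x.values()) == f(set(V))
print("extreme base:", x)
```

Running greedy over different orders enumerates different extreme points of P(f), as the figure states.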
The greedy algorithm establishes that the optimization problem for submodular polyhedra is polynomial time solvable [8, 42]. Thus, Grötschel, Lovász, and Schrijver's work implies the separation problem for P(f) is polynomial time solvable. The separation problem is to determine if x ∈ P(f) for a given x, and if not, return a set A for which f(A) − x(A) < 0. If 0 ∈ P(f − β), for (f − β)(A) := f(A) − β, then the minimum value of f is at least β. With simple bounds on a minimizer, obtainable using the greedy algorithm and another max-min relation of Edmonds given later in (3)1, binary search can be used to find the minimizing set A in polynomial time.
The ellipsoid algorithm is not combinatorial. It relies heavily on linear algebra, and does not use the combinatorial structure of the problem except in a very abstract way. It provides a proof of the polynomial time solvability of SFM, but does not give any new understanding of how and why. This leaves a natural open problem: to develop combinatorial algorithms for SFM. Work toward this goal led to demonstrating that some special subclasses are solvable directly: Cunningham gives a pseudopolynomial time algorithm for submodular function minimization that generalizes his strongly polynomial time algorithm for the special case of testing membership of a vector x ∈ R^V in a matroid polyhedron [3, 5]. Queyranne describes a strongly polynomial time algorithm for minimizing symmetric submodular functions [39], building on the overall minimum cut algorithm of Nagamochi and Ibaraki [38]. Both of the new combinatorial, polynomial-time algorithms for general submodular function minimization [31, 41] build on Cunningham's work. Despite this common starting point, these new algorithms are very different from each other. These combinatorial algorithms not only yield deeper understanding of submodular functions, but also, due to their combinatorial nature, lend themselves more easily to modification to special cases or extension to more general problems.
The next section contains a review of Cunningham's algorithm and a high level summary of the new algorithms. Figure 5 provides an explanation of some of the general ideas applied to the special case of the minimum {s, t}-cut problem. In Section 3, the new ideas and algorithms are described in more detail. Section 4 discusses some extensions of these combinatorial algorithms. Section 5 concludes with some open problems.

2. Combinatorial Algorithms

In this section, we describe the basic ingredients used in combinatorial algorithms for submodular function minimization, and explain the difficulties encountered in using them to obtain a polynomial time algorithm. Figure 5 describes some of these ingredients in the context of the minimum {s, t}-cut problem.
Cunningham describes the first combinatorial, pseudopolynomial time algorithm for submodular function minimization [5]. His algorithm is motivated by the min-max relation given in (3) below. This relation is implied by a theorem of Edmonds [8] that generalizes the matroid intersection theorem (2). First, a little notation. The base polytope of a submodular function, denoted B(f), is the face of the submodular polyhedron P(f) that satisfies x(V) = f(V). Note that the greedy algorithm described in Figure 2 always returns an element of B(f). An element of the base polytope is called a base, and extreme points of B(f) are extreme bases. For a vector x ∈ R^V, let x⁻(v) := min{x(v), 0}. Edmonds' theorem implies:

max{x⁻(V) | x ∈ B(f)} = min{f(A) | A ⊆ V}. (3)

This equality is a form of strong duality. The weak duality direction is easy to see: x⁻(V) ≤ x(A) ≤ f(A) for all A ⊆ V, since at worst, A contains all elements of V that have negative x-value and does not contain any element with positive x-value.

1 The greedy algorithm produces a vector x ∈ B(f) satisfying x(V) = f(V). By (3), x⁻(V) gives a lower bound on the minimum value of f. This does not exceed the maximum absolute value of f by more than a linear factor. A simple upper bound may be taken as f(∅), f(V), or any other function value.
2 A function is pseudopolynomial if for an input Q it is a polynomial in length(Q) and max(Q), where length(Q) is the number of bits needed to describe Q and max(Q) is the magnitude of the largest number in Q. This differs from a polynomial function, which needs to be a polynomial in length(Q) and can only depend logarithmically on max(Q). A function is strongly polynomial if for an input Q, it is polynomial in the number of items in Q and does not depend at all on the size of the numbers in Q.
3 A symmetric set function satisfies f(A) = f(V \ A) for all A ⊆ V.
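The weak duality direction of (3) can be checked numerically on a small example (the graph and the modular shift below are hypothetical; the shift just makes the minimum of f negative and therefore interesting):

```python
from itertools import combinations, permutations

# A small submodular function: graph cut plus a modular shift, f(empty) = 0.
E = [(0, 1), (1, 2), (0, 2)]
shift = {0: -2, 1: 1, 2: 0}   # modular term preserves submodularity
V = [0, 1, 2]

def f(A):
    A = set(A)
    cut = sum(1 for (u, w) in E if (u in A) != (w in A))
    return cut + sum(shift[v] for v in A)

def greedy_base(order):
    x, prefix, prev = {}, set(), 0
    for v in order:
        prefix.add(v)
        cur = f(prefix)
        x[v], prev = cur - prev, cur
    return x

def subsets(S):
    S = list(S)
    for r in range(len(S) + 1):
        yield from (set(c) for c in combinations(S, r))

min_f = min(f(A) for A in subsets(V))
# Weak duality in (3): x-(V) <= min f(A) for every base x.
for order in permutations(V):
    x = greedy_base(order)
    x_minus = sum(min(val, 0) for val in x.values())
    assert x_minus <= min_f
print("min f =", min_f)
```

As the text notes, equality in (3) may require a base that is not extreme, so the greedy bases above need not attain the maximum.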





Figure 3. The submodular polyhedron and base polytope of a submodular function defined on a two-element ground set. The intersection of the base polytope with the negative orthant is the set of bases that maximize x⁻({v, w}).

Relation (3) can be used as the motivation for a combinatorial algorithm to find a minimizer of f. Starting with an arbitrary base x ∈ B(f) obtained using the greedy algorithm, try to move "towards" the base x* that maximizes the left hand side of (3). An optimal base x* can then be used to determine the minimizing set A. This is the idea underlying Cunningham's algorithm; we give details below.
The maximizer of the left hand side of (3) may not be an extreme point of B(f). An example of a base polytope with this property is given in Figure 3. This raises the following issue: Given the pair (x, A) as a candidate pair of optimal solutions for (3), how is it possible to verify their optimality, short of calling a subroutine for submodular function minimization? It is easy to check if x⁻(V) = f(A), but what proof is there that x ∈ B(f)? To confirm that x is an extreme base, it is sufficient to provide a total order that generates x via the greedy algorithm. This can be done efficiently, e.g. [2]. For x ∈ B(f) not extreme, a proof of membership may be given by expressing x as a convex combination of affinely independent extreme bases y_i, i ∈ I: x = Σ_{i ∈ I} λ_i y_i. (Affine independence is used to ensure that I is not too large.) Cunningham first introduced this idea as a way to verify membership of x in a matroid polyhedron [3].

While it is possible to move from any base to any other base in B(f) by adding an appropriate vector z ∈ R^V with z(V) = 0, it is simpler to consider a restricted set of moves. The simplest move is to increase one component x(w) of x and decrease another component x(v) by the same amount. Mathematically, we can represent such a move from x using the incidence vectors χ_v and χ_w, where χ_v(v) = 1 and χ_v(w) = 0 for all w ≠ v. The move from x may be represented as x' = x + α(χ_w − χ_v). This move is called an exchange operation, or simply an exchange.

Accessible Pairs

For an extreme base, it is possible to compute the exchange capacities of some pairs of elements efficiently. The following lemma illustrates this.
Lemma 2.2 Suppose y is an extreme base of B(f) generated by a total order L = (v_1, ..., v_{i-1}, v_i, v_{i+1}, ..., v_n), and let L(v) denote the set of elements up to and including v in L. Then

α(y, v_i, v_{i+1}) = f(L(v_{i+1}) \ {v_i}) − y(L(v_{i+1}) \ {v_i}),

and y' = y + α(y, v_i, v_{i+1})(χ_{v_{i+1}} − χ_{v_i}) is generated by L' = (v_1, ..., v_{i-1}, v_{i+1}, v_i, v_{i+2}, ..., v_n).

There may be several different total orders that generate an extreme base y. For each of these orders, Lemma 2.2 applies. Bixby, Cunningham, and Topkis [2] describe an O(n^2) algorithm to calculate the exchange capacities of all adjacent pairs (v_i, v_{i+1}) over all total orders generating extreme base y. It is this extended set of pairs that we call accessible pairs for y.

Figure 4.

To fully describe the set of allowable exchanges, it is necessary to specify how large α may be for a given triple (x, v, w) ∈ B(f) × V × V. The constraint is that x' be a base, i.e. that it satisfies f(A) − x'(A) ≥ 0 for all A ⊆ V. To ensure this, it is necessary to consider those sets A for which x'(A) > x(A). These are precisely the sets that contain w and don't contain v. The amount of allowable exchange is determined by the set A that minimizes f(A) − x(A). Thus the exchange capacity of (x, v, w), denoted α(x, v, w), is

α(x, v, w) := min{f(A) − x(A) | w ∈ A ⊆ V \ {v}}. (4)
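Definition (4) can be evaluated by brute-force enumeration on a toy ground set (the graph below is a hypothetical example); the check at the end confirms that an exchange of size α keeps the vector inside P(f):

```python
from itertools import combinations

# alpha(x, v, w) = min{ f(A) - x(A) : w in A, v not in A }, by enumeration.
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
V = [0, 1, 2, 3]

def f(A):
    A = set(A)
    return sum(1 for (u, w) in E if (u in A) != (w in A))

def greedy_base(order):
    x, prefix, prev = {}, set(), 0
    for v in order:
        prefix.add(v)
        cur = f(prefix)
        x[v], prev = cur - prev, cur
    return x

def exchange_capacity(x, v, w):
    rest = [u for u in V if u != v and u != w]
    best = None
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            A = set(extra) | {w}
            val = f(A) - sum(x[u] for u in A)
            best = val if best is None else min(best, val)
    return best

x = greedy_base([0, 1, 2, 3])
alpha = exchange_capacity(x, 0, 3)
assert alpha >= 0                      # x in B(f) => capacities are nonnegative
x2 = dict(x)
x2[3] += alpha; x2[0] -= alpha         # x' = x + alpha * (chi_w - chi_v)
for r in range(len(V) + 1):            # x' is still in P(f)
    for A in map(set, combinations(V, r)):
        assert sum(x2[u] for u in A) <= f(A)
print("alpha(x, 0, 3) =", alpha)
```

The enumeration over all subsets containing w but not v is exactly a submodular minimization of f − x, which is the difficulty the following paragraphs discuss.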
Exchange operations that increase a negative component of x and decrease a positive component of x improve x⁻(V). The following theorem, which is implicit in the proof of correctness of Cunningham's algorithm for SFM, implies it is possible to reach a maximizer by performing improving exchange operations only.

Theorem 2.1 A base x ∈ B(f) maximizes x⁻(V) over all bases in B(f) if and only if for all elements v with x(v) > 0 and all elements w with x(w) < 0, the exchange capacity α(x, v, w) = 0.

Theorem 2.1 implies a deceptively simple algorithm for submodular function minimization: Define S⁺ := {v | x(v) > 0} and S⁻ := {v | x(v) < 0}. Next, find a maximizer of the left side of (3) by repeatedly finding a pair of elements in S⁺ × S⁻ with positive exchange capacity and performing the appropriate exchange operation. When the conditions of Theorem 2.1 are satisfied, find a minimizer of f by taking the complement of the set of elements W that have strictly positive exchange capacity with an element in {w ∈ V | x*(w) < 0}. (It is not hard to show that V \ W is x*-tight. Thus x*⁻(V) = x*(V \ W) = f(V \ W).)
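A minimal sketch of this deceptively simple scheme on a hypothetical toy function, with exchange capacities computed by exhaustive enumeration (exactly the expensive step the surrounding text discusses), and optimality certified against a brute-force minimum:

```python
from itertools import combinations

# Exchange algorithm implied by Theorem 2.1, feasible only for tiny ground
# sets: a cut function plus a modular shift, so the minimum of f is negative.
E = [(0, 1), (1, 2), (0, 2)]
shift = {0: -2, 1: 1, 2: 0}
V = [0, 1, 2]

def f(A):
    A = set(A)
    cut = sum(1 for (u, w) in E if (u in A) != (w in A))
    return cut + sum(shift[v] for v in A)

def subsets_containing(w, avoid):
    rest = [u for u in V if u != w and u != avoid]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            yield set(extra) | {w}

def exchange_capacity(x, v, w):
    return min(f(A) - sum(x[u] for u in A) for A in subsets_containing(w, v))

# Start from the greedy base for the identity order, then do improving exchanges.
x, prefix, prev = {}, set(), 0
for v in V:
    prefix.add(v)
    cur = f(prefix)
    x[v], prev = cur - prev, cur

improved = True
while improved:
    improved = False
    for v in [u for u in V if x[u] > 0]:
        for w in [u for u in V if x[u] < 0]:
            a = min(exchange_capacity(x, v, w), x[v], -x[w])
            if a > 0:
                x[v] -= a; x[w] += a
                improved = True

# At termination the conditions of Theorem 2.1 hold, so by (3) the negative
# part of x certifies the minimum value of f.
x_minus = sum(min(val, 0) for val in x.values())
min_f = min(f(set(c)) for r in range(len(V) + 1) for c in combinations(V, r))
assert x_minus == min_f
print("min f =", min_f, "certified by base", x)
```

For integer-valued f each improving exchange raises x⁻(V) by at least 1, so the loop terminates; the bottleneck is the exponential capacity computation.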
The problem with this algorithm is that performing exchange operations relies on computing exchange capacities for x. Our expression for the exchange capacity for x ∈ B(f) requires finding a minimizer of the function f − x defined by (f − x)(A) := f(A) − x(A). This is again submodular, since the function x satisfies (1) at equality. In general, there is nothing special about the submodular function f − x that would enable us to minimize it faster than general submodular function minimization. In fact, even in the special case of finding a minimum {s, t}-cut, determining the exchange capacity requires a minimum cut computation. (See Figure 5.) Thus we have reduced our original problem to a problem that is just as hard.
Note that in the above discussion, we have ignored the representation of x as a convex combination of extreme bases, x = Σ_{i ∈ I} λ_i y_i, that we need to maintain as proof that x ∈ B(f). In order to maintain this, Cunningham restricts exchange operations to the extreme bases y_i in this convex representation. Thus, performing an exchange operation of size α(y_i, v, w) on y_i changes x(v) and x(w) by only λ_i · α(y_i, v, w).
The fact that computing exchange capacities in general is as difficult as SFM is an important obstacle that algorithms for submodular function minimization must overcome. In his pseudopolynomial time algorithm, Cunningham




An Example of Submodular Function Minimization: Minimum Cuts
Here, we demonstrate some of the concepts introduced in Section 2 by explaining what they mean for a familiar example of submodular function minimization: the minimum {s, t}-cut problem.
For the minimum {s, t}-cut problem, the submodular function we want to minimize is δ_s. We start by normalizing to create δ'_s by subtracting δ_s(∅) from each value of δ_s. Thus the minimum function value will be the value of the minimum cut minus δ_s(∅), which will be at most 0. A base of B(δ'_s) is determined by orienting all edges incident to s away from s, all edges incident to t towards t, and assigning a fractional orientation to all other edges. A fractional orientation of an edge (v, w) is a replacement of the undirected edge (v, w) with two directed edges (v, w) and (w, v) along with nonnegative values u so that u(v, w) + u(w, v) = 1. The corresponding base x is then defined by setting x(v) equal to the value of arcs leaving v minus the value of arcs entering v.
An extreme base of B(δ'_s) is obtained from order (v_1, v_2, ..., v_n) by first orienting all edges incident to s away from s. The greedy algorithm then visits vertices in order, and iteratively orients all edges incident to v_i without previous orientation so that they leave v_i. (The greedy algorithm applied to order (v_1, v_2, ..., v_n) sets x(v_1) = δ(v_1) if there is no edge from s to v_1, and otherwise sets x(v_1) = δ(v_1) − 2.) In general, an orientation corresponds to an extreme base if and only if the resulting graph is acyclic: given an acyclic graph, any topological sort gives the order that generates the orientation and extreme base via the greedy algorithm.
An exchange operation for (x, v, w) corresponds to decreasing the value of arcs along a path or set of paths from v to w in the oriented graph G that generates x. This is done so that the net value of arcs leaving v decreases, the net value of arcs leaving w increases, and the net value of arcs leaving all other nodes stays the same. (Net value leaving v is total value of arcs leaving v minus total value of arcs entering v.) This could be interpreted as sending flow through the oriented graph G from v to w, respecting upper bounds on capacities determined by u, and letting the new G be the residual graph of this flow. The exchange capacity of (x, v, w) is twice the value of the maximum flow from v to w in the oriented graph G with values u interpreted as upper bounds on capacities. (Twice the value because reversing the direction of (v, w) increases x(w) by two, and decreases x(v) by two.)
So, even in this special case, the problem of computing exact exchange capacities is again a problem of the same form as the original: here, a minimum cut problem. However, finding a lower bound on the exchange capacity, using a simple augmenting path, is a much easier computation. In addition, if x is an extreme base, and v_i and v_{i+1} are adjacent in the order generating x, then α(x, v_i, v_{i+1}) is either 0, if there is no edge between v_i and v_{i+1}, or 2, if there is such an edge.
The alternative characterization of a maximizer given in Theorem 2.1 for the minimum {s, t}-cut problem is the assertion that an optimal orientation does not contain a path from any v with positive value to any w with negative value. This corresponds to the well-known alternative characterization of a maximum flow. The union of s with the set of nodes reachable from s in the final oriented network defines a minimum cut.

Figure 5.
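The orientation construction described in Figure 5 can be checked on a small hypothetical graph: the base read off an orientation (out-degree minus in-degree at each internal vertex) equals the greedy base of the normalized {s, t}-cut function for the same visiting order:

```python
# Extreme bases of the {s,t}-cut function via edge orientation:
# orient edges at s away from s, then visit vertices in order, orienting each
# still-unoriented incident edge away from the current vertex.
edges = [('s', 'a'), ('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 't')]
V = ['a', 'b', 'c']          # internal vertices; s and t are terminals

def delta_s(A):
    # number of edges with exactly one endpoint in A | {s}
    S = set(A) | {'s'}
    return sum(1 for (u, w) in edges if (u in S) != (w in S))

def greedy_base(order):
    x, prefix, prev = {}, set(), 0
    for v in order:
        prefix.add(v)
        cur = delta_s(prefix) - delta_s(set())   # normalized so f(empty) = 0
        x[v], prev = cur - prev, cur
    return x

def orientation_base(order):
    oriented, done = [], set()
    for (u, w) in edges:                 # edges at s leave s
        if 's' in (u, w):
            oriented.append(('s', w if u == 's' else u))
            done.add(frozenset((u, w)))
    for v in order:                      # each visited vertex pushes edges out
        for (u, w) in edges:
            e = frozenset((u, w))
            if e not in done and v in (u, w):
                oriented.append((v, w if u == v else u))
                done.add(e)
    return {v: sum(1 for a in oriented if a[0] == v)
               - sum(1 for a in oriented if a[1] == v) for v in V}

for order in (['a', 'b', 'c'], ['c', 'b', 'a'], ['b', 'c', 'a']):
    assert greedy_base(order) == orientation_base(order)
print("orientation bases match greedy bases")
```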

[5] uses an idea first proposed in Bixby, Cunningham, and Topkis [2] to restrict exchange operations to a subset of directed pairs for which the exchange capacities can be computed efficiently. For x = Σ_{i ∈ I} λ_i y_i with y_i an extreme base for all i, each y_i determines a set of special directed pairs with easily computed exchange capacities. (See Figure 4 for more explanation.) Call the union of these directed pairs over all i ∈ I accessible pairs. Bixby et al. [2] define a graph with vertex set equal to V and a directed arc for each accessible pair (called an accessible arc). (For α(y_i, v, w) > 0 with (y_i, v, w) accessible, the arc is directed from v to w.) Cunningham proves that it is always possible to reach the optimal x* by repeating the following operation a pseudopolynomial number of times: Find a path in the union of accessible arcs from S⁺ to S⁻ and perform exchange operations for each arc along the path so that the x-value of an element in S⁺ decreases, the x-value of an element in S⁻ increases, and all other x-values remain the same. Such a path is called an augmenting path.
This algorithm looks very similar to strongly polynomial augmenting path algorithms for the maximum flow problem [15], and to Cunningham's strongly polynomial algorithm for testing membership in matroid polyhedra [3]. However, it is not known to be even weakly polynomial for general submodular function minimization. For maximum flow, Edmonds and Karp [10] propose two heuristics used to select paths in a way that will lead to a polynomial time algorithm: the shortest path heuristic and the fattest path heuristic. Why don't these heuristics work here?
The shortest path heuristic for maximum flow selects the path with the least number of arcs. The proof of polynomiality of this heuristic follows from showing that the length of the shortest path never decreases, and after a polynomial number of iterations, provably increases. This heuristic does not work for Cunningham's algorithm because the length of the shortest path may actually decrease: the set of accessible pairs considered in finding the augmenting path is not the complete set of pairs with positive exchange capacity, and after each exchange operation, the exchange capacities and the set of accessible pairs change.
The fat path heuristic selects the residual path with largest capacity. This works for maximum flow because the path with the largest residual capacity must carry at least a polynomial fraction of the difference between the maximum flow value and the current flow value, thus guaranteeing sufficiently large progress with each augmentation. For Cunningham's algorithm, there may be no residual path with large capacity [5]. The problems are two-fold: First, even if f(A) − x(A) is large, there may not be an accessible arc with large exchange capacity leaving A. Second, the amount of change effected in x by performing exchanges on y_i for i ∈ I depends not only on the exchange capacity, but also on the multiplier λ_i in the convex representation, which may be exponentially small. (It could depend inversely on a polynomial in max_{A ⊆ V} |f(A)|.)
For network flow problems, since fat paths exist, an effective way to ensure large augmentations is to use scaling. However, it is not clear how to scale submodular functions. The perhaps natural idea of using ⌊f/δ⌋ defined by ⌊f/δ⌋(A) := ⌊f(A)/δ⌋ does not work because it is not submodular, and hence the structure of submodular polyhedra is lost. For example, if the function is not submodular, then the greedy algorithm no longer optimizes over P(f). (An easy way to see ⌊f/δ⌋ is not submodular is to consider the case of δ = 2 with f(A) and f(B) odd, f(A ∩ B) and f(A ∪ B) even, and f(A) + f(B) = f(A ∩ B) + f(A ∪ B).)
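A concrete instance of this parity obstruction, assuming δ = 2 and rounding down (the two-element ground set and odd weights are a hypothetical example): a modular f satisfies (1) with equality, yet its scaled version violates (1).

```python
# The naive scaling floor(f/2) can break submodularity even for a modular f:
# take f(A) and f(B) odd while f(A & B) and f(A | B) are even.
weights = {0: 1, 1: 1}          # modular f with odd singleton values
def f(A):
    return sum(weights[v] for v in A)

def f_scaled(A, delta=2):
    return f(A) // delta        # floor(f(A) / delta)

A, B = {0}, {1}
# f itself satisfies (1) with equality (it is modular):
assert f(A) + f(B) == f(A | B) + f(A & B)
# ...but the scaled function violates (1): 0 + 0 < 0 + 1.
assert f_scaled(A) + f_scaled(B) < f_scaled(A | B) + f_scaled(A & B)
print("floor(f/2) violates submodularity on A =", A, "B =", B)
```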




New Approaches to SFM

The two new algorithms for submodular function minimization take two different approaches to extending Cunningham's algorithm. Iwata, Fleischer, and Fujishige [31] devise a scaling scheme and special augmenting paths to employ a "fat path" approach to SFM: they show that each of their augmentations is at least a polynomial fraction of the difference between the current value of x⁻(V) and its optimal value. Schrijver [41] designs a different graph and augmentation step to employ a "short path" approach to SFM: he shows that the shortest path distances from S⁺ in his graph are nondecreasing, and that at least one distance increases after a polynomial number of augmentations. Both of these approaches are described further in the next section.

3. New Combinatorial, Polynomial Time Algorithms

The two new polynomial time algorithms for submodular function minimization propose alternative solutions to the difficulty of establishing a polynomial time, augmenting path algorithm. Iwata, Fleischer, and Fujishige [31] use a scaling framework to relax the value of the submodular function by an additive function depending on the scaling parameter. This relaxation has the effect of relaxing exchange capacities by the same scaling parameter, thus allowing sufficiently large augmentations, demonstrated in a "fat path" analysis. Schrijver [41] instead uses a "short path" analysis by allowing the use of all exchange operations for each y_i in the convex representation of the current base x. Thus he works with a graph that contains an exchange arc for every directed pair (v, w) with positive exchange capacity for some y_i, i ∈ I. To overcome the difficulty of computing exact exchange capacities, he shows how it is possible to compute lower bounds on exchange capacities that are sufficient for the algorithm to make demonstrable progress. In addition, he abandons the direct use of augmenting paths, instead focusing on removing selected exchange arcs in a way that ensures shortest path lengths are nondecreasing. His approach uses a layered network (à la Dinic [7]). Recently, Fleischer and Iwata [12] have shown that Schrijver's idea of removing selected arcs by successive exchange operations can be embedded in a push-relabel framework, thus yielding a faster and more adaptable algorithm. The push-relabel algorithm of Goldberg and Tarjan [24] provides a good analogy in maximum flows for the technique of removing selected arcs from the residual graph to ensure that shortest path lengths are nondecreasing. A more detailed description of both approaches follows.

3.1. A Scaling Algorithm

In this section, we describe the scaling algorithm for SFM introduced in [31]. Most of the new ideas used to obtain the polynomial time scaling algorithm for submodular function minimization appeared first in recent algorithms for submodular flow. To explain this connection, we start by defining submodular flow and describing some relations between submodular flow and submodular function minimization.

Submodular Flow

Like standard network flows, the submodular flow problem is defined on a graph G = (V, E) with vertex set V and arc set E. A flow φ is a function defined on the arcs that obeys capacity constraints ℓ(v, w) ≤ φ(v, w) ≤ u(v, w) for all (v, w) ∈ E. Given a vector b ∈ R^V of supplies and demands satisfying b(V) = 0, a standard feasible flow is a flow φ that satisfies flow conservation constraints that say the net flow leaving v, written Σ_w [φ(v, w) − φ(w, v)], equals b(v) for each vertex v ∈ V.
A feasible submodular flow for submodular function f relaxes the flow conservation constraints in standard network flows and replaces them by inequalities that require that the net flow leaving any subset is at most the submodular function value of that subset. This can be described as follows. Define the boundary of the flow at A, denoted ∂φ(A), as the net flow leaving A. Mathematically, ∂φ(A) := Σ_{v ∈ A, w ∉ A} [φ(v, w) − φ(w, v)]. The submodular constraints can then be written as ∂φ(A) ≤ f(A) for all A ⊆ V. It is not hard to show that ∂φ(A) = Σ_{v ∈ A} ∂φ(v). Since ∂φ(V) = 0, we assume that f(V) = 0, and thus this is equivalent to requiring ∂φ ∈ B(f). The submodular flow problem asks for a feasible submodular flow of minimum cost, where cost is the dot product of c ∈ R^E with φ. Written as an exponential size linear program:

(SF)  min c·φ  subject to  ℓ ≤ φ ≤ u,  ∂φ ∈ B(f).

The minimum cost flow problem is a special case of submodular flow with f replaced by the modular set function defined by the supply vector b. (Since ∂φ(v) ≤ b(v) and −∂φ(v) = ∂φ(V \ {v}) ≤ b(V \ {v}) = −b(v), a feasible φ satisfies the flow conservation constraints.)
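The boundary function ∂φ can be sketched as follows (the digraph and flow values are hypothetical); the checks confirm the two facts used above, that ∂φ is modular and that ∂φ(V) = 0:

```python
from itertools import combinations

# Boundary of a flow: net flow leaving a set A.
V = [0, 1, 2]
phi = {(0, 1): 2.0, (1, 2): 1.5, (2, 0): 2.0, (0, 2): 0.5}

def boundary(A):
    A = set(A)
    out_flow = sum(val for (u, w), val in phi.items() if u in A and w not in A)
    in_flow = sum(val for (u, w), val in phi.items() if w in A and u not in A)
    return out_flow - in_flow

# Modularity: boundary(A) = sum of boundary({v}) over v in A, for every A.
for r in range(len(V) + 1):
    for A in map(set, combinations(V, r)):
        assert abs(boundary(A) - sum(boundary({v}) for v in A)) < 1e-9
# And the boundary of the whole vertex set vanishes.
assert abs(boundary(set(V))) < 1e-9
print("boundary is modular and vanishes on V")
```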
The submodular flow problem was introduced by Edmonds and Giles [9] as a framework that includes as special cases minimum cost flows, submodular function minimization, (poly)matroid intersection, and the problem of contracting a minimum cost subset of arcs to make a directed graph strongly connected. It also models other network design problems, such as finding the least cost subgraph containing k vertex disjoint paths from a root r to all other vertices v ∈ V [17, 18]. Edmonds and Giles show that the extreme points of the submodular flow polytope are integral [9]. The submodular flow problem can be solved in polynomial time using the ellipsoid algorithm [27], and via combinatorial algorithms that depend on an oracle for submodular function minimization, e.g. [40, 34, 16, 6, 44, 22, 21, 14]. Thus, another application of SFM is in solving submodular flow problems. Why do all of these algorithms for submodular flow require an oracle for SFM?
With maximum flows, to move from one feasible flow to another, it suffices to send flow around a cycle, or set of cycles, in the residual graph of the flow. This maintains flow conservation at all nodes. With submodular flows, it is permissible to change the net flow in or out of any vertex, as long as the submodular constraints are satisfied. Thus, moving from one feasible flow to another may also involve sending flow along paths. Suppose one sends flow from s to t in the residual graph of φ. How much flow can be sent so that the new flow φ' is feasible? Sending flow along an s-to-t path increases the net flow leaving s and decreases the net flow leaving t. This increases the net flow leaving any set A containing s but not t. To stay feasible, the amount of flow sent along the path must be bounded by f(A) − ∂φ(A) over all sets A that contain s but not t. This is precisely the exchange capacity α(∂φ, t, s). The combinatorial algorithms for submodular flow add exchange

S T -- Ai6 41


arcs for every pair (t, s) with positive exchange
capacity to the set of flow arcs, assign each arc a
capacity equal to the corresponding exchange
capacity, and solve the submodular flow problem
in the resulting graph.
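The exchange capacity in this bound can be computed, inefficiently but transparently, by minimizing f(A) − x(A) over all sets A containing s but not t. The function and base below are assumed for illustration; the base is generated by Edmonds' greedy algorithm, which produces an extreme point of B(f).

```python
from itertools import combinations

V = ['u', 'v', 'w']

def f(A):
    # an assumed submodular function: cut function of a complete digraph, capacity 2
    return 2 * len(A) * (len(V) - len(A))

def greedy_base(order):
    """Edmonds' greedy algorithm: an extreme base of B(f) for a linear order."""
    x, prefix = {}, []
    for v in order:
        x[v] = f(prefix + [v]) - f(prefix)
        prefix.append(v)
    return x

def exchange_capacity(x, t, s):
    """alpha(x, t, s): min of f(A) - x(A) over all sets A with s in A, t not in A."""
    others = [v for v in V if v not in (s, t)]
    return min(
        f(list(extra) + [s]) - sum(x[v] for v in list(extra) + [s])
        for r in range(len(others) + 1)
        for extra in combinations(others, r)
    )

x = greedy_base(['u', 'v', 'w'])       # x = {'u': 4, 'v': 0, 'w': -4}
cap = exchange_capacity(x, 'u', 'w')   # how far x('w') can rise as x('u') falls
```

With these choices `cap` is 8: raising x('w') by 8 while lowering x('u') by 8 makes the set {w} tight and keeps x in B(f).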

3.1. New Ideas for Submodular Functions and Flows

While submodular flow algorithms assume an
oracle for SFM to compute exchange capacities,
a capacity (or function) scaling algorithm for
submodular flow would presumably deal with
some of the problems Cunningham faced in trying
to obtain a polynomial time algorithm for
SFM. In particular, a capacity or function scaling
algorithm would have to address the problem
of finding augmenting paths with sufficiently
large capacity. However, the existence of a
polynomial time, function-scaling algorithm for
submodular flow remained an open problem
until only recently. We highlight below the two
main ideas that led to the new scaling algorithms
for SFM.
Approximate Optimality. In the first function-scaling
algorithm for submodular flow, to adjust
for the fact that δ⌈f/δ⌉ is not submodular, Iwata
[30] proposes using f̂ defined by
f̂(A) := δ⌈f(A)/δ⌉ + δ|A||V−A|. The extra
term δ|A||V−A| is introduced to ensure submodularity:
it is the cut function of the complete
directed graph of capacity δ, and hence itself
submodular. The submodular function f̂, relying
as it does on ⌈f(A)/δ⌉, which is not submodular,
is difficult to deal with, and the resulting
submodular flow algorithm has a high complexity
[30].
A slight modification of this idea is to use the
submodular function f_δ defined by f_δ(A) = f(A) +
δ|A||V−A|. This function has the advantage that
it is the sum of two submodular functions,
which makes it easier to work with, as detailed
below. This function was first used by Iwata,
McCormick, and Shigeno [32] in a faster
function-scaling algorithm for submodular flow.
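That f_δ is submodular whenever f is can be verified exhaustively on a small ground set; the particular f below (a concave function of |A|) is an assumption for illustration.

```python
from itertools import combinations

V = frozenset(range(4))
subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def f(A):
    # an assumed submodular function: a concave function of |A|
    return 3 * min(len(A), 2) - len(A)

def is_submodular(g):
    return all(g(A) + g(B) >= g(A | B) + g(A & B) for A in subsets for B in subsets)

delta = 0.5
def f_delta(A):
    # relaxation: f plus delta times the cut function of the complete digraph
    return f(A) + delta * len(A) * len(V - A)

assert is_submodular(f)
assert is_submodular(f_delta)   # a sum of two submodular functions
```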
The relaxation f_δ has a natural interpretation
in the setting of network flows. As a counterpart
to Goldberg and Tarjan's approximate optimality
conditions for minimum cost flows [25],
Ervolina and McCormick [11] introduce a dual
notion of approximate optimality for minimum
cost flows. They define a relaxation that relaxes
the capacity of each flow arc by the scaling
parameter δ. If one also includes 0-capacity arcs,
this is tantamount to adding a complete directed
graph on V of capacity δ. For submodular networks,
the "graph" is the set of possible exchange
arcs, which is really the complete directed graph
on V. Thus, using f_δ, interpreted as relaxing all
exchange capacities by δ, is like solving the original
problem with the added complete directed
graph of capacity δ. This leads to an approximate
optimality condition for submodular function
minimization. Call this added graph H(δ), and the
arcs in this added graph relaxation arcs.
The algorithms that use this relaxed submodular
function maintain a base z ∈ B(f_δ) and seek
to maximize z⁻(V) instead of x⁻(V) during a
δ-scaling phase. The base z is represented as the
sum of a base x ∈ B(f) and a vector ∂ψ, the boundary
of a flow ψ in H(δ): on a subset A, ∂ψ(A) is the
flow on arcs leaving A minus the flow on arcs
entering A. This representation of a base in B(f_δ)
explicitly uses the fact that f_δ is the sum of two
submodular functions, f and the directed cut
function of H(δ). A bound on z⁻(V) yields an
approximate bound on x⁻(V): x⁻(V) ≥ z⁻(V) − δn²/2.

Fat Paths for SFM. How should H(δ) be used
to obtain a polynomial time algorithm? With
H(δ), it is now possible to augment on paths
consisting of exchange arcs and relaxation arcs.
However, once all paths of relaxation arcs are
saturated, the same problem Cunningham faced
still remains: there may be no fat path among
the easily accessible exchange arcs.
The answer is suggested in a third function-scaling
algorithm for submodular flow by
Fleischer, Iwata, and McCormick [14]. This is
an augmenting path algorithm that augments
only along paths of relaxation arcs (and original
flow arcs, for the submodular flow problem). It
avoids exchange arcs on an augmenting path by
trading exchange capacity on an exchange arc for
flow capacity on a parallel relaxation arc.
Whenever an exchange arc (y_i, s, t) is encountered
in the search for an augmenting path from v
with z(v) ≥ δ to w with z(w) ≤ −δ, an exchange
of value α ≤ δ is performed for (y_i, s, t), and the
flow on the relaxation arc (s, t) is reduced by α.
This ensures that z = x + ∂ψ remains
unchanged. The flow reduction is itself an exchange
operation for the triple (∂ψ, s, t), performed on
the boundary ∂ψ rather than on x. Thus this
trading of exchange capacity for flow capacity is
called a double-exchange. It can

also be viewed as sending "flow" around a cycle
consisting of an exchange arc and a residual
relaxation arc. A double-exchange removes an
exchange arc whose behavior is hard to predict
(in terms of being able to compute its capacity
in the future), and replaces it with a parallel
relaxation arc with a known, fixed capacity.
The idea for the double-exchange routine
designed in [14] has roots in a distinctly different
subroutine used in the submodular flow
algorithm of Iwata, McCormick, and Shigeno [32].
In both [14] and the SFM algorithm [31], a
double-exchange is performed for individual
arcs, and for α = min{δ, α(y_i, s, t)}, where δ is the
scaling parameter. For SFM, an augmenting
path of relaxation arcs with capacity δ is found
after at most n double-exchanges.
A δ-scaling phase ends when there are no augmenting
paths of capacity δ. At this point, the
set A of elements that can reach {v | z(v) ≤ −δ}
along paths of capacity at least δ satisfies x⁻(V)
≥ f(A) − n²δ [31].⁴ Thus, at the end of the δ =
1/n² scaling phase, the algorithm finds a set A
and a vector x with x⁻(V) > f(A) − 1. If f is integer,
relation (3) implies that A is a minimizer.
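The optimality certificate here rests on the weak duality behind relation (3): for any base x ∈ B(f) and any set A ⊆ V, x⁻(V) ≤ x(A) ≤ f(A), where x⁻(V) := Σ_v min(x(v), 0). This can be checked by brute force on a small assumed integer submodular function:

```python
from itertools import combinations, permutations

V = list(range(4))
subsets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def f(A):
    # assumed integer submodular function: concave in |A| minus a modular part
    return [0, 3, 4, 4, 3][len(A)] - 2 * (0 in A) - 2 * (1 in A)

def greedy_base(order):
    """Edmonds' greedy algorithm gives an extreme base of B(f)."""
    x, prefix = {}, set()
    for v in order:
        x[v] = f(prefix | {v}) - f(prefix)
        prefix.add(v)
    return x

min_f = min(f(A) for A in subsets)   # here the minimizer is V itself, value -1

for order in permutations(V):
    x = greedy_base(order)
    x_minus = sum(min(val, 0) for val in x.values())
    assert x_minus <= min_f          # weak duality: x^-(V) <= min_A f(A)
```

For an integer-valued f, a base with x⁻(V) > f(A) − 1 therefore certifies that A is a minimizer.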
The above ideas lead to a straightforward
scaling algorithm for submodular function minimization
that has a weakly polynomial run time,
in that the run time depends on log(max_{A⊆V}
|f(A)|). It is possible to turn this into a strongly
polynomial algorithm by replacing this log term
with an n log n term. The main idea is to show
that it is possible, after log n scaling phases, to do
one of the following two things:
i. Identify a new element that is contained in
every minimizer of f.
ii. Identify a new pair of elements (v, w) such
that w is contained in every minimizer containing v.
The strongly polynomial time algorithm
maintains pairs satisfying (ii) as directed arcs in a
directed acyclic graph. Whenever a cycle forms
in the graph, the elements in the cycle may be
contracted into a single element, since either
they are all contained in a minimizer, or none of
them are. Since this directed graph can have at
most n² new arcs, after at most n² log n phases, a
minimizer of f is found. The basic concept of
fixing a new arc after log n scaling phases was
first used by Tardos [43] in the design of the first
strongly polynomial algorithm for minimum
cost flow.

⁴In [31], the graph used has the direction of each
arc reversed. This choice of direction is
arbitrary but affects some details in the description of
the algorithm. For example, with this alternate choice,
augmentations are from w with z(w) ≤ −δ to v with
z(v) ≥ δ, and the corresponding statement about the
set A applies in their setting to the set of elements
reachable from {v | z(v) ≥ δ}.

3.2. Schrijver's Approach
Schrijver [41] takes a completely different
approach to confronting the difficulty of computing
general exchange capacities. Instead of
maneuvering with only a subset of the exchange
arcs, Schrijver considers the full set of exchange
arcs for each y_i, i ∈ I. Given a pair (s, t) with
positive exchange capacity for x = Σ_{i∈I} λ_i y_i,
Schrijver shows how it is possible to compute a
lower bound β ≤ α(x, s, t) so that after performing
the exchange x' = x + β(χ_t − χ_s), the pair (s, t)
does not have positive exchange capacity for any
extreme base y_i, i ∈ I, in the convex representation
of x'. In other words, (s, t) is no longer in
the union of exchange arcs for y_i, i ∈ I. Applied
to a pair (s, t) for which there originally was such
an arc, this has the effect of increasing the distance
between s and t in the graph of the union
of exchange arcs of y_i, i ∈ I. Schrijver shows that
by applying this to specially chosen pairs (s, t),
the distance of vertices from the set S⁺ never
decreases, and after a polynomial number of iterations,
the distance label of some vertex increases.
A Useful Subroutine. The key to Schrijver's
algorithm is a subroutine that, after at most n
calls, effectively removes an arc from the union
of exchange arcs of the extreme bases. To describe
this subroutine, we will need a little notation.
Let the relation s ≺ t indicate that α(y, s, t) > 0.
(Note that ≺ is transitive, since if α(y, s, t) = 0,
then there is a set A satisfying t ∈ A ⊆ V\{s} and
y(A) = f(A). For any w ∈ V, either w ∈ A, in
which case α(y, s, w) = 0, or w ∉ A, in which case
α(y, w, t) = 0.) It is possible to compute the relation
≺_i for an extreme base y_i, i ∈ I, e.g. [2].
Define [s,t]_y := {v | s ≺ v ≺ t}.
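The relation ≺ and the intervals [s,t]_y can be computed by brute force for a small base, using the convention that an exchange on (s, t) moves base value from s to t, so that α(y, s, t) is the minimum of f(A) − y(A) over sets A containing t but not s. The function and base are illustrative assumptions; the check below confirms transitivity numerically.

```python
from itertools import combinations, permutations

V = list(range(4))

def f(A):
    # an assumed submodular function
    return [0, 3, 4, 4, 3][len(A)] - 2 * (0 in A)

def greedy_base(order):
    y, prefix = {}, set()
    for v in order:
        y[v] = f(prefix | {v}) - f(prefix)
        prefix.add(v)
    return y

def alpha(y, s, t):
    """Exchange capacity of (s, t): min of f(A) - y(A) over A with t in A, s not in A."""
    others = [v for v in V if v not in (s, t)]
    return min(
        f(set(extra) | {t}) - sum(y[v] for v in set(extra) | {t})
        for r in range(len(others) + 1)
        for extra in combinations(others, r)
    )

y = greedy_base([0, 1, 2, 3])
prec = {(s, t): alpha(y, s, t) > 1e-9 for s in V for t in V if s != t}
interval = {
    (s, t): {v for v in V if v not in (s, t) and prec[(s, v)] and prec[(v, t)]}
    for s in V for t in V if s != t
}

# transitivity of the relation: s < v and v < t imply s < t
for s, v, t in permutations(V, 3):
    if prec[(s, v)] and prec[(v, t)]:
        assert prec[(s, t)]
```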
Given a pair (s, t) with positive exchange
capacity for some extreme base in the convex
representation of x, let y be the extreme base in
this convex representation with maximum |[s, t]_y|,
and let λ be its multiplier in the representation.
Schrijver devises a subroutine that computes a
lower bound β ≤ α(y, s, t), and extreme bases z_j,
j ∈ J, with multipliers γ_j such that
1. y + β(χ_t − χ_s) = Σ_{j∈J} γ_j z_j,
2. Σ_{j∈J} γ_j = 1,
3. |[s,t]_{z_j}| < |[s,t]_y| for all j ∈ J.
Let ỹ be the extreme base obtained by performing
the full exchange operation: ỹ = y +
α(y, s, t)(χ_t − χ_s). Let x' = x + λβ(χ_t − χ_s) be the new
base obtained by performing the lower bound
exchange. The first two properties allow us to
replace λy in the convex representation
of x' with λ Σ_{j∈J} γ_j z_j. The third property
ensures that this substitution makes some
progress: after one iteration either max_i |[s, t]_{y_i}|
decreases, or the number of bases in I achieving
this maximum decreases. Call this subroutine
Reduce-Interval(s, t).
If Gaussian elimination is used after each call
to Reduce-Interval to reduce the number of bases
in the convex representation of x', then |I|
remains at most n. Since there are at most n possible
values of |[s, t]_y|, this implies that Reduce-Interval(s, t)
is called at most n times before
α(y_i, s, t) = 0 for all i ∈ I.
Short Paths for SFM. Schrijver finds a lexicographically
shortest augmenting path from S⁺ to
S⁻ in the union of exchange capacity arcs for y_i,
i ∈ I, and applies Reduce-Interval to the last arc
(s, t) in this path until either x(t) = 0 or α(y_i, s, t)
= 0 for all i ∈ I. He shows that under such applications,
the distances of vertices from S⁺ never decrease,
and that after a polynomial number of steps, the
distance of some vertex increases. This algorithmic
framework bears resemblance to the layered
network approach for submodular flows
described by Tardos, Tovey, and Trick [44],
which in turn combines ideas from earlier augmenting
path algorithms for feasible submodular
flow [40, 34, 16] with the maximum flow
framework of Dinic [7].
Instead of detailing the particulars of this
approach, we describe a modification that is
both faster and cleaner. To avoid computing
augmenting paths in each iteration, the push-relabel
framework, introduced for maximum
flows by Goldberg and Tarjan [24], maintains
distance labels in a lazy manner and, like
Schrijver's algorithm, performs operations on
only one arc at a time. Goldberg and Tarjan
show that this improves on Dinic's maximum
flow algorithm by a factor of n. This was adapted
to the feasible submodular flow problem by
Fujishige and Zhang [22] to obtain a similar
improvement in run time. It can be adapted to
SFM as well [12], and thus the overhead of computing
shortest augmenting paths is avoided. We
outline the main ideas of this push-relabel algorithm
for SFM.
For maximum flows, the push-relabel algorithm
maintains a flow φ that satisfies the capacity
constraints, and a labeling d of the vertices that
corresponds to a lower bound on the distance of
a vertex from the sink. A valid label d satisfies
d(source) = n, d(sink) = 0, and d(s) ≤ d(t) + 1 for
any arc (s, t) with φ(s, t) less than the capacity of
(s, t). (In this case, arc (s, t) has residual capacity
equal to the capacity of (s, t) minus φ(s, t).) The
algorithm relies on two operations: push and
relabel. Call the net flow entering s the excess at s,
and denote it by e(s). A push operation applies
to (s, t) if e(s) > 0, arc (s, t) has residual capacity,
and d(s) = d(t) + 1. Push applied to (s, t)
increases the flow on (s, t) by the minimum of e(s)
and the residual capacity of (s, t). A relabel operation
applies to s if e(s) > 0 and no push operation
applies at s. Relabel applied to s increases
d(s) by one. Goldberg and Tarjan [24] show that
the push and relabel operations maintain valid
labels. In addition, if pushes are always applied at
the vertex with excess of highest label, then
they show that the algorithm terminates after at
most n³ pushes and n² relabels.
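These rules translate directly into code. The sketch below is a generic, unoptimized push-relabel maximum flow routine on a small assumed graph; it selects any active vertex rather than the highest-labeled one, so it illustrates the push and relabel rules rather than the sharpest run time bound.

```python
from collections import defaultdict

def max_flow_push_relabel(n, edges, s, t):
    """Generic push-relabel. edges: list of (u, v, capacity) on nodes 0..n-1."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                      # residual arcs go both ways
    flow = defaultdict(int)
    d = {v: 0 for v in range(n)}
    d[s] = n                               # valid labeling: d(source) = n, d(sink) = 0
    excess = defaultdict(int)
    for v in list(adj[s]):                 # saturate all arcs leaving the source
        delta = cap[(s, v)]
        if delta > 0:
            flow[(s, v)] += delta
            flow[(v, s)] -= delta
            excess[v] += delta
            excess[s] -= delta
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v in adj[u]:
                residual = cap[(u, v)] - flow[(u, v)]
                if residual > 0 and d[u] == d[v] + 1:   # admissible arc: push
                    delta = min(excess[u], residual)
                    flow[(u, v)] += delta
                    flow[(v, u)] -= delta
                    excess[u] -= delta
                    excess[v] += delta
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                d[u] += 1                   # relabel: no admissible residual arc left

    return excess[t]

# a small assumed example; the min cut {0} has capacity 3 + 2 = 5
value = max_flow_push_relabel(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)
```

At termination all excess not at the sink has been returned to the source, so the excess at the sink is the maximum flow value.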
For submodular function minimization, a
valid label d satisfies d(t) = 0 if x(t) < 0, and d(s)
≤ d(t) + 1 if s ≺_i t for some i ∈ I. The operation
Push(s, t) repeatedly calls Reduce-Interval(s, t)
until either x(s) = 0 or α(y_i, s, t) = 0 for all i ∈ I.
Push(s, t) applies if x(s) > 0, s ≺_i t for some i ∈ I,
and d(s) = d(t) + 1. In effect, Push is applied to
the first arc of an augmenting path, instead of
the last. (Schrijver's algorithm may be modified
in this way also, yielding the same run time as
his version that works on the last arc.) A relabel
operation applies to s if x(s) > 0 and no Push operation
applies at s. Relabel(s) increases d(s) by one.
The main complication in extending the push-relabel
framework to submodular function minimization
is to show that Push(s, t) maintains
valid labels. All the other analyses in [24] then
follow easily.
The problem that arises is that after applying
an exchange operation to (s, t) to move to y' = y
+ α(χ_t − χ_s), a pair (v, w) that initially had zero
exchange capacity may now have positive capacity.
This can happen if there is a y-tight set A that
includes w and s and excludes v and t. After the
exchange, this set is no longer tight, since y'(A) =
y(A) − α < y(A) = f(A). If this was the only tight
set separating v and w, then (v, w) has positive
exchange capacity for y'. The following lemma,
which appears in Schönsleben [40] and Lawler
and Martel [34], describes under what circumstances
this can happen.

Lemma 3.1 If α(y, v, w) = 0 and α(y', v, w) > 0,
where y' = y + α(χ_t − χ_s) for y, y' ∈ B(f), then
α(y, s, w) > 0 and α(y, v, t) > 0.

One consequence of this lemma is that Push(s, t)
maintains valid labels: if (v, w) satisfies v ⊀_i w for
all i ∈ I before Push(s, t) and v ≺_j w for some
j ∈ I after Push(s, t), then by the validity of
labels before Push and Lemma 3.1 we have d(v)
≤ d(t) + 1 = d(s) ≤ d(w) + 1.
The push-relabel algorithm terminates with
the optimal x* when there are no augmenting
paths from S⁺ to S⁻. By the validity of the distance
label d, this occurs the first time there is a
distance value p such that no vertex has this
label and all elements in S⁺ have higher labels.
The set of elements with label lower than p is a
minimizer of f. By applying push or relabel
always at the highest labeled vertex with excess, the
number of pushes and relabels that can occur
before this condition is met is bounded in the
same way as in the push-relabel algorithm for
maximum flow.

4. Some Consequences of
Combinatorial, Polynomial Time
Algorithms for SFM

We now describe some examples that demonstrate
that these new combinatorial algorithms
lend themselves more easily to modification or
generalization to other problems. Both sets of
examples take advantage of the existing algorithmic
research literature in network flows and submodular
flows.

4.1. Advantages of a Push-Relabel
Algorithm for SFM

The first extensions we describe use the push-relabel
algorithm for submodular function minimization.
This algorithm can be extended to
efficiently solve a parametric submodular function
minimization problem. This, in turn, leads
to a combinatorial algorithm to compute a lexicographically
optimal base [13].

The push-relabel algorithm for maximum
flow [24] has been used in the design of efficient
algorithms for many more general problems.
Two prominent examples are the parametric
flow problem and the overall minimum cut
problem in a directed graph. In the first
instance, Gallo, Grigoriadis, and Tarjan extend
push-relabel to solve the parametric network
flow problem [23]. This is defined on a network
with arc capacities that are functions of a parameter
θ: increasing in θ on arcs leaving the source,
decreasing in θ on arcs entering the sink, and constant
elsewhere. The parametric network flow problem
is to compute a maximum flow for each of an
increasing sequence of parameters θ₁ < θ₂ < ···
< θ_k in the same asymptotic time as one push-relabel
maximum flow computation. This has
applications in scheduling [36]. In the second
instance, Hao and Orlin [28] extend the push-relabel
algorithm to compute an overall minimum
cut in a directed graph in the same asymptotic
time as one run of the push-relabel algorithm.
Both of these ideas can be generalized to submodular
function minimization, with some care.
In the first instance, let f' ← f denote that for all
sets A, B with A ⊆ B,

f(B) − f'(B) ≤ f(A) − f'(A).

The relation f' ← f is called a strong map. Both
the parametric flow problem considered in [23]
and the problem of finding minimizers for a
sequence of parameterized submodular functions
f + θw, for a nonnegative vector w ∈ R^V and an
increasing sequence of parameter values θ, are
examples of strong map sequences. Iwata,
Murota, and Shigeno [33] give a generalization
of the algorithm in [23] to polymatroid intersection
for a strong map sequence. Extending the
new push-relabel algorithm for submodular
function minimization, it is possible to find the
minimizers of a sequence of submodular functions
f₁ ← f₂ ← ··· ← f_k in the same asymptotic
time as the push-relabel algorithm for SFM [13].
In turn, an efficient algorithm for strong map
submodular function minimization implies a
more efficient algorithm to find a lexicographically
optimal base, a generalization of Megiddo's lexicographically
optimal flow [37] introduced by
Fujishige [19].
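For the parametric family f + θw, the strong map property can be verified directly on a small instance; the function and weights below are assumed for illustration.

```python
from itertools import combinations

V = frozenset(range(3))
subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def f(A):
    # an assumed submodular function, concave in |A|
    return [0, 2, 3, 3][len(A)]

w = {0: 1.0, 1: 0.5, 2: 2.0}   # nonnegative weight vector

def f_theta(theta):
    return lambda A: f(A) + theta * sum(w[v] for v in A)

# For theta1 < theta2, the pair g2 <- g1 is a strong map:
# g1(B) - g2(B) <= g1(A) - g2(A) whenever A is a subset of B.
g1, g2 = f_theta(0.5), f_theta(1.5)
for A in subsets:
    for B in subsets:
        if A <= B:
            assert g1(B) - g2(B) <= g1(A) - g2(A) + 1e-12
```

The inequality holds because g1 − g2 equals −(θ₂ − θ₁)·w(·), which only decreases as the set grows when w is nonnegative.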

4.2. Improved Algorithms for
Submodular Flow

In Section 3.1, we saw that ideas developed for
submodular flow led to recent progress in algorithms
for submodular function minimization.
In this section we explain how these new algorithms
for SFM lead to more direct algorithms
for solving submodular flow. In particular, we
briefly outline the first algorithms for submodular
flow that do not rely on an oracle for computing
exchange capacities. We start by highlighting
similarities between network flows,
SFM, and submodular flow.
Algorithmic Relation to Network Flows and
SFM. Generic augmenting path algorithms for
minimum cost flow with supply and demand
vector b start with a flow φ and augment along
paths of flow arcs with residual capacity from
{v | b(v) > ∂φ(v)} to {v | b(v) < ∂φ(v)}. The underlying
idea behind the augmenting path algorithms
we have described for submodular function minimization
is to start with a base x ∈ B(f) and augment
along paths of exchange capacity arcs from
{v | x(v) > 0} to {v | x(v) < 0}. A generic submodular
flow algorithm may start with a flow φ and a
base x ∈ B(f). By augmenting along paths of
flow and exchange arcs from {v | ∂φ(v) < x(v)} to
{v | ∂φ(v) > x(v)}, it is desired to obtain a base x* ∈
B(f) and flow φ* satisfying ∂φ* = x*.
Submodular Flow without an Exchange Oracle.
The hard part of submodular flow is dealing
with the exchange capacities efficiently. This is
the problem solved by the combinatorial, polynomial-time
algorithms for submodular function
minimization. Thus, by adding to the arc set in
these SFM algorithms an additional set of arcs
that correspond to the flow arcs of a submodular
flow problem, it is possible to modify the SFM
algorithms to solve the feasible submodular flow
problem. Capacities of flow arcs are easy to compute,
and thus they do not add extra complexity
to these algorithms. This is the idea behind the
feasible submodular flow algorithm detailed in
[12]. It modifies the above-mentioned push-relabel
algorithm for SFM by adding flow arcs to
the set of exchange arcs for y_i, i ∈ I, and generalizes
the Push operation to allow sending flow on
these flow arcs as well, thus solving the feasible
submodular flow problem within the same
asymptotic time bound as the push-relabel algorithm
for SFM.




Given this result, it is perhaps natural to ask if
feasible submodular flow is equivalent to submodular
function minimization. Checking whether a submodular
flow is feasible is a submodular function
minimization problem. But it is not clear
what the answer is if one asks to find a feasible
flow. Even for the special case of minimum cuts,
it is still not known if the exact (s, t) minimum
cut problem is any easier than maximum flow.
Minimum Cost Submodular Flow. While there
seems to be a natural relation between feasible
submodular flow and submodular function minimization,
the submodular flow problem with
costs would appear to be a more difficult problem.
However, it is possible to extend the combinatorial
algorithm for SFM in [31] to solve the
minimum cost submodular flow problem [12].
One reason for the success of this extension is
that the SFM algorithm in [31] is
based on ideas arising in algorithms for submodular
flow, and in fact is highly similar to the submodular
flow algorithm in [14]. Unlike the case
of extending SFM algorithms to solve feasible
submodular flow, it is not sufficient to simply
add flow arcs to the set of arcs used in [31]. The
problem is that the algorithm described in [31]
is not particular in its choice of augmenting
path. In augmenting path algorithms for standard
minimum cost flows, it is necessary to
select the least cost augmenting path. This is also
necessary in submodular flows. Thus it must be
shown that the augmenting path subroutine in
[31] can be extended to find a least cost augmenting
path. This is proven in [12]. The resulting
algorithm finds an optimal dual solution, i.e.
optimal node prices, for the submodular flow
problem. The optimal flow can then be found
with n − 1 additional submodular function minimizations.
5. Some Additional Questions

1. An example of a submodular function minimization
problem that we do not know how to
solve in polynomial time without recourse to a
general submodular function minimization algorithm
is checking the feasibility of the transshipment
problem over time [29]. The transshipment
problem over time is a flow problem with multiple
sources (vertices with positive supply) and multiple
sinks (vertices with negative supply, also
called positive demand). Each arc (u, v) in G has
a capacity and a nonnegative transit time τ(u, v)
that determines how long it takes for flow leaving
u to arrive at v. Thus, flow leaving u at time
t arrives at v at time t + τ(u, v). Given a time bound
T, the feasibility problem is to determine if there
exists a flow that completes by time T, respects
capacity constraints, and meets all demand. This
can be solved by submodular function minimization.
Is there a more efficient approach to
this problem that uses a more specialized algorithm
than general SFM? More broadly, do the
algorithms in this paper yield new and interesting
algorithms for special cases of SFM?
2. Despite the analogies drawn in this article,
submodular flow algorithms are not simple
extensions of minimum cost flow algorithms.
While all of the existing submodular flow algorithms
build on ideas used previously in network
flows, it is often nontrivial to extend these network
flow algorithms to submodular flows. For
example, Goldberg and Tarjan devised a cost
scaling algorithm for minimum cost flow using
the push-relabel framework [25], but there is
still no cost scaling algorithm using push-relabel
techniques for minimum cost submodular flow.
Hence we do not know if the push-relabel algorithm
for SFM can be extended to solve general
submodular flow problems.
3. Submodular flow is a general problem class
for which all extreme solutions are integral. It is
not the most general such class. Edmonds and
Giles [9] describe a framework that includes a
more general class of problems with integral
polytopes: TDI systems. Can these algorithms be
extended to yield combinatorial algorithms for
TDI systems, or other general problem classes?

4. There are other linear programs involving
submodular functions for which the resulting
polytopes are not integral. An interesting class
models network design problems. A generic network
design problem asks to add a minimum
cost set of edges to an existing graph so that certain
connectivity properties are met. Special
cases include the Steiner tree problem and the
minimum cost k-connected subgraph problem.
If f is a set function describing the connectivity
requirements between each subset and its complement,
then the network design problem for a graph
G = (V, E) and cost vector c ∈ R^E can be modeled
as an integer program with an exponential
number of constraints and a variable vector x ∈ {0,1}^E
that indicates whether or not each edge is included in
the minimum cost solution: min c·x subject to
x(δ(A)) ≥ f(A) for all A ⊆ V, and x ∈ {0,1}^E.
Frequently f is supermodular (meaning that −f is
submodular), or weakly supermodular (i.e., f satisfies
f(A) + f(B) ≤ max{f(A ∪ B) + f(A ∩ B),
f(A\B) + f(B\A)}). Some approximation algorithms
for these often NP-hard problems
round the optimal solution to the linear programming
relaxation of this formulation. Is there a
combinatorial, polynomial time algorithm to
solve this simple class of linear programs, for
even special cases of f?
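As a concrete check, the 0/1 requirement function of a Steiner-type problem (f(A) = 1 exactly when A separates an assumed terminal set T) satisfies the weak supermodularity inequality, which can be confirmed exhaustively:

```python
from itertools import combinations

V = frozenset(range(4))
T = frozenset({0, 1, 2})   # assumed terminal set
subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def f(A):
    """Connectivity requirement: 1 if A separates the terminals, else 0."""
    return 1 if (A & T) and (T - A) else 0

def weakly_supermodular(g):
    return all(
        g(A) + g(B) <= max(g(A | B) + g(A & B), g(A - B) + g(B - A))
        for A in subsets for B in subsets
    )

assert weakly_supermodular(f)
```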


Acknowledgments

I am grateful to Satoru Iwata, Tom McCormick,
and Eva Tardos for sharing their insights on the
subject of submodular functions and flows. I am
indebted to them and also to Garud Iyengar, Jay
Sethuraman, and Kevin Wayne for taking the
time to read and provide useful criticism on a
draft of this article.



References

[1] Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, 2000.
[2] R.E. Bixby, W.H. Cunningham, and D.M. Topkis. Partial order of a polymatroid extreme point. Math. Oper. Res., 10:367-378, 1985.
[3] W.H. Cunningham. Testing membership in matroid polyhedra. J. Combinatorial Theory B, 36:161-188, 1984.
[4] W.H. Cunningham. Minimum cuts, modular functions, and matroid polyhedra. Networks, 1985.
[5] W.H. Cunningham. On submodular function minimization. Combinatorica, 5:185-192, 1985.
[6] W.H. Cunningham and A. Frank. A primal-dual algorithm for submodular flows. Math. Oper. Res., 10:251-262, 1985.
[7] E.A. Dinic. Algorithm for solution of a problem of maximum flow in a network with power estimation. Soviet Math. Dokl., 1970.
[8] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In R. Guy, H. Hanani, N. Sauer, and J. Schönheim, editors, Combinatorial Structures and Their Applications, pages 69-87. Gordon and Breach, 1970.
[9] J. Edmonds and R. Giles. A min-max relation for submodular functions on graphs. Ann. Discrete Math., 1:185-204, 1977.
[10] J. Edmonds and R.M. Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM, 19:248-264, 1972.
[11] T.R. Ervolina and S.T. McCormick. Two strongly polynomial cut cancelling algorithms for minimum cost network flow. Disc. Appl. Math., 46:133-165, 1993.
[12] L. Fleischer and S. Iwata. Improved algorithms for submodular function minimization and submodular flow. In ACM [1], pages 107-116.


[13] L. Fleischer and S. Iwata. A push-relabel framework for submodular function minimization and applications. Working paper, 2000.
[14] L. Fleischer, S. Iwata, and S.T. McCormick. A faster capacity scaling algorithm for submodular flow. Technical Report 9947, C.O.R.E. Discussion Paper, Louvain-la-Neuve, Belgium, 1999.
[15] L.R. Ford and D.R. Fulkerson. Flows in Networks. Princeton University Press, Princeton, NJ, 1962.
[16] A. Frank. Finding feasible vectors of Edmonds-Giles polyhedra. J. Combin. Theory B, 36:221-239, 1984.
[17] A. Frank and E. Tardos. Generalized polymatroids and submodular flows. Math. Programming, 42:489-563, 1988.
[18] A. Frank and E. Tardos. An application of submodular flows. Linear Algebra and its Applications, 114/115:329-348, 1989.
[19] S. Fujishige. Lexicographically optimal base of a polymatroid with respect to a weight vector. Math. Oper. Res., 5:186-196, 1980.
[20] S. Fujishige. Submodular Functions and Optimization. North-Holland, 1991.
[21] S. Fujishige, H. Röck, and U. Zimmermann. A strongly polynomial algorithm for minimum cost submodular flow problems. Math. Oper. Res., 14:60-69, 1989.
[22] S. Fujishige and X. Zhang. New algorithms for the intersection problem of submodular systems. Japan J. Indust. Appl. Math., 9:369-382, 1992.
[23] G. Gallo, M.D. Grigoriadis, and R.E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM Journal on Computing, 18(1):30-55, 1989.
[24] A.V. Goldberg and R.E. Tarjan. A new approach to the maximum flow problem. Journal of the ACM, 35:921-940, 1988.


[25] A.V. Goldberg and R.E. Tarjan. Solving minimum cost flow problems by successive approximation. Mathematics of Operations Research, 15:430-466, 1990.
[26] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1:169-197, 1981.
[27] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, 1988.
[28] J. Hao and J.B. Orlin. A faster algorithm for finding the minimum cut in a directed graph. J. Algorithms, 17(3):424-446, 1994.
[29] B. Hoppe and E. Tardos. The quickest transshipment problem. Math. Oper. Res., 25(1), February 2000. Extended abstract appeared in Proceedings of SODA 1995.
[30] S. Iwata. A capacity scaling algorithm for convex cost submodular flow. Math. Programming, 76:299-308, 1997.
[31] S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions. In ACM [1], pages 97-106.
[32] S. Iwata, S.T. McCormick, and M. Shigeno. A strongly polynomial cut canceling algorithm for the submodular flow problem. In 7th International Integer Programming and Combinatorial Optimization Conference, pages 259-272, 1999.
[33] S. Iwata, K. Murota, and M. Shigeno. A fast parametric submodular intersection algorithm for strong map sequences. Math. Oper. Res., 22:803-813, 1997.
[34] E.L. Lawler and C.U. Martel. Computing maximal polymatroidal network flows. Math. Oper. Res., 7:334-347, 1982.

[35] L. Lovász. Submodular functions and convexity. In A. Bachem, M. Grötschel, and B. Korte, editors, Mathematical Programming - The State of the Art, pages 235-257. Springer-Verlag, 1983.
[36] S.T. McCormick. Fast algorithms for parametric scheduling come from extensions to parametric maximum flow. Operations Research, 47(5):744-756, September-October 1999. Extended abstract appeared in Proceedings of STOC 96.
[37] N. Megiddo. Optimal flows in networks with multiple sources and sinks. Mathematical Programming, 7:97-107, 1974.
[38] H. Nagamochi and T. Ibaraki. Computing edge-connectivity of multigraphs and capacitated graphs. SIAM J. Disc. Math., 5:54-66, 1992.
[39] M. Queyranne. Minimizing symmetric submodular functions. Math. Programming, 82:3-12, 1998. Extended abstract appeared in Proceedings of SODA 1995.
[40] P. Schönsleben. Ganzzahlige Polymatroid-Algorithmen. PhD thesis, ETH Zürich, 1980.
[41] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Preprint.
[42] L.S. Shapley. Cores of convex games. International Journal of Game Theory, 1:11-26, 1971.
[43] E. Tardos. A strongly polynomial minimum cost circulation algorithm. Combinatorica, 5:247-255, 1985.
[44] E. Tardos, C.A. Tovey, and M.A. Trick. Layered augmenting path algorithms. Math. Oper. Res., 11:362-370, 1986.
[45] D.M. Topkis. Supermodularity and Complementarity. Frontiers of Operations Research. Princeton University Press, Princeton, NJ, 1998.

The First Sino-Japan Optimization Meeting
October 26-28, 2000, Hong Kong

INFORMS Fall 2000
November 3-7, 2000, San Antonio, Texas, USA
URL: http://ie.tamu.edu/informs2000/

IPCO VIII
June 13-15, 2001, Utrecht, The Netherlands
and DONET Summer School on Integer and Combinatorial Optimization
June 11-12, 2001, Utrecht, The Netherlands

IPCO VIII
Utrecht, The Netherlands
June 13-15, 2001
preceded by the DONET Summer School on Integer and Combinatorial Optimization
Utrecht, The Netherlands
June 11-12, 2001

The eighth Integer Programming and Combinatorial Optimization (IPCO) conference will be held in Utrecht, The Netherlands, from June 13 to 15, 2001. The IPCO conference will be immediately preceded by a summer school on Integer and Combinatorial Optimization. The summer school will be given by Daniel Bienstock and Éva Tardos, who will each present four one-hour lectures. The school is organized under the auspices of DONET, a European Network for Discrete Optimization subsidized by the European Community. We hope that the combination of the two activities will make it easier for young researchers to participate in the joint event, even if an IPCO submission is not made.
For more information, please see the home page of the IPCO conference at or send an e-mail to .
To be added to our mailing list, please fill out the pre-registration form at the web site or send your name and e-mail address to .
Karen Aardal, The IPCO VIII Organization Committee


First Announcement
The First Sino-Japan Optimization Meeting
Hong Kong, October 26-28, 2000

Objectives The meeting aims to provide a forum for researchers working in the area of optimization from Japan, Mainland China, Hong Kong, Taiwan, Singapore, and other countries and regions to gather to report on and exchange their latest work on optimization.
Topics Include Linear and Nonlinear Optimization, Continuous and Discrete Optimization, Deterministic and Stochastic Optimization, Smooth and Nonsmooth Optimization, Single and Multi-Objective Optimization, Integer and Combinatorial Optimization, Convex and Nonconvex Optimization.
Organization and Endorsement
The Sino-Japan Optimization Meeting (SJOM) is endorsed by the Mathematical Programming Society (MPS), the Research Association of Mathematical Programming (RAMP), Japan, and the Chinese Mathematical Programming Society. The First Sino-Japan Optimization Meeting (SJOM 2000) is organized by The City University of Hong Kong and The Hong Kong Polytechnic University. The organization committee of SJOM 2000 is in charge of the organization work of SJOM 2000 and will report to the steering committee of SJOM. The steering committee will assist the organization of SJOM 2000, and will select the organizers and locations of future SJOM meetings. It is expected that the steering committee will formally meet during SJOM 2000. The term of the co-chairs of the steering committee will end at the second SJOM meeting.
Organization Committee of SJOM 2000
Liqun Qi (The Hong Kong Polytechnic University), Co-Chair; Jianzhong Zhang (City University of Hong Kong), Co-Chair; Chuangyin Dang (City University of Hong Kong); and Xiaoqi Yang (The Hong Kong Polytechnic University), Treasurer.
Steering Committee of SJOM
Xiaoqiang Cai (The Chinese University of Hong
Kong); Xiaojun Chen (Shimane University);
Satoru Fujishige (Osaka University); Masao
Fukushima (Kyoto University), Co-Chair; Jiye
Han (Chinese Academy of Sciences); Toshihide
Ibaraki (Kyoto University); Masakazu Kojima
(Tokyo Institute of Technology); Hiroshi Konno
(Tokyo Institute of Technology); Shinji Mizuno
(Tokyo Institute of Technology); Kazuo Murota
(Kyoto University); Liqun Qi (The Hong Kong
Polytechnic University); Jie Sun (The National
University of Singapore); Kunio Tanabe

(Institute of Statistical Mathematics); Tetsuzo
Tanino (Osaka University); Kok Lay Teo (The
Hong Kong Polytechnic University), Co-Chair;
Soon-yi Wu (National Cheng Kung University);
Wenci Yu (East China University of Science and
Technology); Ya-xiang Yuan (Chinese Academy
of Sciences); Jianzhong Zhang (City University
of Hong Kong); Xiangsun Zhang (Chinese
Academy of Sciences)
Guest Plenary Speakers
Rainer Burkard (Technische Universität Graz, Austria), Gianni Di Pillo (Università di Roma "La Sapienza," Italy), Carl Timothy Kelley (North Carolina State University, USA), Jean-Philippe Vial (University of Geneva, Switzerland).
Special Arrangements
Meeting proceedings or special issues of some journals, tours, and economical hotel accommodations will be indicated in the Second Announcement.
Further Information
E-Mail: Eva Yiu (maevayiu@polyu.edu.hk) or
Peggy Chan (machan@cityu.edu.hk); web site:
; or contact
Organization Committee Members (e-mail
addresses shown above)

Call for Papers

Computational Optimization and Applications
Special Issue on Stochastic Programming

Computational Optimization and Applications (COAP) announces a call for papers in the area of Stochastic Programming, with special emphasis on new computational methods and experimental results, as well as applications of stochastic programming methodology. We use the term "stochastic programming" in a broad sense to cover stochastic optimization problems in which information is revealed sequentially over time. Thus, traditional dynamic programming as well as its more recent adaptations are also considered relevant to this special issue. As suggested by the focus of COAP, we are particularly interested in computational approaches and significant applications. Hence submitted papers should be motivated by these thrusts. As one might expect, a primary requirement for publication in this issue is the novelty of the ideas expounded in the paper.
The special issue, which will be edited by Professors Suvrajeet Sen and Julia Higle, will be refereed in accordance with established procedures of COAP. Please send five copies of the paper to: Melissa Sullivan, Computational Optimization and Applications Editorial Office, Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061, USA.
Important: In your cover letter, provide your e-mail address and state that the paper is for consideration in the special issue on Stochastic Programming. Papers for the special issue should be submitted by December 15, 2000.



Sixth International Symposium on
Generalized Convexity/Monotonicity

Karlovassi, Samos, Greece
August 30 - September 3, 1999
preceded by the Summer School on Generalized Convexity/Monotonicity, August 25-28, 1999

The Symposium and its preceding Summer School were organized by the Working Group on Generalized Convexity (WGGC). About thirty students participated in the Summer School, and over one hundred researchers attended the Symposium. The Summer School, directed by J.B.G. Frenk, introduced graduate students as well as researchers from other fields to major topics of Generalized Convexity and Generalized Monotonicity. The material was covered in twelve tutorials presented by J.P. Crouzeix, J.B.G. Frenk, N. Hadjisavvas, D.T. Luc, J.E. Martínez-Legaz, P.M. Pardalos, J.P. Penot, and S. Schaible. Most "students" of the Summer School stayed on for the Symposium, while several participants of the Symposium arrived early to attend tutorials of the Summer School. The response to the first WGGC summer school preceding a WGGC symposium was overwhelmingly positive. Thanks to the efforts of M. Sniedovich, IFORS representative, the three best students of the Summer School obtained an IFORS scholarship. During the following week more than fifty lectures were presented and followed by researchers from twenty-six countries. Topics covered included various kinds of generalized convex functions and generalized monotone maps, optimality conditions, duality, fractional programming, multi-objective programming, nonsmooth analysis, variational inequalities, and equilibrium problems, as well as topics less represented at previous symposia, such as stochastic convexity and global optimization. Furthermore, as at the last symposium, several invited lectures introduced participants to neighboring fields of generalized convexity. This time set-valued optimization, multiplicative/fractional programming, global optimization, and stochastic programming were the emphasis, with tutorials by J. Jahn (Germany), H. Konno (Japan), P.M. Pardalos (USA), and A. Prékopa (USA), respectively.
The conference site, a small town on a Greek island, offered limitless opportunities for professional contacts among the participants. A rich scientific program was complemented by a social program with many highlights. The Symposium was hosted by the Department of Mathematics of the University of the Aegean, located on Samos, the birthplace of Pythagoras and Aristarchus. This truly scenic island with its interesting archeological sites gave the conference a special background. The participants left enriched and appreciative of the excellent organization by N. Hadjisavvas (Samos), who with his local and international team put together this memorable event in the history of WGGC. Further details are found on the web site of the Summer School and the Symposium at . A selection of refereed papers presented at the Symposium will appear in the Proceedings, to be published by Springer-Verlag in the series "Lecture Notes in Economics and Mathematical Systems" (eds. N. Hadjisavvas, J.E. Martínez-Legaz, and J.P. Penot). For information on the proceedings of the previous five symposia in Vancouver (1980), Canton (1986), Pisa (1988), Pécs (1992), and Marseille-Luminy (1996), see the web site of WGGC.
Siegfried Schaible, Scientific Committee of WGGC


Department of Management Science & Engineering

Applications are invited for a tenure-track faculty position in the field of optimization. The rank is open, and all branches of optimization are of interest, e.g., continuous, discrete, large-scale, etc. In particular, the department seeks a new faculty member who has an outstanding methodological foundation and strong applications interests. If appropriate, the successful candidate may choose to become involved in School of Engineering activities in the area of Computational and Mathematical Engineering. The department hopes to fill the position by September 1, 2001.
The Department of Management Science & Engineering (MS&E) was newly created by the merger of the Engineering-Economic Systems and Operations Research Department and the Industrial Engineering and Engineering Management Department. The department's mission is research and education associated with the development of the knowledge, tools, and methods required to make decisions, shape policies, configure organizational structures, design engineering systems, and solve operational problems arising in the information-intensive, technology-based economy. The department has special interest in theory and application in optimization and systems modeling, probability and stochastic systems, production operations and manufacturing, decision analysis and risk analysis, economics and finance, organizational behavior, and management and entrepreneurship. The department is also developing expertise in information science and technology and in technology, policy, and strategy. A more complete description appears on the department web site.
Applicants should send a resume (including research accomplishments, teaching experience, and publications), a transcript of (doctoral) graduate study, at least one published or unpublished research paper if available, and the names and addresses of at least three references to: Professor Richard Cottle, Search Committee Chair, Department of Management Science & Engineering, Terman Engineering Center, Stanford University, Stanford, CA 94305-4026. They should also ask referees to send recommendation letters directly to Professor Cottle. Review of applications will begin in January 2001.
Stanford University is an equal opportunity employer and welcomes nominations of women and minority group members and applications from them.

FAP Web Announcement We are happy to announce that the WWW server, FAP Web on Frequency Assignment, is now publicly available at . Our intention is to collect and supply information related to solving frequency assignment problems.
We have compiled a (certainly still incomplete) list of publications on frequency assignment methods and collected results on the CALMA, COST 259, and Philadelphia instances. We would like to keep track of the latest results obtained, as well as to supply benchmark problems for download. This cannot be done without your support.
We would appreciate your assistance with helping to disseminate the information on FAP Web, offering suggestions to improve FAP Web, and keeping us updated. All information can be submitted with forms available at FAP Web (Submit section), or by sending an e-mail to . You are also encouraged to fill in the form concerning your contact address if you'd like this address listed at the server. (This may increase the likelihood of receiving spam mail, however.)
Finally, we would like to take the opportunity to thank everybody who has (mostly indirectly) contributed so far, be it by sending (preprints of) papers, by supplying benchmark instances, or by providing assignments for some of the instances.
Andreas Eisenblaetter (eisenblaetter@zib.de) and Arie Koster (koster@zib.de)


Mind Sharpener
We invite OPTIMA readers to submit solutions to the problems to Robert Bosch (bobb@cs.oberlin.edu). The most attractive solutions will be presented in a forthcoming issue.

Lights Out
by Robert A. Bosch
Dec. 14, 1999

In Tiger Electronics' handheld electronic solitaire game Lights Out,
the player strives to turn out all 25 lights that make up a 5 x 5 grid
of cells. On each turn, the player is allowed to click on any one
cell. Clicking on a cell activates a switch that causes the states of
the cell and its neighbors to change from on to off or from off to
on. Corner cells are considered to have two neighbors, edge cells to
have three, and interior cells to have four. Figure 1 demonstrates
what happens when the player clicks on cells (1, 1) and (1, 2).

Figure 1. The 5 x 5 board before and after clicking on cell (1,1) and then on cell (1,2).

Problems Interested readers may enjoy trying to solve the following problems:
1. Formulate an integer program for finding a way to turn out all the lights in as few turns as possible. Hint 1: The order in which the cells are clicked doesn't matter. Hint 2: A cell shouldn't be clicked more than once.
2. What if each cell has a three-way bulb? (That is, clicking on a single three-way bulb changes its state from off to low, from low to medium, from medium to high, from high to off, and so on.) Which is easiest: (a) turning off all the lights when they're all on their high setting? (b) turning them off when they're all on medium? (c) turning them off when they're all on low?
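Hints 1 and 2 say that a solution is just a yes/no choice for each of the 25 cells, which is exactly what makes the integer program of Problem 1 work. As a small illustration (a sketch of ours, not part of the original column), the same observation also yields a direct "light chasing" search: once the clicks in the first row are fixed, every later row's clicks are forced by the lights still on in the row above it, so only 32 candidates need to be examined for the all-on 5 x 5 board.

```python
# Sketch (not from the column): light chasing for the all-on 5 x 5 board.
from itertools import product

N = 5

def press(board, r, c):
    """Toggle cell (r, c) and its orthogonal neighbors, as in the game."""
    for rr, cc in [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
        if 0 <= rr < N and 0 <= cc < N:
            board[rr][cc] ^= 1

def solve_all_on():
    """Return a minimum-size set of cells that turns the all-on board off."""
    best = None
    for first_row in product([0, 1], repeat=N):
        board = [[1] * N for _ in range(N)]
        clicks = []
        for c, bit in enumerate(first_row):
            if bit:
                press(board, 0, c)
                clicks.append((0, c))
        for r in range(1, N):          # each later row's clicks are forced
            for c in range(N):
                if board[r - 1][c]:
                    press(board, r, c)
                    clicks.append((r, c))
        if not any(board[N - 1]):      # valid only if the last row cleared
            if best is None or len(clicks) < len(best):
                best = clicks
    return best
```

Since every solution is determined by its restriction to the first row, the best of the 32 candidates is a global optimum; this search finds a 15-click solution, the same value the integer program of Problem 1 should reach.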
Readers who have a Java-enabled browser can play the game online by going to Martin Chlond's "Integer Programming and Recreational Mathematics" web page (www.chlond.demon.co.uk/academic/puzzles.html) and following the link to the "Five by five puzzle." (In Chlond's version, all of the lights are initially off and the goal is to turn them all on.) To the author's knowledge, Chlond was the first person to use integer programming to find an optimal solution to a Lights Out-type game. Incidentally, the rest of Chlond's web page is well worth a visit too. It is a lovely collection of recreational mathematics problems that can be solved with integer programming.

Armies of Queens, Revisited

In the June installment of OPTIMA Mind Sharpener, we presented a variant of the well-known 8-queens problem. We stated that two armies of queens (black and white) peaceably coexist on a chessboard when they are placed on the board in such a way that no two queens from opposing armies can attack each other. We then asked readers to formulate an integer program to find the maximum size of two equal-sized peaceably coexisting armies.
Frank Plastria submitted the following very nice, very simple IP formulation of the problem:

max  sum_{i=1}^{8} sum_{j=1}^{8} b_{ij}

subject to

sum_{i=1}^{8} sum_{j=1}^{8} b_{ij} = sum_{i=1}^{8} sum_{j=1}^{8} w_{ij}    (1)

b_{i1,j1} + w_{i2,j2} <= 1    for all ((i1,j1), (i2,j2)) in M    (2)

b_{ij}, w_{ij} in {0,1}    for all 1 <= i, j <= 8.

Plastria's formulation has two binary variables for each square (i,j) of the board. One of them, b_{ij}, indicates whether or not square (i,j) holds a black queen. The other, w_{ij}, specifies whether or not square (i,j) contains a white queen. The constraint (1) keeps the armies the same size. The constraints (2) keep the armies at peace. The set M is the set of all ordered pairs of chessboard squares that share a row, column, or diagonal line of the chessboard. (Note that ((i1,j1), (i2,j2)) is in M if and only if i1 = i2, j1 = j2, or |i1 - i2| = |j1 - j2|.)
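The set M and the peace constraints (2) are easy to make concrete. The following Python sketch (the helper names are ours, not Plastria's) builds M from the row/column/diagonal condition and checks whether a candidate pair of armies satisfies every constraint (2):

```python
# Sketch (not from the column): Plastria's attack set M and a peace check.
from itertools import product

def attacks(s1, s2):
    """True if distinct squares s1, s2 share a row, column, or diagonal."""
    (i1, j1), (i2, j2) = s1, s2
    return s1 != s2 and (i1 == i2 or j1 == j2 or abs(i1 - i2) == abs(j1 - j2))

# M: all ordered pairs of distinct squares that attack each other.
squares = list(product(range(8), repeat=2))
M = [(s1, s2) for s1 in squares for s2 in squares if attacks(s1, s2)]

def peaceable(black, white):
    """Constraints (2): no black queen may attack any white queen."""
    return all(not attacks(b, w) for b in black for w in white)
```

For example, one-queen armies on (0,0) and (1,2) are at peace (a knight's-move separation), while moving the white queen to (0,5) violates constraint (2) through the shared row.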
Due to a lack of adequate IP software, Plastria was unable to implement his formulation. He did prove, however, that the optimal value of the LP relaxation of his integer program is 32, and he did conjecture that the optimal value of his integer program is 9. (The best solution he was able to find has nine queens in each army and is displayed in Figure 2.)
Plastria's formulation is not a tight one. The author tested it with CPLEX (version 4.0.9) on a 200 MHz Pentium PC and obtained an optimal solution (with objective value equal to 9) in just over four hours. It turns out that there are many optimal solutions. The author hasn't yet classified all of them. His favorite one (for the moment) is displayed in Figure 3.

Figure 2. Plastria's solution with nine queens in each army.

Figure 3. The author's favorite optimal solution.

Integer Programming

by Laurence A. Wolsey

Wiley 1998

ISBN 0-471-28366-5, GBP 37.50

Integer programming is a powerful but finicky tool; while it provides a very broad framework for applications, care needs to be taken in the model-building process in order to effectively use the current generation of solution methods. Laurence Wolsey is the world's master at squeezing the most out of the IP framework; watching him manipulate the constraints of an IP is like seeing a lecture at the Hogwarts School of Witchcraft and Wizardry. It is very fortunate that Wolsey's text is aimed squarely at teaching the rest of us the ins and outs of IP practice.
Wolsey's book is a true textbook. It is about one-third the size of the classic Integer and
Combinatorial Optimization by G.L. Nemhauser and L.A. Wolsey, and it contains a factor of 148
times more exercises than the monumental work Theory of Linear and Integer Programming by A.
Schrijver. I highly recommend Wolsey's book for adoption as a text in undergraduate and graduate
courses. Indeed, in our department we created a new course to match Wolsey's text, and it is being
very well received by the senior undergraduates, as well as by students in our Ph.D. program. The
text, moreover, is also a nice vehicle for potential industrial users of integer programming to see how
to best use IP software tools.
After two introductory chapters, a quick treatment of some basic combinatorial algorithms
(shortest paths, minimum spanning trees, bipartite matching) is presented in chapters 3 and 4. This
is followed by chapters on dynamic programming, complexity theory, and branch and bound. The
heart of the book begins with the cutting-plane material presented in chapters 8 and 9. Wolsey gives
an extensive treatment of general purpose cutting planes as well as facet-defining inequalities for
special classes of integer and mixed-integer problems. Lagrangian relaxation and column generation
methods are covered in chapters 10 and 11, and a short treatment of heuristic algorithms is given
in chapter 12. The text concludes with an excellent summary chapter titled "From Theory to
Solutions," that should be of particular interest to IP practitioners. This final chapter is presented
as a series of questions and answers, guiding the reader through the modeling and solution process.
Wolsey's book is a pleasure to read, and the author has done a very good job at selecting the material to cover. Integer Programming will no doubt be the standard IP text for years to come.



Scheduling Algorithms

by Peter Brucker

Springer-Verlag, 1998

ISBN 3-540-64105-X, DM 138


This is a slightly extended (16 additional pages) and updated version of the book with the same title that appeared in its first edition in 1995. The structure and the titles of the chapters are the same as in the first edition. The book deals with deterministic problems in connection with machine or processor scheduling, and it concentrates on the presentation of polynomial algorithms for different classes of scheduling problems. Moreover, branch and bound algorithms as well as local search algorithms for the exact and heuristic solution of NP-hard scheduling problems are briefly discussed. The book consists basically of three parts.
The first part (Chapters 1-3) contains some elementary material dealing with the classification of scheduling problems, with some basic problems and algorithms that are relevant for the development of solution procedures in connection with scheduling problems (e.g., linear and integer programming, maximum flow problems, matching problems, arc coloring in bipartite graphs, and dynamic programming), and with the computational complexity of scheduling problems.
Chapters 4-6 cover classical scheduling algorithms for solving single machine problems (Chapter 4), parallel machine problems (Chapter 5), and shop scheduling problems (open shop, flow shop, job shop, and mixed shop; Chapter 6). This is the main part of the book and contains known scheduling algorithms developed during the past 40 years. In Chapter 6, the disjunctive graph model is introduced as a useful tool for constructing optimal schedules. The job shop problem with minimization of the makespan is considered in detail. In particular, a branch and bound algorithm based on the block approach and immediate selection is described, which was the first algorithm able to solve the famous 10 x 10 problem given by Muth and Thompson (1963), and the application of tabu search techniques for the heuristic solution of this problem is discussed.
The last part (Chapters 7-11) deals with some topics that have been considered recently, particularly in connection with flexible manufacturing, and which are not, or only partially, included in other recent books on scheduling. More specifically, single machine scheduling problems involving due dates (Chapter 7), single machine batching problems (Chapter 8), scheduling problems with changeover and transportation times (Chapter 9), problems with multi-purpose machines (Chapter 10), and problems with multiprocessor tasks (Chapter 11) are treated.
Most extensions in comparison with the first edition can be found in Chapters 8 and 11. In contrast to the first edition, where only batching problems with the length of a batch equal to the sum of the processing times of the jobs in the batch were considered, the new edition also includes p-batching problems, where the length of a batch is equal to the maximum of the processing times of the jobs in the batch. Both the unbounded model (the batch can contain arbitrarily many jobs) and the bounded model are considered. The chapter on multiprocessor tasks has been extended by some comments on preemptive multiprocessor task problems and on multi-mode multiprocessor task problems.
The chapters finish with surveys of complexity results, in which both the maximal polynomially solvable problems and the minimal NP-hard problems are listed in tables. These tables have been updated in comparison with the first edition of the book, and they now also contain many recently obtained results. The book is a very helpful tool for the development of concrete scheduling algorithms.

Compared with its important influence on today's Geometry and its strong impact on other fields inside and outside mathematics, a comprehensive and extensive source on Computational Geometry was long overdue. However, is there a chance to cover at least most of the variety of this field, ranging from Algorithms and Data Structures, Optimization, and Euclidean Geometry to applications in Visualization, Robotics, CAD, etc., in only one book? There is! And the proof is given by the Handbook of Computational Geometry edited by J.R. Sack and J. Urrutia.
This handbook contains 22 chapters, most of them devoted to basic problems and concepts in Computational Geometry like Davenport-Schinzel sequences, Arrangements, Voronoi Diagrams, Closest-Point Problems, Link Distance Problems, Graph Drawing, Similarity of Geometric Shapes, Spanning Trees and Spanners, Visibility, Animation, Geometric Data Structures, Spatial Data Structures, Illumination Problems, Polygon Decomposition, Parallel Computational Geometry, Randomized Algorithms, Derandomization, and Robustness in Geometric Computation. In addition, three chapters on Geographical Information Systems, Geometric Shortest Path and Network Optimization, and Mesh Generation show in an impressive manner the interaction of Computational Geometry with other fields of science.
My overall impression is that most of the chapters are written in a very comprehensive, detailed, and stimulating way. Each chapter contains an extensive list of references, and there is no doubt that this book will be a very valuable resource for researchers. However, I can also recommend this handbook to everyone who just wants to get an impression of the problems, tools, and techniques used in Computational Geometry, or who is just curious about what is going on in this field. I am sure that this book will satisfy her/his curiosity and thirst for knowledge.

Handbook of Computational Geometry

edited by J.R. Sack and J. Urrutia

Elsevier Science, North-Holland

ISBN 0-444-82537-1, 1088 pages, USD 190.50

Application for Membership

I wish to enroll as a member of the Society. My subscription is for my personal use and not for the benefit of any library or institution.

[ ] I will pay my membership dues on receipt of your invoice.
[ ] I wish to pay by credit card (Master/Euro or Visa).

Mail to:
Mathematical Programming Society
3600 University City Sciences Center
Philadelphia, PA 19104-2688 USA

Cheques or money orders should be made payable to The Mathematical Programming Society, Inc. Dues for 1999, including subscription to the journal Mathematical Programming, are US $75.
Student applications: Dues are one-half the above rate. Have a faculty member verify your student status and send the application with dues to the above address.

Faculty verifying status
