
Full Citation 
Material Information 

Title: 
Optima 

Physical Description: 
Serial 

Language: 
English 

Creator: 
Mathematical Programming Society, University of Florida 

Publisher: 
Mathematical Programming Society, University of Florida 

Place of Publication: 
Gainesville, Fla. 

Publication Date: 
December 2009 

Copyright Date: 
2009 
Record Information 

Bibliographic ID: 
UF00090046 

Volume ID: 
VID00081 

Source Institution: 
University of Florida 

Holding Location: 
University of Florida 

Rights Management: 
All rights reserved by the source institution and holding location. 

OPTIMA
Mathematical Programming Society Newsletter
Steve Wright
MPS Chair's Column
November 20, 2009. Along with so many colleagues, I am deeply saddened by the news of Paul Tseng's disappearance while on a kayak trip on the Jinsha River in China on August 13, 2009. Many of us have wonderful memories of times spent with Paul (visits, collaborations, conversations, conference social events), and many have been influenced by his scholarly, incisive, and prolific research. A workshop in Paul's honor will take place at Fudan University next May (see announcement in this issue), and an article on Paul will appear in the next issue of Optima. A Wikipedia entry about Paul's life and work can be found at http://en.wikipedia.org/wiki/Paul_Tseng.
In MPS news, we have introduced a new section on the web
site mathprog.org containing links to faculty job listings in optimiza
tion and related areas. Click on the "Jobs" tag at the top of the
page, and send any listings you wish to advertise on the site to
webmaster@mathprog.org.
The increasing visibility of optimization in other areas of mathematics, science, and engineering has become ever more apparent in recent months. Optimization is being recognized more and more as an important tool for modeling and analysis and as a key to many interdisciplinary research efforts. I illustrate the point with two recent developments. First, the Institute for Pure and Applied Mathematics (IPAM) at UCLA is organizing a three-month program entitled "Modern Trends in Optimization and its Applications," consisting of a series of workshops in Fall 2010. Second, the optimization group at Wisconsin was selected to be one of the five "themes" in the Wisconsin Institutes of Discovery, an interdisciplinary research institute under construction on the UW-Madison campus and funded with $150m of private, state, and alumni research foundation grants. No doubt there are other compelling recent examples of the expanding reach of optimization.
Optimization is naturally an outward-looking discipline. It challenges domain scientists to articulate what it is they need to learn from a model and to evaluate the strengths and weaknesses of various formulations. Conversely, opportunities for fundamental research in optimization grow, as optimizers are challenged to provide new theory and algorithms for the large and complex problems that arise in new collaborations. Interdisciplinary initiatives provide us with exciting opportunities to collaborate with top researchers in other fields. Each success increases the visibility of optimization and makes it easier to find new openings and collaborations in new areas.
Economics provided much of the initial impetus for our discipline. The "zeroth ISMP" took place at the Cowles Commission for Research in Economics in Chicago in 1949; the proceedings can be found at http://cowles.econ.yale.edu/P/cm/m13/. Sixty years later, in the wake of financial and economic crises, the need for rigorous analytical tools in economics and finance is growing once again. Tools based on recent developments in stochastic optimization, robust optimization, complementarity, and algorithmic game theory will be particularly important. The use of optimization in operations research and engineering has grown steadily over the years, in such specific areas as process control, engineering design, and logistics. But it is the past few years that have seen a striking expansion of the application space, into such areas as structural biology and bioinformatics, signal and image processing (including compressed sensing), computational learning, the design and control of electrical grids, and medical applications (e.g. cancer treatment), to name just a few. The applications that gave rise to our field remain as important as ever, and the vitality and variety of new research topics, both fundamental and interdisciplinary, continues to grow. There has never been a more exciting time to be an optimizer!
Note from the Editors
The topic of Optima 81 is "Counting and Estimating Lattice Points". Instead of the usual format consisting of one scientific contribution and the discussion column, we are very pleased to publish two scientific contributions presenting the subject from different perspectives. The result is a very detailed picture of the topic.
Contents of Issue 81 / December 2009
1 Steve Wright, MPS Chair's Column
1 Jesus A. De Loera, Counting and Estimating Lattice Points: Tools from Algebra, Analysis, Convexity, and Probability
9 Gilles Pesant, Counting and Estimating Lattice Points: Special Polytopes for Branching Heuristics in Constraint Programming
14 Advertisements
16 Imprint
Jesus A. De Loera
Counting and Estimating Lattice Points:
Tools from Algebra, Analysis, Convexity,
and Probability
This article gives an overview of the mathematical foundations and the computational complexity of several algorithms for counting lattice points inside polyhedral sets. Given a polytope P = {x : Ax = b, x ≥ 0}, specified by a d × n integral matrix A and an integral d-vector b, our goal is to count or at least estimate how many lattice points are inside P. In some applications we are in fact interested in the counting function φ_A(b) = #{x : Ax = b, x ≥ 0, x integral}, where b can be taken as a vector of parameters. In pure mathematics one is interested in formulas for φ_A(b) in terms of the variables b_1, ..., b_d. Areas of application where these algorithmic questions often appear include mathematical programming, probability and statistics, and theoretical computer science. We illustrate with several examples the many uses of computing or evaluating φ_A(b).
Counting lattice points of polytopes is at the foundation of the theory of mathematical optimization. There is a very close algorithmic relationship between optimization on one hand and counting on the other (this is made explicit in [48, 35, 36]). The number of feasible lattice points in a polyhedron is a measure of progress of a branch-and-bound algorithm for solving integer optimization problems. For integer programs in some applications it is desirable to find all optimal solutions [31]. For example, in network flow problems regarding stability or security one may be interested in knowing the number of maximal integral-valued flows or cuts [7]. This is the same as counting the number of lattice points of a flow polytope (see [8] and references therein).
In statistics, counting and sampling lattice points are useful for making inferences about the statistical models imposed on tabular data (contingency tables). For example, to decide whether two of the traits of the statistic, say, income and race, are independent of each other, several tests of independence depend on counting the integral tables satisfying some marginal conditions (see [38, 45] and their references). Statisticians have developed ad hoc sampling-based estimation methods, and in fact counting is closely related to the problem of sampling a point uniformly at random or with respect to other probability distribution models (see [23, 25, 26, 56] and references therein). Of course, for an arbitrary polyhedron this is an unrealistic assumption, since it would require intractable computation to even jump-start such methods. Counting is also related to statistical mechanics models such as the Ising and Potts models (see, e.g., [54, 66]).
In computer science, coding theory and cryptography have always been important areas of application for lattices and lattice point counting [64, 83]. An emerging new application of lattice point counting is computer program verification and code optimization. Systematic reasoning about a program's runtime behavior, performance, and execution requires specific knowledge of the number of operations and resource allocation within the program. This is important knowledge for the sake of checking correctness as well as for automatically detecting runtime errors, buffer overflows, null-pointer dereferences, or memory leaks. This can be modeled as a problem of counting points in parametrized regions (see [28, 47, 67, 78] and references therein). For example, how often is instruction I of the following computer code executed?
void proc(int N, int M)
{
    int i, j;
    for (i = 2*N - M; i <= 4*N + M - min(N,M); i++)
        for (j = 0; j + 2*i <= N - 1; j++)
            I;
}
Clearly, the number of times we reach instruction I depends parametrically on N and M. In terms of these parameters the set of all possible solutions is given by the number of lattice points inside of a parametrized family of polygons. In our toy example these are described by the conditions {(i,j) ∈ Z² : i ≥ 2N − M, i ≤ 4N + M − min(N,M), j ≥ 0, j + 2i ≤ N − 1}.
One can ask: why do we restrict ourselves to polyhedra? What are the most complicated regions for which counting lattice points is possible? When the sets for which we count lattice points are arbitrary, the problem gets harder and harder as the dimension of the set grows: Given positive integers (a, b, c), deciding whether there is a lattice point in the set {x : ax² + bx = c, x > 0} is an NP-complete problem [4]. Deciding whether there is a nonnegative integer root for arbitrary polynomials in Z[x_1, ..., x_9] is undecidable, i.e., roughly speaking there is no computer algorithm to solve this problem for all instances [57], already for nine variables! The trouble with undecidability indicates one cannot even create an algorithm for finding a bounding box containing all the possible solutions. How about then if we restrict ourselves to the case of bounded sets? This restriction still contains some interesting surprises. Already Gauss proposed a famous counting problem for circles [49]: Suppose you are given a circle of radius r centered at the origin. Clearly the area is an approximation to the number of lattice points, but how good is the approximation of the number of lattice points N(r) by the area of the circle as r grows? In other words, what is the exact order of magnitude for the error function N(r) − πr²? The answer is still not known exactly, although it is supposed to be between r^{1/2} and r^{0.6301369863} (see, e.g., [50]).
Counting lattice points in (four-dimensional) convex bodies is something that credit card cyberthieves care about too! The reason is that the RSA encryption algorithm relies on the difficulty of factorizing a number of the form pq with p, q large primes. Here is a way to factorize via counting lattice points of 4-dimensional balls: For an integer number n = pq consider the 4-dimensional ball B(n) = {x ∈ R⁴ : x_1² + x_2² + x_3² + x_4² ≤ n}. Jacobi (see [5, 51]) proved that the number of ways in which a positive integer can be written as a sum of four squares is eight times the sum of its divisors that are not a multiple of four. So, for example, there are 744 ways of writing 100 as a sum of four squares, because 1, 2, 5, 10, 25, 50 are the only divisors of 100 not divisible by 4. Thus if we know that n = pq, and |B(n)| denotes the number of lattice points in B(n), we have |B(n)| − |B(n−1)| = 8(1 + p + q + n). Therefore a factorization of n could be done fast if we knew how to compute |B(n)| fast. Thus the RSA cryptosystems used in internet transactions could be broken if you knew how to quickly count lattice points inside four-dimensional balls.
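Jacobi's theorem and the factoring identity above can be checked directly by brute force in small cases. The following sketch (my own illustration; the small instance n = 3 · 5 is chosen only so the enumeration stays tiny) verifies the 744 representations of 100 and the identity |B(n)| − |B(n−1)| = 8(1 + p + q + n):

```python
from itertools import product

def r4(n):
    """Number of ways to write n as an ordered sum of four squares
    (counting signs and order), by brute-force enumeration."""
    m = int(n**0.5) + 1
    return sum(1 for x in product(range(-m, m + 1), repeat=4)
               if x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 == n)

def jacobi(n):
    """Jacobi's theorem: r4(n) = 8 * (sum of divisors of n not divisible by 4)."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

# The example from the text: 744 representations of 100.
assert jacobi(100) == 8 * (1 + 2 + 5 + 10 + 25 + 50) == 744
assert r4(100) == 744

# The factoring identity |B(n)| - |B(n-1)| = 8(1 + p + q + n) for n = pq,
# since the difference counts the lattice points on the sphere of radius sqrt(n).
p, q = 3, 5
n = p * q
assert r4(n) == jacobi(n) == 8 * (1 + p + q + n)
print("all identities verified")
```

Of course, the brute-force r4 defeats the purpose: the point of the text is that nobody knows how to evaluate |B(n)| fast.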
We continue now with an overview of the theory behind counting and estimation of lattice points inside polytopes. We recommend the books [9, 79, 90] as good references for many concepts about polyhedra and lattices used in the sequel. The literature in this area is quite large, and due to lack of space we will be able to mention only a fraction of what is available. We have chosen to discuss in more detail two particular algorithms, because they both have had a somewhat closer relationship to mathematical programming.
1 Exact Counting
Counting lattice points inside convex polyhedra is still difficult: We know that when the dimension is an input variable the problem of detecting a lattice point in polyhedra is NP-hard [46]. Already counting 2 × n contingency tables is #P-hard [43]. Many combinatorial problems can be phrased as counting the lattice points of a polyhedron, thus their hardness translates directly to our problem (e.g., counting perfect matchings of bipartite graphs, also known as the computation of the permanent). In particular one can prove it is NP-hard to estimate the number of lattice points of certain polytopes to any polynomial (in the input length) factor (see, e.g., [58]). But life goes on; what can we do if we still have the practical need of counting solutions?
Branch-and-cut techniques and exhaustive enumeration methods which are common in discrete optimization are also useful for enumerating lattice points. In both approaches one generates a search tree to list all solutions, but branch-and-cut is aided by linear programming relaxations to detect infeasible subproblems and do special pruning (see [31, 3]). One way to attack well-structured problems (such as binary restricted ones) is to use powerful techniques from constraint programming (see [76] in this issue and references therein). Other search tree techniques have been developed for binary problems (see [18, 24, 3] and references therein). Despite their enormous success, enumeration methods can get stuck on surprisingly small problems (e.g., hard knapsack problems such as those in [1]). Enumeration techniques are not useful for deriving formulas, as they depend on the size of the right-hand-side vector b. During the 1980s and 1990s a new breed of algorithms and counting formulas were created. They rely on deep algebraic and analytic properties of polyhedra.
Some authors have emphasized the algebraic-geometric nature of the lattice points of polyhedra via the theory of toric varieties [82, 39]. Given a polytope P = {x : Ax = b, x ≥ 0} defined by the matrix A with integer entries, the toric ideal I_A is the polynomial ideal generated by the binomials of the form x^u − x^v such that A(u − v) = 0 (see [82, 85] and references therein). We find a finite set of binomials, a Gröbner basis G_A = {x^{u_1} − x^{v_1}, x^{u_2} − x^{v_2}, ..., x^{u_k} − x^{v_k}}, that generates I_A (Gröbner bases in general can be computed with the well-known Buchberger algorithm [30]). It is well known that the vectors F = {u_1 − v_1, u_2 − v_2, ..., u_k − v_k} have the following properties: For the polytope P = {x : Ax = b, x ≥ 0}, one can form a connected directed graph whose vertices are the lattice points of P and which has a unique sink. We connect any pair z_1, z_2 of lattice points by an edge if there is a vector u − v in F such that z_1 − u + v = z_2 with z_1 − u ≥ 0; the resulting graph is connected. Moreover, if we orient the edges of this graph according to a monomial term order (in the sense of Gröbner bases theory), the directed graph will have a unique sink. Thus from any lattice point of P there is an "augmenting" path to this sink. Thus we have a connected rooted directed tree that can be used very effectively in conjunction with the reverse-search enumeration algorithm [6] to obtain an output-sensitive algorithm of constant memory for listing all lattice points of a polyhedron.
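A tiny illustration of the connectivity property (my own toy instance, not the general toric machinery): for A = [1 1 1] the toric ideal is generated by x_1 − x_2 and x_2 − x_3, giving the moves F = {e_1 − e_2, e_2 − e_3}, and a breadth-first search confirms that these moves connect all lattice points of {x : x_1 + x_2 + x_3 = 4, x ≥ 0}:

```python
from itertools import product
from collections import deque

# Toy instance: A = [1 1 1], b = 4, so P ∩ Z^3 is the set of compositions
# of 4 into 3 nonnegative parts. The moves below come from the binomials
# x1 - x2 and x2 - x3 generating the toric ideal of A.
b = 4
points = [p for p in product(range(b + 1), repeat=3) if sum(p) == b]
F = [(1, -1, 0), (0, 1, -1)]

def neighbors(z):
    """Lattice points reachable from z by adding or subtracting a move in F,
    staying in the nonnegative orthant."""
    for m in F:
        for s in (1, -1):
            w = tuple(z[i] + s * m[i] for i in range(3))
            if all(c >= 0 for c in w):
                yield w

# Breadth-first search shows the move graph on the lattice points is connected.
seen = {points[0]}
queue = deque([points[0]])
while queue:
    z = queue.popleft()
    for w in neighbors(z):
        if w not in seen:
            seen.add(w)
            queue.append(w)

print(len(points), len(seen))  # both 15: all C(6, 2) points reached
```

Orienting each edge toward the lexicographically larger endpoint would give the rooted tree used by reverse search.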
Other methods make a much stronger use of complex analysis and convexity (see [18, 40, 59, 62, 63, 73, 84, 75] and the many references within). In a way these algorithms consider counting as a special case of the problem of integration. For this recall φ_A(b) = #{x : Ax = b, x ≥ 0, integer}. If we denote by A_i the columns of the matrix A, then it is easy to see that

    Σ_{b ∈ cone(A) ∩ Z^d} φ_A(b) z^b = Π_{i=1}^n 1/(1 − z^{A_i}).

How to invert this equation to recover φ_A(b)? Some authors have taken a complex analytic view. For instance, in [17] the integral is evaluated by repeated application of Cauchy's integral theorem and clever manipulations:

    φ_A(b) = (1/(2πi))^d ∮ ... ∮ z^{−b−1} Π_{i=1}^n (1 − z^{A_i})^{−1} dz,

where the integration is over circles |z_j| = ε_j and 0 < ε_1, ..., ε_d < 1 are different numbers such that we can expand all the factors 1/(1 − z^{A_i}) into power series about 0. This method has been applied very successfully to determine the volumes of Birkhoff polytopes in [16]. The reader knows from basic calculus that to integrate rational functions it is sometimes useful to use a partial fraction decomposition, in order to simplify the expressions. M. Vergne and collaborators have exploited the structure of the hyperplane arrangement associated to the columns of the matrix A in order to have such a simpler decomposition. See [8, 20, 84] and references within for more details and implementation.
We now try to compile a list of software packages for counting lattice points in polyhedra. For 0-1 problems Azove is C++ code developed by Markus Behle at the Max Planck Institute in Saarbrücken (www.mpi-inf.mpg.de/~behle/azove.html). Lisonek created a Maple package that works on knapsack-type problems [65]. The Beck-Pixton software [16] can perform fast computations with two-dimensional transportation polytopes; they have the best computation times for that family of polytopes. Vergne and collaborators (see [8]) have developed a special package for unimodular matrices which is very fast. For working with general polytopes we have two programs: one is ehrhart, developed by Clauss, Loechner and Wilde (http://icps.u-strasbg.fr/Ehrhart/program/). The first implementation of Barvinok's algorithm was the software LattE developed at UC Davis [33, 34]. LattE counts lattice points, generates Ehrhart quasi-polynomials, and even solves small integer programs. LattE is freely available at www.math.ucdavis.edu/~latte. More recently M. Köppe improved LattE and implemented several new algorithms in LattE macchiato, available at the same web site as LattE or from www.math.ucdavis.edu/~mkoeppe/latte/ (see also [60]). S. Verdoolaege wrote a second implementation of the same algorithm, barvinok [88]. We now conclude this first section with a more detailed description of Barvinok's algorithm.
1.1 Counting through Barvinok's Generating Functions
In the 1990's, based on work by the geometers Brion, Khovanskii, Lawrence, and Pukhlikov, A. Barvinok created an algorithm to count integer points inside polyhedra. When the dimension is fixed the algorithm can count the number of lattice points in a polytope in polynomial time in the size of the input (the size is given by the binary encoding of the data). See [10, 11] and the references within. Barvinok's result is in a way a generalization of H. Lenstra's algorithm to detect integer points in polyhedra. Lenstra's algorithm is based on the LLL algorithm and the idea of short vectors (see [48, 61]). Shortly after Barvinok's breakthrough, Dyer and Kannan [41] modified the original algorithm of Barvinok, which originally relied on Lenstra's result, giving a new proof that integer programming problems with a fixed number of variables can be solved in polynomial time.
The key ideas of Barvinok's algorithm are using rational functions as an efficient data structure to encode lattice points and a clever way to decompose polyhedra into cones. Given a convex polyhedron P (not necessarily a polytope anymore!), we wish to compute the multivariate generating function

    f(P; x) = Σ_{α ∈ P ∩ Z^d} x^α,

where x^α = x_1^{α_1} x_2^{α_2} ... x_d^{α_d}. This is an infinite formal power series if P is not bounded, but if P is a polytope it is a (Laurent) polynomial with one monomial per lattice point. For example, if we consider a rectangle with vertices V_1 = (0,0), V_2 = (5000,0), V_3 = (0,5000), and V_4 = (5000,5000), the generating function f(P) has over 25,000,000 monomials:

    f(P; z_1, z_2) = 1 + z_1 + z_2 + z_1 z_2 + z_1² z_2 + ... + z_1^5000 z_2^5000.

The representation of f(P; z_1, z_2) as monomials is clearly way too long to be of practical use. But Barvinok's method instead rewrites
Figure 1. The tangent cone at vertex v
Figure 2. A two-dimensional cone (with vertex at the origin) and its fundamental parallelepiped
it as a compact sum of rational functions. For instance, only four rational functions suffice to represent the over 25 million monomials. Indeed f(P; z_1, z_2) equals

    1/((1 − z_1)(1 − z_2)) + z_1^5000/((1 − z_1^{−1})(1 − z_2)) + z_2^5000/((1 − z_2^{−1})(1 − z_1)) + z_1^5000 z_2^5000/((1 − z_1^{−1})(1 − z_2^{−1})).
Note that if we wish to know the number of lattice points in P, and we knew f(P; x), we could compute it as the limit when the vector (x_1, ..., x_d) goes to (1, 1, ..., 1). Similarly, the maximum of a linear functional over the lattice points equals the highest degree of the univariate polynomial one gets after doing a monomial substitution x_i → t^{c_i} (see [12]). These two calculations can be difficult because of the poles of the rational functions. One uses complex analysis (residue calculations) to find the answer.
A beautiful theorem of M. Brion [19] says that to compute the rational function representation of f(P; z) it is enough to compute the tangent cones at each vertex of P. Let P be a convex polytope and let V(P) be the vertex set of P. Let K_v be the tangent cone at v ∈ V(P). This is the (possibly translated) cone defined by the facets touching vertex v (see Figure 1). Then the following nice formula holds:

    f(P; z) = Σ_{v ∈ V(P)} f(K_v; z).
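In dimension one this formula can be checked by hand. The sketch below (my own illustration, using exact rational arithmetic) verifies it for the interval P = [0, 5]: the tangent cone at vertex 0 contributes 1/(1 − z), the tangent cone at vertex 5 contributes z⁵/(1 − z^{−1}), and their sum is the full generating polynomial, whose value at z = 1 is the number of lattice points:

```python
from fractions import Fraction

def brion_sum(z):
    """Brion's formula for P = [0, 5]: sum of the two vertex-cone
    rational functions."""
    return 1 / (1 - z) + z**5 / (1 - 1 / z)

def generating_polynomial(z):
    """The full generating function 1 + z + z^2 + z^3 + z^4 + z^5."""
    return sum(z**k for k in range(6))

# The two expressions agree at every test point away from the poles...
for z in (Fraction(1, 3), Fraction(7, 2), Fraction(-4, 5)):
    assert brion_sum(z) == generating_polynomial(z)

# ...and evaluating the polynomial at z = 1 recovers the lattice point
# count, the limit one cannot take directly in the rational-function form.
print(generating_polynomial(1))  # 6
```

Note that each summand has a pole at z = 1; only the sum extends continuously there, which is exactly why residue or limit arguments are needed in general.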
Since it is enough to compute everything for cones, we first explain how to compute the rational function for the "smallest" cones, i.e., the simple cones. A cone is said to be simple if its rays are linearly independent vectors. For instance all the tangent cones of the pentagon of Figure 1 are simple. Obtaining the rational function representation of the lattice points of a simple cone K ⊂ R^d is easy (see all details in [80] or in Chapter IV of [81]). Here is the formula that one can write directly from the coordinates of the rays of the cone and its fundamental parallelepiped Π:

    f(K; z) = (Σ_{a ∈ Π ∩ Z^d} z^a) / ((1 − z^{c_1})(1 − z^{c_2}) ... (1 − z^{c_d})).

Here Π is the half-open parallelepiped {x : x = α_1 c_1 + ... + α_d c_d, 0 ≤ α_i < 1}. A two-dimensional example is shown in Figure 2: we have d = 2, c_1 = (1, 2), and c_2 = (4, −1). We have:

    f(K; z) = (1 + z_1 + z_1² + z_1³ + z_1⁴ + z_1 z_2 + z_1² z_2 + z_1³ z_2 + z_1⁴ z_2) / ((1 − z_1 z_2²)(1 − z_1⁴ z_2^{−1})).
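The numerator of the two-dimensional example can be recomputed by brute force: enumerate the integer points of the half-open parallelepiped spanned by c_1 = (1, 2) and c_2 = (4, −1). The sketch below is my own check; the search box bounds are an arbitrary choice that safely covers Π:

```python
from fractions import Fraction

# Rays of the simple cone in the two-dimensional example.
c1, c2 = (1, 2), (4, -1)
# det [c1 c2] = 1*(-1) - 2*4 = -9, so the half-open parallelepiped
# Pi = {a1*c1 + a2*c2 : 0 <= a1, a2 < 1} contains exactly |det| = 9
# lattice points.
det = c1[0] * c2[1] - c1[1] * c2[0]

points = []
for x in range(-5, 6):          # a small box that surely covers Pi
    for y in range(-5, 6):
        # Solve (x, y) = a1*c1 + a2*c2 exactly by Cramer's rule.
        a1 = Fraction(x * c2[1] - y * c2[0], det)
        a2 = Fraction(c1[0] * y - c1[1] * x, det)
        if 0 <= a1 < 1 and 0 <= a2 < 1:
            points.append((x, y))

print(sorted(points))
# [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1), (4, 0), (4, 1)]
```

These nine points are exactly the exponent vectors of the nine numerator monomials in the formula above.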
Figure 3. The signed decomposition of a cone into unimodular cones, in two general steps: first triangulate the cone, then further decompose each simple cone into a plus-minus sum of unimodular cones
But what to do if the cone K is not simple? Break it into simple cones! The wonderful idea of A. Barvinok was observing that, although triangulating the cone K may be an inefficient way to subdivide the cone (i.e., there are exponentially many pieces), if one is willing to add and subtract cones, then for fixed dimension d there exists a polynomial time algorithm which decomposes a rational polyhedral cone K ⊂ R^d into simple unimodular cones K_i. A simple cone is unimodular if its fundamental parallelepiped contains a single lattice point. See Figure 3. In fact, via the decomposition, with numbers ε_i ∈ {1, −1} we can write an expression

    f(K; z) = Σ_{i ∈ I} ε_i f(K_i; z),   |I| < ∞.

The index set I is of size polynomial in the input data as long as we work in fixed dimension.
2 Estimation and Approximate Counting
Since exact counting is hard, it is not surprising that there is a rich and exciting theory of estimation and approximation. Now we present various algorithms applicable to arbitrary polyhedra (much work has also been done for special families, such as contingency tables; see for instance [8, 14, 16, 32, 38, 72] and the many references there). Unfortunately we cannot avoid omitting part of this interesting literature. We recommend the surveys [43, 54, 53, 89] and the references therein for more information. One important point in the theory of approximate counting was that in 1986 Jerrum, Valiant and Vazirani showed that the existence of a good approximation (i.e., a fully polynomial randomized approximation scheme) is equivalent to almost uniform sampling from the set of lattice points [55]. Thus, with this close connection to sampling, the effort has often been on finding reasonable sampling procedures.
One approach is direct Monte-Carlo sampling or dart-throwing techniques. The basic framework is that we wish to estimate the size |S| of the lattice point set S of a given polyhedron. To do this, we enclose S in an easy set S′ which approximates S and can be sampled uniformly (think of S′ as a nice box around a nasty-looking S). The process is then to sample from S′ to get an estimate p of the proportion |S|/|S′| of sampled points which lie in S, and to estimate |S| by p|S′|. Of course, the key to using this approach is to find a set S′ such that S ⊂ S′, |S′| is easy to determine, and S′ can be easily sampled. Finally, it is desirable that |S′|/|S| is polynomially bounded. There are several clever variations of this simple idea. One noteworthy example is the combination of dart-throwing with dynamic programming which was proposed by Dyer in [42]. Another example is the use of randomized rounding to improve the dart-throwing technique [58].
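A minimal sketch of the dart-throwing estimator, on a toy triangle whose lattice point count is known in closed form (the instance, seed, and sample size are my own choices):

```python
import random

# Toy instance: S = lattice points of the triangle x + y <= N, x, y >= 0.
# Enclosing set S' = the integer box {0, ..., N}^2, which is trivial to
# count (|S'| = (N+1)^2) and to sample uniformly.
N = 50
random.seed(1)

samples = 100_000
hits = 0
for _ in range(samples):
    x = random.randint(0, N)
    y = random.randint(0, N)
    if x + y <= N:          # the dart landed inside S
        hits += 1

estimate = (hits / samples) * (N + 1) ** 2
exact = (N + 1) * (N + 2) // 2        # closed-form count for the triangle
print(round(estimate), exact)
```

Here |S′|/|S| is about 2, so dart-throwing works well; the method degrades exactly when no enclosing set with polynomially bounded ratio is available.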
Another approach is the Markov chain Monte-Carlo method (MCMC), in which we try to sample using a random walk directly on the set of lattice points, hopefully proving that this walk approaches the uniform distribution in time polynomial in the instance size (so-called rapid mixing). Notable successes of MCMC include volume computation, permanents, and 0/1 knapsacks (see [54, 74] and the references therein). In special polytopes (e.g., contingency tables) it is not a problem to design a random walk, thus the main difficulty of MCMC is in proving the polynomial time convergence property to the uniform distribution. This means that the method is not as easy to apply rigorously as it may appear at first sight. On the other hand, for arbitrary polyhedra, e.g., in the context of integer programming where we do not even know whether the polytope has lattice points at all, it is not easy to automatically generate a well-behaved Markov chain. The algebraic approach in Section 1 helps with this. The paper [39] provides a way to generate a Markov chain as a walk on the lattice point graph defined by the toric ideal. Unfortunately, the Markov chain moves (a Markov basis) can be quite large and hard to compute (see, e.g., [37]).
These difficulties motivate a search for other ways to sample lattice points. In the context of two-way contingency tables, [26] introduced an alternative to the MCMC method. Sequential importance sampling uses the idea of divide and conquer to sequentially build up the proposed distribution by sampling variable entries one by one. Roughly speaking, a particular solution of Ax = b, x ≥ 0 can be generated entry by entry, where x_i is constrained by the choices made for x_1, ..., x_{i−1}. The sampling is easy to do if, geometrically, the possible solutions for x_i form an interval of values without gaps. In [27] many real examples show that sequential importance sampling is applicable and useful, and can be used in cases where a Markov basis cannot be easily computed (e.g., multiway contingency tables). This method had great success on the problem of estimating the number of labeled graphs with a prescribed degree sequence (see [23]).
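The sequential idea can be sketched on the simplest possible system, x_1 + x_2 + x_3 = b with x ≥ 0. The following is my own illustration of the importance-weighting mechanism, not the contingency-table scheme of [26]: draw each entry uniformly from its feasible interval and weight by the interval's size; the weight is then an unbiased estimate of the number of solutions.

```python
import random
from math import comb

# Count solutions of x1 + x2 + x3 = b, x >= 0, by sequential importance
# sampling. Each sample's weight is the product of the sizes of the
# feasible intervals it sampled from (the reciprocal of its probability).
b, trials = 10, 20_000
random.seed(7)

total = 0.0
for _ in range(trials):
    remaining, weight = b, 1.0
    for _ in range(2):                      # choose x1 and x2 freely
        xi = random.randint(0, remaining)   # feasible interval {0, ..., remaining}
        weight *= remaining + 1             # 1 / sampling probability of xi
        remaining -= xi
    # x3 = remaining is forced, so it contributes no weight factor.
    total += weight

estimate = total / trials
exact = comb(b + 2, 2)                      # stars and bars: C(12, 2) = 66
print(round(estimate, 1), exact)
```

Unbiasedness is immediate: each feasible solution is generated with probability equal to the reciprocal of its weight, so the expected weight is exactly the number of solutions. The practical difficulty in real tables is that the feasible interval for each entry must be computed, and the weights may have high variance.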
Lately, a strong connection between convex optimization on one hand and integer feasibility and counting on the other has been observed by several authors [52, 70, 71, 15]. Most recently Barvinok and Hartigan [15] presented probabilistic theorems for the estimation of the number of lattice points or 0-1 points inside a polyhedron which are based on convex programming tools. To conclude this section, we take a closer look at this very recent development.
2.1 Estimation through Convex Optimization
We now briefly recount the main results presented in [15], which can be used to compute estimates of the number of lattice points inside a polyhedron by solving specially constructed convex optimization problems on polytopes. Unlike other techniques, the Barvinok-Hartigan idea uses no prior knowledge about the input polyhedron. In what follows, a polytope P ⊂ R^n is the set of solutions to the system Ax = b, x ≥ 0, where A = [a_1, ..., a_n] is the matrix with d-dimensional columns a_1, ..., a_n and x is thought of as a column vector x = (x_1, ..., x_n). For simplicity we assume that the vectors a_1, ..., a_n span R^d and that in fact the vectors a_1, ..., a_n and b are integral, that is, a_1, ..., a_n; b ∈ Z^d. We also assume that P has a nonempty interior, that is, contains a point x where the defining inequalities are strict (it is well known that this can be checked fast both in theory and in practice). We wish to estimate the number |P ∩ Z^n| of integer points in P. We also consider the problem of counting the set of all vectors in P with coordinates 0 and 1.
Recall that a discrete random variable x has geometric distribution if Pr{x = k} = p q^k for k = 0, 1, ..., for positive numbers p and q such that p + q = 1. For the expectation and variance of x we have Ex = q/p and var x = q/p², respectively. Conversely, if Ex = ζ for some ζ > 0 then p = 1/(1 + ζ), q = ζ/(1 + ζ), and var x = ζ + ζ².
Theorem 1 (Theorem 2.1 in [15]). Let P ⊂ R^n be the intersection of an affine subspace in R^n and the nonnegative orthant. Suppose that P is bounded and has a nonempty interior, that is, contains a point y = (η_1, ..., η_n) where η_j > 0 for j = 1, ..., n. Then the strictly concave function

    g(x) = Σ_{j=1}^n ((x_j + 1) ln(x_j + 1) − x_j ln x_j)

attains its maximum value on P at a unique point z = (ζ_1, ..., ζ_n) such that ζ_j > 0 for j = 1, ..., n.
Suppose now that x_j are independent geometric random variables with expectations ζ_j for j = 1, ..., n. Let X = (x_1, ..., x_n). Then the probability mass function of X is constant on P ∩ Z^n and equal to e^{−g(z)} at every x ∈ P ∩ Z^n. In particular,

    Pr{X ∈ P} = e^{−g(z)} |P ∩ Z^n|.
Similarly, Barvinok and Hartigan proposed a clever probability mass distribution which is constant on the 0-1 points. Let p and q be positive numbers such that p + q = 1. We recall that a discrete random variable x has Bernoulli distribution if Pr{x = 0} = p and Pr{x = 1} = q. We have Ex = q and var x = qp. Conversely, if Ex = ζ for some 0 < ζ < 1 then q = ζ, p = 1 − ζ, and var x = ζ − ζ². Their second main result is as follows.
Theorem 2 (Theorem 2.4 in [15]). Let P ⊂ R^n be the intersection of an affine subspace in R^n and the unit cube {0 ≤ x_j ≤ 1 : j = 1, ..., n}. Suppose that P has a nonempty interior, that is, contains a point y = (η_1, ..., η_n) where 0 < η_j < 1 for j = 1, ..., n. Then the strictly concave function

    h(x) = Σ_{j=1}^n (x_j ln(1/x_j) + (1 − x_j) ln(1/(1 − x_j)))

attains its maximum value on P at a unique point z = (ζ_1, ..., ζ_n) such that 0 < ζ_j < 1 for j = 1, ..., n.
Suppose now that x_j are independent Bernoulli random variables with expectations ζ_j for j = 1, ..., n. Let X = (x_1, ..., x_n). Then the probability mass function of X is constant on P ∩ {0, 1}^n and equal to e^{−h(z)} for every x ∈ P ∩ {0, 1}^n. In particular,

    Pr{X ∈ P} = e^{−h(z)} |P ∩ {0, 1}^n|.
Now we discuss the important practical consequences of the above results. First of all, the two theorems above suggest two simple sampling algorithms:
1. Solve the optimization problem
γ(P) = max Σⱼ₌₁ⁿ g(xⱼ) subject to x ∈ P.
2. Let z = (ζ₁, …, ζₙ) be the unique optimal point and γ the optimum value. Sample a random point X = (x₁, …, xₙ) by sampling each xⱼ independently at random from the geometric distribution with expectation ζⱼ, that is, with parameter pⱼ = (1 + ζⱼ)⁻¹. Check whether X ∈ P. Repeat many times and compute the proportion α of times that the point X lands in P (i.e., 0 ≤ α ≤ 1).
3. Estimate |P ∩ ℤⁿ| ≈ α e^γ.
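This first algorithm can be sketched in a few lines of Python. The sketch below is ours and assumes the maximizer z is already known (for the toy polytope P = {x ≥ 0 : x₁ + x₂ = 10}, which has exactly 11 lattice points, symmetry gives z = (5, 5) without running a convex solver):

```python
import math
import random

def estimate_integer_points(z, in_P, n_samples=200_000, seed=0):
    """Estimate |P ∩ Z^n| as alpha * e^gamma, where gamma = g(z) is the
    maximum of the concave function over P and alpha is the hit rate."""
    rng = random.Random(seed)
    # gamma = g(z) = sum_j ((zeta_j + 1) ln(zeta_j + 1) - zeta_j ln zeta_j)
    gamma = sum((t + 1) * math.log(t + 1) - t * math.log(t) for t in z)
    hits = 0
    for _ in range(n_samples):
        # geometric variable on {0, 1, 2, ...} with expectation zeta_j,
        # i.e. P(x = k) = q^k (1 - q) with q = zeta_j / (1 + zeta_j)
        x = tuple(
            int(math.log(1.0 - rng.random()) // math.log(t / (1.0 + t)))
            for t in z
        )
        if in_P(x):
            hits += 1
    return hits / n_samples * math.exp(gamma)

# Toy example: P = {x >= 0 : x1 + x2 = 10} has exactly 11 lattice points,
# and by symmetry the entropy maximizer is z = (5, 5).
est = estimate_integer_points([5.0, 5.0], lambda x: x[0] + x[1] == 10)
print(round(est, 2))  # typically close to 11
```

With a couple hundred thousand samples the estimate lands close to the true count of 11; the hedged inverse-transform sampler above avoids any dependence on an external statistics package.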
Similarly, by using the entropy function h of Theorem 2 instead of the concave function g of Theorem 1, we can get a sampling algorithm to estimate the number of 0–1 points inside P:
1. Solve the optimization problem
γ(P) = max Σⱼ₌₁ⁿ h(xⱼ) subject to x ∈ P.
2. Let z = (ζ₁, …, ζₙ) be the unique optimal point and γ the optimum value. Sample a random binary point X = (x₁, …, xₙ) by sampling each xⱼ independently at random from the Bernoulli distribution where xⱼ is equal to 1 with probability ζⱼ.
3. Check whether X ∈ P. Repeat many times and compute the proportion α of times that the point X lands in P (i.e., 0 ≤ α ≤ 1).
4. Estimate |P ∩ {0, 1}ⁿ| ≈ α e^γ.
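The 0–1 variant looks almost identical, with Bernoulli draws and the entropy h in place of g. The sketch, the toy polytope {x ∈ {0,1}³ : x₁ + x₂ + x₃ = 2} (exactly 3 points), and its by-symmetry maximizer z = (2/3, 2/3, 2/3) are ours:

```python
import math
import random

def estimate_binary_points(z, in_P, n_samples=100_000, seed=0):
    """Estimate |P ∩ {0,1}^n| as alpha * e^gamma with gamma = h(z)."""
    rng = random.Random(seed)
    # gamma = h(z) = sum_j (zeta ln(1/zeta) + (1 - zeta) ln(1/(1 - zeta)))
    gamma = sum(-t * math.log(t) - (1 - t) * math.log(1 - t) for t in z)
    hits = sum(
        in_P(tuple(1 if rng.random() < t else 0 for t in z))
        for _ in range(n_samples)
    )
    return hits / n_samples * math.exp(gamma)

# Toy example: {x in {0,1}^3 : x1 + x2 + x3 = 2} has exactly 3 points;
# by symmetry the entropy maximizer is z = (2/3, 2/3, 2/3).
est = estimate_binary_points([2 / 3] * 3, lambda x: sum(x) == 2)
print(round(est, 2))  # typically close to 3
```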
At the same time, Barvinok and Hartigan developed a heuristic approximation to the above numbers that does not require sampling. We describe it first for general integer programs. For a polytope P, defined by a system Ax = b, x ≥ 0, we find the point z = (ζ₁, …, ζₙ) maximizing

g(x) = Σⱼ₌₁ⁿ ( (xⱼ + 1) ln(xⱼ + 1) − xⱼ ln xⱼ )

on P. Let A = (aᵢⱼ). We compute the d × d matrix Q = (qᵢⱼ) by

qᵢⱼ = Σₖ₌₁ⁿ aᵢₖ aⱼₖ (ζₖ + ζₖ²).

We assume that A is an integer d × n matrix of rank d < n. Let Λ = A(ℤⁿ) be the image of the standard lattice, Λ ⊂ ℤᵈ. The proposed approximation to the number of integer points in P is given by

|P ∩ ℤⁿ| ≈ e^g(z) det Λ / ( (2π)^(d/2) (det Q)^(1/2) ).
Similarly, for estimating the number of 0–1 points inside P, we find the point z = (ζ₁, …, ζₙ) maximizing

h(x) = Σⱼ₌₁ⁿ ( xⱼ ln(1/xⱼ) + (1 − xⱼ) ln(1/(1 − xⱼ)) )

on the truncated polytope defined by the equations Ax = b and the inequalities 0 ≤ x ≤ 1 (that is, 0 ≤ xⱼ ≤ 1 for j = 1, …, n). We then compute the d × d matrix Q = (qᵢⱼ) by

qᵢⱼ = Σₖ₌₁ⁿ aᵢₖ aⱼₖ ζₖ(1 − ζₖ).

This way the desired approximation to the number of 0–1 vectors in P is given by

e^h(z) det Λ / ( (2π)^(d/2) (det Q)^(1/2) ),

where Λ = A(ℤⁿ).
Remark. It is well known that all the above-mentioned optimization problems can be solved quite efficiently, both in theory and in practice, by interior point methods; see [22, 77]. This is an additional attraction of these ideas.
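For the same toy polytope used earlier, the sampling-free heuristic reduces to a one-formula computation. The following sketch is ours (a naive Laplace determinant suffices for the tiny d here); z and det Λ are supplied by hand, since A = (1 1) gives Λ = A(ℤ²) = ℤ with det Λ = 1:

```python
import math

def det(M):
    """Determinant by Laplace expansion (fine for the tiny d used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum(
        (-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        for j in range(len(M))
    )

def gaussian_estimate(A, z, det_lattice=1.0):
    """Barvinok-Hartigan heuristic e^g(z) det(L) / ((2 pi)^(d/2) det(Q)^(1/2))
    for the number of integer points of P = {Ax = b, x >= 0}."""
    d, n = len(A), len(z)
    g = sum((t + 1) * math.log(t + 1) - t * math.log(t) for t in z)
    w = [t + t * t for t in z]  # zeta_k + zeta_k^2
    Q = [[sum(A[i][k] * A[j][k] * w[k] for k in range(n)) for j in range(d)]
         for i in range(d)]
    return math.exp(g) * det_lattice / ((2 * math.pi) ** (d / 2) * math.sqrt(det(Q)))

# P = {x >= 0 : x1 + x2 = 10}: A = [[1, 1]], z = (5, 5), det Lambda = 1.
# The heuristic gives about 11.48 against the true count of 11.
print(round(gaussian_estimate([[1, 1]], [5.0, 5.0]), 2))
```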
To conclude, we report on the practical experimental performance of the two estimation algorithms derived from the theory in [15]. Our test sets included knapsacks (both general and binary, some randomly generated and others artificially constructed), multiway transportation problems, market split problems, and combinatorial 0–1 problems (graph enumeration). In all instances the goal was to measure the accuracy of the estimation. We observed that, in a good number of cases, the Barvinok-Hartigan techniques perform rather well. Most important, the methods used here require no prior knowledge of the polyhedron (e.g., integral feasibility) and work for arbitrary general rational polyhedra. We divided our experiments into four families: (1) knapsack simplices, (2) flow polytopes and two- and three-way contingency tables, (3) market split problems, and (4) combinatorial 0–1 polyhedra. Each category is reported in the Appendix. Note that categories (1) and (2) are polyhedra with lattice points with "large" integer entries, while categories (3) and (4) are testing examples of binary polyhedra. In all cases the problems are given in the canonical form Ax = b, x ≥ 0, and we checked the accuracy of the estimation using the software LattE and Azove.
All computations were done on a 2 GHz Pentium PC running Red Hat Linux. For the solution of the necessary convex optimization problems we used LOQO, a software package for solving general (smooth) nonlinear optimization problems. LOQO implements an infeasible primal-dual path-following method. Even though LOQO can handle nonconvex problems in general, it performs much better on convex problems (see [21, 86, 87] and references therein). This was indeed the case in our situation: in absolutely all the instances that we tested, the number of iterations was never above twenty. Thus we can say that finding the necessary estimation parameters can be done easily and efficiently.
The simulation of the geometric and the Bernoulli distributions used in the two sampling algorithms was done using the statistics package of MAPLE version 12. Each experiment consisted of finding the parameter γ and the probability distribution parameters for each instance. Then we sampled three different times, with 1000, 5000, and 10000 samples and the same test parameters read from LOQO, and we report the average. As described in the algorithms, the sampling parameters are the exponent γ and the parameters for the geometric and Bernoulli random variables, which come in the form of a point of the polytope.
[Figure 4 here: relative error over a total of 108 experiments; most experiments were within 5% error.]
The evidence we have presented in the Appendix supports the usefulness and accuracy of the Barvinok-Hartigan Gaussian estimation. The easy-to-use Gaussian estimation is widely applicable and fairly accurate in most instances we tried. In 40 percent of the tests done, the relative error of the Gaussian estimation was 5% or less. Only in the most pathological cases (e.g.,
market split, Aardal knapsacks) can one see a prediction worse than twice the correct value. See Figure 4 for details. We noted also that the algorithms seem to scale well as the number of variables grows. The sampling algorithm is nevertheless not very good. It performed best for knapsack problems, but when two or more constraints are in play it is clearly not as reliable for estimating the number of lattice points in polyhedra. Aggregation of constraints did not help with this problem.
Of course, as expected from the theoretical hardness of the problem, one can indeed find rather nasty problems for which the number of lattice points grows arbitrarily large and yet the estimation is consistently poor. Nevertheless, the experiments suggest that the convex optimization problems solved to find the sampling parameters could in principle be used as a fairly good measure of how "well-behaved" an integer linear program will turn out to be.
We conclude by saying that although other algorithms are available to carry out the same kind of estimation, we are not aware of any other algorithm that works in such generality without prior knowledge of the integer feasibility of the polyhedron in question. For instance, the MCMC and importance sampling methods both rely on the ability to generate random lattice points according to pivot or completion procedures, which is unavailable for most polyhedra. Thus the Barvinok-Hartigan process has the distinctive advantage of not requiring prior knowledge about the input polyhedron.
Acknowledgements
The author is grateful to the Institute for Pure and Applied Mathematics at UCLA for their hospitality during the time this article was written. The author gratefully acknowledges support from NSF grant DMS-0914107. He also wants to thank Prof. Alexander Barvinok, Prof. Benjamin Morris, and Prof. Hande Benson for many useful conversations that greatly helped the creation of this article. Very special thanks go to Prof. Andrea Lodi for his help and support.
Appendix
Details for experiments with the Gaussian estimation of Barvinok and Hartigan are available in the online appendix, accessible at www.mathprog.org//OptimaIssues/optim8 app.pdf.
J. A. De Loera, Department of Mathematics, University of California, Davis
CA, USA. deloera@math.ucdavis.edu, www.math.ucdavis.edu/deloera
References
[1] Aardal, K., Lenstra, A.K., and Lenstra, H.W. Jr.: Hard equality constrained integer knapsacks. In: W.J. Cook and A.S. Schulz (eds.), Integer Programming and Combinatorial Optimization: 9th International IPCO Conference, Lecture Notes in Computer Science, vol. 2337, Springer-Verlag, 350–366, 2002.
[2] Aardal, K., Weismantel, R., and Wolsey, L.: Non-standard approaches to integer programming, Discrete Applied Math. 123, No. 1–3 (2002), 5–74.
[3] Achterberg, T., Heinz, S., and Koch, T.: Counting solutions of integer programs using unrestricted subtree detection, ZIB technical report 08-09, 2008.
[4] Adleman, L. and Manders, K.: NP-complete decision problems for quadratic polynomials. In: Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, Hershey, PA, 1976, 23–29, ACM, New York, 1976.
[5] Andrews, G., Ekhad, S., and Zeilberger, D.: A short proof of Jacobi's formula for the number of representations of an integer as a sum of four squares, American Math. Monthly, 100, 274–276, 1993.
[6] Avis, D. and Fukuda, K.: Reverse search for enumeration, Discrete Applied Math., Vol. 65, 21–46, 1996.
[7] Balcioglu, A. and Wood, R.: Enumerating near-min s-t cuts. In: D.L. Woodruff (ed.), Network Interdiction and Stochastic Integer Programming, Vol. 22, Series Operations Research/Computer Science Interfaces, Kluwer, Boston, 2002.
[8] Baldoni-Silva, W., De Loera, J., and Vergne, M.: Counting integral flows on networks, Foundations of Computational Mathematics, Vol. 4, 277–314, 2004.
[9] Barvinok, A.I.: A Course in Convexity, Graduate Studies in Mathematics, Vol. 54, Amer. Math. Soc., Providence, RI, 2002.
[10] Barvinok, A.I.: Polynomial time algorithm for counting integral points in polyhedra when the dimension is fixed, Math. of Operations Research, Vol. 19, 769–779, 1994.
[11] Barvinok, A.I. and Pommersheim, J.: An algorithmic theory of lattice points in polyhedra. In: New Perspectives in Algebraic Combinatorics (Berkeley, CA, 1996–1997), 91–147, Math. Sci. Res. Inst. Publ. 38, Cambridge Univ. Press, Cambridge, 1999.
[12] Barvinok, A.I. and Woods, K.: Short rational generating functions for lattice point problems, J. Amer. Math. Soc., Vol. 16, 957–979, 2003.
[13] Barvinok, A.I.: Enumerating contingency tables via random permanents, Combinatorics, Probability and Computing, Vol. 17, No. 1 (2008), 1–19.
[14] Barvinok, A.I.: Asymptotic estimates for the number of contingency tables, integer flows, and volumes of transportation polytopes, International Mathematics Research Notices 2009 (2009), 348–385.
[15] Barvinok, A.I. and Hartigan, J.: Maximum entropy Gaussian approximation for the number of integer points and volumes of polytopes, manuscript, 2009, available at www.math.lsa.umich.edu/barvinok/papers.html and at the arXiv, http://arxiv.org/abs/0903.5223.
[16] Beck, M. and Pixton, D.: The Ehrhart polynomial of the Birkhoff polytope, Discrete Comput. Geom. 30, no. 4 (2003), 623–637.
[17] Beck, M.: Counting lattice points by means of the residue theorem, Ramanujan Journal, 4, no. 3, 299–310, 2000.
[18] Behle, M. and Eisenbrand, F.: 0/1 vertex and facet enumeration with BDDs. In: David Applegate et al. (eds.), Proceedings of the 9th Workshop on Algorithm Engineering and Experiments (ALENEX'07), New Orleans, U.S.A., SIAM, 158–165, 2007.
[19] Brion, M.: Points entiers dans les polyèdres convexes, Ann. Sci. École Norm. Sup., Vol. 21, 653–663, 1988.
[20] Brion, M. and Vergne, M.: Residue formulae, vector partition functions and lattice points in rational polytopes, J. Amer. Math. Soc., Vol. 10, 797–833, 1997.
[21] Benson, H., Shanno, D.F., and Vanderbei, R.J.: A comparative study of large-scale nonlinear optimization algorithms. In: High Performance Algorithms and Software for Nonlinear Optimization (G. Di Pillo and A. Murli, eds.), 94–126, Kluwer Academic Publishers, 2003.
Figure 4. Error estimation over tests
[22] Ben-Tal, A. and Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, 2001.
[23] Blitzstein, J. and Diaconis, P.: A sequential importance sampling algorithm for generating random graphs with prescribed degrees, manuscript, 2009.
[24] Bussieck, M.R. and Lübbecke, M.E.: The vertex set of a 0/1-polytope is strongly P-enumerable, Comput. Geom., 11(2):103–109, 1998.
[25] Chen, Y., Dinwoodie, I., Dobra, A., and Huber, M.: Lattice points, contingency tables, and sampling, Contemporary Mathematics, Vol. 374, 65–78, American Mathematical Society, 2005.
[26] Chen, Y., Diaconis, P., Holmes, S., and Liu, J.S.: Sequential Monte Carlo methods for statistical analysis of tables, Journal of the American Statistical Association, 100 (2005), 109–120.
[27] Chen, Y., Dinwoodie, I.H., and Sullivant, S.: Sequential importance sampling for multiway tables, The Annals of Statistics, 34 (2006), 523–545.
[28] Clauss, P. and Loechner, V.: Parametric analysis of polyhedral iteration spaces, Journal of VLSI Signal Processing, Vol. 19, 179–194, 1998.
[29] Cornuéjols, G. and Dawande, M.: A class of hard small 0-1 programs. In: R.E. Bixby, E.A. Boyd, and R.Z. Ríos-Mercado (eds.), Integer Programming and Combinatorial Optimization: 6th International IPCO Conference, Houston, Texas, June 1998, Springer Lecture Notes in Computer Science 1412, Heidelberg, 1998.
[30] Cox, D., Little, J., and O'Shea, D.: Ideals, Varieties and Algorithms, Undergraduate Texts in Mathematics, Springer, New York, 1992.
[31] Danna, E., Fenelon, M., Gu, Z., and Wunderling, R.: Generating multiple solutions for mixed integer programming problems. In: M. Fischetti and D.P. Williamson (eds.), Integer Programming and Combinatorial Optimization: 12th International IPCO Conference, Ithaca, New York, June 2007, Springer Lecture Notes in Computer Science 4513, 280–294.
[32] De Loera, J. and Sturmfels, B.: Algebraic unimodular counting, Math. Programming Ser. B, Vol. 96, No. 2 (2003), 183–203.
[33] De Loera, J.A., Haws, D., Hemmecke, R., Huggins, P., Tauzer, J., and Yoshida, R.: A User's Guide for LattE, software package LattE and manual are available at www.math.ucdavis.edu/latte/, 2003.
[34] De Loera, J.A., Hemmecke, R., Tauzer, J., and Yoshida, R.: Effective lattice point counting in rational convex polytopes, J. Symbolic Comp., Vol. 38, 1273–1302, 2004.
[35] De Loera, J.A., Hemmecke, R., Köppe, M., and Weismantel, R.: Integer polynomial optimization in fixed dimension, Mathematics of Operations Research, Vol. 31, No. 1 (2006), 147–153.
[36] De Loera, J.A., Hemmecke, R., and Köppe, M.: Pareto optima of multicriteria integer programs, INFORMS Journal on Computing, Vol. 21, No. 1 (2009), 39–48.
[37] De Loera, J.A. and Onn, S.: Markov bases of three-way tables are arbitrarily complicated, J. Symbolic Computation, 41:173–181, 2006.
[38] Diaconis, P. and Gangolli, A.: Rectangular arrays with fixed margins. In: Discrete Probability and Algorithms (Minneapolis, MN, 1993), 15–41, IMA Vol. Math. Appl., 72, Springer, New York, 1995.
[39] Diaconis, P. and Sturmfels, B.: Algebraic algorithms for sampling from conditional distributions, Annals of Statistics, 26(1):363–397, 1998.
[40] Diaz, R. and Robins, S.: The Ehrhart polynomial of a lattice polytope, Annals of Mathematics, 145, 503–518, 1997.
[41] Dyer, M. and Kannan, R.: On Barvinok's algorithm for counting lattice points in fixed dimension, Math. of Operations Research 22 (1997), 545–549.
[42] Dyer, M.: Approximate counting by dynamic programming. In: Proceedings of the 35th Annual ACM Symposium on the Theory of Computing (STOC 03), ACM Press, 693–699, 2003.
[43] Dyer, M., Mount, D., Kannan, R., and Mount, J.: Sampling contingency tables, Random Structures and Algorithms, Vol. 10, 487–506, 1997.
[44] Ehrhart, E.: Polynômes arithmétiques et méthode des polyèdres en combinatoire, International Series of Numerical Mathematics, Vol. 35, Birkhäuser Verlag, Basel, 1977.
[45] Fienberg, S.E., Makov, U.E., Meyer, M.M., and Steele, R.J.: Computing the exact conditional distribution for a multiway contingency table conditional on its marginal totals. In: Data Analysis From Statistical Foundations, 145–166, Nova Science Publishers, A.K.Md.E. Saleh, Huntington, NY, 2001.
[46] Garey, M.R. and Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, 1979.
[47] Ghosh, S., Martonosi, M., and Malik, S.: Cache miss equations: a compiler framework for analyzing and tuning memory behavior, ACM Transactions on Programming Languages and Systems, 21, 703–746, 1999.
[48] Grötschel, M., Lovász, L., and Schrijver, A.: Geometric Algorithms and Combinatorial Optimization, second edition, Algorithms and Combinatorics, 2, Springer-Verlag, Berlin, 1993.
[49] Guy, R.K.: Unsolved Problems in Number Theory, 2nd ed., Springer-Verlag, New York, 240–241, 1994.
[50] Huxley, M.N.: Area, Lattice Points, and Exponential Sums, Oxford Univ. Press, Oxford, 1996.
[51] Jacobi, C.: Gesammelte Werke, Berlin, 1881–1891. Reprinted by Chelsea, New York, 1969.
[52] Jarre, F.: Relating max-cut problems and binary linear feasibility problems, available at www.optimizationonline.org/DB_FILE/2009/02/2237.pdf.
[53] Jerrum, M.R.: Counting, Sampling and Integrating: Algorithms and Complexity, Birkhäuser, Basel, 2003.
[54] Jerrum, M.R. and Sinclair, A.: The Markov chain Monte Carlo method: an approach to approximate counting and integration. In: Dorit Hochbaum (ed.), Approximation Algorithms, PWS Publishing Company, Boston, 1997.
[55] Jerrum, M.R., Valiant, L.G., and Vazirani, V.V.: Random generation of combinatorial structures from a uniform distribution, Theoretical Computer Science, 43:169–188, 1986.
[56] Jerrum, M., Sinclair, A., and Vigoda, E.: A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries, Journal of the ACM, 51(4):671–697, 2004.
[57] Jones, J.P.: Universal diophantine equations, Journal of Symbolic Logic, 47(3), 403–410, 1982.
[58] Kannan, R. and Vempala, S.: Sampling lattice points. In: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, USA, 1997, 696–700.
[59] Kantor, J.M. and Khovanskii, A.: Une application du Théorème de Riemann-Roch combinatoire au polynôme d'Ehrhart des polytopes entiers, C. R. Acad. Sci. Paris, Series I, Vol. 317, 501–507, 1993.
[60] Köppe, M.: A primal Barvinok algorithm based on irrational decompositions. To appear in SIAM Journal on Discrete Mathematics.
[61] Lenstra, H.W.: Integer programming with a fixed number of variables, Mathematics of Operations Research, 8, 538–548.
[62] Lasserre, J.B. and Zeron, E.S.: On counting integral points in a convex rational polytope, Math. Oper. Res. 28, 853–870, 2003.
[63] Lasserre, J.B. and Zeron, E.S.: A simple explicit formula for counting lattice points of polyhedra, Lecture Notes in Computer Science 4513, Springer-Verlag, 367–381, 2007.
[64] Micciancio, D. and Goldwasser, S.: Complexity of Lattice Problems: A Cryptographic Perspective, Kluwer International Series in Engineering and Computer Science, 671, Kluwer, Boston, MA, 2002.
[65] Lisonek, P.: Denumerants and their approximations, Journal of Combinatorial Mathematics and Combinatorial Computing, Vol. 18, 225–232, 1995.
[66] Loebl, M. and Zdeborová, L.: The 3D dimer and Ising problems revisited, European J. Combinatorics, Vol. 29, No. 3 (2008).
[67] Loechner, V., Meister, B., and Clauss, P.: Precise data locality optimization of nested loops, J. Supercomput., Vol. 21, 37–76, 2002.
[68] Macdonald, I.G.: Polynomials associated with finite cell-complexes, J. London Math. Soc. (2), 4, 181–192, 1971.
[69] Mahadev, N.V.R. and Peled, U.: Threshold Graphs and Related Topics, Annals of Discrete Mathematics, Vol. 56, North-Holland Publishing Co., Amsterdam.
[70] Mangasarian, O.L.: Knapsack feasibility as an absolute value equation solvable by successive linear programming, Optimization Letters 3(2) (2009), 161–170.
[71] Mangasarian, O.L. and Recht, B.: Probability of unique integer solution to a system of linear equations, Data Mining Institute, Univ. of Wisconsin, Technical Report 09-01, September 2009.
[72] Mount, J.: Fast unimodular counting, Combinatorics, Probability, and Computing 9 (2000), 277–285.
[73] Morelli, R.: Pick's theorem and the Todd class of a toric variety, Adv. Math. 100, 183–231, 1993.
[74] Morris, B. and Sinclair, A.: Random walks on truncated cubes and sampling 0-1 knapsack solutions, SIAM Journal on Computing, 34 (2004), 195–226.
[75] Pemantle, R. and Wilson, M.: Asymptotics of multivariate sequences, part I: smooth points of the singular variety, J. Comb. Theory, Series A, Vol. 97, 129–161, 2001.
[76] Pesant, G.: Counting and estimating lattice points: special polytopes for branching heuristics in constraint programming. This issue.
[77] Renegar, J.: A Mathematical View of Interior-Point Methods in Convex Optimization, MPS-SIAM Series in Optimization, 2001.
[78] Sankaranarayanan, S., Ivancic, F., and Gupta, A.: Program analysis using symbolic ranges. In: Proceedings of the Static Analysis Symposium 2007, Vol. 4634 of Lecture Notes in Computer Science, Springer, 366–383.
[79] Schrijver, A.: Theory of Linear and Integer Programming, Wiley-Interscience, 1986.
[80] Stanley, R.P.: Decompositions of rational convex polytopes. In: Combinatorial Mathematics, Optimal Designs and Their Applications (Proc. Sympos. Combin. Math. and Optimal Design, Colorado State Univ., Fort Collins, Colo., 1978), Ann. Discrete Math., 6, 333–342, 1980.
[81] Stanley, R.P.: Enumerative Combinatorics, Volume 1, Cambridge, 1997.
[82] Sturmfels, B.: Gröbner Bases and Convex Polytopes, University Lecture Series, Vol. 8, AMS, Providence, RI, 1996.
[83] Soprunov, I. and Soprunova, J.: Toric surface codes and Minkowski length of polygons, SIAM J. Discrete Math. 23 (2008/09), no. 1, 384–400.
[84] Szenes, A. and Vergne, M.: Residue formulae for vector partitions and Euler-Maclaurin sums, Adv. in Appl. Math., 30, 295–342, 2003.
[85] Thomas, R.: Algebraic methods in integer programming. In: Encyclopedia of Optimization (C. Floudas and P. Pardalos, eds.), Kluwer Academic Publishers, Dordrecht, 2001.
[86] Vanderbei, R.J.: LOQO user's manual version 3.10, Optimization Methods and Software, Vol. 11, Issue 1–4 (1999), 485–514.
[87] Vanderbei, R.J. and Shanno, D.F.: An interior-point algorithm for nonconvex nonlinear programming, Computational Optimization and Applications, 13 (1999), 231–252.
[88] Verdoolaege, S.: barvinok: User guide, available from URL http://freshmeat.net/projects/barvinok/, 2007.
[89] Welsh, D.J.A.: Approximate counting. In: Surveys in Combinatorics (R.A. Bailey, ed.), Cambridge Univ. Press, Cambridge, 1997.
[90] Ziegler, G.M.: Lectures on Polytopes, Springer-Verlag, New York, 1995.
Gilles Pesant
Counting and Estimating Lattice Points:
Special Polytopes for Branching Heuristics
in Constraint Programming
1 Introduction
The Counting Problem in combinatorics is important in many areas
and for different reasons (see for example [4] in this issue). The pur
pose of this paper is to describe recent efforts in using counting to
guide heuristic branching within a Constraint Programming frame
work, in order to find solutions to combinatorial problems. In such a context, where counting is done repeatedly and used as a heuristic, exactness is not crucial but execution speed is. The algorithms
we will describe often trade some accuracy for faster computation,
given that one cannot spend more than a fraction of a second on a
given counting instance. What helps is that we will not count solu
tions to the problem as a whole but to parts of that problem, as will
become clear later.
In the rest of this paper we present a short introduction to Con
straint Programming and constraintcentered search heuristics. We
then concentrate on three families of special polytopes and on how
we evaluate the number of lattice points they contain.
2 Constraint Programming
Constraint Programming (CP) is a powerful technique to solve com
binatorial problems [17]. It applies sophisticated inference to reduce
the search space and a combination of variable and valueselection
heuristics to guide the exploration of that search space. Like Inte
ger Programming (IP), one states a model of the problem at hand in
mathematical language and also builds a search tree through problem
decomposition. But there are important differences, among them:
– CP works directly on discrete variables instead of relying mostly on a continuous relaxation of the model;
– the modeling language offers many high-level primitives representing common combinatorial substructures of a problem; often a few constraints are sufficient to express complex problems;
– each of these primitives, which from now on we shall call constraints, may have its own specific algorithm to help solve the problem;
– one does not branch on fractional variables but instead on indeterminate variables, which currently can take several possible values (variables are not necessarily fixed to a particular value at a node of the search tree);
– even though CP can solve optimization problems, it is primarily designed to handle feasibility problems.
In its most general form, a problem is modeled as

x ∈ Cⱼ ⊆ ℤⁿ,  1 ≤ j ≤ p
xᵢ ∈ Dᵢ ⊆ ℤ,  1 ≤ i ≤ n
where x is the vector of variables (x₁, …, xₙ), each Dᵢ, called the domain of xᵢ, is finite, and each Cⱼ is a constraint on the variables, restricting the combinations of values from their respective domains. Constraints are more naturally expressed on a subset of the variables (called its scope), and indeed we will most often present them in that way, writing C(x₁, …, xₘ) to mean constraint C on variables x₁, …, xₘ. In any case they can be trivially extended over all the variables to fit this form. The following sections will provide concrete examples of these constraints. Note that we could have written domains as constraints and even combined all constraints into one, but the above formulation is closer to the mechanics of model solving in CP.
Picturing the model as a (hyper)graph whose vertices are the variables and whose (hyper)edges are the constraints linking the variables in their scope provides insight into the basic algorithm used in CP. Looking locally at a particular edge (constraint), the algorithm attempts to modify the domain of the incident vertices (variables) by removing values which cannot be part of any solution because they would violate that individual constraint; this local consistency step can be performed efficiently. The modification of a vertex's domain triggers the inspection of all incident edges, which in turn may modify other domains. This recursive process stops when either all domain modifications have been dealt with or the empty domain is obtained, in which case no solution exists (i.e., the feasible set is empty). The overall behavior is called constraint propagation.
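This propagation loop can be sketched generically as follows. The sketch is ours and deliberately naive: it checks supports by brute-force enumeration, whereas real CP solvers use the specialized per-constraint algorithms mentioned above:

```python
from itertools import product

def propagate(domains, constraints):
    """Filter domains to a fixpoint. domains: dict var -> set of values;
    constraints: list of (scope, predicate) pairs. Returns False iff some
    domain becomes empty (no solution exists)."""
    queue = list(constraints)
    while queue:
        scope, pred = queue.pop()
        for i, v in enumerate(scope):
            # keep only values of v supported by some satisfying tuple
            supported = {
                d for d in domains[v]
                if any(pred(*t) for t in product(
                    *[[d] if j == i else sorted(domains[u])
                      for j, u in enumerate(scope)]))
            }
            if supported != domains[v]:
                domains[v] = supported
                if not supported:
                    return False  # empty domain: the feasible set is empty
                # a domain changed: re-inspect every constraint on v
                queue.extend(c for c in constraints if v in c[0])
    return True

# x < y with both domains {1, 2, 3}: propagation removes 3 from x, 1 from y.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
propagate(doms, [(("x", "y"), lambda a, b: a < b)])
print(doms["x"], doms["y"])
```

Since domains only ever shrink and a constraint is re-queued only after a strict shrink, the loop terminates.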
Different levels of consistency have been defined to formally describe which values are left in domains; we present the two main ones, after some notation. Let Dᵐⁱⁿ and Dᵐᵃˣ denote respectively the smallest and largest value in set D. We write [D] = {a ∈ ℤ : Dᵐⁱⁿ ≤ a ≤ Dᵐᵃˣ}, the smallest discrete interval containing every value in domain D, and [D]ᴿ = {a ∈ ℝ : Dᵐⁱⁿ ≤ a ≤ Dᵐᵃˣ}, the smallest real interval containing every value in domain D.
Definition 2.1 (domain consistency). Constraint C(x₁, …, xₘ) is domain consistent iff for each variable xᵢ, 1 ≤ i ≤ m, and each value dᵢ ∈ Dᵢ in its domain, there are values dⱼ ∈ Dⱼ for every other variable xⱼ such that (d₁, …, dₘ) ∈ C.
Definition 2.2 (bounds consistency). Constraint C(x₁, …, xₘ) is bounds consistent iff for each variable xᵢ, 1 ≤ i ≤ m, there are values dⱼ ∈ [Dⱼ]ᴿ for every other variable xⱼ such that (d₁, …, dᵢ₋₁, Dᵢᵐⁱⁿ, dᵢ₊₁, …, dₘ) ∈ C and (d₁, …, dᵢ₋₁, Dᵢᵐᵃˣ, dᵢ₊₁, …, dₘ) ∈ C.
Note that the "supporting" dⱼ's in bounds consistency are possibly real values. There is an analogy to be made here with the use of continuous relaxations in IP: whereas domain consistency can be expensive to enforce in general (but see particular cases in Sections 4 and 5) because of its combinatorial nature over a discrete domain, bounds consistency typically only requires simple algebraic manipulations over relaxed continuous domains. However, bounds consistency is a strictly weaker property than domain consistency and, unless stated otherwise, from now on when we talk of consistency we refer to domain consistency.
Since constraint propagation may stop with indeterminate variables (i.e., whose domain still contains several values), the solution process requires search, with its potentially exponential cost. Search usually takes the form of a tree search in which branching corresponds to fixing a variable to a value in its domain, thus triggering more constraint propagation. We call variable-selection heuristic and value-selection heuristic the way one decides which variable to branch on and which value to try first, respectively.
3 Constraint-Centered Branching Heuristics
Until recently, the only visible effect of the consistency algorithms had been on the domains, projecting a constraint's set of solutions on each of the variables. Accordingly, most branching heuristics in CP rely on information at the level of individual variables, essentially looking at the cardinality of their domain or at the number of constraints on them. Constraints play a central role in CP because they encapsulate powerful specialized consistency algorithms, but primarily because they bring out the underlying structure of combinatorial problems. That exposed structure can also be exploited during search. Information about the number of solutions to a constraint can help a search heuristic focus on critical parts of a problem or on promising solution fragments. Such constraint-centered branching heuristics rely on some knowledge about the number of solutions to individual constraints [22].
In the rest of this article we concentrate on evaluating the number of solutions of certain combinatorial structures (i.e., constraints) for the purpose of constraint-centered branching heuristics. Keep in mind that counting here is not an end in itself but a means to guide search, and it will be performed repeatedly. Therefore, in this context we are sometimes willing to trade some accuracy for speed of execution.
One simple constraint-centered branching heuristic that has been very successful branches first on the variable assignment with the highest solution density overall. But first we need the following definitions.
Definition 3.3 (solution count). Given a constraint C(x₁, …, xₘ) and respective finite domains Dᵢ, 1 ≤ i ≤ m, we denote by #C(x₁, …, xₘ) the number of points in D₁ × ⋯ × Dₘ that are solutions to constraint C, called its solution count.
Definition 3.4 (solution density). Given a constraint C(x₁, …, xₘ), respective finite domains Dᵢ, 1 ≤ i ≤ m, a variable xᵢ in the scope of C, and a value d ∈ Dᵢ, we will call

σ(xᵢ, d, C) = #C(x₁, …, xᵢ₋₁, d, xᵢ₊₁, …, xₘ) / #C(x₁, …, xₘ)

the solution density of pair (xᵢ, d) in C. It measures how often a certain assignment is part of a solution.
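For tiny domains both quantities can be computed by plain enumeration, which makes the definitions concrete; the sketch and its function names are ours:

```python
from itertools import product

def solution_count(domains, C):
    """#C: number of tuples of D1 x ... x Dm satisfying constraint C."""
    return sum(1 for t in product(*domains) if C(*t))

def solution_density(domains, C, i, d):
    """sigma(x_i, d, C): fraction of solutions of C in which x_i = d
    (brute-force enumeration; only viable for very small domains)."""
    restricted = [dom if j != i else [d] for j, dom in enumerate(domains)]
    return solution_count(restricted, C) / solution_count(domains, C)

# alldifferent over three variables, each with domain {1, 2, 3}
doms = [[1, 2, 3]] * 3
alldiff = lambda *t: len(set(t)) == len(t)
print(solution_count(doms, alldiff))          # 6: the 3! permutations
print(solution_density(doms, alldiff, 0, 1))  # 1/3: 2 of the 6 solutions
```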
As much as possible, we wish these concepts to carry over to polytopes. For the constraints we introduce, corresponding polytopes can be defined with the understanding that each domain D is relaxed to [D]. So strictly speaking we may not be counting over the same structure if a domain has "holes". But with that exception, the concept of solution count of a constraint corresponds to counting lattice points in the corresponding polytope. The numerator in the definition of solution density can also be seen as the number of lattice points inside the intersection of the polytope describing C with the hyperplane xᵢ = d, yielding yet another polytope.
Before moving on we mention a few related exceptions to the norm with regard to CP branching heuristics. Kask et al. [5] approximate the solution count of the whole problem given a partial solution and use it in a value-selection heuristic, choosing the value whose assignment to the current variable gives the largest approximate solution count. Inspired by the use of pseudo-costs in IP, Refalo [14] proposes a generic variable-selection heuristic based on the impact the assignment of a variable has on the reduction of the remaining search space, roughly approximated as the Cartesian product of the domains of the variables.
4 The Frequency-of-Occurrence Polytopes
Problems of a combinatorial nature often make statements about the number of occurrences of certain objects in any solution. This has been recognized in CP and given the status of a few special constraints, the first ones we will investigate.
Definition 4.5 (Global Cardinality Constraint [16]). The set of feasible tuples of a constraint gcc(X, D, l, u), where X = {x₁, …, xₘ} is a set of variables, D ⊂ ℤ is a set of values, and l : D → ℕ, u : D → ℕ are functions, is defined as:

gcc(X, D, l, u) = {(d₁, …, dₘ) : dᵢ ∈ Dᵢ ∀ 1 ≤ i ≤ m, l(d) ≤ |{dᵢ : dᵢ = d}| ≤ u(d) ∀ d ∈ D}

These functions associate with each value in D a lower and an upper bound on the number of times it occurs in a tuple.
Even more frequent is the special case where each value should occur at most once, i.e. the values taken by the given variables should be pairwise different. It is usually considered separately.
Definition 4.6 (All Different Constraint [15]). The set of feasible tuples of a constraint alldifferent(X), where X = {x1, ..., xm}, is defined as:

alldifferent(X) = {(d1, ..., dm) : di ∈ Di, di ≠ dj ∀ 1 ≤ i < j ≤ m}
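As a concrete baseline for the counting notions above, the solution count and solution densities of a small alldifferent constraint can be obtained by brute-force enumeration. A minimal sketch in Python; the function names are ours, not from the article:

```python
from itertools import product

def alldifferent_solutions(domains):
    # Enumerate D_1 x ... x D_m and keep the pairwise-distinct tuples.
    return [t for t in product(*domains) if len(set(t)) == len(t)]

def solution_density(domains, i, d):
    # Fraction of solutions in which variable x_i takes value d.
    sols = alldifferent_solutions(domains)
    return sum(t[i] == d for t in sols) / len(sols)

# x1, x2 in {1,2} and x3 in {1,2,3}: every solution must fix x3 = 3.
doms = [{1, 2}, {1, 2}, {1, 2, 3}]
print(len(alldifferent_solutions(doms)))   # 2 solutions: (1,2,3) and (2,1,3)
print(solution_density(doms, 2, 3))        # 1.0
```

This exponential enumeration is only a reference point; the remainder of the section develops far cheaper counting and estimation schemes.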
The polytopes of these constraints are well understood and correspond to particular network flow formulations.
Example 4.1. Consider constraint gcc({x1, x2, x3, x4}, {1, 2}, l, u) with D1 = D3 = {1, 2}, D2 = D4 = {1}, and l(1) = 1, l(2) = 3, u(1) = 2, u(2) = 5. There is a natural representation of this as a network with one node for each variable and value, from the source to every variable node an arc with demand and capacity 1, for each value d ∈ Di an arc from the "xi" node to the "d" node with demand 0 and capacity 1, and from every "d" node to the sink an arc with demand l(d) and capacity u(d). Figure 1 illustrates the corresponding network. There is a one-to-one correspondence between solutions to the gcc and feasible flows.
In the rest of this section we concentrate on counting algorithms for the alldifferent constraint.
Figure 1. A feasible network flow problem representation of a gcc constraint.
December 2009
Definition 4.7 (Value Graph). Given a set of variables X = {x1, ..., xm} with respective domains D1, ..., Dm, we define the value graph as a bipartite graph G = (X ∪ D, E) where D = ∪i=1,...,m Di and E = {{xi, d} : d ∈ Di}.
There is a natural bijection between maximum matchings of size |X| on the value graph and solutions to the related alldifferent constraint, hence algorithms from matching theory are adapted to reach domain consistency and even bounds consistency.
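The matching argument also yields a quick feasibility test: alldifferent is satisfiable exactly when the value graph admits a matching covering all variables. A minimal sketch using Kuhn's augmenting-path algorithm (an illustration of the idea, not the filtering algorithm of [15]):

```python
def alldifferent_feasible(domains):
    # Kuhn's augmenting-path algorithm: try to match every variable
    # to a distinct value of its domain in the value graph.
    match = {}  # value -> index of the variable currently holding it

    def augment(i, seen):
        for d in domains[i]:
            if d not in seen:
                seen.add(d)
                # Take d if free, or reroute its current owner elsewhere.
                if d not in match or augment(match[d], seen):
                    match[d] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(len(domains)))

print(alldifferent_feasible([{1, 2}, {1, 2}, {1, 2, 3}]))  # True
print(alldifferent_feasible([{1, 2}, {1, 2}, {1, 2}]))     # False
```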
Finding the number of solutions is equivalent to counting the number of maximum matchings on the value graph, which is itself equivalent to the problem of computing the permanent of a square (0-1) matrix A:

per(A) = Σj=1..m a1j per(A1j)

where Aij denotes the submatrix obtained from A by removing row i and column j (the permanent of the empty matrix is equal to 1). That problem is #P-hard, so we turn to estimates.
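The recursive expansion above translates directly into code. Here A is the (0-1) biadjacency matrix of the value graph (rows are variables, columns are values), so for a square matrix per(A) is the number of alldifferent solutions. A naive sketch, exponential by design:

```python
def permanent(A):
    # Laplace expansion along the first row:
    # per(A) = sum_j a_{1j} * per(A_{1j}); per of the empty matrix is 1.
    if not A:
        return 1
    total = 0
    for j, a in enumerate(A[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += a * permanent(minor)
    return total

# x1, x2 in {1,2}, x3 in {1,2,3}: rows are domains as 0-1 indicator vectors.
A = [[1, 1, 0],
     [1, 1, 0],
     [1, 1, 1]]
print(permanent(A))  # 2, matching the two alldifferent solutions
```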
4.1 Sampling
Based on the latter recursive definition, Rasmussen proposed in [13] a very simple recursive estimator for the permanent. Despite its simplicity, the estimator is unbiased and in general shows good experimental behavior. Rasmussen also gave some theoretical assurances for dense matrices, but still for some specific matrices, such as the upper triangular matrix, the estimate is likely to be very poor.
A simple way to improve the quality of the approximation is to add propagation to Rasmussen's estimator. After randomly choosing a row i and a column j, we can propagate on the submatrix Aij in order to remove all the 1-entries (edges) that do not belong to any maximum matching, thus achieving domain consistency (the pseudocode is shown in Algorithm 1). This broadens the applicability of the method; in matrices such as the upper triangular matrix, the propagation can easily lead to the identity matrix, for which the estimator performs exactly. And because we maintain domain consistency, we reach a feasible sample every time, whereas Rasmussen's original algorithm ends up with infeasible solutions whenever it reaches a case in which W = ∅.
The solution count is then simply

#alldifferent ≈ XA

Given the set of solution samples S, we denote by Sxi,d ⊆ S the subset of samples in which xi = d. The solution densities are approximated as:

σ(xi, d, alldifferent) ≈ |Sxi,d| / |S|
The time complexity of the sampling approach is quadratic in the
number of variables and linear in the sum of the domain sizes [22].
if m = 0 then
    XA = 1
else
    Choose i uniformly at random from {1, ..., m};
    W = {j : aij = 1};
    Choose j uniformly at random from W;
    Propagate on Aij;
    Compute XAij;
    XA = |W| · XAij;
end
Return XA;
Algorithm 1. Rasmussen's estimator of the solution count of alldifferent, augmented with propagation.
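Algorithm 1 without its propagation step is just Rasmussen's original estimator, which is short enough to sketch in full. This is a hedged illustration: the propagation call is deliberately omitted, so on some matrices a sample can dead-end with value 0, exactly the failure mode the text describes:

```python
import random

def rasmussen(A, rng):
    # One sample of Rasmussen's unbiased estimator of per(A).
    # The propagation step of Algorithm 1 is omitted in this sketch,
    # so a sample may return 0 on matrices with sparse rows.
    if not A:
        return 1
    i = rng.randrange(len(A))                     # choose a row u.a.r.
    W = [j for j, a in enumerate(A[i]) if a]      # columns with a 1 in row i
    if not W:
        return 0                                  # infeasible sample
    j = rng.choice(W)
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return len(W) * rasmussen(minor, rng)

def estimate(A, samples, seed=0):
    # Average many independent samples of the unbiased estimator.
    rng = random.Random(seed)
    return sum(rasmussen(A, rng) for _ in range(samples)) / samples

# On the all-ones matrix every sample returns 3*2*1 = 6 = per(A) exactly.
print(estimate([[1, 1, 1], [1, 1, 1], [1, 1, 1]], 100))  # 6.0
```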
4.2 Upper Bounds
Even though the sampling approach is fast, there is still a computational price to pay compared to simpler branching heuristics, and it can be a disadvantage when solving easy instances. There are known upper bounds on the permanent that are even faster to compute. Minc [7] conjectured and Brégman [2] then proved that the permanent is bounded from above by the following formula:

per(A) ≤ Πi=1..m (ri!)^(1/ri)

where in our case ri = |Di|. Recently, Liang and Bai [6], inspired by Rasmussen's work, proposed a new upper bound:

per(A)² ≤ Πi=1..m qi (ri − qi + 1)

where qi = min{⌈ri/2⌉, ⌈i/2⌉}. Since neither of the two upper bounds dominates the other, we take their minimum as our alternate approximation of the solution count for an alldifferent constraint. By precomputing and storing part of the formulas, each call takes time which is asymptotically linear in the number of variables involved [21].
For their part, solution densities are computed according to their definition but taking the ratio of the two corresponding approximations.
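Both bounds are cheap to evaluate from the domain sizes alone. A sketch; the exact form of qi above is our reading of the garbled original and should be treated as an assumption, and rows are sorted by increasing ri (the permanent is invariant under row permutations, so any ordering yields a valid bound):

```python
from math import ceil, factorial, sqrt

def minc_bregman(r):
    # Minc-Bregman: per(A) <= prod_i (r_i!)^(1/r_i), with r_i = |D_i|.
    bound = 1.0
    for ri in r:
        bound *= factorial(ri) ** (1.0 / ri)
    return bound

def liang_bai(r):
    # Liang-Bai (as we read it): per(A)^2 <= prod_i q_i (r_i - q_i + 1),
    # with q_i = min(ceil(r_i / 2), ceil(i / 2)).
    prod = 1.0
    for i, ri in enumerate(sorted(r), start=1):
        qi = min(ceil(ri / 2), ceil(i / 2))
        prod *= qi * (ri - qi + 1)
    return sqrt(prod)

def count_upper_bound(domains):
    # Minimum of the two bounds, as the text suggests.
    r = [len(D) for D in domains]
    return min(minc_bregman(r), liang_bai(r))

# Three variables with full domains {1,2,3}: the true count is 3! = 6
# and both bounds happen to be tight here.
print(round(count_upper_bound([{1, 2, 3}] * 3), 6))  # 6.0
```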
4.3 Counting Accuracy
We provide some empirical evidence for the accuracy of the proposed approaches. A thousand instances over 10 to 20 variables and with domains of various sizes were generated and their lattice points enumerated (some instances had billions of solutions). We measured the relative counting error, the maximum absolute solution density error, and the average absolute solution density error.
Figure 2 shows the counting error for the sampling algorithm and Rasmussen's with varying timeout. The results for the upper bound approach are not reported because they are much worse (from 40% to 2300%). Different shades of gray indicate different percentages of removals of values inside the domains; series represent different algorithms and they are grouped based on the varying timeouts. For randomized algorithms we report the average of ten runs.
Figures 3 and 4 show respectively the maximum and average absolute solution density errors. Again the sampling algorithm shows better accuracy than Rasmussen's. Despite their poor performance in approximating the solution count, upper bounds provide a very good tradeoff between approximation accuracy and computation time when deriving solution densities: giving an equivalent amount of time to the other two, they behave best.
Figure 2. Relative counting error for one thousand alldifferent instances with
varying variable domain sizes.
OPTIMA 81
Figure 3. Maximum absolute solution density error for one thousand alldifferent
instances with varying variable domain sizes.
Figure 4. Average absolute solution density error for one thousand alldifferent
instances with varying variable domain sizes.
for variable xi. We denote by vℓ,q the vertex corresponding to state q in layer ℓ. The first layer only contains one vertex, v1,q0; the last layer only contains vertices corresponding to accepting states, vm+1,q with q ∈ F. This graph has the property that paths from the first layer to the last are in one-to-one correspondence with solutions of the constraint. (And any arc not belonging to such a path is removed through a forward/backward sweep of the graph.) Figure 5 gives a very small example of a layered directed graph built for one such constraint on five variables.
As for its polytope, a network flow formulation is given in [3] for
the regular constraint, following closely the structure of the graph
described above.
Definition 5.10 (Context-Free Grammar). A context-free grammar is a 4-tuple G = (Σ, N, S, P) where Σ is an alphabet (the set of terminal symbols), N is a set of non-terminal symbols, S ∈ N is the starting symbol, and P is a set of productions of the form A → w where A ∈ N is a non-terminal symbol and w is a sequence of terminal and non-terminal symbols. The (context-free) language recognized by G is the set of words produced by the leaves of parsing trees for G (obtained through the application of productions to non-terminal symbols, starting from S).
Definition 5.11 (Context-Free Language Membership Constraint). The set of feasible tuples for a constraint grammar(x, G), where x = (x1, ..., xm) and G is a context-free grammar, is defined as:

grammar(x, G)
5 The Sequencing-Order Polytopes
Some combinatorial problems feature sequencing constraints, restricting the way certain objects are sequenced in a given logical (usually temporal) order. This is frequent in rostering when considering activity types over time; it may also occur in production scheduling, e.g. when considering cars with options on an assembly line.
From the realization that a sequence of values can be viewed as a word in some language, researchers in CP turned to the theory of formal languages, which is well known to computer scientists because of its ties with compiler design and the theory of computation. The result was the introduction of the regular [8] and grammar [18, 11] constraints, corresponding respectively to regular and context-free languages.¹ We describe the two constraints and discuss counting the lattice points of the first.
Definition 5.8 (Deterministic Finite Automaton). A deterministic finite automaton is a 5-tuple H = (Q, Σ, δ, q0, F) where Q is a finite set of states, Σ is an alphabet, δ : Q × Σ → Q is a partial transition function, q0 ∈ Q is the initial state, and F ⊆ Q is the set of final (or accepting) states. The transition function can be extended to sequences of symbols (words) from the alphabet: δ(q, (s1, s2, ..., sm)) = δ(· · · δ(δ(q, s1), s2), ..., sm). The (regular) language recognized by H is the set of words {w ∈ Σ* : δ(q0, w) ∈ F}.
Definition 5.9 (Regular Language Membership Constraint). The set of feasible tuples for a constraint regular(x, H), where x = (x1, ..., xm) and H is a finite automaton, is defined as:

regular(x, H) = {(d1, ..., dm) : di ∈ Di, d1 d2 · · · dm belongs to the language recognized by H}
The consistency algorithm associated with this constraint is based on the computation of paths in a graph. The automaton is unfolded into a layered acyclic directed graph G = (V, A) where vertices of a layer correspond to states of the automaton and an arc joining a vertex of layer Li to another of layer Li+1 represents a feasible value
= {(d1, ..., dm) : di ∈ Di, d1 d2 · · · dm belongs to the language recognized by G}
The consistency algorithm for the grammar constraint is based on the CYK parser and builds an and/or graph that encodes every possible parsing tree [12]. That graph was linearized in [3] in order to obtain the polytope of the grammar constraint.
5.1 Counting Lattice Points for regular
Given the graph built by the consistency algorithm for regular, what is the additional computational cost of determining its number of solutions? As we already pointed out, every (complete) path in that graph corresponds to a solution. Therefore it is sufficient to count the number of such paths. We express this through a simple recurrence relation, which we can compute by dynamic programming. Let #op(ℓ, q) denote the number of paths from vℓ,q to a vertex in the last layer. Then we have:

#op(m+1, q) = 1
#op(ℓ, q) = Σ(vℓ,q, vℓ+1,q′)∈A #op(ℓ+1, q′),  1 ≤ ℓ ≤ m
We compute #ip(ℓ, q), the number of paths from v1,q0 to vℓ,q, in a similar way.
The solution count is given by

#regular = #op(1, q0)

and is obtained in time linear in the size of the graph, even though there may be exponentially many solutions. Therefore this is absorbed in the asymptotic complexity of the consistency algorithm.
In the graph of regular, variable-value pair (xi, d) is represented by the arcs between layers i and i+1 corresponding to transitions on value d. The number of solutions in which xi = d is thus equal to the number of paths going through one of those arcs. Consider one such arc (vi,q, vi+1,q′): the number of paths through it is the product of the number of outgoing paths from vi+1,q′ and the number of incoming paths to vi,q.
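The whole scheme (unfold the automaton, prune with a forward/backward sweep, then tabulate #ip and #op) fits in a short dynamic program. A sketch on a toy automaton of our own choosing, one that forbids two consecutive 1s, rather than the article's Figure 5 example:

```python
from collections import defaultdict

def regular_counts(delta, q0, accepting, domains):
    # delta: partial transition function as a dict {(state, value): state}.
    m = len(domains)
    # Forward pass: states reachable in each layer.
    layers = [{q0}]
    for i in range(m):
        layers.append({delta[q, d] for q in layers[i] for d in domains[i]
                       if (q, d) in delta})
    layers[m] &= set(accepting)
    # Backward pass: keep only arcs lying on some complete path.
    arcs = [[] for _ in range(m)]
    for i in range(m - 1, -1, -1):
        keep = set()
        for q in layers[i]:
            for d in domains[i]:
                q2 = delta.get((q, d))
                if q2 in layers[i + 1]:
                    arcs[i].append((q, d, q2))
                    keep.add(q)
        layers[i] &= keep
    # op[i][q]: number of paths from the vertex to the last layer (#op).
    op = [defaultdict(int) for _ in range(m + 1)]
    for q in layers[m]:
        op[m][q] = 1
    for i in range(m - 1, -1, -1):
        for q, d, q2 in arcs[i]:
            op[i][q] += op[i + 1][q2]
    # ip[i][q]: number of paths from the first layer to the vertex (#ip).
    ip = [defaultdict(int) for _ in range(m + 1)]
    ip[0][q0] = 1
    for i in range(m):
        for q, d, q2 in arcs[i]:
            ip[i + 1][q2] += ip[i][q]
    total = op[0][q0]
    dens = defaultdict(int)
    for i in range(m):
        for q, d, q2 in arcs[i]:
            dens[i, d] += ip[i][q] * op[i + 1][q2]
    return total, {k: v / total for k, v in dens.items()} if total else {}

# Automaton over {0,1} rejecting the factor "11": state 'a' (safe),
# state 'b' (just read a 1); both accepting.
delta = {('a', 0): 'a', ('a', 1): 'b', ('b', 0): 'a'}
total, dens = regular_counts(delta, 'a', {'a', 'b'}, [{0, 1}] * 3)
print(total)       # 5 words of length 3 avoid "11"
print(dens[0, 1])  # 0.4: two of the five solutions start with 1
```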
Figure 5. Graph of a regular constraint.
Let A(i, d) ⊆ A denote the set of arcs representing variable-value pair (xi, d). The solution density of pair (xi, d) is thus given by

σ(xi, d, regular) = ( Σ(vi,q, vi+1,q′)∈A(i,d) #ip(i, q) · #op(i+1, q′) ) / #op(1, q0)

Once the #ip() and #op() values are tabulated (they are shown as vertex labels in Figure 5), the time to compute the solution density of a given pair is in the worst case linear in |Q|, the number of states of the automaton [22].
6 The Knapsack Polytopes
Knapsack constraints are very common in IP formulations of combinatorial problems and they also occur in CP. Here the polytope is immediate.
Definition 6.12 (Knapsack Constraint). The set of feasible tuples for a constraint knapsack(x, c, l, u), where x is a column vector of finite-domain variables (x1, x2, ..., xm)ᵀ, c = (c1, c2, ..., cm) is an integer row vector, and l and u are integers², is defined as:

knapsack(x, c, l, u) = {(d1, ..., dm) : di ∈ Di, l ≤ Σi=1..m ci di ≤ u}
Strictly speaking the coefficients ci and the variables xi are nonnegative, but the algorithms we describe in this section also work if that restriction is lifted.
6.1 Counting Lattice Points for knapsack
In [19], Trick proposes an algorithm to achieve domain consistency for knapsack constraints that relies on a graph whose structure is very similar to that of the regular constraint. Not surprisingly then, the computation of exact solution counts and solution densities for knapsack constraints follows quite directly from Section 5.1. However, in this case the algorithm runs in pseudo-polynomial time since the size of the graph also depends on the magnitude of the knapsack bounds (and of the coefficients and domain values if we allow them to be negative). We refer the reader to [9] for details.
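For bounded domains, the pseudo-polynomial count can be sketched as a dynamic program over partial sums, which builds the same layered structure implicitly (a minimal illustration, not the algorithm of [19] or [9]):

```python
from collections import defaultdict

def knapsack_count(c, domains, l, u):
    # Number of tuples (d_1, ..., d_m), d_i in domains[i],
    # with l <= sum_i c_i * d_i <= u.
    counts = {0: 1}                       # partial sum -> number of prefixes
    for ci, Di in zip(c, domains):
        nxt = defaultdict(int)
        for s, n in counts.items():
            for d in Di:
                nxt[s + ci * d] += n
        counts = nxt
    return sum(n for s, n in counts.items() if l <= s <= u)

# Cross-check against instance #19 in the appendix tables of this issue:
# 5x1 + 13x2 + 2x3 + 8x4 + 3x5 = 17 over nonnegative integers
# (domains truncated at 17 // c_i) has 12 solutions.
c = [5, 13, 2, 8, 3]
doms = [range(17 // ci + 1) for ci in c]
print(knapsack_count(c, doms, 17, 17))  # 12
```

The table (partial sums reachable so far) plays the role of a graph layer, so the running time depends on the magnitudes of the coefficients and bounds, as the text notes.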
6.2 Estimating Lattice Points for knapsack
But knapsack constraints in CP have traditionally been handled by enforcing bounds consistency, much cheaper than domain consistency when the domains of the variables are large. So if the graph mentioned above is not built, how do we count solutions?
Figure 6. Approximation of a combination of uniformly distributed random variables. The histogram is the actual distribution of the expression 3x + 4y + 2z for x, y, z ∈ [0, 5]; the curve is the approximation given by the Gaussian curve with mean μ = 22.5 and variance σ² = 84.583.
We first relax each domain Di to [Di], the smallest discrete interval containing it. Consider variable x whose domain is the discrete interval [a, b] and assume that each domain value is equiprobable. We associate to x the discrete random variable X which follows a discrete uniform distribution with probability mass function

f(v) = 1/(b − a + 1) if a ≤ v ≤ b, and 0 otherwise,

mean μ = (a + b)/2, and variance σ² = ((b − a + 1)² − 1)/12.
To find the distribution of a variable subject to a knapsack constraint, one needs to find the distribution of a linear combination of uniformly distributed random variables. Lyapunov's Central Limit Theorem allows us to approximate the distribution of such a linear combination.

Theorem 6.1 (Lyapunov's Central Limit Theorem). Consider the independent random variables X1, ..., Xn. Let μi be the mean of Xi, σi² be its variance, and ri³ = E[|Xi − μi|³] be its third central moment. If

lim n→∞ (Σi=1..n ri³)^(1/3) / (Σi=1..n σi²)^(1/2) = 0

then the random variable S = Σi=1..n Xi follows a normal distribution with mean μS = Σi=1..n μi and variance σS² = Σi=1..n σi².
Note that this theorem does not assume that the variables are taken from identical distributions. This is necessary since variables with different domains have different distributions. Let Y = Σi=1..n ci Xi be a random variable where Xi is a discrete random variable uniformly chosen from the interval [ai, bi] and ci is a nonnegative coefficient. When n tends to infinity, it can be shown that the distribution of Y tends to a normal distribution with mean Σi=1..n ci (ai + bi)/2 and variance Σi=1..n ci² ((bi − ai + 1)² − 1)/12.
Consider the knapsack constraint l ≤ Σi=1..m ci xi ≤ u. Introduce variable xm+1 with domain [l, u]. We obtain xj = (xm+1 − Σi=1..j−1 ci xi − Σi=j+1..m ci xi)/cj. Some coefficients in this expression might be negative. They can be made positive by using ci′ = −ci and [Di]′ = [−max [Di], −min [Di]]. When the number of variables grows to infinity, the distribution of xj tends to a normal distribution as stated before. In practice the normal distribution is a good estimate even for a small number of variables. For example, Figure 6 shows the actual distribution of the expression 3x + 4y + 2z for x, y, z ∈ [0, 5] and its approximation by a normal distribution.
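The parameters quoted for Figure 6 can be checked by enumeration: the exact mean and variance of 3x + 4y + 2z match the per-variable formulas above, with μ = 22.5 and σ² = 1015/12 ≈ 84.583. A quick sketch:

```python
from itertools import product

# Exact distribution of 3x + 4y + 2z for x, y, z in {0, ..., 5}.
dist = {}
for x, y, z in product(range(6), repeat=3):
    s = 3 * x + 4 * y + 2 * z
    dist[s] = dist.get(s, 0) + 1

n = 6 ** 3
mean = sum(s * k for s, k in dist.items()) / n
var = sum((s - mean) ** 2 * k for s, k in dist.items()) / n

# Per variable: mu_i = (a+b)/2 = 2.5 and sigma_i^2 = ((b-a+1)^2 - 1)/12
# = 35/12, so the combination has mean 2.5*(3+4+2) = 22.5 and variance
# (9 + 16 + 4) * 35/12 = 1015/12.
print(mean)           # 22.5
print(round(var, 3))  # 84.583
```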
With this in hand, we cannot quite count solutions but we can quickly identify an assignment of highest solution density (for relaxed domains) to branch on, which, after all, is our motivation here. The resulting algorithm is linear in the number of variables and logarithmic in the size of the domains [9].
Interestingly, this work on knapsack constraints recently inspired new branching direction heuristics for mixed integer programs [10].
There is much work left to design counting algorithms for other frequently occurring polytopes in CP models and to find the best way to exploit that information in order to guide search. Early results suggest that such an approach may significantly advance the state of the art in CP search.
Notes
1. However, we are restricting the language to finite words of some given length.
2. We assume that l and u are finite, as they can always be set to the smallest and largest value that cx can take.
Gilles Pesant, Department of Computer and Software Engineering, École Polytechnique de Montréal, Canada. gilles.pesant@polymtl.ca
References
[1] F. Benhamou (ed.), Principles and Practice of Constraint Programming – CP 2006, 12th International Conference, CP 2006, Nantes, France, September 25–29, 2006, Proceedings, Lecture Notes in Computer Science, vol. 4204, Springer, 2006.
[2] L.M. Brégman, Some Properties of Nonnegative Matrices and their Permanents, Soviet Mathematics Doklady 14 (1973), no. 4, 945–949.
[3] M.-C. Côté, B. Gendron, C.-G. Quimper, and L.-M. Rousseau, Formal Languages for Integer Programming Modeling of Shift Scheduling Problems, Constraints (forthcoming).
[4] J. De Loera, Counting and Estimating Lattice Points: Algebraic and Probabilistic Tools, Optima (2009).
[5] K. Kask, R. Dechter, and V. Gogate, Counting-Based Look-Ahead Schemes for Constraint Satisfaction, in Wallace [20], pp. 317–331.
[6] H. Liang and F. Bai, An Upper Bound for the Permanent of (0,1)-Matrices, Linear Algebra and its Applications 377 (2004), 291–295.
[7] H. Minc, Upper Bounds for Permanents of (0,1)-Matrices, Bulletin of the American Mathematical Society 69 (1963), 789–791.
[8] G. Pesant, A Regular Language Membership Constraint for Finite Sequences of Variables, in Wallace [20], pp. 482–495.
[9] G. Pesant and C.-G. Quimper, Counting Solutions of Knapsack Constraints, CPAIOR (L. Perron and M.A. Trick, eds.), Lecture Notes in Computer Science, vol. 5015, Springer, 2008, pp. 203–217.
[10] J. Pryor, Branching Variable Direction Selection in Mixed Integer Programming, Master's thesis, Carleton University, 2009.
[11] C.-G. Quimper and T. Walsh, Global Grammar Constraints, in Benhamou [1], pp. 751–755.
[12] C.-G. Quimper and T. Walsh, Decomposing Global Grammar Constraints, CP (C. Bessière, ed.), Lecture Notes in Computer Science, vol. 4741, Springer, 2007, pp. 590–604.
[13] L.E. Rasmussen, Approximating the Permanent: A Simple Approach, Random Structures and Algorithms 5 (1994), no. 2, 349–361.
[14] P. Refalo, Impact-Based Search Strategies for Constraint Programming, in Wallace [20], pp. 557–571.
[15] J.-C. Régin, A Filtering Algorithm for Constraints of Difference in CSPs, AAAI, 1994, pp. 362–367.
[16] J.-C. Régin, Generalized Arc Consistency for Global Cardinality Constraint, AAAI/IAAI, Vol. 1, 1996, pp. 209–215.
[17] F. Rossi, P. van Beek, and T. Walsh (eds.), Handbook of Constraint Programming, Elsevier, 2006.
[18] M. Sellmann, The Theory of Grammar Constraints, in Benhamou [1], pp. 530–544.
[19] M.A. Trick, A Dynamic Programming Approach for Consistency and Propagation for Knapsack Constraints, Annals of Operations Research 118 (2003), 73–84.
[20] M. Wallace (ed.), Principles and Practice of Constraint Programming – CP 2004, 10th International Conference, CP 2004, Toronto, Canada, September 27 – October 1, 2004, Proceedings, Lecture Notes in Computer Science, vol. 3258, Springer, 2004.
[21] A. Zanarini, Exploiting Global Constraints for Search and Propagation, Ph.D. thesis, École Polytechnique de Montréal, 2010.
[22] A. Zanarini and G. Pesant, Solution Counting Algorithms for Constraint-Centered Search Heuristics, Constraints 14 (2009), no. 3, 392–413.
Main Conference Themes
* Design Optimization
* Identification and Inverse Problems
* Numerical Optimization Techniques
* Efficient Analysis and Reanalysis Techniques
* Sensitivity Analysis
* Industrial Applications
Venue
Congress Centre, Instituto Superior Tecnico,
Lisbon, Portugal.
Secretariat
Centre for Mechanical Design
Instituto Superior Técnico
Av. Rovisco Pais, 1049-001 Lisbon, Portugal
Phone: +351 218417914; Fax: +351 218417915
Email: engopt2010@dem.ist.utl.pt
Instituto Superior Técnico
Universidade Técnica de Lisboa
About EngOpt 2010
The main goal of EngOpt conferences is to periodically bring together engineers,
applied mathematicians and computer scientists working on research, development
and practical application of optimization methods applied to all engineering
disciplines or developing basic techniques in this field.
Contributions on Engineering Design Optimization, MDO Multidisciplinary Design
Optimization, Inverse Problems, Engineering Simulation Involving Optimization
Techniques and Basic Numerical Techniques in all disciplines of engineering are
invited. Practical applications are strongly encouraged.
Abstract Submission Deadline: March 5, 2010
Notification of Acceptance: April 2, 2010
Full Paper Submission Deadline: June 4, 2010
Conference Chairmen
Helder Rodrigues, Instituto Superior Técnico, Portugal
José Herskovits, COPPE, Federal University of Rio de Janeiro, Brazil
Cristóvão M. Mota Soares, Instituto Superior Técnico, Portugal
For complete and up-to-date information, please visit the conference website:
www.engopt2010.org
MOPTA 2010, Lehigh University
Bethlehem PA, August 18–20, 2010
Modeling competition
The MOPTA conference (Modeling
and Optimization: Theory and
Applications) aims to bring together
people from both discrete and
continuous optimization, working on
both theoretical and applied
aspects.
Egon Balas (Carnegie Mellon)
Mung Chiang (Princeton)
Donald Goldfarb (Columbia U.)
Arkadi Nemirovski (Georgia Tech)
Anthony T. Patera (MIT)
H. Edwin Romeijn (U. of Michigan)
Andrzej Ruszczynski (Rutgers U.)
Join the student modeling
competition at MOPTA! Your team
will receive an AIMMS license and
have the chance of modeling a
realworld optimization problem.
The finalist teams will present their
work at MOPTA, where the prize
for the best work will be awarded.
Venue
Practical information
Bethlehem is in the Lehigh Valley,
Pennsylvania, and is conveniently
located an hour's drive from NYC
and Philadelphia. It is easily
reached from Allentown (ABE), JFK,
Newark, La Guardia, and
Philadelphia airports.
Abstract submission: June 16, 2010
Early registration: July 15, 2010
Conference: August 18–20, 2010
Contact:
ISE Dept., Lehigh University
200 Packer Ave, Bethlehem PA 18015
Email: mopta@lehigh.edu
Phone: 610-758-3865
Organizing committee
Tamás Terlaky (chair)
Pietro Belotti
Frank E. Curtis
Imre Pólik
Ted Ralphs
Larry Snyder
Robert H. Storer
Aurelie Thiele
http://mopta.ie.lehigh.edu
Application for Membership
I wish to enroll as a member of the Society. My subscription is for my personal use
and not for the benefit of any library or institution.
□ I will pay my membership dues on receipt of your invoice.
□ I wish to pay by credit card (Master/Euro or Visa).
Credit card no.
Expiration date
Family name
Mailing address
Telephone no. Telefax no.
Email
Signature
Mail to:
Mathematical Programming Society
3600 University City Sciences Center
Philadelphia, PA 191042688
USA
Cheques or money orders should be made payable to The Mathematical Programming Society, Inc. Dues for 2009, including subscription to the journal Mathematical Programming, are US $90. Dues for retired members are US $45.
Student applications: Dues are US $22.50. Have a faculty member verify your student
Have a faculty member verify your student
status and send application with dues to
above address.
Faculty verifying status
Institution
A Special Workshop in Honor of Paul Tseng
Large Scale Optimization: Analysis, Algorithms and Applications
Fudan University, Shanghai, China
May 21, 2010
This is a special workshop dedicated to Professor Paul Tseng, University of Washington, who went missing while on a kayak trip on the Jinsha river, China, on August 13, 2009. The workshop will be held on May 21, 2010, at Fudan University, Shanghai, China, right before the 2010 Symposium of the Chinese Mathematical Programming Society, to be held at Shanghai University, May 22–25, 2010. The workshop will be a forum for Paul's friends and colleagues to share reminiscences and pay tribute to an extraordinary individual and an outstanding researcher.
Professor Paul Tseng has made many fundamental contributions to the theory and algorithms for large scale optimization. This special workshop will feature state of the art research in several major topic areas of large scale optimization where Paul Tseng's work has had a major impact. These include
(1) efficient algorithms for structured convex programs and network flow problems,
(2) error bounds and convergence analysis of iterative algorithms for optimization problems and variational inequalities,
(3) interior point methods and semidefinite relaxations for hard quadratic and matrix optimization problems, and
(4) the applications of large scale optimization techniques in signal processing and machine learning.
The workshop will consist of presentations by invited speakers, and of a poster session that will address these topics. We invite papers to the poster session, especially from friends, students and colleagues of Paul Tseng (please send a message regarding a proposed poster presentation to the organizers). We expect to select a subset of the research articles presented in this workshop for a forthcoming issue of Mathematical Programming, Series B (co-guest editors: Zhi-Quan Luo and Dimitri Bertsekas), and a special issue of the Pacific Journal of Optimization (co-guest editors: Shuzhong Zhang and Zhi-Quan Luo).
Confirmed invited speakers include:
Dimitri Bertsekas (MIT)
Xin Chen (UIUC)
Duan Li (Chinese University of Hong Kong)
JongShi Pang (UIUC)
Terry Rockafellar (Univ of Washington)
Steve Wright (Univ of Wisconsin)
Ya-Xiang Yuan (Chinese Academy of Sciences)
Jein-Shan Chen (National Taiwan Normal Univ)
Masao Fukushima (Kyoto Univ)
Angelia Nedic (UIUC)
Liqun Qi (Hong Kong Poly Univ)
Jie Sun (National Univ Singapore)
Yinyu Ye (Stanford Univ)
Yin Zhang (Rice Univ)
Attendance to the workshop is free. We have limited funds to partially cover the local expenses of workshop participants. Any inquiries
should be sent to the workshop organizers:
Zhi-Quan (Tom) Luo, University of Minnesota (luozq@umn.edu)
Shuzhong Zhang, CUHK (zhang@se.cuhk.edu.hk)
IMPRINT
Editor: Andrea Lodi, DEIS University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy. andrea.lodi@unibo.it
Co-Editors: Alberto Caprara, DEIS University of Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy. alberto.caprara@unibo.it
Katya Scheinberg, Department of Industrial Engineering and Operations Research, Columbia University, 500 W 120th Street, New York, NY, 10027.
katyascheinberg@gmail.com
Founding Editor: Donald W. Hearn
Published by the Mathematical Programming Society.
Design and typesetting by Christoph Eyrich, Mehringdamm 57 / Hof 3, 10961 Berlin, Germany. optima@0x45.de
Printed by Oktoberdruck AG, Berlin.
Jesús A. De Loera
Appendix: Details on Experiments
(Counting and Estimating Lattice Points)
Here we present details for experiments with the Gaussian estimation of Barvinok and Hartigan. As described in the article, we needed the maximum objective values and the unique optimal solutions of the strictly concave optimization problems from Theorems 1 and 2 in order to calculate the Gaussian estimations. These values were obtained using LOQO, and the rest of the calculation was implemented inside MAPLE version 12. There is no need for any sophisticated computation: one can easily calculate the covariance matrices and the lattice index using a Smith normal form calculation. Overall the evaluation step takes a negligible amount of time in all instances, so we do not record any time of computation.
A.1 General knapsacks
The first set of problems consists of knapsack-constrained simplices. The majority of the instances were randomly generated with coefficients between 1 and 20, but we also used a few famous hard knapsack problems from [1] and a few instances with coefficients in arithmetic progressions. We hope the presence of these two kinds of tests is useful in investigating how suitable these techniques are for feasibility detection, and in testing the sensitivity of the estimate to the variation of right-hand-side vectors. Tables 1 and 2 present the data used here and the results. On the line below each instance ax = b we report the relative error of the estimation after 5000 samples and the true number of lattice points.
In Table 2 we present the results of estimation using the maximum entropy Gaussian estimate. Although there are a few difficult cases, the majority of instances have a relative error of less than one.
The Aardal-style knapsacks (examples 13 and 14 in the tables) [1] show a dramatically poor behavior which seems to be correlated to the objective function value γ of the convex function we maximize. By Theorem 1 this number is a close indicator of the likelihood of generating a lattice point during sampling. For example, problem number 7 with right-hand side 22382774 has no solutions, but if we increase it by one unit there are 218842 integer solutions. In both cases the values for γ and the probability parameters were identical (γ = 47.58769). Through sampling we expect to require e^47.6 ≈ 4.7 × 10^20 samples before finding a lattice point.
Nevertheless, for the Aardal examples, in addition to the standard convex maximization of Theorem 1 we also used the weighted version, and we were surprised to see that in both instances we found an actual integral feasible solution already using the convex
Table 1. Knapsack tests using the sampling algorithm, information on relative error (best possible among three sampling tests).
Problem ax = b, sampling error, and number of solutions
#1 a=[2, 11, 18, 4, 17, 19, 6, 9, 2, 10, 16, 4, 18, 1, 15, 6, 17, 2, 8, 10, 7, 19, 7, 10, 14] b=5000
Error=0.0309433 numsols=8569641458133826414663483924452506094742800
#2 a=[5, 10, 10, 2, 8, 20, 15, 2, 9, 9, 7, 4, 12, 13, 19] b=34
Error=0.01329 numsols=2056
#3 a=[10, 15, 9, 12, 11, 20, 8, 9, 17, 18, 11, 20, 13, 8, 17] b=19
Error=0 numsols=6
#4 a=[15, 13, 6, 2, 12, 4, 6, 5, 17, 8, 5, 2, 18, 20, 11] b=500
Error=0.06840749 numsols=242818430132799
#5 a=[19, 14, 18, 15, 8, 10, 14, 12, 9, 13, 16, 1, 6, 13, 14] b=500
Error=0.290757 numsols=3489295417911
#6 a=[7, 7, 3, 14, 15, 15, 19, 12, 19, 8, 3, 17, 17, 3, 5] b=50
Error=0.015823 numsols=8438
#7 a=[20601, 40429, 40429, 45415, 53725, 61919, 64470, 69340, 78539, 95043] b=22382775
Error=1 numsols=218842
#8 a=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] b=255
Error=0.131238 numsols=71660385050
#9 a=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] b=24
Error=0.019237 numsols=1380
#10 a=[9, 1, 19, 4, 6, 3, 4, 10, 8, 2] b=50
Error=0.0159488 numsols=47761
#11 a=[16, 2, 11, 15, 11, 3, 5, 14, 7, 2] b=500
Error=0.037474 numsols=65552595947
#12 a=[8, 10, 9, 17, 2, 9, 3, 2, 5, 20] b=25
Error=0.00589 numsols=267
#13 a=[12137, 24269, 36405, 36407, 48545, 60683] b=58925135
Error=1 numsols=2
#14 a=[12223, 12224, 36674, 61119, 85569] b=89643482
Error=1 numsols=1
#15 a=[1, 2, 3, 4, 5, 6] b=25
Error=0.03970 numsols=612
#16 a=[9, 10, 17, 5, 2] b=73
Error=0.00208 numsols=209
#17 a=[9, 11, 14, 5, 12] b=26
Error=0.015148 numsols=3
#18 a=[5, 3, 1, 4, 2] b=15
Error=0.02186 numsols=84
#19 a=[5, 13, 2, 8, 3] b=17
Error=0.02991 numsols=12
#20 a=[8, 12, 1] b=50
Error=0.1535 numsols=1
OPTIMA 81
optimization procedure. The weights were generated at random, so
this raises the question of whether, for good weights, one can find
integer solutions systematically.
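For the smaller knapsacks, the numsols column of Table 1 can be checked directly by exhaustive enumeration. A minimal sketch (the helper name is ours, not from the paper):

```python
def count_knapsack_solutions(a, b):
    """Count nonnegative integer solutions x of a[0]*x[0] + ... = b
    by straightforward recursion; practical only for small instances."""
    def rec(i, remaining):
        if i == len(a) - 1:
            # last variable: remaining must be a multiple of a[-1]
            return 1 if remaining % a[i] == 0 else 0
        return sum(rec(i + 1, remaining - a[i] * k)
                   for k in range(remaining // a[i] + 1))
    return rec(0, b)

# Problems #17 and #3 of Table 1:
print(count_knapsack_solutions([9, 11, 14, 5, 12], 26))  # -> 3
print(count_knapsack_solutions([10, 15, 9, 12, 11, 20, 8, 9, 17, 18,
                                11, 20, 13, 8, 17], 19))  # -> 6
```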
Table 2. Knapsack tests for Gaussian estimation; we report the relative error.
Problem ax = b, Gaussian estimate relative error, and number of solutions
#1 a=[2, 11, 18, 4, 17, 19, 6, 9, 2, 10, 16, 4, 18, 1, 15, 6, 17, 2, 8, 10, 7, 19, 7, 10, 14] b=5000
Error=0.00334 numsols=8569641458133826414663483924452506094742800
#2 a=[5, 10, 10, 2, 8, 20, 15, 2, 9, 9, 7, 4, 12, 13, 19] b=34
Error=0.00585 numsols=2056
#3 a=[10, 15, 9, 12, 11, 20, 8, 9, 17, 18, 11, 20, 13, 8, 17] b=19
Error=0.97855 numsols=6
#4 a=[15, 13, 6, 2, 12, 4, 6, 5, 17, 8, 5, 2, 18, 20, 11] b=500
Error=0.79371 numsols=242818430132799
#5 a=[19, 14, 18, 15, 8, 10, 14, 12, 9, 13, 16, 1, 6, 13, 14] b=500
Error=0.00618 numsols=3489295417911
#6 a=[7, 7, 3, 14, 15, 15, 19, 12, 19, 8, 3, 17, 17, 3, 5] b=50
Error=0.96326 numsols=8438
#7 a=[20601, 40429, 40429, 45415, 53725, 61919, 64470, 69340, 78539, 95043] b=22382775
Error=118163207.8 numsols=218842
#8 a=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] b=255
Error=0.89587 numsols=71660385050
#9 a=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] b=24
Error=0.02503 numsols=1380
#10 a=[9, 1, 19, 4, 6, 3, 4, 10, 8, 2] b=50
Error=0.01428 numsols=47761
#11 a=[16, 2, 11, 15, 11, 3, 5, 14, 7, 2] b=500
Error=3.16010 numsols=65552595947
#12 a=[8, 10, 9, 17, 2, 9, 3, 2, 5, 20] b=25
Error=0.04816 numsols=267
#13 a=[12137, 24269, 36405, 36407, 48545, 60683] b=58925135
Error=2634013736 numsols=2
#14 a=[12223, 12224, 36674, 61119, 85569] b=89643482
Error=95905325.28 numsols=1
#15 a=[1, 2, 3, 4, 5, 6] b=25
Error=0.02599 numsols=612
#16 a=[9, 10, 17, 5, 2] b=73
Error=0.03213 numsols=209
#17 a=[9, 11, 14, 5, 12] b=26
Error=0.81445 numsols=3
#18 a=[5, 3, 1, 4, 2] b=15
Error=0.02923 numsols=84
#19 a=[5, 13, 2, 8, 3] b=17
Error=0.40546 numsols=12
#20 a=[8, 12, 11] b=50
Error=1.0890 numsols=1
A.2 Contingency tables and network flows
We continue our investigations with the class of multiway contingency
tables. A d-table of size (n_1, ..., n_d) is an array of nonnegative
integers v = (v_{i_1,...,i_d}), 1 ≤ i_j ≤ n_j. For 0 ≤ m ≤ d, an m-marginal
of v is any of the (d choose m) possible m-tables obtained by summing the
entries over all but m indices.
Contingency tables appear naturally in statistics and operations
research under various names, such as multiway contingency tables
or tabular data. Table counting has several applications in statistical
analysis, in particular for independence testing, and has been the focus
of much research (see [38] and the extensive list of references
therein). Given a specified collection of marginals for d-tables of
size (n_1, ..., n_d) (possibly together with specified lower and upper
bounds on some of the table entries), the associated multi-index
transportation polytope is the set of all nonnegative real-valued arrays
satisfying these marginals and entry bounds. The counting problem
can be formulated as that of counting the number of integer points
in the associated multi-index transportation polytope.
First we treat two-way contingency tables. The polytope defined by
a two-way contingency table is called the transportation polytope. We
present the results in Table 3. The lattice points of this family of
polytopes have been studied by several authors [8, 14, 16, 32, 72],
and thus they are good for testing the accuracy of the estimation.
We used several examples of 4 x 4 transportation polytopes, as
presented in the table below. In all cases the relative error was very
similar to the performance with simplices, but in this case we are
simply using the Gaussian estimator proposed by Barvinok and
Hartigan [15]. The results are good. It should be remarked that
sampling did not do well at all for these instances. In fact, in all
problems with multiple constraints (non-knapsack) sampling performed
badly; we therefore aggregated the constraints in a few instances,
but that did not improve the behavior.
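For tiny margins these counts can be verified by brute force; e.g., the first row of Table 3 asks for 4 x 4 nonnegative integer tables with all row and column sums equal to 1, which are exactly the 4 x 4 permutation matrices. A small sketch (function name ours):

```python
from itertools import product

def count_tables(row_sums, col_sums):
    """Brute-force count of nonnegative integer tables with the given
    row and column sums; each entry is bounded by its row sum, so this
    is feasible only for very small margins."""
    n, m = len(row_sums), len(col_sums)
    ranges = [range(row_sums[i] + 1) for i in range(n) for _ in range(m)]
    count = 0
    for flat in product(*ranges):
        table = [flat[i * m:(i + 1) * m] for i in range(n)]
        if (all(sum(row) == r for row, r in zip(table, row_sums)) and
                all(sum(table[i][j] for i in range(n)) == c
                    for j, c in enumerate(col_sums))):
            count += 1
    return count

# First row of Table 3: the 24 permutation matrices of order 4.
print(count_tables([1, 1, 1, 1], [1, 1, 1, 1]))  # -> 24
```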
Another closely related set of instances are those coming from
flows on networks, which are the lattice points of flow polytopes.
Again, to test the accuracy of the estimation we looked at a simple
network on the complete 4-vertex and the complete 5-vertex
tournaments (directed complete graphs). Consider the directed complete
graph K_n. We assign a number to each node of the graph and
orient the arcs from the node of smaller index to the node of
larger index. Let N be the node set of the complete graph K_n, let
w_i be a weight assigned to node i for i = 1, 2, ..., n, and let A be
the arc set of K_n. Then we have the following constraints, with as
many variables as arcs:

    Σ_{(j,i) in A} x_ji − Σ_{(i,j) in A} x_ij = w_i    for all i in N,
    x_ij ≥ 0    for all (i,j) in A.

These equalities and inequalities define a polytope, and this polytope
is a special case of a network flow polytope. Some tests for the
complete graphs K_4 and K_5, with different weight vectors, are shown in
Table 4.
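Summing the conservation equations over all nodes shows that the weights must add up to zero, so as a small illustration of the constraint system (with a hypothetical weight vector, not one taken from Table 4) one can count the integer flows on K_4 by brute force:

```python
from itertools import product

# Arcs of K4, oriented from the smaller to the larger node index.
nodes = [1, 2, 3, 4]
arcs = [(i, j) for i in nodes for j in nodes if i < j]

def count_integer_flows(w, bound):
    """Count nonnegative integer points of the flow polytope of K4 with
    net inflow w[i-1] at node i, trying each arc value in 0..bound."""
    count = 0
    for values in product(range(bound + 1), repeat=len(arcs)):
        flow = dict(zip(arcs, values))
        feasible = True
        for v in nodes:
            inflow = sum(f for (t, h), f in flow.items() if h == v)
            outflow = sum(f for (t, h), f in flow.items() if t == v)
            if inflow - outflow != w[v - 1]:
                feasible = False
                break
        if feasible:
            count += 1
    return count

# Hypothetical weights summing to zero; bound 3 suffices here since the
# total flow out of node 1 is 3.
print(count_integer_flows([-3, 1, 1, 1], bound=3))  # -> 7
```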
Next we tested more complicated situations. We consider 3-way
transportation polytopes where the constraints are given by
1-margins. In the first half of Table 5 one can see that the behavior is
like that of the 2-way transportation polytopes in Table 3. The sec
Table 3. Testing for 4 x 4 transportation polytopes.
Margins  exact # of lattice points  Error on estimation
[1, 1, 1, 1], [1, 1, 1, 1]  24  0.36277
[300, 300, 300, 300], [300, 300, 300, 300]  20269699596926337751  0.12179
[220, 215, 93, 64], [108, 286, 71, 127]  1225914276768514  0.06062
[109, 127, 69, 109], [119, 86, 108, 101]  993810896945891  0.11770
[72, 67, 47, 96], [70, 70, 51, 91]  25387360604030  0.08014
[179909, 258827, 224919, 61909], [190019, 90636, 276208, 168701]  13571026063401838164668296635065899923152079  0.03177
[229623, 259723, 132135, 310952], [279858, 170568, 297181, 184826]  646911395459296645200004000804003243371154862  0.07002
[249961, 232006, 150459, 200438], [222515, 130701, 278288, 201360]  319720249690111437887229255487847845310463475  0.08213
[140648, 296472, 130724, 309173], [240223, 223149, 218763, 194882]  322773560821008856417270275950599107061263625  0.04441
[65205, 189726, 233525, 170004], [137007, 87762, 274082, 159609]  6977523720740024241056075121611021139576919  0.01276
[251746, 282451, 184389, 194442], [146933, 239421, 267665, 259009]  861316343280649049593236132155039190682027614  0.08974
[138498, 166344, 187928, 186942], [228834, 138788, 189477, 122613]  63313191414342827754566531364533378588986467  0.08647
[20812723, 17301709, 21133745, 27679151], [28343568, 18410455, 19751834, 20421471]  665711555567792389878908993624629379187969880179721169068827951  0.99997
[15663004, 19519372, 14722354, 22325971], [17617837, 25267522, 20146447, 9198895]  63292704423941655080293971395348848807454253204720526472462015  0.99987
[13070380, 18156451, 13365203, 20567424], [12268303, 20733257, 17743591, 14414307]  43075357146173570492117291685601604830544643769252831337342557  0.99925
December 2009
Table 4. Gaussian estimation for the K_4 and K_5 network problems.
Weights on nodes  Number of lattice points  Error
[, 2569, 3820, 1108, 5281]  14100406254  0.1179212818
[3842, 3945, 6744, 1043]  1906669380  0.07844816179
[47896, 30744, 46242, 32398]  19470466783680  0.08633129132
[54915, 97874, 64165, 88624]  106036300535520  0.1903691374
[69295, 62008, 28678, 102625]  179777378508547  0.2495250224
[3125352, 6257694, 926385, 8456661]  34441480172695101274  0.9847314633
[12, 8, 9, 7, 4]  14805  0.9331835846
[125, 50, 75, 33, 67]  6950747024  0.8060985686
[763, 41, 227, 89, 488]  222850218035543  0.4040804721
[52541, 88985, 1112, 55665, 84749]  39971216842426033014442665332  0.9727043353
[45617, 46855, 24133, 54922, 13417]  39971216842426033014442665332  0.9999790656
[69295, 62008, 28678, 88725, 13900]  65348330279808617817420057  0.9998969873
[8959393, 2901013, 85873, 533630, 11240903]  6817997013081449330251623043931489475270  0.9173160458
Table 5. Testing for 3-way transportation polytopes. The top half deals with 1-margins and the second half with 2-margins.
Margins  # of lattice points  Error
[38, 26, 87], [69, 11, 71], [77, 74]  2626216761994  0.02956
[381, 264, 871], [690, 123, 703], [672, 844]  969328784998192447450409  0.02201
[179909, 258827, 224919], [190019, 276208, 197428], [331827, 331828]  10996570128188103700571192439719329377965299537177538734365  0.10601
[31, 22, 87], [50, 13, 77], [42, 87, 11]  8846838772161591  0.00185
[531, 922, 787], [750, 913, 577], [742, 587, 911]  75262943725025193225827940796161419321644384  0.11337
[11531, 9922, 13787], [13750, 9913, 11577], [9742, 13587, 11911]  131537460708108801553237287957587623890623217109985417250150842074021  0.12606
[1153100, 992200, 1378700], [1375000, 991300, 1157700], [974200, 1358700, 1191100]  130470342229524101559487167722750062296914354593792462187963904386550392158349016693448370804082051642740001  0.12628
[first 2-margin instance; margin matrices garbled in source]  33  0.10437
[2-margin 2 x 3 x 3 instance from [45]; margin matrices garbled in source]  441  1.10058
[3 copies of the 3 x 3 all-ones matrix]  12  12.3843
ond half has three 3-way transportation polytopes given by sets of
2-margins. The difference in error indicates that the estimation
method seems to perform best when the number of variables is much
larger than the number of constraints. This is particularly evident in
the last two examples of Table 5. The first considers a 2-margin
2 x 3 x 3 transportation polytope taken from [45]. The second, the
3 x 3 x 3 analogue of the Birkhoff polytope, shows clear inaccuracy.
Again, as we observed before, when the computation involves large
entries the convex optimization problem becomes much harder to solve.
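The last row of Table 5 can be reproduced exactly: the lattice points of the 3 x 3 x 3 Birkhoff analogue correspond to the 3 x 3 Latin squares, of which there are 12. A quick check (the encoding is ours):

```python
from itertools import product

def count_planar_birkhoff_3():
    """Count lattice points of the 3x3x3 'Birkhoff' polytope: 0/1 arrays
    x[i][j][k] whose three 2-margins are all the 3x3 all-ones matrix.
    Picking, for each cell (i,j), the unique k with x[i][j][k] = 1 turns
    these points into 3x3 Latin squares."""
    count = 0
    for flat in product(range(3), repeat=9):
        L = [flat[0:3], flat[3:6], flat[6:9]]
        rows_ok = all(sorted(row) == [0, 1, 2] for row in L)
        cols_ok = all(sorted(L[i][j] for i in range(3)) == [0, 1, 2]
                      for j in range(3))
        if rows_ok and cols_ok:
            count += 1
    return count

print(count_planar_birkhoff_3())  # -> 12, matching the last row of Table 5
```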
In [29] the authors introduced a class of small but rather difficult
problems for the purpose of detecting feasibility. Roughly speaking,
these are problems of the form Ax = b, x ∈ {0, 1}, where A is an
m x 10(m − 1) matrix and the entries of A are integers between
0 and 99. The i-th entry of b is the sum of the corresponding row
of A, divided by 2 and rounded to the next integer value. We
investigated the performance of estimation on this kind of problem. We
generated all instances using the generator available at http://did.mat.
unibayreuth.de/alfred/marketsplit.html. We only investigated feasible
instances with m = 2, m = 3, and m = 4; a very high percentage
of the instances produced are in fact infeasible.
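The construction is short enough to sketch (the rounding convention and the function name are our reading of the description above, not taken from the generator itself):

```python
import random

def market_split_instance(m, seed=0):
    """Generate an instance in the style of [29]: A is m x 10(m-1) with
    entries drawn uniformly from 0..99, and b[i] is half the i-th row
    sum of A, rounded to an integer."""
    rng = random.Random(seed)
    n = 10 * (m - 1)
    A = [[rng.randint(0, 99) for _ in range(n)] for _ in range(m)]
    b = [round(sum(row) / 2) for row in A]
    return A, b

A, b = market_split_instance(3)
print(len(A), len(A[0]))  # 3 rows, 20 columns
```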
Unfortunately, when we tried the sampling technique on the six
feasible instances we found that most of the time sampling never
found a single feasible solution. The second test was made using the
Gaussian estimation and is presented in Table 6. We list the predicted
number of solutions and the true number of solutions, if any.
The top part of the table consists of infeasible problems, and there
the prediction is consistent with that fact.
Table 6. Testing Gaussian estimation on feasible market-split problems. The problems at the top are infeasible and those in the block below have only one integer solution.
Thus, with one obvious exception, the estimated number of solutions is of comparable magnitude.
Constraints (of the form L_i x = m_i), with predicted vs. real number of solutions
L1: [73, 93, 88, 63, 65, 27, 25, 6, 31, 70]; m1: 270;
L2: [17, 27, 32, 35, 36, 63, 64, 37, 91, 93]; m2: 247
predicted/real: 0.0271 / 0
L1: [30, 29, 94, 77, 46, 10, 63, 40, 38, 37]; m1: 232;
L2: [16, 92, 10, 40, 29, 16, 51, 11, 31, 8]; m2: 152
predicted/real: 0.0435 / 0
L1: [10, 2, 88, 39, 33, 42, 48, 67, 25, 74, 86, 34, 76, 41, 20, 32, 42, 78, 26, 63]; m1: 463;
L2: [15, 94, 80, 1, 21, 99, 54, 48, 10, 46, 71, 20, 0, 60, 11, 33, 2, 59, 52, 79]; m2: 427;
L3: [85, 39, 14, 61, 80, 86, 46, 23, 16, 24, 86, 31, 19, 18, 85, 40, 69, 39, 40, 79]; m3: 490
predicted/real: 0.0954 / 0
L1: [66, 43, 98, 18, 69, 4, 40, 3, 72, 41, 20, 52, 42, 34, 60, 70, 27, 43, 64, 88]; m1: 477;
L2: [98, 17, 44, 94, 68, 84, 77, 70, 43, 77, 3, 61, 72, 53, 79, 41, 9, 71, 97, 81]; m2: 619;
L3: [12, 69, 33, 6, 55, 45, 29, 82, 88, 93, 70, 39, 62, 67, 85, 31, 51, 14, 1, 46]; m3: 489
predicted/real: 0.0784 / 0
L1: [29, 63, 85, 80, 64, 87, 22, 31, 5, 23, 96, 8, 14, 93, 23, 74, 78, 72, 0, 30]; m1: 488;
L2: [15, 44, 21, 83, 13, 49, 40, 13, 33, 46, 42, 62, 10, 80, 94, 26, 19, 68, 10, 24]; m2: 396;
L3: [43, 6, 84, 58, 51, 7, 84, 29, 79, 36, 11, 47, 33, 32, 30, 46, 33, 23, 11, 67]; m3: 405
predicted/real: 0.1049 / 2221
L1: [39, 65, 40, 15, 2, 35, 74, 87, 33, 46, 51, 21, 86, 80, 70, 60, 31, 30, 1, 8]; m1: 437;
L2: [12, 23, 87, 46, 49, 16, 14, 88, 91, 39, 8, 31, 4, 0, 46, 58, 36, 73, 45, 21]; m2: 393;
L3: [19, 49, 94, 58, 29, 17, 70, 61, 47, 71, 21, 59, 46, 8, 58, 95, 76, 72, 36, 68]; m3: 527
predicted/real: 0.1363 / 748
L1: [69, 0, 89, 59, 32, 73, 56, 26, 49, 42]; m1: 247;
L2: [42, 69, 59, 0, 57, 31, 84, 95, 61, 77]; m2: 287
predicted/real: 0.02752 / 10
L1: [46, 12, 3, 91, 76, 7, 39, 27, 95, 55]; m1: 225;
L2: [74, 85, 49, 39, 80, 22, 16, 25, 84, 11]; m2: 242
predicted/real: 0.0334 / 8
Table 7. Gaussian estimation for binary knapsack problems.
Constraints (of the form Lx = m)  # 0-1 solutions  Error
L: [116, 192, 120, 401, 129]; m: 236  1  0.75480
L: [11, 17, 33, 13, 3, 5, 6, 7, 4]; m: 33  9  0.11174
L: [16, 92, 10, 40, 29, 16, 51, 11, 31, 8]; m: 152  4  0.65327
L: [46, 12, 3, 91, 76, 7, 39, 27, 95, 55]; m: 225  7  0.33358
L: [9, 17, 16, 8, 6, 15, 14, 7, 8, 11]; m: 30  11  0.13587
L: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; m: 12  13  0.07061
L: [11, 4, 13, 7, 3, 5, 10, 5, 6, 11]; m: 33  29  0.84788
L: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]; m: 19  6  0.04771
L: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; m: 15  27  0.02667
L: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39]; m: 39  41  0.05350
L: [1, 3, 5, 7, 9, 11, ..., 2i−1, ..., 47, 49]; m: 49  93  0.03782
L: [1, 2, 3, 4, ..., i, ..., 20]; m: 20  64  0.04844
L: [1, 2, 3, 4, ..., 30]; m: 30  296  0.04371
L: [1, 2, 3, 4, ..., 40]; m: 40  1113  0.04075
L: [1, 2, 3, ..., 50]; m: 50  3658  0.03619
Table 8. Gaussian estimation for counting graphs with given degree sequences (feasible cases only).
Graph description or sequence  number of graphs  Error
3-regular graphs with 6 nodes  70  0.00262
3-regular graphs with 8 nodes  19355  0.06182
3-regular graphs with 10 nodes  11180820  0.05657
3-regular graphs with 12 nodes  11555272575  0.02772
3-regular graphs with 14 nodes  19506631814670  0.01170
4-regular graphs with 6 nodes  15  0.17947
4-regular graphs with 8 nodes  19355  0.06182
4-regular graphs with 10 nodes  66462606  0.38190
4-regular graphs with 12 nodes  480413921130  0.21204
4-regular graphs with 13 nodes  52113376310985  0.07909
4-regular graphs with 14 nodes  6551246596501035  0.06839
4-regular graphs with 17 nodes  28797220460586826422720  0.02703
[5, 6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]  7392  1.15481
Finally, we considered the instances of binary knapsack problems
in Table 7. The estimation we present was done using the Gaussian
heuristic. This time the error is again smaller.
There is a wide variety of combinatorial configurations that can be
described as the binary solutions of a problem of the form Ax = b.
For example, for matchings of bipartite graphs the matrix A is the
incidence matrix of the graph. Of course, this opens the possibility
of using the Barvinok-Hartigan estimators to approximate various
combinatorial enumeration problems. In this section we look
carefully at the performance of the estimation on one well-studied
combinatorial problem.
For experimentation we selected the problem of estimating
the number of labeled graphs with a prescribed degree sequence.
Random graphs with given vertex degrees are of interest as models
for complex networks (see [23]). The main reasons for choosing
this problem include: (1) there are well-known tests that allow us
to decide when there are no solutions (see [69]); (2) researchers
have already documented this problem and there is a wide variety of
formulas available for verification; e.g., regular graphs are listed in
Sloane's online encyclopedia of integer sequences (see [23] and
references therein). Most of the results are collected in Table 8, but we
also tried comparing the estimation on infeasible instances, i.e.,
non-realizable degree sequences. We found that here the estimation
was truly off. For example, there are no 3-regular graphs with 13
nodes, but the Gaussian estimation predicted about 0.4445 x 10^12
such graphs.
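One such test is the Erdos-Gallai criterion, which decides whether a degree sequence is realizable by a simple graph (a standard statement; see [69] for related material):

```python
def is_graphical(degrees):
    """Erdos-Gallai test: a degree sequence d (sorted nonincreasing) is
    realizable by a simple graph iff its sum is even and, for every k,
    the sum of the k largest degrees is at most
    k*(k-1) + sum over the remaining degrees of min(d_i, k)."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2 != 0:
        return False
    for k in range(1, len(d) + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

print(is_graphical([3] * 13))  # False: no 3-regular graph on 13 nodes
print(is_graphical([3] * 6))   # True: e.g., K_{3,3}
```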
References
[1] Aardal, K., Lenstra, A.K., and Lenstra, H.W. Jr.: Hard equality constrained integer knapsacks. In: W.J. Cook and A.S. Schulz (eds.), Integer Programming and Combinatorial Optimization: 9th International IPCO Conference, Lecture Notes in Computer Science vol. 2337, Springer-Verlag, 350-366, 2002.
[2] Aardal, K., Weismantel, R., and Wolsey, L.: Non-standard approaches to integer programming. Discrete Applied Math. 123, no. 1-3 (2002), 5-74.
[3] Achterberg, T., Heinz, S., and Koch, T.: Counting solutions of integer programs using unrestricted subtree detection. ZIB technical report 08-09, 2008.
[4] Adleman, L. and Manders, K.: NP-complete decision problems for quadratic polynomials. In: Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, Hershey PA 1976, 23-29, ACM, New York, 1976.
[5] Andrews, G., Ekhad, S., and Zeilberger, D.: A short proof of Jacobi's formula for the number of representations of an integer as a sum of four squares. American Math. Monthly, 100, 274-276, 1993.
[6] Avis, D. and Fukuda, K.: Reverse search for enumeration. Discrete Applied Math., Vol. 65, 21-46, 1996.
[7] Balcioglu, A. and Wood, R.: Enumerating near-min s-t cuts. In: D.L. Woodruff (ed.), Network Interdiction and Stochastic Integer Programming, vol. 22, Operations Research/Computer Science Interfaces Series, Kluwer, Boston, 2002.
[8] Baldoni-Silva, W., De Loera, J., and Vergne, M.: Counting integral flows on networks. Foundations of Computational Mathematics, Vol. 4, 277-314, 2004.
[9] Barvinok, A.I.: A Course in Convexity. Graduate Studies in Mathematics, vol. 54, Amer. Math. Soc., Providence RI, 2002.
[10] Barvinok, A.I.: Polynomial time algorithm for counting integral points in polyhedra when the dimension is fixed. Math. of Operations Research, Vol. 19, 769-779, 1994.
[11] Barvinok, A.I. and Pommersheim, J.: An algorithmic theory of lattice points in polyhedra. In: New Perspectives in Algebraic Combinatorics (Berkeley, CA, 1996-1997), 91-147, Math. Sci. Res. Inst. Publ. 38, Cambridge Univ. Press, Cambridge, 1999.
[12] Barvinok, A.I. and Woods, K.: Short rational generating functions for lattice point problems. J. Amer. Math. Soc., Vol. 16, 957-979, 2003.
[13] Barvinok, A.I.: Enumerating contingency tables via random permanents. Combinatorics, Probability and Computing, Vol. 17, no. 1 (2008), 1-19.
[14] Barvinok, A.I.: Asymptotic estimates for the number of contingency tables, integer flows, and volumes of transportation polytopes. International Mathematics Research Notices 2009 (2009), 348-385.
[15] Barvinok, A.I. and Hartigan, J.: Maximum entropy Gaussian approximation for the number of integer points and volumes of polytopes. Manuscript, 2009, available at www.math.lsa.umich.edu/barvinok/papers.html and at http://arxiv.org/abs/0903.5223.
[16] Beck, M. and Pixton, D.: The Ehrhart polynomial of the Birkhoff polytope. Discrete Comput. Geom. 30, no. 4 (2003), 623-637.
[17] Beck, M.: Counting lattice points by means of the residue theorem. Ramanujan Journal, 4, no. 3, 299-310, 2000.
[18] Behle, M. and Eisenbrand, F.: 0/1 vertex and facet enumeration with BDDs. In: David Applegate et al. (eds.), Proceedings of the 9th Workshop on Algorithm Engineering and Experiments (ALENEX'07), New Orleans, U.S.A., SIAM, 158-165, 2007.
[19] Brion, M.: Points entiers dans les polyèdres convexes. Ann. Sci. École Norm. Sup., Vol. 21, 653-663, 1988.
[20] Brion, M. and Vergne, M.: Residue formulae, vector partition functions and lattice points in rational polytopes. J. Amer. Math. Soc., Vol. 10, 797-833, 1997.
[21] Benson, H., Shanno, D.F., and Vanderbei, R.J.: A comparative study of large scale nonlinear optimization algorithms. In: High Performance Algorithms and Software for Nonlinear Optimization (G. Di Pillo and A. Murli, eds.), 94-126, Kluwer Academic Publishers, 2003.
[22] Ben-Tal, A. and Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization, 2001.
[23] Blitzstein, J. and Diaconis, P.: A sequential importance sampling algorithm for generating random graphs with prescribed degrees. Manuscript, 2009.
[24] Bussieck, M.R. and Lübbecke, M.E.: The vertex set of a 0/1-polytope is strongly P-enumerable. Comput. Geom., 11(2):103-109, 1998.
[25] Chen, Y., Dinwoodie, I., Dobra, A., and Huber, M.: Lattice points, contingency tables, and sampling. Contemporary Mathematics, Vol. 374, 65-78, American Mathematical Society, 2005.
[26] Chen, Y., Diaconis, P., Holmes, S., and Liu, J.S.: Sequential Monte Carlo methods for statistical analysis of tables. Journal of the American Statistical Association, 100 (2005), 109-120.
[27] Chen, Y., Dinwoodie, I.H., and Sullivant, S.: Sequential importance sampling for multiway tables. The Annals of Statistics, 34 (2006), 523-545.
[28] Clauss, P. and Loechner, V.: Parametric analysis of polyhedral iteration spaces. Journal of VLSI Signal Processing, Vol. 19, 179-194, 1998.
[29] Cornuéjols, G. and Dawande, M.: A class of hard small 0-1 programs. In: R.E. Bixby, E.A. Boyd, and R.Z. Ríos-Mercado (eds.), Integer Programming and Combinatorial Optimization, 6th International IPCO Conference, Houston, Texas, June 1998, Springer Lecture Notes in Computer Science 1412, Heidelberg, 1998.
[30] Cox, D., Little, J., and O'Shea, D.: Ideals, Varieties and Algorithms. Undergraduate Texts in Mathematics, Springer, New York, 1992.
[31] Danna, E., Fenelon, M., Gu, Z., and Wunderling, R.: Generating multiple solutions for mixed integer programming problems. In: M. Fischetti and D.P. Williamson (eds.), Integer Programming and Combinatorial Optimization, 12th International IPCO Conference, Ithaca, New York, June 2007, Springer Lecture Notes in Computer Science 4513, 280-294.
[32] De Loera, J. and Sturmfels, B.: Algebraic unimodular counting. Math. Programming Ser. B, Vol. 96, No. 2 (2003), 183-203.
[33] De Loera, J.A., Haws, D., Hemmecke, R., Huggins, P., Tauzer, J., and Yoshida, R.: A User's Guide for LattE. Software package LattE and manual available at www.math.ucdavis.edu/latte/, 2003.
[34] De Loera, J.A., Hemmecke, R., Tauzer, J., and Yoshida, R.: Effective lattice point counting in rational convex polytopes. J. Symbolic Comp., Vol. 38, 1273-1302, 2004.
[35] De Loera, J.A., Hemmecke, R., Köppe, M., and Weismantel, R.: Integer polynomial optimization in fixed dimension. Mathematics of Operations Research, Vol. 31, No. 1 (2006), 147-153.
[36] De Loera, J.A., Hemmecke, R., and Köppe, M.: Pareto optima of multicriteria integer programs. INFORMS Journal on Computing, Vol. 21, No. 1 (2009), 39-48.
[37] De Loera, J.A. and Onn, S.: Markov bases of three-way tables are arbitrarily complicated. J. Symbolic Computation, 41:173-181, 2006.
[38] Diaconis, P. and Gangolli, A.: Rectangular arrays with fixed margins. In: Discrete Probability and Algorithms (Minneapolis, MN, 1993), 15-41, IMA Vol. Math. Appl., 72, Springer, New York, 1995.
[39] Diaconis, P. and Sturmfels, B.: Algebraic algorithms for sampling from conditional distributions. Annals of Statistics, 26(1):363-397, 1998.
[40] Diaz, R. and Robins, S.: The Ehrhart polynomial of a lattice polytope. Annals of Mathematics, 145, 503-518, 1997.
[41] Dyer, M. and Kannan, R.: On Barvinok's algorithm for counting lattice points in fixed dimension. Math. of Operations Research, 22 (1997), 545-549.
[42] Dyer, M.: Approximate counting by dynamic programming. In: Proceedings of the 35th Annual ACM Symposium on the Theory of Computing (STOC 03), ACM Press, 693-699, 2003.
[43] Dyer, M., Mount, D., Kannan, R., and Mount, J.: Sampling contingency tables. Random Structures and Algorithms, Vol. 10, 487-506, 1997.
[44] Ehrhart, E.: Polynômes arithmétiques et méthode des polyèdres en combinatoire. International Series of Numerical Mathematics, Vol. 35, Birkhäuser Verlag, Basel, 1977.
[45] Fienberg, S.E., Makov, U.E., Meyer, M.M., and Steele, R.J.: Computing the exact conditional distribution for a multiway contingency table conditional on its marginal totals. In: Data Analysis from Statistical Foundations, 145-166, Nova Science Publishers, A.K.Md.E. Saleh, Huntington, NY, 2001.
[46] Garey, M.R. and Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.
[47] Ghosh, S., Martonosi, M., and Malik, S.: Cache miss equations: a compiler framework for analyzing and tuning memory behavior. ACM Transactions on Programming Languages and Systems, 21, 703-746, 1999.
[48] Grötschel, M., Lovász, L., and Schrijver, A.: Geometric Algorithms and Combinatorial Optimization, second edition. Algorithms and Combinatorics, 2, Springer-Verlag, Berlin, 1993.
[49] Guy, R.K.: Unsolved Problems in Number Theory, 2nd ed., Springer-Verlag, New York, 240-241, 1994.
[50] Huxley, M.N.: Area, Lattice Points, and Exponential Sums. Oxford Univ. Press, Oxford, 1996.
[51] Jacobi, C.: Gesammelte Werke, Berlin 1881-1891. Reprinted by Chelsea, New York, 1969.
[52] Jarre, F.: Relating max-cut problems and binary linear feasibility problems. Available at www.optimization-online.org/DB_FILE/2009/02/2237.pdf.
[53] Jerrum, M.R.: Counting, Sampling and Integrating: Algorithms and Complexity. Birkhäuser, Basel, 2003.
[54] Jerrum, M.R. and Sinclair, A.: The Markov chain Monte Carlo method: an approach to approximate counting and integration. In: D. Hochbaum (ed.), Approximation Algorithms, PWS Publishing Company, Boston, 1997.
[55] Jerrum, M.R., Valiant, L.G., and Vazirani, V.V.: Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:169-188, 1986.
[56] Jerrum, M., Sinclair, A., and Vigoda, E.: A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. Journal of the ACM, 51(4):671-697, 2004.
[57] Jones, J.P.: Universal diophantine equations. Journal of Symbolic Logic, 47(3), 403-410, 1982.
[58] Kannan, R. and Vempala, S.: Sampling lattice points. In: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, USA, 1997, 696-700.
[59] Kantor, J.M. and Khovanskii, A.: Une application du théorème de Riemann-Roch combinatoire au polynôme d'Ehrhart des polytopes entiers. C. R. Acad. Sci. Paris, Series I, Vol. 317, 501-507, 1993.
[60] Köppe, M.: A primal Barvinok algorithm based on irrational decompositions. To appear in SIAM Journal on Discrete Mathematics.
[61] Lenstra, H.W.: Integer programming with a fixed number of variables. Mathematics of Operations Research, 8, 538-548, 1983.
[62] Lasserre, J.B. and Zeron, E.S.: On counting integral points in a convex rational polytope. Math. Oper. Res., 28, 853-870, 2003.
[63] Lasserre, J.B. and Zeron, E.S.: A simple explicit formula for counting lattice points of polyhedra. Lecture Notes in Computer Science 4513, Springer-Verlag, 367-381, 2007.
[64] Micciancio, D. and Goldwasser, S.: Complexity of Lattice Problems: A Cryptographic Perspective. Kluwer International Series in Engineering and Computer Science, 671, Kluwer, Boston, MA, 2002.
[65] Lisonek, P.: Denumerants and their approximations. Journal of Combinatorial Mathematics and Combinatorial Computing, Vol. 18, 225-232, 1995.
[66] Loebl, M. and Zdeborová, L.: The 3D dimer and Ising problems revisited. European J. Combinatorics, Vol. 29, No. 3, 2008.
[67] Loechner, V., Meister, B., and Clauss, P.: Precise data locality optimization of nested loops. J. Supercomput., Vol. 21, 37-76, 2002.
[68] Macdonald, I.G.: Polynomials associated with finite cell-complexes. J. London Math. Soc. (2), 4, 181-192, 1971.
[69] Mahadev, N.V.R. and Peled, U.: Threshold Graphs and Related Topics. Annals of Discrete Mathematics, Vol. 56, North-Holland Publishing Co., Amsterdam.
[70] Mangasarian, O.L.: Knapsack feasibility as an absolute value equation solvable by successive linear programming. Optimization Letters, 3(2) (2009), 161-170.
[71] Mangasarian, O.L. and Recht, B.: Probability of unique integer solution to a system of linear equations. Data Mining Institute, Univ. of Wisconsin, Technical Report 09-01, September 2009.
[72] Mount, J.: Fast unimodular counting. Combinatorics, Probability, and Computing, 9 (2000), 277-285.
[73] Morelli, R.: Pick's theorem and the Todd class of a toric variety. Adv. Math., 100, 183-231, 1993.
[74] Morris, B. and Sinclair, A.: Random walks on truncated cubes and sampling 0-1 knapsack solutions. SIAM Journal on Computing, 34 (2004), 195-226.
[75] Pemantle, R. and Wilson, M.: Asymptotics of multivariate sequences, part I: smooth points of the singular variety. J. Comb. Theory, Series A, Vol. 97, 129-161, 2001.
[76] Pesant, G.: Counting and estimating lattice points: special polytopes for branching heuristics in constraint programming. This issue.
[77] Renegar, J.: A Mathematical View of Interior-Point Methods in Convex Optimization. MPS-SIAM Series in Optimization, 2001.
[78] Sankaranarayanan, S., Ivancic, F., and Gupta, A.: Program analysis using symbolic ranges. In: Proceedings of the Static Analysis Symposium 2007, Vol. 4634 of Lecture Notes in Computer Science, Springer, 366-383.
[79] Schrijver, A.: Theory of Linear and Integer Programming. Wiley-Interscience, 1986.
[80] Stanley, R.P.: Decompositions of rational convex polytopes. In: Combinatorial Mathematics, Optimal Designs and Their Applications (Proc. Sympos. Combin. Math. and Optimal Design, Colorado State Univ., Fort Collins, Colo., 1978), Ann. Discrete Math., 6, 333-342, 1980.
[81] Stanley, R.P.: Enumerative Combinatorics, Volume I. Cambridge, 1997.
[82] Sturmfels, B.: Gröbner Bases and Convex Polytopes. University Lecture Series, Vol. 8, AMS, Providence RI, 1996.
[83] Soprunov, I. and Soprunova, J.: Toric surface codes and Minkowski length of polygons. SIAM J. Discrete Math., 23 (2008/09), no. 1, 384-400.
[84] Szenes, A. and Vergne, M.: Residue formulae for vector partitions and Euler-Maclaurin sums. Adv. in Appl. Math., 30, 295-342, 2003.
[85] Thomas, R.: Algebraic methods in integer programming. In: Encyclopedia of Optimization (eds. C. Floudas and P. Pardalos), Kluwer Academic Publishers, Dordrecht, 2001.
[86] Vanderbei, R.J.: LOQO user's manual version 3.10. Optimization Methods and Software, Vol. 11, Issue 1-4 (1999), 485-514.
[87] Vanderbei, R.J. and Shanno, D.F.: An interior-point algorithm for nonconvex nonlinear programming. Computational Optimization and Applications, 13 (1999), 231-252.
[88] Verdoolaege, S.: barvinok: User guide. Available from http://freshmeat.net/projects/barvinok/, 2007.
[89] Welsh, D.J.A.: Approximate counting. In: Surveys in Combinatorics (ed. R.A. Bailey), Cambridge Univ. Press, Cambridge, 1997.
[90] Ziegler, G.M.: Lectures on Polytopes. Springer-Verlag, New York, 1995.

