QUADRATIC PROGRAMMING TECHNIQUES FOR GRAPH
PARTITIONING *
SOONCHUL PARK, TIMOTHY A. DAVIS, WILLIAM W. HAGER, AND HONGCHAO
ZHANG
Abstract. In a seminal paper (An efficient heuristic procedure for partitioning graphs, Bell
System Technical Journal, 49 (1970), pp. 291–307), Kernighan and Lin propose a pair exchange
algorithm for approximating the solution to mincut graph partitioning problems. In their algorithm,
a vertex from one set in the current partition is exchanged with a vertex in the other set when the sum
of the weights of cut edges is reduced. This algorithm, along with the related Fiduccia/Mattheyses
scheme, is incorporated in state-of-the-art graph partitioning software such as METIS. In this paper
we show that a quadratic programming-based block exchange generalization of the Kernighan and
Lin algorithm can yield a significant improvement in partitioning quality.
Key words. graph partitioning, mincut, quadratic programming
AMS subject classifications. 65K05, 65Y20, 90C20
1. Introduction. The graph partitioning problem is to partition the vertices of a
graph into several disjoint sets satisfying specified size constraints, while minimizing
the sum of the weights of (cut) edges connecting vertices in different sets. Graph
partitioning problems arise in circuit board and microchip design, in other layout
problems (see [21]), and in sparse matrix pivoting strategies. In parallel computing,
graph partitioning problems arise when tasks are partitioned among processors in
order to minimize the communication between processors and balance the processor
load. An application of graph partitioning to parallel molecular dynamics simulations
is given in [26].
In [11, 12] we show that the graph partitioning problem can be formulated as
a continuous quadratic programming problem denoted QP1. Since the graph parti
tioning problem is NP hard, computing a global minimizer of QP1 is often not easy.
When continuous solution algorithms, such as the gradient projection method, are
utilized, the iterates typically converge to a local minimizer which is not the global
optimum. To escape from this local optimum, we need to make a nonlocal change
to obtain a better iterate, which might then be used as a new starting guess for the
gradient projection method.
In their seminal paper [20], Kernighan and Lin propose an exchange algorithm,
denoted KL, for trying to improve any given partition of the vertices. A pair of
vertices in the current partition is exchanged if the sum of the weights of the edges
connecting the partitioned sets decreases. Eventually, the algorithm reaches a partition of the
November 14, 2006. This material is based upon work supported by the U.S. National Science
Foundation under Grants 0203270, 0620286, and 0619080 and by the Korean National Science
Foundation under grant Brain-Korea-21.
scp@knu.ac.kr, School of Electrical Engineering and Computer Science, Kyungpook National
University, Daegu, 702-701, Republic of Korea. Phone 82-53-950-7203. Fax 82-53-950-6093.
davis@cise.ufl.edu, http://www.cise.ufl.edu/~davis, PO Box 116120, Department of Computer
and Information Science and Engineering, University of Florida, Gainesville, FL 32611-6120.
Phone (352) 392-1481. Fax (352) 392-1220.
hager@math.ufl.edu, http://www.math.ufl.edu/~hager, PO Box 118105, Department of Mathematics,
University of Florida, Gainesville, FL 32611-8105. Phone (352) 392-0281. Fax (352) 392-8357.
hozhang@ima.umn.edu, Institute for Mathematics and its Applications (IMA), University of Minnesota,
400 Lind Hall, 207 Church Street S.E., Minneapolis, MN 55455-0436.
vertices for which any exchange either increases or leaves unchanged the sum of the
weights of the cut edges.
The KL exchange is an example of a nonlocal change; for our quadratic pro
gramming formulation of the graph partitioning problem, it amounts to movement of
distance √2. In this paper, we present a generalization of the KL pairwise exchange
in which we allow an arbitrary block of vertices in one set of the partition to be moved
to the other set. We show that the optimal exchange is the solution to a new QP,
denoted QP2, which is related to but different from QP1. The block exchange QP2
is more robust than KL for escaping from a local minimizer in QP1 since there is no
restriction on the number of vertices being exchanged.
Approaches to the graph partitioning problem in the literature include:
(a) Spectral methods, such as those in [16] and [24], where an eigenvector cor
responding to the second smallest eigenvalue (the Fiedler vector) of the graph's
Laplacian is used to approximate the best partition.
(b) Geometric methods, such as those in [9, 14, 23], where geometric information
for the graph is used to find a good partition.
(c) Multilevel algorithms, such as those in [5, 6, 15, 17, 25, 27], that first coarsen
the graph, partition the smaller graph, then uncoarsen to obtain a partition
for the original graph.
(d) Optimizationbased methods, such as those in [1, 2, 3, 7, 28], where approxi
mations to the best partitions are obtained by solving optimization problems.
(e) Methods that employ randomization techniques such as genetic algorithms
([22] or [25]).
State-of-the-art algorithms for graph partitioning which achieve both relatively
high quality partitions and fast execution times include pMETIS and hMETIS ([17], [18],
[19]). These are multilevel algorithms which use either the KL or the related Fiduccia/
Mattheyses [8] (FM) schemes to improve the partition at each level. In this paper
we show that the final partitions generated by METIS can be further optimized by
exploiting QP1 and QP2. In a separate paper, we are developing a multilevel implementation
of our optimization-based algorithms where the role of the KL or FM schemes is
either replaced or assisted by the QP-based optimization algorithms at each level.
The paper is organized as follows. In Section 2 we present QP1, while Section 3
derives QP2. In Section 4 we show how to incorporate QP1 and QP2 into a general
algorithm for graph partitioning. Section 5 examines the potential improvement in a
partition that can be achieved using the QPbased approach.
2. Graph partitioning. Consider a graph with n vertices
V = {1, 2, . . . , n},
and let aij be a weight associated with the edge (i,j). We assume that aii = 0 and
aij = aji for each i and j. The sign of the weights is not restricted. Given lower and
upper integer bounds l and u respectively, we wish to partition the vertices into two
disjoint sets, where one of the sets has between l and u vertices, while minimizing
the sum of the weights associated with edges connecting vertices in different sets. An
optimal partition is called a mincut.
Let us consider the following quadratic programming problem which we denote
QP1:
(2.1)    minimize  f(x) := (1 - x)T(A + D)x
         subject to  0 <= x <= 1,  l <= 1Tx <= u,
where 1 is the vector whose entries are all 1, A is the matrix with elements aij, and
D is a diagonal matrix. When x is binary, the cost function f(x) in (2.1) is the sum
of those aij for which xi = 0 and xj = 1. Hence, when x is binary, f(x) is the sum of
the weights of edges connecting the sets V1 and V2 defined by
(2.2)    V1 = {i : xi = 1} and V2 = {i : xi = 0}.
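The correspondence between f and the cut weight can be checked directly; the following sketch uses an illustrative 4-vertex graph, with the diagonal matrix chosen as described below in (2.4):

```python
import numpy as np

# Small illustrative graph: symmetric weights a_ij, zero diagonal.
A = np.array([[0, 2, 1, 0],
              [2, 0, 0, 3],
              [1, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)
D = np.diag(A.max(axis=0))          # one valid diagonal choice, cf. (2.4)

def f(x):
    """QP1 cost: f(x) = (1 - x)^T (A + D) x."""
    return (1.0 - x) @ (A + D) @ x

# Binary x encoding V1 = {vertices 1, 2}, V2 = {vertices 3, 4}.
x = np.array([1.0, 1.0, 0.0, 0.0])
# The diagonal term (1 - x)^T D x vanishes for binary x, so f(x) equals
# the total weight of edges between V1 and V2.
cut_weight = sum(A[i, j] for i in range(4) for j in range(4)
                 if x[i] == 1.0 and x[j] == 0.0)
assert f(x) == cut_weight == 4.0
```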
In [11] we show that for an appropriate choice of the diagonal matrix D, the
mincut is obtained by solving (2.1); that is, (2.1) has a solution x for which each
component is either zero or one, and the two sets V1 and V2 in an optimal partition
are given by (2.2). The following result [11, Cor. 2.2] shows how to choose D.
THEOREM 2.1. If D is chosen so that
(2.3)    dii + djj >= 2aij
for each i and j, then (2.1) has a 0/1 solution x and the partition given by (2.2) is a
mincut. Moreover, if for each i and j,
dii + djj > 2aij,
then every local minimizer of (2.1) is a 0/1 vector.
The condition (2.3) holds if the diagonal of D is chosen in the following way:
(2.4) djj = max{aij : 1 < i < n}
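This choice of D, and the condition (2.3) it guarantees, can be sketched in a few lines (the weight matrix here is illustrative):

```python
import numpy as np

# Illustrative symmetric weight matrix with zero diagonal.
A = np.array([[0, 4, 1],
              [4, 0, 2],
              [1, 2, 0]], dtype=float)

d = A.max(axis=0)     # d_jj = max{a_ij : 1 <= i <= n}, cf. (2.4)
# (2.4) implies (2.3): d_ii >= a_ij and d_jj >= a_ij, so d_ii + d_jj >= 2 a_ij.
n = A.shape[0]
assert all(d[i] + d[j] >= 2 * A[i, j] for i in range(n) for j in range(n))
```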
In the quadratic program (2.1), the variable x is continuous, with components taking
values on the interval [0,1]. Theorem 2.1 claims that this continuous quadratic pro
gram has a 0/1 solution which yields a mincut. As we now show, any feasible point
for (2.1) can be transformed to a binary feasible point while not increasing the value
of the cost function. Hence, any solution to (2.1) with fractional components can be
transformed to a binary solution.
COROLLARY 2.2. If D satisfies (2.3), then for any x which is feasible in (2.1),
there exists a binary y which is feasible in (2.1) with f(y) <= f(x).
Proof. We first show how to find y with the property that y is feasible in (2.1),
1Ty is integer, and f(y) <= f(x). If 1Tx = u or 1Tx = l, then we are done since l
and u are integers; hence, we assume that l < 1Tx < u. If all components of x are
binary, then we are done, so suppose that there exists a nonbinary component xi.
Since aii = 0, a Taylor expansion of f gives
    f(x + aei) = f(x) + a ∇xi f(x) - a^2 dii,
where ei is the ith column of the identity matrix. The quadratic term in the expansion
is nonpositive. If the first derivative term is negative, then increase a above 0 until
either xi + a becomes 1 or 1Tx + a is an integer. Since the first derivative term
is negative and a > 0, f(x + aei) < f(x). If 1Tx + a becomes an integer, then
we are done. If xi + a becomes 1, then we reach a point x1 with one more binary
component and with a smaller value for the cost function. If the first derivative term
is nonnegative, then decrease a below 0 until either xi + a becomes 0 or 1Tx + a is an
integer. Since the first derivative term is nonnegative and a < 0, f(x + aei) <= f(x).
If 1Tx + a becomes an integer, then we are done. If xi + a becomes 0, then we
reach a point x1 with one more binary component and with a smaller value for the
S. C. PARK, T. A. DAVIS, W. W. HAGER, and H. ZHANG
cost function. In this latter case, we choose another nonbinary component of x, and
repeat the process. Hence, there is no loss of generality in assuming that 1Tx is an
integer.
Suppose that x is not binary. Since 1Tx is an integer, x must have at least two
nonbinary components, xi and xj. Again, expanding f in a Taylor series gives
    f(x + a(ei - ej)) = f(x) + a(∇xi - ∇xj)f(x) + a^2(2aij - dii - djj).
By (2.3), the quadratic term is nonpositive for any choice of a. If the first derivative
term is negative, then we increase a above 0 until either xi + a reaches 1 or xj - a reaches
0. Since the first derivative term is negative and a > 0, f(x + a(ei - ej)) < f(x).
If the first derivative term is nonnegative, then we decrease a below 0 until either
xi + a reaches 0 or xj - a reaches 1. Since the first derivative term is nonnegative
and a < 0, f(x + a(ei - ej)) <= f(x). In either case, the value of the cost function
does not increase, and we reach a feasible point x1 with 1Tx1 integer and with at
least one more binary component. If x1 is not binary, then x1 must have at least two
nonbinary components; hence, the adjustment process can be continued until all the
components of x are binary. These adjustments to x do not increase the value of the
cost function. □
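The proof above is constructive and translates into a rounding procedure. The following sketch (ours, not the authors' code) assumes A is symmetric with zero diagonal and D is supplied as a vector satisfying (2.3); it performs the single-coordinate and pairwise moves exactly as in the proof:

```python
import numpy as np

def round_to_binary(x, A, d, tol=1e-9):
    """Round a feasible x for QP1 to a binary point without increasing
    f(x) = (1-x)^T (A + D) x, following the proof of Corollary 2.2."""
    x = np.asarray(x, dtype=float).copy()
    Q = A + np.diag(d)
    grad = lambda v: Q @ (1.0 - 2.0 * v)   # gradient of f for symmetric Q
    frac = lambda: [i for i in range(len(x)) if tol < x[i] < 1.0 - tol]

    # Phase 1: single-coordinate moves until 1^T x is an integer.
    while abs(x.sum() - round(x.sum())) > tol:
        i = frac()[0]
        if grad(x)[i] < 0:                  # increase x_i; cost decreases
            a = min(1.0 - x[i], np.ceil(x.sum() + tol) - x.sum())
        else:                               # decrease x_i; cost cannot rise
            a = -min(x[i], x.sum() - np.floor(x.sum() - tol))
        x[i] += a
    # Phase 2: moves along e_i - e_j keep 1^T x fixed, while the quadratic
    # term 2 a_ij - d_ii - d_jj <= 0 keeps the cost from rising.
    while len(frac()) >= 2:
        i, j = frac()[:2]
        g = grad(x)
        if g[i] - g[j] < 0:                 # x_i up, x_j down
            a = min(1.0 - x[i], x[j])
        else:                               # x_i down, x_j up
            a = -min(x[i], 1.0 - x[j])
        x[i] += a
        x[j] -= a
    return np.round(x)
```

Because 1Tx moves only to the nearest integer in the first phase, feasibility l <= 1Tx <= u is preserved automatically; each move either lands 1Tx on an integer or fixes a component at a bound, so the procedure terminates.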
The continuous quadratic programming problem (2.1) is NP hard. Hence, when
continuous solution algorithms, such as the gradient projection method, are applied
to (2.1), the iterates typically converge to a local minimizer which is not the global
optimum. In order to escape from this local optimum, we need to make a nonlocal
change in x to locate a deeper valley than that containing the current best approx
imation to a solution of (2.1). The KL exchange is an example of such a nonlocal
change; the length of the movement is √2 since a 0 becomes 1 and a 1 becomes 0 in
x. However, we have achieved much better success in escaping from local minimizers
if we allow many components of x to change. The next section describes our block
exchange QP.
3. Block exchange. Let x be a 0/1 vector satisfying the constraints of (2.1)
and let V1 and V2 be the sets defined in (2.2). In a block exchange, the goal is to
move some of the vertices of V1 to V2 and some of the vertices of V2 to V1 while
satisfying the constraint that the number of vertices in V1 should be between l and u.
Let y and z be subvectors of x which correspond to the components of x which are 1
and 0 respectively. In other words, the ith component of y corresponds to the ith
vertex in V1, which we now view as an ordered set. Similarly, the ith component of
z corresponds to the ith vertex in V2. We set yi = 1 if and only if the ith element
of V1 is moved to V2. Similarly, let us set zj = 1 if and only if the jth element of
V2 is moved to V1. The number of vertices in V1 is initially 1Tx. The constraint
that the total number of vertices in V1 lies between l and u after the exchange can be
expressed
(3.1)    l - 1Tx <= 1Tz - 1Ty <= u - 1Tx.
Let X1 and X2 be the support of y and z respectively:
    X1 = {i : yi = 1} and X2 = {j : zj = 1}.
These sets correspond to the vertices which are exchanged. The indices in X1 correspond
to vertices in V1 which are moved to V2; the indices in X2 correspond to vertices
FIG. 3.1. Exchange vertices X1 in V1 with X2 in V2.
in V2 which are moved to V1. The edges which participate in the exchange are the
following (see Fig. 3.1):
    S1 = Edges between X1 and V1 \ X1
    S2 = Edges between X2 and V2 \ X2
    S3 = Edges between X1 and V2 \ X2
    S4 = Edges between X2 and V1 \ X1
Edges connecting X1 and X2 and edges connecting V1 \ X1 and V2 \ X2 are not affected
by the exchange, so they are ignored.
The change in the number of cut edges due to the exchange of vertices associated
with X1 and X2 is given by the expression
(3.2)    |S1| + |S2| - |S3| - |S4|,
where |Si| denotes the number of elements in the set Si. Before the exchange, the
edges S1 and S2 are internal edges, while after the exchange, they become external
edges that are included in the collection of cut edges. Before the exchange, the edges
S3 and S4 are external edges, included in the set of cut edges; after the exchange,
these edges are internal edges.
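The count in (3.2) can be verified on a small unweighted example (the graph and the exchanged sets are illustrative):

```python
# Undirected edges stored as ordered pairs (illustrative graph).
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
V1, V2 = {1, 2}, {3, 4}
X1, X2 = {2}, {3}                    # blocks to be exchanged

def between(S, T):
    """Edges with one endpoint in S and the other in T."""
    return {(a, b) for (a, b) in edges
            if (a in S and b in T) or (a in T and b in S)}

S1 = between(X1, V1 - X1)            # become cut edges after the exchange
S2 = between(X2, V2 - X2)            # become cut edges after the exchange
S3 = between(X1, V2 - X2)            # cut edges that become internal
S4 = between(X2, V1 - X1)            # cut edges that become internal
change = len(S1) + len(S2) - len(S3) - len(S4)   # expression (3.2)

# Verify against a direct count of cut edges before and after the swap.
new_V1, new_V2 = (V1 - X1) | X2, (V2 - X2) | X1
assert len(between(new_V1, new_V2)) - len(between(V1, V2)) == change
```

Note that the edge (2, 3) between X1 and X2 is cut both before and after the exchange, which is why such edges can be ignored.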
Suppose that the rows and columns of A are symmetrically permuted so that the
leading rows and columns correspond to V1, the support of x, and the trailing rows
and columns correspond to V2. We block partition the resulting A in the form
(3.3)    A = [ A11  A12 ]
             [ A21  A22 ],
where Aii correspond to Vi, i = 1, 2. Similar to (3.2), the change in the weight of the
cut edges associated with the exchange is given by
(3.4)    (1 - y)TA11y + (1 - z)TA22z - (1 - z)TA21y - (1 - y)TA12z.
The first two terms are the weights of external edges created by the exchange, while
the last two terms are the weights of the prior external edges which became internal
after the exchange. Observe that the quadratic (3.4) can be written
(3.5)    [ 1 - y ]T [  A11  -A12 ] [ y ]
         [ 1 - z ]  [ -A21   A22 ] [ z ].
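The equivalence between the term-by-term expression (3.4) and this block quadratic form can be confirmed numerically; the sizes and data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4                          # |V1| and |V2| (illustrative)
A = rng.integers(0, 5, (n1 + n2, n1 + n2)).astype(float)
A = (A + A.T) / 2.0                    # symmetric weights
np.fill_diagonal(A, 0.0)
A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]

y = np.array([1.0, 0.0, 1.0])          # vertices leaving V1
z = np.array([0.0, 1.0, 0.0, 0.0])     # vertices leaving V2

# Term-by-term form (3.4) ...
change = ((1 - y) @ A11 @ y + (1 - z) @ A22 @ z
          - (1 - z) @ A21 @ y - (1 - y) @ A12 @ z)
# ... equals the quadratic form with blocks [[A11, -A12], [-A21, A22]].
M = np.block([[A11, -A12], [-A21, A22]])
quad = np.concatenate([1 - y, 1 - z]) @ M @ np.concatenate([y, z])
assert abs(change - quad) < 1e-12
```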
Motivated by (3.1) and (3.5), we consider the following quadratic programming
problem which we denote QP2:
(3.6)    minimize  F(y, z) := [ 1 - y ]T [ A11 + D1    -A12   ] [ y ]
                              [ 1 - z ]  [  -A21    A22 + D2 ] [ z ]
         subject to  0 <= y <= 1,  0 <= z <= 1,  l - 1Tx <= 1Tz - 1Ty <= u - 1Tx.
Here D1 and D2 are diagonal matrices. If y and z are binary, then the terms
    (1 - y)TD1y and (1 - z)TD2z
have no effect on the cost F since
    (1 - y)TD1y = 0 = (1 - z)TD2z.
As with the quadratic formulation (2.1) of the graph partitioning problem, we show
that for a suitable choice of D1 and D2, the quadratic formulation (3.6) of the
exchange problem has a 0/1 solution. Moreover, the proof reveals how to convert
a fractional solution to a 0/1 solution without increasing the cost. In the following
theorem, we assume that the original matrix of weights A has been symmetrically
permuted into the form (3.3) so that the leading rows and columns correspond to the
support of a 0/1 vector x feasible in (2.1).
THEOREM 3.1. If x is a 0/1 vector which is feasible for (2.1), and the diagonal
matrix D satisfies the condition
(3.7)    dii + djj - 2aij >= 0 for all i and j,
then (3.6) has a 0/1 solution.
Again, (3.7) is satisfied for the choice of D given in (2.4).
Proof. Since l and u are integers and since x is 0/1, it follows that both l - 1Tx
and u - 1Tx are integers. Let y and z be feasible in (3.6). We first show that there
exists a feasible point for (3.6) with 1Tz - 1Ty integer and with cost no larger than F(y, z).
If 1Tz - 1Ty is not an integer, then at least one component of either y or z is not
an integer. Suppose that yi is not an integer and let ei denote the ith column of the
identity matrix. Expanding F in a Taylor series gives
    F(y + aei, z) = F(y, z) + a ∇yi F(y, z) - a^2 dii
since aii = 0. The last term -a^2 dii is nonpositive due to (3.7). If the first derivative
∇yi F(y, z) is negative, then increase a above 0 until either yi + a = 1 or 1Tz - 1Ty - a
becomes an integer, whichever occurs first. This leads us to a new point with strictly
smaller cost than the original (y, z) since the first derivative term is negative and the
cost decreases as a increases. If the increase in a causes 1Tz - 1Ty - a to become an
integer, then we are done. If yi + a becomes 1, then we reach a feasible point y + aei
which has one more binary component.
QUADRATIC PROGRAMMING AND GRAPH PARTITIONING
If the first derivative ∇yi F(y, z) is nonnegative, then decrease a below 0 until
either yi + a = 0 or 1Tz - 1Ty - a becomes an integer, whichever occurs first. Since
∇yi F(y, z) is nonnegative, this decrease in a will not increase the value of the cost
function. Again, if 1Tz - 1Ty - a becomes an integer, we are done. Otherwise, yi + a
becomes zero and we reach a point y + aei which has one more binary component.
By inductively applying these adjustments to the fractional components of y and z,
we eventually reach a feasible point (y, z) with a better value for the cost function
and with 1Tz - 1Ty integer. Thus, without loss of generality, we assume that (y, z)
is feasible in (3.6) and 1Tz - 1Ty is integer.
Suppose that y has at least two nonbinary components; let yi and yj denote
nonbinary components of y. Expanding in a Taylor series gives
    F(y + a(ei - ej), z) = F(y, z) + a(∇yi - ∇yj)F(y, z) + a^2(2aij - dii - djj).
By (3.7) the a^2 term is nonpositive for any choice of a. If the first derivative term
(∇yi - ∇yj)F(y, z) is negative, then we increase a above 0 to decrease the cost. We
continue to increase a until some component of y + a(ei ej) reaches either 0 or 1.
Since 1T(ei - ej) = 0, we have
    1Tz - 1T(y + a(ei - ej)) = 1Tz - 1Ty.
Hence, this adjustment to the components yi and yj of y leads us to a new point with at
least one more binary component and with 1Tz - 1Ty integer. The same adjustment
process can be applied to the components of z. Hence, when we are done, 1Tz - 1Ty
is an integer and y and z each have at most one nonbinary component.
Suppose that y has one nonbinary component yi. Since 1Tz - 1Ty is an integer,
z must have a nonbinary component, denoted zk, with zk = yi. Define j = k + |V1|.
Expanding in a Taylor series gives
    F(y + aei, z + aek) = F(y, z) + a(∇yi + ∇zk)F(y, z) + a^2(2aij - dii - djj).
By (3.7) the last term is nonpositive for all choices of a. If the first derivative term is
negative, then we increase a above 0 until yi + a = 1 = zk + a. If the first derivative
term is nonnegative, then we decrease a below 0 until yi + a = 0 = zk + a. In either
case, after these adjustments in the ith component of y and the kth component of z,
the cost value does not increase and the difference 1Tz - 1Ty does not change; hence,
the new point is binary and feasible in QP2. This completes the proof. □
COROLLARY 3.2. If D is chosen so that the inequality (3.7) is strict, then every
local minimizer of (3.6) is binary.
Proof. By the argument given in the proof of Theorem 3.1, any nonbinary local
minimizer can be pushed to the boundary while improving the value of the cost
function. If the inequality (3.7) is strict, then when we push to the boundary, the
value of the cost function is strictly decreased. Hence, any local minimizer must be
binary. □
4. The algorithm. We now explain how to incorporate the theory developed in
Sections 2 and 3 in an optimization algorithm for the graph partitioning problem. The
overall strategy is to apply an optimization algorithm, such as the gradient projection
method, to QP1 until we reach a local minimizer; next, we apply an optimization algo
rithm to the exchange quadratic program QP2 in an effort to escape from the current
local minimum. If we are unable to find a better point, then we stop. Otherwise, use
the x obtained from QP2 as a starting guess in QP1, and repeat the process.
We use two different optimization algorithms to approximate a solution to QP1
and QP2. In the first optimization algorithm, we approximate the feasible set by
a sphere and we utilize the algorithm in [10, 13] to efficiently compute the global
minimum. Typically, a global minimizer for this sphere constrained problem lies
outside the feasible set. Hence, we project a global minimizing point onto the feasible
set. Such a projection is easily computed in O(n) time. In the second optimization
algorithm, we apply the gradient projection algorithm to either QP1 or QP2. We used
a version of the gradient projection algorithm based on an Armijo line search along
the projection arc (see [4, p. 226]).
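A compact sketch of this second algorithm follows. It is not the authors' MATLAB code: the projection is computed here by bisection rather than the O(n) scheme mentioned above, and the line-search constants are illustrative:

```python
import numpy as np

def project(v, l, u, iters=80):
    """Euclidean projection of v onto {x : 0 <= x <= 1, l <= 1^T x <= u},
    via bisection on the shift t in clip(v + t, 0, 1). A sketch; the O(n)
    projection mentioned in the text is not reproduced here."""
    x = np.clip(v, 0.0, 1.0)
    if l <= x.sum() <= u:
        return x
    target = l if x.sum() < l else u
    lo, hi = -v.max(), 1.0 - v.min()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        if np.clip(v + t, 0.0, 1.0).sum() < target:
            lo = t
        else:
            hi = t
    return np.clip(v + 0.5 * (lo + hi), 0.0, 1.0)

def gradient_projection(x, A, d, l, u, step=1.0, beta=0.5, sigma=1e-4, maxit=200):
    """Gradient projection for QP1 with an Armijo backtracking search along
    the projection arc (a sketch of the method named in the text)."""
    Q = A + np.diag(d)
    f = lambda v: (1.0 - v) @ Q @ v
    for _ in range(maxit):
        g = Q @ (1.0 - 2.0 * x)            # gradient of f (Q symmetric)
        a = step
        xa = project(x - a * g, l, u)
        while f(xa) > f(x) + sigma * (g @ (xa - x)) and a > 1e-12:
            a *= beta                       # backtrack along the arc
            xa = project(x - a * g, l, u)
        if f(xa) >= f(x):                   # no descent: numerically stationary
            return x
        x = xa
    return x
```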
In more detail, the steps of the algorithm are as follows:
A1. Define xc = a1, where a = (l + u)/(2n). Let x1 be a solution to the following
sphere constrained problem:
    min f(x) subject to 1Tx = (l + u)/2 and ||x - xc|| <= rx.
Since QP1 has a solution with between l and u ones and with the remaining
entries zero, we choose rx to include all points x with (l + u)/2 ones and with
the remaining entries zero. In other words,
    rx^2 = ((l + u)/2)(1 - a)^2 + (n - (l + u)/2) a^2.
A2. Let x2 be the projection of x1 onto the feasible set for QP1. If f(x2) > f(xc),
then reduce rx and repeat A1.
A3. Starting from x2, we apply the gradient projection method to QP1 until we
reach a stationary point denoted x3.
A4. Using the method developed in Corollary 2.2, we transform x3 to a binary
vector x4 with a better value for the cost function.
A5. Based on the binary structure of x4, we partition A as indicated in (3.5). Let
dy and dz denote the dimensions of y and z. With a permutation of A, it
can be arranged so that dz <= dy. We define zc = .5 and yc = .5dz/dy (hence,
1Tzc - 1Tyc = 0). Let x5 = (y, z) be any solution of the problem
(4.1)    min F(y, z) subject to 1Tz = 1Ty and ||(y, z) - (yc, zc)|| <= r5,
where
    r5^2 = dy - .75dz + .25dz^2/dy.
The radius r5 of the sphere in (4.1) is chosen large enough to ensure that
all possible solutions to the problem of minimizing F(y, z), subject to the
constraint that 1Tz = 1Ty and y and z are binary, are contained in the
sphere.
A6. Let x6 be the projection of x5 onto the feasible set of QP2. If F(x6) >
F(yc, zc), then reduce r5 and repeat A5.
A7. Starting from x6, we apply the gradient projection method to QP2 until we
reach a stationary point denoted x7.
A8. Using the method developed in Theorem 3.1, we transform x7 to a binary
vector x8.
A9. If the exchange associated with x8 improves the partitioning associated with
x4, then we apply the exchange to x4 to obtain the new point x9; set x2 =
x9 and branch to A3. If the exchange associated with x8 does not strictly
improve the partitioning associated with x4, then we are done.
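The radius in step A5 can be sanity-checked by enumeration: for small dy and dz with dz <= dy, every binary pair (y, z) satisfying 1Tz = 1Ty lies in the sphere when r5^2 = dy - .75dz + .25dz^2/dy (our reading of the radius formula in A5; the dimension pairs below are illustrative):

```python
import itertools

# Centers yc = .5*dz/dy (each y component) and zc = .5 (each z component).
checked = 0
for dy, dz in [(3, 2), (4, 4), (5, 1)]:
    yc, zc = 0.5 * dz / dy, 0.5
    r5sq = dy - 0.75 * dz + 0.25 * dz ** 2 / dy
    for y in itertools.product((0, 1), repeat=dy):
        for z in itertools.product((0, 1), repeat=dz):
            if sum(y) != sum(z):          # enforce 1^T z = 1^T y
                continue
            dist_sq = (sum((yi - yc) ** 2 for yi in y)
                       + sum((zi - zc) ** 2 for zi in z))
            assert dist_sq <= r5sq + 1e-12
            checked += 1
assert checked > 0
```

The bound follows because each z component contributes at most .25 and each y component at most (1 - .5dz/dy)^2, so the worst case sums to exactly r5^2.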
For the numerical experiments reported in this paper, we did not reduce the
radius of the spheres, as suggested in A2 and A6, when the solution of the sphere
constrained problem yielded a poorer objective function value than the centroid of
the sphere. This enhancement will be incorporated in a multilevel version of our
algorithms.
5. Numerical results. The optimization-based algorithm developed in Section
4 should require much more CPU time than the multilevel technology of METIS since
the optimization algorithms operate on the entire matrix. We are in the process of
developing compiled code and a multilevel implementation in which the optimization
methodology of Section 4 is applied to the compressed graphs generated in the multilevel
approach. As a preliminary assessment of the merits of the optimization-based strategy
for graph partitioning, we applied both pMETIS and hMETIS to a series of graph
bisection problems. In other words, if n is even, then l = u = n/2, and if n is odd,
then l = u = (n + 1)/2. The partitions generated by pMETIS or hMETIS were used as
starting points for the optimization algorithm in step A3 to determine whether the
METIS generated partitions could be further improved using the optimization algorithms.
All the algorithms were implemented in MATLAB, and the test problems
were obtained from the UF Sparse Matrix Library maintained by Timothy Davis:
http://www.cise.ufl.edu/research/sparse/matrices/
In our numerical experiments, the diagonal of A is always zero. The off-diagonal
elements are constructed as follows: If S is a symmetric matrix in the library, then
aij = 0 if sij = 0 and aij = 1 otherwise. If S is an m by n rectangular matrix
with m > n, then aij = 0 if (STS)ij = 0 and aij = 1 otherwise. If m < n, then
aij = 0 if (SST)ij = 0 and aij = 1 otherwise.
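The construction of A from a library matrix S can be sketched as follows; dense arrays are used for clarity (the experiments use sparse matrices), and the helper name is ours:

```python
import numpy as np

def weights_from_matrix(S):
    """Build the 0/1 weight matrix A from a library matrix S, following
    the rules above (S is assumed symmetric when square)."""
    m, n = S.shape
    if m == n:
        B = S                       # use the pattern of S itself
    elif m > n:
        B = S.T @ S                 # tall: pattern of S^T S
    else:
        B = S @ S.T                 # wide: pattern of S S^T
    A = (B != 0).astype(float)      # a_ij = 1 wherever the pattern is nonzero
    np.fill_diagonal(A, 0.0)        # the diagonal of A is always zero
    return A

S = np.array([[1.0, 0.0],
              [2.0, 3.0],
              [0.0, 4.0]])          # 3-by-2, so the S^T S rule applies
A = weights_from_matrix(S)
assert np.array_equal(A, np.array([[0.0, 1.0], [1.0, 0.0]]))
```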
Since our codes are in MATLAB, we could not apply them to all the test matrices
(without expending a huge amount of CPU time). Altogether, we tried 701 test
problems; the mean dimension for A was 1157 and the mean number of edges in
the test problems was 57,1 T. There were 257 problems with dimension greater than
1,000, and there were .11 ; problems with more than 10,000 edges in the graph.
To quantitatively evaluate the improvement provided by the optimization routines,
we evaluated the quantity
    reduction in number of cut edges due to optimization algorithms
    ----------------------------------------------------------------  x 100.
              number of cut edges obtained by METIS
This expression gives the percent improvement in the number of cut edges obtained
by applying the optimization algorithms to the final partition generated by METIS.
In Table 5.1 we show the percentage of the matrices for which we could improve the
partition using the optimization algorithms. A detailed tabulation of our results is
posted at the following web site:
http://www.math.ufl.edu/~hager/papers/GP/
For each of the matrices where the cut edges were improved, we also compute the
average percentage of improvement. Overall, we could improve the partitions generated
by pMETIS in about 53% of the problems, and the average improvement was
about 10%. We could improve the partitions generated by hMETIS in about 31%
of the problems, and the average improvement was about 5.7%. For both versions
of METIS, the greatest improvement occurred in matrices of the largest dimension.
In particular, for matrices of dimensions between 4001 and 5000, the average improvement
for pMETIS was 11.82% while the average improvement for hMETIS was
9.2%.
Dimension      Number of              Problems with         Average
of problem     problems    Method     cut edge reduction    improvement
1 to 1000         444      pMETIS        193 (43%)            10.02%
                           hMETIS        118 (27%)             5.1%
1001 to 2000      156      pMETIS        111 (71%)            11.37%
                           hMETIS         50 (32%)              --
2001 to 3000       48      pMETIS         35 (73%)             7.31%
                           hMETIS         18 (38%)             4.09%
3001 to 4000       33      pMETIS         18 (55%)              --
                           hMETIS         16 (49%)             7.42%
4001 to 5000       20      pMETIS         14 (70%)            11.82%
                           hMETIS         12 (60%)             9.2%
                              TABLE 5.1
        Improvement in pMETIS and hMETIS due to the optimization
REFERENCES
[1] E. R. BARNES, An algorithm for partitioning the nodes of a graph, SIAM J. Alg. Disc. Meth.,
3 (1982), pp. 541–550.
[2] E. R. BARNES AND A. J. HOFFMAN, Partitioning, spectra, and linear programming, in Progress
in Combinatorial Optimization, W. R. Pulleyblank, ed., Academic Press, New York, 1984,
pp. 13–25.
[3] E. R. BARNES, A. VANNELLI, AND J. Q. WALKER, A new heuristic for partitioning the nodes
of a graph, SIAM J. Alg. Disc. Meth., 9 (1988), pp. 299–305.
[4] D. P. BERTSEKAS, P. A. HOSEIN, AND P. TSENG, Relaxation methods for network flow problems
with convex arc costs, SIAM J. Control Optim., 25 (1987), pp. 1219–1243.
[5] T. BUI AND C. JONES, A heuristic for reducing fill in sparse matrix factorization, in Proc. 6th
SIAM Conf. Parallel Processing for Scientific Computing, SIAM, 1993, pp. 445–452.
[6] C. K. CHENG AND Y. C. WEI, An improved two-way partitioning algorithm with stable performance,
IEEE Trans. Computer-Aided Design, 10 (1991), pp. 1502–1511.
[7] J. FALKNER, F. RENDL, AND H. WOLKOWICZ, A computational study of graph partitioning,
Math. Program., 66 (1994), pp. 211–240.
[8] C. M. FIDUCCIA AND R. M. MATTHEYSES, A linear-time heuristic for improving network partitions,
in Proc. 19th Design Automation Conf., Las Vegas, NV, 1982, pp. 175–181.
[9] J. R. GILBERT, G. L. MILLER, AND S. H. TENG, Geometric mesh partitioning: Implementation
and experiments, SIAM J. Sci. Comput., 19 (1998), pp. 2091–2110.
[10] W. W. HAGER, Minimizing a quadratic over a sphere, SIAM J. Optim., 12 (2001), pp. 188–208.
[11] W. W. HAGER AND Y. KRYLYUK, Graph partitioning and continuous quadratic programming,
SIAM J. Discrete Math., 12 (1999), pp. 500–523.
[12] W. W. HAGER AND Y. KRYLYUK, Multiset graph partitioning, Math. Methods Oper. Res., 55
(2002), pp. 1–10.
[13] W. W. HAGER AND S. C. PARK, Global convergence of SSM for minimizing a quadratic over
a sphere, Math. Comp., 74 (2005), pp. 1413–1423.
[14] M. T. HEATH AND P. RAGHAVAN, A Cartesian parallel nested dissection algorithm, SIAM J.
Matrix Anal. Appl., 16 (1995), pp. 235–253.
[15] B. HENDRICKSON AND R. LELAND, A multilevel algorithm for partitioning graphs, Tech. Rep.
SAND93-1301, Sandia National Laboratories, 1993.
[16] B. HENDRICKSON AND R. LELAND, An improved spectral graph partitioning algorithm for mapping
parallel computations, SIAM J. Sci. Comput., 16 (1995), pp. 452–469.
[17] G. KARYPIS AND V. KUMAR, A fast and high quality multilevel scheme for partitioning irregular
graphs, SIAM J. Sci. Comput., 20 (1998), pp. 359–392.
[18] G. KARYPIS AND V. KUMAR, Multilevel k-way partitioning scheme for irregular graphs, J.
Parallel Distrib. Comput., 48 (1999), pp. 96–129.
[19] G. KARYPIS AND V. KUMAR, Multilevel k-way hypergraph partitioning, VLSI Design, 11 (2000),
pp. 285–300.
[20] B. W. KERNIGHAN AND S. LIN, An efficient heuristic procedure for partitioning graphs, Bell
System Tech. J., 49 (1970), pp. 291–307.
[21] T. LENGAUER, Combinatorial Algorithms for Integrated Circuit Layout, John Wiley, Chichester,
1990.
[22] J. G. MARTIN, Subproblem optimization by gene correlation with singular value decomposition,
in GECCO '05, Washington, D.C., 2005, ACM.
[23] G. L. MILLER, S. H. TENG, W. THURSTON, AND S. A. VAVASIS, Automatic mesh partitioning,
in Sparse Matrix Computations: Graph Theory Issues and Algorithms, A. George, J. R.
Gilbert, and J. W. H. Liu, eds., vol. 56 of IMA Vol. Math. Appl., Springer-Verlag, New
York, 1993, pp. 57–84.
[24] A. POTHEN, H. D. SIMON, AND K. LIOU, Partitioning sparse matrices with eigenvectors of
graphs, SIAM J. Matrix Anal. Appl., 11 (1990), pp. 430–452.
[25] A. J. SOPER, C. WALSHAW, AND M. CROSS, A combined multilevel search and multilevel optimization
approach to graph partitioning, J. Global Optim., 29 (2004), pp. 225–241.
[26] S.-H. TENG, Provably good partitioning and load balancing algorithms for parallel adaptive
N-body simulation, SIAM J. Sci. Comput., 19 (1998), pp. 635–656.
[27] C. WALSHAW, Multilevel refinement for combinatorial optimisation problems, Ann. Oper. Res.,
131 (2004), pp. 325–372.
[28] H. WOLKOWICZ AND Q. ZHAO, Semidefinite programming relaxations for the graph partitioning
problem, Discrete Appl. Math., 96–97 (1999), pp. 461–479.