OPTIMAL MULTIFACILITY LOCATION ON
TREE NETWORKS
By
BARBAROS C. TANSEL
A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1979
ACKNOWLEDGMENTS
I am deeply indebted and grateful to Dr. Richard L. Francis, the
chairman of my supervisory committee, for his excellent guidance,
numerous suggestions, and the generosity with which he invested his time
in listening to my ideas. Dr. Francis not only initiated my interest
in location problems but also inspired many of the ideas in this
dissertation by asking the right questions at the right time.
I owe very special thanks to Dr. Timothy J. Lowe, the cochairman/
chairman of my committee during 1976-1978, presently of Purdue
University, for his active interest, overall guidance, and his inspiring
suggestions.
Dr. Francis and Dr. Lowe have shown sincere care about my progress
and their encouragement has been of utmost value in bringing this
dissertation to a completion.
I would also like to express my sincere thanks and appreciation
to the other members of my committee, Dr. Ralph W. Swain, Dr. Donald W.
Hearn, Dr. Antal Majthay, and Dr. Luc G. Chalmet for their interest in
my work and their suggestions during my proposal.
I am grateful to the Department of ISE for providing me with
assistantship during my graduate studies.
Mrs. Adele Koehler has done an excellent job in typing the
manuscript. She is fast, accurate, and very observant. I sincerely
recommend her.
This research was supported in part by NSF Grant #ENG 76-17810,
by the Army Research Office, Triangle Park, N.C., under contract
DAHC04-75-G-0150, and by the Operations Research Division, National
Bureau of Standards, Washington, D.C.
TABLE OF CONTENTS

                                                                  Page

ACKNOWLEDGMENTS ................................................   ii

ABSTRACT .......................................................   vi

CHAPTER

1  INTRODUCTION AND LITERATURE SURVEY ..........................    1

   1.1  Introduction and Overview ..............................    1
   1.2  Terminology ............................................    4
   1.3  Survey of the Network Location Literature ..............    6

2  DUALITY AND THE NONLINEAR p-CENTER PROBLEM AND
   COVERING PROBLEM ON A TREE NETWORK ..........................   53

   2.1  Introduction and Related Work ..........................   53
   2.2  Problem Statements and Duality .........................   56
   2.3  Dual Problem Interpretation ............................   61
   2.4  Covering Algorithm .....................................   67
   2.5  Dual Problem Solution and the Strong Duality Theorem ...   73
   2.6  Results for the Covering Problem .......................   78

3  A VECTOR-MINIMIZATION PROBLEM ON A TREE NETWORK .............   84

   3.1  Introduction ...........................................   84
   3.2  Problem Statement ......................................   85
   3.3  Distance Constraints and Characterization of
        Efficient Points .......................................   87
   3.4  Examples ...............................................   94
   3.5  Further Results on the Convex Hull Property ............   96
   3.6  Algorithm to Construct Efficient Location Vectors ......  108
   3.7  Efficiency for the Case of Rectilinear or
        Tchebychev Distances ...................................  116

4  THE BIOBJECTIVE m-CENTER PROBLEM ON A TREE NETWORK ..........  122

   4.1  Introduction ...........................................  122
   4.2  Problem Statement, Notation, and Definitions ...........  123
   4.3  Necessary and Sufficient Conditions for Efficiency .....  126
   4.4  Construction of the Efficient Frontier .................  134

5  SUMMARY AND FUTURE RESEARCH .................................  149

   5.1  Summary ................................................  149
   5.2  Generalized Multi-Center Problem .......................  150
   5.3  The t-Objective m-Center Problem: Steps
        Towards a Unified Theory ...............................  153
   5.4  Tree Networks and General Networks .....................  158

REFERENCES .....................................................  161

BIOGRAPHICAL SKETCH ............................................  170
Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy
OPTIMAL MULTIFACILITY LOCATION ON
TREE NETWORKS
By
Barbaros C. Tansel
December 1979
Chairman: Richard L. Francis
Major Department: Industrial and Systems Engineering
In this dissertation we develop a theory for location problems which
involve locating multiple new facilities on a tree network with respect
to existing facilities at known locations.
The first problem we consider is the nonlinear version of the
p-center location problem on a tree network for which the cost of each
served vertex is a strictly increasing continuous function of the
distance between the vertex and the nearest center, and the objective is
to minimize the maximum such cost over all possible locations of the
centers. We present a dual "dispersion" problem which may be
interpreted as the problem of choosing p + 1 (or more) vertices such that
the minimum cost to serve any two of the chosen vertices by a single
common center is as large as possible. We give a weak duality theorem
which applies to all general networks and a strong duality theorem
which applies to all tree networks. The strong duality theorem also
specifies the necessary and sufficient conditions for an optimal solu
tion to either problem. We provide algorithms of polynomial complexity
for solving either problem provided that certain needed inverse functions
can be evaluated in a polynomial order of effort. The p-center problem
is typically solved with the aid of a nonlinear covering problem for
which we also give a dual with a physical interpretation. We provide
a covering algorithm which solves both the covering problem and its dual
simultaneously.
The second problem we consider is a vector-minimization problem
which involves as objectives the distances between specified pairs of
new and existing facilities and specified pairs of new facilities. We
relate the vector-minimization problem of interest to a distance
constraints problem which imposes upper bounds on the distances between
specified pairs of facilities. We develop the necessary and sufficient
conditions for efficiency by making use of the theory developed for the
related distance constraints problem. Efficient solutions to the
vector-minimization problem of interest are such that in order for any
new facility to be closer to some facility than it already is, it must
in turn be placed farther from some other facility. Based on the
necessary and sufficient conditions, we provide an algorithm which
constructs an efficient location vector from a given nonefficient
solution.
The third problem we consider is a biobjective minimax problem
which involves as objectives the maximum of the weighted distances
between specified pairs of new and existing facilities, and the maximum
of the weighted distances between specified pairs of new facilities.
We again relate the problem to the distance constraints problem and
derive the necessary and sufficient conditions for efficiency by making
use of the distance constraints. Further, we provide an O(m(m + n))
algorithm to construct the efficient frontier, where m and n are,
respectively, the number of new and existing facilities.
CHAPTER 1
INTRODUCTION AND LITERATURE SURVEY
1.1 Introduction and Overview
Although some mathematical models of location can be traced back
to the early seventeenth century, almost all the work on operational
models for the location of facilities has taken place within the past
22 years, between 1957 and the present. An extensive annotated
bibliography on location-allocation problems is provided by Lea [78].
A more recent selective bibliography is given by Francis and Goldstein
[30].
Location problems commonly involve locating a number of new
facilities (sources) in a given location space so as to provide goods
or services to a specified set of existing facilities (demands) under
one or more criteria, and, possibly, subject to a set of constraints.
The quality of the service is typically measured in terms of the
distances among the facilities. The use of distances is, perhaps, the
major feature which distinguishes location problems as a special class
of optimization problems. Hence, associated with any location problem
is an underlying location space on which a "distance" is defined.
Several variations of the general location problem are possible,
depending upon the type of location space, the distance function, the
number and areal extent of the facilities, the type of interactions
between the facilities, the objective criteria used, the constraints,
the presence or lack of random elements, and possibly other factors
as well.
Among the several variants, planar location problems received
special attention in the past, starting with the earliest
contributions, for example [106]. In such planar problems, one is interested
in locating new facilities in the Euclidean plane with respect to
existing facilities. For continuous planar problems, where any point
in the plane is a feasible location, the typical distance used is the
ℓp distance, special cases of which are the rectilinear, Euclidean,
and Tchebychev norms. For discrete planar problems, where there are
a finite number of candidate locations for new facilities, the distance
between any potential new facility location and any existing facility
is a specified positive number. Such discrete problems, due to the
finite nature of feasible locations, readily lend themselves to integer
programming formulations. The reader is referred to the book by
Francis and White [31] for a discussion of planar problems and a wealth
of references.
A number of real life applications suggest that, in some
instances, a network space can be a more faithful representation of the
reality than the Euclidean plane. For example, in a road network, a
communication network, or a pipeline system, travel occurs along the
arcs of the underlying network rather than in straight lines or
rectilinear paths. Hence, for such problems, the use of shortest path
distances along the arcs of the network can approximate the travel
distance more closely than the ℓp distance. As opposed to planar
problems, network location problems have received much less attention
in the past. As reported by Lea [79], there are some 1500 published
papers on locationallocation problems. Among these, about 80 are on
network location problems, a ratio of a little less than 6%. Hence,
network location problems deserve well-justified attention in future
research.
In this dissertation, we develop a theory for a number of location
problems which involve locating multiple new facilities on a tree
network with respect to existing facilities at known locations. At this
point we give an overview of the dissertation.
In the remainder of Chapter 1, we specify our terminology and
give a survey of the network location literature. We discuss minimax and
minisum problems, and multiobjective problems involving minimax and
minisum objectives as well as other objectives. Discussed also are
problems with distance constraints. We highlight some of the convexity
properties of trees (see [22]) in relation to the problems discussed.
The chapter ends with a brief discussion of path-location problems.
In Chapter 2, we develop a theory for the nonlinear p-center
problem on a tree network. The problem is a generalization of the
linear p-center problem which involves locating p new facilities on
a network so as to minimize the maximum weighted distance from any
existing facility to its nearest new facility. Nonlinearity is
obtained by replacing each weight by a strictly increasing function of
the distance. We formulate a dual "dispersion" problem and prove a
weak duality and a strong duality theorem. The strong duality theorem
also specifies the necessary and sufficient conditions for an optimal
solution to either problem. We provide algorithms of polynomial
complexity for solving either problem. Discussed also are a covering
problem and a dual "divergence" problem. We provide a covering
algorithm which solves both the covering problem and its dual
simultaneously.
In Chapter 3, we study a vector-minimization problem in relation
to a distance constraints problem. The problem involves as objectives
the distances between specified pairs of new and existing facilities
and specified pairs of new facilities. We extend the results of [32]
to develop a theory for identifying unique solutions to distance
constraints, and use this theory to develop necessary and sufficient
conditions for efficient solutions to the vector-minimization problem
of interest. Further, we provide an algorithm which constructs an
efficient location vector from a given nonefficient solution.
In Chapter 4, we study a biobjective location problem which
involves as objectives the maximum of the weighted distances between
specified pairs of new and existing facilities, and the maximum of the
weighted distances between specified pairs of new facilities. We
characterize efficient solutions and provide an algorithm for
constructing the efficient frontier.
In Chapter 5, we pose a number of unresolved questions in relation
to the problems discussed and point out directions for future research.
1.2 Terminology
Before discussing the literature we specify our terminology.
An undirected network N = {V,E} is a collection of two sets V
and E, called the set of vertices and the set of edges of N,
respectively. Each edge in E is described by an unordered pair of vertices.
Network N is said to be edge weighted if, associated with each of its
edges, is a specified real number. Given an undirected network
N = {V,E} with positive edge weights, an imbedding of N, written as
N̄ = {V̄,Ē}, is a geometric realization of N in some space S such that
there is a one-to-one correspondence between the members of V and V̄,
and E and Ē, respectively; each edge ē ∈ Ē is a rectifiable arc, and no
two edges in Ē intersect at more than one point, a vertex. The length
of edge ē in Ē is defined to be the edge weight of the corresponding
member in E. A point of an imbedded network N̄ = {V̄,Ē} is any point
along any edge in Ē, including the vertices. We write x ∈ N̄ to mean x
is a point in N̄. The distance d(x,y) between any two points x,y ∈ N̄ is
the length of a shortest path P(x,y) joining the two points. The
function d(·,·) satisfies the axioms of a metric on N̄ so that the set
N̄ together with d(·,·) determines a metric space.
The axioms of a metric are as follows: For any two points x,y ∈ N,
1. d(x,y) > 0 if x ≠ y; d(x,x) = 0,
2. d(x,y) = d(y,x),
3. d(x,y) ≤ d(x,u) + d(u,y) for any u ∈ N.
For a more detailed discussion of how to construct a metric space
(N,d) from a given edge weighted network N, the reader is referred to
Dearing and Francis [19], or Dearing, Francis, and Lowe [22].
We restrict ourselves to finite undirected connected networks
that contain no loops and no multiple edges. We omit the term
"imbedded," and simply take a network to mean an imbedded network on
which the distance d(·,·) is defined. For all other networks, we use
the terms "graph," "arcs," and "nodes" instead of network, edges, and
vertices.
Finally, for tree networks, we write T instead of N. In passing,
we note that the shortest path P(x,y) between any two points x,y ∈ T is
unique, as otherwise T would contain a cycle.
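The shortest-path distance on a tree, and the fact that it satisfies the three metric axioms above, can be illustrated with a short Python sketch (the tree, vertex names, and edge lengths below are hypothetical, chosen only for illustration):

```python
from collections import defaultdict

def tree_distance(adj, x, y):
    """Length of the unique path between vertices x and y of a tree,
    found by depth-first search from x (a tree has no cycles, so the
    first path reached is the shortest path P(x, y))."""
    stack = [(x, None, 0.0)]
    while stack:
        node, parent, dist = stack.pop()
        if node == y:
            return dist
        for nbr, length in adj[node]:
            if nbr != parent:
                stack.append((nbr, node, dist + length))
    raise ValueError("tree is not connected")

# A small edge-weighted tree on vertices v1,...,v5.
adj = defaultdict(list)
for u, v, length in [("v1", "v2", 3.0), ("v2", "v3", 2.0),
                     ("v2", "v4", 4.0), ("v4", "v5", 1.0)]:
    adj[u].append((v, length))
    adj[v].append((u, length))

V = list(adj)
d = lambda x, y: tree_distance(adj, x, y)
# The three metric axioms hold for every pair of vertices.
for x in V:
    for y in V:
        assert (d(x, y) > 0) == (x != y)                  # axiom 1
        assert d(x, y) == d(y, x)                         # axiom 2
        for u in V:                                       # axiom 3
            assert d(x, y) <= d(x, u) + d(u, y) + 1e-9
```

Restricting the check to vertices keeps the sketch short; the same distance extends to interior points of edges by measuring partial edge lengths.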
1.3 Survey of the Network Location Literature
Historically, the earliest precise mathematical formulation of a
location problem on a network appears to be due to Hakimi [47] in 1964.
Prior to Hakimi's paper, the problem of finding the best threshing
site for harvested wheat was attacked by using a network location model
in 1962 by Hua Lo-Keng and others [60]. This model was presented only
at an intuitive level and no mathematical formulation or properties
were given. A (correct) solution procedure was suggested (in the form
of a poem), which was to be discovered independently by Goldman [42] in
1971. Since 1964, a literature of approximately 80 papers has grown
till the present. Several new problems, as well as certain extensions
and generalizations of old problems, have been introduced.
A recent text by Handler and Mirchandani [58] discusses
extensively a portion of the literature involving minimax and minisum
problems as well as single-facility biobjective problems involving
the combination of these two objectives.
A "family tree" for network location problems is shown in
Figure 1.1. Although not exhaustive, the family tree covers most of
the problems formulated since 1964. With reference to the family tree
shown in Figure 1.1, network location problems can be broadly classi
fied into two groups: pointlocation problems and pathlocation
problems. Pathlocation problems have been recently introduced by
Figure ].1. Family Tree for Network Location Problems
Slater [102]. A large portion of the literature deals with point
location problems. Pointlocation problems may be classified into
three categories: single objective problems, multiobjective problems,
and a body of results of a general and unifying nature.
In the remainder of this section we give a detailed discussion
of the problems outlined in the family tree.
PointLocation Problems
Here, we consider a number of problems that involve locating new
facilities at points on a network. The general format of the
discussion is as follows: For each problem type, we first define a
kernel problem. Then, we discuss the related literature on the kernel
problem, as well as several special cases and extensions of it. We
point out relations between different problem types, whenever such
relations exist.
The p-center problem
Let N be a network with a vertex set V = {v1,...,vn} and an edge
set E. Denote by X a finite set of points, each of which is in N.
Let I be the set of integers 1 through n. For each vertex vi, i ∈ I,
define the distance D(vi,X) between vertex vi and the point set X by
D(vi,X) = min[d(vi,x): x ∈ X]. With this definition, D(vi,X) is
specified by a nearest point in X to vi. Let wi and ai be two given numbers
associated with vertex vi, i ∈ I. We call wi a weight and ai an addend.
We assume that each wi is nonnegative and at least one wi is positive.
For any finite point set X ⊆ N, define the function f(X) by

    f(X) = max[wi D(vi,X) + ai : i ∈ I].

The problem of interest is the following: Given a positive integer p,
find a point set X* = {x1*,...,xp*} and a real number rp
such that

    rp = f(X*) = min[f(X): |X| = p, X ⊆ N]                    (1.3.1)

where the symbol |·| means the cardinality of a set.
The problem defined by (1.3.1) is called the p-center problem.
Any set X* of p points that solves (1.3.1) is called an absolute
p-center of N, and the minimum objective value rp is called the p-radius.
For p = 1, an absolute 1-center is simply called an absolute center
of N.
If in (1.3.1), each x ∈ X is restricted to a vertex location, the
resulting problem is called the vertex restricted p-center problem and
any set X* ⊆ V of p points that solves it is called a vertex restricted
p-center of N. A vertex restricted 1-center is simply called a vertex
center.
We note that the p-center problem is usually formulated in the
absence of addends. In what follows, we will assume all addends are
zero, unless we explicitly mention them. The case with all wi equal
to unity will be referred to as the unweighted case.
With this terminology, the p-center problem is the problem of
finding p points on a network so that the maximum (weighted) distance
between any demand point and its nearest center is as small as possible.
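For a concrete instance of (1.3.1), the following Python sketch evaluates f(X) and finds a vertex restricted 2-center of a small path network by enumeration (the network and data are hypothetical; the absolute problem also allows points interior to edges, which enumeration over vertices ignores):

```python
from itertools import combinations

def f(X, vertices, dist, w, a):
    """f(X) = max over vertices v_i of w_i * D(v_i, X) + a_i, where
    D(v_i, X) is the distance from v_i to a nearest point in X."""
    return max(w[v] * min(dist[v][x] for x in X) + a[v] for v in vertices)

# Distance matrix of a small path network v1 - v2 - v3 - v4 (unit edges).
vertices = ["v1", "v2", "v3", "v4"]
dist = {u: {v: abs(i - j) for j, v in enumerate(vertices)}
        for i, u in enumerate(vertices)}
w = {v: 1.0 for v in vertices}   # unweighted case
a = {v: 0.0 for v in vertices}   # no addends

# Vertex restricted 2-center by enumerating all 2-subsets of V.
best = min(combinations(vertices, 2),
           key=lambda X: f(X, vertices, dist, w, a))
r2 = f(best, vertices, dist, w, a)
print(best, r2)   # one optimal pair and the 2-radius (here 1.0)
```

Enumeration is exponential in p and serves only to make the definition concrete; the algorithms surveyed below avoid it.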
The problem is perhaps most applicable to the location of emergency
facilities such as fire stations, ambulance centers, and the like, as
in such problems a common objective is to provide "good" service to
each demand point by at least one facility within a least possible
distance.
In what follows, we first discuss the 1-center problem on general
networks and on tree networks. Then, we discuss the vertex restricted
1-center problem. Finally, we will discuss the p-center problem in
relation to a "covering" problem to be defined later.
1-Center problem on a general network. The absolute 1-center
problem was defined and solved by Hakimi [47] in 1964. For finding the
absolute center, Hakimi examines the function f on each edge, finds a
best local minimum on that edge, and selects the best among |E| such
local minima. This method takes advantage of one important property
of f, namely, that it is piecewise linear and continuous on each edge
with at most n(n - 1)/2 break points. A local minimum always occurs
either at a break point of f or at an end point of the edge. Hakimi,
Schmeichel, and Pierce [50] showed that Hakimi's method can be
implemented in O(|E|n²log n) computational effort and gave a computational
refinement which reduces the effort to O(|E|n log n) for the unweighted
case. Further refinements of the procedure were obtained by Kariv
and Hakimi [65], resulting in an O(|E|n log n) algorithm for the
weighted case and an O(|E|n) algorithm for the unweighted case. All
these refinements focus on finding the break points and the local
minimum of f in the most efficient manner.
A somewhat more general version of the 1-center problem was
considered by Frank [36], and (apparently) independently by Minieka [88],
as Minieka makes no reference to Frank's paper. In this modified
version, called here the continuous 1-center problem, each point on
the network is a demand point (as opposed to only the vertices). The
weight of each point is unity. The objective to be minimized over all
x ∈ N is defined by f(x) = max[d(y,x): y ∈ N]. Both authors showed that
the problem can be reduced to a computationally finite one and
proposed a solution procedure which is very similar to Hakimi's.
A probabilistic version of the 1-center problem was considered
by Frank [34, 35] and a number of bounds were obtained on the expected
value of the 1-radius.
For the unweighted case, Singer [101] proved that there exists a
"critical" path, not necessarily a shortest path, connecting two
critical vertices such that an absolute center of the network is at the
midpoint of this path.
1-Center problem on a tree network. We now concentrate on
absolute centers of tree networks. Goldman [44] solved the unweighted
case in the presence of addends. Goldman's algorithm is based on the
repeated application of a "trichotomy theorem" that either determines
the edge on which the absolute center lies, or reduces the search to
one of the subtrees obtained by removing all interior points of that
edge. Halfin [51] refined Goldman's algorithm to make it simpler and
computationally more efficient. Halfin's algorithm finds a vertex
center first, and determines the absolute center by examining all
vertices adjacent to the vertex center.
For the unweighted case with no addends, Handler [55] presents
an especially elegant algorithm. Handler's method finds a longest
path of the tree and locates the absolute center at the midpoint of
the path. To find a longest path, Handler chooses an arbitrary vertex
vi, finds a farthest vertex vs from vi, and then finds a farthest
vertex vt from vs. The path P(vs,vt) is a longest path and its
midpoint is the unique absolute center of the tree. This procedure
requires a computational effort of O(n). Handler's algorithm is
extended by Lin [81] to the unweighted case with addends. Lin showed
that the absolute center of a general network N with vertex addends
can be found by determining the absolute center of an expanded
network N' whose vertex addends are all zero. Network N' is obtained from
N by adding a new vertex adjacent to each old vertex, with the length
of the edge connecting the two equal to the addend associated with
the old vertex. For a tree network T, the resulting network is a
tree T' and Goldman's O(n) algorithm can be applied to T'.
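Handler's two-sweep method for the unweighted case with no addends can be sketched as follows in Python (the tree below is hypothetical):

```python
from collections import defaultdict

def farthest(adj, root):
    """Return a vertex farthest from root, its distance, and the
    search's parent map (one sweep of the tree)."""
    parent, dist, stack = {root: None}, {root: 0.0}, [root]
    while stack:
        u = stack.pop()
        for v, length in adj[u]:
            if v not in dist:
                parent[v] = u
                dist[v] = dist[u] + length
                stack.append(v)
    far = max(dist, key=dist.get)
    return far, dist[far], parent

def handler_center(adj, start):
    """Two sweeps: vs is farthest from an arbitrary start, vt is
    farthest from vs.  P(vs, vt) is a longest path and the absolute
    center is its midpoint, at distance r1 = length/2 along it."""
    vs, _, _ = farthest(adj, start)
    vt, length, parent = farthest(adj, vs)
    path = [vt]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1], length / 2.0   # path from vs to vt, and the 1-radius

adj = defaultdict(list)
for u, v, length in [("v1", "v2", 3.0), ("v2", "v3", 2.0),
                     ("v2", "v4", 4.0), ("v4", "v5", 1.0)]:
    adj[u].append((v, length))
    adj[v].append((u, length))

path, r1 = handler_center(adj, "v1")
print(path, r1)   # longest path v5-v4-v2-v1 of length 8, so r1 = 4.0
```

The sketch returns the longest path and the 1-radius; locating the midpoint itself is a matter of walking a distance r1 along the returned path, possibly stopping in the interior of an edge.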
The more general case with both weights and addends was considered
by Dearing and Francis [19], and for the case of a tree network an
O(n²) algorithm was given. The Dearing-Francis paper appears to be
the first to construct a well defined metric space N with distance
d(·,·) from an arc weighted graph N. This mathematical formality
permits the use of such concepts as compactness, continuity, and the
extreme and intermediate value theorems. They showed that the distance
d(x,·) is continuous for each fixed x, in turn implying that f(x) is
continuous for every x. From compactness and continuity
considerations, they proved the existence of an absolute center for all compact
networks, and its uniqueness for all compact tree networks. They
obtained a lower bound on r1 which is applicable to all networks, and
proved that it is always attainable for tree networks. Once the lower
bound is determined, it identifies two "critical" vertices, and the
absolute center can be readily located on the path joining the two.
The bound is the maximum of n(n - 1)/2 terms, resulting in a
computational complexity of O(n²), and is given by

    α = max[αij : 1 ≤ i < j ≤ n]

where                                                          (1.3.2)

    αij = [wi wj d(vi,vj) + wj ai + wi aj] / (wi + wj),   1 ≤ i < j ≤ n.
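A direct evaluation of the bound (1.3.2) over all n(n - 1)/2 vertex pairs might look as follows in Python (the weights, addends, and distance matrix are hypothetical; the numerator follows from equating wi d(vi,x) + ai = wj d(vj,x) + aj for a point x on the path joining the two vertices):

```python
from itertools import combinations

def alpha_bound(vertices, dist, w, a):
    """Lower bound on the 1-radius: the maximum over all vertex pairs of
    alpha_ij = (w_i w_j d(v_i,v_j) + w_j a_i + w_i a_j) / (w_i + w_j)."""
    return max((w[i]*w[j]*dist[i][j] + w[j]*a[i] + w[i]*a[j]) / (w[i] + w[j])
               for i, j in combinations(vertices, 2))

# Path network v1 - v2 - v3 with unit edges; weights and addends chosen freely.
vertices = ["v1", "v2", "v3"]
dist = {u: {v: abs(i - j) for j, v in enumerate(vertices)}
        for i, u in enumerate(vertices)}
w = {"v1": 2.0, "v2": 1.0, "v3": 1.0}
a = {"v1": 0.0, "v2": 0.0, "v3": 1.0}

print(alpha_bound(vertices, dist, w, a))   # -> 2.0
```

For this instance the bound is 2.0 and is attained at x = v2, where f(v2) = max[2·d(v1,v2), 0, d(v3,v2) + 1] = 2, consistent with the attainability result for tree networks.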
Hakimi, Schmeichel, and Pierce [50] proved a theorem that reduces the
computational effort for computing this lower bound. Their theorem
states that if for some αst it is true that max[αsj : 1 ≤ j ≤ n] = αst =
max[αit : 1 ≤ i ≤ n], then αst is the maximum of all αij. A different
solution procedure is also given by Kariv and Hakimi [65] for the
same problem. Rather than computing the lower bound, their procedure
confines the search to successively smaller subtrees until an edge is
obtained. The absolute center is located at the local center (also
the global center for a tree) on this edge using Hakimi's procedure
for finding a local minimum. This algorithm is of O(n log n).
A nonlinear version of the 1-center problem was considered and
solved by Dearing [18], and by Francis [29]. In this version, each
weight wi is replaced by a monotone increasing function fi of the
distance d(vi,x). Both authors obtained a lower bound similar to the
one defined by (1.3.2). The bound is applicable to all networks and
is always attainable for tree networks.
A "roundtrip" version of the problem was solved by Chan and
Francis [11]. In this version each "demand point" is a pair of ver
tices (v.,u.) and f(x) is the maximum of the roundtrip distances
defined by p.(x) W w.[d(v.,x) + d(x,u.) + a.]. A lower bound, similar
1 1 1 1 1
to the one defined by (1.3.2) is obtained. The bound is again
applicable to all networks and always attainable for tree networks.
Vertex constrained 1-center problem. The vertex constrained
1-center problem was considered as early as 1869, and perhaps earlier,
by Jordan [63] as a graph theoretic problem. This problem can be
solved by examining the distance matrix of the network, as demonstrated
by Hakimi [47]. Rosenthal, Pino, and Coulter [98] introduced a
generalized algorithm that solves a number of "eccentricity" problems on
tree networks, one of which is the vertex restricted 1-center problem.
In this case, the eccentricity of a vertex is defined to be the
distance from that vertex to a farthest vertex. This generalized
algorithm determines the eccentricity of each vertex by making only
two traversals of the vertices. The vertex center is that vertex
with the minimum eccentricity. Slater [103] considered the problem
of finding the vertex center of a network with respect to subnetworks.
In this version of the problem, each demand is a known collection of
vertices (or a subnetwork induced by the collection). The distance
between a vertex and any such collection is defined by a nearest
element of the collection to that vertex. For a given vertex, the
value of the objective function at that vertex is the maximum of the
distances between that vertex and any such collection. Slater showed
that a matrix D' can be constructed from the distance matrix D of the
network, so that each entry of D' is a distance from a vertex to a
nearest element of a collection. Slater demonstrated that the vertex
center with respect to collections of vertices can be found by
examining the matrix D'.
This completes the discussion of the 1-center problem. We now
concentrate on the p-center problem for p ≥ 2.
p-Center problem on a general network. The p-center problem was
defined by Hakimi [48]. Subsequently, a number of solution procedures
have been suggested. A common characteristic of all these procedures
is that they all rely on solving a sequence of covering problems.
For completeness, we first define a set covering problem and an
r-cover problem.
Let A be a matrix of zeros and ones, and y a vector of zero-one
variables yi. The problem of minimizing Σi yi so that each row of Ay
is greater than or equal to one is called the (minimal) set covering
problem. Given the function f(X) = max{wi D(vi,X): 1 ≤ i ≤ n}, the
problem of minimizing |X| so that f(X) ≤ r for some given value of r
is called the r-cover problem.
Denoting by q(r) the minimum value of the r-cover problem, it
can be readily shown that, if q(r) = p for some r, and q(r') > p for
any r' < r, then r is the p-radius and any X which solves the r-cover
problem is an absolute p-center.
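The relation between q(r) and the p-radius can be checked by brute force on a small vertex restricted instance (a Python sketch; the instance is hypothetical and plain enumeration stands in for any serious covering algorithm):

```python
from itertools import combinations

def q(r, vertices, dist, w):
    """Minimum number of (vertex) centers X with
    f(X) = max_i w_i * D(v_i, X) <= r  (the r-cover problem),
    found by enumerating subsets in order of increasing size."""
    for size in range(1, len(vertices) + 1):
        for X in combinations(vertices, size):
            if all(min(w[v] * dist[v][x] for x in X) <= r for v in vertices):
                return size
    return len(vertices)

vertices = ["v1", "v2", "v3", "v4"]
dist = {u: {v: abs(i - j) for j, v in enumerate(vertices)}
        for i, u in enumerate(vertices)}
w = {v: 1.0 for v in vertices}

# Candidate radii: the finitely many values w_i * d(v_i, x) can take here.
candidates = sorted({w[u] * dist[u][v] for u in vertices for v in vertices})
p = 2
# The p-radius is the smallest candidate r with q(r) <= p.
r_p = min(r for r in candidates if q(r, vertices, dist, w) <= p)
print(r_p)
```

For this path network q(0) = 4 > p while q(1) = 2 = p, so the (vertex restricted) 2-radius is 1.0, matching the characterization above.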
In what follows, we concentrate on the absolute p-center problem
on a general network.
Minieka [87] considered the unweighted case on a general network
and showed that the problem can be reduced to a computationally finite
one. Minieka identifies a finite point set P' such that there exists
an absolute p-center contained in P = P' ∪ V. A point x on some edge
is a member of P' if and only if x is the unique point on its edge
such that d(vi,x) = d(x,vj) for some two distinct vertices vi and vj.
Based on this result, Minieka suggested a rudimentary algorithm that
relies on solving a finite sequence of set covering problems. Using
the framework provided by Minieka, an exact algorithm was developed
by Garfinkel, Neebe, and Rao [38] for the unweighted case. The
algorithm uses the property that the p-radius is determined by one of
a finite number of elements, namely, one of the distances between any
vertex and any point in P. Call the points in P edge bottleneck
points and let dij be the distance between vertex vi and the jth
edge bottleneck point. Let Z and Z̄ be a lower and upper bound on the
value of rp. Initially Z = 0, and Z̄ is obtained by a trial solution.
Among all the distances dij that fall within the interval [Z,Z̄], one
of them will determine the value of rp. Pick one such distance, say
dst, with Z < dst < Z̄, and let r = dst be a specified radius. Now,
we want to know if we can cover all vertices of N within this critical
distance r by using only p points. If we cannot, then clearly r is
too small a radius for p points to cover all vertices. Hence we
conclude the p-radius rp must be within the interval [r,Z̄]. In this
case, the lower bound is shifted to r, and the procedure is repeated.
In the other case, we find a set X of p points that cover all vertices
within r, but it is doubtful if this point set is an absolute p-center.
Clearly, then, the value of rp will be within the interval [Z,f(X)].
Hence, the upper bound is shifted to f(X) for this case and the whole
procedure is repeated. Termination occurs whenever the lower and
upper bounds become equal.
The r-cover part of this procedure is solved by obtaining a
feasible solution, if it exists, to a set covering problem. Let A be
a |V| by |P| matrix with entries aij equal to one if vertex vi is
within a distance r of the jth edge bottleneck point, and zero
otherwise. Then, solving the system Σi yi ≤ p, Ay ≥ 1, yi ∈ {0,1}
will determine whether or not at most p points (in P) can cover all
vertices of N within a radius r. Computational experience is reported
and it is found that the procedure works better for larger values of
p, as in this case the initial upper bound Z̄ is small, and significant
computational savings result in identifying those edge bottleneck
points whose distances fall within the interval [0,Z̄].
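The bound-shifting logic of the procedure can be sketched as follows in Python, with the set covering subproblem replaced by plain enumeration and vertex-to-vertex distances standing in for the edge bottleneck distances dij (both are simplifications made only to keep the sketch self-contained):

```python
from itertools import combinations

# A toy instance: path network v1 - ... - v5 with unit edges, unweighted.
V = list(range(5))
d = lambda i, j: abs(i - j)

def cover_within(r, p):
    """Return a set X of at most p vertices covering every vertex within
    r, or None if no such set exists (the r-cover feasibility test,
    solved here by enumeration instead of a set covering code)."""
    for size in range(1, p + 1):
        for X in combinations(V, size):
            if all(min(d(v, x) for x in X) <= r for v in V):
                return X
    return None

def f(X):
    return max(min(d(v, x) for x in X) for v in V)

p = 2
candidates = sorted({d(i, j) for i in V for j in V})
lo, hi = 0, f(cover_within(max(candidates), p))   # trial solution gives hi
while True:
    inside = [r for r in candidates if lo < r < hi]
    if not inside:
        break
    r = inside[len(inside) // 2]   # pick a candidate radius d_st in (lo, hi)
    X = cover_within(r, p)
    if X is None:
        lo = r        # p points cannot cover within r, so r_p is in [r, hi]
    else:
        hi = f(X)     # a cover with value f(X) exists, so r_p is in [lo, f(X)]
print(hi)             # the p-radius once the bounds meet (here 1)
```

Each trial either raises the lower bound or lowers the upper bound to the value of a feasible cover, so the candidate interval strictly shrinks and the loop terminates with hi equal to the p-radius.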
The weighted case on general networks was considered by Christofides
and Viola [15], and an approximate solution procedure was given. The
procedure finds a set X of p points whose objective value f(X) is
within some ε-neighborhood of the actual p-radius r_p. The procedure
obtains X by solving a sequence of r-cover problems with successively
increasing values of r. Termination occurs the first time the solution
of an r-cover problem generates p (or fewer) points. In the
process, one also obtains approximate solutions for the (n−1)-, (n−2)-,
..., (p+1)-center problems. The solution of each r-cover problem is obtained in
two stages. First, all feasible solutions to the r-cover problem are
obtained by finding all regions on the network that can be reached by
a vertex within a radius of r. Then, among all these feasible solutions,
those with minimum cardinality are found by solving a set
covering problem. To find all regions on N reachable by a vertex v_i,
one "penetrates" a distance of r/w_i along all possible paths originating
at v_i. The procedure is repeated for each vertex and the intersections
of these penetrations are found. Each maximal intersection defines a
connected region all of whose points are reachable by a subset of
vertices within a radius r; the subset of vertices is that which
defines the intersection. These regions jointly cover all vertices
of N, and it is possible that a subcollection of the collection of all
these regions may also jointly cover all vertices. Hence, to find a
minimum cardinality feasible solution, one needs to choose the minimum
number of regions that jointly cover V. This choice can be made by
defining a zero-one matrix A, so that an entry a_ij of A is one if
vertex v_i is covered by region j, and zero otherwise. Solving the
set covering problem with matrix A provides a solution to the
r-cover problem. Computational experience is reported, and it is found
that the procedure works better for small values of p, as the set
covering part of the procedure takes a significant portion of the
total computational time.
An important result is due to Kariv and Hakimi [65]. They showed
that the p-center problem on a general network is NP-complete. Kariv
and Hakimi also showed that the weighted case (as well as the unweighted
case) can be reduced to a computationally finite one. Based
on this finiteness property, they gave an algorithm whose order of
complexity is polynomial in |E|, but exponential in p. To show
computational finiteness one argues as follows. For any absolute p-center
X = {x_1,...,x_p}, there will be a subset V_i of vertices covered by the
ith center x_i. If N_i is the (sub)network induced by V_i, then it can
be shown that the absolute center x_i* of N_i can replace x_i without
increasing the value of the objective function, so that X* = {x_1*,...,x_p*}
is also an absolute p-center. Hence, one can restrict one's attention
to absolute p-centers every element of which is the absolute 1-center
of some subnetwork. The absolute 1-center of any subnetwork of N
will occur either at a vertex or at one of at most |E|n(n−1)/2
"suspected" points. A suspected point on an edge is a point x such
that, for some two distinct vertices v_i and v_j, x is a breakpoint on
its edge of the function f_ij(·) = max[w_i d(v_i,·), w_j d(v_j,·)], and
the two linear pieces defining that breakpoint have slopes of
opposite signs. There can be at most n(n−1)/2 suspected points on
each edge, resulting in a total of O(|E|n²) suspected points on all
edges. If S is the set of all suspected points together with the set
of all vertices, then there is an absolute p-center contained in S.
The Kariv-Hakimi procedure selects p−1 points from S and determines
all the vertices covered jointly by these p−1 points. All uncovered
vertices are assigned to the pth center. Corresponding to each center,
the 1-radius is determined (with respect to the subset of vertices
covered by that point), and the maximum of these 1-radii determines
the p-radius for that trial solution. The algorithm tries every
possible combination of p−1 points selected from S and chooses the
combination which minimizes the p-radius. The Kariv-Hakimi procedure
is the only exact algorithm available so far for finding an absolute
p-center of a vertex weighted general network.
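To make the breakpoint definition concrete, the sketch below computes the crossing of the two linear pieces in the simplest configuration: the weighted distance to v_i grows linearly from one end of the edge while the weighted distance to v_j shrinks linearly toward the other. (In general each distance function along an edge is itself piecewise linear, so this is only an illustrative special case; the function name and parameterization are ours.)

```python
def suspected_point(w_i, d_i, w_j, d_j, L):
    """Breakpoint of max[w_i*(d_i + t), w_j*(d_j + (L - t))] on an
    edge of length L, parameterized by t in [0, L].

    Simplifying assumption: the weighted distance to v_i increases
    linearly from the left endpoint (slope +w_i) and the weighted
    distance to v_j decreases linearly toward the right endpoint
    (slope -w_j), so the slopes have opposite signs.  Returns the
    crossing point t, or None if it falls outside (0, L).
    """
    # Solve w_i*(d_i + t) = w_j*(d_j + L - t) for t.
    t = (w_j * (d_j + L) - w_i * d_i) / (w_i + w_j)
    return t if 0.0 < t < L else None
```

With equal weights and d_i = d_j = 0 the breakpoint is, as expected, the midpoint of the edge.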
A further result on the computational difficulty of the p-center
problem on a general network is given by Nemhauser and Sheu [92].
They showed that finding an approximate solution to the vertex restricted
or absolute p-center problem whose value is within 100% or 50%, respectively,
of the optimal value is NP-hard (i.e., at least as hard as
any NP-complete problem).
Vertex restricted p-center problem. The vertex restricted p-center
problem is considered by Toregas, Swain, ReVelle, and Bergman
[109]. A solution procedure is given which relies on solving a sequence
of minimal set covering problems, each corresponding to a specified
radius r. Given a radius r, a 0-1 matrix A can be formed with n rows
and n columns, so that an entry a_ij is 1 if vertex v_i is within a
distance r of v_j, and 0 otherwise. If one solves a set covering
problem using the matrix A, the variables whose values are 1 in an
optimal solution determine a feasible solution to the vertex restricted
r-cover problem. The set covering problem is solved by relaxing the
integrality constraints. In the case of noninteger termination, a
single cut produced an integer solution in a large proportion of the
cases. Their computational experience indicates that noninteger
termination seldom occurs.
p-Center problem on tree networks and duality. In what follows,
we concentrate on the p-center problem on tree networks. First, we
define the "continuous" p-center problem. In the continuous p-center
problem, each point in T is a demand point, as opposed to only the
vertices. Weights are absent (or unity). For any X ⊂ T, f is defined by
f(X) = max{D(y,X): y ∈ T}, and the continuous p-center problem is to
find an X* ⊂ T such that

    r_p = f(X*) = min[f(X): |X| = p, X ⊂ T] .

Minieka [88] considered the continuous p-center problem on a
general network and showed that it can be reduced to a computationally
finite one.
Shier [100] considered the continuous p-center problem on a tree
network and defined a dual "dispersion" problem. The dispersion
problem is to find p+1 points on T the nearest two of which are as
far apart as possible. More explicitly, let U be any finite point
set with |U| = p+1 and define h(U) by

    h(U) = min{d(u_i,u_j): 1 ≤ i < j ≤ p+1} .

The dispersion problem is to find a U* ⊂ T such that

    h(U*) = max{h(U): U ⊂ T, |U| = p+1} .

At optimality, Shier's duality result states that

    r_p = (1/2) h(U*)

for a tree network. The equality may not hold for general networks.
However, Shier showed that the objective value of the continuous p-center
problem is always bounded below by one-half the objective value
of the dispersion problem for any network.
Chandrasekaran and Tamir [14] observed that Shier's duality result
holds when one replaces T by any subset S of T. Chandrasekaran and
Daughety [12] described a procedure for solving the dispersion problem.
They first solve the related problem of locating the maximum number
of points on T such that any two of them are at least λ distance
apart, for a fixed (positive) λ. This problem is solved by working
from the "tips" of T to the "center" of T. The general scheme is to apply
the algorithm for different values of λ until the number of points
found is p+1 and a slightly larger λ generates p or fewer points.

A number of solution procedures have been given for the p-center
problem on tree networks. We now discuss these procedures.
Handler [57] considered the continuous p-center problem on a
tree network for the special case of p = 2 and obtained an O(n)
algorithm. Handler first finds the absolute 1-center of T, say x*,
and splits the tree at x*, obtaining two disjoint subtrees T_1 and T_2.
Finding the absolute 1-center of each T_i, say x_1* and x_2*, determines
an absolute 2-center of T.
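The 1-center step underlying Handler's algorithm can be sketched as follows. For an unweighted tree the absolute 1-center is the midpoint of a longest path, so the 1-radius is half the diameter, obtainable by the classical double sweep; the code below (our illustration, not Handler's own implementation) computes that radius.

```python
def tree_one_center_radius(n, edges):
    """Absolute 1-radius of an unweighted tree with positive edge
    lengths, via the double sweep: the farthest vertex u from an
    arbitrary start is one end of a longest path, the farthest
    distance from u is the diameter, and the absolute 1-center is
    the midpoint of that path.  edges: (a, b, length) triples with
    vertices 0..n-1.
    """
    adj = [[] for _ in range(n)]
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))

    def farthest(src):
        # distances by DFS (the graph is a tree, so no cycles)
        dist = [None] * n
        dist[src] = 0.0
        stack = [src]
        while stack:
            v = stack.pop()
            for u, w in adj[v]:
                if dist[u] is None:
                    dist[u] = dist[v] + w
                    stack.append(u)
        far = max(range(n), key=lambda v: dist[v])
        return far, dist[far]

    u, _ = farthest(0)
    _, diameter = farthest(u)
    return diameter / 2.0
```

Handler's 2-center algorithm then splits T at the center and applies the same computation once to each resulting subtree.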
An algorithm of complexity O(n² log n) is described by Kariv and
Hakimi [65] for finding the absolute p-center of a vertex weighted
tree network. They show that there are n(n−1)/2 possible values
for r_p, namely, the numbers a_ij = w_i w_j d(v_i,v_j)/(w_i + w_j) for each
combination of vertices v_i, v_j. The algorithm computes all these
numbers, arranges them in increasing order, and performs a binary
search on this list of numbers. The search relies on solving an
r-cover problem for each value of r chosen from the ordered list {a_ij}.
The search terminates when the smallest r in the list is found for
which the r-cover problem generates at most p points. The covering
part of the algorithm requires a computational effort of O(n) for each
r, and a total effort of O(n log n) for all values of r tried during the
binary search. Hence, the computational effort is determined by the
initial computation and ordering of the numbers a_ij and is of
O(n² log n).
A similar approach is used by Chandrasekaran and Daughety [12]
to solve the continuous p-center problem on a tree network. First,
they provided an O(n) procedure for finding the minimum number of
points needed to cover every point of T within a given radius r.
Then, they provided a method to compute r_p. A further refinement of
the method is given by Chandrasekaran and Tamir in [14]. They proved
that r_p is determined by one of the numbers d(t,t')/2k, where t and
t' are any two tip vertices and k is any integer between 1 and p. The
total computational effort for finding r_p and applying the covering
algorithm is of O((n log p)²).
A somewhat different approach, which relies on finding a clique
cover of a related graph, is given by Chandrasekaran and Tamir [13]. They
define an intersection graph G_r for a fixed value of r as follows: G_r
has nodes corresponding to the demand points v_1,...,v_n. Two nodes of G_r
are connected by an arc if the corresponding demand points can be
jointly covered by a (single) common center within a radius of r.
Once G_r is formed, finding a "clique cover" of G_r solves the r-cover
problem. A clique cover of G_r is a minimum number of cliques in G_r
such that every node is in at least one clique. The solution to the
clique cover problem in G_r determines a solution to the r-cover problem.
The procedure is repeated for different values of r until a smallest
value of r is found for which the clique cover solution generates at
most p cliques. The computational complexity of the procedure is
polynomial. In particular, the computational effort for finding the
minimal clique cover of G_r is polynomial because G_r satisfies the
property that any circuit in G_r with at least four arcs contains a
chord (i.e., an arc which connects two nodes of the circuit and is
not an element of the circuit). For chordal graphs, algorithms of
linear order have been developed (see [39], [97]) for finding a
minimal clique cover.
This completes the discussion of the p-center problem.

The p-median problem

The difference between the p-center and the p-median problem is
that the objective criterion is changed from minimax to minisum. More
specifically, define the function f(X) for any finite point set X ⊂ N
by

    f(X) = Σ[w_i D(v_i,X): v_i ∈ V] .

The p-median problem is the following: Given a positive integer p,
find a set X* of p points such that

    f(X*) = min[f(X): |X| = p, X ⊂ N] .

Any set X* of p points that minimizes f is called an absolute p-median
of N. If each member of X is restricted to a vertex location,
the resulting problem is called a vertex restricted p-median problem.
Due to a result by Hakimi [47, 48], there exists an absolute p-median
entirely on the vertices of N. For this reason, the distinction between
the vertex restricted and unrestricted versions is insignificant.
Hence, we will take the term "p-median" to mean a solution to either
version of the problem. A 1-median is simply called a median.
The p-median problem arises naturally in locating plants/warehouses
to serve other plants/warehouses or market areas. The problem
is also motivated by ReVelle, Marks, and Liebman [96] as an example of
a public sector location model where vertices represent population
centers and facilities represent post offices, schools, public buildings,
and the like.
The 1-median problem. Hakimi [47] appears to be the first to
define an absolute median. Hakimi proved the important result that
there exists an absolute median at a vertex of the network. This
result reduces the search to a finite number of points. The median
can be found by summing each row of the weighted-distance matrix and
choosing the vertex whose row sum is minimum. This procedure takes
O(n³) operations to compute the distance matrix, followed by O(n²)
operations to find the median.
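This row-sum computation can be sketched directly; the code below (our illustration) builds the distance matrix by Floyd-Warshall in O(n³) and then scans the weighted column sums in O(n²).

```python
def network_median(n, edges, weights):
    """1-median of a network via Hakimi's vertex result: compute the
    all-pairs distance matrix (Floyd-Warshall, O(n^3)), then pick the
    vertex minimizing the weighted sum of distances (O(n^2)).

    edges: (a, b, length) triples; weights: w_1,...,w_n.
    """
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b, w in edges:
        d[a][b] = min(d[a][b], w)
        d[b][a] = min(d[b][a], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # cost[j] is the objective value of placing the median at v_j
    cost = [sum(weights[i] * d[i][j] for i in range(n)) for j in range(n)]
    return min(range(n), key=cost.__getitem__)
```

On a unit-length path of five equally weighted vertices the median is the middle vertex; putting a large weight on an end vertex pulls the median to that end.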
For tree networks, more efficient algorithms can be devised to
find a median. An O(n) algorithm was given by Hua Lo-Keng and others
[60] and independently by Goldman [42]. The algorithm reduces the
search to successively smaller subtrees until a median is found. At
each stage, one chooses an arbitrary tip vertex (a vertex of degree
one) of the current tree. If the (modified) weight of the selected
vertex is at least as large as half the sum of all weights, a median
is found. Otherwise, that tip vertex is eliminated from further
consideration together with the edge incident to it, and its weight is
added to the weight of the adjacent vertex. The procedure is repeated
with the new (reduced) tree. The algorithm does not require the
computation of the distance matrix and uses only the incidence
relationships and the weights.
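The tip-elimination scheme can be sketched as follows; this is our rendering of the procedure just described, with a queue of current tips standing in for the "arbitrary tip" choice.

```python
from collections import deque

def tree_median(n, edges, weights):
    """O(n) tip-elimination median of a tree, in the spirit of the
    Hua/Goldman scheme described in the text.

    Repeatedly take a tip (degree-one) vertex: if its accumulated
    weight is at least half the total, it is a median; otherwise
    delete it and fold its weight into its neighbor.
    edges: (a, b) pairs -- distances are never needed.
    """
    w = list(weights)
    half = sum(w) / 2.0
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    tips = deque(v for v in range(n) if len(adj[v]) <= 1)
    alive = n
    while True:
        v = tips.popleft()
        if w[v] >= half or alive == 1:
            return v                      # majority weight: v is a median
        u = next(iter(adj[v]))            # the unique neighbor of the tip
        w[u] += w[v]                      # fold the tip's weight inward
        adj[u].discard(v)
        adj[v].clear()
        alive -= 1
        if len(adj[u]) == 1:              # the neighbor became a tip
            tips.append(u)
```

On a unit-weight path of five vertices the procedure peels both ends until the middle vertex accumulates a majority of the weight.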
Goldman's algorithm is based on a "localization theorem" proved
by Goldman and Witzgall [46]. The theorem provides sufficient conditions
for a subset of N to contain a median. Given a compact subset
S of N, if S satisfies the two conditions (i) and (ii) below, then it contains
at least one median. The conditions are: (i) the set S must be a
"majority" set, meaning that the sum of the weights corresponding to
vertices in S must be at least as large as half the sum of all weights;
(ii) the set S must be "gated" in the sense that there must exist a
unique point g in S such that for every s ∈ S and t ∈ N−S, it is true
that d(t,s) = d(t,g) + d(g,s). Goldman's algorithm in essence is a
repeated application of this theorem to a tree network. Goldman [43]
also proposed an "approximate" localization theorem which somewhat
relaxes the second condition and guarantees the existence of a point
in S that approximates an actual median.
A median of a tree is shown to be the same as a "centroid" of
the tree by Zelinka [120] for the unweighted case and by Kariv and
Hakimi [65] for the weighted case. To define a centroid, consider
the subtrees T_1,...,T_{k_i} obtained by removing vertex v_i from T. Let
w(T_j) be the sum of the weights of the vertices in T_j, and define
W(v_i) to be the maximum of w(T_j) for 1 ≤ j ≤ k_i. A vertex which
minimizes W(v_i) over all v_i in V is said to be a centroid of T. The
location of a centroid is independent of the distances and can be
found by using only the incidence relations. Goldman's earlier
algorithm in essence finds a centroid of T. The generalized algorithm
of Rosenthal, Pino, and Coulter [98] also finds a centroid of T by
making only two traversals of the vertices. All these algorithms are
of O(n) and solve the 1-median problem without having to compute the
distance matrix.
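A two-traversal centroid computation can be sketched as follows (our illustration, not the Rosenthal-Pino-Coulter algorithm itself): one pass accumulates subtree weights, after which W(v) is the larger of the heaviest child subtree and the component containing v's parent.

```python
def tree_centroid(n, edges, weights):
    """Weighted centroid of a tree: the vertex v minimizing
    W(v) = max component weight of T - v.  Two traversals suffice:
    a post-order pass accumulates subtree weights, then W(v) is read
    off each vertex.  edges: (a, b) pairs; no distances are used.
    """
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    total = sum(weights)
    # first traversal: preorder rooted at 0 (reversed gives post-order)
    parent = [None] * n
    order = [0]
    for v in order:                       # the list grows as we append
        for u in adj[v]:
            if u != 0 and parent[u] is None:
                parent[u] = v
                order.append(u)

    sub = list(weights)
    for v in reversed(order[1:]):         # post-order subtree weights
        sub[parent[v]] += sub[v]

    # second traversal: evaluate W(v) for every vertex
    def W(v):
        comps = [sub[u] for u in adj[v] if parent[u] == v]
        if parent[v] is not None:
            comps.append(total - sub[v])  # component containing the parent
        return max(comps)

    return min(range(n), key=W)
```

On a unit-weight path of five vertices the centroid is again the middle vertex, in agreement with the median/centroid equivalence stated above.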
We now consider some generalizations of the 1-median. Minieka
[88] defined the general absolute median of a network to be any point
on the network that minimizes the sum of (unweighted) distances from it
to the point on each edge that is most distant from it. Minieka showed
that the general absolute median can be strictly interior to an edge;
hence, the search cannot be confined solely to the vertices of N.
Slater [103] gave another generalization of the 1-median problem.
In this generalization, each demand is a collection of vertices. The
problem is to find a vertex such that the sum of the distances from
that vertex to a nearest element of each collection is minimum.
Slater showed that the set of vertices that solve this problem forms
a connected path in T. For a general network, the problem can be
solved by constructing a matrix that specifies the distances from each vertex
to a nearest element of each collection; one simply sums each row of this
matrix and chooses the vertex whose row sum is minimum.
Frank considered a probabilistic version of the 1-median problem
in [34], where each weight is a random variable with a known distribution.
A number of bounds are obtained on the expected value of the
objective function as well as its variance. Some of these results
are generalized by Frank [35] to the case where the weights are jointly
distributed random variables.
We now concentrate on the p-median problem with p ≥ 2.
p-Median of a network and vertex optimality. A significant
theoretical contribution is due to Hakimi [48]. Hakimi proved that
there exists an absolute p-median contained in V. Certain generalizations
of this result have been given in subsequent work.
Levy [80] proved that the (vertex-optimality) result holds when the
weights w_i are replaced by concave cost functions c_i(·) of the distance
between v_i and its nearest median.
Goldman [41] generalized the result to the case of a "two-stage"
commodity. More specifically, one distinguishes a vertex as being a
source or a destination. Let (v_s,v_d) be a source-destination pair,
and let x_i and x_j be the medians nearest to v_s and v_d, respectively.
Then the cost of transferring the commodity from source v_s to destination
v_d is the sum of three transport costs, namely, w_sd d(v_s,x_i) +
w'_sd d(x_i,x_j) + w*_sd d(x_j,v_d). In general, if X = {x_1,...,x_p} is a median
set, one does not know which median is the nearest to v_s or v_d; hence,
the cost associated with a source-destination pair (v_s,v_d) is
given by

    f_sd(X) = min[w_sd d(v_s,x_i) + w'_sd d(x_i,x_j) + w*_sd d(x_j,v_d): x_i, x_j ∈ X] ,

and the objective to be minimized is f(X) = Σ[f_sd(X): (v_s,v_d) ∈ V×V].
Goldman showed that there exists an optimal X* contained in V, and
conjectured that the result holds for any multistage problem.
Hakimi and Maheshwari [49] proved a stronger version of Goldman's
conjecture. In this version, there are multiple commodities for each
source-destination pair, and each commodity goes through multiple
stages. Furthermore, the cost of transport from one stage to the next
is a concave nondecreasing function of the distance. More specifically,
let M_sd be the set of commodities to be transferred from source v_s to
destination v_d, and let g(m) be the number of stages commodity m ∈ M_sd
is to go through. For a given location set X = {x_1,...,x_p}, denote
by y_r = x_i(r) the location where the rth stage processing takes place.
The cost of transferring commodity m from source v_s to destination v_d
is given by C_sdm[d(v_s,y_1)] + C_sdm[d(y_1,y_2)] + ... + C_sdm[d(y_g(m),v_d)],
where C_sdm(·) is a concave nondecreasing function of the distance.
Denoting this quantity by f_sdm(Y), with Y ⊂ X, |Y| = g(m), the minimum
cost of transfer for commodity m is given by f_sdm(X) = min[f_sdm(Y):
Y ⊂ X, |Y| = g(m)]. The cost of transferring all commodities from v_s
to v_d is obtained by summing over all commodities, that is,
f_sd(X) = Σ[f_sdm(X): m ∈ M_sd]. The total cost of the system is obtained
by summing the costs f_sd(·) over all source-destination pairs, that is,
f(X) = Σ[f_sd(X): (v_s,v_d) ∈ V×V]. Hakimi and Maheshwari proved that
there exists a minimizer X* of f(X) contained in V.
Wendell and Hurter [111] considered a more general form of the
problem where the transportation cost functions are permitted to
differ from edge to edge. The transport cost on any edge is a
nondecreasing concave function of the distance. They proved that it is
sufficient to consider the vertices of the network under such a cost
structure. Furthermore, they obtained the conditions under which it
is necessary for the solution to occur at the vertices. In particular,
they showed that nonvertex optimal locations can occur in a given
edge only when transportation costs are linear with distance over
that edge, and, in that case, when and only when the slopes of these
linear cost functions are in a special relation. Hence, if at least
one cost function over some edge is nonlinear, then no interior point
of that edge can be in an optimal solution. If the same situation
holds for every edge, then a solution must necessarily occur at the
vertices of the network.
Solution approaches. Kariv and Hakimi [66] showed that the p-median
problem on a general network is at least as hard as NP-complete
problems. For the case of tree networks, however, algorithms of
polynomial complexity have been developed. Matula and Kolde [85]
suggested an O(n³p²) algorithm for finding the p-median of a tree
network. Kariv and Hakimi [66] proposed an O(n²p²) algorithm for the
same problem.
For general networks, a number of solution procedures have been
developed, all based on the vertex-optimality result.
Their common characteristic is that they all confine the search to
vertex locations. The solution procedures can be grouped into three
categories: mixed-integer programming approaches, branch-and-bound
techniques, and heuristics.
ReVelle and Swain [95] formulated the problem as a linear integer
program with 0-1 variables. The solution is obtained by applying the
primal simplex algorithm to the associated linear program. In case
of noninteger termination, a branch-and-bound scheme is recommended
to resolve the problem in integers. Their computational experience
indicates that noninteger termination seldom occurs. Toregas, Swain,
ReVelle, and Bergman [109] formulated a modified version of the problem
as a mixed integer program. The modification is the presence of upper
bounds on the distance between any vertex and its nearest facility.
This formulation makes use of a related but simpler problem: to
minimize the number of facilities needed to cover all vertices of N
within a specified critical distance. This problem is formulated as a
set covering problem and solved by ignoring the integer requirements.
In case of noninteger termination, a single cut produced an integer
solution in a large proportion of the cases. A somewhat different
approach to solving the relaxed linear program is to use a decomposition
scheme rather than applying the primal simplex algorithm. Swain [105]
used a Dantzig-Wolfe decomposition approach to solve the associated
linear program. Garfinkel, Neebe, and Rao [37] independently developed
a decomposition approach similar to Swain's. In case of noninteger
termination, they used group-theoretic techniques and a dynamic
programming recursion to obtain an integer solution.
A second approach taken is to solve the problem using a branch-and-bound
technique. Khumawala [68] applied a branch-and-bound method
of the Land and Doig [77] type to solve both the set covering problem and
the modified p-median problem formulated by Toregas et al. He showed
that the branch-and-bound approach is computationally efficient for
the former but not for the latter. Narula, Ogbu, and Samuelson [91]
presented a branch-and-bound scheme which obtains bounds by
solving the Lagrangian relaxation of the integer programming
formulation using a subgradient optimization method. Another
branch-and-bound method was developed by Jarvinen, Rajala, and Sinervo [62].
Their procedure looks for n−p vertices that do not belong to a p-median.
This method works better for larger values of p, since n−p
is then smaller, reducing the number of possibilities. A
similar branch-and-bound procedure was given by El-Shaieb [24]. The
procedure is based on the construction of a source set (i.e., a p-median)
and a demand set. Starting with both sets empty, a location is added
to one of the sets at each iteration. Whenever the number of elements in
the source set reaches p, or the number of elements in the demand set
reaches n−p, a feasible solution is obtained. An optimal solution is
eventually identified using the lower bounds.
A third approach taken is to use heuristics. A number of
heuristics have been developed by Maranzana [84], Teitz and Bart [107],
and Khumawala [69, 70].

For a discussion of a number of the solution approaches from a
computational standpoint, the reader is referred to Hillsman and
Rushton [59], and to Khumawala, Neebe, and Dannenbring [71].
Stochastic networks and vertex-optimality. A number of
probabilistic versions of the p-median problem have been considered in
the literature. Mirchandani and Odoni [89, 90] extended Hakimi's
vertex optimality result to the case of a stochastic network whose
edge lengths are random variables. Berman and Larson [2] considered
a stochastic network where the availability of servers (centers) is a
random variable. They showed that under suitable conditions there
exists at least one optimal set of locations on the vertices of such
a network.

This completes the discussion of the p-median problem.
The distance constraints problem
The distance constraints problem involves locating new facilities
on a network so that they are within specified distances of existing
facilities as well as within specified distances of one another. The
distance constraints arise naturally in a locational context if one
wishes to require that a service facility be within a specified time
(distance) of any point in the region it serves. Alternatively, in a
military context, one may want to locate a number of units in such a
way that units are neither too far from their supply bases, nor too
far from one another, in order that one unit may reinforce another if
necessary.
To state the problem, let N be a network with the vertex set
V = {v_1,...,v_n}. Denote by X = (x_1,...,x_m) any location vector in N^m,
the m-fold Cartesian product of N with itself. Define the sets I_B and
I_C as follows: I_B = {(j,k): 1 ≤ j < k ≤ m}, I_C = {(i,j): 1 ≤ i ≤ m,
1 ≤ j ≤ n}. Here, the pairs (j,k) and (i,j) are assumed to be unordered.
Let Ī_B and Ī_C be two nonempty subsets of I_B and I_C,
respectively, and suppose we are given nonnegative finite numbers b_jk
for each (j,k) ∈ Ī_B and c_ij for each (i,j) ∈ Ī_C.

The problem of interest is to find a location vector X ∈ N^m, if it
exists, such that the constraints (1.3.3) are satisfied:

    d(x_i,v_j) ≤ c_ij , (i,j) ∈ Ī_C ,
                                            (1.3.3)
    d(x_j,x_k) ≤ b_jk , (j,k) ∈ Ī_B .

Any vector X ∈ N^m satisfying (1.3.3) is called a feasible location
vector. The distance constraints are said to be consistent if there exists
at least one feasible location vector X ∈ N^m.
Goldman and Dearing [45] provide a conceptual discussion of, and
a motivation for, considering such problems. The distance constraints
are formally defined by Dearing, Francis, and Lowe [22] on a network.
It was established in [22] that, in a well-defined sense, the distance
constraints define convex sets under the assumption that the underlying
network is a tree. Furthermore, the distance constraints always
define convex sets if and only if the network is a tree.

Based on the results obtained in [22], Francis, Lowe, and Ratliff
[32] considered the distance constraints on tree networks in more
detail. They established necessary and sufficient conditions for the
distance constraints to be consistent, and also devised algorithms
that find a feasible location vector whenever one exists. In what
follows we briefly discuss the results obtained in [32].
Distance constraints for a single new facility. For the case of
a single facility, Francis et al. showed that there exists a feasible
point x ∈ T satisfying d(x,v_i) ≤ c_i for i ∈ I if and only if the
inequalities d(v_j,v_k) ≤ c_j + c_k are all satisfied for 1 ≤ j < k ≤ n.
An equivalent statement of the single facility distance constraints
can be given in terms of "neighborhoods" around v_i of radii c_i.
Define the neighborhood N(u,r) around a point u ∈ T of radius r to be the
set of all points x ∈ T for which d(u,x) ≤ r. Then, a point x satisfies
the constraints d(v_i,x) ≤ c_i, i ∈ I, if and only if x is in each
neighborhood N(v_i,c_i), i ∈ I, if and only if x is in the intersection
∩_{i=1}^{n} N(v_i,c_i). It follows then that the single facility distance
constraints d(x,v_i) ≤ c_i, i ∈ I, are consistent if and only if d(v_j,v_k) ≤
c_j + c_k for 1 ≤ j < k ≤ n, if and only if each pairwise intersection
N(v_j,c_j) ∩ N(v_k,c_k) is nonempty for 1 ≤ j < k ≤ n. Based on this
property, a "sequential intersection procedure" was developed that
determines the composite neighborhood N(a,r) = ∩_{i=1}^{n} N(v_i,c_i), with
unique center a and radius r, by intersecting the neighborhoods
N(v_i,c_i) one at a time in an arbitrary order. The procedure can be
implemented in O(n) operations. The composite neighborhood N(a,r)
contains all alternate feasible points when the constraints are consistent,
and N(a,r) is always a convex compact subset of the tree
network. A result was also given by Francis et al. that provides a
sensitivity analysis on the constraints with no additional computational
effort. Supposing that the distance constraints are consistent
with the original upper bounds c_i, consider an ε-perturbation of the
upper bounds; i.e., for some ε > 0, define the new upper bounds to be
c_i − ε, i ∈ I. If N(a,r) is the composite neighborhood corresponding to
the original upper bounds, then it can be shown that for any ε with
0 ≤ ε ≤ r, the ε-perturbed constraints remain consistent and the set
of feasible points of the ε-perturbed system is given directly by
N(a, r−ε).
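The pairwise consistency test just stated is easy to sketch; the routine below (an illustration under our own naming) computes tree distances by searches from each vertex and checks d(v_j,v_k) ≤ c_j + c_k for every pair.

```python
def single_facility_consistent(n, edges, c):
    """Consistency test for single-facility distance constraints
    d(x, v_i) <= c_i on a tree, using the characterization described
    in the text: consistent iff d(v_j, v_k) <= c_j + c_k for all pairs.

    edges: (a, b, length) triples of the tree; c: the upper bounds.
    """
    adj = [[] for _ in range(n)]
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))

    def dists(src):
        # single-source tree distances by DFS, O(n) per source
        d = [None] * n
        d[src] = 0.0
        stack = [src]
        while stack:
            v = stack.pop()
            for u, w in adj[v]:
                if d[u] is None:
                    d[u] = d[v] + w
                    stack.append(u)
        return d

    return all(
        dists(j)[k] <= c[j] + c[k]
        for j in range(n) for k in range(j + 1, n)
    )
```

On a unit-length path v_0–v_1–v_2, the bounds (1, 0, 1) are consistent (the middle vertex is feasible), while shrinking the first bound below 1 breaks the pair (v_0, v_1).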
Distance constraints for the multifacility case. For the
multifacility case, the necessary and sufficient conditions for the
consistency of the distance constraints are given in terms of n(n−1)/2
inequalities called "separation conditions." The separation conditions
are defined by means of an auxiliary graph constructed by using
the sets Ī_B and Ī_C. Let G be the graph with nodes N_i, 1 ≤ i ≤ m,
corresponding to new facilities, and nodes E_j, 1 ≤ j ≤ n, corresponding
to existing facilities. The arc set A of G contains (N_i,E_j) if
(i,j) ∈ Ī_C and (N_j,N_k) if (j,k) ∈ Ī_B. The arc length of (N_i,E_j) is c_ij,
and that of (N_j,N_k) is b_jk. Under the (reasonable) assumption that G is
connected, denote by L(E_j,E_k) the length of a shortest path connecting
nodes E_j and E_k for 1 ≤ j < k ≤ n. It was proven in [32] that the
distance constraints are consistent on a tree network if and only if
the inequalities L(E_j,E_k) ≥ d(v_j,v_k) are satisfied for 1 ≤ j < k ≤ n.
These inequalities are called the separation conditions. The proof
of the consistency of the distance constraints implying the satisfac
tion of the separation conditions uses only the triangle inequality
and hence is applicable to all networks. The reverse implication
always holds for tree networks, but may fail to hold for general net
works. The proof of the reverse implication is constructive and
actually finds a feasible location vector under the assumption that
the separation conditions are satisfied. The method that constructs
such a feasible location vector is termed the "Sequential Location
Procedure" in [32]. The method can best be described with the aid of
a physical model. One may imagine that the tree is represented by
appropriately inscribing straight line segments on a board such that
each segment represents an edge. At vertex v., strings of length c..
are fastened for each new facility j such that (i,j)elC. A tip vertex
is chosen arbitrarily and all strings fastened at that vertex are
pulled tight towards the adjacent vertex. If all strings reach the
adjacent vertex, they are simply engaged there with their loose ends
free to be pulled tight in some future iteration. Also the tip vertex
together with the edge incident to it is removed from the model. The
procedure is repeated with the resulting tree. In the other case,
not all the strings reach the adjacent vertex when pulled tight. Among
those which do not reach the adjacent vertex one which is shortest is
selected, and the end point of this string determines the location of
36
the new facility it is associated with. All the strings pulled tight
from the chosen tip are engaged at this new facility location. The
feasibility of this location is checked with respect to all existing
facilities and all other new facilities already placed on T. If the
feasibility check is passed, new strings are fastened at this location
associated with that new facility and other unplaced new facilities for
which the distances are of concern. The procedure continues, treating
each placed new facility like an existing facility, until, either all
facilities are placed, or the current tree reduces to a point, in
which case, all remaining new facilities are placed at that point.
If the separation conditions hold, the procedure always finds a
feasible location vector. The algorithm runs in O(m(m+n)) time and is
conjectured in [33] to be of best possible order for determining the
consistency of the distance constraints.
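Checking consistency via the separation conditions reduces to shortest-path computations in the auxiliary graph G. The following Python sketch is our own illustration (it only tests feasibility; it is not the O(m(m+n)) Sequential Location Procedure of [32]) and assumes the tree distance matrix d is given:

```python
from itertools import combinations

def separation_conditions_hold(n, m, IC, IB, d):
    """Check consistency of the distance constraints d(x_i,v_j) <= c_ij,
    d(x_j,x_k) <= b_jk on a tree via the separation conditions:
    L(E_j,E_k) >= d(v_j,v_k) for every pair of existing facilities.

    IC: dict (i, j) -> c_ij  (new facility i, vertex j, 0-based)
    IB: dict (j, k) -> b_jk  (new facility pairs, j < k)
    d:  n x n matrix of tree distances between vertices.
    Auxiliary graph nodes: 0..m-1 are N_1..N_m, m..m+n-1 are E_1..E_n.
    """
    INF = float("inf")
    V = m + n
    L = [[0.0 if a == b else INF for b in range(V)] for a in range(V)]
    for (i, j), c in IC.items():          # arc (N_i, E_j) of length c_ij
        a, b = i, m + j
        L[a][b] = L[b][a] = min(L[a][b], c)
    for (j, k), bjk in IB.items():        # arc (N_j, N_k) of length b_jk
        L[j][k] = L[k][j] = min(L[j][k], bjk)
    for mid in range(V):                  # Floyd-Warshall shortest paths in G
        for a in range(V):
            for b in range(V):
                if L[a][mid] + L[mid][b] < L[a][b]:
                    L[a][b] = L[a][mid] + L[mid][b]
    # Separation condition for each pair of existing facility nodes
    return all(L[m + j][m + k] >= d[j][k] - 1e-9
               for j, k in combinations(range(n), 2))
```

For example, on a two-vertex tree with d(v_1,v_2) = 4 and a single new facility, the constraints c_11 = c_12 = 2 pass the test, while c_11 = c_12 = 1 fail it (the shortest E_1-E_2 path in G has length 2 < 4).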
Extensions of the results obtained in [32] are given by Francis,
Lowe, and Tansel [33]. These extensions focus on the analysis of
binding separation conditions which in turn determine the "uniquely"
located new facilities. A separation condition that holds at equality
is said to be a binding separation condition. If L(E_j,E_k) = d(v_j,v_k)
is a binding separation condition, then any shortest path P(E_j,E_k) in
the auxiliary graph G is said to be a tight path. New facility i is
said to be uniquely located at point x_i if in every feasible solution X to
the distance constraints the location x_i is the same. It was shown
in [33] that a new facility i is uniquely located if and only if node
N_i lies on at least one tight path. As an immediate consequence of
this property, the distance constraints have a unique feasible solution
if and only if each N_i, 1 ≤ i ≤ m, lies on at least one tight path
in the graph G. Furthermore, if some path P(E_j,E_k) is a tight path,
then the nodes representing facilities in the path occur with the same
ordering and spacing in the path as do the locations representing the
facilities in the path P(v_j,v_k) on T. This result enables one to
locate the new facilities that appear in a tight path immediately,
without having to use the Sequential Location Procedure.
A multifacility minimax application of the distance constraints
is given in [32, 33] and a multiobjective application is given in [33].
These two applications will be discussed subsequently.
m-Center problem with mutual communication
Let N be a network with vertex set V = {v_1,...,v_n} and edge set
E. Suppose the sets IB and IC are given with IB ⊆ {(j,k): 1 ≤ j < k ≤ m}
and IC ⊆ {(i,j): 1 ≤ i ≤ m, 1 ≤ j ≤ n}. We assume that we are given
positive weights v_jk for each (j,k) ∈ IB and w_ij for each (i,j) ∈ IC. For
each location vector X ∈ N^m, define the functions fB(X), fC(X), and
f(X) as follows:

    fB(X) = max[v_jk d(x_j,x_k): (j,k) ∈ IB] ,
    fC(X) = max[w_ij d(x_i,v_j): (i,j) ∈ IC] ,
    f(X)  = max[fB(X), fC(X)] .

The m-center problem with mutual communication is the following:
Find a location vector X* ∈ N^m such that

    Z* ≡ f(X*) = min[f(X): X ∈ N^m] .
The problem differs from the p-center problem in two respects:
(i) the distance between any vertex v_j and any new facility x_i may be
of concern, as opposed only to the distance between v_j and the nearest
new facility to v_j; (ii) certain distances between new facilities are
of concern, as opposed to the absence of interactions between new
facilities. For the case of a single new facility the two problems
coincide.
In this problem, the new facilities may be thought to fulfill a
supporting task to other new facilities as well as servicing those
existing facilities that are a priori assigned to them.
Certain planar cases of the multifacility minimax problem have
been studied by Dearing and Francis [20], Elzinga, Hearn, and Randolph
[25], Wendell and Peterson [113], and Francis [28].
The problem on a network is defined by Dearing, Francis, and Lowe
[22] in the presence of distance constraints. It is established in
[22] that the function f is a convex function on a tree network. The
existence of a solution is guaranteed due to compactness and
continuity considerations. Furthermore, it is shown that it suffices to
consider only new facility locations in the convex hull of the existing
facility locations in order to solve the problem.
The problem on a general network was shown to be NP-hard by Kolen
[72]. For the case of a tree network, the problem is solved by
Francis, Lowe, and Ratliff [32] by using an equivalent formulation in
terms of distance constraints (with variable right hand sides). The
solution procedure finds Z* first, by using the separation conditions.
Then an optimal feasible location vector X* is constructed by using the
Sequential Location Procedure described in [32]. To find Z* an
auxiliary graph G is formed with nodes N_1,...,N_m, E_1,...,E_n. Graph G
contains arcs (N_i,E_j) with lengths 1/w_ij corresponding to pairs
(i,j) ∈ IC, and arcs (N_j,N_k) with lengths 1/v_jk corresponding to pairs
(j,k) ∈ IB. It is assumed that G is connected, for otherwise the problem
decomposes into subproblems. For each pair of existing facility nodes
E_j, E_k, define L(E_j,E_k) to be the length of a shortest path in G
connecting E_j and E_k. Francis et al. showed that Z* is given by
max{d(v_j,v_k)/L(E_j,E_k): 1 ≤ j < k ≤ n}. The distances d(v_j,v_k) can be
computed in O(n^2) operations for a tree network (see [23]), and the
shortest path lengths L(E_j,E_k) are readily computable in O(n^3) operations.
When Z* is computed, the Sequential Location Procedure described
in [32] can be applied in O(m(n+m)) operations to find a location
vector X* that solves the problem.
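The computation of Z* can be sketched directly from this formula. The Python fragment below is an illustration under the stated assumptions (tree distances given, G connected); it uses a plain Floyd-Warshall pass on G rather than anything tuned to the tree structure:

```python
from itertools import combinations

def minimax_value(n, m, IC_w, IB_v, d):
    """Z* for the m-center problem with mutual communication on a tree:
    Z* = max{ d(v_j,v_k) / L(E_j,E_k) : 1 <= j < k <= n },
    where L is the shortest path length in the auxiliary graph G whose
    arcs (N_i,E_j) have length 1/w_ij and arcs (N_j,N_k) length 1/v_jk.

    IC_w: dict (i, j) -> w_ij ; IB_v: dict (j, k) -> v_jk (0-based);
    d: n x n tree distance matrix.  G is assumed connected.
    """
    INF = float("inf")
    V = m + n
    L = [[0.0 if a == b else INF for b in range(V)] for a in range(V)]
    for (i, j), w in IC_w.items():        # arc (N_i, E_j), length 1/w_ij
        a, b = i, m + j
        L[a][b] = L[b][a] = min(L[a][b], 1.0 / w)
    for (j, k), v in IB_v.items():        # arc (N_j, N_k), length 1/v_jk
        L[j][k] = L[k][j] = min(L[j][k], 1.0 / v)
    for mid in range(V):                  # Floyd-Warshall shortest paths
        for a in range(V):
            for b in range(V):
                if L[a][mid] + L[mid][b] < L[a][b]:
                    L[a][b] = L[a][mid] + L[mid][b]
    return max(d[j][k] / L[m + j][m + k]
               for j, k in combinations(range(n), 2))
```

On a two-vertex tree with d(v_1,v_2) = 4, one new facility, and unit weights to both vertices, L(E_1,E_2) = 2 and Z* = 2, which agrees with placing the single center at the midpoint.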
m-Median problem with mutual communication
Define the functions gB, gC, and g by the following expressions:
For each X ∈ N^m,

    gB(X) ≡ Σ[v_jk d(x_j,x_k): (j,k) ∈ IB] ,
    gC(X) ≡ Σ[w_ij d(x_i,v_j): (i,j) ∈ IC] ,
    g(X)  =  gB(X) + gC(X) .

The m-median problem with mutual communication is the following:
Find a location vector X* in N^m such that

    Z* ≡ g(X*) = min[g(X): X ∈ N^m] .
The problem differs from the p-median problem in two respects:
(i) the distance between any vertex and any new facility may be of
concern, as opposed only to the distance between a vertex and the nearest
new facility to it; (ii) certain distances between new facilities
are of concern, as opposed to the absence of interactions between new
facilities in the p-median problem. For the case of a single new
facility, the two problems are identical.
Planar cases of the problem using rectilinear or Euclidean distances
have received considerable attention and efficient solution
procedures have been developed. A thorough discussion of these problems
is given in the book by Francis and White [31]. Other references
on planar problems are Cabot, Francis, and Stary [6], Bindschedler and
Moore [3], Francis [27], Eyster, White, and Wierville [26], Pritsker
and Ghare [94], Wesolowsky and Love [115, 116], and Picard and Ratliff
[93].
The problem on a network is defined by Dearing, Francis, and Lowe
[22] in the presence of distance constraints. It was established in
[22] that the problem is a convex optimization problem for all data
choices if and only if the network is a tree. For the case of a general
network, it is known that there exists an optimal solution on the
vertices of N. This result and certain generalizations of it have
been given by Goldman [41], Levy [80], Hakimi and Maheshwari [49], and
Wendell and Hurter [111]. These references are already discussed
under the p-median problem. The problem was shown to be NP-hard by
Kolen [72] on a general network, and no solution procedures have
been developed yet.
For the case of a tree network, the m-median problem with mutual
communication is solved by Dearing and Langford [21], and by Picard and
Ratliff [93].
The approach used by Dearing and Langford is to embed the tree T
into the Euclidean space R^p, for some p, so that the distance between
any two points on the tree is equal to the rectilinear distance between
the corresponding points in R^p. The problem in R^p with rectilinear
distances decomposes into p subproblems, each of which can be solved
by using known techniques given in Francis and White [31], or, perhaps
more efficiently, by applying the network flow procedure discussed in
Cabot, Francis, and Stary [6]. For reducing the computational effort,
the embedding procedure is carried out with respect to a minimal path
decomposition of T into p edge disjoint paths (each edge is in one and
only one path). Each path in a minimal path decomposition corresponds
to a dimension in R^p.
The approach taken by Picard and Ratliff in [93] takes advantage
of the vertexoptimality condition and determines an optimal solution
(on the vertices of T) by solving a sequence of at most n-1 minimum
cut problems, each on a graph containing at most m+2 nodes. The
method is based on a result that an optimal location vector can be
found independently of the edge lengths, by using only the incidence
relations between vertices and the weights. In this respect, the
procedure is in the same spirit as Goldman's algorithm for finding a
median of a tree. Each cut problem corresponds to an edge of the
tree. To be more explicit, the removal of all interior points of an
edge e leaves two disconnected components, T1 = T1(e) and T2 = T2(e).
Corresponding to edge e, a graph G = G(e) is constructed having nodes
1 through m corresponding to new facilities, a source s and a sink t.
Graph G contains arcs (s,i) and (i,t) for 1 ≤ i ≤ m and arcs (j,k) for
each pair (j,k) ∈ IB. The capacity of arc (j,k) is the weight v_jk. The
capacity of arc (s,i) is given by Σ[w_ir: v_r ∈ T1, (i,r) ∈ IC], and the
capacity of arc (i,t) is given by Σ[w_iq: v_q ∈ T2, (i,q) ∈ IC]. If
(Q,Q̄) is a minimum capacity s-t cut of G, with s ∈ Q, t ∈ Q̄, then all new
facility locations x_i for which the corresponding node i is in Q are
in T1 in an optimal solution. Similarly, all x_j for which the node j
is in Q̄ are in T2 in an optimal solution. The procedure is a repeated
application of this minimum cut problem with respect to each edge,
until an optimal vertex location is determined for each x_i. During
the process, each x_j whose location is determined is treated like an
existing facility. The method was originally described for the
analogous rectilinear distance problem on the plane, which, in turn,
decomposes into two subproblems, each on a line.
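One step of this procedure, for a fixed edge e, is a standard max-flow/min-cut computation. The sketch below is our own illustration (Edmonds-Karp augmenting paths on a dense capacity matrix, nothing tuned to the problem); the aggregated capacities cap_T1 and cap_T2 are assumed precomputed from IC and the two components:

```python
from collections import deque

def edge_cut_assignment(m, cap_T1, cap_T2, IB_v):
    """One step of the Picard-Ratliff procedure for a fixed tree edge e:
    build the s-t graph and report, from a minimum cut, which new
    facilities go to component T1 (source side) and which to T2.

    cap_T1[i]: sum of w_ir over existing facilities v_r in T1, (i,r) in IC
    cap_T2[i]: sum of w_iq over existing facilities v_q in T2, (i,q) in IC
    IB_v: dict (j, k) -> v_jk over new-facility pairs (0-based)
    """
    s, t = m, m + 1
    cap = [[0.0] * (m + 2) for _ in range(m + 2)]
    for i in range(m):
        cap[s][i] = cap_T1[i]
        cap[i][t] = cap_T2[i]
    for (j, k), v in IB_v.items():        # interaction arcs, both directions
        cap[j][k] += v
        cap[k][j] += v

    def bfs_path():                        # Edmonds-Karp augmenting path
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in range(m + 2):
                if w not in parent and cap[u][w] > 1e-12:
                    parent[w] = u
                    if w == t:
                        return parent
                    q.append(w)
        return None

    while (parent := bfs_path()) is not None:
        path, u = [], t                    # recover the path, push flow
        while parent[u] is not None:
            path.append((parent[u], u))
            u = parent[u]
        delta = min(cap[a][b] for a, b in path)
        for a, b in path:
            cap[a][b] -= delta
            cap[b][a] += delta
    # Source side of a minimum cut = nodes reachable in the residual graph
    Q, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for w in range(m + 2):
            if w not in Q and cap[u][w] > 1e-12:
                Q.add(w)
                stack.append(w)
    return ([i for i in range(m) if i in Q],
            [i for i in range(m) if i not in Q])
```

For instance, with two new facilities, the first pulled strongly toward T1 and the second toward T2, and a weak interaction v_12 = 1 between them, the cut separates them as expected.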
Multiobjective location problems on networks
Multiobjective optimization problems, sometimes known as vector
optimization problems, involve decision making under two or more
criteria. More explicitly, a set (finite or infinite) S of alternatives
is specified and n (possibly noncommensurable) objective functions
are to be minimized over S. Let f_1,...,f_n be n numerical functions
defined on S, and define f(x) = (f_1(x),...,f_n(x)) for all x ∈ S.
The multiobjective optimization problem (VMP) is the following:

    Vmin f(x)
    x ∈ S

In general, the minima of the functions f_1,...,f_n do not coincide.
In order for the minimization to be meaningful, one needs to introduce
the concept of "efficient solutions." A point x in S is said to be
efficient if there does not exist a point y in S such that f_i(y) ≤ f_i(x)
for 1 ≤ i ≤ n and f_k(y) < f_k(x) for at least one index k. One is
interested in finding and characterizing the set of efficient solutions
to (VMP). An efficient point is sometimes known as an undominated
point. A point which is not efficient is said to be dominated.
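On a finite set of alternatives, efficiency can be checked by direct pairwise comparison, as in this minimal Python sketch (an illustration only; the function and variable names are ours):

```python
def efficient_points(S, objectives):
    """Return the efficient (undominated) members of a finite set S under
    a list of objective functions to be minimized.  A point x is dominated
    if some y satisfies f_i(y) <= f_i(x) for all i, with strict inequality
    for at least one index i."""
    vals = {x: tuple(f(x) for f in objectives) for x in S}
    def dominates(y, x):
        return (all(a <= b for a, b in zip(vals[y], vals[x]))
                and vals[y] != vals[x])
    return [x for x in S if not any(dominates(y, x) for y in S)]
```

For two objectives |x| and |x-3| on candidate points of a line, the efficient set is exactly the points between 0 and 3, echoing Kuhn's result that the efficient set is the convex hull of the fixed points.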
Kuhn and Tucker [76] and Koopmans [74] are among the first to
introduce the concept of efficiency. Geoffrion [40] extended the
concept to "properly efficient" points and provided a comprehensive
theoretical framework for subsequent research. Necessary and sufficient
conditions for efficient points to be properly efficient are
given by Wendell and Lee [112]. Some of the later contributions are
due to Yu [117], Yu and Zeleny [118, 119], Bitran and Magnanti [4],
Wendell [110], and Bergstresser, Charnes, and Yu [1]. We note that
there are other approaches to multicriteria decision making, such as
goal programming, multiattribute utility theory, construction of
outranking relations, and interactive programming techniques. For
general information on multicriteria decision making, the reader is
referred to Roy [99], Starr and Zeleny [104], Cochrane and Zeleny
[16], Keeney and Raiffa [67], and Thiriez and Zionts [108]. A survey
of multicriteria decision making is given by Chalmet [7].
Multiobjective location problems (on the plane or on networks)
have begun receiving attention only recently. Kuhn [75] appears to
be the first to consider a multiobjective location problem on the
plane. Kuhn considered the problem of minimizing the vector of
Euclidean distances from a variable point to a set of fixed points on
the plane, and showed that the set of efficient solutions is the convex
hull of the fixed points. Wendell, Hurter, and Lowe [114] considered
the same problem with rectilinear distances and provided algorithms of
O(n^2) and O(n^3) for generating efficient points. A more efficient
algorithm of O(n log n) was developed by Chalmet and Francis [8] for
the same problem. McGinnis and White [83] considered the problem of
minimizing the sum of and the maximum of weighted rectilinear distances
from a variable point to a set of fixed points on the plane and formulated
the problem as a parametric linear program for which known solution
techniques exist. Juel [64] considered the same problem for
the case of multiple new facilities and gave an equivalent parametric
linear program. Chalmet, Francis, and Lawrence [9, 10] considered
two variants of an efficient design problem, where the location
variable (a design) is a planar region of specified positive area
but of unknown shape.
A few papers have been produced on multiobjective location
problems on networks. In what follows we discuss these problems.
The centdian problem. The single facility "centdian" problem
involves the sum of and maximum of weighted distances from a new
facility to a set of existing facilities at vertices of N. To define
the problem, let w_i and h_i be two positive weights associated with
vertex v_i, i ∈ I = {1,...,n}. For each point x ∈ N define:

    m(x) ≡ Σ{w_i d(v_i,x): i ∈ I} ,
    c(x) ≡ max[h_i d(v_i,x): i ∈ I] ,
    f(x) ≡ (m(x), c(x)) .
The problem of interest is to find all efficient points with
respect to f(x).
Halpern [52] is the first to consider this problem. Halpern
formulated the problem in a slightly different manner by considering
a convex combination of m(x) and c(x). For any fixed λ, 0 ≤ λ ≤ 1,
define f(λ,x) and f*(λ) by

    f(λ,x) ≡ λ m(x) + (1-λ) c(x)   for x ∈ N ,
    f*(λ) ≡ min[f(λ,x): x ∈ N] .              (1.3.4)

In Halpern's terminology, the function f(λ,x) is called a centdian
function and any point x* = x*(λ) that solves (1.3.4) is called a
centdian point.
In [52] Halpern considered this problem on a tree network with
weights h_i all equal to unity. Defining x_m and x_c to be the (vertex)
median and the absolute center of T, respectively, Halpern proved that
for any given λ, the centdian x*(λ) is located at either x_c or on
one of the vertices located on the path P(x_m,x_c). This theorem provides
the basis for a simple and efficient algorithm to locate the
centdian by inspecting the vertices on P(x_m,x_c). Further, Halpern
showed that, if the absolute center x_c is known, then the centdian
can be found by determining the median of a tree T' that is identical
to T except that T' contains an additional vertex v_{n+1} = x_c with the
associated weight w_{n+1} = (1-λ)/λ.
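The vertex-inspection idea can be illustrated by evaluating the centdian objective over candidate vertices. The sketch below is our own (it searches all vertices rather than only those on P(x_m,x_c), and uses unit h_i); a tree distance matrix is taken as given:

```python
def centdian_vertex(d, weights, lam):
    """Evaluate Halpern's centdian objective f(lam,v) = lam*m(v) + (1-lam)*c(v)
    over the vertices of a tree and return a minimizing vertex.
    d: n x n tree distance matrix; weights: w_i for the median term m(.);
    the center term c(.) uses unit weights (h_i = 1), as in Halpern [52].
    Searching every vertex is for illustration only; Halpern's theorem
    narrows the search to x_c and the vertices on the path P(x_m,x_c)."""
    n = len(d)
    def m(v):
        return sum(weights[i] * d[i][v] for i in range(n))  # sum of distances
    def c(v):
        return max(d[i][v] for i in range(n))               # eccentricity
    return min(range(n), key=lambda v: lam * m(v) + (1 - lam) * c(v))
```

On a three-vertex path with unit edges, the middle vertex is both the vertex median (λ = 1) and the vertex center (λ = 0), so it is the centdian for every λ; skewing the median weights toward an end vertex moves the λ = 1 solution there.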
Handler [56] formulated the same problem on a tree network in a
slightly different manner by using the median function as a constraint.
In Handler's formulation one is interested in solving the problem
P_α for each given α, where P_α is defined as follows:

    e(α) = min[c(x): m(x) ≤ α, x ∈ T] .

Efficient solutions are obtained by parameterizing on α. Handler's
results closely parallel Halpern's.
The problem on a general network is studied by Halpern [54],
using the convex combination approach. Halpern showed that the problem
is a computationally finite one. Computational finiteness follows
from the result that f(λ,x) is a continuous, piecewise linear function
of x on each edge and attains its minimum at one of a finite number of
points. Defining Q(e) to be the union of the end points of edge e
with the set of local minima of c(x) on e, the minimum of f(λ,x) over
all x on edge e is a member of Q(e) for any given λ, 0 ≤ λ ≤ 1. Defining
Q ≡ U {Q(e): e ∈ E}, it follows that the centdian x*(λ) is contained
in Q for any λ. Further, Halpern showed that the function
f*(λ) = min[f(λ,x): x ∈ N] is a continuous, piecewise linear, concave
function of λ for 0 ≤ λ ≤ 1. Based on these results, Halpern provided
an algorithm which constructs f*(λ) and identifies x*(λ) for
0 ≤ λ ≤ 1. To construct f*(λ), the algorithm inspects each edge one
at a time and computes the set Q(e), unless a simple test indicates
that edge e cannot contain any centdian for any λ. An upper bound
on f*(λ) is carried through and improved, whenever possible, by
examining the members of Q(e).
Centdian problem and duality. In [53], Halpern studied the
centdian problem on a general network from a different angle and obtained
a duality relationship. Using an approach similar to Handler's median
constrained problem, Halpern defined two problems, a median constrained
and a center constrained one. More specifically, for real λ and μ
define the functions m*(λ) and c*(μ) as follows:

    m*(λ) = min[m(x): c(x) ≤ λ]                (1.3.5)
    c*(μ) = min[c(x): m(x) ≤ μ]                (1.3.6)

In general, for some values of λ (μ), the constraint c(x) ≤ λ
(m(x) ≤ μ) may not admit any feasible solution. However, real intervals
C and M can be defined so that for any λ ∈ C and for any μ ∈ M, the
constraints in (1.3.5) and (1.3.6) admit a feasible point. To define
C, let Ω_c be the set of all minima to min[c(x): x ∈ N], and let Ω_m be
the set of all minima to min[m(x): x ∈ N]. Let x̄ be a point in Ω_c that
minimizes the value of m(x) over all x in Ω_c. Similarly, let ȳ be a
point in Ω_m that minimizes the value of c(y) over all y in Ω_m. Then
C and M are defined as follows:

    C = [c(x̄), c(ȳ)] ,
    M = [m(ȳ), m(x̄)] .

With these definitions Halpern's duality theorem can be stated
as follows:
    a) Given any μ ∈ M, with λ = c*(μ), we have c*(m*(λ)) = λ.
    b) Given any λ ∈ C, with μ = m*(λ), we have m*(c*(μ)) = μ.
For a tree network, the functions m* and c* are one-to-one and onto.
It follows from the duality theorem that the functions m* and c* are
inverses of each other for a tree network. For a general network,
the functions m*, c* need not be onto, i.e., the image of the domain
may only be a proper subset of the range. Hence, the inverse property
holds only for some members of C and M for a general network.
Now, we consider a more general multiobjective problem due to
Lowe [82]. The problem involves a single facility to be located on a
tree network with respect to m convex objective functions.
Multiobjective convex location problem (on a tree). Let T be
a tree network and let f_1,...,f_m be m convex continuous bounded functions
each of which is defined on T. In general, not all points in T
may be feasible with respect to f_i. Let Q_i be a convex compact subset
of T which contains all feasible points x with respect to the ith
optimizer. The set Q_i may be defined by specifying its extreme points,
or by means of distance constraints, or by other means. We assume
that each Q_i is known or computable. Define Q = ∩{Q_i: 1 ≤ i ≤ m} and
assume that Q is nonempty. The problem of interest is to find all
efficient points in Q with respect to the vector minimization problem
defined below:

    Vmin[f(x): x ∈ Q ⊆ T]

where

    f(x) = (f_1(x),...,f_m(x))   for all x ∈ T .

We note that Q is a convex compact subset of T as it is the
intersection of m convex compact subsets Q_i of T. For a formal discussion
of convexity on a network, the reader is referred to Dearing,
Francis, and Lowe [22]. Loosely speaking, Q a convex subset of T
means Q is connected, or that the (shortest, unique) path connecting
any two points in Q is contained in Q.
Lowe makes no assumptions on the specific forms of the objective
functions. Under the convexity assumptions, Lowe proves that a convex
compact subset T* of T can be identified that contains all efficient
points. To identify T*, define R*_i to be the set of all minima to the
unconstrained problem min[f_i(x): x ∈ T]. If R*_i intersects the feasible
set Q, define S*_i to be this intersection. Otherwise, S*_i is the unique
closest point in Q to R*_i. Having defined each S*_i, 1 ≤ i ≤ m, if their
intersection is nonempty, then the set of all efficient points is
given by T* = ∩{S*_i: 1 ≤ i ≤ m}. If this intersection is empty, then
T* is the smallest compact convex subtree that intersects each S*_i. It
can be shown that each R*_i, S*_i is convex, compact, and that T* is a
convex compact subset of T. Lowe's theorem assumes a knowledge of the
set of minima to each f_i as well as a knowledge of each Q_i and hence Q.
We note that the functions c(x) and m(x) in the centdian problem are
both convex on T. Hence, Halpern's results can be obtained by applying
Lowe's theorem.
Now, we consider a multiobjective problem which involves multiple
new facilities to be located on a tree network so that the distance
between each specified pair of new and existing facilities, and each
specified pair of new facilities is, roughly speaking, "as small as
possible." The problem is defined by Francis, Lowe, and Tansel [33]
as a sequel to the distance constraints problem, and solved by making
use of the separation conditions. Here, we call the problem the
"multifacility vector minimization problem."
The multifacility vector minimization problem (on a tree network).
Let T be a tree network and let IC, IB be given nonempty sets with
IC ⊆ {(i,j): 1 ≤ i ≤ m, 1 ≤ j ≤ n} and IB ⊆ {(j,k): 1 ≤ j < k ≤ m}.
The problem of interest is to locate m new facilities on T at points
x_1,...,x_m so that each distance d(x_i,v_j), (i,j) ∈ IC, and d(x_j,x_k), (j,k) ∈ IB,
is "as small as possible." More specifically, we wish to find all
efficient location vectors X = (x_1,...,x_m) in T^m with respect to the
vector minimization problem

    Vmin[D(X): X ∈ T^m]

where D(X) is the vector of distances d(x_i,v_j), (i,j) ∈ IC, and d(x_j,x_k),
(j,k) ∈ IB. The vector is formed by assuming any convenient ordering
of the members of the sets IC and IB.
Francis, Lowe, and Tansel [33] characterized efficient points by
making use of distance constraints. By definition, a location vector
Z in T^m is efficient if and only if there does not exist a location
vector X in T^m such that D(X) ≤ D(Z) and D(X) ≠ D(Z). Given a location
vector Z, let b_jk = d(z_j,z_k) for (j,k) ∈ IB and c_ij = d(z_i,v_j) for
(i,j) ∈ IC, and define the distance constraints (DC) of interest by

    d(x_i,v_j) ≤ c_ij   (i,j) ∈ IC ,
    d(x_j,x_k) ≤ b_jk   (j,k) ∈ IB .

We note that DC is always consistent, as Z is always feasible
to DC, and hence the separation conditions are always satisfied. The
separation conditions for DC are defined by constructing a graph G
with nodes N_j, 1 ≤ j ≤ m, corresponding to new facilities and nodes
E_i, 1 ≤ i ≤ n, corresponding to existing facilities. For each
(i,j) ∈ IC, the arc (N_i,E_j) is in G with length c_ij, and for each
(j,k) ∈ IB, the arc (N_j,N_k) is in G with length b_jk. We recall that a
point x_i is uniquely located in every feasible solution to DC if and
only if the corresponding node N_i is in at least one tight path in G,
where a path of G joining any two existing facility nodes E_s and E_t
is said to be tight if the length of the path is equal to the distance
between the vertices v_s and v_t in T corresponding to nodes E_s and E_t,
respectively. For any given location vector Z, denote by A_i(Z) the
collection of locations of uniquely located facilities whose nodes are
adjacent to N_i in G. Let H[A_i(Z)] be the convex hull of A_i(Z), i.e.,
the smallest connected subtree containing all points in A_i(Z).
With these definitions, it was proven in [33] that the following
conditions are equivalent:
    (i) Z is efficient.
    (ii) Z is the unique solution to DC.
    (iii) Each N_i is in at least one tight path in G.
    (iv) Each z_i is contained in H[A_i(Z)], 1 ≤ i ≤ m.
This completes the discussion of multiobjective location problems
on networks.
Path Location Problems
Here, we consider three versions of a path location problem posed
by Slater [102]. To define the problems, let P denote any path connecting
any two vertices in a network N. For any vertex v ∈ V and any
path P, define the distance D(v,P) to be the distance from v to a
nearest vertex in P. Also define the branch weight bw(P) of a path
P to be the maximum number of vertices in any component of N-P. The
three versions of the problem are the following:

    min  Σ{D(v,P): v ∈ V}   over paths P ⊆ N       (1.3.7)

    min  max{D(v,P): v ∈ V}   over paths P ⊆ N     (1.3.8)

    min  bw(P)   over paths P ⊆ N                  (1.3.9)
In Slater's terminology, any path P* that solves (1.3.7) is called
a core of N. Among all paths that solve (1.3.8), one with the fewest
vertices is called a path center of N. Similarly, among all the paths
that solve (1.3.9), one with the fewest vertices is called a spine
of N.
Slater obtained a number of properties of these problems for
tree networks. In particular, Slater showed that the path center of
T is unique and contains the vertex center of T, and that the spine of
T is unique and contains the centroid (equivalently, the vertex
median) of T. We recall that a centroid of T is any vertex v that
minimizes the maximum number of vertices in any component of T-v.
Also, Slater proposed two algorithms of linear order for determining
the path center and the spine of T.
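The branch weight of a given path is easy to compute by flood-filling the components of N-P. The sketch below is our own illustration (it evaluates bw(P) for one path; it is not Slater's linear-order spine algorithm):

```python
def branch_weight(adj, path):
    """Branch weight bw(P) of a path P in a network given by adjacency
    lists: the maximum number of vertices in any connected component of
    N - P (all path vertices removed).  Returns 0 if nothing remains."""
    seen = set(path)
    best = 0
    for start in adj:
        if start in seen:
            continue
        comp, stack = 0, [start]      # flood-fill one component of N - P
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best
```

On a star with center 0 and leaves 1, 2, 3, the single-vertex path [0] has branch weight 1 (each leaf is its own component), while the path [1] leaves the connected set {0,2,3} and has branch weight 3; the path [1,0,2] reduces it to 1.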
CHAPTER 2
DUALITY AND THE NONLINEAR p-CENTER PROBLEM AND COVERING
PROBLEM ON A TREE NETWORK
2.1 Introduction and Related Work
We consider the problem of locating p new facilities on a tree
network with respect to n existing facilities at known locations so as
to minimize the maximum "loss." The problem is an extension of the
linear p-center problem to the nonlinear case. We assume a strictly
increasing, continuous "loss" function is associated with each of a
finite number of demand points (existing facilities) whose argument
is the distance between the corresponding existing facility and its
nearest new facility. Our formulation permits the use of quite general
loss functions provided that they are continuous and strictly increasing
with the travel distance. The term "loss" is used generically
and may refer to any form of inconvenience such as cost, disutility
of service, travel time, etc.
In locating emergency service facilities, the disutility due to
"late" service may be too great beyond a certain "threshold" response
time. Such sharp changes in the disutility of service can be reflected
in the model by using nonlinear functions. Hurter and
Schaefer [61] justify and use such functions in a fire setting. As
pointed out by Dearing [18], a study by Kolesar et al. [73] revealed
that the travel time for fire trucks can be approximated by a particular
continuous, nonlinear, increasing function of the distance.
The literature on the p-center problem is discussed in detail
in Chapter 1. Here, we give a brief review of the more closely related
work. Except for p = 1, we know of no literature on the nonlinear
p-center problem. For p = 1, the only references we are aware
of which deal with the nonlinear case are Dearing [18] and Francis
[29]. Both authors showed that the minimax loss with respect to any
two existing facilities is a lower bound on the maximum loss with
respect to all existing facilities, and that the largest of the lower
bounds determines the minimax loss to all existing facilities on a
tree network. This result is an instance of the duality result we
will present in this chapter.
The linear (weighted or unweighted) p-center problem is shown to
be NP-complete on a general network by Kariv and Hakimi [65], and by
Nemhauser and Sheu [92].
The linear 1-center problem on a tree network is well solved (see
Goldman [44], Halfin [51], Lin [81], and Dearing and Francis [19]).
For p > 1, the linear p-center problem on tree networks is considered
by various authors. Handler [57] provided an O(n) algorithm for
finding the 2-center of a tree for the unweighted case. Kariv and
Hakimi [65] gave an O(n^2 log n) algorithm for tree networks which relies
on solving a sequence of covering problems for the weighted case with
p > 1. A similar procedure for the unweighted continuous p-center
problem on a tree network is given by Chandrasekaran and Daughety
[12]. A vertex-restricted version of the problem is solved by
Chandrasekaran and Tamir [13], and relies on solving a sequence of
clique covering problems on a related intersection graph.
The first duality relationship involving tree network location
problems can be found in Meir and Moon [86]. Cockayne, Hedetniemi,
and Slater [17] obtained a more general version of the result given
in [86]. The results in [86] and [17] closely parallel our duality
result for the covering problem and its dual. Shier [100] discovered
a "dispersion" problem which is dual to the continuous unweighted
p-center problem. The dispersion problem of Shier is to choose p+1
points in the tree network the nearest two of which are as far apart
as possible. Chandrasekaran and Tamir [14] observed that Shier's
duality holds when the problems are defined with respect to a subset
of the tree. For the case where this subset is a finite collection
of demand points, their result is an instance of the duality relationship
we will present in this chapter, as applied to the unweighted
linear case.
At this point we give a brief overview of the chapter. In Section
2, we define the (nonlinear) p-center problem and a dual "dispersion"
problem. We state and prove a weak duality theorem applicable
to all networks, and state a strong duality theorem applicable to
tree networks. In Section 3 we give a physical interpretation
of the dual dispersion problem. In Section 4 we study a covering
problem and present an algorithm, COVER, for solving it. The covering
algorithm provides the basis of our solution procedure to the p-center
problem as well as the dual dispersion problem and yields a constructive
approach for proving the strong duality theorem. In Section 5 we
present an algorithm, OPTKLIQUE, which provides a constructive proof
of the strong duality theorem, while solving the dual problem. Additional
results for the covering problem, including a "divergence"
problem dual to the cover problem, are given in Section 6.
2.2 Problem Statements and Duality
We suppose given a finite undirected tree network with positive
arc lengths and denote by T an imbedding of the given network having
as edges rectifiable arcs. For any two points x,y ∈ T, let d(x,y)
denote the shortest path distance between x and y.
Let J ≡ {1,...,n} and denote by V ≡ {v_1,...,v_n} (V ⊆ T) a collection
of distinct vertex locations of "demand points" or "existing
facilities." Let X = {x_1,...,x_p} (X ⊆ T) denote a finite collection
of "centers" or "new facilities." For j ∈ J, define the distance of v_j
to its nearest center by D(X,v_j) = min{d(x_i,v_j): 1 ≤ i ≤ p}, and let
δ_j ≡ max{d(x,v_j): x ∈ T}. Also, for j ∈ J, we assume given a real valued
function f_j, continuous and strictly increasing, with domain [0,δ_j]
and (clearly) range [f_j(0),f_j(δ_j)]. For X ⊆ T, |X| < ∞, we define
the function f by

    f(X) = max{f_j(D(X,v_j)): j ∈ J} .

The Primal p-Center Problem is as follows: Find a p-center X*
for which

    r ≡ f(X*) = min{f(X): X ⊆ T, |X| = p} .        (2.2.1)

As discussed in Dearing and Francis [19], due to compactness of
T and continuity of d(x,·) on T for each fixed x ∈ T, an optimal solution
X* to (2.2.1) exists and is contained in the convex hull of V.
With α and η defined by α = max{f_j(0): j ∈ J} and η = min{f_j(δ_j):
j ∈ J}, we shall assume α < η, for if α = f_s(0) > f_t(δ_t) = η, say, then
the function f_t would always be dominated by (strictly smaller than)
f_s and hence f_t could be deleted from the definition of f without
changing f. Further, we assume p ≤ n-1, as otherwise the p-center
problem is trivial.
So as to state the dual problem, we define β_jk = β_kj for j,k ∈ J by

    β_jk = min{max{f_j(d(x,v_j)), f_k(d(x,v_k))}: x ∈ T} .

For j,k ∈ J with j ≤ k we define a_jk ≡ max{f_j(0), f_k(0)} and
b_jk ≡ min{f_j(δ_j), f_k(δ_k)}. We note that α < η implies [a_jk,b_jk] ≠ ∅.
The following lemma, the results of which are proven in [29], provides
a closed form expression for β_jk.
Lemma 2.2.1. For any j,k ∈ J with j ≤ k we have:

(i) The function f_j^{-1} + f_k^{-1} exists, is strictly increasing, continuous,
has domain [a_jk,b_jk] ≠ ∅, and range [L_jk,U_jk], where
L_jk = (f_j^{-1} + f_k^{-1})∘(a_jk) and U_jk = (f_j^{-1} + f_k^{-1})∘(b_jk).

(ii) d(v_j,v_k) ≤ U_jk.

(iii) The function (f_j^{-1} + f_k^{-1})^{-1} exists, is strictly increasing and
continuous, has domain [L_jk,U_jk] and range [a_jk,b_jk].

(iv) β_jk = (f_j^{-1} + f_k^{-1})^{-1}∘(max{d(v_j,v_k), L_jk}) .

We remark that either β_jk = a_jk or β_jk = (f_j^{-1} + f_k^{-1})^{-1}∘(d(v_j,v_k));
β_jk ∈ [a_jk,b_jk], and β_jj = f_j(0). The closed form expression for β_jk
given in Lemma 2.2.1 facilitates construction of the dual problem.
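Numerically, Lemma 2.2.1(iv) reduces computing β_jk to a single function inversion: find the r at which f_j^{-1}(r) + f_k^{-1}(r) equals max{d(v_j,v_k), L_jk}. A small sketch using bisection follows; the inverse functions and the cap b_jk passed in are illustrative assumptions, and the linear case checks against the known closed form.

```python
# Numerical sketch of Lemma 2.2.1(iv): beta_jk inverts the strictly
# increasing map r -> f_j^{-1}(r) + f_k^{-1}(r) at max{d(v_j,v_k), L_jk}.
# Requires f_j^{-1}(b_jk) + f_k^{-1}(b_jk) >= that target, which
# Lemma 2.2.1(ii) guarantees when b_jk is the true upper domain endpoint.
def beta_jk(fj_inv, fk_inv, a_jk, b_jk, d_vj_vk, tol=1e-9):
    s = lambda r: fj_inv(r) + fk_inv(r)   # (f_j^{-1} + f_k^{-1})(r)
    y = max(d_vj_vk, s(a_jk))             # L_jk = s(a_jk)
    lo, hi = a_jk, b_jk
    while hi - lo > tol:                  # bisection for s(r) = y
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if s(mid) < y else (lo, mid)
    return (lo + hi) / 2.0

# linear check f_j(y) = w_j*y: beta_jk = d * w_j*w_k / (w_j + w_k)
wj, wk, d = 2.0, 3.0, 5.0
b = beta_jk(lambda r: r / wj, lambda r: r / wk, 0.0, 100.0, d)
print(round(b, 6))   # 6.0
```
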
We define the dual objective function g on subsets of V as follows:
For any K ⊂ V with |K| ≥ 2,

    g(K) ≡ max{g_1(K), g_2(K)} ,

    g_1(K) ≡ min{β_ij: v_i,v_j ∈ K, i ≠ j} ,

    g_2(K) ≡ max{f_j(0): v_j ∈ K} .
The Dual Dispersion Problem is as follows: Find a subset K* of
V such that

    g(K*) = max{g(K): K ⊂ V, |K| = p+1} .    (2.2.2)

We remark that the dispersion problem is meaningfully defined for
2 ≤ p+1 ≤ n. The primal p-center problem is trivial for p ≥ n. Hence,
we shall restrict p to 1 ≤ p ≤ n-1.
In what follows in this section, we prove a Weak Duality Theorem
(W.D.T.) and state a Strong Duality Theorem (S.D.T.) (proven in
Section 5). At the end of this section, we give an example problem
illustrating definitions and results.
In the W.D.T. we shall use the fact (readily proven as in [18]
or [29]) that α ≤ f(X) for any X ⊂ T, |X| < ∞.
Theorem 2.2.1. (Weak Duality Theorem). Assume 1 ≤ p ≤ n-1. For any
X ⊂ T with |X| = p, and any K ⊂ V with |K| = p+1, we have f(X) ≥ g(K).

Proof. There are two cases: g(K) ≤ α or g(K) > α. In the former
case we have g(K) ≤ α ≤ f(X). In the latter case, we note that
g(K) = g_1(K) > α ≥ g_2(K). Since |X| = p < p+1 = |K|, at least two
demand points in K must be served by a single center. In other words,
for some v_s, v_t ∈ K with s ≠ t, and some center x ∈ X, we have

    f_s[D(X,v_s)] = f_s[d(x,v_s)] ≤ f(X) ,
    f_t[D(X,v_t)] = f_t[d(x,v_t)] ≤ f(X) .    (2.2.3)

Using the definitions and the inequalities in (2.2.3), we have
g(K) = g_1(K) ≤ β_st ≤ max{f_s[d(x,v_s)], f_t[d(x,v_t)]} ≤ f(X).
Remark 2.2.1. We note that the conditions |X| = p and |K| = p+1 can
be replaced by |X| ≤ p and/or |K| ≥ p+1, respectively, and the proof
will still apply. Furthermore, the proof applies to any network, as
no special properties of tree networks are used.
We now state the S.D.T. We remark that the S.D.T. requires the
assumption of a tree network. In effect, network cycles may create a
"duality gap."
Theorem 2.2.2. (Strong Duality Theorem). For any p, 1 ≤ p ≤ n-1,
there exists an X* ⊂ T with |X*| = p and K* ⊂ V with |K*| = p+1 such
that f(X*) = g(K*).
It is evident from the W.D.T. that X* solves the primal p-center
problem and K* solves the dual dispersion problem.
Before presenting an example problem, we find it convenient to
view the dual problem as defined on "cliques" of a complete graph G.
We define G to be the undirected complete graph with node set J,
where node j of G represents vertex v_j of T. To any arc (i,j) of
G, i ≠ j, we assign the length β_ij, and, to any node j of G, we assign
the node weight β_jj = f_j(0). We call any complete subgraph K of G a
clique. We note that any nonempty subset of V induces a clique in G
and vice versa. For this reason, an equivalent definition of g(·) on
cliques of G can be given by defining g_1(K) to be the length of a
smallest arc in a clique K of G, g_2(K) to be the maximum of the
weights of nodes in K, and letting g(K) = max{g_1(K), g_2(K)}. If the
number of nodes of a clique K is known to be q, we call K a q-clique
and (sometimes) write K_q. Defining C_q(G) to be the collection of all
q-cliques of G, an equivalent statement of (2.2.2) is as follows:
Find a clique K*_{p+1} for which

    g(K*_{p+1}) = max{g(K): K ∈ C_{p+1}(G)} .

Whether K refers to a subset of V or a clique of G, we prefer to
call K a clique as long as it is clear from the context what K
refers to.
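On a small dual graph the dispersion problem (2.2.2) can be solved by direct enumeration of the (p+1)-cliques. The sketch below does exactly that; the arc lengths β_ij and node weights are small made-up numbers, not the dissertation's example data.

```python
# Brute-force sketch of the dual dispersion problem (2.2.2) on the
# complete dual graph G: maximize g(K) over all (p+1)-cliques.
from itertools import combinations

beta = {(1, 2): 2.0, (1, 3): 5.0, (1, 4): 3.0,
        (2, 3): 4.0, (2, 4): 6.0, (3, 4): 1.0}   # arc lengths beta_ij
weight = {1: 0.5, 2: 0.0, 3: 1.0, 4: 0.0}        # node weights f_j(0)

def g(K):
    g1 = min(beta[e] for e in combinations(sorted(K), 2))  # smallest arc
    g2 = max(weight[j] for j in K)                         # heaviest node
    return max(g1, g2)

p = 2
g_star = max(g(K) for K in combinations(weight, p + 1))
print(g_star)   # 2.0, attained e.g. by the clique {1, 2, 3}
```
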
As an example of the nonlinear p-center problem, suppose that the
function associated with node v_j is f_j(y) = w_j(y + h_j)^θ for y ∈ [0,δ_j],
where w_j, h_j, and θ are given parameters. Appropriate restrictions
are placed on the parameters to ensure that the f_j are strictly
increasing on [0,δ_j]. We note that the linear weighted p-center problem
is a special case of this problem generated by choosing θ = 1, h_j = 0,
and w_j > 0 for all j.
J
For the given form of f., the following are readily verified:
J
1 1/1
f. (r) = (r/w.) h., r [f.(0), f.(6.)]
f (r) + f (r) = r /[/w) + (1/w.) ] (h. + h.) ,
1i 1j 3
r e [aij, bj] ,
1 1 1 w 0 6
(f. + f ) o(y) = j (y + h + h.)
i j 1/0 1/0 6
[w. + w. ]
1 J
y c [Lij, Uij]
Then, using the characterization of β_ij as given in Lemma 2.2.1,
we have

    β_ij = d_ij                   if L_ij ≤ d(v_i,v_j) ,
    β_ij = max{f_i(0), f_j(0)}    if L_ij > d(v_i,v_j) ,    (2.2.4)

where

    γ = w_i w_j / (w_i^{1/θ} + w_j^{1/θ})^θ   and   d_ij = γ[d(v_i,v_j) + h_i + h_j]^θ .
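The closed form (2.2.4) is short enough to transcribe directly. The sketch below does so for the power functions f_j(y) = w_j(y + h_j)^θ; the parameter values used in the checks are illustrative assumptions, and the θ = 1, h = 0 case verifies the reduction to the linear weighted form.

```python
# Sketch of the closed form (2.2.4) for f_j(y) = w_j*(y + h_j)**theta.
def beta_ij(w_i, w_j, h_i, h_j, theta, dist):
    f0 = lambda w, h: w * h**theta                     # f_j(0)
    f_inv = lambda r, w, h: (r / w)**(1 / theta) - h   # f_j^{-1}(r)
    a_ij = max(f0(w_i, h_i), f0(w_j, h_j))
    L_ij = f_inv(a_ij, w_i, h_i) + f_inv(a_ij, w_j, h_j)
    if L_ij <= dist:
        gamma = (w_i * w_j) / (w_i**(1 / theta) + w_j**(1 / theta))**theta
        return gamma * (dist + h_i + h_j)**theta       # the d_ij branch
    return a_ij                                        # max{f_i(0), f_j(0)}

# theta = 1, h = 0 reduces to the linear form dist * w_i*w_j/(w_i + w_j)
print(beta_ij(2.0, 3.0, 0.0, 0.0, 1.0, 5.0))   # 6.0
```

For θ = 2, h_i = h_j = 0, w_i = w_j = 1 and dist = 4, the formula gives γ = 1/4 and β_ij = 4, which agrees with a center at the midpoint of the path (each distance 2, loss 2² = 4).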
Consider the tree network shown in Figure 2.1, where the numbers
on the arcs represent arc lengths. The data given with Figure 2.1
correspond to the parameters for j = 1,...,6 where, clearly, each f_j is
strictly increasing. Using (2.2.4), the β_ij values for this problem
are shown in Table 2.1 along with the node weights f_j(0). Figure 2.2
shows the dual graph G associated with the problem, where the number
next to each node j is the node weight and the number on the arc between
nodes i and j is β_ij. Using Figure 2.2 it can be verified that the
optimal cliques (specified here by their nodes) and associated g
values are K*_2 = {3,4}, g(K*_2) = 13829.76; K*_3 = {1,3,6}, g(K*_3) = 3600;
K*_4 = {1,3,5,6}, g(K*_4) = 1664.64; K*_5 = {1,3,4,5,6}, g(K*_5) = 784; and
K*_6 = {1,2,3,4,5,6}, g(K*_6) = 225. Due to the duality theory, it then
follows that the r_p for p = 1,...,5 are, respectively, 13829.76, 3600,
1664.64, 784, and 225.
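These values can be checked by brute force: by the S.D.T., r_p equals the optimal dual objective max{g(K): |K| = p+1}. The sketch below runs this check with the β_ij and f_j(0) values as read from Table 2.1.

```python
# Brute-force duality check for the example: r_p = max{g(K): |K| = p+1},
# with beta_ij and node weights f_j(0) taken from Table 2.1.
from itertools import combinations

beta = {(1, 2): 225, (1, 3): 3600, (1, 4): 3600, (1, 5): 3600,
        (1, 6): 4356, (2, 3): 3600, (2, 4): 3600, (2, 5): 3600,
        (2, 6): 4556.25, (3, 4): 13829.76, (3, 5): 8464,
        (3, 6): 11664, (4, 5): 900, (4, 6): 784, (5, 6): 1664.64}
f0 = {1: 0, 2: 0, 3: 64, 4: 0, 5: 0, 6: 144}

def g(K):
    g1 = min(beta[e] for e in combinations(sorted(K), 2))
    return max(g1, max(f0[j] for j in K))

r = {p: max(g(K) for K in combinations(f0, p + 1)) for p in range(1, 6)}
print(r)   # {1: 13829.76, 2: 3600, 3: 1664.64, 4: 784, 5: 225}
```
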
2.3 Dual Problem Interpretation
We imagine two conservative adversaries, an aggressor A and a
defender D. Defender D has defense forces placed at vertex locations
v_1,...,v_n. Aggressor A will attack a single vertex in V. Although D
knows A will attack a vertex, he will not know the vertex attacked
until the attack occurs.

Defender D has p response forces which he must position at
locations defined by a p-center X. Interpret tree distances to be travel
times, so that D(X,v_j) is the minimum time to respond to v_j from a
Figure 2.1. Example Nonlinear p-Center Problem. (Tree network on
v_1,...,v_6 with arc lengths and the parameter data w_j, h_j, θ
defining f_j(y) = w_j(y + h_j)^θ.)
Table 2.1. β_ij Values and Node Weights for Example

    β_ij     j=2     j=3     j=4        j=5     j=6
    i=1      225     3600    3600       3600    4356
    i=2              3600    3600       3600    4556.25
    i=3                      13829.76   8464    11664
    i=4                                 900     784
    i=5                                         1664.64

    j        1       2       3       4       5       6
    f_j(0)   0       0       64      0       0       144
Figure 2.2. Dual Graph for Example. (Node weights f_j(0) and arc
lengths β_ij as in Table 2.1.)
center in X. Assume A and D know functions f_1,...,f_n so that
f_j(D(X,v_j)) is D's loss if A attacks v_j and D responds to the attack
in a time of D(X,v_j). For convenience, we refer to the loss A
inflicts on D as A's gain.
Aggressor A knows D has p response forces, but does not know how
D will position his response forces. Thus A acts conservatively and
bases his decision on a worst case analysis. If A decides to attack
v_j without threatening any other vertices, A reasons that D will
correctly guess v_j is to be attacked and will position a response force
at v_j. Hence A assumes his gain will be f_j(0), if he decides to
immediately attack v_j without a prior threatening strategy. In order
to gain more, A concludes that he must threaten, i.e., pretend to
attack, q vertices, q > 1, so that even if D knows which q vertices
are threatened, D does not know which vertex A will attack until the
attack occurs. Thus D is forced to respond to the threat by positioning
his response forces optimally with respect to these q vertices.
Hence if A threatens K_q ⊂ V, he assumes D will choose a p-center X
which minimizes f(X:K_q) ≡ max{f_j(D(X,v_j)): v_j ∈ K_q}. Thus, with
q ≤ p, A assumes D knows K_q and will position a response force at
every vertex in K_q, so that A can gain at most g_2(K_q). The best A
can do in this case is to choose a K_q which contains some vertex v_s
for which f_s(0) = α. Hence, if q ≤ p, A's maximum possible gain is
at most f_s(0). (Parenthetically, we remark that if f_s(0) = r_p,
p < n, then it can be shown that not all f_j(0) have the same value.
If all f_j(0) do have the same value, then r_p > α.) On the other hand,
if A chooses a subset K_q with q > p, D is unable to position a response
force at every vertex in K_q even if he knows K_q, so A will gain at
least g_2(K_q). Hence A observes if he chooses some K_q with q > p which
contains a vertex v_s for which α = f_s(0), then his gain is at least
α = g_2(K_q). However, A recognizes that there may be some other K_q
with q > p, which may or may not contain v_s, but which yields him a
gain strictly greater than α. For this reason A restricts himself to
those subsets of V with cardinality greater than p and realizes that
if he chooses some K_q with q > p, then there is at least one pair of
vertices in K_q which D can cover by only a single response force. If
v_i and v_j are one such pair in K_q which are covered only by a single
response force, say at x, then clearly A obtains a gain of at least
β_ij, as β_ij = min{max(f_i(d(x,v_i)), f_j(d(x,v_j))): x ∈ T} ≤
max{f_i(d(x,v_i)), f_j(d(x,v_j))}. Since A does not know which pairs of
vertices D will cover by single response forces, once he chooses K_q,
A acts conservatively, and assumes that D will cover a pair v_a,v_b ∈ K_q
for which β_ab = min{β_ij: v_i,v_j ∈ K_q, i ≠ j}. That is, by choosing a
K_q with q > p, A guarantees himself a gain of at least β_ab = g_1(K_q).
Hence A's minimum gain due to threatening K_q is g(K_q) = max{g_1(K_q),
g_2(K_q)}, so A chooses a K*_q with q > p which maximizes g(K_q) over all
K_q ⊂ V with q > p.
The question arises as to why A should choose p+1 vertices to
threaten, and no more. By virtue of the W.D.T. and the remark following
it, if X* is an optimum p-center then f(X*) ≥ g(K_q) for all K_q
with q ≥ p+1. Thus r_p = f(X*) is an upper bound on A's gain due to
threatening K_q. But the S.D.T. implies there is a (p+1)-clique, say
K*_{p+1}, which attains this upper bound. Hence A need threaten no more
than p+1 vertices to maximize his gain, as A cannot obtain any
additional gain by threatening more than p+1 vertices.
There is also the possibility that A will make a false threat,
that is, attack a vertex not among the ones he threatens. If D
believes the threat is false and continues to act conservatively, he
will simply choose a p-center X* to minimize f. But since there exists
a (p+1)-clique K*_{p+1} such that g(K*_{p+1}) = f(X*), the greatest loss D can
incur, given X*, is the same as if he believes A's optimal threat to
be real, and acts accordingly. Hence A cannot gain more by making a
false threat.
2.4 Covering Algorithm
In this section we study a covering problem, and present an
algorithm for solving it. Our primary interest in the algorithm is
the fact that it provides a constructive approach for proving results
about the primal and dual problem. For this reason we purposely keep
the algorithm simple, and use an analog string model to provide insight
into the algorithm. The development of both the string model and the
algorithm is motivated by an earlier string algorithm given in [32].
As in [32], an equivalent algebraic version of the algorithm is
readily obtainable. We remark that two other quite efficient
algorithms [14], [15], exist for solving the covering problem, but they
do not lend themselves readily to our needs.
At this point we state the Covering Problem: Given r and the
function f, compute

    q(r) = min{|X|: f(X) ≤ r, X ⊂ T} .    (2.4.1)

It is readily seen that the covering problem has a feasible solution
if and only if α ≤ r. Further, with J(r) ≡ {j: r < f_j(δ_j)}, we shall
assume J(r) ≠ ∅, for if J(r) = ∅ then the condition f(X) ≤ r holds
for all X ⊂ T and we (trivially) have q(r) = 1.
The above assumptions permit the following equivalent statement
of the covering problem:

    minimize |X|
    subject to D(X,v_j) ≤ f_j^{-1}(r) ,  j ∈ J(r) .    (2.4.2)
We refer to the covering algorithm as COVER. In order to state
COVER a few definitions are convenient. We may imagine that the tree
is represented appropriately by inscribing straight line segments on a
planar surface such that each segment represents an arc. We fasten
strings of length f_j^{-1}(r) to each node v_j, j ∈ J(r), of the inscribed
tree, where, by convention, we allow strings of zero length. Every
fastened string has one end permanently affixed to the planar surface.
In addition, during the use of the algorithm we engage previously
fastened strings at various points on the tree. When a string is
engaged, some point of the string is permanently affixed to the tree
such that there is no slack in the portion of the string so far
engaged. When strings are removed, we imagine that they are physically
deleted from the string model.

During each iteration of the procedure, we partition the original
tree into two subsets: one green, the other brown. The green subset
is always a tree, denoted as GT (for green tree), while the brown
subset consists of one or more subtrees of the original tree T, each of
which is "rooted" at a node of the green tree. By convention, a root
node t will be in both GT and the associated brown subtree, denoted
as BT(t).
COVER

0) Initialize to GT = T, k = 0. For every tip vertex v_j of T define
BT(v_j) = {v_j}. For every j ∈ J(r) fasten a string of length f_j^{-1}(r)
at v_j. Define U_0 = ∅.

1) Choose a tip t of GT. If GT = {t} go to 6). Else find a(t), the
vertex in GT adjacent to t.

2) If no strings are engaged or fastened at t, remove from GT the
subarc [t,a(t)] joining t and a(t), attach [t,a(t)] to BT(t), and go
to 1). Else go to 3).

3) Pull all strings at t tight towards a(t). If all tight strings
reach a(t) then engage them at a(t), remove [t,a(t)] from GT, attach
[t,a(t)] to BT(t), and go to 1). Else go to 4).

4) Add 1 to k. Choose a shortest string engaged or fastened at t.
Find the (unique) vertex, say v(k), at which the shortest string is
fastened. Construct U_k = U_{k-1} ∪ {v(k)}. Find the farthest point, say
y, from t on [t,a(t)] to which the shortest string can reach. Locate
x_k at y. Assign all strings at t to x_k and remove these strings.
Attach [t,y] to BT(t) to obtain BT(x_k), and remove [t,y] from GT.
Go to 5).

5) Assign to x_k all other strings in GT which can reach x_k, and
remove all such strings. If no strings remain then let U = U_k and stop.
Else return to 1).

6) Add 1 to k. Locate x_k at t. Assign all strings at t to x_k. Of
the strings at t choose any one, and find the vertex v(k) to which
the chosen string is fastened. Let U = U_{k-1} ∪ {v(k)}, and stop.
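The string model translates directly into a short algebraic procedure: each active string is summarized by the vertex it is currently engaged at and its remaining free length. The sketch below is a simplified rendering under stated assumptions (centers are reported as triples (t, a(t), offset), tips are taken in arbitrary order, and the test tree is hypothetical); it is not the algebraic version of [32].

```python
# Algebraic sketch of COVER.  Demand j in J(r) carries a string of
# length s_j = f_j^{-1}(r); at[j] is the vertex the string is engaged
# at and slack[j] its remaining free length.
def all_pairs(adj):
    dist = {}
    for s in adj:                       # one traversal per source (tree)
        d, stack = {s: 0.0}, [s]
        while stack:
            u = stack.pop()
            for v, w in adj[u].items():
                if v not in d:
                    d[v] = d[u] + w
                    stack.append(v)
        dist[s] = d
    return dist

def cover(adj, radii):
    adj = {u: dict(nb) for u, nb in adj.items()}   # mutable green tree
    dist = all_pairs(adj)
    slack, at = dict(radii), {j: j for j in radii}
    green, centers = set(adj), []
    while slack:
        t = next(u for u in green if len(adj[u]) <= 1)  # a tip of GT
        if len(green) == 1:             # step 6: last vertex standing
            centers.append((t, t, 0.0))
            break
        (a, L), = adj[t].items()        # a(t) and the arc length
        here = [j for j in slack if at[j] == t]
        if not here or min(slack[j] for j in here) >= L:
            for j in here:              # steps 2-3: strings reach a(t)
                slack[j] -= L
                at[j] = a
        else:                           # steps 4-5: shortest string ends
            m = min(slack[j] for j in here)
            centers.append((t, a, m))   # x_k at offset m from t
            for j in list(slack):       # remove every string reaching x_k
                d = min(dist[at[j]][t] + m, dist[at[j]][a] + L - m)
                if d <= slack[j] + 1e-12:
                    del slack[j], at[j]
        del adj[t][a], adj[a][t]        # color [t, a(t)] brown
        green.remove(t)
    return centers

# path v1-v2-v3-v4 with unit arcs: radii 0.5 at v1, v4 need two centers;
# radii 2.0 need one
path = {1: {2: 1.0}, 2: {1: 1.0, 3: 1.0}, 3: {2: 1.0, 4: 1.0}, 4: {3: 1.0}}
print(len(cover(path, {1: 0.5, 4: 0.5})), len(cover(path, {1: 2.0, 4: 2.0})))
```

Coloring the whole subarc brown once a center is placed on it is harmless here because strings are only ever engaged at vertices, so the uncovered remainder of the arc carries no information.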
Note that each time COVER places a center at x_k in step 4) it
identifies an associated vertex v(k) which we call the distinguished
vertex associated with x_k. When centers x_1,...,x_k have been placed
in step 4), we call U_k = {v(1),...,v(k)} the distinguished set
associated with {x_1,...,x_k}. If the algorithm places q centers in
total, then the set U defined by the algorithm consists of vertices
v(1),...,v(q), the first q-1 of which are distinguished vertices
(when q ≥ 2). The last vertex is distinguished only if x_q is placed
in step 4). Letting X = {x_1,...,x_q}, we call U the primary set
associated with X, and call v(i) the primary vertex associated with
x_i, i = 1,...,q. We note that the primary vertices v(1),...,v(q) are
distinct, for as soon as a primary vertex is identified, its string
is removed, and thus the vertex is not available for any subsequent
identification. Likewise the centers x_1,...,x_q are distinct, for if
x_i = x_j with i < j, then all strings assigned to x_j would have been
assigned earlier to x_i, and so x_j would not have been located. Hence
it follows that |U| = |X| = q, and U ≠ ∅, since |X| ≥ 1. The primary
vertices will be of theoretical significance in proving our results.
We now establish some properties of COVER.
Property 2.4.1. COVER finds a feasible solution X to the covering
problem with |X| ≤ n.

Proof. We first note that termination is clearly finite, since at
each iteration either at least one string is removed, or some entire
arc of T becomes colored brown. Since there are at most n strings
initially, it follows that the X constructed satisfies |X| ≤ n.
Choose any v_j, j ∈ J(r), and denote by x_(j) the center to which
v_j is assigned. Since the string fastened at v_j reaches x_(j),
d(x_(j),v_j) ≤ f_j^{-1}(r). As D(X,v_j) ≤ d(x_(j),v_j) it follows that X is
a feasible solution.
Property 2.4.2. For any nonempty distinguished set U_k, with vertices
numbered so that U_k = {v_1,...,v_k}, we have

    v_j ∈ BT(x_j), 1 ≤ j ≤ k ,    (2.4.3)

    d(x_j,v_j) = f_j^{-1}(r), 1 ≤ j ≤ k .    (2.4.4)

Proof. Expression (2.4.3) is obvious. To show (2.4.4), choose any v_j
in U_k. Let t be the tip vertex chosen at the first of the iteration
in which x_j is placed. The algorithm causes the string at v_j to be
pulled tight along every edge connecting v_j to t, and to be pulled
tight along [t,x_j], with the string end point coinciding with x_j.
Thus d(v_j,t) + d(t,x_j) = f_j^{-1}(r). But v_j ∈ BT(t) and x_j ∈ T-BT(t) or
x_j = t, so that d(v_j,t) + d(t,x_j) = d(v_j,x_j). Thus, (2.4.4) follows.
Property 2.4.3. Let X = {x_1,...,x_q} be the feasible solution
constructed by COVER, with vertices numbered so that U = {v_1,...,v_q} is
the primary set associated with X. Assume q > 1. Then

    d(v_i,v_j) > f_i^{-1}(r) + f_j^{-1}(r) for 1 ≤ i < j ≤ q .    (2.4.5)

Proof. We know the first q-1 members of U are distinguished vertices.
Hence Property 2.4.2 implies

    v_i ∈ BT(x_i), 1 ≤ i ≤ q-1 ,    (2.4.6)

    d(v_i,x_i) = f_i^{-1}(r), 1 ≤ i ≤ q-1 .    (2.4.7)

For i < j, x_i is placed prior to x_j. Since v_j is assigned to x_j and
not to x_i, for 1 ≤ i < j ≤ q, v_j was not in BT(x_i), and the string at
v_j did not reach x_i. Hence

    v_j ∈ T-BT(x_i), 1 ≤ i < j ≤ q ,    (2.4.8)

    d(x_i,v_j) > f_j^{-1}(r), 1 ≤ i < j ≤ q .    (2.4.9)

But (2.4.6) and (2.4.8) give d(v_i,v_j) = d(v_i,x_i) + d(x_i,v_j) for
1 ≤ i < j ≤ q, from which, on using (2.4.7) and (2.4.9), (2.4.5)
follows.
We shall need the following remark, proven in [32]:

Remark 2.4.1. Given any a_i,a_j ∈ T and s_i,s_j ≥ 0, there exists a point
x in T for which d(x,a_i) ≤ s_i and d(x,a_j) ≤ s_j if and only if
d(a_i,a_j) ≤ s_i + s_j.
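Any witness point x may be taken on the (unique) path joining a_i and a_j, so the remark reduces to an interval-intersection check along that path. A one-line sketch (the concrete numbers are illustrative):

```python
# Remark 2.4.1 on the path from a_i to a_j, parameterized by the
# distance x from a_i: a point with d(x,a_i) <= s_i and d(x,a_j) <= s_j
# exists iff [0, s_i] meets [d - s_j, d], i.e. iff d <= s_i + s_j.
def meeting_point_exists(d, s_i, s_j):
    return max(0.0, d - s_j) <= min(d, s_i)

print(meeting_point_exists(5, 2, 3), meeting_point_exists(5, 2, 2.9))
# True False
```
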
We are now ready to establish the optimality of COVER.

Theorem 2.4.1. Given any r for which α ≤ r and J(r) ≠ ∅, COVER solves
the covering problem.

Proof. Let X = {x_1,...,x_q} be the point set found by COVER. Property
2.4.1 implies X is feasible to the problem. If q = 1, X is clearly
optimal. If q > 1, let the vertices be numbered so that U = {v_1,...,v_q}
is a primary set associated with X. By Property 2.4.3, d(v_i,v_j) >
f_i^{-1}(r) + f_j^{-1}(r), for 1 ≤ i < j ≤ q. Remark 2.4.1 implies there exists
no x in T for which d(x,v_i) ≤ f_i^{-1}(r) and d(x,v_j) ≤ f_j^{-1}(r) for any
i, j in {1,...,q} ⊂ J(r) with i < j. Hence it is impossible to cover
any two members of U with a common center. Thus, since |U| = q, any
feasible solution X' to the covering problem satisfies |X'| ≥ q. Since
q = |X| and X is feasible to the problem, X is thus an optimum feasible
solution.
We remark that the covering problem may be of as much interest,
from both a theoretical and applications point of view, as the p-center
problem. In Section 6, we will present a problem which is dual to the
covering problem and show that the primary set identified by COVER
solves the dual of the covering problem. Furthermore we will
characterize q(r) as a step function, and provide a formula for q(r)
assuming that r_p is known for 1 ≤ p ≤ n-1.
2.5 Dual Problem Solution and the Strong Duality Theorem

Based on the W.D.T. and properties of COVER we now present a
proof of the S.D.T. The proof is constructive in that we use an
algorithm called OPTKLIQUE which, given the optimal objective value
of the primal problem, constructs an optimal solution to the dual
problem. We then show that the objective values of the pair of
problems are equal. As a byproduct the proof also establishes that
r_p ∈ R, where, for convenience, we define R ≡ {β_ij: 1 ≤ i ≤ j ≤ n}.

We find it useful to summarize Theorem 2.4.1 and Property 2.4.3
as follows:

Lemma 2.5.1. Given any r for which α ≤ r and J(r) ≠ ∅, the following
assertions are true:

(a) COVER finds an optimum solution X to the covering problem with
q(r) = |X|.

(b) Whenever q = q(r) > 1, any primary set U = {v(1),...,v(q)}
associated with X satisfies

    g(U) = g_1(U) > r .    (2.5.1)
Proof. (a) is just Theorem 2.4.1.

(b) From Property 2.4.3, for any v_i,v_j ∈ U, i ≠ j, we have d(v_i,v_j) >
f_i^{-1}(r) + f_j^{-1}(r) ≥ f_i^{-1}(a_ij) + f_j^{-1}(a_ij), where r ≥ α ≥ a_ij. Thus,
d(v_i,v_j) is in the domain of (f_i^{-1} + f_j^{-1})^{-1}, from which, upon using
Lemma 2.2.1 and the definitions of g, g_1, and g_2, (2.5.1) follows.
In the algorithm OPTKLIQUE we assume that r_p is given for some
value of p, 1 ≤ p ≤ n-1. OPTKLIQUE constructs an optimal solution to
the associated dual problem.

OPTKLIQUE

1) If r_p = α, take K*_{p+1} to be any (p+1)-clique in V containing a
vertex v_s for which f_s(0) = α, and go to 3). Else, given r_p > α,
compute r' = max{β_ij ∈ R: β_ij < r_p} and choose any r for which
r' < r < r_p. Go to 2).

2) Apply COVER with the chosen value of r to find an optimum solution
X and its associated primary set U, with |X| = q = |U|. Note r < r_p
implies |X| > p, so q ≥ p+1. Take K*_{p+1} to be any subset of U
consisting of p+1 members of U. Go to 3). (If q > p+1, there will be
alternative optimal cliques.)

3) If K*_{p+1} is any clique found in either step 1) or 2), then g(K*_{p+1}) =
r_p and the W.D.T. guarantees K*_{p+1} is an optimum solution to the dual
problem.

Before proving the correctness of the algorithm, we note, since
α = β_hh for some h, that α < r_p implies α ≤ r', and thus the r chosen
in step 2) is one for which a feasible solution exists to the covering
problem.
Theorem 2.5.1. Given r_p for any p, 1 ≤ p ≤ n-1, the clique K*_{p+1}
constructed by OPTKLIQUE satisfies

    g(K*_{p+1}) = r_p .    (2.5.2)

Furthermore, K*_{p+1} solves the dual dispersion problem.

Proof. Let X* be an optimum p-center solution to the primal problem
so that |X*| = p and f(X*) = r_p. Since r_p ≥ α we consider the cases
r_p = α and r_p > α. Let us apply OPTKLIQUE for each case.

For r_p = α, K*_{p+1} is chosen in step 1) so that |K*_{p+1}| = p+1 and
α = f_s(0) = g_2(K*_{p+1}). The W.D.T. gives g(K*_{p+1}) ≤ f(X*). But then,
α = g_2(K*_{p+1}) ≤ g(K*_{p+1}) ≤ f(X*) = r_p = α, establishing (2.5.2) for
this case.
For r_p > α, define R̄ ≡ {β_ij ∈ R: r_p ≤ β_ij} ⊂ R. Since r_p > r > r',
there exists no β_ij in R for which r < β_ij < r_p. Thus β_ij > r implies
β_ij ≥ r_p, and so it follows that

    R̄ = {β_ij: r < β_ij} .    (2.5.3)

Let U be the primary set identified by COVER for the chosen r,
r' < r < r_p. By Lemma 2.5.1, U satisfies g_1(U) > r, from which it
follows that β_ij > r for v_i,v_j ∈ U, i ≠ j. Hence, (2.5.3) implies

    β_ij ∈ R̄, v_i,v_j ∈ U, i ≠ j .    (2.5.4)

Since |U| ≥ p+1, let K*_{p+1} be that subset of U identified in step 2).
We have the following string of inequalities:

    r_p = f(X*) ≥ g(K*_{p+1})    (2.5.5)
        ≥ g_1(K*_{p+1})    (2.5.6)
        = min{β_ij: v_i,v_j ∈ K*_{p+1}, i ≠ j}    (2.5.7)
        ≥ min{β_ij: v_i,v_j ∈ U, i ≠ j}    (2.5.8)
        ≥ min{β_ij: β_ij ∈ R̄}    (2.5.9)
        ≥ r_p    (2.5.10)

where (2.5.5) follows from the W.D.T., (2.5.6) and (2.5.7) follow
from the definitions of g and g_1, (2.5.8) follows from K*_{p+1} ⊂ U,
(2.5.9) follows from (2.5.4), and (2.5.10) follows from the definition
of R̄. Hence, every inequality holds as an equality, establishing
(2.5.2) for this case.

The assertion that K*_{p+1} solves the dual problem is immediate from
f(X*) = g(K*_{p+1}) and the W.D.T.
We note that Theorem 2.5.1 provides a proof of the S.D.T. since in
the statement of the S.D.T. we take X* to be an optimum p-center
solution to the primal problem and K*_{p+1} as constructed by OPTKLIQUE. We
also note that the duality theory provides necessary and sufficient
conditions for a p-center to be optimal, which, as far as we know, are
the first such conditions for this problem.
We remark, just as with the linear p-center problem, that if we
define β_st = min{β_ij: β_ij ∈ R, q(β_ij) ≤ p}, then β_st = r_p. Clearly
q(r_p) ≤ p and q(β_st) ≤ p. The S.D.T. implies r_p ∈ R, and thus the
definition of β_st gives β_st ≤ r_p. Let p' = q(β_st) and let X_{p'} solve
the cover problem for r = β_st so that f(X_{p'}) ≤ β_st. Since p ≥ p',
append to X_{p'} (if necessary) any p-p' center locations to obtain the
p-center X_p. Clearly D(X_p,v_j) ≤ D(X_{p'},v_j) for v_j ∈ V, and thus
f(X_p) ≤ f(X_{p'}). Hence r_p ≤ f(X_p) ≤ f(X_{p'}) ≤ β_st ≤ r_p, so β_st = r_p
and X_p is an optimum solution to the p-center problem. This remark
permits the use of the same procedures as discussed in [65] to compute
r_p efficiently, by performing a binary search over the (ordered) list
R, applying COVER for every r chosen from R until a smallest β_st in R
is found for which COVER finds p or less points. Once r_p is computed
in this manner, OPTKLIQUE requires an additional application of COVER
for any r, r' < r < r_p, and solves the dual dispersion problem. This
approach is essentially a primal approach for solving both problems.
An alternative approach, which works directly with what would be a
subgraph of our dual graph G, is given by Chandrasekaran and Tamir [13]
for the unweighted linear p-center problem. Due to absence of weights
and addends, their approach does not require the use of node weights
(and for that matter the function g_2) in the dual graph. For a given
value of r, Chandrasekaran and Tamir define an intersection graph IG_r
with node set J and arcs (i,j) for those indices i,j ∈ J for which
β_ij ≤ r. Their procedure is based on a graph theoretic procedure given
by Gavril [39] and solves the covering problem by finding a minimum
clique cover of IG_r (minimum number of cliques such that every node is
in at least one clique). As a side result, their approach identifies a
maximal anticlique in IG_r (a maximal set of nodes in IG_r no two of
which are connected with an arc). Due to "chordal" properties of IG_r
as discussed in [39], the cardinality of a minimum clique cover of IG_r
is equal to the cardinality of a maximal anticlique in IG_r. This
result is a special instance of the duality result we will present in
Section 6 for the cover problem, as applied to the linear unweighted
case. Furthermore, for r = r_p, Chandrasekaran and Tamir [13] proved a
duality relationship for the unweighted p-center problem using the
above properties of IG_r. We remark that their duality results can be
directly proven by using the algorithm OPTKLIQUE, and by appropriately
specializing our S.D.T. for the linear unweighted case.
We now demonstrate the use of OPTKLIQUE by determining K*_4 for
the example problem. From our previous analysis, r_3 = 1664.64. Since
r_3 > α = 144, we compute (from Table 2.1) r' = max{β_ij ∈ R: β_ij < r_3} = 900.
We next must apply COVER using a value of r where 900 < r < 1664.64.
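Step 1) of this computation is a one-liner. The sketch below lists the off-diagonal β_ij values as read from Table 2.1 and recovers r' = 900; the choice r = 1296 used with COVER in Figure 2.3 indeed satisfies r' < r < r_3.

```python
# Step 1) of OPTKLIQUE at p = 3 for the example: r' is the largest
# member of R strictly below r_3 = 1664.64.
R = [225, 3600, 3600, 3600, 4356, 3600, 3600, 3600, 4556.25,
     13829.76, 8464, 11664, 900, 784, 1664.64]   # off-diagonal beta_ij
r_3 = 1664.64
r_prime = max(b for b in R if b < r_3)
print(r_prime)   # 900; r = 1296 of Figure 2.3 lies in (900, 1664.64)
```

Including the diagonal entries β_jj = f_j(0) (0, 0, 64, 0, 0, 144) would not change r', since all are below 900.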
Figure 2.3 shows the results of using COVER with r = 1296. In the
figure, the loose ends of the strings are shown as wavy lines. Brown
subtrees are shown as crosshatched arcs of the original tree. Each
separate drawing of the tree (a)-g)) is for a subsequent iteration of
COVER. Figure 2.3a) demonstrates the initialization step, where for
r = 1296, the f_j^{-1}(r), j = 1,...,6, are 12, 7.2, 7, 6, 18, and 8,
respectively. The numbers next to the strings are the lengths of the
loose ends. In the figure, we indicate which tip of the green tree
is chosen at each return to step 1) of COVER. In addition, the
successive distinguished vertex sets U_k are indicated.

After the final iteration, we note that the primary vertex set
U is {v_3,v_1,v_6,v_5} which, from our previous analysis, we know to be
K*_4.
2.6 Results for the Covering Problem

In this section we present a "divergence" problem which is dual
to the covering problem. We give a weak duality and a strong duality
result and prove that the primary set identified by COVER solves the
dual problem. The term "divergence" is chosen to represent the
physical interpretation, discussed later, in which the attacker A
chooses a "divergent" set of vertices to threaten. Further, the term
permits a distinction to be made between the two different dual
problems. Also, in this section, we demonstrate how having optimum
solutions to the p-center problem for all p, 1 ≤ p ≤ n, enables us to
completely characterize the function q(r).
Figure 2.3. OPTKLIQUE for p = 3 for Example. (Successive iterations
a)-g) of COVER with r = 1296, showing the chosen tips and the
distinguished sets U_2 = {v_3,v_1}, U_3 = {v_3,v_1,v_6},
U_4 = {v_3,v_1,v_6,v_5}.)
The Divergence Problem is as follows: Given r and the function
g, compute

    q̄(r) ≡ max{|U|: g(U) > r, U ⊂ V} .    (2.6.1)

That is, the problem is to find the maximum number of existing
facilities no two of which can be jointly covered by a single center
within a radius of r. Equivalently, among all cliques of G whose gain
is larger than r, the problem is to find one with the maximum number of
nodes. The dual problem is feasible for r < r_1, as, if r ≥ r_1, there
does not exist a subset U of V for which g(U) > r. On the other hand,
the primal cover problem is feasible for r ≥ α. Hence, we shall
restrict r to α ≤ r < r_1 in order to ensure feasibility of both
problems.
Theorem 2.6.1. (Weak Duality Theorem). Assume α ≤ r < r_1. For any
feasible solution X to the primal cover problem, and any feasible
solution U to the dual divergence problem, we have |X| ≥ |U|.

Proof. By feasibility of U and the assumption of the theorem we have
g(U) = g_1(U) > r ≥ α ≥ g_2(U), from which it follows that

    β_ij > r, v_i,v_j ∈ U, i ≠ j .    (2.6.2)

Suppose |X| < |U|. Then, the same approach as in the proof of Theorem
2.2.1 implies there exist v_s,v_t ∈ U, s ≠ t, such that β_st ≤ f(X) ≤ r,
contradicting at least one inequality in (2.6.2). Thus, |X| ≥ |U|.
Theorem 2.6.2. (Strong Duality Theorem). Assume α ≤ r < r_1. Let X
be a feasible solution to the covering problem constructed by COVER.
Then, the primary set U associated with X solves the dual divergence
problem with

    |X| = q(r) = q̄(r) = |U| .    (2.6.3)

Proof. By definition of a primary set we have |X| = |U|. By assumption
r < r_1, so that |X| = |U| ≥ 2. Lemma 2.5.1 implies g(U) = g_1(U) > r.
Hence U is a feasible solution to the dual problem. Theorem 2.6.1
implies q(r) ≥ q̄(r). By feasibility of X and U, and the fact that
|X| = |U|, we have |X| ≥ q(r) ≥ q̄(r) ≥ |U| = |X|. It follows that
X solves the cover problem, U solves the dual problem, and (2.6.3)
holds.

We remark that the above proof is an alternative to the proof of
Theorem 2.4.1 for establishing the optimality of X to the covering
problem. Hence, an application of COVER solves both problems
simultaneously.
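For the worked example, the divergence problem at r = 1296 (the radius used with COVER in Figure 2.3) can be checked by enumeration against the Table 2.1 data: the largest U with g(U) > r has |U| = 4, matching the four centers COVER places and attained, e.g., by the primary set {v_1,v_3,v_5,v_6}.

```python
# Brute-force check of the divergence problem (2.6.1) at r = 1296 for
# the example, using the beta_ij and f_j(0) values of Table 2.1.
from itertools import combinations

beta = {(1, 2): 225, (1, 3): 3600, (1, 4): 3600, (1, 5): 3600,
        (1, 6): 4356, (2, 3): 3600, (2, 4): 3600, (2, 5): 3600,
        (2, 6): 4556.25, (3, 4): 13829.76, (3, 5): 8464,
        (3, 6): 11664, (4, 5): 900, (4, 6): 784, (5, 6): 1664.64}
f0 = {1: 0, 2: 0, 3: 64, 4: 0, 5: 0, 6: 144}

def g(U):
    g1 = min(beta[e] for e in combinations(sorted(U), 2))
    return max(g1, max(f0[j] for j in U))

r = 1296
qbar = max(len(U) for q in range(2, 7)
           for U in combinations(f0, q) if g(U) > r)
print(qbar)   # 4
```
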
At this point we give an interpretation of the pair of problems.
The defender D specifies an upper bound r on his loss against an attack
on any vertex and will position response forces as necessary so that
his loss will not exceed r. Each response force is an "expense" for
D. Hence, D's problem is to choose the fewest possible response
forces. The attacker A knows that D will not tolerate a loss exceeding
r. Hence, A recognizes that, no matter how many vertices he threatens,
D will have a sufficiently large number of response forces to respond,
and that the loss A inflicts on D will always be less than or equal
to r. For this reason, A decides that he should not (hopelessly) try
to inflict a loss on D exceeding r, and that, instead, he should force
D into using as many of his response forces as possible. Hence,
should A choose a subset U of V with g(U) > r, he knows that no two
vertices in U can be jointly covered by a single response force of D
within the specified upper bound r. Thus, D, not tolerating a loss
exceeding r, will have to allocate one response force for every vertex
in U. In total, any feasible X which D chooses will satisfy |X| ≥ |U|,
which is what the W.D.T. asserts. By virtue of the S.D.T., if U is
A's optimal choice, D can choose exactly |U| response forces positioned
at, say, X, with |X| = |U|, and still respond to an attack on any vertex
in U (as well as in V-U) without incurring a loss exceeding r. If A
threatens more than q̄(r) = |U| vertices, say, a subset U' of V, then
|U'| > q̄(r) implies g(U') ≤ r (infeasibility). Thus, D would not be
forced into allocating a single response force for every member of U'.
In fact, even if A threatens every vertex in V, then D still needs
exactly q(r) = q̄(r) = |U| response forces to respond to the threat
feasibly. Thus, if each threat is an "expense" for A, he need threaten
no more than q̄(r) vertices. On the other hand, D adopts an optimal
strategy against A's best threat by minimizing the number of response
forces with respect to V.
Continuing our consideration of the covering problem, we now reverse
the usual procedure, and view the p-center problem as a device for
solving the covering problem for all values of r for which the
covering problem is feasible, that is, for a ≤ r.
The following lemma is the key to using the p-center problem to
solve the covering problem. Define r_0 = ∞ for convenience.
Lemma 2.6.1. Let p ∈ J. If r_p < r_{p-1}, then

q(r) = p for r_p ≤ r < r_{p-1}.
Proof. We first note r_n ≤ r_{n-1} ≤ ... ≤ r_1 < r_0. Also, clearly,
q(r_p) ≤ p for p ∈ J. Now for r_1 ≤ r, since q is nonincreasing we
have 1 ≥ q(r_1) ≥ q(r) ≥ 1, establishing the claim if p = 1. Consider
the case p ∈ {2,...,n}. From r_p ≤ r < r_{p-1} we have p ≥ q(r_p) ≥
q(r) ≥ q(r_{p-1}). Suppose q(r) = s, with s < p, implying s ≤ p-1.
Let X, with |X| = s, solve the cover problem for r. We then have
f(X) ≤ r < r_{p-1} ≤ r_s, contradicting the definition of r_s. Thus
q(r) = p for r_p ≤ r < r_{p-1}.
It now follows, if we define the set

P = {(p-1, p): p ∈ {2,...,n}, r_p < r_{p-1}},

that

q(r) = p for r_p ≤ r < r_{p-1}, (p-1, p) ∈ P,
                                                  (2.6.4)
q(r) = 1 for r_1 ≤ r.

The formula (2.6.4) completely defines the function q(r), since
r_n = a, and the cover problem is feasible if and only if a ≤ r.
Hence, if we
solve the p-center problem for all p and compute r_1,...,r_n, then we
have an explicit formula for q(r), and we see that the r_p completely
define the function q. For example, if r_6 = r_5 < r_4 = r_3 < r_2 = r_1,
then q(r) = 5 for r_5 ≤ r < r_4, q(r) = 3 for r_3 ≤ r < r_2, and
q(r) = 1 for r_1 ≤ r. Also, the proof of the lemma does not require
the assumption that the location network is a tree. Thus the formula
for q(r) is still valid if the location network has cycles.
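Formula (2.6.4) lends itself to a direct computation: given the p-center
radii r_1 ≥ r_2 ≥ ... ≥ r_n, q(r) is simply the smallest p with r_p ≤ r.
The sketch below (the function name and the radius values are ours, chosen
to match the example just given) assumes the radii have already been
obtained by solving the p-center problem for each p.

```python
def q_of_r(r, radii):
    """q(r): minimum number of centers for which the covering problem
    with upper bound r is feasible.  radii[p-1] = r_p, nonincreasing
    in p, as produced by solving the p-center problem for p = 1,...,n.
    Returns None when r < r_n, i.e. the cover problem is infeasible."""
    if r < radii[-1]:
        return None
    # Formula (2.6.4): q(r) is the smallest p with r_p <= r.
    for p, rp in enumerate(radii, start=1):
        if rp <= r:
            return p

# The example from the text, r6 = r5 < r4 = r3 < r2 = r1,
# with hypothetical radius values 9, 9, 6, 6, 3, 3:
radii = [9, 9, 6, 6, 3, 3]
```

For these radii, q(r) jumps only at the distinct values r_5 = 3,
r_3 = 6, and r_1 = 9, exactly as the pairs in the set P predict.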
CHAPTER 3
A VECTOR-MINIMIZATION PROBLEM ON A TREE NETWORK
3.1 Introduction
We consider a vector-minimization problem on a tree network which
involves as objectives the distances between specified pairs of new
facilities and specified pairs of new and existing facilities. In many
location problems, especially in the public sector, it may be necessary
to build a number of public facilities which are to be shared by a number
of communities. If the optimizers cannot agree on a single objective
function, the analyst is faced with the problem of locating the
facilities in such a manner that all parties are satisfied with the end
result. In such a case, the optimizers can agree to rule out "dominated"
solutions and consider only "efficient" solutions.
The related literature on multiobjective location problems is
discussed in Chapter 1 under Multiobjective Location Problems on
Networks. Here, we concentrate on characterizing efficient solutions
to the vector-minimization problem of interest. We relate efficient
solutions to a distance constraints problem studied by Francis, Lowe,
and Ratliff [32]. Extensions of results in [32] are given by Francis,
Lowe, and Tansel [33]. We use the theory developed in [32] and [33]
to establish the necessary and sufficient conditions for efficient
location vectors (parenthetically, we remark that the results we proved
in [33] are also given in our Dissertation Proposal defended on June 8,
1979).
At this point, we give an overview of the chapter. In Section 2,
necessary definitions and notation are given and the vector-minimization
problem of interest is defined. In Section 3, we relate the
problem to distance constraints, give a number of related properties
of distance constraints, and establish the necessary and sufficient
conditions for a location vector to be efficient. In Section 4, we
provide examples of efficient and nonefficient location vectors.
Section 5 is devoted to a further refinement and simplification of one
of the necessary and sufficient conditions, namely, "the convex hull
property." In Section 6, we provide an algorithm, SEVCA, which
constructs an efficient solution from a given location vector. In
Section 7, we characterize efficient solutions for the analogous problem
in the p-dimensional Euclidean space with rectilinear (p = 2) or
Tchebychev (p ≥ 2) distances.
3.2 Problem Statement
We suppose given a finite, undirected tree network, and denote
by T an imbedding of the given network. Let V : {v ,...,v } be a set
of n distinct vertices of T. We assume existing facility i is located
at vertex vi, i E {l,...,n}. For j e {1,...,m}, denote by x. a point
to be determined in T as the location of new facility j. We define Tm
to be the mfold Cartesian product of T by itself and define a location
vector X in Tm to be the ordered mtuple (x,,...,x ) with each x e T,
j {1,...,m}. Sometimes, we refer to a location vector X in Tm as a
point in Tm
As in [22], given points x,y ∈ T, we define the line L(x,y) to be
the union of all points in the shortest path connecting x and y. In
addition, given a finite point set P ⊆ T, we define the convex hull
H(P) to be the smallest (embedded) subtree of T containing all points
in P. We note that for any two points p,p' ∈ P, the line L(p,p') is
contained in H(P).
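Restricted to the vertices of T, both L(x,y) and H(P) are straightforward
to compute. The sketch below (function names are ours, not from [22])
represents the tree by an adjacency dict and uses the fact that the
smallest subtree containing P is the union of the paths from one point of
P to all the others; points of T interior to arcs are ignored here.

```python
from collections import deque

def tree_path(adj, x, y):
    """Vertex sequence of the unique path L(x, y) in a tree, where adj
    is an adjacency dict; found by BFS from x and backtracking parents."""
    parent = {x: None}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        if u == y:
            break
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    path, u = [], y
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]

def hull(adj, pts):
    """Vertices of the convex hull H(P): the smallest subtree containing
    pts, obtained as the union of the paths from pts[0] to every other
    point of P."""
    verts = set()
    for p in pts:
        verts.update(tree_path(adj, pts[0], p))
    return verts

# A small tree: the path 1 - 2 - 3, with 4 - 5 branching off at 2.
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
```

For this tree, hull(adj, [3, 5]) is {2, 3, 4, 5}, and the line
L(3, 5) = tree_path(adj, 3, 5) is contained in it, as the text notes.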
We denote by I_C the set of pairs (i,j) for which the distance
d(x_i, v_j) is of concern. Similarly, I_B is the set of pairs (j,k) for
which the distance d(x_j, x_k) is of concern. We remark that it need not
be the case that I_C includes all possible pairs of new and existing
facility indices, nor that I_B includes all possible pairs of new facility
indices. With these definitions, the problem of interest is to "minimize"
each of the distances specified by (3.2.1):

d(x_i, v_j),  (i,j) ∈ I_C,
                                        (3.2.1)
d(x_j, x_k),  (j,k) ∈ I_B.
For X ∈ T^m, we denote by D(X) the vector each of whose components
is a distance specified by (3.2.1). The vector is formed by assuming
any convenient ordering of the members of I_C and I_B. The vector-
minimization (Vmin) problem of interest is

Vmin{D(X): X ∈ T^m}.                    (3.2.2)
With respect to (3.2.2), a location vector Z ∈ T^m is said to
dominate a location vector X in T^m if D(Z) ≤ D(X) and D(Z) ≠ D(X).
A location vector Z which is not dominated by any other location vector
is said to be efficient. An equivalent definition of efficiency is as
follows: Z ∈ T^m is efficient if and only if X ∈ T^m and D(X) ≤ D(Z)
imply D(X) = D(Z).
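Dominance and efficiency are componentwise comparisons of distance
vectors. A minimal sketch (names are ours) which checks efficiency only
against a finite list of candidate distance vectors, whereas the
definition above quantifies over all of T^m:

```python
def dominates(DZ, DX):
    """Z dominates X when D(Z) <= D(X) componentwise and D(Z) != D(X)."""
    return all(a <= b for a, b in zip(DZ, DX)) and DZ != DX

def is_efficient(DZ, others):
    """Z is efficient among the given candidates iff no candidate's
    distance vector dominates D(Z)."""
    return not any(dominates(D, DZ) for D in others)
```

Note that two distance vectors may be incomparable (neither dominates
the other), which is why several efficient location vectors can coexist.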
Our main interest is to characterize efficient location vectors
and devise an algorithm for constructing efficient location vectors
from a given (dominated) location vector.
3.3 Distance Constraints and Characterization
of Efficient Points
We make extensive use of the results obtained in [32, 33] for
distance constraints to establish the necessary and sufficient
conditions for efficient points. The Distance Constraints (DC) are
defined in [32] (independent of the efficiency problem) as follows:
Given the sets I_C and I_B and nonnegative upper bounds c_ij and b_jk,
find a point X = (x_1,...,x_m) in T^m, if it exists, such that

d(x_i, v_j) ≤ c_ij,  (i,j) ∈ I_C,
                                        (3.3.1)
d(x_j, x_k) ≤ b_jk,  (j,k) ∈ I_B.
Corresponding to DC, we define Graph BC (GBC) as the undirected
graph having nodes E_1,...,E_n, N_1,...,N_m; for every (j,k) ∈ I_B
there is an arc (N_j, N_k) of length b_jk between nodes N_j and N_k;
for every (i,j) ∈ I_C, there is an arc (N_i, E_j) of length c_ij
between nodes N_i and E_j. We further assume that the sets I_B and I_C
are such that GBC is connected, as otherwise DC decomposes into
independent sets of constraints which may be analyzed separately.
Given a node-path between any two nodes f_p and f_q in GBC, we
denote the path by P(f_p, f_q) and denote the length of the path by
LP(f_p, f_q). We define L(f_p, f_q) to be the length of any shortest
path in GBC between nodes f_p and f_q. Subsequently, unless we specify
otherwise, it should be understood that any path we refer to is a
simple path between some two existing facility nodes E_p and E_q.
Results on Distance Constraints
The distance constraints are said to be consistent if there exists
at least one feasible solution to (3.3.1).
The following result is established in [32].
Theorem 3.3.1. The distance constraints are consistent if and only if

d(v_p, v_q) ≤ L(E_p, E_q),  1 ≤ p < q ≤ n.        (3.3.2)
The inequalities (3.3.2) are termed the Separation Conditions
[32], since each term on the right specifies an upper bound on how
separate two existing facility locations can be. Except when stated
otherwise, we assume throughout the chapter that the separation
conditions hold, and thus (equivalently) DC is consistent.
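Since L(E_p, E_q) is an ordinary shortest-path length in GBC, the
separation conditions can be checked with one all-pairs shortest-path
computation. A sketch (names and encoding are ours), assuming GBC is
given with 0-based node indices, N_1,...,N_m first and then E_1,...,E_n,
and that the tree distances d(v_p, v_q) are supplied:

```python
import itertools

def separation_holds(m, n, b, c, d):
    """Check the separation conditions (3.3.2): d(v_p, v_q) <= L(E_p, E_q)
    for every pair of existing facilities, where L is the shortest-path
    length in GBC.  Nodes 0..m-1 are N_1..N_m, nodes m..m+n-1 are
    E_1..E_n; b[(j, k)] is the length of arc (N_j, N_k), c[(i, j)] that
    of arc (N_i, E_j), and d[p][q] the tree distance between v_p, v_q."""
    size = m + n
    INF = float("inf")
    L = [[0.0 if u == v else INF for v in range(size)] for u in range(size)]
    for (j, k), w in b.items():
        L[j][k] = L[k][j] = min(L[j][k], w)
    for (i, j), w in c.items():
        L[i][m + j] = L[m + j][i] = min(L[i][m + j], w)
    # Floyd-Warshall all-pairs shortest paths on GBC.
    for t, u, v in itertools.product(range(size), repeat=3):
        if L[u][t] + L[t][v] < L[u][v]:
            L[u][v] = L[u][t] + L[t][v]
    return all(d[p][q] <= L[m + p][m + q]
               for p in range(n) for q in range(p + 1, n))

# One new facility linked to two existing ones: L(E_1, E_2) = 2 + 3 = 5.
b, c = {}, {(0, 0): 2.0, (0, 1): 3.0}
```

With d(v_1, v_2) = 4 the conditions hold; with d(v_1, v_2) = 6 the two
existing facilities are too separate and DC is inconsistent.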
We call a path P(E_p, E_q) between E_p and E_q in GBC a tight path
if LP(E_p, E_q) = d(v_p, v_q). We note that since we assume DC is
consistent, it necessarily follows, if P(E_p, E_q) is a tight path,
that LP(E_p, E_q) = L(E_p, E_q). Any path P(E_p, E_q) for which
LP(E_p, E_q) > d(v_p, v_q) is called a slack path.
We say that new facility i is in a tight path if there exists at
least one tight path containing N_i. Every path containing N_i is
slack if there is no tight path which contains N_i.
The motivation for the above terminology is due to a string graph
representation of GBC. This string graph is also useful for obtaining
problem insights. When the string graph is placed upon the tree T,
i.e., the strings lie only on arcs of T, a path is tight when it is
necessary to pull the string graph taut in order to place the knots
representing E_p and E_q on v_p and v_q, respectively, while a path is
slack if the string path must literally be slack when the two knots
are placed to coincide with v_p and v_q.
A priori, one might think that the occurrence of a tight path
would be rare. However, we shall see that tight paths occur in a
quite natural way when the separation conditions are used in the
analysis of efficient location vectors. Further, the notion of tight
paths permits the specification of necessary and sufficient conditions
for DC to have a unique solution.
We now relate unique locations to tight paths. By definition,
new facility i is uniquely located if it has the same location in every
feasible solution to DC. Since we later refer to a collection of
facilities, which contains possibly both existing and new facilities,
being uniquely located, we note that existing facilities are uniquely
located by definition.
Theorem 3.3.2, which we proved in [33], specifies the necessary
and sufficient conditions for a new facility to be uniquely located.
Theorem 3.3.2. New facility k is uniquely located if and only if node
N_k lies in at least one tight path P(E_p, E_q).
Corollary 3.3.2. Distance constraints have a unique solution if and
only if node N_k lies on at least one tight path in GBC for k = 1,...,m.
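Theorem 3.3.2 yields a computable test: when DC is consistent, N_k lies
on a tight path exactly when some pair E_p, E_q satisfies
L(E_p, N_k) + L(N_k, E_q) = d(v_p, v_q), since a shortest path through
N_k of that length can be taken simple. A sketch (names and encoding
are ours), with 0-based nodes, N_1,...,N_m first and then E_1,...,E_n:

```python
import itertools

def on_tight_path(m, n, arcs, d):
    """For each new-facility node N_k, decide whether it lies on at
    least one tight path (Theorem 3.3.2), assuming DC is consistent.
    arcs is a list of (u, v, length) triples over nodes 0..m+n-1, and
    d[p][q] is the tree distance between v_p and v_q."""
    size = m + n
    INF = float("inf")
    L = [[0.0 if u == v else INF for v in range(size)] for u in range(size)]
    for u, v, w in arcs:
        L[u][v] = L[v][u] = min(L[u][v], w)
    # Floyd-Warshall all-pairs shortest paths on GBC.
    for t, u, v in itertools.product(range(size), repeat=3):
        if L[u][t] + L[t][v] < L[u][v]:
            L[u][v] = L[u][t] + L[t][v]
    # N_k is on a tight path iff L(E_p, N_k) + L(N_k, E_q) = d(v_p, v_q)
    # for some pair of existing-facility nodes.
    return [any(L[m + p][k] + L[k][m + q] == d[p][q]
                for p in range(n) for q in range(n) if p != q)
            for k in range(m)]

# The path E_1 - N_1 - N_2 - E_2 of length 1 + 2 + 1 = 4 in GBC.
arcs = [(2, 0, 1.0), (0, 1, 2.0), (1, 3, 1.0)]
```

When d(v_1, v_2) = 4 the single path is tight and both new facilities
are uniquely located; when d(v_1, v_2) = 3 every path is slack and
neither is.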
We now give an additional property of tight paths, which we proved in
[33]. The property will be used in proving our main result on efficient
points.
Property 3.3.1. If P(E_p, E_q) is a tight path in GBC, then
(i) every facility represented by a node in P(E_p, E_q) is uniquely
located,
(ii) the locations of facilities corresponding to nodes in P(E_p, E_q)
occur with the same ordering and spacing on the line L(v_p, v_q) in
T as do the corresponding nodes in P(E_p, E_q).
As an illustration of Property 3.3.1, suppose P(E_1, E_5) is a tight
path with nodes E_1, N_2, N_3, E_5. Then, the locations v_1, x_2, x_3,
v_5 are unique. Furthermore, they occur in the given order on the line
L(v_1, v_5) with d(v_1, x_2) = c_21, d(x_2, x_3) = b_23, d(x_3, v_5) =
c_35, where c_21, b_23, c_35 are the lengths of the arcs in the path.
This example is illustrated in Figure 3.1.
Figure 3.1. Illustration of Property 3.3.1: the tight path P(E_1, E_5)
in GBC with arc lengths c_21, b_23, c_35, and the corresponding
locations v_1, x_2, x_3, v_5 on the line L(v_1, v_5) in T.
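With hypothetical arc lengths, the ordering and spacing asserted by
Property 3.3.1 reduce to arithmetic along the line L(v_1, v_5):

```python
# Hypothetical arc lengths along the tight path E_1 - N_2 - N_3 - E_5.
c21, b23, c35 = 2.0, 3.0, 1.0

# Tightness: the path length equals the tree distance d(v_1, v_5).
d_v1_v5 = c21 + b23 + c35

# Positions of v_1, x_2, x_3, v_5 measured along L(v_1, v_5) from v_1:
pos = {"v1": 0.0, "x2": c21, "x3": c21 + b23, "v5": d_v1_v5}
```

The consecutive gaps reproduce the arc lengths, so the locations occur
in the same order and with the same spacing as the path's nodes.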
We now consider the problem of determining when an arc lies on a
tight path. As an arc lies on a tight path if and only if it is not
the case that all paths containing the arc are slack, we consider the
equivalent problem of determining when an arc lies only on slack paths.
The following property, which we proved in [33], characterizes the
conditions under which an arc in GBC is not contained in any tight path.
Property 3.3.2. Let DC be consistent. Let (f_i, f_j) be any arc in GBC,
of positive length e_ij, whose length is reduced by some positive amount
ε. Let DC_ε (GBC_ε) be the distance constraints (graph) obtained from
DC (GBC) by replacing e_ij by e_ij - ε.
(a) Every path containing (f_i, f_j) in GBC is slack if and only if ε
can be chosen (with ε > 0) so that DC_ε is consistent.
(b) Whenever every path containing (f_i, f_j) is slack, ε can be chosen
(with ε > 0) so that DC_ε is consistent and at least one of the
following is true:
(i) at least one path in GBC_ε containing (f_i, f_j) is tight;
(ii) the length of (f_i, f_j) in GBC_ε can be reduced to zero.
Finally, we will use the following lemma proven in [33].
Lemma 3.3.1. Given points a,b ∈ T, suppose that d(a,b) = α + β.
Then, the inequalities d(x,a) ≤ α, d(x,b) ≤ β are consistent if and
only if they have a unique solution and the inequalities hold as
equalities.
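Lemma 3.3.1 can be checked directly on a line segment, the simplest
tree; the values of α and β below are hypothetical:

```python
alpha, beta = 2.0, 3.0
a, b = 0.0, alpha + beta   # points on a segment with d(a, b) = alpha + beta

# d(x, a) <= alpha confines x to [a - alpha, a + alpha], and
# d(x, b) <= beta confines x to [b - beta, b + beta]; intersect them:
lo = max(a - alpha, b - beta)
hi = min(a + alpha, b + beta)
```

The intersection collapses to the single point x = a + alpha = b - beta,
at which both inequalities hold as equalities, as the lemma asserts.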
Necessary and Sufficient Conditions for Efficiency
Given a location vector Z, we let U = D(Z) and define the distance
constraints of interest by D(X) ≤ U, where the entries in U define the
b_jk and c_ij by b_jk = d(z_j, z_k) for (j,k) ∈ I_B and c_ij = d(z_i, v_j)
for (i,j) ∈ I_C. We use the b_jk and c_ij to define GBC in the customary
manner. As before, we may assume GBC is connected, for otherwise the
problem of finding efficient location vectors decomposes into
independent subproblems. Further, we note that DC is always consistent,
as Z is certainly feasible to DC, and hence, by Theorem 3.3.1, the
separation conditions are always satisfied. For convenience, for any
location vector Z, we denote by A*_i(Z) the collection of locations of
uniquely located facilities whose nodes are adjacent to N_i in GBC. We
denote by H[A*_i(Z)] the convex hull of A*_i(Z), the imbedding of the
smallest subtree of T spanning all the elements of A*_i(Z).
With the above definitions we can present a family of equivalent
conditions for a location vector Z to be efficient.
Theorem 3.3.3. Given a location vector Z used to define DC and GBC,
the following are equivalent:
(a) Z is efficient;
(b) each N_i is in at least one tight path in GBC;
(c) Z is the unique solution to DC;
(d) z_i ∈ H[A*_i(Z)] for i = 1,...,m.
Proof. The equivalence of (b) and (c) is a direct consequence of
Theorem 3.3.2 and the fact that Z is always a feasible solution to
DC, while (c) clearly implies (a). To show (a) implies (c), suppose
Z is not the unique solution to DC. Color every new facility node
in GBC which is not contained in any tight path blue. Color all the
other (new or existing facility) nodes red. Equivalence of (b) and
(c) implies every blue node represents a new facility which is not
uniquely located, while every red node represents a (new or existing)
facility which is uniquely located. By assumption there is at
least one blue node. By connectedness of GBC, there is at least
one arc which connects some blue colored node, say, N_p, to some red
colored node, say, F_q. Furthermore, arc (N_p, F_q) has positive