NONLINEAR APPROXIMATION TECHNIQUES TO SOLVE NETWORK
FLOW PROBLEMS WITH NONLINEAR ARC COST FUNCTIONS
By
ARTYOM NAHAPETYAN
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2006
Copyright 2006
by
Artyom Nahapetyan
I dedicate this work to my parents, Nina and Garush Nahapetyan,
who always supported me in my studies.
ACKNOWLEDGMENTS
I would like to thank my chair and cochair, Prof. Siriphong Lawphongpanich
and Prof. Donald W. Hearn, for their valuable advice, support and guidance during
my studies. Our meetings and discussions were always very helpful.
Also I would like to express my sincere gratitude to the committee members
Prof. Panos Pardalos, Prof. William Hager, and Prof. Ravindra Ahuja for their
encouragement. Especially, I am grateful to Prof. Panos Pardalos for his valuable
suggestions and advice on the supply chain problems I have worked on.
The tremendous support from my parents is invaluable, and there are no words
to express my appreciation for that.
Finally, I would like to thank all my friends and collaborators who made my
studies enjoyable and productive.
TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

2 A BILINEAR REDUCTION BASED ALGORITHM FOR CONCAVE PIECEWISE LINEAR NETWORK FLOW PROBLEMS
2.1 Introduction to the Chapter
2.2 A Bilinear Reduction Technique for the Concave Piecewise Linear Network Flow Problem
2.3 Concave Piecewise Linear Problems with Separable Objective Functions
2.4 Dynamic Cost Updating Procedure
2.5 On the Dynamic Slope Scaling Procedure
2.6 Numerical Experiments
2.7 Concluding Remarks

3 ADAPTIVE DYNAMIC COST UPDATING PROCEDURE FOR SOLVING FIXED CHARGE NETWORK FLOW PROBLEMS
3.1 Introduction to the Chapter
3.2 Approximation of the Fixed Charge Network Flow Problem by a Two-Piece Linear Concave Network Flow Problem
3.3 Adaptive Dynamic Cost Updating Procedure
3.4 On the Dynamic Slope Scaling Procedure
3.5 Numerical Experiments
3.6 Concluding Remarks

4 A BILINEAR REDUCTION BASED ALGORITHM FOR SOLVING CAPACITATED MULTI-ITEM DYNAMIC PRICING PROBLEMS
4.1 Introduction to the Chapter
4.2 Problem Description
4.3 A Bilinear Reduction Based Algorithm for Solving the CMIDP Problem
4.4 Numerical Experiments
4.5 Concluding Remarks

5 DISCRETE-TIME DYNAMIC TRAFFIC ASSIGNMENT MODELS WITH PERIODIC PLANNING HORIZON: SYSTEM OPTIMUM
5.1 Introduction to the Chapter
5.2 Periodic Planning Horizon
5.3 Discrete-Time Dynamic Traffic Assignment Problem with Periodic Time Horizon
5.4 Bounds for the DTDTA Problem
5.5 Numerical Experiments
5.6 Concluding Remarks

6 A NONLINEAR APPROXIMATION BASED HEURISTIC ALGORITHM FOR THE UPPER-BOUND PROBLEM
6.1 Introduction to the Chapter
6.2 Nonlinear Relaxation of the DTDTA-U Problem
6.3 Nonlinear Relaxation Based Heuristic Algorithm
6.4 Numerical Experiments
6.5 Concluding Remarks

7 A DYNAMIC TOLL PRICING FRAMEWORK FOR DISCRETE-TIME DYNAMIC TRAFFIC ASSIGNMENT MODELS
7.1 Introduction to the Chapter
7.2 The Reduced Time-Expanded Network and UE Solution
7.3 The Dynamic Toll Set
7.4 Dynamic Toll Pricing Problems
7.5 Illustrative Examples
7.6 Concluding Remarks

8 DIRECTIONS OF FUTURE RESEARCH

APPENDIX

A COMPUTATIONAL RESULTS FOR CHAPTER 2

B COMPUTATIONAL RESULTS FOR CHAPTER 3

C COMPUTATIONAL RESULTS FOR CHAPTER 4

D COMPUTATIONAL RESULTS FOR CHAPTER 6

E COMPUTATIONAL RESULTS FOR CHAPTER 7

REFERENCES

BIOGRAPHICAL SKETCH
LIST OF TABLES

5-1 Demand patterns
5-2 Optimal solutions to the two-arc problem
5-3 Solutions from the lower- and upper-bound problems: linear travel cost function
5-4 Solutions from the lower- and upper-bound problems: quadratic travel cost function
5-5 Quality of refined upper- and lower-bound solutions: linear travel cost function
5-6 Quality of refined upper- and lower-bound solutions: quadratic travel cost function
6-1 Equivalent objective functions
6-2 Distributions of parameters of randomly generated travel time functions
7-1 Additional constraints
7-2 Distributions of parameters of randomly generated travel time functions
A-1 Set of problems
A-2 Computational results for sets 1-18: quality of the solution and the CPU times
A-3 Computational results for sets 1-18: DSSP vs. DCUP
A-4 Computational results for sets 19-30
A-5 Computational results for the combined mode
B-1 Set of problems
B-2 Computational results for groups G1 and G2: quality of the solutions and the CPU times
B-3 Computational results for groups G1 and G2: the percentage of problems where one of the algorithms finds a better solution than the other
B-4 Computational results for groups G3 and G4
C-1 The quality of the solution: Procedure 6
C-2 The quality of the solution: Procedure 5
C-3 The CPU time of the procedures
D-1 Computational results of the experiments
D-2 Computational results of the combined mode
E-1 The total collected toll and the total cost for each problem and parameter ε
E-2 The number of toll collecting centers for each problem and parameter ε
LIST OF FIGURES

3-1 Approximation of the function fa(xa)
3-2 The φa(xa) and θa(xa) functions
4-1 The price and the revenue functions
5-1 Linear versus circular intervals
5-2 Events occurring in two consecutive planning horizons
5-3 Three-node network
5-4 Time expansion of arc (1, 2) at t =
5-5 Time expansion of the three-node network
5-6 θa(xa(t)) versus xa(t)
5-7 Two-arc network
5-8 Four-node network
6-1 Two feasible solutions
7-1 Four-node network and traffic demand
7-2 User equilibrium flows and travel times
7-3 System optimum flows and travel times
7-4 Tolled user equilibrium flows and travel times
7-5 The value of αt
D-1 Two networks
E-1 9-node network
E-2 The toll vector for different values of ε in the MinRev(ε) problem
E-3 The toll vector for different values of ε in the MinCost(ε) problem
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
NONLINEAR APPROXIMATION TECHNIQUES TO SOLVE NETWORK
FLOW PROBLEMS WITH NONLINEAR ARC COST FUNCTIONS
By
Artyom Nahapetyan
August 2006
Chair: Siriphong Lawphongpanich
Cochair: Donald W. Hearn
Major Department: Industrial and Systems Engineering
In this dissertation we investigate network flow problems with nonlinear arc
cost functions. The first group of problems consists of concave piecewise linear
network flow, fixed charge network flow, and dynamic pricing problems that
arise in the areas of supply chain management and logistics. Based on the MIP
formulation, we construct bilinear reduction problems such that a global solution
of the bilinear problem is a solution of the initial formulation. To solve the reduction problem,
we propose some heuristic algorithms. In the experiments, we compare the solution
provided by our algorithm with an exact solution as well as a solution provided by
other heuristic algorithms in the literature. Numerical experiments on randomly
generated instances confirm the quality of the algorithms.
The second group of problems is related to the dynamic traffic assignment
problem. In particular, we consider a periodic discrete time dynamic traffic
assignment problem (DTDTA), in which the travel time is a function of the number
of cars on the road, and the planning horizon is circular. The mathematical
formulation belongs to the class of nonlinear mixed integer problems. To obtain an
appropriate solution to the problem, we construct a linear mixed integer problem
for providing an upper bound and discuss an approximation scheme based on the
bounding problem. However, the bounding problem involves binary variables, and
when the problem is large, it is hard to solve. To overcome these difficulties, we
propose a heuristic algorithm based on a bilinear relaxation of the problem. Using
an approximate solution, in the dissertation we develop a toll pricing framework for
the dynamic case. In particular, based on a feasible vector of DTDTA we describe
a set of valid tolls and discuss several toll pricing problems. By constructing
an appropriate timeexpanded network, one may consider a similar toll pricing
framework for other solutions obtained, for example, from a simulation.
CHAPTER 1
INTRODUCTION
Network flow problems are minimization/maximization problems with an
underlying network structure. Although there are different representations of the
network, perhaps the most popular one is based on flow conservation constraints
via a nodearc incidence matrix. Apart from the flow conservation constraints,
many problems have additional restrictions on the variables, e.g., nonnegativity
and lower/upper boundary constraints. Based on the objective function and other
additional constraints the problems can be classified as linear or nonlinear, where
the latter can be further decomposed into convex, concave, or other problems.
The linear problems assume that the constraints as well as the objective
function are linear. Polynomial time algorithms for solving the problems are
well known. Some classical examples of the network flow problems with linear
constraints include shortest path, minimum spanning tree, minimum cut, maximum
flow, minimum cost network flow, and other problems. Based on the optimality
conditions and other properties of the problem, several algorithms have been
proposed to solve the problems. For details on the linear network flow problems,
see Ahuja et al. [2].
Despite the nice theoretical results developed for the linear problems, most of
the practical problems are not linear, i.e., the objective function and/or some of the
constraints are nonlinear. If it is a convex minimization (concave maximization)
problem with a convex feasible region, then any local minimum (maximum)
is a global solution of the problem, and appropriate algorithms, such as the
Frank-Wolfe algorithm and gradient-based direction-finding methods, can be used
to solve the problem. A large variety of algorithms for solving convex minimization
(concave maximization) problems can be found in Bazaraa et al. [5]. When the
objective function is not convex (concave) and/or the feasible region is not convex,
these algorithms do not necessarily converge to a global optimal solution. Finding
a global solution is a hard task, and global optimization techniques are required to
solve the problem (see, e.g., Horst et al. [54] and Horst and Tuy [55]).
In this dissertation, we consider two groups of nonconvex network flow
problems. The first group includes piecewise linear network flow, fixed charge
network flow, and dynamic pricing problems (see Chapters 2, 3, and 4) that have
a large variety of applications in production planning, scheduling, investment
decisions, network design, location of plants and distribution centers, pricing
policy, and many other practical problems that arise in supply chains, logistics,
transportation science, and computer networks. It is well known that the problems
in their general form are NP-hard; therefore, there are no polynomial time
algorithms to solve the problems unless P = NP. Although the mathematical
formulation belongs to the class of linear mixed integer problems, solving large
problems requires a large amount of CPU time and memory. On the other hand,
one can consider approximation techniques that are able to provide a goodquality
solution using less computer resources. Many of these techniques employ a linear
relaxation of the problem. Unlike those in the literature, we propose nonlinear
reduction techniques to solve the problems. In particular, in all three problems
we develop a method to reduce the problem to a bilinear one and propose a
heuristic algorithm to solve the resulting problem. In the numerical experiments we
compare the results with an exact solution (or the best feasible solution) provided
by MIP solvers, as well as with Dynamic Slope Scaling Procedure (DSSP), since
it is known to be one of the best heuristic algorithms to solve such problems.
Numerical experiments on randomly generated problems confirm the quality of the
solutions provided by our algorithms. In particular, they outperform the DSSP in
the quality of the solution as well as in the computational time. In addition, we
transform the problems into alternative continuous network flow problems with flow
dependent cost function and prove that a global solution of the resulting problem is
a solution of the initial MIP formulation. Despite an unusual structure of the cost
function, the mathematical formulation of the problems is similar to the system
optimum problems arising in the traffic assignment modelling. Using the same cost
function, we also construct a variational inequality problem similar to those in the
transportation literature and prove that the DSSP converges to a solution of the
resulting problem; i.e., it provides an equilibrium solution. However, the problem
requires finding a system optimum solution, and the algorithms we propose find an
approximate solution to the problem.
The second group of problems is related to the dynamic traffic assignment
problem. Unlike the static case, where the travel time is a function of the arc flow,
the dynamic models involve three variables: inflow rate, outflow rate, and density,
and the travel time can be a function of all three variables. In the literature several
continuous and discrete time models have been proposed for different travel time
functions. The model in this dissertation assumes that the travel time is a function
of the density, and all cars that enter an arc at the same point of time experience
the same traffic conditions; therefore, they leave the arc at the same time. In
addition, the models in the literature assume that the network is empty at the
beginning and the end of a planning horizon. In the case when some cars are
present in the network, the time to enter the network for those cars is unknown,
and it is hard to model the propagation of the cars in the network. Unlike other
models in the literature, we consider a periodic planning horizon and assume
that the processes repeat themselves from one period to another (see Chapter
5). The mathematical formulation of the problem minimizes the total delay and
belongs to the class of nonlinear mixed integer problems, a hard problem to solve.
By linearizing the objective function and the constraints, we construct linear
mixed integer problems that provide upper and lower bounds. The solution of
the bounding problems can be made arbitrarily close to a solution of the initial
formulation by decreasing the discretization parameter. However, the bounding
problems involve binary variables, and it is hard to solve large problems using
MIP solvers. In Chapter 6 we discuss a heuristic algorithm based on a nonlinear
relaxation of the problem. In particular, we construct a continuous bilinear
problem, which provides a tighter lower bound than the LP relaxation. Using the
bilinear relaxation, the heuristic algorithm aims to find an integer solution, which
has an objective function value close to the one provided by the relaxation problem.
Another problem of interest is the toll pricing framework for the dynamic
traffic assignment problem (see Chapter 7). Similar to the static case, we construct
a set of valid toll vectors such that a system optimum solution is a solution of
the tolled user equilibrium problem. The latter is a user equilibrium problem
where the arc cost functions include tolls in addition to the travel times. A key
component in the development of such technique is the reduced timeexpanded
(RTE) network constructed based on a feasible vector. Using the network, we show
that a feasible vector is a user equilibrium solution if and only if it is a solution of
a linear problem with an underlying RTE network structure. The latter allows the
construction of a set of valid tolls and formulation of a toll pricing problem with a
secondary objective, and we provide several examples of such problems.
CHAPTER 2
A BILINEAR REDUCTION BASED ALGORITHM FOR CONCAVE
PIECEWISE LINEAR NETWORK FLOW PROBLEMS
2.1 Introduction to the Chapter
The cost functions in most of the network flow problems considered in the
literature are assumed to be linear or convex. However, this assumption might not
hold in practical realworld problems. In fact, the costs often have the structure of
a concave or piecewise linear concave function (see Guisewite and Pardalos [45] and
Geunes and Pardalos [41]). We consider the concave piecewise linear network flow
problem (CPLNF), which has diverse applications in supply chain management,
logistics, transportation science, and telecommunication networks. In addition,
the CPLNF problem can be used to find an approximate solution for network flow
problems with a continuous concave cost function. It is well known that these
problems are NP-hard (see Guisewite and Pardalos [45]).
This chapter deals with a nonlinear reduction technique for the linear mixed
integer formulation of the CPLNF problem. In particular, the problem is reduced
to a continuous one with linear constraints and a bilinear objective function. The
reduction has an economic interpretation, and its solution is proven to be the
solution of the CPLNF problem. Based on the reduction, we propose an algorithm
for finding a local minimum of the problem, which we refer to as the dynamic cost
updating procedure (DCUP). In the chapter, we show that DCUP converges in a
finite number of iterations.
The theoretical results presented in this chapter can be extended to a more
general concave minimization problem with a separable piecewise linear objective
function and linear/nonlinear constraints. It should be emphasized that Horst
et al. [54] (see also Horst and Tuy [55]) discuss a bilinear program with disjoint
feasible regions and prove that the problem is equivalent to a subclass of piecewise
linear concave minimization problems. The results in this chapter show that
any concave minimization problem with a separable concave piecewise linear
objective function is equivalent to a bilinear program. It is well known that an
optimal solution of a general jointly constrained bilinear program belongs to
the boundary of the feasible region and is not necessarily a vertex (see Horst et
al. [54]). However, the reduction technique presented in this chapter has a jointly
constrained feasible region with a special structure and it is still equivalent to a
concave piecewise linear program. From the latter it follows that two parts of a
solution of the problem are vertices of two different polytopes that are "joined"
by a set of constraints. In that sense, these types of problems are n, .'/ 1.; joined
bilinear programs.
The CPLNF problem can be transformed into an equivalent network flow
problem with flow dependent cost functions (NFPwFDCF). Using NFPwFDCF,
it can be shown that the dynamic slope scaling procedure (DSSP) (see Kim and
Pardalos [61] and [62]) converges to an equilibrium solution of NFPwFDCF.
Although DSSP provides a solution that can be quite close to the system
solution, it is well known that the equilibrium and the system solutions in general
are not the same. On the other hand, DCUP converges to a local minimum of the
problem. In the numerical experiments, we solve different problems using DCUP
and DSSP and compare the quality of the solution as well as the running time.
Computational results show that DCUP often provides a better solution than
DSSP and uses fewer iterations and less CPU time. Since DCUP starts from a
feasible vector and converges to a local minimum, one may consider first running DSSP
and then improving the solution using DCUP. The numerical experiments using
this combined mode are provided as well.
The remainder is organized as follows. Section 2.2 discusses the nonlinear reduction technique
for the CPLNF problem. Section 2.3 generalizes the results from Section 2.2 for a
concave piecewise linear problem with a separable objective function. Section 2.4
describes DCUP and theoretical results on the convergence and the solution of the
procedure. In Section 2.5 we prove that the solution of the DSSP is an equilibrium
solution of a network flow problem with flow dependent cost functions. The results
of numerical experiments on DCUP and DSSP are provided in Section 2.6, and
finally, Section 2.7 concludes the chapter.
2.2 A Bilinear Reduction Technique for the Concave Piecewise Linear
Network Flow Problem
Let G(N, A) represent a network where N and A are the sets of nodes and
arcs, respectively. The following is the mathematical formulation of the concave
piecewise linear network flow problem (CPLNF):

    \min_x \sum_{a \in A} f_a(x_a)
    s.t.  Bx = b,                                              (2-1)
          x_a \in [\lambda_a^0, \lambda_a^{n_a}]  \forall a \in A,   (2-2)

where B is the node-arc incidence matrix of the network G, and the f_a(x_a) are piecewise
linear concave functions, i.e.,

    f_a(x_a) = c_a^1 x_a + s_a^1 (= f_a^1(x_a))              if x_a \in [\lambda_a^0, \lambda_a^1),
               ...
               c_a^{n_a} x_a + s_a^{n_a} (= f_a^{n_a}(x_a))  if x_a \in [\lambda_a^{n_a-1}, \lambda_a^{n_a}],

with c_a^1 > c_a^2 > ... > c_a^{n_a}. Let K_a = {1, 2, ..., n_a}. Notice that because of the
concavity of f_a(x_a), the function can be written in the following alternative form:

    f_a(x_a) = \min_{k \in K_a} {f_a^k(x_a)} = \min_{k \in K_a} {c_a^k x_a + s_a^k}.
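Since each f_a is the pointwise minimum of its affine pieces, it can be evaluated directly from the slope and intercept data. The following sketch is ours, not the dissertation's; the two-piece cost data c = (4, 2), s = (0, 10) are invented for illustration:

```python
def f(x, c, s):
    """Evaluate a concave piecewise linear cost f(x) = min_k {c[k]*x + s[k]},
    where the slopes c are strictly decreasing (this is what concavity requires)."""
    return min(ck * x + sk for ck, sk in zip(c, s))

# Hypothetical two-piece cost: 4x on the first segment, 2x + 10 on the second,
# so the segments cross at x = 5.
c, s = (4, 2), (0, 10)
print(f(3, c, s))   # first piece is cheaper:  4*3 = 12 < 2*3 + 10 = 16
print(f(8, c, s))   # second piece is cheaper: 2*8 + 10 = 26 < 4*8 = 32
```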
Using binary variables y_a^k, k \in K_a, one can formulate the CPLNF problem as
the following linear mixed integer program (CPLNF-IP):

    \min \sum_{a \in A} \sum_{k \in K_a} c_a^k x_a^k + \sum_{a \in A} \sum_{k \in K_a} s_a^k y_a^k
    s.t.  Bx = b,                                                                    (2-3)
          \sum_{k \in K_a} x_a^k = x_a  \forall a \in A,                             (2-4)
          \sum_{k \in K_a} \lambda_a^{k-1} y_a^k \le x_a \le \sum_{k \in K_a} \lambda_a^k y_a^k  \forall a \in A,   (2-5)
          \sum_{k \in K_a} y_a^k = 1  \forall a \in A,                               (2-6)
          x_a^k \le M y_a^k  \forall a \in A, k \in K_a,                             (2-7)
          x_a^k \ge 0, y_a^k \in {0, 1}  \forall a \in A and k \in K_a,              (2-8)

where M is a sufficiently large number.

In the above formulation, equality (2-6) makes sure that, \forall a \in A, there is only
one \bar{k} \in K_a such that y_a^{\bar{k}} = 1 and y_a^k = 0, \forall k \in K_a, k \ne \bar{k}. The correct choice of
\bar{k} depends on the value of x_a and has to satisfy constraint (2-5). In particular, if
x_a \in [\lambda_a^{\bar{k}-1}, \lambda_a^{\bar{k}}], then from constraints (2-5) and (2-6) it follows that y_a^{\bar{k}} = 1. As
for the rest of the constraints, inequality (2-7) ensures that x_a^k = 0 if y_a^k = 0, and
equalities (2-3) and (2-4) make sure that the demand is satisfied and the sum of x_a^k
over all indices k \in K_a is equal to the flow on arc a. In addition, it is easy to show
that the objective of the problem is equivalent to the objective of CPLNF, and one
concludes that the CPLNF and the CPLNF-IP problems are equivalent.
Consider a relaxation of the CPLNF-IP problem where constraint (2-7) and
the integrality of y_a^k are replaced by

    x_a^k = x_a y_a^k                                                                (2-9)

and y_a^k \ge 0, respectively. Observe that in the resulting problem constraint (2-4)
is redundant and follows from (2-6) and (2-9); therefore, it can be removed from
the formulation. In addition, notice that one can remove the variables x_a^k from
the formulation as well by substituting (2-9) into the objective function. The
mathematical description of the resulting problem is provided below, and we refer to
the relaxation problem as CPLNF-R:

    \min_{x,y} g(x, y) = \sum_{a \in A} [\sum_{k \in K_a} c_a^k y_a^k] x_a + \sum_{a \in A} \sum_{k \in K_a} s_a^k y_a^k
    s.t.  Bx = b,                                                                    (2-10)
          \sum_{k \in K_a} \lambda_a^{k-1} y_a^k \le x_a \le \sum_{k \in K_a} \lambda_a^k y_a^k  \forall a \in A,   (2-11)
          \sum_{k \in K_a} y_a^k = 1  \forall a \in A,                               (2-12)
          x_a \ge 0, y_a^k \ge 0  \forall a \in A and k \in K_a.                     (2-13)
Lemma 2.2.1. Any feasible vector of the CPLNF-IP problem is feasible to the
CPLNF-R problem.
Proof: Observe that constraints (2-10)-(2-13) are present in the CPLNF-IP
problem. Therefore, any feasible vector of the CPLNF-IP problem satisfies
constraints (2-10)-(2-13). ∎
Lemma 2.2.2. Any local optimum of the CPLNF-R problem is either feasible to
the CPLNF-IP problem or can be used to construct a feasible vector of CPLNF-IP with the
same objective function value.
Proof: Let (x*, y*) denote a local minimum of the CPLNF-R problem. From the
local optimality it follows that g(x*, y*) \le g(x*, y) in an \epsilon-neighborhood of y*.
However, observe that by fixing the value of the vector x to x* in CPLNF-R, the
problem reduces to a linear one; therefore y* is a global minimum of the resulting
LP. In addition, notice that the problem can be decomposed into |A| problems of
the following form:

    \min_{y_a} \sum_{k \in K_a} [c_a^k x_a^* + s_a^k] y_a^k
    s.t.  \sum_{k \in K_a} \lambda_a^{k-1} y_a^k \le x_a^* \le \sum_{k \in K_a} \lambda_a^k y_a^k,   (2-14)
          \sum_{k \in K_a} y_a^k = 1,                                                (2-15)
          y_a^k \ge 0  \forall k \in K_a.                                            (2-16)

Let x_a^* \in [\lambda_a^{k*-1}, \lambda_a^{k*}]. As we have mentioned before, f_a(x_a^*) = \min_{k \in K_a} {f_a^k(x_a^*)};
therefore

    f_a(x_a^*) = \min_{k \in K_a} {f_a^k(x_a^*)} = \min_{k \in K_a} {c_a^k x_a^* + s_a^k}.

Observe that by assigning \hat{y}_a^{k*} = 1 and \hat{y}_a^k = 0, \forall k \in K_a, k \ne k*, the resulting
vector \hat{y}_a (i) satisfies constraint (2-14) because x_a^* \in [\lambda_a^{k*-1}, \lambda_a^{k*}], and (ii) \hat{y}_a \in
argmin{\sum_{k \in K_a} [c_a^k x_a^* + s_a^k] y_a^k | \sum_{k \in K_a} y_a^k = 1, y_a^k \ge 0}. Based on the above, one
concludes that \hat{y}_a is an optimal solution of the problem. If x_a^* \in (\lambda_a^{k*-1}, \lambda_a^{k*}), then
\hat{y}_a is the unique solution of the problem because c_a^k x_a^* + s_a^k > c_a^{k*} x_a^* + s_a^{k*}, \forall k \in K_a,
k \ne k*; therefore, y_a^* = \hat{y}_a. If x_a^* = \lambda_a^{k*-1} or x_a^* = \lambda_a^{k*}, there are exactly two binary
solutions of the problem, and both have the same objective function value. As a
result, either one can be used to construct a binary solution \hat{y}. A similar result
holds for all arcs a \in A. Regarding the variables x_a^k, given (x*, y*), the only feasible choice
is x_a^{k*} = x_a^* and x_a^k = 0, \forall k \in K_a, k \ne k*. ∎
The following theorem is a direct consequence of the above two lemmas.
Theorem 2.2.1. A global optimum of the CPLNF-R problem is a solution, or can be
used to construct a solution, of the CPLNF-IP problem.
Proof: From Lemma 2.2.2, it follows that a global optimum of CPLNF-R is either
feasible to CPLNF-IP or can be used to construct a feasible solution with the same
objective function value. Since all feasible vectors of CPLNF-IP are feasible to
CPLNF-R (see Lemma 2.2.1), one concludes that a global solution of CPLNF-R
leads to a solution of CPLNF-IP. ∎
From Theorem 2.2.1, it follows that solving the CPLNF-IP problem is
equivalent to finding a global optimum of the bilinear problem CPLNF-R. If
the solution of CPLNF-R is not feasible to CPLNF-IP, then the proof of Lemma
2.2.2 provides an easy way to construct a feasible solution with the same objective
function value. Other properties of local minima of the CPLNF-R problem are
discussed in Section 2.4.
It is noticed that the CPLNF-R problem has the following economic
interpretation. Observe that because of equality (2-12), y_a^k \in [0, 1], and one
can interpret the variables y_a^k as weights. Under this assumption, one can view
the objective function as the sum of the weighted averages of the variable costs
multiplied by the flow, [\sum_{k \in K_a} c_a^k y_a^k] x_a, and the weighted averages of the fixed costs,
\sum_{k \in K_a} s_a^k y_a^k. In other
words, the objective function consists of the weighted averages of the functions f_a(x_a).
However, the weights have to satisfy constraint (2-11), where the flow, x_a, is
bounded by the weighted averages of the left and right endpoints of the intervals
[\lambda_a^{k-1}, \lambda_a^k], k \in K_a. According to Lemma 2.2.2, a local (global) optimum leads to a
solution where the weights are either equal to 0 or 1.
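This weighted-average reading can be checked numerically. The sketch below is our illustration with a hypothetical two-piece cost (c = (4, 2), s = (0, 10)): binary weights that select the piece containing the flow reproduce f_a(x_a), while fractional weights produce a convex combination of the affine pieces, which can only overestimate the pointwise minimum:

```python
def g_arc(x, y, c, s):
    """Per-arc bilinear term of CPLNF-R: (sum_k c[k]*y[k]) * x + sum_k s[k]*y[k],
    i.e., the y-weighted average of the affine pieces evaluated at flow x."""
    return sum(ck * yk for ck, yk in zip(c, y)) * x + sum(sk * yk for sk, yk in zip(s, y))

c, s = (4, 2), (0, 10)                               # hypothetical two-piece concave cost
f = lambda x: min(ck * x + sk for ck, sk in zip(c, s))

# Binary weights that pick the piece containing x reproduce f(x):
assert g_arc(3, (1, 0), c, s) == f(3) == 12
assert g_arc(8, (0, 1), c, s) == f(8) == 26

# Fractional weights give a convex combination of the affine pieces, which
# lies on or above their pointwise minimum f:
assert g_arc(7, (0.5, 0.5), c, s) >= f(7)
```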
2.3 Concave Piecewise Linear Problems with Separable Objective
Functions
Consider the following generalization of the CPLNF problem where constraint
(2-1) is replaced by the requirement x \in X \subseteq R^n.
CPLPwSOF:

    \min \sum_{i=1}^n f_i(x_i)
    s.t.  x \in X,
          x_i \in [\lambda_i^0, \lambda_i^{n_i}],

where the f_i(x_i) are piecewise linear concave functions, i.e.,

    f_i(x_i) = c_i^1 x_i + s_i^1 (= f_i^1(x_i))              if x_i \in [\lambda_i^0, \lambda_i^1),
               ...
               c_i^{n_i} x_i + s_i^{n_i} (= f_i^{n_i}(x_i))  if x_i \in [\lambda_i^{n_i-1}, \lambda_i^{n_i}],

with c_i^1 > c_i^2 > ... > c_i^{n_i}. Let K_i = {1, 2, ..., n_i}. Although the theoretical results
in the previous section are derived for the concave piecewise linear network flow
problem, one can replace constraint (2-10) by x \in X, i.e.,
CPLPwSOF-R:

    \min_{x,y} \sum_{i=1}^n [\sum_{k \in K_i} c_i^k y_i^k] x_i + \sum_{i=1}^n \sum_{k \in K_i} s_i^k y_i^k
    s.t.  x \in X,
          \sum_{k \in K_i} \lambda_i^{k-1} y_i^k \le x_i \le \sum_{k \in K_i} \lambda_i^k y_i^k,
          \sum_{k \in K_i} y_i^k = 1,
          x_i \ge 0, y_i^k \ge 0,
and the bilinear reduction technique used before, as well as Lemmas 2.2.1 and
2.2.2, and Theorem 2.2.1, are still valid. As a result, one concludes that the
CPLPwSOF and the CPLPwSOFR problems are equivalent in the sense that
a solution of the CPLPwSOF problem can be easily constructed from a global
solution of the CPLPwSOFR problem.
If the set X is a polytope, then CPLPwSOF-R is a bilinear program with a
jointly constrained linear feasible region. Let Y = {y | \sum_{k \in K_i} y_i^k = 1, y_i^k \ge 0}
and X^+ = {x | x \in X, x_i \in [\lambda_i^0, \lambda_i^{n_i}]}. Denote by V(X^+) and V(Y) the sets
of vertices of the polytopes X^+ and Y, respectively. Notice that the sets X^+
and Y are "joined" by the constraints \sum_{k \in K_i} \lambda_i^{k-1} y_i^k \le x_i \le \sum_{k \in K_i} \lambda_i^k y_i^k. It is
well known that an optimal solution of a general bilinear program with a jointly
constrained feasible region occurs at the boundary of the feasible region and is not
necessarily a vertex (see Section 3.2.2 of Horst et al. [54] and the related problem
set). However, CPLPwSOF-R is equivalent to CPLPwSOF. In particular, if (x*, y*)
is a global solution of CPLPwSOF-R, then from Theorem 2.2.1 it follows that x* is
a solution of CPLPwSOF. The latter is a concave minimization problem where the
feasible region is a polytope. It is well known that the solution of such a concave
minimization problem is one of the vertices of the polytope; therefore x* \in V(X^+).
In addition, from the theorem it follows that there exists \hat{y} \in V(Y) such that
(x*, \hat{y}) is a global solution of the CPLPwSOF-R problem. In that sense, CPLPwSOF-R
is a weakly joined bilinear program. The above discussion is summarized in the
following two theorems.
Theorem 2.3.1. A concave minimization problem with a separable piecewise linear
objective function, CPLPwSOF, is equivalent to a bilinear program, CPLPwSOF-R.
Theorem 2.3.2. Let (x*, y*) be a solution of the problem CPLPwSOF-R. If the
set X^+ is a polytope, then (i) x* \in V(X^+), and (ii) \exists \hat{y} \in V(Y) such that

    \sum_{i=1}^n \sum_{k \in K_i} \hat{y}_i^k f_i^k(x_i^*) = \sum_{i=1}^n f_i(x_i^*).
2.4 Dynamic Cost Updating Procedure
In this section, we discuss an algorithm for solving the CPLNFR problem. In
general, bilinear programs similar to CPLNF-R are not separable in the sense that
the feasible sets of variables x and y are joined by common constraints. By fixing
one of the variables to a particular value, the resulting problem transforms into a
linear one.
Consider the following two linear problems, which we refer to as LP(y) and
LP(x), where y (respectively, x) denotes the parameter of the problem LP(y)
(respectively, LP(x)), i.e., it is fixed to a particular value.
LP(y):

    \min_x \sum_{a \in A} [\sum_{k \in K_a} c_a^k y_a^k] x_a
    s.t.  Bx = b,
          x_a \in [0, \lambda_a^{n_a}].

LP(x):

    \min_y \sum_{a \in A} \sum_{k \in K_a} [c_a^k x_a + s_a^k] y_a^k
    s.t.  \sum_{k \in K_a} \lambda_a^{k-1} y_a^k \le x_a \le \sum_{k \in K_a} \lambda_a^k y_a^k  \forall a \in A,
          \sum_{k \in K_a} y_a^k = 1  \forall a \in A,
          y_a^k \ge 0.
In the LP(y) problem we assume that the variables y_a^k are given, and the x_a are
the only decision variables. Similarly, in the LP(x) problem, the x_a are given,
and the y_a^k are the decision variables. As we have shown in the proof of Lemma
2.2.2, problem LP(x) can be decomposed into |A| problems, and the solutions
of the decomposed problems are binary vectors, which satisfy constraint (2-14).
Therefore, given a vector x, a solution of the LP(x) problem can be found by a
simple search technique where y_a^k = 1 if x_a ∈ [λ_a^{k-1}, λ_a^k].
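The simple search can be sketched as follows (the breakpoints are illustrative):

```python
breaks = [0.0, 1.0, 2.0, 3.0]          # lambda^0 .. lambda^3 of one arc

def select_piece(xa):
    """Return k with lambda^{k-1} <= x_a <= lambda^k (1-based pieces)."""
    for k in range(1, len(breaks)):
        if breaks[k - 1] <= xa <= breaks[k]:
            return k
    raise ValueError("x_a outside [0, lambda^{n_a}]")

y = [0, 0, 0]
y[select_piece(1.7) - 1] = 1           # y^k = 1 for the selected piece
assert y == [0, 1, 0]
```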
We propose a dynamic cost updating procedure (DCUP), where one considers
solving the problems LP(x) and LP(y) iteratively, using the solution of one
problem as a parameter for the other (see Procedure 1). Although in the procedure
the initial vector y^0 is such that y_a^1 = 1 and y_a^k = 0, ∀k ∈ K_a, k ≠ 1, one can
choose any other binary vector that satisfies constraint (2-12). Note that
a similar iterative procedure has been used for solving bilinear programs with
a disjoint feasible region (see, e.g., Horst et al. [54] and Horst and Tuy [55]). In
Procedure 1: Dynamic Cost Updating Procedure
Step 1: Let y^0 denote the initial vector, where y_a^1 = 1 and y_a^k = 0, ∀k ∈ K_a, k ≠ 1. m ← 1.
Step 2: Let x^m = argmin{LP(y^{m-1})}, and y^m = argmin{LP(x^m)}.
Step 3: If y^m = y^{m-1} then stop. Otherwise, m ← m + 1 and go to Step 2.
the DCUP, LP(y) does not include constraint (2-11). In other words, LP(y) is the
CPLNFR problem with fixed variable y and relaxed constraint (2-11). The latter
allows using the iterative procedure to solve bilinear programs with a jointly
constrained feasible region. Let V represent the feasible region of CPLNFR and
(x*, y*) be the solution of the DCUP.
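To make Procedure 1 concrete, here is a minimal sketch of DCUP on a hypothetical instance with two parallel arcs between a source and a sink; with this topology LP(y) reduces to greedily filling arcs in order of the selected slope, and LP(x) to the piece search described above. All numbers are illustrative, not from the dissertation's test sets.

```python
# Hypothetical instance: two parallel arcs from source to sink, demand 4,
# shared breakpoints 0, 1.5, 3; each arc has two concave pieces (c^k, s^k).
breaks = [0.0, 1.5, 3.0]
arcs = {"a1": [(4.0, 0.0), (2.0, 3.0)],
        "a2": [(3.0, 0.0), (1.0, 3.0)]}
demand = 4.0

def lp_y(y):
    """LP(y): y fixed, route the demand greedily by the selected slope c^k."""
    slope = {a: arcs[a][y[a]][0] for a in arcs}
    x, left = {a: 0.0 for a in arcs}, demand
    for a in sorted(arcs, key=lambda a: (slope[a], a)):
        x[a] = min(breaks[-1], left)   # arc capacity is the last breakpoint
        left -= x[a]
    assert left <= 1e-9, "demand exceeds total capacity"
    return x

def lp_x(x):
    """LP(x): x fixed; pick, per arc, the piece whose interval contains x_a
    (on a breakpoint both adjacent pieces are optimal; take the cheaper)."""
    y = {}
    for a in arcs:
        ks = [k for k in range(len(arcs[a]))
              if breaks[k] <= x[a] <= breaks[k + 1]]
        y[a] = min(ks, key=lambda k: arcs[a][k][0] * x[a] + arcs[a][k][1])
    return y

def dcup():
    y = {a: 0 for a in arcs}             # y^0: first piece on every arc
    while True:
        x = lp_y(y)
        y_next = lp_x(x)
        if y_next == y:                  # stopping criterion of Procedure 1
            return x, y
        y = y_next

x, y = dcup()
assert x == {"a1": 1.0, "a2": 3.0} and y == {"a1": 0, "a2": 1}
```

On this instance DCUP reaches the global optimum (cost 10) in two passes; in general it is only guaranteed to find a local minimum, as Theorem 2.4.2 states.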
Theorem 2.4.1. The solution of the DCUP satisfies the following equalities:
min_{(x*,y)∈V} g(x*, y) = g(x*, y*) = min_{(x,y*)∈V} g(x, y*).
Proof: Because of the stopping criteria, x* = argmin{LP(y*)} and y* =
argmin{LP(x*)}. From the latter it follows that x* = argmin_{(x,y*)∈V} g(x, y*) and
y* = argmin_{(x*,y)∈V} g(x*, y). □
Theorem 2.4.2. If y* is a unique solution of the LP(x*) problem then (x*,y*) is a
local minimum of CPLNFR.
Proof: See the proof of Proposition 3.3, Horst et al. [54]. □
In the proof of Lemma 2.2.2 we have shown that y* is not unique if and
only if a component of the vector x*, say x_a^*, is equal to the value of one of the
breakpoints λ_a^k. However, observe that x* = argmin{LP(y*)}, and the feasible
region of problem LP(y*) does not involve breakpoints. As a result, in practice it is
unlikely that x_a^* is equal to one of the breakpoints.
Theorem 2.4.3. Given any initial binary vector y^0 that satisfies constraint (2-12),
DCUP converges in a finite number of iterations.
Proof: Let y^0 denote the initial binary vector, x^1 = argmin{LP(y^0)} and
y^1 = argmin{LP(x^1)}. According to the assumption of the theorem, ∀a ∈ A
there exists only one ℓ ∈ K_a such that y_a^ℓ = 1 and y_a^k = 0, ∀k ∈ K_a, k ≠ ℓ.
If x_a^1 ∈ [λ_a^{ℓ-1}, λ_a^ℓ] then the corresponding components of the vector y^0 do not
change their values in the next iteration, i.e., y_a^ℓ = 1 and y_a^k = 0, ∀k ∈ K_a,
k ≠ ℓ. However, if x_a^1 ∈ [λ_a^{ζ-1}, λ_a^ζ] with ζ ≠ ℓ, then y_a^ζ = 1 and y_a^k = 0,
∀k ∈ K_a, k ≠ ζ. In addition, notice that c_a^ℓ x_a^1 + s_a^ℓ ≥ c_a^ζ x_a^1 + s_a^ζ. As a result,
g(x^1, y^0) ≥ g(x^1, y^1) ≥ g(x^2, y^1), and one concludes that the objective function
value of the CPLNFR problem does not increase in the next iteration. To
prove this by induction, assume that the objective function does not increase
until iteration m. Similar to the above, one can show that g(x^{m+1}, y^m) ≥
g(x^{m+1}, y^{m+1}) ≥ g(x^{m+2}, y^{m+1}). The constructed nonincreasing sequence, i.e.,
{g(x^1, y^0), g(x^1, y^1), g(x^2, y^1), ..., g(x^{m+1}, y^m), g(x^{m+1}, y^{m+1}), g(x^{m+2}, y^{m+1}), ...},
is bounded from below by the optimal objective function value, and one concludes
that the algorithm converges.
Observe that in each iteration the procedure changes the binary vector
y. If y^m = y^{m-1} then the procedure stops and g(x^m, y^{m-1}) ≥ g(x^m, y^m) ≥
g(x^{m+1}, y^m). If there exist m_1 and m_2 such that m_1 - 1 > m_2 and y^{m_1} = y^{m_2}, then
x^{m_1+1} = argmin{LP(y^{m_1})} = argmin{LP(y^{m_2})} = x^{m_2+1}, i.e., g(x^{m_2+1}, y^{m_2}) =
g(x^{m_2+1}, y^{m_1}) = g(x^{m_1+1}, y^{m_1}). From the nonincreasing property of the sequence
it follows that g(x^{m_2+1}, y^{m_2}) = g(x^{m_2+1}, y^{m_2+1}); therefore, y^{m_2} = y^{m_2+1} and the
algorithm must stop on iteration m_2. From the latter it follows that all vectors y^m
constructed by the procedure before it stops are different. Since the set of binary
vectors y is finite, one concludes that the procedure converges in a finite number of
iterations. □
2.5 On the Dynamic Slope Scaling Procedure
In this section, we discuss another equivalent formulation of the CPLNF
problem with a slightly different objective function. Using the new formulation, we
prove some properties regarding the solution of the Dynamic Slope Scaling Procedure
(DSSP) (see Kim and Pardalos [61] and [62]).
Although in the CPLNF problem there are no restrictions on the values of
the parameters s_a^1 and λ_a^0, by subtracting s_a^1 from the function f_a(x_a) and replacing
the variable x_a by x̃_a = x_a - λ_a^0, one can transform the problem into an equivalent
one where s_a^1 = 0 and λ_a^0 = 0. Therefore, without loss of generality, we assume that
s_a^1 = 0 and λ_a^0 = 0.
To investigate DSSP, let

F_a(x_a) =
  c_a^1 + s_a^1/x_a,   x_a ∈ (0, λ_a^1],
  c_a^2 + s_a^2/x_a,   x_a ∈ (λ_a^1, λ_a^2],
  ...
  c_a^{n_a} + s_a^{n_a}/x_a,   x_a ∈ (λ_a^{n_a-1}, λ_a^{n_a}],
  M,   x_a = 0,
where M is a sufficiently large number. Consider the following network flow
problem with flow-dependent cost functions F_a(x_a) (NFPwFDCF).

min_x F^T(x) x
s.t. Bx = b, (2-17)
x_a ∈ [0, λ_a^{n_a}], (2-18)

where F(x) is the vector of functions F_a(x_a).
Theorem 2.5.1. The NFPwFDCF problem is equivalent to the CPLNF problem.
Proof: Observe that both problems have the same feasible region. Let x be a
feasible vector. If x_a > 0 then F_a(x_a) x_a = (f_a(x_a)/x_a) x_a = f_a(x_a). On the other
hand, if x_a = 0 then F_a(x_a) x_a = 0 = f_a(x_a). From the above it follows that
F^T(x) x = Σ_{a∈A} f_a(x_a), and one concludes that NFPwFDCF is equivalent to
CPLNF. □
Let NFPwFDCF(F) denote the NFPwFDCF problem where the cost
vector function, F(x), is fixed to the value of the vector F. Notice that the
NFPwFDCF(F) problem is a minimum cost network flow problem with arc costs F_a
and flow upper bounds λ_a^{n_a}. DSSP starts with the initial costs F_a^0 = c_a^{n_a} + s_a^{n_a}/λ_a^{n_a} and
solves the NFPwFDCF(F^0) problem (see Procedure 2). Then it iteratively updates
Procedure 2: Dynamic Slope Scaling Procedure
Step 1: Let F_a^0 = c_a^{n_a} + s_a^{n_a}/λ_a^{n_a} be the initial arc costs and
x^1 = argmin{NFPwFDCF(F^0)}. m ← 2.
Step 2: Update the arc costs, F_a^{m-1} = F_a(x_a^{m-1}), and let
x^m = argmin{NFPwFDCF(F^{m-1})}.
Step 3: If x^m = x^{m-1} then stop. Otherwise, m ← m + 1 and go to Step 2.
the value of the cost vector using the function F_a(x_a), i.e., F_a^{m-1} = F_a(x_a^{m-1}), and
solves the resulting NFPwFDCF(F^{m-1}) problem. In the cost updating procedure,
different variations of the algorithm use different values for M; in particular, one may
consider replacing M by F_a^{m-1} or by a maximum-slope value.
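Under the same simplifying assumptions as before (a hypothetical two-parallel-arc instance, with greedy routing standing in for a general minimum cost flow solver), Procedure 2 can be sketched as:

```python
# Hypothetical instance: two parallel arcs, demand 4, breakpoints 0, 1.5, 3;
# each arc has two concave pieces (c^k, s^k) with s^1 = 0 as assumed above.
breaks = [0.0, 1.5, 3.0]
arcs = {"a1": [(4.0, 0.0), (2.0, 3.0)],
        "a2": [(3.0, 0.0), (1.0, 3.0)]}
demand, M = 4.0, 1e9

def F(a, xa):
    """Per-unit cost F_a(x_a) = c^k + s^k / x_a on (lambda^{k-1}, lambda^k]."""
    if xa == 0.0:
        return M
    for k, (c, s) in enumerate(arcs[a]):
        if breaks[k] < xa <= breaks[k + 1]:
            return c + s / xa
    raise ValueError("flow above arc capacity")

def min_cost_flow(cost):
    """Greedy min-cost routing on parallel arcs with common capacity."""
    x, left = {a: 0.0 for a in arcs}, demand
    for a in sorted(arcs, key=lambda a: (cost[a], a)):
        x[a] = min(breaks[-1], left)
        left -= x[a]
    return x

def dssp():
    cost = {a: F(a, breaks[-1]) for a in arcs}   # slope to the capacity point
    x = min_cost_flow(cost)
    while True:
        cost = {a: F(a, x[a]) for a in arcs}     # slope-scaling update
        x_next = min_cost_flow(cost)
        if x_next == x:                          # stopping criterion
            return x
        x = x_next

assert dssp() == {"a1": 1.0, "a2": 3.0}
```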
Theorem 2.5.2. The solution of the DSSP is a user equilibrium solution of the
network flow problem with the flow-dependent cost functions F_a(x_a).
Proof: Assume that DSSP stops on the m*-th iteration, and let x* denote
the solution of the procedure. From the stopping criteria it follows that x* =
argmin{(F^{m*})^T x | Bx = b, x_a ∈ [0, λ_a^{n_a}]}. If x_a^* ≠ 0 then F_a^{m*} = F_a(x_a^*). On the
other hand, if x_a^* = 0 then one can replace F_a^{m*} by a sufficiently large M without
changing the optimality of x*. As a result, x* = argmin{F^T(x*) x | Bx = b, x_a ∈
[0, λ_a^{n_a}]}. From the latter it follows that x* is a solution of the following variational
inequality problem:
find a feasible x* such that F^T(x*)(x - x*) ≥ 0, ∀x: x_a ∈ [0, λ_a^{n_a}], Bx = b,
and one concludes that x* is an equilibrium solution of the network flow problem
with arc cost functions F_a(x_a). □
If the arc costs are not constant, it is well known that the equilibrium and
system optimum solutions may not be the same (see, e.g., Pigou [87], Dafermos
and Sparrow [29], and Nagurney [77]). However, in the NFPwFDCF problem we
are interested in the system optimum solution, x*, where F^T(x*) x* ≤ F^T(x) x,
∀x: x_a ∈ [0, λ_a^{n_a}], Bx = b.
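The gap can be seen on a Pigou-style two-arc example (a standard textbook instance, not data from the text): unit demand, per-unit costs F_1(x) = 1 and F_2(x) = x.

```python
def total_cost(x2):
    """Total cost with x2 units on the variable-cost arc, 1 - x2 on the other."""
    return (1.0 - x2) * 1.0 + x2 * x2   # per-unit costs: 1 and x2

x2_eq = 1.0        # user equilibrium: arc 2 fills until its cost reaches 1
x2_so = 0.5        # system optimum: d/dx2 (1 - x2 + x2^2) = 0  ->  x2 = 1/2
assert total_cost(x2_eq) == 1.0
assert total_cost(x2_so) == 0.75        # strictly cheaper than equilibrium
```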
2.6 Numerical Experiments
In this section, we provide computational results for the dynamic cost
updating procedure and compare the solution of the DCUP with solutions provided
by DSSP and CPLEX.
The set of problems is divided into five groups that correspond to the networks
with different sizes and numbers of supply/demand nodes. For each group we
randomly generate three types of demand; U[10, 20], U[20, 30], or U[30, 40],
and consider 5 or 10 linear pieces (see Table A1, Appendix A). In Kim and
Pardalos [62] the authors consider increasing concave piecewise linear cost functions
for experiments. Although the bilinear reduction technique as well as DCUP are
valid for any concave piecewise linear function, to remain impartial for comparison
we generate similar increasing cost functions. To do so, first for each arc we
randomly generate a concave quadratic function of the form g(x) = -αx² + βx.
Notice that the maximum of the function is attained at the point x = β/(2α). Next
we divide the interval [0, β/(2α)] into n_a equal pieces, i.e., [λ_a^{k-1}, λ_a^k] =
[(k-1)β/(2αn_a), kβ/(2αn_a)], k ∈ K_a = {1, 2, ..., n_a}. Finally, we construct the
function f_a(x_a) by interpolating the function g(x) at the breakpoints λ_a^k, i.e.,
f_a(λ_a^k) = -α(λ_a^k)² + βλ_a^k. There
are 30 problems generated for each choice of the group, the demand distribution,
and the number of linear pieces. We use the GAMS environment to construct the
model and CPLEX 9.0 to solve the problems. Computations are made on a Unix
machine with dual Pentium 4 3.2GHz processors and 6GB of memory. All results are
tabulated in Appendix A.
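The construction of the test functions can be sketched as follows (the ranges of α, β, and the value of n_a are illustrative, not the distributions of Table A1):

```python
import random

def make_pieces(alpha, beta, n):
    """Interpolate g(x) = -alpha x^2 + beta x at n + 1 equally spaced
    breakpoints on [0, beta / (2 alpha)]; return breakpoints and (c^k, s^k)."""
    top = beta / (2 * alpha)                      # argmax of g
    brk = [k * top / n for k in range(n + 1)]
    g = lambda x: -alpha * x * x + beta * x
    pieces = []
    for k in range(n):
        c = (g(brk[k + 1]) - g(brk[k])) / (brk[k + 1] - brk[k])  # slope
        s = g(brk[k]) - c * brk[k]                               # intercept
        pieces.append((c, s))
    return brk, pieces

random.seed(0)
brk, pieces = make_pieces(alpha=random.uniform(0.5, 1.5),
                          beta=random.uniform(5.0, 10.0), n=5)
# secant slopes of the strictly concave g are strictly decreasing,
# so the resulting piecewise linear function is concave
assert all(pieces[k][0] > pieces[k + 1][0] for k in range(4))
# consecutive pieces agree at the shared breakpoint (continuity)
assert all(abs(pieces[k][0] * brk[k + 1] + pieces[k][1]
               - pieces[k + 1][0] * brk[k + 1] - pieces[k + 1][1]) < 1e-9
           for k in range(4))
```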
Sets 1-18 have a relatively small network size, and it is possible to solve
them exactly using CPLEX (see Table A2). The relative errors for those sets are
computed using the following formulas:

RE_DCUP = (f_DCUP - f_exact) / f_exact,
RE_DSSP = (f_DSSP - f_exact) / f_exact.
In addition to the relative errors, we compare the results of DCUP versus DSSP.
In Table A3, columns B, C, and D describe the percentage of problems where
DCUP is better than DSSP, DSSP is better than DCUP, and they are the same,
respectively. The numbers in column A are the averages (maximum values) of the
numbers RE_DSSP - RE_DCUP, given RE_DSSP - RE_DCUP > 0. According to the
numerical experiments, DCUP provides a better solution than DSSP in about 41%
of the problems and the same solution in about a third of the problems. Also notice
that DCUP requires fewer iterations to converge and consumes less CPU time.
Regarding CPLEX, the computational time varies from several seconds in sets 1-6 to
several thousands of seconds in sets 13-18.
In the case of the problem sets 19-30, CPLEX is not able to find an exact
solution within 10,000 seconds of CPU time, and the best found solution is not
better than the one provided by the heuristics; therefore, we compare the results
of DCUP versus DSSP. In Table A4, columns B and D describe the percentage
of problems where DCUP is better than DSSP, and DSSP is better than DCUP,
respectively. The numbers in columns A and C are computed based on the formulas
(f_DSSP - f_DCUP)/f_DSSP, given f_DSSP - f_DCUP > 0, and (f_DCUP - f_DSSP)/f_DCUP,
given f_DCUP - f_DSSP > 0, respectively. Observe that the percentage of problems
where DCUP is better than DSSP is higher in the problems with large demands.
On average, in 59% of problems DCUP finds a better solution than DSSP using
fewer iterations and less CPU time.
In the above numerical experiments, we have used the vector y^0 (∀a ∈ A,
y_a^1 = 1 and y_a^k = 0, ∀k ∈ K_a, k ≠ 1) as the initial binary vector. However,
DCUP can start from any other binary vector that satisfies constraint (2-12). In
particular, one can consider the solution of DSSP as an initial vector and use
DCUP to improve the solution. Table A5 compares the results of DCUP versus
DSSP, where column A is similar to the one in Table A4 (i.e., the numbers in the
column are computed based on the formula (f_DSSP - f_DCUP)/f_DSSP, given
f_DSSP - f_DCUP > 0), and column B is the percentage of problems that have been
improved. Observe that the percentage of problems where DCUP improves the
solution of the DSSP increases with the size of the network and the demand.
Overall, the DCUP improves the solution of the DSSP in a substantial share of the
problems.
2.7 Concluding Remarks
In this chapter, we have shown that the concave piecewise linear network flow
problem is equivalent to a bilinear program. Because of the special structure of
the feasible region of the reduced problem, we are able to prove that the optimum
is attained at vertices of the polytopes that form the feasible region. In addition, we
have shown that the results are valid for a general concave minimization problem
with a piecewise linear separable objective function.
Based on the theoretical results, we have developed a finitely convergent
algorithm to find a local minimum of the bilinear relaxation. The computational
results show that the dynamic cost updating procedure is able to find a near-
optimum or an exact solution of the problem using less CPU time than CPLEX.
In addition, we compare the quality of the solution and the running time with
the dynamic slope scaling procedure. Since DCUP is fast, one can aim to find the
global minimum by randomly generating the initial binary vector and running
DCUP. In addition, DCUP can be used in cutting plane algorithms for finding an
exact solution.
CHAPTER 3
ADAPTIVE DYNAMIC COST UPDATING PROCEDURE FOR SOLVING
FIXED CHARGE NETWORK FLOW PROBLEMS
3.1 Introduction to the Chapter
During the twentieth century, due to a variety of applications, many researchers
focused their attention on the fixed charge network flow problem (FCNF). In
particular, production planning, scheduling, investment decisions, network design,
location of plants and distribution centers, pricing policy, and many other practical
problems that arise in the supply chain, logistics, transportation science, and
computer networks can be modeled as a FCNF problem (see, e.g., Geunes and
Pardalos [41]).
The FCNF problem is well known to be NP-hard and belongs to the class
of concave minimization problems. The problem can be modeled as a 0-1 mixed
integer linear program (see Hirsch and Dantzig [50]) and most solution approaches
utilize branch-and-bound techniques to find an exact solution (see Barr et al. [4],
Cabot and Erenguc [12], Gray [44], Kennington and Unger [59], and Palekar et
al. [83]). Since the concave minimization problem attains a solution at one of the
vertices of the feasible region, Murty [76] proposed a vertex ranking procedure to
solve the problem. However, finding an exact solution is computationally expensive
and it is not practical for solving large problems. Some heuristic procedures
are discussed in Cooper and Drebes [27], Diaby [31], Khang and Fujiwara [60],
and Kuhn and Baumol [63]. Recently Kim and Pardalos [61] (see also Kim and
Pardalos [62]) proposed a heuristic algorithm, Dynamic Slope Scaling Procedure
(DSSP), to solve the fixed charge network flow problem. The procedure solves a
sequence of linear problems, where the slope of the cost function is updated based
on the solution of the previous iteration. The algorithm is known to be one of the
best heuristic procedures to solve FCNF problems.
Note that all approaches to solve the problem are based on linear approximation
techniques. Instead, we approximate FCNF by a concave piecewise linear network
flow problem (CPLNF), where the cost functions have two linear pieces. A proper
choice of the approximation parameter ensures the equivalence between FCNF
and the resulting CPLNF problem. However, finding the proper parameter is
computationally expensive; therefore, we propose an algorithm that solves a
sequence of CPLNF problems by gradually decreasing the parameter of the
problem. We prove that the stopping criterion of the algorithm is consistent in the
sense that a solution of the last CPLNF problem in the sequence is a solution of
the FCNF problem.
Despite the above mentioned theoretical results, the algorithm requires finding
exact solutions of the CPLNF problems, which are NP-hard (see Guisewite and
Pardalos [45]). In Chapter 2 (see also Nahapetyan and Pardalos [79]), we have
shown that the CPLNF problem is equivalent to a bilinear program. In addition,
we have proposed a finite convergent dynamic cost updating procedure (DCUP)
to find a local minimum of the resulting bilinear program. To solve the FCNF
problem, in the algorithm one transforms the CPLNF problems into equivalent
bilinear programs and uses the DCUP to solve the resulting problems. We refer
to the combined algorithm as the adaptive dynamic cost updating procedure
(ADCUP).
Similar to the result presented in Chapter 2, we prove that the solution
provided by DSSP is a solution to a variational inequality problem, which is
formulated based on the feasible region of the FCNF problem. Although in general
an equilibrium solution and a system solution are not the same, the difference
between the objective function values of the solutions can be fairly small. On
the other hand, ADCUP is a heuristic procedure for finding a system optimum
solution. To compare these two procedures, we conduct numerical experiments
on 36 problem sets for different networks and choices of cost functions. There
are 30 randomly generated problems for each problem set. In the experiments,
we compare ADCUP versus DSSP in terms of the quality of the solution as
well as CPU time. In addition, for small networks we find an exact solution of
the problems using MIP solvers of CPLEX and compute relative errors. The
computational results show that ADCUP provides a near optimum solution using
a negligible amount of CPU time. In addition, the procedure outperforms DSSP in
the quality of the solution as well as CPU time. The difference between solutions is
more noticeable in the cases of small slopes and large fixed costs.
For the remainder, Section 3.2 discusses the approximation technique and
establishes the equivalence between the FCNF problem and a CPLNF problem
with a special structure. A solution algorithm for solving the FCNF problem is
provided in Section 3.3. Some properties of the DSSP are introduced in Section 3.4.
The results of numerical experiments on ADCUP are summarized in Section 3.5,
and finally, Section 3.6 concludes the chapter.
3.2 Approximation of the Fixed Charge Network Flow Problem by a
TwoPiece Linear Concave Network Flow Problem
This section discusses an approximation of FCNF by concave piecewise linear
network flow problems (CPLNF). In particular, by choosing a sufficiently small
approximation parameter one can guarantee the equivalence between the FCNF
and the CPLNF problems.
Consider a general fixed charge network flow problem constructed on a
network G(N, A), where N and A denote the sets of nodes and arcs, respectively.
Let f_a(x_a) denote the cost function of arc a ∈ A, and

Figure 3-1. Approximation of function f_a(·).

f_a(x_a) =
  c_a x_a + s_a,   x_a ∈ (0, λ_a],
  0,   x_a = 0,

where λ_a is the capacity of the arc. The fixed charge network flow problem can be
stated as follows.
FCNF:
min_x f(x) = Σ_{a∈A} f_a(x_a)
s.t. Bx = b,
x_a ∈ [0, λ_a], ∀a ∈ A,
where B is the node-arc incidence matrix of network G, and b is a supply/demand
vector.
Observe that the cost function is discontinuous at the origin and linear on the
interval (0, Aa]. Although we assume that the flows on the arcs are bounded by Aa,
the bounds can be replaced by a sufficiently large M, and the problem transforms
into an unbounded one.
Let ε_a ∈ (0, λ_a], and

φ_a^ε(x_a) =
  c_a x_a + s_a,   x_a ∈ [ε_a, λ_a],
  c̄_a x_a,   x_a ∈ [0, ε_a),

where c̄_a = c_a + s_a/ε_a. Observe that φ_a^ε(x_a) = f_a(x_a), ∀x_a ∈ {0} ∪ [ε_a, λ_a], and
φ_a^ε(x_a) < f_a(x_a), ∀x_a ∈ (0, ε_a) (see Figure 3-1). Consider the following concave
piecewise linear network flow problem
CPLNF(ε):
min φ^ε(x) = Σ_{a∈A} φ_a^ε(x_a)
s.t. Bx = b,
x_a ∈ [0, λ_a], ∀a ∈ A,
where ε denotes the vector of ε_a. The function φ^ε(x) as well as problem CPLNF(ε)
depend on the vector ε. In the discussion below, we refer to φ^ε(x) and CPLNF(ε)
as ε-approximations of the function f(x) and the FCNF problem, respectively.
Denote by x* and x^ε the optimal solutions of the FCNF and the CPLNF(ε)
problems, respectively. Let V represent the set of vertices of the feasible region,
and δ = min{x_a | x ∈ V, a ∈ A, x_a > 0}. That is, δ is the minimum among all
positive components of all vectors x ∈ V; therefore, δ > 0.
Theorem 3.2.1. For all ε such that ε_a ∈ (0, λ_a] for all a ∈ A, φ^ε(x^ε) ≤ f(x*).
Proof: Notice that φ_a^ε(x_a^*) ≤ f_a(x_a^*), ∀a ∈ A; therefore, φ^ε(x*) ≤ f(x*). Since
x^ε = argmin{CPLNF(ε)}, the statement of the theorem follows. □
Theorem 3.2.2. For all ε such that ε_a ∈ (0, δ] for all a ∈ A, φ^ε(x^ε) = f(x*).
Proof: To prove the theorem by contradiction, assume that φ^ε(x^ε) < f(x*).
Observe that CPLNF(ε) is a concave minimization problem; therefore, a solution
of the problem is attained at a vertex of the feasible region. From the latter it follows
that x_a^ε ≥ δ ≥ ε_a or x_a^ε = 0, ∀a ∈ A. As a result, φ_a^ε(x_a^ε) = f_a(x_a^ε), ∀a ∈ A, and
f(x^ε) = φ^ε(x^ε) < f(x*). The latter contradicts the optimality of x*, and one
concludes that φ^ε(x^ε) = f(x*). □
From Theorem 3.2.1 it follows that for all ε such that ε_a ∈ (0, λ_a] for all a ∈ A,
CPLNF(ε) provides a lower bound for the FCNF problem. In addition, Theorem
3.2.2 ensures that by choosing a sufficiently small ε > 0 (e.g., ε_a = δ, ∀a ∈ A),
both problems have the same solution; therefore, FCNF is equivalent to a concave
piecewise linear network flow problem.
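A quick numeric check of the ε-approximation on a single arc (the values of c_a, s_a, λ_a, and ε_a are illustrative):

```python
c, s, lam, eps = 2.0, 5.0, 10.0, 1.0
cbar = c + s / eps                      # slope of the underestimating line

def f(x):                               # fixed charge arc cost
    return c * x + s if x > 0 else 0.0

def phi(x):                             # two-piece concave approximation
    return c * x + s if x >= eps else cbar * x

assert phi(0.0) == f(0.0) == 0.0
assert all(phi(x) == f(x) for x in [eps, 4.0, lam])   # agree on {0} U [eps, lam]
assert all(phi(x) < f(x) for x in [0.2, 0.5, 0.9])    # strictly below on (0, eps)
```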
3.3 Adaptive Dynamic Cost Updating Procedure
In this section we discuss an algorithm for finding a solution of the fixed
charge network flow problem. As we have shown in the previous section, the
problem is equivalent to CPLNF(ε), where ε is such that ε_a ∈ (0, δ] for all a ∈ A.
However, it is computationally expensive to find the value of δ. Instead, we propose
solving a sequence of CPLNF(ε) problems by gradually decreasing the values of ε_a.
Consider Procedure 3. In Step 1, the procedure assigns initial values to the
ε_a. Step 2 solves the resulting CPLNF problem. Notice that CPLNF(ε^1) is
indeed a linear problem, because [ε_a^1, λ_a] = {λ_a}. If ∃a ∈ A such that x_a^m ∈
(0, ε_a^m), we decrease the value of ε_a^m to σε_a^m, where σ is a constant from the open
interval (0, 1). Assume that the procedure stops at iteration k, and let x^k =
argmin{CPLNF(ε^k)}.
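Procedure 3 can be sketched on a hypothetical two-arc instance in which the concave subproblem in Step 2 is solved exactly by comparing the two vertices of the feasible segment (all demand on one arc). The arc data and the shrinking constant (named sigma here, 0.5 as in the numerical experiments) are illustrative.

```python
# (c_a, s_a, lambda_a) per arc; two parallel arcs, total demand 4
arcs = {"a1": (1.0, 8.0, 5.0), "a2": (2.0, 1.0, 5.0)}
demand, sigma = 4.0, 0.5

def phi(a, x, eps):
    """Two-piece underestimator: f_a on [eps_a, lambda_a], line through 0 below."""
    c, s, _ = arcs[a]
    return c * x + s if x >= eps[a] else (c + s / eps[a]) * x

def solve_cplnf(eps):
    """Exact solve: the concave objective attains its minimum at a vertex of
    the feasible segment, i.e., all demand on one arc."""
    candidates = [{"a1": demand, "a2": 0.0}, {"a1": 0.0, "a2": demand}]
    return min(candidates,
               key=lambda x: sum(phi(a, x[a], eps) for a in arcs))

eps = {a: arcs[a][2] for a in arcs}     # Step 1: eps^1 = lambda_a
while True:
    x = solve_cplnf(eps)                # Step 2
    if not any(0.0 < x[a] < eps[a] for a in arcs):
        break                           # Step 3: stop
    eps = {a: sigma * eps[a] for a in arcs}

assert x == {"a1": 0.0, "a2": 4.0}      # the FCNF optimum (cost 9 < 12)
```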
Lemma 3.3.1. For all ε such that 0 < ε_a ≤ ε_a^k, ∀a ∈ A, problems CPLNF(ε) and
CPLNF(ε^k) have the same set of optimal solutions.
Proof: Let x^ε = argmin{CPLNF(ε)}. Consider the following sequence of
inequalities:
φ^ε(x^k) ≥ φ^ε(x^ε) ≥ φ^{ε^k}(x^ε) ≥ φ^{ε^k}(x^k). (3-1)
The first and the third inequalities follow from the optimality of x^ε and x^k in the
CPLNF(ε) and the CPLNF(ε^k) problems, respectively. Since ε_a ≤ ε_a^k, ∀a ∈ A, from
Procedure 3
Step 1: Let ε_a^1 = λ_a. m ← 1.
Step 2: Solve problem CPLNF(ε^m) and let x^m = argmin{CPLNF(ε^m)}.
Step 3: If ∃a ∈ A such that x_a^m ∈ (0, ε_a^m) then ε_a^{m+1} ← σε_a^m, m ← m + 1, and go
to Step 2. Otherwise, stop.
Figure 3-2. φ_a^ε(x_a) and φ_a^{ε^k}(x_a) functions.
the definition of the ε-approximation it follows that φ_a^ε(x_a) ≥ φ_a^{ε^k}(x_a), ∀a ∈ A and
x_a ∈ [0, λ_a] (see Figure 3-2), and the second inequality follows.
Observe that because of the stopping criteria, x_a^k = 0 or x_a^k ∈ [ε_a^k, λ_a].
Since ε_a ≤ ε_a^k, φ_a^ε(x_a^k) = φ_a^{ε^k}(x_a^k), ∀a ∈ A and x_a^k ∈ {0} ∪ [ε_a^k, λ_a]; therefore,
φ^ε(x^k) = φ^{ε^k}(x^k). The latter together with (3-1) ensures that φ^ε(x^ε) = φ^{ε^k}(x^k).
Since both problems, CPLNF(ε) and CPLNF(ε^k), have the same objective function
value at x^ε and x^k, one concludes that they are equivalent. □
Theorem 3.3.1. A solution of CPLNF(ε^k) is a solution of the FCNF problem.
Proof: From Lemma 3.3.1 it follows that ∀ε such that 0 < ε_a ≤ ε_a^k, ∀a ∈ A, the
CPLNF(ε) and the CPLNF(ε^k) problems have the same set of solutions. On the
other hand, by choosing 0 < ε_a ≤ min{ε_a^k, δ}, CPLNF(ε) is equivalent to the FCNF
problem (see Theorem 3.2.2), and the statement of the theorem follows. □
From Theorem 3.3.1 follows the consistency of the stopping criterion in
the sense that an optimal solution of the resulting problem, CPLNF(ε^k), is an
optimal solution of the FCNF problem. Observe that Procedure 3 requires solving
a sequence of concave piecewise linear network flow problems, which are NP-hard.
To overcome this difficulty, one considers a bilinear relaxation technique to solve
the CPLNF problems (see Chapter 2). In Section 2.2, we have shown that the
CPLNF problem is equivalent to a bilinear program. To solve the bilinear problem,
we propose the dynamic cost updating procedure (DCUP), which finds a local
minimum of the problem and can be used in Step 2 of Procedure 3 to find a
solution of the CPLNF(ε^m) problem. The resulting algorithm is summarized in
Procedure 4, which we refer to as the adaptive dynamic cost updating procedure
(ADCUP). Below we provide the mathematical formulation of the bilinear problem,
which is equivalent to CPLNF(ε^m). For details on the formulation of the problem,
finite convergence, and other properties of the DCUP we refer to Chapter 2.
Problem CPLNFR(ε^m) is defined by:

min_{x,y} Σ_{a∈A} [c̄_a y_a^1 + c_a y_a^2] x_a + Σ_{a∈A} s_a y_a^2
s.t. Bx = b,
ε_a^m y_a^2 ≤ x_a ≤ ε_a^m y_a^1 + λ_a y_a^2, ∀a ∈ A,
y_a^1 + y_a^2 = 1, ∀a ∈ A,
x_a ≥ 0, y_a^1 ≥ 0, and y_a^2 ≥ 0, ∀a ∈ A.
The ADCUP is a heuristic algorithm to find a solution of the FCNF problem.
Note that the choice of σ has a direct influence on the CPU time of the procedure
as well as the quality of the solution. In particular, if the value of σ is close
to one, the value of ε decreases slowly and the procedure requires a large
number of iterations to stop. On the other hand, small values of the parameter can
worsen the quality of the solution. In the numerical experiments, we use σ = 0.5,
because in our test problems the procedure with that parameter provides a fairly
high-quality solution using 1-4 seconds of CPU time (see Section 3.5).
Procedure 4: Adaptive Dynamic Cost Updating Procedure (ADCUP)
Step 1: Let ε_a^1 = λ_a, y_a^1 = 0, and y_a^2 = 1. m ← 1.
Step 2: Run DCUP to solve the CPLNFR(ε^m) problem with initial vector
y^{m-1}. Let (x^m, y^m) be the solution that is returned by DCUP.
Step 3: If ∃a ∈ A such that x_a^m ∈ (0, ε_a^m) then ε_a^{m+1} ← σε_a^m, m ← m + 1, and go
to Step 2. Otherwise, stop.
3.4 On the Dynamic Slope Scaling Procedure
This section discusses some properties of the dynamic slope scaling procedure
proposed by Kim and Pardalos [61] (see also Kim and Pardalos [62]). In the paper,
the authors discuss four variations of DSSP based on the choice of the initial vector
and the slope updating scheme. However, regardless of the initial vector and the
updating scheme, DSSP provides an equilibrium type of solution. To prove the
statement, we first transform FCNF into an alternative problem then prove that
the solution provided by DSSP is a solution of a variational inequality problem
constructed based on the new formulation. The theoretical results provided below
are very similar to those in Chapter 2, where we have shown that the property
holds for the concave piecewise linear network flow problem.
Let

F_a(x_a) =
  c_a + s_a/x_a,   x_a ∈ (0, λ_a],
  M,   x_a = 0,
where M is a sufficiently large number. Consider the following network flow
problem with flow-dependent cost functions F_a(x_a).
NFPwFDCF:
min F^T(x) x
s.t. Bx = b, (3-2)
x_a ∈ [0, λ_a], ∀a ∈ A, (3-3)
where F(x) is the vector of functions F_a(x_a).
Theorem 3.4.1. The NFPwFDCF problem is equivalent to the FCNF problem.
Proof: (See the proof of Theorem 2.5.1, Chapter 2.) □
Let NFPwFDCF(F) denote the NFPwFDCF problem, where the vector
function F(x) is fixed to the value of the vector F. In the first step, DSSP
solves the NFPwFDCF(F) problem with an initial vector F = F^0. Let x^m =
argmin{NFPwFDCF(F^m)}. Next, DSSP iteratively updates the value of the
vector F using the solution x^m, i.e., F_a^{m+1} = F_a(x_a^m), and solves the resulting
NFPwFDCF(F^{m+1}) problem. The procedure stops if x^{m+1} = x^m. In Kim and
Pardalos [61], the authors propose different updating schemes, where they replace
M by a smaller value. However, the next theorem proves that regardless of the
initial vector F^0 and the updating scheme, the final solution provided by DSSP is a
solution of a variational inequality problem.
Theorem 3.4.2. The solution of DSSP is the solution of the following variational
inequality problem:
find x* feasible to (3-2) and (3-3) such that F^T(x*)(x - x*) ≥ 0, ∀x feasible to
(3-2) and (3-3).
Proof: Assume that DSSP stops on iteration k, and let x* = argmin{(F^k)^T x | Bx =
b, x_a ∈ [0, λ_a]}. From the stopping criteria it follows that x* = x^k. If x_a^* > 0
then F_a^k = F_a(x_a^*). On the other hand, if x_a^* = 0 then one can replace F_a^k
by a sufficiently large M without changing the optimality of x*. As a result,
x* = argmin{F^T(x*) x | Bx = b, x_a ∈ [0, λ_a]}. From the latter it follows that
F^T(x*)(x - x*) ≥ 0 for all feasible x. □
From Theorem 3.4.2 it follows that the solution of DSSP, x*, satisfies the
inequality F^T(x*) x ≥ F^T(x*) x*, ∀x feasible to (3-2) and (3-3), i.e., x* is an
equilibrium solution. However, since NFPwFDCF is equivalent to the FCNF
(see Theorem 3.4.1), one is interested in finding a feasible x̄ such that F^T(x) x ≥
F^T(x̄) x̄, ∀x feasible to (3-2) and (3-3), i.e., x̄ is a system optimum solution. Notice
that the equilibrium and the system optimum solutions may not be the same,
unless F_a(x_a) is constant.
3.5 Numerical Experiments
This section discusses numerical experiments on the adaptive dynamic cost
updating procedure. We solve all problem sets using ADCUP as well as DSSP (see
Kim and Pardalos [61]). To compare the results of the heuristic procedures, in the
case of small problems we find an exact solution using CPLEX MIP solver and
compute relative errors. In the case of large problems, CPLEX is not able to find
an exact solution within 5,000 seconds of CPU time; therefore, we compare the
results of ADCUP versus DSSP.
In the experiments, we solve problems using all four variations of DSSP and
choose the best solution to compare with the solution provided by the ADCUP.
In addition to the final solution (the solution that the algorithm returns when it
stops), during the iterative process DSSP as well as ADCUP construct feasible
vectors that might have a better objective function value. In the procedures, we
record those vectors and choose the best one. The comparison between the best
solutions of both algorithms is also provided. With regard to the computational
time, we compare the CPU time of ADCUP versus the best CPU time among four
variations of DSSP.
There are four groups of test problems based on the size of the network and
the number of supply/demand nodes (see Table B1, Appendix B). For each group,
we construct different types of functions f_a(x_a), where the slope and the fixed cost
are generated randomly according to the specified distributions. In total, there
are nine sets of problems (nine types of function f_a(x_a)) for each group, i.e., one
set of problems for each choice of distribution for the slope and the fixed cost.
There are 30 randomly generated problems for each problem set. The components
of the supply/demand vector are generated uniformly between 30 and 50 units.
The model is constructed using the GAMS environment and solved by CPLEX
9.0. Computations were made on a Unix machine with dual Pentium 4 3.2GHz
processors and 6GB of memory. All results are tabulated in the Appendix B.
Tables B2 and B3 illustrate the computational results for groups G1 and G2.
Since the size of those problems is not big, we have solved the problems exactly
using the CPLEX MIP solver. The relative errors are computed based on the
following formulas:

RE_ADCUP = (f_ADCUP - f_exact) / f_exact,
RE_DSSP = (f_DSSP - f_exact) / f_exact.
From the results it follows that on average ADCUP provides a better solution
than DSSP using less CPU time. Notice that ADCUP outperforms DSSP in the
comparison of the final solutions as well as the best solutions. Although there
are some problems where DSSP provides a better solution than ADCUP, the
percentage of those problems is fairly small and decreases with the increase of the
size of the network. In addition, observe that the quality of the solutions provided
by both algorithms changes with the choice of the distributions for the slopes and
the fixed costs. In particular, both algorithms provide a near optimum solution
for the problems with a larger slope and smaller fixed cost. Although the relative
error of both algorithms increases with the decrease of the slope and the increase of
the fixed cost, observe that the quality of the solutions provided by DSSP changes
more than those provided by ADCUP.
In the case of groups G3 and G4, we compare ADCUP versus DSSP using the
following formula:

    RE = (f_DSSP − f_ADCUP) / min{f_DSSP, f_ADCUP}.
The computational results on those groups are summarized in Table B-4. Similar
to the previous two groups, one observes that on average ADCUP provides a
better solution than DSSP. Notice that DSSP consumes much more CPU time
before termination than ADCUP. In addition, the percentage of problems where
ADCUP provides a better solution than DSSP is higher than in the previous cases.
Similar to groups G1 and G2, the difference between the solutions provided by
both algorithms is small for the problem sets with a larger slope and smaller fixed
cost. When the slope decreases (or the fixed cost increases), ADCUP provides a
perceptibly better solution than DSSP.
3.6 Concluding Remarks
Unlike other models in the literature, we consider concave piecewise linear
network flow problems to solve fixed charge network flow problems. A proper
choice of the parameter guarantees the equivalence between the CPLNF(ε) and
the FCNF problems. Based on the theoretical results, we propose a solution
algorithm for the FCNF problem, where it is required to solve a sequence of
CPLNF(ε) problems. To find a solution of CPLNF(ε), the algorithm employs the
dynamic cost updating procedure. The computational results show that ADCUP
outperforms DSSP in the quality of the solution as well as in CPU time. Although
in the computations we choose α = 0.5, one can use a higher value in an attempt
to improve the quality of the solution. Note that a large value of the parameter
increases the CPU time of the procedure. Although ADCUP often provides an
exact solution, optimality is not guaranteed because DCUP converges to a local
minimum of the bilinear relaxation problem.
In the numerical experiments, we have shown that the relative error of the
solutions of both procedures increases in the cases of small slopes and large fixed
costs. To explain this phenomenon, observe that by decreasing the value of the slope,
the angle between the function f_a(x_a) and the first linear piece of its concave
piecewise linear approximation increases (see Figure 3-1). As a result, the approximation
does not match the function f_a(x_a) as well as in the case of large variable costs.
The same discussion applies to the case of a large fixed cost.
CHAPTER 4
A BILINEAR REDUCTION BASED ALGORITHM FOR SOLVING
CAPACITATED MULTI-ITEM DYNAMIC PRICING PROBLEMS
4.1 Introduction to the Chapter
Supply chain problems with fixed costs and production planning problems
involving lot-sizing have been active research topics during recent decades. Many
research papers have addressed single-item problems with additional important
features such as backlogging, constant and varying capacities, and different
cost functions (see, e.g., Gilbert [43], Florian and Klein [36], van Hoesel and
Wagelmans [53], Loparic et al. [70], and Loparic et al. [71]). It is well known that
uncapacitated problems can be reduced to a shortest path problem. Florian and
Klein [36] studied capacitated single-item problems, where they characterized
the optimal solution and proposed a simple dynamic programming algorithm for
problems in which the capacities are the same in every period. The single-item
problems with varying capacities are known to be NP-hard. A classification
of different problems and a survey on the existence of polynomial algorithms for
different problem classes can be found in Wolsey [100] and Pochet and
Wolsey [88]. Tight formulations for polynomially solvable problems are discussed in
Miller and Wolsey [75] and Pochet and Wolsey [88].
Almost all practical problems involve multiple items, machines and/or levels,
and polynomial results for those problems are limited. Using binary variables,
one can construct a mixed integer linear programming (MIP) formulation
of the problem with an embedded network structure. To solve the problem,
branch-and-bound and cutting plane algorithms have been used (see, e.g., Barr
et al. [4], Cabot and Erenguc [12], Gray [44], Kennington and Unger [59], Palekar
et al. [83], Marchand et al. [72], and Wolsey [100]). In addition, several heuristic
algorithms have been proposed (see, e.g., Cooper and Drebes [27], Diaby [31],
Khang and Fujiwara [60], Kuhn and Baumol [63], van Hoesel and Wagelmans [52],
Kim and Pardalos [61] and [62], and Nahapetyan and Pardalos [79] and [80]).
In this chapter we discuss a capacitated multi-item dynamic pricing (CMDP)
problem where one maximizes the profit by choosing a proper production level as
well as a pricing policy for each product. In the problem, the demand is a decision
variable, and in order to satisfy a higher demand one needs to reduce the price of
the product. On the other hand, reducing the price can decrease the revenue, which
is the product of the demand and the price. In addition, the problem includes
an inventory and production cost for each product, where the latter involves a
setup cost. The objective of the problem is to find an optimal production strategy,
which maximizes the profit subject to production capacities that are "shared"
by the products. Different variations of a single-item uncapacitated problem
with deterministic demands are discussed by Gilbert [43], Loparic et al. [71], and
Thomas [96]. A capacitated single-item problem with time-invariant capacities is
discussed in Geunes et al. [42]. The polynomial algorithms proposed by the authors
are based on the corresponding results for the lot-sizing problems.
In Chapters 2 and 3 (see also Nahapetyan and Pardalos [79] and [80]) we have
proposed a bilinear reduction technique, which can be used to find an approximate
solution of concave piecewise linear and fixed charge network flow problems. A
similar technique is proposed here to solve the CMDP problem. In particular, we
consider a bilinear reduction of the problem and prove that solving the
CMDP problem is equivalent to finding a global maximum of the bilinear problem.
The latter belongs to the class of bilinear problems with a disjoint feasible region,
and we consider a heuristic algorithm to find a solution of the problem. The
heuristic algorithm employs a well-known iterative procedure for finding a local
maximum of the problem (see, e.g., Horst et al. [54] and Horst and Tuy [55]).
Numerical experiments on randomly generated problems confirm the efficiency of
the algorithm.
For the remainder of the chapter, Section 4.2 provides a linear mixed integer
formulation of the problem and discusses a bilinear reduction of the problem. We
prove that solving the CMDP problem is equivalent to finding a global maximum
of the bilinear reduction. In Section 4.3 we propose a heuristic algorithm for solving
the bilinear problem. Numerical experiments on the algorithm are provided in
Section 4.4, and finally, Section 4.5 concludes the chapter.
4.2 Problem Description
In this section we provide a nonlinear mixed integer formulation of the
problem. Using some standard linearization techniques, the problem can be
simplified. To solve the problem, we propose a bilinear reduction technique and
prove some properties of the bilinear problem.
Let P and A represent the set of products and the set of discrete time periods,
respectively. In addition, let f_(p,j)(d) denote the price of product p at time j as a
function of the demand d, and let g_(p,j)(d) = f_(p,j)(d)·d, i.e., g_(p,j)(d) represents the
revenue obtained from selling d units of product p at time j. In the problem, we assume
that f_(p,j)(d) and g_(p,j)(d) are nonincreasing and concave functions, respectively
(see Figure 4-1). If f_(p,j)(d) is a concave function, then it is easy to show that
the concavity of g_(p,j)(d) follows.
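The relationship between a linear price function and its concave quadratic revenue can be sketched in a few lines. This is a minimal illustration; the parameter values are hypothetical and are not taken from the experiments, and `d_zero` here denotes the demand at which the price reaches zero (not the revenue maximizer d^max_(p,j) of the text).

```python
# Illustrative linear price function and its revenue g(d) = f(d)*d.
# f_max (maximum price) and d_zero (zero-price demand) are hypothetical.
def price(d, f_max=80.0, d_zero=750.0):
    """Linear, nonincreasing price: f(d) = f_max - (f_max / d_zero) * d."""
    return f_max - (f_max / d_zero) * d

def revenue(d, f_max=80.0, d_zero=750.0):
    """Revenue g(d) = f(d) * d, a concave quadratic in d."""
    return price(d, f_max, d_zero) * d

# For a linear price, the revenue peaks at half the zero-price demand:
d_star = max(range(751), key=revenue)  # 375
```

Because the revenue is concave with an interior maximizer, producing and selling beyond that point is never profitable, which is the observation the linearization later in this section exploits.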
Let x_(p,i,j) denote the amount of product p that is produced at time i to satisfy
the demand at time j, and let y_(p,i) represent a binary variable, which equals one
if product p is produced at time i and zero otherwise. Assume that the inventory
costs, c^in_(p,i,j), production costs, c^pr_(p,i), and setup costs, c^st_(p,i), as well as the
production capacities, C_i, are given, where p, i, and j represent the product, the production time,
Figure 4-1. The price and the revenue functions: a linear price function f_(p,j)(d)
with maximum price f^max_(p,j), and the corresponding quadratic revenue function
g_(p,j)(d), both shown on the interval [0, d^max_(p,j)].
and the selling time, respectively. The following is the mathematical formulation of the
CMDP problem.

CMDP:

    max_{x,y}  Σ_{p∈P} Σ_{j∈A} g_(p,j)(Σ_{i∈A|i≤j} x_(p,i,j)) − Σ_{p∈P} Σ_{i,j∈A|i≤j} (c^in_(p,i,j) + c^pr_(p,i)) x_(p,i,j) − Σ_{p∈P} Σ_{i∈A} c^st_(p,i) y_(p,i)

    s.t.  Σ_{p∈P} Σ_{j∈A|j≥i} x_(p,i,j) ≤ C_i,  ∀i ∈ A,

          Σ_{j∈A|j≥i} x_(p,i,j) ≤ C_i y_(p,i),  ∀p ∈ P and i ∈ A,

          x_(p,i,j) ≥ 0,  y_(p,i) ∈ {0, 1},  ∀p ∈ P and i, j ∈ A.
The objective function of the problem maximizes the profit, which is the
difference between the revenue and the costs. The latter includes the inventory
as well as the production costs. The first constraint ensures the satisfaction of
the capacity restrictions, and the second one makes sure that y_(p,i) equals one if
Σ_{j∈A|j≥i} x_(p,i,j) > 0.
Although the above formulation belongs to the class of nonlinear mixed integer
programs, using standard techniques one can approximate the revenue function by
a piecewise linear one and linearize the objective function. Doing so, observe that
from the concavity of the revenue function it follows that there exists a point, d^max_(p,j),
where the function reaches its maximum (see Figure 4-1). As a result, producing
and selling more than d^max_(p,j) is not profitable, and at optimality Σ_{i∈A|i≤j} x_(p,i,j) ≤
d^max_(p,j). To linearize the revenue function, divide [0, d^max_(p,j)] into N intervals of equal
length. Let d^k_(p,j) denote the end points of the intervals, i.e., d^k_(p,j) = k·d^max_(p,j)/N,
∀k ∈ {1, ..., N} ∪ {0} = K ∪ {0}, and let g^k_(p,j) represent the value of the revenue
function at the point d^k_(p,j), i.e., g^k_(p,j) = g_(p,j)(d^k_(p,j)). Using those
parameters, construct the function

    g̃_(p,j)(λ_(p,j)) = Σ_{k=0}^{N} g^k_(p,j) λ^k_(p,j),

where it is required that

    Σ_{i∈A|i≤j} x_(p,i,j) = Σ_{k=0}^{N} d^k_(p,j) λ^k_(p,j),  ∀p ∈ P and j ∈ A,

    Σ_{k=0}^{N} λ^k_(p,j) = 1,  λ^k_(p,j) ≥ 0,  ∀p ∈ P and j ∈ A,
and λ^k_(p,j) ≠ 0 for at most two consecutive indices k. Observe that g_(p,j)(d) is a
concave nondecreasing function on the interval [0, d^max_(p,j)]; therefore, its piecewise
linear approximation, i.e., g̃_(p,j)(λ_(p,j)), preserves the same property. From the
latter, it follows that in the maximization problem the requirement on λ^k_(p,j) being
positive for at most two consecutive indices k can be removed from the formulation,
and it is satisfied at optimality (for details see Chapter 11 of Bazaraa et al. [5]). The
following is the mathematical formulation of the approximation problem:
    max_{x,y,λ}  Σ_{p∈P} Σ_{j∈A} Σ_{k∈K} g^k_(p,j) λ^k_(p,j) − Σ_{p∈P} Σ_{i,j∈A|i≤j} (c^in_(p,i,j) + c^pr_(p,i)) x_(p,i,j) − Σ_{p∈P} Σ_{i∈A} c^st_(p,i) y_(p,i)

    s.t.  Σ_{p∈P} Σ_{j∈A|j≥i} x_(p,i,j) ≤ C_i,  ∀i ∈ A,

          Σ_{j∈A|j≥i} x_(p,i,j) ≤ C_i y_(p,i),  ∀p ∈ P and i ∈ A,

          Σ_{i∈A|i≤j} x_(p,i,j) = Σ_{k=0}^{N} d^k_(p,j) λ^k_(p,j),  ∀p ∈ P and j ∈ A,

          Σ_{k=0}^{N} λ^k_(p,j) = 1,  ∀p ∈ P and j ∈ A,

          x_(p,i,j) ≥ 0,  λ^k_(p,j) ≥ 0,  y_(p,i) ∈ {0, 1},  ∀p ∈ P, i, j ∈ A and k ∈ K ∪ {0}.
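The piecewise-linear construction above can be sanity-checked in a few lines. The data below are hypothetical; λ-weights that are nonzero on at most two consecutive breakpoints reduce to linear interpolation between neighboring breakpoints.

```python
def breakpoints(g, D, N):
    """End points d_k = k*D/N and values g_k = g(d_k), k = 0..N."""
    ds = [k * D / N for k in range(N + 1)]
    return ds, [g(d) for d in ds]

def approx(d, ds, gs):
    """Evaluate the piecewise-linear approximation at d: the lambda-weights
    with at most two consecutive nonzeros amount to interpolating between
    the two neighboring breakpoints."""
    for k in range(len(ds) - 1):
        if ds[k] <= d <= ds[k + 1]:
            lam = (d - ds[k]) / (ds[k + 1] - ds[k])
            return (1 - lam) * gs[k] + lam * gs[k + 1]
    raise ValueError("d outside [0, D]")

# Hypothetical concave revenue, with N = 10 as in the experiments of this
# chapter:
g = lambda d: (80.0 - 80.0 / 750.0 * d) * d
ds, gs = breakpoints(g, D=750.0, N=10)
# A chord never overestimates a concave function:
assert all(approx(d, ds, gs) <= g(d) + 1e-9 for d in range(751))
```

The check at the end illustrates why the approximation preserves concavity: between breakpoints the interpolant is a chord, which lies on or below the concave revenue curve and matches it exactly at every breakpoint.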
Next, we simplify the formulation using nonnegative variables x^k_(p,i,j), k ∈ K,
instead of x_(p,i,j), where x^k_(p,i,j) represents the amount of product p that is produced
at time i and sold at time j using the unit price g^k_(p,j)/d^k_(p,j).
Doing so, the third constraint in the above formulation can be replaced by the
following one:

    Σ_{i∈A|i≤j} x^k_(p,i,j) = d^k_(p,j) λ^k_(p,j),  ∀p ∈ P, j ∈ A and k ∈ K.

The latter can be used to remove the variables λ^k_(p,j) from the formulation. In
particular, the fourth constraint is replaced by the inequalities

    Σ_{k∈K} Σ_{i∈A|i≤j} x^k_(p,i,j)/d^k_(p,j) ≤ 1,  ∀p ∈ P and j ∈ A,

and in the objective

    g^k_(p,j) λ^k_(p,j) = f̃^k_(p,j) Σ_{i∈A|i≤j} x^k_(p,i,j),  ∀p ∈ P, j ∈ A and k ∈ K,

where f̃^k_(p,j) = g^k_(p,j)/d^k_(p,j). The following is the resulting alternative formulation of
the approximation problem, which we refer to as ACMDP:
ACMDP:

    max_{x,y}  Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) y_(p,i) ]

    s.t.  Σ_{p∈P} Σ_{j∈A|i≤j} Σ_{k∈K} x^k_(p,i,j) ≤ C_i,  ∀i ∈ A,                      (4-1)

          Σ_{j∈A|i≤j} Σ_{k∈K} x^k_(p,i,j) ≤ C_i y_(p,i),  ∀p ∈ P and i ∈ A,            (4-2)

          Σ_{k∈K} Σ_{i∈A|i≤j} x^k_(p,i,j)/d^k_(p,j) ≤ 1,  ∀p ∈ P and j ∈ A,            (4-3)

          x^k_(p,i,j) ≥ 0,  y_(p,i) ∈ {0, 1},  ∀p ∈ P, i, j ∈ A and k ∈ K,             (4-4)

where q^k_(p,i,j) = f̃^k_(p,j) − c^in_(p,i,j) − c^pr_(p,i). Observe that at optimality x^k_(p,i,j) = 0 for
all indices such that q^k_(p,i,j) < 0, and those variables can be removed from the
formulation. Therefore, without loss of generality, in the analysis below we assume
that q^k_(p,i,j) > 0.
Define X = {x | x ≥ 0 and x^k_(p,i,j) satisfy (4-1) and (4-3)} and
Y = [0, 1]^{|P|·|A|}. Consider the following bilinear program:

ACMDP-B:

    max  φ(x, y) = Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) ] y_(p,i)

    s.t.  x ∈ X and y ∈ Y.
Theorem 4.2.1. Any local maximum of the ACMDP-B problem is feasible or can
provide a feasible solution of the ACMDP problem with the same objective function
value.
Proof: Let (x*, y*) denote a local maximum of the ACMDP-B problem. Observe
that by fixing x to the value of the vector x*, the ACMDP-B problem reduces to
a linear one, and y* is an optimal solution of the resulting problem. Assume that
∃p ∈ P and i ∈ A such that y*_(p,i) ∈ (0, 1), i.e., y*_(p,i) is a fractional number. If
Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) < 0 or Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) > 0,
then it is possible to improve the objective function value by assigning y*_(p,i) = 0
or y*_(p,i) = 1, respectively. The latter contradicts the optimality of (x*, y*). On the
other hand, if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) = 0, then by changing the value of
the variable y*_(p,i) to zero the objective function value remains the same. Construct
a vector ŷ, where ŷ_(p,i) = ⌈y*_(p,i)⌉. Note that (x*, ŷ) is feasible to constraints (4-1),
(4-3) and (4-4). If (x*, ŷ) violates constraint (4-2), then ∃p ∈ P and i ∈ A such
that Σ_{j∈A|i≤j} Σ_{k∈K} x*^k_(p,i,j) > 0 and ŷ_(p,i) = 0. From the local optimality of (x*, y*)
it follows that Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) ≤ 0, i.e., it is not profitable to
produce product p at time i. Furthermore, by assigning x*^k_(p,i,j) = 0, ∀j ∈ A and
k ∈ K, the objective function value of the ACMDP-B problem remains the same.
Let x̂ denote the resulting vector, i.e.,

    x̂^k_(p,i,j) = 0,          if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) ≤ 0,
    x̂^k_(p,i,j) = x*^k_(p,i,j), if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) > 0.

The vector (x̂, ŷ) is feasible to the ACMDP as well as the ACMDP-B problem and
has the same objective function value as (x*, y*). ■
Theorem 4.2.2. A global maximum of the ACMDP-B problem is a solution or can
be used to construct a solution of the ACMDP problem.
Proof: Observe that any feasible solution of the ACMDP problem is feasible to the
ACMDP-B problem. Furthermore, if (x, y) is feasible to the ACMDP problem, then

    Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) ] y_(p,i) = Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) y_(p,i) ].

From the latter it follows that the ACMDP-B problem can be obtained from
the ACMDP problem by replacing the objective function by

    Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) ] y_(p,i),

removing constraint (4-2) and relaxing the integrality of the variables y_(p,i). In
other words, the ACMDP-B problem is a relaxation of the ACMDP problem. From
Theorem 4.2.1 it follows that a global solution of ACMDP-B is a solution (or leads
to a solution) of the ACMDP problem. ■
The above two theorems prove that solving the ACMDP problem is equivalent
to finding a global maximum of the ACMDP-B problem. In particular, one can
solve the ACMDP-B problem, and if the solution is not feasible to the ACMDP
problem, then use the method described in the proof of Theorem 4.2.1 to construct
a feasible one with the same objective function value.
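The repair step from the proof of Theorem 4.2.1 can be sketched as follows. The nested-dictionary data layout and the function name are assumptions made for this sketch, not the dissertation's implementation.

```python
import math

def repair(x, y, q, c_st):
    """Feasibility repair from the proof of Theorem 4.2.1 (hypothetical
    layout: x[p][i] and q[p][i] are flow/coefficient lists over (j, k),
    c_st[p][i] is a setup cost, y[p][i] a value in [0, 1])."""
    # Round any fractional y upward: y_hat = ceil(y*).
    y_hat = {p: {i: math.ceil(v) for i, v in yi.items()} for p, yi in y.items()}
    x_hat = {}
    for p in x:
        x_hat[p] = {}
        for i in x[p]:
            # Profit term of pair (p, i): sum of q*x minus the setup cost.
            profit = sum(qv * xv for qv, xv in zip(q[p][i], x[p][i])) - c_st[p][i]
            # Unprofitable pairs get their flow zeroed; at a local maximum
            # this leaves the objective value unchanged.
            x_hat[p][i] = list(x[p][i]) if profit > 0 else [0.0] * len(x[p][i])
    return x_hat, y_hat
```

Applied to a local maximum of ACMDP-B, the returned pair satisfies constraint (4-2) as well, since every pair with zeroed flow may keep y_(p,i) = 0.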
4.3 A Bilinear Reduction Based Algorithm for Solving the ACMDP
Problem
In this section we discuss a heuristic algorithm for solving the ACMDP-B
problem, which employs a well-known iterative procedure for finding a local
maximum of the ACMDP-B problem.
Observe that the problem belongs to the class of bilinear programs. By fixing
the vector x or y to a particular value, the problem can be reduced to a linear one. Let
LP(x) and LP(y) denote the corresponding linear programs, i.e.,

    LP(x):  max_{y∈Y}  Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) ] y_(p,i),  and

    LP(y):  max_{x∈X}  Σ_{p∈P} Σ_{i∈A} Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) y_(p,i) x^k_(p,i,j).

Notice that the solution of LP(x) is easy to obtain. In particular,

    y_(p,i) = 0 if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) ≤ 0,  and  y_(p,i) = 1 otherwise,

is an optimal solution of the problem. Procedure 5 describes a well-known
algorithm, which starts from an initial binary vector and converges to a local
maximum of the ACMDP-B problem in a finite number of iterations (see Horst and
Tuy [55] or Horst et al. [54]).
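Procedure 5 is the classical "mountain climbing" scheme for bilinear programs. A generic sketch on a toy problem max x^T Q y over [0, 1]^n × [0, 1]^n (ignoring the problem-specific feasible set X, whose LP step would require a real solver) illustrates the alternation and its dependence on the starting vector:

```python
def climb(Q, y0, max_iter=100):
    """Alternate between the two linear subproblems until y repeats.
    Over a box, each subproblem is solved coordinate-wise by sign."""
    n = len(Q)
    y = list(y0)
    for _ in range(max_iter):
        # LP(y): with y fixed, x_i's coefficient is sum_j Q[i][j] * y[j].
        x = [1.0 if sum(Q[i][j] * y[j] for j in range(n)) > 0 else 0.0
             for i in range(n)]
        # LP(x): the symmetric closed-form step in y.
        y_new = [1.0 if sum(Q[i][j] * x[i] for i in range(n)) > 0 else 0.0
                 for j in range(n)]
        if y_new == y:  # Step 3 of Procedure 5: a local maximum
            return x, y_new
        y = y_new
    return x, y
```

With Q = [[1, -2], [-2, 1]], starting from y0 = [1, 0] the scheme stops at a global maximizer (value 1), while starting from y0 = [1, 1] it gets trapped at value 0. This sensitivity to the starting point is precisely the drawback discussed next.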
However, the procedure has the following disadvantage. Let (x^m, y^m) represent
the solution obtained at iteration m, and assume that ∃p ∈ P and i ∈ A such that
y^m_(p,i) = 0. As a result, in the LP(y^m) problem the objective coefficients of the
variables x^k_(p,i,j) equal zero for all j ∈ A and
Procedure 5:
Step 1: Let y^0 denote an initial binary vector of y_(p,i). m ← 1.
Step 2: Let x^m ∈ argmax{LP(y^{m−1})}, and y^m ∈ argmax{LP(x^m)}.
Step 3: If y^m = y^{m−1} then stop. Otherwise, m ← m + 1 and go to Step 2.
k ∈ K, and perturbations of the values of those variables do not
change the objective function value. Furthermore, because the products "share"
the capacity and other products can have a positive cost in the LP(y^m) problem, it
is likely that the values of the variables x^{m,k}_(p,i,j) decrease in the next iteration. From
the latter it follows that y^{m+1}_(p,i) = 0. To summarize, if y^m_(p,i) = 0 then it is likely that
(i) y^n_(p,i) = 0, ∀n > m, and (ii) the final solution is far from being a global one.
To overcome those difficulties, next we propose an approximation problem, which
avoids having zero costs in the objective of the LP(y) problem.
Let φ_(p,i)(x_(p,i)) = Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i) and

    φ̃_(p,i)(x_(p,i)) = (ε_(p,i) / (ε_(p,i) + c^st_(p,i))) Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j),

where ε_(p,i) > 0 and x_(p,i) is the vector of the x^k_(p,i,j). It is easy to show that both
functions have the same value, ε_(p,i), on the hyperplane Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) =
ε_(p,i) + c^st_(p,i). Furthermore, φ_(p,i)(x_(p,i)) > φ̃_(p,i)(x_(p,i)) if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) >
ε_(p,i) + c^st_(p,i), and φ_(p,i)(x_(p,i)) < φ̃_(p,i)(x_(p,i)) if Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) <
ε_(p,i) + c^st_(p,i). Define

    φ^ε(x, y) = Σ_{p∈P} Σ_{i∈A} [ φ_(p,i)(x_(p,i)) y_(p,i) + φ̃_(p,i)(x_(p,i)) (1 − y_(p,i)) ],

where ε denotes the vector of the ε_(p,i). The function φ^ε(x, y) depends on the value
of the vector ε, and φ^ε(x, y) ≥ φ(x, y) for all ε > 0 and (x, y) feasible to
the ACMDP-B problem. Observe that if ε = 0 then φ̃_(p,i)(x_(p,i)) = 0, and
φ^ε(x, y) = φ(x, y). In that sense, φ^ε(x, y) is an ε-approximation of φ(x, y), and
it approximates the function from above. By replacing the objective function of
the ACMDP-B problem by the function φ^ε(x, y), we obtain the ε-approximation
of the ACMDP-B problem, which we denote by ACMDP-B(ε). Construct the
corresponding LP^ε(x) and LP^ε(y) linear problems by fixing the vectors
x and y in the ACMDP-B(ε) problem to a particular value, i.e.,
Procedure 6:
Step 1: Let ε_(p,i) be a sufficiently large number, and let y^0 be such that y^0_(p,i) = 1,
∀p ∈ P and i ∈ A. m ← 0.
Step 2: Construct the ε-approximation problem ACMDP-B(ε) and run
Procedure 5 to find a local maximum of the problem, where y^m is the initial
binary vector. Let (x^{m+1}, y^{m+1}) denote the local maximum.
Step 3: If ∃p ∈ P and i ∈ A such that Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^{m+1,k}_(p,i,j) > 0
and y^{m+1}_(p,i) = 0, then ε ← αε, m ← m + 1 and go to Step 2.
Otherwise, stop.
    LP^ε(x):  max_{y∈Y}  Σ_{p∈P} Σ_{i∈A} [ φ_(p,i)(x_(p,i)) y_(p,i) + φ̃_(p,i)(x_(p,i)) (1 − y_(p,i)) ],  and

    LP^ε(y):  max_{x∈X}  Σ_{p∈P} Σ_{i∈A} [ φ_(p,i)(x_(p,i)) y_(p,i) + φ̃_(p,i)(x_(p,i)) (1 − y_(p,i)) ].

As before, the solution of the LP^ε(x) problem is easy to obtain by assigning

    y_(p,i) = 0 if φ_(p,i)(x_(p,i)) < φ̃_(p,i)(x_(p,i)),  and  y_(p,i) = 1 otherwise.
The heuristic procedure (see Procedure 6) starts with a sufficiently large ε and
finds a local maximum of the resulting ε-approximation problem. If the stopping
criterion is not satisfied in Step 3, then it decreases the value of the vector ε to αε,
where α is a constant from the open interval (0, 1), and the process continues using
a new ε-approximation problem. Observe that Procedure 6 uses the vector y^m from
the previous iteration as an initial vector.
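The outer loop of Procedure 6 can be sketched as a skeleton. Here `local_max` and `still_profitable_off` are hypothetical callbacks standing in for Procedure 5 on ACMDP-B(ε) and for the Step 3 test, respectively; neither name comes from the dissertation.

```python
def procedure6(eps, alpha, local_max, still_profitable_off):
    """eps: dict of eps_(p,i) values; alpha in (0, 1).
    local_max(eps, y) runs Procedure 5 on ACMDP-B(eps) from start vector y
    (y = None meaning 'all ones'); still_profitable_off(x, y) is the Step 3
    test: some switched-off pair (p, i) still carries positive profit."""
    y = None
    while True:
        x, y = local_max(eps, y)            # Step 2
        if not still_profitable_off(x, y):  # Step 3: stop
            return x, y
        eps = {key: alpha * v for key, v in eps.items()}  # eps <- alpha*eps
```

The point of the skeleton is the warm start: each call to `local_max` reuses the previous y, so a product eliminated under a large ε can re-enter once ε has shrunk enough for its φ̃-cost to matter.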
The procedure depends on two parameters: the initial vector ε and the value
of α. The value of ε_(p,i) depends on the parameters of the problem, and one can
consider it equal to the maximum profit that can be obtained by producing
only product p at time i. Although such a maximization problem is easy to solve
using standard LP solvers, for large |P| and |A| it is computationally
expensive to solve the problem for all pairs (p, i) ∈ P × A. Instead, we propose
an algorithm for finding the values of ε_(p,i) (see Procedure 7). Observe that
x^k_(p,i,j) ≤ d^k_(p,j), and the maximum additional profit that can be obtained using the
variable x^k_(p,i,j) is q^k_(p,i,j) d^k_(p,j). Using this property, for all pairs (p, i) the procedure
iteratively finds the largest remaining unit profit q^k_(p,i,j) and assigns the demand
d^k_(p,j) (or the remainder of the capacity) to the corresponding variable x^k_(p,i,j). The
value of ε_(p,i) is computed based on the formula ε_(p,i) = Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i).
As for the parameter α, a larger value increases the computational time of the procedure,
but it is likely to provide a better solution.
4.4 Numerical Experiments
In this section we discuss numerical experiments conducted on randomly
generated problems. The problems are solved by Procedure 5 as well as Procedure
6 using different values for the parameter a. The latter procedure employs
Procedure 7 to find an initial value for the vector ε. In addition, we solve the
problems by the MIP solver of CPLEX using the ACMDP formulation. In the
cases where the MIP solver is not able to solve large problems within the posted
CPU and memory limitations, we compare the solutions of the procedures with the
best solutions found by CPLEX. The main purpose of the computations is to study
the performance of the procedures for different capacities.
Procedure 7:
Assign x^k_(p,i,j) = 0, ∀p ∈ P, i, j ∈ A, and k ∈ K
for all p ∈ P and i ∈ A do
    C ← C_i, q_max ← max{q^k_(p,i,j) | k ∈ K, j ∈ A, j ≥ i}
    while C ≠ 0 and q_max ≠ 0 do
        Let j_max and k_max be such that q_max = q^{k_max}_(p,i,j_max)
        Assign x^{k_max}_(p,i,j_max) ← min{d^{k_max}_(p,j_max), C}, C ← C − x^{k_max}_(p,i,j_max),
        q^k_(p,i,j_max) ← 0, ∀k ∈ K, and q_max ← max{q^k_(p,i,j) | k ∈ K, j ∈ A, j ≥ i}
    end while
    ε_(p,i) ← Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x^k_(p,i,j) − c^st_(p,i)
end for
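Procedure 7 admits a compact sketch for one fixed pair (p, i). The data shapes are illustrative (q and d keyed by (j, k)); the profit is accumulated on the fly so that zeroing the chosen unit profits does not corrupt the final ε-formula.

```python
def estimate_eps(q, d, C, c_st):
    """Greedy estimate of eps_(p,i) for one fixed pair (p, i).
    q[(j, k)]: unit profit; d[(j, k)]: breakpoint demand; C: capacity."""
    q = dict(q)                     # work on a copy of the unit profits
    profit = 0.0
    while C > 0 and q and max(q.values()) > 0:
        (j_max, k_max), q_max = max(q.items(), key=lambda kv: kv[1])
        amount = min(d[(j_max, k_max)], C)  # demand or remaining capacity
        profit += q_max * amount
        C -= amount
        for key in [key for key in q if key[0] == j_max]:
            del q[key]              # q[(j_max, .)] <- 0 for every k
    return profit - c_st
```

Each pass fills the single most profitable breakpoint and then discards the chosen period, so the routine needs only one sweep over the capacity instead of an LP per pair (p, i).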
In the numerical experiments we consider problem sets with different numbers
of products, |P| = 5, 10, or 20, and time horizons, |A| = 12 or 52. For each
problem set we randomly generate capacities for all i ∈ A using the formula C_i =
|P| · U, where U is a random number uniformly generated from the interval [10, 100],
[50, 150], [100, 200], or [150, 250]. Note that all intervals allow generating capacities
that are tight at optimality with respect to the revenue function discussed below.
In addition, using the term |P| one generates capacities that depend on the number
of products. The latter allows comparing results across different numbers
of products. As for the costs, we generate the production costs c^pr_(p,i) and the
inventory costs c^in_(p,i,j) according to the uniform distributions U[20, 40] and U[4, 8],
respectively. Observe that on average the inventory cost is equal to 20% of the
production cost. Finally, the setup cost c^st_(p,i) is generated uniformly from the
interval [600, 1000].
In the experiments we restrict ourselves to considering only linear price
functions of the form f_(p,j)(d) = f^max_(p,j) − (f^max_(p,j)/d^max_(p,j)) d. To avoid
generating functions that at optimality result in unrealistically large profits, we
introduce an index β, where

    β = Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} q^k_(p,i,j) x*^k_(p,i,j) − c^st_(p,i) y*_(p,i) ] / Σ_{p∈P} Σ_{i∈A} [ Σ_{j∈A|i≤j} Σ_{k∈K} (c^in_(p,i,j) + c^pr_(p,i)) x*^k_(p,i,j) + c^st_(p,i) y*_(p,i) ].

That is, the index measures the amount of profit per unit of investment, and
it is computed based on the optimal or the best solution provided by CPLEX.
By generating f^max_(p,j) and d^max_(p,j) according to the uniform distributions U[70, 90]
and U[500, 1000], respectively, at optimality (i) β ∈ [0.7, 1.3], (ii) all capacities
considered above are tight, and (iii) in most of the cases the satisfied demand
is less than d^max_(p,j) (see Figure 4-1). In addition, the proposed price function and
distributions of the costs and capacities allow generating problems that have an
optimal objective function value ranging from hundreds of thousands to several
millions. Finally, in the construction of the piecewise linear approximation of the
revenue function we use N = 10.
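The instance generator described above can be sketched as follows. The function name and dictionary-based layout are ours; only the distributions and the capacity formula come from the text.

```python
import random

def generate_instance(num_products, num_periods, cap_interval=(50, 150), seed=0):
    """Random CMDP instance with the distributions stated in the text:
    C_i = |P|*U, c_pr ~ U[20,40], c_in ~ U[4,8], c_st ~ U[600,1000],
    f_max ~ U[70,90], d_max ~ U[500,1000]."""
    rng = random.Random(seed)
    products, periods = range(num_products), range(num_periods)
    C = {i: num_products * rng.uniform(*cap_interval) for i in periods}
    c_pr = {(p, i): rng.uniform(20, 40) for p in products for i in periods}
    c_in = {(p, i, j): rng.uniform(4, 8)
            for p in products for i in periods for j in periods if i <= j}
    c_st = {(p, i): rng.uniform(600, 1000) for p in products for i in periods}
    f_max = {(p, j): rng.uniform(70, 90) for p in products for j in periods}
    d_max = {(p, j): rng.uniform(500, 1000) for p in products for j in periods}
    return C, c_pr, c_in, c_st, f_max, d_max
```

Seeding the generator makes each of the 10 problems per set reproducible, which is convenient when comparing Procedure 5, Procedure 6, and CPLEX on identical instances.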
The model is constructed using the GAMS environment and solved by CPLEX
9.0 with a CPU restriction of 2000 sec and a memory restriction of 2 GB, where the
latter is the memory that is required to store the tree in the branch-and-bound
algorithm. Computations are made on a Unix machine with dual Pentium 4 3.2 GHz
processors and 6 GB of memory. The results are tabulated in Appendix C.
In the experiments we solve 10 randomly generated problems for each problem
set and capacity. Tables C-1 and C-2 compare the results provided by CPLEX
with the solutions provided by both procedures. The relative error is computed
using the formula

    RE = (Obj_CPLEX − Obj_Proc.6(5)) / Obj_CPLEX.
In Table C-1, column A indicates the number of problems where the heuristic
procedure finds a better solution than CPLEX. Note that CPLEX is able to
provide an exact solution for all capacities from the problem set 5-12. In all other
cases, the solver stops after reaching the CPU limit or the memory limit and
returns the best found solution. Although the relative optimality gap of the final
solutions of those problem sets varies up to 5%, we believe that the solution
is an optimal or close to an optimal one, and the large optimality gap is due to
imperfect lower bounds. The fact that the heuristic procedures provide a slightly
better solution in the case with |A| = 52 than with |A| = 12 partially confirms our
assumptions.
The relative errors in Table C-1 confirm the effectiveness of the heuristic
procedure. In particular, in the majority of the problems the heuristic algorithm is
able to provide a solution within 1% of the optimal one or the best one provided
by CPLEX. Observe that a larger value of α provides a better solution, and the
number of problems where the heuristic procedure finds a better solution than
CPLEX increases with the size of the problem. By comparing with the solutions
provided by Procedure 5 (see Table C-2), one notices that Procedure 6 outperforms
Procedure 5 and is more stable with respect to changes in the capacities. As for
the CPU time (see Table C-3), the heuristic procedures require fewer resources
than CPLEX. In addition, unlike CPLEX, the heuristic procedures do not require
gigabytes of memory to store the tree.
4.5 Concluding Remarks
We have discussed a bilinear reduction scheme for the capacitated multi-item
dynamic pricing problem, where solving the latter is equivalent to finding a global
solution of the former. Based on the theoretical results for the reduction problem,
two procedures have been proposed to find a global maximum of the problem. The
first one is a well-known technique and has been intensively used to solve other
bilinear problems. Because of the reasons discussed in Section 4.3, at the very
beginning of the iterative process the procedure eliminates some products from
further consideration. The latter worsens the quality of the solution returned by
the procedure. In the second procedure we construct approximate problems and
gradually decrease the parameters of the problems. As a result, during the iterative
process the costs of the eliminated products remain positive, and the procedure
considers them again if need be. Although the second procedure requires more
CPU time to stop than the first one, it provides a higher-quality solution.
CHAPTER 5
DISCRETE-TIME DYNAMIC TRAFFIC ASSIGNMENT MODELS WITH
PERIODIC PLANNING HORIZON: SYSTEM OPTIMUM
5.1 Introduction to the Chapter
Since Merchant and Nemhauser (see Merchant and Nemhauser [73] and
[74]) first proposed their model in 1978, there have been a number of papers
(see, e.g., Carey and Subrahmanian [23], Carey [15], Carey [13], Carey [14],
Friesz [37], Friesz [38], Wie et al. [98], Chen and Hsueh [25], Janson [57], Ho [51],
Ziliaskopoulos [104], Drissi-Kaitouni and Hameda-Benchekroun [32], Li et al. [66],
Kaufman et al. [58], Boyce et al. [11], Ran and Boyce [90], Ran et al. [92], and
Wie et al. [99]) discussing variational inequality or mathematical programming
formulations for the dynamic traffic assignment problem with the assumption that
the planning horizon is a set of discrete points instead of a continuous interval.
Many of these papers use a dynamic or time-expanded network (see, e.g., Ahuja et
al. [2]) to simultaneously capture the topology of the transportation network and
the evolution of traffic over time. Implicitly or otherwise, these papers typically
assume that there is no traffic at the beginning of the planning horizon (or at time
zero) and that all trips must exit the network prior to the end. When there are cars
at time zero, the times at which these cars enter the network must be known in
order to determine when they will exit the arcs on which they were travelling. In
practice, data with such details do not generally exist.
There are two main factors that distinguish the models in the papers referenced
above. First, some (e.g., Merchant and Nemhauser [73], Carey and Subrahmanian
[23], Ho [51], Carey [15], Ziliaskopoulos [104], Kaufman et al. [58], Garcia et
al. [40]) seek a system optimal solution and others (e.g., Janson [57], Wie et
al. [98], Chen and Hsueh [25], and Drissi-Kaitouni and Hameda-Benchekroun [32])
compute a user equilibrium instead. The other factor is the travel cost function
used by these models. Among other parameters (physical or otherwise), a travel
time or cost function may depend on the number of cars on the link and the input
and output rates. Many authors (e.g., Carey and Ge [20], Carey and McCartney [18],
Carey [16], Carey [17], Lin and Lo [69], Han and Heydecker [46], Daganzo [28])
have analyzed the effects of travel cost functions on various models. Some (e.g.,
Lin and Lo [69] and Han and Heydecker [46]) have shown that some travel cost
functions are not consistent with the models that use them.
Similar to Carey and Srinivasan [21], Carey and Subrahmanian [23], Carey
[15], Chen and Hsueh [25] and Kaufman et al. [58], the model in this chapter
is based on the time-expanded network. However, instead of assuming that the
network is empty at the beginning or at the end, we treat the planning horizon
as a circular interval instead of a linear one. For example, consider the interval [0, 24],
i.e., a 24-hour planning horizon. When viewed in a linear fashion, it is typically
assumed that there is no car in the network at times 0 and 24. In turn, this implies
there is no travel demand after some time k < 24. Otherwise, cars that enter the
network after time k cannot reach their destinations by time 24, thereby leaving
cars in the network at the end of the horizon. On the other hand, if there is a car
entering a street at 23:55h (11:55 PM) and exiting at 24:06h (12:06 AM, the next
day) in a circular planning horizon, the exit time of this car would be treated as
00:06h instead. When accounted for in this manner, it is possible to determine the
exit time for every car that is in the network at time zero without requiring any
additional data. Additionally, models that view the planning horizon in a circular
fashion are more general in that they include those with a linear planning horizon.
By setting the travel demands and other variables during an appropriate time
interval to zero, models with a circular planning horizon effectively reduce to ones
with a linear horizon.
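Arithmetic on a circular horizon is simply arithmetic modulo T, as the following small sketch shows (the function name is ours):

```python
def circular_exit(entry, travel, T=24.0):
    """Exit time on a circular horizon of length T: times past T wrap to 0."""
    return (entry + travel) % T

# A car entering at 23:55 with an 11-minute travel time exits at 00:06 of
# the same (circular) day rather than at 24:06:
late_exit = circular_exit(23 + 55 / 60, 11 / 60)
```

Setting the travel demand to zero over an appropriate sub-interval recovers the usual linear horizon, since no trip then wraps past T.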
It is often argued that the numbers of cars at the beginning and the end of
the horizon are small and solutions to DTA are not drastically affected by setting
them to zero. When the paths that these cars use do not overlap, the argument
is valid. However, when these cars have to traverse the same arc in reaching their
destinations, the number of cars on the arc may be significant and ignoring it may
lead to a solution significantly different from the one that accounts for all cars.
This chapter makes two main assumptions. One requires the link travel time
at time t to be a function of only the number of cars on the link at that time.
Carey and Ge [20] show that the solutions of models using functions of this type
converge to the solution of the Lighthill-Whitham-Richards model (see Lighthill
and Whitham [68] and Richards [93]) as the discretization of links into smaller
segments is refined. Because minimizing the total travel time or delay mitigates
its occurrence, models discussed herein do not explicitly address spillback. On
the other hand, the models can be extended to handle spillback using a technique
similar to the one in Lieberman [67] or an alternative travel time function that
includes the effect of spillback (see, e.g., Perakis and Roels [86]). However, as
indicated in the reference, using such a function may not lead to a model with a
solution.
For the remainder, Section 5.2 defines the concept of a periodic planning
horizon. Section 5.3 formulates the system version of the discrete-time dynamic
traffic assignment problem with a periodic planning horizon, or DTDTA, and proves
that a feasible solution exists under a relatively mild condition. To our knowledge,
there are only four papers (Brotcorne [10], Smith [95], Wie et al. [98], and Zhu and
Marcotte [103]) that address the existence issue, and some (see, e.g., Smith [95] and
Zhu and Marcotte [103]) consider this small number to be lacking. All four deal
Figure 5-1. Linear versus circular intervals.
with user equilibrium problems instead of system optimal. Section 5.4 describes
two linear integer programs that provide bounds for DTDTA. Section 5.5 presents
numerical results for small test problems and, finally, Section 5.6 concludes the
chapter.
5.2 Periodic Planning Horizon
The models in this chapter assume that the planning horizon is a half-open
interval of length T, i.e., [0, T). Instead of viewing this interval in a linear fashion,
the interval is treated in a circular manner as shown in Figure 5-1. In doing
so, time 0 and time T are the same instant. For example, time 0:00h and 24:00h (or
midnight) are the same instant in a 24-hour day. For this reason, T is excluded and
the planning horizon is half-open. To make the discussion herein more intuitive, we
often refer to the planning horizon as a 24-hour day, i.e., T = 24. In theory, the
planning horizon can be of any length as long as events occur in a periodic fashion:
if an event (e.g., five cars enter a street) occurs at time t, then the same event also
occurs at time t + kT for every integer k ≥ 1.
Because the planning horizon is circular, events occurring tomorrow are
assumed to occur in the same interval that represents today. For example, consider
a car that enters a street at t1 = 23:00h (or 11 PM) today and traverses the street
until it leaves at t2 = 01:00h (or 1 AM) tomorrow. (See Figure 5-2.) In a circular
planning horizon, these two events, a car entering and leaving a street, occur at
Figure 5-2. Events occurring in two consecutive planning horizons.
time 23:00h and 01:00h in the same interval [0, 24). In general, if a car enters a
street at time t1 < T and takes τ < T units of time to traverse it, then the two events
are assumed to occur at t1 and mod{t1 + τ, T} on the interval [0, T).
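The wrap-around arithmetic above can be sketched in a few lines (a minimal illustration; the function name is our own):

```python
def circular_time(t: float, T: float = 24.0) -> float:
    """Map an event time onto the circular planning horizon [0, T)."""
    return t % T

# A car enters at t1 = 23:00h and takes tau = 2 hours to traverse:
t1, tau = 23.0, 2.0
exit_time = circular_time(t1 + tau)  # 25:00h wraps to 01:00h
```

With a linear horizon, the exit time 25.0 would fall outside [0, 24); the circular view simply reduces it modulo T.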
5.3 Discrete-Time Dynamic Traffic Assignment Problem with Periodic Time Horizon
Although it is possible to formulate the dynamic traffic assignment problem
with a periodic time horizon as an optimal control problem, solving it is typically
troublesome (see, e.g., Peeta and Ziliaskopoulos [85]). This section presents a
discrete-time version of the problem in which the interval [0, T) is represented as
a set of discrete points, i.e., Δ = {0, δ, 2δ, ..., T − δ}, where δ = T/N and N is
a positive integer. (In general, the subdivision of the planning horizon need not
be uniform. For example, the subdivision during the period between 22:00h and
06:00h may be coarser than the one for the period between 06:00h and 22:00h.) In
order to avoid using fractional numbers in the set of indices and to simplify our
presentation, we typically assume that δ = 1.
To formulate the problem, let G(N, A) represent the underlying transportation
network where N and A denote the set of nodes and arcs in the network,
respectively. It is convenient to refer to elements of A either as a single index a
or a pair of indices (i,j). The latter is used when it is necessary to reference the
two ends of an arc explicitly. Furthermore, C is a set of origin-destination (OD)
Figure 5-3. Three-node network.
pairs, and the travel demand for OD pair k during the time interval [t, t + δ], t ∈ Δ,
is h_t^k.
There is also a travel time function associated with each arc in the network. In
the literature (see, e.g., Wu et al. [101], Ran and Boyce [89] and Carey et al. [19]),
these functions can depend on a number of factors such as inflow and outflow
rates and traffic densities. We assume in this formulation that θ_a, the travel time
associated with arc a, depends only on the number of cars on the arc. Furthermore,
θ_a is continuous, nondecreasing and bounded by T, i.e., 0 < θ_a(w) < T,
∀w ∈ [0, u_a], where u_a is a sufficiently large upper bound for the domain of θ_a(w),
and there is no feasible solution whose flow on arc a can exceed u_a. In particular,
θ_a(0) represents the free-flow travel time on arc a.
We use the dynamic or time-expanded (TE) network (see, e.g., Section 19.6
in Ahuja et al. [2]) to determine the state of vehicular traffic in the system at
each time t ∈ Δ. To illustrate the concept of time expansion, consider the static
network with three nodes shown in Figure 5-3, or the three-node network. In
this network, all arcs have the same upper bound value, u_a = M, and there
is only one OD pair, (1, 3). Let the planning horizon be the interval [0, 5) and
δ = 1. Thus, Δ = {0, 1, 2, 3, 4}. The travel time function for every arc is θ and
1.5 ≤ θ(w) ≤ 4, ∀w ∈ [0, M]. To construct the TE network, the travel time also
needs to be discretized. In general, the set of possible discrete travel times for arc a
Figure 5-4. Time expansion of arc (1, 2) at t = 1.
is Γ_a = {s : s = ⌈θ_a(w)/δ⌉δ, 0 ≤ w ≤ u_a}. For our example, the set of possible discrete
travel times for each arc is Γ_a = {2, 3, 4}, ∀a.
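Because θ_a is continuous and nondecreasing, Γ_a consists of the consecutive multiples of δ from ⌈θ_a(0)/δ⌉δ up to ⌈θ_a(u_a)/δ⌉δ. A small sketch (the helper name and integer δ are our own simplifications):

```python
import math

def discrete_travel_times(theta_min, theta_max, delta=1):
    """Gamma_a for a continuous, nondecreasing travel time function
    whose values range over [theta_min, theta_max]."""
    lo = math.ceil(theta_min / delta)
    hi = math.ceil(theta_max / delta)
    return [s * delta for s in range(lo, hi + 1)]

# Three-node example: 1.5 <= theta(w) <= 4 and delta = 1 give {2, 3, 4}
print(discrete_travel_times(1.5, 4.0))  # [2, 3, 4]
```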
To incorporate the time component in the TE network, every node in the
static network (or static node) is 'expanded' or replicated once for each t ∈ Δ.
For the three-node network, static node 1 is transformed into five TE nodes, one
for each t ∈ Δ, in the TE network. For example, node 1 is expanded into nodes
1_0, 1_1, 1_2, 1_3, and 1_4 in the TE network. (See Figure 5-4.) Similarly, each arc (i, j)
in the static network (or static arc) is replicated once for each pair (t, s), where
t ∈ Δ and s ∈ Γ_(i,j). Consider arc (1, 2) in the three-node network. Cars that
enter this arc at time 1 can take 2, 3, or 4 units of time to traverse it, depending
(as assumed earlier) on the number of cars on the arc at t = 1. To allow all
possibilities, arc (1, 2) is expanded into three TE arcs: (1_1, 2_3), (1_1, 2_4), and (1_1, 2_0).
The latter represents a car that enters arc (1, 2) at time 1, takes 4 units of time to
traverse it, and leaves the arc at time 5, or time 0 (i.e., mod{1 + 4, 5}) of the following
day. A similar expansion applies to each t ∈ Δ. In general, each static arc (i, j)
expands into |Δ| × |Γ_(i,j)| TE arcs of the form (i_t, j_mod{t+s,T}), ∀t ∈ Δ, s ∈ Γ_(i,j).
Figure 5-5 displays the complete time expansion of the three-node network. In
addition to the time-expanded nodes and arcs, the figure also displays the travel
demand at the origin TE nodes (i.e., nodes 1_t, ∀t ∈ Δ) and the decision variables g_d(k)t
representing the number of cars arriving at the destination node d(k) of OD pair k at
time t, i.e., at node 3_t, ∀t ∈ Δ.
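The expansion rule, one TE arc (i_t, j_mod{t+s,T}) per pair (t, s), can be sketched as follows (function and variable names are ours):

```python
def expand_arc(i, j, gamma, n_periods):
    """Time-expand static arc (i, j): one TE arc for every entrance
    time t and discrete travel time s, wrapping exits past the horizon."""
    return [((i, t), (j, (t + s) % n_periods))
            for t in range(n_periods) for s in gamma]

# Arc (1, 2) of the three-node network: T = 5, delta = 1, Gamma = {2, 3, 4}
te_arcs = expand_arc(1, 2, [2, 3, 4], 5)
# |Delta| x |Gamma| = 5 x 3 = 15 TE arcs; entering at t = 1 with s = 4
# exits at mod{1 + 4, 5} = 0, i.e., TE arc (1_1, 2_0)
```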
To reference flows on TE arcs, let y^k_a(t,s) denote the amount of flow for
commodity k that enters static arc a at time t ∈ Δ, takes s ∈ Γ_a units of time to
traverse it, and then exits the arc at time mod{t + s, T}. In particular, if a = (i, j),
then the subscript a(t, s) refers to TE arcs of the form (i_t, j_mod{t+s,T}), ∀t ∈ Δ, s ∈
Γ_(i,j). In addition, y_a(t,s) = Σ_{k∈C} y^k_a(t,s) represents the total flow on TE arc a(t, s).
To compute the time to traverse a static arc at time t, let

Ω_a(t) = {(τ, s) : τ = [t − 1]_T, [t − 2]_T, ..., [t − s]_T, s ∈ Γ_a},

where

[q]_T = q if q ≥ 0, and [q]_T = T + q if q < 0.
In words, Ω_a(t) contains the pairs of entrance times, τ, and travel times, s, for static arc a
such that, if a car enters static arc a at time τ and takes s time units to traverse it,
the car will still be on the arc at time t. For example, if t = 11:00h and the time to
traverse arc a is five hours for the previous five consecutive time periods, then cars
entering arc a at times τ = 10:00h, 9:00h, 8:00h, 7:00h, and 6:00h will be on the arc
at 11:00h. (We assume here that a car entering arc a at, e.g., 6:00h is still on the
arc at 11:00h even though it is scheduled or expected to leave at 11:00h.) When t
is relatively near the beginning of the planning horizon, the notation [·]_T accounts
for cars on the arc at time t that entered it during the previous day. Continuing with
the foregoing example, let t = 3:00h instead. Then, cars entering arc a at times
2:00h, 1:00h, 0:00h, 23:00h, and 22:00h are still on the arc at 3:00h.
Figure 5-5. Time expansion of the three-node network.
Using the set Ω_a(t), the total amount of flow on static arc a at time t, or x_a(t), is Σ_{(τ,s)∈Ω_a(t)} y_a(τ,s).
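The set Ω_a(t) follows directly from the definition of [·]_T; a sketch with our own names (travel times are taken as a fixed set here purely for illustration):

```python
def omega(t, gamma, T):
    """Pairs (tau, s) such that a car entering static arc a at time tau
    with travel time s in Gamma_a is still on the arc at time t."""
    wrap = lambda q: q if q >= 0 else T + q   # the [.]_T notation
    return {(wrap(t - r), s) for s in gamma for r in range(1, s + 1)}

# Text example: t = 3:00h, a single travel time of 5 hours, T = 24:
entrances = {tau for (tau, s) in omega(3, [5], 24)}
# cars entering at 2:00h, 1:00h, 0:00h, 23:00h, and 22:00h are on the arc
```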
There are two additional sets of decision variables. One set consists of z_a(t,s),
a binary variable that equals one if it takes between (s − δ) and s units of time
to traverse arc a at time t. In the formulation below, the value of z_a(t,s) depends
on x_a(t) and, for each t, z_a(t,s) = 1 for only one s ∈ Γ_a. The other set consists of
g^k, a vector with a component for each node in the TE network. Component i_t
of g^k is set to zero if i is not the destination node of OD pair k. Otherwise, g_d(k)t,
where d(k) denotes the destination node of OD pair k, is a decision variable that
represents the amount of flow for commodity k that reaches its destination, d(k), at
time t.
Below is a mathematical formulation of the discrete-time dynamic traffic
assignment problem with periodic planning horizon (DTDTA):

min_{(x,y,z,g)}  Σ_{t∈Δ} Σ_{a∈A} θ_a(x_a(t)) [Σ_{s∈Γ_a} y_a(t,s)]

s.t.
B y^k + g^k = b^k,  ∀k ∈ C   (5-1)
Σ_{t∈Δ} g_d(k)t = Σ_{t∈Δ} h_t^k,  ∀k ∈ C   (5-2)
y_a(t,s) = Σ_{k∈C} y^k_a(t,s),  ∀t ∈ Δ, a ∈ A and s ∈ Γ_a   (5-3)
x_a(t) = Σ_{(τ,s)∈Ω_a(t)} y_a(τ,s),  ∀t ∈ Δ and a ∈ A   (5-4)
Σ_{s∈Γ_a} z_a(t,s) = 1,  ∀t ∈ Δ and a ∈ A   (5-5)
Σ_{s∈Γ_a} (s − δ) z_a(t,s) ≤ θ_a(x_a(t)) ≤ Σ_{s∈Γ_a} s z_a(t,s),  ∀t ∈ Δ and a ∈ A   (5-6)
y_a(t,s) ≤ u_a z_a(t,s),  ∀t ∈ Δ, a ∈ A and s ∈ Γ_a   (5-7)
y^k_a(t,s), g_d(k)t, x_a(t) ≥ 0, z_a(t,s) ∈ {0, 1},  ∀t ∈ Δ, a ∈ A, s ∈ Γ_a and k ∈ C   (5-8)
In the objective function, Σ_{s∈Γ_a} y_a(t,s) represents the number of cars that enter arc
a at time t and, based on our assumption, these cars experience the same travel
time, θ_a(x_a(t)). Thus, the goal of this problem is to minimize the total travel time
or delay. Using constraint (5-4), the objective function can be equivalently written
as

min  Σ_{t∈Δ} Σ_{a∈A} θ_a(Σ_{(τ,s)∈Ω_a(t)} y_a(τ,s)) [Σ_{s∈Γ_a} y_a(t,s)]

or, more compactly, as

min  Φ(Y)^T Y,

where Y and Φ(Y) are vectors of TE arc flows (y_a(t,s)) and travel times
(θ_a(Σ_{(τ,s)∈Ω_a(t)} y_a(τ,s))) whose components are defined so that their inner product is
consistent with the summations.
Constraint (5-1) ensures that flows are balanced at each node in the TE
network. In this constraint, B denotes the node-arc incidence matrix of the TE
network and b^k is a constant vector with a component for each TE node,
defined as follows:

b^k_{i_t} = h_t^k if i = o(k), and b^k_{i_t} = 0 otherwise,

where o(k) denotes the origin node of OD pair k. Constraint (5-2) guarantees
that the number of cars arriving at the destination node d(k) equals the total
travel demand of OD pair k during the planning horizon. Then, constraint (5-3)
computes the total flow on each TE arc and (5-4) determines the number of cars
that are still on static arc a at time t.
In combination, the next three constraints, i.e., constraints (5-5) to (5-7),
compute the travel time for the cars that enter arc a at time t and only allow flows
to traverse the corresponding arc in the TE network. In particular, constraint
(5-5), in conjunction with (5-6), chooses one (discretized) travel time s ∈ Γ_a
that best approximates θ_a(x_a(t)), i.e., θ_a(x_a(t)) ∈ (s − δ, s]. When a represents
arc (i, j), constraint (5-7) only allows arc (i_t, j_mod{t+s,T}) to have a positive flow.
Otherwise, (5-7) forces the flows on arcs (i_t, j_mod{t+τ,T}), for τ ∈ Γ_a and τ ≠ s, to be zero.
Finally, constraint (5-8) makes sure that the appropriate decision variables are either
nonnegative or binary.
As formulated above, the travel time associated with z_a(t,s) in constraint (5-6)
can only take on discrete values from the set Γ_a, while the travel time in the
objective function varies continuously. Although it may be more consistent to use
discrete values of travel times in the objective function, the above model provides
a better solution because the true travel time is used to calculate the
total delay. The model also has interesting properties discussed in Section 5.4.
In addition, the treatments of travel times in the objective function and the
constraints can be made consistent by solving the (approximation) refinement
problem, also discussed in the same section.
Under a relatively mild sufficient condition, we show below that DTDTA has
a solution by constructing a feasible solution. In fact, the solution we construct
below is generally far from optimal. However, it suffices for the purpose of
proving existence. Let R_a(t) be the set of discrete times at which a car enters arc a
and still remains on the arc at time t. Below, we refer to R_a(t) as the enter-remain
set. Given x_a(t), R_a(t) ⊆ Δ is the union of two sets, i.e.,

R_a(t) = {w ∈ Δ : w ≤ (t − 1), w + θ_a(x_a(w)) > t} ∪
         {w ∈ Δ : w ≥ (t + 1), w + θ_a(x_a(w)) − T > t}.

In addition, let U_a(t) denote the total flow into arc a at time t. When U_a(t) is
given for each t ∈ Δ, the lemma below shows that a set of x_a(t), y_a(t,s), and z_a(t,s)
consistent with constraints (5-4) to (5-7) and the relevant conditions in (5-8) exists
when u_a is sufficiently large.
Lemma 5.3.1. Assume that U_a(t) is known for a given a ∈ A and all t ∈ Δ. If u_a
is sufficiently large, then there exists a set of x_a(t), y_a(t,s) and z_a(t,s) that satisfies
constraints (5-4) to (5-7) and the relevant conditions in (5-8).
Proof: (Because a is given, we discard the subscript a in places where there
is no confusion, in order to simplify our notation.) Below, we construct sequences
{s^m_t}, {z^m_(t,s)}, {y^m_(t,s)}, {x^m_(t)} and {R^m_t} whose limits yield a set of decision
variables feasible to the constraints stated above.

For m = 1, let

s^1_t = ⌈θ_a(0)/δ⌉δ, i.e., s^1_t is the discretized free-flow travel time for arc a,
z^1_(t,s^1_t) = 1 and z^1_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^1_t,
y^1_(t,s^1_t) = U_a(t) and y^1_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^1_t,
R^1_t = {w ∈ Δ : w ≤ (t − 1), w + s^1_w > t} ∪ {w ∈ Δ : w ≥ (t + 1), w + s^1_w − T > t},

and

x^1_(t) = Σ_{w∈R^1_t} y^1_(w,s^1_w).

As defined above, R^1_t is the enter-remain set based on the travel times s^1_w, ∀w ∈ Δ.
For m ≥ 2, let

s^m_t = ⌈θ_a(x^{m−1}_(t))/δ⌉δ,
z^m_(t,s^m_t) = 1 and z^m_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^m_t,
y^m_(t,s^m_t) = U_a(t) and y^m_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^m_t,
R^m_t = {w ∈ Δ : w ≤ (t − 1), w + s^m_w > t} ∪ {w ∈ Δ : w ≥ (t + 1), w + s^m_w − T > t},

and

x^m_(t) = Σ_{w∈R^m_t} y^m_(w,s^m_w).
The sequences {s^m_t}, {x^m_(t)} and {R^m_t} constructed above are monotonically
nondecreasing. Consider the sequence {s^m_t}. Observe that s^2_t ≥ s^1_t, ∀t ∈ Δ, because
x^1_(t) ≥ 0, ∀t ∈ Δ, and, as assumed earlier, θ_a(·) is nondecreasing. It follows that,
for any t ∈ Δ, w + s^2_w ≥ w + s^1_w > t and w + s^2_w − T ≥ w + s^1_w − T > t. Thus, w ∈ R^1_t
implies that w ∈ R^2_t, i.e., R^1_t ⊆ R^2_t for all t ∈ Δ. The latter, and the fact that U_a(t)
is nonnegative, imply that x^2_(t) = Σ_{w∈R^2_t} y^2_(w,s^2_w) ≥ Σ_{w∈R^1_t} y^1_(w,s^1_w) = x^1_(t), ∀t ∈ Δ.

Assume that the claim is true up to some fixed m. For all t ∈ Δ, s^{m+1}_t =
⌈θ_a(x^m_(t))/δ⌉δ ≥ ⌈θ_a(x^{m−1}_(t))/δ⌉δ = s^m_t, because x^m_(t) ≥ x^{m−1}_(t) and θ_a(·) is
nondecreasing. Using an argument similar to the above, R^m_t ⊆ R^{m+1}_t and x^{m+1}_(t) ≥ x^m_(t).
Thus, the three sequences are monotonically nondecreasing. In addition, all three
sequences are bounded, i.e., s^m_t ≤ T, R^m_t ⊆ Δ, and x^m_(t) ≤ Σ_{t∈Δ} U_a(t), and, therefore,
convergent. Let s^∞_t, R^∞_t, and x^∞_(t) be their limits. Based on our construction,
s^∞_t = ⌈θ_a(x^∞_(t))/δ⌉δ, z^∞_(t,s^∞_t) = 1 and z^∞_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^∞_t. In combination,
these ensure that constraints (5-5) and (5-6) are satisfied. Our construction also
implies that y^∞_(t,s^∞_t) = U_a(t) and y^∞_(t,s) = 0, ∀s ∈ Γ_a, s ≠ s^∞_t. Because u_a is
sufficiently large, y^∞_(t,s) satisfies constraint (5-7).

In the limit, R^∞_t = {w ∈ Δ : w ≤ (t − 1), w + s^∞_w > t} ∪ {w ∈ Δ :
w ≥ (t + 1), w + s^∞_w − T > t}. Thus, R^∞_t is consistent with s^∞_t, and
x^∞_(t) = Σ_{w∈R^∞_t} y^∞_(w,s^∞_w), so
x^∞_(t) satisfies (5-4). Furthermore, x^∞_(t) and y^∞_(t,s) are both nonnegative and z^∞_(t,s) is
binary. Thus, the proof is complete. ∎
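The constructive sequences in the proof amount to a fixed-point iteration: start from free-flow travel times, recount the cars on the arc, re-discretize, and repeat until the travel times stop changing. A compact sketch under the lemma's assumptions (all names are ours; T = Nδ for simplicity):

```python
import math

def feasible_arc_loading(U, theta, N, delta=1):
    """Iterate s^m -> x^m -> s^{m+1} as in the proof of Lemma 5.3.1.
    U[t] is the inflow at time t; theta is nondecreasing, so the
    sequence of discretized travel times is nondecreasing and bounded."""
    T = N * delta
    s = [math.ceil(theta(0.0) / delta) * delta] * N   # free-flow start
    while True:
        # x[t]: inflow from entrance times still on the arc at time t
        x = [sum(U[w] for w in range(N)
                 if (w <= t - 1 and w + s[w] > t)
                 or (w >= t + 1 and w + s[w] - T > t))
             for t in range(N)]
        s_new = [math.ceil(theta(x[t]) / delta) * delta for t in range(N)]
        if s_new == s:
            return x, s
        s = s_new

# Uniform inflow of 10 cars per period and a gently increasing theta:
x, s = feasible_arc_loading([10] * 5, lambda w: 1.5 + 0.01 * w, 5)
# converges with 10 cars on the arc and a travel time of 2 at every t
```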
In the above proof, if, for some m, s^m_t is larger than the maximum travel time
for arc a, i.e., max{s : s ∈ Γ_a} (or, equivalently, x^{m−1}_(t) > u_a), then U_a(t) is infeasible
or not compatible with the upper bound u_a.
To establish the existence of a feasible solution to DTDTA, recall that
G(N, A) denotes the (static) transportation network. For the theorem below,
assume without loss of generality that each node in N can be either an origin or a
destination, but not both. If node i is both an origin and a destination, then we
create a dummy node i' and use node i as the origin node and i' as the destination.
For example, consider OD pairs (i, j) and (j, i). In this case, i and j are both
origins and destinations. When the dummy nodes are added, the two OD pairs
become (i, j') and (j, i'). Let p^k denote a path in G(N, A) connecting OD pair
k. The set of these paths, Ψ = {p^k : k ∈ C}, induces a subgraph
Ḡ(N̄, Ā), where N̄ ⊆ N and Ā ⊆ A denote the sets of nodes and arcs, respectively,
belonging to the paths in Ψ. For each i ∈ N̄, define [i+] = {(i, j) : (i, j) ∈ Ā} and
[i−] = {(j, i) : (j, i) ∈ Ā}. In words, [i+] and [i−] are the sets of arcs in Ḡ(N̄, Ā)
that emanate from and terminate at node i, respectively. Also, let order(i) denote
a topological order of node i (see Ahuja et al. [2]). If (i, j) ∈ Ā and Ḡ(N̄, Ā) can be
topologically ordered, then order(i) < order(j).
Theorem 5.3.1. Assume that u_a is sufficiently large for all a ∈ A and that a node
can be either an origin or a destination, but not both. Then, DTDTA has a feasible
solution if there exists a path p^k for each k ∈ C such that the subgraph they induce
is acyclic.
Proof: Let Ψ be a set of paths, one per OD pair, such that the subgraph,
Ḡ(N̄, Ā), it induces has no cycle. Thus, N̄ can be ordered topologically. (See
Ahuja et al. [2].) Below, we construct a feasible solution one arc at a time and in
a topological order, using Lemma 5.3.1 and only the paths in Ψ. The latter implies
that y^k_a(t,s) = y_a(t,s) = x_a(t) = 0 for all a ∉ Ā.

Let node i ∈ N̄ be of (topological) order 1 and, for each arc a in [i+], define
Q(a) to be the set of paths in Ψ that contain or use arc a, i.e., Q(a) = {k :
a ∈ p^k, k ∈ C}. (It is not necessary to index Q(a) with i because each arc a
can belong to only one [i+].) For each k ∈ Q(a), arc a must be the first arc in
path p^k because node i is of order 1. Let U_a(t) = Σ_{k∈Q(a)} h_t^k. Because u_a is
sufficiently large, Lemma 5.3.1 ensures that there exist x_a(t), y_a(t,s), and z_a(t,s)
feasible to (5-4) to (5-7) and the relevant conditions in (5-8). Let y^k_a(t,s_a(t)) = h_t^k
and y^k_a(t,s) = 0, ∀s ∈ Γ_a, s ≠ s_a(t), where s_a(t) denotes the travel time for which
z_a(t,s_a(t)) = 1. So constructed, these y^k_a(t,s) are consistent with
y_a(t,s) and satisfy the flow balance equation (5-1) for node i.
To construct the variables x_a(t), y_a(t,s), z_a(t,s) and y^k_a(t,s) for arcs emanating
from nodes of higher order, assume that the decision variables for arcs emanating
from nodes with order m or less have been constructed, and let node i be of order
(m + 1).

Case 1: The set [i+] is empty. Then, i must be a destination node for some
commodity k, i.e., i = d(k). For a ∈ [i−], k ∈ Q(a) and t ∈ Δ, set

g^k_d(k)t = Σ_{(τ,s): τ+s=t} y^k_a(τ,s) + Σ_{(τ,s): τ+s−T=t} y^k_a(τ,s).

For each k ∈ Q(a), every demand h_t^k uses arc a. Thus, g^k, as constructed, must
satisfy the appropriate constraints in (5-1) and (5-2).
Case 2: The set [i+] is not empty. Let a ∈ [i−], also a nonempty set, and assume
that a = (q, i). Then, order(q) < order(i) and, by the above assumption, x_a(t),
y_a(t,s), z_a(t,s), and y^k_a(t,s) are available.

Consider an arc ā ∈ [i+]. For each a ∈ [i−], define Q(a, ā) = {k : a ∈ p^k and ā ∈
p^k, k ∈ C} and, for each k ∈ Q(a, ā), let U^k_ā(t) denote the flow into arc ā at time t
for OD pair k. Then,

U^k_ā(t) = Σ_{(τ,s): τ+s=t} y^k_a(τ,s) + Σ_{(τ,s): τ+s−T=t} y^k_a(τ,s),

and the total flow into arc ā at time t is U_ā(t) = Σ_{a∈[i−]} Σ_{k∈Q(a,ā)} U^k_ā(t). Because u_ā is
sufficiently large, Lemma 5.3.1 ensures that x_ā(t), y_ā(t,s), y^k_ā(t,s) and z_ā(t,s) feasible to
the relevant constraints exist.

Thus, when carried out in the topological order for every arc in Ā, the above
process must produce a feasible solution to DTDTA. ∎
The theorem above assumes that each u_a is sufficiently large so that it is
feasible to send the entire flow for each OD pair along a single path. Although this
assumption appears to be stringent, it can be made less so by allowing the flow for
each OD pair to traverse several paths, as long as they do not induce cycles in
Ḡ(N̄, Ā). With more cumbersome notation, the above argument can be extended to
the case with multiple paths per OD pair as well.

When applied to the above example in which the OD pairs (i, j) and (j, i)
become (i, j') and (j, i'), the acyclic subgraph assumption implies that the paths
from i to j' and from j to i' cannot form a cycle. Intuitively, this means that
there must exist two routes between the original nodes i (e.g., home) and j (e.g.,
work) with no road in common. These routes need not be optimal, and there is no
requirement in our formulation or algorithms to use them. They are used only to
establish existence in Theorem 5.3.1.
The First-In-First-Out (FIFO) condition requires that cars entering an arc at
time t must leave the arc before those entering after time t. In the literature, many
(see, e.g., Ran et al. [91], Zhu and Marcotte [103], and Perakis and Roels [86])
assume that the travel cost function satisfies certain conditions that ensure FIFO.
To avoid making additional assumptions, we instead ensure FIFO by adding the following
constraints to DTDTA. Doing so may make the problem harder to solve
because of the additional constraints:

t + Σ_{s∈Γ_a} s z_a(t,s) ≤ t̄ + Σ_{s∈Γ_a} s z_a(t̄,s),  ∀a ∈ A and t, t̄ ∈ Δ : (t + δ) ≤ t̄,

t̄ + Σ_{s∈Γ_a} s z_a(t̄,s) ≤ (t + T) + Σ_{s∈Γ_a} s z_a(t,s),  ∀a ∈ A and t, t̄ ∈ Δ : (t + δ) ≤ t̄.

When t and t̄ represent two instances of time on the same day, the first inequality
ensures that cars entering arc a at time t leave the arc before those that enter at
time t̄ > t. On the other hand, t and t̄ may refer to times on consecutive days, e.g.,
t = 08:00h today and t̄ = 09:00h yesterday. Because of our periodic assumption,
these two times are on the same interval [00:00h, 24:00h) and t (incorrectly)
appears to be an earlier time than t̄. To distinguish times on consecutive days
and preserve FIFO, the second inequality represents today's time t (e.g., 08:00h of
today) as (t + T) (e.g., as 08:00h of yesterday plus T) and forces cars entering the
arc at this time to depart after those that enter at yesterday's time t̄ (e.g., 09:00h
yesterday).
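Given a candidate assignment of discretized travel times s_a(t), the two inequalities above can be checked mechanically (a sketch for a single arc; the helper name is ours):

```python
def satisfies_fifo(s, delta=1):
    """Check the two FIFO inequalities for one arc on a circular
    horizon: s[t] is the chosen discrete travel time at entrance t."""
    N = len(s)
    T = N * delta
    for t in range(N):
        for tb in range(N):
            if t + delta <= tb:
                if t + s[t] > tb + s[tb]:          # same-day ordering
                    return False
                if tb + s[tb] > (t + T) + s[t]:    # wrap-around ordering
                    return False
    return True

assert satisfies_fifo([2, 2, 3, 3, 3])       # slowly rising travel times
assert not satisfies_fifo([4, 2, 2, 2, 2])   # early cars are overtaken
```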
5.4 Bounds for the DTDTA Problem
As formulated in the previous section, DTDTA is a nonlinear optimization
problem with binary decision variables, a difficult class of problems to solve. This
section describes mixed integer programs for obtaining an approximate solution to
DTDTA as well as bounds for the optimal delay.
Except for constraint (5-6), the constraints of DTDTA are linear. To develop
a linear version of (5-6), assume that the travel time function, θ_a, is invertible for
all a ∈ A. For example, if θ_a is a continuous and increasing function, then θ_a^(-1)
exists on the interval [θ_a(0), θ_a(u_a)]. (See Figure 5-6.) Under this assumption,
θ_a(x_a(t)) ∈ (s − δ, s] if and only if x_a(t) ∈ (θ_a^(-1)(s − δ), θ_a^(-1)(s)]. Thus, the requirement
(s − δ)z_a(t,s) ≤ θ_a(x_a(t)) ≤ s z_a(t,s) is equivalent to θ_a^(-1)(s − δ)z_a(t,s) ≤ x_a(t) ≤
θ_a^(-1)(s)z_a(t,s). Recall that Γ_a = {s : s = ⌈θ_a(w)/δ⌉δ, 0 ≤ w ≤ u_a}, and let s_1 = min{s : s ∈ Γ_a}.
Then, (s_1 − δ) ∉ [θ_a(0), θ_a(u_a)] and θ_a^(-1)(s_1 − δ) is not well defined. (In Figure 5-6,
θ_a^(-1)(s_1 − δ) = θ_a^(-1)(1) is not well defined.) In this thesis, we set θ_a^(-1)(s_1 − δ) = 0.
Using this convention, constraint (5-6) can be replaced by the following linear
equivalent:

Σ_{s∈Γ_a} θ_a^(-1)(s − δ) z_a(t,s) ≤ x_a(t) ≤ Σ_{s∈Γ_a} θ_a^(-1)(s) z_a(t,s),  ∀t ∈ Δ and a ∈ A   (5-9)
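For instance, with the quadratic travel time function used later in Section 5.5, θ(w) = 1.5 + 2.5(w/100)², the inverse required in (5-9) has a closed form; the sketch below (helper names are ours) tabulates the flow breakpoints θ⁻¹(s − δ) and θ⁻¹(s), using the convention θ⁻¹(s₁ − δ) = 0:

```python
import math

def theta(w):                        # quadratic travel time, u_a = 100
    return 1.5 + 2.5 * (w / 100.0) ** 2

def theta_inv(s, s1=2, delta=1):
    """Inverse of theta on [theta(0), theta(u_a)]; zero below the range."""
    if s <= s1 - delta:              # (s1 - delta) lies outside the range
        return 0.0
    return 100.0 * math.sqrt((s - 1.5) / 2.5)

# Flow bounds in (5-9): theta(x) in (s - 1, s]  <=>  x in (inv(s-1), inv(s)]
bounds = {s: (theta_inv(s - 1), theta_inv(s)) for s in (2, 3, 4)}
```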
The following lemma implies that there exist linear functions that approximate
the objective function of DTDTA.
Lemma 5.4.1. There exist vectors q_l and q_u such that q_l^T Y ≤ Φ(Y)^T Y ≤ q_u^T Y for
all Y feasible to DTDTA.
Figure 5-6. θ_a(x_a(t)) ∈ (s − δ, s] versus x_a(t) ∈ (θ_a^(-1)(s − δ), θ_a^(-1)(s)].
Proof: As defined earlier, Φ(Y)^T Y = Σ_{t∈Δ} Σ_{a∈A} θ_a(x_a(t)) [Σ_{s∈Γ_a} y_a(t,s)]. From constraint
(5-6), the following hold for any feasible solution to DTDTA:

Φ(Y)^T Y ≥ Σ_{t∈Δ} Σ_{a∈A} [Σ_{s∈Γ_a} (s − δ) z_a(t,s)] [Σ_{s∈Γ_a} y_a(t,s)]

and

Φ(Y)^T Y ≤ Σ_{t∈Δ} Σ_{a∈A} [Σ_{s∈Γ_a} s z_a(t,s)] [Σ_{s∈Γ_a} y_a(t,s)].

The summand of the last set of summations (i.e., [Σ_{s∈Γ_a} s z_a(t,s)][Σ_{s∈Γ_a} y_a(t,s)]) can
be simplified. Constraint (5-7) implies that y_a(t,s) > 0 only if z_a(t,s) = 1. In
addition, constraint (5-5) ensures that, for each pair (a, t), z_a(t,s̄) = 1 for some
s̄ ∈ Γ_a and z_a(t,s) = 0, ∀s ∈ Γ_a, s ≠ s̄. This implies that only y_a(t,s̄) can be positive,
y_a(t,s) = 0, ∀s ∈ Γ_a, s ≠ s̄, and

[Σ_{s∈Γ_a} s z_a(t,s)] [Σ_{s∈Γ_a} y_a(t,s)] = s̄ y_a(t,s̄) = Σ_{s∈Γ_a} s y_a(t,s).

A similar result holds for the first set of summations. Thus, the above inequalities
reduce to the following:

Σ_{t∈Δ} Σ_{a∈A} Σ_{s∈Γ_a} (s − δ) y_a(t,s) ≤ Φ(Y)^T Y ≤ Σ_{t∈Δ} Σ_{a∈A} Σ_{s∈Γ_a} s y_a(t,s).

Let q_l and q_u be two constant vectors with a component for each arc in the
TE network such that [q_l]_a(t,s) = (s − δ) and [q_u]_a(t,s) = s, respectively, for all
a ∈ A, t ∈ Δ, and s ∈ Γ_a. Then,

q_l^T Y = Σ_{t∈Δ} Σ_{a∈A} Σ_{s∈Γ_a} (s − δ) y_a(t,s) ≤ Φ(Y)^T Y ≤ Σ_{t∈Δ} Σ_{a∈A} Σ_{s∈Γ_a} s y_a(t,s) = q_u^T Y. ∎
Let S(δ) denote the feasible region defined by the linear constraints (5-1) to (5-5),
(5-9), (5-7), and (5-8) and, for convenience, let (Y, Z) represent an element of S(δ).
In addition, let (Y^l, Z^l), (Y^*, Z^*), and (Y^u, Z^u) be solutions to the lower-bound
problem (or min{q_l^T Y : (Y, Z) ∈ S(δ)}), the original problem (or min{Φ(Y)^T Y :
(Y, Z) ∈ S(δ)}), and the upper-bound problem (or min{q_u^T Y : (Y, Z) ∈ S(δ)}),
respectively. Then, the following lemma holds.
Lemma 5.4.2. For any δ > 0, q_l^T Y^l ≤ Φ(Y^*)^T Y^* ≤ q_u^T Y^u ≤ q_u^T Y^l.
Proof: In the following sequence of inequalities, the first one holds because Y^* is
feasible to the lower-bound problem and the second follows from Lemma 5.4.1:

q_l^T Y^l ≤ q_l^T Y^* ≤ Φ(Y^*)^T Y^*.

Similarly, the following sequence also holds:

Φ(Y^*)^T Y^* ≤ Φ(Y^u)^T Y^u ≤ q_u^T Y^u.

Combining the above two sequences yields the first two inequalities in the lemma.
Finally, the last inequality holds because Y^l is feasible, but not necessarily optimal, to
min{q_u^T Y : (Y, Z) ∈ S(δ)}. ∎
In view of the above lemma, the solutions to the upper- and lower-bound
problems are approximations of the solution to the original problem. The theorem
below states that the approximation can be made arbitrarily close to the original
problem by choosing a sufficiently small δ.
Theorem 5.4.1. Given ε > 0, there exists δ > 0 such that q_u^T Y^u − Φ(Y^*)^T Y^* ≤ ε
and Φ(Y^*)^T Y^* − q_l^T Y^l ≤ ε.
Proof: By construction, q_u = q_l + δe, where e is (1, 1, ..., 1)^T. Let H^k denote the
travel demand for OD pair k during the entire planning horizon, i.e., H^k = Σ_{t∈Δ} h_t^k,
and set H = Σ_{k∈C} H^k. For each t ∈ Δ, s_a(t) is such that s_a(t) ∈ Γ_a and
z^l_a(t,s_a(t)) = 1. In words, s_a(t) is the approximate travel time for (static) arc a at
time t in the optimal solution (Y^l, Z^l).

Then, the following sequence must hold:

0 ≤ (q_u − q_l)^T Y^l
  = δ e^T Y^l
  = δ Σ_{a∈A} Σ_{k∈C} Σ_{t∈Δ} Σ_{s∈Γ_a} y^k_a(t,s)
  = δ Σ_{a∈A} Σ_{k∈C} Σ_{t∈Δ} y^k_a(t,s_a(t))
  ≤ δ Σ_{a∈A} Σ_{k∈C} H^k
  = δ H Σ_{a∈A} 1
  = δ H |A|.
The first inequality follows from Lemma 5.4.2. Then, the above relationship
between q_l and q_u, and letting Σ_{k∈C} y^k_a(t,s) denote the individual components of Y^l,
yield the first two equalities. The third equality follows from the definition of
s_a(t). Following this, the second inequality holds because the total amount of
flow on (static) arc a for OD pair k during the entire planning horizon cannot
exceed H^k. The sum of the latter is H, a constant that can be factored out of
the summation over A. This validates the penultimate equality. Finally, the last
equality follows from the fact that Σ_{a∈A} 1 simply denotes the number of elements
in the set A. Choosing δ = ε/(H|A|) guarantees that q_u^T Y^l − q_l^T Y^l ≤ ε. When combined
with the results in Lemma 5.4.2, the latter implies that q_u^T Y^u − Φ(Y^*)^T Y^* ≤ ε and
Φ(Y^*)^T Y^* − q_l^T Y^l ≤ ε. ∎
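The recipe δ = ε/(H|A|) from the proof can be read as a sizing rule for the discretization, given the total demand H, the arc count |A|, and a target gap ε. An illustration with the two-arc instance of Section 5.5 and an arbitrary ε (numbers are illustrative only):

```python
def delta_for_gap(eps, total_demand, num_arcs):
    """Discretization fine enough that q_u^T Y^l - q_l^T Y^l <= eps."""
    return eps / (total_demand * num_arcs)

# Two-arc network, low traffic intensity: H = 300 cars, |A| = 2 arcs
delta = delta_for_gap(6.0, 300, 2)   # delta = 0.01
```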
The approximate solution Y^u can be improved by solving an additional
nonlinear program. In particular, consider the approximation refinement problem
min{Φ(Y)^T Y : (Y, Z^u) ∈ S(δ)}, i.e., the original problem with Z = Z^u.
Doing so makes it possible to remove the TE arcs corresponding to z^u_a(t,s) = 0 from the
TE network and to discard the decision variables x_a(t) and constraints (5-5) and (5-7)
from the problem. In DTDTA, we use x_a(t), the number of cars on arc a at time t,
to compute the travel time on arc a and, subsequently, to select which TE arc to
use, or which z_a(t,s) to set to one. Thus, when Z is given, x_a(t) becomes unnecessary.
Additionally, let s(t) be such that z^u_a(t,s(t)) = 1 for each t ∈ Δ. Then, constraint
(5-9), originally (5-6), reduces to requiring Σ_{(τ,s)∈Ω_a(t)} y_a(τ,s) to lie in the interval
(θ_a^(-1)(s(t) − δ), θ_a^(-1)(s(t))]. In other words, the original problem with Z = Z^u is a nonlinear
multicommodity flow problem with the latter as side constraints.
Let Y^r be an optimal solution to min{Φ(Y)^T Y : (Y, Z^u) ∈ S(δ)}. Then, the
following corollary shows that Y^r approximates Y^* better than Y^u does.
Corollary 5.4.1. Φ(Y^*)^T Y^* ≤ Φ(Y^r)^T Y^r ≤ Φ(Y^u)^T Y^u ≤ q_u^T Y^u.
Proof: In the above sequence of inequalities, the first one follows because Y^* is
optimal to the original problem and Y^r is only feasible. The second holds because
Y^u is feasible to min{Φ(Y)^T Y : (Y, Z^u) ∈ S(δ)}, to which Y^r is optimal. Finally, the
last is due to Lemma 5.4.1. ∎
5.5 Numerical Experiments
We conducted numerical experiments using small test networks to empirically
verify our understanding of DTDTA as well as to evaluate the efficiency and
effectiveness of the approximation schemes discussed in previous sections.
Table 5-1. Demand patterns
Time
Traffic Intensity 0 1 2 3 4 5 6 7 8 9 Total
Low 20 25 30 35 40 40 35 30 25 20 300
Medium 30 35 40 45 50 50 45 40 35 30 400
High 40 45 50 55 60 60 55 50 45 40 500
In all problems, the planning horizon is [0, 10) and the travel cost functions
are either linear, i.e., θ(w) = 1.5 + 2.5(w/100), or quadratic, i.e., θ(w) = 1.5 + 2.5(w/100)^2,
where w is the number of cars on the arc. We consider the three different demand
patterns displayed in Table 5-1. In all three patterns, the travel demands at the discrete
points increase gradually until time 4, level off briefly, and then decrease
gradually after time 5. The individual demands in the three patterns are different
and represent three traffic intensities: low, medium, and high. We used GAMS [39]
to implement and solve all problems using the NEOS Server for Optimization [82]. In
particular, we used SBB [94] to solve our nonlinear integer programming problem,
i.e., DTDTA, Xpress-MP [102] to solve our linear integer programs, i.e., the lower-
and upper-bound problems, and CONOPT [26] to solve our linearly constrained
optimization problems, i.e., the approximation refinement problems. All CPU times
reported herein are from the NEOS server.
To empirically verify that the DTDTA problem is not convex, we first consider
the two-arc network in Figure 5-7, which has one OD pair. We let δ = 1. Thus, the
discrete-time planning horizon is Δ = {0, 1, ..., 9}. We use the above quadratic
Figure 5-7. Two-arc network.
Table 5-2. Optimal solutions to the two-arc problem.
Solution 1 Solution 2
Inflow Travel time Inflow Travel time
Time a1 a2 a1 a2 a1 a2 a1 a2
0 0 20 1.600 1.500 20 0 1.500 1.600
1 25 0 1.500 1.600 0 25 1.600 1.500
2 0 30 1.656 1.500 30 0 1.500 1.656
3 35 0 1.500 1.725 0 35 1.725 1.500
4 0 40 1.806 1.500 40 0 1.500 1.806
5 40 0 1.500 1.900 0 40 1.900 1.500
6 0 35 1.900 1.500 35 0 1.500 1.900
7 30 0 1.500 1.806 0 30 1.806 1.500
8 0 25 1.725 1.500 25 0 1.500 1.725
9 20 0 1.500 1.656 0 20 1.656 1.500
travel time function for both arcs, and the function yields travel times in the
interval [1.5, 4.0]. Because δ = 1, the set of discrete travel times is Γ = {2, 3, 4}.
Using the low-traffic-intensity demand pattern in Table 5-1, we solved DTDTA
using SBB and terminated it when the relative optimality gap was less than 0.005 (or
0.5%). There are two optimal solutions (see Table 5-2) to the two-arc problem with
an optimal total delay of 450.
Consider the first solution, labelled 'Solution 1', in Table 5-2. At time 0,
there are 20 cars to travel from node 1 to node 2. At this time, there are also 20
cars already on arc a1. These cars entered the arc at time 9 and have not reached
their destination at time 0. Because DTDTA assumes that the time to traverse
arc a1 depends on the number of cars on the arc at the entrance time, the travel
time for arc a1 at time 0 is 1.5 + 2.5(20/100)² = 1.6. On the other hand, there is
no car on a2 at time 0. Cars that entered the arc at time 8 already left the arc by
time 0. Thus, the travel time for a2 at time 0 is 1.5, the free-flow travel time. To
minimize the travel time, all 20 cars entering the network at time 0 must travel on
a2. In fact, every car in Solution 1 travels at the free-flow travel time of 1.5. Thus,
there cannot be any solution with less total delay, and Solution 1 must be optimal.
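The travel times above can be checked with a short script; the quadratic function below is inferred from the reported interval [1.5, 4.0] and the entries of Table 5-2, so its coefficients are an assumption rather than a quotation of the model data.

```python
# Quadratic arc travel time inferred from the text: phi(x) = 1.5 + 2.5*(x/100)^2,
# which maps x in [0, 100] onto the interval [1.5, 4.0].
def travel_time(x):
    return 1.5 + 2.5 * (x / 100.0) ** 2

# Reproduces the travel times reported in Table 5-2.
for cars in (20, 25, 30, 35, 40):
    print(cars, round(travel_time(cars), 3))
```

For example, 20 cars on the arc give 1.5 + 2.5(0.2)² = 1.6, matching the entry for arc a1 at time 0.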
Figure 5-8. Four-node network.
Because of the symmetry in the data, switching the flows between the two arcs in
the network yields Solution 2, another optimal solution. Furthermore, it is easy to
verify that every convex combination of these two solutions is feasible to DTDTA
and yields a larger total delay, thereby confirming empirically
that the objective function is not convex.
Additionally, the "extreme" travel behavior displayed in Table 5-2 may not
be intuitive. This is due to the assumption that the system operator is extremely
sensitive to the difference in travel times and is willing to switch routes in order to
save a minute amount of travel time.
When the network is large, it would be too time-consuming to solve DTDTA
optimally or otherwise. In our experiments, we consider four approximate solutions
to DTDTA: (Y^l, Z^l), (Y^u, Z^u), (Ỹ^l, Z^l), and (Ỹ^u, Z^u), where the last two are
refinements of the first two. To evaluate the quality and the computation times
of these solutions, we consider the four-node network in Figure 5-8 with two OD
pairs, (1, 4) and (2, 4). In our experiments, both OD pairs have the same demand
pattern, and all arcs have the same travel cost function, linear or quadratic, as
specified above.
First, we solved the lower and upper-bound problems using two levels of
discretization, δ = 1 and δ = 0.5. As before, when δ = 1, the discrete-time planning
horizon is Δ = {0, 1, ..., 9}. On the other hand, when δ = 0.5, Δ becomes
Table 5-3. Solutions from the lower and upper-bound problems: linear travel cost
function.
Traffic              δ = 1                         δ = 0.5
intensity   q^l Y^l   q^u Y^u    Gap      q^l Y^l   q^u Y^u    Gap
Low          820.0    1580.0    760.0     1187.5    1560.0    372.5
Medium      1200.0    2230.0   1030.0     1705.0    2230.0    525.0
High        1500.0    2875.0   1375.0     2187.5    2870.0    682.5
{0, 0.5, 1, 1.5, ..., 9, 9.5}. For the comparison below (see Tables 5-3 and 5-4), we
assume that, when δ = 0.5, there is no demand at fractional times (e.g., at 0.5,
1.5, 2.5, etc.) and the demands at integral times (i.e., 1, 2, 3, etc.) are as shown in
Table 5-1.
For both types of travel cost functions, the size of the optimality gap (i.e.,
q^u Y^u − q^l Y^l) decreases by approximately 50% as δ decreases from 1 to 0.5.
However, the results in Tables 5-3 and 5-4 suggest that the reduction in the gap
is due mainly to the improvement in the solution, Y^l, of the lower-bound problem.
The approximate travel delays, as estimated by Y^u, change relatively little for the
two values of δ.
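The roughly 50% gap reduction can be verified directly from the low-intensity rows of Tables 5-3 and 5-4; the snippet below is just that arithmetic.

```python
# Optimality gaps (q^u Y^u - q^l Y^l) from the low traffic intensity rows of
# Tables 5-3 (linear) and 5-4 (quadratic), for delta = 1 and delta = 0.5.
bounds = {
    "linear":    {1.0: (820.0, 1580.0), 0.5: (1187.5, 1560.0)},
    "quadratic": {1.0: (600.0, 1200.0), 0.5: (900.0, 1200.0)},
}
for case, by_delta in bounds.items():
    gap_1 = by_delta[1.0][1] - by_delta[1.0][0]
    gap_05 = by_delta[0.5][1] - by_delta[0.5][0]
    # The ratio of the two gaps is about 0.5 in both cases.
    print(case, gap_1, gap_05, round(gap_05 / gap_1, 2))
```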
Tables 5-5 and 5-6 compare the solutions from DTDTA, (Y*, Z*), against
two approximations, (Ỹ^u, Z^u) and (Ỹ^l, Z^l). As in the two-arc problem, we solve
DTDTA using SBB to obtain an (integer) solution (Y*, Z*) with less than 0.5%
relative optimality gap. To obtain (Ỹ^u, Z^u), we first solve the upper-bound problem
using Xpress-MP to obtain (Y^u, Z^u), an (integer) solution with less than 0.5%
optimality gap, and then solve the approximation refinement problem (with
Table 5-4. Solutions from the lower and upper-bound problems: quadratic travel
cost function.
Traffic              δ = 1                         δ = 0.5
intensity   q^l Y^l   q^u Y^u    Gap      q^l Y^l   q^u Y^u    Gap
Low          600.0    1200.0    600.0      900.0    1200.0    300.0
Medium       822.2    1644.5    822.2     1233.3    1644.5    411.1
High        1124.5    2248.9   1124.5     1686.7    2248.9    562.2
Table 5-5. Quality of refined upper- and lower-bound solutions: linear travel cost
function.
            (Y*, Z*)          (Ỹ^u, Z^u)        (Ỹ^l, Z^l)            Rel. cpu ratio
Traffic    Delay    cpu*     Delay    cpu^u    Delay    cpu^l    Err    cpu*/   cpu*/
intensity           (sec)             (sec)             (sec)    (%)    cpu^u   cpu^l
Low       1337.50   27.42   1385.00    2.57   1392.50    2.38    3.55    10.7    11.5
Medium    1800.00   15.92       —      2.66   1815.30    2.90    0.85     6.0     5.5
High      2290.00   95.02   2327.50    4.07   2315.00    1.25    1.09    23.3    76.0
Z = Z^u) using CONOPT to obtain (Ỹ^u, Z^u). The solution (Ỹ^l, Z^l) is obtained in
the same manner. In the two tables, the CPU times for the two approximations are
the times for solving both the bounding and refinement problems.
For both linear and quadratic travel time functions, the two approximation
schemes provide solutions with relatively small errors using much less CPU time
than that required to solve DTDTA (see the ratios of the cpu times in Tables 5-5 and 5-6).
For quadratic travel time functions, the approximate solutions are identical to the
DTDTA solutions, except for the high traffic intensity case, where the approximate
solutions are slightly better (by 0.06%).
Table 5-6. Quality of refined upper- and lower-bound solutions: quadratic travel
cost function.
            (Y*, Z*)          (Ỹ^u, Z^u)        (Ỹ^l, Z^l)            Rel. cpu ratio
Traffic    Delay    cpu*     Delay    cpu^u    Delay    cpu^l    Err    cpu*/   cpu*/
intensity           (sec)             (sec)             (sec)    (%)    cpu^u   cpu^l
Low       1054.50    0.88   1054.50    0.09   1054.50    0.08    0.00     9.8    11.0
Medium        —      6.62       —      0.14       —      0.34    0.00    47.3    19.5
High      2129.80  501.17   2128.60    0.10   2128.60    0.13    0.06  5011.7  3855.2
5.6 Concluding Remarks
This chapter formulates a discrete-time dynamic traffic assignment problem
(DTDTA) in which the planning horizon is treated in a circular fashion and events
occur periodically. Doing so allows positive flows on the network both at the
beginning and at the end of the planning horizon. The structure underlying the
formulation is the time expansion of the (static) network representation of streets
and highways. The resulting problem is a nonlinear program with binary variables,
a difficult class of problems to solve. Alternatively, two linear integer programs are
constructed to obtain approximate solutions and bounds on the total travel delay.
It is shown that solutions from the latter can be made arbitrarily close to solutions
of DTDTA. Furthermore, numerical results from small test problems suggest that
solving the linear integer program is more efficient.
CHAPTER 6
A NONLINEAR APPROXIMATION BASED HEURISTIC ALGORITHM FOR
THE UPPERBOUND PROBLEM
6.1 Introduction to the Chapter
In Chapter 5 we discussed a periodic discrete-time dynamic traffic
assignment problem, which is constructed based on two assumptions: (i) all
cars that enter an arc during the same time interval experience the same
travel time and leave the arc during the same time interval, and (ii) the travel
time is a function of the number of cars on the road (see also Nahapetyan and
Lawphongpanich [78]). As we have seen, the initial mathematical formulation
of such a model leads to a mixed integer problem with linear constraints and a
nonlinear objective function. By linearizing the objective, one can construct an
upper and a lower bound problem. Although the solutions of such problems can
be made arbitrarily close to the solution of the initial problem by decreasing the
discretization parameter δ, observe that the bounding problems belong to the class
of linear mixed integer programs, which are computationally expensive to solve. In
the case of the bounding problems, the task becomes more challenging because of
the special structure of the feasible region.
Observe that the model is constructed based on a time-expanded network,
and there are binary variables, z_{a(t,s)}, associated with the arcs of the network.
By decreasing the parameter δ, the number of binary variables increases. For
example, given a traffic network G(N, A) and a set of possible discrete travel
times Γ_a(δ), the total number of binary variables in the DTDTAU problem
is |Δ(δ)| Σ_{a∈A} |Γ_a(δ)|. If δ reduces to δ/2, then |Γ_a(δ/2)| = 2|Γ_a(δ)|, |Δ(δ/2)| =
2|Δ(δ)|, and the total number of binary variables in the new problem is
2²|Δ(δ)| Σ_{a∈A} |Γ_a(δ)|. Because of resource limitations, MIP solvers cannot solve
large problems.
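The growth in problem size can be illustrated with a small count; the network sizes below are hypothetical, chosen only to show the factor-of-four effect.

```python
# Number of binary variables |Delta(delta)| * sum_{a in A} |Gamma_a(delta)|
# for a hypothetical two-arc network. Halving delta doubles both the number
# of time steps and each |Gamma_a|, so the count grows by a factor of 2^2 = 4.
def num_binaries(time_steps, gammas):
    return time_steps * sum(gammas)

coarse = num_binaries(10, [3, 3])   # delta = 1
fine = num_binaries(20, [6, 6])     # delta = 1/2
print(coarse, fine, fine // coarse)
```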
In this chapter we consider a heuristic algorithm to solve the bounding
problems. Although the same technique can be applied to solve both bounding
problems, we mainly focus on the upper bound problem, DTDTAU. For
convenience of reference, we restate the problem below.
min_{(x,y,z,g)}  q^T y
s.t.  B y^k + g^k = b^k,   ∀k ∈ C   (6-1)
      Σ_{t∈Δ} g^k_{d(k)t} = Σ_{t∈Δ} h^k_t,   ∀k ∈ C   (6-2)
      y_{a(t,s)} = Σ_{k∈C} y^k_{a(t,s)},   ∀t ∈ Δ, a ∈ A and s ∈ Γ_a   (6-3)
      x_{a(t)} = Σ_{(τ,s)∈Ω_a(t)} y_{a(τ,s)},   ∀t ∈ Δ and a ∈ A   (6-4)
      Σ_{s∈Γ_a} z_{a(t,s)} = 1,   ∀t ∈ Δ and a ∈ A   (6-5)
      Σ_{s∈Γ_a} φ_a^{-1}(s − δ) z_{a(t,s)} ≤ x_{a(t)} ≤ Σ_{s∈Γ_a} φ_a^{-1}(s) z_{a(t,s)},   ∀t ∈ Δ and a ∈ A   (6-6)
      y_{a(t,s)} ≤ M_a z_{a(t,s)},   ∀t ∈ Δ, a ∈ A and s ∈ Γ_a   (6-7)
      y^k_{a(t,s)} ≥ 0, g^k_{d(k)t} ≥ 0, x_{a(t)} ≥ 0, z_{a(t,s)} ∈ {0, 1},   ∀t ∈ Δ, a ∈ A, s ∈ Γ_a and k ∈ C   (6-8)
In well-known heuristic algorithms such as neighborhood search, greedy
algorithms, or tabu search, it is required to move from one feasible solution to
another. However, finding a feasible solution to the DTDTAU problem is not easy.
To demonstrate, consider a one-arc network, a = (1, 2), a linear travel time function
φ_a(x_{a(t)}) = 0.3 + 0.05 x_{a(t)}, and a set of possible (discrete) travel times Γ_a = {1, 2, 3}.
In addition, assume that the time horizon [0, 5) is divided into five intervals, and
10 cars enter the arc at each discrete time t ∈ Δ = {0, 1, 2, 3, 4}. By assigning
those cars to the arcs a(t, 1) of the TE network, one concludes that x_{a(t)} = 10,
φ_a(x_{a(t)}) = 0.8, and z_{a(t,1)} = 1, ∀t ∈ Δ, which satisfies constraints (6-5) and
(6-6) (see the left network in Figure 6-1). It can be shown that this is an optimal
solution to the DTDTAU problem. However, there is another feasible solution,
where 10 cars are assigned to the arcs a(t, 2) (see the right network in Figure 6-1).
Using those settings, x_{a(t)} = 20, φ_a(x_{a(t)}) = 1.3, and z_{a(t,2)} = 1, ∀t ∈ Δ, which again
satisfies constraints (6-5) and (6-6). Assume that the second solution is known
and one decides to improve the solution and move to another feasible solution by
assigning the cars at time t = 0 to arc a(0, 1), i.e., z_{a(0,1)} = 1 and z_{a(0,2)} = 0. As
a result, x_{a(1)} = 10 and φ_a(x_{a(1)}) = 0.8, which violates inequality (6-6). The same
follows from the change of other arcs; thus, we conclude that the second solution
is isolated in the sense that the adjacent solutions, i.e., those obtained by changing
only one arc, are infeasible. Finding an adjacent feasible solution becomes more complicated
when the static network G(N, A) is larger because, with respect to a given path,
changes on upstream arcs influence the flows on downstream arcs.
Because heuristic algorithms similar to neighborhood search, greedy algorithms,
and tabu search require moving from one feasible solution to a neighboring feasible
solution, the difficulty of finding a neighboring solution makes such techniques
inappropriate.
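The isolation of the second solution in the one-arc example can be sketched numerically. The check below restates constraint (6-6) in travel-time space: fixing z_{a(t,s)} = 1 forces φ_a(x_{a(t)}) to lie between s − δ and s, which is an equivalent reading of the constraint for this example (the function and data come from the text; the helper names are ours).

```python
# One-arc example: phi(x) = 0.3 + 0.05x, discrete travel times Gamma_a = {1, 2, 3},
# delta = 1. Fixing z_{a(t,s)} = 1 requires phi(x_{a(t)}) to lie in [s - delta, s].
def phi(x):
    return 0.3 + 0.05 * x

def consistent(x, s, delta=1.0):
    return s - delta <= phi(x) <= s

# Second solution: all cars on arcs a(t,2), so x_{a(t)} = 20 and phi = 1.3.
print(consistent(20, 2))   # True: the solution is feasible
# Rerouting the t = 0 cars to a(0,1) leaves x_{a(1)} = 10, phi = 0.8,
# which no longer matches the still-fixed choice s = 2 at time 1.
print(consistent(10, 2))   # False: the adjacent solution is infeasible
```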
Figure 6-1. Two feasible solutions.
Another approach to solving the DTDTAU problem is to relax the
integrality of the variables z_{a(t,s)} and construct an equivalent formulation with
continuous variables. To do so, replace the constraints z_{a(t,s)} ∈ {0, 1} by the inequalities
0 ≤ z_{a(t,s)} ≤ 1, i.e., z ∈ [0, 1]^n, where n = |Δ(δ)| Σ_{a∈A} |Γ_a(δ)|, and z_{a(t,s)}(1 − z_{a(t,s)}) ≤ 0.
The latter can be included in the objective function with a penalty. As a
result, the problem reduces to a continuous concave minimization problem, and a
global solution of the resulting problem is a solution of the DTDTAU problem
(see, e.g., Horst et al. [54] or Horst and Tuy [55]). Although it is known that an
optimal vector of the binary variables, z*, represents one of the vertices of the n-dimensional
unit cube (each vertex corresponds to an integer solution), because of
constraints (6-1)-(6-8) most of the vertices are infeasible, and it is hard to find an optimal
one.
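The penalty reformulation can be sketched in a few lines; the penalty weight mu and the sample points below are illustrative, not values from the text.

```python
# Sketch of the concave penalty reformulation: relax z in {0,1} to z in [0,1]
# and add mu * sum z*(1-z) to the objective. The penalty term vanishes exactly
# at the vertices of the unit cube and is positive in its interior.
def penalized(obj_value, z, mu=1e3):
    return obj_value + mu * sum(v * (1.0 - v) for v in z)

print(penalized(100.0, [0.0, 1.0, 1.0]))   # vertex: penalty 0, value 100.0
print(penalized(100.0, [0.5, 1.0, 1.0]))   # interior point: value 350.0
```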
Observe that the LP relaxation of the DTDTAU problem provides a lower
bound, which is far from the optimal one. To illustrate, consider the DTDTAU
problem, where the constraints z_{a(t,s)} ∈ {0, 1} are replaced by the inequalities
0 ≤ z_{a(t,s)} ≤ 1. To find optimal values of the variables y_{a(t,s)} and g^k_{d(k)t} in the resulting
problem, it is sufficient to solve the following linear problem.
min_{(y,g)}  q^T y
s.t.  B y^k + g^k = b^k,   ∀k ∈ C
      Σ_{t∈Δ} g^k_{d(k)t} = Σ_{t∈Δ} h^k_t,   ∀k ∈ C
      y^k_{a(t,s)} ≥ 0, g^k_{d(k)t} ≥ 0,   ∀t ∈ Δ, a ∈ A, s ∈ Γ_a and k ∈ C
Because the arcs a(t, 1) have a lower cost in the objective than the arcs a(t, s),
s ≠ 1, one concludes that at optimality only the arcs that correspond to the free-flow
travel time, i.e., the arcs a(t, 1), ∀a ∈ A and t ∈ Δ, have positive flows. Using
the optimal values of those variables, it is easy to compute the values of the variables
x_{a(t)} through equations (6-3) and (6-4). The optimal values of the variables z_{a(t,s)}
can be obtained by solving the following system of equations.
Σ_{s∈Γ_a} z_{a(t,s)} = 1
Σ_{s∈Γ_a} φ_a^{-1}(s − δ) z_{a(t,s)} ≤ x*_{a(t)}
Σ_{s∈Γ_a} φ_a^{-1}(s) z_{a(t,s)} ≥ x*_{a(t)}            (6-9)
z_{a(t,s)} ∈ [0, 1]
z_{a(t,s)} ≥ y*_{a(t,s)} / M_a
Notice that the last inequality in (6-9) is satisfied ∀z_{a(t,s)} ∈ [0, 1], s ≠ 1, because
y*_{a(t,s)} = 0, ∀s ∈ Γ_a, s ≠ 1. In the case of s = 1, a sufficiently large value of M_a
reduces the inequality to z_{a(t,1)} > 0 and makes sure that the arcs a(t, 1) are allowed to
have positive flows given any positive value of the variable z_{a(t,1)} (see inequality
(6-7)). The other equations in (6-9) are easy to satisfy, and it can be shown that for
any value of x*_{a(t)} the set of solutions to the system is nonempty and not unique.
However, because of congestion it is highly unlikely that at optimality of
the DTDTAU problem all drivers experience the free-flow travel time, and one
concludes that a solution of the LP relaxation of the problem is not realistic, and
the optimal objective function value of the relaxation problem is far from the
optimal value of the objective function of DTDTAU.
Despite all the complications described above, the DTDTAU problem has the
following useful property: if at optimality the total inflow into arc a at time t
is zero, i.e., Σ_{s∈Γ_a} y_{a(t,s)} = 0, then constraint (6-7) is satisfied for any value of
z_{a(t,s)}; therefore, the corresponding constraints (6-5)-(6-7) can be removed from the
formulation, and the solution of the resulting problem remains the same. Unknown
values of the variables z_{a(t,s)} can be restored by solving the system of equations
(6-5)-(6-6) using the values of x_{a(t)}.
The above analysis motivates developing a lower bound problem for
DTDTAU, which (i) is tighter than the LP relaxation, (ii) is easier to solve
than the concave minimization problem discussed above, and (iii) preserves the
above-mentioned property. In particular, in this chapter we consider a nonlinear
relaxation of the problem with bilinear constraints. Using the relaxation technique,
we propose a heuristic algorithm to solve the DTDTAU problem.
For the remainder, Section 6.2 discusses the nonlinear relaxation of the
DTDTAU problem. Using the relaxation, in Section 6.3 we propose a heuristic
algorithm to solve the DTDTAU problem. Numerical experiments on the
algorithm are provided in Section 6.4, and finally, Section 6.5 concludes the
chapter.
6.2 Nonlinear Relaxation of the DTDTAU Problem
Consider the following continuous nonlinear minimization problem, which we
refer to as DTDTAR.
min_{(x,y,g)}  q^T y
s.t.  B y^k + g^k = b^k,   ∀k ∈ C   (6-10)
      Σ_{t∈Δ} g^k_{d(k)t} = Σ_{t∈Δ} h^k_t,   ∀k ∈ C   (6-11)
      y_{a(t,s)} = Σ_{k∈C} y^k_{a(t,s)},   ∀t ∈ Δ, a ∈ A and s ∈ Γ_a   (6-12)
      x_{a(t)} = Σ_{(τ,s)∈Ω_a(t)} y_{a(τ,s)},   ∀t ∈ Δ and a ∈ A   (6-13)
      Σ_{s∈Γ_a} φ_a^{-1}(s − δ) y_{a(t,s)} ≤ x_{a(t)} Σ_{r∈Γ_a} y_{a(t,r)} ≤ Σ_{s∈Γ_a} φ_a^{-1}(s) y_{a(t,s)},
      ∀t ∈ Δ and a ∈ A   (6-14)
      y^k_{a(t,s)} ≥ 0, g^k_{d(k)t} ≥ 0, x_{a(t)} ≥ 0,   ∀t ∈ Δ, a ∈ A, s ∈ Γ_a and k ∈ C   (6-15)
Observe that (i) in the DTDTAR problem constraints (6-10)-(6-13) are the
same as the corresponding constraints of the DTDTAU problem, (ii) the DTDTAR
problem does not include the binary variables and constraints (6-5) and (6-7), and
(iii) the constraints (6-6) are replaced by the bilinear constraints (6-14).
Theorem 6.2.1. The DTDTAR problem is equivalent to the LP relaxation of the
DTDTAU problem with additional constraints of the form
z_{a(t,s)} Σ_{r∈Γ_a} y_{a(t,r)} = y_{a(t,s)},   ∀t ∈ Δ, a ∈ A and s ∈ Γ_a.   (6-16)
Proof: Consider the following two cases:
Case 1: Σ_{r∈Γ_a} y_{a(t,r)} ≠ 0. From equality (6-16) it follows that z_{a(t,s)} =
y_{a(t,s)} / Σ_{r∈Γ_a} y_{a(t,r)} ∈ [0, 1], ∀s ∈ Γ_a. The latter satisfies constraints (6-5) and (6-7) given
a sufficiently large M_a; thus, the constraints can be removed from the formulation.
In addition, observe that after appropriate substitutions of the variables z_{a(t,s)} the
constraint (6-6) transforms into the constraint (6-14), and the variables z_{a(t,s)} can
be removed from the formulation.
Case 2: Σ_{r∈Γ_a} y_{a(t,r)} = 0. Observe that equation (6-16) and constraint (6-7)
are satisfied for any value of the variable z_{a(t,s)}, s ∈ Γ_a. Because constraint (6-7)
is redundant, remove the variables z_{a(t,s)}, s ∈ Γ_a, from the formulation as well as
the corresponding constraints (6-5)-(6-7). In addition, notice that constraint (6-14) is
satisfied and can be added to the formulation without changing the feasible region.
Based on the above analysis, one concludes that both problems have the same
optimal objective function value and the same optimal values for the variables
y_{a(t,s)} and x_{a(t)}. In addition, the optimal values of the variables z_{a(t,s)} can be
obtained using the values of y*_{a(t,s)} and x*_{a(t)}. In particular, if Σ_{r∈Γ_a} y*_{a(t,r)} ≠ 0 then
z*_{a(t,s)} = y*_{a(t,s)} / Σ_{r∈Γ_a} y*_{a(t,r)}. Otherwise, given the vector x*, to find the optimal values of the
variables z_{a(t,s)} it is sufficient to solve the system of equations (6-5)-(6-6). □
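The recovery of z* described in Theorem 6.2.1 can be sketched for a single hypothetical arc-time pair; the dict layout (index s mapped to flow) is our own convention.

```python
# Recover z* from y* for one (a, t) pair: when the total inflow is nonzero,
# z*_{a(t,s)} = y*_{a(t,s)} / sum_r y*_{a(t,r)} (Theorem 6.2.1, Case 1).
def recover_z(y_star):
    total = sum(y_star.values())
    if total == 0:
        return None   # Case 2: z is under-determined; solve (6-5)-(6-6) instead
    return {s: flow / total for s, flow in y_star.items()}

print(recover_z({1: 0.0, 2: 30.0, 3: 10.0}))   # {1: 0.0, 2: 0.75, 3: 0.25}
```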
Theorem 6.2.2. The DTDTAR problem provides a tighter lower bound
for the DTDTAU problem than the LP relaxation.
Proof: Consider a feasible solution, (y, x, z), to the DTDTAU problem.
Observe that it satisfies equation (6-16) because from constraints (6-5)-(6-7)
it follows that either both sides of (6-16) are zero, i.e., z_{a(t,s)} = y_{a(t,s)} = 0, or
z_{a(t,s)} = 1 and Σ_{r∈Γ_a} y_{a(t,r)} = y_{a(t,s)}. As a result, adding the equality (6-16) to
the DTDTAU problem does not change the feasible region. Next, observe that
according to Theorem 6.2.1 the LP relaxation of the resulting problem is
equivalent to the DTDTAR problem; therefore, the solution of the DTDTAR
problem is a lower bound for the DTDTAU problem.
From Theorem 6.2.1 it also follows that the DTDTAR problem is equivalent
to the LP relaxation of the DTDTAU problem with additional constraints of
the form (6-16). Next we show that the constraints are not redundant unless
at optimality all drivers experience the free-flow travel time. In particular, the
constraint (6-16) requires sending a portion of the total flow, i.e., z_{a(t,s)} Σ_{r∈Γ_a} y_{a(t,r)},
along the arc a(t, s) if Σ_{r∈Γ_a} y_{a(t,r)} ≠ 0. On the other hand, recall that in the LP
relaxation of the DTDTAU problem, at optimality only arcs that correspond to
the free-flow travel time have a positive flow, i.e., y_{a(t,s)} = 0, ∀s ∈ Γ_a, s ≠ 1, and
the variables z_{a(t,s)} may have positive values for all s ∈ Γ_a, s ≠ 1, as long as they
solve the system (6-9) (see Section 6.1). It can be shown that the solution satisfies
equation (6-16) only if z_{a(t,1)} = 1 solves the system (6-9). The latter holds only
in trivial problems with no congestion. From the above analysis it follows that
the DTDTAR problem has a tighter feasible region than the LP relaxation of the
DTDTAU problem. □
The DTDTAR problem belongs to the class of global optimization problems because
constraint (6-14) is neither convex nor concave. However, it is computationally
more attractive than the concave minimization problem discussed in Section 6.1.
In particular, the DTDTAR problem has fewer constraints and variables, and
does not have a penalty term in the objective function. In addition, the DTDTAR
problem preserves the above-mentioned property: if at time t an arc, a, has no
inflow, i.e., Σ_{s∈Γ_a} y_{a(t,s)} = 0, then constraint (6-14) is satisfied for any value of the
variable x_{a(t)}.
6.3 Nonlinear Relaxation Based Heuristic Algorithm
The nonlinear relaxation problem DTDTAR can be very useful for developing
a heuristic algorithm for solving the DTDTAU problem because solving the
DTDTAR problem is a trade-off between satisfying constraint (6-14) and solving
the LP relaxation of the DTDTAU problem. (Notice that the latter is equivalent
to finding shortest paths in the TE network.) As a result, the optimal values
of the variables x_{a(t)} better approximate the optimal values of the corresponding
variables of the DTDTAU problem. However, the solution is unlikely to be feasible
to the original problem; thus it is essential to find an integer solution that has an
objective function value as close as possible to the one provided by the DTDTAR
problem.
In the heuristic algorithm (see Procedure 8), first we solve the LP relaxation of
the DTDTAU problem, which provides an initial solution for the DTDTAR problem.
Next the procedure solves the DTDTAR problem. In Step 3, finding the values of
ẑ_{a(t,s)} is easy and can be accomplished by performing a simple search technique. By
fixing the binary variables of the DTDTAU problem to the values of ẑ_{a(t,s)}, i.e.,
z_{a(t,s)} = ẑ_{a(t,s)}, the problem reduces to a linear one. If the resulting LP is feasible,
then the algorithm stops and returns the solution. Otherwise, the algorithm runs
the UpSet procedure (see Procedure 9) and then goes to Step 2.
Procedure 8 : Heuristic Algorithm for Solving the DTDTAU Problem
Step 1: Solve the LP relaxation of the DTDTAU problem, and let the solution
be an initial solution for solving the DTDTAR problem in the next step.
Step 2: Solve the DTDTAR problem, and let (y^R, x^R) denote the solution of
the problem.
Step 3: Let x_{a(t)} = x^R_{a(t)} and find binary variables ẑ_{a(t,s)} that satisfy
constraints (6-5) and (6-6).
Step 4: In the DTDTAU problem, fix the binary variables to the values of
ẑ_{a(t,s)} and solve the resulting LP problem.
Step 5: If the LP problem is feasible, stop and return the solution. Otherwise,
run the UpSet procedure and go to Step 2.
Procedure 9 : (UpSet) Setting the upper bounds on variables y_{a(t,s)}
for all a ∈ A, t ∈ Δ do
  Compute the values of s_max and s_min:
    s_max = max{s | s ∈ Γ_a, y_{a(t,s)} ≠ 0} if Σ_{s∈Γ_a} y_{a(t,s)} ≠ 0, and s_max = 0 otherwise
    s_min = min{s | s ∈ Γ_a, y_{a(t,s)} ≠ 0} if Σ_{s∈Γ_a} y_{a(t,s)} ≠ 0, and s_min = 0 otherwise
  if s_max = s_min = 0 then
    relax all bounds on the variables y_{a(t,s)}, ∀s ∈ Γ_a
  else if s_max − s_min ≤ 1 and y_{a(t,s_max)} / Σ_{r∈Γ_a} y_{a(t,r)} > α then
    for all s ∈ Γ_a do
      if s > s_max + 1 then set the upper bound of y_{a(t,s)} to zero; otherwise, relax it.
    end for
  else
    for all s ∈ Γ_a do
      if s > s_max − 1 then set the upper bound of y_{a(t,s)} to zero; otherwise, relax it.
    end for
  end if
end for
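Procedure 9 can be sketched in code for a single arc-time pair; the data layout (a dict from index s to flow) and the default value of α are our own illustrative choices.

```python
# UpSet for one (a, t) pair: given flows y[s], s in Gamma_a, return the set of
# indices whose upper bound is fixed to zero in the next DTDTAR solve.
def up_set(y, alpha=0.3):
    used = [s for s, flow in y.items() if flow != 0]
    if not used:
        return set()                    # zero inflow: relax all bounds
    s_max, s_min = max(used), min(used)
    if s_max - s_min <= 1 and y[s_max] / sum(y.values()) > alpha:
        cutoff = s_max + 1              # two "neighbor" indices, heavy on the
    else:                               # larger one: allow one index above s_max
        cutoff = s_max - 1              # otherwise shrink the range of used indices
    return {s for s in y if s > cutoff}

# Gamma_a = {2,...,8} with arcs a(t,2) and a(t,8) used: index 8 is cut off,
# so the next iteration must use the more compact range 2,...,7.
print(sorted(up_set({2: 10.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 5.0})))
```

In the neighbor case, e.g. flows only on indices 2 and 3 with the larger index carrying most of the inflow, the cutoff moves to s_max + 1, so a positive flow on index 4 is allowed in the next iteration.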
In the DTDTAR problem, at optimality the variables y_{a(t,s)} may have positive
values for different indices s ∈ Γ_a. However, in the DTDTAU problem only one
of them is allowed to be positive. Procedure UpSet restricts the flow on the arcs
of the DTDTAR problem in order to avoid using a large variety of indices from
the set Γ_a and, at the same time, remain close to the optimal objective function
value. For example, consider an arc, a, and assume that Γ_a = {2, 3, ..., 8}. If in
the TE network the arcs a(t, 2) and a(t, 8) are used, then the procedure sets the upper
bound on the variable y_{a(t,8)} to zero. As a result, in the next iteration the solution
is required to use the indices 2, 3, ..., 7, which are more compact in the sense that the
smallest and the largest indices are closer to each other. However, if the set of
used arcs consists of only two "neighbor" indices and the arc with the larger index,
e.g., ζ, carries a significant portion of the total inflow, e.g., the flow on the arc
a(t, ζ) is greater than α (y_{a(t,ζ−1)} + y_{a(t,ζ)}), where α ∈ [0, 1], then one may consider
the settings too restrictive and allow a positive flow on the arc a(t, ζ + 1). In the
numerical experiments, we found it most useful to take the values of the parameter α
from the interval [0.2, 0.5]. A similar procedure applies to the case when only one
arc is used. Finally, if the total inflow into the arc is zero, i.e., Σ_{s∈Γ_a} y_{a(t,s)} = 0,
then we relax the restrictions on the variables y_{a(t,s)}, ∀s ∈ Γ_a. Although the DTDTAR
problem is nonconvex and requires finding a global optimum, the UpSet procedure
potentially eliminates the current solution from further consideration by narrowing
the feasible region.
The heuristic algorithm described above may not converge; the performance
of the algorithm is discussed in Section 6.4. However, if the algorithm does not
converge, an alternative objective function (see Table 6-1) can be used to solve
the problem. To construct the alternative objective function, observe that the
cost vector q consists of the discrete travel times s ∈ Γ_a. On the other hand, the
variable x_{a(t)} is computed based on the set of indices Ω_a(t) (see equation (6-13)). In
particular, each arc a(t, s), s ∈ Γ_a, is included in at least one of the sets Ω_a(τ), for
some τ ∈ Δ. Notice that the arc may be included in more than one set, and the
total number of such sets is equal to s/δ. (Note that s/δ is an integer because s is
a multiple of δ.) As a result,

q^T y = Σ_{a∈A} Σ_{t∈Δ} Σ_{s∈Γ_a} s y_{a(t,s)} = δ Σ_{a∈A} Σ_{t∈Δ} [ Σ_{(τ,s)∈Ω_a(t)} y_{a(τ,s)} ] = δ Σ_{a∈A} Σ_{t∈Δ} x_{a(t)} = δ e^T x,

where e = (1, 1, ..., 1). Although the second objective function includes the parameter
δ, it is not necessarily decreasing with the value of δ. In fact, by decreasing δ,
the number of variables x_{a(t)} and the total number of sets Ω_a(t) that include the arc
a(t, s) increase, thereby increasing e^T x.
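The identity q^T y = δ e^T x can be checked numerically on a small hypothetical instance (one arc, circular horizon Δ = {0, ..., 4}, δ = 1, with inflows chosen arbitrarily for illustration):

```python
# A car entering at time tau with discrete travel time s occupies the arc for
# s/delta intervals, counted circularly over the horizon of T = 5 steps.
delta = 1.0
T = 5
y = {(0, 2): 10.0, (1, 1): 5.0, (3, 3): 2.0}   # y[(tau, s)] = inflow

x = [0.0] * T                                   # x[t]: cars on the arc at time t
for (tau, s), flow in y.items():
    for k in range(int(s / delta)):
        x[(tau + k) % T] += flow

q_dot_y = sum(s * flow for (_, s), flow in y.items())
print(q_dot_y, delta * sum(x))                  # both sides equal 31.0
```

Each inflow y_{a(τ,s)} is counted s/δ times in Σ_t x_{a(t)}, which is exactly why the two objective functions in Table 6-1 agree.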
Table 6-1. Equivalent objective functions
Objective Function 1           Objective Function 2
min_{(x,y,z,g)} q^T y          min_{(x,y,z,g)} δ e^T x