
## Material Information

Title:
Multiplicative programming theory and algorithms
Creator:
Boger, George
Publication Date:
1999
Language:
English
Physical Description:
vii, 137 leaves ; 29 cm.

## Subjects

Subjects / Keywords:
Algorithms ( jstor )
Approximation ( jstor )
Efficiency objectives ( jstor )
Heuristics ( jstor )
Linear programming ( jstor )
Mathematics ( jstor )
Objective functions ( jstor )
Optimal solutions ( jstor )
Polyhedrons ( jstor )
Polytopes ( jstor )
Decision and Information Sciences thesis, Ph. D ( lcsh )
Dissertations, Academic -- Decision and Information Sciences -- UF ( lcsh )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

## Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1999.
Bibliography:
Includes bibliographical references (leaves 131-136).
General Note:
Printout.
General Note:
Vita.
Statement of Responsibility:
by George Boger.

## Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright George Boger. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
021549200 ( ALEPH )
43702775 ( OCLC )


MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS
By
GEORGE BOGER
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1999

ACKNOWLEDGMENTS
I would like to thank my entire supervisory committee, Dr. Harold Benson, Dr.
Selcuk Erenguc, Dr. Asoo Vakharia, and Dr. Richard Francis, for their time and helpful
feedback on my dissertation. I am especially grateful to my committee chairman, Dr.
Benson, for suggesting the topic of multiplicative programming problems and for his
tremendous assistance and unending support. Without his help, this dissertation would
not have been completed. I would also like to thank Mr. Erijang Sun for proving some
theoretical results needed to support my dissertation topic.
I am also grateful to the DIS department chairman, Dr. Erenguc, for providing an
assistantship and for allowing me to teach undergraduate courses during my time at the
University of Florida. Teaching was an enjoyable and rewarding experience.
I would like to thank my family for their encouragement and emotional support. I
would also like to thank my colleagues in the Ph.D. program for their friendship and their
support.
Finally, I am in debt to my master's degree advisor, Dr. Frederick Buoni, at the
Florida Institute of Technology, for his guidance. He suggested multiple objective linear
programming as a topic for my thesis. While working on the thesis, I met Dr. Benson
during a visit to FIT to present a talk related to multiple objective linear programming.

Dr. Benson agreed to serve on my master's degree committee and later recruited me for
the DIS Ph.D. program.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTERS

1 INTRODUCTION
1.1. The Multiplicative Programming Problem
1.2. Reformulations of the Multiplicative Programming Problem
1.3. Purpose and Organization of the Dissertation

2 A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMING PROBLEMS
2.1. Organization of the Literature Review
2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP)
2.2.1. Methods Based on Quadratic Programming
2.2.2. Methods Based on Searching the Outcome Set
2.2.3. Methods Based on Solving a Parametric Master Problem
2.2.4. Methods Based on Polyhedral Annexation
2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP) when $p \ge 3$
2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)
2.4.1. Methods Based on Solving a Reformulated Problem
2.4.2. A Method Based on Outer Approximation
2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem

3 CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE
3.1. Introduction
3.2. Analysis
3.3. Efficient Point Search Heuristic
3.4. Computational Results
3.5. Discussion

4 A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME-SPACE
4.1. Introduction
4.2. Results for the General Case of Problem $(P_{Y^\le})$
4.3. Results for Convex and Polyhedral Cases of Problem $(P_{Y^\le})$
4.4. Discussion

5 AN OUTCOME-SPACE CUTTING-PLANE ALGORITHM FOR LINEAR MULTIPLICATIVE PROGRAMMING
5.1. Introduction
5.2. Theoretical Prerequisites
5.3. Outcome-Space, Cutting-Plane Algorithm
5.3.1. Strict Local Optimal Solution Search
5.3.2. Cutting Plane Construction
5.3.3. Termination Test
5.3.4. Outcome-Space, Cutting-Plane Algorithm
5.4. Implementation
5.5. Example
5.6. Concluding Remarks

6 SUMMARY AND FUTURE RESEARCH
6.1. Introduction
6.2. Future Research on the Heuristic Algorithm
6.3. Future Research on Global Solution Algorithms

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS
By
George Boger
December 1999
Chairman: Harold P. Benson
Major Department: Decision and Information Sciences
Multiplicative programming problems are mathematical optimization problems in
which the objective function contains a product of several real valued functions defined
over a common domain and the feasible decisions are described by a nonempty set. These
optimization problems have some important applications in engineering, finance,
economics, and other fields. Multiplicative programming problems, however, are difficult
global optimization problems that are known to be NP-hard.
This dissertation has two purposes. The first is to develop and test a heuristic
algorithm that finds a good solution, though not necessarily a globally optimal solution,
for the linear multiplicative programming problem. The second purpose is to develop a
global solution algorithm for the linear multiplicative programming problem that is
potentially more efficient than existing algorithms for this problem.

To evaluate the effectiveness in practice of the heuristic algorithm, we have
written a FORTRAN computer program and used it to solve 260 randomly generated
linear multiplicative programming problems of various sizes. Our experimental results
show that the computational requirements of the heuristic algorithm are not overly
burdensome when compared to the effort required to solve a linear multiplicative
programming problem.
The framework of the outcome-space, cutting-plane algorithm is taken from a
pure cutting plane, decision set-based method developed by Horst and Tuy for solving
concave minimization problems. By adapting the approach of this method to an outcome-
space reformulation of the linear multiplicative programming problem, rather than
directly applying the method to the original decision set formulation, it is expected that
considerable computational savings can be obtained. We also show how
additional computational benefits might be obtained by implementing the new algorithm
appropriately. To illustrate the new algorithm, we apply it to the solution of a sample
problem.

CHAPTER 1
INTRODUCTION
1.1. The Multiplicative Programming Problem
Multiplicative programming problems are mathematical optimization problems in
which the objective function contains a product of several real valued functions defined
over a common domain and the feasible decisions are described by a nonempty set. These
problems occur in a wide variety of application areas.
For example, Konno and Inori (1989) studied a bond portfolio optimization
problem in which the portfolio's performance is measured by a number of indices such as
the average coupon rate, the average terminal yield, and the average length to maturity.
The goal of the portfolio manager is to improve the performance of the portfolio by
purchasing or selling bonds in the marketplace subject to some limiting constraints. The
manager must consider multiple incomparable objectives such as maximizing the average
terminal yield and minimizing the average maturity time. Konno and Inori chose to
optimize several objectives simultaneously by multiplying them together since the
objectives do not share a common scale.
Another example of a multiplicative programming problem, given in Maling,
Mueller and Heller (1982), is a packaging problem encountered in designing very large-
scale integrated circuit (VLSI) chips and laying out building floor plans or manufacturing
plant facilities. In the problem, the overall rectangular dimensions of the feasible layout
plans are constrained rather than fixed. Different layout plans with differing overall
rectangular dimensions are obtained according to how the components of a system are
arranged within each plan. The objective is to find the arrangement of components that
minimizes the overall layout area subject to certain constraints on the area and the
perimeter of the layout.
Henderson and Quandt (1971, p. 15) also give an application of multiplicative
programming problems. Their example is from microeconomics. In their example, a
rational consumer wishes to find a combination of two commodities to purchase from
which he will derive the highest possible level of satisfaction. Budgetary constraints and
the availability of the commodities limit the quantities the consumer may purchase. The
consumer's level of satisfaction is captured by his utility function, which is assumed to be
the product of the quantities of the two commodities. The rational consumer's problem is
then formulated as maximizing his utility function subject to the budgetary and
commodity availability constraints.
The multiplicative programming problem or, more briefly, the multiplicative program, may be formulated mathematically as

$$(P_X) \qquad \min h(x) = \prod_{j=1}^{p} f_j(x), \quad \text{s.t. } x \in X,$$

where $p \ge 2$ is an integer, $X \subseteq R^n$, and, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ satisfies $f_j(x) \ge 0$ for all $x \in X$. For simplicity we will assume throughout this dissertation that the minimum of problem $(P_X)$ is achieved at some point $x^* \in X$. In addition we will assume that $p$ is significantly less than $n$, since this holds for virtually all applications of multiplicative programming problems. If $f_j(x) = 0$ for some $j \in \{1, 2, \ldots, p\}$ and some $x \in X$, then clearly $x$ is a global optimal solution. This condition can be checked by solving the $p$ minimization problems $\min\{f_j(x) \mid x \in X\}$, $j = 1, 2, \ldots, p$. Therefore, we may assume without loss of generality that, for each $j = 1, 2, \ldots, p$, $f_j(x) > 0$ holds for all $x \in X$.
The objective function $h$ of problem $(P_X)$ is generally not a convex function. As a result, problem $(P_X)$ belongs to a class of nonconvex programming problems called global optimization problems. In contrast to convex programming problems, there may be many local minima for problem $(P_X)$ that are not globally optimal. Conventional local optimization methods based on gradients, subgradients, conjugate directions, or the Karush-Kuhn-Tucker conditions, for instance, are at best guaranteed only to find a local minimum. These methods must then terminate, since there is neither a local criterion for certifying the global optimality of a given solution nor a way to determine how to proceed to a better solution if the solution is not globally optimal. From the perspective of computational complexity, problem $(P_X)$ is a difficult problem that is known to be NP-hard even when the objective function is simply $h(x) = x_1 x_2$ and the feasible region $X$ is a polyhedron (Matsui 1996).
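To make this multiextremal behavior concrete, the following small Python sketch (illustrative data only; both affine factors stay positive over the feasible box) runs a gradient-based local solver from two starting points and reaches two different local minima, only one of which is global.

```python
# A minimal sketch (not from the dissertation) of the product objective
# h(x) = f1(x) * f2(x) having several local minima over a polytope.
import numpy as np
from scipy.optimize import minimize

c1, d1 = np.array([1.0, 0.0]), 0.5    # f1(x) = <c1, x> + d1 > 0 on the box
c2, d2 = np.array([-1.0, 1.0]), 3.0   # f2(x) = <c2, x> + d2 > 0 on the box

def h(x):
    return (c1 @ x + d1) * (c2 @ x + d2)

bounds = [(0.0, 2.0), (0.0, 2.0)]     # X = [0, 2] x [0, 2], a simple polytope

# Multistart local search: a gradient-based method stops at the first local
# minimum it reaches, with no certificate of global optimality.
for x0 in [np.array([0.2, 0.2]), np.array([1.9, 0.5])]:
    res = minimize(h, x0, bounds=bounds, method="SLSQP")
    print(f"start {x0} -> local minimum {np.round(res.x, 3)}, h = {res.fun:.3f}")
# Typical output: one run reaches (0, 0) with h = 1.5 (the global minimum);
# the other reaches (2, 0) with h = 2.5, a non-global local minimum.
```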
When, in addition to the assumptions given previously for problem $(P_X)$, $X$ is a convex set and, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ is a concave function, we obtain the concave case of problem $(P_X)$, called the concave multiplicative programming problem. The convex case of problem $(P_X)$, called the convex multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem $(P_X)$, $X$ is a convex set and, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ is a convex function. A special linear case of problem $(P_X)$, called the linear multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem $(P_X)$, $X$ is a compact polyhedron and, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ is a linear function (Konno and Kuno 1992).
1.2. Reformulations of the Multiplicative Programming Problem
During the 1990s there has been a resurgence of interest in problem (Px).
Encouraged by the rapid advances in high speed computing, researchers began developing
and testing new methods for solving global optimization problems that arise in practical
applications, including problem $(P_X)$.
Included among the global optimization methods used to solve problem $(P_X)$ for
the special case when p = 2 are various parametric simplex method-based algorithms
(e.g., Konno and Kuno 1992, Konno and Kuno 1995, Konno, Yajima, and Matsui 1991,
and Schaible and Sodini 1995), branch and bound procedures (e.g., Kuno 1996 and Muu
and Tam 1992), and various other types of algorithms (e.g., Konno and Kuno 1990,
Pardalos 1990, and Tuy and Tam 1992).
When $p > 2$, globally solving problem $(P_X)$ has been shown empirically to require considerably more computational effort than when $p = 2$ (see, e.g., Ryoo and Sahinidis 1996). A smaller number of the algorithms for solving problem $(P_X)$ when $p > 2$ solve the problem directly without reformulating it as an outcome-space problem. Included among these, for instance, is the polyhedral annexation algorithm of Tuy (1991). Most of the algorithms for solving problem $(P_X)$ when $p > 2$, however, solve the problem indirectly by globally solving an outcome-space reformulation of the problem instead. This is because in practical applications $p$ is routinely much smaller than $n$, often by two or more orders of magnitude. As a result, working in $R^p$ is computationally less challenging than working in $R^n$.
Let $y \in R^p$ denote the $p$-vector with $j$th entry equal to $y_j$, $j = 1, 2, \ldots, p$. For each $j = 1, 2, \ldots, p$, let $\bar{y}_j \in R$ satisfy

$$\bar{y}_j \ge \sup f_j(x), \quad \text{s.t. } x \in X,$$

where $\bar{y}_j = +\infty$ is possible, and let $\bar{y} \in R^p$ denote the vector with $j$th entry equal to $\bar{y}_j$, $j = 1, 2, \ldots, p$. Let $f(x)$ denote the vector $f(x) = [f_1(x), f_2(x), \ldots, f_p(x)]^T$, where $f_j : X \to R$, $j = 1, 2, \ldots, p$, are the functions used in defining problem $(P_X)$. Thoai (1991) and later Konno and Kuno (1995) based their outer approximation algorithms for respectively solving the convex and linear cases of problem $(P_X)$ on one of the more direct reformulations of problem $(P_X)$ as an outcome-space problem. Their reformulation is given by

$$(P_{Y^\le}) \qquad \min \prod_{j=1}^{p} y_j, \quad \text{s.t. } y \in Y^\le,$$

where

$$Y^\le = \{y \in R^p \mid f(x) \le y \le \bar{y} \text{ for some } x \in X\}.$$

Falk and Palocsay (1994) based their branch and bound, image space algorithm for the linear case of problem $(P_X)$ on another outcome-space reformulation that is closely related to problem $(P_{Y^\le})$. Their reformulation is given by

$$(P_Y) \qquad \min \prod_{j=1}^{p} y_j, \quad \text{s.t. } y \in Y,$$

where

$$Y = \{y \in R^p \mid y = Cx \text{ for some } x \in X\}$$

and $C$ is a $(p \times n)$ matrix whose rows are $c^j$, $j = 1, 2, \ldots, p$.
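The dimension argument behind these reformulations is easy to see numerically. In the sketch below (all data synthetic), the decision problem has $n = 50$ variables but only $p = 2$ product terms; the upper bounds $\bar{y}_j$ used to define $Y^\le$ are obtained from $p$ linear programs, after which all searching can take place in $R^2$.

```python
# A small sketch (hypothetical data) of the outcome-space construction: the
# decision set D lives in R^n with n = 50, while Y = {Cx : x in D} lives in
# R^p with p = 2, which is where the reformulated problem is solved.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 50, 2
C = rng.uniform(0.1, 1.0, size=(p, n))      # rows c^j of the product terms
A_ub = rng.uniform(0.1, 1.0, size=(10, n))  # D = {x >= 0 : A_ub x <= b_ub}
b_ub = A_ub.sum(axis=1)                     # keeps D nonempty (x = 1 feasible)

# Upper bounds ybar_j >= sup{<c^j, x> : x in D} come from p linear programs.
ybar = []
for j in range(p):
    res = linprog(-C[j], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    ybar.append(-res.fun)                   # maximize <c^j, x> over D
print(f"search space is R^{p} (not R^{n}); ybar = {np.round(ybar, 3)}")
```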
1.3. Purpose and Organization of the Dissertation
This dissertation has two main purposes. The first is to develop and test a heuristic
algorithm that finds a good solution, though not necessarily a globally optimal solution,
for the linear case of problem $(P_X)$. The second purpose is to develop an exact global
solution algorithm for the linear case of problem $(P_X)$ that is potentially more efficient
than existing algorithms for this problem.
Since the linear multiplicative programming problem is known to be an NP-hard,
multiextremal global optimization problem, it is inherently more difficult to globally
solve than a convex programming problem of the same size. In some applications, a good, though not necessarily globally optimal, solution will adequately meet the requirements of a user; see, e.g., Konno and Inori
(1989). In these cases, the use of a heuristic algorithm seems to be appropriate for finding
a satisfactory solution. To date, however, there is no known heuristic algorithm tailored to
finding a good solution for the linear multiplicative programming problem. In their
review of algorithms for solving problem $(P_X)$, Konno and Kuno (1995) do not mention
any heuristic algorithms for problem $(P_X)$, and our survey of the literature has revealed
none.
To develop the heuristic algorithm, we first analyze the concave multiplicative
programming problem. The analysis yields a new way to write a concave multiplicative
programming problem as a concave minimization problem. As a result, a concave
multiplicative programming problem can be solved by using any existing concave
minimization algorithm without resorting to a reformulation of the problem. We also
show that some relationships exist between concave multiplicative programming
problems and certain multiple-objective mathematical programs. These relationships are
exploited to develop the heuristic algorithm for the linear case of problem $(P_X)$.
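Chapter 3 develops that heuristic in detail. As a rough illustration of the underlying idea only (not the dissertation's actual procedure), the sketch below samples positive weight vectors, solves each weighted-sum linear program to obtain an efficient point of the associated multiple-objective program, and keeps the candidate with the smallest objective product; all problem data are hypothetical.

```python
# Sketch only: every minimizer of a strictly positively weighted sum of linear
# objectives over D is an efficient point (Steuer 1986), so sampling weights
# yields candidate efficient points whose products we can compare.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 20, 3
C = rng.uniform(0.1, 1.0, size=(p, n))       # f_j(x) = <c^j, x> + d_j
d = rng.uniform(0.5, 1.0, size=p)
A_ub = -np.ones((1, n))                       # D: sum(x) >= 5, 0 <= x <= 1
b_ub = np.array([-5.0])
bounds = [(0.0, 1.0)] * n

best_x, best_val = None, np.inf
for _ in range(30):
    w = rng.exponential(size=p)               # random strictly positive weights
    res = linprog(w @ C, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    val = np.prod(C @ res.x + d)              # product value at this point
    if val < best_val:
        best_x, best_val = res.x, val
print("best product value over sampled efficient points:", round(best_val, 4))
```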
For cases where a linear multiplicative program must be solved for an exact global optimal solution, we expect that globally solving the outcome-space reformulation $(P_{Y^\le})$ instead will result in a significant decrease in the computational effort over that required to directly solve the problem. This is because in typical applications of linear multiplicative programs, $p$ is several orders of magnitude smaller than $n$. As a result, working in $R^p$ should be computationally less challenging than working in $R^n$.

To globally solve the outcome-space reformulation $(P_{Y^\le})$ of a linear multiplicative program, we develop an outcome-space, pure cutting plane algorithm that works in $R^p$. The framework for the algorithm is taken from a pure cutting plane, decision set-based concave minimization method developed by Horst and Tuy (1993). We show how to adapt this method to solving the reformulation $(P_{Y^\le})$ of a linear multiplicative program for a global extreme point optimal solution. Once this global solution is found, we can recover a globally optimal solution for the linear multiplicative program in decision space. As a further computational enhancement, we also show that for purposes of implementation, the mechanics of the outcome-space, cutting-plane algorithm can be applied to the smaller problem $(P_Y)$ instead of problem $(P_{Y^\le})$.
The organization of the dissertation is as follows. In Chapter 2 we present a review
of the literature on multiplicative programming problems. In Chapter 3 we analyze the
concave multiplicative programming problem, apply the results to develop a heuristic
algorithm for the linear multiplicative programming problem, and report test results using
the heuristic algorithm on some randomly-generated problems. In Chapter 4 we analyze the reformulation problem $(P_{Y^\le})$ and show that, under certain convexity assumptions on $Y^\le$, problem $(P_{Y^\le})$ has a global extreme point optimal solution $y^* \in Y^\le$. We then present a procedure that is guaranteed to find a strict local optimal extreme point solution for the reformulation problem $(P_{Y^\le})$ of the linear multiplicative program. In Chapter 5 we
present an outcome-space, cutting-plane algorithm for globally solving a linear
multiplicative program. The algorithm employs the strict local optimal search procedure
presented in Chapter 4. We also illustrate the algorithm by applying it to the solution of a
sample problem. Finally, in Chapter 6, we give an overall summary and conclusions, and
we discuss directions for further research.

CHAPTER 2
A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMING
PROBLEMS
2.1. Organization of the Literature Review
In this chapter we present a review of the literature on methods proposed for
solving multiplicative programming problems. The only known literature review on
multiplicative programming problems appears in Konno and Kuno (1995). In their
literature review Konno and Kuno defined multiplicative programming problems as a
class of minimization problems containing a product of several convex functions either in
their objective function or in their constraints. They included problems in which the
objective function contained the summation of a convex function and the product of
convex functions.
Konno and Kuno (1995) organized their literature review based on whether the
problem data are linear or nonlinear and on the number of functions that appear in the
objective function. They considered solution methods for the following multiplicative
programming problems.
The first multiplicative programming problem considered by Konno and Kuno is the special case of quadratic programming

$$(LMP2) \qquad \min f(x) = (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D,$$

where $D := \{x \in R^n \mid Ax \le b,\ x \ge 0\}$ is a nonempty polytope (bounded polyhedron) in which $A$ is an $m \times n$ matrix, $b \in R^m$, and, for each $i = 1, 2$, $c^i \in R^n \setminus \{0\}$ and $d_i \in R$. In addition, it is assumed that, for each $x \in D$, $\langle c^i, x \rangle + d_i > 0$, $i = 1, 2$.
The second multiplicative programming problem that they considered is the convex multiplicative programming problem

$$(CMP) \qquad \min f(x) = \prod_{j=1}^{p} f_j(x), \quad \text{s.t. } x \in X,$$

where $X \subseteq R^n$ is a nonempty, compact, convex set and, for each $j = 1, 2, \ldots, p$, $f_j : R^n \to R$ is a convex function that satisfies $f_j(x) > 0$ for all $x \in X$.

Konno and Kuno (1995) considered two special cases of problem (CMP): (1) the case where $p = 2$ and (2) the case where $p \ge 2$ and the problem data are linear. The second case may be defined as the following extension of problem (LMP2):

$$(LMP) \qquad \min f(x) = \prod_{i=1}^{p} (\langle c^i, x \rangle + d_i), \quad \text{s.t. } x \in D,$$

where $p \ge 2$ is an integer and, for each $i = 1, 2, \ldots, p$, $\langle c^i, x \rangle + d_i > 0$ holds for all $x \in D$.
Finally, Konno and Kuno (1995) considered three classes of problems related to problem (CMP). In the first class is the following problem:

$$(GCMP) \qquad \min f(x) = f_0(x) + \sum_{j=1}^{q} f_{2j-1}(x) f_{2j}(x), \quad \text{s.t. } x \in X,$$

where, for each $j = 0, 1, \ldots, 2q$, $f_j : R^n \to R$ is a convex function that satisfies $f_j(x) > 0$ for all $x \in X$.

The second class is a special case of (GCMP) in which $q = 1$ and the problem data are linear. This class may be defined as the following extension of problem (LMP2):

$$(GLMP) \qquad \min f(x) = \langle c, x \rangle + (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D,$$

where $c \in R^n$ and $c^i, d_i$, $i = 1, 2$, and $D$ are defined as in problem (LMP2).
The third class of problems considered by Konno and Kuno (1995) is the minimization of a convex function over a feasible region that includes a product of convex functions in its constraint set.

Konno and Kuno's coverage of the literature is not exhaustive. They focused on algorithms that have been demonstrated by computational experiments to be practical for reasonably large problems (Konno and Kuno 1995, p. 370). Algorithms proposed by Konno, Kuno, and their associates have been tested on randomly generated problems and the results reported. However, computational results have not been reported by most of the other researchers, and therefore their methods were not included in the review.
Since the publication of the review by Konno and Kuno, two more multiplicative programming problems have been discussed in the literature. The first problem adds a convex function to the objective of problem (LMP2) to obtain the problem

$$(CLMP) \qquad \min f(x) = g(x) + (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D,$$

where $g : R^n \to R$ is a twice differentiable convex function and $c^i, d_i$, $i = 1, 2$, and $D$ are defined as in problem (LMP2). The second problem adds a convex function to problem (CMP) to obtain the problem

$$(CCMP) \qquad \min f(x) = f_0(x) + \prod_{j=1}^{p} f_j(x), \quad \text{s.t. } x \in X,$$

where $f_0 : R^n \to R$ is a convex function that satisfies $f_0(x) \ge 0$ for all $x \in X$ and $f_j$, $j = 1, 2, \ldots, p$, and $X$ are defined as in problem (CMP).

The emphasis of this review will be on optimization problems in which a product
of functions appears in the objective function. Optimization problems with objective
functions that are comprised of a summation of a function and the product of functions
are also included in the review. Methods proposed for solving these problems may be
adapted to solve a problem whose objective function is strictly a product of functions by
setting the added function to the null function. The functions that appear in the objective
function will be either convex or linear functions since to date these are the only
multiplicative programming problems to appear in the literature. In this review we will
not consider optimization problems in which a product of functions appears in the
constraint set.
Like the review of Konno and Kuno (1995), this literature review is organized
based on whether the problem data are linear or nonlinear and on the number of functions
that appear in the objective function. It is divided into the following four sections. Section
2.2 reviews the methods proposed to solve problems (LMP2), (GLMP), and (CLMP).
Section 2.3 reviews the methods to solve problem (LMP) that are extensions of methods
for problem (LMP2). Section 2.4 reviews the methods to solve problems (CMP),
(GCMP), and (CCMP). Section 2.5 reviews the methods to solve problem (LMP) as a
concave minimization problem.
The rationale for organizing the literature review in this way is as follows.
Historically, the first algorithms for solving multiplicative programming problems were
specifically proposed for solving problem (LMP2). Problems (GLMP) and (CLMP) are
grouped with problem (LMP2) since they were conceived as extensions of that problem.
Several of the algorithms proposed for solving problem (LMP2) can be extended to solve
the problem (LMP), since they do not depend upon having only two functions in the
product term of the objective function. Problems (LMP2), (LMP), (GLMP), and (CLMP)
contain linear functions and polyhedral feasible regions. Algorithms for solving these
problems are implemented with the aid of the simplex method, which is used to solve
linear programming subproblems. The problems (CMP), (GCMP), and (CCMP) contain
nonlinear data and must rely on other optimization methods to solve nonlinear convex
programming problems. The latter three problems are therefore placed in a separate
group. Problems (GCMP) and (CCMP) are included in the group with problem (CMP)
because only one article addresses each problem, and they were conceived as extensions
of problem (CMP). Finally, two articles appeared in the literature that proposed solving
problem (LMP) as a concave minimization problem using techniques that the authors had
previously developed.
Table 2.1 gives a summary of the multiplicative programming problems
considered in this literature review along with the assumptions placed on the feasible
region and the objective function of each problem.
Table 2.1. Summary of Multiplicative Program Types and Assumptions on Problems

| Problem | Assumptions on the Feasible Region | Objective Function | Assumptions on the Objective Function |
| --- | --- | --- | --- |
| LMP2 | $D$ is a bounded polyhedron. | $(\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2)$ | $\langle c^i, x \rangle + d_i > 0$, $i = 1, 2$, for all $x \in D$. |
| GLMP | $D$ is a bounded polyhedron. | $\langle c, x \rangle + (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2)$ | $\langle c, x \rangle > 0$ and $\langle c^i, x \rangle + d_i > 0$, $i = 1, 2$, for all $x \in D$. |
| CLMP | $D$ is a bounded polyhedron. | $g(x) + (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2)$ | $g : R^n \to R$ is a twice differentiable convex function and $\langle c^i, x \rangle + d_i > 0$, $i = 1, 2$, for all $x \in D$. |
| LMP | $D$ is a bounded polyhedron. | $\prod_{i=1}^{p} (\langle c^i, x \rangle + d_i)$ | $\langle c^i, x \rangle + d_i > 0$, $i = 1, 2, \ldots, p$, for all $x \in D$. |
| CMP | $X$ is a compact convex set. | $\prod_{j=1}^{p} f_j(x)$ | For each $j = 1, 2, \ldots, p$, $f_j : R^n \to R$ is a convex function that satisfies $f_j(x) > 0$ for all $x \in X$. |
| GCMP | $X$ is a compact convex set. | $f_0(x) + \sum_{j=1}^{q} f_{2j-1}(x) f_{2j}(x)$ | For each $j = 0, 1, \ldots, 2q$, $f_j : R^n \to R$ is a convex function that satisfies $f_j(x) > 0$ for all $x \in X$. |
| CCMP | $X$ is a compact convex set. | $f_0(x) + \prod_{j=1}^{p} f_j(x)$ | $f_0 : R^n \to R$ is a convex function with $f_0(x) \ge 0$ for all $x \in X$; for each $j = 1, 2, \ldots, p$, $f_j$ is as in problem (CMP). |

2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP)

The methods for solving problems (LMP2), (GLMP), and (CLMP) are further divided into four categories. In the first category are those methods that analyze problem (LMP2) as a special case of quadratic programming. In the second category are algorithms that analyze problem (LMP2) by searching the outcome set. In the third category are the algorithms that solve an easier parametric programming problem rather than directly solving problems (LMP2), (GLMP), and (CLMP). In the last category are two algorithms that solve problem (LMP2) based on the method of polyhedral annexation.
2.2.1. Methods Based on Quadratic Programming
Since the objective function of problem (LMP2) can be expressed as

$$f(x) = (\langle c^1, x \rangle + d_1)(\langle c^2, x \rangle + d_2) = \tfrac{1}{2} x^T Q x + r^T x + d_1 d_2,$$

where $r \in R^n$ and $Q$ is a real symmetric $n \times n$ matrix, problem (LMP2) is a special class of quadratic programming. Swarup (1966a and 1966b) was the first researcher to analyze problem (LMP2) in this way, but he did not propose any exact solution algorithms. His two articles are included in the literature review for completeness. Pardalos (1990) also analyzed problem (LMP2) in this way, and he proposed an exact global solution algorithm.
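A quick numerical check of this expansion (synthetic data): taking $Q = c^1 (c^2)^T + c^2 (c^1)^T$ and $r = d_2 c^1 + d_1 c^2$ reproduces the product of the two affine factors exactly.

```python
# Verify the quadratic form: (c1.x + d1)(c2.x + d2)
#   = (1/2) x^T Q x + r^T x + d1*d2, with Q = c1 c2^T + c2 c1^T.
import numpy as np

rng = np.random.default_rng(2)
n = 5
c1, c2 = rng.normal(size=n), rng.normal(size=n)
d1, d2 = 1.5, 2.0

Q = np.outer(c1, c2) + np.outer(c2, c1)   # real symmetric n x n matrix
r = d2 * c1 + d1 * c2

x = rng.normal(size=n)
lhs = (c1 @ x + d1) * (c2 @ x + d2)
rhs = 0.5 * x @ Q @ x + r @ x + d1 * d2
print(np.isclose(lhs, rhs))               # True
```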
Swarup (1966a) showed that if both linear functions $\langle c^i, x \rangle + d_i$, $i = 1, 2$, are positive over the feasible region $D$, the objective function $f$ is quasiconcave over $D$. It is well known that generally for any local minimizer of a quasiconcave function over a polytope, there exists an extreme point local minimizer over the polytope that has the same function value. Swarup proposed a simplex based method for finding such a local optimal solution. The key to the algorithm is a test that determines if entering a given nonbasic variable into the current simplex basis will lower the objective function value. A simplex basis of a local optimal solution can be reached by beginning at any feasible basis and moving through a sequence of simplex tableaux by pivoting in qualifying nonbasic variables until none remain. Once a local optimal solution is found, the algorithm stops. No information is available to either certify the global optimality of the solution or to determine how to proceed to an improved solution.
In another work, Swarup (1966b) formulated the following parametric linear program by introducing an auxiliary parameter $\xi$ and moving one of the linear functions into the constraint set:

$$(MP1) \qquad \min F(x; \xi) = \langle c^1, x \rangle + d_1, \quad \text{s.t. } x \in D,\ \langle c^2, x \rangle + d_2 = \xi,\ \xi \ge 0.$$

Since $\langle c^2, x \rangle + d_2$ appears in the constraint set, dual pricing information is available to determine the value of $\langle c^1, x \rangle + d_1$ as $\xi$ is set to achievable values of $\langle c^2, x \rangle + d_2$ over $D$. Swarup derived a test that uses this information to determine when $\xi$ is set to a level that corresponds to a local optimal solution. All local optimal solutions can then theoretically be found by parametrically solving problem (MP1) over all achievable values of $\xi$. A global optimal solution $x^*$ of problem (LMP2) can then be found by identifying a global solution $(x^*, \xi^*)$ of problem (MP1).
Pardalos (1990) observed that if $c^1$ and $c^2$ are linearly independent, then the Hessian matrix $Q$ of the objective function of problem (LMP2) has one positive eigenvalue and one negative eigenvalue, and the remaining eigenvalues are equal to zero. By applying the spectral decomposition theorem of linear algebra, the objective function can be rewritten in terms of two variables. The problem can then be solved by examining the vertices of an orthogonal projection of the feasible region $D$ into a two-dimensional polytope in the space of the two variables used in the rewritten objective function. Pardalos (1990) proposed an algorithm that enumerates all vertices of the two-dimensional polytope until an optimal vertex is found. The algorithm may require an exponential number of steps, but its average computational time complexity is bounded by a polynomial.
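Pardalos's eigenvalue observation is also easy to verify numerically: for linearly independent $c^1$ and $c^2$, the two nonzero eigenvalues of $Q = c^1 (c^2)^T + c^2 (c^1)^T$ are $\langle c^1, c^2 \rangle \pm \|c^1\| \|c^2\|$, one positive and one negative (synthetic data below).

```python
# Numerical illustration: Q has rank two, with one positive and one negative
# eigenvalue, so the quadratic part of f varies over a 2-D subspace only.
import numpy as np

rng = np.random.default_rng(3)
n = 6
c1, c2 = rng.normal(size=n), rng.normal(size=n)
Q = np.outer(c1, c2) + np.outer(c2, c1)

eigvals = np.sort(np.linalg.eigvalsh(Q))
print(np.round(eigvals, 6))   # one negative value, n-2 zeros, one positive value
```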
2.2.2. Methods Based on Searching the Outcome Set
The objective function of problem (LMP2) can be expressed as the composite function $\psi \circ \varphi$, where, for each $y \in R^2$, $\psi(y) = y_1 y_2$, and the mapping $\varphi : D \to R^2$ is defined by $\varphi(x) = y = (y_1, y_2)$, where $y_1 := \langle c^1, x \rangle + d_1$ and $y_2 := \langle c^2, x \rangle + d_2$. Since $y_1$ and $y_2$ are linear functions, $\varphi$ is a linear transformation and hence the linear structure of $D$ is preserved (Rockafellar 1970). The image of $D$ under $\varphi$ is then the compact, convex polyhedron

$$Y := \{y \in R^2 \mid y_1 = \langle c^1, x \rangle + d_1,\ y_2 = \langle c^2, x \rangle + d_2 \text{ for some } x \in D\},$$

called the outcome polyhedron. A global optimal solution of problem (LMP2) can be found by finding a point of $Y$ that globally minimizes the product $y_1 y_2$. Since the search is conducted in $Y \subset R^2$ rather than $R^n$, it may be possible to economize on the computational effort required to solve problem (LMP2).
Three articles, Aneja, Aggarwal, and Nair (1984), Falk and Palocsay (1994), and Thoai (1991), proposed algorithms for solving problem (LMP2) based on searching the outcome set using outer approximation techniques. Outer approximation is a global optimization technique that uses a decreasing sequence of simple sets to approximate the feasible region. The approximations are used in a series of optimization problems that are easier to solve than the original problem. These optimization problems are sequentially solved until a global optimal solution to the original problem is found. The technique has been very useful in solving global optimization problems in which the feasible region $Z$ is a polytope and the global optimal solution is known to be an extreme point of $Z$. In this form of outer approximation, the algorithm begins by finding a simple polytope $P_0 \supseteq Z$ with an easily defined inequality representation and an easily calculated set of vertices. A series of algorithmic iterations follows that builds a sequence of decreasing polytopes $P_0 \supset P_1 \supset \cdots \supset Z$ in which one polytope is generated in each iteration. In an iteration $k$ of the algorithm, the original objective function is evaluated at the extreme points of $P_k$ to find an optimal solution $v^k$. If $v^k$ is an extreme point of $Z$, then $v^k$ is a global optimal solution to the original problem. Otherwise, a portion of $P_k \setminus Z$ is cut off to form $P_{k+1}$. The point $v^k$ is part of the region cut off; i.e., $v^k$ is not included in the polytope $P_{k+1}$. The cut is made by adding a constraint, called a cutting plane constraint, to the constraint set that defines $P_k$. The cutting plane constraint adds additional vertices to $P_{k+1}$ that were not present in $P_k$, and therefore they must be calculated.
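The loop just described is compact enough to state in code. The following self-contained sketch (a toy instance, not any of the published algorithms) minimizes a concave function over a two-dimensional polytope $Z$ given by inequalities $Gy \le h$: it enumerates the vertices of the current outer polytope, stops when the best vertex is feasible for $Z$, and otherwise adds the most violated constraint of $Z$ as a cutting plane.

```python
import itertools
import numpy as np

# Toy instance: minimize the concave function g(y) = -(y1^2 + y2^2) over the
# polytope Z = {y : y1 + y2 <= 4, y >= 0}; the minimum lies at a vertex of Z.
G = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
h = np.array([4.0, 0.0, 0.0])

def g(y):
    return -(y[0] ** 2 + y[1] ** 2)

def vertices(A, b, tol=1e-9):
    """Enumerate the vertices of {y in R^2 : A y <= b} from constraint pairs."""
    V = []
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue                        # parallel constraints: no vertex
        v = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ v <= b + 1e-7):
            V.append(v)
    return V

# P0: a box that is easy to represent and is known to contain Z.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([5.0, 5.0, 0.0, 0.0])

while True:
    vk = min(vertices(A, b), key=g)         # best vertex of the current P_k
    viol = G @ vk - h
    worst = int(np.argmax(viol))
    if viol[worst] <= 1e-7:
        break                               # vk lies in Z and attains the
                                            # minimum over P_k, so it is optimal
    A = np.vstack([A, G[worst]])            # cutting plane: a violated
    b = np.append(b, h[worst])              # constraint of Z slices off vk
print("global minimizer over Z:", vk, "value:", g(vk))
```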
Aneja, Aggarwal, and Nair (1984) proposed an algorithm that examines the solutions associated with the bicriterion programming problem:

$$(BCP) \qquad \text{VMIN } (y_1 = \langle c^1, x \rangle + d_1,\ y_2 = \langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D.$$

The intent of problem (BCP) is to simultaneously minimize the two criterion functions $y_1$ and $y_2$. Conflicts usually exist between the two criterion functions that prevent a single point of $D$ from simultaneously minimizing both functions. The usual notion of an optimal solution used in single objective linear programming is replaced by the concept of efficient solutions when discussing the solutions of problem (BCP). A solution $\bar{x}$ is an efficient solution of problem (BCP) if $\bar{x} \in D$ and, whenever $\langle c^i, x \rangle + d_i \le \langle c^i, \bar{x} \rangle + d_i$ for each $i = 1, 2$ for some $x \in D$, then $\langle c^i, x \rangle + d_i = \langle c^i, \bar{x} \rangle + d_i$, $i = 1, 2$. The set of efficient points of $D$ is mapped by $\varphi$ into a set of points on the surface of $Y$ called the efficient frontier.
Aneja, Aggarwal, and Nair (1984) showed that a global optimal solution of problem (LMP2) is attained at an efficient extreme point $\bar{x}$ of $D$ that is mapped by $\varphi$ into an extreme point $(\bar{y}_1, \bar{y}_2)$ on the efficient frontier of $Y$. Their algorithm searches the efficient frontier for an extreme point that minimizes $y_1 y_2$ by using a modified outer approximation technique. Initially the legs of a right-angle triangle form the first approximation of the efficient frontier. The rise and the run values of the slope of the hypotenuse are two positive scalar values. The functions $y_1 = \langle c^1, x \rangle + d_1$ and $y_2 = \langle c^2, x \rangle + d_2$ are multiplied by these values and then summed to form a single linear objective function. This objective function is then minimized over the feasible region $D$. It is well known that the minimizer $\bar{x}$ of such a linear program is an efficient extreme point of $D$ (Steuer 1986). The solution to the linear program finds another point $(\bar{y}_1, \bar{y}_2)$ on the efficient frontier that is used to subdivide the initial triangle into two triangles. The algorithm is then repeated using each of the smaller triangles. The algorithm terminates when there are no more extreme points of the efficient frontier that need to be searched.

In the algorithm of Aneja, Aggarwal, and Nair (1984), a new vertex must be calculated for each triangle. This is easily done by solving two systems of two equations in the unknowns $y_1$ and $y_2$. This special technique, however, cannot be easily extended to handle cases where $p > 2$.
Falk and Palocsay (1994) also proposed a solution algorithm that searches among the extreme points of $Y$ using a modified outer approximation technique. In the first phase of the algorithm, the two linear programs

$$l_1 = \min_{x \in D} \langle c^1, x \rangle + d_1 \quad \text{and} \quad l_2 = \min_{x \in D} \langle c^2, x \rangle + d_2$$

are solved for optimal solutions $x^1$ and $x^2$, respectively. Two initial vertices $y^1$ and $y^2$ of $Y$ are then

$$y^1 = (\langle c^1, x^1 \rangle + d_1,\ \langle c^2, x^1 \rangle + d_2) \quad \text{and} \quad y^2 = (\langle c^1, x^2 \rangle + d_1,\ \langle c^2, x^2 \rangle + d_2).$$

An initial polytope in outcome-space containing an optimal solution for the problem

$$(YP) \qquad \min y_1 y_2, \quad \text{s.t. } y \in Y,$$

is given by $y_1 \ge l_1$, $y_2 \ge l_2$, and an inequality $a_1 y_1 + a_2 y_2 \ge 1$, where $a_1$ and $a_2$ are determined such that $a_1 y_1 + a_2 y_2 = 1$ passes through whichever of the points $y^1$ and $y^2$ yields the smaller product $y_1 y_2$. In each iteration of the algorithm, values for $a_1$ and $a_2$ are updated and a linear program of the form

$$(YLP) \qquad \min a_1 y_1 + a_2 y_2, \quad \text{s.t. } y \in Y,$$

is solved to remove portions of the initial polytope from the search for an optimal solution for problem (YP). The new vertices generated at each iteration are easily calculated since the isovalue contours of problem (YLP) are linear. The algorithm terminates when the optimal value of problem (YLP) is one.
The algorithm proposed by Thoai (1991) for solving problem (LMP2) uses an outer approximation technique that begins by enclosing the outcome set $Y$ in a rectangle $P_0$. In an iteration $k$ of the algorithm, the extreme point $(v_1, v_2)$ of the outer approximation that yields the lowest value of the product $y_1 y_2$ is found. A linear program is then used to determine if the extreme point $(v_1, v_2)$ maps to a feasible point $x$ of $D$. If not, information is obtained from the linear program to generate a cutting plane constraint that slices off the extreme point $(v_1, v_2)$ from the polytope $P_k$. The new vertices generated by the cut are then calculated using a conventional approach (see Horst, Pardalos, and Thoai 1995 or Horst and Tuy 1993). Since the method of determining these new vertices is not dependent on the fact that the dimension of the outcome set is two, Thoai's algorithm can be extended to handle cases where $p > 2$.

In the algorithms of Aneja, Aggarwal, and Nair (1984) and Thoai (1991), the only variations in the linear programs used in successive iterations involve changes in objective function coefficients. The authors gain some computational efficiency by restarting the simplex method at the optimal solution of the previous iteration. Only a few simplex pivots are then generally needed to produce a new optimal solution.

2.2.3. Methods Based on Solving a Parametric Master Problem
The difficulty in solving problem (LMP2) is caused by the product form of the objective function. Konno and Kuno (1992) added a parameter $\xi$ and formed the following problem, which they called the master problem:

$$(MP2) \qquad \min F(x; \xi) = \xi(\langle c^1, x \rangle + d_1) + \frac{1}{\xi}(\langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D,\ \xi > 0.$$

Notice that for a fixed value of $\xi$, problem (MP2) is a linear programming problem. To solve problem (MP2), Konno and Kuno proposed using a parametric objective function simplex method to find the critical values of $\xi$ at which new bases become optimal. The values of the objective function $F$ are then evaluated at these bases. A global optimal solution $(x^*, \xi^*)$ of problem (MP2) is found by choosing the basis that minimizes $F$ over these values. Konno and Kuno (1992) showed that if $(x^*, \xi^*)$ is an optimal solution of problem (MP2), then $x^*$ is a global optimal solution of problem (LMP2).
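For intuition: for fixed $x$, the minimum of $\xi f_1(x) + \xi^{-1} f_2(x)$ over $\xi > 0$ is $2\sqrt{f_1(x) f_2(x)}$ by the AM-GM inequality, so jointly minimizing $F$ recovers the minimal product. The following minimal sketch (hypothetical data) replaces the exact parametric-simplex sweep of Konno and Kuno with a coarse grid of $\xi$ values; each step is then a plain linear program, at the cost of the exactness guarantee.

```python
# Grid version of the master-problem idea: for each fixed xi > 0, minimizing
# xi*f1(x) + (1/xi)*f2(x) over D is an LP; we keep the best product found.
import numpy as np
from scipy.optimize import linprog

c1, d1 = np.array([1.0, 2.0]), 1.0
c2, d2 = np.array([3.0, 1.0]), 1.0
A_ub = np.array([[-1.0, -1.0]])               # D: x1 + x2 >= 2, 0 <= x <= 3
b_ub = np.array([-2.0])
bounds = [(0.0, 3.0)] * 2

best_x, best_prod = None, np.inf
for xi in np.geomspace(0.1, 10.0, 60):
    # The constants xi*d1 + d2/xi do not affect the LP's minimizer.
    res = linprog(xi * c1 + (1.0 / xi) * c2, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    prod = (c1 @ res.x + d1) * (c2 @ res.x + d2)
    if prod < best_prod:
        best_x, best_prod = res.x, prod
print("best vertex found:", best_x, "product value:", round(best_prod, 3))
```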
Konno and Kuno tested this algorithm on randomly generated problems (LMP2)
with nonnegative problem data that ranged in size from (m, n) = (30, 50) to (220, 200).
Their computational experiments showed that the amount of computational time needed
to solve problem (LMP2) is not much different from that required to solve linear
programs of the same size.
In Konno and Kuno (1995) the authors slightly simplified the above parametric
method by redefining the auxiliary parameter so that convex combinations of the two
linear functions are used in the objective function of problem (MP2). This modification
makes it easier to find critical parameter values, since the interval $[0, 1]$ over which the
auxiliary parameter ranges is bounded. The rest of the method remained the same.
Although Konno and Kuno (1992) did not explicitly say it, their algorithm can be viewed as searching the efficient extreme points of problem (BCP) for one that is a global optimal solution of problem (LMP2). Notice that for a sufficiently small value $\xi'$, an extreme point optimal solution $(x', \xi')$ to problem (MP2) coincides with an optimal solution $x'$ of the linear program $\min\{\langle c^2, x \rangle + d_2 \mid x \in D\}$. Similarly, for a sufficiently large value $\xi''$, an extreme point optimal solution $(x'', \xi'')$ coincides with an optimal solution $x''$ of the linear program $\min\{\langle c^1, x \rangle + d_1 \mid x \in D\}$. For any fixed value $\xi > 0$, the objective function $F(x; \xi)$ is a composite objective function formed by multiplying the two linear functions by positive values and summing the result. It is well known that any extreme point minimizer of such a composite objective function over the feasible region $D$ is an efficient extreme point of the problem (BCP) (Steuer 1986). The efficient extreme points of problem (BCP) are found by solving linear programs for parameter values between $\xi'$ and $\xi''$. As Aneja, Aggarwal, and Nair (1984) have shown, the global solution lies at an efficient extreme point of $D$ in problem (BCP).
A disadvantage of the algorithm of Konno and Kuno is that it may require many
pivots to solve problem (MP2) for all possible parameter values. This will especially be
true if there is a great conflict between the two linear functions of the objective function.
If, for example, $c^2 = -c^1$, then every extreme point of $D$ is an efficient extreme point of problem (BCP). Since the number of extreme points of the polytope $D$ grows exponentially with the size of $D$, the number of optimal solutions to problem (MP2) over the entire range of parameter values grows exponentially with the size of $D$ and is not bounded by a
polynomial. Konno and Kuno in fact observed that the computational time increased as
the number of local minima increased. An additional disadvantage of the Konno and
Kuno algorithm is that many of the pivots performed will be unnecessary when they lead
to bases that do not improve on a previously found solution.
In another paper, Konno and Kuno (1990) added a convex function to the
objective function of problem (LMP2) to obtain the problem (CLMP). With this addition,
the objective function may no longer be quasiconcave and therefore, the global minimum
may not necessarily be attained at an extreme point of the feasible region D.
To solve problem (CLMP), Konno and Kuno (1990) proposed an algorithm that
solves a parametric master problem which, for a fixed parameter value, is a nonlinear
convex programming problem. The algorithm involves solving this master problem a
finite number of times, once for each of a finite number of prechosen values for the
parameter. A troublesome aspect of the algorithm is that it is difficult to determine the
proper parameter values to choose. The authors suggested choosing values for the
parameter that are equally spaced in the interval of possible parameter values and solving
the resulting master problems to determine a neighborhood containing a globally optimal
solution to problem (CLMP). A local search is then done in that neighborhood for a
globally optimal solution using the Karush-Kuhn-Tucker conditions. Care must be taken,
however, to define the spacing between the points to be small enough that a
global optimal solution is not missed.

The difficulty that Konno and Kuno (1990) encountered in their method in
determining parameter values can be eliminated if we assume that the convex function g
in the objective function of problem (CLMP) is a linear function. Problem (GLMP) is
obtained by making this replacement. Konno, Yajima, and Matsui (1991) considered
problem (GLMP), but they assumed that $d_1$ and $d_2$ are zero. To solve problem (GLMP), Konno, Yajima, and Matsui formulated the master problem

$$(MP3) \qquad \min F(x; \xi) = \langle c, x \rangle + \xi \langle c^2, x \rangle, \quad \text{s.t. } x \in D,\ \langle c^1, x \rangle = \xi,\ \xi \ge 0.$$

Notice that the parameter $\xi$ appears in both the objective function and in a right-hand side of a constraint.

Konno, Yajima, and Matsui (1991) showed that $x^*$ is a global solution of problem (GLMP) if $(x^*, \xi^*)$ is an optimal solution of problem (MP3). Schaible and Sodini (1995) used problem (MP3) to show that a global optimal solution of problem (GLMP) lies on an edge of $D$.

Konno, Yajima, and Matsui (1991) proposed a parametric simplex algorithm that includes a right-hand side analysis and an objective function analysis to determine intervals of parameter values for which bases remain both feasible and optimal. The parametric analysis sweeps through parameter values from $\xi_{\min} = \min\{\langle c^1, x \rangle \mid x \in D\}$ to $\xi_{\max} = \max\{\langle c^1, x \rangle \mid x \in D\}$. The objective function $F$ is then minimized over each of the intervals.

Konno, Yajima, and Matsui (1991) tested their algorithm on randomly generated
problems of up to 350 constraints and 300 variables. They found that the problems can be
solved in much the same computational time as that of solving linear programs of equal
size.
The algorithm of Konno, Yajima, and Matsui (1991) suffers from the same
disadvantages as the algorithm of Konno and Kuno (1992). In particular, its efficiency
depends on the number of pivots performed to solve problem (MP3) for all possible
parameter values. Also many of the pivots performed will be unnecessary when they yield
bases that do not improve on a previously found solution.
Schaible and Sodini (1995) improved the algorithm of Konno, Yajima, and
Matsui (1991). From a given simplex tableau for problem (MP3), Schaible and Sodini
used parametric analysis to derive a formula that calculates the value of the objective function $F$ as the constraint $\langle c^1, x \rangle = \xi$ is set to increasing values of $\xi$. As $\xi$ increases, parametric right-hand-side analysis calculates new values for the basic variables. Schaible and Sodini then derived some optimality conditions that detect when the parameter $\xi$ is set to a value such that, from an optimal solution $(x^*, \xi^*)$ of problem (MP3), one obtains a local minimum $x^*$ of problem (GLMP). By applying these optimality conditions,
Schaible and Sodini were able to develop a simplex-based algorithm that solves problem
(MP3) in a finite number of primal and/or dual simplex iterations.
The algorithm proposed by Schaible and Sodini (1995) has three advantages over
the algorithm of Konno, Yajima, and Matsui (1991): (1) It may terminate before the
maximum possible parameter value $\xi_{\max}$ has been reached. (2) It is more efficient in that it may skip over local optimal solutions that do not improve the objective function value.
(3) It can be used even when the feasible region is unbounded, and it can detect when
problem (GLMP) is unbounded from below.
Muu and Tam (1992) also considered problem (CLMP), but in their work, the
feasible region D is relaxed to a compact convex set. They seem to be the only
researchers to have considered this generalization of problem (CLMP). The authors,
however, tested their algorithm using a polytope for the feasible region.
Muu and Tam (1992) formulated the parametric master problem
$$(MP3') \qquad \min F(x; \xi) = g(x) + \xi(\langle c^2, x \rangle + d_2), \quad \text{s.t. } x \in D,\ \langle c^1, x \rangle + d_1 = \xi,\ \xi \ge 0.$$
They proposed a branch and bound algorithm to solve problem (MP3'). Branch and
bound is a technique commonly used by algorithms in global optimization. Branching
refers to the successive partitioning of the feasible region and bounding refers to the
computation of lower and upper bounds on the global optimum over the partitions.
Partitions of the feasible region that produce a lower bound on the objective function that
exceeds the best upper bound found so far by the algorithm are eliminated from further
consideration. Such partitions are said to be fathomed. A branch and bound algorithm
terminates when all of the partitions have been fathomed.
In the algorithm of Muu and Tam (1992), partitions of the feasible region are
constructed by restricting the value of $\langle c^1, x \rangle + d_1$ to values within an interval. The algorithm begins by finding an interval $I_0 := [\xi_1, \xi_2]$ of achievable values of $\langle c^1, x \rangle + d_1$ by solving the two convex programs $\xi_1 := \min\{\langle c^1, x \rangle + d_1 \mid x \in D\}$ and $\xi_2 := \max\{\langle c^1, x \rangle + d_1 \mid x \in D\}$. Optimal solutions $u^0$ and $v^0$ are then obtained for the two convex programs

$$\beta(\xi_i) := \min\{g(x) + \xi_i(\langle c^2, x \rangle + d_2) \mid x \in D,\ \xi_1 \le \langle c^1, x \rangle + d_1 \le \xi_2\}, \quad i = 1, 2.$$

A lower bound $\beta(I_0)$ over the interval $I_0$ of the objective function $F$ of problem (MP3') is found by selecting $\beta(I_0) := \min\{\beta(\xi_1), \beta(\xi_2)\}$. An upper bound $\alpha_0$ on $F$ is obtained by selecting $\alpha_0 := \min\{f(u^0), f(v^0)\}$. The interval $I_0$ is next bisected and the procedure repeated using the two subintervals. A subinterval that produces a lower bound that exceeds the current upper bound is eliminated from further consideration; i.e., that subinterval is considered to be fathomed. The procedure continues bisecting intervals $I_k$, generating a sequence of solutions $\{x^k\}_{k=1}^{\infty}$ that converges to a limit point $x^*$ that is a global optimal solution. Computational experiments on problems up to $(m, n) = (30, 200)$ showed that the algorithm is very efficient when both vectors $c$ and $d$ are positive.
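The following sketch mimics this interval scheme on a toy instance (hypothetical data, with $g$ taken linear so that every bound computation is a linear program; Muu and Tam allow a general convex $g$). Restricting $\xi = \langle c^1, x \rangle + d_1$ to a subinterval $[lo, hi]$, the surrogate objective $g(x) + lo \cdot (\langle c^2, x \rangle + d_2)$ underestimates $f$ on that slab because the first factor is at least $lo$ and the second factor is positive.

```python
# Interval branch and bound in the spirit of Muu and Tam; sketch only.
import heapq
import numpy as np
from scipy.optimize import linprog

g_vec = np.array([0.5, -0.2])                 # g(x) = <g_vec, x>, linear here
c1, d1 = np.array([1.0, 1.0]), 1.0            # f1(x) = <c1, x> + d1
c2, d2 = np.array([2.0, 0.5]), 1.0            # f2(x) = <c2, x> + d2
bnds = [(0.0, 2.0)] * 2                       # D = [0, 2]^2

def f(x):
    return g_vec @ x + (c1 @ x + d1) * (c2 @ x + d2)

def bound(lo, hi):
    """Lower bound (and its minimizer) of f over {x in D : lo <= f1(x) <= hi}."""
    A = np.vstack([c1, -c1])                  # lo <= <c1,x> + d1 <= hi
    b = np.array([hi - d1, d1 - lo])
    res = linprog(g_vec + lo * c2, A_ub=A, b_ub=b, bounds=bnds)
    return (res.fun + lo * d2, res.x) if res.success else (np.inf, None)

lo0, hi0 = d1, c1 @ np.array([2.0, 2.0]) + d1 # achievable range of f1 over D
lb0, x0 = bound(lo0, hi0)
best_x, best_val = x0, f(x0)
heap = [(lb0, lo0, hi0)]
while heap:
    lb, lo, hi = heapq.heappop(heap)
    if lb >= best_val - 1e-6 or hi - lo < 1e-3:
        continue                              # fathomed or negligibly small
    for a, b_ in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
        sub_lb, x = bound(a, b_)
        if x is None:
            continue
        if f(x) < best_val:                   # update the incumbent
            best_x, best_val = x, f(x)
        if sub_lb < best_val - 1e-6:
            heapq.heappush(heap, (sub_lb, a, b_))
print("approximate global minimum:", round(best_val, 4), "at", best_x)
```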
2.2.4. Methods Based on Polyhedral Annexation
A limitation of conventional optimization methods is that they can become
trapped at a local minimum, or even a stationary point, if they are applied to a global
optimization problem, e.g. see the algorithms proposed by Swarup (1966a, 1966b). The
central problem of a global optimization method then is to overcome this limitation by
providing a certification test for global optimality, and if a point is not globally optimal,
determining how to move to a better solution. Tuy (1991) called this the subproblem of
"transcending the incumbent," where the incumbent is the best feasible solution found so
far by an algorithm.
Let $f$ be the objective function for problem (LMP2), and let $\bar{x}$ be a vertex of $D$ that represents the incumbent solution for this problem. Then, from Tuy (1991), to transcend the incumbent, one must find a point $x \in D$ such that $f(x) < f(\bar{x})$ or else establish that no such point exists, i.e., that $\bar{x}$ is a global optimal solution for problem (LMP2).

Let $G := \{x \in \Omega \mid f(x) \ge f(\bar{x})\}$, where $\Omega$ is a convex set containing $D$. The problem of transcending the incumbent can then be restated as the following problem:

$$(GCP) \qquad \text{Check if } D \subseteq G \text{ and, if not, find a point } x \in D \setminus G.$$

Problem (GCP) is known as the Geometric Complementarity Problem.
Tuy (1990) developed the method of polyhedral annexation to solve problem (GCP). In polyhedral annexation a sequence of polytopes $P_1 \subset P_2 \subset \cdots$ is constructed by adding a vertex to the polytope $P_{k-1}$ of the previous iteration in such a way that a vertex of $D$ is annexed into the new polytope $P_k$. The sequence $P_1 \cap D, P_2 \cap D, \ldots$ forms an expanding inner approximation of $D$. When a polytope $P_h \supseteq D$ is found, all of the extreme points of $D$ have been searched and the algorithm terminates. Associated with the sequence of polytopes $P_1 \subset P_2 \subset \cdots \subset P_k \subset \cdots$ is the sequence of their polars $P_1^* \supset P_2^* \supset \cdots \supset P_k^* \supset \cdots$, where the polar $E^*$ of a convex set $E$ in $R^n$ is defined as $E^* := \{y \in R^n \mid \langle y, x \rangle \le 1 \text{ for all } x \in E\}$. A dual correspondence exists between the facets of a polytope $P_k$ and the vertices of its polar $P_k^*$. The subproblem of determining the inequality representation of $P_k$ after a new vertex has been added can then be solved by solving the easier problem of computing the vertices of $P_k^*$. The termination condition $P_h \supseteq D$ has the corresponding condition $P_h^* \subseteq D^*$. For a more detailed description of polyhedral annexation, see the chapters on inner approximation in Horst, Pardalos, and Thoai (1995) or in Horst and Tuy (1993).
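A tiny numerical illustration of this facet-vertex duality (a toy polytope, not from the literature): the polar of the square $[-1, 1]^2$, which contains the origin in its interior, is the diamond with vertices $(\pm 1, 0)$ and $(0, \pm 1)$, and each diamond vertex is exactly the coefficient vector $a$ of one facet $\langle a, x \rangle \le 1$ of the square.

```python
# Facet-vertex duality check on a toy pair: square P and its polar diamond P*.
import numpy as np

square = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
diamond = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

# Every diamond vertex y satisfies <y, x> <= 1 at all vertices x of the
# square, hence on the whole square by convexity, so diamond subset of P*.
print(np.all(diamond @ square.T <= 1 + 1e-12))   # True
# Each facet a^T x <= 1 of the square reappears as the vertex a of the polar.
```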
Tuy and Tam (1992) proposed two algorithms that are derived using the polyhedral annexation method with a dualization and dimension reduction technique developed by Tuy (1991). Dualization refers to solving the original problem by solving the dual problem of generating a sequence of polars until a polar $P_k^* \subseteq D^*$ is found. The key to the dimension reduction technique is the introduction of a cone into problem (GCP). Tuy and Tam (1992) assumed that $c^1$ and $c^2$ are linearly independent vectors and then formed the cone $K := \{x \in R^n \mid \langle c^i, x \rangle \ge 0,\ i = 1, 2\}$. Cone $K$ is of interest since if $\bar{x} \in D$ is an incumbent solution, then, for any $x \in (\bar{x} + K)$, $f(x) \ge f(\bar{x})$. In other words, cone $K$ identifies points in $R^n$ that can do no better than the incumbent solution $\bar{x}$. Computational effort might be saved using cone $K$ since a part of the feasible region $D$ can be eliminated from further consideration and the search narrowed to the remaining portion of $D$.

The first algorithm proposed by Tuy and Tam (1992) solves problem (LMP2) by solving problem (GCP) through the dualization process of generating a sequence of polars until a polar $P_k^* \subseteq D^*$ is found. Tuy and Tam (1992) showed that the polar $K^*$ of cone $K$ is explicitly given as $K^* = \{y \in R^n \mid y = -t_1 c^1 - t_2 c^2 \text{ for some } t_1 \ge 0,\ t_2 \ge 0\}$. Any vertex $y$ in a polar $P_k^*$ lies in the polar cone $K^*$, and the multipliers $t_1$ and $t_2$ used to express $y$ are unique, since $c^1$ and $c^2$ are linearly independent vectors. Polar cone $K^*$ is used to solve the dual problem by building a collapsing sequence of polars $P_1^* \supset P_2^* \supset \cdots \supset D^*$, with each polar being an improved approximation of $D^*$. The search is conducted in the two-dimensional space generated by $c^1$ and $c^2$ rather than in the original $n$-dimensional space. Solving the linear program

$$(LP(t)) \qquad \max\{-t_1 \langle c^1, x \rangle - t_2 \langle c^2, x \rangle \mid x \in D\},$$

where $t_1$ and $t_2$ are the multipliers used to express some vertex $y = -t_1 c^1 - t_2 c^2$ of $P_h^*$, tests for the termination condition $P_h^* \subseteq D^*$.
The second algorithm proposed by Tuy and Tam (1992) is motivated by the observation that for a fixed value of $t = (t_1, t_2)$, problem $(LP(t))$ is equivalent to the linear program

$$(LP(\alpha)) \qquad \max\{\langle -c^1 - \alpha(c^2 - c^1), x \rangle \mid x \in D\},$$

where $\alpha = t_2 / (t_1 + t_2) \in [0, 1]$. The first algorithm thus reduces to solving a sequence of linear programs $(LP(\alpha))$ for different values of the parameter $\alpha$. The second algorithm proposed by Tuy and Tam (1992) is to parametrically solve problem $(LP(\alpha))$ for all of the critical values of $\alpha$ at which new bases become optimal. The objective function $f$ of problem (LMP2) is evaluated at each basis and a global optimal solution chosen from those bases. The second algorithm of Tuy and Tam (1992) essentially solves the same parametric problem (MP2) used by Konno and Kuno (1992).
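The equivalence of $(LP(t))$ and $(LP(\alpha))$ is a simple rescaling: dividing the objective of $(LP(t))$ by the positive constant $t_1 + t_2$ does not change its maximizers, and

$$\frac{-t_1 c^1 - t_2 c^2}{t_1 + t_2} = -\frac{t_1}{t_1 + t_2} c^1 - \frac{t_2}{t_1 + t_2} c^2 = -c^1 - \alpha (c^2 - c^1), \qquad \alpha = \frac{t_2}{t_1 + t_2} \in [0, 1].$$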

Tuy and Tam (1992) ran computational experiments using both the first
polyhedral annexation algorithm and the second parametric algorithm. Their results
showed that for solving problem (LMP2), the parametric algorithm performed better than
the polyhedral annexation algorithm. The polyhedral annexation algorithm is not as
efficient because more simplex pivots were required than for the parametric algorithm.
Tuy and Tam (1992) proposed an improved variant of the polyhedral annexation
algorithm that reduces the number of pivots and the number of objective function
evaluations. The authors observed that the improved algorithm may potentially be more
useful for a problem with an objective function that is difficult to evaluate. The
computational experiments run using the parametric algorithm on problems of up to (m,
n) = (30, 200) and positive problem data were in line with the results reported in Konno
and Kuno (1992).
2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP) when $p \ge 3$

The polyhedral annexation method of Tuy and Tam (1992) and the outcome-space algorithms of Thoai (1991) and Falk and Palocsay (1994) can be extended to the more general problem (LMP) where $p \ge 3$. Although the algorithms remain unchanged, the subproblem of determining the new vertices becomes more difficult as the number of function terms in the objective function increases.
2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)
Relatively little work has been done in designing exact global solution algorithms
that address problems (CMP), (GCMP), and (CCMP). The algorithms that have been
proposed fall into two categories: (1) methods based on solving a reformulated problem
and (2) a method based on outer approximation.
2.4.1. Methods Based on Solving a Reformulated Problem
Konno and Kuno (1992) introduced problem (CMP) where p = 2 and formulated
a master problem by introducing a parameter into the original problem to separate the two
functions of the objective function into a summation. This technique of embedding the
original problem into a problem in a higher dimensional space is similar to the one used
by the authors in the same paper to solve problem (LMP2). At the time, Konno and Kuno
were not able to give an algorithm for solving the master problem. In Kuno and Konno
(1991) the authors proposed a branch and bound algorithm along with an underestimation
function to solve it. Computational results for problems of up to (m, n) = (200, 180)
indicated that the algorithm is efficient when the objective function is the product of a
linear function and a quadratic function and the feasible region is a polytope.
Kuno, Yajima, and Konno (1993) extended the parametrization technique of
Kuno and Konno (1991) for problem (CMP) to handle cases where p ≥ 2. They showed
that a global optimal solution to problem (CMP) can be obtained by solving the
equivalent problem

$(\mathrm{MP4}) \quad \min \sum_{j=1}^{p} \xi_j f_j(x), \ \text{s.t.}\ x \in X,\ \xi \in \Xi,$

where

$\Xi = \{\, \xi \in R^p \mid \prod_{j=1}^{p} \xi_j \ge 1,\ \xi \ge 0 \,\}.$

For a fixed ξ ∈ Ξ, let x(ξ) denote an optimal solution of min_{x∈X} G(x; ξ) = Σ_{j=1}^p ξ_j f_j(x).
Let h : Ξ → R be defined by h(ξ) = G(x(ξ); ξ) for any ξ ∈ Ξ. Solving problem (MP4)
then reduces to solving the problem in R^p given by

$(\mathrm{MP4}') \quad \min h(\xi), \ \text{s.t.}\ \xi \in \Xi.$

Kuno, Yajima, and Konno (1993) showed that h is a concave function over Ξ and
therefore a global optimal solution of problem (MP4') exists on the boundary of Ξ. They
proposed an outer approximation method for solving problem (MP4') and tested their
algorithm against two subclasses of problem (CMP): (1) problem (LMP) and (2)
problems similar to those tested in Kuno and Konno (1991), in which the objective function
is the product of a linear and a quadratic function and the constraints are linear
inequalities. Computational experiments showed that the total computational time is
dominated by that needed for solving the convex minimization master problems for each
parameter value. The results also showed that the number of cuts and vertices generated
increases rapidly as p increased from 2 to 5. The authors asserted that this was due to
inefficiencies in computing new vertices, especially when p exceeds 5. However, if p is
held constant, these numbers increased very slowly as the number of constraints and
variables increased. The authors concluded that their algorithm is reasonably efficient
when p is less than 4.
Jaumard, Meyer, and Tuy (1997) added a convex function to the objective
function of problem (CMP) to form problem (CCMP). The authors showed that problem
(CCMP) can be reduced to a quasiconcave minimization problem in R^p that is a
generalization of problem (MP4') used by Kuno, Yajima, and Konno (1993). In the
special case where f_0 = 0 in problem (CCMP), the reduced quasiconcave minimization
problem in Jaumard, Meyer, and Tuy (1997) can be shown to be equivalent to the one
used by Kuno, Yajima, and Konno (1993). Jaumard, Meyer, and Tuy (1997) find a global
solution of problem (CCMP) by finding an optimal solution to the quasiconcave
minimization problem in Rp using a conical branch and bound method. They ran
computational experiments using their algorithm on test problems similar to those used
by Kuno, Yajima, and Konno (1993) and Thoai (1991). The authors report that their
results are very sensitive to the magnitude of p and not as sensitive to the size (m, n) of
the constraint matrix.
Sniedovich and Findlay (1995) analyzed problem (CMP) from the perspective of
c-programming but did not give a complete algorithm for solving it. C-programming is a
technique developed by Sniedovich (1984) for solving an optimization problem of the
form

$(\mathrm{CP}) \quad q := \min_{x \in X} \psi(\varphi(x)),$

where X is some nonempty set, φ is a mapping on X with values in R^p, and ψ is a
differentiable and pseudo-concave function on some open set containing the set
φ(X) := {φ(x) | x ∈ X}. The idea of c-programming is to transform the original
optimization problem into the parametric programming problem

$(\mathrm{MP5}) \quad q(\xi) := \min_{x \in X} \langle \xi, \varphi(x) \rangle.$

Sniedovich showed that if x* is a globally optimal solution for problem (CP), then x* is an
optimal solution for problem (MP5) when ξ = ξ* = ∇ψ(φ(x*)).
For problem (CMP), the objective function can be expressed as the composite
ψ(φ(x)) of two functions, where, for each x ∈ R^n, φ(x) = (f_1(x), ..., f_p(x)) and, for each
y ∈ R^p, ψ(y) = ∏_{j=1}^p y_j. Sniedovich and Findlay claimed without proof that ψ is a
differentiable and pseudo-concave function on the open convex set {y ∈ R^p | y > 0}. Since
problem (CMP) satisfies the requirements of c-programming, it can be solved by solving
the parametric problem

$(\mathrm{MP5}') \quad \min_{x \in X} \langle \xi, \varphi(x) \rangle, \ \xi \in \Xi,$

where Ξ is any subset of R^p such that ∇ψ(φ(x)) ∈ Ξ for all x ∈ X. In problem (MP5'),
the parameter ξ appears only in the objective function, whereas in problem (MP4) the
parameter ξ appears in both the objective function and in the constraints. Standard
Lagrangian methods can be employed to solve problem (MP5') for all ξ ∈ Ξ, while
specialized methods are required to optimize the objective function of problem (MP4)
with respect to the original variable x and the parameter ξ.
Kuno and Konno (1991) and Konno, Kuno, and Yajima (1994) considered
problem (GCMP) for cases where q = 1 and q ≥ 1, respectively. For q = 1, the master
problem and solution algorithm are similar to the one used by Kuno and Konno (1991) to
solve problem (CMP) when p = 2. Computational experiments showed that the
underestimation function does not perform as well as it does for problem (CMP).
For q ≥ 1, the master problem in Konno, Kuno, and Yajima (1994) is formulated
by introducing a pair of parameters for each pair of convex functions that appear in the
objective function of problem (GCMP). The master problem is a convex minimization
problem in the parameter space and is solved using an outer approximation algorithm.
Computational experiments conducted using a polyhedron for the feasible region showed
that for q = 1, this algorithm required less than half the computational time required by
the branch and bound with underestimation function algorithm proposed in Konno and
Kuno (1992) to solve problem (CMP).
Tuy (1992) gave problem (CMP) as an example of an optimization problem that
can be formulated as a geometric complementarity problem and solved it using a
parametric programming problem. The parametric programming problem is a convex
minimization problem in which a positive parameter vector is used to build a composite
objective function from the convex functions in the objective function of problem (CMP).
A complete algorithm that includes solving the parametric program was not given.
2.4.2. A Method Based on Outer Approximation
Thoai (1991) extended the algorithm based on the outer approximation technique
that he proposed for solving problem (LMP2) to address the solution of problem (CMP)
when p = 2. The main idea is to build a sequence of decreasing polytopes
P_0 ⊇ P_1 ⊇ ... ⊇ X of the convex feasible region X and a sequence of decreasing
polytopes S_0 ⊇ S_1 ⊇ ... ⊇ Y of the outcome set Y, where

$Y = \{\, y \in R^2 \mid y_1 = f_1(x),\ y_2 = f_2(x) \ \text{for some}\ x \in X \,\}.$
Problem (CMP) is then solved by applying a modified version of the algorithm for
problem (LMP2). In any iteration k, up to two cuts are introduced, one for Pk and one
for Sk, to obtain tighter approximating sets.
Since the algorithm does not depend on the actual value of p, it can be extended
to handle cases where p ≥ 3.
2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem
Konno and Kuno (1992) showed that the objective function of problem (LMP) is
not a convex function over the feasible set D. Therefore, problem (LMP) is not a convex
programming problem. However, since the natural logarithm function ln is a strictly
increasing concave function on (0, ∞), it is easy to show that the function F : D → R
defined by

$F(x) = \sum_{j=1}^{p} \ln \langle c^j, x \rangle$

for all x ∈ D is a concave function. In addition, the optimal solution set of the
concave minimization problem

$(\mathrm{CMIN}) \quad \min F(x), \ \text{s.t.}\ x \in D,$

is identical to the optimal solution set of problem (LMP). Therefore, any concave
minimization method may be applied to problem (LMP) if the objective function is
replaced by its logarithmic equivalent.
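As a minimal sketch of this transformation (under our own assumptions about the data
layout: the rows of C hold the vectors c^j and D = {x | Ax ≤ b}), the logarithmic objective
can be coded directly and handed to a general-purpose nonlinear programming routine. A
local solver will, of course, only return a stationary point of the concave program, so in
practice a global concave minimization method would drive such evaluations.

import numpy as np
from scipy.optimize import minimize, LinearConstraint

def F(x, C):
    # Logarithmic equivalent of the (LMP) objective: F(x) = sum_j ln <c^j, x>.
    return np.sum(np.log(C @ x))

def local_log_solve(C, A, b, x0):
    """Locally minimize the concave function F over D = {x | A x <= b},
    starting from a point x0 with C @ x0 > 0. Names are hypothetical."""
    cons = LinearConstraint(A, -np.inf, b)
    return minimize(F, x0, args=(C,), method="trust-constr", constraints=[cons])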
Using this logarithmic transformation, Tuy (1991) showed that problem
(LMP) could be solved in a reduced dimension space using polyhedral annexation and the
dualization and dimension reduction technique. The algorithm presented in Tuy and Tam
(1992) is essentially an improvement of the one in Tuy (1991).
Ryoo and Sahinidis (1996) also converted problem (LMP) into the problem
(CMIN). To solve problem (CMIN), they employed a branch and bound algorithm that
incorporates the use of valid inequalities to accelerate convergence. Branch and bound
algorithms may slowly converge to an optimal solution when the gap between the initial
upper and lower bounds is large. A valid inequality is an inequality constraint that does not
exclude any solution that yields an objective function value lower than the current best
upper bound. By introducing valid inequalities into the constraint set, inferior parts of the
feasible region may be removed from further consideration without eliminating possible
global optimal solutions. A second use of valid inequalities is to reduce the range of
values that the variables in the problem can assume. Ryoo and Sahinidis referred to these
two uses of valid inequalities as range reduction mechanisms. The performance of the
bounding procedure in the branch and bound algorithm is improved by using these range
reduction mechanisms, since smaller-sized partitions of the feasible region are used and
the variables are restricted to reduced ranges of values.
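The following sketch shows one brute-force way to realize the second use of valid
inequalities described above; it assumes a linear underestimator ⟨c, x⟩ of the objective
over the current partition and an incumbent value UB (hypothetical names), and tightens
each variable's range by solving two linear programs per variable. This only conveys the
idea; the mechanisms implemented in BARON are considerably cheaper.

import numpy as np
from scipy.optimize import linprog

def reduce_ranges(c, A, b, UB):
    """Tighten variable bounds over {x | A x <= b, <c, x> <= UB}, where
    <c, x> <= UB is a valid inequality: no point better than the incumbent is lost."""
    A_aug = np.vstack([A, c])              # append the valid inequality as a new row
    b_aug = np.append(b, UB)
    n = A.shape[1]
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        lo[i] = linprog(e, A_ub=A_aug, b_ub=b_aug,
                        bounds=(None, None), method="highs").fun
        hi[i] = -linprog(-e, A_ub=A_aug, b_ub=b_aug,
                         bounds=(None, None), method="highs").fun
    return lo, hi                          # reduced ranges [lo_i, hi_i] for each x_i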
Ryoo and Sahinidis implemented the branch and bound algorithm along with the
range reduction mechanisms in a computer program called BARON (Branch-And-Reduce
Optimization Navigator). To more easily calculate lower bounds on the objective function
F of problem (CMIN) over a partition of the feasible region, the authors replaced F by a
linear underestimating function. Lower bounds were then calculated by solving linear
programs. The authors tested randomly-generated problems in sizes from (m, n) = (50,
50) to (200, 200), with p ranging from 2 to 5. They reported that only a small fraction of
the total CPU time is consumed in the range reduction mechanisms and that there seemed
to be a low-order polynomial relationship between the CPU time and the value of p.

CHAPTER 3
CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND
AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE
3.1. Introduction
An important but little-researched area that deserves more attention is the
development of heuristic algorithms for finding good solutions to multiplicative
programming problems. In some applications, a good, though not necessarily globally
optimal, solution may adequately meet the requirements of a user (Konno and Inori
1989). In these cases, since multiplicative programming problems are known to be NP-
hard, the expenditure of computational effort required to globally solve them may not be
needed.
This chapter has two purposes. The first is to present an analysis of problem (Px)
when problem (Px) is a concave multiplicative programming problem. The second
purpose is to propose a heuristic algorithm designed for the case where problem (Px) is a
linear multiplicative programming problem.
The analysis of the concave multiplicative programming problem is presented in
Section 3.2. This analysis shows a new way to write a concave multiplicative
programming problem as a concave minimization problem and some theoretical
consequences of this. It also shows some relationships between concave multiplicative
programs and certain multiple-objective mathematical programs. In Section 3.3, by using
some of the results of Section 3.2, we present and explain the workings of an efficient-
point search heuristic algorithm that we have developed for the linear multiplicative
programming problem. Section 3.4 reports and analyzes some statistics summarizing the
computational results that we obtained by coding the heuristic algorithm and applying it
to 260 randomly-generated linear multiplicative programs. In Section 3.4 we also report
the results of applying the heuristic algorithm to a multiplicative programming problem
formed from a decision situation using real data. In Section 3.5, we discuss the major
results of this chapter.
3.2. Analysis
Assume in problem (Px) that X is a convex set and that, for each j = 1, 2, ..., p,
f_j : X → R is a concave function; i.e., assume that problem (Px) is a concave
multiplicative programming problem. Consider the function ĝ : X → R defined for each
x ∈ X by

$\hat g(x) = \log \prod_{j=1}^{p} f_j(x).$

Then, it is a simple matter to show that ĝ : X → R is a concave function and that the
optimal solution set of the concave minimization problem

$\min\ \hat g(x), \ \text{s.t.}\ x \in X, \qquad (3.1)$
is identical to the optimal solution set of problem (Px). Thus, any concave multiplicative
programming problem of the form of problem (Px), if rewritten in the form (3.1), can be
solved by applying any appropriate general-purpose concave minimization algorithm to
(3.1). For discussions and reviews of concave minimization algorithms, see, for instance,
Benson (1995), Benson (1996), Horst and Tuy (1993), and Pardalos and Rosen (1987).
It is interesting and useful in both practice and theory to observe that, in addition
to (3.1), there is at least one other way to rewrite a concave multiplicative programming
problem as a concave minimization problem. To show how this can be accomplished, we
will first prove the following preliminary result.
Lemma 3.2.1. Let a ∈ R^p satisfy a > 0, and consider the nonlinear programming problem

$v = \min\ \langle a, \lambda \rangle, \ \text{s.t.}\ \lambda \in \Lambda, \qquad (3.2)$

where

$\Lambda = \{\, \lambda \in R^p \mid \prod_{j=1}^{p} \lambda_j \ge 1,\ \lambda > 0 \,\}.$

Then, v is finite and problem (3.2) has at least one optimal solution.
Proof. Notice that, if λ ∈ Λ, then λ > 0 and ⟨a, λ⟩ > 0. Therefore, v ≥ 0. This,
combined with the fact that Λ ≠ ∅, implies that v is finite.
Now, suppose that, for each j = 1, 2, ..., there exists a vector λ^j ∈ Λ such that

$\langle a, \lambda^j \rangle \le v + \varepsilon_j,$

where {ε_j}_{j=1}^∞ is a strictly decreasing sequence of positive real numbers such that
lim_{j→∞} ε_j = 0. Then the sequence {λ^j}_{j=1}^∞ is either bounded or unbounded.
Case 1: {λ^j}_{j=1}^∞ is bounded. Then, for some bounded set Λ̄ ⊆ Λ, λ^j ∈ Λ̄ for each
j = 1, 2, .... Therefore, by passing to an appropriate subsequence {λ^j}_{j∈J} of {λ^j}_{j=1}^∞, if
necessary, we can guarantee that λ̄ = lim_{j∈J} λ^j exists. Furthermore, since λ^j ∈ Λ̄ ⊆ Λ for
each j ∈ J, and Λ is a closed set, λ̄ belongs to Λ. By assumption,

$\langle a, \lambda^j \rangle \le v + \varepsilon_j, \qquad (3.3)$

for each j ∈ J. By taking the limits over j ∈ J on both sides of (3.3), we conclude that
⟨a, λ̄⟩ ≤ v. Since λ̄ ∈ Λ, this implies that λ̄ is an optimal solution to (3.2).
Case 2: {λ^j}_{j=1}^∞ is unbounded. Then, for some subsequence {λ^j}_{j∈J} of {λ^j}_{j=1}^∞ and
for some k ∈ {1, 2, ..., p}, lim_{j∈J} λ^j_k = +∞. For each j ∈ J, since λ^j ∈ Λ, λ^j > 0.
Combined with the fact that a > 0, this implies that, for each j ∈ J,

$0 < a_k \lambda^j_k \le \langle a, \lambda^j \rangle. \qquad (3.4)$

By assumption, for each j ∈ J,

$\langle a, \lambda^j \rangle \le v + \varepsilon_j. \qquad (3.5)$

From (3.4) and (3.5), we obtain

$a_k \lambda^j_k \le v + \varepsilon_j, \qquad (3.6)$

for each j ∈ J. By taking the limits over j ∈ J on both sides of (3.6), we conclude that
+∞ ≤ v, which is a contradiction. Therefore, this case cannot hold, and the proof is
complete.
Using Lemma 3.2.1, we may now establish the following theorem.
Theorem 3.2.1. Assume in problem (Px) that X is a convex set and that f_j : X → R,
j = 1, 2, ..., p, are concave functions. Let g : X → R be defined for each x ∈ X by

$g(x) = p \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}.$

Then g : X → R is a concave function.
Proof. Consider the function h : X → R defined for each x ∈ X by

$h(x) = \min\ \sum_{j=1}^{p} \lambda_j f_j(x), \ \text{s.t.}\ \lambda \in \Lambda, \qquad (3.7)$

where Λ is as defined in Lemma 3.2.1. From Lemma 3.2.1, since f_j(x) is strictly positive
for each j = 1, 2, ..., p, it follows that the minimum in (3.7) exists and is finite for
each x ∈ X. If, for each λ ∈ Λ, we define a function h_λ : X → R by

$h_\lambda(x) = \sum_{j=1}^{p} \lambda_j f_j(x),$

then for each x ∈ X, h(x) may also be written as

$h(x) = \min_{\lambda \in \Lambda} h_\lambda(x). \qquad (3.8)$

Notice that, for each λ ∈ Λ, h_λ : X → R is a concave function. From this and (3.8), we
conclude that h : X → R is also a concave function (Rockafellar 1970).
To complete the proof, we will show that, for each x ∈ X, h(x) = g(x). Toward
this end, fix x ∈ X and let λ(x) ∈ Λ denote an optimal solution to problem (3.7). From
the Karush-Kuhn-Tucker necessary conditions for this problem (Bazaraa, Sherali, and
Shetty 1993), since λ(x) > 0, it follows that there exists a nonnegative constant θ(x)
such that

$f_j(x) = \theta(x) \prod_{i \ne j} \lambda_i(x), \quad j = 1, 2, \ldots, p. \qquad (3.9)$

Since λ(x) ∈ Λ is an optimal solution to problem (3.7), it is easy to see that

$\prod_{j=1}^{p} \lambda_j(x) = 1.$

Together with (3.9), this implies that

$\lambda_j(x) f_j(x) = \theta(x), \quad j = 1, 2, \ldots, p. \qquad (3.10)$

From (3.10), it follows that

$\lambda_j(x) = \theta(x) / f_j(x), \quad j = 1, 2, \ldots, p.$

By substitution in ∏_{j=1}^p λ_j(x) = 1, this implies that

$\theta(x) = \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}. \qquad (3.11)$

From equations (3.10) and (3.11), we see that

$\sum_{j=1}^{p} \lambda_j(x) f_j(x) = p\,\theta(x) = p \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}. \qquad (3.12)$

Since x ∈ X and λ(x) is an optimal solution to (3.7), the left-hand side of equation
(3.12) coincides with h(x). By definition of g, the right-hand side of equation (3.12)
equals g(x), so that the proof is complete.
Theorem 3.2.1 can also be proven by using a composite function approach and
showing several preliminary results (Avriel, Diewert, Schaible, and Zang 1987). We offer
the proof here, because it is more direct and because we will use it below to help derive a
corollary of interest.
Notice from Theorem 3.2.1 that, when problem (Px) is a concave multiplicative
program, the optimal solution set of problem (Px) is identical to the optimal solution set
of the concave minimization problem

$\min\ g(x), \ \text{s.t.}\ x \in X, \qquad (3.13)$

where g : X → R is defined for each x ∈ X by

$g(x) = p \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}.$

In practice, this implies that any concave multiplicative program (Px), if rewritten in the
form (3.13), can be solved by applying any suitable concave minimization algorithm to
(3.13). Notice also that problem (3.13) is a simpler reformulation of problem (Px) for the
concave case than the typical reformulation used in the literature to solve problem (Px)
in the convex case (see, e.g., Konno and Kuno 1992, Kuno and Konno 1991, Thoai 1991,
and Kuno, Yajima, and Konno 1993).
Theorem 3.2.1 also has some interesting theoretical implications concerning the
product of functions. For instance, for any finite set of concave functions f_j, j = 1, 2,
..., p, each defined on a common nonempty convex domain X ⊆ R^n and each strictly
positive on this domain, it is known that the function defined by their product
is not necessarily concave, convex, or quasiconvex on X (Kuno, Yajima and Konno
1993 and Avriel, Diewert, Schaible and Zang 1988). However, from Theorem 3.2.1, the
function g : X → R given by

$g(x) = p \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}$

for each x ∈ X is a concave function on X.
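As a quick numerical illustration of this fact (our own sketch, not part of the analysis),
one can sample random point pairs and verify the concavity inequality
g(λx¹ + (1−λ)x²) ≥ λg(x¹) + (1−λ)g(x²) for g built from positive affine functions
f_j(x) = ⟨c^j, x⟩ on the positive orthant:

import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 5
C = rng.integers(1, 11, size=(p, n)).astype(float)   # f_j(x) = <c^j, x> > 0 for x > 0

def g(x):
    # The concave function of Theorem 3.2.1: g(x) = p * (prod_j f_j(x))**(1/p).
    return p * np.prod(C @ x) ** (1.0 / p)

for _ in range(1000):
    x1, x2 = rng.uniform(0.1, 10.0, n), rng.uniform(0.1, 10.0, n)
    lam = rng.uniform()
    # Concavity inequality, up to floating-point roundoff:
    assert g(lam * x1 + (1 - lam) * x2) >= lam * g(x1) + (1 - lam) * g(x2) - 1e-8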
In addition, Theorem 3.2.1 implies the following result concerning the product of
a set of concave functions.
Corollary 3.2.1. Let X and f_j, j = 1, 2, ..., p, be defined as in Theorem 3.2.1, and
suppose that ḡ : X → R is defined for each x ∈ X by

$\bar g(x) = \prod_{j=1}^{p} f_j(x).$

Then ḡ : X → R is a quasiconcave function.
Proof. Choose α ∈ R, and let

$L_\alpha = \{\, x \in X \mid \bar g(x) \ge \alpha \,\}.$

If α ≤ 0, L_α = X is a convex set. If α > 0, then from Theorem 3.2.1 and Rockafellar
(1970), the set

$L'_\alpha = \{\, x \in X \mid p[\bar g(x)]^{1/p} \ge \beta \,\}$

is a convex set, where β = pα^{1/p}. Since L'_α = L_α, this implies that L_α is a convex set.
Therefore, we have shown that, for any α ∈ R, L_α is a convex set. This is equivalent to
showing that ḡ : X → R is a quasiconcave function (Bazaraa, Sherali, and Shetty 1993),
so that the proof is complete.
It follows from Corollary 3.2.1 that any concave multiplicative programming
problem (Px) is a problem involving the minimization of a quasiconcave function over a
convex set. Many of the most popular algorithms for minimizing a concave function over
a convex set are equally suitable for minimizing quasiconcave functions over convex sets
(Horst and Tuy 1993 and Benson 1995). As a result, we see that any concave
multiplicative program (Px) can be solved by applying any number of suitable concave
minimization algorithms directly to problem (Px). In particular, no reformulations of
problem (Px) are needed to apply these algorithms.
Remark 3.2.1. Corollary 3.2.1 has been previously shown to hold for the special case
where p = 2, X is a nonempty, compact polyhedron, and f_1 and f_2 are linear functions
(see, e.g., Konno and Kuno 1992).
The next corollary of Theorem 3.2.1 concerns the minimization problem (3.7)
used in the proof of the theorem. Possible uses for this corollary may include the
construction of methods for finding local optimal solutions to concave multiplicative
programs, although we will not investigate this here.
Corollary 3.2.2. Let X and f_j, j = 1, 2, ..., p, be defined as in Theorem 3.2.1, and let Λ
be defined as in Lemma 3.2.1. Then, Λ is a convex set and, for each x ∈ X, the unique
optimal solution λ(x) to problem (3.7) is given by

$\lambda_j(x) = \Big[ \prod_{i=1}^{p} f_i(x) \Big]^{1/p} \Big/ f_j(x), \quad j = 1, 2, \ldots, p.$

Proof. Notice that Λ may be rewritten according to the relation

$\Lambda = \{\, \lambda \in \mathrm{int}\,R^p_+ \mid m(\lambda) \ge p \,\}, \qquad (3.14)$

where

$\mathrm{int}\,R^p_+ = \{\, \lambda \in R^p \mid \lambda > 0 \,\}.$

It is easy to see that, for each j = 1, 2, ..., p, h_j : int R^p_+ → R, defined for each
λ ∈ int R^p_+ by

$h_j(\lambda) = \lambda_j,$

is a concave function on int R^p_+ that satisfies

$h_j(\lambda) > 0, \ \text{for all}\ \lambda \in \mathrm{int}\,R^p_+.$

Therefore, by Theorem 3.2.1, the function m : int R^p_+ → R defined for each λ ∈ int R^p_+ by

$m(\lambda) = p \Big[ \prod_{j=1}^{p} \lambda_j \Big]^{1/p}$

is a concave function. This implies that

$\{\, \lambda \in \mathrm{int}\,R^p_+ \mid m(\lambda) \ge p \,\}$

is a convex set (Rockafellar 1970). By (3.14), this proves that Λ is a convex set.
Now, fix x ∈ X, and let λ(x) ∈ Λ denote an optimal solution to problem (3.7).
From the proof of Theorem 3.2.1, this implies that, for each k = 1, 2, ..., p,

$\lambda_k(x) = \theta(x) / f_k(x),$

where θ(x) is given by (3.11), so that the corollary is proven.
In addition to its relationships to concave minimization, a concave multiplicative
program also has some interesting ties to multiple-objective mathematical programming.
In the remainder of this section, we will show some of the theoretical relationships
between concave multiplicative programs and certain multiple-objective mathematical
programs. In the next section, some practical benefits of those relationships will be
demonstrated.
Let f(x) denote the vector

$[f_1(x), f_2(x), \ldots, f_p(x)]^T,$

where f_j : X → R, j = 1, 2, ..., p, are the functions used in defining problem (Px).
Then, the components of the vector f(x) are generally conflicting, in the sense that the
infima over X of f_j(x), j = 1, 2, ..., p, are generally not simultaneously achieved at the
same point in X. As a result, inherent tradeoffs in the achievable values of the
components of f(x) over xe X are present. To account for these tradeoffs, and to seek
what decision makers call a most preferred solution in situations where the goal is to
attempt to simultaneously minimize fÂ¡{x), j = 1,2,..., p, over X, one of the most
popular approaches is to consider the associated multiple-objective mathematical
program
$\mathrm{VMIN}\ f(x), \ \text{s.t.}\ x \in X. \qquad (3.15)$
In particular, in typical situations, a most preferred solution in X will exist that is also an
efficient solution for (3.15), where an efficient solution is defined as follows.
Definition 3.2.1. A point x̄ ∈ R^n is called an efficient solution for (3.15) when x̄ ∈ X
and, whenever f(x) ≤ f(x̄) for some x ∈ X, then f(x) = f(x̄).
An efficient solution is also called a nondominated or Pareto-optimal solution. By
generating or searching the set XE of the efficient solutions for (3.15), decision makers
are able to observe the inherent tradeoffs among the objective functions f_j, j = 1, 2, ...,
p, that are available over X and are often able to choose from XE a most preferred
solution. For further discussions on multiple-objective mathematical programming and its
applications, the reader may consult, for instance, Cohon (1978), Evans (1984), Luc
(1989), Sawaragi, Nakayama, and Tanino (1985), Stadler (1979), Steuer (1986), Yu
(1985), Zeleny (1982) and references therein.
The first relationship between multiplicative programming and multiple-objective
mathematical programming is given in the following result. The proof of this result is an
elementary exercise.
Proposition 3.2.1. Any optimal solution to problem (Px) must belong to the efficient set
XE of the multiple-objective mathematical programming problem (3.15).
Notice that Proposition 3.2.1 holds for arbitrary multiplicative programming
problems (Px). The next result, however, is restricted to certain types of concave
multiplicative programs.
Proposition 3.2.2. Assume in problem (Px) that X is a compact, convex set and that
f_j : X → R, j = 1, 2, ..., p, are concave functions. Then, there exists an optimal solution
to problem (Px) which is an extreme point of X.
Proof. From Theorem 3.2.1, problem (Px) can be solved by finding an optimal
solution to the concave minimization problem (3.13), where g : X → R is the concave
function defined by

$g(x) = p \Big[ \prod_{j=1}^{p} f_j(x) \Big]^{1/p}$

for each x ∈ X. Since X is a nonempty compact, convex set, from Horst and Tuy (1993),
problem (3.13) has an optimal solution that is an extreme point of X. These two
observations together prove the desired result.
Taken together, Propositions 3.2.1 and 3.2.2 imply that any concave multiplicative
programming problem with a compact feasible region has at least one optimal solution
that is an efficient extreme point solution to the multiple-objective mathematical
programming problem (3.15). Special cases of this observation have been alluded to in
the literature (see, e.g., Aneja, Aggarwal and Nair 1984 and Sniedovich and Findlay
1995). In the next section, we put this observation to practical use.
3.3. Efficient Point Search Heuristic
Assume in this section that, in problem (Px),
$X = \{\, x \in R^n \mid Ax \le b \,\}$

is a compact polyhedron, where A is an m × n matrix and b ∈ R^m, and that, for each
j = 1, 2, ..., p, f_j(x) = ⟨c^j, x⟩, where c^j ∈ R^n for each j = 1, 2, ..., p. Then problem
(Px) is a linear multiplicative programming problem or, more briefly, a linear
multiplicative program (Konno and Kuno 1992). We have designed and tested a heuristic
algorithm for this problem, based in part on some of the results in the previous section. In
this section, we will formally state this heuristic algorithm and explain its workings.
The multiple-objective program (3.15) associated with a linear multiplicative
problem may be written as
$\mathrm{VMIN}\ Cx, \ \text{s.t.}\ Ax \le b, \qquad (3.16)$

where C is the p × n matrix whose jth row equals (c^j)^T, j = 1, 2, ..., p. Problem (3.16)
is a multiple-objective linear programming problem (Steuer 1986 and Yu 1985). Let X_ex
denote the set of extreme points of
$X = \{\, x \in R^n \mid Ax \le b \,\}.$
Then, by Propositions 3.2.1 and 3.2.2, an optimal solution to the linear multiplicative
programming problem can be found in the set X_{E,ex} = X_E ∩ X_ex of efficient extreme
points of problem (3.16). The set X_{E,ex} is finite, and various procedures have been
developed for generating it in its entirety (see, e.g., Steuer 1986, Yu 1985 and Steuer
1983).
It follows that, in theory at least, a global optimal solution to a linear
multiplicative problem can be found by completely enumerating the set X_{E,ex} of efficient
extreme points of the associated multiple-objective linear programming problem (3.16)
and, from this set, choosing the point(s) with the smallest value of

$\prod_{j=1}^{p} \langle c^j, x \rangle$

(see, e.g., Sniedovich and Findlay 1995). Unfortunately, as we shall see later, in practice
the exponential growth in the size of X_{E,ex} as a function of problem size (Steuer 1986)
renders this approach impractical for many cases.
The approach of the heuristic algorithm is to efficiently search a dispersed,
carefully chosen sample of candidate points from X_{E,ex} in order to find an attractive
solution to the linear multiplicative programming problem. To describe and explain the
workings of the heuristic, we must first present some theoretical background from the
theory of multiple-objective linear programming.
Let
$W = \{\, w \in R^p \mid \langle e, w \rangle \le M,\ w \ge e \,\},$

where e ∈ R^p is a vector with each entry equal to 1.0, and M is a positive real number.
For sufficiently large M, from Philip (1972) it is known that a point x belongs to the
efficient set XE of (3.16) if and only if x is an optimal solution to the weighted-sum
problem
$\min\ \langle w^T C, x \rangle, \ \text{s.t.}\ Ax \le b, \qquad (3.17)$

for some w = w̄ ∈ W. We will assume henceforth that M is chosen to be large enough to
guarantee that this property holds. It is also well known that the efficient set X_E for
(3.16) is given by

$X_E = \bigcup_{w \in W} X_w,$

where, for each w ∈ W, X_w denotes the optimal solution set of the linear program (3.17)
(Steuer 1986 and Yu 1985). Since the optimal solution set to (3.17) for any w ∈ W is a
face of

$X = \{\, x \in R^n \mid Ax \le b \,\},$

it follows that the efficient set X_E for (3.16) is equal to the union of the faces X_w,
w ∈ W, of X. Although X_E is a connected set (Yu 1985), it is generally nonconvex. The
heuristic algorithm will individually identify efficient faces X_w, w ∈ W, of X, and find
an approximately-optimal extreme point solution to the problem

$\min\ \prod_{j=1}^{p} \langle c^j, x \rangle, \ \text{s.t.}\ x \in X_w, \qquad (3.18)$

for each efficient face X_w that it finds.
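In computational terms, identifying one efficient solution for a given weight vector is a
single weighted-sum linear program (3.17); the sketch below (hypothetical data names
C, A, b) illustrates this basic building block of the heuristic:

import numpy as np
from scipy.optimize import linprog

def weighted_sum_point(C, A, b, w):
    """Solve (3.17): min <w^T C, x> s.t. A x <= b. For strictly positive w,
    any optimal solution is an efficient solution of (3.16)."""
    res = linprog(w @ C, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
    return res.x if res.success else None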
Let

$Y = \{\, y \in R^p \mid y = Cx \ \text{for some}\ x \in X \,\},$
$Y^{\ge} = \{\, y \in R^p \mid y \ge \bar y \ \text{for some}\ \bar y \in Y \,\}.$

To aid in its search, the heuristic algorithm will solve the linear program

$\min\ \langle w^T C, x \rangle \qquad (3.19\mathrm{a})$
$\text{s.t.}\ Cx \le y, \qquad (3.19\mathrm{b})$
$\quad\ \ Ax \le b, \qquad (3.19\mathrm{c})$

for various values of y ∈ Y^≥ and w ∈ W. The heuristic relies in part upon the properties
of problem (3.19) given in the next three results. The first two results follow easily from
Benson (1978).
Theorem 3.3.1. Suppose that x̄ ∈ R^n and let ȳ = Cx̄. Then, x̄ is an efficient solution
for (3.16) if and only if, with y = ȳ, x̄ is an optimal solution to (3.19) for every
w ∈ W.
Theorem 3.3.2. If y ∈ Y^≥ and w ∈ W, then (3.19) has at least one optimal solution, and
any optimal solution for (3.19) is an efficient solution for (3.16).
Theorem 3.3.3. Suppose in (3.19) that w = w̄ ∈ W and that y = ȳ = Cx̄, where x̄ is
an efficient solution for (3.16). Let (u^{0T}, z^{0T}) denote any optimal solution to the linear
programming dual of (3.19), where u represents the dual variables corresponding to the
constraints Cx ≤ y of (3.19). Let w^0 = u^0 + w̄ and let v^0 = ⟨(w^0)^T C, x̄⟩. Then, x̄ belongs
to the efficient face X_{w^0} of X, and X_{w^0} can be represented as

$X_{w^0} = \{\, x \in X \mid \langle (w^0)^T C, x \rangle = v^0 \,\}.$
Proof. To prove the theorem, we will show that, with w = w^0, x̄ is an optimal
solution to problem (3.17). Suppose in (3.19) that w = w̄ ∈ W and that y = ȳ = Cx̄,
where x̄ is the efficient solution for (3.16) given in the theorem. The dual linear program
to (3.19) is then given by

$\max\ -\langle \bar y, u \rangle - \langle b, z \rangle,$
$\text{s.t.}\ -C^T u - A^T z = C^T \bar w,$
$\quad\ \ u, z \ge 0.$

From Theorem 3.3.1, x̄ is an optimal solution to (3.19) when w = w̄ and y = ȳ. By
the duality theory of linear programming (Murty 1983), since (u^{0T}, z^{0T}) is an optimal
solution to the linear programming dual of (3.19) when w = w̄ and y = ȳ, this implies
that

$\langle \bar w^T C, \bar x \rangle = -\langle \bar y, u^0 \rangle - \langle b, z^0 \rangle.$

By rearranging this equation and using the definitions of ȳ and w^0, we obtain

$\langle (w^0)^T C, \bar x \rangle = -\langle b, z^0 \rangle. \qquad (3.20)$

With w = w^0, the dual linear program to (3.17) may be written as

$\max\ -\langle b, z \rangle, \qquad (3.21\mathrm{a})$
$\text{s.t.}\ -A^T z = C^T w^0, \qquad (3.21\mathrm{b})$
$\quad\ \ z \ge 0. \qquad (3.21\mathrm{c})$

Let z̄ denote an arbitrary feasible solution to problem (3.21). From the definitions of u^0
and w^0, this implies that (u^{0T}, z̄^T) is a feasible solution to the dual linear program of
(3.19). Since (u^{0T}, z^{0T}) is an optimal solution to the latter problem, it follows that

$-\langle \bar y, u^0 \rangle - \langle b, \bar z \rangle \le -\langle \bar y, u^0 \rangle - \langle b, z^0 \rangle,$

or, equivalently,

$-\langle b, \bar z \rangle \le -\langle b, z^0 \rangle.$

Notice that, since (u^{0T}, z^{0T}) is an optimal solution to the dual linear program to (3.19), z^0
is a feasible solution to (3.21). By the choice of z̄, the preceding two statements imply
that z^0 is an optimal solution to (3.21). Since x̄ is an efficient solution for (3.16), with
w = w^0, x̄ is a feasible solution for (3.17). From (3.20) and the duality theory of linear
programming (Murty 1983), since z^0 is an optimal solution to (3.21), this implies that,
with w = w^0, x̄ is an optimal solution to (3.17), and the proof is complete.
Notice in Theorem 3.3.3 that, for any t > 0, X_{tw^0} = X_{w^0}. This implies that, in
Theorem 3.3.3, when w^0 ∉ W, there exists a t̄ ∈ (0, 1) such that t̄w^0 ∈ W and
X_{t̄w^0} = X_{w^0}. Thus, in Theorem 3.3.3, when w^0 ∉ W, X_{w^0} has an alternate representation
X_{t̄w^0} for which t̄w^0 ∈ W. For simplicity, we may and will assume without loss of
generality that in Theorem 3.3.3, w^0 ∈ W.
To generate various points y ∈ Y^≥ for use in problem (3.19), the heuristic
algorithm will rely upon the two concepts defined in the next two definitions (see, e.g.,
Zeleny 1982).
Definition 3.3.1. The point y^I ∈ R^p is called the ideal point of Y when, for each
j = 1, 2, ..., p, y^I_j equals the minimum value of y_j over Y.
Definition 3.3.2. The point y^{AI} ∈ R^p is called the anti-ideal point of Y when, for each
j = 1, 2, ..., p, y^{AI}_j equals the maximum value of y_j over Y.
Notice that y^I and y^{AI} generally do not belong to Y. The algorithm uses these
two points as anchor points in an initialization procedure whose goal is, in part, to
generate a dispersed sample of points from Y^≥.
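Computing the two anchor points requires 2p linear programs, one minimization and one
maximization of each component y_j = ⟨c^j, x⟩ over X; a minimal sketch (data names
assumed as before):

import numpy as np
from scipy.optimize import linprog

def ideal_and_anti_ideal(C, A, b):
    """Return (y_I, y_AI): componentwise minimum and maximum of y = Cx over
    X = {x | A x <= b}, i.e. the ideal and anti-ideal points of Y."""
    p = C.shape[0]
    y_I, y_AI = np.empty(p), np.empty(p)
    for j in range(p):
        y_I[j] = linprog(C[j], A_ub=A, b_ub=b,
                         bounds=(None, None), method="highs").fun
        y_AI[j] = -linprog(-C[j], A_ub=A, b_ub=b,
                           bounds=(None, None), method="highs").fun
    return y_I, y_AI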
The heuristic algorithm may be stated as follows.
Algorithm 3.3.1. Efficient Point Search Heuristic Algorithm
Initialization Phase. See Steps 1 through 5 below.
Step 1. Find the ideal and anti-ideal points y^I and y^{AI} of Y.
Step 2. Find an optimal solution (x°, α*) ∈ R^{n+1} to the linear program

    max α,
    s.t. y^{AI} + α(y^I − y^{AI}) ≥ Cx,
         Ax ≤ b,
         α ≥ 0,

and set y* = y^{AI} + α*(y^I − y^{AI}).
Step 3. Choose a positive integer S and, for each i = 0, 1, 2, ..., S, let

    y^i = y^{AI} + (i/S)(y* − y^{AI}).

Step 4. Choose a positive integer N such that 1 ≤ N ≤ M − p + 1, let w^0 = e ∈ R^p and,
for each j = 1, 2, ..., p, define w^j ∈ R^p by

    w^j_i = 1, if i ≠ j,
    w^j_i = N, if i = j.

Step 5. Set UB = +∞, i = 0 and j = 0.
Efficient Point Search Phase. See Steps 1 through 6 below.
Step 1. Set y = y^i and w = w^j, and find any optimal solution x^{ij} to linear program
(3.19).
Step 2. Set y = Cx^{ij} and w = w^j in (3.19), and compute any optimal solution
(u^{ijT}, z^{ijT}) to the dual linear program to (3.19), where u^{ij} denotes the optimal dual variables
corresponding to the constraints Cx ≤ y of (3.19).
Step 3. Let w^{ij} = u^{ij} + w^j. If w^{ij} is a positive multiple of w^{i'j'} for some i' ≤ i and j' ≤ j
such that (i', j') ≠ (i, j), then go to Step 6. Otherwise, continue.
Step 4. Let v^{ij} = ⟨(w^{ij})^T C, x^{ij}⟩. For each h = 1, 2, ..., n, calculate a_h according to the
formula

    a_h = Σ_{k=1}^p c^k_h ∏_{l≠k} ⟨c^l, x^{ij}⟩,    (3.22)

and find any basic optimal solution x̄^{ij} to the linear program

    min ⟨a, x⟩,    (3.23a)
    s.t. ⟨(w^{ij})^T C, x⟩ = v^{ij},    (3.23b)
         Ax ≤ b.    (3.23c)

Step 5. If ∏_{k=1}^p ⟨c^k, x̄^{ij}⟩ ≥ UB, go to Step 6. Otherwise, set x̂ = x̄^{ij} and
UB = ∏_{k=1}^p ⟨c^k, x̄^{ij}⟩, and go to Step 6.
Step 6. Set j = j + 1. If j ≤ p, go to Step 1. Otherwise, set i = i + 1 and j = 0. If i ≤ S,
go to Step 1. Otherwise, Stop: x̂ ∈ X_{E,ex} is the recommended solution to the linear
multiplicative programming problem.
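To make Steps 1 and 2 of the search phase concrete, the sketch below solves problem
(3.19) for a given pair (y, w) and recovers the dual variables u attached to the rows
Cx ≤ y; it uses scipy's HiGHS interface, whose ineqlin.marginals field holds the duals of
the inequality rows with a nonpositive sign convention for a minimization, so negating
them gives u ≥ 0. The data names and this recovery route are assumptions of the sketch,
not of the dissertation; Step 2 corresponds to calling the function again with y replaced
by Cx^{ij}.

import numpy as np
from scipy.optimize import linprog

def search_step(C, A, b, y, w):
    """Solve (3.19) with the given (y, w), then form the weighting vector
    w_ij = u + w of Theorem 3.3.3 from the duals of the rows Cx <= y."""
    p = C.shape[0]
    A_ub = np.vstack([C, A])                 # first p rows are Cx <= y
    b_ub = np.concatenate([y, b])
    res = linprog(w @ C, A_ub=A_ub, b_ub=b_ub,
                  bounds=(None, None), method="highs")
    u = -res.ineqlin.marginals[:p]           # duals of Cx <= y, made nonnegative
    return res.x, u + w                      # efficient point x_ij and weight w_ij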
In the initialization phase of the algorithm, samples of points from Y^≥ and from
W are generated. To generate the sample of points from Y^≥, Step 2 of this phase
determines the point y* between y^{AI} and y^I such that, of all line segments with
endpoints y^{AI} and y that lie in Y^≥ and for which y lies on the line segment connecting
y^{AI} and y^I, the line segment L connecting y^{AI} and y* has maximum norm. The sample
{y^i | i = 0, 1, ..., S} of points from Y^≥ is then generated in Step 3 of this phase by
partitioning L into S line segments of equal length, where S is a positive integer chosen
by the user. In Step 4, a sample of p + 1 all-integer vectors from W is generated, where,
for p of these vectors, the value N of one of the components is chosen by the user from
the set {1, 2, ..., M − p + 1}.
Each iteration of the efficient point search phase of the heuristic executes two key
operations. First, it identifies an efficient face X_{w^{ij}} of X. Second, unless this face has
been previously identified during an earlier execution of this phase, with w = w^{ij} in
problem (3.18), by using a first-order linear approximation to the objective function of
this problem, it finds an extreme point x̄^{ij} of X in this efficient face that is an
approximate optimal solution to (3.18).
Steps 1 through 3 of the efficient point search phase of the algorithm identify an
efficient face of X. In Step 1, with y = y^i ∈ Y^≥ and w = w^j ∈ W, the linear program
(3.19) is solved for any optimal solution x^{ij}. By Theorem 3.3.2, this optimal solution
must exist and is an efficient solution for (3.16). In Steps 2 and 3, with y = Cx^{ij} and
w = w^j in (3.19), the dual linear program to (3.19) is solved to yield the vector u^{ij} ∈ R^p,
and the weighting vector w^{ij} = u^{ij} + w^j is computed. From Theorem 3.3.3, the face X_{w^{ij}}
corresponding to this weighting vector is an efficient face for (3.16) and contains x^{ij}.
Furthermore, from the same theorem, this face can be written as

$X_{w^{ij}} = \{\, x \in X \mid \langle (w^{ij})^T C, x \rangle = v^{ij} \,\}, \qquad (3.24)$

where v^{ij} = ⟨(w^{ij})^T C, x^{ij}⟩. Step 3 checks whether or not X_{w^{ij}} has been identified during a
previous execution of this phase of the algorithm. If so, the algorithm proceeds to Step 6
to prepare for another possible iteration of the efficient point search phase of the heuristic.
Otherwise, control shifts to Steps 4-5.
In Steps 4-5 of the efficient point search phase, problem (3.18) is approximately
solved using a new efficient face X_{w^{ij}} as the feasible region. In particular, in Step 4, (3.22)
is first used to construct the nonconstant portion ⟨a, x⟩ of a first-order Taylor series linear
approximation of the objective function of problem (3.18) at x = x^{ij} ∈ X_{w^{ij}}. Next,
using the representation (3.24) of the efficient face X_{w^{ij}}, an extreme point minimizer x̄^{ij}
of ⟨a, x⟩ over X_{w^{ij}} is found by solving the linear program (3.23). Notice that x̄^{ij} ∈ X_{E,ex}
(see Rockafellar 1970). In Step 5, the value achieved by x̄^{ij} in the objective function of
the linear multiplicative problem is compared to the smallest value UB found thus far for
this objective function by the search. If x̄^{ij} achieves a smaller objective function value
than UB, x̄^{ij} becomes the new incumbent solution x̂ and UB is reduced in value
accordingly.
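The coefficient vector a of (3.22) is simply the gradient of the product objective at x^{ij},
since ∂/∂x_h ∏_k ⟨c^k, x⟩ = Σ_k c^k_h ∏_{l≠k} ⟨c^l, x⟩; when all ⟨c^k, x⟩ are positive on X,
it can be computed compactly, as the following sketch shows:

import numpy as np

def linearization_coefficients(C, x):
    """Gradient of g(x) = prod_k <c^k, x> at x, i.e. the vector a of (3.22):
    a_h = sum_k c^k_h * prod_{l != k} <c^l, x>."""
    vals = C @ x                 # vals[k] = <c^k, x>, assumed positive on X
    total = np.prod(vals)
    return C.T @ (total / vals)  # equals sum_k (prod_{l != k} vals[l]) * c^k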
Notice that the performance of the heuristic algorithm depends in part upon the
number, locations, and dimensions of the efficient faces (3.24) that are searched via
problem (3.23). This, in turn, is partially dependent upon the sizes of the parameters S
and N chosen by the user. The goal is to search as many points of X_{E,ex} as possible by
generating a variety of distinct efficient faces (3.24) of large dimensions that are
dispersed widely throughout XE. Notice that, since each efficient face identified by the
heuristic is given in the form (3.24) and searched by solving linear program (3.23), the
individual points in X_{E,ex} that are searched by the algorithm are searched implicitly rather
than explicitly, i.e., they do not need to be explicitly enumerated.
3.4. Computational Results
The heuristic algorithm described in Section 3.3 has the following attractive
characteristics:
(a) it can be implemented using only linear programming methods;
(b) it generally implicitly searches many efficient extreme points of (3.16) at once
by optimizing over entire efficient faces of (3.16), rather than by explicitly examining
individual efficient extreme points of (3.16);
(c) it allows the user to manipulate the nature and extent of the efficient face
search through the choices for the input parameters S and N;
(d) it finds efficient faces of (3.16) by attempting to globally sample from a
variety of regions of the efficient set.
To evaluate the effectiveness in practice of the heuristic algorithm and its features,
we have written a VS FORTRAN computer code for the algorithm and used it to solve
260 linear multiplicative programming problems of various sizes. To execute the code on
these 260 problems, we used an IBM ES/9000 model 831 mainframe computer. As a
further illustration of the effectiveness in practice of the heuristic algorithm, we solved a
multiplicative programming problem in forest management, derived from a multiple-objective
linear programming model of a real decision situation using real data.
To implement Step 3 of the initialization phase of the algorithm, we chose to set
S = 4, so that a sample of five points lying between y^{AI} and y^I in Y^≥ is always
generated in this step. We used a value of N = 9 in Step 4 of the initialization phase to
help generate the sample of p +1 points from W.
To solve the linear programming problems called for by the heuristic, the
computer code uses the simplex method procedures given in the subroutines of the
Optimization Subroutine Library (International Business Machines 1990). These
subroutines employ anticycling rules to handle degeneracy as needed. Therefore, they are
especially appropriate for solving instances of problem (3.23), since these problems
always contain degenerate extreme points.
Let int R^n_+ = {x ∈ R^n | x > 0}, and suppose that k is a positive integer. To generate the
260 test problems, we used the
following random procedure. First, for each j = 1, 2, ..., p, we generated the elements of
the vector c^j ∈ R^n by randomly drawing elements from the set {1, 2, ..., 10}. Next, we
generated a nonempty, compact polyhedral feasible region X ⊂ int R^n_+. This region can
be written as

$X = \{\, x \in R^n \mid Px \le q,\ 1 \le x_j \le \bar q,\ j = 1, 2, \ldots, n \,\},$

where P is a k × n matrix, q ∈ R^k, and q̄ ∈ R. To accomplish this, first the elements of P
were generated by randomly choosing elements from the set {1, 2, ..., 10}. Next, for each
i = 1, 2, ..., k, the formula

$q_i = \sum_{h=1}^{n} P_{ih}$

was used to calculate q_i, and, finally, q̄ was chosen according to the rule

$\bar q = \max \{\, q_i \mid i = 1, 2, \ldots, k \,\}.$
Each test problem was constructed to belong to one of four categories, where a
category is defined by the number p of linear functions used in the objective function
∏_{j=1}^p ⟨c^j, x⟩ of the test problem. The values p = 2, 3, 4, 5 were chosen to define these
categories. We chose these categories in this way because empirical evidence seems to
indicate that the complexity of these problems is more sensitive to the magnitude of p
than to the magnitudes of k or n (Kuno, Yajima and Konno 1993). Within each
category, the test problems were classified into subcategories of 10 problems, each
defined by the values of the ordered pair (k,n).
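A sketch of this generation procedure follows; the choice q_i = Σ_h P_ih mirrors the formula
as reconstructed above and, like the function name, should be treated as an assumption of
the sketch rather than as the exact code used for the experiments.

import numpy as np

def generate_test_problem(p, k, n, seed=0):
    """Random linear multiplicative test problem in the style described above."""
    rng = np.random.default_rng(seed)
    C = rng.integers(1, 11, size=(p, n)).astype(float)  # objective vectors c^j
    P = rng.integers(1, 11, size=(k, n)).astype(float)  # constraint matrix
    q = P.sum(axis=1)                                   # q_i from the ith row of P
    q_bar = q.max()                                     # common variable upper bound
    # Feasible region: X = {x | P x <= q, 1 <= x_j <= q_bar, j = 1, ..., n}
    return C, P, q, q_bar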
To help evaluate the attractiveness of the solutions found by the heuristic
algorithm, we found a global optimal solution for each test problem by completely
enumerating all of the efficient extreme points of the associated multiple-objective linear
program (3.16). To accomplish this, we used the ADBASE computer code developed by
Steuer (1983).
Some statistics summarizing the results of these computations are presented in
Tables 3.1-3.4. In each table, each row gives average statistics for a subcategory (k,n) of
10 problems, a measure of the worst case performance of the heuristic, and the number of
problems in a subcategory for which a global optimal solution was found. The first statistic is
the average number of efficient extreme points found by ADBASE in solving the
problems by complete enumeration. In some sense, the magnitudes of these numbers
correspond to the average relative difficulties, by subcategory, of each group of 10 linear
multiplicative programs in a subcategory. The second statistic is the average efficiency
rating r given by

$r = 1 - \big[ (z_H - z_{\min}) / (z_{\max} - z_{\min}) \big],$

where z_H is the objective function value returned by the heuristic, and where z_min and
z_max are the global minimum and maximum values of the objective function of the test
problem over the corresponding set of efficient extreme points of (3.16). Thus, 0 ≤ r ≤ 1,
and the closer r is to 1.0, the more attractive the value z_H returned by the heuristic is
relative to the actual global minimum value z_min. The third statistic given for each
subcategory in these tables is the average CPU time (seconds) that the heuristic needed to
solve a problem in the subcategory. The fourth statistic shows the lowest efficiency rating
calculated for a problem in the subcategory. It gives a measure of the worst case
performance of the heuristic algorithm when applied to the 10 problems in a subcategory.
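Given the enumerated efficient extreme point values, the rating is a one-line
computation; in this small sketch, z_eff holds the objective values at the enumerated
efficient extreme points of (3.16) and z_H the heuristic's value (both names are ours):

def efficiency_rating(z_H, z_eff):
    """r = 1 - (z_H - z_min)/(z_max - z_min); r = 1.0 means the heuristic
    found a global optimal solution."""
    z_min, z_max = min(z_eff), max(z_eff)
    return 1.0 - (z_H - z_min) / (z_max - z_min)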
Table 3.1. Computational Results: p = 2.

  k    n    Avg. No.      Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
            Eff. Points   Rating r    Time (sec.)     Rating r      Solutions
  25   20      28.8         1.000        0.227           1.000          10
  25   30      28.8         1.000        0.241           1.000          10
  30   40      47.9         1.000        0.389           1.000          10
  40   30      28.2         1.000        0.328           1.000          10
  40   50      47.0         0.999        0.504           0.996           8
  50   40      35.1         0.999        0.453           0.999           9
  50   60      29.2         1.000        0.556           1.000          10
  60   70      62.3         1.000        1.070           1.000          10
The fifth statistic is the number of problems in a subcategory for which the heuristic
algorithm found a global optimal solution.
These four tables show that the solutions returned by the heuristic algorithm give,
on the average, quite accurate estimates of the actual global minimum values for the 260
linear multiplicative test problems generated. This is indicated by the fact that average
efficiency ratings by subcategory always were at least 0.920, and in approximately 96%
of the subcategories exceeded 0.950. It is noteworthy that, for these problems, these
ratings r by subcategory do not seem to decline significantly as p, k, and n increase in
Table 3.2. Computational Results: p = 3.

  k    n    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
            Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  25   20       330.6         0.985        0.321           0.951           4
  25   30       896.8         0.960        0.469           0.708           5
  30   40       873.3         0.987        0.543           0.884           7
  40   30       949.3         0.993        0.609           0.968           6
  40   50      2073.7         0.920        0.967           0.806           4
  50   40      1484.9         0.993        0.908           0.961           7
  50   60      2846.3         0.995        1.298           0.978           6
  60   70      5867.5         0.969        2.495           0.799           2
Table 3.3. Computational Results: p = 4.

  k    n    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
            Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  25   20      2789.5         0.998        0.426           0.993           4
  25   30      7245.9         0.992        0.598           0.945           5
  30   40     23656           0.986        1.019           0.947           1
  40   30     19034           0.978        0.998           0.923           2
  40   50     50889           0.969        1.539           0.918           0
  50   40     59443           0.969        1.587           0.843           2
  50   50     83780           0.981        1.901           0.890           3
value. In addition, with the exception of one subcategory, a global optimal solution was
found for at least one problem in each subcategory.
The average solution times by subcategories shown in the four tables indicate that,
for these test problems, the computational effort required by the heuristic was rather
small. In fact, these average times were always less than 2.50 seconds. In comparison to
exact algorithms that have been used in test situations to globally solve linear
multiplicative problems, these times are generally at least as small, and often much smaller
(see, e.g., Kuno, Yajima, Konno 1993 and Ryoo and Sahinidis 1996). Furthermore, in
contrast to solution times for exact algorithms, these average solution times seem much
less sensitive to increases in p, n, k or to increases in the average number of efficient
Table 3.4. Computational Results: p = 5.

  k    n    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
            Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  10   20      1331.4         0.993        0.353           0.941           5
  20   10       527.1         0.998        0.294           0.993           2
  25   30     57115           0.995        0.962           0.992           2
extreme points that exist in the corresponding problems (3.16); see Kuno, Yajima, Konno
(1993) and Ryoo and Sahinidis (1996).
Finally, it is worth noting that we were able to apply the heuristic to much larger
problems than those reported in Tables 3.1-3.4. However, the number of efficient extreme
points in the associated multiple-objective linear programming problems (3.16) for these
cases always exceeded 200,000. Since the ADBASE code cannot be used to find all of the
efficient extreme points for such problems, we were unable to completely enumerate the
sets of efficient extreme points to find z_min and r values for these problems. Thus, we are
not yet able to draw conclusions concerning the accuracy of the heuristic for any
problems larger than those reported in Tables 3.1-3.4.
To further illustrate the effectiveness in practice of the heuristic algorithm, we
solved a real application problem in forest management that was studied in Steuer and
Schuler (1978) as a multiple-objective linear programming problem. The problem
involves the allocation of land and budget monies in a way that seeks to maximize
objectives in timber production, hunting and cattle grazing in the Swan Creek subunit of
the Mark Twain National Forest. Steuer and Schuler (1978) provide actual data used to
formulate their multiple-objective linear programming problem. The problem contains 31
decision variables, 5 linear objective functions, and 13 constraints. Our multiplicative
programming problem was formed from this problem by multiplying the 5 linear
objective functions together to form a single objective function. The heuristic was then
used to search for an approximate solution that maximizes this single objective function
subject to the constraints of the forest management multiple-objective linear
programming problem.
To help evaluate the attractiveness of the solution found by the heuristic
algorithm, we found a global optimal solution by enumerating the 83 efficient extreme
points of the associated forest management multiple-objective linear program using the
ADBASE computer code. An efficiency rating of r = 0.999 was calculated using the
slightly modified equation

$r = 1 - \big[ (z_{\max} - z_H) / (z_{\max} - z_{\min}) \big],$

since this multiplicative programming problem is a maximization problem rather than a
minimization problem. This efficiency rating indicates that the heuristic algorithm
returned an attractive value z_H relative to the actual global maximum value z_max.
3.5. Discussion
The results of this chapter imply that there are at least two ways to rewrite a
concave multiplicative programming problem as a concave minimization problem. It
follows that concave minimization theory and methods can be used in these ways to
analyze and solve concave multiplicative programs. The results also imply that a concave
multiplicative programming problem can be analyzed and solved directly, without any
reformulation, as a quasiconcave minimization problem over a convex set. Furthermore,
the analysis in the chapter implies that any concave multiplicative programming problem
(Px) with a compact feasible region has at least one optimal solution that is an efficient
extreme point solution of the associated multiple-objective mathematical programming
problem (3.15). Therefore, the opportunity exists for devising solution methods for such
problems (Px) that search among the efficient extreme points of the associated multiple-
objective problems (3.15). The chapter proposes a heuristic algorithm that takes this
approach for solving linear multiplicative programs. From the computational results
presented for this heuristic algorithm, we conclude that its features and performance offer
significant potential for conveniently finding, with relatively little computational effort,
very attractive solutions to the various applications of linear multiplicative programming
encountered in practice. Thus, the theoretical and algorithmic results presented in this
chapter offer some potential new avenues for more effectively analyzing and solving
multiplicative programming problems of various types.

CHAPTER 4
A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME-
SPACE
4.1. Introduction
Recall from Chapter 1 that the multiplicative programming problem is given by
$(\mathrm{P}_x) \quad v_x = \min\ \prod_{j=1}^{p} f_j(x), \ \text{s.t.}\ x \in X,$

where p ≥ 2 is an integer, X is a nonempty set in R^n, and, for each j = 1, 2, ..., p,
f_j : X → R satisfies f_j(x) > 0 for all x ∈ X. For simplicity, we assume that the
minimum v_x in problem (Px) is achieved.
For any x ∈ R^n, let f(x) denote the p-vector with jth entry equal to f_j(x),
j = 1, 2, ..., p. Let y ∈ R^p denote a p-vector with jth entry y_j, j = 1, 2, ..., p.
For each j = 1, 2, ..., p, let ȳ_j satisfy

$\bar y_j \ge \sup f_j(x), \ \text{s.t.}\ x \in X,$

where ȳ_j = +∞ is possible, and let ȳ denote the vector with jth entry equal to ȳ_j,
j = 1, 2, ..., p. Although various outcome-space reformulations of problem (Px) have
been proposed for solution purposes, one of the most common reformulations is given by
the problem

$(\mathrm{P}_{y\le}) \quad v_y = \min\ g(y), \ \text{s.t.}\ y \in Y^{\le},$
72
where
Y- = {ye P'|/(x) ygy for some .re X j, (4.1)
and where, for each ye Y-, g:Yi*R is defined by
g(y)=f{yj- (4.2)
7=1
For example, problem (Pyi) is essentially the reformulation of problem (Px) used in the
algorithms of Benson (1998c), Falk and Palocsay (1994) and Thoai (1991). Notice that
since X is nonempty, Ys is a nonempty set. By constructing appropriate global solution
algorithms for problem (Pyi), this problem provides us with the opportunity to solve
problem (Px) by working in the outcome-space R'" of the problem, rather than in the
decision space R", which is generally much larger than Rp. In order to globally solve
problem (Py,), it is important to understand the properties of the set Y~ defined by (4.1),
of the function g defined by (4.2), and of problem (Pyi) itself.
This chapter undertakes a mathematical analysis of the outcome-space
reformulation (Py≤) of problem (Px). The analysis is organized according to whether or
not the outcome-space problem satisfies conditions for the general case, the convex case,
or the polyhedral case. For the general case, we show, for instance, that globally solving
either problem (Px) or problem (Py≤) essentially also globally solves the other problem,
and that, for any feasible point ȳ for problem (Py≤), either g(y) < g(ȳ) for some y ∈ Y^≤
or ȳ satisfies a condition that is necessary, but not sufficient, for it to be a local optimal
solution for problem (Py≤). For the convex and polyhedral cases, we show stronger
results. For example, we show for the convex case that any global optimal solution for
problem (Py≤) must lie on the boundary of Y^≤, that the objective function g in problem
(Py≤) is strictly pseudoconcave on Y^≤, and, when Y^≤ is closed and contains at least one
extreme point, that problem (Py≤) has an extreme point global optimal solution.
The analysis of the general case of problem (Py≤) is given in Section 4.2. Section
4.3 provides analytical results for both the convex and polyhedral cases of problem (Py≤).
4.2. Results for the General Case of Problem (Py≤)
Notice under the assumptions made in Section 4.1 for problem (Px) that Y^≤ is a
nonempty subset of int R^p_+ = {z ∈ R^p | z > 0}. When Y^≤ satisfies this condition, we obtain
what we will call the general case of problem (Py≤).
It is important to establish that by solving the general case outcome-space
formulation (Py≤) of problem (Px), a global optimal solution for problem (Px) can be
recovered. The following result, by showing that problems (Px) and (Py≤) are equivalent
in a certain sense, immediately establishes this fact.
Theorem 4.2.1. (a) If x* is a global optimal solution for problem (Px), then y* = f(x*)
is a global optimal solution for problem (Py≤). Furthermore, v_y = v_x.
(b) Problem (Py≤) has at least one global optimal solution. Furthermore,
if y* is a global optimal solution for problem (Py≤), then any x* ∈ X such that
f(x*) ≤ y* is a global optimal solution for problem (Px).
Proof. (a) Let x* be a global optimal solution for problem (Px), and set
y* = f(x*). From (4.1) and (4.2), this implies that y* ∈ Y^≤ and that

$g(y^*) = \prod_{j=1}^{p} f_j(x^*) = v_x.$

Therefore, v_y ≤ v_x. If g(y) < v_x were to hold for some y ∈ Y^≤, then, from (4.1) and (4.2),
there would exist an x ∈ X such that

$\prod_{j=1}^{p} f_j(x) \le \prod_{j=1}^{p} y_j = g(y) < v_x,$

which contradicts the definition of v_x. Therefore, g(y) ≥ v_x for all y ∈ Y^≤. This implies
that v_y ≥ v_x. Since v_y ≤ v_x = g(y*), and y* ∈ Y^≤, it follows that v_y = v_x = g(y*), and y* is
a global optimal solution for problem (Py≤).
(b) By assumption, we may choose a global optimal solution for problem
(Px). From part (a), this implies that problem (Py≤) has at least one global optimal
solution. Suppose that y* is a global optimal solution for problem (Py≤). Since y* ∈ Y^≤,
(4.1) implies that we may choose an arbitrary x* ∈ X such that f(x*) ≤ y*. Then, from
(4.2), since 0 < f_j(x*) ≤ y*_j for each j = 1, 2, ..., p,

$\prod_{j=1}^{p} f_j(x^*) \le \prod_{j=1}^{p} y^*_j = g(y^*) = v_y.$

Since x* ∈ X and y* is a global optimal solution for problem (Py≤), this implies that

$v_x \le \prod_{j=1}^{p} f_j(x^*) \le v_y. \qquad (4.3)$

From part (a), v_y = v_x. By (4.3), this implies that ∏_{j=1}^p f_j(x*) = v_x. Since x* ∈ X, it follows
that x* is a global optimal solution for problem (Px).
Suppose in the general case of problem (Py≤) that a point ȳ ∈ Y^≤ has been
generated. For algorithmic purposes, it may be valuable to have a tool for finding an
alternate point y ∈ Y^≤ that satisfies g(y) < g(ȳ). The next result
gives an idea for potentially helping to create such a tool. To prove this result, we need
the following lemma. This lemma will also be useful in proving several other results later
in this chapter.
Lemma 4.2.1. Assume that ȳ ∈ Y^≤. Then, for any y ∈ Y^≤,

$(1/p)\langle \nabla g(\bar y), y \rangle = g(\bar y) \sum_{j=1}^{p} (1/p)(y_j/\bar y_j),$

and

$g(\bar y) \sum_{j=1}^{p} (1/p)(y_j/\bar y_j) \ge g(\bar y)\big[ g(y)/g(\bar y) \big]^{1/p},$

with equality holding in the latter relationship iff, for some constant M > 0, y_j = M ȳ_j,
j = 1, 2, ..., p.
Proof. Choose an arbitrary point y ∈ Y^≤, and suppose that ȳ ∈ Y^≤. Then, by (4.2),
since Y^≤ ⊆ int R^p_+, g(ȳ) > 0. By definition of g,

$(1/p)\langle \nabla g(\bar y), y \rangle = (1/p) \sum_{j=1}^{p} \Big[ \prod_{i \ne j} \bar y_i \Big] y_j = (1/p) \sum_{j=1}^{p} \big[ g(\bar y)/\bar y_j \big] y_j = g(\bar y) \sum_{j=1}^{p} (1/p)(y_j/\bar y_j). \qquad (4.4)$

Since (1/p) > 0, (y_j/ȳ_j) > 0 for each j = 1, 2, ..., p, and p(1/p) = 1, the arithmetic-
geometric mean inequality (Duffin, Peterson, and Zener 1967) implies that

$\sum_{j=1}^{p} (1/p)(y_j/\bar y_j) \ge \big[ g(y)/g(\bar y) \big]^{1/p},$

with equality holding iff, for some constant M > 0, y_j = M ȳ_j for each j = 1, 2, ..., p.
Together with (4.4), since g(ȳ) > 0, this implies the desired results.
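A quick numerical check of the two relationships in Lemma 4.2.1 (our own sketch, not
part of the text): for random positive ȳ and y, the weighted sum g(ȳ)(1/p)Σ_j(y_j/ȳ_j)
dominates g(ȳ)[g(y)/g(ȳ)]^{1/p}, with equality when y is a positive multiple of ȳ.

import numpy as np

rng = np.random.default_rng(1)
p = 4
g = lambda v: float(np.prod(v))        # g(y) = prod_j y_j

for _ in range(1000):
    y_bar = rng.uniform(0.5, 5.0, p)
    y = rng.uniform(0.5, 5.0, p)
    lhs = g(y_bar) * np.mean(y / y_bar)                  # g(ybar)*(1/p)*sum(y_j/ybar_j)
    rhs = g(y_bar) * (g(y) / g(y_bar)) ** (1.0 / p)      # AM-GM lower bound
    assert lhs >= rhs - 1e-9

y = 2.5 * y_bar                                          # y = M*ybar with M > 0
assert abs(g(y_bar) * np.mean(y / y_bar)
           - g(y_bar) * (g(y) / g(y_bar)) ** (1.0 / p)) < 1e-9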
Theorem 4.2.2. Assume that $\bar y \in Y^{\geq}$. If
$$1.0 > \inf_{y \in Y^{\geq}} (1/p)\sum_{j=1}^{p} (y_j/\bar y_j), \quad (4.5)$$
then $g(y) < g(\bar y)$ for some $y \in Y^{\geq}$. In particular, if $\hat y$ achieves the infimum in (4.5), then $g(\hat y) < g(\bar y)$.

Proof. Suppose that $\bar y \in Y^{\geq}$. If (4.5) holds, then for some $y \in Y^{\geq}$,
$$1.0 > (1/p)\sum_{j=1}^{p} (y_j/\bar y_j). \quad (4.6)$$
Since $g(\bar y) > 0$, this implies that
$$g(\bar y) > g(\bar y)(1/p)\sum_{j=1}^{p} (y_j/\bar y_j). \quad (4.7)$$
From Lemma 4.2.1, since $\bar y \in Y^{\geq}$, we know that
$$g(\bar y)(1/p)\sum_{j=1}^{p} (y_j/\bar y_j) \geq g(\bar y)\left[g(y)/g(\bar y)\right]^{1/p}. \quad (4.8)$$
Since $g(\bar y) > 0$, together (4.7) and (4.8) imply that
$$1.0 > \left[g(y)/g(\bar y)\right]^{1/p}.$$
Because $g(\bar y) > 0$, this implies that $g(y) < g(\bar y)$. Therefore, $g(y) < g(\bar y)$ for some $y \in Y^{\geq}$. Since, for any point $\hat y$ that achieves the infimum in (4.5), (4.6) is also satisfied, the argument above also implies that if $\hat y$ achieves the infimum in (4.5), then $g(\hat y) < g(\bar y)$.
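The improvement test in Theorem 4.2.2 is easy to exercise numerically. In the sketch below (a toy setup, not the algorithmic tool discussed later; uniformly sampled points merely stand in for members of $Y^{\geq}$), every sampled $y$ whose averaged ratio in (4.5) falls below 1.0 indeed satisfies $g(y) < g(\bar y)$.

```python
# Monte Carlo illustration of Theorem 4.2.2; the sampling box is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
g = lambda y: np.prod(y)
y_bar = np.array([2.0, 3.0])

for _ in range(1000):
    y = rng.uniform(0.5, 4.0, size=2)   # stand-in for a point of Y>=
    ratio = np.mean(y / y_bar)          # (1/p) sum_j y_j / ybar_j, as in (4.5)
    if ratio < 1.0:
        # Guaranteed by Theorem 4.2.2 (via the AM-GM bound of Lemma 4.2.1).
        assert g(y) < g(y_bar)
```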
Notice that when $\bar y \in Y^{\geq}$, the infimum in (4.5) is either less than 1.0 or equal to 1.0. From Theorem 4.2.2, when this infimum is less than 1.0, a point $y$ in $Y^{\geq}$ such that $g(y) < g(\bar y)$ exists. In particular, in this case $\bar y$ is not a global optimal solution for problem $(P_{Y^{\geq}})$. The next result covers the case when the infimum in (4.5) equals 1.0.
Theorem 4.2.3. Assume that $\bar y \in Y^{\geq}$. If
$$1.0 = \inf_{y \in Y^{\geq}} (1/p)\sum_{j=1}^{p} (y_j/\bar y_j), \quad (4.9)$$
then $\bar y$ is an optimal solution to
$$v_d = \min_{y \in Y^{\geq}} \langle \nabla g(\bar y), y - \bar y \rangle, \quad (4.10)$$
and $v_d = 0$.

Proof. From (4.9), since $\bar y \in Y^{\geq}$, the infimum in (4.9) is achieved at $y = \bar y$. By Lemma 4.2.1, since $g(\bar y)$ is a positive constant, this implies that $\bar y$ also minimizes $(1/p)\langle \nabla g(\bar y), y \rangle$ over $Y^{\geq}$. Since $(1/p)$ is a positive constant and $-\langle \nabla g(\bar y), \bar y \rangle$ is a constant, it is easy to see that this implies that $\bar y$ is an optimal solution to (4.10) and $v_d = 0$.

A point $\bar y \in Y^{\geq}$ is a local optimal solution for problem $(P_{Y^{\geq}})$ when there exists an $\varepsilon > 0$ such that for each $y \in Y^{\geq}$ for which $\|y - \bar y\| \leq \varepsilon$, $g(y) \geq g(\bar y)$. From Theorem 4.2.3, when $\bar y \in Y^{\geq}$ and (4.9) holds, then, for any $y \in Y^{\geq}$, if there is a $\delta > 0$ such that $d := (y - \bar y)$ satisfies $\bar y + \lambda d \in Y^{\geq}$ for all $\lambda$ such that $0 < \lambda \leq \delta$, the directional derivative of $g$ at $\bar y$ in the direction $d$ will be nonnegative, i.e., $\langle \nabla g(\bar y), d \rangle \geq 0$. From Bazaraa, Sherali and Shetty (1993), this is a necessary, but not sufficient, condition for $\bar y$ to be a local (or global) optimal solution for problem $(P_{Y^{\geq}})$.
4.3. Results for Convex and Polyhedral Cases of Problem $(P_{Y^{\geq}})$

When $Y^{\geq}$, in addition to being a nonempty subset of $R^p_{>}$, is a convex set, then we obtain what we will call the convex case of problem $(P_{Y^{\geq}})$. Similarly, when $Y^{\geq}$, in addition to being a nonempty subset of $R^p_{>}$, is a polyhedron, then we obtain what we will call the polyhedral case of problem $(P_{Y^{\geq}})$. Each of these types of outcome-space versions of problem $(P_X)$ arises from a broad class of decision space problems, as shown by the next result.

Theorem 4.3.1. When $X$ is a convex set and, for each $j = 1, 2, \ldots, p$, $f_j$ is a convex function on $X$, we obtain the convex case of problem $(P_{Y^{\geq}})$. When $X$ is a polyhedron and, for each $j = 1, 2, \ldots, p$, $f_j$ is linear on $R^n$, we obtain the polyhedral case of problem $(P_{Y^{\geq}})$.
Proof. Assume, in addition to the assumptions made in Section 4.1 on $X$ and on $f_j$, $j = 1, 2, \ldots, p$, that $X$ is a convex set and that, for each $j = 1, 2, \ldots, p$, $f_j$ is a convex function on $X$. We will show that $Y^{\geq}$ is a convex set. Choose any $y^1, y^2 \in Y^{\geq}$. From (4.1), since $y^1, y^2 \in Y^{\geq}$, we may choose $x^1, x^2 \in X$ such that $f_j(x^1) \leq y_j^1$ and $f_j(x^2) \leq y_j^2$, $j = 1, 2, \ldots, p$. Suppose that $\lambda \in R$ and $0 \leq \lambda \leq 1$. Then, since $\lambda \geq 0$ and $(1-\lambda) \geq 0$, for each $j = 1, 2, \ldots, p$,
$$\lambda f_j(x^1) + (1-\lambda) f_j(x^2) \leq \lambda y_j^1 + (1-\lambda) y_j^2. \quad (4.11)$$
By the convexity of $f_j$, $j = 1, 2, \ldots, p$, on the convex set $X$, if we set $\bar x = \lambda x^1 + (1-\lambda)x^2$, then
$$\bar x \in X, \quad (4.12)$$
and, for each $j = 1, 2, \ldots, p$,
$$f_j(\bar x) \leq \lambda f_j(x^1) + (1-\lambda) f_j(x^2). \quad (4.13)$$
From (4.11)-(4.13), $f(\bar x) \leq \lambda y^1 + (1-\lambda)y^2$, where $\bar x \in X$. Since $y^1, y^2 \in Y^{\geq}$, $y^i > 0$ holds for each $i = 1, 2$. As a result, since $\lambda, (1-\lambda) \geq 0$, $\lambda y^1 + (1-\lambda)y^2 > 0$. The conditions for $\lambda y^1 + (1-\lambda)y^2$ to belong to $Y^{\geq}$ are thus satisfied. By the choices of $y^1, y^2$ and $\lambda$, this implies that $Y^{\geq}$ is a convex set.

Now suppose, in addition to the assumptions made in Section 4.1 on $X$ and on $f_j$, $j = 1, 2, \ldots, p$, that $X$ is a polyhedron and that, for each $j = 1, 2, \ldots, p$, $f_j$ is a linear function on $R^n$. We will show that $Y^{\geq}$ is a polyhedron. By definition, since $X$ is a polyhedron, there exist a finite number $q$ of linear functions $g_j$, $j = 1, 2, \ldots, q$, on $R^n$, and real numbers $b_j$, $j = 1, 2, \ldots, q$, such that
$$X = \{x \in R^n \mid g_j(x) \leq b_j,\ j = 1, 2, \ldots, q\}.$$
Let $Z \subseteq R^{n+p}$ be defined as the set of all solutions $(x, y)$ to the system of linear inequalities (4.14)-(4.16) given by
$$f_j(x) \leq y_j, \quad j = 1, 2, \ldots, p, \quad (4.14)$$
$$y_j > 0, \quad j = 1, 2, \ldots, p, \quad (4.15)$$
$$g_j(x) \leq b_j, \quad j = 1, 2, \ldots, q. \quad (4.16)$$
Then, by definition, $Z$ is a polyhedron in $R^{n+p}$. Let $\Lambda$ be the $p \times (n+p)$ matrix whose first $n$ columns each equal $0 \in R^p$ and whose last $p$ columns together form the $p \times p$ identity matrix. Then, from (4.1) and the definition of $Z$, $Y^{\geq} = \Lambda Z$. From Rockafellar (1970, Theorem 19.3), $Y^{\geq}$ is a polyhedron in $R^p$.
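The construction in this proof suggests a direct computational use: since $Y^{\geq}$ is the projection of the polyhedron $Z$ onto the $y$ variables, any linear function of $y$ can be minimized over $Y^{\geq}$ by solving a single linear program over $Z$ in the variables $(x, y)$. The sketch below illustrates this with toy data (the matrix `C`, the box constraints, and the objective `d` are all arbitrary choices, assumed to keep $f$ positive on $X$ as Section 4.1 requires).

```python
# Sketch: minimize a linear function of y over Y>= via one LP over Z in (x, y).
import numpy as np
from scipy.optimize import linprog

n, p = 2, 2
C = np.array([[1.0, 2.0],     # f_j(x) = <c^j, x>; rows are c^1 and c^2
              [3.0, 1.0]])
# X = {x : 0.5 <= x_i <= 2}, written as G x <= b per the proof above.
G = np.vstack([np.eye(n), -np.eye(n)])
b = np.array([2.0, 2.0, -0.5, -0.5])

# Z collects (x, y) with f_j(x) <= y_j (4.14) and x in X (4.16).
A_ub = np.block([[C, -np.eye(p)],
                 [G, np.zeros((G.shape[0], p))]])
b_ub = np.concatenate([np.zeros(p), b])

d = np.array([1.0, 1.0])                       # a linear objective in y
obj = np.concatenate([np.zeros(n), d])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + p))
print(res.x[n:], res.fun)                      # minimizing y over Y>= and min value
```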
In convex cases of problem $(P_{Y^{\geq}})$ (and thus, in polyhedral cases as well), certain locations within $Y^{\geq}$ for seeking global optimal solutions can be specified. For instance, we have the following result.

Theorem 4.3.2. Suppose that problem $(P_{Y^{\geq}})$ satisfies the conditions for the convex case. Then:

(a) Any global optimal solution for problem $(P_{Y^{\geq}})$ belongs to the boundary of $Y^{\geq}$.

(b) If $Y^{\geq}$ is closed and contains at least one extreme point, then there exists at least one global optimal solution for problem $(P_{Y^{\geq}})$ that is an extreme point of $Y^{\geq}$.
Proof. Assume that $Y^{\geq}$, in addition to being a nonempty subset of $R^p_{>}$, is a convex set, i.e., that we have the convex case for problem $(P_{Y^{\geq}})$. Then, from Theorem 4.2.1, problem $(P_{Y^{\geq}})$ has at least one global optimal solution.

(a) To show this part of the theorem, let $y^*$ denote an arbitrary global optimal solution to problem $(P_{Y^{\geq}})$. Suppose that $y^*$ is not on the boundary of $Y^{\geq}$. By the choice of $y^*$ and since $Y^{\geq}$ is a convex set, $Y^{\geq}$ has a nonempty interior. Therefore, $y^*$ must belong to the interior of $Y^{\geq}$. From (4.1), this implies that for some $x \in X$, $f(x) < y^*$ must hold. By assumption, since $x \in X$, $f(x) > 0$. Therefore, if we set $y = f(x)$, it follows that $y \in Y^{\geq}$ and
$$\prod_{j=1}^{p} y_j < \prod_{j=1}^{p} y_j^*.$$
From (4.2), this contradicts the global optimality of $y^*$ in problem $(P_{Y^{\geq}})$. Therefore, $y^*$ must belong to the boundary of $Y^{\geq}$.

(b) From the discussion in Section 3.2, since $Y^{\geq}$ is a nonempty convex set and, for each $j = 1, 2, \ldots, p$, the function $h_j(y) = y_j$ is positive and concave on $Y^{\geq}$, the global optimal solution set for problem $(P_{Y^{\geq}})$ is identical to the global optimal solution set for the problem
$$(\hat P_{Y^{\geq}}) \quad \min \hat g(y), \ \text{s.t.} \ y \in Y^{\geq},$$
where, for each $y \in Y^{\geq}$, $\hat g : Y^{\geq} \to R$ is the concave function defined by
$$\hat g(y) = \Big[\prod_{j=1}^{p} y_j\Big]^{1/p}.$$
Since $Y^{\geq}$ is a nonempty, closed convex set with at least one extreme point, from Rockafellar (1970, Corollary 18.5.3), it is easy to see that $Y^{\geq}$ can contain no lines. Furthermore, since problem $(P_{Y^{\geq}})$ has at least one global optimal solution, problem $(\hat P_{Y^{\geq}})$ also has at least one global optimal solution. By Rockafellar (1970, Corollary 32.3.1), since $\hat g$ is a concave function on $Y^{\geq}$, the latter two statements imply that problem $(\hat P_{Y^{\geq}})$ has at least one global optimal solution $y^*$ that is an extreme point of $Y^{\geq}$. Because the optimal solution sets of problems $(P_{Y^{\geq}})$ and $(\hat P_{Y^{\geq}})$ coincide, this completes the proof.
Suppose that $Y^{\geq}$ is a nonempty, closed convex subset of $R^p_{>}$, and that $Y^{\geq}$ contains at least one extreme point. Then, from Theorem 4.3.2, there will exist at least one global optimal solution for problem $(P_{Y^{\geq}})$ that is an extreme point of $Y^{\geq}$, and all global optimal solutions for problem $(P_{Y^{\geq}})$ will lie on the boundary of $Y^{\geq}$. Neither of these properties, however, is necessarily shared by the decision set-based problem $(P_X)$ whose outcome-space reformulation yields problem $(P_{Y^{\geq}})$. The following example demonstrates this.
Example 4.3.1. Let $p = 2$, $X = \{(x_1, x_2)^T \in R^2 \mid 0 \leq x_i \leq 6,\ i = 1, 2\}$,
$$f_1(x_1, x_2) = (x_1 - 1)^2 + 1,$$
and
$$f_2(x_1, x_2) = (x_2 - 2)^2 + 1$$
in problem $(P_X)$. Then $X$ is a nonempty, convex set and, for each $i = 1, 2$, $f_i$ is a convex, positively-valued function on $X$. Therefore, by Theorem 4.3.1, the problem $(P_{Y^{\geq}})$ obtained by formulating the outcome-space version of problem $(P_X)$ is guaranteed to satisfy the conditions of the convex case for problem $(P_{Y^{\geq}})$. Furthermore, it is not difficult to show, in this case, that $Y^{\geq}$ is compact. Thus, $Y^{\geq}$ is closed and contains at least one extreme point. It is easy to see that the unique global optimal solution to problem $(P_{Y^{\geq}})$ is $(y_1^*, y_2^*) = (1, 1)$ which, as guaranteed by Theorem 4.3.2, is an extreme point of $Y^{\geq}$ (and is thus on the boundary of $Y^{\geq}$). On the other hand, the only global optimal solution to problem $(P_X)$ is $(x_1^*, x_2^*) = (1, 2)$, yet $x^*$ is neither on the boundary of $X$ nor an extreme point of $X$.
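A coarse grid search reproduces the conclusions of the example numerically (a sketch only; the grid resolution is an arbitrary choice).

```python
# Grid-search illustration of Example 4.3.1.
import numpy as np

f1 = lambda x1, x2: (x1 - 1.0) ** 2 + 1.0
f2 = lambda x1, x2: (x2 - 2.0) ** 2 + 1.0

xs = np.linspace(0.0, 6.0, 601)            # grid over X, step 0.01
X1, X2 = np.meshgrid(xs, xs)
prod = f1(X1, X2) * f2(X1, X2)
i, j = np.unravel_index(prod.argmin(), prod.shape)

print(X1[i, j], X2[i, j])                  # (1.0, 2.0): interior point of X
print(f1(X1[i, j], X2[i, j]),
      f2(X1[i, j], X2[i, j]))              # (1.0, 1.0) = y*, a vertex of Y>=
```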
To present the next result, we need to define two types of functions.

Definition 4.3.1. Let $Z \subseteq R^n$ be a nonempty convex set, and let $h : Z \to R$. The function $h$ is said to be quasiconcave on $Z$ when for each $z^1, z^2 \in Z$ and $\lambda \in R$ such that $0 \leq \lambda \leq 1$,
$$h\left[\lambda z^1 + (1-\lambda)z^2\right] \geq \min\left\{h(z^1), h(z^2)\right\}.$$

Definition 4.3.2. Let $W$ be an open set in $R^n$ that contains $Z \subseteq R^n$, and let $h : W \to R$. The function $h$ is said to be strictly pseudoconcave over $Z$ when $h$ is differentiable over $Z$ and, for each distinct $z^1, z^2 \in Z$, if $\langle \nabla h(z^1), z^2 - z^1 \rangle \leq 0$, then $h(z^2) < h(z^1)$.

It is well known that a differentiable, quasiconcave function $h : Z \to R$ need not be strictly pseudoconcave over $Z$. For a discussion of quasiconcave and strictly pseudoconcave functions see, for example, Bazaraa, Sherali and Shetty (1993).
From Konno and Kuno (1995, p. 379), we know that when $Y^{\geq}$ is a convex set, since $Y^{\geq} \subseteq R^p_{>}$, $g : Y^{\geq} \to R$ defined by (4.2) is quasiconcave on $Y^{\geq}$. Thus, in the convex case, problem $(P_{Y^{\geq}})$ is a minimization of a quasiconcave function over a convex set. In fact, however, we have the following even stronger result.

Theorem 4.3.3. Suppose that problem $(P_{Y^{\geq}})$ satisfies the conditions for the convex case. Then, in this problem, $g$ is a strictly pseudoconcave function over the convex set $Y^{\geq}$.

Proof. The set $Y^{\geq}$ is a convex set by definition of the convex case for problem $(P_{Y^{\geq}})$. To show that $g$ is strictly pseudoconcave over $Y^{\geq}$, notice first that by (4.2), $g$ can be considered to be well defined over the open set $R^p_{>}$. Also notice that $g$ is differentiable over $R^p_{>}$ and, thus, over $Y^{\geq} \subseteq R^p_{>}$.

Suppose now that $y^1$ and $y^2$ are distinct points in $Y^{\geq}$ that satisfy $\langle \nabla g(y^1), y^2 - y^1 \rangle \leq 0$. Then, from (4.2), we obtain
$$\sum_{k=1}^{p} \left[g(y^1)/y_k^1\right] y_k^2 \leq p\, g(y^1). \quad (4.17)$$
By multiplying both sides of (4.17) by $(1/p)$ and rearranging, we obtain that
$$g(y^1)(1/p)\sum_{k=1}^{p} (y_k^2/y_k^1) \leq g(y^1). \quad (4.18)$$
From Lemma 4.2.1,
$$g(y^1)(1/p)\sum_{k=1}^{p} (y_k^2/y_k^1) \geq g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p}, \quad (4.19)$$
with equality holding iff, for some $M > 0$, $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. There are two cases to consider.

Case (i): There is no $M > 0$ such that $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. Then, in (4.19), strict inequality holds, so that from (4.18) and (4.19),
$$g(y^1) > g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p}.$$
Since $g(y^1) > 0$, this implies that $g(y^2) < g(y^1)$.

Case (ii): For some $M > 0$, $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. If we choose such an $M$, then (4.19) holds as an equality. Thus, from (4.19) and the choice of $M$, we obtain that
$$g(y^1)(1/p)\sum_{k=1}^{p} (y_k^2/y_k^1) = g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p} \quad (4.20)$$
and that
$$g(y^2) = M^p g(y^1), \quad (4.21)$$
respectively. Since $g(y^1) > 0$, together (4.18), (4.20) and (4.21) imply that
$$g(y^1) \geq g(y^1)\left(M^p\right)^{1/p} = M g(y^1).$$
Dividing through by $g(y^1) > 0$ yields $M \leq 1$. Notice that $M \neq 1$, since, by assumption, $y^1$ and $y^2$ are distinct. Therefore $M < 1$. By (4.21), since $g(y^1), g(y^2) > 0$, this implies that $g(y^2) < g(y^1)$, and the proof is complete.
Remark 4.3.1. Theorem 4.3.3 justifies and strengthens the claim of Sniedovich and Findlay (1995, p. 317) that when $Y^{\geq}$ is a convex subset of $R^p_{>}$, $g : Y^{\geq} \to R$ defined by (4.2) is differentiable and pseudoconcave on $Y^{\geq}$.
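Theorem 4.3.3 can be spot-checked by random sampling. The sketch below (illustrative, not a proof; the sampling box and the dimension $p = 4$ are arbitrary) confirms that whenever $\langle \nabla g(y^1), y^2 - y^1 \rangle \leq 0$ for distinct positive $y^1, y^2$, the strict decrease $g(y^2) < g(y^1)$ follows.

```python
# Random spot-check of Theorem 4.3.3 / Remark 4.3.1.
import numpy as np

rng = np.random.default_rng(1)
g = lambda y: np.prod(y)
grad_g = lambda y: np.prod(y) / y

for _ in range(10000):
    y1 = rng.uniform(0.1, 5.0, size=4)
    y2 = rng.uniform(0.1, 5.0, size=4)
    # Strict pseudoconcavity: a nonpositive directional derivative toward y2
    # forces a strictly smaller function value at y2.
    if grad_g(y1) @ (y2 - y1) <= 0.0 and not np.allclose(y1, y2):
        assert g(y2) < g(y1)
```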

From Theorem 4.3.3, in the convex case, problem $(P_{Y^{\geq}})$ is a global optimization problem involving the minimization of a strictly pseudoconcave function over a convex set $Y^{\geq}$. Therefore, as in the general case, multiple local optimal solutions for problem $(P_{Y^{\geq}})$ will generally exist that are not globally optimal.

From Theorem 3.2.1, we know that when $Y^{\geq}$ is a nonempty, convex subset of $R^p_{>}$, the function $\hat g : Y^{\geq} \to R$ defined, as in the proof of Theorem 4.3.2, by
$$\hat g(y) = \left[g(y)\right]^{1/p} \quad (4.22)$$
is concave, where $g : Y^{\geq} \to R$ is given by (4.2). By the next result, when the domain of $\hat g$ is restricted to an appropriate subset of $Y^{\geq}$, a stronger statement can be made.
Theorem 4.3.4. Assume that $Y^{\geq}$ is a nonempty, compact, convex subset of $R^p_{>}$. For any $a \in R^p$ and $b \in R$ such that $a > 0$ and $b > 0$, let $Z(a,b) = Y^{\geq} \cap \{y \in R^p \mid \langle a, y \rangle = b\}$. Then, for any such $a$ and $b$, $\hat g : Z(a,b) \to R$ defined for each $y \in Z(a,b)$ by (4.22) is a strictly concave function.

Proof. Assume that $y^1, y^2 \in Z(a,b)$ and $y^1 \neq y^2$, where $a \in R^p$, $b \in R$, $a > 0$, and $b > 0$. Since $Z(a,b)$ is an intersection of two convex sets, it is itself a convex set. Therefore, if we choose $\lambda \in R$ such that $0 < \lambda < 1$, then
$$z := \lambda y^1 + (1-\lambda)y^2 \in Z(a,b).$$
Also, by (4.2) and (4.22),
$$\hat g(z) = \Big[\prod_{j=1}^{p} \left(\lambda y_j^1 + (1-\lambda)y_j^2\right)\Big]^{1/p}. \quad (4.23)$$
From Polya and Szego (1972),
$$\Big[\prod_{j=1}^{p} \left(\lambda y_j^1 + (1-\lambda)y_j^2\right)\Big]^{1/p} \geq \Big[\prod_{j=1}^{p} \lambda y_j^1\Big]^{1/p} + \Big[\prod_{j=1}^{p} (1-\lambda)y_j^2\Big]^{1/p}, \quad (4.24)$$
with equality holding iff $\lambda y_j^1 = K(1-\lambda)y_j^2$, $j = 1, 2, \ldots, p$, for some positive constant $K$. Since
$$\Big[\prod_{j=1}^{p} \lambda y_j^1\Big]^{1/p} = \lambda\, \hat g(y^1)$$
and
$$\Big[\prod_{j=1}^{p} (1-\lambda)y_j^2\Big]^{1/p} = (1-\lambda)\, \hat g(y^2),$$
(4.23) and (4.24) will imply the desired result if we can show that no $K > 0$ exists such that
$$\lambda y_j^1 = K(1-\lambda)y_j^2, \quad j = 1, 2, \ldots, p. \quad (4.25)$$
Notice that since $y^1 \neq y^2$, $K_1 := \left[\lambda/(1-\lambda)\right]$ does not satisfy (4.25).

Suppose, to the contrary, that for some $K > 0$, (4.25) is satisfied. Then from (4.25) it follows that
$$y^1 = K\left[(1-\lambda)/\lambda\right] y^2. \quad (4.26)$$
Since $y^1, y^2 \in Z(a,b)$,
$$\langle a, y^1 \rangle = \langle a, y^2 \rangle = b. \quad (4.27)$$
Substituting for $y^1$ in (4.27) via (4.26), we obtain
$$K\left[(1-\lambda)/\lambda\right]\langle a, y^2 \rangle = \langle a, y^2 \rangle = b.$$
Solving here for $K$, we obtain that $K = \left[\lambda/(1-\lambda)\right]$. Since $K = K_1 = \left[\lambda/(1-\lambda)\right]$ does not satisfy (4.25), this contradiction concludes the proof.
It is important to notice that the counterpart of Theorem 4.3.4 in the decision space does not hold, even in the polyhedral case. In particular, suppose that $X \subseteq R^n$ is a nonempty, compact polyhedron and, for each $j = 1, 2, \ldots, p$, that there exists a $c^j \in R^n$ such that $f_j(x) = \langle c^j, x \rangle > 0$ for all $x \in X$. Then, although the function $h : X \to R$ defined for each $x \in X$ by
$$h(x) = \Big[\prod_{j=1}^{p} f_j(x)\Big]^{1/p} \quad (4.28)$$
is concave (see Theorem 3.2.1), the function $h : X(a,b) \to R$ need not be strictly concave, where $a \in R^n$, $b \in R$, $a > 0$, $b > 0$, and
$$X(a,b) = X \cap \{x \in R^n \mid \langle a, x \rangle = b\}.$$
The following example illustrates this observation.
Example 4.3.2. Let
$$X = \{(x_1, x_2)^T \in R^2 \mid 0.5 \leq x_i \leq 4.0,\ i = 1, 2\},$$
and let $f_j(x_1, x_2) = \langle (1,1)^T, (x_1, x_2)^T \rangle$, $j = 1, 2$. Then $X$ is a nonempty, compact polyhedron and, for each $j = 1, 2$, $f_j$ is positive and linear on $X$. As guaranteed by Theorem 3.2.1, $h : X \to R$, which, by (4.28), is given by
$$h(x_1, x_2) = (x_1 + x_2),$$
is concave. However, if, for example, $a_1 = a_2 = 1$ and $b = 4$, then $h$ is not strictly concave on
$$X(a,b) = \{(x_1, x_2)^T \mid 0.5 \leq x_i \leq 4.0,\ i = 1, 2,\ x_1 + x_2 = 4\},$$
since $h$ is constant (equal to 4) on this set.
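The contrast between Example 4.3.2 and Theorem 4.3.4 is visible in a few lines of code (a sketch; the slice parametrization and sample points are arbitrary choices): along the slice $x_1 + x_2 = 4$ the decision-space function $h$ is constant, while along the corresponding outcome-space slice $\hat g$ exhibits strict curvature.

```python
# Sketch contrasting Example 4.3.2 with Theorem 4.3.4.
import numpy as np

t = np.linspace(0.5, 3.5, 7)               # parametrize the slice x1 + x2 = 4

# Decision space: h of (4.28) with f1 = f2 = x1 + x2 reduces to x1 + x2,
# so h is constant (hence not strictly concave) along the slice.
h = lambda x1, x2: np.sqrt((x1 + x2) * (x1 + x2))
print(h(t, 4.0 - t))                       # [4. 4. 4. 4. 4. 4. 4.]

# Outcome space: g_hat(y) = (y1*y2)**0.5 along the slice <(1,1), y> = 4
# shows strict curvature, as Theorem 4.3.4 asserts.
g_hat = lambda y1, y2: np.sqrt(y1 * y2)
print(g_hat(t, 4.0 - t))                   # strictly concave profile in t
```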
Consider now problem $(P_{Y^{\geq}})$ when the conditions of the polyhedral case hold. Assume also that $Y^{\geq}$ is a compact set, and that $\bar y \in Y^{\geq}$. For algorithmic purposes, it may be quite useful in this case to develop tools for finding local optimal solutions for problem $(P_{Y^{\geq}})$. These tools could then potentially be used to construct global solution algorithms for the problem that repeatedly move from a local optimal solution to an improved local optimal solution until a global optimal solution is found. The remaining results in this section are motivated, in part, by the desire to find such tools.

Notice that in the polyhedral case, the optimization problem in (4.9) is a linear program given by
$$(\text{LP}) \quad \min (1/p)\sum_{j=1}^{p} (y_j/\bar y_j), \ \text{s.t.} \ y \in Y^{\geq}.$$
Problem (LP) will have an optimal solution $y^*$ that can be found, for instance, by the simplex method. Since $\bar y \in Y^{\geq}$, the minimum value $v_{\min}$ in problem (LP) satisfies $v_{\min} \leq 1.0$. As a result, there are three possible cases for problem (LP). First, $v_{\min} < 1.0$ may hold. Second, $v_{\min} = 1.0$ may hold, with $\bar y$ being the unique optimal solution to problem (LP). Third, $v_{\min} = 1.0$ may hold, with problem (LP) having multiple optimal solutions.
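In practice, problem (LP) can be handed to any linear programming solver. The sketch below is one illustration (the use of `scipy.optimize.linprog` is our choice of solver, and the polyhedron is borrowed from Example 4.3.3 further on, with $\bar y = (4, 4)$); it encodes $Y^{\geq}$ in the solver's $A_{ub}\, y \leq b_{ub}$ form and recovers $v_{\min} = 1.0$, so the second or third case applies.

```python
# Solving problem (LP) with an off-the-shelf LP solver.
import numpy as np
from scipy.optimize import linprog

p = 2
y_bar = np.array([4.0, 4.0])
c = (1.0 / p) / y_bar                # coefficients of (1/p) sum_j y_j / ybar_j

A_ub = np.array([[-1.0, -1.0],       # y1 + y2 >= 8
                 [1.0, 0.0],         # y1 <= 7
                 [0.0, 1.0]])        # y2 <= 4
b_ub = np.array([-8.0, 7.0, 4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * p)
print(res.fun)                       # v_min = 1.0
print(res.x)                         # one optimal vertex; (LP) has several optima here
```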
In the first case, from Theorem 4.2.2, it follows that $g(y^*) < g(\bar y)$, where $y^*$ is any optimal solution to problem (LP), so that a more attractive feasible solution $y^*$ to problem $(P_{Y^{\geq}})$ than $\bar y$ has been found. To analyze the second case, we need the following two definitions and lemma.

Definition 4.3.3. A point $\bar y \in Y^{\geq}$ is a strict local optimal solution for problem $(P_{Y^{\geq}})$ when there exists an $\varepsilon > 0$ such that for each $y \in Y^{\geq}$ for which $y \neq \bar y$ and $\|y - \bar y\| < \varepsilon$, $g(y) > g(\bar y)$.

Definition 4.3.4. Let $Z$ be a nonempty convex set in $R^n$, and let $h : Z \to R$. The function $h$ is said to be strongly quasiconcave on $Z$ when for each $z^1, z^2 \in Z$ with $z^1 \neq z^2$, we have
$$h\left[\lambda z^1 + (1-\lambda)z^2\right] > \min\left\{h(z^1), h(z^2)\right\}$$
for each $\lambda$ such that $0 < \lambda < 1$.

Lemma 4.3.1. Let $Z$ be a nonempty convex set in $R^n$, and let $h : Z \to R$ be strongly quasiconcave. Suppose that $z^i$, $i = 1, 2, \ldots, k$, are distinct points in $Z$ and that $z$ is an element of the convex hull of $z^i$, $i = 1, 2, \ldots, k$, such that, for each $i = 1, 2, \ldots, k$, $z \neq z^i$. Then
$$h(z) > \min\left\{h(z^i) \mid i = 1, 2, \ldots, k\right\}.$$
Proof. The lemma is easy to prove using Definition 4.3.4 and induction.
The following result analyzes the case where $v_{\min} = 1.0$ and $\bar y$ is the unique optimal solution to problem (LP).
Theorem 4.3.5. Assume that problem $(P_{Y^{\geq}})$ satisfies the conditions for the polyhedral case, and that $Y^{\geq}$ is a compact set. Assume also that $\bar y \in Y^{\geq}$. Suppose that $v_{\min} = 1.0$ and that $y = \bar y$ is the unique optimal solution to problem (LP). Then $\bar y$ is a strict local optimal solution for problem $(P_{Y^{\geq}})$.

Proof. Since $g(\bar y) > 0$ and $y = \bar y$ is the unique optimal solution to the problem
$$\min_{y \in Y^{\geq}} (1/p)\sum_{j=1}^{p} (y_j/\bar y_j),$$
$y = \bar y$ must also be the unique optimal solution to the problem
$$\min_{y \in Y^{\geq}} g(\bar y)(1/p)\sum_{j=1}^{p} (y_j/\bar y_j).$$
Therefore, by Lemma 4.2.1, $y = \bar y$ is the unique optimal solution to the problem
$$\min_{y \in Y^{\geq}} (1/p)\langle \nabla g(\bar y), y \rangle.$$
Since $(1/p) > 0$ and $-\langle \nabla g(\bar y), \bar y \rangle$ is a constant, this implies that $y = \bar y$ is the unique optimal solution to the problem
$$\min_{y \in Y^{\geq}} \langle \nabla g(\bar y), y - \bar y \rangle. \quad (4.29)$$
Therefore, the optimal value of problem (4.29) is 0, and for all $y \in Y^{\geq}$ such that $y \neq \bar y$,
$$\langle \nabla g(\bar y), y - \bar y \rangle > 0. \quad (4.30)$$
In particular, since $\bar y$ uniquely minimizes a linear function over the compact polyhedron $Y^{\geq}$, $\bar y$ is an extreme point of $Y^{\geq}$. Let $d^1, d^2, \ldots, d^k$ represent the directions of the edges of $Y^{\geq}$ emanating from the extreme point $\bar y$ of $Y^{\geq}$. From (4.30),
$$\langle \nabla g(\bar y), d^i \rangle > 0$$
for all $i = 1, 2, \ldots, k$. By Theorem 4.1.2 in Bazaraa, Sherali, and Shetty (1993), this implies that there exist positive reals $\delta_i$, $i = 1, 2, \ldots, k$, such that
$$g(\bar y + \lambda_i d^i) > g(\bar y) \quad (4.31)$$
for each $\lambda_i \in (0, \delta_i)$. Choose $\delta$ with $0 < \delta < \min\{\delta_1, \delta_2, \ldots, \delta_k\}$, and consider the points $\bar y + \delta d^1, \bar y + \delta d^2, \ldots, \bar y + \delta d^k$. Then, by the choice of $\delta$ and (4.31),
$$g(\bar y + \delta d^i) > g(\bar y) \quad (4.32)$$
for each $i = 1, 2, \ldots, k$. Let $z$ be any element of the convex hull of $\bar y$, $\bar y + \delta d^i$, $i = 1, 2, \ldots, k$, such that $z \neq \bar y$ and, for each $i = 1, 2, \ldots, k$, $z \neq \bar y + \delta d^i$. Since $g$ is a strictly pseudoconcave function on $Y^{\geq}$, it is also a strongly quasiconcave function on $Y^{\geq}$ (Bazaraa, Sherali, Shetty 1993). As a result, by Lemma 4.3.1,
$$g(z) > \min\left\{g(\bar y),\ g(\bar y + \delta d^i),\ i = 1, 2, \ldots, k\right\}. \quad (4.33)$$
From (4.32) and (4.33), $g(z) > g(\bar y)$. Since $\delta > 0$, this implies that there exists an $\varepsilon > 0$ sufficiently small so that if $z \in Y^{\geq}$, $\|z - \bar y\| < \varepsilon$, and $z \neq \bar y$, then $g(z) > g(\bar y)$.
Under the assumptions of Theorem 4.3.5, if $v_{\min} = 1.0$ but $y = \bar y$ is one of two or more optimal solutions to problem (LP), then $\bar y$ need not be a strict local optimal solution for problem $(P_{Y^{\geq}})$. The following example illustrates this point.

Example 4.3.3. Let
$$Y^{\geq} = \{(y_1, y_2)^T \in R^2 \mid y_1 + y_2 \geq 8,\ y_1 \leq 7,\ y_2 \leq 4\},$$
and let $\bar y^T = (4, 4)^T$. Then $Y^{\geq}$ is a nonempty, compact polyhedron in $R^2$, and the assumptions of Theorem 4.3.5 are satisfied. In this case, $\bar y \in Y^{\geq}$ and $\bar y$ is an optimal solution to problem (LP). However, since $(y^{\delta})^T = (4 + \delta, 4 - \delta)^T \in Y^{\geq}$ and $g(y^{\delta}) < g(\bar y)$ for all values of $\delta$ such that $0 < \delta < 3$, by Definition 4.3.3, $\bar y$ is not a strict local optimal solution for problem $(P_{Y^{\geq}})$.
