Multiplicative programming


Material Information

Title:
Multiplicative programming theory and algorithms
Physical Description:
vii, 137 leaves ; 29 cm.
Language:
English
Creator:
Boger, George
Publication Date:
1999
Subjects

Subjects / Keywords:
Decision and Information Sciences thesis, Ph. D   ( lcsh )
Dissertations, Academic -- Decision and Information Sciences -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1999.
Bibliography:
Includes bibliographical references (leaves 131-136).
Statement of Responsibility:
by George Boger.
General Note:
Printout.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 021549200
oclc - 43702775
System ID:
AA00013619:00001












MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS


By

GEORGE BOGER













A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA














ACKNOWLEDGMENTS


I would like to thank my entire supervisory committee: Dr. Harold Benson, Dr. Selcuk Erenguc, Dr. Asoo Vakharia, and Dr. Richard Francis, for their time and helpful feedback on my dissertation. I am especially grateful to my committee chairman, Dr. Benson, for suggesting the topic of multiplicative programming problems and for his tremendous assistance and unending support. Without his help, this dissertation would not have been completed. I would also like to thank Mr. Erijang Sun for proving some theoretical results needed to support my dissertation topic.

I am also grateful to the DIS department chairman, Dr. Erenguc, for providing an assistantship and for allowing me to teach undergraduate courses during my time at the University of Florida. Teaching was an enjoyable and rewarding experience.

I would like to thank my family for their encouragement and emotional support. I

would also like to thank my colleagues in the Ph.D. program for their friendship and their

support.

Finally, I am indebted to my master's degree advisor, Dr. Frederick Buoni, at the Florida Institute of Technology, for his guidance. He suggested multiple objective linear programming as a topic for my thesis. While working on the thesis, I met Dr. Benson during a visit to FIT to present a talk related to multiple objective linear programming. Dr. Benson agreed to serve on my master's degree committee and later recruited me for the DIS Ph.D. program.














TABLE OF CONTENTS

                                                                       page

ACKNOWLEDGMENTS

ABSTRACT

CHAPTERS

1 INTRODUCTION ......................................................... 1

   1.1. The Multiplicative Programming Problem ......................... 1
   1.2. Reformulations of the Multiplicative Programming Problem ....... 4
   1.3. Purpose and Organization of the Dissertation

2 A REVIEW OF THE LITERATURE ON MULTIPLICATIVE
  PROGRAMMING PROBLEMS ................................................. 9

   2.1. Organization of the Literature Review .......................... 9
   2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP) ........... 13
        2.2.1. Methods Based on Quadratic Programming .................. 15
        2.2.2. Methods Based on Searching the Outcome Set .............. 17
        2.2.3. Methods Based on Solving a Parametric Master Problem .... 22
        2.2.4. Methods Based on Polyhedral Annexation .................. 28
   2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem
        (LMP) when p >= 3 .............................................. 32
   2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP) ............ 32
        2.4.1. Methods Based on Solving a Reformulated Problem ......... 33
        2.4.2. A Method Based on Outer Approximation ................... 37
   2.5. Methods to Solve Problem (LMP) as a Concave Minimization
        Problem ........................................................ 38

3 CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS:
  ANALYSIS AND AN EFFICIENT POINT SEARCH HEURISTIC
  FOR THE LINEAR CASE .................................................. 40

   3.1. Introduction ................................................... 40
   3.2. Analysis ....................................................... 41
   3.3. Efficient Point Search Heuristic ............................... 52
   3.4. Computational Results .......................................... 62
   3.5. Discussion ..................................................... 69

4 A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN
  OUTCOME-SPACE ........................................................ 71

   4.1. Introduction ................................................... 71
   4.2. Results for the General Case of Problem (Py') .................. 73
   4.3. Results for Convex and Polyhedral Cases of Problem (Py') ....... 78
   4.4. Discussion ..................................................... 96

5 AN OUTCOME-SPACE CUTTING-PLANE ALGORITHM FOR
  LINEAR MULTIPLICATIVE PROGRAMMING ................................... 98

   5.1. Introduction ................................................... 98
   5.2. Theoretical Prerequisites ...................................... 100
   5.3. Outcome-Space, Cutting-Plane Algorithm ......................... 104
        5.3.1. Strict Local Optimal Solution Search .................... 105
        5.3.2. Cutting Plane Construction .............................. 107
        5.3.3. Termination Test ........................................ 109
        5.3.4. Outcome-Space, Cutting-Plane Algorithm .................. 110
   5.4. Implementation ................................................. 114
   5.5. Example ........................................................ 119
   5.6. Concluding Remarks ............................................. 124

6 SUMMARY AND FUTURE RESEARCH .......................................... 125

   6.1. Introduction ................................................... 125
   6.2. Future Research on the Heuristic Algorithm ..................... 125
   6.3. Future Research on Global Solution Algorithms .................. 127

REFERENCES ............................................................. 131

BIOGRAPHICAL SKETCH .................................................... 137














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS

By

George Boger

December 1999

Chairman: Harold P. Benson
Major Department: Decision and Information Sciences

Multiplicative programming problems are mathematical optimization problems in

which the objective function contains a product of several real valued functions defined

over a common domain and the feasible decisions are described by a nonempty set. These

optimization problems have some important applications in engineering, finance,

economics, and other fields. Multiplicative programming problems, however, are difficult

global optimization problems that are known to be NP-hard.

This dissertation has two purposes. The first is to develop and test a heuristic

algorithm that finds a good solution, though not necessarily a globally optimal solution,

for the linear multiplicative programming problem. The second purpose is to develop a

global solution algorithm for the linear multiplicative programming problem that is

potentially more efficient than existing algorithms for this problem.









To evaluate the effectiveness in practice of the heuristic algorithm, we have

written a FORTRAN computer program and used it to solve 260 randomly generated

linear multiplicative programming problems of various sizes. Our experimental results

show that the computational requirements of the heuristic algorithm are not overly

burdensome when compared to the effort required to solve a linear multiplicative

programming problem.

The framework of the outcome-space, cutting-plane algorithm is taken from a pure cutting plane, decision set-based method developed by Horst and Tuy for solving concave minimization problems. By adapting the approach of this method to an outcome-space reformulation of the linear multiplicative programming problem, rather than directly applying the method to the original decision set formulation, we expect that considerable computational savings can be obtained. We also show how additional computational benefits might be obtained by implementing the new algorithm appropriately. To illustrate the new algorithm, we apply it to the solution of a sample problem.














CHAPTER 1
INTRODUCTION


1.1. The Multiplicative Programming Problem

Multiplicative programming problems are mathematical optimization problems in which the objective function contains a product of several real valued functions defined over a common domain and the feasible decisions are described by a nonempty set. These problems occur in a wide variety of application areas.

For example, Konno and Inori (1989) studied a bond portfolio optimization problem in which the portfolio's performance is measured by a number of indices such as the average coupon rate, the average terminal yield, and the average length to maturity. The goal of the portfolio manager is to improve the performance of the portfolio by purchasing or selling bonds in the marketplace subject to some limiting constraints. The manager must consider multiple incomparable objectives, such as maximizing the average terminal yield and minimizing the average maturity time. Konno and Inori chose to optimize several objectives simultaneously by multiplying them together, since the objectives do not share a common scale.

Another example of a multiplicative programming problem, given in Maling,

Mueller and Heller (1982), is a packaging problem encountered in designing very large-

scale integrated circuit (VLSI) chips and laying out building floor plans or manufacturing

plant facilities. In the problem, the overall rectangular dimensions of the feasible layout







plans are constrained rather than fixed. Different layout plans with differing overall

rectangular dimensions are obtained according to how the components of a system are

arranged within each plan. The objective is to find the arrangement of components that

minimizes the overall layout area subject to certain constraints on the area and the

perimeter of the layout.

Henderson and Quandt (1971, p. 15) also give an application of multiplicative

programming problems. Their example is from microeconomics. In their example, a

rational consumer wishes to find a combination of two commodities to purchase from

which he will derive the highest possible level of satisfaction. Budgetary constraints and

the availability of the commodities limit the quantities the consumer may purchase. The

consumer's level of satisfaction is captured by his utility function, which is assumed to be

the product of the quantities of the two commodities. The rational consumer's problem is

then formulated as maximizing his utility function subject to the budgetary and

commodity availability constraints.
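To make the consumer's problem concrete, here is a small worked instance (the prices and budget are hypothetical, not taken from Henderson and Quandt). With prices p_1, p_2 > 0 and budget B, the problem is

\[
\max\ u(x_1, x_2) = x_1 x_2, \quad \text{s.t.}\ p_1 x_1 + p_2 x_2 \le B,\ x_1, x_2 \ge 0.
\]

At an optimum the budget constraint is tight, so substituting x_2 = (B - p_1 x_1)/p_2 gives a concave quadratic in x_1 whose maximizer is x_1* = B/(2 p_1), and hence x_2* = B/(2 p_2); the consumer spends half the budget on each commodity. For example, with B = 100, p_1 = 2, and p_2 = 5, the optimal bundle is (25, 10).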

The multiplicative programming problem or, more briefly, the multiplicative program, may be formulated mathematically as

\[
\text{(Px)} \qquad \min\ h(x) = \prod_{j=1}^{p} f_j(x), \quad \text{s.t.}\ x \in X,
\]

where p ≥ 2 is an integer, X ⊆ R^n, and, for each j = 1, 2, ..., p, f_j : X → R satisfies f_j(x) ≥ 0 for all x ∈ X. For simplicity we will assume throughout this dissertation that the minimum of problem (Px) is achieved at some point x* ∈ X. In addition, we will assume that p is significantly less than n, since this holds for virtually all applications of multiplicative programming problems. If f_j(x̄) = 0 for some j ∈ {1, 2, ..., p} and some x̄ ∈ X, then clearly x̄ is a global optimal solution. This condition can be checked by solving the p minimization problems min {f_j(x) | x ∈ X}, j = 1, 2, ..., p. Therefore, we may assume without loss of generality that, for each j = 1, 2, ..., p, f_j(x) > 0 holds for all x ∈ X.
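In the linear case, this preliminary check amounts to solving p linear programs. Below is a minimal sketch using SciPy's linprog; the polytope X = {x : Ax <= b, x >= 0} and all numbers are hypothetical:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical linear case: f_j(x) = c_j . x + d_j over X = {x : A x <= b, x >= 0}.
    A = np.array([[1.0, 2.0], [3.0, 1.0]])
    b = np.array([4.0, 6.0])
    C = np.array([[1.0, 1.0], [2.0, 0.5]])   # row j holds c_j
    d = np.array([0.5, 1.0])

    # Solve min f_j(x) over X for each j = 1, ..., p; if some minimum is zero,
    # its minimizer is already a global optimal solution of problem (Px).
    for j, (c_j, d_j) in enumerate(zip(C, d), start=1):
        res = linprog(c_j, A_ub=A, b_ub=b, bounds=(0, None))
        fj_min = res.fun + d_j
        print(f"min f_{j} over X = {fj_min:.4f}")
        if np.isclose(fj_min, 0.0):
            print("  global optimum of (Px) found at x =", res.x)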

The objective function h of problem (Px) is generally not a convex function. As a result, problem (Px) belongs to a class of nonconvex programming problems called global optimization problems. In contrast to convex programming problems, there may be many local minima for problem (Px) that are not globally optimal. Conventional local optimization methods based on gradients, subgradients, conjugate directions, or the Karush-Kuhn-Tucker conditions, for instance, are at best guaranteed only to find a local minimum. These methods must then terminate, since there is neither a local criterion for certifying the global optimality of a given solution nor a way to determine how to proceed to a better solution if the solution is not globally optimal. From the perspective of computational complexity, problem (Px) is a difficult problem that is known to be NP-hard even when the objective function is simply h(x) = x_1 x_2 and the feasible region X is a polyhedron (Matsui 1996).
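The following sketch illustrates this multiextremality on a tiny hypothetical instance: a local method (SLSQP here) started from two different points stops at two different vertices with different objective values, and nothing in its output certifies which one is globally optimal:

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint

    # Hypothetical instance: minimize h(x) = (x1 + 0.2)(x2 + 0.5)
    # over the polytope {x in [0, 1]^2 : x1 + x2 >= 1}.
    h = lambda x: (x[0] + 0.2) * (x[1] + 0.5)
    con = LinearConstraint([[1.0, 1.0]], lb=1.0)   # x1 + x2 >= 1
    bounds = [(0.0, 1.0), (0.0, 1.0)]

    for x0 in (np.array([0.9, 0.1]), np.array([0.1, 0.9])):
        res = minimize(h, x0, method="SLSQP", bounds=bounds, constraints=[con])
        print(f"start {x0} -> x = {res.x.round(3)}, h = {res.fun:.3f}")

    # Typically the first start is trapped at the vertex (1, 0) with h = 0.6,
    # while the second reaches the global minimum (0, 1) with h = 0.3.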

When, in addition to the assumptions given previously for problem (Px), X is a convex set and, for each j = 1, 2, ..., p, f_j : X → R is a concave function, we obtain the concave case of problem (Px), called the concave multiplicative programming problem. The convex case of problem (Px), called the convex multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem (Px), X is a convex set and, for each j = 1, 2, ..., p, f_j : X → R is a convex function. A special linear case of problem (Px), called the linear multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem (Px), X is a compact polyhedron and, for each j = 1, 2, ..., p, f_j : X → R is a linear function (Konno and Kuno 1992).

1.2. Reformulations of the Multiplicative Programming Problem

During the 1990's there has been a resurgence of interest in problem (Px).

Encouraged by the rapid advances in high speed computing, researchers began developing

and testing new methods for solving global optimization problems that arise in practical

applications, including problem (Px).

Included among the global optimization methods used to solve problem (Px) for

the special case when p = 2 are various parametric simplex method-based algorithms

(e.g., Konno and Kuno 1992, Konno and Kuno 1995, Konno, Yajima, and Matsui 1991,

and Schaible and Sodini 1995), branch and bound procedures (e.g., Kuno 1996 and Muu

and Tam 1992), and various other types of algorithms (e.g., Konno and Kuno 1990,

Pardalos 1990, and Tuy and Tam 1992).

When p > 2, globally solving problem (Px) has been shown empirically to require considerably more computational effort than when p = 2 (see, e.g., Ryoo and Sahinidis 1996). A small number of the algorithms for solving problem (Px) when p > 2 solve the problem directly, without reformulating it as an outcome-space problem. Included among these, for instance, is the polyhedral annexation algorithm of Tuy (1991). Most of the algorithms for solving problem (Px) when p > 2, however, solve the problem indirectly by globally solving an outcome-space reformulation of the problem instead. This is because in practical applications p is routinely much smaller than n, often by two or more orders of magnitude. As a result, working in R^p is computationally less challenging than working in R^n.

Let y ∈ R^p denote the p-vector with jth entry equal to y_j, j = 1, 2, ..., p. For each j = 1, 2, ..., p, let ŷ_j satisfy

\[
\hat{y}_j > \sup\ f_j(x), \quad \text{s.t.}\ x \in X,
\]

where ŷ_j = +∞ is possible, and let ŷ ∈ R^p denote the vector with jth entry equal to ŷ_j, j = 1, 2, ..., p. Let f(x) denote the vector f(x) = [f_1(x), f_2(x), ..., f_p(x)]^T, where f_j : X → R, j = 1, 2, ..., p, are the functions used in defining problem (Px). Thoai (1991) and later Konno and Kuno (1995) based their outer approximation algorithms for respectively solving the convex and linear cases of problem (Px) on one of the more direct reformulations of problem (Px) as an outcome-space problem. Their reformulation is given by

\[
\text{(Py')} \qquad \min\ \prod_{j=1}^{p} y_j, \quad \text{s.t.}\ y \in Y',
\]

where

\[
Y' = \{\, y \in R^p \mid f(x) \le y \le \hat{y} \ \text{for some}\ x \in X \,\}.
\]

Falk and Palocsay (1994) based their branch and bound, image space algorithm for the linear case of problem (Px) on another outcome-space reformulation that is closely related to problem (Py'). Their reformulation is given by

\[
\text{(Py)} \qquad \min\ \prod_{j=1}^{p} y_j, \quad \text{s.t.}\ y \in Y,
\]

where

\[
Y = \{\, y \in R^p \mid y = Cx \ \text{for some}\ x \in X \,\}
\]

and C is a (p × n) matrix whose rows are (c^j)^T, j = 1, 2, ..., p.
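A minimal sketch of the data underlying these reformulations in the linear case, with hypothetical problem data: the matrix C maps decision vectors into R^p, and each upper bound ŷ_j can be obtained by maximizing the jth linear outcome over X with a linear program:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical linear instance: X = {x : A x <= b, x >= 0} in R^n, outcomes y = C x in R^p.
    rng = np.random.default_rng(0)
    n, m, p = 30, 20, 2
    A = rng.uniform(0.0, 1.0, (m, n))
    b = np.full(m, 10.0)
    C = rng.uniform(0.1, 1.0, (p, n))   # row j holds (c^j)^T

    # y_hat_j > sup { <c^j, x> : x in X }: maximize each outcome (minimize its negative).
    sup_y = np.array([-linprog(-C[j], A_ub=A, b_ub=b, bounds=(0, None)).fun
                      for j in range(p)])
    y_hat = sup_y + 1.0                 # any strict upper bound works
    print("y_hat =", y_hat.round(3))

    # Every candidate decision x is judged only through its p-dimensional image y = C x,
    # so the search for min y_1 * ... * y_p can take place in R^p instead of R^n.
    x = rng.uniform(0.0, 0.2, n)        # a sample feasible point
    print("outcome of the sample x:", (C @ x).round(3))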


1.3. Purpose and Organization of the Dissertation

This dissertation has two main purposes. The first is to develop and test a heuristic

algorithm that finds a good solution, though not necessarily a globally optimal solution,

for the linear case of problem (Px). The second purpose is to develop an exact global

solution algorithm for the linear case of problem (Px) that is potentially more efficient

than existing algorithms for this problem.

Since the linear multiplicative programming problem is known to be an NP-hard,

multiextremal global optimization problem, it is inherently more difficult to globally

solve than a convex programming problem of the same size. In some applications, a good solution that is not necessarily globally optimal will adequately meet the requirements of a user; see, e.g., Konno and Inori (1989). In these cases, the use of a heuristic algorithm seems to be appropriate for finding

a satisfactory solution. To date, however, there is no known heuristic algorithm tailored to

finding a good solution for the linear multiplicative programming problem. In their

review of algorithms for solving problem (Px), Konno and Kuno (1995) do not mention








any heuristic algorithms for problem (Px), and our survey of the literature has revealed

none.

To develop the heuristic algorithm, we first analyze the concave multiplicative

programming problem. The analysis yields a new way to write a concave multiplicative

programming problem as a concave minimization problem. As a result, a concave

multiplicative programming problem can be solved by using any existing concave

minimization algorithm without resorting to a reformulation of the problem. We also

show that some relationships exist between concave multiplicative programming

problems and certain multiple-objective mathematical programs. These relationships are

exploited to develop the heuristic algorithm for the linear case of problem (Px).

For cases where a linear multiplicative program must be solved for an exact global optimal solution, we expect that globally solving the outcome-space reformulation (Py') instead will result in a significant decrease in the computational effort over that required to directly solve the problem. This is because in typical applications of linear multiplicative programs, p is several orders of magnitude smaller than n. As a result, working in R^p should be computationally less challenging than working in R^n.

To globally solve the outcome-space reformulation (Py') of a linear multiplicative program, we develop an outcome-space, pure cutting plane algorithm that works in R^p. The framework for the algorithm is taken from a pure cutting plane, decision set-based concave minimization method developed by Horst and Tuy (1993). We show how to adapt this method to solving the reformulation (Py') of a linear multiplicative program for a global extreme point optimal solution. Once this global solution is found, we can recover a globally optimal solution for the linear multiplicative program in decision space. As a further computational enhancement, we also show that for purposes of implementation, the mechanics of the outcome-space, cutting-plane algorithm can be applied to the smaller problem (Py) instead of problem (Py').

The organization of the dissertation is as follows. In Chapter 2 we present a review

of the literature on multiplicative programming problems. In Chapter 3 we analyze the

concave multiplicative programming problem, apply the results to develop a heuristic

algorithm for the linear multiplicative programming problem, and report test results using

the heuristic algorithm on some randomly generated problems. In Chapter 4 we analyze the reformulation problem (Py') and show that, under certain convexity assumptions on Y', problem (Py') has a global extreme point optimal solution y* ∈ Y'. We then present a procedure that is guaranteed to find a strict local optimal extreme point solution for the reformulation problem (Py') of the linear multiplicative program. In Chapter 5 we

present an outcome-space, cutting-plane algorithm for globally solving a linear

multiplicative program. The algorithm employs the strict local optimal search procedure

presented in Chapter 4. We also illustrate the algorithm by applying it to the solution of a

sample problem. Finally, in Chapter 6, we give an overall summary and conclusions, and

we discuss directions for further research.













CHAPTER 2
A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMING
PROBLEMS


2.1. Organization of the Literature Review

In this chapter we present a review of the literature on methods proposed for

solving multiplicative programming problems. The only known literature review on

multiplicative programming problems appears in Konno and Kuno (1995). In their

literature review Konno and Kuno defined multiplicative programming problems as "a

class of minimization problems containing a product of several convex functions either in

its objective function or in its constraints." They included problems in which the

objective function contained the summation of a convex function and the product of

convex functions.

Konno and Kuno (1995) organized their literature review based on whether the

problem data are linear or nonlinear and on the number of functions that appear in the

objective function. They considered solution methods for the following multiplicative

programming problems.

The first multiplicative programming problem considered by Konno and Kuno is

the special case of quadratic programming

\[
\text{(LMP2)} \qquad \min\ f(x) = (\langle c^1, x\rangle + d_1)(\langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D,
\]

where D := {x ∈ R^n | Ax ≥ b, x ≥ 0} is a nonempty polytope (bounded polyhedron) in which A is an m × n matrix, b ∈ R^m, and, for each i = 1, 2, c^i ∈ R^n \ {0} and d_i ∈ R. In addition, it is assumed that, for each x ∈ D, ⟨c^i, x⟩ + d_i > 0, i = 1, 2.

The second multiplicative programming problem that they considered is the convex multiplicative programming problem

\[
\text{(CMP)} \qquad \min\ f(x) = \prod_{j=1}^{p} f_j(x), \quad \text{s.t.}\ x \in X,
\]

where X ⊆ R^n is a nonempty, compact, convex set and, for each j = 1, 2, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

Konno and Kuno (1995) considered two special cases of problem (CMP): (1) the case where p = 2 and (2) the case where p ≥ 2 and the problem data are linear. The second case may be defined as the following extension of problem (LMP2):

\[
\text{(LMP)} \qquad \min\ f(x) = \prod_{i=1}^{p} [\langle c^i, x\rangle + d_i], \quad \text{s.t.}\ x \in D,
\]

where p ≥ 2 is an integer and, for each i = 1, 2, ..., p, ⟨c^i, x⟩ + d_i > 0 holds for all x ∈ D.

Finally, Konno and Kuno (1995) considered three classes of problems related to problem (CMP). In the first class is the following problem:

\[
\text{(GCMP)} \qquad \min\ f(x) = f_0(x) + \sum_{j=1}^{q} f_{2j-1}(x) f_{2j}(x), \quad \text{s.t.}\ x \in X,
\]

where, for each j = 0, 1, ..., 2q, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

The second class is a special case of (GCMP) in which q = 1 and the problem data are linear. This class may be defined as the following extension of problem (LMP2):

\[
\text{(GLMP)} \qquad \min\ f(x) = \langle c^0, x\rangle + (\langle c^1, x\rangle + d_1)(\langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D,
\]

where c^0 ∈ R^n and c^i, d_i, i = 1, 2, and D are defined as in problem (LMP2).

The third class of problems considered by Konno and Kuno (1995) is the

minimization of a convex function over a feasible region that includes a product of

convex functions in its constraint set.

Konno and Kuno's coverage of the literature is not exhaustive. They focused on

algorithms that have been demonstrated by computational experiments to be practical for

reasonably large problems (Konno and Kuno 1995, p. 370). Algorithms proposed by

Konno, Kuno, and their associates have been tested on randomly generated problems and

the results reported. However, computational results have not been reported by most of

the other researchers and therefore their methods were not included in the review.

Since the publication of the review by Konno and Kuno, two more multiplicative programming problems have been discussed in the literature. The first problem adds a convex function to the objective of problem (LMP2) to obtain the problem

\[
\text{(CLMP)} \qquad \min\ f(x) = g(x) + (\langle c^1, x\rangle + d_1)(\langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D,
\]

where g : R^n → R is a twice differentiable convex function and c^i, d_i, i = 1, 2, and D are defined as in problem (LMP2). The second problem adds a convex function to problem (CMP) to obtain the problem

\[
\text{(CCMP)} \qquad \min\ f(x) = f_0(x) + \prod_{j=1}^{p} f_j(x), \quad \text{s.t.}\ x \in X,
\]

where f_0 : R^n → R is a convex function that satisfies f_0(x) > 0 for all x ∈ X and f_j, j = 1, 2, ..., p, and X are defined as in problem (CMP).








The emphasis of this review will be on optimization problems in which a product

of functions appears in the objective function. Optimization problems with objective

functions that are comprised of a summation of a function and the product of functions

are also included in the review. Methods proposed for solving these problems may be

adapted to solve a problem whose objective function is strictly a product of functions by

setting the added function to the null function. The functions that appear in the objective

function will be either convex or linear functions since to date these are the only

multiplicative programming problems to appear in the literature. In this review we will

not consider optimization problems in which a product of functions appears in the

constraint set.

Like the review of Konno and Kuno (1995), this literature review is organized

based on whether the problem data are linear or nonlinear and on the number of functions

that appear in the objective function. It is divided into the following four sections. Section

2.2 reviews the methods proposed to solve problems (LMP2), (GLMP), and (CLMP).

Section 2.3 reviews the methods to solve problem (LMP) that are extensions of methods

for problem (LMP2). Section 2.4 reviews the methods to solve problems (CMP),

(GCMP), and (CCMP). Section 2.5 reviews the methods to solve problem (LMP) as a

concave minimization problem.

The rationale for organizing the literature review in this way is as follows.

Historically, the first algorithms for solving multiplicative programming problems were

specifically proposed for solving problem (LMP2). Problems (GLMP) and (CLMP) are

grouped with problem (LMP2) since they were conceived as extensions of that problem.

Several of the algorithms proposed for solving problem (LMP2) can be extended to solve








the problem (LMP), since they do not depend upon having only two functions in the

product term of the objective function. Problems (LMP2), (LMP), (GLMP), and (CLMP)

contain linear functions and polyhedral feasible regions. Algorithms for solving these

problems are implemented with the aid of the simplex method, which is used to solve

linear programming subproblems. The problems (CMP), (GCMP), and (CCMP) contain

nonlinear data and must rely on other optimization methods to solve nonlinear convex

programming problems. The latter three problems are therefore placed in a separate

group. Problems (GCMP) and (CCMP) are included in the group with problem (CMP)

because only one article addresses each problem, and they were conceived as extensions

of problem (CMP). Finally, two articles appeared in the literature that proposed solving

problem (LMP) as a concave minimization problem using techniques that the authors had

previously developed.

Table 2.1 gives a summary of the multiplicative programming problems

considered in this literature review along with the assumptions placed on the feasible

region and the objective function of each problem.

2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP)

The methods for solving problem (LMP2), (GLMP), and (CLMP) are further

divided into four categories. In the first category are those methods that analyze problem

(LMP2) as a special case of quadratic programming. In the second category are

algorithms that analyze problem (LMP2) by searching the outcome set. In the third

category are the algorithms that solve an easier parametric programming problem rather

than directly solving problems (LMP2), (GLMP), and (CLMP). In the last category are









Table 2.1. Summary of Multiplicative Program Types and Assumptions on Problems

Problem (LMP2)
   Feasible region: D is a bounded polyhedron.
   Objective function: (⟨c^1, x⟩ + d_1)(⟨c^2, x⟩ + d_2)
   Assumptions: ⟨c^i, x⟩ + d_i > 0, i = 1, 2, for all x ∈ D.

Problem (GLMP)
   Feasible region: D is a bounded polyhedron.
   Objective function: ⟨c^0, x⟩ + (⟨c^1, x⟩ + d_1)(⟨c^2, x⟩ + d_2)
   Assumptions: ⟨c^0, x⟩ > 0 and ⟨c^i, x⟩ + d_i > 0, i = 1, 2, for all x ∈ D.

Problem (CLMP)
   Feasible region: D is a bounded polyhedron.
   Objective function: g(x) + (⟨c^1, x⟩ + d_1)(⟨c^2, x⟩ + d_2)
   Assumptions: g : R^n → R is a twice differentiable convex function and ⟨c^i, x⟩ + d_i > 0, i = 1, 2, for all x ∈ D.

Problem (LMP)
   Feasible region: D is a bounded polyhedron.
   Objective function: ∏_{i=1}^{p} [⟨c^i, x⟩ + d_i]
   Assumptions: ⟨c^i, x⟩ + d_i > 0, i = 1, 2, ..., p, for all x ∈ D.

Problem (CMP)
   Feasible region: X is a compact convex set.
   Objective function: ∏_{j=1}^{p} f_j(x)
   Assumptions: for each j = 1, 2, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

Problem (GCMP)
   Feasible region: X is a compact convex set.
   Objective function: f_0(x) + ∑_{j=1}^{q} f_{2j-1}(x) f_{2j}(x)
   Assumptions: for each j = 0, 1, ..., 2q, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

Problem (CCMP)
   Feasible region: X is a compact convex set.
   Objective function: f_0(x) + ∏_{j=1}^{p} f_j(x)
   Assumptions: for each j = 0, 1, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.








two algorithms that solve problem (LMP2) based on the method of polyhedral

annexation.

2.2.1. Methods Based on Quadratic Programming

Since the objective function of problem (LMP2) can be expressed as

\[
f(x) = (\langle c^1, x\rangle + d_1)(\langle c^2, x\rangle + d_2) = \tfrac{1}{2} x^T Q x + r^T x + d_1 d_2,
\]

where r ∈ R^n and Q is a real symmetric n × n matrix, problem (LMP2) is a special class of quadratic programming. Swarup (1966a and 1966b) was the first researcher to analyze problem (LMP2) in this way, but he did not propose any exact solution algorithms. His two articles are included in the literature review for completeness. Pardalos (1990) also analyzed problem (LMP2) in this way, and he proposed an exact global solution algorithm.
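Expanding the product shows that Q = c^1 (c^2)^T + c^2 (c^1)^T and r = d_2 c^1 + d_1 c^2. A quick numerical check of the identity with hypothetical data:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    c1, c2 = rng.standard_normal(n), rng.standard_normal(n)
    d1, d2 = 0.7, 1.3
    x = rng.standard_normal(n)

    Q = np.outer(c1, c2) + np.outer(c2, c1)   # real symmetric n x n matrix
    r = d2 * c1 + d1 * c2

    lhs = (c1 @ x + d1) * (c2 @ x + d2)
    rhs = 0.5 * x @ Q @ x + r @ x + d1 * d2
    print(np.isclose(lhs, rhs))               # True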

Swarup (1966a) showed that if both linear functions ⟨c^i, x⟩ + d_i, i = 1, 2, are positive over the feasible region D, the objective function f is quasiconcave over D. It is well known that, for any local minimizer of a quasiconcave function over a polytope, there generally exists an extreme point local minimizer over the polytope that has the same function value. Swarup proposed a simplex-based method for finding such a local optimal solution. The key to the algorithm is a test that determines if entering a given nonbasic variable into the current simplex basis will lower the objective function value. A simplex basis of a local optimal solution can be reached by beginning at any feasible basis and moving through a sequence of simplex tableaux by pivoting in qualifying nonbasic variables until none remain. Once a local optimal solution is found, the algorithm stops. No information is available to either certify the global optimality of the solution or to determine how to proceed to an improved solution.

In another work, Swarup (1966b) formulated the following parametric linear program by introducing an auxiliary variable ξ and moving one of the linear functions into the constraint set:

\[
\text{(MP1)} \qquad \min\ F(x; \xi) = \langle c^1, x\rangle + d_1, \quad \text{s.t.}\ x \in D,\ \langle c^2, x\rangle + d_2 = \xi,\ \xi \ge 0.
\]

Since ⟨c^2, x⟩ + d_2 appears in the constraint set, dual pricing information is available to determine the value of ⟨c^1, x⟩ + d_1 as ξ is set to achievable values of ⟨c^2, x⟩ + d_2 over D. Swarup derived a test that uses this information to determine when ξ is set to a level that corresponds to a local optimal solution. All local optimal solutions can then theoretically be found by parametrically solving problem (MP1) over all achievable values of ξ. A global optimal solution x* of problem (LMP2) can then be found by identifying a global solution (x*, ξ*) of problem (MP1).

Pardalos (1990) observed that if c^1 and c^2 are linearly independent, then the

Hessian matrix Q of the objective function of problem (LMP2) has one positive

eigenvalue and one negative eigenvalue, and the remaining eigenvalues are equal to zero.

By applying the spectral decomposition theorem of linear algebra, the objective function

can be rewritten in terms of two variables. The problem can then be solved by examining

the vertices of an orthogonal projection of the feasible region D into a two-dimensional







polytope in the space of the two variables used in the rewritten objective function.

Pardalos (1990) proposed an algorithm that enumerates all vertices of the two-

dimensional polytope until an optimal vertex is found. The algorithm may require an

exponential number of steps, but its average computational time complexity is bounded

by a polynomial.
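A short sketch of this observation with hypothetical data: the rank-two matrix Q has exactly one positive and one negative eigenvalue, so the quadratic part of the objective depends on x only through its two-dimensional projection onto the corresponding eigenvectors:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 8
    c1, c2 = rng.standard_normal(n), rng.standard_normal(n)  # linearly independent (a.s.)

    Q = np.outer(c1, c2) + np.outer(c2, c1)
    eigvals, eigvecs = np.linalg.eigh(Q)
    print("eigenvalues:", eigvals.round(4))
    # One strictly positive, one strictly negative, the remaining n - 2 numerically zero.

    U = eigvecs[:, np.abs(eigvals) > 1e-9]    # n x 2 basis for the relevant subspace
    x = rng.standard_normal(n)
    print("2-D coordinates of x:", (U.T @ x).round(4))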

2.2.2. Methods Based on Searching the Outcome Set

The objective function of problem (LMP2) can be expressed as the composite ψ∘φ of two mappings, where, for each x ∈ R^n, φ(x) = (⟨c^1, x⟩ + d_1, ⟨c^2, x⟩ + d_2), and, for each y ∈ R^2, ψ(y) = y_1 y_2. The mapping φ maps each point x ∈ D into a point y = (y_1, y_2), where y_1 := ⟨c^1, x⟩ + d_1 and y_2 := ⟨c^2, x⟩ + d_2. Since y_1 and y_2 are linear functions, φ is a linear transformation and hence the linear structure of D is preserved (Rockafellar 1970). The image of D under φ is then the compact, convex polyhedron

\[
Y := \{\, y \in R^2 \mid y_1 = \langle c^1, x\rangle + d_1,\ y_2 = \langle c^2, x\rangle + d_2 \ \text{for some}\ x \in D \,\},
\]

called the outcome polyhedron. A global optimal solution of problem (LMP2) can be found by finding a point of Y that globally minimizes the product y_1 y_2. Since the search is conducted in Y ⊆ R^2 rather than R^n, it may be possible to economize on the computational effort required to solve problem (LMP2).

Three articles, Aneja, Aggarwal, and Nair (1984), Falk and Palocsay (1994), and Thoai (1991), proposed algorithms for solving problem (LMP2) based on searching the outcome set using outer approximation techniques. Outer approximation is a global optimization technique that uses a decreasing sequence of simple sets to approximate the feasible region. The approximations are used in a series of optimization problems that are easier to solve than the original problem. These optimization problems are sequentially solved until a global optimal solution to the original problem is found. The technique has been very useful in solving global optimization problems in which the feasible region Z is a polytope and the global optimal solution is known to be an extreme point of Z. In this form of outer approximation, the algorithm begins by finding a simple polytope P_0 ⊇ Z with an easily defined inequality representation and an easily calculated set of vertices. A series of algorithmic iterations follows that builds a sequence of decreasing polytopes P_0 ⊇ P_1 ⊇ ... ⊇ Z in which one polytope is generated in each iteration. In an iteration k of the algorithm, the original objective function is evaluated at the extreme points of P_k to find an optimal solution v^k. If v^k is an extreme point of Z, then v^k is a global optimal solution to the original problem. Otherwise, a portion of P_k \ Z is cut off to form P_{k+1}. The point v^k is part of the region cut off; i.e., v^k is not included in the polytope P_{k+1}. The cut is made by adding a constraint called a cutting plane to the constraint set that defines P_k. The cutting plane adds additional vertices to P_{k+1} that were not present in P_k, and therefore they must be calculated.
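A stylized sketch of this loop in R^2 with hypothetical data. Here the target polytope Z is given explicitly by inequalities G y <= h, the vertices of each outer polytope are enumerated by intersecting constraint pairs, and the cut added in each iteration is simply the most violated inequality describing Z (the actual algorithms construct the cut from a subproblem instead):

    import itertools
    import numpy as np

    # Hypothetical target polytope Z = {y : G y <= h} in the positive quadrant.
    G = np.array([[-1.0, 0.0], [0.0, -1.0], [-2.0, -1.0], [1.0, 1.0]])
    h = np.array([-0.5, -0.5, -3.0, 6.0])
    obj = lambda y: y[0] * y[1]

    def vertices(A_in, b_in):
        """Enumerate the vertices of {y : A_in y <= b_in} from constraint pairs."""
        V = []
        for i, j in itertools.combinations(range(len(b_in)), 2):
            M = A_in[[i, j]]
            if abs(np.linalg.det(M)) < 1e-9:
                continue
            y = np.linalg.solve(M, b_in[[i, j]])
            if np.all(A_in @ y <= b_in + 1e-9):
                V.append(y)
        return V

    # P_0: a box known to contain Z.
    P_A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    P_b = np.array([0.0, 0.0, 10.0, 10.0])

    while True:
        v = min(vertices(P_A, P_b), key=obj)      # best vertex of the current P_k
        viol = G @ v - h
        k = int(np.argmax(viol))
        if viol[k] <= 1e-9:                       # v lies in Z: globally optimal
            print("optimal vertex:", v, "objective:", obj(v))
            break
        P_A = np.vstack([P_A, G[k]])              # cut v off with the violated inequality
        P_b = np.append(P_b, h[k])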

Aneja, Aggarwal, and Nair (1984) proposed an algorithm that examines the solutions associated with the bicriterion programming problem

\[
\text{(BCP)} \qquad \mathrm{VMIN}\ (y_1 = \langle c^1, x\rangle + d_1,\ y_2 = \langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D.
\]

The intent of problem (BCP) is to simultaneously minimize the two criterion functions y_1 and y_2. Conflicts usually exist between the two criterion functions that prevent a single point of D from simultaneously minimizing both functions. The usual notion of an optimal solution used in single objective linear programming is replaced by the concept of efficient solutions when discussing the solutions of problem (BCP). A solution x̄ is an efficient solution of problem (BCP) if x̄ ∈ D and, whenever ⟨c^i, x⟩ + d_i ≤ ⟨c^i, x̄⟩ + d_i, i = 1, 2, for some x ∈ D, then ⟨c^i, x⟩ + d_i = ⟨c^i, x̄⟩ + d_i, i = 1, 2. The set of efficient points of D is mapped by φ into a set of points on the surface of Y called the efficient frontier.

Aneja, Aggarwal, and Nair (1984) showed that a global optimal solution of problem (LMP2) is attained at an efficient extreme point x* of D that is mapped by φ into an extreme point (y_1*, y_2*) of the efficient frontier of Y. Their algorithm searches the efficient frontier for an extreme point that minimizes y_1 y_2 by using a modified outer approximation technique. Initially the legs of a right-angle triangle form the first approximation of the efficient frontier. The "rise" and the "run" values of the slope of the hypotenuse are two positive scalar values. The functions y_1 = ⟨c^1, x⟩ + d_1 and y_2 = ⟨c^2, x⟩ + d_2 are multiplied by these values and then summed to form a single linear objective function. This objective function is then minimized over the feasible region D. It is well known that the minimizer x̂ of such a linear program is an efficient extreme point of D (Steuer 1986). The solution to the linear program yields another point (ŷ_1, ŷ_2) on the efficient frontier that is used to subdivide the initial triangle into two triangles. The algorithm is then repeated using each of the smaller triangles. The algorithm terminates when there are no more extreme points of the efficient frontier that need to be searched.

In the algorithm of Aneja, Aggarwal, and Nair (1984), a new vertex must be calculated for each triangle. This is easily done by solving two systems of two equations in the unknowns y_1 and y_2. This special technique, however, cannot be easily extended to handle cases where p > 2.

Falk and Palocsay (1994) also proposed a solution algorithm that searches among the extreme points of Y using a modified outer approximation technique. In the first phase of the algorithm, the two linear programs

\[
l_1 = \min_{x \in D}\ \langle c^1, x\rangle + d_1 \qquad \text{and} \qquad l_2 = \min_{x \in D}\ \langle c^2, x\rangle + d_2
\]

are solved for optimal solutions x^1 and x^2, respectively. Two initial vertices y^1 and y^2 of Y are then

\[
y^1 = (\langle c^1, x^1\rangle + d_1,\ \langle c^2, x^1\rangle + d_2) \qquad \text{and} \qquad y^2 = (\langle c^1, x^2\rangle + d_1,\ \langle c^2, x^2\rangle + d_2).
\]

An initial polytope in outcome-space containing an optimal solution for the problem

\[
\text{(YP)} \qquad \min\ y_1 y_2, \quad \text{s.t.}\ y \in Y,
\]

is given by y_1 ≥ l_1, y_2 ≥ l_2, and an inequality a_1 y_1 + a_2 y_2 ≤ 1, where a_1 and a_2 are determined such that a_1 y_1 + a_2 y_2 = 1 passes through the point

\[
\hat{y} = \operatorname*{argmin}_{i=1,2}\ \psi(y_1^i, y_2^i).
\]

In each iteration of the algorithm, values for a_1 and a_2 are updated and a linear program of the form

\[
\text{(YLP)} \qquad \min\ a_1 y_1 + a_2 y_2, \quad \text{s.t.}\ y \in Y,
\]

is solved to remove portions of the initial polytope from the search for an optimal solution for problem (YP). The new vertices generated at each iteration are easily calculated since the isovalue contours of problem (YLP) are linear. The algorithm terminates when the optimal value of problem (YLP) is one.
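A sketch of the first phase with hypothetical data: two linear programs give the componentwise minima l_1, l_2 and the two initial outcome vertices y^1 and y^2:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical instance of (LMP2): D = {x : A x >= b_low, 0 <= x <= 10}.
    rng = np.random.default_rng(3)
    m, n = 10, 6
    A = rng.uniform(0.2, 1.0, (m, n))
    b_low = np.full(m, 5.0)
    c1, c2 = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
    d1, d2 = 1.0, 2.0

    # Phase 1: minimize each linear factor over D (A_ub = -A encodes A x >= b_low).
    res1 = linprog(c1, A_ub=-A, b_ub=-b_low, bounds=(0, 10))
    res2 = linprog(c2, A_ub=-A, b_ub=-b_low, bounds=(0, 10))
    l1, l2 = res1.fun + d1, res2.fun + d2
    x1, x2 = res1.x, res2.x

    # Initial outcome-space vertices y^1 and y^2.
    y1 = np.array([c1 @ x1 + d1, c2 @ x1 + d2])
    y2 = np.array([c1 @ x2 + d1, c2 @ x2 + d2])
    print("(l1, l2) =", (round(l1, 3), round(l2, 3)))
    print("y^1 =", y1.round(3), " y^2 =", y2.round(3))
    # l1 * l2 underestimates the optimal product, while min(y1.prod(), y2.prod())
    # gives the value of the initial incumbent.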

The algorithm proposed by Thoai (1991) for solving problem (LMP2) uses an outer approximation technique that begins by enclosing the outcome set Y in a rectangle P_0. In an iteration k of the algorithm, the extreme point (v_1, v_2) of the outer approximation that yields the lowest value of the product y_1 y_2 is found. A linear program is then used to determine if the extreme point (v_1, v_2) maps to a feasible point x̄ of D. If not, information is obtained from the linear program to generate a cutting plane constraint that slices off the extreme point (v_1, v_2) from the polytope P_k. The new vertices generated by the cut are then calculated using a conventional approach (see Horst, Pardalos, and Thoai 1995 or Horst and Tuy 1993). Since the method of determining these new vertices does not depend on the fact that the dimension of the outcome set is two, Thoai's algorithm can be extended to handle cases where p > 2.

In the algorithms of Aneja, Aggarwal, and Nair (1984) and Thoai (1991), the only

variations in the linear programs used in successive iterations involve changes in

objective function coefficients. The authors gain some computational efficiency by

restarting the simplex method at the optimal solution of the previous iteration. Only a few

simplex pivots are then generally needed to produce a new optimal solution.






2.2.3. Methods Based on Solving a Parametric Master Problem

The difficulty in solving problem (LMP2) is caused by the product form of the objective function. Konno and Kuno (1992) added a parameter ξ and formed the following problem, which they called the master problem:

\[
\text{(MP2)} \qquad \min\ F(x; \xi) = \xi(\langle c^1, x\rangle + d_1) + \xi^{-1}(\langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D,\ \xi > 0.
\]

Notice that for a fixed value ξ' of ξ, problem (MP2) is a linear programming problem. To solve problem (MP2), Konno and Kuno proposed using a parametric objective function simplex method to find the critical values of ξ at which new bases become optimal. The values of the objective function F are then evaluated at these bases. A global optimal solution (x*, ξ*) of problem (MP2) is found by choosing the basis that minimizes F over these values. Konno and Kuno (1992) showed that if (x*, ξ*) is an optimal solution of problem (MP2), then x* is a global optimal solution of problem (LMP2).

Konno and Kuno tested this algorithm on randomly generated problems (LMP2)

with nonnegative problem data that ranged in size from (m, n) = (30, 50) to (220, 200).

Their computational experiments showed that the amount of computational time needed

to solve problem (LMP2) is not much different from that required to solve linear

programs of the same size.
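A simplified sketch of this approach with hypothetical data: instead of the parametric simplex sweep through the exact critical values of ξ, a coarse grid of ξ values is used, each fixed ξ yielding one linear program, and the incumbent is the solution with the smallest product:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical (LMP2) instance: D = {x : A x >= b_low, 0 <= x <= 10}.
    rng = np.random.default_rng(4)
    m, n = 8, 5
    A = rng.uniform(0.2, 1.0, (m, n))
    b_low = np.full(m, 4.0)
    c1, c2 = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
    d1, d2 = 1.0, 1.0
    product = lambda x: (c1 @ x + d1) * (c2 @ x + d2)

    best_x, best_val = None, np.inf
    for xi in np.geomspace(0.05, 20.0, 60):   # grid standing in for the critical values
        # For fixed xi, F(x; xi) = xi*(c1.x + d1) + (1/xi)*(c2.x + d2) is linear in x.
        res = linprog(xi * c1 + (1.0 / xi) * c2, A_ub=-A, b_ub=-b_low, bounds=(0, 10))
        if res.success and product(res.x) < best_val:
            best_x, best_val = res.x, product(res.x)

    print("best product found:", round(best_val, 4))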

In Konno and Kuno (1995) the authors slightly simplified the above parametric

method by redefining the auxiliary parameter so that convex combinations of the two

linear functions are used in the objective function of problem (MP2). This modification








makes it easier to find critical parameter values, since the interval [0, 1] over which the

auxiliary parameter ranges is bounded. The rest of the method remained the same.

Although Konno and Kuno (1992) did not explicitly say it, their algorithm can be viewed as searching the efficient extreme points of problem (BCP) for one that is a global optimal solution of problem (LMP2). Notice that for a sufficiently small value ξ̄, an extreme point optimal solution (x̄, ξ̄) to problem (MP2) coincides with an optimal solution x̄ of the linear program min {⟨c^2, x⟩ + d_2 | x ∈ D}. Similarly, for a sufficiently large value ξ̂, an extreme point optimal solution (x̂, ξ̂) coincides with an optimal solution x̂ of the linear program min {⟨c^1, x⟩ + d_1 | x ∈ D}. For any fixed value ξ > 0, the objective function F(x; ξ) is a composite objective function formed by multiplying the two linear functions by positive values and summing the result. It is well known that any extreme point minimizer of such a composite objective function over the feasible region D is an efficient extreme point of the problem (BCP) (Steuer 1986). The efficient extreme points of problem (BCP) are found by solving linear programs for parameter values between ξ̄ and ξ̂. As Aneja, Aggarwal, and Nair (1984) have shown, the global solution lies at an efficient extreme point of D in problem (BCP).

A disadvantage of the algorithm of Konno and Kuno is that it may require many pivots to solve problem (MP2) for all possible parameter values. This will especially be true if there is great conflict between the two linear functions of the objective function. If, for example, c^2 = -c^1, then every extreme point of D is an efficient extreme point of problem (BCP). Since the number of extreme points of the polytope D grows exponentially with the size of D, the number of optimal solutions to problem (MP2) over the entire range of parameter values grows exponentially with the size of D and is not bounded by a polynomial. Konno and Kuno in fact observed that the computational time increased as the number of local minima increased. An additional disadvantage of the Konno and Kuno algorithm is that many of the pivots performed will be unnecessary when they are to bases that do not improve on a previously found solution.

In another paper, Konno and Kuno (1990) added a convex function to the

objective function of problem (LMP2) to obtain the problem (CLMP). With this addition,

the objective function may no longer be quasiconcave and therefore, the global minimum

may not necessarily be attained at an extreme point of the feasible region D.

To solve problem (CLMP), Konno and Kuno (1990) proposed an algorithm that

solves a parametric master problem which, for a fixed parameter value, is a nonlinear

convex programming problem. The algorithm involves solving this master problem a

finite number of times, once for each of a finite number of prechosen values for the

parameter. A troublesome aspect of the algorithm is that it is difficult to determine the

proper parameter values to choose. The authors suggested choosing values for the

parameter that are equally spaced in the interval of possible parameter values and solving

the resulting master problems to determine a neighborhood containing a globally optimal

solution to problem (CLMP). A local search is then done in that neighborhood for a

globally optimal solution using the Karush-Kuhn-Tucker conditions. Care must be taken, however, to define the spacing between the points to be small enough that a global optimal solution is not missed.






The difficulty that Konno and Kuno (1990) encountered in their method in determining parameter values can be eliminated if we assume that the convex function g in the objective function of problem (CLMP) is a linear function. Problem (GLMP) is obtained by making this replacement. Konno, Yajima, and Matsui (1991) considered problem (GLMP), but they assumed that d_1 and d_2 are zero. To solve problem (GLMP), Konno, Yajima, and Matsui formulated the master problem

\[
\text{(MP3)} \qquad \min\ F(x; \xi) = \langle c^0, x\rangle + \xi\langle c^2, x\rangle, \quad \text{s.t.}\ x \in D,\ \langle c^1, x\rangle = \xi.
\]

Notice that the parameter ξ appears in both the objective function and in the right-hand side of a constraint.

Konno, Yajima, and Matsui (1991) showed that x* is a global solution of problem (GLMP) if (x*, ξ*) is an optimal solution of problem (MP3). Schaible and Sodini (1995) used problem (MP3) to show that a global optimal solution of problem (GLMP) lies on an edge of D.

Konno, Yajima, and Matsui (1991) proposed a parametric simplex algorithm that includes a right-hand side analysis and an objective function analysis to determine intervals of parameter values for which bases remain both feasible and optimal. The parametric analysis sweeps through parameter values from ξ_min = min {⟨c^1, x⟩ | x ∈ D} to ξ_max = max {⟨c^1, x⟩ | x ∈ D}. The objective function F is then minimized over each of the intervals.






Konno, Yajima, and Matsui (1991) tested their algorithm on randomly generated

problems of up to 350 constraints and 300 variables. They found that the problems can be

solved in much the same computational time as that of solving linear programs of equal

size.

The algorithm of Konno, Yajima, and Matsui (1991) suffers from the same

disadvantages as the algorithm of Konno and Kuno (1992). In particular, its efficiency

depends on the number of pivots performed to solve problem (MP3) for all possible

parameter values. Also many of the pivots performed will be unnecessary when they yield

bases that do not improve on a previously found solution.

Schaible and Sodini (1995) improved the algorithm of Konno, Yajima, and Matsui (1991). From a given simplex tableau for problem (MP3), Schaible and Sodini used parametric analysis to derive a formula that calculates the value of the objective function F as the right-hand side ξ of the constraint ⟨c^1, x⟩ = ξ is set to increasing values. As ξ increases, parametric right-hand-side analysis calculates new values for the basic variables. Schaible and Sodini then derived optimality conditions that detect when the parameter ξ is set to a value such that, from an optimal solution (x*, ξ*) of problem (MP3), one obtains a local minimum x* of problem (GLMP). By applying these optimality conditions, Schaible and Sodini were able to develop a simplex-based algorithm that solves problem (MP3) in a finite number of primal and/or dual simplex iterations.

The algorithm proposed by Schaible and Sodini (1995) has three advantages over the algorithm of Konno, Yajima, and Matsui (1991): (1) it may terminate before the maximum possible parameter value ξ_max has been reached; (2) it is more efficient in that it may skip over local optimal solutions that do not improve the objective function value; (3) it can be used even when the feasible region is unbounded, and it can detect when problem (GLMP) is unbounded from below.

Muu and Tam (1992) also considered problem (CLMP), but in their work, the

feasible region D is relaxed to a compact convex set. They seem to be the only

researchers to have considered this generalization of problem (CLMP). The authors

however tested their algorithm using a polytope for the feasible region.

Muu and Tam (1992) formulated the parametric master problem

\[
\text{(MP3')} \qquad \min\ F(x; \xi) = g(x) + \xi(\langle c^2, x\rangle + d_2), \quad \text{s.t.}\ x \in D,\ \langle c^1, x\rangle + d_1 = \xi,\ \xi \ge 0.
\]

They proposed a branch and bound algorithm to solve problem (MP3'). Branch and

bound is a technique commonly used by algorithms in global optimization. Branching

refers to the successive partitioning of the feasible region and bounding refers to the

computation of lower and upper bounds on the global optimum over the partitions.

Partitions of the feasible region that produce a lower bound on the objective function that

exceeds the best upper bound found so far by the algorithm are eliminated from further

consideration. Such partitions are said to be fathomed. A branch and bound algorithm

terminates when all of the partitions have been fathomed.

In the algorithm of Muu and Tam (1992), partitions of the feasible region are constructed by restricting the value of ⟨c^1, x⟩ + d_1 to values within an interval. The algorithm begins by finding an interval I_0 := [ξ_1, ξ_2] of achievable values of ⟨c^1, x⟩ + d_1 by solving the two convex programs ξ_1 := min {⟨c^1, x⟩ + d_1 | x ∈ D} and ξ_2 := max {⟨c^1, x⟩ + d_1 | x ∈ D}. Optimal solutions u^0 and v^0 are then obtained for the two convex programs

\[
\beta(\xi_i) := \min\ \{\, \xi_i(\langle c^2, x\rangle + d_2) + g(x) \mid x \in D,\ \langle c^1, x\rangle + d_1 \in I_0 \,\}, \qquad i = 1, 2.
\]

A lower bound β(I_0) on the objective function F of problem (MP3') over the interval I_0 is found by selecting β(I_0) := min {β(ξ_1), β(ξ_2)}. An upper bound α_0 on F is obtained by selecting α_0 := min {f(u^0), f(v^0)}. The interval I_0 is next bisected and the procedure repeated using the two subintervals. A subinterval that produces a lower bound that exceeds the current upper bound is eliminated from further consideration; i.e., that subinterval is considered to be fathomed. The procedure continues bisecting intervals I_k, generating a sequence of solutions {x^k} that converges to a limit point x* that is a global optimal solution. Computational experiments on problems of up to (m, n) = (30, 200) showed that the algorithm is very efficient when the problem data c^i and d_i are positive.
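A sketch in the spirit of this interval branch and bound, under simplifying assumptions: the data are hypothetical, g is taken to be linear so that every subproblem is a linear program, and the lower bound over a slice a <= <c^1,x> + d_1 <= b uses the fact that the second factor is positive, so g.x + a*(<c^2,x> + d_2) underestimates f there:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical (CLMP)-style instance with linear g:
    # minimize f(x) = g.x + (c1.x + d1)(c2.x + d2) over D = {x : A x >= b_low, 0 <= x <= 10}.
    rng = np.random.default_rng(5)
    m, n = 8, 5
    A = rng.uniform(0.2, 1.0, (m, n))
    b_low = np.full(m, 4.0)
    g = rng.uniform(-0.5, 0.5, n)
    c1, c2 = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
    d1, d2 = 1.0, 1.0
    f = lambda x: g @ x + (c1 @ x + d1) * (c2 @ x + d2)

    def lp(c, extra_A=None, extra_b=None):
        A_ub = -A if extra_A is None else np.vstack([-A, extra_A])
        b_ub = -b_low if extra_b is None else np.concatenate([-b_low, extra_b])
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 10))

    # I_0 = [xi_1, xi_2]: achievable values of xi = c1.x + d1 over D.
    xi_1 = lp(c1).fun + d1
    xi_2 = -lp(-c1).fun + d1

    best, queue = np.inf, [(xi_1, xi_2)]
    while queue:
        a, b = queue.pop()
        # Lower bound over the slice {x in D : a <= c1.x + d1 <= b}.
        res = lp(g + a * c2, extra_A=np.vstack([c1, -c1]),
                 extra_b=np.array([b - d1, -(a - d1)]))
        if not res.success:
            continue                        # empty slice: fathom it
        lb = res.fun + a * d2
        best = min(best, f(res.x))          # the subproblem minimizer is an incumbent
        if lb < best - 1e-6 and b - a > 1e-4:
            mid = 0.5 * (a + b)
            queue += [(a, mid), (mid, b)]   # branch by bisecting the interval
    print("global value (to tolerance):", round(best, 4))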

2.2.4. Methods Based on Polyhedral Annexation

A limitation of conventional optimization methods is that they can become trapped at a local minimum, or even a stationary point, if they are applied to a global optimization problem; see, e.g., the algorithms proposed by Swarup (1966a, 1966b). The central problem of a global optimization method then is to overcome this limitation by providing a certification test for global optimality and, if a point is not globally optimal, determining how to move to a better solution. Tuy (1991) called this the subproblem of "transcending the incumbent," where the incumbent is the best feasible solution found so far by an algorithm.

Let f be the objective function of problem (LMP2), and let x̄ be a vertex of D that represents the incumbent solution for this problem. Then, from Tuy (1991), to transcend the incumbent one must find a point x ∈ D such that f(x) < f(x̄) or else establish that no such point exists, i.e., that x̄ is a global optimal solution for problem (LMP2).

Let G := {x ∈ S | f(x) ≥ f(x̄)}, where S is a convex set containing D. The problem of transcending the incumbent can then be restated as the following problem:

(GCP) Check whether D ⊆ G and, if not, find a point x ∈ D \ G.

Problem (GCP) is known as the geometric complementarity problem.

Tuy (1990) developed the method of polyhedral annexation to solve problem (GCP). In polyhedral annexation a sequence of polytopes P_1 ⊆ P_2 ⊆ ... ⊆ P_k ⊆ ... is built by adding a vertex to the polytope P_{k-1} of the previous iteration in such a way that a vertex of D is annexed into the new polytope P_k. The sequence P_1 ∩ D, P_2 ∩ D, ... forms an expanding inner approximation of D. When a polytope P_h ⊇ D is found, all of the extreme points of D have been searched and the algorithm terminates. Associated with the sequence of polytopes P_1 ⊆ P_2 ⊆ ... ⊆ P_k ⊆ ... is the sequence of their polars P_1* ⊇ P_2* ⊇ ... ⊇ P_k* ⊇ ..., where the polar E* of a convex set E in R^n is defined as

\[
E^* := \{\, y \in R^n \mid \langle y, x\rangle \le 1 \ \text{for all}\ x \in E \,\}.
\]

A dual correspondence exists between the facets of a polytope P_k and the vertices of its polar P_k*. The subproblem of determining the inequality representation of P_{k+1} after a new vertex has been added can then be solved by solving the easier problem of computing the vertices of P_{k+1}*. The termination condition P_h ⊇ D has the corresponding condition P_h* ⊆ D*. For a more detailed description of polyhedral annexation, see the chapters on inner approximation in Horst, Pardalos, and Thoai (1995) or in Horst and Tuy (1993).

Tuy and Tam (1992) proposed two algorithms that are derived using the polyhedral annexation method with a dualization and dimension reduction technique developed by Tuy (1991). Dualization refers to solving the original problem by solving the dual problem of generating a sequence of polars until a polar P_h* ⊆ D* is found. The key to the dimension reduction technique is the introduction of a cone into problem (GCP). Tuy and Tam (1992) assumed that c^1 and c^2 are linearly independent vectors and then formed the cone K := {x ∈ R^n | ⟨c^i, x⟩ ≥ 0, i = 1, 2}. Cone K is of interest since if x̄ ∈ D is an incumbent solution, then, for any x ∈ (x̄ + K), f(x) ≥ f(x̄). In other words, cone K identifies points in R^n that can do no better than the incumbent solution x̄. Computational effort might be saved using cone K, since a part of the feasible region D can be eliminated from further consideration and the search narrowed to the remaining portion of D.

The first algorithm proposed by Tuy and Tam (1992) solves problem (LMP2) by solving problem (GCP) through the dualization process of generating a sequence of polars until a polar P_h* ⊆ D* is found. Tuy and Tam (1992) showed that the polar K* of cone K is explicitly given as K* = {y ∈ R^n | y = -t_1 c^1 - t_2 c^2 for some t_1 ≥ 0, t_2 ≥ 0}. Any vertex ȳ in a polar P_k* lies in the polar cone K*, and the multipliers t̄_1 and t̄_2 used to express ȳ are unique, since c^1 and c^2 are linearly independent vectors. Polar cone K* is used to solve the dual problem by building a collapsing sequence of polars P_1* ⊇ P_2* ⊇ ... ⊇ P_k* ⊇ ..., with each polar being an improved approximation of D*. The search is conducted in the two-dimensional space generated by c^1 and c^2 rather than in the original n-dimensional space. Solving the linear program

\[
\text{(LP}(\bar{t})\text{)} \qquad \max\ \{\, \langle -\bar{t}_1 c^1 - \bar{t}_2 c^2,\ x\rangle \mid x \in D \,\},
\]

where t̄_1 and t̄_2 are the multipliers used to express some vertex ȳ = -t̄_1 c^1 - t̄_2 c^2 of P_k*, tests for the termination condition P_k* ⊆ D*.

The second algorithm proposed by Tuy and Tam (1992) is motivated by the observation that, for a fixed value of $t = (t_1, t_2)$, problem (LP($t$)) is equivalent to the linear program

(LP($\alpha$))  $\max \{\langle -c^1 - \alpha(c^2 - c^1), x\rangle \mid x \in D\}$,

where $\alpha = t_2/(t_1 + t_2) \in [0, 1]$. The first algorithm thus reduces to solving a sequence of linear programs (LP($\alpha$)) for different values of the parameter $\alpha$. The second algorithm proposed by Tuy and Tam (1992) is to parametrically solve problem (LP($\alpha$)) for all of the critical values of $\alpha$ at which new bases become optimal. The objective function $f$ of problem (LMP2) is evaluated at each basis, and a global optimal solution is chosen from those bases. The second algorithm of Tuy and Tam (1992) is thus essentially the same as the parametric problem (MP2) approach used by Konno and Kuno (1992).
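To make the parametric idea concrete, the following sketch (ours, not from Tuy and Tam 1992) solves (LP($\alpha$)) on a grid of $\alpha$ values and keeps the basic optimal solution with the smallest product objective; a true parametric simplex implementation would instead pivot exactly at the critical values of $\alpha$. The problem data c1, c2, A, b are hypothetical placeholders, and SciPy's linprog is assumed to be available.

    import numpy as np
    from scipy.optimize import linprog

    def parametric_lmp2(c1, c2, A, b, grid=101):
        """Grid version of the parametric scheme for (LMP2): for sampled
        alpha in [0, 1], solve (LP(alpha)), i.e. minimize
        <c1 + alpha*(c2 - c1), x> over D = {x : Ax <= b}, then evaluate
        f(x) = <c1, x><c2, x> at each optimal basic solution."""
        best_x, best_val = None, np.inf
        for alpha in np.linspace(0.0, 1.0, grid):
            res = linprog(c1 + alpha * (c2 - c1), A_ub=A, b_ub=b,
                          bounds=[(None, None)] * len(c1), method="highs")
            if res.status == 0:
                val = (c1 @ res.x) * (c2 @ res.x)
                if val < best_val:
                    best_x, best_val = res.x, val
        return best_x, best_val

With a fine enough grid, this visits the same vertices that the exact parametric method generates, at the cost of redundant linear-programming solves.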








Tuy and Tam (1992) ran computational experiments using both the first

polyhedral annexation algorithm and the second parametric algorithm. Their results

showed that for solving problem (LMP2), the parametric algorithm performed better than

the polyhedral annexation algorithm. The polyhedral annexation algorithm is not as

efficient because more simplex pivots were required than for the parametric algorithm.

Tuy and Tam (1992) proposed an improved variant of the polyhedral annexation

algorithm that reduces the number of pivots and the number of objective function

evaluations. The authors observed that the improved algorithm may potentially be more

useful for a problem with an objective function that is difficult to evaluate. The

computational experiments run using the parametric algorithm on problems of up to (m,

n) = (30, 200) and positive problem data were in line with the results reported in Konno

and Kuno (1992).

2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP) when p ≥ 3

The polyhedral annexation method of Tuy and Tam (1992) and the outcome-space algorithms of Thoai (1991) and Falk and Palocsay (1994) can be extended to the more general problem (LMP) where $p \geq 3$. Although the algorithms remain unchanged, the subproblem of determining the new vertices becomes more difficult as the number of function terms in the objective function increases.

2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)

Relatively little work has been done in designing exact global solution algorithms

that address problems (CMP), (GCMP), and (CCMP). The algorithms that have been








proposed fall into two categories: (1) methods based on solving a reformulated problem

and (2) a method based on outer approximation.

2.4.1. Methods Based on Solving a Reformulated Problem

Konno and Kuno (1992) introduced problem (CMP) where p = 2 and formulated

a master problem by introducing a parameter into the original problem so as to separate the product of the two functions in the objective into a summation. This technique of embedding the original problem into a problem in a higher-dimensional space is similar to the one used by the authors in the same paper to solve problem (LMP2). At the time, Konno and Kuno

were not able to give an algorithm for solving the master problem. In Kuno and Konno

(1991) the authors proposed a branch and bound algorithm along with an underestimation

function to solve it. Computational results for problems of up to (m, n) = (200, 180)

indicated that the algorithm is efficient when the objective function is the product of a

linear function and a quadratic function and the feasible region is a polytope.

Kuno, Yajima, and Konno (1993) extended the parameterization technique of Kuno and Konno (1991) for problem (CMP) to handle cases where $p \geq 2$. They showed that a global optimal solution to problem (CMP) can be obtained by solving the equivalent problem

(MP4)  $\min_{\xi \in \Xi} \min_{x \in X} G(x; \xi) = \sum_{j=1}^{p} \xi_j f_j(x)$,

where $\Xi := \{\xi \in R^p \mid \prod_{j=1}^{p} \xi_j \geq 1,\; \xi \geq 0\}$. For a fixed $\xi \in \Xi$, let $x^*(\xi)$ denote an optimal solution of $\min_{x \in X} G(x; \xi) = \sum_{j=1}^{p} \xi_j f_j(x)$. Let $h : \Xi \to R$ be defined by $h(\xi) := G(x^*(\xi); \xi)$ for







any $\xi \in \Xi$. Solving problem (MP4) then reduces to solving the problem in $R^p$ given by

(MP4')  $\min h(\xi)$, s.t. $\xi \in \Xi$.

Kuno, Yajima, and Konno (1993) showed that $h$ is a concave function over $\Xi$, and therefore a global optimal solution of problem (MP4') exists on the boundary of $\Xi$. They

proposed an outer approximation method for solving problem (MP4') and tested their

algorithm against two subclasses of problem (CMP): (1) problem (LMP) and (2)

problems similar to those tested in Kuno and Konno (1991), in which the objective function

is the product of a linear and a quadratic function and the constraints are linear

inequalities. Computational experiments showed that the total computational time is

dominated by that needed for solving the convex minimization master problems for each

parameter value. The results also showed that the number of cuts and vertices generated

increases rapidly as p increased from 2 to 5. The authors asserted that this was due to

inefficiencies in computing new vertices, especially when p exceeds 5. However, if p is

held constant, these numbers increased very slowly as the number of constraints and

variables increased. The authors concluded that their algorithm is reasonably efficient

when p is less than 4.
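For intuition, when problem (CMP) specializes to (LMP), each inner minimization in (MP4) is a linear program, so $h(\xi)$ can be evaluated cheaply. The sketch below is ours, not from Kuno, Yajima, and Konno (1993); the data names C, d, A, b are hypothetical, and SciPy is assumed.

    import numpy as np
    from scipy.optimize import linprog

    def h_value(xi, C, d, A, b):
        """Evaluate h(xi) = min_x sum_j xi_j * (<c^j, x> + d_j) over
        {x : Ax <= b}, the inner problem of (MP4) when every f_j is affine."""
        obj = C.T @ xi                     # composite objective sum_j xi_j c^j
        res = linprog(obj, A_ub=A, b_ub=b,
                      bounds=[(None, None)] * C.shape[1], method="highs")
        return obj @ res.x + xi @ d        # add constant term sum_j xi_j d_j

An outer approximation method for (MP4') then minimizes this concave function of $\xi$ over the boundary of $\Xi$, calling an evaluation like this at each trial point.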

Jaumard, Meyer, and Tuy (1997) added a convex function to the objective

function of problem (CMP) to form problem (CCMP). The authors showed that problem

(CCMP) can be reduced to a quasiconcave minimization problem in RP that is a

generalization of problem (MP4') used by Kuno, Yajima, and Konno (1993). In the

special case where $f_0 = 0$ in problem (CCMP), the reduced quasiconcave minimization

problem in Jaumard, Meyer, and Tuy (1997) can be shown to be equivalent to the one






used by Kuno, Yajima, and Konno (1993). Jaumard, Meyer, and Tuy (1997) find a global

solution of problem (CCMP) by finding an optimal solution to the quasiconcave

minimization problem in RP using a conical branch and bound method. They ran

computational experiments using their algorithm on test problems similar to those used

by Kuno, Yajima, and Konno (1993) and Thoai (1991). The authors report that their

results are very sensitive to the magnitude of p and not as sensitive to the size (m, n) of

the constraint matrix.

Sniedovich and Findlay (1995) analyzed problem (CMP) from the perspective of c-programming but did not give a complete algorithm for solving it. C-programming is a technique developed by Sniedovich (1984) for solving an optimization problem of the form

(CP)  $q^* := \min_{x \in X} \psi(\varphi(x))$,

where $X$ is some nonempty set, $\varphi$ is a mapping on $X$ with values in $R^p$, and $\psi$ is a differentiable and pseudo-concave function on some open set containing the set $\varphi(X) := \{\varphi(x) \mid x \in X\}$. The heart of the technique is to linearize the function $\psi$ and transform the original optimization problem into the parametric programming problem

(MP5)  $q(\xi) := \min_{x \in X} \langle \xi, \varphi(x)\rangle$, $\xi \in R^p$.

Sniedovich showed that if $x^*$ is a globally optimal solution for problem (CP), then an optimal parameter $\xi^*$ for problem (MP5) is $\xi^* = \nabla\psi(\varphi(x^*))$, where $\nabla\psi(\cdot)$ is the gradient of $\psi$.






For problem (CMP), the objective function can be expressed as the composite $\psi(\varphi)$ of two functions, where, for each $x \in R^n$, $\varphi(x) = (f_1(x), f_2(x), \ldots, f_p(x))$, and, for each $y \in R^p$, $\psi(y) = \prod_{j=1}^{p} y_j$. Sniedovich and Findlay claimed without proof that $\psi$ is a differentiable and pseudo-concave function on the open convex set $\{y \in R^p \mid y > 0\}$. Since problem (CMP) satisfies the requirements of c-programming, it can be solved by solving the parametric problem

(MP5')  $q(\xi) := \min_{x \in X} \sum_{j=1}^{p} \xi_j f_j(x)$, $\xi \in \Xi$,

where $\Xi$ is any subset of $R^p$ such that $\nabla\psi(\varphi(x)) \in \Xi$ for all $x \in X$. In problem (MP5'), the parameter $\xi$ appears only in the objective function, whereas in problem (MP4) the parameter $\xi$ appears in both the objective function and in the constraints. Standard Lagrangian methods can be employed to solve problem (MP5') for all $\xi \in \Xi$, while specialized methods are required to optimize the objective function of problem (MP4) with respect to the original variable $x$ and the parameter $\xi$.

Kuno and Konno (1991) and Konno, Kuno, and Yajima (1994) considered

problem (GCMP) for cases where q = 1 and q > 1 respectively. For q = 1, the master

problem and solution algorithm are similar to the one used by Kuno and Konno (1991) to

solve problem (CMP) when p = 2. Computational experiments showed that the

underestimation function does not perform as well as it does for problem (CMP).

For $q \geq 1$, the master problem in Konno, Kuno, and Yajima (1994) is formulated

by introducing a pair of parameters for each pair of convex functions that appear in the






objective function of problem (GCMP). The master problem is a convex minimization problem in the space $R^{n+2q}$ and is solved using an outer approximation algorithm.

Computational experiments conducted using a polyhedron for the feasible region showed

that for q = 1, this algorithm required less than half the computational time required by

the branch and bound with underestimation function algorithm proposed in Konno and

Kuno (1992) to solve problem (CMP).

Tuy (1992) gave problem (CMP) as an example of an optimization problem that

can be formulated as a Geometric Complementarity Problem, and solved it using a

parametric programming problem. The parametric programming problem is a convex

minimization problem in which a positive parameter vector is used to build a composite

objective function from the convex functions in the objective function of problem (CMP).

A complete algorithm that includes solving the parametric program was not given.

2.4.2. A Method Based on Outer Approximation

Thoai (1991) extended the algorithm based on the outer approximation technique

that he proposed for solving problem (LMP2) to address the solution of problem (CMP)

when $p = 2$. The main idea is to build a sequence of decreasing polytopes $P_0 \supset P_1 \supset \cdots \supset X$ of the convex feasible region $X$ and a sequence of decreasing polytopes $S_0 \supset S_1 \supset \cdots \supset Y$ of the outcome set $Y$, where

$Y = \{y \in R^2 \mid y_1 = f_1(x),\; y_2 = f_2(x) \text{ for some } x \in X\}$.

Problem (CMP) is then solved by applying a modified version of the algorithm for

problem (LMP2). In any iteration $k$, up to two cuts are introduced, one for $P_k$ and one for $S_k$, to obtain tighter approximating sets.








Since the algorithm does not depend on the actual value of p, it can be extended

to handle cases where $p \geq 3$.

2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem

Konno and Kuno (1992) showed that the objective function of problem (LMP) is not a convex function over the feasible set $D$. Therefore, problem (LMP) is not a convex programming problem. However, since the natural logarithm function $\ln$ is a strictly increasing concave function on $(0, \infty)$, it is easy to show that the function

$F(x) = \ln\left[\prod_{j=1}^{p}\left(\langle c^j, x\rangle + d_j\right)\right] = \sum_{j=1}^{p} \ln\left[\langle c^j, x\rangle + d_j\right]$,

defined for all $x \in D$, is a concave function. In addition, the optimal solution set of the concave minimization problem

(CMIN)  $\min F(x)$, s.t. $x \in D$,

is identical to the optimal solution set of problem (LMP). Therefore, any concave minimization method may be applied to problem (LMP) if the objective function is replaced by its logarithmic equivalent.
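The equivalence is easy to check numerically: because $\ln$ is strictly increasing, the product objective and its logarithm rank feasible points identically. A small sanity check of ours (random data, NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    C = rng.integers(1, 11, size=(3, 5)).astype(float)   # rows are c^j
    d = np.ones(3)
    X = rng.uniform(0.1, 1.0, size=(100, 5))             # sample points

    terms = X @ C.T + d                                  # <c^j, x> + d_j > 0
    product = terms.prod(axis=1)                         # original objective
    log_sum = np.log(terms).sum(axis=1)                  # F(x) from (CMIN)

    # Both objectives are minimized by the same sampled point.
    assert product.argmin() == log_sum.argmin()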

Using the above transformation, Tuy (1991) showed that problem

(LMP) could be solved in a reduced dimension space using polyhedral annexation and the

dualization and dimension reduction technique. The algorithm presented in Tuy and Tam

(1992) is essentially an improvement of the one in Tuy (1991).

Ryoo and Sahinidis (1996) also converted problem (LMP) into the problem

(CMIN). To solve problem (CMIN), they employed a branch and bound algorithm that

incorporates the use of valid inequalities to accelerate convergence. Branch and bound








algorithms may slowly converge to an optimal solution when the gap between the initial

upper and lower bounds is large. A valid inequality is an inequality constraint that does not

exclude any solution that yields an objective function value lower than the current best

upper bound. By introducing valid inequalities into the constraint set, inferior parts of the

feasible region may be removed from further consideration without eliminating possible

global optimal solutions. A second use of valid inequalities is to reduce the range of

values that the variables in the problem can assume. Ryoo and Sahinidis referred to these

two uses of valid inequalities as range reduction mechanisms. The performance of the

bounding procedure in the branch and bound algorithm is improved by using these range

reduction mechanisms, since smaller-sized partitions of the feasible region are used and

the variables are restricted to reduced ranges of values.

Ryoo and Sahinidis implemented the branch and bound algorithm along with the

range reduction mechanisms in a computer program called BARON (Branch-And-Reduce

Optimization Navigator). To more easily calculate lower bounds on the objective function

$F$ of problem (CMIN) over a partition of the feasible region, the authors replaced $F$ by a

linear underestimating function. Lower bounds were then calculated by solving linear

programs. The authors tested randomly-generated problems in sizes from (m, n) = (50,

50) to (200, 200), with p ranging from 2 to 5. They reported that only a small fraction of

the total CPU time is consumed in the range reduction mechanisms and that there seemed

to be a low-order polynomial relationship between the CPU time and the value of p.














CHAPTER 3
CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND
AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE


3.1. Introduction

An important but little-researched area that deserves more attention is the development of heuristic algorithms for finding a good solution for multiplicative programming problems. In some applications, a good, though not necessarily globally optimal, solution may adequately meet the requirements of a user (Konno and Inori 1989). In these cases, since multiplicative programming problems are known to be NP-hard, the expenditure of computational effort required to globally solve them may not be warranted.

This chapter has two purposes. The first is to present an analysis of problem (Px)

when problem (Px) is a concave multiplicative programming problem. The second

purpose is to propose a heuristic algorithm designed for the case where problem (Px) is a

linear multiplicative programming problem.

The analysis of the concave multiplicative programming problem is presented in

Section 3.2. This analysis shows a new way to write a concave multiplicative

programming problem as a concave minimization problem and some theoretical

consequences of this. It also shows some relationships between concave multiplicative

programs and certain multiple-objective mathematical programs. In Section 3.3, by using

some of the results of Section 3.2, we present and explain the workings of an efficient-







point search heuristic algorithm that we have developed for the linear multiplicative

programming problem. Section 3.4 reports and analyzes some statistics summarizing the

computational results that we obtained by coding the heuristic algorithm and applying it

to 260 randomly-generated linear multiplicative programs. In Section 3.4 we also report

the results of applying the heuristic algorithm to a multiplicative programming problem

formed from a decision situation using real data. In Section 3.5, we discuss the major

results of this chapter.

3.2. Analysis

Assume in problem (Px) that $X$ is a convex set and that, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ is a concave function; i.e., assume that problem (Px) is a concave multiplicative programming problem. Consider the function $\hat{g} : X \to R$ defined for each $x \in X$ by

$\hat{g}(x) = \log g(x)$.

Then, it is a simple matter to show that $\hat{g} : X \to R$ is a concave function and that the optimal solution set of the concave minimization problem

$\min \hat{g}(x)$, s.t. $x \in X$,  (3.1)

is identical to the optimal solution set of problem (Px). Thus, any concave multiplicative

programming problem of the form of problem (Px), if rewritten in the form (3.1), can be

solved by applying any appropriate general-purpose concave minimization algorithm to

(3.1). For discussions and reviews of concave minimization algorithms, see, for instance,

Benson (1995), Benson (1996), Horst and Tuy (1993), and Pardalos and Rosen (1987).








It is interesting and useful in both practice and theory to observe that, in addition

to (3.1), there is at least one other way to rewrite a concave multiplicative programming

problem as a concave minimization problem. To show how this can be accomplished, we

will first prove the following preliminary result.

Lemma 3.2.1. Let $a \in R^p$ satisfy $a > 0$, and consider the nonlinear programming problem

$v = \min \langle a, \lambda\rangle$, s.t. $\lambda \in \Lambda$,  (3.2)

where $\Lambda = \{\lambda \in R^p \mid \prod_{j=1}^{p} \lambda_j \geq 1,\; \lambda \geq 0\}$. Then, $v$ is finite and problem (3.2) has at least one optimal solution.
optimal solution.

Proof. Notice that, if $\lambda \in \Lambda$, then $\lambda > 0$ and $\langle a, \lambda\rangle > 0$. Therefore, $v \geq 0$. This, combined with the fact that $\Lambda \neq \emptyset$, implies that $v$ is finite.

Now, suppose that, for each $j = 1, 2, \ldots$, there exists a vector $\lambda^j \in \Lambda$ such that

$\langle a, \lambda^j\rangle \leq v + \varepsilon_j$,

where $\{\varepsilon_j\}_{j=1}^{\infty}$ is a strictly decreasing sequence of positive real numbers such that $\lim_{j\to\infty} \varepsilon_j = 0$. Then the sequence $\{\lambda^j\}_{j=1}^{\infty}$ is either bounded or unbounded.


Case 1: $\{\lambda^j\}_{j=1}^{\infty}$ is bounded. Then, for some bounded set $\bar{\Lambda} \subseteq \Lambda$, $\lambda^j \in \bar{\Lambda}$ for each $j = 1, 2, \ldots$. Therefore, by passing to an appropriate subsequence $\{\lambda^j\}_{j \in J}$ of $\{\lambda^j\}_{j=1}^{\infty}$, if necessary, we can guarantee that $\bar{\lambda} = \lim_{j \in J} \lambda^j$ exists. Furthermore, since $\lambda^j \in \bar{\Lambda} \subseteq \Lambda$ for each $j \in J$, and $\Lambda$ is a closed set, $\bar{\lambda}$ belongs to $\Lambda$. By assumption,

$\langle a, \lambda^j\rangle \leq v + \varepsilon_j$  (3.3)

for each $j \in J$. By taking the limits over $j \in J$ on both sides of (3.3), we conclude that $\langle a, \bar{\lambda}\rangle \leq v$. Since $\bar{\lambda} \in \Lambda$, this implies that $\bar{\lambda}$ is an optimal solution to (3.2).

Case 2: $\{\lambda^j\}_{j=1}^{\infty}$ is unbounded. Then, for some subsequence $\{\lambda^j\}_{j \in J}$ of $\{\lambda^j\}_{j=1}^{\infty}$, and for some $k \in \{1, 2, \ldots, p\}$, $\lim_{j \in J} \lambda_k^j = +\infty$. For each $j \in J$, since $\lambda^j \in \Lambda$, $\lambda^j > 0$. Combined with the fact that $a > 0$, this implies that, for each $j \in J$,

$0 < a_k \lambda_k^j \leq \langle a, \lambda^j\rangle$.  (3.4)

By assumption, for each $j \in J$,

$\langle a, \lambda^j\rangle \leq v + \varepsilon_j$.  (3.5)

From (3.4) and (3.5), we obtain

$a_k \lambda_k^j \leq v + \varepsilon_j$  (3.6)

for each $j \in J$. By taking the limits over $j \in J$ on both sides of (3.6), we conclude that $+\infty \leq v$, which is a contradiction. Therefore, this case cannot hold, and the proof is complete. □

Using Lemma 3.2.1, we may now establish the following theorem.

Theorem 3.2.1. Assume in problem (Px) that $X$ is a convex set and that $f_j : X \to R$, $j = 1, 2, \ldots, p$, are concave functions. Let $\bar{g} : X \to R$ be defined for each $x \in X$ by

$\bar{g}(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.

Then $\bar{g} : X \to R$ is a concave function.






Proof. Consider the function $h : X \to R$ defined for each $x \in X$ by

$h(x) = \min \sum_{j=1}^{p} \lambda_j f_j(x)$, s.t. $\lambda \in \Lambda$,  (3.7)

where $\Lambda$ is as defined in Lemma 3.2.1. From Lemma 3.2.1, since $f_j$ is strictly positive on $X$ for each $j = 1, 2, \ldots, p$, it follows that the minimum in (3.7) exists and is finite for each $x \in X$. If, for each $\lambda \in \Lambda$, we define a function $h_\lambda : X \to R$ by

$h_\lambda(x) = \sum_{j=1}^{p} \lambda_j f_j(x)$,

then for each $x \in X$, $h(x)$ may also be written as

$h(x) = \min_{\lambda \in \Lambda} h_\lambda(x)$.  (3.8)

Notice that, for each $\lambda \in \Lambda$, $h_\lambda : X \to R$ is a concave function. From this and (3.8), we conclude that $h : X \to R$ is also a concave function (Rockafellar 1970).

To complete the proof, we will show that, for each $x \in X$, $h(x) = \bar{g}(x)$. Toward this end, fix $x \in X$, and let $\lambda(x) \in \Lambda$ denote an optimal solution to problem (3.7). From the Karush-Kuhn-Tucker necessary conditions for this problem (Bazaraa, Sherali, and Shetty 1993), since $\lambda(x) > 0$, it follows that there exists a nonnegative constant $\theta(x)$ such that

$f_j(x) - \theta(x)\left[\prod_{k=1}^{p} \lambda_k(x)\right]/\lambda_j(x) = 0$, $j = 1, 2, \ldots, p$.  (3.9)

Since $\lambda(x) \in \Lambda$ is an optimal solution to problem (3.7), it is easy to see that

$\prod_{k=1}^{p} \lambda_k(x) = 1$.

Together with (3.9), this implies that

$\lambda_j(x) f_j(x) = \theta(x)$, $j = 1, 2, \ldots, p$.  (3.10)

From (3.10), it follows that

$\lambda_j(x) = \theta(x)/f_j(x)$, $j = 1, 2, \ldots, p$.

By substitution in

$\prod_{k=1}^{p} \lambda_k(x) = 1$,

this implies that

$\theta(x) = \left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.  (3.11)

From equations (3.10) and (3.11), we see that

$\sum_{j=1}^{p} \lambda_j(x) f_j(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.  (3.12)

Since $x \in X$ and $\lambda(x) \in \Lambda$ is an optimal solution to (3.7), the left-hand side of equation (3.12) coincides with $h(x)$. By the definition of $\bar{g}$, the right-hand side of equation (3.12) equals $\bar{g}(x)$, so that the proof is complete. □

Theorem 3.2.1 can also be proven by using a composite function approach and

showing several preliminary results (Avriel, Diewert, Schaible, and Zang 1988). We offer

the proof here, because it is more direct and because we will use it below to help derive a

corollary of interest.







Notice from Theorem 3.2.1 that, when problem (Px) is a concave multiplicative program, the optimal solution set of problem (Px) is identical to the optimal solution set of the concave minimization problem

$\min \bar{g}(x)$, s.t. $x \in X$,  (3.13)

where $\bar{g} : X \to R$ is defined for each $x \in X$ by

$\bar{g}(x) = p\left[g(x)\right]^{1/p}$.

In practice, this implies that any concave multiplicative program (Px), if rewritten in the form (3.13), can be solved by applying any suitable concave minimization algorithm to (3.13). Notice also that problem (3.13) is a simpler reformulation of problem (Px) for the concave case than the typical reformulation used in the literature to solve problem (Px) in the convex case (see, e.g., Konno and Kuno 1992, Kuno and Konno 1991, Thoai 1991, and Kuno, Yajima, and Konno 1993).
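A quick numerical illustration of Theorem 3.2.1 (ours; NumPy assumed): for concave positive $f_j$, midpoint concavity of $\bar{g}(x) = p[\prod_j f_j(x)]^{1/p}$ can be spot-checked on random pairs of points, while the raw product $g$ may fail the same test.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 3, 4
    C = rng.uniform(1, 5, size=(p, n))       # f_j(x) = <c^j, x>, linear hence concave

    def g(x):     return np.prod(C @ x)      # product objective
    def g_bar(x): return p * np.prod(C @ x) ** (1.0 / p)

    for _ in range(1000):
        x1, x2 = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
        mid = 0.5 * (x1 + x2)
        # Midpoint concavity holds for g_bar at every sampled pair ...
        assert g_bar(mid) >= 0.5 * (g_bar(x1) + g_bar(x2)) - 1e-9
        # ... but the analogous test for g itself can fail in general.

The companion claim, that $g$ itself need not be concave, is easy to exhibit with $p = 2$ and $f_1(x) = f_2(x) = x$ on the positive reals, since $x^2$ is convex.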

Theorem 3.2.1 also has some interesting theoretical implications concerning the

product of functions. For instance, for any finite set of concave functions fj, j = 1, 2,

..., p, each defined on a common nonempty convex domain $X \subseteq R^n$ and each strictly positive on this domain, it is known that the function $g : X \to R$ defined by their product

is not necessarily concave, convex, or quasiconvex on X (Kuno, Yajima and Konno

1993 and Avriel, Diewert, Schaible and Zang 1988). However, from Theorem 3.2.1, the

function $\bar{g} : X \to R$ given by

$\bar{g}(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$

for each $x \in X$ is a concave function on $X$.






47
In addition, Theorem 3.2.1 implies the following result concerning the product of

a set of concave functions.

Corollary 3.2.1. Let $X$ and $f_j$, $j = 1, 2, \ldots, p$, be defined as in Theorem 3.2.1, and suppose that $g : X \to R$ is defined for each $x \in X$ by

$g(x) = \prod_{j=1}^{p} f_j(x)$.

Then $g : X \to R$ is a quasiconcave function.

Proof. Choose $\alpha \in R$, and let

$L_\alpha = \{x \in X \mid g(x) \geq \alpha\}$.

If $\alpha \leq 0$, $L_\alpha = X$ is a convex set. If $\alpha > 0$, then from Theorem 3.2.1 and Rockafellar (1970), the set

$\bar{L} = \{x \in X \mid p\left[g(x)\right]^{1/p} \geq \beta\}$

is a convex set, where $\beta = p\,\alpha^{1/p}$. Since $\bar{L} = L_\alpha$, this implies that $L_\alpha$ is a convex set. Therefore, we have shown that, for any $\alpha \in R$, $L_\alpha$ is a convex set. This is equivalent to showing that $g : X \to R$ is a quasiconcave function (Bazaraa, Sherali, and Shetty 1993), so that the proof is complete. □

It follows from Corollary 3.2.1 that any concave multiplicative programming

problem (Px) is a problem involving the minimization of a quasiconcave function over a

convex set. Many of the most popular algorithms for minimizing a concave function over

a convex set are equally suitable for minimizing quasiconcave functions over convex sets

(Horst and Tuy 1993 and Benson 1995). As a result, we see that any concave








multiplicative program (Px) can be solved by applying any number of suitable concave

minimization algorithms directly to problem (Px). In particular, no reformulations of

problem (Px) are needed to apply these algorithms.

Remark 3.2.1. Corollary 3.2.1 has been previously shown to hold for the special case

where $p = 2$, $X$ is a nonempty, compact polyhedron, and $f_1$ and $f_2$ are linear functions

(see, e.g., Konno and Kuno 1992).

The next corollary of Theorem 3.2.1 concerns the minimization problem (3.7)

used in the proof of the theorem. Possible uses for this corollary may include the

construction of methods for finding local optimal solutions to concave multiplicative

programs, although we will not investigate this here.

Corollary 3.2.2. Let $X$ and $f_j$, $j = 1, 2, \ldots, p$, be defined as in Theorem 3.2.1, and let $\Lambda$ be defined as in Lemma 3.2.1. Then, $\Lambda$ is a convex set and, for each $x \in X$, the unique optimal solution $\lambda(x)$ to problem (3.7) is given by

$\lambda_k(x) = \left[\prod_{j=1}^{p} f_j(x)\right]^{1/p} / f_k(x)$, $k = 1, 2, \ldots, p$.

Proof. Notice that $\Lambda$ may be rewritten according to the relation

$\Lambda = \{\lambda \in \operatorname{int} R^p_+ \mid p\left[\prod_{j=1}^{p} \lambda_j\right]^{1/p} \geq p\}$,  (3.14)

where

$\operatorname{int} R^p_+ = \{\lambda \in R^p \mid \lambda > 0\}$.

It is easy to see that, for each $j = 1, 2, \ldots, p$, $h_j : \operatorname{int} R^p_+ \to R$, defined for each $\lambda \in \operatorname{int} R^p_+$ by

$h_j(\lambda) = \lambda_j$,

is a concave function on $\operatorname{int} R^p_+$ that satisfies

$h_j(\lambda) > 0$, for all $\lambda \in \operatorname{int} R^p_+$.

Therefore, by Theorem 3.2.1, the function $m : \operatorname{int} R^p_+ \to R$ defined for each $\lambda \in \operatorname{int} R^p_+$ by

$m(\lambda) = p\left[\prod_{j=1}^{p} \lambda_j\right]^{1/p}$

is a concave function. This implies that

$\{\lambda \in \operatorname{int} R^p_+ \mid m(\lambda) \geq p\}$

is a convex set (Rockafellar 1970). By (3.14), this proves that $\Lambda$ is a convex set.

Now, fix $x \in X$, and let $\lambda(x) \in \Lambda$ denote an optimal solution to problem (3.7). From the proof of Theorem 3.2.1, this implies that, for each $k = 1, 2, \ldots, p$,

$\lambda_k(x) = \theta(x)/f_k(x)$,

where $\theta(x)$ is given by (3.11), so that the corollary is proven. □
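Corollary 3.2.2 gives the minimizing weights in closed form, which is easy to verify numerically (a sketch of ours, NumPy assumed): the weights $\lambda_k(x)$ are feasible for (3.7) and attain the value $\bar{g}(x)$.

    import numpy as np

    rng = np.random.default_rng(2)
    p = 4
    f = rng.uniform(0.5, 3.0, size=p)       # values f_j(x) > 0 at some fixed x

    theta = np.prod(f) ** (1.0 / p)         # theta(x) from (3.11)
    lam = theta / f                         # lambda_k(x) from Corollary 3.2.2

    assert np.isclose(np.prod(lam), 1.0)    # lam is feasible for (3.7)
    assert np.isclose(lam @ f, p * theta)   # attains h(x) = p * [prod f_j]^{1/p}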

In addition to its relationships to concave minimization, a concave multiplicative

program also has some interesting ties to multiple-objective mathematical programming.

In the remainder of this section, we will show some of the theoretical relationships

between concave multiplicative programs and certain multiple-objective mathematical

programs. In the next section, some practical benefits of those relationships will be

demonstrated.

Let $f(x)$ denote the vector

$\left(f_1(x), f_2(x), \ldots, f_p(x)\right)$,






where $f_j : X \to R$, $j = 1, 2, \ldots, p$, are the functions used in defining problem (Px). Then, the components of the vector $f(x)$ are generally conflicting, in the sense that the infima over $X$ of $f_j(x)$, $j = 1, 2, \ldots, p$, are generally not simultaneously achieved at the

same point in X. As a result, inherent tradeoffs in the achievable values of the

components of f(x) over x e X are present. To account for these tradeoffs, and to seek

what decision makers call a most preferred solution in situations where the goal is to

attempt to simultaneously minimize $f_j(x)$, $j = 1, 2, \ldots, p$, over $X$, one of the most

popular approaches is to consider the associated multiple-objective mathematical

program

VMIN $f(x)$, s.t. $x \in X$.  (3.15)

In particular, in typical situations, a most preferred solution in X will exist that is also an

efficient solution for (3.15), where an efficient solution is defined as follows.

Definition 3.2.1. A point x E R" is called an efficient solution for (3.15) when xo E X

and, whenever f(x)< f(xo) for some xe X, then f(x)= f(xo).

An efficient solution is also called a nondominated or Pareto-optimal solution. By

generating or searching the set $X_E$ of the efficient solutions for (3.15), decision makers are able to observe the inherent tradeoffs among the objective functions $f_j$, $j = 1, 2, \ldots, p$, that are available over $X$ and are often able to choose from $X_E$ a most preferred

solution. For further discussions on multiple-objective mathematical programming and its

applications, the reader may consult, for instance, Cohon (1978), Evans (1984), Luc







(1989), Sawaragi, Nakayama, and Tanino (1985), Stadler (1979), Steuer (1986), Yu

(1985), Zeleny (1982) and references therein.

The first relationship between multiplicative programming and multiple-objective

mathematical programming is given in the following result. The proof of this result is an

elementary exercise.

Proposition 3.2.1. Any optimal solution to problem (Px) must belong to the efficient set

$X_E$ of the multiple-objective mathematical programming problem (3.15).

Notice that Proposition 3.2.1 holds for arbitrary multiplicative programming

problems (Px). The next result, however, is restricted to certain types of concave

multiplicative programs.

Proposition 3.2.2. Assume in problem (Px) that X is a compact, convex set and that

$f_j : X \to R$, $j = 1, 2, \ldots, p$, are concave functions. Then, there exists an optimal solution

to problem (Px) which is an extreme point of X.

Proof. From Theorem 3.2.1, problem (Px) can be solved by finding an optimal solution to the concave minimization problem (3.13), where $\bar{g} : X \to R$ is the concave function defined by

$\bar{g}(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$

for each $x \in X$. Since $X$ is a nonempty compact, convex set, from Horst and Tuy (1993), problem (3.13) has an optimal solution that is an extreme point of $X$. These two observations together prove the desired result. □







Taken together, Propositions 3.2.1 and 3.2.2 imply that any concave multiplicative programming problem with a compact feasible region has at least one optimal solution that is an efficient extreme point solution to the multiple-objective mathematical programming problem (3.15). Special cases of this observation have been alluded to in the literature (see, e.g., Aneja, Aggarwal and Nair 1984 and Sniedovich and Findlay 1995). In the next section, we put this observation to practical use.

3.3. Efficient Point Search Heuristic

Assume in this section that, in problem (Px),

$X = \{x \in R^n \mid Ax \leq b\}$

is a compact polyhedron, where $A$ is an $m \times n$ matrix and $b \in R^m$, and that, for each $j = 1, 2, \ldots, p$, $f_j(x) = \langle c^j, x\rangle$, where $c^j \in R^n$. Then problem (Px) is a linear multiplicative programming problem or, more briefly, a linear multiplicative program (Konno and Kuno 1992). We have designed and tested a heuristic algorithm for this problem, based in part on some of the results in the previous section. In this section, we will formally state this heuristic algorithm and explain its workings.

The multiple-objective program (3.15) associated with a linear multiplicative program may be written as

VMIN $Cx$, s.t. $Ax \leq b$,  (3.16)

where $C$ is the $p \times n$ matrix whose $j$th row equals $(c^j)^T$, $j = 1, 2, \ldots, p$. Problem (3.16) is a multiple-objective linear programming problem (Steuer 1986 and Yu 1985). Let $X_{ex}$


denote the set of extreme points of







$X = \{x \in R^n \mid Ax \leq b\}$.

Then, by Propositions 3.2.1 and 3.2.2, an optimal solution to the linear multiplicative programming problem can be found in the set

$X_{E,ex} = (X_E \cap X_{ex})$

of efficient extreme points of problem (3.16). The set $X_{E,ex}$ is finite, and various

procedures have been developed for generating it in its entirety (see, e.g., Steuer 1986, Yu

1985 and Steuer 1983).

It follows that, in theory at least, a global optimal solution to a linear multiplicative problem can be found by completely enumerating the set $X_{E,ex}$ of efficient extreme points of the associated multiple-objective linear programming problem (3.16) and, from this set, choosing the point(s) with the smallest value of

$g(x) = \prod_{j=1}^{p} \langle c^j, x\rangle$

(see, e.g., Sniedovich and Findlay 1995). Unfortunately, as we shall see later, in practice the exponential growth in the size of $X_{E,ex}$ as a function of problem size (Steuer 1986) renders this approach impractical for many cases.

The approach of the heuristic algorithm is to efficiently search a dispersed,

carefully chosen sample of candidate points from $X_{E,ex}$ in order to find an attractive

solution to the linear multiplicative programming problem. To describe and explain the

workings of the heuristic, we must first present some theoretical background from the

theory of multiple-objective linear programming.

Let








$W = \{w \in R^p \mid \langle e, w\rangle \leq M,\; w \geq e\}$,

where $e \in R^p$ is a vector with each entry equal to 1.0, and $M$ is a positive real number. For sufficiently large $M$, from Philip (1972) it is known that a point $x^0$ belongs to the efficient set $X_E$ of (3.16) if and only if $x^0$ is an optimal solution to the weighted-sum problem

$\min \langle w^T C, x\rangle$, s.t. $Ax \leq b$,  (3.17)

for some $w = \bar{w} \in W$. We will assume henceforth that $M$ is chosen to be large enough to guarantee that this property holds. It is also well known that the efficient set $X_E$ for (3.16) is given by

$X_E = \cup\{X_w \mid w \in W\}$,

where, for each $w \in W$, $X_w$ denotes the optimal solution set of the linear program (3.17) (Steuer 1986 and Yu 1985). Since the optimal solution set to (3.17) for any $w \in W$ is a face of

$X = \{x \in R^n \mid Ax \leq b\}$,

it follows that the efficient set $X_E$ for (3.16) is equal to the union of the faces $X_w$, $w \in W$, of $X$. Although $X_E$ is a connected set (Yu 1985), it is generally nonconvex. The heuristic algorithm will individually identify efficient faces $X_w$, $w \in W$, of $X$, and find an approximately-optimal extreme point solution to the problem

$\min \prod_{j=1}^{p} \langle c^j, x\rangle$, s.t. $x \in X_w$,  (3.18)

for each efficient face $X_w$ that it finds.








Let

$Y = \{y \in R^p \mid y = Cx \text{ for some } x \in X\}$,

$Y^{\geq} = \{y \in R^p \mid y \geq \bar{y} \text{ for some } \bar{y} \in Y\}$.

To aid in its search, the heuristic algorithm will solve the linear program

$\min \langle w^T C, x\rangle$  (3.19a)

s.t. $Cx \leq y$,  (3.19b)

$Ax \leq b$,  (3.19c)

for various values of $y \in Y^{\geq}$ and $w \in W$. The heuristic relies in part upon the properties of problem (3.19) given in the next three results. The first two results follow easily from Benson (1978).

Theorem 3.3.1. Suppose that $x^0 \in R^n$, and let $y^0 = Cx^0$. Then, $x^0$ is an efficient solution for (3.16) if and only if, with $y = y^0$, $x^0$ is an optimal solution to (3.19) for every $w \in W$.

Theorem 3.3.2. If $y \in Y^{\geq}$ and $w \in W$, then (3.19) has at least one optimal solution, and any optimal solution for (3.19) is an efficient solution for (3.16).

Theorem 3.3.3. Suppose in (3.19) that $w = \bar{w} \in W$ and that $y = y^0 = Cx^0$, where $x^0$ is an efficient solution for (3.16). Let $((u^0)^T, (z^0)^T)$ denote any optimal solution to the linear programming dual of (3.19), where $u^0$ represents the dual variables corresponding to the constraints $Cx \leq y^0$ of (3.19). Let $w^0 = u^0 + \bar{w}$ and let $v^0 = \langle (w^0)^T C, x^0\rangle$. Then, $x^0$ belongs to the efficient face $X_{w^0}$ of $X$, and $X_{w^0}$ can be represented as

$X_{w^0} = \{x \in X \mid (w^0)^T Cx = v^0\}$.

Proof. To prove the theorem, we will show that, with $w = w^0$, $x^0$ is an optimal solution to problem (3.17). Suppose in (3.19) that $w = \bar{w} \in W$ and that $y = y^0 = Cx^0$, where $x^0$ is the efficient solution for (3.16) given in the theorem. The dual linear program to (3.19) is then given by

$\max -\langle y^0, u\rangle - \langle b, z\rangle$,

s.t. $-C^T u - A^T z = C^T \bar{w}$,

$u, z \geq 0$.

From Theorem 3.3.1, $x^0$ is an optimal solution to (3.19) when $w = \bar{w}$ and $y = y^0$. By the duality theory of linear programming (Murty 1983), since $((u^0)^T, (z^0)^T)$ is an optimal solution to the linear programming dual of (3.19) when $w = \bar{w}$ and $y = y^0$, this implies that

$\langle \bar{w}^T C, x^0\rangle = -\langle y^0, u^0\rangle - \langle b, z^0\rangle$.

By rearranging this equation and using the definitions of $y^0$ and $w^0$, we obtain

$\langle (w^0)^T C, x^0\rangle = -\langle b, z^0\rangle$.  (3.20)

With $w = w^0$, the dual linear program to (3.17) may be written as

$\max -\langle b, z\rangle$,  (3.21a)

s.t. $-A^T z = C^T w^0$,  (3.21b)

$z \geq 0$.  (3.21c)

Let $\bar{z}$ denote an arbitrary feasible solution to problem (3.21). From the definitions of $u^0$ and $w^0$, this implies that $((u^0)^T, \bar{z}^T)$ is a feasible solution to the dual linear program of (3.19). Since $((u^0)^T, (z^0)^T)$ is an optimal solution to the latter problem, it follows that

$-\langle y^0, u^0\rangle - \langle b, z^0\rangle \geq -\langle y^0, u^0\rangle - \langle b, \bar{z}\rangle$,

or, equivalently,

$-\langle b, z^0\rangle \geq -\langle b, \bar{z}\rangle$.

Notice that, since $((u^0)^T, (z^0)^T)$ is an optimal solution to the dual linear program to (3.19), $z^0$ is a feasible solution to (3.21). By the choice of $\bar{z}$, the preceding two statements imply that $z^0$ is an optimal solution to (3.21). Since $x^0$ is an efficient solution for (3.16), with $w = w^0$, $x^0$ is a feasible solution for (3.17). From (3.20) and the duality theory of linear programming (Murty 1983), since $z^0$ is an optimal solution to (3.21), this implies that, with $w = w^0$, $x^0$ is an optimal solution to (3.17), and the proof is complete. □

Notice in Theorem 3.3.3 that, for any $t > 0$, $X_{tw^0} = X_{w^0}$. This implies that, in Theorem 3.3.3, when $w^0 \notin W$, there exists a $t \in (0, 1)$ such that $tw^0 \in W$ and $X_{tw^0} = X_{w^0}$. Thus, in Theorem 3.3.3, when $w^0 \notin W$, $X_{w^0}$ has an alternate representation $X_{\tilde{w}}$ for which $\tilde{w} = tw^0 \in W$. For simplicity, we may and will assume without loss of generality that, in Theorem 3.3.3, $w^0 \in W$.

To generate various points $y \in Y^{\geq}$ for use in problem (3.19), the heuristic

algorithm will rely upon the two concepts defined in the next two definitions (see, e.g.,

Zeleny 1982).







Definition 3.3.1. The point $y^I \in R^p$ is called the ideal point of $Y$ when, for each $j = 1, 2, \ldots, p$, $y_j^I$ equals the minimum value of $y_j$ over $Y$.

Definition 3.3.2. The point $y^{AI} \in R^p$ is called the anti-ideal point of $Y$ when, for each $j = 1, 2, \ldots, p$, $y_j^{AI}$ equals the maximum value of $y_j$ over $Y$.

Notice that $y^I$ and $y^{AI}$ generally do not belong to $Y$. The algorithm uses these two points as anchor points in an initialization procedure whose goal is, in part, to generate a dispersed sample of points from $Y^{\geq}$.
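Computing the two anchor points requires $2p$ linear programs, one minimization and one maximization of each $y_j = \langle c^j, x\rangle$ over $X$. A sketch of ours (SciPy assumed, data names hypothetical):

    import numpy as np
    from scipy.optimize import linprog

    def ideal_anti_ideal(C, A, b):
        """Return the ideal point y^I and anti-ideal point y^AI of
        Y = {Cx : Ax <= b}."""
        p, n = C.shape
        y_ideal, y_anti = np.empty(p), np.empty(p)
        for j in range(p):
            lo = linprog(C[j], A_ub=A, b_ub=b,
                         bounds=[(None, None)] * n, method="highs")
            hi = linprog(-C[j], A_ub=A, b_ub=b,
                         bounds=[(None, None)] * n, method="highs")
            y_ideal[j], y_anti[j] = lo.fun, -hi.fun
        return y_ideal, y_anti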

The heuristic algorithm may be stated as follows.

Algorithm 3.3.1. Efficient Point Search Heuristic Algorithm

Initialization Phase. See Steps 1 through 5 below.

Step 1. Find the ideal and anti-ideal points $y^I$ and $y^{AI}$ of $Y$.

Step 2. Find an optimal solution $[(x^*)^T, \alpha^*]^T \in R^{n+1}$ to the linear program

$\max \alpha$,

s.t. $y^{AI} + \alpha(y^I - y^{AI}) \geq Cx$,

$Ax \leq b$,

$\alpha \geq 0$,

and set $y^* = y^{AI} + \alpha^*(y^I - y^{AI})$.

Step 3. Choose a positive integer $S$ and, for each $i = 0, 1, \ldots, S$, let

$y^i = y^{AI} + (i/S)(y^* - y^{AI})$.

Step 4. Choose a positive integer $N$ such that $1 \leq N \leq M - p + 1$, let $w^0 = e \in R^p$, and, for each $j = 1, 2, \ldots, p$, define $w^j \in R^p$ by

$w_i^j = \begin{cases} 1, & \text{if } i \neq j, \\ N, & \text{if } i = j. \end{cases}$

Step 5. Set $UB = +\infty$, $i = 0$ and $j = 0$.

Efficient Point Search Phase. See Steps 1 through 6 below.

Step 1. Set $y = y^i$ and $w = w^j$, and find any optimal solution $x^{ij}$ to linear program (3.19).

Step 2. Set $y = Cx^{ij}$ and $w = w^j$ in (3.19), and compute any optimal solution $[(u^{ij})^T, (z^{ij})^T]^T$ to the dual linear program to (3.19), where $u^{ij}$ denotes the optimal dual variables corresponding to the constraints $Cx \leq y$ of (3.19).

Step 3. Let $\bar{w}^{ij} = u^{ij} + w^j$. If $\bar{w}^{ij}$ is a positive multiple of $\bar{w}^{i'j'}$ for some $i' \leq i$ and $j' \leq j$ such that $(i', j') \neq (i, j)$, then go to Step 6. Otherwise, continue.

Step 4. Let $v_{ij} = \langle (\bar{w}^{ij})^T C, x^{ij}\rangle$. For each $h = 1, 2, \ldots, n$, calculate $a_h$ according to the formula

$a_h = \sum_{t=1}^{p}\left[\prod_{k \neq t} \langle c^k, x^{ij}\rangle\right] c_h^t$,  (3.22)

and find any basic optimal solution $\bar{x}^{ij}$ to the linear program

$\min \langle a, x\rangle$,  (3.23a)

s.t. $\langle (\bar{w}^{ij})^T C, x\rangle = v_{ij}$,  (3.23b)

$Ax \leq b$.  (3.23c)

Step 5. If $\prod_{k=1}^{p} \langle c^k, \bar{x}^{ij}\rangle \geq UB$, go to Step 6. Otherwise, set $\hat{x} = \bar{x}^{ij}$ and $UB = \prod_{k=1}^{p} \langle c^k, \bar{x}^{ij}\rangle$, and go to Step 6.

Step 6. Set $j = j + 1$. If $j \leq p$, go to Step 1. Otherwise, set $i = i + 1$ and $j = 0$. If $i \leq S$, go to Step 1. Otherwise, stop: $\hat{x} \in X_{E,ex}$ is the recommended solution to the linear multiplicative programming problem.
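The two linear-programming workhorses of the search phase are problem (3.19) together with its dual variables (Steps 1 and 2) and the face search (3.23) (Steps 4 and 5). The sketch below is ours, not the dissertation's VS FORTRAN code; it uses SciPy's HiGHS-based linprog, and the sign convention for the reported marginals should be rechecked against the SciPy version in use.

    import numpy as np
    from scipy.optimize import linprog

    def solve_319(C, A, b, y, w):
        """Steps 1-2: solve (3.19) min (w^T C)x s.t. Cx <= y, Ax <= b, and
        recover the duals u of the Cx <= y block from the HiGHS marginals.
        Assumes the problem is feasible and bounded."""
        p, n = C.shape
        res = linprog(C.T @ w, A_ub=np.vstack([C, A]),
                      b_ub=np.concatenate([y, b]),
                      bounds=[(None, None)] * n, method="highs")
        u = -res.ineqlin.marginals[:p]     # marginals of <= rows are nonpositive
        return res.x, u

    def search_face(C, A, b, x_ij, w_bar):
        """Steps 4-5: minimize the first-order approximation (3.22) of
        g(x) = prod_j <c^j, x> over the efficient face (3.24)."""
        n = C.shape[1]
        terms = C @ x_ij                   # <c^t, x_ij> > 0 on X
        a = C.T @ (np.prod(terms) / terms) # a_h from (3.22), i.e. grad g(x_ij)
        wC = C.T @ w_bar
        res = linprog(a, A_ub=A, b_ub=b,
                      A_eq=wC.reshape(1, -1), b_eq=[wC @ x_ij],
                      bounds=[(None, None)] * n, method="highs")
        return res.x

The outer loops of Algorithm 3.3.1 then simply sweep $i$ over the sampled points $y^i$ and $j$ over the weight vectors $w^j$, skipping faces whose weighting vector repeats a previously seen one up to positive scaling, and keeping the best product value found.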

In the initialization phase of the algorithm, samples of points from $Y^{\geq}$ and from $W$ are generated. To generate the sample of points from $Y^{\geq}$, Step 2 of this phase determines the point $y^*$ between $y^{AI}$ and $y^I$ such that, of all line segments with endpoints $y^{AI}$ and $y$ that lie in $Y^{\geq}$ and for which $y$ lies on the line segment connecting $y^{AI}$ and $y^I$, the line segment $L$ connecting $y^{AI}$ and $y^*$ has maximum norm. The sample $y^i$, $i = 0, 1, \ldots, S$, of points from $Y^{\geq}$ is then generated in Step 3 of this phase by partitioning $L$ into $S$ line segments of equal length, where $S$ is a positive integer chosen by the user. In Step 4, a sample of $p + 1$ all-integer vectors from $W$ is generated, where, for $p$ of these vectors, the value $N$ of one of the components is chosen by the user from the set $\{1, 2, \ldots, M - p + 1\}$.

Each iteration of the efficient point search phase of the heuristic executes two key

operations. First, it identifies an efficient face $X_w$ of $X$. Second, unless this face has been previously identified during an earlier execution of this phase, with $w = \bar{w}^{ij}$ in problem (3.18) and by using a first-order linear approximation to the objective function of this problem, it finds an extreme point $\bar{x}^{ij}$ of $X$ in this efficient face that is an approximate optimal solution to (3.18).

Steps 1 through 3 of the efficient point search phase of the algorithm identify an efficient face of $X$. In Step 1, with $y = y^i \in Y^{\geq}$ and $w = w^j \in W$, the linear program (3.19) is solved for any optimal solution $x^{ij}$. By Theorem 3.3.2, this optimal solution must exist and is an efficient solution for (3.16). In Steps 2 and 3, with $y = Cx^{ij}$ and $w = w^j$ in (3.19), the dual linear program to (3.19) is solved to yield the vector $u^{ij} \in R^p$, and the weighting vector $\bar{w}^{ij} = u^{ij} + w^j$ is computed. From Theorem 3.3.3, the face $X_{\bar{w}^{ij}}$ corresponding to this weighting vector is an efficient face for (3.16) and contains $x^{ij}$. Furthermore, from the same theorem, this face can be written as

$X_{\bar{w}^{ij}} = \{x \in R^n \mid Ax \leq b,\; \langle (\bar{w}^{ij})^T C, x\rangle = v_{ij}\}$,  (3.24)

where $v_{ij} = \langle (\bar{w}^{ij})^T C, x^{ij}\rangle$. Step 3 checks whether or not $X_{\bar{w}^{ij}}$ has been identified during a previous execution of this phase of the algorithm. If so, the algorithm proceeds to Step 6 to prepare for another possible iteration of the efficient point search phase of the heuristic. Otherwise, control shifts to Steps 4 and 5.

In Steps 4 and 5 of the efficient point search phase, problem (3.18) is approximately solved using a new efficient face $X_{\bar{w}^{ij}}$ as the feasible region. In particular, in Step 4, (3.22) is first used to construct the nonconstant portion $\langle a, x\rangle$ of a first-order Taylor series linear approximation of the objective function of problem (3.18) at $x = x^{ij} \in X_{\bar{w}^{ij}}$. Next, using the representation (3.24) of the efficient face $X_{\bar{w}^{ij}}$, an extreme point minimizer $\bar{x}^{ij}$ of $\langle a, x\rangle$ over $X_{\bar{w}^{ij}}$ is found by solving the linear program (3.23). Notice that $\bar{x}^{ij} \in X_{E,ex}$ (see Rockafellar 1970). In Step 5, the value achieved by $\bar{x}^{ij}$ in the objective function of the linear multiplicative problem is compared to the smallest value $UB$ found thus far for this objective function by the search. If $\bar{x}^{ij}$ achieves a smaller objective function value than $UB$, $\bar{x}^{ij}$ becomes the new incumbent solution $\hat{x}$, and $UB$ is reduced in value accordingly.

Notice that the performance of the heuristic algorithm depends in part upon the

number, locations, and dimensions of the efficient faces (3.24) that are searched via

problem (3.23). This, in turn, is partially dependent upon the sizes of the parameters S

and $N$ chosen by the user. The goal is to search as many points of $X_{E,ex}$ as possible by generating a variety of distinct efficient faces (3.24) of large dimensions that are dispersed widely throughout $X_E$. Notice that, since each efficient face identified by the heuristic is given in the form (3.24) and searched by solving linear program (3.23), the individual points in $X_{E,ex}$ that are searched by the algorithm are searched implicitly rather

than explicitly, i.e., they do not need to be explicitly enumerated.

3.4. Computational Results

The heuristic algorithm described in Section 3.3 has the following attractive

characteristics:

(a) it can be implemented using only linear programming methods;

(b) it generally implicitly searches many efficient extreme points of (3.16) at once

by optimizing over entire efficient faces of (3.16), rather than by explicitly examining

individual efficient extreme points of (3.16);

(c) it allows the user to manipulate the nature and extent of the efficient face

search through the choices for the input parameters S and N;

(d) it finds efficient faces of (3.16) by attempting to globally sample from a

variety of regions of the efficient set.







To evaluate the effectiveness in practice of the heuristic algorithm and its features,

we have written a VS FORTRAN computer code for the algorithm and used it to solve

260 linear multiplicative programming problems of various sizes. To execute the code on

these 260 problems, we used an IBM ES/9000 model 831 mainframe computer. As a

further illustration of the effectiveness in practice of the heuristic algorithm, we solved a

multiple-objective linear programming problem in forest management that was derived from

a real decision situation using real data.

To implement Step 3 of the initialization phase of the algorithm, we chose to set

$S = 4$, so that a sample of five points lying between $y^{AI}$ and $y^*$ in $Y^{\geq}$ is always generated in this step. We used a value of $N = 9$ in Step 4 of the initialization phase to help generate the sample of $p + 1$ points from $W$.

To solve the linear programming problems called for by the heuristic, the

computer code uses the simplex method procedures given in the subroutines of the

Optimization Subroutine Library (International Business Machines 1990). These

subroutines employ anticycling rules to handle degeneracy as needed. Therefore, they are

especially appropriate for solving instances of problem (3.23), since these problems

always contain degenerate extreme points.

Let

$\operatorname{int} R^n_+ = \{x \in R^n \mid x > 0\}$,

and suppose that $k$ is a positive integer. To generate the 260 test problems, we used the following random procedure. First, for each $j = 1, 2, \ldots, p$, we generated the elements of the vector $c^j \in R^n$ by randomly drawing elements from the set $\{1, 2, \ldots, 10\}$. Next, we generated a nonempty, compact polyhedral feasible region $X \subset \operatorname{int} R^n_+$. This region can be written as

$X = \{x \in R^n \mid Px \leq q,\; 1 \leq x_j \leq \xi,\; j = 1, 2, \ldots, n\}$,

where $P$ is a $k \times n$ matrix, $q \in R^k$, and $\xi \in R$. To accomplish this, first the elements of $P$ were generated by randomly choosing elements from the set $\{1, 2, \ldots, 10\}$. Next, for each $i = 1, 2, \ldots, k$, the value $q_i$ was calculated from the entries of the $i$th row of $P$, and, finally, $\xi$ was chosen according to the rule

$\xi = \max\{q_i \mid i = 1, 2, \ldots, k\}$.
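For readers who wish to reproduce experiments of this flavor, the following generator (ours) follows the recipe above; since the exact formula for $q_i$ is garbled in our copy of the source, the row-sum-based slack factor used here is an assumption, not the dissertation's rule.

    import numpy as np

    def generate_test_problem(p, k, n, slack=2.0, seed=None):
        """Random (LMP) instance in the spirit of Section 3.4: objective
        vectors c^j and constraint matrix P have entries from {1, ..., 10}."""
        rng = np.random.default_rng(seed)
        C = rng.integers(1, 11, size=(p, n)).astype(float)
        P = rng.integers(1, 11, size=(k, n)).astype(float)
        q = slack * P.sum(axis=1)      # assumed rule; makes x = e feasible
        xi = q.max()                   # upper bound on each variable
        return C, P, q, xi             # X = {x : Px <= q, 1 <= x_j <= xi}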

Each test problem was constructed to belong to one of four categories, where a

category is defined by the number $p$ of linear functions used in the objective function $\prod_{j=1}^{p} \langle c^j, x\rangle$ of the test problem. The values $p = 2, 3, 4, 5$ were chosen to define these

categories. We chose these categories in this way because empirical evidence seems to

indicate that the complexity of these problems is more sensitive to the magnitude of p

than to the magnitudes of k or n (Kuno, Yajima and Konno 1993). Within each

category, the test problems were classified into subcategories of 10 problems, each

defined by the values of the ordered pair (k,n).

To help evaluate the attractiveness of the solutions found by the heuristic

algorithm, we found a global optimal solution for each test problem by completely

enumerating all of the efficient extreme points of the associated multiple-objective linear







program (3.16). To accomplish this, we used the ADBASE computer code developed by

Steuer (1983).

Some statistics summarizing the results of these computations are presented in

Tables 3.1-3.4. In each table, each row gives average statistics for a subcategory (k,n) of

10 problems, a measure of the worst-case performance of the heuristic, and the number of problems in the subcategory for which a global optimal solution was found. The first statistic is

the average number of efficient extreme points found by ADBASE in solving the

problems by complete enumeration. In some sense, the magnitudes of these numbers

correspond to the average relative difficulties, by subcategory, of each group of 10 linear

multiplicative programs in a subcategory. The second statistic is the average efficiency

rating r given by

$r = 1 - \left[(z_H - z_{\min})/(z_{\max} - z_{\min})\right]$,

where $z_H$ is the objective function value returned by the heuristic, and where $z_{\min}$ and $z_{\max}$ are the global minimum and maximum values of the objective function of the test problem over the corresponding set of efficient extreme points of (3.16). Thus, $0 \leq r \leq 1$, and the closer $r$ is to 1.0, the more attractive the value $z_H$ returned by the heuristic is relative to the actual global minimum value $z_{\min}$. The third statistic given for each

subcategory in these tables is the average CPU time (seconds) that the heuristic needed to

solve a problem in the subcategory. The fourth statistic shows the lowest efficiency rating

calculated for a problem in the subcategory. It gives a measure of the worst case

performance of the heuristic algorithm when applied to the 10 problems in a subcategory.







Table 3.1. Computational Results: p = 2.

Subcategory    Avg. No. Eff.    Avg. Eff.    Avg. Solution    Lowest Eff.    No. Exact
 k    n        Ext. Points      Rating r     Time (sec.)      Rating r       Solutions
25 20 28.8 1.000 0.227 1.000 10
25 30 28.8 1.000 0.241 1.000 10
30 40 47.9 1.000 0.389 1.000 10
40 30 28.2 1.000 0.328 1.000 10
40 50 47.0 0.999 0.504 0.996 8
50 40 35.1 0.999 0.453 0.999 9
50 60 29.2 1.000 0.556 1.000 10
60 70 62.3 1.000 1.070 1.000 10



The fifth statistic is the number of problems in the subcategory for which the heuristic algorithm found a global optimal solution.

These four tables show that the solutions returned by the heuristic algorithm give,

on the average, quite accurate estimates of the actual global minimum values for the 260

linear multiplicative test problems generated. This is indicated by the fact that average

efficiency ratings by subcategory were always at least 0.920, and in approximately 96%

of the subcategories exceeded 0.950. It is noteworthy that, for these problems, these

ratings r by subcategory do not seem to decline significantly as p, k, and n increase in



Table 3.2. Computational Results: p = 3.

Subcategory    Avg. No. Eff.    Avg. Eff.    Avg. Solution    Lowest Eff.    No. Exact
 k    n        Ext. Points      Rating r     Time (sec.)      Rating r       Solutions
25 20 330.6 0.985 0.321 0.951 4
25 30 896.8 0.960 0.469 0.708 5
30 40 873.3 0.987 0.543 0.884 7
40 30 949.3 0.993 0.609 0.968 6
40 50 2073.7 0.920 0.967 0.806 4
50 40 1484.9 0.993 0.908 0.961 7
50 60 2846.3 0.995 1.298 0.978 6
60 70 5867.5 0.969 2.495 0.799 2







Table 3.3. Computational Results: p = 4.

Subcategory    Avg. No. Eff.    Avg. Eff.    Avg. Solution    Lowest Eff.    No. Exact
 k    n        Ext. Points      Rating r     Time (sec.)      Rating r       Solutions
25 20 2789.5 0.998 0.426 0.993 4
25 30 7245.9 0.992 0.598 0.945 5
30 40 23656 0.986 1.019 0.947 1
40 30 19034 0.978 0.998 0.923 2
40 50 50889 0.969 1.539 0.918 0
50 40 59443 0.969 1.587 0.843 2
50 50 83780 0.981 1.901 0.890 3



value. In addition, with the exception of one subcategory, a global optimal solution was

found for at least one problem in each subcategory.

The average solution times by subcategories shown in the four tables indicate that,

for these test problems, the computational effort required by the heuristic was rather

small. In fact, these average times were always less than 2.50 seconds. In comparison to

exact algorithms that have been used in test situations to globally solve linear

multiplicative problems, these times are generally at least as small and are often much smaller

(see, e.g., Kuno, Yajima, Konno 1993 and Ryoo and Sahinidis 1996). Furthermore, in

contrast to solution times for exact algorithms, these average solution times seem much

less sensitive to increases in p, n, and k, or to increases in the average number of efficient


Table 3.4. Computational Results: p = 5.

Subcategory    Avg. No. Eff.    Avg. Eff.    Avg. Solution    Lowest Eff.    No. Exact
 k    n        Ext. Points      Rating r     Time (sec.)      Rating r       Solutions
10 20 1331.4 0.993 0.353 0.941 5
20 10 527.1 0.998 0.294 0.993 2
25 30 57115 0.995 0.962 0.992 2







extreme points that exist in the corresponding problems (3.16); see Kuno, Yajima, and Konno (1993) and Ryoo and Sahinidis (1996).

Finally, it is worth noting that we were able to apply the heuristic to much larger

problems than those reported in Tables 3.1-3.4. However, the number of efficient extreme

points in the associated multiple-objective linear programming problems (3.16) for these

cases always exceeded 200,000. Since the ADBASE code cannot be used to find all of the

efficient extreme points for such problems, we were unable to completely enumerate the

sets of efficient extreme points to find z,, and r values for these problems. Thus, we are

as yet not able to draw conclusions concerning the accuracy of the heuristic for any

problems larger than those reported in Tables 3.1-3.4.

To further illustrate the effectiveness in practice of the heuristic algorithm, we

solved a real application problem in forest management that was studied in Steuer and

Schuler (1978) as a multiple-objective linear programming problem. The problem

involves the allocation of land and budget monies in a way that seeks to maximize

objectives in timber production, hunting and cattle grazing in the Swan Creek subunit of

the Mark Twain National Forest. Steuer and Schuler (1978) provide actual data used to

formulate their multiple-objective linear programming problem. The problem contains 31

decision variables, 5 linear objective functions, and 13 constraints. Our multiplicative

programming problem was formed from this problem by multiplying the 5 linear

objective functions together to form a single objective function. The heuristic was then

used to search for an approximate solution that maximizes this single objective function

subject to the constraints of the forest management multiple-objective linear

programming problem.








To help evaluate the attractiveness of the solution found by the heuristic

algorithm, we found a global optimal solution by enumerating the 83 efficient extreme

points of the associated forest management multiple-objective linear program using the

ADBASE computer code. An efficiency rating of r = 0.999 was calculated using the

slightly modified equation

$r = 1 - \left[(z_{\max} - z_H)/(z_{\max} - z_{\min})\right]$,

since this multiplicative programming problem is a maximization problem rather than a minimization problem. This efficiency rating indicates that the heuristic algorithm returned an attractive value $z_H$ relative to the actual global maximum value $z_{\max}$.


3.5. Discussion

The results of this chapter imply that there are at least two ways to rewrite a

concave multiplicative programming problem as a concave minimization problem. It

follows that concave minimization theory and methods can be used in these ways to

analyze and solve concave multiplicative programs. The results also imply that a concave

multiplicative programming problem can be analyzed and solved directly, without any

reformulation, as a quasiconcave minimization problem over a convex set. Furthermore,

the analysis in the chapter implies that any concave multiplicative programming problem

(Px) with a compact feasible region has at least one optimal solution that is an efficient

extreme point solution of the associated multiple-objective mathematical programming

problem (3.15). Therefore, the opportunity exists for devising solution methods for such

problems (Px) that search among the efficient extreme points of the associated multiple-

objective problems (3.15). The chapter proposes a heuristic algorithm that takes this







approach for solving linear multiplicative programs. From the computational results presented for this heuristic algorithm, we conclude that its features and performance offer significant potential for conveniently finding, with relatively little computational effort, very attractive solutions to the various applications of linear multiplicative programming encountered in practice. Thus, the theoretical and algorithmic results presented in this
encountered in practice. Thus, the theoretical and algorithmic results presented in this

chapter offer some potential new avenues for more effectively analyzing and solving

multiplicative programming problems of various types.













CHAPTER 4
A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME-
SPACE

4.1. Introduction

Recall from Chapter 1 that the multiplicative programming problem is given by


(Px)  $v_X = \min \prod_{j=1}^{p} f_j(x)$, s.t. $x \in X$,

where $p \geq 2$ is an integer, $X$ is a nonempty set in $R^n$, and, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ satisfies $f_j(x) > 0$ for all $x \in X$. For simplicity, we assume that the minimum $v_X$ in problem (Px) is achieved.

For any $x \in R^n$, let $f(x)$ denote the $p$-vector with $j$th entry equal to $f_j(x)$, $j = 1, 2, \ldots, p$, and let $y \in R^p$ denote a generic outcome vector with $j$th entry equal to $y_j$, $j = 1, 2, \ldots, p$. For each $j = 1, 2, \ldots, p$, let $\hat{y}_j \in R$ satisfy

$\hat{y}_j \geq \sup f_j(x)$, s.t. $x \in X$,

where $\hat{y}_j = +\infty$ is possible, and let $\hat{y} \in R^p$ denote the vector with $j$th entry equal to $\hat{y}_j$, $j = 1, 2, \ldots, p$. Although various outcome-space reformulations of problem (Px) have been proposed for solution purposes, one of the most common reformulations is given by the problem

(P$_{Y^{\leq}}$)  $v_{Y^{\leq}} = \min g(y)$, s.t. $y \in Y^{\leq}$,








where

$Y^{\leq} = \{y \in R^p \mid f(x) \leq y \leq \hat{y} \text{ for some } x \in X\}$,  (4.1)

and where, for each $y \in Y^{\leq}$, $g : Y^{\leq} \to R$ is defined by

$g(y) = \prod_{j=1}^{p} y_j$.  (4.2)

For example, problem (P$_{Y^{\leq}}$) is essentially the reformulation of problem (Px) used in the algorithms of Benson (1998c), Falk and Palocsay (1994) and Thoai (1991). Notice that, since $X$ is nonempty, $Y^{\leq}$ is a nonempty set. By constructing appropriate global solution algorithms for problem (P$_{Y^{\leq}}$), this problem provides us with the opportunity to solve problem (Px) by working in the outcome space $R^p$ of the problem, rather than in the decision space $R^n$, which is generally much larger than $R^p$. In order to globally solve problem (P$_{Y^{\leq}}$), it is important to understand the properties of the set $Y^{\leq}$ defined by (4.1), of the function $g$ defined by (4.2), and of problem (P$_{Y^{\leq}}$) itself.
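In the linear case, membership of a candidate outcome $y$ in $Y^{\leq}$ reduces to a linear feasibility problem: does some $x \in X$ satisfy $Cx \leq y$ (with $y \leq \hat{y}$ checked directly)? A sketch of ours (SciPy assumed):

    import numpy as np
    from scipy.optimize import linprog

    def in_Y_leq(y, C, A, b, y_hat):
        """Check y in Y<= for linear f_j(x) = <c^j, x>: need y <= y_hat and
        some x with Cx <= y, Ax <= b (a feasibility LP, zero objective)."""
        if np.any(y > y_hat):
            return False
        res = linprog(np.zeros(C.shape[1]),
                      A_ub=np.vstack([C, A]), b_ub=np.concatenate([y, b]),
                      bounds=[(None, None)] * C.shape[1], method="highs")
        return res.status == 0         # status 0 = optimal => feasible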

This chapter undertakes a mathematical analysis of the outcome-space reformulation (P$_{Y^{\leq}}$) of problem (Px). The analysis is organized according to whether or not the outcome-space problem satisfies conditions for the general case, the convex case, or the polyhedral case. For the general case, we show, for instance, that globally solving either problem (Px) or problem (P$_{Y^{\leq}}$) essentially also globally solves the other problem, and that, for any feasible point $\bar{y}$ for problem (P$_{Y^{\leq}}$), either $g(y) < g(\bar{y})$ for some $y \in Y^{\leq}$ or $\bar{y}$ satisfies a condition that is necessary, but not sufficient, for it to be a local optimal solution for problem (P$_{Y^{\leq}}$). For the convex and polyhedral cases, we show stronger results. For example, we show for the convex case that any global optimal solution for problem (P$_{Y^{\leq}}$) must lie on the boundary of $Y^{\leq}$, that the objective function $g$ in problem (P$_{Y^{\leq}}$) is strictly pseudoconcave on $Y^{\leq}$, and, when $Y^{\leq}$ is closed and contains at least one extreme point, that problem (P$_{Y^{\leq}}$) has an extreme point global optimal solution.

The analysis of the general case of problem (P$_{Y^{\leq}}$) is given in Section 4.2. Section 4.3 provides analytical results for both the convex and polyhedral cases of problem (P$_{Y^{\leq}}$).

4.2. Results for the General Case of Problem $(P_{Y^\le})$

Notice under the assumptions made in Section 4.1 for problem $(P_X)$ that $Y^\le$ is a nonempty subset of $R^p_> := \{ z \in R^p \mid z > 0 \}$. When $Y^\le$ satisfies this condition, we obtain what we will call the general case of problem $(P_{Y^\le})$.

It is important to establish that by solving the general-case outcome-space formulation $(P_{Y^\le})$ of problem $(P_X)$, a global optimal solution for problem $(P_X)$ can be recovered. The following result, by showing that problems $(P_X)$ and $(P_{Y^\le})$ are equivalent in a certain sense, immediately establishes this fact.

Theorem 4.2.1. (a) If $x^*$ is a global optimal solution for problem $(P_X)$, then $y^* = f(x^*)$ is a global optimal solution for problem $(P_{Y^\le})$. Furthermore, $v_X = v_{Y^\le}$.

(b) Problem $(P_{Y^\le})$ has at least one global optimal solution. Furthermore, if $y^*$ is a global optimal solution for problem $(P_{Y^\le})$, then any $x^* \in X$ such that $f(x^*) \le y^*$ is a global optimal solution for problem $(P_X)$.






Proof. (a) Let $x^*$ be a global optimal solution for problem $(P_X)$, and set $y^* = f(x^*)$. From (4.1) and (4.2), this implies that $y^* \in Y^\le$ and that

$$g(y^*) = \prod_{j=1}^{p} f_j(x^*) = v_X.$$

Therefore, $v_{Y^\le} \le v_X$. Suppose that $g(\hat{y}) < v_X$ held for some $\hat{y} \in Y^\le$. Then, by (4.1) and (4.2), there would exist an $x \in X$ such that

$$0 < \prod_{j=1}^{p} f_j(x) \le g(\hat{y}) < v_X,$$

which contradicts the definition of $v_X$. Therefore, $g(y) \ge v_X$ for all $y \in Y^\le$. This implies that $v_{Y^\le} \ge v_X$. Since $v_{Y^\le} \le v_X$, it follows that $v_X = v_{Y^\le}$ and that $y^*$ is a global optimal solution for problem $(P_{Y^\le})$.

(b) By assumption, we may choose a global optimal solution for problem $(P_X)$. From part (a), this implies that problem $(P_{Y^\le})$ has at least one global optimal solution. Suppose that $y^*$ is a global optimal solution for problem $(P_{Y^\le})$. Since $y^* \in Y^\le$, (4.1) implies that we may choose an arbitrary $x^* \in X$ such that $f(x^*) \le y^*$. Then, from (4.2), since $0 < f(x^*) \le y^*$,

$$\prod_{j=1}^{p} f_j(x^*) \le \prod_{j=1}^{p} y_j^* = g(y^*). \qquad (4.3)$$

Since $x^* \in X$ and $y^*$ is a global optimal solution for problem $(P_{Y^\le})$, this implies that

$$v_X \le \prod_{j=1}^{p} f_j(x^*) \le g(y^*) = v_{Y^\le}.$$

From part (a), $v_X = v_{Y^\le}$. By (4.3), this implies that $\prod_{j=1}^{p} f_j(x^*) = v_X$. Since $x^* \in X$, it follows that $x^*$ is a global optimal solution for problem $(P_X)$. □
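Part (b) of the theorem is directly algorithmic: once a global optimal solution $y^*$ for problem $(P_{Y^\le})$ is available, an optimal solution for problem $(P_X)$ can be recovered from the same kind of feasibility linear program. A minimal sketch, reusing the assumed linear instance of the previous fragment (so C, A and b are again illustrative data, not part of the theorem):

    import numpy as np
    from scipy.optimize import linprog

    def recover_x(y_star, C, A, b):
        """Return some x in X with C x + 1 <= y_star. By Theorem 4.2.1(b), any
        such x is globally optimal for (P_X) when y_star is optimal for (P_Y<=)."""
        res = linprog(c=np.zeros(C.shape[1]),
                      A_ub=np.vstack([A, C]),
                      b_ub=np.concatenate([b, y_star - 1.0]),
                      bounds=[(0, None)] * C.shape[1])
        return res.x if res.status == 0 else None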

Suppose, in the general case of problem $(P_{Y^\le})$, that a point $\bar{y} \in Y^\le$ has been generated. For algorithmic purposes, it may be valuable to have a tool for finding an alternate point $y \in Y^\le$ that satisfies $g(y) < g(\bar{y})$, if such a point exists. The next result gives an idea for potentially helping to create such a tool. To prove this result, we need the following lemma. This lemma will also be useful in proving several other results later in this chapter.

Lemma 4.2.1. Assume that $\bar{y} \in Y^\le$. Then, for any $y \in Y^\le$,

$$(1/p)\langle \nabla g(\bar{y}), y \rangle = g(\bar{y})(1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j),$$

and

$$g(\bar{y})(1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j) \ge g(\bar{y})\left[g(y)/g(\bar{y})\right]^{1/p},$$

with equality holding in the latter relationship iff, for some constant $M > 0$, $y_j = M\bar{y}_j$, $j = 1, 2, \ldots, p$.

Proof. Choose an arbitrary point $y \in Y^\le$, and suppose that $\bar{y} \in Y^\le$. Then, by (4.2), since $Y^\le \subseteq R^p_>$, $g(\bar{y}) > 0$. By definition of $g$,

$$(1/p)\langle \nabla g(\bar{y}), y \rangle = (1/p)\sum_{j=1}^{p}\left[\prod_{k \ne j} \bar{y}_k\right] y_j = (1/p)\sum_{j=1}^{p}\left[g(\bar{y})/\bar{y}_j\right] y_j = g(\bar{y})\sum_{j=1}^{p}(1/p)(y_j/\bar{y}_j). \qquad (4.4)$$

Since $(1/p) > 0$, $(y_j/\bar{y}_j) > 0$ for each $j = 1, 2, \ldots, p$, and $p(1/p) = 1$, the arithmetic-geometric mean inequality (Duffin, Peterson, and Zener 1967) implies that

$$\sum_{j=1}^{p}(1/p)(y_j/\bar{y}_j) \ge \left[g(y)/g(\bar{y})\right]^{1/p},$$

with equality holding iff, for some constant $M > 0$, $y_j = M\bar{y}_j$ for each $j = 1, 2, \ldots, p$. Together with (4.4), since $g(\bar{y}) > 0$, this implies the desired results. □
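Both assertions of the lemma are easy to confirm numerically. The spot-check below, at random positive points (the values are illustrative only), verifies the gradient identity, the arithmetic-geometric mean bound, and the equality case $y = M\bar{y}$:

    import numpy as np

    rng = np.random.default_rng(1)
    p = 4
    y_bar = rng.uniform(0.5, 3.0, p)
    y = rng.uniform(0.5, 3.0, p)

    g = lambda v: float(np.prod(v))
    grad_g = lambda v: g(v) / v             # componentwise, since dg/dv_j = g(v)/v_j

    lhs = (1.0 / p) * (grad_g(y_bar) @ y)
    mid = g(y_bar) * (1.0 / p) * np.sum(y / y_bar)
    rhs = g(y_bar) * (g(y) / g(y_bar)) ** (1.0 / p)
    assert np.isclose(lhs, mid) and mid >= rhs          # identity and AM-GM bound

    # Equality in the bound when y = M * y_bar (here M = 2): both sides are 2 g(y_bar).
    assert np.isclose(g(y_bar) * (1.0 / p) * np.sum(2.0 * y_bar / y_bar),
                      g(y_bar) * (g(2.0 * y_bar) / g(y_bar)) ** (1.0 / p))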

Theorem 4.2.2. Assume that $\bar{y} \in Y^\le$. If

$$1.0 > \inf_{y \in Y^\le}\, (1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j), \qquad (4.5)$$

then $g(y) < g(\bar{y})$ for some $y \in Y^\le$. In particular, if $\hat{y}$ achieves the infimum in (4.5), then $g(\hat{y}) < g(\bar{y})$.

Proof. Suppose that $\bar{y} \in Y^\le$. If (4.5) holds, then for some $\hat{y} \in Y^\le$,

$$1.0 > (1/p)\sum_{j=1}^{p}(\hat{y}_j/\bar{y}_j). \qquad (4.6)$$

Since $g(\bar{y}) > 0$, this implies that

$$g(\bar{y}) > g(\bar{y})(1/p)\sum_{j=1}^{p}(\hat{y}_j/\bar{y}_j). \qquad (4.7)$$

From Lemma 4.2.1, since $\hat{y} \in Y^\le$, we know that

$$g(\bar{y})(1/p)\sum_{j=1}^{p}(\hat{y}_j/\bar{y}_j) \ge g(\bar{y})\left[g(\hat{y})/g(\bar{y})\right]^{1/p}. \qquad (4.8)$$

Since $g(\bar{y}) > 0$, together (4.7) and (4.8) imply that

$$1.0 > \left[g(\hat{y})/g(\bar{y})\right]^{1/p}.$$

Because $g(\bar{y}) > 0$, this implies that $g(\hat{y}) < g(\bar{y})$. Therefore, $g(y) < g(\bar{y})$ for some $y \in Y^\le$. Since any point $\hat{y}$ that achieves the infimum in (4.5) also satisfies (4.6), the argument above also implies that if $\hat{y}$ achieves the infimum in (4.5), then $g(\hat{y}) < g(\bar{y})$. □

Notice that when $\bar{y} \in Y^\le$, the infimum in (4.5) is either less than 1.0 or equal to 1.0, since the choice $y = \bar{y}$ shows that it cannot exceed 1.0. From Theorem 4.2.2, when this infimum is less than 1.0, a point $y$ in $Y^\le$ such that $g(y) < g(\bar{y})$ exists. In particular, in this case $\bar{y}$ is not a global optimal solution for problem $(P_{Y^\le})$. The next result covers the case in which the infimum in (4.5) equals 1.0.

Theorem 4.2.3. Assume that $\bar{y} \in Y^\le$. If

$$1.0 = \inf_{y \in Y^\le}\, (1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j), \qquad (4.9)$$

then $\bar{y}$ is an optimal solution to

$$v_d = \min_{y \in Y^\le}\, \langle \nabla g(\bar{y}), y - \bar{y} \rangle, \qquad (4.10)$$

and $v_d = 0$.

Proof. From (4.9), since $\bar{y} \in Y^\le$, the infimum in (4.9) is achieved at $y = \bar{y}$. By Lemma 4.2.1, since $g(\bar{y})$ is a positive constant, this implies that $y = \bar{y}$ also minimizes $(1/p)\langle \nabla g(\bar{y}), y \rangle$ over $Y^\le$. Since $(1/p)$ is a positive constant and $\langle \nabla g(\bar{y}), \bar{y} \rangle$ is a constant, it is easy to see that this implies that $\bar{y}$ is an optimal solution to (4.10) and that $v_d = 0$. □







A point $\bar{y} \in Y^\le$ is a local optimal solution for problem $(P_{Y^\le})$ when there exists an $\varepsilon > 0$ such that, for each $y \in Y^\le$ for which $\|y - \bar{y}\| \le \varepsilon$, $g(y) \ge g(\bar{y})$. From Theorem 4.2.3, when $\bar{y} \in Y^\le$ and (4.9) holds, then, for any $y \in Y^\le$, if there is a $\delta > 0$ such that $d := (y - \bar{y})$ satisfies $\bar{y} + \lambda d \in Y^\le$ for all $\lambda$ such that $0 \le \lambda \le \delta$, the directional derivative of $g$ at $\bar{y}$ in the direction $d$ will be nonnegative, i.e., $\langle \nabla g(\bar{y}), d \rangle \ge 0$. From Bazaraa, Sherali and Shetty (1993), this is a necessary, but not sufficient, condition for $\bar{y}$ to be a local (or global) optimal solution for problem $(P_{Y^\le})$.
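Taken together, Theorems 4.2.2 and 4.2.3 suggest a simple improvement scheme: given $\bar{y}$, minimize $(1/p)\sum_j (y_j/\bar{y}_j)$ over $Y^\le$; a value below 1.0 yields a strictly better point by Theorem 4.2.2, while a value of 1.0 means that $\bar{y}$ satisfies the necessary optimality condition above. The sketch below is a minimal illustration in which, purely as a simplifying assumption, $Y^\le$ is replaced by a finite sample of its points; Section 4.3 shows that in the polyhedral case this step becomes an ordinary linear program.

    import numpy as np

    def improve(y_bar, Y_sample):
        """One improvement step in the spirit of Theorems 4.2.2 and 4.2.3.

        Y_sample: array whose rows are points of Y<= (an illustrative stand-in
        for optimizing over Y<= itself).  Returns a point with strictly smaller
        g-value, or None if the sampled infimum in (4.5) equals 1.0."""
        p = len(y_bar)
        ratios = (1.0 / p) * np.sum(Y_sample / y_bar, axis=1)
        k = int(np.argmin(ratios))
        return Y_sample[k] if ratios[k] < 1.0 else None

    # Repeated calls drive g(y_bar) down monotonically:
    #     while (y := improve(y_bar, Y_sample)) is not None:
    #         y_bar = y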


4.3. Results for the Convex and Polyhedral Cases of Problem $(P_{Y^\le})$

When $Y^\le$, in addition to being a nonempty subset of $R^p_>$, is a convex set, we obtain what we will call the convex case of problem $(P_{Y^\le})$. Similarly, when $Y^\le$, in addition to being a nonempty subset of $R^p_>$, is a polyhedron, we obtain what we will call the polyhedral case of problem $(P_{Y^\le})$. Each of these types of outcome-space versions of problem $(P_X)$ arises from a broad class of decision-space problems, as shown by the next result.

Theorem 4.3.1. When $X$ is a convex set and, for each $j = 1, 2, \ldots, p$, $f_j$ is a convex function on $X$, we obtain the convex case of problem $(P_{Y^\le})$. When $X$ is a polyhedron and, for each $j = 1, 2, \ldots, p$, $f_j$ is linear on $R^n$, we obtain the polyhedral case of problem $(P_{Y^\le})$.

Proof. Assume, in addition to the assumptions made in Section 4.1 on $X$ and on $f_j$, $j = 1, 2, \ldots, p$, that $X$ is a convex set and that, for each $j = 1, 2, \ldots, p$, $f_j$ is a convex function on $X$. We will show that $Y^\le$ is a convex set. Choose any $y^1, y^2 \in Y^\le$. From (4.1), since $y^1, y^2 \in Y^\le$, we may choose $x^1, x^2 \in X$ such that $f(x^1) \le y^1$ and $f(x^2) \le y^2$. Let $\lambda \in R$ satisfy $0 \le \lambda \le 1$. Then, since $\lambda \ge 0$ and $(1-\lambda) \ge 0$, for each $j = 1, 2, \ldots, p$,

$$\lambda f_j(x^1) + (1-\lambda)f_j(x^2) \le \lambda y_j^1 + (1-\lambda)y_j^2. \qquad (4.11)$$

By the convexity of $f_j$, $j = 1, 2, \ldots, p$, on the convex set $X$, if we set $\bar{x} = \lambda x^1 + (1-\lambda)x^2$, then

$$\bar{x} \in X, \qquad (4.12)$$

and, for each $j = 1, 2, \ldots, p$,

$$f_j(\bar{x}) \le \lambda f_j(x^1) + (1-\lambda)f_j(x^2). \qquad (4.13)$$

From (4.11)-(4.13), $f(\bar{x}) \le \lambda y^1 + (1-\lambda)y^2$, where $\bar{x} \in X$. Since $y^1, y^2 \in Y^\le$, $y^i \le \bar{y}$ holds for each $i = 1, 2$. As a result, since $\lambda, (1-\lambda) \ge 0$, $\lambda y^1 + (1-\lambda)y^2 \le \bar{y}$. The conditions for $\lambda y^1 + (1-\lambda)y^2$ to belong to $Y^\le$ are thus satisfied. By the choices of $y^1$, $y^2$ and $\lambda$, this implies that $Y^\le$ is a convex set.

Now suppose, in addition to the assumptions made in Section 4.1 on $X$ and on $f_j$, $j = 1, 2, \ldots, p$, that $X$ is a polyhedron and that, for each $j = 1, 2, \ldots, p$, $f_j$ is a linear function on $R^n$. We will show that $Y^\le$ is a polyhedron. By definition, since $X$ is a polyhedron, there exist a finite number $q$ of linear functions $g_j$, $j = 1, 2, \ldots, q$, on $R^n$ and real numbers $b_j$, $j = 1, 2, \ldots, q$, such that

$$X = \{x \in R^n \mid g_j(x) \le b_j, \ j = 1, 2, \ldots, q\}.$$

Let $Z \subseteq R^{n+p}$ be defined as the set of all solutions $(x, y)$ to the system of linear inequalities (4.14)-(4.16) given by

$$f_j(x) - y_j \le 0, \quad j = 1, 2, \ldots, p, \qquad (4.14)$$

$$y_j \le \bar{y}_j, \quad j = 1, 2, \ldots, p, \qquad (4.15)$$

$$g_j(x) \le b_j, \quad j = 1, 2, \ldots, q. \qquad (4.16)$$

Then, by definition, $Z$ is a polyhedron in $R^{n+p}$. Let $A$ be the $p \times (n+p)$ matrix whose first $n$ columns each equal $0 \in R^p$ and whose last $p$ columns together form the $p \times p$ identity matrix. Then, from (4.1) and the definition of $Z$, $Y^\le = AZ$. From Rockafellar (1970, Theorem 19.3), $Y^\le$ is a polyhedron in $R^p$. □
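The polyhedral half of the proof is constructive: $Y^\le$ is the image, under the projection matrix $A$, of the polyhedron $Z$ described by (4.14)-(4.16). The fragment below assembles $A$ and an inequality description of $Z$ for a small assumed instance; the data F, G, b and y_bar are illustrative only.

    import numpy as np

    n, p, q = 3, 2, 4
    F = np.array([[1.0, 2.0, 1.0],            # rows give the linear objectives f_j
                  [2.0, 1.0, 3.0]])
    G = np.vstack([-np.eye(n),                # x >= 0 ...
                   np.ones((1, n))])          # ... and x_1 + x_2 + x_3 <= 2
    b = np.array([0.0, 0.0, 0.0, 2.0])
    y_bar = np.array([10.0, 10.0])            # finite upper bounds, as in (4.15)

    # Z = {(x, y) : F x - y <= 0, y <= y_bar, G x <= b}, stacked as M [x; y] <= r.
    M = np.block([[F, -np.eye(p)],                     # inequalities (4.14)
                  [np.zeros((p, n)), np.eye(p)],       # inequalities (4.15)
                  [G, np.zeros((q, p))]])              # inequalities (4.16)
    r = np.concatenate([np.zeros(p), y_bar, b])

    # A keeps only the y-coordinates, so Y<= = A Z (Rockafellar 1970, Thm. 19.3).
    A = np.hstack([np.zeros((p, n)), np.eye(p)])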

In convex cases of problem $(P_{Y^\le})$ (and thus in polyhedral cases as well), certain locations within $Y^\le$ for seeking global optimal solutions can be specified. For instance, we have the following result.

Theorem 4.3.2. Suppose that problem $(P_{Y^\le})$ satisfies the conditions for the convex case. Then:

(a) Any global optimal solution for problem $(P_{Y^\le})$ belongs to the boundary of $Y^\le$.

(b) If $Y^\le$ is closed and contains at least one extreme point, then there exists at least one global optimal solution for problem $(P_{Y^\le})$ that is an extreme point of $Y^\le$.

Proof. Assume that $Y^\le$, in addition to being a nonempty subset of $R^p_>$, is a convex set, i.e., that we have the convex case for problem $(P_{Y^\le})$. Then, from Theorem 4.2.1, problem $(P_{Y^\le})$ has at least one global optimal solution.

(a) To show this part of the theorem, let $y^*$ denote an arbitrary global optimal solution to problem $(P_{Y^\le})$. Suppose that $y^*$ is not on the boundary of $Y^\le$. By the choice of $y^*$ and since $Y^\le$ is a convex set, $Y^\le$ has a nonempty interior. Therefore, $y^*$ must belong to the interior of $Y^\le$. From (4.1), this implies that for some $x \in X$, $f(x) < y^*$ must hold. By assumption, since $x \in X$, $f(x) > 0$. Therefore, if we set $y = f(x)$, it follows that $y \in Y^\le$ and

$$\prod_{j=1}^{p} y_j < \prod_{j=1}^{p} y_j^*.$$

From (4.2), this contradicts the global optimality of $y^*$ in problem $(P_{Y^\le})$. Therefore, $y^*$ must belong to the boundary of $Y^\le$.

(b) From the discussion in Section 3.2, since $Y^\le$ is a nonempty convex set and, for each $j = 1, 2, \ldots, p$, the function $h_j(y) = y_j$ is positive and concave on $Y^\le$, the global optimal solution set for problem $(P_{Y^\le})$ is identical to the global optimal solution set for the problem

$$(P_{\hat{g}}) \qquad \min \hat{g}(y), \ \text{s.t.} \ y \in Y^\le,$$

where $\hat{g} : Y^\le \to R$ is the concave function defined for each $y \in Y^\le$ by

$$\hat{g}(y) = \left[\prod_{j=1}^{p} y_j\right]^{1/p}.$$

Since $Y^\le$ is a nonempty, closed convex set with at least one extreme point, from Rockafellar (1970, Corollary 18.5.3), it is easy to see that $Y^\le$ can contain no lines. Furthermore, since problem $(P_{Y^\le})$ has at least one global optimal solution, problem $(P_{\hat{g}})$ also has at least one global optimal solution. By Rockafellar (1970, Corollary 32.3.1), since $\hat{g}$ is a concave function on $Y^\le$, the latter two statements imply that problem $(P_{\hat{g}})$ has at least one global optimal solution $y$ that is an extreme point of $Y^\le$. Because the optimal solution sets of problems $(P_{Y^\le})$ and $(P_{\hat{g}})$ coincide, this completes the proof. □

Suppose that $Y^\le$ is a nonempty, closed convex subset of $R^p_>$, and that $Y^\le$ contains at least one extreme point. Then, from Theorem 4.3.2, there will exist at least one global optimal solution for problem $(P_{Y^\le})$ that is an extreme point of $Y^\le$, and all global optimal solutions for problem $(P_{Y^\le})$ will lie on the boundary of $Y^\le$. Neither of these properties, however, is necessarily shared by the decision set-based problem $(P_X)$ whose outcome-space reformulation yields problem $(P_{Y^\le})$. The following example demonstrates this.

Example 4.3.1. Let $p = 2$, $X = \{(x_1, x_2)^T \in R^2 \mid 0 \le x_i \le 6, \ i = 1, 2\}$,

$$f_1(x_1, x_2) = (x_1 - 1)^2 + 1,$$

and

$$f_2(x_1, x_2) = (x_2 - 2)^2 + 1$$

in problem $(P_X)$. Then $X$ is a nonempty, convex set and, for each $j = 1, 2$, $f_j$ is a convex, positively-valued function on $X$. Therefore, by Theorem 4.3.1, the problem $(P_{Y^\le})$ obtained by formulating the outcome-space version of problem $(P_X)$ is guaranteed to satisfy the conditions of the convex case for problem $(P_{Y^\le})$. Furthermore, it is not difficult to show, in this case, that $Y^\le$ is compact. Thus, $Y^\le$ is closed and contains at least one extreme point. It is easy to see that the unique global optimal solution to problem $(P_{Y^\le})$ is $(y^*)^T = (1, 1)$ which, as guaranteed by Theorem 4.3.2, is an extreme point of $Y^\le$ (and is thus on the boundary of $Y^\le$). On the other hand, the only global optimal solution to problem $(P_X)$ is $(x^*)^T = (1, 2)$, yet $x^*$ is neither on the boundary of $X$ nor an extreme point of $X$.
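A quick grid search (illustrative only) confirms the claims of the example: the product $f_1 f_2$ attains its minimum at the interior point $x^* = (1, 2)$ of $X$, with outcome vector $y^* = (1, 1)$.

    import numpy as np

    x1, x2 = np.meshgrid(np.linspace(0.0, 6.0, 601), np.linspace(0.0, 6.0, 601))
    f1 = (x1 - 1.0) ** 2 + 1.0
    f2 = (x2 - 2.0) ** 2 + 1.0
    i = np.unravel_index(np.argmin(f1 * f2), f1.shape)
    print(x1[i], x2[i], f1[i], f2[i])         # -> 1.0 2.0 1.0 1.0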

To present the next result, we need to define two types of functions.

Definition 4.3.1. Let $Z \subseteq R^n$ be a nonempty convex set, and let $h : Z \to R$. The function $h$ is said to be quasiconcave on $Z$ when, for each $z^1, z^2 \in Z$ and $\lambda \in R$ such that $0 \le \lambda \le 1$,

$$h[\lambda z^1 + (1-\lambda)z^2] \ge \min\{h(z^1), h(z^2)\}.$$

Definition 4.3.2. Let $W$ be an open set in $R^n$ that contains $Z \subseteq R^n$, and let $h : W \to R$. The function $h$ is said to be strictly pseudoconcave over $Z$ when $h$ is differentiable over $Z$ and, for each distinct $z^1, z^2 \in Z$, if $\langle \nabla h(z^1), z^2 - z^1 \rangle \le 0$, then $h(z^2) < h(z^1)$.

It is well known that a differentiable, quasiconcave function $h : Z \to R$ need not be strictly pseudoconcave over $Z$. For a discussion of quasiconcave and strictly pseudoconcave functions see, for example, Bazaraa, Sherali and Shetty (1993).

From Konno and Kuno (1995, p. 379), we know that when $Y^\le$ is a convex set, since $Y^\le \subseteq R^p_>$, the function $g : Y^\le \to R$ defined by (4.2) is quasiconcave on $Y^\le$. Thus, in the convex case, problem $(P_{Y^\le})$ is a minimization of a quasiconcave function over a convex set. In fact, however, we have the following even stronger result.






Theorem 4.3.3. Suppose that problem $(P_{Y^\le})$ satisfies the conditions for the convex case. Then, in this problem, $g$ is a strictly pseudoconcave function over the convex set $Y^\le$.

Proof. The set $Y^\le$ is a convex set by definition of the convex case for problem $(P_{Y^\le})$. To show that $g$ is strictly pseudoconcave over $Y^\le$, notice first that, by (4.2), $g$ can be considered to be well defined over the open set $R^p_>$. Also notice that $g$ is differentiable over $R^p_>$ and, thus, over $Y^\le \subseteq R^p_>$.

Suppose now that $y^1$ and $y^2$ are distinct points in $Y^\le$ that satisfy $\langle \nabla g(y^1), y^2 - y^1 \rangle \le 0$. Then, from (4.2), we obtain

$$0 \ge \langle \nabla g(y^1), y^2 - y^1 \rangle = \sum_{k=1}^{p}\left[\prod_{j \ne k} y_j^1\right](y_k^2 - y_k^1) = \sum_{k=1}^{p}\left[\prod_{j \ne k} y_j^1\right]y_k^2 - \sum_{k=1}^{p} g(y^1) = \sum_{k=1}^{p} g(y^1)(y_k^2/y_k^1) - p\,g(y^1) = g(y^1)\left[\sum_{k=1}^{p}(y_k^2/y_k^1) - p\right]. \qquad (4.17)$$

By multiplying both sides of (4.17) by $(1/p)$ and rearranging, we obtain that

$$g(y^1) \ge g(y^1)(1/p)\sum_{k=1}^{p}(y_k^2/y_k^1). \qquad (4.18)$$

From Lemma 4.2.1,

$$g(y^1)(1/p)\sum_{k=1}^{p}(y_k^2/y_k^1) \ge g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p}, \qquad (4.19)$$

with equality holding iff, for some $M > 0$, $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. There are two cases to consider.

Case (i): There is no $M > 0$ such that $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. Then, in (4.19), strict inequality holds, so that from (4.18) and (4.19),

$$g(y^1) > g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p}.$$

Since $g(y^1) > 0$, this implies that $g(y^2) < g(y^1)$.

Case (ii): For some $M > 0$, $y_k^2 = M y_k^1$, $k = 1, 2, \ldots, p$. If we choose such an $M$, then (4.19) holds as an equality. Thus, from (4.19) and the choice of $M$, we obtain that

$$g(y^1)(1/p)\sum_{k=1}^{p}(y_k^2/y_k^1) = g(y^1)\left[g(y^2)/g(y^1)\right]^{1/p} \qquad (4.20)$$

and that

$$g(y^2) = M^p g(y^1), \qquad (4.21)$$

respectively. Since $g(y^1) > 0$, together (4.18), (4.20) and (4.21) imply that

$$g(y^1) \ge g(y^1)\left(M^p\right)^{1/p} = M g(y^1).$$

Dividing through by $g(y^1) > 0$ yields $M \le 1$. Notice that $M \ne 1$ since, by assumption, $y^1$ and $y^2$ are distinct. Therefore $M < 1$. By (4.21), since $g(y^1), g(y^2) > 0$, this implies that $g(y^2) < g(y^1)$, and the proof is complete. □
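The defining condition of Definition 4.3.2 is also easy to test numerically for $g$. The sketch below draws random distinct pairs in the positive orthant (illustrative values) and confirms that whenever $\langle \nabla g(y^1), y^2 - y^1 \rangle \le 0$, it follows that $g(y^2) < g(y^1)$:

    import numpy as np

    rng = np.random.default_rng(2)
    g = lambda y: float(np.prod(y))
    grad_g = lambda y: g(y) / y               # componentwise, since dg/dy_j = g(y)/y_j

    for _ in range(10000):
        y1, y2 = rng.uniform(0.1, 5.0, (2, 3))
        if grad_g(y1) @ (y2 - y1) <= 0.0:
            assert g(y2) < g(y1)              # Definition 4.3.2 holds for g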

Remark 4.3.1. Theorem 4.3.3 justifies and strengthens the claim of Sniedovich and Findlay (1995, p. 317) that when $Y^\le$ is a convex subset of $R^p_>$, $g : Y^\le \to R$ defined by (4.2) is differentiable and pseudoconcave on $Y^\le$.






From Theorem 4.3.3, in the convex case, problem $(P_{Y^\le})$ is a global optimization problem involving the minimization of a strictly pseudoconcave function over a convex set $Y^\le$. Therefore, as in the general case, multiple local optimal solutions for problem $(P_{Y^\le})$ will generally exist that are not globally optimal.

From Theorem 3.2.1, we know that when $Y^\le$ is a nonempty, convex subset of $R^p_>$, the function $\hat{g} : Y^\le \to R$ defined, as in the proof of Theorem 4.3.2, by

$$\hat{g}(y) = \left[g(y)\right]^{1/p} \qquad (4.22)$$

is concave, where $g : Y^\le \to R$ is given by (4.2). By the next result, when the domain of $\hat{g}$ is restricted to an appropriate subset of $Y^\le$, a stronger statement can be made.

Theorem 4.3.4. Assume that $Y^\le$ is a nonempty, compact, convex subset of $R^p_>$. For any $a \in R^p$ and $b \in R$ such that $a > 0$ and $b > 0$, let $Z(a, b) = Y^\le \cap \{y \in R^p \mid \langle a, y \rangle = b\}$. Then $\hat{g} : Z(a, b) \to R$ defined for each $y \in Z(a, b)$ by (4.22) is a strictly concave function for any $a \in R^p$ and $b \in R$ such that $a > 0$ and $b > 0$.

Proof. Assume that $y^1, y^2 \in Z(a, b)$ and $y^1 \ne y^2$, where $a \in R^p$, $b \in R$, $a > 0$, and $b > 0$. Since $Z(a, b)$ is an intersection of two convex sets, it is itself a convex set. Therefore, if we choose $\lambda \in R$ such that $0 < \lambda < 1$, then

$$z := \lambda y^1 + (1-\lambda)y^2 \in Z(a, b).$$

Also, by (4.2) and (4.22),

$$\hat{g}(z) = \left[\prod_{j=1}^{p}\left[\lambda y_j^1 + (1-\lambda)y_j^2\right]\right]^{1/p}. \qquad (4.23)$$

From Polya and Szego (1972),

$$\left[\prod_{j=1}^{p}\left[\lambda y_j^1 + (1-\lambda)y_j^2\right]\right]^{1/p} \ge \left[\prod_{j=1}^{p}\lambda y_j^1\right]^{1/p} + \left[\prod_{j=1}^{p}(1-\lambda)y_j^2\right]^{1/p}, \qquad (4.24)$$

with equality holding iff $\lambda y_j^1 = K(1-\lambda)y_j^2$, $j = 1, 2, \ldots, p$, for some positive constant $K$. Since

$$\left[\prod_{j=1}^{p}\lambda y_j^1\right]^{1/p} = \lambda\,\hat{g}(y^1)$$

and

$$\left[\prod_{j=1}^{p}(1-\lambda)y_j^2\right]^{1/p} = (1-\lambda)\,\hat{g}(y^2),$$

(4.23) and (4.24) will imply the desired result if we can show that no $K > 0$ exists such that

$$\lambda y_j^1 = K(1-\lambda)y_j^2, \quad j = 1, 2, \ldots, p. \qquad (4.25)$$

Notice that, since $y^1 \ne y^2$, $K_1 := [\lambda/(1-\lambda)]$ does not satisfy (4.25).

Suppose, to the contrary, that for some $K > 0$, (4.25) is satisfied. Then from (4.25) it follows that

$$y^1 = K[(1-\lambda)/\lambda]\,y^2. \qquad (4.26)$$

Since $y^1, y^2 \in Z(a, b)$,

$$\langle a, y^1 \rangle = \langle a, y^2 \rangle = b. \qquad (4.27)$$

Substituting for $y^1$ in (4.27) via (4.26), we obtain

$$K[(1-\lambda)/\lambda]\langle a, y^2 \rangle = \langle a, y^2 \rangle = b.$$

Solving here for $K$, we obtain that $K = [\lambda/(1-\lambda)]$. Since $K = K_1 = [\lambda/(1-\lambda)]$ does not satisfy (4.25), this contradiction concludes the proof. □
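A numerical illustration of the theorem (with assumed data): rescale random positive points onto a common hyperplane $\langle a, y \rangle = b$ and check that $\hat{g}$ at their midpoint strictly exceeds the average of its endpoint values, as strict concavity on the slice requires.

    import numpy as np

    rng = np.random.default_rng(3)
    p = 3
    a = rng.uniform(0.5, 2.0, p)              # a > 0, as the theorem requires
    g_hat = lambda y: float(np.prod(y)) ** (1.0 / p)

    for _ in range(1000):
        y1, y2 = rng.uniform(0.5, 4.0, (2, p))
        y2 = y2 * ((a @ y1) / (a @ y2))       # place y2 on the hyperplane through y1
        z = 0.5 * y1 + 0.5 * y2
        # Strict concavity on the slice (the points are distinct with probability one):
        assert g_hat(z) > 0.5 * g_hat(y1) + 0.5 * g_hat(y2)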

It is important to notice that the counterpart of Theorem 4.3.4 in the decision space does not hold, even in the polyhedral case. In particular, suppose that $X \subseteq R^n$ is a nonempty, compact polyhedron and, for each $j = 1, 2, \ldots, p$, that there exists a $c^j \in R^n$ such that $f_j(x) = \langle c^j, x \rangle > 0$ for all $x \in X$. Then, although the function $\hat{h} : X \to R$ defined for each $x \in X$ by

$$\hat{h}(x) = \left[\prod_{j=1}^{p}\langle c^j, x \rangle\right]^{1/p} \qquad (4.28)$$

is concave (see Theorem 3.2.1), the function $\hat{h} : X(a, b) \to R$ need not be strictly concave, where $a \in R^p$, $b \in R$, $a > 0$, $b > 0$, and

$$X(a, b) = \left\{x \in X \ \middle|\ \sum_{j=1}^{p} a_j\langle c^j, x \rangle = b\right\}.$$

The following example illustrates this observation.

Example 4.3.2. Let

$$X = \{(x_1, x_2)^T \in R^2 \mid 0.5 \le x_i \le 4.0, \ i = 1, 2\},$$

and let $f_j(x_1, x_2) = \langle (1, 1), (x_1, x_2) \rangle$, $j = 1, 2$. Then $X$ is a nonempty, compact polyhedron and, for each $j = 1, 2$, $f_j$ is positive and linear on $X$. As guaranteed by Theorem 3.2.1, $\hat{h} : X \to R$, which, by (4.28), is given by

$$\hat{h}(x_1, x_2) = (x_1 + x_2),$$

is concave. However, if, for example, $a_1 = a_2 = 1$ and $b = 8$, then $\hat{h}$ is not strictly concave on

$$X(a, b) = \{(x_1, x_2)^T \mid 0.5 \le x_i \le 4.0, \ i = 1, 2, \ x_1 + x_2 = 4\}.$$
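Numerically (illustrative): with the choices above, $\hat{h}(x) = x_1 + x_2$ is constant along the segment $X(a, b)$, so it cannot be strictly concave there.

    import numpy as np

    h_hat = lambda x: ((x[0] + x[1]) * (x[0] + x[1])) ** 0.5   # from (4.28) with p = 2
    for t in np.linspace(0.5, 3.5, 7):
        x = np.array([t, 4.0 - t])            # points of X(a, b): x_1 + x_2 = 4
        assert np.isclose(h_hat(x), 4.0)      # h_hat is constant on the segment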

Consider now problem $(P_{Y^\le})$ when the conditions of the polyhedral case hold. Assume also that $Y^\le$ is a compact set and that $\bar{y} \in Y^\le$. For algorithmic purposes, it may be quite useful in this case to develop tools for finding local optimal solutions for problem $(P_{Y^\le})$. These tools could then potentially be used to construct global solution algorithms for the problem that repeatedly move from a local optimal solution to an improved local optimal solution until a global optimal solution is found. The remaining results in this section are motivated, in part, by the desire to find such tools.

Notice that in the polyhedral case, the optimization problem in (4.5) is a linear program given by

$$(LP) \qquad \min (1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j), \ \text{s.t.} \ y \in Y^\le.$$

Problem (LP) will have an optimal solution $y^*$ that can be found, for instance, by the simplex method. Since $\bar{y} \in Y^\le$, the minimum value $v_{\min}$ in problem (LP) satisfies $v_{\min} \le 1.0$. As a result, there are three possible cases for problem (LP). First, $v_{\min} < 1.0$ may hold. Second, $v_{\min} = 1.0$ may hold, with $\bar{y}$ being the unique optimal solution to problem (LP). Third, $v_{\min} = 1.0$ may hold, with problem (LP) having multiple optimal solutions. In the first case, from Theorem 4.2.2, it follows that $g(y^*) < g(\bar{y})$, where $y^*$ is any optimal solution to problem (LP), so that a more attractive feasible solution $y^*$ to problem $(P_{Y^\le})$ than $\bar{y}$ has been found.
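The improvement step just described is an ordinary linear program once $Y^\le$ is available in inequality form. A minimal sketch follows, assuming for illustration that $Y^\le = \{y \mid M y \le r\}$ is given explicitly (in practice it is the projected polyhedron of Theorem 4.3.1); the small polyhedron used here reappears as Example 4.3.3 below.

    import numpy as np
    from scipy.optimize import linprog

    def lp_step(y_bar, M, r):
        """Solve (LP): minimize (1/p) sum_j y_j / y_bar_j over {y : M y <= r}."""
        p = len(y_bar)
        res = linprog(c=(1.0 / p) / y_bar, A_ub=M, b_ub=r,
                      bounds=[(None, None)] * p)
        return res.x, res.fun

    # Illustrative polyhedron: y1 + y2 >= 8, y1 <= 7, y2 <= 4.
    M = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    r = np.array([-8.0, 7.0, 4.0])
    y_star, v_min = lp_step(np.array([4.0, 4.0]), M, r)
    print(v_min)                              # 1.0 here: the second or third case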

To analyze the second case, we need the following two definitions and a lemma.

Definition 4.3.3. A point $\bar{y} \in Y^\le$ is a strict local optimal solution for problem $(P_{Y^\le})$ when there exists an $\varepsilon > 0$ such that, for each $y \in Y^\le$ for which $y \ne \bar{y}$ and $\|y - \bar{y}\| \le \varepsilon$, $g(y) > g(\bar{y})$.

Definition 4.3.4. Let $Z$ be a nonempty convex set in $R^n$, and let $h : Z \to R$. The function $h$ is said to be strongly quasiconcave on $Z$ when, for each $z^1, z^2 \in Z$ with $z^1 \ne z^2$, we have

$$h[\lambda z^1 + (1-\lambda)z^2] > \min\{h(z^1), h(z^2)\}$$

for each $\lambda$ such that $0 < \lambda < 1$.

Lemma 4.3.1. Let $Z$ be a nonempty convex set in $R^n$, and let $h : Z \to R$ be strongly quasiconcave. Suppose that $z^i$, $i = 1, 2, \ldots, k$, are distinct points in $Z$ and that $s$ is an element of the convex hull of $z^i$, $i = 1, 2, \ldots, k$, such that, for each $i = 1, 2, \ldots, k$, $s \ne z^i$. Then

$$h(s) > \min\{h(z^i) \mid i = 1, 2, \ldots, k\}.$$

Proof. The lemma is easy to prove using Definition 4.3.4 and induction. □

The following result analyzes the case in which $v_{\min} = 1.0$ and $\bar{y}$ is the unique optimal solution to problem (LP).

Theorem 4.3.5. Assume that problem $(P_{Y^\le})$ satisfies the conditions for the polyhedral case, and that $Y^\le$ is a compact set. Assume also that $\bar{y} \in Y^\le$. Suppose that $v_{\min} = 1.0$ and that $y = \bar{y}$ is the unique optimal solution to problem (LP). Then $\bar{y}$ is a strict local optimal solution for problem $(P_{Y^\le})$.

Proof. Since $g(\bar{y}) > 0$ and $y = \bar{y}$ is the unique optimal solution to the problem

$$\min_{y \in Y^\le}\,(1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j),$$

$y = \bar{y}$ must also be the unique optimal solution to the problem

$$\min_{y \in Y^\le}\, g(\bar{y})(1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j).$$

Therefore, by Lemma 4.2.1, $y = \bar{y}$ is the unique optimal solution to the problem

$$\min_{y \in Y^\le}\,(1/p)\langle \nabla g(\bar{y}), y \rangle.$$

Since $(1/p) > 0$ and $\langle \nabla g(\bar{y}), \bar{y} \rangle$ is a constant, this implies that $y = \bar{y}$ is the unique optimal solution to the problem

$$\min_{y \in Y^\le}\,\langle \nabla g(\bar{y}), y - \bar{y} \rangle. \qquad (4.29)$$

Therefore, the optimal value of problem (4.29) is 0, and for all $y \in Y^\le$ such that $y \ne \bar{y}$,

$$\langle \nabla g(\bar{y}), y - \bar{y} \rangle > 0. \qquad (4.30)$$

Let $d^1, d^2, \ldots, d^k$ represent the directions of the edges of $Y^\le$ emanating from the extreme point $\bar{y}$ of $Y^\le$. From (4.30),

$$\langle \nabla g(\bar{y}), d^i \rangle > 0$$

for all $i = 1, 2, \ldots, k$. By Theorem 4.1.2 in Bazaraa, Sherali, and Shetty (1993), this implies that there exist positive reals $\delta_i$, $i = 1, 2, \ldots, k$, such that

$$g(\bar{y} + \lambda d^i) > g(\bar{y}) \qquad (4.31)$$

for each $\lambda \in (0, \delta_i)$. Let $\delta = (1/2)\min\{\delta_i \mid i = 1, 2, \ldots, k\}$, and consider the points $\bar{y}$ and $\bar{y} + \delta d^1, \bar{y} + \delta d^2, \ldots, \bar{y} + \delta d^k$. Then, by the definition of $\delta$ and (4.31),

$$g(\bar{y} + \delta d^i) > g(\bar{y}) \qquad (4.32)$$

for each $i = 1, 2, \ldots, k$. Let $z$ be any element of the convex hull of $\bar{y}$, $\bar{y} + \delta d^i$, $i = 1, 2, \ldots, k$, such that $z \ne \bar{y}$ and, for each $i = 1, 2, \ldots, k$, $z \ne \bar{y} + \delta d^i$. Since $g$ is a strictly pseudoconcave function on $Y^\le$, it is also a strongly quasiconcave function on $Y^\le$ (Bazaraa, Sherali, Shetty 1993). As a result, by Lemma 4.3.1,

$$g(z) > \min\{g(\bar{y}), g(\bar{y} + \delta d^i), \ i = 1, 2, \ldots, k\}. \qquad (4.33)$$

From (4.32) and (4.33), $g(z) > g(\bar{y})$. Since $\delta > 0$, this implies that there exists an $\varepsilon > 0$ sufficiently small so that if $z \in Y^\le$, $\|z - \bar{y}\| \le \varepsilon$, and $z \ne \bar{y}$, then $g(z) > g(\bar{y})$. □

Under the assumptions of Theorem 4.3.5, if $v_{\min} = 1.0$ but $y = \bar{y}$ is one of two or more optimal solutions to problem (LP), then $\bar{y}$ need not be a strict local optimal solution for problem $(P_{Y^\le})$. The following example illustrates this point.

Example 4.3.3. Let

$$Y^\le = \{(y_1, y_2)^T \in R^2 \mid y_1 + y_2 \ge 8, \ y_1 \le 7, \ y_2 \le 4\},$$

and let $\bar{y}^T = (4, 4)$. Then $Y^\le$ is a nonempty, compact polyhedron in $R^2$, and the assumptions of Theorem 4.3.5 are satisfied. In this case, $\bar{y} \in Y^\le$ and $\bar{y}$ is an optimal solution to problem (LP). However, since $(y^\delta)^T = (4+\delta, 4-\delta) \in Y^\le$ and $g(y^\delta) < g(\bar{y})$ for all values of $\delta$ such that $0 < \delta \le 3$, by Definition 4.3.3, $\bar{y}$ is not a strict local optimal solution for problem $(P_{Y^\le})$. (In fact, $\bar{y}$ is not even a local optimal solution for problem $(P_{Y^\le})$.) Notice that problem (LP) in this case has multiple optimal solutions.
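A short numerical confirmation of the example (illustrative): moving from $\bar{y} = (4, 4)$ along the facet $y_1 + y_2 = 8$ strictly decreases $g$.

    import numpy as np

    for d in np.linspace(0.1, 3.0, 6):
        y = np.array([4.0 + d, 4.0 - d])      # stays in Y<= for 0 < d <= 3
        assert np.prod(y) < 16.0              # g(y) = 16 - d**2 < g(y_bar) = 16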

In the third case of problem (LP), $v_{\min} = 1.0$ and problem (LP) has multiple optimal solutions. In this case, by the next result, as in the first case, an improved feasible solution for problem $(P_{Y^\le})$ is at hand. The proof of this result relies crucially on Theorem 4.3.4.

Theorem 4.3.6. Assume that problem $(P_{Y^\le})$ satisfies the conditions for the polyhedral case, and that $Y^\le$ is compact. Suppose that $\bar{y}$ is an optimal solution for problem (LP), and suppose that problem (LP) has multiple optimal solutions. Then, for any $y^* \ne \bar{y}$ that is an optimal solution for problem (LP), $g(y^*) < g(\bar{y})$ must hold.

Proof. Let $y^* \ne \bar{y}$ be an optimal solution to problem (LP). Then, since $g(\bar{y}) > 0$, $y^*$ is also an optimal solution for the problem

$$\min g(\bar{y})(1/p)\sum_{j=1}^{p}(y_j/\bar{y}_j), \ \text{s.t.} \ y \in Y^\le.$$

By Lemma 4.2.1, since $\bar{y} \in Y^\le$, this implies that $y^*$ is an optimal solution to the problem

$$\min (1/p)\langle \nabla g(\bar{y}), y \rangle, \ \text{s.t.} \ y \in Y^\le.$$

Since $(1/p)\langle \nabla g(\bar{y}), \bar{y} \rangle$ is a fixed number, it follows that $y^*$ is an optimal solution to the problem

$$\min (1/p)\langle \nabla g(\bar{y}), y - \bar{y} \rangle, \ \text{s.t.} \ y \in Y^\le. \qquad (4.34)$$

By assumption, $\bar{y}$ is an optimal solution for problem (LP). Therefore, the optimal value of problem (LP) equals 1.0. From Theorem 4.2.3, this implies that the optimal value of problem (4.34) equals 0.


