Multiplicative programming

Material Information

Title:
Multiplicative programming theory and algorithms
Creator:
Boger, George
Publication Date:
1999
Language:
English
Physical Description:
vii, 137 leaves ; 29 cm.

Subjects

Subjects / Keywords:
Algorithms (jstor)
Approximation (jstor)
Efficiency objectives (jstor)
Heuristics (jstor)
Linear programming (jstor)
Mathematics (jstor)
Objective functions (jstor)
Optimal solutions (jstor)
Polyhedrons (jstor)
Polytopes (jstor)
Decision and Information Sciences thesis, Ph. D (lcsh)
Dissertations, Academic -- Decision and Information Sciences -- UF (lcsh)
Genre:
bibliography (marcgt)
non-fiction (marcgt)

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1999.
Bibliography:
Includes bibliographical references (leaves 131-136).
General Note:
Printout.
General Note:
Vita.
Statement of Responsibility:
by George Boger.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright George Boger. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
021549200 ( ALEPH )
43702775 ( OCLC )











MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS


By

GEORGE BOGER













A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA














ACKNOWLEDGMENTS


I would like to thank my entire supervisory committee, Dr. Harold Benson, Dr. Selcuk Erenguc, Dr. Asoo Vakharia, and Dr. Richard Francis, for their time and helpful

feedback on my dissertation. I am especially grateful to my committee chairman, Dr.

Benson, for suggesting the topic of multiplicative programming problems and for his

tremendous assistance and unending support. Without his help, this dissertation would

not have been completed. I would also like to thank Mr. Erijang Sun for proving some

theoretical results needed to support my dissertation topic.

I am also grateful to the DIS department chairman, Dr. Erenguc, for providing an

assistantship and for allowing me to teach undergraduate courses during my time at the

University of Florida. Teaching was an enjoyable and rewarding experience.

I would like to thank my family for their encouragement and emotional support. I

would also like to thank my colleagues in the Ph.D. program for their friendship and their

support.

Finally, I am in debt to my master's degree advisor, Dr. Frederick Buoni, at the

Florida Institute of Technology, for his guidance. He suggested multiple objective linear

programming as a topic for my thesis. While working on the thesis, I met Dr. Benson

during a visit to FIT to present a talk related to multiple objective linear programming.









Dr. Benson agreed to serve on my master's degree committee and later recruited me for

the DIS Ph.D. program.














TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTERS

1 INTRODUCTION

1.1. The Multiplicative Programming Problem
1.2. Reformulations of the Multiplicative Programming Problem
1.3. Purpose and Organization of the Dissertation

2 A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMING PROBLEMS

2.1. Organization of the Literature Review
2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP)
2.2.1. Methods Based on Quadratic Programming
2.2.2. Methods Based on Searching the Outcome Set
2.2.3. Methods Based on Solving a Parametric Master Problem
2.2.4. Methods Based on Polyhedral Annexation
2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP) when p ≥ 3
2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)
2.4.1. Methods Based on Solving a Reformulated Problem
2.4.2. A Method Based on Outer Approximation
2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem

3 CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE

3.1. Introduction
3.2. Analysis
3.3. Efficient Point Search Heuristic
3.4. Computational Results
3.5. Discussion

4 A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME-SPACE

4.1. Introduction
4.2. Results for the General Case of Problem (Py')
4.3. Results for Convex and Polyhedral Cases of Problem (Py')
4.4. Discussion

5 AN OUTCOME-SPACE CUTTING-PLANE ALGORITHM FOR LINEAR MULTIPLICATIVE PROGRAMMING

5.1. Introduction
5.2. Theoretical Prerequisites
5.3. Outcome-Space, Cutting-Plane Algorithm
5.3.1. Strict Local Optimal Solution Search
5.3.2. Cutting Plane Construction
5.3.3. Termination Test
5.3.4. Outcome-Space, Cutting-Plane Algorithm
5.4. Implementation
5.5. Example
5.6. Concluding Remarks

6 SUMMARY AND FUTURE RESEARCH

6.1. Introduction
6.2. Future Research on the Heuristic Algorithm
6.3. Future Research on Global Solution Algorithms

REFERENCES

BIOGRAPHICAL SKETCH














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS

By

George Boger

December 1999

Chairman: Harold P. Benson
Major Department: Decision and Information Sciences

Multiplicative programming problems are mathematical optimization problems in

which the objective function contains a product of several real valued functions defined

over a common domain and the feasible decisions are described by a nonempty set. These

optimization problems have some important applications in engineering, finance,

economics, and other fields. Multiplicative programming problems, however, are difficult

global optimization problems that are known to be NP-hard.

This dissertation has two purposes. The first is to develop and test a heuristic

algorithm that finds a good solution, though not necessarily a globally optimal solution,

for the linear multiplicative programming problem. The second purpose is to develop a

global solution algorithm for the linear multiplicative programming problem that is

potentially more efficient than existing algorithms for this problem.









To evaluate the effectiveness in practice of the heuristic algorithm, we have

written a FORTRAN computer program and used it to solve 260 randomly generated

linear multiplicative programming problems of various sizes. Our experimental results

show that the computational requirements of the heuristic algorithm are not overly

burdensome when compared to the effort required to solve a linear multiplicative

programming problem.

The framework of the outcome-space, cutting-plane algorithm is taken from a

pure cutting plane, decision set-based method developed by Horst and Tuy for solving

concave minimization problems. By adapting the approach of this method to an outcome-

space reformulation of the linear multiplicative programming problem, rather than

directly applying the method to the original decision set formulation, it is expected that

considerable computational savings can be obtained. We also show how

additional computational benefits might be obtained by implementing the new algorithm

appropriately. To illustrate the new algorithm, we apply it to the solution of a sample

problem.














CHAPTER 1
INTRODUCTION


1.1. The Multiplicative Programming Problem

Multiplicative programming problems are mathematical optimization problems in which the objective function contains a product of several real valued functions defined over a common domain and the feasible decisions are described by a nonempty set. These problems occur in a wide variety of application areas.

For example, Konno and Inori (1989) studied a bond portfolio optimization

problem in which the portfolio's performance is measured by a number of indices such as

the average coupon rate, the average terminal yield, and the average length to maturity.

The goal of the portfolio manager is to improve the performance of the portfolio by

purchasing or selling bonds in the marketplace subject to some limiting constraints. The

manager must consider multiple incomparable objectives such as maximizing the average

terminal yield and minimizing the average maturity time. Konno and Inori chose to

optimize several objectives simultaneously by multiplying them together since the

objectives do not share a common scale.

Another example of a multiplicative programming problem, given in Maling,

Mueller and Heller (1982), is a packaging problem encountered in designing very large-

scale integrated circuit (VLSI) chips and laying out building floor plans or manufacturing

plant facilities. In the problem, the overall rectangular dimensions of the feasible layout







plans are constrained rather than fixed. Different layout plans with differing overall

rectangular dimensions are obtained according to how the components of a system are

arranged within each plan. The objective is to find the arrangement of components that

minimizes the overall layout area subject to certain constraints on the area and the

perimeter of the layout.

Henderson and Quandt (1971, p. 15) also give an application of multiplicative

programming problems. Their example is from microeconomics. In their example, a

rational consumer wishes to find a combination of two commodities to purchase from

which he will derive the highest possible level of satisfaction. Budgetary constraints and

the availability of the commodities limit the quantities the consumer may purchase. The

consumer's level of satisfaction is captured by his utility function, which is assumed to be

the product of the quantities of the two commodities. The rational consumer's problem is

then formulated as maximizing his utility function subject to the budgetary and

commodity availability constraints.

The multiplicative programming problem or, more briefly, the multiplicative program, may be formulated mathematically as

(Px)    min h(x) = ∏_{j=1}^p f_j(x), s.t. x ∈ X,

where p ≥ 2 is an integer, X ⊆ R^n, and, for each j = 1, 2, ..., p, f_j : X → R satisfies f_j(x) ≥ 0 for all x ∈ X. For simplicity we will assume throughout this dissertation that the minimum of problem (Px) is achieved at some point x* ∈ X. In addition, we will assume that p is significantly less than n, since this holds for virtually all applications of multiplicative programming problems. If f_j(x̄) = 0 for some j ∈ {1, 2, ..., p} and some x̄ ∈ X, then clearly x̄ is a global optimal solution. This condition can be checked by solving the p minimization problems min {f_j(x) | x ∈ X}, j = 1, 2, ..., p. Therefore, we may assume without loss of generality that, for each j = 1, 2, ..., p, f_j(x) > 0 holds for all x ∈ X.
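When the factors are linear and X is a polyhedron, this preliminary check amounts to solving p linear programs. The following is a minimal sketch of the check under those assumptions (toy data; scipy's linprog stands in for the LP solver):

import numpy as np
from scipy.optimize import linprog

# Toy polyhedron X = {x >= 0 : Ax <= b} and linear factors f_j(x) = (c^j, x) + d_j
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
C = np.array([[1.0, 2.0], [3.0, 1.0]])    # row j holds c^j
d = np.array([0.5, 1.0])

for j, (cj, dj) in enumerate(zip(C, d), start=1):
    res = linprog(cj, A_ub=A, b_ub=b)     # min (c^j, x) over X; x >= 0 by default
    fmin = res.fun + dj
    if np.isclose(fmin, 0.0):
        print(f"f_{j} vanishes at x = {res.x}; that point is globally optimal for (Px)")
    else:
        print(f"min of f_{j} over X is {fmin:.3f} > 0")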

The objective function h of problem (Px) is generally not a convex function. As

a result, problem (Px) belongs to a class of nonconvex programming problems called

global optimization problems. In contrast to convex programming problems, there may be

many local minima for problem (Px) that are not globally optimal. Conventional local

optimization methods based on gradients, subgradients, conjugate directions, or the

Karush-Kuhn-Tucker conditions, for instance, are at best guaranteed only to find a local

minimum. These methods must then terminate, since there is neither a local criterion for

certifying the global optimality of a given solution nor a way to determine how to proceed

to a better solution if the solution is not globally optimal. From the perspective of

computational complexity, problem (Px) is a difficult problem that is known to be NP-

hard even when the objective function is simply h(x) = x_1 x_2 and the feasible region X

is a polyhedron (Matsui 1996).

When, in addition to the assumptions given previously for problem (Px), X is a convex set and, for each j = 1, 2, ..., p, f_j : X → R is a concave function, we obtain the concave case of problem (Px), called the concave multiplicative programming problem. The convex case of problem (Px), called the convex multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem (Px), X is a convex set and, for each j = 1, 2, ..., p, f_j : X → R is a convex function. A special linear case of problem (Px), called the linear multiplicative programming problem, is obtained when, in addition to the assumptions made previously for problem (Px), X is a compact polyhedron and, for each j = 1, 2, ..., p, f_j : X → R is a linear function (Konno and Kuno 1992).

1.2. Reformulations of the Multiplicative Programming Problem

During the 1990's there has been a resurgence of interest in problem (Px).

Encouraged by the rapid advances in high speed computing, researchers began developing

and testing new methods for solving global optimization problems that arise in practical

applications, including problem (Px).

Included among the global optimization methods used to solve problem (Px) for

the special case when p = 2 are various parametric simplex method-based algorithms

(e.g., Konno and Kuno 1992, Konno and Kuno 1995, Konno, Yajima, and Matsui 1991,

and Schaible and Sodini 1995), branch and bound procedures (e.g., Kuno 1996 and Muu

and Tam 1992), and various other types of algorithms (e.g., Konno and Kuno 1990,

Pardalos 1990, and Tuy and Tam 1992).

When p > 2, globally solving problem (Px) has been shown empirically to

require considerably more computational effort than when p = 2 (see, e.g., Ryoo and

Sahinidis 1996). A smaller number of the algorithms for solving problem (Px) when







p > 2 solve the problem directly without reformulating it as an outcome-space problem.

Included among these, for instance, is the polyhedral annexation algorithm of Tuy (1991).

Most of the algorithms for solving problem (Px) when p > 2, however, solve the

problem indirectly by globally solving an outcome-space reformulation of the problem

instead. This is because in practical applications p is routinely much smaller than n,

often by two or more orders of magnitude. As a result, working in R^p is computationally less challenging than working in R^n.

Let y ∈ R^p denote the p-vector with jth entry equal to y_j, j = 1, 2, ..., p. For each j = 1, 2, ..., p, let ŷ_j ∈ R satisfy

ŷ_j ≥ sup f_j(x), s.t. x ∈ X,

where ŷ_j = +∞ is possible, and let ŷ ∈ R^p denote the vector with jth entry equal to ŷ_j, j = 1, 2, ..., p. Let f(x) denote the vector f(x) = [f_1(x), f_2(x), ..., f_p(x)]^T, where f_j : X → R, j = 1, 2, ..., p, are the functions used in defining problem (Px). Thoai (1991) and later Konno and Kuno (1995) based their outer approximation algorithms for respectively solving the convex and linear cases of problem (Px) on one of the more direct reformulations of problem (Px) as an outcome-space problem. Their reformulation is given by

(Py')    min ∏_{j=1}^p y_j, s.t. y ∈ Y',

where

Y' = {y ∈ R^p | f(x) ≤ y ≤ ŷ for some x ∈ X}.







Falk and Palocsay (1994) based their branch and bound, image space algorithm for the linear case of problem (Px) on another outcome-space reformulation that is closely related to problem (Py'). Their reformulation is given by

(Py)    min ∏_{j=1}^p y_j, s.t. y ∈ Y,

where

Y = {y ∈ R^p | y = Cx for some x ∈ X}

and C is a (p × n) matrix whose rows are (c^j)^T, j = 1, 2, ..., p.
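As a small numerical illustration of these linear outcome-space reformulations (hypothetical data; the constant terms are taken to be zero, as in problem (Py)), the product of the linear factors at any x equals the product of the coordinates of its image y = Cx:

import numpy as np

# Hypothetical (p x n) matrix C with p = 2, n = 3, and a sample decision vector x
C = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.0, 2.0]])
x = np.array([0.2, 0.4, 0.1])

y = C @ x                                      # image of x in outcome space
obj_decision = np.prod([c @ x for c in C])     # product of the factors at x
assert np.isclose(obj_decision, np.prod(y))    # same value, computed in R^p
print(y, np.prod(y))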


1.3. Purpose and Organization of the Dissertation

This dissertation has two main purposes. The first is to develop and test a heuristic

algorithm that finds a good solution, though not necessarily a globally optimal solution,

for the linear case of problem (Px). The second purpose is to develop an exact global

solution algorithm for the linear case of problem (Px) that is potentially more efficient

than existing algorithms for this problem.

Since the linear multiplicative programming problem is known to be an NP-hard,

multiextremal global optimization problem, it is inherently more difficult to globally

solve than a convex programming problem of the same size. In some application cases, a

solution will adequately meet the requirements of a user; see, e.g., Konno and Inori

(1989). In these cases, the use of a heuristic algorithm seems to be appropriate for finding

a satisfactory solution. To date, however, there is no known heuristic algorithm tailored to

finding a good solution for the linear multiplicative programming problem. In their

review of algorithms for solving problem (Px), Konno and Kuno (1995) do not mention








any heuristic algorithms for problem (Px), and our survey of the literature has revealed

none.

To develop the heuristic algorithm, we first analyze the concave multiplicative

programming problem. The analysis yields a new way to write a concave multiplicative

programming problem as a concave minimization problem. As a result, a concave

multiplicative programming problem can be solved by using any existing concave

minimization algorithm without resorting to a reformulation of the problem. We also

show that some relationships exist between concave multiplicative programming

problems and certain multiple-objective mathematical programs. These relationships are

exploited to develop the heuristic algorithm for the linear case of problem (Px).
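A standard observation along these lines (our gloss; not necessarily the construction developed in Chapter 3) is that positivity and concavity of the factors already make the problem a concave minimization problem after a logarithmic transformation:

\[
\min_{x \in X} \; \prod_{j=1}^p f_j(x)
\quad\Longleftrightarrow\quad
\min_{x \in X} \; \sum_{j=1}^p \log f_j(x),
\]

since log is increasing, and each log f_j is concave whenever f_j is concave and positive; hence the right-hand problem minimizes a concave function over the convex set X.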

For cases where a linear multiplicative program must be solved for an exact global optimal solution, we expect that globally solving the outcome-space reformulation (Py') instead will result in a significant decrease in the computational effort over that required to directly solve the problem. This is because in typical applications of linear multiplicative programs, p is several orders of magnitude smaller than n. As a result, working in R^p should be computationally less challenging than working in R^n.

To globally solve the outcome-space reformulation (Py') of a linear multiplicative program, we develop an outcome-space, pure cutting plane algorithm that works in R^p. The framework for the algorithm is taken from a pure cutting plane, decision set-based concave minimization method developed by Horst and Tuy (1993). We show how to adapt this method to solving the reformulation (Py') of a linear multiplicative program

for a global extreme point optimal solution. Once this global solution is found, we can








recover a globally optimal solution for the linear multiplicative program in decision

space. As a further computational enhancement, we also show that for purposes of

implementation, the mechanics of the outcome-space, cutting-plane algorithm can be

applied to the smaller problem (Py) instead of problem (Py').

The organization of the dissertation is as follows. In Chapter 2 we present a review

of the literature on multiplicative programming problems. In Chapter 3 we analyze the

concave multiplicative programming problem, apply the results to develop a heuristic

algorithm for the linear multiplicative programming problem, and report test results using

the heuristic algorithm on some randomly-generated problems. In Chapter 4 we analyze

the reformulation problem (Py') and show that, under certain convexity assumptions on Y', problem (Py') has a global extreme point optimal solution y* ∈ Y'. We then present a procedure that is guaranteed to find a strict local optimal extreme point solution for the reformulation problem (Py') of the linear multiplicative program. In Chapter 5 we

present an outcome-space, cutting-plane algorithm for globally solving a linear

multiplicative program. The algorithm employs the strict local optimal search procedure

presented in Chapter 4. We also illustrate the algorithm by applying it to the solution of a

sample problem. Finally, in Chapter 6, we give an overall summary and conclusions, and

we discuss directions for further research.













CHAPTER 2
A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMING
PROBLEMS


2.1. Organization of the Literature Review

In this chapter we present a review of the literature on methods proposed for

solving multiplicative programming problems. The only known literature review on

multiplicative programming problems appears in Konno and Kuno (1995). In their

literature review Konno and Kuno defined multiplicative programming problems as "a

class of minimization problems containing a product of several convex functions either in

its objective function or in its constraints." They included problems in which the

objective function contained the summation of a convex function and the product of

convex functions.

Konno and Kuno (1995) organized their literature review based on whether the

problem data are linear or nonlinear and on the number of functions that appear in the

objective function. They considered solution methods for the following multiplicative

programming problems.

The first multiplicative programming problem considered by Konno and Kuno is

the special case of quadratic programming

(LMP2)    min f(x) = ((c^1,x) + d_1)((c^2,x) + d_2), s.t. x ∈ D,

where D := {x ∈ R^n | Ax ≥ b, x ≥ 0} is a non-empty polytope (bounded polyhedron) in which A is an m × n matrix, b ∈ R^m, and, for each i = 1, 2, c^i ∈ R^n \ {0} and d_i ∈ R. In addition, it is assumed that, for each x ∈ D, (c^i,x) + d_i > 0, i = 1, 2.

The second multiplicative programming problem that they considered is the

convex multiplicative programming problem

(CMP)    min f(x) = ∏_{j=1}^p f_j(x), s.t. x ∈ X,

where X ⊆ R^n is a nonempty, compact, convex set and, for each j = 1, 2, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

Konno and Kuno (1995) considered two special cases of problem (CMP): (1) the case where p = 2 and (2) the case where p ≥ 2 and the problem data are linear. The second case may be defined as the following extension of problem (LMP2):

(LMP)    min f(x) = ∏_{i=1}^p [(c^i,x) + d_i], s.t. x ∈ D,

where p ≥ 2 is an integer and, for each i = 1, 2, ..., p, (c^i,x) + d_i > 0 holds for all x ∈ D.

Finally, Konno and Kuno (1995) considered three classes of problems related to problem (CMP). In the first class is the following problem:

(GCMP)    min f(x) = f_0(x) + ∑_{j=1}^q f_{2j-1}(x) f_{2j}(x), s.t. x ∈ X,

where, for each j = 0, 1, ..., 2q, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

The second class is a special case of (GCMP) in which q = 1 and the problem data are linear. This class may be defined as the following extension of problem (LMP2):

(GLMP)    min f(x) = (c^0,x) + ((c^1,x) + d_1)((c^2,x) + d_2), s.t. x ∈ D,

where c^0 ∈ R^n and c^i, d_i, i = 1, 2, and D are defined as in problem (LMP2).

The third class of problems considered by Konno and Kuno (1995) is the

minimization of a convex function over a feasible region that includes a product of

convex functions in its constraint set.

Konno and Kuno's coverage of the literature is not exhaustive. They focused on

algorithms that have been demonstrated by computational experiments to be practical for

reasonably large problems (Konno and Kuno 1995, p. 370). Algorithms proposed by

Konno, Kuno, and their associates have been tested on randomly generated problems and

the results reported. However, computational results have not been reported by most of

the other researchers and therefore their methods were not included in the review.

Since the publication of the review by Konno and Kuno, two more multiplicative

programming problems have been discussed in the literature. The first problem adds a

convex function to the objective of problem (LMP2) to obtain the problem:

(CLMP)    min f(x) = g(x) + ((c^1,x) + d_1)((c^2,x) + d_2), s.t. x ∈ D,

where g : R^n → R is a twice differentiable convex function and c^i, d_i, i = 1, 2, and D are defined as in problem (LMP2). The second problem adds a convex function to problem (CMP) to obtain the problem:

(CCMP)    min f(x) = f_0(x) + ∏_{j=1}^p f_j(x), s.t. x ∈ X,

where f_0 : R^n → R is a convex function that satisfies f_0(x) > 0 for all x ∈ X and f_j, j = 1, 2, ..., p, and X are defined as in problem (CMP).








The emphasis of this review will be on optimization problems in which a product

of functions appears in the objective function. Optimization problems with objective

functions that are comprised of a summation of a function and the product of functions

are also included in the review. Methods proposed for solving these problems may be

adapted to solve a problem whose objective function is strictly a product of functions by

setting the added function to the null function. The functions that appear in the objective

function will be either convex or linear functions since to date these are the only

multiplicative programming problems to appear in the literature. In this review we will

not consider optimization problems in which a product of functions appears in the

constraint set.

Like the review of Konno and Kuno (1995), this literature review is organized

based on whether the problem data are linear or nonlinear and on the number of functions

that appear in the objective function. It is divided into the following four sections. Section

2.2 reviews the methods proposed to solve problems (LMP2), (GLMP), and (CLMP).

Section 2.3 reviews the methods to solve problem (LMP) that are extensions of methods

for problem (LMP2). Section 2.4 reviews the methods to solve problems (CMP),

(GCMP), and (CCMP). Section 2.5 reviews the methods to solve problem (LMP) as a

concave minimization problem.

The rationale for organizing the literature review in this way is as follows.

Historically, the first algorithms for solving multiplicative programming problems were

specifically proposed for solving problem (LMP2). Problems (GLMP) and (CLMP) are

grouped with problem (LMP2) since they were conceived as extensions of that problem.

Several of the algorithms proposed for solving problem (LMP2) can be extended to solve








the problem (LMP), since they do not depend upon having only two functions in the

product term of the objective function. Problems (LMP2), (LMP), (GLMP), and (CLMP)

contain linear functions and polyhedral feasible regions. Algorithms for solving these

problems are implemented with the aid of the simplex method, which is used to solve

linear programming subproblems. The problems (CMP), (GCMP), and (CCMP) contain

nonlinear data and must rely on other optimization methods to solve nonlinear convex

programming problems. The latter three problems are therefore placed in a separate

group. Problems (GCMP) and (CCMP) are included in the group with problem (CMP)

because only one article addresses each problem, and they were conceived as extensions

of problem (CMP). Finally, two articles appeared in the literature that proposed solving

problem (LMP) as a concave minimization problem using techniques that the authors had

previously developed.

Table 2.1 gives a summary of the multiplicative programming problems

considered in this literature review along with the assumptions placed on the feasible

region and the objective function of each problem.

2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP)

The methods for solving problem (LMP2), (GLMP), and (CLMP) are further

divided into four categories. In the first category are those methods that analyze problem

(LMP2) as a special case of quadratic programming. In the second category are

algorithms that analyze problem (LMP2) by searching the outcome set. In the third

category are the algorithms that solve an easier parametric programming problem rather

than directly solving problems (LMP2), (GLMP), and (CLMP). In the last category are









Table 2.1. Summary of Multiplicative Program Types and Assumptions on Problems

(LMP2)  Feasible region: D is a bounded polyhedron.
        Objective function: ((c^1,x) + d_1)((c^2,x) + d_2)
        Assumptions: (c^i,x) + d_i > 0, i = 1, 2, for all x ∈ D.

(GLMP)  Feasible region: D is a bounded polyhedron.
        Objective function: (c^0,x) + ((c^1,x) + d_1)((c^2,x) + d_2)
        Assumptions: (c^0,x) > 0 and (c^i,x) + d_i > 0, i = 1, 2, for all x ∈ D.

(CLMP)  Feasible region: D is a bounded polyhedron.
        Objective function: g(x) + ((c^1,x) + d_1)((c^2,x) + d_2)
        Assumptions: g : R^n → R is a twice differentiable convex function and (c^i,x) + d_i > 0, i = 1, 2, for all x ∈ D.

(LMP)   Feasible region: D is a bounded polyhedron.
        Objective function: ∏_{i=1}^p [(c^i,x) + d_i]
        Assumptions: (c^i,x) + d_i > 0, i = 1, 2, ..., p, for all x ∈ D.

(CMP)   Feasible region: X is a compact convex set.
        Objective function: ∏_{j=1}^p f_j(x)
        Assumptions: for each j = 1, 2, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

(GCMP)  Feasible region: X is a compact convex set.
        Objective function: f_0(x) + ∑_{j=1}^q f_{2j-1}(x) f_{2j}(x)
        Assumptions: for each j = 0, 1, ..., 2q, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.

(CCMP)  Feasible region: X is a compact convex set.
        Objective function: f_0(x) + ∏_{j=1}^p f_j(x)
        Assumptions: for each j = 0, 1, ..., p, f_j : R^n → R is a convex function that satisfies f_j(x) > 0 for all x ∈ X.








two algorithms that solve problem (LMP2) based on the method of polyhedral

annexation.

2.2.1. Methods Based on Quadratic Programming

Since the objective function of problem (LMP2) can be expressed as

f(x) = ((c^1,x) + d_1)((c^2,x) + d_2) = (1/2) x^T Q x + r^T x + d_1 d_2,

where r ∈ R^n and Q is a real symmetric n × n matrix, problem (LMP2) is a special class

of quadratic programming. Swarup (1966a and 1966b) was the first researcher to analyze

problem (LMP2) in this way, but he did not propose any exact solution algorithms. His

two articles are included in the literature review for completeness. Pardalos (1990) also

analyzed problem (LMP2) in this way, and he proposed an exact global solution

algorithm.

Swarup (1966a) showed that if both linear functions (c^i,x) + d_i, i = 1, 2, are positive over the feasible region D, the objective function f is quasiconcave over D. It is well known that, for any local minimizer of a quasiconcave function over a polytope, there exists an extreme point local minimizer over the polytope that has the same function value. Swarup proposed a simplex-based method for finding such a local optimal solution. The key to the algorithm is a test that determines if entering a given nonbasic variable into the current simplex basis will lower the objective function value. A simplex basis of a local optimal solution can be reached by beginning at any feasible basis and moving through a sequence of simplex tableaux by pivoting in qualifying nonbasic variables until none remain. Once a local optimal solution is found, the







algorithm stops. No information is available to either certify the global optimality of the

solution or to determine how to proceed to an improved solution.

In another work, Swarup (1966b) formulated the following parametric linear program by introducing an auxiliary variable ξ and moving one of the linear functions into the constraint set:

(MP1)    min F(x;ξ) = (c^1,x) + d_1,
         s.t. x ∈ D,
              (c^2,x) + d_2 = ξ, ξ ≥ 0.

Since (c^2,x) + d_2 appears in the constraint set, dual pricing information is available to determine the value of (c^1,x) + d_1 as ξ is set to achievable values of (c^2,x) + d_2 over D. Swarup derived a test that uses this information to determine when ξ is set to a level that corresponds to a local optimal solution. All local optimal solutions can then theoretically be found by parametrically solving problem (MP1) over all achievable values of ξ. A global optimal solution x* of problem (LMP2) can then be found by identifying a global solution (x*, ξ*) of problem (MP1).
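A crude way to see the parametric idea at work (not Swarup's pivoting test, which tracks critical values of ξ exactly) is to sweep ξ over a grid and solve one LP per value; toy data, scipy assumed:

import numpy as np
from scipy.optimize import linprog

# Toy instance of (LMP2) with D = {x >= 0 : Ax <= b}
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 9.0])
c1, d1 = np.array([1.0, 2.0]), 1.0
c2, d2 = np.array([2.0, 1.0]), 1.0

best = (np.inf, None)
for xi in np.linspace(1.0, 12.0, 300):       # candidate values of (c2,x) + d2
    res = linprog(c1, A_ub=A, b_ub=b, A_eq=c2[None, :], b_eq=[xi - d2])
    if res.success:
        val = (res.fun + d1) * xi            # f(x) at the LP optimum for this xi
        if val < best[0]:
            best = (val, res.x)
print(best)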

Pardalos (1990) observed that if c^1 and c^2 are linearly independent, then the Hessian matrix Q of the objective function of problem (LMP2) has one positive eigenvalue and one negative eigenvalue, and the remaining eigenvalues are equal to zero. By applying the spectral decomposition theorem of linear algebra, the objective function can be rewritten in terms of two variables. The problem can then be solved by examining the vertices of an orthogonal projection of the feasible region D into a two-dimensional






polytope in the space of the two variables used in the rewritten objective function.

Pardalos (1990) proposed an algorithm that enumerates all vertices of the two-

dimensional polytope until an optimal vertex is found. The algorithm may require an

exponential number of steps, but its average computational time complexity is bounded

by a polynomial.
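Pardalos's eigenvalue observation is easy to confirm numerically; a throwaway check with random data (here Q is the Hessian of x ↦ (c^1,x)(c^2,x)):

import numpy as np

rng = np.random.default_rng(0)
c1, c2 = rng.standard_normal(6), rng.standard_normal(6)   # independent with prob. 1
Q = np.outer(c1, c2) + np.outer(c2, c1)                    # Hessian of (c1.x)(c2.x)
eigs = np.linalg.eigvalsh(Q)
print(np.sum(eigs > 1e-9), np.sum(eigs < -1e-9))           # prints: 1 1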

2.2.2. Methods Based on Searching the Outcome Set

The objective function of problem (LMP2) can be expressed as the composite ψ(φ(x)) of two mappings, where, for each x ∈ R^n, φ(x) = ((c^1,x) + d_1, (c^2,x) + d_2), and, for each y ∈ R^2, ψ(y) = y_1 y_2. The mapping φ maps each point x ∈ D into a point y = (y_1, y_2), where y_1 := (c^1,x) + d_1 and y_2 := (c^2,x) + d_2. Since y_1 and y_2 are linear functions, φ is a linear transformation and hence the linear structure of D is preserved (Rockafellar 1970). The image of D under φ is then the compact, convex polyhedron

Y := {y ∈ R^2 | y_1 = (c^1,x) + d_1, y_2 = (c^2,x) + d_2 for some x ∈ D},

called the outcome polyhedron. A global optimal solution of problem (LMP2) can be found by finding a point of Y that globally minimizes the product y_1 y_2. Since the search is conducted in Y ⊆ R^2 rather than R^n, it may be possible to economize on the computational effort required to solve problem (LMP2).

Three articles, Aneja, Aggarwal, and Nair (1984), Falk and Palocsay (1994), and Thoai (1991), proposed algorithms for solving problem (LMP2) based on searching the outcome set using outer approximation techniques. Outer approximation is a global optimization technique that uses a decreasing sequence of simple sets to approximate the feasible region. The approximations are used in a series of optimization problems that are easier to solve than the original problem. These optimization problems are sequentially solved until a global optimal solution to the original problem is found. The technique has been very useful in solving global optimization problems in which the feasible region Z is a polytope and the global optimal solution is known to be an extreme point of Z. In this form of outer approximation, the algorithm begins by finding a simple polytope P_0 ⊇ Z with an easily defined inequality representation and an easily calculated set of vertices. A series of algorithmic iterations follows that builds a sequence of decreasing polytopes P_0 ⊇ P_1 ⊇ ... ⊇ Z in which one polytope is generated in each iteration. In an iteration k of the algorithm, the original objective function is evaluated at the extreme points of P_k to find an optimal solution v^k. If v^k is an extreme point of Z, then v^k is a global optimal solution to the original problem. Otherwise, a portion of P_k \ Z is cut off to form P_{k+1}. The point v^k is part of the region cut off; i.e., v^k is not included in the polytope P_{k+1}. The cut is made by adding a constraint, called a cutting plane, to the constraint set that defines P_k. The cutting plane introduces vertices of P_{k+1} that were not present in P_k, and these new vertices must be calculated.

Aneja, Aggarwal, and Nair (1984) proposed an algorithm that examines the solutions associated with the bicriterion programming problem:

(BCP)    VMIN (y_1 = (c^1,x) + d_1, y_2 = (c^2,x) + d_2),
         s.t. x ∈ D.

The intent of problem (BCP) is to simultaneously minimize the two criterion functions y_1 and y_2. Conflicts usually exist between the two criterion functions that prevent a single point of D from simultaneously minimizing both functions. The usual notion of an optimal solution used in single objective linear programming is replaced by the concept of efficient solutions when discussing the solutions of problem (BCP). A solution x̄ is an efficient solution of problem (BCP) if x̄ ∈ D and, whenever (c^i,x) + d_i ≤ (c^i,x̄) + d_i, i = 1, 2, for some x ∈ D, then (c^i,x) + d_i = (c^i,x̄) + d_i, i = 1, 2. The set of efficient points of D is mapped by φ into a set of points on the surface of Y called the efficient frontier.

Aneja, Aggarwal, and Nair (1984) showed that a global optimal solution of problem (LMP2) is attained at an efficient extreme point x* of D that is mapped by φ into an extreme point (y_1*, y_2*) of the efficient frontier of Y. Their algorithm searches the efficient frontier for an extreme point that minimizes y_1 y_2 by using a modified outer approximation technique. Initially the legs of a right-angle triangle form the first approximation of the efficient frontier. The "rise" and the "run" values of the slope of the hypotenuse are two positive scalar values. The functions y_1 = (c^1,x) + d_1 and y_2 = (c^2,x) + d_2 are multiplied by these values and then summed to form a single linear objective function. This objective function is then minimized over the feasible region D. It is well known that the minimizer x̂ of such a linear program is an efficient extreme point of D (Steuer 1986). The solution to the linear program yields another point (ŷ_1, ŷ_2) on the efficient frontier that is used to subdivide the initial triangle into two triangles. The algorithm is then repeated using each of the smaller triangles. The algorithm terminates when there are no more extreme points of the efficient frontier that need to be searched.
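The workhorse subproblem here is a weighted-sum scalarization: minimizing a positively weighted sum of the two criteria over D yields an efficient point (an efficient extreme point whenever the LP solver returns a basic optimal solution). A minimal sketch with assumed data:

import numpy as np
from scipy.optimize import linprog

# Toy bicriterion data: D = {x >= 0 : Ax <= b}
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 9.0])
c1, d1 = np.array([1.0, -1.0]), 4.0   # criterion y1, chosen to conflict with y2
c2, d2 = np.array([-1.0, 1.0]), 4.0   # criterion y2

w = np.array([2.0, 1.0])              # any strictly positive weights ("rise", "run")
res = linprog(w[0] * c1 + w[1] * c2, A_ub=A, b_ub=b)
x_hat = res.x                         # an efficient point of (BCP)
y_hat = (c1 @ x_hat + d1, c2 @ x_hat + d2)   # its image on the efficient frontier
print(x_hat, y_hat)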

In the algorithm of Aneja, Aggarwal, and Nair (1984), a new vertex must be

calculated for each triangle. This is easily done by solving two systems of two equations

in the unknowns y_1 and y_2. This special technique, however, cannot be easily extended

to handle cases where p > 2.

Falk and Palocsay (1994) also proposed a solution algorithm that searches among the extreme points of Y using a modified outer approximation technique. In the first phase of the algorithm, the two linear programs

l_1 = min {(c^1,x) + d_1 | x ∈ D}   and   l_2 = min {(c^2,x) + d_2 | x ∈ D}

are solved for optimal solutions x^1 and x^2, respectively. Two initial vertices y^1 and y^2 of Y are then

y^1 = ((c^1,x^1) + d_1, (c^2,x^1) + d_2)   and   y^2 = ((c^1,x^2) + d_1, (c^2,x^2) + d_2).

An initial polytope in outcome-space containing an optimal solution for the problem

(YP)    min y_1 y_2, s.t. y ∈ Y,

is bounded by y_1 ≥ l_1 and y_2 ≥ l_2 and an inequality a_1 y_1 + a_2 y_2 ≤ 1, where a_1 and a_2 are determined such that a_1 y_1 + a_2 y_2 = 1 passes through the point

ŷ = argmin {ψ(y^i) = y_1^i y_2^i, i = 1, 2}.

In each iteration of the algorithm, values for a_1 and a_2 are updated and a linear program of the form

(YLP)    min a_1 y_1 + a_2 y_2, s.t. y ∈ Y,

is solved to remove portions of the initial polytope from the search for an optimal solution for problem (YP). The new vertices generated at each iteration are easily calculated since the isovalue contours of problem (YLP) are linear. The algorithm terminates when the optimal value of problem (YLP) is one.

The algorithm proposed by Thoai (1991) for solving problem (LMP2) uses an outer approximation technique that begins by enclosing the outcome set Y in a rectangle P_0. In an iteration k of the algorithm, the extreme point (v_1, v_2) of the outer approximation that yields the lowest value of the product y_1 y_2 is found. A linear program is then used to determine if the extreme point (v_1, v_2) maps to a feasible point x̄ of D. If not, information is obtained from the linear program to generate a cutting plane constraint that slices off the extreme point (v_1, v_2) from the polytope P_k. The new vertices generated by the cut are then calculated using a conventional approach (see Horst, Pardalos, and Thoai 1995 or Horst and Tuy 1993). Since the method of determining these new vertices is not dependent on the fact that the dimension of the outcome set is two, Thoai's algorithm can be extended to handle cases where p > 2.

In the algorithms of Aneja, Aggarwal, and Nair (1984) and Thoai (1991), the only

variations in the linear programs used in successive iterations involve changes in

objective function coefficients. The authors gain some computational efficiency by

restarting the simplex method at the optimal solution of the previous iteration. Only a few

simplex pivots are then generally needed to produce a new optimal solution.






2.2.3. Methods Based on Solving a Parametric Master Problem

The difficulty in solving problem (LMP2) is caused by the product form of the objective function. Konno and Kuno (1992) added a parameter ξ and formed the following problem that they called the master problem:

(MP2)    min F(x;ξ) = (1/ξ)((c^1,x) + d_1) + ξ((c^2,x) + d_2),
         s.t. x ∈ D, ξ > 0.

Notice that for a fixed value ξ' of ξ, problem (MP2) is a linear programming problem. To solve problem (MP2), Konno and Kuno proposed using a parametric objective function simplex method to find the critical values of ξ at which new bases become optimal. The values of the objective function F are then evaluated at these bases. A global optimal solution (x*, ξ*) of problem (MP2) is found by choosing the basis that minimizes F over these values. Konno and Kuno (1992) showed that if (x*, ξ*) is an optimal solution of problem (MP2), then x* is a global optimal solution of problem (LMP2).
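The equivalence rests on the arithmetic-geometric mean inequality; a short derivation (our gloss, using the form of F reconstructed above, with p_i(x) := (c^i,x) + d_i > 0):

\[
\min_{\xi>0}\left[\frac{1}{\xi}\,p_1(x) + \xi\,p_2(x)\right]
= 2\sqrt{p_1(x)\,p_2(x)} = 2\sqrt{f(x)},
\qquad \text{attained at } \xi = \sqrt{p_1(x)/p_2(x)}.
\]

Minimizing F over D × (0, ∞) therefore minimizes 2√f over D, so the x-component of a global optimal solution of (MP2) globally solves (LMP2).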

Konno and Kuno tested this algorithm on randomly generated problems (LMP2)

with nonnegative problem data that ranged in size from (m, n) = (30, 50) to (220, 200).

Their computational experiments showed that the amount of computational time needed

to solve problem (LMP2) is not much different from that required to solve linear

programs of the same size.

In Konno and Kuno (1995) the authors slightly simplified the above parametric

method by redefining the auxiliary parameter so that convex combinations of the two

linear functions are used in the objective function of problem (MP2). This modification








makes it easier to find critical parameter values, since the interval [0, 1] over which the

auxiliary parameter ranges is bounded. The rest of the method remained the same.

Although Konno and Kuno (1992) did not explicitly say it, their algorithm can be viewed as searching the efficient extreme points of problem (BCP) for one that is a global optimal solution of problem (LMP2). Notice that for a sufficiently small value ξ', an extreme point optimal solution (x', ξ') to problem (MP2) coincides with an optimal solution x' of the linear program min {(c^1,x) + d_1 | x ∈ D}. Similarly, for a sufficiently large value ξ'', an extreme point optimal solution (x'', ξ'') coincides with an optimal solution x'' of the linear program min {(c^2,x) + d_2 | x ∈ D}. For any fixed value ξ > 0, the objective function F(x;ξ) is a composite objective function formed by multiplying the two linear functions by positive values and summing the result. It is well known that any extreme point minimizer of such a composite objective function over the feasible region D is an efficient extreme point of the problem (BCP) (Steuer 1986). The efficient extreme points of problem (BCP) are found by solving linear programs for parameter values between ξ' and ξ''. As Aneja, Aggarwal, and Nair (1984) have shown, the global solution lies at an efficient extreme point of D in problem (BCP).

A disadvantage of the algorithm of Konno and Kuno is that it may require many pivots to solve problem (MP2) for all possible parameter values. This will especially be true if there is a great conflict between the two linear functions of the objective function. If, for example, c^2 = -c^1, then every extreme point of D is an efficient extreme point of problem (BCP). Since the number of extreme points of the polytope D grows exponentially with the size of D, the number of optimal solutions to problem (MP2) over the entire range of parameter values grows exponentially with the size of D and is not bounded by a polynomial. Konno and Kuno in fact observed that the computational time increased as the number of local minima increased. An additional disadvantage of the Konno and Kuno algorithm is that many of the pivots performed will be unnecessary when they are to bases that do not improve on a previously found solution.

In another paper, Konno and Kuno (1990) added a convex function to the

objective function of problem (LMP2) to obtain the problem (CLMP). With this addition,

the objective function may no longer be quasiconcave and therefore, the global minimum

may not necessarily be attained at an extreme point of the feasible region D.

To solve problem (CLMP), Konno and Kuno (1990) proposed an algorithm that

solves a parametric master problem which, for a fixed parameter value, is a nonlinear

convex programming problem. The algorithm involves solving this master problem a

finite number of times, once for each of a finite number of prechosen values for the

parameter. A troublesome aspect of the algorithm is that it is difficult to determine the

proper parameter values to choose. The authors suggested choosing values for the

parameter that are equally spaced in the interval of possible parameter values and solving

the resulting master problems to determine a neighborhood containing a globally optimal

solution to problem (CLMP). A local search is then done in that neighborhood for a

globally optimal solution using the Karush-Kuhn-Tucker conditions. Care must be taken, however, to make the spacing between the points small enough that a global optimal solution is not missed.






The difficulty that Konno and Kuno (1990) encountered in their method in

determining parameter values can be eliminated if we assume that the convex function g

in the objective function of problem (CLMP) is a linear function. Problem (GLMP) is

obtained by making this replacement. Konno, Yajima, and Matsui (1991) considered

problem (GLMP), but they assumed that d, and d, are zero. To solve problem (GLMP),

Konno, Yajima, and Matsui formulated the master problem

(MP3)    min F(x;ξ) = (c^0,x) + ξ(c^2,x),
         s.t. x ∈ D,
              (c^1,x) = ξ.

Notice that the parameter appears in both the objective function and in a right-hand

side of a constraint.

Konno, Yajima, and Matsui (1991) showed that x* is a global solution of problem (GLMP) if (x*, ξ*) is an optimal solution of problem (MP3). Schaible and Sodini (1995)

used problem (MP3) to show that a global optimal solution of problem (GLMP) lies on

an edge of D.

Konno, Yajima, and Matsui (1991) proposed a parametric simplex algorithm that

includes a right-hand side analysis and an objective function analysis to determine

intervals of parameter values for which bases remain both feasible and optimal. The parametric analysis sweeps through parameter values from ξ_min = min {(c^1,x) | x ∈ D} to ξ_max = max {(c^1,x) | x ∈ D}. The objective function F is then minimized over each of the intervals.






Konno, Yajima, and Matsui (1991) tested their algorithm on randomly generated

problems of up to 350 constraints and 300 variables. They found that the problems can be

solved in much the same computational time as that of solving linear programs of equal

size.

The algorithm of Konno, Yajima, and Matsui (1991) suffers from the same

disadvantages as the algorithm of Konno and Kuno (1992). In particular, its efficiency

depends on the number of pivots performed to solve problem (MP3) for all possible

parameter values. Also many of the pivots performed will be unnecessary when they yield

bases that do not improve on a previously found solution.

Schaible and Sodini (1995) improved the algorithm of Konno, Yajima, and Matsui (1991). From a given simplex tableau for problem (MP3), Schaible and Sodini used parametric analysis to derive a formula that calculates the value of the objective function F as the constraint (c^1,x) = ξ is set to increasing values of ξ. As ξ increases, parametric right-hand-side analysis calculates new values for the basic variables. Schaible and Sodini then derived some optimality conditions that detect when the parameter ξ is set to a value ξ* such that, from an optimal solution (x*, ξ*) of problem (MP3), one obtains a local minimum x* of problem (GLMP). By applying these optimality conditions, Schaible and Sodini were able to develop a simplex-based algorithm that solves problem (MP3) in a finite number of primal and/or dual simplex iterations.

The algorithm proposed by Schaible and Sodini (1995) has three advantages over

the algorithm of Konno, Yajima, and Matsui (1991): (1) It may terminate before the maximum possible parameter value ξ_max has been reached. (2) It is more efficient in that








it may skip over local optimal solutions that do not improve the objective function value.

(3) It can be used even when the feasible region is unbounded, and it can detect when

problem (GLMP) is unbounded from below.

Muu and Tam (1992) also considered problem (CLMP), but in their work, the

feasible region D is relaxed to a compact convex set. They seem to be the only

researchers to have considered this generalization of problem (CLMP). The authors

however, tested their algorithm using a polytope for the feasible region.

Muu and Tam (1992) formulated the parametric master problem

(MP3')    min F(x;ξ) = g(x) + ξ((c^2,x) + d_2),
          s.t. x ∈ D,
               (c^1,x) + d_1 = ξ, ξ ≥ 0.

They proposed a branch and bound algorithm to solve problem (MP3'). Branch and

bound is a technique commonly used by algorithms in global optimization. Branching

refers to the successive partitioning of the feasible region and bounding refers to the

computation of lower and upper bounds on the global optimum over the partitions.

Partitions of the feasible region that produce a lower bound on the objective function that

exceeds the best upper bound found so far by the algorithm are eliminated from further

consideration. Such partitions are said to be fathomed. A branch and bound algorithm

terminates when all of the partitions have been fathomed.

In the algorithm of Muu and Tam (1992), partitions of the feasible region are constructed by restricting the value of (c^1,x) + d_1 to values within an interval. The algorithm begins by finding an interval I_0 := [ξ_1, ξ_2] of achievable values of (c^1,x) + d_1 by solving the two convex programs ξ_1 := min {(c^1,x) + d_1 | x ∈ D} and ξ_2 := max {(c^1,x) + d_1 | x ∈ D}. Optimal solutions u^0 and v^0 are then obtained for the two convex programs

β(ξ_i) := min {ξ_i[(c^2,x) + d_2] + g(x) | x ∈ D, (c^1,x) + d_1 ∈ I_0}, i = 1, 2.

A lower bound β(I_0) over the interval I_0 of the objective function F of problem (MP3') is found by selecting β(I_0) := min {β(ξ_1), β(ξ_2)}. An upper bound α_0 on F is obtained by selecting α_0 := min {f(u^0), f(v^0)}. The interval I_0 is next bisected and the procedure repeated using the two subintervals. A subinterval that produces a lower bound that exceeds the current upper bound is eliminated from further consideration; i.e., that subinterval is considered to be fathomed. The procedure continues bisecting intervals I_k, generating a sequence of solutions {x^k} that converges to a limit point x* that is a global optimal solution. Computational experiments on problems up to (m, n) = (30, 200) showed that the algorithm is very efficient when both vectors c and d are positive.
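A stripped-down version of this interval branch and bound is easy to prototype. The sketch below (toy linear data, g ≡ 0, and a simpler bounding rule than Muu and Tam's: on a slice where (c^1,x) + d_1 ∈ [l, u], the product is at least l times the minimum of the second factor) is an assumption-laden illustration, not their exact method:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [3.0, 1.0]])      # D = {x >= 0 : Ax <= b}
b = np.array([4.0, 6.0])
c1, d1 = np.array([1.0, 2.0]), 1.0
c2, d2 = np.array([2.0, 1.0]), 1.0
f = lambda x: (c1 @ x + d1) * (c2 @ x + d2)

def min_factor2(l, u):
    """min (c2,x) + d2 over D intersected with l <= (c1,x) + d1 <= u."""
    A_ub = np.vstack([A, c1, -c1])
    b_ub = np.concatenate([b, [u - d1, d1 - l]])
    return linprog(c2, A_ub=A_ub, b_ub=b_ub)

lo = linprog(c1, A_ub=A, b_ub=b).fun + d1    # smallest achievable first factor
hi = -linprog(-c1, A_ub=A, b_ub=b).fun + d1  # largest achievable first factor
best_val, best_x, stack = np.inf, None, [(lo, hi)]
while stack:
    l, u = stack.pop()
    res = min_factor2(l, u)
    if not res.success:
        continue                             # empty slice of D
    if f(res.x) < best_val:                  # update the incumbent
        best_val, best_x = f(res.x), res.x
    if l * (res.fun + d2) >= best_val - 1e-9:
        continue                             # slice fathomed by its lower bound
    if u - l > 1e-6:
        m = 0.5 * (l + u)
        stack += [(l, m), (m, u)]            # bisect and recurse
print(best_val, best_x)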

2.2.4. Methods Based on Polyhedral Annexation

A limitation of conventional optimization methods is that they can become

trapped at a local minimum, or even a stationary point, if they are applied to a global

optimization problem, e.g., see the algorithms proposed by Swarup (1966a, 1966b). The

central problem of a global optimization method then is to overcome this limitation by

providing a certification test for global optimality, and if a point is not globally optimal,

determining how to move to a better solution. Tuy (1991) called this the subproblem of







"transcending the incumbent" where the incumbent is the best feasible solution found so

far by an algorithm.

Let f be the objective function for problem (LMP2), and let x̄ be a vertex of D that represents the incumbent solution for this problem. Then, from Tuy (1991), to transcend the incumbent, one must find a point x ∈ D such that f(x) < f(x̄) or else establish that no such point exists, i.e., that x̄ is a global optimal solution for problem (LMP2).

Let G := {x ∈ S | f(x) ≥ f(x̄)}, where S is a convex set containing D. The problem of transcending the incumbent can then be restated as the following problem:

(GCP)    Check if D ⊆ G and, if not, find a point x ∈ D \ G.

Problem (GCP) is known as the Geometric Complementarity Problem.

Tuy (1990) developed the method of polyhedral annexation to solve problem (GCP). In polyhedral annexation a sequence of polytopes P_1 ⊆ P_2 ⊆ ... ⊆ P_k ⊆ ... is built by adding a vertex to the polytope P_{k-1} of the previous iteration in such a way that a vertex of D is annexed into the new polytope P_k. The sequence P_1 ∩ D, P_2 ∩ D, ... forms an expanding inner approximation of D. When a polytope P_h ⊇ D is found, all of the extreme points of D have been searched and the algorithm terminates. Associated with the sequence of polytopes P_1 ⊆ P_2 ⊆ ... ⊆ P_k ⊆ ... is the sequence of their polars P_1* ⊇ P_2* ⊇ ... ⊇ P_k* ⊇ ..., where the polar E* of a convex set E in R^n is defined as

E* := {y ∈ R^n | (y,x) ≤ 1 for all x ∈ E}.

A dual correspondence exists between the facets of a polytope P_k and the vertices of its polar P_k*. The subproblem of determining the inequality representation of P_k after a new vertex has been added can then be solved by solving the easier problem of computing the vertices of P_k*. The termination condition P_h ⊇ D has the corresponding condition P_h* ⊆ D*. For a more detailed description of polyhedral annexation, see the chapters on inner approximation in Horst, Pardalos, and Thoai (1995) or in Horst and Tuy (1993).

Tuy and Tam (1992) proposed two algorithms that are derived using the polyhedral annexation method with a dualization and dimension reduction technique developed by Tuy (1991). Dualization refers to solving the original problem by solving the dual problem of generating a sequence of polars until a polar P_h* ⊆ D* is found. The key to the dimension reduction technique is the introduction of a cone into problem (GCP). Tuy and Tam (1992) assumed that c^1 and c^2 are linearly independent vectors and then formed the cone K := {x ∈ R^n | (c^i,x) ≥ 0, i = 1, 2}. Cone K is of interest since if x̄ ∈ D is an incumbent solution, then, for any x ∈ (x̄ + K), f(x) ≥ f(x̄). In other words, cone K identifies points in R^n that can do no better than the incumbent solution x̄. Computational effort might be saved using cone K since a part of the feasible region D can be eliminated from further consideration and the search narrowed to the remaining portion of D.
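Why K has this property is immediate from the positivity of the factors; a one-line check (our gloss, using the reconstructed sign convention for K):

\[
f(\bar{x} + k)
= \big[(c^1,\bar{x}) + d_1 + (c^1,k)\big]\,\big[(c^2,\bar{x}) + d_2 + (c^2,k)\big]
\;\ge\; \big[(c^1,\bar{x}) + d_1\big]\,\big[(c^2,\bar{x}) + d_2\big] = f(\bar{x}),
\]

since (c^i,k) ≥ 0 for every k ∈ K and both bracketed factors are positive on D.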

The first algorithm proposed by Tuy and Tam (1992) solves problem (LMP2) by solving problem (GCP) through the dualization process of generating a sequence of polars until a polar P_h* ⊆ D* is found. Tuy and Tam (1992) showed that the polar K* of cone K is explicitly given as K* = {y ∈ R^n | y = -t_1 c^1 - t_2 c^2 for some t_1, t_2 ≥ 0}. Any vertex ȳ in a polar P_k* lies in the polar cone K*, and the multipliers t̄_1 and t̄_2 used to express ȳ are unique, since c^1 and c^2 are linearly independent vectors. Polar cone K* is used to solve the dual problem by building a collapsing sequence of polars P_1* ⊇ P_2* ⊇ ... ⊇ P_h* ⊇ ..., with each polar being an improved approximation of D*. The search is conducted in the two-dimensional space generated by c^1 and c^2 rather than in the original n-dimensional space. Solving the linear program

(LP(t̄))    max {-t̄_1(c^1,x) - t̄_2(c^2,x) | x ∈ D},

where t̄_1 and t̄_2 are the multipliers used to express some vertex ȳ = -t̄_1 c^1 - t̄_2 c^2 of P_k*, tests for the termination condition P_k* ⊆ D*.

The second algorithm proposed by Tuy and Tam (1992) is motivated by the observation that for a fixed value of $t = (t_1, t_2)$, problem (LP($t$)) is equivalent to the linear program

(LP($\alpha$)) $\max \{\langle -c^1 - \alpha(c^2 - c^1), x\rangle \mid x \in D\}$,

where $\alpha = t_2/(t_1 + t_2) \in [0, 1]$. The first algorithm thus reduces to solving a sequence of linear programs (LP($\alpha$)) for different values of the parameter $\alpha$. The second algorithm proposed by Tuy and Tam (1992) is to parametrically solve problem (LP($\alpha$)) for all of the critical values of $\alpha$ at which new bases become optimal. The objective function $f$ of problem (LMP2) is evaluated at each basis and a global optimal solution chosen from those bases. The second algorithm of Tuy and Tam (1992) is thus essentially the same as the parametric approach to problem (MP2) used by Konno and Kuno (1992).
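The sketch below illustrates this parametric idea in code; it is a simplified stand-in that scans a uniform grid over $\alpha$ rather than tracking the exact critical values at which the optimal basis changes, and the data names (c1, c2, d1, d2, A, b) are assumptions for the example, not the authors' code.

```python
# Minimal sketch of the parametric approach, assuming (LMP2) has the form
# min (<c1,x>+d1)(<c2,x>+d2) s.t. Ax <= b, x >= 0 (scipy's default bounds).
import numpy as np
from scipy.optimize import linprog

def parametric_lmp2(c1, c2, d1, d2, A, b, grid=101):
    best_x, best_val = None, np.inf
    for alpha in np.linspace(0.0, 1.0, grid):
        # (LP(alpha)) maximizes <-c1 - alpha*(c2 - c1), x>; equivalently,
        # minimize <c1 + alpha*(c2 - c1), x> over the feasible region.
        res = linprog(c1 + alpha * (c2 - c1), A_ub=A, b_ub=b)
        if res.success:
            x = res.x
            val = (c1 @ x + d1) * (c2 @ x + d2)  # evaluate f at this basis
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val
```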








Tuy and Tam (1992) ran computational experiments using both the first

polyhedral annexation algorithm and the second parametric algorithm. Their results

showed that for solving problem (LMP2), the parametric algorithm performed better than

the polyhedral annexation algorithm. The polyhedral annexation algorithm is not as

efficient because more simplex pivots were required than for the parametric algorithm.

Tuy and Tam (1992) proposed an improved variant of the polyhedral annexation

algorithm that reduces the number of pivots and the number of objective function

evaluations. The authors observed that the improved algorithm may potentially be more

useful for a problem with an objective function that is difficult to evaluate. Computational experiments run using the parametric algorithm on problems with positive data of sizes up to (m, n) = (30, 200) produced results in line with those reported in Konno and Kuno (1992).

2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP)
when p ≥ 3

The polyhedral annexation method of Tuy and Tam (1992) and the outcome-space

algorithms of Thoai (1991) and Falk and Palocsay (1994) can be extended to the more

general problem (LMP) where $p \ge 3$. Although the algorithms remain unchanged, the

subproblem of determining the new vertices becomes more difficult as the number of

function terms in the objective function increases.

2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)

Relatively little work has been done in designing exact global solution algorithms

that address problems (CMP), (GCMP), and (CCMP). The algorithms that have been








proposed fall into two categories: (1) methods based on solving a reformulated problem

and (2) a method based on outer approximation.

2.4.1. Methods Based on Solving a Reformulated Problem

Konno and Kuno (1992) introduced problem (CMP) where p = 2 and formulated

a master problem by introducing a parameter into the original problem to separate the product of the two functions in the objective into a summation. This technique of embedding the

original problem into a problem in a higher dimensional space is similar to the one used

by the authors in the same paper to solve problem (LMP2). At the time, Konno and Kuno

were not able to give an algorithm for solving the master problem. In Kuno and Konno

(1991) the authors proposed a branch and bound algorithm along with an underestimation

function to solve it. Computational results for problems of up to (m, n) = (200, 180)

indicated that the algorithm is efficient when the objective function is the product of a

linear function and a quadratic function and the feasible region is a polytope.

Kuno, Yajima, and Konno (1993) extended the parameterization technique of Kuno and Konno (1991) for problem (CMP) to handle cases where $p \ge 2$. They showed that a global optimal solution to problem (CMP) can be obtained by solving the equivalent problem

(MP4) $\min_{\xi \in \Xi} \min_{x \in X} G(x; \xi) = \sum_{j=1}^{p} \xi_j f_j(x)$,

where $\Xi := \{\xi \in R^p \mid \prod_{j=1}^{p} \xi_j \ge 1,\ \xi \ge 0\}$. For a fixed $\xi \in \Xi$, let $x^*(\xi)$ denote an optimal solution of $\min_{x \in X} G(x; \xi) = \sum_{j=1}^{p} \xi_j f_j(x)$. Let $h : \Xi \to R$ be defined by $h(\xi) := G(x^*(\xi); \xi)$ for any $\xi \in \Xi$. Solving problem (MP4) then reduces to solving the problem in $R^p$ given by

(MP4') $\min h(\xi)$, s.t. $\xi \in \Xi$.

Kuno, Yajima, and Konno (1993) showed that $h$ is a concave function over $\Xi$ and therefore a global optimal solution of problem (MP4') exists on the boundary of $\Xi$. They

proposed an outer approximation method for solving problem (MP4') and tested their

algorithm against two subclasses of problem (CMP): (1) problem (LMP) and (2)

problems similar to those tested in Kuno and Konno (1991) in which the objective function

is the product of a linear and a quadratic function and the constraints are linear

inequalities. Computational experiments showed that the total computational time is

dominated by that needed for solving the convex minimization master problems for each

parameter value. The results also showed that the number of cuts and vertices generated

increases rapidly as p increased from 2 to 5. The authors asserted that this was due to

inefficiencies in computing new vertices, especially when p exceeds 5. However, if p is

held constant, these numbers increased very slowly as the number of constraints and

variables increased. The authors concluded that their algorithm is reasonably efficient

when p is less than 4.

Jaumard, Meyer, and Tuy (1997) added a convex function to the objective

function of problem (CMP) to form problem (CCMP). The authors showed that problem

(CCMP) can be reduced to a quasiconcave minimization problem in RP that is a

generalization of problem (MP4') used by Kuno, Yajima, and Konno (1993). In the

special case where $f_0 = 0$ in problem (CCMP), the reduced quasiconcave minimization

problem in Jaumard, Meyer, and Tuy (1997) can be shown to be equivalent to the one






used by Kuno, Yajima, and Konno (1993). Jaumard, Meyer, and Tuy (1997) find a global

solution of problem (CCMP) by finding an optimal solution to the quasiconcave

minimization problem in RP using a conical branch and bound method. They ran

computational experiments using their algorithm on test problems similar to those used

by Kuno, Yajima, and Konno (1993) and Thoai (1991). The authors report that their

results are very sensitive to the magnitude of p and not as sensitive to the size (m, n) of

the constraint matrix.

Sniedovich and Findlay (1995) analyzed problem (CMP) from the perspective of

c-programming but did not give a complete algorithm for solving it. C-programming is a

technique developed by Sniedovich (1984) for solving an optimization problem of the

form

(CP) $q := \min_{x \in X} \psi(\varphi(x))$,

where $X$ is some nonempty set, $\varphi$ is a mapping on $X$ with values in $R^p$, and $\psi$ is a differentiable and pseudo-concave function on some open set containing the set $\varphi(X) := \{\varphi(x) \mid x \in X\}$. The heart of the technique is to linearize the function $\psi$ and transform the original optimization problem into the parametric programming problem

(MP5) $q(\xi) := \min_{x \in X} \langle \xi, \varphi(x)\rangle$, $\xi \in R^p$.

Sniedovich showed that if $x^*$ is a globally optimal solution for problem (CP), then $x^*$ is an optimal solution for problem (MP5) when $\xi = \xi^* = \nabla\psi(\varphi(x^*))$, where $\nabla\psi(\cdot)$ is the gradient of $\psi$.






For problem (CMP), the objective function can be expressed as the composite $\psi(\varphi)$ of two functions, where, for each $x \in R^n$, $\varphi(x) = (f_1(x), f_2(x), \ldots, f_p(x))$, and, for each $y \in R^p$, $\psi(y) = \prod_{j=1}^{p} y_j$. Sniedovich and Findlay claimed without proof that $\psi$ is a differentiable and pseudo-concave function on the open convex set $\{y \in R^p \mid y > 0\}$. Since problem (CMP) satisfies the requirements of c-programming, it can be solved by solving the parametric problem

(MP5') $q(\xi) := \min_{x \in X} \sum_{i=1}^{p} \xi_i f_i(x)$, $\xi \in \Xi$,

where $\Xi$ is any subset of $R^p$ such that $\nabla\psi(\varphi(x)) \in \Xi$ for all $x \in X$. In problem (MP5'), the parameter $\xi$ appears only in the objective function, whereas in problem (MP4) the parameter $\xi$ appears in both the objective function and in the constraints. Standard Lagrangian methods can be employed to solve problem (MP5') for all $\xi \in \Xi$, while specialized methods are required to optimize the objective function of problem (MP4) with respect to the original variable $x$ and the parameter $\xi$.
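To make the parameter concrete (an illustrative remark added here, using only the definitions above): for $\psi(y) = \prod_{j=1}^{p} y_j$, the gradient has components

$\nabla\psi(y)_i = \prod_{j \ne i} y_j$, $i = 1, 2, \ldots, p$,

so the c-programming parameter associated with a globally optimal $x^*$ is $\xi^*_i = \prod_{j \ne i} f_j(x^*)$.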

Kuno and Konno (1991) and Konno, Kuno, and Yajima (1994) considered

problem (GCMP) for cases where q = 1 and q > 1 respectively. For q = 1, the master

problem and solution algorithm are similar to the one used by Kuno and Konno (1991) to

solve problem (CMP) when p = 2. Computational experiments showed that the

underestimation function does not perform as well as it does for problem (CMP).

For $q \ge 1$, the master problem in Konno, Kuno, and Yajima (1994) is formulated

by introducing a pair of parameters for each pair of convex functions that appear in the






objective function of problem (GCMP). The master problem is a convex minimization

problem in the space $R^{n+2q}$ and is solved using an outer approximation algorithm.

Computational experiments conducted using a polyhedron for the feasible region showed

that for q = 1, this algorithm required less than half the computational time required by

the branch and bound with underestimation function algorithm proposed in Konno and

Kuno (1992) to solve problem (CMP).

Tuy (1992) gave problem (CMP) as an example of an optimization problem that

can be formulated as a Geometric Complementary Problem and solved it using a

parametric programming problem. The parametric programming problem is a convex

minimization problem in which a positive parameter vector is used to build a composite

objective function from the convex functions in the objective function of problem (CMP).

A complete algorithm that includes solving the parametric program was not given.

2.4.2. A Method Based on Outer Approximation

Thoai (1991) extended the algorithm based on the outer approximation technique

that he proposed for solving problem (LMP2) to address the solution of problem (CMP)

when $p = 2$. The main idea is to build a sequence of decreasing polytopes $P_0 \supseteq P_1 \supseteq \cdots \supseteq X$ of the convex feasible region $X$ and a sequence of decreasing polytopes $S_0 \supseteq S_1 \supseteq \cdots \supseteq Y$ of the outcome set $Y$, where

$Y = \{y \in R^2 \mid y_1 = f_1(x),\ y_2 = f_2(x) \text{ for some } x \in X\}$.

Problem (CMP) is then solved by applying a modified version of the algorithm for problem (LMP2). In any iteration $k$, up to two cuts are introduced, one for $P_k$ and one for $S_k$, to obtain tighter approximating sets.








Since the algorithm does not depend on the actual value of p, it can be extended

to handle cases where $p \ge 3$.

2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem

Konno and Kuno (1992) showed that the objective function of problem (LMP) is

not a convex function over the feasible set D. Therefore, problem (LMP) is not a convex

programming problem. However, since the natural logarithm function In is a strictly

increasing concave function on (0, o), it is easy to show that the function


F(x)=ln[ (c',x)+d, + = ln Ic',x)+d,]


defined for all x D is a concave function. In addition, the optimal solution set of the

concave minimization problem

(CMIN) min F(x s.t.xe D,

is identical to the optimal solution set of problem (LMP). Therefore, any concave

minimization method may be applied to problem (LMP) if the objective function is

replaced by its logarithmic equivalent.

Using the above logarithmic transformation, Tuy (1991) showed that problem

(LMP) could be solved in a reduced dimension space using polyhedral annexation and the

dualization and dimension reduction technique. The algorithm presented in Tuy and Tam

(1992) is essentially an improvement of the one in Tuy (1991).

Ryoo and Sahinidis (1996) also converted problem (LMP) into the problem

(CMIN). To solve problem (CMIN), they employed a branch and bound algorithm that

incorporates the use of valid inequalities to accelerate convergence. Branch and bound








algorithms may slowly converge to an optimal solution when the gap between the initial

upper and lower bounds is large. A valid inequality is an inequality constraint that does not

exclude any solution that yields an objective function value lower than the current best

upper bound. By introducing valid inequalities into the constraint set, inferior parts of the

feasible region may be removed from further consideration without eliminating possible

global optimal solutions. A second use of valid inequalities is to reduce the range of

values that the variables in the problem can assume. Ryoo and Sahinidis referred to these

two uses of valid inequalities as range reduction mechanisms. The performance of the

bounding procedure in the branch and bound algorithm is improved by using these range

reduction mechanisms, since smaller-sized partitions of the feasible region are used and

the variables are restricted to reduced ranges of values.
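As an illustration of the second range reduction mechanism, the sketch below (a hypothetical example, not BARON itself) appends a valid inequality $L(x) \le UB$ built from an assumed linear underestimator $L(x) = \langle l, x\rangle + l_0$ of the objective $F$ over the current partition, and then tightens each variable's range with a pair of linear programs:

```python
# Hypothetical range reduction sketch: tighten per-variable bounds over
# {Ax <= b, x >= 0, L(x) <= UB}, where L(x) = l@x + l0 underestimates F
# on the current partition and UB is the incumbent upper bound.
import numpy as np
from scipy.optimize import linprog

def tighten_bounds(A, b, l, l0, UB, n):
    A_cut = np.vstack([A, l])          # append the valid inequality l@x <= UB - l0
    b_cut = np.append(b, UB - l0)
    lo, hi = np.zeros(n), np.full(n, np.inf)
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        res = linprog(e, A_ub=A_cut, b_ub=b_cut)    # minimize x_i
        if res.success:
            lo[i] = res.fun
        res = linprog(-e, A_ub=A_cut, b_ub=b_cut)   # maximize x_i
        if res.success:
            hi[i] = -res.fun
    return lo, hi
```

Any solution excluded by the appended constraint has objective value above the incumbent, so no candidate global optimum is lost while the variable ranges shrink.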

Ryoo and Sahinidis implemented the branch and bound algorithm along with the

range reduction mechanisms in a computer program called BARON (Branch-And-Reduce

Optimization Navigator). To more easily calculate lower bounds on the objective function

F of problem (CMIN) over a partition of the feasible region, the authors replaced F by a

linear underestimating function. Lower bounds were then calculated by solving linear

programs. The authors tested randomly-generated problems in sizes from (m, n) = (50,

50) to (200, 200), with p ranging from 2 to 5. They reported that only a small fraction of

the total CPU time is consumed in the range reduction mechanisms and that there seemed

to be a low-order polynomial relationship between the CPU time and the value of p.














CHAPTER 3
CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND
AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE


3.1. Introduction

An important but little-researched area that deserves more attention is the development of heuristic algorithms for finding a good solution to multiplicative programming problems. In some applications, a good, though not necessarily globally optimal, solution may adequately meet the requirements of a user (Konno and Inori

1989). In these cases, since multiplicative programming problems are known to be NP-

hard, the expenditure of computational effort required to globally solve them may not be

needed.

This chapter has two purposes. The first is to present an analysis of problem (Px)

when problem (Px) is a concave multiplicative programming problem. The second

purpose is to propose a heuristic algorithm designed for the case where problem (Px) is a

linear multiplicative programming problem.

The analysis of the concave multiplicative programming problem is presented in

Section 3.2. This analysis shows a new way to write a concave multiplicative

programming problem as a concave minimization problem and some theoretical

consequences of this. It also shows some relationships between concave multiplicative

programs and certain multiple-objective mathematical programs. In Section 3.3, by using

some of the results of Section 3.2, we present and explain the workings of an efficient-point search heuristic algorithm that we have developed for the linear multiplicative

programming problem. Section 3.4 reports and analyzes some statistics summarizing the

computational results that we obtained by coding the heuristic algorithm and applying it

to 260 randomly-generated linear multiplicative programs. In Section 3.4 we also report

the results of applying the heuristic algorithm to a multiplicative programming problem

formed from a decision situation using real data. In Section 3.5, we discuss the major

results of this chapter.

3.2. Analysis

Assume in problem (Px) that $X$ is a convex set and that, for each $j = 1, 2, \ldots, p$, $f_j : X \to R$ is a concave function; i.e., assume that problem (Px) is a concave multiplicative programming problem. Consider the function $k : X \to R$ defined for each $x \in X$ by

$k(x) = \log g(x)$,

where $g(x) = \prod_{j=1}^{p} f_j(x)$ denotes the objective function of problem (Px). Then, it is a simple matter to show that $k : X \to R$ is a concave function and that the optimal solution set to the concave minimization problem

$\min k(x)$, s.t. $x \in X$,  (3.1)

is identical to the optimal solution set of problem (Px). Thus, any concave multiplicative

programming problem of the form of problem (Px), if rewritten in the form (3.1), can be

solved by applying any appropriate general-purpose concave minimization algorithm to

(3.1). For discussions and reviews of concave minimization algorithms, see, for instance,

Benson (1995), Benson (1996), Horst and Tuy (1993), and Pardalos and Rosen (1987).
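For completeness, the concavity of $k$ can be seen factor by factor; the one-line argument below is added here and uses only the stated assumptions that each $f_j$ is concave and strictly positive on $X$:

$k(x) = \log \prod_{j=1}^{p} f_j(x) = \sum_{j=1}^{p} \log f_j(x)$,

and each term $\log f_j$ is concave because it is the composition of the increasing concave function $\log(\cdot)$ with the concave, positive function $f_j$; a sum of concave functions is concave.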








It is interesting and useful in both practice and theory to observe that, in addition

to (3.1), there is at least one other way to rewrite a concave multiplicative programming

problem as a concave minimization problem. To show how this can be accomplished, we

will first prove the following preliminary result.

Lemma 3.2.1. Let $a \in R^p$ satisfy $a > 0$, and consider the nonlinear programming problem

$v = \min \langle a, \lambda\rangle$, s.t. $\lambda \in \Lambda$,  (3.2)

where $\Lambda := \{\lambda \in R^p \mid \prod_{j=1}^{p} \lambda_j \ge 1,\ \lambda \ge 0\}$. Then, $v$ is finite and problem (3.2) has at least one optimal solution.

Proof. Notice that, if $\lambda \in \Lambda$, then $\lambda > 0$ and $\langle a, \lambda\rangle > 0$. Therefore, $v \ge 0$. This, combined with the fact that $\Lambda \ne \emptyset$, implies that $v$ is finite.

Now, suppose that, for each $j = 1, 2, \ldots$, there exists a vector $\lambda^j \in \Lambda$ such that

$\langle a, \lambda^j\rangle \le v + e_j$,  (3.3)

where $\{e_j\}_{j=1}^{\infty}$ is a strictly decreasing sequence of positive real numbers such that $\lim_{j\to\infty} e_j = 0$. Then the sequence $\{\lambda^j\}_{j=1}^{\infty}$ is either bounded or unbounded.

Case 1: $\{\lambda^j\}_{j=1}^{\infty}$ is bounded. Then, for some bounded set $\bar{\Lambda} \subseteq \Lambda$, $\lambda^j \in \bar{\Lambda}$ for each $j = 1, 2, \ldots$. Therefore, by passing to an appropriate subsequence $\{\lambda^j\}_{j\in J}$ of $\{\lambda^j\}_{j=1}^{\infty}$, if necessary, we can guarantee that $\bar{\lambda} = \lim_{j\in J} \lambda^j$ exists. Furthermore, since $\lambda^j \in \bar{\Lambda} \subseteq \Lambda$ for each $j \in J$, and $\Lambda$ is a closed set, $\bar{\lambda}$ belongs to $\Lambda$. By assumption, (3.3) holds for each $j \in J$. By taking the limits over $j \in J$ on both sides of (3.3), we conclude that $\langle a, \bar{\lambda}\rangle \le v$. Since $\bar{\lambda} \in \Lambda$, this implies that $\bar{\lambda}$ is an optimal solution to (3.2).

Case 2: $\{\lambda^j\}_{j=1}^{\infty}$ is unbounded. Then, for some subsequence $\{\lambda^j\}_{j\in J}$ of $\{\lambda^j\}_{j=1}^{\infty}$, and for some $k \in \{1, 2, \ldots, p\}$, $\lim_{j\in J} \lambda^j_k = +\infty$. For each $j \in J$, since $\lambda^j \in \Lambda$, $\lambda^j > 0$. Combined with the fact that $a > 0$, this implies that, for each $j \in J$,

$0 < a_k \lambda^j_k \le \langle a, \lambda^j\rangle$.  (3.4)

By assumption, for each $j \in J$,

$\langle a, \lambda^j\rangle \le v + e_j$.  (3.5)

From (3.4) and (3.5), we obtain

$a_k \lambda^j_k \le v + e_j$  (3.6)

for each $j \in J$. By taking the limits over $j \in J$ on both sides of (3.6), we conclude that $+\infty \le v$, which is a contradiction. Therefore, this case cannot hold, and the proof is complete. □

Using Lemma 3.2.1, we may now establish the following theorem.

Theorem 3.2.1. Assume in problem (Px) that $X$ is a convex set and that $f_j : X \to R$, $j = 1, 2, \ldots, p$, are concave functions. Let $\tilde{g} : X \to R$ be defined for each $x \in X$ by

$\tilde{g}(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.

Then $\tilde{g} : X \to R$ is a concave function.






Proof. Consider the function $h : X \to R$ defined for each $x \in X$ by

$h(x) = \min \sum_{j=1}^{p} \lambda_j f_j(x)$, s.t. $\lambda \in \Lambda$,  (3.7)

where $\Lambda$ is as defined in Lemma 3.2.1. From Lemma 3.2.1, since $f_j$ is strictly positive on $X$ for each $j = 1, 2, \ldots, p$, it follows that the minimum in (3.7) exists and is finite for each $x \in X$. If, for each $\lambda \in \Lambda$, we define a function $h_\lambda : X \to R$ by

$h_\lambda(x) = \sum_{j=1}^{p} \lambda_j f_j(x)$,

then, for each $x \in X$, $h(x)$ may also be written as

$h(x) = \min_{\lambda \in \Lambda} h_\lambda(x)$.  (3.8)

Notice that, for each $\lambda \in \Lambda$, $h_\lambda : X \to R$ is a concave function. From this and (3.8), we conclude that $h : X \to R$ is also a concave function (Rockafellar 1970).

To complete the proof, we will show that, for each $x \in X$, $h(x) = \tilde{g}(x)$. Toward this end, fix $x \in X$, and let $\lambda(x) \in \Lambda$ denote an optimal solution to problem (3.7). From the Karush-Kuhn-Tucker necessary conditions for this problem (Bazaraa, Sherali, and Shetty 1993), since $\lambda(x) > 0$, it follows that there exists a nonnegative constant $\theta(x)$ such that

$f_j(x) - \theta(x)\left[\prod_{k=1}^{p} \lambda_k(x)\right] / \lambda_j(x) = 0$, $j = 1, 2, \ldots, p$.  (3.9)

Since $\lambda(x) \in \Lambda$ is an optimal solution to problem (3.7), it is easy to see that

$\prod_{k=1}^{p} \lambda_k(x) = 1$.

Together with (3.9), this implies that

$\lambda_j(x) f_j(x) = \theta(x)$, $j = 1, 2, \ldots, p$.  (3.10)

From (3.10), it follows that

$\lambda_j(x) = \theta(x) / f_j(x)$, $j = 1, 2, \ldots, p$.

By substitution in

$\prod_{k=1}^{p} \lambda_k(x) = 1$,

this implies that

$\theta(x) = \left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.  (3.11)

From equations (3.10) and (3.11), we see that

$\sum_{j=1}^{p} \lambda_j(x) f_j(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$.  (3.12)

Since $x \in X$ and $\lambda(x) \in \Lambda$ is an optimal solution to (3.7), the left-hand side of equation (3.12) coincides with $h(x)$. By definition of $\tilde{g}$, the right-hand side of equation (3.12) equals $\tilde{g}(x)$, so that the proof is complete. □
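As a concrete illustration of the theorem and its proof (an example added for clarity), consider the case $p = 2$. For concave functions $f_1, f_2 > 0$ on $X$, the theorem asserts that $\tilde{g}(x) = 2[f_1(x) f_2(x)]^{1/2}$ is concave, and the inner problem (3.7) can be solved in closed form: minimizing $\lambda_1 f_1(x) + \lambda_2 f_2(x)$ subject to $\lambda_1 \lambda_2 \ge 1$ yields

$\lambda_1(x) = [f_2(x)/f_1(x)]^{1/2}$, $\lambda_2(x) = [f_1(x)/f_2(x)]^{1/2}$,

with optimal value $2[f_1(x) f_2(x)]^{1/2} = \tilde{g}(x)$, in agreement with (3.11) and (3.12).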

Theorem 3.2.1 can also be proven by using a composite function approach and

showing several preliminary results (Avriel, Diewert, Schaible, and Zang 1987). We offer

the proof here, because it is more direct and because we will use it below to help derive a

corollary of interest.







Notice from Theorem 3.2.1 that, when problem (Px) is a concave multiplicative program, the optimal solution set of problem (Px) is identical to the optimal solution set of the concave minimization problem

$\min \tilde{g}(x)$, s.t. $x \in X$,  (3.13)

where $\tilde{g} : X \to R$ is defined for each $x \in X$ by

$\tilde{g}(x) = p\left[g(x)\right]^{1/p}$.

In practice, this implies that any concave multiplicative program (Px), if rewritten in the

form (3.13), can be solved by applying any suitable concave minimization algorithm to

(3.13). Notice also that problem (3.13) is a simpler reformulation of problem (Px) for the

concave case than the typical reformulation used in the literature to solve problem (Px)

in the convex case (see e.g., Konno and Kuno 1992, Kuno and Konno 1991, Thoai 1991,

and Kuno, Yajima, and Konno 1993).

Theorem 3.2.1 also has some interesting theoretical implications concerning the

product of functions. For instance, for any finite set of concave functions fj, j = 1, 2,

..., p, each defined on a common nonempty convex domain X c R" and each strictly

positive on this domain, it is known that the function g: X -- R defined by their product

is not necessarily concave, convex, or quasiconvex on X (Kuno, Yajima and Konno

1993 and Avriel, Diewert, Schaible and Zang 1988). However, from Theorem 3.2.1, the

function f :X -R given by




for each xe X is a concave function on X.






In addition, Theorem 3.2.1 implies the following result concerning the product of

a set of concave functions.

Corollary 3.2.1. Let $X$ and $f_j$, $j = 1, 2, \ldots, p$, be defined as in Theorem 3.2.1, and suppose that $g : X \to R$ is defined for each $x \in X$ by

$g(x) = \prod_{j=1}^{p} f_j(x)$.

Then $g : X \to R$ is a quasiconcave function.

Proof. Choose $\alpha \in R$, and let

$L_\alpha = \{x \in X \mid g(x) \ge \alpha\}$.

If $\alpha \le 0$, $L_\alpha = X$ is a convex set. If $\alpha > 0$, then from Theorem 3.2.1 and Rockafellar (1970), the set

$\bar{L} = \{x \in X \mid p\left[g(x)\right]^{1/p} \ge \beta\}$

is a convex set, where $\beta = p\alpha^{1/p}$. Since $\bar{L} = L_\alpha$, this implies that $L_\alpha$ is a convex set. Therefore, we have shown that, for any $\alpha \in R$, $L_\alpha$ is a convex set. This is equivalent to showing that $g : X \to R$ is a quasiconcave function (Bazaraa, Sherali, and Shetty 1993), so that the proof is complete. □

It follows from Corollary 3.2.1 that any concave multiplicative programming

problem (Px) is a problem involving the minimization of a quasiconcave function over a

convex set. Many of the most popular algorithms for minimizing a concave function over

a convex set are equally suitable for minimizing quasiconcave functions over convex sets

(Horst and Tuy 1993 and Benson 1995). As a result, we see that any concave








multiplicative program (Px) can be solved by applying any number of suitable concave

minimization algorithms directly to problem (Px). In particular, no reformulations of

problem (Px) are needed to apply these algorithms.

Remark 3.2.1. Corollary 3.2.1 has been previously shown to hold for the special case

where p = 2, X is a nonempty, compact polyhedron, and $f_1$ and $f_2$ are linear functions

(see, e.g., Konno and Kuno 1992).

The next corollary of Theorem 3.2.1 concerns the minimization problem (3.7)

used in the proof of the theorem. Possible uses for this corollary may include the

construction of methods for finding local optimal solutions to concave multiplicative

programs, although we will not investigate this here.

Corollary 3.2.2. Let $X$ and $f_j$, $j = 1, 2, \ldots, p$, be defined as in Theorem 3.2.1, and let $\Lambda$ be defined as in Lemma 3.2.1. Then, $\Lambda$ is a convex set and, for each $x \in X$, the unique optimal solution $\lambda(x)$ to problem (3.7) is given by

$\lambda_k(x) = \left[\prod_{j=1}^{p} f_j(x)\right]^{1/p} \Big/ f_k(x)$, $k = 1, 2, \ldots, p$.

Proof. Notice that $\Lambda$ may be rewritten according to the relation

$\Lambda = \{\lambda \in \operatorname{int} R^p_+ \mid p\left[\prod_{j=1}^{p} \lambda_j\right]^{1/p} \ge p\}$,  (3.14)

where

$\operatorname{int} R^p_+ = \{\lambda \in R^p \mid \lambda > 0\}$.

It is easy to see that, for each $j = 1, 2, \ldots, p$, $h_j : \operatorname{int} R^p_+ \to R$, defined for each $\lambda \in \operatorname{int} R^p_+$ by

$h_j(\lambda) = \lambda_j$,

is a concave function on $\operatorname{int} R^p_+$ that satisfies

$h_j(\lambda) > 0$, for all $\lambda \in \operatorname{int} R^p_+$.

Therefore, by Theorem 3.2.1, the function $m : \operatorname{int} R^p_+ \to R$ defined for each $\lambda \in \operatorname{int} R^p_+$ by

$m(\lambda) = p\left[\prod_{j=1}^{p} \lambda_j\right]^{1/p}$

is a concave function. This implies that

$\{\lambda \in \operatorname{int} R^p_+ \mid m(\lambda) \ge p\}$

is a convex set (Rockafellar 1970). By (3.14), this proves that $\Lambda$ is a convex set.

Now, fix $x \in X$, and let $\lambda(x) \in \Lambda$ denote an optimal solution to problem (3.7). From the proof of Theorem 3.2.1, this implies that, for each $k = 1, 2, \ldots, p$,

$\lambda_k(x) = \theta(x) / f_k(x)$,

where $\theta(x)$ is given by (3.11), so that the corollary is proven. □
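A quick numeric check of the formula (added for illustration): if $p = 2$ and, at some $x$, $f_1(x) = 4$ and $f_2(x) = 1$, then $\theta(x) = (4 \cdot 1)^{1/2} = 2$, so $\lambda_1(x) = 2/4 = 1/2$ and $\lambda_2(x) = 2/1 = 2$. Indeed $\lambda_1(x)\lambda_2(x) = 1$, so $\lambda(x) \in \Lambda$, and $\sum_j \lambda_j(x) f_j(x) = 2 + 2 = 4 = 2[f_1(x) f_2(x)]^{1/2}$, as (3.12) requires.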

In addition to its relationships to concave minimization, a concave multiplicative

program also has some interesting ties to multiple-objective mathematical programming.

In the remainder of this section, we will show some of the theoretical relationships

between concave multiplicative programs and certain multiple-objective mathematical

programs. In the next section, some practical benefits of those relationships will be

demonstrated.

Let $f(x)$ denote the vector

$\left[f_1(x), f_2(x), \ldots, f_p(x)\right]^T$,

where $f_j : X \to R$, $j = 1, 2, \ldots, p$, are the functions used in defining problem (Px).

Then, the components of the vector f(x) are generally conflicting, in the sense that the

infima over X of $f_j(x)$, $j = 1, 2, \ldots, p$, are generally not simultaneously achieved at the

same point in X. As a result, inherent tradeoffs in the achievable values of the

components of f(x) over x e X are present. To account for these tradeoffs, and to seek

what decision makers call a most preferred solution in situations where the goal is to

attempt to simultaneously minimize $f_j(x)$, $j = 1, 2, \ldots, p$, over X, one of the most

popular approaches is to consider the associated multiple-objective mathematical

program

VMIN $f(x)$, s.t. $x \in X$.  (3.15)

In particular, in typical situations, a most preferred solution in X will exist that is also an

efficient solution for (3.15), where an efficient solution is defined as follows.

Definition 3.2.1. A point $x^0 \in R^n$ is called an efficient solution for (3.15) when $x^0 \in X$ and, whenever $f(x) \le f(x^0)$ for some $x \in X$, then $f(x) = f(x^0)$.

An efficient solution is also called a nondominated or Pareto-optimal solution. By

generating or searching the set $X_E$ of the efficient solutions for (3.15), decision makers

are able to observe the inherent tradeoffs among the objective functions fj, j = 1, 2,...,

p, that are available over X and are often able to choose from $X_E$ a most preferred

solution. For further discussions on multiple-objective mathematical programming and its

applications, the reader may consult, for instance, Cohon (1978), Evans (1984), Luc







(1989), Sawaragi, Nakayama, and Tanino (1985), Stadler (1979), Steuer (1986), Yu

(1985), Zeleny (1982) and references therein.

The first relationship between multiplicative programming and multiple-objective

mathematical programming is given in the following result. The proof of this result is an

elementary exercise.

Proposition 3.2.1. Any optimal solution to problem (Px) must belong to the efficient set

XE of the multiple-objective mathematical programming problem (3.15).

Notice that Proposition 3.2.1 holds for arbitrary multiplicative programming

problems (Px). The next result, however, is restricted to certain types of concave

multiplicative programs.

Proposition 3.2.2. Assume in problem (Px) that X is a compact, convex set and that

$f_j : X \to R$, $j = 1, 2, \ldots, p$, are concave functions. Then, there exists an optimal solution

to problem (Px) which is an extreme point of X.

Proof. From Theorem 3.2.1, problem (Px) can be solved by finding an optimal solution to the concave minimization problem (3.13), where $\tilde{g} : X \to R$ is the concave function defined by

$\tilde{g}(x) = p\left[\prod_{j=1}^{p} f_j(x)\right]^{1/p}$

for each $x \in X$. Since X is a nonempty compact, convex set, from Horst and Tuy (1993), problem (3.13) has an optimal solution that is an extreme point of X. These two observations together prove the desired result. □







Taken together, Propositions 3.2.1 and 3.2.2 imply that any concave multiplicative programming problem with a compact feasible region has at least one optimal solution that is an efficient extreme point solution to the multiple-objective mathematical programming problem (3.15). Special cases of this observation have been alluded to in the literature (see, e.g., Aneja, Aggarwal and Nair 1984 and Sniedovich and Findlay 1995). In the next section, we put this observation to practical use.

1995). In the next section, we put this observation to practical use.

3.3. Efficient Point Search Heuristic

Assume in this section that, in problem (Px),

$X = \{x \in R^n \mid Ax \le b\}$

is a compact polyhedron, where A is an $m \times n$ matrix and $b \in R^m$, and that, for each $j = 1, 2, \ldots, p$, $f_j(x) = \langle c^j, x\rangle$, where $c^j \in R^n$. Then problem (Px) is a linear multiplicative programming problem or, more briefly, a linear multiplicative program (Konno and Kuno 1992). We have designed and tested a heuristic algorithm for this problem, based in part on some of the results in the previous section. In this section, we will formally state this heuristic algorithm and explain its workings.

The multiple-objective program (3.15) associated with a linear multiplicative problem may be written as

VMIN $Cx$, s.t. $Ax \le b$,  (3.16)

where C is the $p \times n$ matrix whose jth row equals $(c^j)^T$, $j = 1, 2, \ldots, p$. Problem (3.16) is a multiple-objective linear programming problem (Steuer 1986 and Yu 1985). Let $X_{ex}$ denote the set of extreme points of

$X = \{x \in R^n \mid Ax \le b\}$.

Then, by Propositions 3.2.1 and 3.2.2, an optimal solution to the linear multiplicative programming problem can be found in the set

$X_{E,ex} = (X_E \cap X_{ex})$

of efficient extreme points of problem (3.16). The set $X_{E,ex}$ is finite, and various procedures have been developed for generating it in its entirety (see, e.g., Steuer 1986, Yu 1985 and Steuer 1983).

It follows that, in theory at least, a global optimal solution to a linear multiplicative problem can be found by completely enumerating the set $X_{E,ex}$ of efficient extreme points of the associated multiple-objective linear programming problem (3.16) and, from this set, choosing the point(s) with the smallest value of

$g(x) = \prod_{j=1}^{p} \langle c^j, x\rangle$

(see, e.g., Sniedovich and Findlay 1995). Unfortunately, as we shall see later, in practice the exponential growth in the size of $X_{E,ex}$ as a function of problem size (Steuer 1986) renders this approach impractical for many cases.

The approach of the heuristic algorithm is to efficiently search a dispersed,

carefully chosen sample of candidate points from $X_{E,ex}$ in order to find an attractive

solution to the linear multiplicative programming problem. To describe and explain the

workings of the heuristic, we must first present some theoretical background from the

theory of multiple-objective linear programming.

Let








$W = \{w \in R^p \mid \langle e, w\rangle \le M,\ w > 0\}$,

where $e \in R^p$ is a vector with each entry equal to 1.0, and M is a positive real number. For sufficiently large M, from Philip (1972) it is known that a point $x^0$ belongs to the efficient set $X_E$ of (3.16) if and only if $x^0$ is an optimal solution to the weighted-sum problem

$\min \langle wC, x\rangle$, s.t. $Ax \le b$,  (3.17)

for some $w = \bar{w} \in W$. We will assume henceforth that M is chosen to be large enough to

guarantee that this property holds. It is also well known that the efficient set $X_E$ for (3.16) is given by

$X_E = \bigcup \{X_w \mid w \in W\}$,

where, for each $w \in W$, $X_w$ denotes the optimal solution set of the linear program (3.17) (Steuer 1986 and Yu 1985). Since the optimal solution set to (3.17) for any $w \in W$ is a face of

$X = \{x \in R^n \mid Ax \le b\}$,

it follows that the efficient set $X_E$ for (3.16) is equal to the union of the faces $X_w$, $w \in W$, of X. Although $X_E$ is a connected set (Yu 1985), it is generally nonconvex. The heuristic algorithm will individually identify efficient faces $X_w$, $w \in W$, of X, and find an approximately-optimal extreme point solution to the problem

$\min \prod_{j=1}^{p} \langle c^j, x\rangle$, s.t. $x \in X_w$,  (3.18)

for each efficient face $X_w$ that it finds.








Let

$Y = \{y \in R^p \mid y = Cx \text{ for some } x \in X\}$,

$Y^{\ge} = \{y \in R^p \mid y \ge \bar{y} \text{ for some } \bar{y} \in Y\}$.

To aid in its search, the heuristic algorithm will solve the linear program

min $\langle wC, x\rangle$  (3.19a)
s.t. $Cx \le y$,  (3.19b)
$Ax \le b$,  (3.19c)

for various values of $y \in Y^{\ge}$ and $w \in W$.

of problem (3.19) given in the next three results. The first two results follow easily from

Benson (1978).

Theorem 3.3.1. Suppose that $x^0 \in R^n$ and let $y^0 = Cx^0$. Then, $x^0$ is an efficient solution for (3.16) if and only if, with $y = y^0$, $x^0$ is an optimal solution to (3.19) for every $w \in W$.

Theorem 3.3.2. If $y \in Y^{\ge}$ and $w \in W$, then (3.19) has at least one optimal solution, and any optimal solution for (3.19) is an efficient solution for (3.16).

Theorem 3.3.3. Suppose in (3.19) that $w = \bar{w} \in W$ and that $y = y^0 = Cx^0$, where $x^0$ is an efficient solution for (3.16). Let $(u^{0T}, z^{0T})$ denote any optimal solution to the linear programming dual of (3.19), where $u^0$ represents the dual variables corresponding to the constraints $Cx \le y^0$ of (3.19). Let $w^0 = u^0 + \bar{w}$ and let $v^0 = \langle w^0 C, x^0\rangle$. Then, $x^0$ belongs to the efficient face $X_{w^0}$ of X, and $X_{w^0}$ can be represented as

$X_{w^0} = \{x \in X \mid \langle w^0 C, x\rangle = v^0\}$.

Proof. To prove the theorem, we will show that, with $w = w^0$, $x^0$ is an optimal solution to problem (3.17). Suppose in (3.19) that $w = \bar{w} \in W$ and that $y = y^0 = Cx^0$, where $x^0$ is the efficient solution for (3.16) given in the theorem. The dual linear program to (3.19) is then given by

max $-\langle y^0, u\rangle - \langle b, z\rangle$,
s.t. $-C^T u - A^T z = C^T \bar{w}$,
$u, z \ge 0$.

From Theorem 3.3.1, $x^0$ is an optimal solution to (3.19) when $w = \bar{w}$ and $y = y^0$. By the duality theory of linear programming (Murty 1983), since $(u^{0T}, z^{0T})$ is an optimal solution to the linear programming dual of (3.19) when $w = \bar{w}$ and $y = y^0$, this implies that

$\langle \bar{w}C, x^0\rangle = -\langle y^0, u^0\rangle - \langle b, z^0\rangle$.

By rearranging this equation and using the definitions of $y^0$ and $w^0$, we obtain

$\langle w^0 C, x^0\rangle = -\langle b, z^0\rangle$.  (3.20)

With $w = w^0$, the dual linear program to (3.17) may be written as

max $-\langle b, z\rangle$,  (3.21a)
s.t. $-A^T z = C^T w^0$,  (3.21b)
$z \ge 0$.  (3.21c)

Let $\bar{z}$ denote an arbitrary feasible solution to problem (3.21). From the definitions of $u^0$ and $w^0$, this implies that $(u^{0T}, \bar{z}^T)$ is a feasible solution to the dual linear program of (3.19). Since $(u^{0T}, z^{0T})$ is an optimal solution to the latter problem, it follows that

$-\langle y^0, u^0\rangle - \langle b, z^0\rangle \ge -\langle y^0, u^0\rangle - \langle b, \bar{z}\rangle$,

or, equivalently,

$-\langle b, z^0\rangle \ge -\langle b, \bar{z}\rangle$.

Notice that, since $(u^{0T}, z^{0T})$ is an optimal solution to the dual linear program to (3.19), $z^0$ is a feasible solution to (3.21). By the choice of $\bar{z}$, the preceding two statements imply that $z^0$ is an optimal solution to (3.21). Since $x^0$ is an efficient solution for (3.16), with $w = w^0$, $x^0$ is a feasible solution for (3.17). From (3.20) and the duality theory of linear programming (Murty 1983), since $z^0$ is an optimal solution to (3.21), this implies that, with $w = w^0$, $x^0$ is an optimal solution to (3.17), and the proof is complete. □

Notice in Theorem 3.3.3 that, for any $t > 0$, $X_{tw^0} = X_{w^0}$. This implies that, in Theorem 3.3.3, when $w^0 \notin W$, there exists a $\bar{t} \in (0, 1)$ such that $\bar{t}w^0 \in W$ and $X_{w^0} = X_{\bar{t}w^0}$. Thus, in Theorem 3.3.3, when $w^0 \notin W$, $X_{w^0}$ has an alternate representation $X_{\bar{t}w^0}$ for which $\bar{t}w^0 \in W$. For simplicity, we may and will assume without loss of generality that, in Theorem 3.3.3, $w^0 \in W$.

To generate various points $y \in Y^{\ge}$ for use in problem (3.19), the heuristic

algorithm will rely upon the two concepts defined in the next two definitions (see, e.g.,

Zeleny 1982).







Definition 3.3.1. The point $y^I \in R^p$ is called the ideal point of Y when, for each $j = 1, 2, \ldots, p$, $y^I_j$ equals the minimum value of $y_j$ over Y.

Definition 3.3.2. The point $y^{AI} \in R^p$ is called the anti-ideal point of Y when, for each $j = 1, 2, \ldots, p$, $y^{AI}_j$ equals the maximum value of $y_j$ over Y.

Notice that $y^I$ and $y^{AI}$ generally do not belong to Y. The algorithm uses these two points as anchor points in an initialization procedure whose goal is, in part, to generate a dispersed sample of points from $Y^{\ge}$.
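Before stating the algorithm, it may help to see how the anchor points can be computed; the sketch below is a minimal illustration assuming scipy is available (it is not the VS FORTRAN code used later in this chapter) and finds $y^I$ and $y^{AI}$ by solving $2p$ linear programs over $X = \{x \mid Ax \le b\}$.

```python
# Minimal sketch: compute the ideal point y_I and anti-ideal point y_AI of
# Y = {Cx | Ax <= b} by minimizing and maximizing each outcome coordinate.
# C (p x n), A (m x n), b (m,) are assumed problem data; X is assumed compact.
import numpy as np
from scipy.optimize import linprog

def ideal_and_anti_ideal(C, A, b):
    p = C.shape[0]
    y_ideal, y_anti = np.empty(p), np.empty(p)
    for j in range(p):
        # bounds=(None, None) frees the variables, since X here is defined
        # only by Ax <= b (scipy's default would impose x >= 0).
        lo = linprog(C[j], A_ub=A, b_ub=b, bounds=(None, None))   # min <c^j, x>
        hi = linprog(-C[j], A_ub=A, b_ub=b, bounds=(None, None))  # max <c^j, x>
        y_ideal[j], y_anti[j] = lo.fun, -hi.fun
    return y_ideal, y_anti
```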

The heuristic algorithm may be stated as follows.

Algorithm 3.3.1. Efficient Point Search Heuristic Algorithm

Initialization Phase. See Steps 1 through 5 below.

Step 1. Find the ideal and anti-ideal points $y^I$ and $y^{AI}$ of Y.

Step 2. Find an optimal solution $[(x^*)^T, a^*]^T \in R^{n+1}$ to the linear program

max $a$,
s.t. $y^{AI} + a(y^I - y^{AI}) \ge Cx$,
$Ax \le b$,
$a \ge 0$,

and set $y^* = y^{AI} + a^*(y^I - y^{AI})$.

Step 3. Choose a positive integer S and, for each $i = 1, 2, \ldots, S$, let

$y^i = y^{AI} + (i/S)(y^* - y^{AI})$.

Step 4. Choose a positive integer N such that $1 \le N \le M - p + 1$, let $w^0 = e \in R^p$, and, for each $j = 1, 2, \ldots, p$, define $w^j \in R^p$ by

$w^j_i = \begin{cases} 1, & \text{if } i \ne j, \\ N, & \text{if } i = j. \end{cases}$

Step 5. Set UB $= +\infty$, $i = 0$ and $j = 0$.

Efficient Point Search Phase. See Steps 1 through 6 below.

Step 1. Set $y = y^i$ and $w = w^j$, and find any optimal solution $x^{ij}$ to linear program (3.19).

Step 2. Set $y = Cx^{ij}$ and $w = w^j$ in (3.19), and compute any optimal solution $[(u^{ij})^T, (z^{ij})^T]^T$ to the dual linear program to (3.19), where $u^{ij}$ denotes the optimal dual variables corresponding to the constraints $Cx \le y$ of (3.19).

Step 3. Let $\bar{w}^{ij} = u^{ij} + w^j$. If $\bar{w}^{ij}$ is a positive multiple of $\bar{w}^{i'j'}$ for some $i' \le i$ and $j' \le j$ such that $(i', j') \ne (i, j)$, then go to Step 6. Otherwise, continue.

Step 4. Let $v_{ij} = \langle \bar{w}^{ij} C, x^{ij}\rangle$. For each $h = 1, 2, \ldots, n$, calculate $a_h$ according to the formula

$a_h = \sum_{t=1}^{p} \left[\prod_{k \ne t} \langle c^k, x^{ij}\rangle\right] c^t_h$,  (3.22)

and find any basic optimal solution $\hat{x}^{ij}$ to the linear program

min $\langle a, x\rangle$,  (3.23a)
s.t. $\langle \bar{w}^{ij} C, x\rangle = v_{ij}$,  (3.23b)
$Ax \le b$.  (3.23c)

Step 5. If $\prod_{k=1}^{p} \langle c^k, \hat{x}^{ij}\rangle \ge$ UB, go to Step 6. Otherwise, set $\tilde{x} = \hat{x}^{ij}$ and UB $= \prod_{k=1}^{p} \langle c^k, \hat{x}^{ij}\rangle$, and go to Step 6.

Step 6. Set $j = j + 1$. If $j \le p$, go to Step 1. Otherwise, set $i = i + 1$ and $j = 0$. If $i \le S$, go to Step 1. Otherwise, Stop: $\tilde{x} \in X_{E,ex}$ is the recommended solution to the linear multiplicative programming problem.

In the initialization phase of the algorithm, samples of points from $Y^{\ge}$ and from W are generated. To generate the sample of points from $Y^{\ge}$, Step 2 of this phase determines the point $y^*$ between $y^{AI}$ and $y^I$ such that, of all line segments with endpoints $y^{AI}$ and y that lie in $Y^{\ge}$ and for which y lies on the line segment connecting $y^{AI}$ and $y^I$, the line segment L connecting $y^{AI}$ and $y^*$ has maximum norm. The sample $y^i$, $i = 1, 2, \ldots, S$, of points from $Y^{\ge}$ is then generated in Step 3 of this phase by partitioning L into S line segments of equal length, where S is a positive integer chosen by the user. In Step 4, a sample of $p + 1$ all-integer vectors from W is generated, where, for p of these vectors, the value N of one of the components is chosen by the user from the set $\{1, 2, \ldots, M - p + 1\}$.

Each iteration of the efficient point search phase of the heuristic executes two key operations. First, it identifies an efficient face $X_w$ of X. Second, unless this face has been previously identified during an earlier execution of this phase, with $w = \bar{w}^{ij}$ in problem (3.18), by using a first-order linear approximation to the objective function of this problem, it finds an extreme point $\hat{x}^{ij}$ of X in this efficient face that is an approximate optimal solution to (3.18).

Steps 1 through 3 of the efficient point search phase of the algorithm identify an efficient face of X. In Step 1, with $y = y^i \in Y^{\ge}$ and $w = w^j \in W$, the linear program (3.19) is solved for any optimal solution $x^{ij}$. By Theorem 3.3.2, this optimal solution must exist and is an efficient solution for (3.16). In Steps 2 and 3, with $y = Cx^{ij}$ and $w = w^j$ in (3.19), the dual linear program to (3.19) is solved to yield the vector $u^{ij} \in R^p$, and the weighting vector $\bar{w}^{ij} = u^{ij} + w^j$ is computed. From Theorem 3.3.3, the face $X_{\bar{w}^{ij}}$ corresponding to this weighting vector is an efficient face for (3.16) and contains $x^{ij}$. Furthermore, from the same theorem, this face can be written as

$X_{\bar{w}^{ij}} = \{x \in R^n \mid Ax \le b,\ \langle \bar{w}^{ij} C, x\rangle = v_{ij}\}$,  (3.24)

where $v_{ij} = \langle \bar{w}^{ij} C, x^{ij}\rangle$. Step 3 checks whether or not $X_{\bar{w}^{ij}}$ has been identified during a previous execution of this phase of the algorithm. If so, the algorithm proceeds to Step 6 to prepare for another possible iteration of the efficient point search phase of the heuristic. Otherwise, control shifts to Steps 4-5.

In Steps 4-5 of the efficient point search phase, problem (3.18) is approximately solved using a new efficient face $X_{\bar{w}^{ij}}$ as the feasible region. In particular, in Step 4, (3.22) is first used to construct the nonconstant portion of a first-order Taylor series linear approximation $\langle a, x\rangle$ of the objective function of problem (3.18) at $x = x^{ij} \in X_{\bar{w}^{ij}}$. Next, using the representation (3.24) of the efficient face $X_{\bar{w}^{ij}}$, an extreme point minimizer $\hat{x}^{ij}$ of $\langle a, x\rangle$ over $X_{\bar{w}^{ij}}$ is found by solving the linear program (3.23). Notice that $\hat{x}^{ij} \in X_{E,ex}$ (see Rockafellar 1970). In Step 5, the value achieved by $\hat{x}^{ij}$ in the objective function of the linear multiplicative problem is compared to the smallest value UB found thus far for this objective function by the search. If $\hat{x}^{ij}$ achieves a smaller objective function value than UB, $\hat{x}^{ij}$ becomes the new incumbent solution $\tilde{x}$ and UB is reduced in value accordingly.

Notice that the performance of the heuristic algorithm depends in part upon the number, locations, and dimensions of the efficient faces (3.24) that are searched via problem (3.23). This, in turn, is partially dependent upon the sizes of the parameters S and N chosen by the user. The goal is to search as many points of $X_{E,ex}$ as possible by generating a variety of distinct efficient faces (3.24) of large dimensions that are dispersed widely throughout $X_E$. Notice that, since each efficient face identified by the heuristic is given in the form (3.24) and searched by solving linear program (3.23), the individual points in $X_{E,ex}$ that are searched by the algorithm are searched implicitly rather than explicitly, i.e., they do not need to be explicitly enumerated.
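As a concluding illustration of the search phase, the sketch below shows Steps 4-5 in code form; it assumes the arrays C, A, b and the weighting vector w_bar are available, treats formula (3.22) as the gradient of $g(x) = \prod_j \langle c^j, x\rangle$ at $x^{ij}$, and is a simplified stand-in rather than the implementation used in Section 3.4.

```python
# Sketch of Steps 4-5: build the linearization <a, x> of g(x) = prod_j <c^j, x>
# at the efficient point x_bar (formula (3.22) is the gradient of g there),
# then minimize <a, x> over the efficient face (3.24).
import numpy as np
from scipy.optimize import linprog

def search_face(C, A, b, w_bar, x_bar):
    terms = C @ x_bar                  # <c^j, x_bar>, all positive on X
    g_bar = np.prod(terms)
    a = sum((g_bar / terms[t]) * C[t] for t in range(C.shape[0]))  # (3.22)
    wC = w_bar @ C
    v = wC @ x_bar                     # right-hand side of (3.24)
    res = linprog(a, A_ub=A, b_ub=b, A_eq=wC[None, :], b_eq=[v],
                  bounds=(None, None))  # linear program (3.23)
    x_hat = res.x
    return x_hat, float(np.prod(C @ x_hat))
```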

3.4. Computational Results

The heuristic algorithm described in Section 3.3 has the following attractive

characteristics:

(a) it can be implemented using only linear programming methods;

(b) it generally implicitly searches many efficient extreme points of (3.16) at once

by optimizing over entire efficient faces of (3.16), rather than by explicitly examining

individual efficient extreme points of (3.16);

(c) it allows the user to manipulate the nature and extent of the efficient face

search through the choices for the input parameters S and N;

(d) it finds efficient faces of (3.16) by attempting to globally sample from a

variety of regions of the efficient set.







To evaluate the effectiveness in practice of the heuristic algorithm and its features,

we have written a VS FORTRAN computer code for the algorithm and used it to solve

260 linear multiplicative programming problems of various sizes. To execute the code on

these 260 problems, we used an IBM ES/9000 model 831 mainframe computer. As a

further illustration of the effectiveness in practice of the heuristic algorithm, we solved a

multiple-objective linear programming problem in forest management that was derived from

a real decision situation using real data.

To implement Step 3 of the initialization phase of the algorithm, we chose to set

S = 4, so that a sample of five points lying between $y^{AI}$ and $y^I$ in $Y^{\ge}$ is always

generated in this step. We used a value of N = 9 in Step 4 of the initialization phase to

help generate the sample of p +1 points from W.

To solve the linear programming problems called for by the heuristic, the

computer code uses the simplex method procedures given in the subroutines of the

Optimization Subroutine Library (International Business Machines 1990). These

subroutines employ anticycling rules to handle degeneracy as needed. Therefore, they are

especially appropriate for solving instances of problem (3.23), since these problems

always contain degenerate extreme points.

Let

$\operatorname{int} R^n_+ = \{x \in R^n \mid x > 0\}$,

and suppose that k is a positive integer. To generate the 260 test problems, we used the following random procedure. First, for each $j = 1, 2, \ldots, p$, we generated the elements of the vector $c^j \in R^n$ by randomly drawing elements from the set $\{1, 2, \ldots, 10\}$. Next, we generated a nonempty, compact polyhedral feasible region $X \subset \operatorname{int} R^n_+$. This region can be written as

$X = \{x \in R^n \mid Px \le q,\ 1 \le x_j \le \xi,\ j = 1, 2, \ldots, n\}$,

where P is a $k \times n$ matrix, $q \in R^k$, and $\xi \in R$. To accomplish this, first the elements of P were generated by randomly choosing elements from the set $\{1, 2, \ldots, 10\}$. Next, for each $i = 1, 2, \ldots, k$, $q_i$ was calculated by a formula based upon the elements of the ith row of P, and, finally, $\xi$ was chosen according to the rule

$\xi = \max\{q_i \mid i = 1, 2, \ldots, k\}$.

Each test problem was constructed to belong to one of four categories, where a category is defined by the number p of linear functions used in the objective function

$\prod_{j=1}^{p} \langle c^j, x\rangle$

of the test problem. The values $p = 2, 3, 4, 5$ were chosen to define these categories. We chose these categories in this way because empirical evidence seems to indicate that the complexity of these problems is more sensitive to the magnitude of p than to the magnitudes of k or n (Kuno, Yajima and Konno 1993). Within each category, the test problems were classified into subcategories of 10 problems, each defined by the values of the ordered pair (k, n).

To help evaluate the attractiveness of the solutions found by the heuristic

algorithm, we found a global optimal solution for each test problem by completely

enumerating all of the efficient extreme points of the associated multiple-objective linear







program (3.16). To accomplish this, we used the ADBASE computer code developed by

Steuer (1983).

Some statistics summarizing the results of these computations are presented in

Tables 3.1-3.4. In each table, each row gives average statistics for a subcategory (k,n) of

10 problems, a measure of the worst case performance of the heuristic, and the number of

problems in a category for which a global optimal solution was found. The first statistic is

the average number of efficient extreme points found by ADBASE in solving the

problems by complete enumeration. In some sense, the magnitudes of these numbers

correspond to the average relative difficulties, by subcategory, of each group of 10 linear

multiplicative programs in a subcategory. The second statistic is the average efficiency rating r given by

$r = 1 - \left[(z_H - z_{\min}) / (z_{\max} - z_{\min})\right]$,

where $z_H$ is the objective function value returned by the heuristic, and where $z_{\min}$ and $z_{\max}$ are the global minimum and maximum values of the objective function of the test problem over the corresponding set of efficient extreme points of (3.16). Thus, $0 \le r \le 1$, and the closer r is to 1.0, the more attractive the value $z_H$ returned by the heuristic is relative to the actual global minimum value $z_{\min}$. The third statistic given for each
relative to the actual global minimum value zn, The third statistic given for each

subcategory in these tables is the average CPU time (seconds) that the heuristic needed to

solve a problem in the subcategory. The fourth statistic shows the lowest efficiency rating

calculated for a problem in the subcategory. It gives a measure of the worst case

performance of the heuristic algorithm when applied to the 10 problems in a subcategory.







Table 3.1. Computational Results: p = 2.

Subcategory Avg. No. Eff. Avg. Eff. Avg. Solution Lowest Eff. No. Exact
k n Ext. Points Rating r Time (sec.) Rating r Solutions
25 20 28.8 1.000 0.227 1.000 10
25 30 28.8 1.000 0.241 1.000 10
30 40 47.9 1.000 0.389 1.000 10
40 30 28.2 1.000 0.328 1.000 10
40 50 47.0 0.999 0.504 0.996 8
50 40 35.1 0.999 0.453 0.999 9
50 60 29.2 1.000 0.556 1.000 10
60 70 62.3 1.000 1.070 1.000 10



The fifth statistic is the number of problems in a category for which the heuristic

algorithm found a global optimal solution.

These four tables show that the solutions returned by the heuristic algorithm give,

on the average, quite accurate estimates of the actual global minimum values for the 260

linear multiplicative test problems generated. This is indicated by the fact that average

efficiency ratings by subcategory always were at least 0.920, and in approximately 96%

of the subcategories exceeded 0.950. It is noteworthy that, for these problems, these

ratings r by subcategory do not seem to decline significantly as p, k, and n increase in



Table 3.2. Computational Results: p = 3.

Subcategory Avg. No. Eff. Avg. Eff. Avg. Solution Lowest Eff. No. Exact
k n Ext. Points Rating r Time (sec.) Rating r Solutions
25 20 330.6 0.985 0.321 0.951 4
25 30 896.8 0.960 0.469 0.708 5
30 40 873.3 0.987 0.543 0.884 7
40 30 949.3 0.993 0.609 0.968 6
40 50 2073.7 0.920 0.967 0.806 4
50 40 1484.9 0.993 0.908 0.961 7
50 60 2846.3 0.995 1.298 0.978 6
60 70 5867.5 0.969 2.495 0.799 2







Table 3.3. Computational Results: p = 4.

Subcategory Avg. No. Eff. Avg. Eff. Avg. Solution Lowest Eff. No. Exact
k n Ext. Points Rating r Time (sec.) Rating r Solutions
25 20 2789.5 0.998 0.426 0.993 4
25 30 7245.9 0.992 0.598 0.945 5
30 40 23656 0.986 1.019 0.947 1
40 30 19034 0.978 0.998 0.923 2
40 50 50889 0.969 1.539 0.918 0
50 40 59443 0.969 1.587 0.843 2
50 50 83780 0.981 1.901 0.890 3



value. In addition, with the exception of one subcategory, a global optimal solution was

found for at least one problem in each subcategory.

The average solution times by subcategories shown in the four tables indicate that,

for these test problems, the computational effort required by the heuristic was rather

small. In fact, these average times were always less than 2.50 seconds. In comparison to

exact algorithms that have been used in test situations to globally solve linear

multiplicative problems, these times are generally either at least as small or much smaller

(see, e.g., Kuno, Yajima, Konno 1993 and Ryoo and Sahinidis 1996). Furthermore, in

contrast to solution times for exact algorithms, these average solution times seem much

less sensitive to increases in p, n, k or to increases in the average number of efficient


Table 3.4. Computational Results: p = 5.

Subcategory Avg. No. Eff. Avg. Eff. Avg. Solution Lowest Eff. No. Exact
k n Ext. Points Rating r Time (sec.) Rating r Solutions
10 20 1331.4 0.993 0.353 0.941 5
20 10 527.1 0.998 0.294 0.993 2
25 30 57115 0.995 0.962 0.992 2







extreme points that exist in the corresponding problems (3.16); see Kuno, Yajima, Konno

(1993) and Ryoo and Sahinidis (1996).

Finally, it is worth noting that we were able to apply the heuristic to much larger

problems than those reported in Tables 3.1-3.4. However, the number of efficient extreme

points in the associated multiple-objective linear programming problems (3.16) for these

cases always exceeded 200,000. Since the ADBASE code cannot be used to find all of the

efficient extreme points for such problems, we were unable to completely enumerate the

sets of efficient extreme points to find $z_{\min}$ and r values for these problems. Thus, we are

as yet not able to draw conclusions concerning the accuracy of the heuristic for any

problems larger than those reported in Tables 3.1-3.4.

To further illustrate the effectiveness in practice of the heuristic algorithm, we

solved a real application problem in forest management that was studied in Steuer and

Schuler (1978) as a multiple-objective linear programming problem. The problem

involves the allocation of land and budget monies in a way that seeks to maximize

objectives in timber production, hunting and cattle grazing in the Swan Creek subunit of

the Mark Twain National Forest. Steuer and Schuler (1978) provide actual data used to

formulate their multiple-objective linear programming problem. The problem contains 31

decision variables, 5 linear objective functions, and 13 constraints. Our multiplicative

programming problem was formed from this problem by multiplying the 5 linear

objective functions together to form a single objective function. The heuristic was then

used to search for an approximate solution that maximizes this single objective function

subject to the constraints of the forest management multiple-objective linear

programming problem.








To help evaluate the attractiveness of the solution found by the heuristic

algorithm, we found a global optimal solution by enumerating the 83 efficient extreme

points of the associated forest management multiple-objective linear program using the

ADBASE computer code. An efficiency rating of $r = 0.999$ was calculated using the slightly modified equation

$r = 1 - \left[(z_{\max} - z_H) / (z_{\max} - z_{\min})\right]$,

since this multiplicative programming problem is a maximization problem rather than a minimization problem. This efficiency rating indicates that the heuristic algorithm returned an attractive value $z_H$ relative to the actual global maximum value $z_{\max}$.


3.5. Discussion

The results of this chapter imply that there are at least two ways to rewrite a

concave multiplicative programming problem as a concave minimization problem. It

follows that concave minimization theory and methods can be used in these ways to

analyze and solve concave multiplicative programs. The results also imply that a concave

multiplicative programming problem can be analyzed and solved directly, without any

reformulation, as a quasiconcave minimization problem over a convex set. Furthermore,

the analysis in the chapter implies that any concave multiplicative programming problem

(Px) with a compact feasible region has at least one optimal solution that is an efficient

extreme point solution of the associated multiple-objective mathematical programming

problem (3.15). Therefore, the opportunity exists for devising solution methods for such

problems (Px) that search among the efficient extreme points of the associated multiple-

objective problems (3.15). The chapter proposes a heuristic algorithm that takes this







approach for solving linear multiplicative programs. From the computational results

presented for this heuristic algorithm, we conclude that its features and performance offer

significant potential for conveniently finding, with relatively little computational effort, very attractive solutions to the various applications of linear multiplicative programming

encountered in practice. Thus, the theoretical and algorithmic results presented in this

chapter offer some potential new avenues for more effectively analyzing and solving

multiplicative programming problems of various types.













CHAPTER 4
A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME-
SPACE

4.1. Introduction

Recall from Chapter 1 that the multiplicative programming problem is given by


(Px) vx =min fl(x),s.t.xe X,


where p > 2 is an integer, X is a nonempty set in R", and, for each j = 1, 2,..., p,

f,: X -4 R satisfies f,(x)> 0 for all xe X. For simplicity, we assume that the

minimum v, in problem (Px) is achieved.

For any xe R", let f(x) denote the p-vector withjth entry equal to f,(x),

j = 1, 2,..., p. Let ye RP denote the p-vector withjth entry equal to yj, j = 2,..., p.

For each j = 2,..., p, let iY e R satisfy

5j1sup f,(x),s.t.xe X,

where 5j = +o is possible, and let e RP denote the vector with jth entry equal to y,

j = 1, 2,..., p. Although various outcome-space reformulations of problem (Px) have

been proposed for solution purposes, one of the most common reformulations is given by

the problem

(P-') v, = min g(y), s.t. ye Y4,








where

YS ={yE RPIf(x)yg forsomexe X, (4.1)

and where, for each ye Y g : Y -+ R is defined by


g(y) = l Y. (4.2)
j=1

For example, problem (P_Y≤) is essentially the reformulation of problem (P_X) used in the algorithms of Benson (1998c), Falk and Palocsay (1994), and Thoai (1991). Notice that since X is nonempty, Y^≤ is a nonempty set. By constructing appropriate global solution algorithms for problem (P_Y≤), we obtain the opportunity to solve problem (P_X) by working in the outcome space R^p of the problem, rather than in the decision space R^n, which is generally much larger than R^p. In order to globally solve problem (P_Y≤), it is important to understand the properties of the set Y^≤ defined by (4.1), of the function g defined by (4.2), and of problem (P_Y≤) itself.
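The following minimal Python sketch (our illustration; the data are hypothetical, with p = n = 2 and a box for X) makes the reformulation concrete by evaluating the outcome map f and the outcome-space objective g. Because in the linear case an extreme point optimal solution exists (cf. Chapter 3), checking the corners of the box suffices for this tiny instance:

import itertools
import numpy as np

C = np.array([[1.0, 0.0], [0.0, 1.0]])   # rows are c_1 and c_2
d = np.array([1.0, 2.0])

def f(x):                                # outcome map f(x) = (f_1(x), f_2(x))
    return C @ x + d

def g(y):                                # outcome-space objective g(y) = prod_j y_j
    return float(np.prod(y))

# f_j(x) = <c_j, x> + d_j over the box X = [1, 3] x [1, 3]; compare g(f(x))
# over the extreme points of X.
corners = [np.array(v) for v in itertools.product([1.0, 3.0], repeat=2)]
best = min(corners, key=lambda x: g(f(x)))
print(best, f(best), g(f(best)))         # -> [1. 1.] [2. 3.] 6.0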

This chapter undertakes a mathematical analysis of the outcome-space reformulation (P_Y≤) of problem (P_X). The analysis is organized according to whether the outcome-space problem satisfies the conditions of the general case, the convex case, or the polyhedral case. For the general case, we show, for instance, that globally solving either problem (P_X) or problem (P_Y≤) essentially also globally solves the other problem, and that, for any feasible point ŷ for problem (P_Y≤), either g(y) < g(ŷ) for some y ∈ Y^≤, or ŷ satisfies a condition that is necessary, but not sufficient, for it to be a local optimal solution for problem (P_Y≤). For the convex and polyhedral cases, we show stronger results. For example, we show for the convex case that any global optimal solution for problem (P_Y≤) must lie on the boundary of Y^≤, that the objective function g in problem (P_Y≤) is strictly pseudoconcave on Y^≤, and, when Y^≤ is closed and contains at least one extreme point, that problem (P_Y≤) has an extreme point global optimal solution.

The analysis of the general case of problem (P_Y≤) is given in Section 4.2. Section 4.3 provides analytical results for both the convex and polyhedral cases of problem (P_Y≤).

4.2. Results for the General Case of Problem (P_Y≤)

Notice under the assumptions made in Section 4.1 for problem (P_X) that Y^≤ is a nonempty subset of R^p_{++} := {z ∈ R^p | z > 0}. When Y^≤ satisfies this condition, we obtain what we will call the general case of problem (P_Y≤).

It is important to establish that by solving the general-case outcome-space formulation (P_Y≤) of problem (P_X), a global optimal solution for problem (P_X) can be recovered. The following result, by showing that problems (P_X) and (P_Y≤) are equivalent in a certain sense, immediately establishes this fact.

Theorem 4.2.1. (a) If x* is a global optimal solution for problem (P_X), then y* = f(x*) is a global optimal solution for problem (P_Y≤). Furthermore, v_X = v_Y≤.

(b) Problem (P_Y≤) has at least one global optimal solution. Furthermore, if y* is a global optimal solution for problem (P_Y≤), then any x* ∈ X such that f(x*) ≤ y* is a global optimal solution for problem (P_X).






Proof. (a) Let x* be a global optimal solution for problem (P_X), and set y* = f(x*). From (4.1) and (4.2), this implies that y* ∈ Y^≤ and that

    g(y*) = ∏_{j=1}^p f_j(x*) = v_X.

Therefore, v_Y≤ ≤ v_X. Suppose that g(ỹ) < v_X for some ỹ ∈ Y^≤. Then, by (4.1), there would exist an x̄ ∈ X such that

    0 < ∏_{j=1}^p f_j(x̄) ≤ g(ỹ) < v_X,

which contradicts the definition of v_X. Therefore, g(y) ≥ v_X for all y ∈ Y^≤. This implies that v_Y≤ ≥ v_X. Since v_Y≤ ≤ v_X, this implies that v_Y≤ = v_X and, since g(y*) = v_X, that y* is a global optimal solution for problem (P_Y≤).

(b) By assumption, we may choose a global optimal solution for problem (P_X). From part (a), this implies that problem (P_Y≤) has at least one global optimal solution. Suppose that y* is a global optimal solution for problem (P_Y≤). Since y* ∈ Y^≤, (4.1) implies that we may choose an arbitrary x* ∈ X such that f(x*) ≤ y*. Then, from (4.2), since 0 < f(x*) ≤ y*,

    ∏_{j=1}^p f_j(x*) ≤ ∏_{j=1}^p y*_j = g(y*) = v_Y≤.    (4.3)

Since x* ∈ X and y* is a global optimal solution for problem (P_Y≤), this implies that

    v_X ≤ ∏_{j=1}^p f_j(x*) ≤ v_Y≤.

From part (a), v_X = v_Y≤. By (4.3), this implies that ∏_{j=1}^p f_j(x*) = v_X. Since x* ∈ X, it follows that x* is a global optimal solution for problem (P_X). □
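Part (b) suggests a simple computational recovery step in the polyhedral case: given a global optimal y* for (P_Y≤), any x ∈ X with f(x) ≤ y* solves (P_X), and such an x can be found by a feasibility linear program. A minimal sketch (our illustration with hypothetical data; scipy assumed available):

import numpy as np
from scipy.optimize import linprog

# X = {x >= 0 : A x <= b} and f_j(x) = <c_j, x> + d_j (hypothetical data).
A = np.array([[1.0, 1.0]]); b = np.array([4.0])
C = np.array([[1.0, 0.0], [0.0, 1.0]]); d = np.array([1.0, 1.0])
y_star = np.array([2.0, 3.0])            # a given global optimum of (P_Y<=)

# Feasibility LP with zero objective:  A x <= b  and  C x <= y* - d
# (i.e., f(x) <= y*), x >= 0.  Any feasible x recovered here solves (P_X).
res = linprog(c=np.zeros(2),
              A_ub=np.vstack([A, C]),
              b_ub=np.concatenate([b, y_star - d]),
              bounds=[(0, None), (0, None)])
print(res.status, res.x)                 # status 0 means a recovery x was found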

Suppose in the general case of problem (P_Y≤) that a point ŷ ∈ Y^≤ has been generated. For algorithmic purposes, it may be valuable to have a tool for finding an alternate point y ∈ Y^≤ that satisfies g(y) < g(ŷ), if such a point exists. The next result gives an idea for potentially helping to create such a tool. To prove this result, we need the following lemma. This lemma will also be useful in proving several other results later in this chapter.

Lemma 4.2.1. Assume that ŷ ∈ Y^≤. Then, for any y ∈ Y^≤,

    (1/p)⟨∇g(ŷ), y⟩ = g(ŷ)(1/p) Σ_{j=1}^p (y_j/ŷ_j),

and

    g(ŷ)(1/p) Σ_{j=1}^p (y_j/ŷ_j) ≥ g(ŷ)[g(y)/g(ŷ)]^{1/p},

with equality holding in the latter relationship iff, for some constant M > 0, y_j = M ŷ_j, j = 1, 2, ..., p.

Proof. Choose an arbitrary point y ∈ Y^≤. Since ŷ ∈ Y^≤ ⊂ R^p_{++}, (4.2) implies that g(ŷ) > 0. By definition of g,

    (1/p)⟨∇g(ŷ), y⟩ = (1/p) Σ_{j=1}^p [∏_{k≠j} ŷ_k] y_j
                    = (1/p) Σ_{j=1}^p [g(ŷ)/ŷ_j] y_j
                    = g(ŷ)(1/p) Σ_{j=1}^p (y_j/ŷ_j).    (4.4)

Since (1/p) > 0, (y_j/ŷ_j) > 0 for each j = 1, 2, ..., p, and p(1/p) = 1, the arithmetic-geometric mean inequality (Duffin, Peterson, and Zener 1967) implies that

    (1/p) Σ_{j=1}^p (y_j/ŷ_j) ≥ [g(y)/g(ŷ)]^{1/p},

with equality holding iff, for some constant M > 0, y_j = M ŷ_j for each j = 1, 2, ..., p. Together with (4.4), since g(ŷ) > 0, this implies the desired results. □
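A quick numerical check of Lemma 4.2.1 (our sketch; random positive vectors stand in for points of Y^≤, with p = 4 chosen arbitrarily):

import numpy as np

rng = np.random.default_rng(0)
p = 4
y_hat = rng.uniform(0.5, 2.0, p)
y = rng.uniform(0.5, 2.0, p)

g = lambda v: float(np.prod(v))
grad_g = lambda v: np.prod(v) / v        # (grad g(v))_j = g(v) / v_j

lhs = (1.0 / p) * grad_g(y_hat) @ y      # (1/p)<grad g(y_hat), y>
mid = g(y_hat) * np.mean(y / y_hat)      # g(y_hat)(1/p) sum_j y_j / y_hat_j
rhs = g(y_hat) * (g(y) / g(y_hat)) ** (1.0 / p)

assert np.isclose(lhs, mid)              # the identity (4.4)
assert mid >= rhs - 1e-12                # the arithmetic-geometric mean bound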

Theorem 4.2.2. Assume that ŷ ∈ Y^≤. If

    1.0 > inf (1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤,    (4.5)

then g(y) < g(ŷ) for some y ∈ Y^≤. In particular, if ỹ achieves the infimum in (4.5), then g(ỹ) < g(ŷ).

Proof. Suppose that ŷ ∈ Y^≤. If (4.5) holds, then for some ỹ ∈ Y^≤,

    1.0 > (1/p) Σ_{j=1}^p (ỹ_j/ŷ_j).    (4.6)

Since g(ŷ) > 0, this implies that

    g(ŷ) > g(ŷ)(1/p) Σ_{j=1}^p (ỹ_j/ŷ_j).    (4.7)

From Lemma 4.2.1, since ỹ ∈ Y^≤, we know that

    g(ŷ)(1/p) Σ_{j=1}^p (ỹ_j/ŷ_j) ≥ g(ŷ)[g(ỹ)/g(ŷ)]^{1/p}.    (4.8)

Since g(ŷ) > 0, together (4.7) and (4.8) imply that

    1.0 > [g(ỹ)/g(ŷ)]^{1/p}.

Because g(ŷ) > 0, this implies that g(ỹ) < g(ŷ). Therefore, g(y) < g(ŷ) for some y ∈ Y^≤. Since any point ỹ that achieves the infimum in (4.5) also satisfies (4.6), the argument above implies that if ỹ achieves the infimum in (4.5), then g(ỹ) < g(ŷ). □
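As a small worked illustration of Theorem 4.2.2 (the numbers are ours and presume a set Y^≤ containing both points): take ŷ = (2, 2) and y = (1, 2), so that g(ŷ) = 4. Then

    (1/p) Σ_{j=1}^p (y_j/ŷ_j) = (1/2)(1/2 + 2/2) = 0.75 < 1.0,

so the infimum in (4.5) is below 1.0, and indeed g(y) = 2 < 4 = g(ŷ).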

Notice that when ŷ ∈ Y^≤, the infimum in (4.5) is either less than 1.0 or equal to 1.0. From Theorem 4.2.2, when this infimum is less than 1.0, a point y ∈ Y^≤ such that g(y) < g(ŷ) exists. In particular, in this case ŷ is not a global optimal solution for problem (P_Y≤). The next result covers the case when the infimum in (4.5) equals 1.0.

Theorem 4.2.3. Assume that ŷ ∈ Y^≤. If

    1.0 = inf (1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤,    (4.9)

then ŷ is an optimal solution to

    v_d = min ⟨∇g(ŷ), y − ŷ⟩, s.t. y ∈ Y^≤,    (4.10)

and v_d = 0.

Proof. From (4.9), since ŷ ∈ Y^≤, the infimum in (4.9) is achieved at y = ŷ. By Lemma 4.2.1, since g(ŷ) is a positive constant, this implies that y = ŷ also minimizes (1/p)⟨∇g(ŷ), y⟩ over Y^≤. Since (1/p) is a positive constant and ⟨∇g(ŷ), ŷ⟩ is a constant, it is easy to see that this implies that ŷ is an optimal solution to (4.10) and v_d = 0. □







A point ŷ ∈ Y^≤ is a local optimal solution for problem (P_Y≤) when there exists an ε > 0 such that g(y) ≥ g(ŷ) for each y ∈ Y^≤ for which ‖y − ŷ‖ ≤ ε. From Theorem 4.2.3, when ŷ ∈ Y^≤ and (4.9) holds, then, for any y ∈ Y^≤, if there is a δ > 0 such that d := (y − ŷ) satisfies ŷ + λd ∈ Y^≤ for all λ such that 0 < λ ≤ δ, the directional derivative of g at ŷ in the direction d will be nonnegative, i.e., ⟨∇g(ŷ), d⟩ ≥ 0. From Bazaraa, Sherali and Shetty (1993), this is a necessary, but not sufficient, condition for ŷ to be a local (or global) optimal solution for problem (P_Y≤).


4.3. Results for Convex and Polyhedral Cases of Problem (P_Y≤)

When Y^≤, in addition to being a nonempty subset of R^p_{++}, is a convex set, we obtain what we will call the convex case of problem (P_Y≤). Similarly, when Y^≤, in addition to being a nonempty subset of R^p_{++}, is a polyhedron, we obtain what we will call the polyhedral case of problem (P_Y≤). Each of these types of outcome-space versions of problem (P_X) arises from a broad class of decision-space problems, as shown by the next result.

Theorem 4.3.1. When X is a convex set and, for each j = 1, 2, ..., p, f_j is a convex function on X, we obtain the convex case of problem (P_Y≤). When X is a polyhedron and, for each j = 1, 2, ..., p, f_j is linear on R^n, we obtain the polyhedral case of problem (P_Y≤).

Proof. Assume, in addition to the assumptions made in Section 4.1 on X and on f_j, j = 1, 2, ..., p, that X is a convex set and that, for each j = 1, 2, ..., p, f_j is a convex function on X. We will show that Y^≤ is a convex set. Choose any y¹, y² ∈ Y^≤ and any λ ∈ R such that 0 ≤ λ ≤ 1. From (4.1), since y¹, y² ∈ Y^≤, we may choose x¹, x² ∈ X such that f(x¹) ≤ y¹ and f(x²) ≤ y². Since λ ≥ 0 and (1 − λ) ≥ 0, for each j = 1, 2, ..., p,

    λf_j(x¹) + (1 − λ)f_j(x²) ≤ λy¹_j + (1 − λ)y²_j.    (4.11)

By the convexity of f_j, j = 1, 2, ..., p, on the convex set X, if we set x̄ = λx¹ + (1 − λ)x², then

    x̄ ∈ X,    (4.12)

and, for each j = 1, 2, ..., p,

    f_j(x̄) ≤ λf_j(x¹) + (1 − λ)f_j(x²).    (4.13)

From (4.11)-(4.13), f(x̄) ≤ λy¹ + (1 − λ)y², where x̄ ∈ X. Since y¹, y² ∈ Y^≤, yⁱ ≤ ȳ holds for each i = 1, 2. As a result, since λ, (1 − λ) ≥ 0, λy¹ + (1 − λ)y² ≤ ȳ. The conditions for λy¹ + (1 − λ)y² to belong to Y^≤ are thus satisfied. By the choices of y¹, y² and λ, this implies that Y^≤ is a convex set.

Now suppose, in addition to the assumptions made in Section 4.1 on X and on f_j, j = 1, 2, ..., p, that X is a polyhedron and that, for each j = 1, 2, ..., p, f_j is a linear function on R^n. We will show that Y^≤ is a polyhedron. By definition, since X is a polyhedron, there exist a finite number q of linear functions g_j, j = 1, 2, ..., q, on R^n and real numbers b_j, j = 1, 2, ..., q, such that

    X = {x ∈ R^n | g_j(x) ≤ b_j, j = 1, 2, ..., q}.

Let Z ⊂ R^{n+p} be defined as the set of all solutions (x, y) to the system of linear inequalities (4.14)-(4.16) given by

    f_j(x) − y_j ≤ 0, j = 1, 2, ..., p,    (4.14)
    y_j ≤ ȳ_j, j = 1, 2, ..., p,    (4.15)
    g_j(x) ≤ b_j, j = 1, 2, ..., q.    (4.16)

Then, by definition, Z is a polyhedron in R^{n+p}. Let A be the p × (n + p) matrix whose first n columns each equal 0 ∈ R^p and whose last p columns together form the p × p identity matrix. Then, from (4.1) and the definition of Z, Y^≤ = AZ. From Rockafellar (1970, Theorem 19.3), Y^≤ is a polyhedron in R^p. □
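The construction in the second half of the proof is easy to carry out numerically. A minimal sketch (our illustration with hypothetical data and a finite ȳ) assembles the system (4.14)-(4.16) defining Z; Y^≤ is then the projection of Z onto the y-coordinates:

import numpy as np

n, p = 2, 2
C = np.array([[1.0, 0.0], [0.0, 1.0]]); d = np.zeros(p)  # f_j(x) = <c_j, x> + d_j
G = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])     # X = {x : G x <= b}
b = np.array([4.0, 0.0, 0.0])
ybar = np.array([10.0, 10.0])                            # finite upper bounds

# Variables ordered (x, y).  Row blocks: (4.14) C x - y <= -d,
# (4.15) y <= ybar, (4.16) G x <= b.
A_z = np.block([
    [C,                           -np.eye(p)                  ],
    [np.zeros((p, n)),             np.eye(p)                  ],
    [G,                            np.zeros((G.shape[0], p))  ],
])
b_z = np.concatenate([-d, ybar, b])
# Z = {(x, y) : A_z @ (x, y) <= b_z};  Y<= = {y : (x, y) in Z for some x}.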

In convex cases of problem (P_Y≤) (and thus, in polyhedral cases as well), certain locations within Y^≤ for seeking global optimal solutions can be specified. For instance, we have the following result.

Theorem 4.3.2. Suppose that problem (P_Y≤) satisfies the conditions for the convex case. Then:

(a) Any global optimal solution for problem (P_Y≤) belongs to the boundary of Y^≤.

(b) If Y^≤ is closed and contains at least one extreme point, then there exists at least one global optimal solution for problem (P_Y≤) that is an extreme point of Y^≤.

Proof. Assume that Y^≤, in addition to being a nonempty subset of R^p_{++}, is a convex set, i.e., that we have the convex case for problem (P_Y≤). Then, from Theorem 4.2.1, problem (P_Y≤) has at least one global optimal solution.

(a) To show this part of the theorem, let y* denote an arbitrary global optimal solution to problem (P_Y≤). Suppose that y* is not on the boundary of Y^≤. By the choice of y* and since Y^≤ is a convex set, Y^≤ has a nonempty interior, and y* must belong to the interior of Y^≤. From (4.1), this implies that for some x̄ ∈ X, f(x̄) < y* must hold. By assumption, since x̄ ∈ X, f(x̄) > 0. Therefore, if we set ỹ = f(x̄), it follows that ỹ ∈ Y^≤ and

    ∏_{j=1}^p ỹ_j < ∏_{j=1}^p y*_j.

From (4.2), this contradicts the global optimality of y* in problem (P_Y≤). Therefore, y* must belong to the boundary of Y^≤.

(b) From the discussion in Section 3.2, since Y^≤ is a nonempty convex set and, for each j = 1, 2, ..., p, the function h_j(y) = y_j is positive and concave on Y^≤, the global optimal solution set for problem (P_Y≤) is identical to the global optimal solution set for the problem

    (P̂_Y≤)    min ĝ(y), s.t. y ∈ Y^≤,

where ĝ: Y^≤ → R is the concave function defined, for each y ∈ Y^≤, by

    ĝ(y) = [g(y)]^{1/p}.

Since Y^≤ is a nonempty, closed convex set with at least one extreme point, from Rockafellar (1970, Corollary 18.5.3), it is easy to see that Y^≤ can contain no lines. Furthermore, since problem (P_Y≤) has at least one global optimal solution, problem (P̂_Y≤) also has at least one global optimal solution. By Rockafellar (1970, Corollary 32.3.1), since ĝ is a concave function on Y^≤, the latter two statements imply that problem (P̂_Y≤) has at least one global optimal solution that is an extreme point of Y^≤. Because the optimal solution sets of problems (P_Y≤) and (P̂_Y≤) coincide, this completes the proof. □

Suppose that Y^≤ is a nonempty, closed convex subset of R^p_{++}, and that Y^≤ contains at least one extreme point. Then, from Theorem 4.3.2, there will exist at least one global optimal solution for problem (P_Y≤) that is an extreme point of Y^≤, and all global optimal solutions for problem (P_Y≤) will lie on the boundary of Y^≤. Neither of these properties, however, is necessarily shared by the decision set-based problem (P_X) whose outcome-space reformulation yields problem (P_Y≤). The following example demonstrates this.

Example 4.3.1. Let p = 2, X = {(x₁, x₂)ᵀ ∈ R² | 0 ≤ x_i ≤ 6, i = 1, 2},

    f₁(x₁, x₂) = (x₁ − 1)² + 1,

and

    f₂(x₁, x₂) = (x₂ − 2)² + 1

in problem (P_X). Then X is a nonempty, convex set and, for each i = 1, 2, f_i is a convex, positively-valued function on X. Therefore, by Theorem 4.3.1, the problem (P_Y≤) obtained by formulating the outcome-space version of problem (P_X) is guaranteed to satisfy the conditions of the convex case for problem (P_Y≤). Furthermore, it is not difficult to show, in this case, that Y^≤ is compact. Thus, Y^≤ is closed and contains at least one extreme point. It is easy to see that the unique global optimal solution to problem (P_Y≤) is (y*)ᵀ = (1, 1) which, as guaranteed by Theorem 4.3.2, is an extreme point of Y^≤ (and is thus on the boundary of Y^≤). On the other hand, the only global optimal solution to problem (P_X) is (x*)ᵀ = (1, 2), yet x* is neither on the boundary of X nor is it an extreme point of X.
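A quick numerical confirmation of this example (our sketch; scipy assumed available, and a local minimizer suffices here because the product has a unique minimizer over X):

import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([(x[0] - 1.0) ** 2 + 1.0, (x[1] - 2.0) ** 2 + 1.0])
h = lambda x: float(np.prod(f(x)))       # h(x) = f_1(x) f_2(x)

res = minimize(h, x0=[3.0, 3.0], bounds=[(0.0, 6.0)] * 2)
print(res.x, f(res.x), res.fun)          # -> approx. [1. 2.], [1. 1.], 1.0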

To present the next result, we need to define two types of functions.

Definition 4.3.1. Let Z ⊆ R^n be a nonempty convex set, and let h: Z → R. The function h is said to be quasiconcave on Z when, for each z¹, z² ∈ Z and λ ∈ R such that 0 ≤ λ ≤ 1,

    h[λz¹ + (1 − λ)z²] ≥ min {h(z¹), h(z²)}.

Definition 4.3.2. Let W be an open set in R^n that contains Z ⊆ R^n, and let h: W → R. The function h is said to be strictly pseudoconcave over Z when h is differentiable over Z and, for each distinct z¹, z² ∈ Z, if ⟨∇h(z¹), z² − z¹⟩ ≤ 0, then h(z²) < h(z¹).

It is well known that a differentiable, quasiconcave function h: Z → R need not be strictly pseudoconcave over Z. For a discussion of quasiconcave and strictly pseudoconcave functions see, for example, Bazaraa, Sherali and Shetty (1993).

From Konno and Kuno (1995, p. 379), we know that when Y^≤ is a convex set, since Y^≤ ⊂ R^p_{++}, g: Y^≤ → R defined by (4.2) is quasiconcave on Y^≤. Thus, in the convex case, problem (P_Y≤) is a minimization of a quasiconcave function over a convex set. In fact, however, we have the following even stronger result.






Theorem 4.3.3. Suppose that problem (P_Y≤) satisfies the conditions for the convex case. Then, in this problem, g is a strictly pseudoconcave function over the convex set Y^≤.

Proof. The set Y^≤ is a convex set by definition of the convex case for problem (P_Y≤). To show that g is strictly pseudoconcave over Y^≤, notice first that by (4.2), g can be considered to be well defined over the open set R^p_{++}. Also notice that g is differentiable over R^p_{++} and, thus, over Y^≤ ⊆ R^p_{++}.

Suppose now that y¹ and y² are distinct points in Y^≤ that satisfy ⟨∇g(y¹), y² − y¹⟩ ≤ 0. Then, from (4.2), we obtain

    0 ≥ ⟨∇g(y¹), y² − y¹⟩ = Σ_{k=1}^p [∏_{j≠k} y¹_j](y²_k − y¹_k)
                          = Σ_{k=1}^p [∏_{j≠k} y¹_j] y²_k − Σ_{k=1}^p g(y¹)
                          = Σ_{k=1}^p g(y¹)(y²_k/y¹_k) − p g(y¹)
                          = g(y¹)[Σ_{k=1}^p (y²_k/y¹_k) − p].    (4.17)

By multiplying both sides of (4.17) by (1/p) and rearranging, we obtain that

    g(y¹) ≥ g(y¹)(1/p) Σ_{k=1}^p (y²_k/y¹_k).    (4.18)

From Lemma 4.2.1,

    g(y¹)(1/p) Σ_{k=1}^p (y²_k/y¹_k) ≥ g(y¹)[g(y²)/g(y¹)]^{1/p},    (4.19)

with equality holding iff, for some M > 0, y²_k = M y¹_k, k = 1, 2, ..., p. There are two cases to consider.

Case (i): There is no M > 0 such that y²_k = M y¹_k, k = 1, 2, ..., p. Then, in (4.19), strict inequality holds, so that from (4.18) and (4.19),

    g(y¹) > g(y¹)[g(y²)/g(y¹)]^{1/p}.

Since g(y¹) > 0, this implies that g(y²) < g(y¹).

Case (ii): For some M > 0, y²_k = M y¹_k, k = 1, 2, ..., p. If we choose such an M, then (4.19) holds as an equality. Thus, from (4.19) and the choice of M, we obtain that

    g(y¹)(1/p) Σ_{k=1}^p (y²_k/y¹_k) = g(y¹)[g(y²)/g(y¹)]^{1/p}    (4.20)

and that

    g(y²) = M^p g(y¹),    (4.21)

respectively. Since g(y¹) > 0, together (4.18), (4.20) and (4.21) imply that

    g(y¹) ≥ g(y¹)(M^p)^{1/p} = M g(y¹).

Dividing through by g(y¹) > 0 yields M ≤ 1. Notice that M ≠ 1, since, by assumption, y¹ and y² are distinct. Therefore M < 1. By (4.21), since g(y¹), g(y²) > 0, this implies that g(y²) < g(y¹), and the proof is complete. □

Remark 4.3.1. Theorem 4.3.3 justifies and strengthens the claim of Sniedovich and Findlay (1995, p. 317) that when Y^≤ is a convex subset of R^p_{++}, g: Y^≤ → R defined by (4.2) is differentiable and pseudoconcave on Y^≤.






From Theorem 4.3.3, in the convex case, problem (P_Y≤) is a global optimization problem involving the minimization of a strictly pseudoconcave function over a convex set Y^≤. Therefore, as in the general case, multiple local optimal solutions for problem (P_Y≤) will generally exist that are not globally optimal.

From Theorem 3.2.1, we know that when Y^≤ is a nonempty, convex subset of R^p_{++}, the function ĝ: Y^≤ → R defined, as in the proof of Theorem 4.3.2, by

    ĝ(y) = [g(y)]^{1/p}    (4.22)

is concave, where g: Y^≤ → R is given by (4.2). By the next result, when the domain of ĝ is restricted to an appropriate subset of Y^≤, a stronger statement can be made.

Theorem 4.3.4. Assume that Y^≤ is a nonempty, compact, convex subset of R^p_{++}. For any a ∈ R^p and b ∈ R such that a > 0 and b > 0, let Z(a, b) = Y^≤ ∩ {y ∈ R^p | ⟨a, y⟩ = b}. Then ĝ: Z(a, b) → R defined for each y ∈ Z(a, b) by (4.22) is a strictly concave function for any a ∈ R^p and b ∈ R such that a > 0 and b > 0.

Proof. Assume that y¹, y² ∈ Z(a, b) and y¹ ≠ y², where a ∈ R^p, b ∈ R, a > 0, and b > 0. Since Z(a, b) is an intersection of two convex sets, it is itself a convex set. Therefore, if we choose λ ∈ R such that 0 < λ < 1, then

    z := λy¹ + (1 − λ)y² ∈ Z(a, b).

Also, by (4.2) and (4.22),

    ĝ(z) = [∏_{j=1}^p (λy¹_j + (1 − λ)y²_j)]^{1/p}.    (4.23)

From Polya and Szego (1972),

    [∏_{j=1}^p (λy¹_j + (1 − λ)y²_j)]^{1/p} ≥ [∏_{j=1}^p λy¹_j]^{1/p} + [∏_{j=1}^p (1 − λ)y²_j]^{1/p},    (4.24)

with equality holding iff λy¹_j = K(1 − λ)y²_j, j = 1, 2, ..., p, for some positive constant K. Since

    [∏_{j=1}^p λy¹_j]^{1/p} = λ[g(y¹)]^{1/p} = λĝ(y¹)

and

    [∏_{j=1}^p (1 − λ)y²_j]^{1/p} = (1 − λ)[g(y²)]^{1/p} = (1 − λ)ĝ(y²),

(4.23) and (4.24) will imply the desired result if we can show that no K > 0 exists such that

    λy¹_j = K(1 − λ)y²_j, j = 1, 2, ..., p.    (4.25)

Notice that, since y¹ ≠ y², K₁ := [λ/(1 − λ)] does not satisfy (4.25).

Suppose, to the contrary, that for some K > 0, (4.25) is satisfied. Then from (4.25) it follows that

    y¹ = K[(1 − λ)/λ] y².    (4.26)

Since y¹, y² ∈ Z(a, b),

    ⟨a, y¹⟩ = ⟨a, y²⟩ = b.    (4.27)

Substituting for y¹ in (4.27) via (4.26), we obtain

    K[(1 − λ)/λ]⟨a, y²⟩ = ⟨a, y²⟩ = b.

Solving here for K, we obtain that K = [λ/(1 − λ)]. Since K = K₁ = [λ/(1 − λ)] does not satisfy (4.25), this contradiction concludes the proof. □

It is important to notice that the counterpart of Theorem 4.3.4 in the decision space does not hold, even in the polyhedral case. In particular, suppose that X ⊆ R^n is a nonempty, compact polyhedron and, for each j = 1, 2, ..., p, that there exists a c_j ∈ R^n such that f_j(x) = ⟨c_j, x⟩ > 0 for all x ∈ X. Then, although the function ĥ: X → R defined for each x ∈ X by

    ĥ(x) = [∏_{j=1}^p ⟨c_j, x⟩]^{1/p}    (4.28)

is concave (see Theorem 3.2.1), the function ĥ: X(a, b) → R need not be strictly concave, where a ∈ R^p, b ∈ R, a > 0, b > 0, and

    X(a, b) = {x ∈ X | Σ_{j=1}^p a_j⟨c_j, x⟩ = b}.


The following example illustrates this observation.

Example 4.3.2. Let

    X = {(x₁, x₂)ᵀ ∈ R² | 0.5 ≤ x_i ≤ 4.0, i = 1, 2},

and let f_j(x₁, x₂) = ⟨(1, 1), (x₁, x₂)⟩, j = 1, 2. Then X is a nonempty, compact polyhedron and, for each j = 1, 2, f_j is positive and linear on X. As guaranteed by Theorem 3.2.1, ĥ: X → R, which, by (4.28), is given by

    ĥ(x₁, x₂) = (x₁ + x₂),

is concave. However, if, for example, a₁ = a₂ = 1/2 and b = 4, then ĥ is not strictly concave on

    X(a, b) = {(x₁, x₂)ᵀ | 0.5 ≤ x_i ≤ 4.0, i = 1, 2, x₁ + x₂ = 4},

since ĥ is constant there.

Consider now problem (P_Y≤) when the conditions of the polyhedral case hold. Assume also that Y^≤ is a compact set, and that ŷ ∈ Y^≤. For algorithmic purposes, it may be quite useful in this case to develop tools for finding local optimal solutions for problem (P_Y≤). These tools could then potentially be used to construct global solution algorithms for the problem that repeatedly move from a local optimal solution to an improved local optimal solution until a global optimal solution is found. The remaining results in this section are motivated, in part, by the desire to find such tools.

Notice that in the polyhedral case, the optimization problem in (4.9) is a linear program given by

    (LP)    min (1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤.
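For a polyhedral Y^≤ given by linear inequalities, (LP) can be handed to any LP solver. A minimal sketch (our illustration; the data are hypothetical and chosen so that v_min = 1.0 with multiple optima, the third case discussed below):

import numpy as np
from scipy.optimize import linprog

# Y<= = {y : y_1 + y_2 >= 8, 0 <= y_1 <= 7, 0 <= y_2 <= 4} and y_hat = (4, 4).
A = np.array([[-1.0, -1.0]]); b = np.array([-8.0])
bounds = [(0.0, 7.0), (0.0, 4.0)]
y_hat = np.array([4.0, 4.0])
p = 2

# (LP): minimize (1/p) sum_j y_j / y_hat_j over Y<=.
res = linprog(c=(1.0 / p) / y_hat, A_ub=A, b_ub=b, bounds=bounds)
print(res.fun, res.x)     # v_min = 1.0; the whole edge y_1 + y_2 = 8 is optimal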

Problem (LP) will have an optimal solution y* that can be found, for instance, by the simplex method. Since ŷ ∈ Y^≤, the minimum value v_min in problem (LP) satisfies v_min ≤ 1.0. As a result, there are three possible cases for problem (LP). First, v_min < 1.0 may hold. Second, v_min = 1.0 may hold, with ŷ being the unique optimal solution to problem (LP). Third, v_min = 1.0 may hold, with problem (LP) having multiple optimal solutions.

In the first case, from Theorem 4.2.2, it follows that g(y*) < g(ŷ), where y* is any optimal solution to problem (LP), so that a more attractive feasible solution y* to problem (P_Y≤) than ŷ has been found. To analyze the second case, we need the following two definitions and lemma.

Definition 4.3.3. A point ŷ ∈ Y^≤ is a strict local optimal solution for problem (P_Y≤) when there exists an ε > 0 such that for each y ∈ Y^≤ for which y ≠ ŷ and ‖y − ŷ‖ ≤ ε, g(y) > g(ŷ).

Definition 4.3.4. Let Z be a nonempty convex set in R^n, and let h: Z → R. The function h is said to be strongly quasiconcave on Z when, for each z¹, z² ∈ Z with z¹ ≠ z², we have

    h[λz¹ + (1 − λ)z²] > min {h(z¹), h(z²)}

for each λ such that 0 < λ < 1.

Lemma 4.3.1. Let Z be a nonempty convex set in R^n, and let h: Z → R be strongly quasiconcave. Suppose that zⁱ, i = 1, 2, ..., k, are distinct points in Z and that s is an element of the convex hull of zⁱ, i = 1, 2, ..., k, such that, for each i = 1, 2, ..., k, s ≠ zⁱ. Then

    h(s) > min {h(zⁱ) | i = 1, 2, ..., k}.

Proof. The lemma is easy to prove using Definition 4.3.4 and induction. □

The following result analyzes the case where v_min = 1.0 and ŷ is the unique optimal solution to problem (LP).

Theorem 4.3.5. Assume that problem (P_Y≤) satisfies the conditions for the polyhedral case, and that Y^≤ is a compact set. Assume also that ŷ ∈ Y^≤. Suppose that v_min = 1.0 and that y = ŷ is the unique optimal solution to problem (LP). Then ŷ is a strict local optimal solution for problem (P_Y≤).

Proof. Since g(ŷ) > 0 and y = ŷ is the unique optimal solution to the problem

    min (1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤,

y = ŷ must also be the unique optimal solution to the problem

    min g(ŷ)(1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤.

Therefore, by Lemma 4.2.1, y = ŷ is the unique optimal solution to the problem

    min (1/p)⟨∇g(ŷ), y⟩, s.t. y ∈ Y^≤.

Since (1/p) > 0 and ⟨∇g(ŷ), ŷ⟩ is a constant, this implies that y = ŷ is the unique optimal solution to the problem

    min ⟨∇g(ŷ), y − ŷ⟩, s.t. y ∈ Y^≤.    (4.29)

Therefore, the optimal value of problem (4.29) is 0, and for all y ∈ Y^≤ such that y ≠ ŷ,

    ⟨∇g(ŷ), y − ŷ⟩ > 0.    (4.30)

Let d¹, d², ..., d^k represent the directions of the edges of Y^≤ emanating from the extreme point ŷ of Y^≤. From (4.30),

    ⟨∇g(ŷ), dⁱ⟩ > 0

for all i = 1, 2, ..., k. By Theorem 4.1.2 in Bazaraa, Sherali, and Shetty (1993), this implies that there exist positive reals δᵢ, i = 1, 2, ..., k, such that

    g(ŷ + λdⁱ) > g(ŷ)    (4.31)

for each λ ∈ (0, δᵢ). Let δ = (1/2) min {δᵢ | i = 1, 2, ..., k}, and consider the points ŷ and

    ŷ + δd¹, ŷ + δd², ..., ŷ + δd^k.

Then, by the definition of δ and (4.31),

    g(ŷ + δdⁱ) > g(ŷ)    (4.32)

for each i = 1, 2, ..., k. Let z be any element of the convex hull of ŷ, ŷ + δdⁱ, i = 1, 2, ..., k, such that z ≠ ŷ and, for each i = 1, 2, ..., k, z ≠ ŷ + δdⁱ. Since g is a strictly pseudoconcave function on Y^≤, it is also a strongly quasiconcave function on Y^≤ (Bazaraa, Sherali, Shetty 1993). As a result, by Lemma 4.3.1,

    g(z) > min {g(ŷ), g(ŷ + δdⁱ), i = 1, 2, ..., k}.    (4.33)

From (4.32) and (4.33), g(z) > g(ŷ). Since δ > 0, this implies that there exists an ε > 0 sufficiently small so that if z ∈ Y^≤, ‖z − ŷ‖ ≤ ε, and z ≠ ŷ, then g(z) > g(ŷ). □

Under the assumptions of Theorem 4.3.5, if v_min = 1.0 but y = ŷ is one of two or more optimal solutions to problem (LP), then ŷ need not be a strict local optimal solution for problem (P_Y≤). The following example illustrates this point.

Example 4.3.3. Let

    Y^≤ = {y ∈ R² | y₁ + y₂ ≥ 8, y₁ ≤ 7, y₂ ≤ 4},

and let ŷᵀ = (4, 4). Then Y^≤ is a nonempty, compact polyhedron in R²_{++}, and the assumptions of Theorem 4.3.5 are satisfied. In this case, ŷ ∈ Y^≤ and ŷ is an optimal solution to problem (LP). However, since (y^δ)ᵀ := (4 + δ, 4 − δ) ∈ Y^≤ and g(y^δ) = 16 − δ² < 16 = g(ŷ) for all values of δ such that 0 < δ < 3, by Definition 4.3.3, ŷ is not a strict local optimal solution for problem (P_Y≤). (In fact, ŷ is not even a local optimal solution for problem (P_Y≤).) Notice that problem (LP) in this case has multiple optimal solutions.

In the third case of problem (LP), v_min = 1.0 and problem (LP) has multiple optimal solutions. In this case, by the next result, as in the first case, an improved feasible solution for problem (P_Y≤) is at hand. The proof of this result relies crucially on Theorem 4.3.4.

Theorem 4.3.6. Assume that problem (P_Y≤) satisfies the conditions for the polyhedral case, and that Y^≤ is compact. Suppose that ŷ is an optimal solution for problem (LP), and suppose that problem (LP) has multiple optimal solutions. Then, for any y* ≠ ŷ that is an optimal solution for problem (LP), g(y*) < g(ŷ) must hold.

Proof. Let y* ≠ ŷ be an optimal solution to problem (LP). Then, since g(ŷ) > 0, y* is also an optimal solution for the problem

    min g(ŷ)(1/p) Σ_{j=1}^p (y_j/ŷ_j), s.t. y ∈ Y^≤.

By Lemma 4.2.1, since ŷ ∈ Y^≤, this implies that y* is an optimal solution to the problem

    min (1/p)⟨∇g(ŷ), y⟩, s.t. y ∈ Y^≤.

Since (1/p)⟨∇g(ŷ), ŷ⟩ is a fixed number, it follows that y* is an optimal solution to the problem

    min (1/p)⟨∇g(ŷ), y − ŷ⟩, s.t. y ∈ Y^≤.    (4.34)

By assumption, ŷ is an optimal solution for problem (LP). Therefore, the optimal value of problem (LP) equals 1.0. From Theorem 4.2.3, this implies that the optimal value of


Full Text

PAGE 1

MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS By GEORGE BOGER A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1999

PAGE 2

ACKNOWLEDGMENTS I would like to thank my entire supervisory committee Dr. Harold Benson, Dr. Selcuk Erenguc, Dr. Asoo Vakharia, and Dr. Richard Francis for their time and helpful feedback on my dissertation I am especially grateful to my committee chairman, Dr. Benson, for suggesting the topic of multiplicative programming problems and for his tremendous assistance and unending support. Without his help, this dissertation would not have been completed. I would also like to thank Mr. Erijang Sun for proving some theoretical results needed to support my dissertation topic I am also grateful to the DIS department chairman, Dr. Erenguc, for providing an assistantship and for allowing me to teach undergraduate courses during my time at the University of Florida The teaching experience was an enjoyable and rewarding expenence. I would like to thank my family for their encouragement and emotional support. I would also like to thank my colleagues in the Ph.D. program for their friendship and their support. Finally, I am in debt to my master's degree advisor, Dr. Frederick Buoni, at the Florida Institute of Technology, for his guidance. He suggested multiple objective linear programming as a topic for my thesis. While working on the thesis, I met Dr Benson during a visit to FIT to present a talk related to multiple objective linear programming. 11

PAGE 3

Dr. Benson agreed to serve on my master's degree committee and later recruited me for the DIS Ph.D. program. 111

PAGE 4

TABLE OF CONTENTS page S .. AC.KN' 0 WLEDG MENT . ... ........ .. ..... ........ .. .... . ... .... .. . ..... .... .. .... . .. . . ... ................ 11 ABSTRACT ....... ......... .. . .. ...... .. ....... .. .. ..... .. .. . .... ....... .............. .... .. ........ ... ........ .. ... Vl CHAPTERS 1 rnTRODUCTION .. .. ..... ................ ... .. ... . .. ....... ... .. . .......... .... .... ....... ...... .... .... l 1.1. The Multiplicative Programming Problem ........... ........ ..... .. . .. ... .. ...... ... .. 1 1.2 Reformulation s of the Multjplicative Programming Problem . ... ......... ..... 4 1 .3. Purpo se and Or ga nization of the Di sse rtation . . ........ . .. .... ....... . ... . .. .... .. 6 2 A REVIEW OF THE Lfl'ERATURE ON MUL'l'IPLICATIVE PROGRAM.M.IN'G PROBLEMS .. .. .. . .. ..... .. .. .. . ..... .... ... .... .. . ...... ..... 9 2.1. Or ga ni za tion of the Literature Review . .. .. .. .. ...... ...... .. . .. .... . . .. ......... .. . .... 9 2.2. Method s to Solve Problem s (LMP2), (GLMP), and (CLMP) . .. ... ...... ..... 13 2 .2. 1 Method s Based on Quadratic Programming ............ . .......... ........ 15 2.2.2. Method s Ba s ed on Searching the Outcome Set . .. . .. .... ...... ... ...... 17 2.2.3. Method s Based on Solving a Parametric Master Problem .. ........... 22 2.2.4. Method s Based on Polyhedral Annexation .. .... .... .. .... ............... . 28 2 .3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (L MP) when p > 3 .... .. ........ ........ ....... .. ..... . .. ... . .. .. .. .... .. ........ . .... 32 2.4. M e th ods to Sol ve Problems (C MP ), ( GCMP) and (CCMP) . ........ .......... 32 2.4.1. Method s Ba se d on Solving a Reformulated Problem ..................... 33 2.4.2. A Method Based on Outer Approximation .............. .. . ... .... . . . .. .. 37 2.5. M e thod s to Sol ve Problem (LMP) as a Concave Minimization Problem ............................................... .. .......... . .. . .. ........... .... ........... 38 3 CONCAVE MULTIPLICATIVE PROG G PROBLEMS : ANALYSIS AND AN EFFICIENT PO.IN'T SEARCH HEURISTIC FOR THE L.IN'EAR CASE ..... .. ... .. . . .................................................... . 40 3.1. Introdt1ction .. .. .. .. .. . .. .. ..... .. .. . . .. .. ....................................... .... ........ ... 40 3 .2 Analysis .. ..... ..... .. .. .. . .. ..... .. .. .......... .. . ...................... ...... ... ... .. ............. 41 3.3. Efficient Point Search Heuristic .. ....... .. ... .. . ....................... ... ... ............ 52 3.4. Computational Result s ......... . ........ .. .. ..... .. . ......... ...... ... . .. . .. ........... . 62 3 5. Di sc u ss ion ......... ... . .. .. . . . .. .... .. ......... . .. .... . ..... .. .. ..................... . .. . .. . .. 69 IV

PAGE 5

4 A GENERAL MULTIPLICATIVE PROG _.._ G PROBLEM IN OUTCOME-SPACE .. .. ..... ........ . .. .. . .. .. . ... .... ........ ....... . .............. ... .... .. 71 4.1. Introduction ...... . ...... .. ... ... .. .. .......... .. . ............. .. ..... .............. ...... . .......... 71 4 .2 Re s ult s for the General Case of Problem (Py s ) .... ............. ....... .... . ... .. .. 73 4.3 R es ult s for Convex and Polyhedral Cases of Problem (P s) .... . ........... .. 78 Y4 .4. Di sc u ss ion .............................. .... .. ..... ....... .. .. .. .. ... ... ....... . .... .. ...... ......... .. 96 5 AN OUTCOME-SPACE CUTTING-PLANE ALGORITHM FOR LINEAR MULTIPLICATIVE PROGRAMMING ......... ....... ... . .. .. ...... 98 5 1. futroduction .. .. .. .. .. .. ..... .. .. .............. ..... . .. ..................... . .. ... ........ ......... 98 5 .2. Theoretical Prerequisite s ..... ..... ................. ............. ... .. . ........... .. .. ... 100 5.3 Out comeSpace, Cutting-Plane Algorithm . .. ........ ... . .. ................ . ........ 104 5 .3. 1 Strict Local Optimal Solution Search .............. . ....... ........ .. . ...... 105 5 .3 .2. Cutting PI ane Con s truction ... ... .. . .. .. .. ................. . .... ...... .. ... .... 107 5 .3 .3. Termination Test . ....... .. .. . ........... ......................................... ..... I 09 5 .3.4. Outcome Space Cutting-Plane Algorithm ................ ............ ..... 110 5 4. Implementation .. .. .. .... .. .. ....... ..... .. .. ..... ........... ... .. ... ................... .... .. .. 114 5.5. Example .. ......... ..... .. .......... ........ . .. .. .. ........... ................................... .... 119 5 6. Concluding Remarks ... .. ..... . ......... .. .. .. .......... ... .. ..... ............. ..... .. ........ 124 6 SUMMARY AND FUTURE RESEARCH ............................... .. ........... .. ... .. 125 6.1. Introdu ctio n ... .... ... ........ ....... ... . .. ...... .... ......... .... ...... .. .... . .. . .... . . ....... 125 6 .2 Futur e Research on the Heuristic Algorithm ................ .. ................ . ... . 125 6 3. Futur e Re se arch on an Global Solution Algorithms .............. . .... ......... . 127 REIBREN CBS . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 BIOGRAPHICAL SKETCH .. ......... .... .. .. . ................................ ...... ............................ 137 V

PAGE 6

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy MULTIPLICATIVE PROGRAMMING: THEORY AND ALGORITHMS By Chairman: Harold P. Benson George Boger December 1999 Major Department: Decision and Information Sciences Multiplicative programming problems are mathematical optimization problems in which the objective function contains a product of several real valued functions defined over a common domain and the feasible decisions are described by a nonempty set. These optimization problems have some important applications in engineering finance, economics, and other fields Multiplicative programming problems, however, are difficult global optimization problems that are known to be NP-hard This dissertation has two purposes. The first is to develop and test a heuristic algorithm that finds a good solution, though not necessarily a globally optimal solution, for the linear multiplicative programming problem. The second purpose is to develop a global solution algorithm for the linear multiplicative programming problem that is potentially more efficient than existing algorithms for this problem VI

PAGE 7

To evaluate the effectiveness in practice of the heuristic algorithm, we have written a FORTRAN computer program and used it to solve 260 randomly generated linear multiplicative programming problems of various sizes. Our experimental results show that the computational requirements of the heuristic algorithm are not overly burdensome when compared to the effort required to solve a linear multiplicative programming problem The framework of the outcome-space, cutting-plane algorithm is taken from a pure cutting plane, decision set-based method developed by Horst and Tuy for solving concave minimization problems. By adapting the approach of this method to an outcome space reformulation of the linear multiplicative programming problem, rather than directly applying the method to the original decision set formulation, it is expected that considerable computational savings can potentially be obtained. We also show how additional computational benefits might be obtained by implementing the new algorithm appropriately. To illustrate the new algorithm, we apply it to the solution of a sample problem. VII

PAGE 8

CHAPTER 1 lNTRODUCTION 1.1. The Multiplicative Programming Problem Multiplicative programming problems are mathematical optimization problems in which the objective function contains a product of several real valued functions defmed over a common domain and the feasible decisions are describe by a nonempty set. These problems occur is a wide variety of application areas. For example, Konno and Inori ( 1989 ) studied a bond portfolio optimization problem in which the portfolio's performance is measured by a number of indices such as the average coupon rate, the average terminal yield, and the average length to maturity. The goal of the portfolio manager is to improve the performance of the portfolio by purchasing or selling bonds in the marketplace subject to some limiting constraints. The manager must consider multiple incomparable objectives s uch as maximizing the average ter1ninal yield and minimizing the average maturity time. Konno and Inori choose to optimize several objectives simultaneously by multiplying them together since the objectives do not share a common scale. Another example of a multiplicative programming problem, given in Maling, Mueller and Heller ( 1982), is a packaging problem encountered in designing very large scale integrated circuit (VLSI) chips and laying out building floor plans or manufacturing plant facilities. In the problem, the overall rectangular dimensions of the feasible layout 1

PAGE 9

2 plans are constrained rather than fixed. Different layout plans with differing overall rectangular dimensions are obtained according to how the components of a system are arranged within each plan. The objective is to find the arrangement of components that minimizes the overall layout area subject to certain constraints on the area and the perimeter of the layout. Henderson and Quandt (1971, p. 15) also give an application of multiplicative programming problems. Their example is from microeconomics. In their example, a rational consumer wishes to find a combination of two commodities to purchase from which he will derive the highest possible level of satisfaction. Budgetary constraints and the availability of the commodities limit the quantities the consumer may purchase The consumer's level of satisfaction is captured by his utility function, which is assumed to be the product of the quantities of the two commodities. The rational consumer's problem is then formulated as maximizing his utility function subject to the budgetary and commodity availability constraints. The multiplicative programming problem or, more briefly, the multiplicative program, may be formulated mathematically as /J minh(x)= Ilt 1 (x), s t.xe X, }= I where p 2 is an integer, X Rn, and, for each j = 1, 2, ... p, f 1 : X R satisfies f 1 (x) 0 for all x e X. For simplicity we will assume throughout this proposal that the minimum of problem (Px) is achieved at some point x* e X In addition we will assume that p is significantly less than n since this holds for virtually all applications of

PAGE 10

3 multiplicative programming problems. If f j (x) = 0 for some j E {1, 2, ... p} and some x E X, then clearly x is a global optimal solution. This condition can be checked by solving p minimization problems min {J j (x) x E X }, j = l 2, ... p Therefore we may assume without loss of generality that for each j = 1, 2, ... p, f j (x) > 0 holds for all XE X. The objective function h of problem (Px) is generally not a convex function As a result problem ( P x) belong s to a cla ss of nonconvex programming problems called global optimization problems In contrast to convex programming problem s, there may be many local minima for problem (Px) that are not globally optimal. Conventional local optimization method s based on gradients, subgradients, conjugate directions, or the Karush-Kuhn-Tucker conditions, for instance, are at best guaranteed only to fmd a local minimum. These methods mu s t then terminate, since there is neither a local criterion for certifying the global optimality of a given solution nor a way to deter1nine how to proceed to a better so lution if the solution is not globally optimal. From the perspective of computational complexity, problem (Px) is a difficult problem that is known to be NPhard even when the objective function is simply h(x) = x 1 x 2 and the feasible region X is a polyhedron (Matsui 1996 ). When in addition to the a ss umptions given previously for problem ( P x ), X is a convex set and for each j = l 2, ... p f j : X R is a concave function we obtain the concave case of problem ( P x ), called the concave multiplicative programming problem. The convex case of problem ( P x ), called the convex multiplicative programming

PAGE 11

4 problem, is obtained when, in addition to the assumptions made previously for problem (Px ), X is a convex set and, for each j = 1, 2, ... p, J i : X R is a convex function. A special linear case of problem (Px), called the linear multiplicative programming problem, is obtained when, in addition to the assumptions make previously for problem (Px ), X is a compact polyhedron and, for each j = 1, 2, ... p, Ji : X R is a linear function (Konno and Kuno 1992). 1.2. Reformulations of the Multiplicative Programming Problem During the 1990's there has been a resurgence of interest in problem (Px ). Encouraged by the rapid advances in high speed computing, researchers began developing and testing new methods for solving global optimization problems that arise in practical applications, including problem (Px ). Included among the global optimization methods used to solve problem (Px) for the special case when p = 2 are various parametric simplex method-based algorithms (e.g., Konno and Kuno 1992, Konno and Kuno 1995, Konno, Yajima, and Matsui 1991, and Schaible and Sodini 1995), branch and bound procedures (e.g., Kuno 1996 and Muu and Tam 1992), and various other types of algorithms ( e.g., Konno and Kuno 1990, Pardalos 1990, and Tuy and Tam 1992). When p > 2, globally solving problem (Px) has been shown empirically to require considerably more computational effort than when p = 2 (see, e.g., Ryoo and Sahinidis 1996). A smaller number of the algorithms for solving problem (Px) when

PAGE 12

5 p > 2 solve the problem directly without reforrnulating it as an outcome-space problem. Included among these, for instance, is the polyhedral annexation algorithm of Tuy (1991). Most of the algorithms for solving problem (Px) when p > 2, however, solve the problem indirectly by globally solving an outcome-space reformulation of the problem instead. This is because in practical applications p is routinely much smaller than n, often by two or more orders of magnitude. As a result, working in R P is computationally less challenging than working in R n Let y E R P denote the p-vector with )th entry equal to y j j = I, 2, ... p. For each j = 1 2, .. p, let y j E R satisfy where y j = +oo is po s sible, and let y E R P denote the vector with )th entry equal to y j )=1,2, .. ,p. Let J(x) denote the vector J(x)=l/i(x),J 2 (x), . ,JP(x)] r where f j : X R, j = 1, 2, . p, are the functions used in defining problem (P x ). Thoai (1991) and later Konno and Kuno (1995) based their outer approximation algorithms for respectively solving the convex and linear cases of problem (Px) on one of the more direct refo11nulations of problem (Px) as an outcome-space problem Their reformulation is given by min (I y j s.t. y E y s j= l where y s = {_y ER P J(x)~ y~ y for somexE X }.

PAGE 13

6 Falk and Paloc sa y (1994) based their branch and bound, image space algorithm for the linear case of problem ( P x) on another outcome-space refo11r1ulation that is closely related to problem ( P r ~ ). Their reforr11ulation is given by min [I yi, s .t. ye Y j = l where Y = { y e R P y = Cx forsomexE X J and C is a ( p x n ) matrix who s e row s are c~, j = 1, 2 ... p. 1.3. Purpose and Organization of the Dissertation This dissertation has two main purpo s es The fust is to develop and test a heuristic algorithm that finds a good solution, though not necessarily a globally optimal solution, for the linear case of problem ( P x ). The s econd purpose is to develop an exact global solution algorithm for the linear case of problem (Px) that is potentially more efficient than existing algorithms for this problem. Since the linear multiplicative programming problem is known to be an NP-hard multiextremal global optimization problem, it is inherently more difficult to globally solve than a convex programming problem of the sa me size. In s ome application cases, a solution will adequately meet the requirement s of a user; see, e g ., Konno and Inori (1989). In these cases the use of a heuristic algorithm seems to be appropriate for finding a satisfactory solution. To date however there is no known heuristic algorithm tailored to finding a good solution for the linear multiplicative programming problem In their review of algorithm s for solving problem ( P x ), Konno and Kuno (1995) do not mention

PAGE 14

7 any heuristic algorithms for problem (Px ) and our survey of the literature has revealed none. To develop the heuristic algorithm, we first analyze the concave multiplicative programming problem The analysis yields a new way to write a concave multiplicative programming problem as a concave minimization problem. As a result, a concave multiplicative programming problem can be solved by using any existing concave minimization algorithm without resorting to a reformulation of the problem We also show that some relationships exist between concave multiplicative programming problems and certain multiple-objective mathematical programs. These relationships are exploited to develop the heuristic algorithm for the linear case of problem (Px ). For cases where a linear multiplicative program must be solved for an exact global optimal solution we expect that globally solving the outcome-space refor1r1ulation (P s ) Y instead will result in a significant decrease in the computationa l effort over that required to directly solve the problem. This is because in typical applications of linear multiplicative programs, p is s everal orders of magnitude smaller than n As a result, working in R P should be computationally less challenging than working in R n To globally solve the outcome-space refor111ulation (PY ~ ) of a linear multiplicative program, we develop an outcome-space, pure cutting plane algorithm that works in RP. The framework for the algorithm is taken from a pure cutting plane, decision set-based concave minimization method developed by Horst and Tuy (1993). We show how to adapt this method to solving the reformulation (PY ~ ) of a linear multiplicative program for a global extreme point optimal solution. Once this global solution is found, we can

PAGE 15

8 recover a globally optimal solution for the linear multiplicative program in decision space. As a further computational enhancement, we also show that for purposes of implementation, the mechanics of the outcome-space, cutting-plane algorithm can be applied to the smaller problem (Py) instead of problem (PY ~ ). The organization of the proposal is as follows. In Chapter 2 we present a review of the literature on multiplicative programming problems. In Chapter 3 we analyze the concave multiplicative programming problem, apply the results to develop a heuristic algorithm for the linear multiplicative programming problem, and report test results using the heuristic algorithm on some randomly-generated problems. In Chapter 4 we analyze the reformulation problem (PY ~ ) and show that, under certain convexity assumptions on Y ~ problem (P Y ~ ) has a global extreme point optimal solution y* E f ~ We then present a procedure that is guaranteed to find a strict local optimal extreme point solution for the reformulation problem (Pr ~ ) of the linear multiplicative program. In Chapter 5 we present an outcome-space cutting-plane algorithm for globally solving a linear multiplicative program. The algorithm employs the strict local optimal search procedure presented in Chapter 4. We also illustrate the algorithm by applying it to the solution of a sample problem Finally, in Chapter 6, we give an overall summary and conclusions, and we discuss directions for further research.

PAGE 16

CHAPTER2 A REVIEW OF THE LITERATURE ON MULTIPLICATIVE PROGRAMMJNG PROBLEMS 2.1. Organization of the Literature Review In this chapter we present a review of the literature on methods proposed for solving multiplicative programming problems. The only known literature review on multiplicative programming problems appears in Konno and Kuno ( 1995 ) In their literature review Konno and Kuno defined multiplicative programming problems as ''a class of minimization problem s containing a product of several convex functions either in its objective function or in its c onstraints They included problems in which the objective function contained the summation of a convex function and the product of convex functions Konno and Kuno ( 1995 ) organized their literature review based on whether the problem data are linear or non] inear and on the number of functions that appear in the objective function They considered solution methods for the following multiplicative programming problems. The first multiplicative programming problem considered by Konno and Kuno is the special case of quadratic programming (LMP2) minf(x) = ((c 1 x)+d 1 )((c 2 ,x)+d 2 ), s.t. XE D where D : = { x E Rn Ax b, x 0} is a non-empty polytope (bounded polyhedron) in 9

PAGE 17

10 which A is an mxn matrix, bE R"', and, for each i = l, 2, c' E R n\ {o} and d; ER. In addition, it is assumed that, for each x E D, ( ci, x) + d; > 0, i = l, 2. The second multiplicative programming problem that they considered is the convex multiplicative programming problem (CMP) min f(x) = Ilt i (x), s.t.xE X, j=I where X Rn is a nonempty, compact, convex set and, for each j = 1, 2, ... p, Ji : Rn R is a convex function that satisfies J i (x) > 0 for all x e X. Konno and Kuno (1995) considered two special cases of problem (CMP): (1) the case where p = 2 and (2) the case where p 2 and the problem data are linear. The second case may be defined as the following extension of problem (LMP2): (LMP) minf(x)= Ii[(c;,x)+d ; ], s.t.xe D, i= l where p 2 is an integer and, for each i = 1, 2, .. p, ( ci, x) + d; > 0 holds for all x e D. Finally, Konno and Kuno (1995) considered three classes of problems related to problem (CMP) In the first class is the following problem: (GCMP) minf(x)=f 0 ( x)+ ft 2 i_ 1 (x)f 2 i (x), s.t.xE X, }=I where, for each j = 0, I, ... 2q, J i : R R is a convex function that satisfies J i (x) > 0 for all xe X. The second class is a special case of (GCMP) in which q = l and the problem data are linear. This class may be defined as the following extension of problem (LMP2) :

PAGE 18

11 (GLMP) minf(x ) =(c 0 ,x)+((c 1 ,x)+d 1 )((c 2 ,x)+d 2 ), s.t.xe D, where c 0 e R 11 and ci, d ;, i = 1, 2, and D are defined as in problem (LMP2). The third class of problems considered by Konno and Kuno (1995) is the minimization of a convex function over a feasible region that includes a product of convex functions in its constraint set. Konno and Kuno's coverage of the literature is not exhaustive. They focused on algorithms that have been demonstrated by computational experiments to be practical for reasonably large problems ( Konno and Kuno 1995, p. 370). Algorithms proposed by Konno, Kuno and their associates have been te s ted on randomly generated problems and the results reported. However, computational results have not been reported by most of the other researchers and therefore their methods were not included in the review Since the publication of the review by Konno and Kuno, two more multiplicative programming problems have been discussed in the literature The first problem adds a convex function to the objective of problem (LMP2) to obtain the problem: (CLMP) minf (x)=g(x )+( (c 1 ,x)+ d 1 )((c 2 ,x)+d 2 ), s.t.xe D, where g : R R is a twice differentiable convex function and ci, d ;, i = 1 2, and D are defined as in problem ( LMP2 ) The second problem adds a convex function to problem (C MP) to obtain the problem: (CC MP ) min f(x)=J 0 (x)+ ]]J i (x), s.t.xE X, j= I where / 0 : R R is a convex function that satisfies / 0 (x) > 0 for all x E X and f j, j = 1, 2, .. p, and X are defined as in problem (CMP).

PAGE 19

12 The emphasis of this review will be on optimization problems in which a product of functions appears in the objective function. Optimization problems with objective functions that are comprised of a summation of a function and the product of functions are also included in the review. Methods proposed for solving these problems may be adapted to solve a problem whose objective function is strictly a product of functions by setting the added function to the null function. The functions that appear in the objective function will be either convex or linear functions since to date these are the only multiplicative programming problems to appear in the literature. In this review we will not consider optimization problems in which a product of functions appears in the constraint set. Like the review of Konno and Kuno (1995), this literature review is organized based on whether the problem data are linear or nonlinear and on the number of functions that appear in the objective function. It is divided into the following four sections. Section 2.2 reviews the methods proposed to solve problems (LMP2), (GLMP), and (CLMP). Section 2.3 reviews the methods to solve problem (LMP) that are extensions of methods for problem (LMP2). Section 2.4 reviews the methods to solve problems (CMP), (GCMP), and (CCMP). Section 2.5 reviews the methods to solve problem (LMP) as a concave minimization problem. The rationale for organizing the literature review in this way is as follows. Historically, the first algorithms for solving multiplicative programming problems were specifically proposed for solving problem (LMP2). Problems (GLMP) and (CLMP) are grouped with problem (LMP2) since they were conceived as extensions of that problem. Several of the algorithms proposed for solving problem (LMP2) can be extended to solve

PAGE 20

13 the problem (LMP), since they do not depend upon having only two functions in the product term of the objective function. Problems (LMP2), (LM P ), (GLMP), and (CLMP) contain linear functions and polyhedral feasible regions. Algorithms for solving these problems are implemented with the aid of the simplex method, which is used to solve linear programming subprob l ems. The problems (CMP), (GCMP), and (CCMP) contain nonlinear data and must rely on other optimization methods to solve nonlinear convex programming problems. The latter three problems are therefore placed in a separate group. Problems (GCMP) and (CCMP) are included in the group with problem (CMP) because only one article addresses each problem, and they were conceived as extensions of problem (CMP). Finally, two articles appeared in the literature that proposed solving problem (LMP) as a concave minimization problem using techniques that the authors had previously developed. Table 2.1 gives a summary of the multiplicative programming problems considered in this literature review along with the assumptions placed on the feasible region and the objective function of each problem. 2.2. Methods to Solve Problems (LMP2), (GLMP), and (CLMP) The methods for so lvin g problem (LMP2), (GLMP), and (CLMP) are further divided into four categories. In the first category are those methods that analyze problem (LMP2) as a special case of quadratic programming. In the second category are algorithms that analyze problem (LMP2) by searching the outcome set. In the third category are the algorithms that solve an easier parametric programming problem rather than directly solving problems (LMP2), (GLMP), and (CLMP). In the last category are

PAGE 21

Table 2.1. Summary of Multiplicative Program Types and Assumptions on Problems Problem Assumptions on the Objective Function Assumptions on the Objective Function Feasible Region LMP2 D is a bounded polyhedron. (c 1 ,x)+d 1 (c 2 ,x)+d 2 ( c;, x) + di > 0, i = 1, 2 for all x E D GLMP D is a bounded polyhedron. (c 0 ,x)+((c 1 ,x)+d 1 )((c 2 ,x)+d 2 ) ( c O x) > 0 and ( c i x) + d ; > 0 i = 1, 2 for all XE D. CLMP D is a bounded polyhedron. g(x )+ (( c 1 x) + d 1 )(( c 2 x) + d 2 ) g: R n R is a twice differentiable convex function and ( ci, x) + d i > 0, i = 1, 2, for all XE D. LMP D is a bounded polyhedron. 11 [(C i X) + d;] (c ; ,x)+d ; >O, i=l,2, ... ,p, for all xE D. i = I ll1 j(x) For each j = 1, 2 ... p, Ji : R 11 R is a CMP X is a compact convex set. convex function that satisfies f i (x) > 0 for all j=I XE X. fo (x) + f f 2 j-t (x )f2i (x) For each j = 0, 1, ... p, f i : R n R is a GCMP X i s a compact convex set. con vex function that satisfies f i (x) > 0 for all j=I XE X. fo(x)+ 111 i (x) For each j = 0, 1, ... p, Ji : R 11 R is a CCMP X is a compact convex set. convex function that satisfies f i (x) > 0 for all j=I XE X.

PAGE 22

15 two algorithms th a t s olve problem (LMP2 ) based on the method of polyhedral annexation. 2.2.1. Methods Based on Quadratic Programming Since the objective fun c tion of problem ( LMP2 ) can be expressed as f(x) = ( (c' ,x) + d )( ( c 2 ,x ) + d )= r Qx + r r x + d 1 d 2 where r E R n and Q i s a real s ymmetric n x n matrix, problem (LMP2) i s a special class of quadratic pro g ramming Swarup ( 1966a and 1966b ) was the first researcher to analyze problem (LMP2 ) in thi s way but he did not propo s e any exact solution algorithms. His two articles are included in the literature review for completeness Pardalo s ( 1990 ) also analyzed problem (LMP2 ) in this way and he proposed an exact global solution algorithm Swarup ( 1966a ) showed that if both linear functions ( c ;, x ) + d ;, i = 1, 2 are positive over the feasible regi o n D the objective function f is quasiconcave over D It i s well known that generally for any local minimizer of a quasiconcave function over a polytope, there exi s t s an extreme point local minimizer over the polytope that has the s ame function value Swarup proposed a s implex based method for finding such a local optimal solution. The key to the algorithm is a test that deterrr1ines if entering a given nonbasic variable into the current simplex ba s i s will lower the objective function value. A simplex basis of a local optimal s olution can be reached by beginning at any feasible basis and moving through a s equence of simplex tableaux by pivoting in qualifying nonba s ic variable s until none r e main Once a local optimal solution is found, the


No information is available either to certify the global optimality of the solution or to determine how to proceed to an improved solution.

In another work, Swarup (1966b) formulated the following parametric linear program by introducing an auxiliary variable ξ and moving one of the linear functions into the constraint set:

(MP1) min F(x; ξ) = (c^1, x) + d_1,
      s.t. x ∈ D,
           (c^2, x) + d_2 = ξ.

Since (c^2, x) + d_2 appears in the constraint set, dual pricing information is available to determine the value of (c^1, x) + d_1 as ξ is set to achievable values of (c^2, x) + d_2 over D. Swarup derived a test that uses this information to determine when ξ is set to a level that corresponds to a local optimal solution. All local optimal solutions can then theoretically be found by parametrically solving problem (MP1) over all achievable values of ξ. A global optimal solution x* of problem (LMP2) can then be found by identifying a global solution (x*, ξ*) of problem (MP1).

Pardalos (1990) observed that if c^1 and c^2 are linearly independent, then the Hessian matrix Q of the objective function of problem (LMP2) has one positive eigenvalue and one negative eigenvalue, and the remaining eigenvalues are equal to zero. By applying the spectral decomposition theorem of linear algebra, the objective function can be rewritten in terms of two variables. The problem can then be solved by examining the vertices of an orthogonal projection of the feasible region D into a two-dimensional polytope in the space of the two variables used in the rewritten objective function.


Pardalos (1990) proposed an algorithm that enumerates all vertices of the two-dimensional polytope until an optimal vertex is found. The algorithm may require an exponential number of steps, but its average computational time complexity is bounded by a polynomial.

2.2.2. Methods Based on Searching the Outcome Set

The objective function of problem (LMP2) can be expressed as the composite ψ(φ(x)), where φ: R^n → R^2 is defined by φ(x) = ((c^1, x) + d_1, (c^2, x) + d_2) and ψ(y) = y_1 y_2. The image Y = φ(D) of the feasible region D under φ is called the outcome set of problem (LMP2), and several algorithms search Y, rather than D, for a global optimal solution. Some of these algorithms use outer approximation, a global optimization technique that builds a sequence of increasingly tighter approximations of the feasible region.

The approximations are used in a series of optimization problems that are easier to solve than the original problem. These optimization problems are sequentially solved until a global optimal solution to the original problem is found. The technique has been very useful in solving global optimization problems in which the feasible region Z is a polytope and the global optimal solution is known to be an extreme point of Z.

In this form of outer approximation, the algorithm begins by finding a simple polytope P_0 ⊃ Z with an easily defined inequality representation and an easily calculated set of vertices. A series of algorithmic iterations follows that builds a sequence of decreasing polytopes P_0 ⊃ P_1 ⊃ ... ⊃ Z, in which one polytope is generated in each iteration. In an iteration k of the algorithm, the original objective function is evaluated at the extreme points of P_k to find an optimal solution v^k. If v^k is an extreme point of Z, then v^k is a global optimal solution to the original problem. Otherwise, a portion of P_k \ Z is cut off to form P_{k+1}. The point v^k is part of the region cut off; i.e., v^k is not included in the polytope P_{k+1}. The cut is made by adding a constraint, called a cutting plane constraint, to the constraint set that defines P_k. The cutting plane constraint adds additional vertices to P_{k+1} that were not present in P_k, and therefore they must be calculated.

Aneja, Aggarwal, and Nair (1984) proposed an algorithm that examines the solutions associated with the bicriterion programming problem:

(BCP) VMIN (y_1 = (c^1, x) + d_1, y_2 = (c^2, x) + d_2),
      s.t. x ∈ D.


The intent of problem (BCP) is to simultaneously minimize the two criterion functions y_1 and y_2. Conflicts usually exist between the two criterion functions that prevent a single point of D from simultaneously minimizing both functions. The usual notion of an optimal solution used in single objective linear programming is replaced by the concept of efficient solutions when discussing the solutions of problem (BCP). A solution x̄ is an efficient solution of problem (BCP) if x̄ ∈ D and, whenever (c^i, x) + d_i ≤ (c^i, x̄) + d_i for each i = 1, 2 for some x ∈ D, then (c^i, x) + d_i = (c^i, x̄) + d_i, i = 1, 2. The set of efficient points of D is mapped by φ onto the efficient frontier of the outcome set Y. The algorithm searches the extreme points of the efficient frontier by covering it with triangles in outcome space; whenever a new efficient extreme point is found within a triangle, the triangle is subdivided into smaller triangles, and the algorithm is then repeated using each of the smaller triangles.



The algorithm terminates when there are no more extreme points of the efficient frontier that need to be searched. In the algorithm of Aneja, Aggarwal, and Nair (1984), a new vertex must be calculated for each triangle. This is easily done by solving two systems of two equations in the unknowns y_1 and y_2. This special technique, however, cannot be easily extended to handle cases where p > 2.

Falk and Palocsay (1994) also proposed a solution algorithm that searches among the extreme points of Y using a modified outer approximation technique. In the first phase of the algorithm, the two linear programs

l_1 = min {(c^1, x) + d_1 | x ∈ D} and l_2 = min {(c^2, x) + d_2 | x ∈ D}

are solved for optimal solutions x^1 and x^2, respectively. Two initial vertices y^1 and y^2 of Y are then obtained by evaluating both linear functions at x^1 and x^2. An initial polytope in outcome space containing an optimal solution for the problem

(YP) min ∏_{i=1}^2 y_i, s.t. y ∈ Y,

is then constructed, with a_1 and a_2 chosen so that the line a_1 y_1 + a_2 y_2 = 1 passes through the point y^k = argmin {y_1^i y_2^i | i = 1, 2}. In each iteration of the algorithm, values for a_1 and a_2 are updated, and a linear program of the form

(YLP) min a_1 y_1 + a_2 y_2, s.t. y ∈ Y,

is solved to remove portions of the initial polytope from the search for an optimal solution for problem (YP).


The new vertices generated at each iteration are easily calculated since the isovalue contours of problem (YLP) are linear. The algorithm terminates when the optimal value of problem (YLP) is one.

The algorithm proposed by Thoai (1991) for solving problem (LMP2) uses an outer approximation technique that begins by enclosing the outcome set Y in a rectangle P_0. In an iteration k of the algorithm, the extreme point (v_1, v_2) of the outer approximation that yields the lowest value of the product y_1 y_2 is found. A linear program is then used to determine if the extreme point (v_1, v_2) maps to a feasible point x of D. If not, information is obtained from the linear program to generate a cutting plane constraint that slices off the extreme point (v_1, v_2) from the polytope P_k. The new vertices generated by the cut are then calculated using a conventional approach (see Horst, Pardalos, and Thoai 1995 or Horst and Tuy 1993). Since the method of determining these new vertices is not dependent on the fact that the dimension of the outcome set is two, Thoai's algorithm can be extended to handle cases where p > 2.

In the algorithms of Aneja, Aggarwal, and Nair (1984) and Thoai (1991), the only variations in the linear programs used in successive iterations involve changes in objective function coefficients. The authors gain some computational efficiency by restarting the simplex method at the optimal solution of the previous iteration. Only a few simplex pivots are then generally needed to produce a new optimal solution.
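The mechanics of the first phase of these outcome-space searches are easy to see in code. The sketch below is a minimal illustration, not a reconstruction of any of the implementations above: the problem data (A, b, c1, d1, c2, d2) are hypothetical, and scipy's linprog stands in for the simplex routines the authors used. Minimizing each linear factor over D yields two outcome-set vertices, and the search is steered toward the one with the smaller product.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for D = {x >= 0 : Ax <= b} and the two linear factors
# y_1 = (c1, x) + d1 and y_2 = (c2, x) + d2 of problem (LMP2).
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
c1, d1 = np.array([1.0, 1.0]), 1.0
c2, d2 = np.array([2.0, 0.5]), 1.0

def outcome(x):
    # Map a decision vector x in D to its outcome vector y = (y_1, y_2).
    return np.array([c1 @ x + d1, c2 @ x + d2])

# First phase: minimize each linear factor separately over D.
res1 = linprog(c1, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
res2 = linprog(c2, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

y1, y2 = outcome(res1.x), outcome(res2.x)
# The initial incumbent is the outcome vertex with the smaller product.
incumbent = min((y1, y2), key=np.prod)
print(y1, y2, incumbent)
```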


2.2.3. Methods Based on Solving a Parametric Master Problem

The difficulty in solving problem (LMP2) is caused by the product form of the objective function. Konno and Kuno (1992) added a parameter ξ and formed the following problem, which they called the master problem:

(MP2) min F(x; ξ) = ξ((c^1, x) + d_1) + ξ^{-1}((c^2, x) + d_2),
      s.t. x ∈ D, ξ > 0.

Notice that for a fixed value ξ' of ξ, problem (MP2) is a linear programming problem. To solve problem (MP2), Konno and Kuno proposed using a parametric objective function simplex method to find the critical values of ξ at which new bases become optimal. The values of the objective function F are then evaluated at these bases. A global optimal solution (x*, ξ*) of problem (MP2) is found by choosing the basis that minimizes F over these values. Konno and Kuno (1992) showed that if (x*, ξ*) is an optimal solution of problem (MP2), then x* is a global optimal solution of problem (LMP2).

Konno and Kuno tested this algorithm on randomly generated problems (LMP2) with nonnegative problem data that ranged in size from (m, n) = (30, 50) to (220, 200). Their computational experiments showed that the amount of computational time needed to solve problem (LMP2) is not much different from that required to solve linear programs of the same size.

In Konno and Kuno (1995), the authors slightly simplified the above parametric method by redefining the auxiliary parameter so that convex combinations of the two linear functions are used in the objective function of problem (MP2). This modification makes it easier to find critical parameter values, since the interval [0, 1] over which the auxiliary parameter ranges is bounded.


The rest of the method remained the same.

Although Konno and Kuno (1992) did not explicitly say it, their algorithm can be viewed as searching the efficient extreme points of problem (BCP) for one that is a global optimal solution of problem (LMP2). Notice that for a sufficiently small value ξ', an extreme point optimal solution (x', ξ') to problem (MP2) coincides with an optimal solution x' of the linear program min {(c^2, x) + d_2 | x ∈ D}. Similarly, for a sufficiently large value ξ'', an extreme point optimal solution (x'', ξ'') coincides with an optimal solution x'' of the linear program min {(c^1, x) + d_1 | x ∈ D}. For any fixed value ξ > 0, the objective function F(x; ξ) is a composite objective function formed by multiplying the two linear functions by positive values and summing the result. It is well known that any extreme point minimizer of such a composite objective function over the feasible region D is an efficient extreme point of the problem (BCP) (Steuer 1986). The efficient extreme points of problem (BCP) are found by solving linear programs for parameter values between ξ' and ξ''. As Aneja, Aggarwal, and Nair (1984) have shown, the global solution lies at an efficient extreme point of D in problem (BCP).

A disadvantage of the algorithm of Konno and Kuno is that it may require many pivots to solve problem (MP2) for all possible parameter values. This will especially be true if there is a great conflict between the two linear functions of the objective function. If, for example, c^2 = -c^1, then every extreme point of D is an efficient extreme point of problem (BCP). Since the size of the set of extreme points of the polytope D grows exponentially with the size of D, the number of optimal solutions to problem (MP2) over the entire range of parameter values also grows exponentially with the size of D and is not bounded by a polynomial.


Konno and Kuno in fact observed that the computational time increased as the number of local minima increased. An additional disadvantage of the Konno and Kuno algorithm is that many of the pivots performed will be unnecessary when they are to bases that do not improve on a previously found solution.

In another paper, Konno and Kuno (1990) added a convex function to the objective function of problem (LMP2) to obtain the problem (CLMP). With this addition, the objective function may no longer be quasiconcave and, therefore, the global minimum may not necessarily be attained at an extreme point of the feasible region D. To solve problem (CLMP), Konno and Kuno (1990) proposed an algorithm that solves a parametric master problem which, for a fixed parameter value, is a nonlinear convex programming problem. The algorithm involves solving this master problem a finite number of times, once for each of a finite number of prechosen values for the parameter. A troublesome aspect of the algorithm is that it is difficult to determine the proper parameter values to choose. The authors suggested choosing values for the parameter that are equally spaced in the interval of possible parameter values and solving the resulting master problems to determine a neighborhood containing a globally optimal solution to problem (CLMP). A local search is then done in that neighborhood for a globally optimal solution using the Karush-Kuhn-Tucker conditions. Care must be taken, however, to define the spacing between the points to be small enough so that a global optimal solution is not missed.
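The parametric master-problem idea is simple to imitate numerically. The sketch below is a rough stand-in for the exact methods above: instead of the parametric simplex analysis of Konno and Kuno, or the equally spaced values of their 1990 scheme, it solves the linear program obtained from (MP2) for each value of ξ on a coarse grid and keeps the point with the best product value. All data are hypothetical, and a grid of course inherits exactly the spacing difficulty just discussed.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for D = {x >= 0 : Ax <= b} and the two linear factors.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
c1, d1 = np.array([1.0, 1.0]), 1.0
c2, d2 = np.array([2.0, 0.5]), 1.0

best_val, best_x = np.inf, None
# For each fixed xi > 0, (MP2) is an ordinary linear program in x; the
# constants d1 and d2 do not affect the LP solution and are dropped.
for xi in np.geomspace(0.01, 100.0, 200):
    res = linprog(xi * c1 + (1.0 / xi) * c2, A_ub=A, b_ub=b,
                  bounds=[(0, None)] * 2)
    if res.success:
        val = (c1 @ res.x + d1) * (c2 @ res.x + d2)  # original objective
        if val < best_val:
            best_val, best_x = val, res.x

print(best_x, best_val)
```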


The difficulty that Konno and Kuno (1990) encountered in their method in determining parameter values can be eliminated if we assume that the convex function g in the objective function of problem (CLMP) is a linear function. Problem (GLMP) is obtained by making this replacement. Konno, Yajima, and Matsui (1991) considered problem (GLMP), but they assumed that d_1 and d_2 are zero. To solve problem (GLMP), Konno, Yajima, and Matsui formulated the master problem

(MP3) min F(x; ξ) = (c^0, x) + ξ(c^2, x),
      s.t. x ∈ D, (c^1, x) = ξ.

Notice that the parameter appears in both the objective function and in a right-hand side of a constraint. Konno, Yajima, and Matsui (1991) showed that x* is a global solution of problem (GLMP) if (x*, ξ*) is an optimal solution of problem (MP3). Schaible and Sodini (1995) used problem (MP3) to show that a global optimal solution of problem (GLMP) lies on an edge of D.

Konno, Yajima, and Matsui (1991) proposed a parametric simplex algorithm that includes a right-hand side analysis and an objective function analysis to determine intervals of parameter values for which bases remain both feasible and optimal. The parametric analysis sweeps through parameter values from ξ_min = min {(c^1, x) | x ∈ D} to ξ_max = max {(c^1, x) | x ∈ D}. The objective function F is then minimized over each of the intervals.


Konno, Yajima, and Matsui (1991) tested their algorithm on randomly generated problems of up to 350 constraints and 300 variables. They found that the problems can be solved in much the same computational time as that of solving linear programs of equal size.

The algorithm of Konno, Yajima, and Matsui (1991) suffers from the same disadvantages as the algorithm of Konno and Kuno (1992). In particular, its efficiency depends on the number of pivots performed to solve problem (MP3) for all possible parameter values. Also, many of the pivots performed will be unnecessary when they yield bases that do not improve on a previously found solution.

Schaible and Sodini (1995) improved the algorithm of Konno, Yajima, and Matsui (1991). From a given simplex tableau for problem (MP3), Schaible and Sodini used parametric analysis to derive a formula that calculates the value of the objective function F as the constraint (c^1, x) = ξ' is set to increasing values of ξ'. As ξ' increases, parametric right-hand-side analysis calculates new values for the basic variables. Schaible and Sodini then derived some optimality conditions that detect when the parameter ξ' is set to a value such that, from an optimal solution (x', ξ') of problem (MP3), one obtains a local minimum x' of problem (GLMP). By applying these optimality conditions, Schaible and Sodini were able to develop a simplex-based algorithm that solves problem (MP3) in a finite number of primal and/or dual simplex iterations.

The algorithm proposed by Schaible and Sodini (1995) has three advantages over the algorithm of Konno, Yajima, and Matsui (1991): (1) It may terminate before the maximum possible parameter value ξ_max has been reached. (2) It is more efficient in that it may skip over local optimal solutions that do not improve the objective function value. (3) It can be used even when the feasible region is unbounded, and it can detect when problem (GLMP) is unbounded from below.


Muu and Tam (1992) also considered problem (CLMP), but in their work, the feasible region D is relaxed to a compact convex set. They seem to be the only researchers to have considered this generalization of problem (CLMP). The authors, however, tested their algorithm using a polytope for the feasible region. Muu and Tam (1992) formulated the parametric master problem

(MP3') min F(x; ξ) = g(x) + ξ((c^2, x) + d_2),
       s.t. x ∈ D, (c^1, x) + d_1 = ξ, ξ ≥ 0.

They proposed a branch and bound algorithm to solve problem (MP3'). Branch and bound is a technique commonly used by algorithms in global optimization. Branching refers to the successive partitioning of the feasible region, and bounding refers to the computation of lower and upper bounds on the global optimum over the partitions. Partitions of the feasible region that produce a lower bound on the objective function that exceeds the best upper bound found so far by the algorithm are eliminated from further consideration. Such partitions are said to be fathomed. A branch and bound algorithm terminates when all of the partitions have been fathomed.

In the algorithm of Muu and Tam (1992), partitions of the feasible region are constructed by restricting the value of (c^1, x) + d_1 to values within an interval. The algorithm begins by finding an interval I_0 := [ξ_1, ξ_2] of achievable values of (c^1, x) + d_1 by solving the two convex programs ξ_1 := min {(c^1, x) + d_1 | x ∈ D} and ξ_2 := max {(c^1, x) + d_1 | x ∈ D}.


Optimal solutions u^0 and v^0 are then obtained for the two convex programs. A lower bound β(I_0) over the interval I_0 of the objective function F of problem (MP3') is found by selecting β(I_0) := min {β(ξ_1), β(ξ_2)}. An upper bound α_0 on F is obtained by selecting α_0 := min {f(u^0), f(v^0)}. The interval I_0 is next bisected and the procedure repeated using the two subintervals. A subinterval that produces a lower bound that exceeds the current upper bound is eliminated from further consideration; i.e., that subinterval is considered to be fathomed. The procedure continues bisecting intervals I_k, generating a sequence of solutions {x^k}_{k=1}^∞ that converges to a limit point x* that is a global optimal solution. Computational experiments on problems up to (m, n) = (30, 200) showed that the algorithm is very efficient when the problem data c and d are positive.

2.2.4. Methods Based on Polyhedral Annexation

A limitation of conventional optimization methods is that they can become trapped at a local minimum, or even a stationary point, if they are applied to a global optimization problem; see, e.g., the algorithms proposed by Swarup (1966a, 1966b). The central problem of a global optimization method then is to overcome this limitation by providing a certification test for global optimality and, if a point is not globally optimal, determining how to move to a better solution. Tuy (1991) called this the subproblem of ''transcending the incumbent,'' where the incumbent is the best feasible solution found so far by an algorithm.


Let f be the objective function for problem (LMP2), and let x̄ be a vertex of D that represents the incumbent solution for this problem. Then, from Tuy (1991), to transcend the incumbent, one must find a point x ∈ D such that f(x) < f(x̄), or else establish that no such point exists, i.e., that x̄ is a global optimal solution for problem (LMP2). Let G := {x ∈ Ω | f(x) ≥ f(x̄)}, where Ω is a convex set containing D. The problem of transcending the incumbent can then be restated as the following problem:

(GCP) Check if D ⊆ G and, if not, find a point x ∈ D \ G.

Problem (GCP) is known as the Geometric Complementary Problem. Tuy (1990) developed the method of polyhedral annexation to solve problem (GCP). In polyhedral annexation, a sequence of polytopes P_1 ⊂ P_2 ⊂ ... ⊂ P_k ⊂ ... is built by adding a vertex to the polytope P_{k-1} of the previous iteration in such a way that a vertex of D is annexed into the new polytope P_k. The sequence P_1 ∩ D, P_2 ∩ D, ... forms an expanding inner approximation of D. When a polytope P_h ⊇ D is found, all of the extreme points of D have been searched and the algorithm terminates. Associated with the sequence of polytopes P_1 ⊂ P_2 ⊂ ... ⊂ P_k ⊂ ... is the sequence of their polars P_1* ⊃ P_2* ⊃ ... ⊃ P_k* ⊃ ..., where the polar E* of a convex set E in R^n is defined as E* := {y ∈ R^n | (y, x) ≤ 1 for all x ∈ E}. A dual correspondence exists between the facets of a polytope P_k and the vertices of its polar P_k*. The subproblem of determining the inequality representation of P_k after a new vertex has been added can then be solved by solving the easier problem of computing the vertices of P_k*.


The termination condition P_h ⊇ D has the corresponding dual condition P_h* ⊆ D*. For a more detailed description of polyhedral annexation, see the chapters on inner approximation in Horst, Pardalos, and Thoai (1995) or in Horst and Tuy (1993).

Tuy and Tam (1992) proposed two algorithms that are derived using the polyhedral annexation method with a dualization and dimension reduction technique developed by Tuy (1991). Dualization refers to solving the original problem by solving the dual problem of generating a sequence of polars until a polar P_h* ⊆ D* is found. The key to the dimension reduction technique is the introduction of a cone into problem (GCP). Tuy and Tam (1992) assumed that c^1 and c^2 are linearly independent vectors and then formed the cone K := {x ∈ R^n | (c^i, x) ≥ 0, i = 1, 2}. Cone K is of interest since, if x̄ ∈ D is an incumbent solution, then for any x ∈ (x̄ + K), f(x) ≥ f(x̄). In other words, cone K identifies points in R^n that can do no better than the incumbent solution x̄. Computational effort might be saved using cone K, since a part of the feasible region D can be eliminated from further consideration and the search narrowed to the remaining portion of D.

The first algorithm proposed by Tuy and Tam (1992) solves problem (LMP2) by solving problem (GCP) through the dualization process of generating a sequence of polars until a polar P_h* ⊆ D* is found. Tuy and Tam (1992) showed that the polar K* of cone K is explicitly given as K* = {y ∈ R^n | y = -t_1 c^1 - t_2 c^2 for some t_1 ≥ 0, t_2 ≥ 0}. Any vertex y in a polar P_h* lies in the polar cone K*, and the multipliers t_1 and t_2 used to express y are unique, since c^1 and c^2 are linearly independent vectors.


Polar cone K* is used to solve the dual problem by building a collapsing sequence of polars P_1* ⊃ P_2* ⊃ ... ⊃ P_k* ⊃ ..., with each polar being an improved approximation of D*. The search is conducted in the two-dimensional space generated by c^1 and c^2 rather than in the original n-dimensional space. Solving the linear program

(LP(t)) max {-t_1 (c^1, x) - t_2 (c^2, x) | x ∈ D},

where t_1 and t_2 are the multipliers used to express some vertex y = -t_1 c^1 - t_2 c^2 of P_h*, tests for the termination condition P_h* ⊆ D*.

The second algorithm proposed by Tuy and Tam (1992) is motivated by the observation that, for a fixed value of t = (t_1, t_2), problem (LP(t)) is equivalent to the linear program

(LP(α)) max {-(c^1 + α(c^2 - c^1), x) | x ∈ D},

where α = t_2 / (t_2 + t_1) ∈ [0, 1]. The first algorithm thus reduces to solving a sequence of linear programs (LP(α)) for different values of the parameter α. The second algorithm proposed by Tuy and Tam (1992) is to parametrically solve problem (LP(α)) for all of the critical values of α at which new bases become optimal. The objective function f of problem (LMP2) is evaluated at each basis, and a global optimal solution is chosen from those bases. The second algorithm of Tuy and Tam (1992) is thus essentially the same as the parametric approach to problem (MP2) used by Konno and Kuno (1992).
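The second algorithm admits the same kind of rough numerical imitation as (MP2). Since LP(α) has the same optimal solutions as minimizing the convex combination (1 - α)(c^1, x) + α(c^2, x) over D, a crude sketch (hypothetical data again, with a uniform grid in place of the exact critical-value analysis) is:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data; f(x) = ((c1, x) + d1)((c2, x) + d2) over D.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
c1, d1 = np.array([1.0, 1.0]), 1.0
c2, d2 = np.array([2.0, 0.5]), 1.0

best_val, best_x = np.inf, None
# Each LP(alpha) yields an efficient extreme point of D; evaluate f at
# each one and keep the best, mimicking the parametric sweep over alpha.
for alpha in np.linspace(0.0, 1.0, 101):
    res = linprog((1 - alpha) * c1 + alpha * c2, A_ub=A, b_ub=b,
                  bounds=[(0, None)] * 2)
    if res.success:
        val = (c1 @ res.x + d1) * (c2 @ res.x + d2)
        if val < best_val:
            best_val, best_x = val, res.x

print(best_x, best_val)
```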


Tuy and Tam (1992) ran computational experiments using both the first polyhedral annexation algorithm and the second parametric algorithm. Their results showed that, for solving problem (LMP2), the parametric algorithm performed better than the polyhedral annexation algorithm. The polyhedral annexation algorithm is not as efficient because more simplex pivots were required than for the parametric algorithm. Tuy and Tam (1992) proposed an improved variant of the polyhedral annexation algorithm that reduces the number of pivots and the number of objective function evaluations. The authors observed that the improved algorithm may potentially be more useful for a problem with an objective function that is difficult to evaluate. The computational experiments run using the parametric algorithm on problems of up to (m, n) = (30, 200) and positive problem data were in line with the results reported in Konno and Kuno (1992).

2.3. Extensions of Algorithms for Problem (LMP2) to Solve Problem (LMP) when p ≥ 3

The polyhedral annexation method of Tuy and Tam (1992) and the outcome-space algorithms of Thoai (1991) and Falk and Palocsay (1994) can be extended to the more general problem (LMP) where p ≥ 3. Although the algorithms remain unchanged, the subproblem of determining the new vertices becomes more difficult as the number of function terms in the objective function increases.

2.4. Methods to Solve Problems (CMP), (GCMP), and (CCMP)

Relatively little work has been done in designing exact global solution algorithms that address problems (CMP), (GCMP), and (CCMP). The algorithms that have been proposed fall into two categories: (1) methods based on solving a reformulated problem and (2) a method based on outer approximation.


2.4.1. Methods Based on Solving a Reformulated Problem

Konno and Kuno (1992) introduced problem (CMP) where p = 2 and formulated a master problem by introducing a parameter into the original problem to separate the two functions of the objective function into a summation. This technique of embedding the original problem into a problem in a higher dimensional space is similar to the one used by the authors in the same paper to solve problem (LMP2). At the time, Konno and Kuno were not able to give an algorithm for solving the master problem. In Kuno and Konno (1991), the authors proposed a branch and bound algorithm along with an underestimation function to solve it. Computational results for problems of up to (m, n) = (200, 180) indicated that the algorithm is efficient when the objective function is the product of a linear function and a quadratic function and the feasible region is a polytope.

Kuno, Yajima, and Konno (1993) extended the parameterization technique of Kuno and Konno (1991) for problem (CMP) to handle cases where p ≥ 2. They showed that a global optimal solution to problem (CMP) can be obtained by solving the equivalent problem

(MP4) min ∑_{j=1}^p ξ_j f_j(x), s.t. x ∈ X, ξ ∈ Ξ,

where Ξ = {ξ ∈ R^p | ∏_{j=1}^p ξ_j ≥ 1, ξ ≥ 0}. For a fixed ξ ∈ Ξ, let x*(ξ) denote an optimal solution of min_{x∈X} G(x; ξ) = ∑_{j=1}^p ξ_j f_j(x). Let h: Ξ → R be defined by h(ξ) := G(x*(ξ); ξ) for any ξ ∈ Ξ.


Solving problem (MP4) then reduces to solving the problem in R^p given by

(MP4') min h(ξ), s.t. ξ ∈ Ξ.

Kuno, Yajima, and Konno (1993) showed that h is a concave function over Ξ, and therefore a global optimal solution of problem (MP4') exists on the boundary of Ξ. They proposed an outer approximation method for solving problem (MP4') and tested their algorithm against two subclasses of problem (CMP): (1) problem (LMP) and (2) problems similar to those tested in Kuno and Konno (1991), in which the objective function is the product of a linear and a quadratic function and the constraints are linear inequalities. Computational experiments showed that the total computational time is dominated by that needed for solving the convex minimization master problems for each parameter value. The results also showed that the number of cuts and vertices generated increases rapidly as p increases from 2 to 5. The authors asserted that this was due to inefficiencies in computing new vertices, especially when p exceeds 5. However, if p is held constant, these numbers increased very slowly as the number of constraints and variables increased. The authors concluded that their algorithm is reasonably efficient when p is less than 4.

Jaumard, Meyer, and Tuy (1997) added a convex function to the objective function of problem (CMP) to form problem (CCMP). The authors showed that problem (CCMP) can be reduced to a quasiconcave minimization problem in R^p that is a generalization of problem (MP4') used by Kuno, Yajima, and Konno (1993). In the special case where f_0 ≡ 0 in problem (CCMP), the reduced quasiconcave minimization problem in Jaumard, Meyer, and Tuy (1997) can be shown to be equivalent to the one used by Kuno, Yajima, and Konno (1993).


Jaumard, Meyer, and Tuy (1997) find a global solution of problem (CCMP) by finding an optimal solution to the quasiconcave minimization problem in R^p using a conical branch and bound method. They ran computational experiments using their algorithm on test problems similar to those used by Kuno, Yajima, and Konno (1993) and Thoai (1991). The authors report that their results are very sensitive to the magnitude of p and not as sensitive to the size (m, n) of the constraint matrix.

Sniedovich and Findlay (1995) analyzed problem (CMP) from the perspective of c-programming but did not give a complete algorithm for solving it. C-programming is a technique developed by Sniedovich (1984) for solving an optimization problem of the form

(CP) q* := min ψ(φ(x)), s.t. x ∈ X,

where X is some nonempty set, φ is a function from X into R^p, and ψ is a real-valued function defined on a subset of R^p.



For problem (CMP), the objective function can be expressed as the composite ψ(φ(x)), where φ(x) = (f_1(x), f_2(x), ..., f_p(x)) and ψ(y) = ∏_{j=1}^p y_j is defined on the set {y ∈ R^p | y > 0}. Since problem (CMP) satisfies the requirements of c-programming, it can be solved by solving the parametric problem

(MP5') q(ξ) := min_{x∈X} ∑_{i=1}^p ξ_i f_i(x), ξ ∈ S,

where S is any subset of R^p such that ∇ψ(φ(x)) ∈ S for every x ∈ X.

A master problem has also been formulated for the objective function of problem (GCMP). The master problem is a convex minimization problem in the space R^{n+2q} and is solved using an outer approximation algorithm. Computational experiments conducted using a polyhedron for the feasible region showed that, for q = 1, this algorithm required less than half the computational time required by the branch and bound with underestimation function algorithm proposed in Konno and Kuno (1992) to solve problem (CMP).

Tuy (1992) gave problem (CMP) as an example of an optimization problem that can be formulated as a Geometric Complementary Problem and solved it using a parametric programming problem. The parametric programming problem is a convex minimization problem in which a positive parameter vector is used to build a composite objective function from the convex functions in the objective function of problem (CMP). A complete algorithm that includes solving the parametric program was not given.

2.4.2. A Method Based on Outer Approximation

Thoai (1991) extended the algorithm based on the outer approximation technique that he proposed for solving problem (LMP2) to address the solution of problem (CMP) when p = 2. The main idea is to build a sequence of decreasing polytopes P_0 ⊃ P_1 ⊃ ... ⊃ X of the convex feasible region X and a sequence of decreasing polytopes S_0 ⊃ S_1 ⊃ ... ⊃ Y of the outcome set Y, where Y = {y ∈ R^2 | y_j = f_j(x), j = 1, 2, for some x ∈ X}. Problem (CMP) is then solved by applying a modified version of the algorithm for problem (LMP2). In any iteration k, up to two cuts are introduced, one for P_k and one for S_k, to obtain tighter approximating sets.


Since the algorithm does not depend on the actual value of p, it can be extended to handle cases where p ≥ 3.

2.5. Methods to Solve Problem (LMP) as a Concave Minimization Problem

Konno and Kuno (1992) showed that the objective function of problem (LMP) is not a convex function over the feasible set D. Therefore, problem (LMP) is not a convex programming problem. However, since the natural logarithm function ln is a strictly increasing concave function on (0, ∞), it is easy to show that the function F defined for all x ∈ D by

F(x) = ∑_{i=1}^p ln((c^i, x) + d_i)

is a concave function. In addition, the optimal solution set of the concave minimization problem

(CMIN) min F(x), s.t. x ∈ D,

is identical to the optimal solution set of problem (LMP). Therefore, any concave minimization method may be applied to problem (LMP) if the objective function is replaced by its logarithmic equivalent.

Using the above transformation, Tuy (1991) showed that problem (LMP) could be solved in a reduced dimension space using polyhedral annexation and the dualization and dimension reduction technique. The algorithm presented in Tuy and Tam (1992) is essentially an improvement of the one in Tuy (1991).

Ryoo and Sahinidis (1996) also converted problem (LMP) into the problem (CMIN). To solve problem (CMIN), they employed a branch and bound algorithm that incorporates the use of valid inequalities to accelerate convergence. Branch and bound algorithms may slowly converge to an optimal solution when the gap between the initial upper and lower bounds is large.


A valid inequality is an inequality constraint that does not exclude any solution that yields an objective function value lower than the current best upper bound. By introducing valid inequalities into the constraint set, inferior parts of the feasible region may be removed from further consideration without eliminating possible global optimal solutions. A second use of valid inequalities is to reduce the range of values that the variables in the problem can assume. Ryoo and Sahinidis referred to these two uses of valid inequalities as range reduction mechanisms. The performance of the bounding procedure in the branch and bound algorithm is improved by using these range reduction mechanisms, since smaller-sized partitions of the feasible region are used and the variables are restricted to reduced ranges of values. Ryoo and Sahinidis implemented the branch and bound algorithm along with the range reduction mechanisms in a computer program called BARON (Branch-And-Reduce Optimization Navigator).

To more easily calculate lower bounds on the objective function F of problem (CMIN) over a partition of the feasible region, the authors replaced F by a linear underestimating function. Lower bounds were then calculated by solving linear programs. The authors tested randomly generated problems in sizes from (m, n) = (50, 50) to (200, 200), with p ranging from 2 to 5. They reported that only a small fraction of the total CPU time is consumed by the range reduction mechanisms and that there seemed to be a low-order polynomial relationship between the CPU time and the value of p.
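For intuition about the bounding step, note that each term ln((c^i, x) + d_i) of F is concave in the value t = (c^i, x) + d_i of its factor, so over an interval [l, u] of achievable factor values the chord of ln lies below ln everywhere on [l, u]. The sketch below builds such a secant underestimator for one term; it is only one simple way to obtain a linear underestimating function and is not claimed to be the exact construction used in BARON.

```python
import numpy as np

def log_secant(l, u):
    """Slope and intercept of the chord of ln over [l, u].

    Because ln is concave, the chord a*t + b satisfies
    a*t + b <= ln(t) for every t in [l, u], so it is a valid
    linear underestimator for use in lower bounding.
    """
    a = (np.log(u) - np.log(l)) / (u - l)
    return a, np.log(l) - a * l

# Example: bounds [1, 4] on one linear factor over a partition element.
a, b = log_secant(1.0, 4.0)
ts = np.linspace(1.0, 4.0, 7)
assert np.all(a * ts + b <= np.log(ts) + 1e-12)  # chord sits below ln
print(a, b)
```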


CHAPTER 3
CONCAVE MULTIPLICATIVE PROGRAMMING PROBLEMS: ANALYSIS AND AN EFFICIENT POINT SEARCH HEURISTIC FOR THE LINEAR CASE

3.1. Introduction

An important, but little researched, area that deserves more attention is the development of heuristic algorithms for finding a good solution for multiplicative programming problems. In some applications, a good, though not necessarily globally optimal, solution may adequately meet the requirements of a user (Konno and Inori 1989). In these cases, since multiplicative programming problems are known to be NP-hard, the expenditure of computational effort required to globally solve them may not be needed.

This chapter has two purposes. The first is to present an analysis of problem (Px) when problem (Px) is a concave multiplicative programming problem. The second purpose is to propose a heuristic algorithm designed for the case where problem (Px) is a linear multiplicative programming problem.

The analysis of the concave multiplicative programming problem is presented in Section 3.2. This analysis shows a new way to write a concave multiplicative programming problem as a concave minimization problem and some theoretical consequences of this. It also shows some relationships between concave multiplicative programs and certain multiple-objective mathematical programs. In Section 3.3, by using some of the results of Section 3.2, we present and explain the workings of an efficient-point search heuristic algorithm that we have developed for the linear multiplicative programming problem.


Section 3.4 reports and analyzes some statistics summarizing the computational results that we obtained by coding the heuristic algorithm and applying it to 260 randomly-generated linear multiplicative programs. In Section 3.4 we also report the results of applying the heuristic algorithm to a multiplicative programming problem formed from a decision situation using real data. In Section 3.5, we discuss the major results of this chapter.

3.2. Analysis

Assume in problem (Px) that X is a convex set and that, for each j = 1, 2, ..., p, f_j: X → R is a concave function; i.e., assume that problem (Px) is a concave multiplicative programming problem. Consider the function ḡ: X → R defined for each x ∈ X by ḡ(x) = log g(x), where g(x) = ∏_{j=1}^p f_j(x) is the objective function of problem (Px). Then it is a simple matter to show that ḡ: X → R is a concave function and that the optimal solution set of the concave minimization problem

min ḡ(x), s.t. x ∈ X,    (3.1)

is identical to the optimal solution set of problem (Px). Thus, any concave multiplicative programming problem of the form of problem (Px), if rewritten in the form (3.1), can be solved by applying any appropriate general-purpose concave minimization algorithm to (3.1). For discussions and reviews of concave minimization algorithms, see, for instance, Benson (1995), Benson (1996), Horst and Tuy (1993), and Pardalos and Rosen (1987).
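A quick numerical illustration of why (3.1) preserves optimal solutions: because log is strictly increasing, the product objective and its logarithm rank feasible points identically. The sample points and the two positive affine factors below are arbitrary choices made only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(100, 2))      # sampled "feasible" points
f1 = X[:, 0] + X[:, 1] + 1.0                  # two strictly positive factors
f2 = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 1.0

g = f1 * f2                                   # product objective of (Px)
g_bar = np.log(f1) + np.log(f2)               # objective of problem (3.1)

# The same sample point minimizes both objectives.
assert np.argmin(g) == np.argmin(g_bar)
print(X[np.argmin(g)], g.min())
```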


It is interesting and useful in both practice and theory to observe that, in addition to (3.1), there is at least one other way to rewrite a concave multiplicative programming problem as a concave minimization problem. To show how this can be accomplished, we will first prove the following preliminary result.

Lemma 3.2.1. Let a ∈ R^p satisfy a > 0, and consider the nonlinear programming problem

v = min (a, λ), s.t. λ ∈ Λ,    (3.2)

where Λ = {λ ∈ R^p | ∏_{j=1}^p λ_j ≥ 1, λ ≥ 0}. Then v is finite and problem (3.2) has at least one optimal solution.

Proof. Notice that, if λ ∈ Λ, then λ > 0 and (a, λ) > 0. Therefore, v ≥ 0. This, combined with the fact that Λ ≠ ∅, implies that v is finite. Now, suppose that, for each j = 1, 2, ..., there exists a vector λ^j ∈ Λ such that

(a, λ^j) ≤ v + ε_j,    (3.3)

where {ε_j}_{j=1}^∞ is a strictly decreasing sequence of positive real numbers such that lim_{j→∞} ε_j = 0. Then the sequence {λ^j}_{j=1}^∞ is either bounded or unbounded.

Case 1: {λ^j}_{j=1}^∞ is bounded. Then, for some bounded set Λ̄ ⊆ Λ, λ^j ∈ Λ̄ for each j = 1, 2, .... Therefore, by passing to an appropriate subsequence {λ^j}_{j∈J} of {λ^j}_{j=1}^∞ if necessary, we can guarantee that λ̄ = lim_{j∈J} λ^j exists. Furthermore, since λ^j ∈ Λ̄ ⊆ Λ for each j ∈ J, and Λ is a closed set, λ̄ belongs to Λ. By assumption, (3.3) holds for each j ∈ J.


By taking the limits over j ∈ J on both sides of (3.3), we conclude that (a, λ̄) ≤ v. Since λ̄ ∈ Λ, this implies that λ̄ is an optimal solution to (3.2).

Case 2: {λ^j}_{j=1}^∞ is unbounded. Then, for some subsequence {λ^j}_{j∈J} of {λ^j}_{j=1}^∞ and for some k ∈ {1, 2, ..., p}, lim_{j∈J} λ_k^j = +∞. For each j ∈ J, since λ^j ∈ Λ, λ^j > 0. Combined with the fact that a > 0, this implies that, for each j ∈ J,

(a, λ^j) ≥ a_k λ_k^j.    (3.4)

By assumption, for each j ∈ J,

(a, λ^j) ≤ v + ε_j.    (3.5)

From (3.4) and (3.5), we obtain

a_k λ_k^j ≤ v + ε_j    (3.6)

for each j ∈ J. By taking the limits over j ∈ J on both sides of (3.6), we conclude that +∞ ≤ v, which is a contradiction. Therefore, this case cannot hold, and the proof is complete.

Using Lemma 3.2.1, we may now establish the following theorem.

Theorem 3.2.1. Assume in problem (Px) that X is a convex set and that f_j: X → R, j = 1, 2, ..., p, are concave functions. Let ĝ: X → R be defined for each x ∈ X by

ĝ(x) = p [∏_{j=1}^p f_j(x)]^{1/p}.

Then ĝ: X → R is a concave function.


Proof. Consider the function h: X → R defined for each x ∈ X by

h(x) = min ∑_{j=1}^p λ_j f_j(x), s.t. λ ∈ Λ,    (3.7)

where Λ is as defined in Lemma 3.2.1. From Lemma 3.2.1, since f_j is strictly positive on X for each j = 1, 2, ..., p, it follows that the minimum in (3.7) exists and is finite for each x ∈ X. If, for each λ ∈ Λ, we define a function h_λ: X → R by

h_λ(x) = ∑_{j=1}^p λ_j f_j(x),

then, for each x ∈ X, h(x) may also be written as

h(x) = min_{λ∈Λ} h_λ(x).    (3.8)

Notice that, for each λ ∈ Λ, h_λ: X → R is a concave function. From this and (3.8), we conclude that h: X → R is also a concave function (Rockafellar 1970). To complete the proof, we will show that, for each x ∈ X, h(x) = ĝ(x).

Toward this end, fix x ∈ X and let λ(x) ∈ Λ denote an optimal solution to problem (3.7). From the Karush-Kuhn-Tucker necessary conditions for this problem (Bazaraa, Sherali, and Shetty 1993), since λ(x) > 0, it follows that there exists a nonnegative constant θ(x) such that

f_j(x) - θ(x) ∏_{k≠j} λ_k(x) = 0, j = 1, 2, ..., p.    (3.9)

Since λ(x) ∈ Λ is an optimal solution to problem (3.7), it is easy to see that

∏_{j=1}^p λ_j(x) = 1.


Together with (3.9), this implies that

λ_j(x) f_j(x) = θ(x), j = 1, 2, ..., p.    (3.10)

From (3.10), it follows that

λ_j(x) = θ(x) / f_j(x), j = 1, 2, ..., p.

By substitution in ∏_{j=1}^p λ_j(x) = 1, this implies that

θ(x) = [∏_{j=1}^p f_j(x)]^{1/p}.    (3.11)

From equations (3.10) and (3.11), we see that

∑_{j=1}^p λ_j(x) f_j(x) = p [∏_{j=1}^p f_j(x)]^{1/p}.    (3.12)

Since x ∈ X and λ(x) ∈ Λ is an optimal solution to (3.7), the left-hand side of equation (3.12) coincides with h(x). By definition of ĝ, the right-hand side of equation (3.12) equals ĝ(x), so that the proof is complete.

Theorem 3.2.1 can also be proven by using a composite function approach and showing several preliminary results (Avriel, Diewert, Schaible, and Zang 1987). We offer the proof here because it is more direct and because we will use it below to help derive a corollary of interest.


Notice from Theorem 3.2.1 that, when problem (Px) is a concave multiplicative program, the optimal solution set of problem (Px) is identical to the optimal solution set of the concave minimization problem

min ĝ(x), s.t. x ∈ X,    (3.13)

where ĝ: X → R is defined for each x ∈ X by ĝ(x) = p [g(x)]^{1/p}. In practice, this implies that any concave multiplicative program (Px), if rewritten in the form (3.13), can be solved by applying any suitable concave minimization algorithm to (3.13). Notice also that problem (3.13) is a simpler reformulation of problem (Px) for the concave case than the typical reformulation used in the literature to solve problem (Px) in the convex case (see, e.g., Konno and Kuno 1992, Kuno and Konno 1991, Thoai 1991, and Kuno, Yajima, and Konno 1993).

Theorem 3.2.1 also has some interesting theoretical implications concerning the product of functions. For instance, for any finite set of concave functions f_j, j = 1, 2, ..., p, each defined on a common nonempty convex domain X ⊆ R^n and each strictly positive on this domain, it is known that the function g: X → R defined by their product is not necessarily concave, convex, or quasiconvex on X (Kuno, Yajima, and Konno 1993 and Avriel, Diewert, Schaible, and Zang 1988). However, from Theorem 3.2.1, the function given by [g(x)]^{1/p} for each x ∈ X is a concave function on X.
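The concavity asserted by Theorem 3.2.1 is also easy to spot-check numerically. The sketch below tests midpoint concavity of ĝ(x) = p(∏_j f_j(x))^{1/p} for hypothetical affine factors f_j(x) = (c^j, x) + d_j that remain positive on the sampled region; it is a sanity check of the theorem, not a proof.

```python
import numpy as np

# Hypothetical positive affine factors f_j(x) = (c^j, x) + d_j, p = 3.
C = np.array([[1.0, 1.0], [2.0, 0.5], [0.5, 2.0]])
d = np.array([1.0, 1.0, 1.0])

def g_hat(x):
    # p * (prod_j f_j(x))^(1/p), the concave function of Theorem 3.2.1.
    f = C @ x + d
    p = len(d)
    return p * np.prod(f) ** (1.0 / p)

# Midpoint concavity on random segments in a region where all f_j > 0.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(0.0, 3.0, size=(2, 2))
    mid = 0.5 * (x + y)
    assert g_hat(mid) >= 0.5 * (g_hat(x) + g_hat(y)) - 1e-9
print("midpoint concavity holds on all sampled segments")
```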


In addition, Theorem 3.2.1 implies the following result concerning the product of a set of concave functions.

Corollary 3.2.1. Let X and f_j, j = 1, 2, ..., p, be defined as in Theorem 3.2.1, and suppose that g: X → R is defined for each x ∈ X by

g(x) = ∏_{j=1}^p f_j(x).

Then g: X → R is a quasiconcave function.

Proof. Choose α ∈ R and let L_α = {x ∈ X | g(x) ≥ α}. If α ≤ 0, L_α = X is a convex set. If α > 0, then from Theorem 3.2.1 and Rockafellar (1970), the set

L̄_β = {x ∈ X | p [g(x)]^{1/p} ≥ β}

is a convex set, where β = p α^{1/p}. Since L̄_β = L_α, this implies that L_α is a convex set. Therefore, we have shown that, for any α ∈ R, L_α is a convex set. This is equivalent to showing that g: X → R is a quasiconcave function (Bazaraa, Sherali, and Shetty 1993), so that the proof is complete.

It follows from Corollary 3.2.1 that any concave multiplicative programming problem (Px) is a problem involving the minimization of a quasiconcave function over a convex set. Many of the most popular algorithms for minimizing a concave function over a convex set are equally suitable for minimizing quasiconcave functions over convex sets (Horst and Tuy 1993 and Benson 1995). As a result, we see that any concave multiplicative program (Px) can be solved by applying any number of suitable concave minimization algorithms directly to problem (Px). In particular, no reformulations of problem (Px) are needed to apply these algorithms.


Remark 3.2.1. Corollary 3.2.1 has been previously shown to hold for the special case where p = 2, X is a nonempty, compact polyhedron, and f_1 and f_2 are linear functions (see, e.g., Konno and Kuno 1992).

The next corollary of Theorem 3.2.1 concerns the minimization problem (3.7) used in the proof of the theorem. Possible uses for this corollary may include the construction of methods for finding local optimal solutions to concave multiplicative programs, although we will not investigate this here.

Corollary 3.2.2. Let X and f_j, j = 1, 2, ..., p, be defined as in Theorem 3.2.1, and let Λ be defined as in Lemma 3.2.1. Then Λ is a convex set and, for each x ∈ X, the unique optimal solution λ(x) to problem (3.7) is given by

λ_k(x) = [∏_{j=1}^p f_j(x)]^{1/p} / f_k(x), k = 1, 2, ..., p.

Proof. Notice that Λ may be rewritten according to the relation

Λ = {λ ∈ int R^p_+ | p [∏_{j=1}^p λ_j]^{1/p} ≥ p},    (3.14)

where int R^p_+ denotes the interior of the nonnegative orthant of R^p. It is easy to see that, for each j = 1, 2, ..., p, the function h_j: int R^p_+ → R defined for each λ ∈ int R^p_+ by h_j(λ) = λ_j


is a concave function on int R^p_+ that satisfies h_j(λ) > 0 for all λ ∈ int R^p_+. Therefore, by Theorem 3.2.1, the function m: int R^p_+ → R defined for each λ ∈ int R^p_+ by

m(λ) = p [∏_{j=1}^p λ_j]^{1/p}

is a concave function. This implies that the set {λ ∈ int R^p_+ | m(λ) ≥ p} is a convex set (Rockafellar 1970). By (3.14), this proves that Λ is a convex set.

Now fix x ∈ X, and let λ(x) ∈ Λ denote an optimal solution to problem (3.7). From the proof of Theorem 3.2.1, this implies that, for each k = 1, 2, ..., p,

λ_k(x) = θ(x) / f_k(x),

where θ(x) is given by (3.11), so that the corollary is proven.

In addition to its relationships to concave minimization, a concave multiplicative program also has some interesting ties to multiple-objective mathematical programming. In the remainder of this section, we will show some of the theoretical relationships between concave multiplicative programs and certain multiple-objective mathematical programs. In the next section, some practical benefits of those relationships will be demonstrated.

Let f(x) denote the vector [f_1(x), f_2(x), ..., f_p(x)]^T, where f_j: X → R, j = 1, 2, ..., p, are the functions used in defining problem (Px).


Then the components of the vector f(x) are generally conflicting, in the sense that the infima over X of f_j(x), j = 1, 2, ..., p, are generally not simultaneously achieved at the same point in X. As a result, inherent tradeoffs in the achievable values of the components of f(x) over x ∈ X are present. To account for these tradeoffs, and to seek what decision makers call a most preferred solution in situations where the goal is to attempt to simultaneously minimize f_j(x), j = 1, 2, ..., p, over X, one of the most popular approaches is to consider the associated multiple-objective mathematical program

VMIN f(x), s.t. x ∈ X.    (3.15)

In particular, in typical situations, a most preferred solution in X will exist that is also an efficient solution for (3.15), where an efficient solution is defined as follows.

Definition 3.2.1. A point x^0 ∈ R^n is called an efficient solution for (3.15) when x^0 ∈ X and, whenever f(x) ≤ f(x^0) for some x ∈ X, then f(x) = f(x^0). An efficient solution is also called a nondominated or Pareto-optimal solution.

By generating or searching the set X_E of the efficient solutions for (3.15), decision makers are able to observe the inherent tradeoffs among the objective functions f_j, j = 1, 2, ..., p, that are available over X and are often able to choose from X_E a most preferred solution. For further discussions on multiple-objective mathematical programming and its applications, the reader may consult, for instance, Cohon (1978), Evans (1984), Luc (1989), Sawaragi, Nakayama, and Tanino (1985), Stadler (1979), Steuer (1986), Yu (1985), Zeleny (1982), and references therein.


The first relationship between multiplicative programming and multiple-objective mathematical programming is given in the following result. The proof of this result is an elementary exercise.

Proposition 3.2.1. Any optimal solution to problem (Px) must belong to the efficient set X_E of the multiple-objective mathematical programming problem (3.15).

Notice that Proposition 3.2.1 holds for arbitrary multiplicative programming problems (Px). The next result, however, is restricted to certain types of concave multiplicative programs.

Proposition 3.2.2. Assume in problem (Px) that X is a compact, convex set and that f_j: X → R, j = 1, 2, ..., p, are concave functions. Then there exists an optimal solution to problem (Px) which is an extreme point of X.

Proof. From Theorem 3.2.1, problem (Px) can be solved by finding an optimal solution to the concave minimization problem (3.13), where ĝ: X → R is the concave function defined by

ĝ(x) = p [∏_{j=1}^p f_j(x)]^{1/p}

for each x ∈ X. Since X is a nonempty, compact, convex set, from Horst and Tuy (1993), problem (3.13) has an optimal solution that is an extreme point of X. These two observations together prove the desired result.
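Proposition 3.2.2 licenses a brute-force procedure for tiny instances: enumerate the extreme points of X and keep the best product value found among them. The sketch below does this for a hypothetical two-variable polytope by intersecting constraint pairs; it is purely didactic, since the number of vertices grows exponentially with problem size.

```python
import itertools
import numpy as np

# Hypothetical polytope X = {x : Ax <= b} (last two rows encode x >= 0).
A = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([6.0, 9.0, 0.0, 0.0])

def objective(x):
    # A sample product of two positive affine factors.
    return (x[0] + x[1] + 1.0) * (2.0 * x[0] + 0.5 * x[1] + 1.0)

# Candidate vertices are intersections of pairs of constraint boundaries.
vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) > 1e-9:
        v = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ v <= b + 1e-9):        # keep only feasible vertices
            vertices.append(v)

best = min(vertices, key=objective)
print(best, objective(best))
```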


Taken together, Propositions 3.2.1 and 3.2.2 imply that any concave multiplicative programming problem with a compact feasible region has at least one optimal solution that is an efficient extreme point solution to the multiple-objective mathematical programming problem (3.15). Special cases of this observation have been alluded to in the literature (see, e.g., Aneja, Aggarwal, and Nair 1984 and Sniedovich and Findlay 1995). In the next section, we put this observation to practical use.

3.3. Efficient Point Search Heuristic

Assume in this section that, in problem (Px), X = {x ∈ R^n | Ax ≤ b} is a compact polyhedron, where A is an m × n matrix and b ∈ R^m, and that, for each j = 1, 2, ..., p, f_j(x) = (c^j, x), where c^j ∈ R^n for each j = 1, 2, ..., p. Then problem (Px) is a linear multiplicative programming problem or, more briefly, a linear multiplicative program (Konno and Kuno 1992). We have designed and tested a heuristic algorithm for this problem, based in part on some of the results in the previous section. In this section, we will formally state this heuristic algorithm and explain its workings.

The multiple-objective program (3.15) associated with a linear multiplicative program may be written as

VMIN Cx, s.t. Ax ≤ b,    (3.16)

where C is the p × n matrix whose jth row equals (c^j)^T, j = 1, 2, ..., p. Problem (3.16) is a multiple-objective linear programming problem (Steuer 1986 and Yu 1985). Let X_ex denote the set of extreme points of X = {x ∈ R^n | Ax ≤ b}.


Then, by Propositions 3.2.1 and 3.2.2, an optimal solution to the linear multiplicative programming problem can be found in the set X_{E,ex} of efficient extreme points of problem (3.16). The set X_{E,ex} is finite, and various procedures have been developed for generating it in its entirety (see, e.g., Steuer 1986, Yu 1985, and Steuer 1983). It follows that, in theory at least, a global optimal solution to a linear multiplicative program can be found by completely enumerating the set X_{E,ex} of efficient extreme points of the associated multiple-objective linear programming problem (3.16) and, from this set, choosing the point(s) with the smallest value of ∏_{j=1}^p (c^j, x) (see, e.g., Sniedovich and Findlay 1995). Unfortunately, as we shall see later, in practice the exponential growth in the size of X_{E,ex} as a function of problem size (Steuer 1986) renders this approach impractical for many cases.

The approach of the heuristic algorithm is to efficiently search a dispersed, carefully chosen sample of candidate points from X_{E,ex} in order to find an attractive solution to the linear multiplicative programming problem. To describe and explain the workings of the heuristic, we must first present some theoretical background from the theory of multiple-objective linear programming.

Let W = {w ∈ R^p | (e, w) ≤ M, w ≥ e}, where e ∈ R^p is a vector with each entry equal to 1.0 and M is a positive real number.


For sufficiently large M, from Philip (1972) it is known that a point x^0 belongs to the efficient set X_E of (3.16) if and only if x^0 is an optimal solution to the weighted-sum problem

min (w, Cx), s.t. x ∈ X,    (3.17)

for some w = w^0 ∈ W. We will assume henceforth that M is chosen to be large enough to guarantee that this property holds. It is also well known that the efficient set X_E for (3.16) is given by

X_E = ∪_{w∈W} X_w,

where, for each w ∈ W, X_w denotes the optimal solution set of the linear program (3.17) (Steuer 1986 and Yu 1985). Since the optimal solution set to (3.17) for any w ∈ W is a face of X, it follows that the efficient set X_E for (3.16) is equal to the union of the faces X_w, w ∈ W, of X. Although X_E is a connected set (Yu 1985), it is generally nonconvex.

The heuristic algorithm will individually identify efficient faces X_w, w ∈ W, of X, and find an approximately-optimal extreme point solution to the problem

min ∏_{j=1}^p (c^j, x), s.t. x ∈ X_w,    (3.18)

for each efficient face X_w that it finds.
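The weighted-sum program (3.17) is the basic device for producing members of X_E, and it is a single LP solve. A minimal sketch with hypothetical data (the rows of Cmat play the role of the vectors c^j):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: X = {x >= 0 : Ax <= b}; rows of Cmat are the c^j.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
Cmat = np.array([[1.0, 1.0], [2.0, 0.5]])

w = np.array([1.0, 3.0])  # a weight vector from W (componentwise >= 1)
# Problem (3.17): minimize (w, Cx) over X; the LP objective is C^T w.
res = linprog(w @ Cmat, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x)              # an efficient extreme point of (3.16)
```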


Let

Y = {y ∈ R^p | y = Cx for some x ∈ X},
Y^≥ = {y ∈ R^p | y ≥ ȳ for some ȳ ∈ Y}.

To aid in its search, the heuristic algorithm will solve the linear program

min (w, Cx),    (3.19a)
s.t. Cx ≤ y,    (3.19b)
     Ax ≤ b,    (3.19c)

for various values of y ∈ Y^≥ and w ∈ W. The heuristic relies in part upon the properties of problem (3.19) given in the next three results. The first two results follow easily from Benson (1978).

Theorem 3.3.1. Suppose that x^0 ∈ R^n, and let y^0 = Cx^0. Then x^0 is an efficient solution for (3.16) if and only if, with y = y^0, x^0 is an optimal solution to (3.19) for every w ∈ W.

Theorem 3.3.2. If y ∈ Y^≥ and w ∈ W, then (3.19) has at least one optimal solution, and any optimal solution for (3.19) is an efficient solution for (3.16).

Theorem 3.3.3. Suppose in (3.19) that w = w^0 ∈ W and that y = y^0 = Cx^0, where x^0 is an efficient solution for (3.16). Let (u^{0T}, z^{0T}) denote any optimal solution to the linear programming dual of (3.19), where u^0 represents the dual variables corresponding to the constraints Cx ≤ y^0 of (3.19). Let w̄^0 = u^0 + w^0 and let v^0 = (w̄^0)^T Cx^0. Then x^0 belongs to the efficient face X_{w̄^0} of X, and X_{w̄^0} can be represented as

X_{w̄^0} = {x ∈ X | (w̄^0)^T Cx = v^0}.


Proof. To prove the theorem, we will show that, with w = w̄^0, x^0 is an optimal solution to problem (3.17). Suppose in (3.19) that w = w^0 ∈ W and that y = y^0 = Cx^0, where x^0 is the efficient solution for (3.16) given in the theorem. The dual linear program to (3.19) is then given by

max -(y^0, u) - (b, z),
s.t. -C^T u - A^T z = C^T w^0,
     u, z ≥ 0.

From Theorem 3.3.1, x^0 is an optimal solution to (3.19) when w = w^0 and y = y^0. By the duality theory of linear programming (Murty 1983), since (u^{0T}, z^{0T}) is an optimal solution to the linear programming dual of (3.19) when w = w^0 and y = y^0, this implies that

(w^0)^T Cx^0 = -(y^0, u^0) - (b, z^0).

By rearranging this equation and using the definitions of y^0 and w̄^0, we obtain

(w̄^0)^T Cx^0 = -(b, z^0).    (3.20)

With w = w̄^0, the dual linear program to (3.17) may be written as

max -(b, z),    (3.21a)
s.t. -A^T z = C^T w̄^0,    (3.21b)
     z ≥ 0.    (3.21c)

Let z̄ denote an arbitrary feasible solution to problem (3.21). From the definitions of u^0 and w̄^0, this implies that (u^{0T}, z̄^T) is a feasible solution to the dual linear program of (3.19).


Since (u^{0T}, z^{0T}) is an optimal solution to the latter problem, it follows that

-(y^0, u^0) - (b, z^0) ≥ -(y^0, u^0) - (b, z̄),

or, equivalently, -(b, z^0) ≥ -(b, z̄). Notice that, since (u^{0T}, z^{0T}) is an optimal solution to the dual linear program to (3.19), z^0 is a feasible solution to (3.21). By the choice of z̄, the preceding two statements imply that z^0 is an optimal solution to (3.21). Since x^0 is an efficient solution for (3.16), with w = w̄^0, x^0 is a feasible solution for (3.17). From (3.20) and the duality theory of linear programming (Murty 1983), since z^0 is an optimal solution to (3.21), this implies that, with w = w̄^0, x^0 is an optimal solution to (3.17), and the proof is complete.

Notice in Theorem 3.3.3 that, for any t > 0, X_{tw̄^0} = X_{w̄^0}. This implies that, in Theorem 3.3.3, when w̄^0 ∉ W, there exists a t ∈ (0, 1) such that tw̄^0 ∈ W and X_{tw̄^0} = X_{w̄^0}. Thus, in Theorem 3.3.3, when w̄^0 ∉ W, X_{w̄^0} has an alternate representation X_{tw̄^0} for which tw̄^0 ∈ W. For simplicity, we may and will assume without loss of generality that, in Theorem 3.3.3, w̄^0 ∈ W.
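Problem (3.19) itself is also just a linear program, so Theorem 3.3.2 gives a cheap recipe for generating efficient points: pick any y ∈ Y^≥ and w ∈ W and solve. A minimal sketch, with all data (including the chosen y and w) hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: X = {x >= 0 : Ax <= b}, outcome map y = Cmat @ x.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([6.0, 9.0])
Cmat = np.array([[1.0, 1.0], [2.0, 0.5]])

w = np.array([1.0, 2.0])  # a weight vector from W
y = np.array([5.0, 8.0])  # a point of Y>= (it dominates Cx for x = 0)

# Problem (3.19): min (w, Cx) subject to Cx <= y and Ax <= b.  By
# Theorem 3.3.2, any optimal solution is efficient for (3.16).
res = linprog(w @ Cmat,
              A_ub=np.vstack([Cmat, A]),
              b_ub=np.concatenate([y, b]),
              bounds=[(0, None)] * 2)
print(res.x)
```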


To generate various points y ∈ Y^≥ for use in problem (3.19), the heuristic algorithm will rely upon the two concepts defined in the next two definitions (see, e.g., Zeleny 1982).

Definition 3.3.1. The point y^I ∈ R^p is called the ideal point of Y when, for each j = 1, 2, ..., p, y_j^I equals the minimum value of y_j over Y.

Definition 3.3.2. The point y^{AI} ∈ R^p is called the anti-ideal point of Y when, for each j = 1, 2, ..., p, y_j^{AI} equals the maximum value of y_j over Y.

Notice that y^I and y^{AI} generally do not belong to Y. The algorithm uses these two points as anchor points in an initialization procedure whose goal is, in part, to generate a dispersed sample of points from Y^≥.

The heuristic algorithm may be stated as follows.

Algorithm 3.3.1. Efficient Point Search Heuristic Algorithm

Initialization Phase. See Steps 1 through 5 below.

Step 1. Find the ideal and anti-ideal points y^I and y^{AI} of Y.

Step 2. Find an optimal solution [(x*)^T, α*]^T ∈ R^{n+1} to the linear program

max α,
s.t. Cx - α(y^I - y^{AI}) ≤ y^{AI},
     Ax ≤ b,
     α ≥ 0.

Step 3. Choose a positive integer S and, for each i = 1, 2, ..., S, let

y^i = y^{AI} + (i/S)(y* - y^{AI}),

where y* = y^{AI} + α*(y^I - y^{AI}).

Step 4. Choose a positive integer N such that 1 ≤ N ≤ M - p + 1, let w^0 = e ∈ R^p, and, for each j = 1, 2, ..., p, define w^j ∈ R^p by

w_i^j = 1 if i ≠ j, and w_i^j = N if i = j.

PAGE 66

w^j_i = 1, if i ≠ j, and w^j_i = N, if i = j.

Step 5. Set UB = +∞, i = 0, and j = 0.

Efficient Point Search Phase. See Steps 1 through 6 below.

Step 1. Set y = y^i and w = w^j, and find any optimal solution x^{ij} to linear program (3.19).

Step 2. Set y = Cx^{ij} and w = w^j in (3.19), and compute any optimal solution ((u^{ij})^T, (z^{ij})^T) to the dual linear program to (3.19), where u^{ij} denotes the optimal dual variables corresponding to the constraints Cx ≤ y of (3.19).

Step 3. Let ŵ^{ij} = u^{ij} + w^j. If ŵ^{ij} is a positive multiple of ŵ^{i′j′} for some previously considered pair (i′, j′) ≠ (i, j), then go to Step 6. Otherwise, continue.

Step 4. Let v^{ij} = ⟨ŵ^{ij}, Cx^{ij}⟩. For each h = 1, 2, ..., n, calculate a_h according to the formula

a_h = Σ_{k=1}^p [ Π_{t≠k} ⟨c^t, x^{ij}⟩ ] c^k_h,    (3.22)

and find any basic optimal solution x̃^{ij} to the linear program

min ⟨a, x⟩,    (3.23a)
s.t. ⟨ŵ^{ij}, Cx⟩ = v^{ij},    (3.23b)
     Ax ≤ b.    (3.23c)

Step 5. If Π_{k=1}^p ⟨c^k, x̃^{ij}⟩ ≥ UB, go to Step 6. Otherwise, set x̄ = x̃^{ij} and UB = Π_{k=1}^p ⟨c^k, x̄⟩, and go to Step 6.

Step 6. Set j = j + 1. If j ≤ p, go to Step 1. Otherwise, set i = i + 1 and j = 0. If i ≤ S, go to Step 1. Otherwise, stop: x̄ ∈ X_{E,ex} is the recommended solution to the linear multiplicative programming problem.

In the initialization phase of the algorithm, samples of points from Y^≥ and from W are generated. To generate the sample of points from Y^≥, Step 2 of this phase determines the point y* between y^AI and y^I such that, of all line segments with endpoints y^AI and y that lie in Y^≥ and for which y lies on the line segment connecting y^AI and y^I, the line segment L connecting y^AI and y* has maximum norm. The sample {y^i : i = 0, 1, ..., S} of points from Y^≥ is then generated in Step 3 of this phase by partitioning L into S line segments of equal length, where S is a positive integer chosen by the user. In Step 4, a sample of p + 1 all-integer vectors from W is generated, where, for p of these vectors, the value N of one of the components is chosen by the user from the set {1, 2, ..., M − p + 1}. A sketch of this phase appears below.
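A minimal prototype of the initialization phase, under the same assumptions as the sketch above (our own function and variable names; S and N are the user-chosen parameters):

```python
import numpy as np
from scipy.optimize import linprog

def initialize(C, A, b, S, N):
    p, n = C.shape
    bnds = [(None, None)] * n
    # Ideal and anti-ideal points of Y (Definitions 3.3.1 and 3.3.2):
    # minimize and maximize each <c_j, x> over X.
    yI = np.array([linprog(C[j], A_ub=A, b_ub=b, bounds=bnds,
                           method="highs").fun for j in range(p)])
    yAI = np.array([-linprog(-C[j], A_ub=A, b_ub=b, bounds=bnds,
                             method="highs").fun for j in range(p)])
    # Step 2: max alpha s.t. Cx <= yAI + alpha*(yI - yAI), Ax <= b, alpha >= 0.
    # Variables are (x, alpha); alpha's column moves to the left-hand side.
    A_ub = np.block([[C, -(yI - yAI).reshape(p, 1)],
                     [A, np.zeros((len(b), 1))]])
    b_ub = np.concatenate([yAI, b])
    c = np.zeros(n + 1); c[-1] = -1.0            # maximize alpha
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=bnds + [(0, None)], method="highs")
    y_star = yAI + res.x[-1] * (yI - yAI)
    # Step 3: S+1 equally spaced sample points on the segment [yAI, y_star].
    ys = [yAI + (i / S) * (y_star - yAI) for i in range(S + 1)]
    # Step 4: weight vectors w^0 = e and w^j = e + (N-1)*e_j, j = 1..p.
    ws = [np.ones(p)] + [np.ones(p) + (N - 1) * np.eye(p)[j] for j in range(p)]
    return ys, ws
```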


Each iteration of the efficient point search phase of the heuristic executes two key operations. First, it identifies an efficient face X_{ŵ^{ij}} of X. Second, unless this face has been previously identified during an earlier execution of this phase, with w = ŵ^{ij} in problem (3.18), by using a first-order linear approximation to the objective function of this problem, it finds an extreme point x̃^{ij} of X in this efficient face that is an approximate optimal solution to (3.18).

Steps 1 through 3 of the efficient point search phase of the algorithm identify an efficient face of X. In Step 1, with y = y^i ∈ Y^≥ and w = w^j ∈ W, the linear program (3.19) is solved for any optimal solution x^{ij}. By Theorem 3.3.2, this optimal solution must exist and is an efficient solution for (3.16). In Steps 2 and 3, with y = Cx^{ij} and w = w^j in (3.19), the dual linear program to (3.19) is solved to yield the vector u^{ij} ∈ R^p, and the weighting vector ŵ^{ij} = u^{ij} + w^j is computed. From Theorem 3.3.3, the face X_{ŵ^{ij}} corresponding to this weighting vector is an efficient face for (3.16) and contains x^{ij}. Furthermore, from the same theorem, this face can be written as

X_{ŵ^{ij}} = {x ∈ X : ⟨ŵ^{ij}, Cx⟩ = v^{ij}},    (3.24)

where v^{ij} = ⟨ŵ^{ij}, Cx^{ij}⟩. Step 3 checks whether or not X_{ŵ^{ij}} has been identified during a previous execution of this phase of the algorithm. If so, the algorithm proceeds to Step 6 to prepare for another possible iteration of the efficient point search phase of the heuristic. Otherwise, control shifts to Steps 4-5.

In Steps 4-5 of the efficient point search phase, problem (3.18) is approximately solved using a new efficient face X_{ŵ^{ij}} as the feasible region. In particular, in Step 4, (3.22) is first used to construct the nonconstant portion ⟨a, x⟩ of a first-order Taylor series linear approximation of the objective function of problem (3.18) at x = x^{ij} ∈ X_{ŵ^{ij}}. Next, using the representation (3.24) of the efficient face X_{ŵ^{ij}}, an extreme point minimizer x̃^{ij} of ⟨a, x⟩ over X_{ŵ^{ij}} is found by solving the linear program (3.23). Notice that x̃^{ij} ∈ X_{E,ex} (see Rockafellar 1970). In Step 5, the value achieved by x̃^{ij} in the objective function of the linear multiplicative problem is compared to the smallest value UB found thus far for this objective function by the search. If x̃^{ij} achieves a smaller objective function value than UB, x̃^{ij} becomes the new incumbent solution x̄, and UB is reduced in value accordingly.
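The coefficients (3.22) are exactly the gradient of the product objective Π_k ⟨c^k, x⟩ at x^{ij}, so the linearization used in Step 4 is cheap to form. A minimal sketch (our own helper name; C is the p × n matrix of objective rows):

```python
import numpy as np

def taylor_coefficients(C, x):
    """Vector a of (3.22): the gradient at x of f(x) = prod_k <c^k, x>,
    a_h = sum_k (prod_{t != k} <c^t, x>) * C[k, h]."""
    vals = C @ x              # the inner products <c^k, x>, positive on X
    return (vals.prod() / vals) @ C
```

With a in hand, problem (3.23) is just a linear program over X with the single added equality ⟨ŵ^{ij}, Cx⟩ = v^{ij}.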


Notice that the performance of the heuristic algorithm depends in part upon the number, locations, and dimensions of the efficient faces (3.24) that are searched via problem (3.23). This, in turn, is partially dependent upon the sizes of the parameters S and N chosen by the user. The goal is to search as many points of X_{E,ex} as possible by generating a variety of distinct efficient faces (3.24) of large dimensions that are dispersed widely throughout X_E. Notice that, since each efficient face identified by the heuristic is given in the form (3.24) and searched by solving linear program (3.23), the individual points in X_{E,ex} that are searched by the algorithm are searched implicitly rather than explicitly, i.e., they do not need to be explicitly enumerated.

3.4. Computational Results

The heuristic algorithm described in Section 3.3 has the following attractive characteristics: (a) it can be implemented using only linear programming methods; (b) it generally implicitly searches many efficient extreme points of (3.16) at once by optimizing over entire efficient faces of (3.16), rather than by explicitly examining individual efficient extreme points of (3.16); (c) it allows the user to manipulate the nature and extent of the efficient face search through the choices for the input parameters S and N; (d) it finds efficient faces of (3.16) by attempting to globally sample from a variety of regions of the efficient set.

To evaluate the effectiveness in practice of the heuristic algorithm and its features, we have written a VS FORTRAN computer code for the algorithm and used it to solve 260 linear multiplicative programming problems of various sizes. To execute the code on these 260 problems, we used an IBM ES/9000 model 831 mainframe computer. As a further illustration of the effectiveness in practice of the heuristic algorithm, we solved a multiple-objective linear programming problem in forest management that was derived from a real decision situation using real data.

To implement Step 3 of the initialization phase of the algorithm, we chose to set S = 4, so that a sample of five points lying between y^AI and y^I in Y^≥ is always generated in this step. We used a value of N = 9 in Step 4 of the initialization phase to help generate the sample of p + 1 points from W. To solve the linear programming problems called for by the heuristic, the computer code uses the simplex method procedures given in the subroutines of the Optimization Subroutine Library (International Business Machines 1990). These subroutines employ anticycling rules to handle degeneracy as needed. Therefore, they are especially appropriate for solving instances of problem (3.23), since these problems always contain degenerate extreme points.

Suppose that k is a positive integer. To generate the 260 test problems, we used the following random procedure. First, for each j = 1, 2, ..., p, we generated the elements of the vector c^j ∈ R^n by randomly drawing elements from the set {1, 2, ..., 10}. Next, we generated a nonempty, compact polyhedral feasible region X ⊂ int R^n_+. This region can be written as

X = {x ∈ R^n : Px ≤ q, 1 ≤ x_j ≤ q̄, j = 1, 2, ..., n},

where P is a k × n matrix, q ∈ R^k, and q̄ ∈ R. To accomplish this, first the elements of P were generated by randomly choosing elements from the set {1, 2, ..., 10}. Next, for each i = 1, 2, ..., k, the formula

q_i = Σ_{j=1}^n P_{ij}

was used to calculate q_i, and, finally, q̄ was chosen according to the rule q̄ = max{q_i : i = 1, 2, ..., k}.

Each test problem was constructed to belong to one of four categories, where a category is defined by the number p of linear functions used in the objective function Π_{j=1}^p ⟨c^j, x⟩ of the test problem. The values p = 2, 3, 4, 5 were chosen to define these categories. We chose these categories in this way because empirical evidence seems to indicate that the complexity of these problems is more sensitive to the magnitude of p than to the magnitudes of k or n (Kuno, Yajima, and Konno 1993). Within each category, the test problems were classified into subcategories of 10 problems, each defined by the values of the ordered pair (k, n).
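The random generation procedure is easy to reproduce; below is a NumPy sketch of it (ours, not the dissertation's FORTRAN generator):

```python
import numpy as np

def generate_test_problem(p, k, n, rng=np.random.default_rng()):
    """Random instance in the style of Section 3.4: objective rows c^j and
    a feasible region X = {x : Px <= q, 1 <= x_j <= q_bar} with entries
    drawn from {1, 2, ..., 10}."""
    C = rng.integers(1, 11, size=(p, n))
    P = rng.integers(1, 11, size=(k, n))
    q = P.sum(axis=1)          # q_i = sum_j P_ij, so x = e is feasible
    q_bar = q.max()            # upper bound on each variable
    return C, P, q, q_bar
```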


To help evaluate the attractiveness of the solutions found by the heuristic algorithm, we found a global optimal solution for each test problem by completely enumerating all of the efficient extreme points of the associated multiple-objective linear program (3.16). To accomplish this, we used the ADBASE computer code developed by Steuer (1983).

Some statistics summarizing the results of these computations are presented in Tables 3.1-3.4. In each table, each row gives average statistics for a subcategory (k, n) of 10 problems, a measure of the worst-case performance of the heuristic, and the number of problems in a subcategory for which a global optimal solution was found. The first statistic is the average number of efficient extreme points found by ADBASE in solving the problems by complete enumeration. In some sense, the magnitudes of these numbers correspond to the average relative difficulties of each group of 10 linear multiplicative programs in a subcategory. The second statistic is the average efficiency rating r given by

r = 1 − [(z_H − z_min)/(z_max − z_min)],

where z_H is the objective function value returned by the heuristic and where z_min and z_max are the global minimum and maximum values of the objective function of the test problem over the corresponding set of efficient extreme points of (3.16). Thus 0 ≤ r ≤ 1, and the closer r is to 1.0, the more attractive the value z_H returned by the heuristic is relative to the actual global minimum value z_min. The third statistic given for each subcategory in these tables is the average CPU time (seconds) that the heuristic needed to solve a problem in the subcategory. The fourth statistic shows the lowest efficiency rating calculated for a problem in the subcategory. It gives a measure of the worst-case performance of the heuristic algorithm when applied to the 10 problems in a subcategory.
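As a small worked example of the second statistic (the numbers here are made up for illustration):

```python
def efficiency_rating(z_h, z_min, z_max):
    """r = 1 - (z_H - z_min)/(z_max - z_min); r = 1.0 exactly when the
    heuristic attains the global minimum over the efficient extreme points."""
    return 1.0 - (z_h - z_min) / (z_max - z_min)

# For example, z_H = 104, z_min = 100, z_max = 180 gives r = 0.95.
```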


Table 3.1. Computational Results: p = 2.

  Subcategory    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
  k      n       Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  25     20          28.8          1.000        0.227          1.000          10
  25     30          28.8          1.000        0.241          1.000          10
  30     40          47.9          1.000        0.389          1.000          10
  40     30          28.2          1.000        0.328          1.000          10
  40     50          47.0          0.999        0.504          0.996           8
  50     40          35.1          0.999        0.453          0.999           9
  50     60          29.2          1.000        0.556          1.000          10
  60     70          62.3          1.000        1.070          1.000          10

The fifth statistic is the number of problems in a subcategory for which the heuristic algorithm found a global optimal solution.

These four tables show that the solutions returned by the heuristic algorithm give, on the average, quite accurate estimates of the actual global minimum values for the 260 linear multiplicative test problems generated. This is indicated by the fact that average efficiency ratings by subcategory always were at least 0.920 and in approximately 96% of the subcategories exceeded 0.950. It is noteworthy that, for these problems, these ratings r by subcategory do not seem to decline significantly as p, k, and n increase in value.

Table 3.2. Computational Results: p = 3.

  Subcategory    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
  k      n       Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  25     20         330.6          0.985        0.321          0.951           4
  25     30         896.8          0.960        0.469          0.708           5
  30     40         873.3          0.987        0.543          0.884           7
  40     30         949.3          0.993        0.609          0.968           6
  40     50        2073.7          0.920        0.967          0.806           4
  50     40        1484.9          0.993        0.908          0.961           7
  50     60        2846.3          0.995        1.298          0.978           6
  60     70        5867.5          0.969        2.495          0.799           2

Table 3.3. Computational Results: p = 4.

  Subcategory    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
  k      n       Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  25     20        2789.5          0.998        0.426          0.993           4
  25     30        7245.9          0.992        0.598          0.945           5
  30     40       23656            0.986        1.019          0.947           1
  40     30       19034            0.978        0.998          0.923           2
  40     50       50889            0.969        1.539          0.918           0
  50     40       59443            0.969        1.587          0.843           2
  50     50       83780            0.981        1.901          0.890           3

In addition, with the exception of one subcategory, a global optimal solution was found for at least one problem in each subcategory.

The average solution times by subcategory shown in the four tables indicate that, for these test problems, the computational effort required by the heuristic was rather small. In fact, these average times were always less than 2.50 seconds. In comparison to exact algorithms that have been used in test situations to globally solve linear multiplicative problems, these times are generally either at least as small or much smaller (see, e.g., Kuno, Yajima, and Konno 1993 and Ryoo and Sahinidis 1996). Furthermore, in contrast to solution times for exact algorithms, these average solution times seem much less sensitive to increases in p, n, or k, or to increases in the average number of efficient extreme points that exist in the corresponding problems (3.16); see Kuno, Yajima, and Konno (1993) and Ryoo and Sahinidis (1996).

Table 3.4. Computational Results: p = 5.

  Subcategory    Avg. No. Eff.   Avg. Eff.   Avg. Solution   Lowest Eff.   No. Exact
  k      n       Ext. Points     Rating r    Time (sec.)     Rating r      Solutions
  10     20        1331.4          0.993        0.353          0.941           5
  20     10         527.1          0.998        0.294          0.993           2
  25     30       57115            0.995        0.962          0.992           2

Finally, it is worth noting that we were able to apply the heuristic to much larger problems than those reported in Tables 3.1-3.4. However, the number of efficient extreme points in the associated multiple-objective linear programming problems (3.16) for these cases always exceeded 200,000. Since the ADBASE code cannot be used to find all of the efficient extreme points for such problems, we were unable to completely enumerate the sets of efficient extreme points to find z_min and r values for these problems. Thus, we are as yet not able to draw conclusions concerning the accuracy of the heuristic for any problems larger than those reported in Tables 3.1-3.4.

To further illustrate the effectiveness in practice of the heuristic algorithm, we solved a real application problem in forest management that was studied in Steuer and Schuler (1978) as a multiple-objective linear programming problem. The problem involves the allocation of land and budget monies in a way that seeks to maximize objectives in timber production, hunting, and cattle grazing in the Swan Creek subunit of the Mark Twain National Forest. Steuer and Schuler (1978) provide the actual data used to formulate their multiple-objective linear programming problem. The problem contains 31 decision variables, 5 linear objective functions, and 13 constraints. Our multiplicative programming problem was formed from this problem by multiplying the 5 linear objective functions together to form a single objective function. The heuristic was then used to search for an approximate solution that maximizes this single objective function subject to the constraints of the forest management multiple-objective linear programming problem.

To help evaluate the attractiveness of the solution found by the heuristic algorithm, we found a global optimal solution by enumerating the 83 efficient extreme points of the associated forest management multiple-objective linear program using the ADBASE computer code. An efficiency rating of r = 0.999 was calculated using the slightly modified equation

r = 1 − [(z_max − z_H)/(z_max − z_min)],

since this multiplicative programming problem is a maximization problem rather than a minimization problem. This efficiency rating indicates that the heuristic algorithm returned an attractive value z_H relative to the actual global maximum value z_max.

3.5. Discussion

The results of this chapter imply that there are at least two ways to rewrite a concave multiplicative programming problem as a concave minimization problem. It follows that concave minimization theory and methods can be used in these ways to analyze and solve concave multiplicative programs. The results also imply that a concave multiplicative programming problem can be analyzed and solved directly, without any reformulation, as a quasiconcave minimization problem over a convex set. Furthermore, the analysis in the chapter implies that any concave multiplicative programming problem (P_x) with a compact feasible region has at least one optimal solution that is an efficient extreme point solution of the associated multiple-objective mathematical programming problem (3.15). Therefore, the opportunity exists for devising solution methods for such problems (P_x) that search among the efficient extreme points of the associated multiple-objective problems (3.15). The chapter proposes a heuristic algorithm that takes this

approach for solving linear multiplicative programs. From the computational results presented for this heuristic algorithm, we conclude that its features and performance offer significant potential for conveniently finding very attractive solutions, with relatively little computational effort, to the various applications of linear multiplicative programming encountered in practice. Thus, the theoretical and algorithmic results presented in this chapter offer some potential new avenues for more effectively analyzing and solving multiplicative programming problems of various types.

CHAPTER 4
A GENERAL MULTIPLICATIVE PROGRAMMING PROBLEM IN OUTCOME SPACE

4.1. Introduction

Recall from Chapter 1 that the multiplicative programming problem is given by

(P_x)    v_x = min Π_{j=1}^p f_j(x), s.t. x ∈ X,

where p ≥ 2 is an integer, X is a nonempty set in R^n, and, for each j = 1, 2, ..., p, f_j : X → R satisfies f_j(x) > 0 for all x ∈ X. For simplicity, we assume that the minimum v_x in problem (P_x) is achieved.

For any x ∈ R^n, let f(x) denote the p-vector with jth entry equal to f_j(x), j = 1, 2, ..., p. Let y ∈ R^p denote the p-vector with jth entry equal to y_j, j = 1, 2, ..., p. For each j = 1, 2, ..., p, let ŷ_j satisfy

ŷ_j ≥ sup{f_j(x) : x ∈ X},

where ŷ_j = +∞ is possible, and let ŷ denote the vector with jth entry equal to ŷ_j, j = 1, 2, ..., p.

Although various outcome-space reformulations of problem (P_x) have been proposed for solution purposes, one of the most common reformulations is given by the problem

(P_Y≤)    v_y = min g(y), s.t. y ∈ Y^≤,

where

Y^≤ = {y ∈ R^p : f(x) ≤ y ≤ ŷ for some x ∈ X},    (4.1)

and where, for each y ∈ Y^≤, g : Y^≤ → R is defined by

g(y) = Π_{j=1}^p y_j.    (4.2)

For example, problem (P_Y≤) is essentially the reformulation of problem (P_x) used in the algorithms of Benson (1998c), Falk and Palocsay (1994), and Thoai (1991). Notice that since X is nonempty, Y^≤ is a nonempty set.

By constructing appropriate global solution algorithms for problem (P_Y≤), this problem provides us with the opportunity to solve problem (P_x) by working in the outcome space R^p of the problem, rather than in the decision space R^n, which is generally much larger than R^p. In order to globally solve problem (P_Y≤), it is important to understand the properties of the set Y^≤ defined by (4.1), of the function g defined by (4.2), and of problem (P_Y≤) itself.

This chapter undertakes a mathematical analysis of the outcome-space reformulation (P_Y≤) of problem (P_x). The analysis is organized according to whether or not the outcome-space problem satisfies conditions for the general case, the convex case, or the polyhedral case. For the general case, we show, for instance, that globally solving either problem (P_x) or problem (P_Y≤) essentially also globally solves the other problem, and that, for any feasible point ȳ for problem (P_Y≤), either g(y) < g(ȳ) for some y ∈ Y^≤, or ȳ satisfies a condition that is necessary, but not sufficient, for it to be a local optimal solution for problem (P_Y≤). For the convex and polyhedral cases, we show stronger

results. For example, we show for the convex case that any global optimal solution for problem (P_Y≤) must lie on the boundary of Y^≤, that the objective function g in problem (P_Y≤) is strictly pseudoconcave on Y^≤, and, when Y^≤ is closed and contains at least one extreme point, that problem (P_Y≤) has an extreme point global optimal solution.

The analysis of the general case of problem (P_Y≤) is given in Section 4.2. Section 4.3 provides analytical results for both the convex and polyhedral cases of problem (P_Y≤).

4.2. Results for the General Case of Problem (P_Y≤)

Notice under the assumptions made in Section 4.1 for problem (P_x) that Y^≤ is a nonempty subset of R^p_> := {z ∈ R^p : z > 0}. When Y^≤ satisfies this condition, we obtain what we will call the general case of problem (P_Y≤).

It is important to establish that by solving the general case outcome-space formulation (P_Y≤) of problem (P_x), a global optimal solution for problem (P_x) can be recovered. The following result, by showing that problems (P_x) and (P_Y≤) are equivalent in a certain sense, immediately establishes this fact.

Theorem 4.2.1. (a) If x* is a global optimal solution for problem (P_x), then y* = f(x*) is a global optimal solution for problem (P_Y≤). Furthermore, v_y = v_x. (b) Problem (P_Y≤) has at least one global optimal solution. Furthermore, if y* is a global optimal solution for problem (P_Y≤), then any x* ∈ X such that f(x*) ≤ y* is a global optimal solution for problem (P_x).

Proof. (a) Let x* be a global optimal solution for problem (P_x), and set y* = f(x*). From (4.1) and (4.2), this implies that y* ∈ Y^≤ and that g(y*) = Π_{j=1}^p f_j(x*) = v_x. Therefore, v_y ≤ v_x. If g(y) < v_x were to hold for some y ∈ Y^≤, then, from (4.1) and (4.2), there would exist an x ∈ X such that

0 < Π_{j=1}^p f_j(x) ≤ g(y) < v_x,

which contradicts the definition of v_x. Therefore, g(y) ≥ v_x for all y ∈ Y^≤. This implies that v_y ≥ v_x. Since v_y ≤ v_x, g(y*) = v_x, and y* ∈ Y^≤, it follows that v_y = v_x = g(y*), and y* is a global optimal solution for problem (P_Y≤).

(b) By assumption, we may choose a global optimal solution for problem (P_x). From part (a), this implies that problem (P_Y≤) has at least one global optimal solution. Suppose that y* is a global optimal solution for problem (P_Y≤). Since y* ∈ Y^≤, (4.1) implies that we may choose an arbitrary x* ∈ X such that f(x*) ≤ y*. Then, from (4.2), since 0 < f(x*),

0 < Π_{j=1}^p f_j(x*) ≤ Π_{j=1}^p y*_j = g(y*) = v_y.

Since x* ∈ X and y* is a global optimal solution for problem (P_Y≤), this implies that

v_x ≤ Π_{j=1}^p f_j(x*) ≤ v_y.    (4.3)

From part (a), v_y = v_x. By (4.3), this implies that Π_{j=1}^p f_j(x*) = v_x. Since x* ∈ X, it follows that x* is a global optimal solution for problem (P_x).

Suppose in the general case of problem (P_Y≤) that a point ȳ ∈ Y^≤ has been generated. For algorithmic purposes, it may be valuable to have a tool for finding an alternate point y ∈ Y^≤ that satisfies g(y) < g(ȳ), if such a point exists. The next result gives an idea for potentially helping to create such a tool. To prove this result, we need the following lemma. This lemma will also be useful in proving several other results later in this chapter.

Lemma 4.2.1. Assume that ȳ ∈ Y^≤. Then, for any y ∈ Y^≤,

(1/p)⟨∇g(ȳ), y⟩ = g(ȳ)(1/p) Σ_{j=1}^p (y_j/ȳ_j),

and

g(ȳ)(1/p) Σ_{j=1}^p (y_j/ȳ_j) ≥ g(ȳ)[g(y)/g(ȳ)]^{1/p},

with equality holding in the latter relationship iff, for some constant M > 0, y_j = M ȳ_j, j = 1, 2, ..., p.

Proof. Choose an arbitrary point y ∈ Y^≤, and suppose that ȳ ∈ Y^≤. Then, by (4.2), since Y^≤ ⊆ R^p_>, g(ȳ) > 0. By definition of g,

(1/p)⟨∇g(ȳ), y⟩ = (1/p) Σ_{j=1}^p [Π_{k≠j} ȳ_k] y_j

= g(ȳ)(1/p) Σ_{j=1}^p (y_j/ȳ_j).    (4.4)

Since (1/p) ≥ 0, (y_j/ȳ_j) > 0 for each j = 1, 2, ..., p, and p(1/p) = 1, the arithmetic-geometric mean inequality (Duffin, Peterson, and Zener 1967) implies that

(1/p) Σ_{j=1}^p (y_j/ȳ_j) ≥ [Π_{j=1}^p (y_j/ȳ_j)]^{1/p} = [g(y)/g(ȳ)]^{1/p},

with equality holding iff, for some constant M > 0, y_j = M ȳ_j for each j = 1, 2, ..., p. Together with (4.4), since g(ȳ) ≥ 0, this implies the desired results.

Theorem 4.2.2. Assume that ȳ ∈ Y^≤. If

1.0 > inf{(1/p) Σ_{j=1}^p (y_j/ȳ_j) : y ∈ Y^≤},    (4.5)

then g(y) < g(ȳ) for some y ∈ Y^≤. In particular, if y* achieves the infimum in (4.5), then g(y*) < g(ȳ).

Proof. Suppose that ȳ ∈ Y^≤. If (4.5) holds, then, for some y ∈ Y^≤,

1.0 > (1/p) Σ_{j=1}^p (y_j/ȳ_j).    (4.6)

Since g(ȳ) > 0, this implies that

g(ȳ) > g(ȳ)(1/p) Σ_{j=1}^p (y_j/ȳ_j).    (4.7)

From Lemma 4.2.1, since ȳ ∈ Y^≤, we know that

g(ȳ)(1/p) Σ_{j=1}^p (y_j/ȳ_j) ≥ g(ȳ)[g(y)/g(ȳ)]^{1/p}.    (4.8)

Since g(ȳ) > 0, together (4.7) and (4.8) imply that

1.0 > [g(y)/g(ȳ)]^{1/p}.

Because g(ȳ) > 0, this implies that g(y) < g(ȳ). Therefore, g(y) < g(ȳ) for some y ∈ Y^≤. Since, for any point y* that achieves the infimum in (4.5), (4.6) is also satisfied, the argument above also implies that if y* achieves the infimum in (4.5), then g(y*) < g(ȳ).

Notice that when ȳ ∈ Y^≤, the infimum in (4.5) is either less than 1.0 or equal to 1.0. From Theorem 4.2.2, when this infimum is less than 1.0, a point y in Y^≤ such that g(y) < g(ȳ) exists. In particular, in this case, ȳ is not a global optimal solution for problem (P_Y≤). The next result covers the case when the infimum in (4.5) equals 1.0.

Theorem 4.2.3. Assume that ȳ ∈ Y^≤. If

1.0 = inf{(1/p) Σ_{j=1}^p (y_j/ȳ_j) : y ∈ Y^≤},    (4.9)

then ȳ is an optimal solution to

v_d = min ⟨∇g(ȳ), y − ȳ⟩, s.t. y ∈ Y^≤.    (4.10)

Proof. From (4.9), since ȳ ∈ Y^≤, the infimum in (4.9) is achieved at y = ȳ. By Lemma 4.2.1, since g(ȳ) is a positive constant, this implies that ȳ also minimizes (1/p)⟨∇g(ȳ), y⟩ over Y^≤. Since (1/p) is a positive constant and ⟨∇g(ȳ), ȳ⟩ is a constant, it is easy to see that this implies that ȳ is an optimal solution to (4.10); in particular, v_d = ⟨∇g(ȳ), ȳ − ȳ⟩ = 0.
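When Y^≤ is polyhedral, the infimum in (4.5) and (4.9) is computed by a single linear program. A minimal sketch, assuming Y^≤ = {y : Gy ≤ h} (our notation) and SciPy's HiGHS solver:

```python
import numpy as np
from scipy.optimize import linprog

def improvement_test(G, h, y_bar):
    """Solve inf (1/p) * sum_j y_j / y_bar_j over Y<= = {y : Gy <= h}.
    By Theorems 4.2.2 and 4.2.3: an optimal value below 1.0 yields a
    point y* with g(y*) < g(y_bar); a value of 1.0 means y_bar solves (4.10)."""
    p = len(y_bar)
    res = linprog((1.0 / p) / np.asarray(y_bar, dtype=float),
                  A_ub=G, b_ub=h,
                  bounds=[(None, None)] * p, method="highs")
    return res.fun, res.x   # optimal value and a minimizer y*
```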


A point ȳ ∈ Y^≤ is a local optimal solution for problem (P_Y≤) when there exists an ε > 0 such that g(y) ≥ g(ȳ) for each y ∈ Y^≤ for which ‖y − ȳ‖ ≤ ε. From Theorem 4.2.3, when ȳ ∈ Y^≤ and (4.9) holds, then, for any y ∈ Y^≤, if there is a δ > 0 such that d := (y − ȳ) satisfies ȳ + λd ∈ Y^≤ for all λ such that 0 < λ ≤ δ, the directional derivative of g at ȳ in the direction d will be nonnegative, i.e., ⟨∇g(ȳ), d⟩ ≥ 0. From Bazaraa, Sherali, and Shetty (1993), this is a necessary, but not sufficient, condition for ȳ to be a local (or global) optimal solution for problem (P_Y≤).

4.3. Results for Convex and Polyhedral Cases of Problem (P_Y≤)

When Y^≤, in addition to being a nonempty subset of R^p_>, is a convex set, we obtain what we will call the convex case of problem (P_Y≤). Similarly, when Y^≤, in addition to being a nonempty subset of R^p_>, is a polyhedron, we obtain what we will call the polyhedral case of problem (P_Y≤). Each of these types of outcome-space versions of problem (P_x) arises from a broad class of decision-space problems, as shown by the next result.

Theorem 4.3.1. When X is a convex set and, for each j = 1, 2, ..., p, f_j is a convex function on X, we obtain the convex case of problem (P_Y≤). When X is a polyhedron and, for each j = 1, 2, ..., p, f_j is linear on R^n, we obtain the polyhedral case of problem (P_Y≤).

Proof. Assume, in addition to the assumptions made in Section 4.1 on X and on f_j, j = 1, 2, ..., p, that X is a convex set and that, for each j = 1, 2, ..., p, f_j is a

convex function on X. We will show that Y^≤ is a convex set. Choose any y^1, y^2 ∈ Y^≤. From (4.1), since y^1, y^2 ∈ Y^≤, we may choose x^1, x^2 ∈ X such that

f_j(x^1) ≤ y^1_j and f_j(x^2) ≤ y^2_j, j = 1, 2, ..., p.    (4.11)

Suppose that λ ∈ R and 0 ≤ λ ≤ 1. Then, since λ ≥ 0 and (1 − λ) ≥ 0, for each j = 1, 2, ..., p,

λ f_j(x^1) + (1 − λ) f_j(x^2) ≤ λ y^1_j + (1 − λ) y^2_j.    (4.12)

By the convexity of f_j, j = 1, 2, ..., p, on the convex set X, if we set x = λx^1 + (1 − λ)x^2, then x ∈ X and, for each j = 1, 2, ..., p,

f_j(x) ≤ λ f_j(x^1) + (1 − λ) f_j(x^2).    (4.13)

From (4.11)-(4.13), f(x) ≤ λy^1 + (1 − λ)y^2, where x ∈ X. Since y^1, y^2 ∈ Y^≤, y^i ≤ ŷ holds for each i = 1, 2. As a result, since λ, (1 − λ) ≥ 0, λy^1 + (1 − λ)y^2 ≤ ŷ. The conditions for λy^1 + (1 − λ)y^2 to belong to Y^≤ are thus satisfied. By the choices of y^1, y^2, and λ, this implies that Y^≤ is a convex set.

Now suppose, in addition to the assumptions made in Section 4.1 on X and on f_j, j = 1, 2, ..., p, that X is a polyhedron and that, for each j = 1, 2, ..., p, f_j is a linear function on R^n. We will show that Y^≤ is a polyhedron. By definition, since X is a polyhedron, there exists a finite number q of linear functions g_j, j = 1, 2, ..., q, on R^n, and real numbers b_j, j = 1, 2, ..., q, such that

X = {x ∈ R^n : g_j(x) ≤ b_j, j = 1, 2, ..., q}.

Let Z ⊆ R^{n+p} be defined as the set of all solutions (x, y) to the system of linear inequalities (4.14)-(4.16) given by

f_j(x) ≤ y_j, j = 1, 2, ..., p,    (4.14)
y_j ≤ ŷ_j, j = 1, 2, ..., p,    (4.15)
g_j(x) ≤ b_j, j = 1, 2, ..., q.    (4.16)

Then, by definition, Z is a polyhedron in R^{n+p}. Let A be the p × (n + p) matrix whose first n columns each equal 0 ∈ R^p and whose last p columns together form the p × p identity matrix. Then, from (4.1) and the definition of Z, Y^≤ = AZ. From Rockafellar (1970, Theorem 19.3), Y^≤ is a polyhedron in R^p.

In convex cases of problem (P_Y≤) (and thus, in polyhedral cases as well), certain locations within Y^≤ for seeking global optimal solutions can be specified. For instance, we have the following result.

Theorem 4.3.2. Suppose that problem (P_Y≤) satisfies the conditions for the convex case. Then: (a) Any global optimal solution for problem (P_Y≤) belongs to the boundary of Y^≤. (b) If Y^≤ is closed and contains at least one extreme point, then there exists at least one global optimal solution for problem (P_Y≤) that is an extreme point of Y^≤.

Proof. Assume that Y^≤, in addition to being a nonempty subset of R^p_>, is a convex set, i.e., that we have the convex case for problem (P_Y≤). Then, from Theorem 4.2.1, problem (P_Y≤) has at least one global optimal solution.

(a) To show this part of the theorem, let y* denote an arbitrary global optimal solution to problem (P_Y≤). Suppose that y* is not on the boundary of Y^≤. By the choice of y* and since Y^≤ is a convex set, Y^≤ has a nonempty interior. Therefore, y* must belong to the interior of Y^≤. From (4.1), this implies that for some x̄ ∈ X, f(x̄) < y* must hold. By assumption, since x̄ ∈ X, f(x̄) > 0. Therefore, if we set ȳ = f(x̄), it follows that ȳ ∈ Y^≤ and

g(ȳ) = Π_{j=1}^p f_j(x̄) < Π_{j=1}^p y*_j = g(y*).

From (4.2), this contradicts the global optimality of y* in problem (P_Y≤). Therefore, y* must belong to the boundary of Y^≤.

(b) From the discussion in Section 3.2, since Y^≤ is a nonempty convex set and, for each j = 1, 2, ..., p, the function h_j(y) = y_j is positive and concave on Y^≤, the global optimal solution set for problem (P_Y≤) is identical to the global optimal solution set for the problem

(P̂_Y≤)    min ĝ(y), s.t. y ∈ Y^≤,

where, for each y ∈ Y^≤, ĝ : Y^≤ → R is the concave function defined by

ĝ(y) = [Π_{j=1}^p y_j]^{1/p}.

Since Y^≤ is a nonempty, closed convex set with at least one extreme point, from Rockafellar (1970, Corollary 18.5.3), it is easy to see that Y^≤ can contain no lines. Furthermore, since problem (P_Y≤) has at least one global optimal solution, problem (P̂_Y≤)

also has at least one global optimal solution. By Rockafellar (1970, Corollary 32.3.1), since ĝ is a concave function on Y^≤, the latter two statements imply that problem (P̂_Y≤) has at least one global optimal solution y* that is an extreme point of Y^≤. Because the optimal solution sets of problems (P_Y≤) and (P̂_Y≤) coincide, this completes the proof.

Suppose that Y^≤ is a nonempty, closed convex subset of R^p_> and that Y^≤ contains at least one extreme point. Then, from Theorem 4.3.2, there will exist at least one global optimal solution for problem (P_Y≤) that is an extreme point of Y^≤, and all global optimal solutions for problem (P_Y≤) will lie on the boundary of Y^≤. Neither of these properties, however, is necessarily shared by the decision set-based problem (P_x) whose outcome-space reformulation yields problem (P_Y≤). The following example demonstrates this.

Example 4.3.1. Consider an instance of problem (P_x) in which X is a nonempty, convex set and, for each j = 1, 2, f_j is a convex, positively-valued function on X. Then, by Theorem 4.3.1, the problem (P_Y≤) obtained by formulating the outcome-space version of problem (P_x) is guaranteed to satisfy the conditions of the convex case for problem (P_Y≤). Furthermore, it is not difficult to show, in this case, that Y^≤ is compact. Thus, Y^≤ is closed and contains at

least one extreme point. It is easy to see that the unique global optimal solution to problem (P_Y≤) is (y*)^T = (1, 1), which, as guaranteed by Theorem 4.3.2, is an extreme point of Y^≤ (and is thus on the boundary of Y^≤). On the other hand, the only global optimal solution to problem (P_x) is (x*)^T = (1, 2), yet x* is neither on the boundary of X nor is it an extreme point of X.

To present the next result, we need to define two types of functions.

Definition 4.3.1. Let Z ⊆ R^n be a nonempty convex set, and let h : Z → R. The function h is said to be quasiconcave on Z when, for each z^1, z^2 ∈ Z and λ ∈ R such that 0 < λ < 1,

h(λz^1 + (1 − λ)z^2) ≥ min{h(z^1), h(z^2)}.

Definition 4.3.2. Let Z ⊆ R^n be a nonempty convex set contained in an open set W ⊆ R^n, and let h : W → R be differentiable on Z. The function h is said to be strictly pseudoconcave on Z when, for each pair of distinct points z^1, z^2 ∈ Z, ⟨∇h(z^1), z^2 − z^1⟩ ≤ 0 implies that h(z^2) < h(z^1).

Theorem 4.3.3. Suppose that problem (P_Y≤) satisfies the conditions for the convex case. Then, in this problem, g is a strictly pseudoconcave function over the convex set Y^≤.

Proof. The set Y^≤ is a convex set by definition of the convex case for problem (P_Y≤). To show that g is strictly pseudoconcave over Y^≤, notice first that, by (4.2), g can be considered to be well defined over the open set R^p_>. Also notice that g is differentiable over R^p_> and, thus, over Y^≤ ⊆ R^p_>. Suppose now that y^1 and y^2 are distinct points in Y^≤ that satisfy ⟨∇g(y^1), y^2 − y^1⟩ ≤ 0. Then, from (4.2), we obtain

0 ≥ ⟨∇g(y^1), y^2 − y^1⟩ = Σ_{k=1}^p [Π_{j≠k} y^1_j](y^2_k − y^1_k)
  = Σ_{k=1}^p [Π_{j≠k} y^1_j] y^2_k − p g(y^1).    (4.17)

By multiplying both sides of (4.17) by (1/p) and rearranging, we obtain that

g(y^1) ≥ g(y^1)(1/p) Σ_{k=1}^p (y^2_k/y^1_k).    (4.18)

From Lemma 4.2.1,

g(y^1)(1/p) Σ_{k=1}^p (y^2_k/y^1_k) ≥ g(y^1)[g(y^2)/g(y^1)]^{1/p},    (4.19)

with equality holding iff, for some M > 0, y^2_k = M y^1_k, k = 1, 2, ..., p. There are two cases to consider.

Case (i): There is no M > 0 such that y^2_k = M y^1_k, k = 1, 2, ..., p. Then, in (4.19), strict inequality holds, so that, from (4.18) and (4.19),

g(y^1) > g(y^1)[g(y^2)/g(y^1)]^{1/p}.

Since g(y^1) > 0, this implies that g(y^2) < g(y^1).

Case (ii): For some M > 0, y^2_k = M y^1_k, k = 1, 2, ..., p. If we choose such an M, then (4.19) holds as an equality. Thus, from (4.19) and the choice of M, we obtain that

g(y^1)(1/p) Σ_{k=1}^p (y^2_k/y^1_k) = g(y^1)[g(y^2)/g(y^1)]^{1/p}    (4.20)

and that

g(y^2) = M^p g(y^1),    (4.21)

respectively. Since g(y^1) > 0, together (4.18), (4.20), and (4.21) imply that

g(y^1) ≥ g(y^1) M.

Dividing through by g(y^1) > 0 yields M ≤ 1. Notice that M ≠ 1, since, by assumption, y^1 and y^2 are distinct. Therefore, M < 1. By (4.21), since g(y^1), g(y^2) > 0, this implies that g(y^2) < g(y^1), and the proof is complete.

Remark 4.3.1. Theorem 4.3.3 justifies and strengthens the claim of Sniedovich and Findlay (1995, p. 317) that when Y^≤ is a convex subset of R^p_>, g : Y^≤ → R defined by (4.2) is differentiable and pseudoconcave on Y^≤.
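The defining condition of Theorem 4.3.3 can be spot-checked numerically. The snippet below (our own test, with the positive orthant standing in for a convex Y^≤ ⊆ R^p_>) samples random pairs and verifies that ⟨∇g(y^1), y^2 − y^1⟩ ≤ 0 forces g(y^2) < g(y^1):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(y):
    return y.prod()

def grad_g(y):
    return g(y) / y            # valid since all coordinates are positive

# Spot-check strict pseudoconcavity of g on the positive orthant.
for _ in range(10_000):
    y1, y2 = rng.uniform(0.1, 10.0, size=(2, 4))
    if grad_g(y1) @ (y2 - y1) <= 0 and not np.allclose(y1, y2):
        assert g(y2) < g(y1)
```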


From Theorem 4.3.3, in the convex case, problem (P_Y≤) is a global optimization problem involving the minimization of a strictly pseudoconcave function over a convex set Y^≤. Therefore, as in the general case, multiple local optimal solutions for problem (P_Y≤) will generally exist that are not globally optimal.

From Theorem 3.2.1, we know that when Y^≤ is a nonempty, convex subset of R^p_>, the function ĝ : Y^≤ → R defined, as in the proof of Theorem 4.3.2, by

ĝ(y) = [g(y)]^{1/p}    (4.22)

is concave, where g : Y^≤ → R is given by (4.2). By the next result, when the domain of ĝ is restricted to an appropriate subset of Y^≤, a stronger statement can be made.

Theorem 4.3.4. Assume that Y^≤ is a nonempty, compact, convex subset of R^p_>. For any a ∈ R^p and b ∈ R such that a > 0 and b > 0, let Z(a, b) = Y^≤ ∩ {y ∈ R^p : ⟨a, y⟩ = b}. Then ĝ : Z(a, b) → R defined for each y ∈ Z(a, b) by (4.22) is a strictly concave function for any a ∈ R^p and b ∈ R such that a > 0 and b > 0.

Proof. Assume that y^1, y^2 ∈ Z(a, b) and y^1 ≠ y^2, where a ∈ R^p, b ∈ R, a > 0, and b > 0. Since Z(a, b) is an intersection of two convex sets, it is itself a convex set. Therefore, if we choose λ ∈ R such that 0 < λ < 1, then z := λy^1 + (1 − λ)y^2 ∈ Z(a, b). Also, by (4.2) and (4.22),

ĝ(z) = Π_{j=1}^p [λy^1_j + (1 − λ)y^2_j]^{1/p}.    (4.23)

From Polya and Szego (1972),

Π_{j=1}^p [λy^1_j + (1 − λ)y^2_j]^{1/p} ≥ λ[Π_{j=1}^p y^1_j]^{1/p} + (1 − λ)[Π_{j=1}^p y^2_j]^{1/p},    (4.24)

with equality holding iff λy^1_j = K(1 − λ)y^2_j, j = 1, 2, ..., p, for some positive constant K. Since

λ[Π_{j=1}^p y^1_j]^{1/p} = λ ĝ(y^1)

and

(1 − λ)[Π_{j=1}^p y^2_j]^{1/p} = (1 − λ) ĝ(y^2),

(4.23) and (4.24) will imply the desired result if we can show that no K > 0 exists such that

λy^1_j = K(1 − λ)y^2_j, j = 1, 2, ..., p.    (4.25)

Notice that since y^1 ≠ y^2, K̂ := [λ/(1 − λ)] does not satisfy (4.25). Suppose, to the contrary, that for some K > 0, (4.25) is satisfied. Then, from (4.25), it follows that

y^1 = K[(1 − λ)/λ] y^2.    (4.26)

Since y^1, y^2 ∈ Z(a, b),

⟨a, y^1⟩ = ⟨a, y^2⟩ = b.    (4.27)

Substituting for y^1 in (4.27) via (4.26), we obtain

K[(1 − λ)/λ]⟨a, y^2⟩ = ⟨a, y^2⟩ = b.

Solving here for K, we obtain that K = [λ/(1 − λ)]. Since K = K̂ = [λ/(1 − λ)] does not satisfy (4.25), this contradiction concludes the proof.

It is important to notice that the counterpart of Theorem 4.3.4 in the decision space does not hold, even in the polyhedral case. In particular, suppose that X ⊆ R^n is a nonempty, compact polyhedron and, for each j = 1, 2, ..., p, that there exists a c^j ∈ R^n such that f_j(x) = ⟨c^j, x⟩ > 0 for all x ∈ X. Then, although the function h : X → R defined for each x ∈ X by

h(x) = [Π_{j=1}^p ⟨c^j, x⟩]^{1/p}    (4.28)

is concave (see Theorem 3.2.1), the function h : X(a, b) → R need not be strictly concave, where a ∈ R^p, b ∈ R, a > 0, b > 0, and

X(a, b) = {x ∈ X : Σ_{j=1}^p a_j ⟨c^j, x⟩ = b}.

The following example illustrates this observation.

Example 4.3.2. Let f_j(x_1, x_2) = ⟨(1, 1), (x_1, x_2)⟩, j = 1, 2, and let X be a nonempty, compact polyhedron in R^2 on which f_1 and f_2 are positive. Then, for each j = 1, 2, f_j is positive and linear on X. As guaranteed by Theorem 3.2.1, h : X → R, which, by (4.28), is given by

h(x_1, x_2) = [(x_1 + x_2)^2]^{1/2} = x_1 + x_2,

is concave. However, if, for example, a_1 = a_2 = 1 and b = 4, then h is not strictly concave on X(a, b).

Consider now problem (P_Y≤) when the conditions of the polyhedral case hold. Assume also that Y^≤ is a compact set and that ȳ ∈ Y^≤. For algorithmic purposes, it may be quite useful in this case to develop tools for finding local optimal solutions for problem (P_Y≤). These tools could then potentially be used to construct global solution algorithms for the problem that repeatedly move from a local optimal solution to an improved local optimal solution until a global optimal solution is found. The remaining results in this section are motivated in part by the desire to find such tools.

Notice that in the polyhedral case, the optimization problem in (4.9) is a linear program, given by

(LP)    min (1/p) Σ_{j=1}^p (y_j/ȳ_j), s.t. y ∈ Y^≤.

Problem (LP) will have an optimal solution y* that can be found, for instance, by the simplex method. Since ȳ ∈ Y^≤, the minimum value v_min in problem (LP) satisfies v_min ≤ 1.0. As a result, there are three possible cases for problem (LP). First, v_min < 1.0 may hold. Second, v_min = 1.0 may hold, with ȳ being the unique optimal solution to problem (LP). Third, v_min = 1.0 may hold, with problem (LP) having multiple optimal solutions.

In the first case, from Theorem 4.2.2, it follows that g(y*) < g(ȳ), where y* is any optimal solution to problem (LP), so that a more attractive feasible solution y* to

problem (P_Y≤) than ȳ has been found. To analyze the second case, we need the following two definitions and a lemma.

Definition 4.3.3. A point ȳ ∈ Y^≤ is a strict local optimal solution for problem (P_Y≤) when there exists an ε > 0 such that, for each y ∈ Y^≤ for which y ≠ ȳ and ‖y − ȳ‖ < ε, g(y) > g(ȳ).

Definition 4.3.4. Let Z be a nonempty convex set in R^n, and let h : Z → R. The function h is said to be strongly quasiconcave on Z when, for each z^1, z^2 ∈ Z with z^1 ≠ z^2, we have, for each λ such that 0 < λ < 1,

h(λz^1 + (1 − λ)z^2) > min{h(z^1), h(z^2)}.

Lemma 4.3.1. Let Z be a nonempty convex set in R^n, let h : Z → R be strongly quasiconcave on Z, and let z^0, z^1, ..., z^k ∈ Z. Then, for any z in the convex hull of {z^0, z^1, ..., z^k} such that z ≠ z^i for every i = 0, 1, ..., k,

h(z) > min{h(z^i) : i = 0, 1, ..., k}.

Theorem 4.3.5. Assume that problem (P_Y≤) satisfies the conditions for the polyhedral case, that Y^≤ is compact, that ȳ ∈ Y^≤,
91 and that y = y i s th e unique optimal s olution to problem ( LP ) Then y i s a strict local optimal solution f o r problem ( P r i ). Proof Sinc e g ( y ) > 0 and y = y i s the unique optimal solution to the problem y = y must al s o be the unique optimal solution to the problem p mi~ g ( y ) (1 / p) L ( y j I y j ) } eY J = I Therefore, by Lemm a 4 2.1, y = y i s the unique optimal solution to the problem min(l / p ) ( V g( y ), y) yeY 1 Since (1 / p) > 0 a nd ( V g ( y ) y) i s a con s tant thi s implies that y = y i s the unique optimal solutjon to th e problem min ( V g ( y ) y y). y eY ~ ( 4.29 ) Therefore the optimal value of problem ( 4.29 ) i s 0 and for all ye Y ~ s u c h that y :;:. y, ( V g ( y ), yy) > 0 ( 4 30 ) Let d 1 d 2 . d k repre s ent th e directions of the edges of y s emanating from the extreme point y o ff & From ( 4 30 ), for all i = 1 2 ... k. By Theor e m 4.1.2 in B az araa Sherali, and Shetty ( 1993 ) this implies that there exi s t po s iti ve real s 8 ; i = l 2 ... k s uch that ( 4.31)

PAGE 99

92 for each A ; E (o, o j ) Let 8 = 1 / 2 min { o i l i = l 2, ... k }, and consider the points y and &l l &l 2 &l k y+ y+ .. y+ Then by definition of 8 and (4. 31 ), g ( y +&Ji)> g(y) ( 4.32) for each i = 1 2, . k. Let z be any element of the convex hull of y, y + &1 ;, i = 1 2, ... k such that z :t y and, for each i = 1 2, ... k, z :t y +&J i. Since g is a strictly pseudoconcave function on Y ~ it is al s o a strongly quasiconcave function on Y ~ (Bazaraa, Sherali Shetty 1993 ). As a result by Lemma 4.3.1, g(z ) > min {g( y1 g(y +&Li), i = 1, 2, .. ,k }. ( 4.33 ) From (4.32) and ( 4 33 ), g(z)> g(y). Since 8 > 0, this implies that there exists an e > 0 sufficiently small so that if ZE Y s lz y <, and z :t y, then g(z)> g(y). Under the assumptions of Theorem 4.3.5 if vmin = 1.0 but y = y i s one of two or more optimal solutions to problem ( LP ), then y need not be a strict local optimal solution for problem ( P r ~ ). The following example illustrates this point. Example 4.3 3. Let and let yr= (4, 4 f. Then y < i s a nonempty compact polyhedron in R ; and the assumptions of Theor e m 4 .3. 5 are s atisfied In this case, y E y s and y i s an optimal solution to problem ( LP). However, since (y 8 ) = (4+ 8 4-of E y s and g(y 8 )< g(y) for all values of 8 such that O < 8 3, by Definition 4.3.3, y is not a strict local optimal

PAGE 100

93 solution for problem (Pr s ). (In fact, y is not even a local optimal solution for problem (Py f ). ) Notice that problem (LP) in this case has multiple optimal solutions. In the third case of problem (LP), vmin = 1.0 and problem (LP) has multiple optimal solutions. In this case, by the next result, as in the first case, an improved feasible solution for problem (Pr s ) is at hand. The proof of this result relies crucially on Theorem 4.3.4. Theorem 4.3.6. Assume that problem (Pr f ) satisfies the conditions for the polyhedral case, and that Y ~ is compact. Suppose that y is an optimal solution for problem (LP), and suppose that problem (LP) has multiple optimal solutions. Then, for any y :;= y that is an optimal solution for problem (LP), g(y )< g(y) must hold. Proof. Let y :;= y be an optimal solution to problem (LP) Then, since g(y)> 0, y is also an optimal solution for the problem p min g(y) (1/ p )L (y j /yj ), s.t. y E Y ~ j=I By Lemma 4.2.1, since y E y s this implies that y is an optimal solution to the problem min (1/ p )(V g(y ), y ), s.t. y E Y ~ Since (1/ p )(V g (y ), y) is a fixed number, it follows that y is an optimal solution to the problem min (1/ p )(V g(y), yy), s.t. y E Y ~ (4.34) By assumption, y is an optimal solution for problem (LP). Therefore, the optimal value of problem (LP) equals 1.0. From Theorem 4.2.3, this implies that the optimal value of

PAGE 101

94 problem (4.34) equals 0. As a result, since y* is an optimal solution for problem (4.34), (v g(y), y y) = 0. By the choice of y*, it follows that y*(LP)k{ye Yj(Vg(y),y-y)=o}, (4.35) where y (LP) denotes the optimal solution set for problem (LP) From Theorem 4.3.4, since V g(y) > 0 and (V g(y), y) > 0, the function g defined by (4.22) over the set y := (y ~ n {ye RP (V g(y), y) = (V g(y), y) }) is strictly concave. Notice, in addition, that g is differentiable on Rt. By ( 4.35), y and y* both belong to r. From Bazaraa, Sherali, and Shetty (1993), since y* :t y, the latter three sentences together imply that From (2) and ( 4.22), this implies that ~(y )]1 /p < [g(y)]l /p + (1/ p )i:k(y)l / p /yj](y; yj) j = I = [g(y)]l / p + k(y)l / p Ip ]t[(y;yj)/yj] j = I = [g(y)]l /p + [g(y)][ ( 1 p)/p](1/ P )(v g(y), y Y) ( _)l / p = g y where the last equation follows from the fact that y e y. As a result, g (y ) < g (y). Remark 4.3.2. Suppose that problem (Pr ~ ) satisfies the conditions for the polyhedral case and that Y 5: is compact. Then, by using Theorem 4.3.6 and the discussion preceding it, a

PAGE 102

95 finite guaranteed method can be described for finding a strict local optimal extreme point solution for problem ( PY ~ ) For instance, one such method is as follows Algorithm 4.3 1. Strict Local Optimal Extreme Point Solution Search Initialization Step. Find an initial extreme point y of Y ~ Step 1. Find the optimal value vmin for linear program (LP). If vmin < 1.0, go to Step 2. If vmin = 1.0 go to Step 3. Step 2. Find any optimal extreme point solution y for problem (LP). Set y = y and return to Step 1. Step 3. If y i s the unique optimal solution for problem (LP), then stop: The point y is a strict local optimal solution for problem ( Py ~ ) If problem (LP) has multiple optimal solution s, then continue. Step 4. Find any optimal extreme point solution y "# y for problem (LP). Set y = y and return to Step 1 The method is finite because Y ~ must have a finite number of extreme points. The methods of Ben s on ( 1998d ) and Benson and Sun ( 1998) can be shown to be suitable for executing the initialization step and for solving the linear program (LP) in Steps 1, 2 and 4. Notice that the counterpart to this method for the decision set-based problem (Px) is not guaranteed to succeed This is shown by the following example. Example 4.3.4. Let X = {(x 1 ,x 2 )r E R 2 0 5~x 1 x 2 ~3.0, x 1 + x 2 ~2.0], and, for each x E R 2 let

PAGE 103

96 J 1 ( x )=x 1 +x 2 J 2 (x) = 4x 1 +4x 2 Then X is a nonempty, compact polyhedron, and, for each j = l 2 J i ( x ) is a linear function with po s itiv e values for all x EX If we generate the initial extreme point (x 1 f = (1 5, 0 5) of X for example then the linear program 2 min (1 / 2 )L [r i ( x ) / J i (x )] s.t. x E X j=I has optimal value 1 0 and it ha s exactly two extreme point optimal solution s These are x 1 and (x 2 r = (0 5 l 5) Sin c e f j ( x 1 )= f j ( x 2 ), j = l 2 the counterpart procedure to the above method w o uld in thi s case cycle without tennination between g enerating the extreme point x 1 and generating the extreme point x 2 Neither of these extreme points is a strict local optimal s olution for problem (P x ), where the definition of a s trict local optimal s olution x f o r probl e m ( P x) i s g iven by Definition 4.3.3 with x, X ( P x ), x and IT1 i repla c in g y, Y ~, ( P r~ ), y and g, re s pectively j = I 4.4. Discussion The analy s i s o f the out c omes pace problem fortnulation ( P r i ) of the multiplicative programming problem ( P x) yield s s everal results. The key o ne s are a s follows. First sin ce globally s olving problem ( P r i ) al s o essentially globally s olves problem ( P x), and s in c e Y ~ ge nerally lie s in a significantly smaller space than X there could be great comput a tional ga in to b e d e rived by con s tructing global opt i mization

PAGE 104

97 algorithms for problem (Px) that work directly 011 problem (Py ~ ) instead of on problem (Px). Second, the potential to create global solution algorithms for problem (Pr ~ ) is quite high. For instance, in the convex case where Y ~ is closed and has at least one extreme point, the analysis has shown that problem (PY ~ ) possesses at least one global optimal solution that is an extreme point of Y ~ This result could potentially allow researchers to create algorithms for solving problem (PY ~ ) that concentrate on searching among the extreme points of y < in ways similar to those used in existing global optimization algorithms for other non convex programming problems (Horst and Tuy 1993). Third, it appears potentially possible to construct global solution algorithms for problem (PY ~ ) that are based, at least in part, upon local optimal solution searches. Indeed, the analysis shows, for example, the potential to create search mechanisms for finding strict local optimal solutions for problem (Pr ~ ) in the polyhedral case if Y ~ is bounded Combined with ideas from global optimization such as relief indicator functions or cutting planes (Horst and Tuy 1993), this suggests that global solution algorithms for problem (PY ~ ) might be possible wherein successive local optimal solutions of smaller and smaller objective function values are found until a global optimal solution is found and the search tenninates.

PAGE 105

CHAPTERS AN OUTCOME-SPACE CUTTING-PLANE ALGORITHM FOR LINEAR MULTIPLICATIVE PROGRAMMING 5.1. Introduction The linear multiplicative programming problem may be written p ( P x) minf(x) = Il ( c j, x ), s.t. xeX, j=l where X k R n i s a nonempty compact polyhedron p 2 is an integer, and for each j=l,2 .. p c jE R n s ati s fie s (cj ,x )> O for all X E X. For each j = l 2 ... p let y j E R sati s fy and let y E R P denote the vector with jth entry equal to y j j = 1 2, ... p Let C denote the p x n matrix who s e jth row equal s c;, j = 1 2 ... p. Recall from Chapter 1 that one of the more direct reformulation s of problem ( P x) as an outcome-space problem is given by ( P r ~ ) min g ( y )= Ii yj, s .t y E y s, j=l where Y ~ = { y E R P C x y y for some x E X }. Closely related to this refor1r1ulation i s the problem 98

PAGE 106

99 p (Py) min g(y )= IJ yi, s.t. ye Y, j= I where Y={ye R P y=Cx forsomexe xJ. In this chapter we develop an outcome -s pace, cutting-plane algorithm for globally solving the linear multiplicative programming problem (Px ). To accomplish this, we use the framework of a pure cutting plane, decision set-based concave minimization method of Horst and Tuy ( 1993, pp 175-184). We show how to adapt this method to solving the outcome-space formulation ( P Y ~ ) of problem (Px) for a global, extreme point optimal solution. Because p i s almo s t always smaller than n, often by several orders of magnitude, we expect that potentially considerable computational savings could be obtained by using the new outcome-space, pure cutting plane algorithm in s tead of a decision set-based approach. As a further computational enhancement, we also show that for purposes of implementation, the mechanics of the outcome-space, cutting-plane algorithm can be applied to the s maller problem ( Pr) instead of to problem (PY ~ ). The next sec tion gives so me theoretical prerequisites that will help to present and justify the steps of the new algorithm. The new outcome-space, pure cutting plane algorithm for globally so lving problem ( P x) is presented and analyzed in Section 5 3. Section 5 4 shows that to further enhance computational efficiency, in practice the new algorithm can b e applied to the outcome-space reformulation (Py) instead of to problem ( PY ~ ). A sample problem is globally so lved with the new algorithm in Section 5 5, and some concluding remarks are g iven in the la s t section.

PAGE 107

100 5.2. Theoretical Prerequisites The outcome-space algorithm uses extreme point search and cutting planes to find a global optimal extreme point sol ution to problem (Pr ~ ) From this solution, a global optimal extreme point solution to the original multiplicative programming problem (Px) can be easily recovered, as we shall see. The approach of the new algorithm relies on several theoretical results. In this section, we review or develop the necessary results of this type. The theoretical prerequisites concern problems (Px) and (Pr ~ ) and their relationships to one another. Before presenting these prerequisites, we must give some preliminary definitions. Definition 5.2.1 Let W be an open set in R n that contains Z ~Rn, and let h: W R. The function h is called strictly pseudoconcave over Z when h is differentiable over Z and, for each pair of distinct points z 1 z 2 E Z, if (Vh(z 1 ), z 2 z 1 ) 0, then h(z 2 )< h(z 1 ). From Bazaraa, Sherali, and Shetty (1993), a strictly pseudoconcave function h: Z R defined over a convex set Z is both pseudoconcave and quasiconcave over Z. The converse to this statement, however, is not true. For details, see Bazaraa, Sherali, and Shetty (1993). Definition 5.2.2. (Bazaraa, Sherali, and Shetty 1993). A point y E Y~ is called a strict local optimal solution for problem (Pr ~ ) when there exists an > 0 such that for each y E Y$ for which y :t= y and yyj < g(y) > g(y). Each of the next five results either is taken directly from or follows easily from results in Chapters 3 and 4. For each result, the appropriate reference is given.

PAGE 108

101 Proposition 5.2.1 (Theorem 4.3.1). The feasible region Y ~ for problem (Py :; ) is a nonempty polyhedral subset of R t := {y E R P y > 0 J Proposition 5.2.2. (Proposition 3.2 2). Problem (Px) possesses a global optimal solution that is an extreme point of X. Theorem 5.2.1. (T heorem 4.3.2). Problem (Py s ) possesses a global optimal solution that is an extreme point of Y ~ Theorem 5.2.2. (Theorem 4 .2. 1 ). If y is a global optimal solution for problem (Py :; ), then any x E X such that ex ~ y is a global optimal solution for problem (Px ). Furthermore the global optimal values of problems (Px) and (Py:;) are equal. Theorem 5.2.3. (T heorem 4.3.3 ). The objective function g of problem (Py ) is a strictly pseudoconcave function over R t Notice also in Proposition 5 .2 .1 that Y ~ is full dimensional and bounded. Additionally notice in Theorem 5.2.2 that given y* as defined there, for any x E X such that ex~ y, x not only is a global optimal solution for problem (Px ), but it also satisfies ex= y *. Together with Theorem 5.2.2 and an observation given in Benson and Sun (1998), this implies that if y* is a global optimal solution for problem (Pr s ), then the linear program min fu ; i= l s. t. ex u = y *, u 0, XE X,

PAGE 109

102 will have at least one optimal solution, and for any optimal basic solution (x, u )= (x ,0 )e Rn +p to this linear program, x is an extreme point global optimal solution for problem (Px ). Thus, given a global optimal solution for problem (Py l! ), a global optimal extreme point solution for problem (Px) can be easily recovered by solving a single linear program for a basic optimal solution. Based upon Theorem 5 2.1, the outcome-space, cutting-plane algorithm to be presented confines its search to extreme points of the nonempty, compact polyhedron Y ~ In fact, the algorithm searches only within a certain subset of the set of extreme points of Y ~. This is justified in part by the following result Theorem 5.2.4. Any point that is a global optimal solution for problem (Py ~ ) must also be an extreme point of y f and a strict local optimal solution for problem (PY ~ ). Proof. Either (i) problem (PY ~ ) has a unique global optimal solution or (ii) problem (P r ~ ) has multiple global optimal solutions. Case (i): Problem (Pr ~ ) has a unique global optimal solution. Let y represent this solution Then, by Theorem 5.2.1, y is an extreme point of Y :;;; Since y is the unique global optimal solution for problem (Py s ), g(y )> g(y) for all ye y s such that y i= y. Therefore, for all > 0 and for each y E Y :;;; such that y i= y and I yYI < g(y )> g(y). By Definition 5.2.2, this completes the proof for this case Case (ii): Problem (P y ~ ) has multiple global optimal solutions Let y be any nonextreme point of Y ~. Consider the linear programming problem

PAGE 110

    min (1/p) Σ_{j=1}^p (y_j / ȳ_j), s.t. y ∈ Y≤.   (5.1)

Notice that since ȳ ∈ Y≤, the optimal value v of (5.1) satisfies v ≤ 1. Therefore, either v < 1 or v = 1. If v < 1, then, from Theorem 4.2.2, there exists some y ∈ Y≤ such that g(y) < g(ȳ). Therefore, in this case, ȳ is not a global optimal solution for problem (P_Y≤). If v = 1, then, since ȳ is a nonextreme point of Y≤, the linear program (5.1) has multiple optimal solutions, including ȳ. In this case, by Theorem 4.3.6, for any y* ≠ ȳ that is an optimal solution for (5.1), g(y*) < g(ȳ) must hold. Therefore, in this case, ȳ is not a global optimal solution for problem (P_Y≤).

Since either v < 1 or v = 1, the arguments above imply that no nonextreme point of Y≤ can be a global optimal solution for problem (P_Y≤). Therefore, each of the global optimal solutions for problem (P_Y≤) must be an extreme point of Y≤.

Let ȳ be an extreme point optimal solution for problem (P_Y≤). Then, since the number of extreme points of Y≤ is finite (see Murty (1983), e.g.), there must exist an ε > 0 such that if ‖y − ȳ‖ < ε, y ≠ ȳ, and y ∈ Y≤, then y is not an extreme point of Y≤. Choose such an ε, and let y ≠ ȳ satisfy ‖y − ȳ‖ < ε, y ∈ Y≤. Then y is not an extreme point of Y≤. Since each global optimal solution for problem (P_Y≤) must be an extreme point of Y≤, this implies that g(y) > g(ȳ).
As a result, by Definition 5.2.2, ȳ is a strict local optimal solution for problem (P_Y≤), and the proof is complete.

Recall that a point z in a set Z ⊆ R^p is called an isolated point of Z when it is not a limit point of Z. From Theorem 5.2.4, we obtain the following result as an immediate consequence.

Corollary 5.2.1. Every global optimal solution for problem (P_Y≤) is an isolated point of the set of global optimal solutions for problem (P_Y≤).

To conclude this section, we state the following result. The proof of this result is immediate by definition.

Proposition 5.2.3. Let ȳ be an extreme point global optimal solution for problem (P_Y≤). Then, for each extreme point y ∈ Y≤ that is adjacent to ȳ, g(y) ≥ g(ȳ) must hold.

5.3. Outcome-Space, Cutting-Plane Algorithm

To adapt the approach of the decision set-based Horst-Tuy concave minimization algorithm (Horst and Tuy (1993), pp. 175-184) to solving the outcome-space formulation (P_Y≤) of problem (P_X), three tasks need to be iteratively executed. The first is a certain strict local optimal solution search, the second involves the construction of a cutting plane, and the third is a termination test. Before giving a formal statement of the new outcome-space algorithm, we will describe how the algorithm executes each of these three tasks.
5.3.1. Strict Local Optimal Solution Search

Motivated by Proposition 5.2.3 and Theorems 5.2.1 and 5.2.4, the strict local optimal solution search seeks an extreme point ȳ of Y≤ that is a strict local optimal solution for problem (P_Y≤) and satisfies g(ȳ) ≤ g(y) for each extreme point y of Y≤ that is adjacent to ȳ. This search relies heavily upon repeatedly solving the linear program

    min (1/p) Σ_{j=1}^p (y_j / ȳ_j), s.t. y ∈ Y≤,   (5.2)

as ȳ is set equal to the values of various extreme points of Y≤. The linear program (5.2) was first proposed and studied in Chapter 4. The next three results for problem (5.2) follow immediately from Chapter 4. For each result, the appropriate reference is given.

Theorem 5.3.1.1 (Theorem 4.2.2). Assume that ȳ ∈ Y≤. If the optimal value of linear program (5.2) is less than 1.0, then g(y*) < g(ȳ) for any optimal solution y* of problem (5.2).

Theorem 5.3.1.2 (Theorem 4.3.5). Assume that ȳ ∈ Y≤. If ȳ is the unique optimal solution to linear program (5.2), then ȳ is a strict local optimal solution for problem (P_Y≤).

Theorem 5.3.1.3 (Theorem 4.3.6). Assume that ȳ ∈ Y≤. If ȳ is an optimal solution for linear program (5.2) and this linear program has multiple optimal solutions, then for any optimal solution y* ≠ ȳ for problem (5.2), g(y*) < g(ȳ) must hold.
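As an illustration of how (5.2) might be posed to an LP solver, suppose an inequality representation A_Y y ≤ b_Y, y ≥ 0 of the polytope Y≤ is at hand. The dissertation does not supply such a representation explicitly, so this, and all names below, are our simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_52(A_Y, b_Y, y_bar):
    """Solve (5.2): min (1/p) * sum_j (y_j / ybar_j)  s.t.  y in Y<=,
    where Y<= is assumed given by A_Y y <= b_Y, y >= 0."""
    y_bar = np.asarray(y_bar, dtype=float)
    p = y_bar.size
    cost = 1.0 / (p * y_bar)             # objective coefficient 1/(p*ybar_j)
    res = linprog(cost, A_ub=A_Y, b_ub=b_Y, bounds=(0, None), method="highs")
    return res.x, res.fun                # optimal point y* and value v <= 1
```

By Theorem 5.3.1.1, a returned value v < 1 certifies that the optimal point improves on ȳ.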
From these results, we obtain the following procedure for finding a strict local optimal extreme point solution ȳ ∈ Y≤ for problem (P_Y≤) that satisfies g(ȳ) ≤ g(y) for each extreme point neighbor y to ȳ in Y≤.

Algorithm 5.3.1.1. Strict Local Optimal Solution Search Procedure.

Step 1. Find an initial extreme point ȳ of Y≤.

Step 2. Compute the optimal value v for linear program (5.2). If v < 1.0, go to Step 3. If v = 1.0, go to Step 4.

Step 3. Determine any optimal extreme point solution y* for problem (5.2). Set ȳ = y* and return to Step 2.

Step 4. If problem (5.2) has multiple optimal solutions, go to Step 5. Otherwise, determine whether or not g(y) < g(ȳ) for some extreme point neighbor y to ȳ in Y≤. If so, find any such neighbor y, set ȳ = y, and return to Step 2. If not, then stop: The point ȳ is a strict local optimal extreme point solution for problem (P_Y≤) that satisfies g(ȳ) ≤ g(y) for each extreme point neighbor y to ȳ in Y≤.

Step 5. Find any optimal extreme point solution y* for problem (5.2) that is distinct from ȳ. Set ȳ = y* and return to Step 2.

By using Proposition 5.2.1 and Theorems 5.3.1.1-5.3.1.3, it is easy to see that this search procedure is guaranteed to be finite and to find a strict local optimal extreme point solution ȳ for problem (P_Y≤) such that g(ȳ) ≤ g(y) for each extreme point neighbor y to ȳ in Y≤.
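The whole search can be organized as a simple loop around (5.2). The sketch below is schematic only: solve_lp_52_at is assumed to report whether the optimum is unique (returning, when it is not, an optimum distinct from the current point), and neighbors is assumed to enumerate the extreme points adjacent to a given extreme point (in Section 5.4 such mechanics come from Benson and Sun (1998)). All names are ours.

```python
def strict_local_search(y, g, solve_lp_52_at, neighbors, tol=1e-9):
    """Algorithm 5.3.1.1 (sketch): return an extreme point y of Y<= that is a
    strict local optimum with g(y) <= g(w) for every adjacent extreme point w."""
    while True:
        y_star, v, unique = solve_lp_52_at(y)   # Step 2: solve (5.2) at y
        if v < 1.0 - tol:
            y = y_star                          # Step 3: strictly better point
        elif not unique:
            y = y_star                          # Step 5: optimum distinct from y
        else:                                   # Step 4: check the neighbors
            better = [w for w in neighbors(y) if g(w) < g(y)]
            if not better:
                return y                        # strict local optimum found
            y = better[0]
```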
5.3.2. Cutting Plane Construction

Let ȳ be a nondegenerate extreme point of Y≤ that is a strict local optimal solution for problem (P_Y≤) and satisfies g(ȳ) ≤ g(y) for each extreme point neighbor y to ȳ in Y≤, and let γ ∈ R satisfy γ ≤ g(ȳ). Given ȳ, the new algorithm will seek to find a vector π ∈ R^p such that

    g(y) ≥ γ for all y ∈ Y≤ satisfying ⟨π, y − ȳ⟩ ≤ 1.   (5.3)

When π satisfies (5.3), the linear inequality ⟨π, y − ȳ⟩ ≥ 1 is called a γ-valid cutting plane (or γ-valid cut) for problem (P_Y≤) (Horst and Tuy 1993). Since ȳ is a nondegenerate, strict local optimal extreme point solution for problem (P_Y≤) that satisfies g(y) ≥ g(ȳ) for each neighboring extreme point y to ȳ in Y≤, since Y≤ has full dimension, and since g is strictly pseudoconcave on R^p_> (cf. Theorem 5.2.3), it follows that an approach used by Horst and Tuy (1993, pp. 85-91) can be used to find a vector π ∈ R^p that yields a γ-valid cut. The following procedure applies this approach to problem (P_Y≤).

Algorithm 5.3.2.1. Procedure for Cutting Plane Construction.

Step 1. Determine the p neighboring extreme points y^i, i = 1, 2, ..., p, to ȳ in Y≤, and let u^i = y^i − ȳ, i = 1, 2, ..., p.

Step 2. For each i ∈ {1, 2, ..., p} such that

    sup {θ > 0 | g(ȳ + θu^i) ≥ γ} < +∞   (5.4)
holds, set z^i = ȳ + α_i u^i, where α_i equals the value of the supremum in (5.4). For each i ∈ {1, 2, ..., p} such that (5.4) does not hold, let z^i = ȳ + α_i u^i, where α_i > 0 is a finite number chosen so that g(z^i) ≥ γ.

Step 3. Compute π^T = e^T Q^{-1}, where e ∈ R^p is the column vector of p ones and Q is the p × p matrix with column i equal to (z^i − ȳ), i = 1, 2, ..., p, and stop: The linear inequality ⟨π, y − ȳ⟩ ≥ 1 is a γ-valid cutting plane for problem (P_Y≤).

Notice that in Step 1 of this procedure, since ȳ is nondegenerate, the p neighboring extreme points described there are guaranteed to exist. Furthermore, because Y≤ has full dimension, u^i, i = 1, 2, ..., p, are linearly independent (Murty (1983), pp. 138). Notice also in Step 2 that, since ȳ is a strict local optimal solution for problem (P_Y≤) for which g(y^i) ≥ g(ȳ) ≥ γ, i = 1, 2, ..., p, for each i = 1, 2, ..., p, either 0 < α_i < +∞ holds with α_i given by the supremum in (5.4), or a finite number α_i > 0 with g(ȳ + α_i u^i) ≥ γ exists, so that the points z^i, i = 1, 2, ..., p, are well defined.
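In the nondegenerate case, Step 3 amounts to a single linear solve. A minimal numerical sketch (all names ours):

```python
import numpy as np

def cut_coefficients(y_bar, Z):
    """Step 3 of Algorithm 5.3.2.1: Z has the points z^i as its columns.
    Returns pi such that <pi, y - y_bar> >= 1 is the gamma-valid cut."""
    Q = Z - np.asarray(y_bar).reshape(-1, 1)   # column i is z^i - y_bar
    e = np.ones(Q.shape[0])
    return np.linalg.solve(Q.T, e)             # pi^T = e^T Q^{-1}
```

On the data of the Section 5.5 example, Q = [[0.902777, 575.777777], [-7.222222, -8.109546]] reproduces π^0 ≈ (-0.0002138, -0.1384883).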
When ȳ is a degenerate extreme point of Y≤, the procedure must be modified. In this case, in Step 1 all s > p neighboring extreme points y^i, i = 1, 2, ..., s, to ȳ in Y≤ are determined, and the directions u^i = y^i − ȳ, i = 1, 2, ..., s, are formed. Steps 2 and 3 are then replaced by the following two steps, in which the vector π is obtained from the system of linear inequalities

    ⟨π, α_i u^i⟩ ≥ 1, i = 1, 2, ..., s.   (5.5)

Step 2'. For each i ∈ {1, 2, ..., s} such that (5.4) holds, set α_i equal to the value of the supremum in (5.4). For each i ∈ {1, 2, ..., s} such that (5.4) does not hold, let α_i > 0 be a finite number such that g(ȳ + α_i u^i) ≥ γ.

Step 3'. Let π be any basic solution to the system (5.5), and stop: The linear inequality ⟨π, y − ȳ⟩ ≥ 1 is a γ-valid cutting plane for problem (P_Y≤).

Since Y≤ has full dimension and ȳ is an extreme point of Y≤, by logic similar to that used in the proof of Lemma III.1 in Horst and Tuy (1993), it can be shown that in Step 3', (5.5) has at least one basic solution. Furthermore, by showing a result similar to Proposition III.1 in Horst and Tuy (1993), the modifications to the cutting plane construction procedure described above can be shown to be appropriate for generating a γ-valid cutting plane for problem (P_Y≤) when ȳ is degenerate.

5.3.3. Termination Test

The linear programming-based termination test for the outcome-space, cutting-plane algorithm is contained in the following result.

Proposition 5.3.3.1. Let ȳ be an extreme point of Y≤ that is a strict local optimal solution for problem (P_Y≤) and satisfies g(y) ≥ g(ȳ) for each extreme point neighbor y to ȳ in Y≤. Let

    ⟨π, y − ȳ⟩ ≥ 1
be a γ-valid cutting plane for problem (P_Y≤), where g(ȳ) ≥ γ and, for some extreme point y^0 of Y≤, γ = g(y^0). Then ⟨π, y − ȳ⟩ > 1 for all y ∈ Y≤ such that g(y) < γ. Hence, if

    1 ≥ max ⟨π, y − ȳ⟩, s.t. y ∈ Y≤,   (5.6)

then y^0 is a global optimal solution for problem (P_Y≤).

Proof. Since ⟨π, y − ȳ⟩ ≥ 1 is a γ-valid cutting plane for problem (P_Y≤), we know from (5.3) that g(y) ≥ γ holds for all y ∈ Y≤ such that ⟨π, y − ȳ⟩ ≤ 1. By contraposition, ⟨π, y − ȳ⟩ > 1 holds for all y ∈ Y≤ such that g(y) < γ. It is easy to see that this implies that if (5.6) holds, then y^0 is a global optimal solution for problem (P_Y≤).
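The test (5.6) is itself one linear program. A sketch under the same assumed inequality representation of Y≤ used earlier (names ours):

```python
import numpy as np
from scipy.optimize import linprog

def termination_test(A_Y, b_Y, pi, y_bar):
    """Check (5.6): maximize <pi, y - y_bar> over Y<=; a value of at most 1
    certifies that the incumbent y^0 is globally optimal for (P_Y<=)."""
    res = linprog(-np.asarray(pi), A_ub=A_Y, b_ub=b_Y, bounds=(0, None),
                  method="highs")
    value = -res.fun - np.dot(pi, y_bar)   # max <pi, y> minus <pi, y_bar>
    return value <= 1.0, res.x             # (stop?, basic maximizer)
```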
5.3.4. Outcome-Space, Cutting-Plane Algorithm

By incorporating the strict local optimal solution search procedure of Section 5.3.1, the cutting plane construction procedure of Section 5.3.2, and the termination test of Section 5.3.3 into the framework of a decision set-based, cutting plane method for concave minimization proposed by Horst and Tuy (1993, pp. 175-184), we obtain the following pure cutting plane algorithm for globally solving problem (P_Y≤). As shown in Section 5.2, from the global solution for problem (P_Y≤) obtained via this algorithm, a global optimal extreme point solution for problem (P_X) can be easily recovered by solving a single linear program.

Algorithm 5.3.4.1. Outcome-Space, Cutting-Plane Algorithm.

Initialization Step. By using the procedure in Section 5.3.1, find a strict local optimal extreme point solution y^0 for problem (P_Y≤) that satisfies g(y) ≥ g(y^0) for each extreme point neighbor y to y^0 in Y≤. Set γ = g(y^0), ȳ^0 = y^0, and Y^0 = Y≤. Go to Step 0.

Step k, k ≥ 0.

Step k.1. By using the procedure in Section 5.3.2, with Y≤ = Y^k, construct a γ-valid cut ⟨π^k, y − ȳ^k⟩ ≥ 1 for problem (P_Y≤) at ȳ^k.

Step k.2. Set π = π^k, ȳ = ȳ^k, and Y≤ = Y^k in the linear program in (5.6), and compute a basic optimal solution ỹ^k to this linear program. If ⟨π^k, ỹ^k − ȳ^k⟩ ≤ 1, stop: The point y^0 is a global optimal extreme point solution for problem (P_Y≤), and γ is the global optimal value for problem (P_Y≤). Otherwise, continue.

Step k.3. Let Y^{k+1} = Y^k ∩ {y ∈ R^p | ⟨π^k, y − ȳ^k⟩ ≥ 1}. Starting from ỹ^k, use the procedure in Section 5.3.1 to find a strict local optimal extreme point solution ȳ^{k+1} for the problem

    min g(y), s.t. y ∈ Y^{k+1},

that satisfies g(y) ≥ g(ȳ^{k+1}) for each extreme point neighbor y to ȳ^{k+1} in Y^{k+1}. If g(ȳ^{k+1}) < γ, then set γ = g(ȳ^{k+1}), y^0 = ȳ^{k+1}, and Y^0 = Y^{k+1}, and go to Step 0. Otherwise, go to Step k + 1.
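Read as code, the algorithm is a loop over the three tasks of Sections 5.3.1-5.3.3. The sketch below is a schematic restatement under our own naming; the callbacks (local search, cut construction, the test (5.6), and the polyhedron bookkeeping) are all assumed to be supplied and do not appear in the dissertation.

```python
def outcome_space_cutting_plane(local_search, build_cut, max_lp, add_cut, g):
    """Algorithm 5.3.4.1 (sketch): returns a global optimal extreme point
    solution y0 of (P_Y<=) together with the optimal value gamma."""
    y_bar = local_search()                  # initialization over Y^0 = Y<=
    y0, gamma = y_bar, g(y_bar)             # incumbent point and value
    while True:
        pi = build_cut(y_bar, gamma)        # Step k.1: gamma-valid cut at y_bar
        y_tilde, value = max_lp(pi, y_bar)  # Step k.2: solve (5.6)
        if value <= 1.0:
            return y0, gamma                # incumbent is globally optimal
        add_cut(pi, y_bar)                  # Step k.3: Y^{k+1} = Y^k with cut
        y_bar = local_search(start=y_tilde)
        if g(y_bar) < gamma:                # improvement: a new cycle begins
            y0, gamma = y_bar, g(y_bar)
```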
The following result and proof are guided in part by Theorem V.2 in Horst and Tuy (1993).

Theorem 5.3.4.1. The outcome-space, cutting-plane algorithm for problem (P_Y≤) is finite when the sequence {π^k} is bounded. If the algorithm stops, the point y^0 found by the algorithm is a global optimal extreme point solution for problem (P_Y≤).

Proof. The algorithm consists of a number of groups (or cycles) of steps. The beginning of a typical cycle of steps occurs when, for some k ≥ 0, in Step k.3 the incumbent solution vector y^0 is set equal to ȳ^{k+1}, the incumbent value γ is set equal to g(ȳ^{k+1}), and Y^0 is set equal to Y^{k+1}. During the cycle of steps, one or more γ-valid cuts is added to Y^0, but the incumbent solution y^0 and the incumbent value γ remain unchanged. The cycle terminates when a point ȳ^{k+1} is found such that g(ȳ^{k+1}) < γ. In view of Proposition 5.3.3.1, this implies that ȳ^{k+1} strictly satisfies all of the γ-valid cutting planes created thus far by the algorithm. As a result, ȳ^{k+1} is an extreme point of Y≤, and it is distinct from all extreme points of Y≤ previously encountered. By Proposition 5.2.1, Y≤ has a finite number of extreme points. Therefore, the number of cycles in the algorithm must be finite.

During a typical cycle, one or more γ-valid cuts of the form ⟨π^h, y − ȳ^h⟩ ≥ 1, where h ≥ 0 is an integer, is added to the current polyhedron Y^0 ⊆ Y≤. For a typical k ≥ 1, from Step h.3 of the algorithm, we know that
    ⟨π^h, ȳ^k − ȳ^h⟩ ≥ 1, h = 0, 1, 2, ..., k − 1.   (5.7)

We also note that

    0 = ⟨π^k, ȳ^k − ȳ^k⟩ < 1.   (5.8)

From (5.7), (5.8), and Corollary III.2 in Horst and Tuy (1993), if {π^k} is bounded, then the length of the cycle must be finite. Since the number of cycles in the algorithm is finite, this implies that when {π^k} is bounded, the algorithm must be finite.

Now suppose that the algorithm stops. Then, for some k ≥ 0, with π = π^k, ȳ = ȳ^k, and Y≤ = Y^k, inequality (5.6) becomes satisfied, after which the algorithm immediately stops. Furthermore, at the time the algorithm stops, it has found, at the beginning of the current cycle of steps, an extreme point y^0 of Y≤ and of Y^k that is the incumbent solution and satisfies γ = g(y^0) ≤ g(ȳ^k). In addition, ȳ^k is a strict local optimal extreme point solution for the problem

    min g(y), s.t. y ∈ Y^k,

that satisfies g(y) ≥ g(ȳ^k) for each extreme point neighbor y to ȳ^k in Y^k. Taken together, these observations imply that we may apply Proposition 5.3.3.1, with π = π^k, ȳ = ȳ^k, and Y≤ = Y^k, to conclude that y^0 is a global minimizer of g over Y^k.

If y ∈ Y≤ and y ∉ Y^k, then Step k.3 implies that for some γ'-valid cut ⟨π^j, y − ȳ^j⟩ ≥ 1, where γ' ≥ γ, ⟨π^j, y − ȳ^j⟩ < 1 must hold. By (5.3), since γ ≤ γ', this implies that g(y) ≥ γ' ≥ γ. Since γ = g(y^0), it follows that g(y) ≥ g(y^0) for all y ∈ Y^k ∪ {y ∈ Y≤ | y ∉ Y^k}, i.e., y^0 is a global optimal extreme point solution for problem (P_Y≤).
5.4. Implementation

In practice, by moderately modifying the procedures and the termination test in Sections 5.3.1-5.3.3 and by substituting Y for Y≤ and the phrase "problem (P_Y)" for "problem (P_Y≤)" throughout the statement of the new outcome-space, cutting-plane algorithm, we obtain an algorithm that globally solves problem (P_Y) instead of problem (P_Y≤). Since Y ⊆ Y≤, computational savings can generally be expected to result by solving problem (P_Y) instead of problem (P_Y≤). The validity of this approach relies in part upon the following two results.

Proposition 5.4.1. Let y* be a global optimal solution for problem (P_Y≤). Then y* ∈ Y, and any x* ∈ X such that Cx* ≤ y* satisfies Cx* = y* and is a global optimal solution for problem (P_X).

Proof. Since Cx > 0 for all x ∈ X, it is easy to see by the definition of Y≤ that the global optimality of y* in problem (P_Y≤) implies that if Cx* ≤ y* for some x* ∈ X, then Cx* = y*. As a result, y* ∈ Y, and, by Theorem 5.2.2, x* is a global optimal solution for problem (P_X).

A strict local optimal solution for problem (P_Y) is defined as in Definition 5.2.2, except with Y replacing Y≤ in the definition.
Proposition 5.4.2. Problem (P_Y) possesses a global optimal solution that is an extreme point of Y. Any global optimal solution for problem (P_Y) must also be a strict local optimal solution for problem (P_Y).

Proof. From Rockafellar (1970), Y is a polyhedron. Since X is nonempty and compact, Y is also nonempty and compact. Furthermore, since Cx > 0 for all x ∈ X, Y ⊆ {z ∈ R^p | z > 0}. Therefore, from Proposition 3.2.2, problem (P_Y) has a global optimal solution that is an extreme point of Y. A proof that any global optimal solution for problem (P_Y) must be a strict local optimal solution for problem (P_Y) is given by the proof of Theorem 5.2.4, except with Y replacing Y≤ in the proof.

Remark 5.4.1. From Proposition 5.4.1, we may confine the search for a global optimal solution for problem (P_Y≤) to Y. Stated in a different way, any global optimal solution to problem (P_Y) is also a global optimal solution to problem (P_Y≤). By Proposition 5.4.2, problem (P_Y) has a global optimal solution that is an extreme point of Y, and any global optimal extreme point solution to problem (P_Y) must be a strict local optimal solution for problem (P_Y). Taken together, since Y is a nonempty, compact polyhedron, these observations imply that to globally solve problem (P_Y≤), we may apply the outcome-space, cutting-plane approach given in Section 5.3 to problem (P_Y), rather than to problem (P_Y≤). Using this approach, we will find a global optimal solution to problem (P_Y≤). By Proposition 5.4.1, a global optimal extreme point solution to problem (P_X) can then be recovered by solving a single linear program, as explained in Section 5.2.
In addition to substituting Y for Y≤ and the phrase "problem (P_Y)" for "problem (P_Y≤)" throughout the statement of the new outcome-space, cutting-plane algorithm in Section 5.3, some changes are needed to the procedures given in Sections 5.3.1 and 5.3.2 and in the termination test in Section 5.3.3 in order to confine the search to Y ⊆ Y≤. Let us now explain these changes.

The following two results help to explain the changes needed in the strict local optimal solution search procedure given in Section 5.3.1 to confine the search to Y. The proof of the first result is immediate from the definitions.

Proposition 5.4.3. Any strict local optimal solution for problem (P_Y≤) is also a strict local optimal solution for problem (P_Y).

Proposition 5.4.4. Assume that ȳ ∈ Y. Then any optimal solution for linear program (5.2) belongs to Y.

Proof. Suppose that y* is an optimal solution to problem (5.2), but y* ∉ Y. Then, since y* ∈ Y≤ but y* ∉ Y, there exists some point x* ∈ X such that Cx* ≤ y* but Cx* ≠ y*. Let y** = Cx*. Then y** ∈ Y≤. Furthermore, since ȳ > 0, y** ≤ y*, and y** ≠ y*, it follows that

    (1/p) Σ_{j=1}^p (y**_j / ȳ_j) < (1/p) Σ_{j=1}^p (y*_j / ȳ_j).

Since y** ∈ Y≤, this inequality contradicts that y* is an optimal solution to problem (5.2). Therefore, y* ∉ Y cannot hold, and the proof is complete.
From Propositions 5.4.3 and 5.4.4, if we replace Y≤ with Y in problem (5.2) and throughout the strict local optimal solution search procedure of Section 5.3.1, and if we replace the phrase "problem (P_Y≤)" by "problem (P_Y)" in the procedure, then the modified procedure is guaranteed to find a strict local optimal extreme point solution ȳ for problem (P_Y) that satisfies g(ȳ) ≤ g(y) for each extreme point neighbor y to ȳ in Y. Thus, to apply the local search procedure in Section 5.3.1 to problem (P_Y), we need only to replace Y≤ with Y in linear program (5.2) and throughout the statement of the procedure and to change the phrase "problem (P_Y≤)" to "problem (P_Y)" in Step 4 of the procedure.

The cutting plane construction procedure given in Section 5.3.2 relies on the fact that at least p edges of Y≤ will emanate from every extreme point ȳ of Y≤, where p is the dimension of Y≤. The dimension d of Y, however, may be less than or equal to p. As a prerequisite to applying the cutting plane construction procedure when problem (P_Y) is being solved instead of problem (P_Y≤), a knowledge of the value of the dimension d of Y is required. A convenient way to find d is to insert the following step between Steps 1 and 2 of the strict local optimal solution search procedure given in Section 5.3.1.

Step 1'. Find all neighboring extreme points y^i, i = 1, 2, ..., t, to ȳ in Y. Compute and save the rank d of the t × p matrix M, where row i of M is equal to y^i − ȳ, i = 1, 2, ..., t.
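Step 1' is a rank computation. For illustration (names ours):

```python
import numpy as np

def dimension_of_Y(y_bar, neighbor_points):
    """Step 1': d = rank of the t x p matrix M whose i-th row is y^i - y_bar,
    taken over all t extreme point neighbors y^i of y_bar in Y."""
    M = np.asarray(neighbor_points, dtype=float) - np.asarray(y_bar, dtype=float)
    return np.linalg.matrix_rank(M)
```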
From Murty (1983, pp. 138), the value of d computed in Step 1' equals the dimension of Y. When d = p, the cutting plane construction procedure given in Section 5.3.2 can be directly used when solving problem (P_Y) by the new outcome-space, cutting-plane algorithm by simply replacing Y≤ by Y and replacing the phrase "problem (P_Y≤)" with "problem (P_Y)" throughout the procedure. When d < p, a recent generalized cutting plane procedure given in Benson (1999) can be used instead of the procedure given in Section 5.3.2.

The termination test that is needed to adapt the outcome-space, cutting-plane algorithm for problem (P_Y≤) to problem (P_Y) is obtained by replacing Y≤ in (5.6) by Y. The validity of this change can be supported by proving a result for problem (P_Y) that is analogous to Proposition 5.3.3.1.

To implement the changes called for in this section to the new outcome-space, cutting-plane algorithm given in Section 5.3.4, mechanics must be available for finding an initial extreme point of Y and, given an extreme point ȳ of Y, for finding all neighboring extreme points in Y to ȳ. Given these mechanics, one can then carry out all of the operations called for by the revised outcome-space, cutting-plane algorithm, including the solution of all of the required linear programming problems over Y and the construction and incorporation into Y of all required γ-valid cuts. A finite linear programming-based algorithm guaranteed to find an initial extreme point of Y is given in Benson (1998d). In Benson and Sun (1998), the mechanics are given for finding all extreme points of Y that are adjacent to a given extreme point of Y.
From the previous paragraph, the existence of these procedures guarantees that the outcome-space, cutting-plane algorithm for globally solving problem (P_Y) can, indeed, be implemented in practice. As a result, as we have seen, a global optimal extreme point solution for problem (P_X) can be found.

5.5. Example

To illustrate the application of the outcome-space, cutting-plane algorithm to problem (P_X), consider the case with p = 2 and X = {x ∈ R^11 | Ax = b, x ≥ 0}, where

    A = | 9  9  2  1  0  0   0   0   0  0  0 |        b = | 81 |
        | 8  1  8  0  1  0   0   0   0  0  0 |            | 72 |
        | 1  8  8  0  0  1   0   0   0  0  0 |            | 72 |
        | 7  1  1  0  0  0  -1   0   0  0  0 |            |  9 |
        | 1  7  1  0  0  0   0  -1   0  0  0 |            |  9 |
        | 1  1  7  0  0  0   0   0  -1  0  0 |            |  9 |
        | 1  0  0  0  0  0   0   0   0  1  0 |            |  8 |
        | 0  1  0  0  0  0   0   0   0  0  1 |            |  8 |

and where

    c^1 = (1, 0, 1/9, 0, 0, 0, 0, 0, 0, 0, 0)^T,   c^2 = (0, 1, 1/9, 0, 0, 0, 0, 0, 0, 0, 0)^T.
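The example data can be checked numerically. The snippet below re-enters A, b, and C as we have reassembled them from the printout (the reading of C in particular is our reconstruction and should be verified against the original) and confirms that the solution reported at the end of this section maps to the reported optimal outcome and value.

```python
import numpy as np

A = np.array([[9, 9, 2, 1, 0, 0,  0,  0,  0, 0, 0],
              [8, 1, 8, 0, 1, 0,  0,  0,  0, 0, 0],
              [1, 8, 8, 0, 0, 1,  0,  0,  0, 0, 0],
              [7, 1, 1, 0, 0, 0, -1,  0,  0, 0, 0],
              [1, 7, 1, 0, 0, 0,  0, -1,  0, 0, 0],
              [1, 1, 7, 0, 0, 0,  0,  0, -1, 0, 0],
              [1, 0, 0, 0, 0, 0,  0,  0,  0, 1, 0],
              [0, 1, 0, 0, 0, 0,  0,  0,  0, 0, 1]], dtype=float)
b = np.array([81, 72, 72, 9, 9, 9, 8, 8], dtype=float)

C = np.zeros((2, 11))
C[0, 0], C[0, 2] = 1.0, 1/9            # y1 = x1 + (1/9) x3
C[1, 1], C[1, 2] = 1.0, 1/9            # y2 = x2 + (1/9) x3

x_star = np.zeros(11)
x_star[:3] = [0.0, 8.0, 1.0]           # structural part of the reported x*
y0 = C @ x_star
assert np.allclose(y0, [1/9, 73/9])    # the reported global optimal outcome
assert np.isclose(np.prod(y0), 73/81)  # gamma = g(y0) = 73/81
```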
It can be shown that, in this case, X is a nonempty, compact polyhedron, and Cx > 0 for all x ∈ X, as required in problem (P_X). To solve problem (P_X), as explained previously, we will apply the modified outcome-space, cutting-plane algorithm to problem (P_Y). In this case, problem (P_Y) is given by

    min y_1 y_2, s.t. y ∈ Y,

where Y = {y ∈ R^2 | y_1 = x_1 + (1/9)x_3, y_2 = x_2 + (1/9)x_3 for some x ∈ X}.

A summary of the application of the modified outcome-space, cutting-plane algorithm to problem (P_Y) in this case is given below. The mechanics used on the set Y, including the generation of an initial extreme point of Y and the solution of linear programming problems with feasible regions given by Y, are taken from Benson (1998d) and Benson and Sun (1998).

Initialization Step. We execute the search procedure given in Section 5.3.1, but adapted to problem (P_Y). The initial extreme point ȳ of Y that we find thereby is given by ȳ^T = (1/9, 8 1/9). As called for by Step 1' in Section 5.4, we find that the extreme point neighbors to ȳ in Y are (y^1)^T = (1, 1) and (y^2)^T = (9/10, 8 1/10). The matrix M is thus given by

    M = |  8/9   -7 1/9 |
        | 71/90   -1/90 |

We set d = rank M = 2, and we save d = 2 for future reference. Next, with Y replacing Y≤ in (5.2), the optimal value of linear program (5.2) is found to be v = 1.0, and we find that ȳ is the unique optimal solution to this problem.
Therefore, we compare g(ȳ) = ȳ_1 ȳ_2 = 73/81 to the values of g evaluated at the neighboring extreme points y^1 and y^2 to ȳ in Y. We find that g(ȳ) = 73/81 < g(y^1) = 1.0 and g(ȳ) = 73/81 < g(y^2) = 7.29. The strict local optimal extreme point search is therefore terminated, and we set y^0 = ȳ, where (y^0)^T = (1/9, 8 1/9). We set γ = g(y^0) = 73/81, Y^0 = Y, and proceed to Step 0.

Step 0.

Step 0.1. We construct a γ-valid cut for problem (P_Y) with Y = Y^0 at y^0. We accomplish this by executing the procedure given in Section 5.3.2, but adapted to problem (P_Y) with Y = Y^0. In particular, we first determine whether y^0 has exactly d or more than d neighboring extreme points in Y. In this case, we know from the Initialization Step that y^0 has exactly d = 2 neighboring extreme points in Y. Therefore, y^0 is nondegenerate, and we will execute the cutting plane procedure given in Section 5.3.2 for the nondegenerate case, modified for problem (P_Y). Towards this end, we set (u^1)^T = (8/9, -7 1/9) and (u^2)^T = (71/90, -1/90). Next, we compute

    sup {θ_i > 0 | g(y^0 + θ_i u^i) ≥ γ}, i = 1, 2,

and we find that for i = 1, this supremum is finite and equals 1.015625 and, for i = 2, this supremum is finite and equals 729.85915. Therefore, we set α_1 = 1.015625 and α_2 = 729.85915. Then, for each i = 1, 2, we set z^i = y^0 + α_i u^i, which results in the values
    (z^1)^T = (1.013888, 0.888888),   (z^2)^T = (575.888888, 0.001565).

Next, we compute (π^0)^T = e^T Q^{-1}, where

    Q = |  0.902777   575.777777 |
        | -7.222222    -8.109546 |

This yields (π^0)^T = (-0.0002138, -0.1384883). The γ-valid cut, after gathering terms, is then given by

    0.0002138 y_1 + 0.1384883 y_2 ≤ 0.1233175.

Step 0.2. With π = π^0, ȳ = y^0, and Y≤ = Y^0, the linear program in (5.6) is found to have the point ỹ^0 as a basic optimal solution, where (ỹ^0)^T = (73/9, 1/9), and an optimal value of 1.1064. Since 1.1064 > 1.0000, we continue to Step 0.3.

Step 0.3. We first set

    Y^1 = Y^0 ∩ {y ∈ R^2 | 0.0002138 y_1 + 0.1384883 y_2 ≤ 0.1233175}.

Next, starting from ỹ^0, we execute the search procedure given in Section 5.3.1, but adapted to problem (P_Y) with Y = Y^1, in order to find a strict local optimal extreme point solution ȳ^1 for the problem

    min g(y), s.t. y ∈ Y^1,

that satisfies g(y) ≥ g(ȳ^1) for each extreme point neighbor y to ȳ^1 in Y^1. This results in (ȳ^1)^T = (8 1/9, 1/9).
Since g(ȳ^1) = 73/81 ≥ γ, we proceed to Step 1.

Step 1.

Step 1.1. We construct a γ-valid cut for problem (P_Y) with Y = Y^1 at ȳ^1. To do so, we use the procedure in Section 5.3.2, but adapted to problem (P_Y) with Y = Y^1. We find that ȳ^1 has exactly d = 2 neighboring extreme points in Y^1, so that ȳ^1 is nondegenerate. The resulting γ-valid cut is, after gathering terms,

    0.1384883 y_1 + 0.0002138 y_2 ≤ 0.1233175.

Step 1.2. With π = π^1, ȳ = ȳ^1, and Y≤ = Y^1, we find that the linear program in (5.6) has the point ỹ^1 given by (ỹ^1)^T = (1.9003173, 0.8874603) as a basic optimal solution, and that it has an optimal value of 0.8599391. Since 0.8599391 < 1.0, the algorithm stops: it has found the global optimal solution y^0 for problem (P_Y), where (y^0)^T = (1/9, 8 1/9), and γ = 73/81 is the global optimal value for problem (P_Y).

As explained in Remark 5.4.1 and Section 5.2, by solving a single linear program, an extreme point global optimal solution to problem (P_X) can be recovered from y^0 in the sample problem. By solving this linear program, we obtain the extreme point global optimal solution (x*)^T = (0.0, 8.0, 1.0) for problem (P_X).
5.6. Concluding Remarks

Both theoretical and empirical evidence have shown that, generally, an outcome polyhedron can be expected to have a significantly lower dimension, a significantly simpler structure, and far fewer extreme points than the underlying decision set polyhedron. For the case of the linear multiplicative program (P_X), we have in this chapter constructed and validated an outcome-space, cutting-plane algorithm that searches the outcome polyhedron, rather than the decision set polyhedron, for a global optimal extreme point solution. We expect, therefore, that the algorithm proposed here for problem (P_X) will potentially have computational benefits over the decision set-based algorithms that have been proposed for the problem.
CHAPTER 6
SUMMARY AND FUTURE RESEARCH

6.1. Introduction

In this dissertation, we have analyzed the multiplicative programming problem (P_X). For the linear case of problem (P_X), we used these analyses to develop a heuristic algorithm that finds a good solution to the problem and to present a global solution algorithm for the problem. In this chapter we will examine potential avenues for future research that might improve the efficiency and accuracy of the heuristic algorithm. We also examine possible avenues for potential global optimal solution algorithms for problem (P_X) based on analytical results we have presented.

6.2. Future Research on the Heuristic Algorithm

In preparation for presenting our heuristic algorithm for the linear case of problem (P_X) in Chapter 3, we analyzed the case when problem (P_X) is the concave multiplicative programming problem. We showed that when problem (P_X) is a concave multiplicative programming problem with a compact feasible region, it has at least one global optimal solution that is an efficient extreme point of the associated multiple objective programming problem

    (MOMP)  VMIN f(x), s.t. x ∈ X,

where f_j : X → R, j = 1, 2, ..., p, are the functions used in defining problem (P_X) and where f(x) denotes the vector [f_1(x), f_2(x), ..., f_p(x)]^T.
Our heuristic algorithm for the linear multiplicative program with a compact polyhedral feasible region uses this efficiency result to search for a good solution by limiting the search to the efficient set of the multiple-objective linear program

    (MOLP)  VMIN Cx, s.t. x ∈ X,

where C is the p × n matrix whose jth row equals (c^j)^T, j = 1, 2, ..., p. The heuristic algorithm first identifies an efficient face X_w of problem (MOLP). It then searches the efficient face X_w for a good solution to problem (P_X) by optimizing a linear approximation of the objective function of problem (P_X) over X_w. Our experimental results show that the computational requirements of the heuristic algorithm are not overly burdensome when compared to the effort required to solve a linear multiplicative programming problem.

The performance of the heuristic algorithm depends in part upon the number, locations, and dimensions of the efficient faces found in the Efficient Point Search Phase. The user can manipulate the location and number of efficient faces found by selecting values for the parameters N, used for the weighting of the functions in the objective function, and S, the number of sample objective function values from Y≤. A topic for further research would be to identify criteria and methods for choosing good values for these parameters so that a variety of regions of the efficient set are searched for a good solution for problem (P_X).

In Step 1 of the Efficient Point Search Phase, an efficient extreme point is used to identify an efficient face X_w on which that extreme point lies.
Currently there is no known procedure that can be used to find an efficient face of maximal dimension on which an extreme point lies. Since the performance of the heuristic algorithm can be improved by searching maximal efficient faces for good solutions, a topic for further research would be to find such a procedure.

6.3. Future Research on Global Solution Algorithms

In Chapter 3 we showed that a concave multiplicative programming problem can be solved as a quasiconcave minimization problem or reformulated as a concave minimization problem. We also showed that a concave multiplicative programming problem with a compact feasible region has a globally optimal solution that is an efficient extreme point of the associated multiple-objective programming problem (MOMP). A potential opportunity therefore exists to improve the efficiency of a concave minimization method for solving the concave multiplicative programming problem or the linear multiplicative programming problem by limiting the search to the efficient extreme points of the associated multiple-objective programming problem (MOMP) or (MOLP), respectively. For example, for the linear case of problem (P_X), enumerative concave minimization methods such as extreme point ranking or cutting plane methods can be modified to limit the search to those extreme points of problem (MOLP) that are efficient. For the concave case of problem (P_X), outer approximation and branch and bound methods may be modified to limit the search to efficient extreme points of problem (MOMP). For more information on concave minimization methods, see, for example, Benson (1995) and Horst and Tuy (1993).
In Chapter 4, in preparation for developing our outcome-space, cutting-plane algorithm for the linear multiplicative programming problem, we analyzed the outcome-space reformulation (P_Y≤) of the multiplicative programming problem (P_X). Since the reformulation (P_Y≤) works in a lower dimensioned space than the space of the feasible decision set X, potential computational savings can be expected by solving problem (P_Y≤) for an optimal solution y* ∈ Y≤. As we have shown, a global optimal solution x* ∈ X for problem (P_X) can then be recovered from y*.

In Chapter 4 we showed that when problem (P_X) is a convex multiplicative programming problem, a global optimal solution y* of the reformulation (P_Y≤) will exist on the boundary of the convex outcome set Y≤. In addition, when the convex set Y≤ is closed and has at least one extreme point, our analysis showed that problem (P_Y≤) has at least one global optimal solution that is an extreme point of Y≤. We also showed that the objective function of problem (P_Y≤) is strictly pseudoconcave over Y≤. Since pseudoconcave functions defined over convex sets are also quasiconcave functions, many of the most popular algorithms for minimizing concave functions over convex sets are equally suitable for minimizing quasiconcave functions over convex sets (Horst and Tuy 1993 and Benson 1995). Our results thus provide possible avenues for proposing implementable solution methods for the convex multiplicative programming problem that solve instead the reformulation (P_Y≤) using existing concave minimization methods. For example, outer approximation can be used to search the boundary of Y≤ for a global optimal solution y* for problem (P_Y≤). This method cannot be applied directly to the original convex multiplicative programming problem since, as shown in Example 4.3.1, the preimage x* ∈ X of y* need not lie on the boundary of X.
For more information on concave minimization methods, see, for example, Benson (1995) and Horst and Tuy (1993).

In Chapter 5 we presented the outcome-space, cutting-plane algorithm for solving the case when problem (P_X) is the linear multiplicative programming problem. It solves the reformulation problem (P_Y≤) of problem (P_X) for a global optimal extreme point solution y* ∈ Y≤ for problem (P_Y≤), and then recovers a global optimal extreme point solution x* ∈ X of problem (P_X). The reformulation (P_Y≤) of the linear multiplicative programming problem (P_X) offers another computational advantage in addition to that of working in a lower dimensioned space than the space of the feasible region X. A large number of extreme points of X collapse into nonfacial structures or into the relative interiors of subfaces of the outcome set Y≤, but not vice versa (Benson 1995a). Since the outcome-space, cutting-plane algorithm searches only those extreme points of Y≤ that are the images of extreme points and faces of X under the mapping of the functions in the objective function of problem (P_X), many non-optimal extreme points of X are implicitly bypassed in the algorithm's search. We therefore expect that the outcome-space, cutting-plane algorithm will potentially have computational benefits over the decision set-based algorithms that have been proposed for the linear multiplicative programming problem.

To aid in the implementation of the outcome-space, cutting-plane algorithm, the algorithm can be implemented using only linear programming methods.
A topic for further research is to code the algorithm and test its performance against other algorithms for solving linear multiplicative programming problems.
REFERENCES

Aneja, Y. P., Aggarwal, V., and Nair, K. P. K. On a Class of Quadratic Programs, European Journal of Operational Research, 18, pp. 62-56, 1984.

Avriel, M., Diewert, W. E., Schaible, S., and Zang, I. Generalized Concavity, Plenum, New York, 1988.

Bazaraa, M. S., Sherali, H. D., and Shetty, C. M. Nonlinear Programming: Theory and Algorithms, 2nd Ed., John Wiley, New York, 1993.

Benson, H. P. Existence of Efficient Solutions for Vector Maximization Problems, Journal of Optimization Theory and Applications, 26, pp. 569-580, 1978.

Benson, H. P. An All Linear Programming Relaxation Algorithm for Optimizing Over the Efficient Set, Journal of Global Optimization, 1, pp. 83-104, 1991.

Benson, H. P. A Geometric Analysis of the Efficient Outcome Set in Multiple Objective Convex Programs with Linear Criterion Functions, Journal of Global Optimization, 6, pp. 231-251, 1995a.

Benson, H. P. Concave Minimization: Theory, Applications, and Algorithms, Handbook of Global Optimization, Edited by R. Horst and P. M. Pardalos, Kluwer, Dordrecht, The Netherlands, pp. 43-148, 1995b.

Benson, H. P. Deterministic Algorithms for Constrained Concave Minimization: A Unified Critical Survey, Naval Research Logistics, 43, pp. 765-795, 1996.

Benson, H. P. A Hybrid Approach for Solving Multiple Objective Linear Programs in Outcome Space, Journal of Optimization Theory and Applications, 98, pp. 17-35, 1998a.

Benson, H. P. An Outer Approximation Algorithm for Generating All Efficient Extreme Points in the Outcome Set of a Multiple Objective Linear Programming Problem, Journal of Global Optimization, 13, pp. 1-24, 1998b.

Benson, H. P. An Outcome Space Branch and Bound-Outer Approximation Algorithm for Convex Multiplicative Programming, to appear in a Special Issue in Honor of the 70th Birthday of H. Tuy, 1998c.
Benson, H. P. Pivoting in an Outcome Polyhedron: Part I - Motivation and Initialization, University of Florida, Gainesville, FL, Department of Decision and Information Sciences, Working Paper, May, 1998d.

Benson, H. P. A Generalized Gamma-Valid Cut Procedure, Journal of Optimization Theory and Applications, 102, pp. 289-298, 1999.

Benson, H. P. and Sayin, S. Towards Finding Global Representations of Efficient Sets in Multiple Objective Mathematical Programming, Naval Research Logistics, 44, pp. 47-67, 1997.

Benson, H. P. and Sun, E. Pivoting in an Outcome Polyhedron: Part II - Executing the Pivot Process, University of Florida, Gainesville, FL, Department of Decision and Information Sciences, Working Paper, May, 1998.

Carvajal-Moreno, R. Minimization of Concave Functions Subject to Linear Constraints, Operations Research Center Report ORC 72-3, University of California, Berkeley, CA, 1972.

Cohon, J. L. Multiobjective Programming and Planning, Academic, New York, 1978.

Dauer, J. P. On Degeneracy and Collapsing in the Construction of the Set of Objective Values in a Multiple Objective Linear Program, Annals of Operations Research, 47, pp. 279-292, 1993.

Dauer, J. P. and Liu, Y. H. Solving Multiple Objective Linear Programs in Objective Space, European Journal of Operational Research, 46, pp. 350-357, 1990.

Dauer, J. P. and Saleh, O. A. Constructing the Set of Efficient Objective Values in Multiple Objective Linear Programs, European Journal of Operational Research, 46, pp. 358-365, 1990.

Davis, C. Theory of Positive Linear Dependence, American Journal of Mathematics, 76, pp. 733-746, 1954.

Duffin, R. J., Peterson, E. L., and Zener, C. Geometric Programming - Theory and Application, John Wiley, New York, 1967.

Ecker, J. G. and Kouada, I. A. Finding All Efficient Extreme Points for Multiple Objective Linear Programs, Mathematical Programming, 14, pp. 249-261, 1978.

Evans, G. W. An Overview of Techniques for Solving Multiobjective Mathematical Programs, Management Science, 30, pp. 1268-1282, 1984.
Falk, J. E. and Palocsay, S. W. Image Space Analysis of Generalized Fractional Programs, Journal of Global Optimization, 4, pp. 63-88, 1994.

Fang, S. C. and Puthenpura, S. Linear Optimization and Extensions, Prentice Hall, Englewood Cliffs, NJ, 1993.

Gal, T. and Geue, F. A New Pivoting Rule for Solving Various Degeneracy Problems, Operations Research Letters, 11, pp. 23-32, 1992.

Gallagher, R. J. and Saleh, O. A. A Representation of an Efficiency Equivalent Polyhedron for the Objective Set of a Multiple Objective Linear Program, European Journal of Operational Research, 80, pp. 204-212, 1995.

Geoffrion, A. M. Solving Bicriterion Mathematical Programs, Operations Research, 15, pp. 39-54, 1967.

Geoffrion, A. M. Proper Efficiency and the Theory of Vector Maximization, Journal of Mathematical Analysis and Applications, 22, pp. 618-630, 1968.

Geue, F. An Improved N-tree Algorithm for Enumeration of All Neighbors of a Degenerate Vertex, Annals of Operations Research, 47, pp. 361-391, 1993.

Henderson, J. M. and Quandt, R. E. Microeconomic Theory, McGraw-Hill, New York, 1971.

Horst, R. and Tuy, H. Global Optimization: Deterministic Approaches, 2nd Ed., Springer-Verlag, Berlin, 1993.

International Business Machines, Optimization Subroutine Library Guide and Reference, International Business Machines, Mechanicsburg, PA, 1990.

Isermann, H. The Enumeration of the Set of All Efficient Solutions for a Linear Multiple Objective Program, Operations Research Quarterly, 28, pp. 711-725, 1977.

Jaumard, B., Meyer, C., and Tuy, H. Generalized Convex Multiplicative Programming via Quasiconcave Minimization, Journal of Global Optimization, 10, pp. 229-256, 1997.

Konno, H. and Inori, M. Bond Portfolio Optimization by Bilinear Fractional Programming, Journal of the Operations Research Society of Japan, 32, pp. 143-158, 1989.

Konno, H. and Kuno, T. Generalized Linear Multiplicative and Fractional Programming, Annals of Operations Research, 25, pp. 147-161, 1990.
Konno, H. and Kuno, T. Linear Multiplicative Programming, Mathematical Programming, 56, pp. 51-64, 1992.

Konno, H. and Kuno, T. Multiplicative Programming Problems, Handbook of Global Optimization, Edited by R. Horst and P. M. Pardalos, Kluwer, Dordrecht, The Netherlands, pp. 369-405, 1995.

Konno, H., Kuno, T., and Yajima, Y. Parametric Simplex Algorithms for a Class of NP-Complete Problems Whose Average Number of Steps is Polynomial, Computational Optimization and Applications, 1, pp. 227-239, 1992.

Konno, H., Kuno, T., and Yajima, Y. Global Minimization of a Generalized Convex Multiplicative Function, Journal of Global Optimization, 4, pp. 47-62, 1994.

Konno, H., Yajima, Y., and Matsui, T. Parametric Simplex Algorithms for Solving a Special Class of Nonconvex Minimization Problems, Journal of Global Optimization, 1, pp. 65-81, 1991.

Kruse, H.-J. Degeneracy Graphs and the Neighborhood Problem, Springer-Verlag, Berlin, 1986.

Kuno, T. A Practical Algorithm for Minimizing a Rank-Two Saddle Function on a Polytope, Journal of the Operations Research Society of Japan, 39, pp. 63-76, 1996.

Kuno, T. and Konno, H. A Parametric Successive Underestimation Method for Convex Multiplicative Programming Problems, Journal of Global Optimization, 1, pp. 267-285, 1991.

Kuno, T., Yajima, Y., and Konno, H. An Outer Approximation Method for Minimizing the Product of Several Convex Functions on a Convex Set, Journal of Global Optimization, 3, pp. 325-335, 1993.

Luc, D. T. Theory of Vector Optimization, Springer-Verlag, Berlin, 1989.

Maling, K., Mueller, S. H., and Heller, W. R. On Finding Most Optimal Rectangular Package Plans, Proceedings of the 19th Design Automation Conference, Institute of Electrical and Electronics Engineers, New York, pp. 663-670, 1982.

Martos, B. The Direct Power of Adjacent Vertex Programming Methods, Management Science, 12, pp. 241-252, 1965.

Matsui, T. NP-Hardness of Linear Multiplicative Programming and Related Problems, Journal of Global Optimization, 9, pp. 113-119, 1996.
Morse, J. N. Reducing the Size of the Nondominated Set: Pruning by Clustering, Computers and Operations Research, 7, pp. 55-66, 1980.

Murty, K. G. Linear Programming, John Wiley, New York, 1983.

Muu, L. D. and Tam, B. T. Minimizing the Sum of a Convex Function and the Product of Two Affine Functions over a Convex Set, Optimization, 24, pp. 57-62, 1992.

Pardalos, P. M. Polynomial Time Algorithms for Some Classes of Constrained Nonconvex Quadratic Problems, Optimization, 21, pp. 843-853, 1990.

Pardalos, P. M. and Rosen, J. B. Constrained Global Optimization: Algorithms and Applications, Springer-Verlag, Berlin, 1987.

Philip, J. Algorithms for the Vector Maximization Problem, Mathematical Programming, 2, pp. 207-229, 1972.

Polya, G. and Szego, G. Problems and Theorems in Analysis, 1, Springer-Verlag, Berlin, 1972.

Quesada, I. and Grossmann, I. A Global Optimization Algorithm for Linear Fractional and Bilinear Programs, Journal of Global Optimization, 6, pp. 39-76, 1995.

Rockafellar, R. T. Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

Ryoo, H. S. and Sahinidis, N. V. A Branch-and-Reduce Approach to Global Optimization, Journal of Global Optimization, 8, pp. 107-138, 1996.

Sawaragi, Y., Nakayama, H., and Tanino, T. Theory of Multiobjective Optimization, Academic, Orlando, FL, 1985.

Schaible, S. and Sodini, C. Finite Algorithm for Generalized Linear Multiplicative Programming, Journal of Optimization Theory and Applications, 87, pp. 441-455, 1995.

Shin, W. S. and Ravindran, A. Interactive Multiple Objective Optimization: Survey - Continuous Case, Computers and Operations Research, 18, pp. 97-114, 1991.

Sniedovich, M. and Findlay, S. Solving a Class of Multiplicative Programming Problems via C-Programming, Journal of Global Optimization, 6, pp. 313-319, 1995.

Stadler, W. A Survey of Multicriteria Optimization or the Vector Maximum Problem, Part I: 1776-1960, Journal of Optimization Theory and Applications, 29, pp. 1-52, 1979.
Steuer, R. E. Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley, New York, 1986.

Steuer, R. E. Operating Manual for the ADBASE Multiple-Objective Linear Programming Package, College of Business Administration, University of Georgia, Athens, GA, 1989.

Steuer, R. E. and Schuler, A. T. An Interactive Multiple-Objective Linear Programming Approach to a Problem in Forest Management, Operations Research, 26, No. 2, pp. 254-269, 1978.

Strijbosch, L. W. G., Van Doorne, A. G. M., and Selen, W. J. A Simplified MOLP Algorithm: The MOLP-S Procedure, Computers and Operations Research, 18, pp. 709-716, 1991.

Swarup, K. Quadratic Programming, Cahiers du Centre d'Etudes de Recherche Operationnelle, 8, No. 4, pp. 223-234, 1966a.

Swarup, K. Indefinite Quadratic Programming, Cahiers du Centre d'Etudes de Recherche Operationnelle, 8, No. 4, pp. 217-222, 1966b.

Thoai, N. V. A Global Optimization Approach for Solving the Convex Multiplicative Programming Problem, Journal of Global Optimization, 1, pp. 341-357, 1991.

Tuy, H. Polyhedral Annexation, Dualization and Dimension Reduction Technique in Global Optimization, Journal of Global Optimization, 1, pp. 229-244, 1991.

Tuy, H. and Tam, B. T. An Efficient Solution Method for Rank-Two Quasiconcave Minimization Problems, Optimization, 24, pp. 43-56, 1992.

Wets, R. J.-B. and Witzgall, C. Algorithms for Frames and Lineality Spaces of Cones, Journal of Research of the National Bureau of Standards, 71B, pp. 1-7, 1967.

Yu, P. L. Multiple-Criteria Decision Making, Plenum, New York, 1985.

Yu, P. L. and Zeleny, M. The Set of All Nondominated Solutions in Linear Cases and a Multicriteria Simplex Method, Journal of Mathematical Analysis and Applications, 49, pp. 430-468, 1975.

Zeleny, M. Multiple-Criteria Decision Making, McGraw-Hill, New York, 1982.
BIOGRAPHICAL SKETCH

George Boger received his Bachelor of Science in Mathematics in 1973 from the University of Central Florida. Following graduation he worked in the federal civil service for NASA and the U.S. Navy. In 1992 he received a Master of Science in Operations Research from the Florida Institute of Technology and entered the Ph.D. program in the Department of Decision and Information Sciences at the University of Florida. After graduation, George plans to work in academia, teaching and conducting research.
I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Harold P. Benson, Chairman
Professor of Decision and Information Sciences

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Selcuk Erenguc
Professor of Decision and Information Sciences

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Asoo J. Vakharia
Associate Professor of Decision and Information Sciences

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Richard L. Francis
Professor of Industrial and Systems Engineering
This dissertation was submitted to the Graduate Faculty of the Department of Decision and Information Sciences in the College of Business Administration and to the Graduate School and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December 1999

Dean, Graduate School