New closedness results and algorithms for finding efficient sets in multiple objective mathematical programming


Material Information

Title:
New closedness results and algorithms for finding efficient sets in multiple objective mathematical programming
Physical Description:
vii, 154 leaves : ; 29 cm.
Language:
English
Creator:
Sun, Erjiang
Publication Date:

Subjects

Subjects / Keywords:
Decision and Information Sciences thesis, Ph.D   ( lcsh )
Dissertations, Academic -- Decision and Information Sciences -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph.D.)--University of Florida, 2000.
Bibliography:
Includes bibliographical references (leaves 144-153).
Statement of Responsibility:
by Erjiang Sun.
General Note:
Printout.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 024909362
oclc - 45805674
System ID:
AA00017693:00001












NEW CLOSEDNESS RESULTS AND ALGORITHMS FOR
FINDING EFFICIENT SETS IN MULTIPLE OBJECTIVE
MATHEMATICAL PROGRAMMING













By

ERJIANG SUN


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2000














ACKNOWLEDGMENTS


I would like to express my appreciation to the members of my entire supervisory

committee, Dr. Harold P. Benson, Dr. S. Selcuk Erenguc, Dr. Gary J. Koehler, and Dr.

Panos M. Pardalos. I am especially grateful to Dr. Benson, my committee chairman, for

his invaluable guidance and tireless support throughout my years in the program. I would

also like to thank Dr. S. Schaible for his support in my academic career in general.

I would like to acknowledge all of the doctoral students, especially George Boger

and Lawrence Nicholson, for their friendship.

Finally, I would like to thank my entire family for the love and support they have

provided me through the years. And I am grateful beyond words to my beloved wife,

Hong Tang, and my daughter, Yantang Sun, who suffered and rejoiced with me through it

all.















TABLE OF CONTENTS

                                                                           page

ACKNOWLEDGMENTS ............................................................ ii

ABSTRACT .................................................................... v

CHAPTERS

1 INTRODUCTION ..............................................................

    Historical Perspective on MCDM .........................................
    Motivation for this Research ...........................................
    Contents of this Research ............................................. 15
    Organization of this Dissertation ..................................... 17

2 LITERATURE SURVEY ........................................................ 18

    Approaches to Characterizing Efficient Solutions ...................... 18
    Closedness of the Efficient Set ....................................... 22
    Methods for Solving Multiple Objective Linear Programming Problems .... 24

3 ON THE CLOSEDNESS OF THE EFFICIENT SET OF PROBLEM MOMP ................... 31

    Definitions and Notation .............................................. 33
    Quasiconcave and Strictly Quasiconcave Vector-Valued Functions ........ 35
    Parametric Representations of the Efficient Set ....................... 47
    The Closedness of E(f, X) for General MOMP Problems ................... 54
    The Closedness of E(f, X) for Bicriteria Programming Problems ......... 59
    Concluding Remarks .................................................... 62

4 FINDING THE SET OF ALL EFFICIENT EXTREME POINTS FOR PROBLEM
  MOLP IN THE OUTCOME SPACE ................................................ 63

    Decision Set-Based Decomposition of the Weight Set W .................. 67
    Outcome Set-Based Decomposition of the Weight Set W ................... 70
    The Basic Weight Set Decomposition Algorithm .......................... 81
    Tree Search Approach for STEP 3 of BWSDA .............................. 89
    Concave Programming Approach for STEP 3 of BWSDA ...................... 97
    Discussion ........................................................... 109
    Concluding Remarks ................................................... 112

5 FINDING THE EFFICIENT OUTCOME SET OF PROBLEM MOLP ....................... 114

    Preliminaries ........................................................ 115
    Theoretical Background ............................................... 116
    The Algorithm for Finding the Efficient Outcome Set .................. 133
    Concluding Remarks ................................................... 138

6 SUMMARY AND FUTURE RESEARCH ............................................. 139

REFERENCES ................................................................ 144

BIOGRAPHICAL SKETCH ....................................................... 154














Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

NEW CLOSEDNESS RESULTS AND ALGORITHMS FOR
FINDING EFFICIENT SETS IN MULTIPLE OBJECTIVE
MATHEMATICAL PROGRAMMING



By

Erjiang Sun

August 2000

Chairman: Dr. Harold P. Benson
Major Department: Decision and Information Sciences

A multiple objective mathematical programming (MOMP) problem involves

simultaneously maximizing or minimizing several noncomparable objective functions over

a nonempty feasible region. In order to help the decision maker find a most preferred

solution to the underlying problem, researchers have shown that one can generally restrict

one's attention to the subset of feasible solutions called the efficient set. One important and

new (compared to traditional single objective mathematical programming) research area in

multiple objective mathematical programming is to investigate the characteristics,

properties and structure of the efficient set and the weakly efficient set. The characteristics

and properties of the efficient set not only have important theoretical meaning but also

provide the theoretical foundations upon which various algorithms are based. Another









important research area is to develop algorithms that can be used to solve large-scale

multiple objective mathematical programming problems efficiently.

In this dissertation, we first investigate the necessary and sufficient conditions for

the efficient set of the problem (MOMP) to be closed. There are various theoretical,

algorithmic, and practical reasons for investigating the necessary and sufficient conditions

for the efficient set of the problem (MOMP) to be closed. However, only a small number

of results of this type, limited to some special cases, have been developed. In this research,

we present a necessary condition and several sufficient conditions for the efficient set of a

general MOMP problem to be closed. Our approach relies in part upon generalizing the

concepts of quasi-concavity and strict quasi-concavity for real-valued functions to vector-

valued functions. Our approach also relies upon some new characterizations of efficient

solutions of problem (MOMP) that we developed in this research.

The remainder of this research is focused on developing new outcome space-based

algorithms for potentially solving large-scale multiple objective linear programming

(MOLP) problems. In this dissertation, two new outcome space-based weight set

decomposition algorithms have been developed for solving the problem (MOLP) by using

the vector maximization approach. One algorithm is devoted to finding the set of all

efficient extreme points in the outcome space for problem MOLP. The other algorithm is

devoted to finding the entire efficient outcome set of problem MOLP. To our knowledge,

this algorithm is the first one capable of generating the entire efficient outcome set of a

multiple objective linear programming problem directly. Both algorithms are based in part

upon our new partition of the weight set. These two algorithms overcome the difficulties









that face the traditional decision space based weight set decomposition algorithms. They

may prove to be capable of solving large-scale (MOLP) problems efficiently.















CHAPTER 1
INTRODUCTION




Decision making is the process of selecting a possible course of action from all of

the available alternatives. In such processes, the decision maker or decision makers want

to attain one or more than one objective or goal in selecting a preferred course of action

while satisfying the constraints dictated by the environment. In the real world, many

decision making processes involve a single decision maker (DM) who chooses among a

countable or an uncountable set of alternatives by using two or more criteria. These

processes are called multiple criteria decision making (MCDM) processes. When the

values of the criteria are assumed to be known with certainty, the MCDM problem is

called deterministic; otherwise it is called non-deterministic (or stochastic). Throughout

this dissertation, we will limit our discussion to deterministic MCDM problems.



1.1. Historical Perspective on MCDM


In this subsection, we will give a brief historical perspective on the development of

MCDM. We will confine our discussion of solution techniques to the mathematical

programming approaches. Our review is largely based upon the perspective required for

understanding the chapters to follow. For deeper background and additional references in

the field, the reader is referred to the books and survey papers by Cohon (1978), Stadler









(1979), Hwang and Masud (1979), Zeleny (1982), Chankong and Haimes (1983), Evans

(1984), Sawaragi et al. (1985), Yu (1985), Gal (1986), Steuer (1986), Shin and Ravindran

(1991), Dyer et al. (1992), Korhonen et al. (1992), Pardalos et al. (1995), and references

therein.

Mathematically, one way to represent a multiple criteria decision making (MCDM)

problem is by modeling it as a multiple objective mathematical programming problem. A

multiple objective mathematical programming problem may be written

MOMP: v-max f(x) = (f1(x), ..., fp(x))^T

s.t. x ∈ X,

where x ∈ R^n is a vector of decision variables, fi, i = 1, 2, ..., p, are the objective functions and

X is the set of feasible decision alternatives.

X is usually called the decision set of problem MOMP. Let

Y = { y ∈ R^p : y = f(x) for some x ∈ X }.

Y is usually called the outcome set of problem MOMP.

When f(x) is a linear vector-valued function and X is a convex polyhedron, the

problem MOMP is called a linear vector maximization problem or a multiple objective

linear programming (MOLP) problem.
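A minimal concrete instance of problem MOLP can be written out in a few lines. The data below are hypothetical and chosen only for illustration: two linear objectives are evaluated at the extreme points of a small polyhedral decision set X, and each objective attains its maximum at a different extreme point.

```python
# A small bicriteria MOLP sketch (hypothetical data, for illustration only).
# Feasible set X = {x >= 0 : x1 + 2*x2 <= 4, 2*x1 + x2 <= 4}, with the two
# linear objectives f1(x) = x1 and f2(x) = x2 to be maximized together.
# Enumerating the extreme points of X shows that the objectives conflict:
# no single feasible point maximizes both f1 and f2 simultaneously.

extreme_points = [(0.0, 0.0), (2.0, 0.0), (4 / 3, 4 / 3), (0.0, 2.0)]

def f(x):
    """Vector-valued objective f(x) = (f1(x), f2(x))."""
    x1, x2 = x
    return (x1, x2)

best_f1 = max(extreme_points, key=lambda x: f(x)[0])  # maximizer of f1 alone
best_f2 = max(extreme_points, key=lambda x: f(x)[1])  # maximizer of f2 alone
print(best_f1)  # (2.0, 0.0)
print(best_f2)  # (0.0, 2.0): a different point, so the objectives conflict
```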

In problem MOMP, the goal is to simultaneously maximize all of the objective

functions. However, with rare exceptions, the objective functions of problem MOMP

conflict with one another. As a result, a solution that simultaneously maximizes all of the

objective functions exists only in rare cases. Therefore, instead of trying to find a solution

that maximizes all of the objective functions simultaneously, researchers and practitioners

generally try to find a solution that maximizes the DM's preferences. In other words,








"solving" problem MOMP means finding a most preferred solution of the DM. In order to

find such a solution, some information about the DM's preference structure is required.

The most complete way of expressing this information is by finding the DM's utility

function, also called a value function, over the objective function space of the problem

(see e.g. Keeney and Raiffa, 1976).

Definition 1.1.1. A function u, which associates a real number u(f(x)) to each x in

X, is said to be a utility function representing a particular decision maker's preference

structure provided that

(1) f(x1) ~ f(x2) if and only if u(f(x1)) = u(f(x2)) for all x1, x2 in X; and

(2) f(x1) ≻ f(x2) if and only if u(f(x1)) > u(f(x2)) for all x1, x2 in X,

where f(x1) ~ f(x2) denotes that the decision maker is indifferent between outcomes f(x1)

and f(x2), and f(x1) ≻ f(x2) denotes that the decision maker prefers f(x1) to f(x2).
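For illustration only, a simple linear function serves as a utility function in the sense of Definition 1.1.1; the coefficients below are our own hypothetical choice, not part of the dissertation's development.

```python
# Hypothetical illustration of Definition 1.1.1: a linear utility function
# u(y) = 3*y1 + y2 over outcome vectors y = f(x) in R^2. Indifference
# corresponds to equal utility values; strict preference to a larger value.

def u(y):
    return 3.0 * y[0] + y[1]

y1, y2 = (1.0, 2.0), (0.0, 5.0)  # two outcomes with u(y1) = u(y2) = 5
print(u(y1) == u(y2))  # True: the DM is indifferent between y1 and y2

y3 = (2.0, 0.0)        # u(y3) = 6 > 5
print(u(y3) > u(y1))   # True: the DM prefers y3 to y1
```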

Given a utility function u for the DM, the decision maker's most preferred solution

is one that maximizes u over all feasible solutions; such a solution is also called a best

compromise solution (in the sense that it is typically a compromise among the problem's

various objective functions). Under the assumption that more of each objective function is

preferred to less, a best compromise solution must also be an efficient solution, where an

efficient solution is defined as follows.

Definition 1.1.2. A point x0 ∈ R^n is called an efficient solution (or a nondominated

solution) of MOMP if x0 ∈ X and there is no x ∈ X such that f(x) ≥ f(x0) and f(x) ≠ f(x0).

The set of all efficient solutions of problem MOMP is usually called the efficient

decision set of problem MOMP. If x0 is an efficient solution of problem MOMP, then y0 =

f(x0) is usually called an efficient outcome of problem MOMP. The set of all efficient









outcomes of problem MOMP is called the efficient outcome set of problem MOMP. We

will refer to both the efficient decision set and the efficient outcome set as the efficient set

when the results apply to both sets.

Closely related to the concept of an efficient solution is the concept of a weakly

efficient solution. A weakly efficient solution may be defined as follows.

Definition 1.1.3. A point x0 ∈ R^n is called a weakly efficient solution (or a weakly

nondominated solution) of MOMP if x0 ∈ X and there is no x ∈ X such that f(x) > f(x0).

Similarly, we have the concepts of weakly efficient outcome, weakly efficient

decision set, weakly efficient outcome set and weakly efficient set.
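For a finite set of outcome vectors, Definitions 1.1.2 and 1.1.3 can be checked directly. The sketch below, over illustrative data of our own choosing, also exhibits an outcome that is weakly efficient but not efficient.

```python
# A minimal sketch of Definitions 1.1.2-1.1.3 for a finite set of outcome
# vectors (maximization). An outcome y0 is efficient if no other outcome y
# satisfies y >= y0 componentwise with y != y0; it is weakly efficient if
# no outcome satisfies y > y0 strictly in every component.

def is_efficient(y0, outcomes):
    return not any(all(a >= b for a, b in zip(y, y0)) and y != y0
                   for y in outcomes)

def is_weakly_efficient(y0, outcomes):
    return not any(all(a > b for a, b in zip(y, y0)) for y in outcomes)

outcomes = [(2.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.0, 1.0)]
print(is_efficient((2.0, 1.0), outcomes))         # False: dominated by (2, 2)
print(is_weakly_efficient((2.0, 1.0), outcomes))  # True: nothing strictly better in both
print(is_efficient((2.0, 2.0), outcomes))         # True
```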

The concept of efficiency has played a useful role in analyzing problem MOMP. In

particular, it has been found to be very useful in maximizing the decision maker's utility

function over X when the form of u is unknown. The notion of an efficient solution was

first introduced by Pareto in 1896 (see Pareto, 1896). However, until the 1950's when

Koopmans (1951) introduced the concept of an efficient vector, very few researchers were

involved in the field of MCDM. An early formulation of the problem of vector

maximization was due to Kuhn and Tucker (1951) and later to Karlin (1962). A direct

extension of Koopmans' ideas also appears in a book by Charnes and Cooper (1961).

In order to exclude certain efficient solutions that display an undesirable anomaly,

several authors have developed several more sophisticated concepts of efficiency. Kuhn

and Tucker (1951) were the first to propose a special concept of efficiency which they

called proper efficiency. Geoffrion (1968) later refined Kuhn and Tucker's definition. The

underlying domination cone utilized in both of these definitions was the ordinary

nonpositive orthant. In 1974, Yu (1974) introduced a domination structure defined by a








generalized convex cone. Soon after, Borwein (1977) proposed a definition of proper

efficiency for the case when the domination cone is any nontrivial, closed convex cone.

Benson (1979) and Henig (1982) later refined Borwein's definition. In a related vein, Hu

(1990) subsequently introduced the concept of major efficiency when the domination

structure is defined by a certain nonconvex cone.

Much of the research in MCDM has occurred since 1970. In fact, Zeleny (1982)

notes that the general area of multiple criteria decision making was the most rapidly

growing area of operations research during the 1970's. Most of the MCDM research in

the 1970's was focused on the theoretical foundations of multiple objective mathematical

programming and the development of algorithms and procedures for solving some of these

problems especially multiple objective linear programming problems and problems with

discrete alternatives.

Since a best compromise solution must be an efficient solution under mild

assumptions usually satisfied in practice, one approach to solving MOMP is to find all or a

part of the efficient set and present them to the DM for evaluation. This kind of approach

is usually referred to as the vector maximization approach. During the 1970s, most

researchers focused on developing and investigating algorithms that use this approach. As

a result, many algorithms for determining the set of all efficient extreme points or the

entire efficient decision set of a multiple objective linear program were developed in this

period. We will give a literature review of the algorithms that use the vector maximization

approach for solving multiple objective linear programming problems in Section 2.3. The

reader is also referred to the review paper by Evans (1984) and to the book by Steuer








(1986) for deeper reviews of the algorithms that were developed during this period for

multiple objective linear programming.

One important and new (compared to traditional single objective mathematical

programming) research area in multiple objective mathematical programming is to

investigate the characteristics, properties and structure of the efficient set and the weakly

efficient set. The characteristics and properties of the efficient set not only have important

theoretical meaning but also provide the theoretical foundations upon which various

algorithms are based. For example, many algorithms are based upon the theoretical fact

that an efficient solution can be characterized as an optimal solution to an appropriate

single objective mathematical programming problem. In addition, the simplex-based

methods for finding the entire efficient decision set of a multiple objective linear

programming problem, for example, are usually based in part upon two theoretical facts.

The first of these is the fact that the efficient decision set for such a problem is the union

of its efficient faces. The second is the fact that the set of all efficient extreme points for

such a problem is connected in the sense that one can move from one extreme point to any

other without having to leave the edges of the efficient decision set (Yu and Zeleny,

1975). For details of various simplex-based algorithms for finding the entire efficient

decision set of problem MOLP, the reader is referred to Steuer (1986).
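The first theoretical fact mentioned above, that an efficient solution can be obtained as an optimal solution of an appropriate single objective problem, is commonly realized through weighted-sum scalarization with strictly positive weights. A minimal sketch over a finite, hypothetical outcome set:

```python
# Weighted-sum scalarization sketch: maximizing w^T y over the outcome set
# with strictly positive weights w yields an efficient outcome. A finite
# outcome set is assumed here purely for illustration.

def weighted_sum_argmax(outcomes, w):
    return max(outcomes, key=lambda y: sum(wi * yi for wi, yi in zip(w, y)))

outcomes = [(2.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.0, 1.0)]
print(weighted_sum_argmax(outcomes, (0.6, 0.4)))  # (2.0, 2.0)
print(weighted_sum_argmax(outcomes, (0.2, 0.8)))  # (1.0, 3.0)
```

Varying the weight vector traces out different efficient outcomes, which is the basic idea behind the weight set decomposition methods studied later in this dissertation.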

During the 1970's, research on the characteristics, properties and structure of the

efficient set and of the weakly efficient set was focused on developing various kinds of

approaches for characterizing efficient solutions and weakly efficient solutions of problem

MOMP. We will give a more detailed review of the characteristics, properties and








structure of the efficient set of problem MOMP in Sections 2.1-2.2. The reader is also

referred to the review paper by Gal (1986).

Duality and stability theories were also substantially investigated by many authors

during the 1970s. The reader is referred to the book by Sawaragi et al. (1985) and to the

review paper by Gal (1986) for details.

During the 1980's, emphasis shifted toward the implementation of MCDM models

and algorithms on computers. Decision support systems were developed to aid this

implementation.

With respect to the development of new algorithms, in part since the size of the

efficient set can make it difficult to find a final best compromise solution, interactive

procedures moved to center stage in the 1980s. This class of procedures relies on the

progressive definition of the DM's preferences along with the exploration of the criterion

space. The progressive definition takes place through a DM-analyst or DM-machine

dialogue at each iteration.

A typical interactive algorithm can be characterized by the following procedure:

(1) find a solution (preferably feasible and efficient); (2) interact with the DM to obtain

his/her reaction/response to this solution; and (3) repeat the above steps until the current

solution is "close enough" to a best compromise solution or until some other termination

criterion is met. For details about the research of this type in this period, see the review

papers by Korhonen et al. (1992) and Shin and Ravindran (1991) and the references

therein.
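The three generic steps above can be sketched in a few lines. The decision maker's response in step (2) is simulated here by a hypothetical utility function; in practice it would come from a dialogue with a human DM.

```python
# A skeletal sketch of the generic interactive procedure described above.
# The DM's reaction is simulated by an assumed utility function, which in
# practice is unknown and elicited through dialogue at each iteration.

outcomes = [(2.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.0, 1.0)]

def dm_prefers(y_new, y_old):
    """Hypothetical stand-in for the DM's reaction in step (2)."""
    u = lambda y: 0.4 * y[0] + 0.6 * y[1]  # assumed; unknown in practice
    return u(y_new) > u(y_old)

# Step (1): start from some (here: the first) candidate solution.
current = outcomes[0]
# Steps (2)-(3): query the DM and move while a preferred candidate exists.
improved = True
while improved:
    improved = False
    for y in outcomes:
        if dm_prefers(y, current):
            current, improved = y, True
print(current)  # terminates at the DM's most preferred candidate
```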

As we mentioned earlier, one important topic in multiple objective programming is

the investigation of the characteristics and properties of the efficient set. During the 1980s,








research in this topic was focused on investigating the connectedness of the efficient set

and of the weakly efficient set. The connectedness of the weakly efficient set of problem

MOMP was first studied by Warburton (1983). For further results on the connectedness

of the weakly efficient set, the reader is referred to Luc (1989). For the connectedness of

the efficient set, one open problem raised in this period was whether or not the efficient set

of an m-dimensional MOMP problem is connected when all objective functions are

continuous and strictly quasiconcave, and the decision set is compact and convex. Schaible

(1983) solved the problem in the case of two objective functions, and then Choo et al.

(1985) obtained a partial solution in the case of three objective functions. Finally,

Daniilidis, Hadjidis and Schaible (1997) completely solved the problem in the case of three

objective functions. For higher dimensions, Hu and Sun (1993) solved the problem in the

case where the efficient set is closed. Subsequently, Sun (1996) solved the problem in the

case where one objective function is strongly quasiconcave. Finally, Benoist (1998)

extended the results of Sun (1996) and solved the problem.

Since the late 1980s, MCDM research has increasingly focused on how to support

the real world decision making process. Traditionally, any problem in operations research

is divided into two phases, formulation and solution. However, this scheme is improper in

the real world. It is not fully realistic to assume that a DM is able to formulate a problem

precisely prior to its solution. For the majority of decision making problems, the actual

process is a multiply-repeated, cyclic process of "formulation-solution-analysis-correction-

...." (see Statnikov and Matusov, 1995). As a result, in recent years, much MCDM

research has focused on developing systems that support the entire decision making








process from problem structuring through solution and implementation (see, e.g.

Korhonen and Wallenius, 1988).



1.2. Motivation for this Research


The real world challenges us with many large-scale multiple criteria decision

making problems. It is known that the efficient decision set of problem MOMP is generally

a complicated nonconvex set and grows rapidly as the size of the problem increases (see

e.g. Benson, 1998a). As a result, even with the aid of recent decision support systems, the

algorithms that find all or a substantial part of the efficient decision set for problem

(MOMP) in the decision space are usually not able to usefully solve large-scale problems.

For example, consider some results reported by Benson. In Benson (1998a), the

ADBASE algorithm of Steuer (1989) was used to find the set of all efficient extreme

points in the decision space of some randomly-generated multiple objective linear

programming problems with four objective functions. Benson found that when n=30 and

X is described by 25 linear inequalities, the average number of efficient extreme points in

X in a set of ten randomly-generated problems was 7245.90. When n was increased to 50

and X was described by 50 linear inequalities, this average jumped to 83,780.60 points.

With n=60 and with 50 linear inequalities describing X, each of the ten problems that were

randomly-generated in this study exceeded the solution capacity of ADBASE, indicating

that the number of efficient extreme points in the decision space in each of these problems

exceeded 200,000. These results indicate that, for a large-scale MCDM problem, we









either cannot generate all or a substantial part of the efficient decision set, or the generated

set will probably overwhelm the DM.

It is largely because of this that interactive procedures became popular in the

1980's. The interactive approach is actually a learning process. The DM, in fact, takes part

in the solution process of an interactive procedure. Because of these and other advantages,

it seems reasonable to think that interactive approaches would be very useful in practice.

However, Korhonen et al. (1992) pointed out that, in actuality, only a handful of

interactive multiple objective procedures have been applied and implemented in practice.

One problem with interactive approaches is that nearly all of them require

consistent responses from the DM to be successful. For large-scale problems, it may

become very difficult for the DM to respond consistently. This is because, for a large-

scale problem, it may take numerous iterations to finally reach a solution that satisfies the

DM. Furthermore, each iteration may take much more time for a large-scale problem than

for a small-scale problem. Thus, the assumption that the DM can provide consistent

responses usually draws severe criticism. Some researchers, such as French (1984), feel

that the value of the interactive methods is significantly diminished due to this drawback.

Another problem with the interactive approach is that only a few points in the

feasible set or the efficient set are generally explored. This is in part because the learning

process is tedious and the DM tends to get tired after a relatively small number of

interactions. As a result, the DM may not be provided with sufficient information about

the efficient set to make a useful choice of a best compromise solution. For large-scale

problems, this situation may become worse because of the larger size of the efficient set.








The discussions above show that it is still very important to develop effective

algorithms that can solve large-scale problems. Some ideas and algorithms have been

proposed, in particular, for potentially solving large-scale multiple objective linear

programming problems by the vector maximization approach.

One of these ideas involves analyzing multiple objective linear programming

problems in the outcome space (see e.g. Dauer, 1987; Benson, 1995a). Three reasons for

analyzing multiple objective linear programming problems in the outcome space, rather

than in the decision space, were summarized in Benson (1998c).

First, the dimension p of the outcome space is typically much smaller than the

dimension n of the decision space. As a result, the efficient outcome set is invariably much

smaller and has a much simpler structure than the efficient decision set; see, e.g. Benson

(1995a, 1997, 1998a-c), Dauer (1987, 1993), Dauer and Saleh (1990, 1992) and Dauer

and Liu (1990). Generating all or parts of the efficient outcome set is therefore expected,

in general, to be less computationally demanding than generating all or portions of the

efficient decision set. Furthermore, it also follows that the DM will be less likely to be

confused or overwhelmed if all or portions of the efficient outcome set are presented to

him or her than if all or portions of the efficient decision set are presented.

Second, the DM generally prefers searching for a most preferred solution by

examining the efficient outcome set rather than the efficient decision set. This has been

shown by empirical research.

Third, it is well known that frequently the objective functions map many points in

the efficient decision set onto either a single outcome or onto essentially-equivalent








outcomes in the efficient outcome set. Thus, generating points directly from the efficient

outcome set avoids risking redundant calculations of points in the efficient decision set.

To our knowledge, the first algorithm capable of generating the set of all efficient

extreme points in the outcome space for general multiple objective linear programming

problems was proposed by Benson (1998a). This algorithm uses outer approximation

techniques from global optimization (Horst and Tuy, 1993). It works by generating a finite

number of polyhedra that each approximates an "efficiency-equivalent" polyhedron for the

problem. The algorithm is finite, works in the outcome space, and can be implemented

relatively easily by using univariate search techniques, linear programming, and some

global optimization techniques. The most computationally-demanding task in the

algorithm calls for determining the set of extreme points of each new polyhedron that is

created by adding a linear inequality cut at each step of the algorithm. This is

accomplished with the aid of outer approximation. As a by-product of the algorithm, the

entire weakly efficient outcome set of problem MOLP is also generated (see Benson,

1998a, 1998c).

A newer outcome space-based method for multiple objective linear programming,

called a hybrid approach, has also been proposed by Benson (1998c). This approach

adapts two global optimization, decision set-based methods to outcome space. These

methods are a special simplicial partitioning technique of Ban (see Ban, 1983; Tuy and

Horst, 1988), proposed originally for solving concave minimization problems, and a

general outer approximation method that has been used very frequently to help solve a

variety of global optimization problems (see Horst and Tuy, 1993 and Horst and Pardalos,

1995). In particular, the simplicial partitioning technique is systematically integrated into









the outer approximation scheme in outcome space to determine all efficient extreme points

in the outcome set of the problem in a finite number of iterations.

The two algorithms of Benson have many potential advantages (see Benson,

1998a, 1998c). Because of their advantages, these two algorithms may prove to be

suitable for solving large-scale multiple objective linear programming problems.

However, since the two algorithms of Benson, to our knowledge, are the only two

outcome space based algorithms that are capable of generating the set of all efficient

extreme points of a general MOLP problem, and since we don't know how well these two

algorithms will work, we need to develop and explore more new outcome space based

algorithms. In particular, because different kinds of problems may need different

approaches, the algorithms of Benson (1998a, 1998c) may work well on some problems

but not on others. As a result, we may need alternative algorithms for efficiently finding

the efficient outcome set of problem MOLP in some cases. This also calls for more

outcome space based algorithms for generating the set of efficient extreme points of

problem MOLP.

The rationale for generating the set of all efficient extreme points is based on the

assumption that the best efficient extreme point is an acceptable approximation of a best

compromise solution. However, this assumption may not be true. As a result, we may need to

generate the entire efficient set in some cases. The algorithms in Benson (1998a and

1998c) can generate the entire weakly efficient set in the outcome space. Since the work

of generating the entire efficient set in the outcome space from the weakly efficient set in

the outcome space is generally nontrivial, we need algorithms that can directly generate

the entire efficient set in the outcome space.








As we know, there are various kinds of algorithms for finding the set of efficient

extreme points in the decision space or the entire efficient decision set of problem MOLP.

All of these algorithms are based in part on some kind of modified simplex method. The

idea of using simplex techniques was also adapted by some authors to develop simplex-like

procedures in the outcome space (see e.g. Dauer and Saleh, 1990; Dauer and Liu, 1990).

However, all these simplex-like procedures have to deal with certain unfortunate

difficulties, such as tedious bookkeeping, backtracking, and degeneracy of extreme points.

To overcome these problems, we would like to develop non-simplex-like algorithms.

Motivated by all of the above issues, part of this research will focus on developing

new outcome space based, non-simplex-like algorithms for solving problem MOLP.

As we have mentioned earlier, one important research area in multiple objective

mathematical programming is to investigate the characteristics, properties and structure of

the efficient set. Among the properties of the efficient set, closedness is of interest. One

reason is that some algorithms are based in part upon this property (see e.g. Kornbluth and

Steuer, 1981). However, up to now, there are only a few results about the closedness of

the efficient set for some special cases. These results are scattered or implied in several

papers; see e.g. Yu and Zeleny (1975), Schaible (1983) and Choo and Atkins (1983). We

will give a review on these results in Section 2.2. The reason that only a few results on

closedness are reported in the literature is probably that the efficient set is generally not

closed. Even when all of the objective functions are continuous and strictly quasiconcave

and the feasible set is compact, the efficient set need not be closed (see Choo and Atkins,

1983). Thus, an interesting and important question is whether or not there are general

conditions under which the efficient set of problem MOMP is closed.








Motivated by this, we will focus part of this research on investigating necessary

and sufficient conditions for the efficient decision set of problem MOMP to be closed.



1.3. Contents of this Research


In this research, we will present two new outcome space-based algorithms that

may be potentially useful for solving large-scale multiple objective linear programming

problems.

One algorithm is devoted to finding the set of all efficient extreme points in the

outcome space for problem MOLP. Since this algorithm is based on the decomposition of

the weight set, we will call it the Basic Weight Set Decomposition Algorithm (BWSDA). The

algorithm BWSDA works in the following way: At each iteration, the algorithm will first

either find a weight vector which will lead to an unexplored efficient extreme point in the

outcome space or conclude that all points in YE ∩ Yex have been generated. If a new

weight vector is found at some iteration, the algorithm will in the next iteration call for

solving at most (p+1) linear programs in order to find an unexplored efficient extreme

point in the outcome space.

Two different kinds of approaches will be developed for finding, if it exists, a new

weight vector in each iteration that leads to an unexplored extreme point in the outcome

space. One approach uses a tree search method. The other calls for solving a special

concave minimization problem over a polyhedron. These two different approaches yield

two versions of the Basic Weight Set Decomposition Algorithm, Weight Set









Decomposition Algorithm I (WSDA-I) and Weight Set Decomposition Algorithm II

(WSDA-II).

The other algorithm is devoted to finding the entire efficient outcome set of

problem MOLP. To our knowledge, this algorithm is the first one capable of generating

the entire efficient outcome set of a multiple objective linear programming problem.

The remainder of this research is focused on investigating necessary and sufficient

conditions for the efficient set of problem MOMP to be closed. We first introduce a new

definition of strict quasiconcavity for a vector-valued function. This definition extends the

definition of strict quasiconcavity for a real-valued function to vector-valued functions.

We will show that the efficient decision set of a general MOMP problem is closed when

the vector-valued function formed by the objective functions is strictly quasiconcave under

our new definition and the feasible region is compact and convex. For the special case of

the bi-objective programming problem, we will extend the results obtained by Schaible

(1983). In Schaible (1983), it is shown that the efficient decision set for a bi-objective

programming problem is closed when the two objective functions are continuous and

strictly quasiconcave and the feasible region is compact and convex. In this dissertation,

for the bi-objective programming problems, we will extend Schaible's results to cases

where the feasible region need not be convex and the objective functions need not be

strictly quasiconcave.








1.4. Organization of This Dissertation


The dissertation is organized in the following way. In Chapter 2, we give a

literature review of common approaches for characterizing efficient solutions, a literature

review of studies on closedness of the efficient set of problem MOMP, and a literature

review of the algorithms for solving MOLP problems. In Chapter 3, we will study some

new approaches for characterizing efficient solutions and some necessary and sufficient

conditions for the efficient set of problem MOMP to be closed. New definitions for the

quasiconcavity and strict quasiconcavity of a vector-valued function are also given in this

chapter. In Chapter 4, we will give two new algorithms for finding the set of all efficient

extreme points of a multiple objective linear programming problem in the outcome space.

In Chapter 5, we will give a new algorithm for finding the entire efficient set of a multiple

objective linear programming problem in the outcome space. Finally, in Chapter 6, we will

discuss some ideas for further research.














CHAPTER 2
LITERATURE SURVEY



In this chapter, we will present a literature review of some important approaches

to characterizing efficient solutions, a literature review of the closedness of the efficient

set and a literature review of the algorithms for solving multiple objective linear

programming problems.

2.1. Approaches to Characterizing Efficient Solutions


In order to operationalize the concept of an efficient solution, a common idea is to

relate it to a familiar concept. The most common strategy of this type is to characterize

efficient solutions in terms of optimal solutions of appropriate scalar optimization

problems. There are many ways of forming appropriate scalar problems for accomplishing

this. Among them are two kinds of commonly used scalar optimization problems. These

are "the weighting problem" and "the kth-objective ε-constraint problem." Since our

theoretical results and algorithms are based on these two common approaches, we will

give a brief review on these two kinds of scalar optimization problems and their

relationships with the MOMP problem. For other approaches to characterizing efficient

solutions in terms of appropriate scalar optimization problems, the reader is referred to the

book by Chankong and Haimes (1983) and to references therein. For example, a strategy

used by Sun (1996) is to characterize efficient solutions of an MOMP problem in









R^p in terms of efficient solutions of some particular MOMP problems in R^(p-1). One

advantage of this strategy is that we can use the induction method with this approach to

obtain some theoretical results (see Sun, 1996).

Let

W = { w = (w1, ..., wp)^T | wj ≥ 0 for all j = 1,...,p },

W0 = { w = (w1, ..., wp)^T | wj > 0 for all j = 1,...,p }.

We will refer to both W and W0 as the weight set associated with the MOMP.

The weighting problem can be defined as follows:

P(w): max Σ_{j=1}^{p} wj fj(x)
s.t. x ∈ X,

where w ∈ W is a nonnegative weight.
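As a concrete illustration, for a linear instance the weighting problem P(w) is itself just a linear program that any LP solver can handle. The sketch below is entirely hypothetical (it is not from the dissertation): it assumes scipy is available and uses objectives f1(x) = x1, f2(x) = x2 and feasible set X = {x ≥ 0 : x1 + x2 ≤ 1} chosen purely for illustration.

```python
# A minimal sketch: solving the weighting problem P(w) for a hypothetical
# bi-objective LP with f1(x) = x1, f2(x) = x2 and X = {x >= 0 : x1 + x2 <= 1}.
# linprog minimizes, so the weighted objective is negated.
import numpy as np
from scipy.optimize import linprog

C = np.array([[1.0, 0.0],   # coefficients of f1
              [0.0, 1.0]])  # coefficients of f2
w = np.array([0.7, 0.3])    # a strictly positive weight vector, i.e. w in W0

res = linprog(c=-(w @ C), A_ub=[[1.0, 1.0]], b_ub=[1.0], bounds=[(0, None)] * 2)
print(res.x)  # optimal solution x* = (1, 0); since w > 0, x* is efficient
```

Any LP solver could play the same role; scipy.optimize.linprog is used here only as a convenient, assumed tool.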

The weighting problem P(w) has been used to analyze the efficient solutions since

the vector maximization problem was first formulated by Kuhn and Tucker in 1951. Here

we will state three major results which show relationships between P(w) and MOMP in

the cases that fi, ..., fp are general real-valued functions, concave functions and linear

functions, respectively. In the following, Result 1 is due to the work of Geoffrion (1968),

Zadeh (1963) and Yu (1974). Result 2 is due to Karlin (1959). Some similar results can be

found in DaCunha and Polak (1967), and Yu (1974). Result 3 is due to Zeleny (1974), Yu

and Zeleny (1975) and Isermann (1976).

Result 1. x* is an efficient solution of MOMP if there exists w ∈ W such that x* is

an optimal solution of P(w) and either one of the following two conditions holds:

(i) wj > 0 for all j = 1,...,p;








(ii) x* is the unique optimal solution of P(w).

For a given weight vector w0, if we solve P(w0) and obtain an optimal solution x*,

we can claim from Result 1 that x* is an efficient solution of MOMP if either x* is the

unique optimal solution of P(w0) or w0 ∈ W0. Thus, Result 1 implies that we can find

some efficient solutions of (MOMP) by solving P(w) for some properly chosen value of w.

Result 2. Assume that X is a convex set and fj, j = 1,...,p, are concave on X. If x*

is an efficient solution of (MOMP), then there exists w ∈ W such that x* is an optimal

solution of P(w).

The above result implies that under the convexity assumptions on f and X, we can

find all efficient solutions by solving P(w) for all w ∈ W. Since Result 2 does not

guarantee the efficiency of an optimal solution of P(w) when w ∈ W, we need to verify

whether an optimal solution of P(w) is efficient.

Result 3. Assume that X is a convex polyhedron and fj, j = 1,...,p, are linear

functions. Then, x* is an efficient solution of (MOMP) if and only if there exists a weight

vector w ∈ W0 such that x* is an optimal solution of P(w).

The above Result 3 implies that the set of all efficient solutions of a multiple

objective linear programming problem can be found by solving P(w) for all w ∈ W0.
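Result 3 suggests a simple, if naive, way to sample efficient extreme points of an MOLP: solve P(w) for several strictly positive weight vectors and collect the distinct optimal vertices. The sketch below illustrates only this basic idea (it is not the weight set decomposition algorithm developed in Chapter 4); the instance and the choice of weights are hypothetical, and scipy is an assumed tool.

```python
# Sample efficient extreme points of a hypothetical MOLP by solving the
# weighting problem P(w) for several strictly positive weight vectors.
# Instance: v-max (x1, x2) s.t. x1 + x2 <= 1, x >= 0.
import numpy as np
from scipy.optimize import linprog

C = np.array([[1.0, 0.0], [0.0, 1.0]])  # rows are objective coefficient vectors
A_ub, b_ub = [[1.0, 1.0]], [1.0]

efficient_vertices = set()
for w1 in (0.2, 0.8):                   # strictly positive weights, i.e. w in W0
    w = np.array([w1, 1.0 - w1])
    res = linprog(c=-(w @ C), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    efficient_vertices.add(tuple(np.round(res.x, 6)))

print(sorted(efficient_vertices))       # both efficient extreme points appear
```

A grid of weights gives no guarantee of finding every efficient extreme point in general; the algorithms of Chapter 4 decompose the weight set W0 precisely to avoid that gap.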

Many theoretical results and algorithms in the literature on multiple objective

mathematical programming are based upon the above three results. Our algorithms in

Chapter 4 and Chapter 5 are also based upon Result 3. In Chapter 4, we will decompose

the weight set W0 into a number of subsets. Based upon this decomposition, we will then

present two new algorithms for finding the set of all efficient extreme points and an








algorithm for finding the set of all efficient points for problem MOLP in the outcome

space.

The kth-objective ε-constraint problem was introduced by Haimes (1970) and

Olagundoye (1971), and it can be defined as follows:

Pk(ε): max fk(x)
s.t. fj(x) ≥ εj, j = 1,...,p, j ≠ k,
x ∈ X,

where ε = (ε1,..., εk-1, εk+1,..., εp)^T. For a given x*, we will use Pk(ε*) to represent the

problem Pk(ε), where εj = εj* = fj(x*), j ≠ k.
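For a linear instance, Pk(ε) is again just a linear program. The following hypothetical sketch (not from the dissertation; scipy is an assumed tool) solves P2(ε) with k = 2, p = 2 for f1(x) = x1, f2(x) = x2 and X = {x ≥ 0 : x1 + x2 ≤ 1}.

```python
# A minimal sketch of the k-th objective epsilon-constraint problem Pk(eps).
# Here k = 2: maximize f2(x) = x2 subject to f1(x) = x1 >= eps1 and x in
# X = {x >= 0 : x1 + x2 <= 1}.  The constraint x1 >= eps1 becomes -x1 <= -eps1.
import numpy as np
from scipy.optimize import linprog

eps1 = 0.5
res = linprog(c=[0.0, -1.0],                   # maximize x2 (negated for linprog)
              A_ub=[[1.0, 1.0], [-1.0, 0.0]],  # x1 + x2 <= 1 and -x1 <= -eps1
              b_ub=[1.0, -eps1],
              bounds=[(0, None)] * 2)
print(res.x)  # unique optimal solution (0.5, 0.5)
```

Because (0.5, 0.5) is the unique optimal solution of this P2(ε), it is an efficient solution of the instance.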

The theoretical equivalence between Pk(ε) problems and a general MOMP problem

was established by Haimes et al. in 1971 and, then, was extended by Chankong and

Haimes in their book in 1983. Some other results for general MOMP problems were also

obtained by Lin (1976, 1977). Their main results are stated in the following.

Result 4. Let x* be an optimal solution of Pk(ε*) with εj* = fj(x*), j ≠ k. Then x* is

an efficient solution of MOMP

(i) if and only if x* is an optimal solution of Pk(ε*) for every k = 1,...,p;

(ii) if and only if the optimal value of Pk(ε) is strictly less than fk(x*) for any ε > ε*;

(iii) if x* is the unique optimal solution of Pk(ε*) for some k.

The sufficient condition for efficiency in Result 4 (i) requires Pk(ε) to be solved for

all k before a conclusion can be drawn. The sufficient condition in Result 4 (ii) requires

one to compare fk(x*) with the optimal values of all problems Pk(ε), ε > ε*. The sufficient

condition in Result 4 (iii) requires x* to be the unique optimal solution of Pk(ε*).









Under some generalized convexity assumptions, Sun (1996) obtained the following

result.

Result 5. Let X ⊂ R^n be a nonempty compact and convex set, and let fi, i = 1,...,p,

be continuous and quasiconcave on X. If fk is strongly quasiconcave on X, then x* is an

efficient solution of problem MOMP if and only if there is ε* ∈ {(ε1,..., εk-1, εk+1,..., εp)^T | εi

= fi(y) for all i ≠ k, y ∈ X} such that x* is an optimal solution of Pk(ε*).

In this research, we will give a different sufficient condition which does not have

these requirements for an optimal solution of Pk(ε) to be efficient.

sufficient conditions for the efficient set to be closed will be obtained based upon this

sufficient condition.



2.2. Closedness of the Efficient Set


One important research area in the theory of multiple objective mathematical

programming is to investigate the topological properties of the efficient set and of the

weakly efficient set. Among these properties, closedness is of particular interest. The

closedness of the efficient set and of the weakly efficient set is both theoretically and

algorithmically important. Following are some reasons for this.

(1) The closedness of the efficient set guarantees the existence of at least one

optimal solution for the problem of optimizing a continuous function over the efficient set

of a bounded MOMP problem.

(2) An algorithm for finding all weakly efficient extreme points of a multiple

objective linear fractional programming problem that was proposed by Kornbluth and









Steuer (1981), is based in part on the facts that the weakly efficient set is both closed and

connected.

(3) The closedness of the efficient set in bi-objective strictly quasiconcave

programming has been used by Daniilidis et al. (1997) to prove the connectedness of the

efficient set in three-criteria quasiconvex programming.

It is well known that the weakly efficient set is closed when the objective functions

are continuous and the feasible set is closed (Choo and Atkins, 1983). However, the

efficient set is not closed, in general, even if the objective functions are continuous and

strictly quasiconcave and the feasible set is compact (Choo and Atkins, 1983; Steuer,

1986). Up to now, only a few results exist concerning the closedness of the efficient set

for some special cases. Yu and Zeleny (1975), for instance, showed that the efficient set of

a multiple objective linear programming problem with a compact feasible region can be represented

by the union of its maximal efficient faces. Since the number of maximal efficient faces is

finite and each maximal face is closed, the efficient set of a multiple objective linear

programming problem with a compact feasible region is closed. Later, Schaible (1983)

showed that the efficient set of a bi-objective programming problem is closed when the

objective functions are continuous and strictly quasiconcave and the feasible region is

compact and convex.

As a result, it is important to investigate necessary and sufficient conditions for the

efficient set of a general MOMP problem to be closed.









2.3. Methods for Solving Multiple Objective Linear Programming Problems


In the literature, the methods for solving a MOLP problem are usually classified

into three categories on the basis of the time at which the DM needs to articulate his/her

preference structure over the set of feasible alternatives (see Hwang, 1979, and Evans,

1984).

The first category consists of techniques that require prior articulation of the DM's

preferences. This means that the preference information of the DM is given to the analyst

(who is responsible for solution of the MOLP) before the analyst actually solves the

problem. The information may be either (1) cardinal information, or (2) mixed (ordinal and

cardinal) information.

The major methods using the approach of cardinal information are utility function

methods. In all of the utility function methods, the MOMP (or MOLP) is converted to

max U(f) = U(f1,...,fp), s.t. x ∈ X,

where U(f) is the utility function of the DM over the multiple objectives. Thus these

methods require that U(f) be known prior to solving the MOMP (or MOLP). The

literature on utility function methods and on problems of determining U(f) is discussed and

reviewed in Farquhar (1977), Huber (1974), Keeney and Raiffa (1976), and Dyer and

Sarin (1979). The major advantage of utility function methods is that if U(f) has been

correctly assessed and used, it will ensure the most satisfactory solution to the DM. The

major difficulty with the utility function methods is that the DM is required to articulate

preference judgements in an information vacuum.









The methods using the approach of mixed (ordinal and cardinal) information are

called goal programming (Lee, 1973), and lexicographic ordering (Keeney and Raiffa,

1976).

Goal programming was originally proposed by Charnes and Cooper (1961) for a

linear model. The method requires the DM to set goals that he/she wishes to attain for

each objective function. A preferred solution is then defined as the one that minimizes the

deviations from the set goals. In the most common formulation of goal programming, the

DM, in addition to setting goals for the objective functions, must also be able to give an

ordinal ranking of the objectives. For recent developments in goal programming, we refer

the reader to the book edited by Tamiz (1996).

The lexicographic method requires the objective functions be ranked in order of

importance by the DM. The preferred solution obtained by this method is the one that

maximizes the objectives starting with the most important and proceeding according to the

order of the importance of the objectives. Since the obtained solution is very sensitive to

the ranking of the objectives given by the DM, the analyst should exercise caution in

applying this method when some objectives are of nearly equal importance.

The second category of methods for solving problem MOLP consists of methods

that require the progressive articulation of preferences by the DM. This class of methods,

generally referred to as interactive methods, relies on the progressive definition of the

DM's preferences along with the exploration of the criterion space. The progressive

definition takes place through a DM-analyst or DM-machine dialogue at each iteration.

As explained by Hwang and Masud (1979), and Shin and Ravindran (1991), the

advantages of the interactive methods are (1) there is no need for 'a priori' preference









information; (2) they each yield a learning process for the DM from which the DM

eventually understands the behavior of the system; (3) only local preference information is

needed; and (4) since the DM is part of the solution process, the solution obtained has a

better prospect of being implemented. Again, as explained by Hwang and Masud (1979),

and Shin and Ravindran (1991), the disadvantages are (1) solutions depend upon the

accuracy of the local preference information that the DM can give; (2) for many methods

there is no guarantee that the preferred solution can be obtained within a finite number of

interactive cycles; (3) much more effort is required of the DM than in the methods in the

previous category; (4) nearly all interactive methods require consistent responses from the

DM to be successful.

For surveys of interactive algorithms, we refer to Steuer (1986) and Shin and

Ravindran (1991).

The third category of solution methods for problem MOLP consists of methods

that require a posteriori articulation of preferences. Each of these methods, generally

referred to as vector maximization approaches, typically generates all or a substantial part

of the efficient set. The efficient set or the subset of the efficient set that was generated is

then presented to the DM for evaluation. Since our algorithms lie in this category, we will

give a more detailed review of the algorithms in this category.

Algorithms in this category can be divided into two subclasses based upon the

solutions they seek:

(a) those which concentrate on finding a representative subset of the efficient set;

(b) those which concentrate on finding the entire efficient set.









Algorithms in this category can also be classified as either decision-space-based or

outcome-space-based. A decision-space-based algorithm works in the decision space and

concentrates on finding all or part of the set of the efficient solutions. An outcome-space-

based algorithm works in the outcome space and concentrates on finding all or part of the

set of efficient objective function values. Most current algorithms are decision-space-based

algorithms.

The algorithms in subclass (a) include those of Evans and Steuer (1973a, b),

Zeleny (1974), Steuer (1976a, b), Ecker and Kouada (1978), Armand and Malivert

(1991), Benson and Sayin (1997), Benson (1998a, c), among others. The algorithms in

Benson and Sayin (1997) and Benson (1998a, c) are outcome-space-based algorithms.

The algorithms in subclass (b) include those of Yu and Zeleny (1975), Isermann

(1977), Gal (1977), Ecker et al. (1980), Armand and Malivert (1991), Armand (1993),

among others. All of these algorithms are decision-space-based algorithms.

The most frequently generated representative subset of the efficient decision set for

the algorithms in subclass (a) is the set of all efficient extreme points in the decision space.

The rationale for generating the set of all efficient extreme points is based upon the

assumption that the best efficient extreme point is an acceptable approximation of a best

compromise solution.

Steuer (1976a) notes that algorithms for finding all efficient extreme points in the

decision space generally consist of three phases. Phases I and II consist of finding an initial

extreme point and an initial efficient extreme point, respectively. Phase III involves finding

all remaining efficient extreme points. Phases I and II are easy to implement and only

require classical linear programming procedures. Phase III is where the various algorithms









differ in their approaches. Steuer also notes that there are three classes of approaches for

Phase III among those algorithms for finding all efficient extreme points in the decision

space. The three classes are

(1) the decomposition of parametric space approach;

(2) the adjacent efficient basis approach;

(3) the adjacent efficient extreme point approach.

We will give a detailed discussion of the decomposition of the parametric space

approach later in Chapter 4. The adjacent efficient basis approach involves pivoting among

all efficient bases. The adjacent efficient extreme point approach involves the pursuit of all

efficient edges emanating from each efficient extreme point. All of these algorithms use

modified simplex method approaches.

One general difficulty of these algorithms concerns the handling of degeneracy. It

is well known that several bases may correspond to a single degenerate vertex. Since all of

these methods use basic feasible solutions to characterize the extreme points of the

feasible region, this makes the determination of all efficient extreme points more

complicated when degeneracy is present. To deal with degeneracy, some special pivoting

rules are used in Armand and Malivert (1991).

It is well known that the efficient decision set of a MOLP problem can be

decomposed into the union of its maximal efficient faces in the decision space (Yu and

Zeleny, 1975). Each efficient face is completely determined by the efficient extreme points

and extreme rays that lie in it. Thus, the algorithms for finding the entire efficient decision

set in subclass (b) usually work in the following way. First, all efficient extreme points and

extreme rays are generated. Next, some identification and bookkeeping techniques are









usually used to generate all maximal efficient faces. All of these algorithms are also based

upon some type of modified simplex method approach and must deal with the degeneracy

problem.

Apart from the computational burden of employing these decision-space-based

methods and the complications induced by the presence of degeneracy, there are a number

of other problems related to these algorithms. One problem is that the efficient decision set

is generally a complicated nonconvex set that grows rapidly as the size of the problem

increases. Thus, generating the set in its entirety is possible only in certain special cases.

Another problem is that the size and nature of the generated set in the decision space can

easily overwhelm the DM. It has been noted (Dauer, 1987, 1990, and Benson, 1995a) that

since the number of objective functions is usually much less than the number of variables

in problem MOMP, the efficient outcome set has a much simpler structure than the

efficient decision set. Recently, some researchers have begun to develop tools for

analyzing problem MOLP in the outcome space, rather than in the decision space of the

problem.

To our knowledge, the first algorithm that is capable of generating the set of all

efficient extreme points in the outcome space for a general MOLP problem was proposed

by Benson in 1998 (Benson, 1998a). This algorithm uses outer approximation techniques

from global optimization (Horst and Tuy, 1993). It works by generating a finite number of

polyhedra that each approximates an "efficiency-equivalent" outcome polyhedron for the

problem. The most computationally-demanding task in the algorithm calls for determining

the set of extreme points of each new polyhedron that is created by adding a linear









inequality cut at each step of the algorithm. This is accomplished with the aid of outer

approximation.

A newer outcome-space-based method for multiple objective linear programming,

called a hybrid approach, has also been proposed by Benson (1998c). This approach

adapts two global optimization, decision set-based methods to the outcome space. These

methods are a special simplicial partitioning technique of Ban (see Ban, 1983, and Tuy and

Horst, 1988), proposed originally for solving concave minimization problems, and a

general outer approximation method that has been used very frequently to help solve a

variety of global optimization problems (see Horst and Tuy, 1993, and Horst and

Pardalos, 1995). In particular, the simplicial partitioning technique is systematically

integrated into the outer approximation scheme in outcome space to determine all efficient

extreme points in the outcome set of the problem in a finite number of iterations.

To our knowledge, there are no direct algorithms for finding the entire efficient

outcome set. The algorithms in Benson (1998a, 1998c), however, can be used to find the

entire weakly efficient outcome set.

As discussed in Section 1.2, it is important to develop new outcome-space-based

algorithms for finding the set of all efficient extreme points in the outcome space, and to

develop algorithms for finding the entire efficient outcome set directly.














CHAPTER 3
ON THE CLOSEDNESS OF THE EFFICIENT SET OF PROBLEM MOMP



We restate problem MOMP as follows:

MOMP: v-max f(x) = (f1(x), ..., fp(x)),

s.t. x ∈ X,

where p ≥ 2, x ∈ R^n is a vector of decision variables, fi, i = 1,2,...,p, are objective functions

and X is the set of feasible decision alternatives. Since problem MOMP has p objective

functions, we call it a p-dimensional multiple objective mathematical programming

problem (or a p-dimensional MOMP problem). The vector-valued function f: X->RP is

called a p-dimensional vector-valued function. The efficient decision set and weakly

efficient decision set of problem MOMP are denoted by E(f, X) and Ew(f, X), respectively.

The efficient outcome set and weakly efficient outcome set of problem MOMP are

denoted by E(f(X), R ) and Ew(f(X), R ), respectively.

One important research area in the theory of multiple objective mathematical

programming is to investigate the topological properties of the efficient set and of the

weakly efficient set. Among these properties, closedness is of interest. The closedness of

the efficient set and of the weakly efficient set are both theoretically and algorithmically

important (c.f Section 1.2 and Section 2.2).

It is well known that the weakly efficient decision set Ew(f, X) is closed when the

objective functions are continuous over X and the feasible set is closed (Choo and









Atkins, 1983). However, the efficient decision set is not closed in general, even if the

objective functions are continuous and strictly quasiconcave on X and the feasible set is

compact (Choo and Atkins, 1983; Steuer, 1986). As mentioned earlier, the only

closedness results for the efficient set that exist in the literature concern certain special

cases (see, e.g., Section 2.2). Thus the question as to what the necessary conditions are

for the efficient set of a general MOMP problem to be closed remains open. The question

as to whether or not we can find any sufficient conditions for the efficient set of a general

MOMP problem to be closed also remains open.

Motivated, in part, by answering these questions, we focus this chapter on finding

necessary and sufficient conditions for the efficient decision set E(f, X) of problem MOMP

to be closed. We will first introduce a new definition of strict quasiconcavity for a vector-

valued function. This definition extends the definition of strict quasiconcavity for a real-

valued function to vector-valued functions. It is shown that a vector-valued function is

strictly quasiconcave on a compact convex set X if each of its component functions is

strongly quasiconcave or if each of its component functions is linear and X is a polytope.

We also show that the reverse of the above relations may not be true. Under our new

definition, it is shown that the efficient decision set E(f, X) of problem MOMP is closed

when the vector-valued function f(x) is strictly quasiconcave and the feasible region X is

compact and convex. Some other sufficient conditions are also obtained. By applying

some of the new results to bicriteria mathematical programming problems, we obtain a

new result on the closedness of E(f, X), which extends a result obtained by Schaible

(1983). In Schaible (1983), it is shown that the efficient decision set for a bicriteria

programming problem is closed when the two objective functions are continuous and









strictly quasiconcave and the feasible region is compact and convex. Here, for the

bicriteria programming problem, we extend Schaible's result to cases where the feasible

region need not be convex and the objective functions need not be strictly quasiconcave. A

necessary condition for E(f, X) to be closed will also be given in this chapter.

This chapter is organized in the following way. In Section 3.1, we introduce some

notation and review the definitions of upper semicontinuous and lower semicontinuous

point-to-set maps. In Section 3.2, we introduce and study new definitions for a vector-

valued function to be quasiconcave and strictly quasiconcave. Some new results on

characterizing the efficient set of a p-dimensional MOMP problem in terms of the optimal

solution sets of some single-valued optimization problems and in terms of the efficient sets

of some (p-1)-dimensional MOMP problems are given in Section 3.3. In Section 3.4, we

present some necessary and sufficient conditions for the efficient set of a general MOMP

problem to be closed. The results on the closedness of a bicriteria programming problem

are given in Section 3.5. Finally, some concluding remarks are given in Section 3.6.



3.1. Definitions and Notation


Suppose that X ⊆ R^n is a nonempty and compact set, and that fi, i = 1, 2, ..., p, are continuous functions on X. We define

    fi^min = min {fi(x) | x ∈ X},    (3.1)

    fi^max = max {fi(x) | x ∈ X},    (3.2)

    f(i)(x) = (f1(x), ..., f_{i-1}(x), f_{i+1}(x), ..., fp(x))^T,    (3.3)

and

    f(i)(X) = {f(i)(x) | x ∈ X},    (3.4)

for all i = 1, 2, ..., p. Notice that f(i)(x) is a (p-1)-dimensional vector-valued function on X if p ≥ 2, and f(i)(X) is a nonempty set in R^{p-1}.

Let i ∈ {1, 2, ..., p}. For any t ∈ [fi^min, fi^max] and ε ∈ f(i)(X), we define

    Xi(t) = {x ∈ X | fi(x) ≥ t},    (3.5)

and

    X(i)(ε) = {x ∈ X | f(i)(x) ≥ ε}.    (3.6)

Let y = (y1, ..., ym)^T and z = (z1, ..., zm)^T be any two vectors in R^m with m ≥ 1. We define

    min {y, z} = (min {y1, z1}, ..., min {ym, zm})^T.    (3.7)

Let F be a point-to-set map from a set U ⊆ R^k to subsets of a set V ⊆ R^m, and let U' ⊆ U. Then, the union of the sets F(u) for u ∈ U' is denoted by ∪{F(u) | u ∈ U'}, and the intersection of the sets F(u) for u ∈ U' is denoted by ∩{F(u) | u ∈ U'}.
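When X is replaced by a finite sample of points, the sets in (3.1)-(3.6) can be computed directly. The sketch below does this for a hypothetical bicriteria instance; the grid and the objective functions f1, f2 are illustrative choices, not data from the text.

```python
# Hypothetical bicriteria instance on a grid approximating X = [0, 3] x [0, 3];
# f1, f2 below are illustrative choices, not functions from the text.
def f(x):
    return (4.0 - (x[0] - 1.0) ** 2, 4.0 - (x[1] - 2.0) ** 2)

grid = [(0.5 * i, 0.5 * j) for i in range(7) for j in range(7)]

def X_i(t, i):
    """Xi(t) = {x in X | fi(x) >= t}, as in (3.5)."""
    return [x for x in grid if f(x)[i] >= t]

def X_sub(eps, i):
    """X(i)(eps) = {x in X | f(i)(x) >= eps componentwise}, as in (3.6)."""
    return [x for x in grid
            if all(v >= e for v, e in zip(f(x)[:i] + f(x)[i + 1:], eps))]

f1_min = min(f(x)[0] for x in grid)   # the bound (3.1) for i = 1
f1_max = max(f(x)[0] for x in grid)   # the bound (3.2) for i = 1
print(f1_min, f1_max, len(X_i(4.0, 0)), len(X_sub((4.0,), 0)))  # → 0.0 4.0 7 7
```

On this grid the t-level set X1(4.0) and the ε-constraint set X(1)((4.0,)) each pick out one line of seven grid points, the maximizers of f1 and f2 respectively.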

Now, we review the concepts of upper semicontinuous and lower semicontinuous

point-to-set maps.

Definition 3.1.1. Let F be a point-to-set map from a set U ⊆ R^k to subsets of a set V ⊆ R^m. Then, F is said to be

(a) upper semicontinuous at a point u* ∈ U, if {u^k} ⊆ U, u^k → u* (as k → ∞) and v^k ∈ F(u^k) with v^k → v* (as k → ∞) imply that v* ∈ F(u*);

(b) lower semicontinuous at a point u* ∈ U, if {u^k} ⊆ U, u^k → u* (as k → ∞) and v* ∈ F(u*) imply the existence of an integer M and a sequence of points {v^k} ⊆ V such that v^k ∈ F(u^k) for all k ≥ M and v^k → v* as k → ∞.











3.2. Quasiconcave and Strictly Quasiconcave Vector-Valued Functions


The definitions of quasiconcavity, strict quasiconcavity, and strong quasiconcavity

for a real-valued function can be stated as follows.

Definition 3.2.1. Let S ⊆ R^n be convex, and let g be a real-valued function defined on S. Then, g is said to be

(a) quasiconcave on S, if

    g(λx1 + (1-λ)x2) ≥ min {g(x1), g(x2)}

for any x1, x2 ∈ S and λ ∈ (0, 1);

(b) strictly quasiconcave on S, if

    g(λx1 + (1-λ)x2) > min {g(x1), g(x2)}

for any x1, x2 ∈ S with g(x1) ≠ g(x2), and λ ∈ (0, 1);

(c) strongly quasiconcave on S, if

    g(λx1 + (1-λ)x2) > min {g(x1), g(x2)}

for any x1, x2 ∈ S with x1 ≠ x2, and λ ∈ (0, 1).
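The three classes of Definition 3.2.1 can be separated by simple one-dimensional examples. The functions below are illustrative choices on [0, 3], not taken from the text.

```python
# Illustrative one-dimensional functions on [0, 3] (not taken from the text)
# separating the three classes of Definition 3.2.1.

def quasiconcave_only(x):
    # Level sets are intervals, so the function is quasiconcave, but the
    # plateau at height 0.5 on [2.5, 3] is a local maximum that is not
    # global, so the function is not strictly quasiconcave.
    if x <= 1.0:
        return x
    if x <= 2.0:
        return 1.0
    if x <= 2.5:
        return 3.0 - x
    return 0.5

def strictly_not_strongly(x):
    return min(x, 1.0)        # strictly quasiconcave; flat top, so not strongly

def strongly_quasiconcave(x):
    return -(x - 1.5) ** 2    # strictly unimodal, hence strongly quasiconcave

def violates_strict(g, x1, x2, lam):
    """True if the strict inequality of Definition 3.2.1(b) fails at lam."""
    if g(x1) == g(x2):
        return False          # part (b) only constrains pairs with g(x1) != g(x2)
    m = min(g(x1), g(x2))
    return not (g(lam * x1 + (1 - lam) * x2) > m)

print(violates_strict(quasiconcave_only, 2.0, 3.0, 0.25))      # → True
print(violates_strict(strictly_not_strongly, 0.0, 2.0, 0.5))   # → False
```

The first check evaluates the combination at x = 2.75, which lies on the non-global plateau, so the strict inequality of part (b) fails there.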

We now review some properties that we will need concerning the quasiconcavity

and strict quasiconcavity of real-valued functions. For an excellent review of various

properties of quasiconcave and strictly quasiconcave functions, we refer the reader to the

book by Avriel et al. (1988).

The following property is well known. It can be proved directly from the

definition.








Property 3.2.1. Let g be a real-valued function defined on the convex set S ⊆ R^n. Then, g is quasiconcave on S if and only if the level set

    L(α) = {x ∈ S | g(x) ≥ α}

is convex for any scalar α ∈ R^1 such that L(α) ≠ ∅.

The following properties will be useful in this chapter.

Property 3.2.2 (Karamardian, 1967). Let g be a continuous real-valued function defined on the convex set S ⊆ R^n. If g is strictly quasiconcave on S, then it is also quasiconcave on S.

Property 3.2.3 (Elkin, 1968; Martos, 1969). Let g be a continuous quasiconcave real-valued function defined on the convex set S ⊆ R^n. Then, g is strictly quasiconcave on S if and only if every local maximum of g in S is also a global maximum of g on S.

Property 3.2.4 (Zang and Avriel, 1975). Let g be a real-valued function defined on the convex set S ⊆ R^n. Then, L(α) is a lower semicontinuous point-to-set map on G = {α | L(α) ≠ ∅} if and only if every local maximum of g in S is also a global maximum on S.

From Properties 3.2.2-3.2.4, we immediately have the following result concerning strictly quasiconcave functions.

Property 3.2.5. Let g be a continuous real-valued function defined on the convex set S ⊆ R^n. Then, g is strictly quasiconcave on S if and only if g is quasiconcave on S and L(α) is a lower semicontinuous point-to-set map on G = {α | L(α) ≠ ∅}.

Properties 3.2.1 and 3.2.5 imply the following alternative definitions for

quasiconcave and strictly quasiconcave functions.

Definition 3.2.2. Let S ⊆ R^n be convex, and let g be a continuous real-valued function defined on S. Then, g is said to be

(a) quasiconcave on S, if the level set

    L(α) = {x ∈ S | g(x) ≥ α}

is convex for any α ∈ R^1 such that L(α) ≠ ∅;

(b) strictly quasiconcave on S, if g is quasiconcave on S and

    L(α) = {x ∈ S | g(x) ≥ α}

is a lower semicontinuous point-to-set map on G = {α | L(α) ≠ ∅}.
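Definition 3.2.2(b) can be made concrete numerically. For a continuous quasiconcave function with a local-maximum plateau that is not global (an illustrative choice, not a function from the text), the level-set map fails lower semicontinuity exactly at the height of that plateau:

```python
# An illustrative continuous quasiconcave g on [0, 3] (not from the text)
# whose level-set map fails lower semicontinuity: g has a local-maximum
# plateau of height 0.5 on [2.5, 3] that is not a global maximum, so by
# Definition 3.2.2 g is quasiconcave but not strictly quasiconcave.
def g(x):
    if x <= 1.0:
        return x
    if x <= 2.0:
        return 1.0
    if x <= 2.5:
        return 3.0 - x
    return 0.5

xs = [i / 100.0 for i in range(301)]          # grid on [0, 3]

def L(alpha):
    """Discretized level set L(alpha) = {x in [0, 3] | g(x) >= alpha}."""
    return [x for x in xs if g(x) >= alpha]

# x = 3 belongs to L(0.5), but the sets L(0.5 + delta) stay inside [0, 2.5],
# so no points of L(alpha_k) can approach x = 3 as alpha_k -> 0.5 from above:
# the map alpha -> L(alpha) is not lower semicontinuous at alpha = 0.5.
print(max(L(0.5)), max(L(0.51)) <= 2.5)  # → 3.0 True
```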

We now extend Definition 3.2.2 to vector-valued functions. To aid in this

extension, we define a vector-valued function to be continuous on a set if each of its

component functions is continuous on the set.

Definition 3.2.3. Let f(x) = (f1(x), ..., fp(x))^T be a p-dimensional continuous vector-valued function defined on the convex set X ⊆ R^n. Then, f is said to be

(a) quasiconcave on X, if the level set

    M(γ) = {x ∈ X | f(x) ≥ γ}

is convex for any γ ∈ R^p such that M(γ) ≠ ∅;

(b) strictly quasiconcave on X, if f is quasiconcave on X and the point-to-set map

    M(γ) = {x ∈ X | f(x) ≥ γ}

is lower semicontinuous on G' = {γ ∈ R^p | M(γ) ≠ ∅}.

We now present some properties of quasiconcave vector-valued functions and

strictly quasiconcave vector-valued functions. Some of these properties will be used later

in this chapter in the proofs of our results on closedness. In addition to this, these

properties also show us the relationships among various classes of generalized convex

functions and linear functions.








Theorem 3.2.1. Let f(x) = (f1(x), ..., fp(x))^T be a p-dimensional continuous and quasiconcave vector-valued function defined on the convex set X ⊆ R^n. Assume that {i1, ..., ik} ⊆ {1, 2, ..., p}, where 1 ≤ k < p. Then, f'(x) = (f_{i1}(x), ..., f_{ik}(x))^T is continuous and quasiconcave on X.

Proof. We only need to prove the case where k = p-1. Without loss of generality, we assume that {i1, ..., ik} = {1, ..., p-1}. Choose γ' ∈ R^{p-1} such that {x ∈ X | f'(x) ≥ γ'} ≠ ∅. Let y, z ∈ {x ∈ X | f'(x) ≥ γ'}. Since X is convex, we have

    λy + (1-λ)z ∈ X

for any 0 < λ < 1. Since y, z ∈ {x ∈ X | f'(x) ≥ γ'}, we have f'(y) ≥ γ' and f'(z) ≥ γ'. It follows that

    min {f'(y), f'(z)} ≥ γ'.

Let γ = min {f(y), f(z)}. Then, y, z ∈ {x ∈ X | f(x) ≥ γ}. Since f is quasiconcave, we have that {x ∈ X | f(x) ≥ γ} is convex. It follows that

    λy + (1-λ)z ∈ {x ∈ X | f(x) ≥ γ}

for any 0 < λ < 1. That is,

    f(λy + (1-λ)z) ≥ γ = min {f(y), f(z)}.

This implies that f'(λy + (1-λ)z) ≥ min {f'(y), f'(z)} ≥ γ' for any 0 < λ < 1. As a result, the set {x ∈ X | f'(x) ≥ γ'} is convex. By Definition 3.2.3, f' is quasiconcave on X. □

Theorem 3.2.2. Let f(x) = (f1(x), ..., fp(x))^T be a p-dimensional continuous vector-valued function defined on the convex set X ⊆ R^n. Then, the vector-valued function f is quasiconcave on X if and only if each of its component functions fi, i = 1, 2, ..., p, is quasiconcave on X.

Proof. If f is quasiconcave on X, then it follows from Theorem 3.2.1 that every component function of f is quasiconcave on X.

Now, we assume that every component function fi is quasiconcave on X for i ∈ {1, ..., p}. To prove f is quasiconcave on X, we need to prove that the level set

    M(γ*) = {x ∈ X | f(x) ≥ γ*}

is convex for any γ* in R^p with M(γ*) ≠ ∅.

Let γ* be any given point in R^p such that M(γ*) ≠ ∅, and let x, y ∈ M(γ*). Then, f(x) ≥ γ* and f(y) ≥ γ*. Since fi is quasiconcave on X for each i = 1, ..., p, we have that

    f(λx + (1-λ)y) = (f1(λx + (1-λ)y), ..., fp(λx + (1-λ)y))^T
                   ≥ (min {f1(x), f1(y)}, ..., min {fp(x), fp(y)})^T
                   = min {f(x), f(y)} ≥ γ*

for any 0 < λ < 1. This implies λx + (1-λ)y ∈ M(γ*) for any 0 < λ < 1. Therefore, M(γ*) is a convex set. □
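The componentwise inequality at the heart of the proof can be spot-checked numerically. The sketch below uses two illustrative concave (hence quasiconcave) components, not functions from the text, and verifies f(λx + (1-λ)y) ≥ min{f(x), f(y)} in the componentwise sense of (3.7) on random pairs.

```python
import numpy as np

# Spot-check of the inequality in the proof of Theorem 3.2.2: with each
# component concave (hence quasiconcave), the combination dominates the
# componentwise minimum (3.7). The two components are illustrative choices.
def f(x):
    return np.array([-np.sum((x - 1.0) ** 2),      # concave
                     np.min(x)])                   # concave (min of linear)

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.uniform(-2, 2, size=2), rng.uniform(-2, 2, size=2)
    lam = rng.uniform()
    lhs = f(lam * x + (1 - lam) * y)
    rhs = np.minimum(f(x), f(y))                   # the componentwise min (3.7)
    ok &= bool(np.all(lhs >= rhs - 1e-12))
print(ok)  # → True
```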

Theorem 3.2.3. Let f(x) = (f1(x), ..., fp(x))^T be a p-dimensional, continuous, strictly quasiconcave vector-valued function defined on the compact, convex set X ⊆ R^n. Assume that {i1, ..., ik} ⊆ {1, 2, ..., p}, where 1 ≤ k < p. Then f'(x) = (f_{i1}(x), ..., f_{ik}(x))^T is strictly quasiconcave on X.

Proof. We only need to prove the case where k = p-1. Without loss of generality, we assume that {i1, ..., ik} = {1, ..., p-1}.

Since f is continuous and strictly quasiconcave on X, it is also quasiconcave on X. It follows from Theorem 3.2.1 that f'(x) = (f1(x), ..., f_{p-1}(x))^T is quasiconcave on X. Therefore, to complete the proof, we need only to prove that the point-to-set map

    M'(γ) = {x ∈ X | f'(x) ≥ γ}

is lower semicontinuous on G' = {γ ∈ R^{p-1} | M'(γ) ≠ ∅}.

Towards this end, let γ* ∈ G', {γ^k} ⊆ G', and γ^k → γ* as k → ∞, and let x* ∈ M'(γ*). By definition, to prove that M'(γ) is lower semicontinuous at γ*, we need to find an integer K and a sequence {x^k} such that x^k ∈ M'(γ^k) for k ≥ K and x^k → x* as k → ∞.

Since f is continuous on the compact set X, fp attains a minimum value on X. Let fp^min = min {fp(x) | x ∈ X}, and let t* = ((γ*)^T, fp^min)^T. Then, fp(x*) ≥ fp^min. This, together with x* ∈ M'(γ*), yields f(x*) ≥ t*. Therefore, x* ∈ M(t*) = {x ∈ X | f(x) ≥ t*}. Let t^k = ((γ^k)^T, fp^min)^T for k = 1, 2, .... We have t^k → t* as k → ∞. Since γ^k ∈ G' (k = 1, 2, ...), it follows that there exists y^k ∈ X such that

    f'(y^k) ≥ γ^k

for all k = 1, 2, .... Since fp^min = min {fp(x) | x ∈ X}, we have fp(y^k) ≥ fp^min. Therefore, f(y^k) ≥ t^k for k = 1, 2, .... It follows that t^k ∈ G = {t ∈ R^p | M(t) ≠ ∅} for any k. Moreover, since x* ∈ M(t*), we have t* ∈ G. Therefore, we have found a vector x* ∈ M(t*) and a sequence of vectors {t^k} ⊆ G such that t^k → t* as k → ∞. Since f is strictly quasiconcave on X, the point-to-set map M(t) = {x ∈ X | f(x) ≥ t} is lower semicontinuous at t* ∈ G. Therefore, there exist an integer K and a sequence of vectors {x^k} such that x^k ∈ M(t^k) for k ≥ K and x^k → x* as k → ∞. Since M(t^k) ⊆ M'(γ^k) for each k, there exist an integer K and a sequence {x^k} such that x^k ∈ M'(γ^k) for k ≥ K and x^k → x* as k → ∞. This implies that M'(γ) is lower semicontinuous at γ*. By the choice of γ*, this implies that M'(γ) is lower semicontinuous on G'. Consequently, the proof is complete. □

Before going further, we need to prove a lemma. Consider the system

    Du = v*,
    u^T y* = 0,    (3.8)
    u ≥ 0,

where D is an n × q matrix, v* ∈ R^n and y* ∈ R^q are two given vectors, and y* ≥ 0. Let I = {i | yi* ≠ 0, i = 1, ..., q}. Let ui = 0 for i ∈ I, and let u' be the vector whose elements are ui, i ∈ {1, ..., q}\I. Then, the above system is equivalent to the linear system

    D'u' = v*,
    u' ≥ 0,    (3.9)

where D' is the submatrix of D whose columns are indexed by {1, ..., q}\I.

If the system (3.8) is consistent, then, from linear programming theory, there is a nonsingular submatrix of D', denoted by B, and a subvector of v*, denoted by v', such that the vector u* obtained by setting the components corresponding to B equal to B^{-1}v' and all remaining components equal to zero is a solution of (3.8). Notice that ||v'|| ≤ ||v*||. Therefore, ||u*|| = ||B^{-1}v'|| ≤ ||B^{-1}|| ||v'|| ≤ ||B^{-1}|| ||v*||, where ||B^{-1}|| = max {||B^{-1}v|| | ||v|| = 1}. Notice that since B is a nonsingular submatrix of D', it is also a nonsingular submatrix of D. Let M = max {||B^{-1}|| | B is a nonsingular submatrix of D}. Then, ||u*|| ≤ M||v*||. Consequently, we have proven the following lemma, which will be used in the proof of Theorem 3.2.4 (c).

Lemma 3.2.1. Let D be an n × q matrix, let v* ∈ R^n and y* ∈ R^q be two given vectors with y* ≥ 0, and let M = max {||B^{-1}|| | B is a nonsingular submatrix of D}. If system (3.8) is consistent, then it has a solution u* such that ||u*|| ≤ M||v*||.
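On small instances the constant M of Lemma 3.2.1 can be computed by brute-force enumeration of the nonsingular square submatrices of D. The data below are illustrative (not from the text), and the Euclidean norm is assumed, so ||B^{-1}|| is the spectral norm.

```python
import itertools
import numpy as np

# Small numerical illustration of Lemma 3.2.1 (illustrative data): for an
# n x q matrix D, M = max ||B^{-1}|| over nonsingular square submatrices B,
# and any consistent system (3.8) has a solution u* with ||u*|| <= M ||v*||.
D = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])            # n = 2, q = 3

def M_of(D):
    n, q = D.shape
    best = 0.0
    for k in range(1, min(n, q) + 1):
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(q), k):
                B = D[np.ix_(rows, cols)]
                if abs(np.linalg.det(B)) > 1e-12:
                    best = max(best, np.linalg.norm(np.linalg.inv(B), 2))
    return best

M = M_of(D)
v_star = np.array([1.0, 1.0])
y_star = np.array([0.0, 0.0, 1.0])        # u^T y* = 0 with u >= 0 forces u3 = 0
u_star = np.array([1.0, 1.0, 0.0])        # solves D u = v*, u^T y* = 0, u >= 0
assert np.allclose(D @ u_star, v_star) and u_star @ y_star == 0
print(np.linalg.norm(u_star) <= M * np.linalg.norm(v_star))  # → True
```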

Theorem 3.2.4. Let f(x) = (f1(x), ..., fp(x))^T be a p-dimensional continuous vector-valued function defined on the nonempty compact, convex set X ⊆ R^n.

(a) If the vector-valued function f is strictly quasiconcave on X, then the real-valued function fi is strictly quasiconcave on X for each i = 1, ..., p.

(b) If the real-valued functions fi, i = 1, ..., p, are strongly quasiconcave on X, then the vector-valued function f is strictly quasiconcave on X.

(c) If the real-valued functions fi, i = 1, ..., p, are linear functions and X is a polytope, then the vector-valued function f is strictly quasiconcave on X.

Proof. (a) The result follows immediately from Theorem 3.2.3, Definition 3.2.2, Definition 3.2.3, and Property 3.2.5.

(b) Since fi, i = 1, ..., p, are strongly quasiconcave on X, they are also quasiconcave on X. It follows from Theorem 3.2.2 that the vector-valued function f is quasiconcave on X. So, by the definition of strict quasiconcavity we only need to prove that the point-to-set map γ → M(γ) = {x ∈ X | f(x) ≥ γ} is lower semicontinuous at any given point γ* in R^p such that M(γ*) ≠ ∅. Towards this end, let γ* ∈ R^p satisfy M(γ*) ≠ ∅. Let {γ^k} ⊆ G' = {γ | M(γ) ≠ ∅}, γ^k → γ* as k → ∞, and x* ∈ M(γ*). Since X is a compact set, M(γ^k) is a nonempty compact set for each k. Therefore, we may choose x^k ∈ M(γ^k) for each k such that

    ||x^k - x*|| = min {||x - x*|| | x ∈ M(γ^k)}.

Assume that y* is a cluster point of the sequence {x^k} and y* ≠ x*. Then, there is a subsequence of {x^k} convergent to y*. Without loss of generality, we assume that x^k → y* as k → ∞. Then, ||x^k - x*|| → ||y* - x*|| as k → ∞. So, there is a positive integer K1 such that for all k ≥ K1,

    | ||x^k - x*|| - ||y* - x*|| | < 0.5 ||y* - x*||.

Therefore,

    ||x^k - x*|| ≥ 0.5 ||y* - x*||    (3.10)

when k ≥ K1.

For each k, since x^k ∈ M(γ^k), f(x^k) ≥ γ^k. By the continuity of f, this implies that f(y*) ≥ γ*. Since x* ∈ M(γ*), we also have that f(x*) ≥ γ*. Since fi, i = 1, 2, ..., p, are strongly quasiconcave and y* ≠ x*, it follows that f(λx* + (1-λ)y*) > min {f(x*), f(y*)} ≥ γ* for any λ such that 0 < λ < 1. By choosing λ = 0.6, we obtain that f(0.6x* + 0.4y*) > γ*. Since γ^k → γ* as k → ∞, there exists a positive integer K2 such that f(0.6x* + 0.4y*) > γ^k when k ≥ K2. That implies that 0.6x* + 0.4y* ∈ M(γ^k) when k ≥ K2. Therefore,

    ||(0.6x* + 0.4y*) - x*|| ≥ min {||x - x*|| | x ∈ M(γ^k)} = ||x^k - x*||

for all k ≥ K2. This implies that

    0.4 ||y* - x*|| ≥ ||x^k - x*||    (3.11)

for all k ≥ K2.

From (3.10) and (3.11), it follows that

    0.4 ||y* - x*|| ≥ ||x^k - x*|| ≥ 0.5 ||y* - x*||

for all k ≥ max {K1, K2}. This is a contradiction, since y* ≠ x*. Therefore y* = x* must hold. By the choice of y*, we have proven that x^k → x* as k → ∞. Thus, M(γ) is lower semicontinuous at γ*.

(c) Since f is a linear vector-valued function and X is a polytope, we may assume that f(x) = Cx and X = {x ∈ R^n | Ax ≤ b, x ≥ 0}, where C is a p × n matrix, A is an m × n matrix, and b ∈ R^m. Similarly to (b), we only need to prove that the point-to-set map γ → M(γ) = {x ∈ X | Cx ≥ γ} is lower semicontinuous at any given point γ* in R^p with M(γ*) ≠ ∅.

Suppose that γ* ∈ R^p and M(γ*) ≠ ∅. Suppose that {γ^k} ⊆ G' = {γ | M(γ) ≠ ∅}, γ^k → γ* as k → ∞, and x* ∈ M(γ*). Since X is a compact set, M(γ^k) is a nonempty compact set. Therefore, we may choose x^k ∈ M(γ^k) for each k such that x^k is an optimal solution of the problem (Q) given by

    min (1/2)||x - x*||^2
    s.t. Cx ≥ γ^k, Ax ≤ b, x ≥ 0.

Problem (Q) is a convex quadratic programming problem. By using the Karush-Kuhn-Tucker conditions, we see that x^k is an optimal solution if and only if x^k is a feasible solution and there exists a vector u ∈ R^{p+m+n} such that

    x^k - x* + [-C^T, A^T, -I]u = 0,    (3.12)

    u^T ((Cx^k - γ^k)^T, (b - Ax^k)^T, (x^k)^T)^T = 0,    (3.13)

    u ≥ 0,    (3.14)

where I is the n × n identity matrix. In Lemma 3.2.1, let D = [-C^T, A^T, -I], v* = x* - x^k, y* = ((Cx^k - γ^k)^T, (b - Ax^k)^T, (x^k)^T)^T, and M = max {||B^{-1}|| | B is a nonsingular submatrix of D}. By Lemma 3.2.1, for each k = 1, 2, ..., there exists a vector u^k that satisfies (3.12)-(3.14) such that ||u^k|| ≤ M ||x^k - x*||.

Since {x^k} ⊆ X, and X is compact, {x^k} has a convergent subsequence. Without loss of generality, we may assume that {x^k} itself converges to a vector z*. Since x^k ∈ M(γ^k) for each k, z* ∈ M(γ*). Notice that M is a fixed number and ||u^k|| ≤ M||x^k - x*|| for each k. Therefore, {u^k} is bounded. This implies that {u^k} has at least one cluster point. Let u* be a cluster point of {u^k}. Then, from (3.12)-(3.14), letting k → ∞, we obtain

    z* - x* + [-C^T, A^T, -I]u* = 0,

    (u*)^T ((Cz* - γ*)^T, (b - Az*)^T, (z*)^T)^T = 0,

    u* ≥ 0.

This implies that z* is an optimal solution of the problem

    min (1/2)||x - x*||^2
    s.t. x ∈ M(γ*).

Since x* ∈ M(γ*), the above problem has the unique optimal solution x*. Therefore, z* = x*.

Summarizing, we have found a sequence {x^k} such that x^k ∈ M(γ^k) for each k and x^k → x*. Thus, M(γ) is lower semicontinuous at γ*. By the choice of γ*, the proof is complete. □
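The KKT system (3.12)-(3.14) used in part (c) can be verified numerically on a small instance. All data below (C, A, b, x*, γ, and the projected point) are illustrative choices, not taken from the text.

```python
import numpy as np

# Numerical check of the KKT system (3.12)-(3.14) on illustrative data:
# f(x) = Cx on X = {x | Ax <= b, x >= 0}, with x* projected onto
# M(gamma) = {x in X | Cx >= gamma}.
C = np.array([[1.0, 0.0],
              [0.0, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_star = np.array([2.0, 2.0])
gamma = np.array([0.5, 0.5])

# The projection of x* onto M(gamma) is xk = (1, 1): the nearest point of the
# line x1 + x2 = 2, which already satisfies Cx >= gamma and x >= 0.
xk = np.array([1.0, 1.0])
I2 = np.eye(2)
D = np.hstack([-C.T, A.T, -I2])            # the matrix in (3.12)

# Multiplier u = (u_C, u_A, u_x): only the constraint Ax <= b is active,
# so u = (0, 0, 1, 0, 0) with u_A = 1 satisfies stationarity.
u = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

slacks = np.concatenate([C @ xk - gamma, b - A @ xk, xk])  # the vector in (3.13)
assert np.allclose(xk - x_star + D @ u, 0)   # (3.12): stationarity
assert np.isclose(u @ slacks, 0)             # (3.13): complementary slackness
assert np.all(u >= 0)                        # (3.14): dual feasibility
print("KKT conditions (3.12)-(3.14) verified")
```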

Remark 3.2.1. It is well known that a linear real-valued function need not be

strongly quasiconcave and a strongly quasiconcave real-valued function need not be linear.

Therefore, the converses of Theorem 3.2.4 (b) and Theorem 3.2.4 (c) are not true.

Let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on the compact, convex set X. Then, from the above results, we have the relationships shown in Figure 3.1.


Figure 3.1









3.3. Parametric Representations of the Efficient Set


In this section, we will present some new results that characterize the efficient solutions of a p-dimensional MOMP problem. These characterizations are in terms of the optimal solutions of appropriate parametric scalar optimization problems or in terms of efficient solutions of appropriate (p-1)-dimensional parametric MOMP problems.

First, we present a new result that characterizes the efficient solutions of a MOMP problem in terms of the optimal solutions of appropriate parametric scalar optimization problems. As we mentioned in Section 2.1, one of the commonly used scalar optimization problems is the "ith-objective, ε-constraint problem," which can be defined as

    Pi(ε): max fi(x)
           s.t. x ∈ X(i)(ε),

where i ∈ {1, 2, ..., p} and ε ∈ R^{p-1}. Let us denote the set of all optimal solutions of problem Pi(ε) by X̂(i)(ε).

From Theorem 4.1 of Chankong and Haimes (1983), we know that x* ∈ E(f, X) if and only if x* is an optimal solution of Pi(ε*) for every i = 1, ..., p, where ε* = f(i)(x*). The following theorem extends this result.

Theorem 3.3.1. Let X ⊆ R^n be nonempty, and let f = (f1, ..., fp)^T be a p-dimensional vector-valued function defined on X. Then, x* ∈ E(f, X) if and only if for every i = 1, ..., p, there is ε^i ∈ f(i)(X) such that x* is an optimal solution of Pi(ε^i), i.e.,

    E(f, X) = ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)}.


Proof. By Theorem 4.1 of Chankong and Haimes (1983),

    E(f, X) ⊆ ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)}.

Now, we prove the reverse inclusion. Let x* ∈ ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)}. Suppose to the contrary that x* ∉ E(f, X). Then, there is x ∈ X such that

    f(x) ≥ f(x*) and f(x) ≠ f(x*).    (3.15)

Without loss of generality, suppose that f1(x) > f1(x*). Now, we claim that

    x* ∉ ∪{X̂(1)(ε) | ε ∈ f(1)(X)}.

Otherwise, suppose x* ∈ X̂(1)(ε*) for some ε* ∈ f(1)(X). Since x* ∈ X̂(1)(ε*), we have

    f(1)(x*) ≥ ε*.    (3.16)

By (3.15) and (3.16), f(1)(x) ≥ f(1)(x*) ≥ ε*. This together with x ∈ X yields x ∈ X(1)(ε*). By our assumption that f1(x) > f1(x*), we have x* ∉ X̂(1)(ε*). This contradicts our assumption that x* ∈ X̂(1)(ε*). Thus, x* ∉ ∪{X̂(1)(ε) | ε ∈ f(1)(X)}. This contradicts

    x* ∈ ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)}.

Therefore, ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)} ⊆ E(f, X). □
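For a finite feasible set, the characterization of Theorem 3.3.1 can be confirmed by brute force. The sketch below uses an illustrative bicriteria instance on an integer grid (with p = 2, f(1) = f2 and f(2) = f1) and compares the directly computed efficient set with the intersection of the unions of ε-constraint optimal solution sets.

```python
# Brute-force check of Theorem 3.3.1 on an illustrative finite instance.
X = [(i, j) for i in range(4) for j in range(4)]

def f(x):
    return (x[0] - 0.25 * x[1], x[1] - 0.25 * x[0])   # two objectives

def dominates(a, b):
    return all(u >= v for u, v in zip(a, b)) and a != b

# E(f, X) computed directly from the definition of efficiency.
E_direct = {x for x in X if not any(dominates(f(y), f(x)) for y in X)}

def opt_union(i):
    """Union over eps in f(i)(X) of the optimal solution sets of Pi(eps)."""
    other = 1 - i                                     # the constrained objective
    out = set()
    for eps in {f(x)[other] for x in X}:
        feasible = [x for x in X if f(x)[other] >= eps]
        best = max(f(x)[i] for x in feasible)
        out |= {x for x in feasible if f(x)[i] == best}
    return out

E_param = opt_union(0) & opt_union(1)                 # intersection in Theorem 3.3.1
print(E_direct == E_param)  # → True
```

Since the theorem only requires X to be nonempty, the two sets agree exactly on any finite instance.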

Next, we present a new result that characterizes the efficient solutions of an MOMP problem with p objective functions in terms of efficient solutions of some particular MOMP problems with p-1 objective functions.

Sun (1996) introduced the following (p-1)-dimensional parametric MOMP problem

    P(i)(t): v-max f(i)(x)
             s.t. x ∈ Xi(t),

where i ∈ {1, 2, ..., p} and t ∈ R^1. The efficient decision set and the efficient outcome set of problem P(i)(t) are denoted by E(f(i), Xi(t)) and E(f(i)(Xi(t)), R^{p-1}), respectively. When p = 2, problem P(i)(t) is a scalar optimization problem. For convenience, we still use E(f(i), Xi(t)) and E(f(i)(Xi(t)), R^{p-1}) to denote the optimal solution set and the optimal value set, respectively, when p = 2. Sun (1996) proved the following result.

Theorem 3.3.2. Let X ⊆ R^n be a nonempty compact convex set, and let f = (f1, ..., fp)^T be a p-dimensional continuous and quasiconcave vector-valued function defined on X. If there is j ∈ {1, ..., p} such that fj is strongly quasiconcave on X, then x* ∈ E(f, X) if and only if, for any given i ∈ {1, ..., p}\{j}, there is t ∈ [fi^min, fi^max] such that x* is an efficient solution of P(i)(t), i.e.,

    E(f, X) = ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}

for any given i ∈ {1, ..., p}\{j}.

In the following, we present a new result that characterizes an efficient solution for a general MOMP problem.

Theorem 3.3.3. Let X ⊆ R^n be a nonempty compact set, and let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on X. Then, x* ∈ E(f, X) if and only if for every i = 1, ..., p there is ti ∈ [fi^min, fi^max] such that x* is an efficient solution of P(i)(ti), i.e.,

    E(f, X) = ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}.


Proof. We first prove that

    E(f, X) ⊆ ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}.    (3.17)

Let x* ∈ E(f, X). Suppose to the contrary that

    x* ∉ ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}.

Then, there is i ∈ {1, ..., p} such that x* ∉ ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}. That is, for any t ∈ [fi^min, fi^max], x* ∉ E(f(i), Xi(t)). Take t* = fi(x*). Since x* ∉ E(f(i), Xi(t*)), there is x ∈ Xi(t*) such that f(i)(x) ≥ f(i)(x*), f(i)(x) ≠ f(i)(x*), and fi(x) ≥ t* = fi(x*). Thus, we have found an x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*). This contradicts x* ∈ E(f, X). Thus, (3.17) holds.

Now, we prove that

    ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]} ⊆ E(f, X).    (3.18)

Let x* ∈ ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}. Suppose to the contrary that x* ∉ E(f, X). Then there exists a vector x ∈ X such that

    f(x) ≥ f(x*) and f(x) ≠ f(x*).    (3.19)

This implies that there exists an i ∈ {1, 2, ..., p} such that f(i)(x) ≥ f(i)(x*) and f(i)(x) ≠ f(i)(x*). Without loss of generality, suppose that f(1)(x) ≥ f(1)(x*) and f(1)(x) ≠ f(1)(x*).

We claim that

    x* ∉ ∪{E(f(1), X1(t)) | t ∈ [f1^min, f1^max]}.    (3.20)

To verify (3.20), suppose that x* ∈ E(f(1), X1(t*)) for some t* ∈ [f1^min, f1^max]. Since x* ∈ X1(t*),

    f1(x*) ≥ t*.    (3.21)

By (3.19) and (3.21), f1(x) ≥ f1(x*) ≥ t*. This together with x ∈ X yields x ∈ X1(t*). Since f(1)(x) ≥ f(1)(x*) and f(1)(x) ≠ f(1)(x*), this implies that x* ∉ E(f(1), X1(t*)). This contradicts our assumption that x* ∈ E(f(1), X1(t*)). Thus (3.20) must hold.

The fact that (3.20) is true contradicts the assumption that x* ∈ ∩_{i=1}^p ∪{E(f(i), Xi(t)) | t ∈ [fi^min, fi^max]}. Therefore, (3.18) is true, and the proof is complete. □

Next, we present a new result that characterizes the efficient solutions of a p-dimensional MOMP problem in terms of the optimal solutions of problem Pi(ε), where ε is an efficient outcome of the (p-1)-dimensional MOMP problem P(i)(t) for some t ∈ [fi^min, fi^max].

Theorem 3.3.4. Let X ⊆ R^n be a nonempty compact set, and let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on X. Assume that i ∈ {1, ..., p}. Then, x* ∈ E(f, X) if and only if there is an efficient outcome ε of problem P(i)(t) for some t ∈ [fi^min, fi^max] such that x* is an optimal solution of problem Pi(ε), i.e.,

    E(f, X) = ∪{X̂(i)(ε) | ε ∈ F},    (3.22)

where F = ∪{E(f(i)(Xi(t)), R^{p-1}) | t ∈ [fi^min, fi^max]}.

Proof. We first prove that

    E(f, X) ⊆ ∪{X̂(i)(ε) | ε ∈ F}.    (3.23)

Suppose that x* ∈ E(f, X). Set t = fi(x*) and ε = f(i)(x*). Then x* ∈ X(i)(ε) and ε ∈ f(i)(Xi(t)).

Suppose that ε ∉ E(f(i)(Xi(t)), R^{p-1}). Then there exists x ∈ Xi(t) such that

    f(i)(x) ≥ ε = f(i)(x*) and f(i)(x) ≠ f(i)(x*).    (3.24)

Since x ∈ Xi(t), it follows that

    fi(x) ≥ t = fi(x*).    (3.25)

From (3.24) and (3.25), we obtain that

    f(x) ≥ f(x*) and f(x) ≠ f(x*).

This contradicts x* ∈ E(f, X). Therefore ε ∈ E(f(i)(Xi(t)), R^{p-1}) ⊆ F.

Now suppose that x* ∉ X̂(i)(ε). Then there exists a vector x' ∈ X(i)(ε) such that

    fi(x') > fi(x*).    (3.26)

Since x' ∈ X(i)(ε), it follows that

    f(i)(x') ≥ ε = f(i)(x*).    (3.27)

By (3.26) and (3.27), we have that f(x') ≥ f(x*) and f(x') ≠ f(x*). This contradicts x* ∈ E(f, X). Hence, x* ∈ X̂(i)(ε).

Since ε ∈ F and x* ∈ X̂(i)(ε), (3.23) holds.

We now demonstrate the inclusion opposite to (3.23). Let t ∈ [fi^min, fi^max], ε ∈ E(f(i)(Xi(t)), R^{p-1}), and x* ∈ X̂(i)(ε). Since ε ∈ E(f(i)(Xi(t)), R^{p-1}), there exists x ∈ Xi(t) such that f(i)(x) = ε. Thus, x ∈ X(i)(ε). This together with x* ∈ X̂(i)(ε) and x ∈ Xi(t) yields that

    fi(x*) ≥ fi(x) ≥ t.    (3.28)

Suppose that x* ∉ E(f, X). Then there exists a vector x' ∈ X such that

    f(x') ≥ f(x*) and f(x') ≠ f(x*).    (3.29)

By (3.28) and (3.29), since x* ∈ X(i)(ε), we have that

    fi(x') ≥ fi(x*) ≥ t and f(i)(x') ≥ f(i)(x*) ≥ ε.    (3.30)

Since ε ∈ E(f(i)(Xi(t)), R^{p-1}), it follows from (3.30) that

    f(i)(x') = f(i)(x*) = ε.    (3.31)

Thus, by (3.29), fi(x') > fi(x*). From (3.31), since x' ∈ X(i)(ε), this contradicts x* ∈ X̂(i)(ε). Therefore, x* ∈ E(f, X), and we have proved the opposite inclusion to (3.23). □

Theorem 3.3.5. Let X ⊆ R^n be a nonempty compact set, and let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on X. Assume that i ∈ {1, ..., p}. If there exist a t* ∈ [fi^min, fi^max] and an ε* ∈ f(i)(X) such that x* ∈ X̂(i)(ε*) and ε* ∈ E(f(i)(Xi(t*)), R^{p-1}), then f(i)(x*) = ε*.

Proof. Since ε* ∈ E(f(i)(Xi(t*)), R^{p-1}), there exists an x ∈ Xi(t*) such that f(i)(x) = ε*. Therefore, x ∈ X(i)(ε*). Since x* ∈ X̂(i)(ε*) and x ∈ Xi(t*), this implies that

    fi(x*) ≥ fi(x) ≥ t*.

Therefore, x* ∈ Xi(t*). Since x* ∈ X, this implies that f(i)(x*) ∈ f(i)(Xi(t*)).

Suppose that f(i)(x*) ≠ ε*. It follows from x* ∈ X̂(i)(ε*) that

    f(i)(x*) ≥ ε*.

Thus, we have found a vector f(i)(x*) ∈ f(i)(Xi(t*)) such that f(i)(x*) ≠ ε* and f(i)(x*) ≥ ε*. This contradicts ε* ∈ E(f(i)(Xi(t*)), R^{p-1}). Consequently, f(i)(x*) = ε*. □








3.4. The Closedness of E(f, X) for General MOMP Problems


We are now in a position to focus on the main topic of this chapter: conditions for the closedness of the efficient solution set E(f, X) of problem MOMP. We first present a necessary condition for E(f, X) to be closed.

Theorem 3.4.1. Let X ⊆ R^n be a nonempty compact set, and let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on X. If E(f, X) is closed, then for any i ∈ {1, ..., p}, the point-to-set map ε → X̂(i)(ε) is upper semicontinuous on F = ∪{E(f(i)(Xi(t)), R^{p-1}) | t ∈ [fi^min, fi^max]}.

Proof. Let {ε^k} ⊆ F, ε ∈ F, ε^k → ε, x^k ∈ X̂(i)(ε^k), and x^k → x. To complete the proof, we need to show that x ∈ X̂(i)(ε). From Theorem 3.3.4, for each k, we have that x^k ∈ E(f, X). Since E(f, X) is closed and x^k → x, this implies that x ∈ E(f, X). Furthermore, from Theorem 3.3.5, for each k we have that f(i)(x^k) = ε^k. Since f is continuous, this implies that f(i)(x) = ε. Therefore, x ∈ X(i)(ε).

Suppose, to the contrary, that x ∉ X̂(i)(ε). Then, there exists x' ∈ X(i)(ε) such that fi(x') > fi(x). Since x' ∈ X(i)(ε) and f(i)(x) = ε, f(i)(x') ≥ f(i)(x). Because fi(x') > fi(x), this implies that f(x') ≥ f(x) and fi(x') > fi(x). This contradicts x ∈ E(f, X), so that the proof is complete. □

By adding an additional condition to the necessary condition in Theorem 3.4.1, we

obtain the sufficient conditions given in the next result for E(f, X) to be closed.

Theorem 3.4.2. Let X ⊆ R^n be a nonempty compact set, and let f = (f1, ..., fp)^T be a p-dimensional continuous vector-valued function defined on X. If there is an i ∈ {1, ..., p} such that the point-to-set map ε → X̂(i)(ε) is upper semicontinuous on F = ∪{E(f(i)(Xi(t)), R^{p-1}) | t ∈ [fi^min, fi^max]}, and such that the point-to-set map t → E(f(i)(Xi(t)), R^{p-1}) is upper semicontinuous on [fi^min, fi^max], then E(f, X) is closed.

Proof. Let {x^n} be a sequence of vectors in E(f, X) such that x^n → x* as n → ∞. We need to show that x* ∈ E(f, X). For each n, since x^n ∈ E(f, X), it follows from Theorem 3.3.4 that if we choose i as stated in the theorem, then there exist t^n ∈ [fi^min, fi^max] and ε^n ∈ E(f(i)(Xi(t^n)), R^{p-1}) such that x^n ∈ X̂(i)(ε^n). Since {t^n} and {ε^n} are bounded, they have convergent subsequences. Without loss of generality, assume that {t^n} and {ε^n} are two convergent sequences with t^n → t* and ε^n → ε*. Since the point-to-set map t → E(f(i)(Xi(t)), R^{p-1}) is upper semicontinuous on [fi^min, fi^max], it follows that

    ε* ∈ E(f(i)(Xi(t*)), R^{p-1}) ⊆ F.    (3.32)

Since the point-to-set map ε → X̂(i)(ε) is upper semicontinuous on F, it also follows that

    x* ∈ X̂(i)(ε*).    (3.33)

By (3.32), (3.33) and Theorem 3.3.4, x* ∈ E(f, X). □

One may notice that, in practice, it is generally quite difficult to verify the conditions in Theorem 3.4.2 for a given MOMP problem. This may suggest that Theorem 3.4.2 is of little practical use. However, as we will see in the next section, Theorem 3.4.2 can be used to develop some other, more practical sufficient conditions.

In the following, we will present some sufficient conditions for E(f, X) to be closed under some generalized convexity assumptions on f.








Theorem 3.4.3. Let X ⊆ R^n be a nonempty, compact convex set, and let f = (f1, ..., fp)^T be a p-dimensional, continuous, vector-valued function defined on X. If the vector-valued function f is strictly quasiconcave on X, then E(f, X) is closed.

Proof. By Theorem 3.3.1, we have that

    E(f, X) = ∩_{i=1}^p ∪{X̂(i)(ε) | ε ∈ f(i)(X)}.

Since the intersection of closed sets is closed, to prove that E(f, X) is closed, we need only to prove that ∪{X̂(i)(ε) | ε ∈ f(i)(X)} is closed for every i = 1, ..., p.

Let i ∈ {1, 2, ..., p}, {x^k} ⊆ ∪{X̂(i)(ε) | ε ∈ f(i)(X)}, and x^k → x* as k → ∞. Then, there exist ε^k ∈ f(i)(X) such that x^k ∈ X̂(i)(ε^k) for k = 1, 2, .... Since {ε^k} ⊆ f(i)(X), and f(i)(X) is bounded, {ε^k} has at least one convergent subsequence. Without loss of generality, assume that {ε^k} converges, with ε^k → ε*. Since f(i)(X) is closed and {ε^k} ⊆ f(i)(X), ε* ∈ f(i)(X). For each k, since x^k ∈ X̂(i)(ε^k), we know that x^k ∈ X(i)(ε^k), i.e., f(i)(x^k) ≥ ε^k. By continuity of f(i), this implies that f(i)(x*) ≥ ε*, i.e., x* ∈ X(i)(ε*).

Suppose that x* ∉ X̂(i)(ε*). Then there exists y* ∈ X(i)(ε*) such that

    fi(y*) > fi(x*).    (3.34)

Since f is strictly quasiconcave on X, it follows from Theorem 3.2.3 that f(i) is also strictly quasiconcave on X. By Definition 3.2.3, this implies that X(i)(ε) is a lower semicontinuous map on {ε | X(i)(ε) ≠ ∅}. Therefore, since y* ∈ X(i)(ε*), X(i)(ε) is lower semicontinuous at ε*. Hence, there exists {y^k} such that y^k ∈ X(i)(ε^k) for k large enough, and such that

    y^k → y* as k → ∞.    (3.35)

It follows from (3.34), (3.35) and x^k → x* that

    fi(y^k) > fi(x^k)

for k large enough. This contradicts x^k ∈ X̂(i)(ε^k) for all k = 1, 2, .... Therefore, x* ∈ X̂(i)(ε*). Consequently, E(f, X) is closed. □

Theorem 3.4.4. Let X ⊆ R^n be a nonempty compact convex set, and let f = (f1, ..., fp)^T be a p-dimensional, continuous, vector-valued function defined on X. If each function fi, i = 1, ..., p, is strongly quasiconcave on X, then E(f, X) is closed.

Proof. This result follows immediately from Theorem 3.2.4 (b) and Theorem 3.4.3. □

A different result related to Theorem 3.4.4 is given in Theorem 4.1.11(i) of Luc (1989).

Theorem 3.4.5. Let X ⊆ R^n be a nonempty compact convex set, and let f = (f1, ..., fp)^T be a p-dimensional continuous quasiconcave vector-valued function defined on X. If there exists at least one i ∈ {1, ..., p} such that fi is strongly quasiconcave on X and the optimal solution set map X̂(i)(ε) of problem Pi(ε) is upper semicontinuous on f(i)(X), then E(f, X) is closed.

Proof. Let {x^k} ⊆ E(f, X) and x^k → x*. Choose i as in the statement of the theorem. Since f is continuous and quasiconcave on X, and fi is strongly quasiconcave on X, Theorem 3.1 of Sun (1996) implies that for each k there exists a vector ε^k ∈ f(i)(X) such that x^k ∈ X̂(i)(ε^k). Since {ε^k} ⊆ f(i)(X), and f(i)(X) is bounded, {ε^k} has at least one convergent subsequence. Without loss of generality, assume that {ε^k} converges, with ε^k → ε* as k → ∞. Then ε* ∈ f(i)(X), since f(i)(X) is closed. Since the point-to-set map ε → X̂(i)(ε) is upper semicontinuous on f(i)(X), we have that x* ∈ X̂(i)(ε*). Since f is continuous and quasiconcave on X, and fi is strongly quasiconcave on X, from Theorem 3.1 of Sun (1996), x* ∈ E(f, X). Consequently, E(f, X) is closed. □

Theorem 3.4.6. Let X ⊂ R^n be a nonempty compact convex set, and let f = (f_1,...,f_p)^T be a p-dimensional continuous quasiconcave vector-valued function defined on X. If there exists at least one i ∈ {1,...,p} such that f_i is strongly quasiconcave on X and the point-to-set map ε → X_(i)(ε) is lower semicontinuous on f_(i)(X), then E(f, X) is closed.

Proof. By Theorem 3.4.5, we only need to prove that the point-to-set map from ε to the optimal solution set X_(i)(ε) of problem P_i(ε) is upper semicontinuous on f_(i)(X).

Let {ε^k} ⊂ f_(i)(X), ε* ∈ f_(i)(X), ε^k → ε*, x^k ∈ X_(i)(ε^k), x^k → x*. Since, for each k, x^k ∈ X_(i)(ε^k), x^k is feasible for problem P_i(ε^k), i.e., f_(i)(x^k) ≥ ε^k. By continuity of f on X, this implies that f_(i)(x*) ≥ ε*, i.e., x* is feasible for problem P_i(ε*).

Suppose that x* ∉ X_(i)(ε*). Then there exists a point y* ∈ X_(i)(ε*) such that

f_i(y*) > f_i(x*). (3.36)

Since X_(i)(ε) is lower semicontinuous at ε*, there exists a sequence {y^k} such that for k large enough, y^k ∈ X_(i)(ε^k), and such that

y^k → y* as k → ∞. (3.37)

It follows from (3.36), (3.37) and x^k → x* that there exists an integer k of sufficient magnitude such that

f_i(y^k) > f_i(x^k).

Since y^k ∈ X_(i)(ε^k) for k sufficiently large, this contradicts that x^k ∈ X_(i)(ε^k) for all k = 1, 2,.... Therefore, x* ∈ X_(i)(ε*). Hence, the point-to-set map ε → X_(i)(ε) is upper semicontinuous on f_(i)(X). □



3.5. The Closedness of E(f, X) for Bicriteria Programming Problems


In this section we consider the closedness of E(f, X) for the special case when p = 2.

Theorem 3.5.1. Let X ⊂ R^n be a nonempty compact set, and let f = (f_1, f_2)^T be a two-dimensional continuous vector-valued function defined on X. If every local maximum solution of f_i in X is also a global maximum solution of f_i on X for i = 1, 2, then E(f, X) is closed.

Proof. Since every local maximum solution of f_i (i = 1, 2) in X is also a global maximum solution of f_i on X, it follows from Theorem 3.3 of Zang and Avriel (1975) that the point-to-set map t → X_1(t) is lower semicontinuous on [f̲_1, f̄_1], and the point-to-set map ε → X_2(ε) is lower semicontinuous on [f̲_2, f̄_2], where, for i = 1, 2, f̲_i and f̄_i denote the minimum and maximum values of f_i on X. To establish the theorem, we will use these two results and Theorem 3.4.2. Thus, it is sufficient to show that the point-to-set map ε → E(f_1(X_2(ε)), R_+) is upper semicontinuous on [f̲_2, f̄_2], and that the point-to-set map t → E(f_2(X_1(t)), R_+) is upper semicontinuous on [f̲_1, f̄_1].

We first prove that the point-to-set map t → E(f_2(X_1(t)), R_+) is upper semicontinuous on [f̲_1, f̄_1]. Suppose that t* ∈ [f̲_1, f̄_1], that {t^k} ⊂ [f̲_1, f̄_1] satisfies t^k → t* as k → ∞, that y^k ∈ E(f_2(X_1(t^k)), R_+) for each k, and that y^k → y* as k → ∞. Since y^k ∈ f_2(X_1(t^k)), it follows that there exists x^k ∈ X_1(t^k) such that

y^k = f_2(x^k) and f_1(x^k) ≥ t^k. (3.38)

Since {x^k} is bounded, it follows that it has at least one convergent subsequence. Without loss of generality, assume that {x^k} is a convergent subsequence and that x^k → x* ∈ X for some x*. Then, by continuity of f,

y* = f_2(x*) and f_1(x*) ≥ t*. (3.39)

Hence, y* ∈ f_2(X_1(t*)).

Suppose that y* ∉ E(f_2(X_1(t*)), R_+). Then there exists a vector x' ∈ X_1(t*) such that

f_2(x') > y* = f_2(x*). (3.40)

It follows from the lower semicontinuity of the point-to-set map t → X_1(t) at t* ∈ [f̲_1, f̄_1] that there exist {z^k} ⊂ X such that z^k ∈ X_1(t^k) for k large enough, and

z^k → x' as k → ∞. (3.41)

By (3.38), (3.40), (3.41) and x^k → x*, we have that

f_2(z^k) > f_2(x^k) = y^k (3.42)

when k is large enough. Notice further that z^k ∈ X_1(t^k) for each k large enough. This together with (3.42) contradicts that y^k ∈ E(f_2(X_1(t^k)), R_+) for each k. Therefore, y* ∈ E(f_2(X_1(t*)), R_+). Hence, the point-to-set map t → E(f_2(X_1(t)), R_+) is upper semicontinuous on [f̲_1, f̄_1].

Similarly, we can also prove that the point-to-set map ε → E(f_1(X_2(ε)), R_+) is upper semicontinuous on [f̲_2, f̄_2].

Consequently, we have that E(f, X) is closed. □

Remark 3.5.1. In Schaible (1983), Schaible proved that the efficient solution set E(f, X) is closed if X is compact and convex and f_i is continuous and strictly quasiconcave on X for each i = 1, 2. However, we do not require any convexity assumptions on X in Theorem 3.5.1. Therefore, Theorem 3.5.1 extends the result of Schaible (1983) to cases where f need not possess any type of generalized concavity or convexity properties and X need not be convex.

Remark 3.5.2. It is well known that every local maximum solution of a continuous and strictly quasiconcave function on a convex compact set is a global maximum solution. However, we can easily find examples to show that a continuous function may not be strictly quasiconcave even though each of its local maximum solutions is also a global maximum solution. Therefore, Theorem 3.5.1 extends the result of Schaible (1983) to cases where f_i may not be strictly quasiconcave for any i = 1, 2.









3.6. Concluding Remarks


From the literature, it is known that the efficient decision set E(f, X) of problem MOMP is generally not closed when p ≥ 3, even if every component of f is strictly quasiconcave on X and X is a compact, convex set (see, e.g., Choo and Atkins (1983) and Steuer (1986)). It is also well known that the efficient decision set of a multiple objective linear programming problem is closed. This leaves open the question of whether or not, when X is nonempty, compact and convex, there exist classes of vector-valued, nonlinear functions f whose components are not all strictly quasiconcave on X for which it is guaranteed that E(f, X) is closed.

In this chapter, we have answered the question posed above, and several other

questions concerning the closedness of E(f, X). One of the main tools that we used to

accomplish this was to introduce the new notions of quasiconcavity and strict

quasiconcavity for vector-valued functions. These notions are direct extensions of the

definitions of quasiconcavity and strict quasiconcavity for real-valued functions.

Using these two new definitions and some other results, we showed that E(f, X) is

closed when f is a continuous, strictly quasiconcave vector-valued function over the

nonempty, compact convex set X. We also showed that if each component of f is

continuous and strongly quasiconcave on the nonempty, compact convex set X, then E(f,

X) is closed. We went on to show several other necessary, sufficient, and necessary and

sufficient conditions for E(f, X) to be closed. For instance, in the bicriteria case, we

extended a result of Schaible (1983). Our new result gives a sufficient condition for E(f,

X) to be closed in cases where f is continuous and X is nonempty and compact.














CHAPTER 4
FINDING THE SET OF ALL EFFICIENT EXTREME POINTS
FOR PROBLEM MOLP IN THE OUTCOME SPACE


A multiple objective linear programming problem (MOLP) can be written as follows:

MOLP: v-max Cx
      s.t.  Ax ≤ b
            x ≥ 0,

where C ∈ R^{p×n}, A ∈ R^{m×n}, b ∈ R^m. Then, the decision set X for problem MOLP is

X = {x ∈ R^n | Ax ≤ b, x ≥ 0},

and the outcome set Y for problem MOLP is

Y = {Cx | x ∈ X}.

Throughout this chapter we will assume that X is a compact set. It follows that Y is also a compact set. In order to make the chapter more self-contained, we will restate some concepts for problem MOLP.

A point x^0 ∈ R^n is called an efficient solution for problem MOLP when x^0 ∈ X and there exists no point x ∈ X such that Cx ≥ Cx^0 and Cx ≠ Cx^0. Similarly, a point y^0 ∈ R^p is called an efficient outcome for problem MOLP when y^0 ∈ Y and there exists no point y ∈ Y such that y ≥ y^0 and y ≠ y^0. The set of all efficient solutions and the set of all efficient outcomes for MOLP are called the efficient decision set and the efficient outcome set, respectively, for problem MOLP, and are denoted by XE and YE, respectively. The set of all extreme points of X and the set of all extreme points of Y are denoted by Xex and Yex, respectively. We will also call a point in XE ∩ Xex an efficient extreme point in decision space, and a point in YE ∩ Yex an efficient extreme point in outcome space.

During approximately the past thirty years, various algorithms have been developed for finding the set XE ∩ Xex, or the entire set XE. Although these algorithms are quite different in certain ways, they all utilize some modified version of the simplex method. One general difficulty with these methods is that, because of their complexity and the complexity of XE and XE ∩ Xex, they often encounter CPU time and computer storage limitations. Most of the computer time is consumed by efficiency tests that evaluate the nonbasic variables at each basis (see p. 245 of Steuer (1986)). The computer storage limitations arise due to the sheer size of XE and XE ∩ Xex (Benson, 1998a). Another general difficulty concerns the handling of degeneracy (see Section 2.3). Apart from these difficulties, there are some other problems related to these methods. For example, these methods frequently generate a set so large that it overwhelms the DM (see, e.g., Benson, 1998a). Because of these difficulties, these methods have achieved only limited success in practice.

Recently, some researchers have begun to turn their attention to investigating tools and methods for generating all or part of the efficient outcome set YE (see, e.g., Benson, 1995a, 1997, 1998a, 1998b, and 1998c; Benson and Sayin, 1997; Dauer, 1987, 1993; Dauer and Liu, 1990; Dauer and Saleh, 1990; Dauer and Gallagher, 1996). This is in part because the dimension of the outcome space is usually much smaller than the dimension of the decision space. Furthermore, the efficient outcome set generally has a much simpler structure than the efficient decision set. Thus, generating all or part of the efficient set in the outcome space is expected to be more practical than doing so in the decision space.

In this chapter, we will present a weight set decomposition algorithm for generating the set YE ∩ Yex. The approach of decomposing the weight set was originally developed by Gal and Nedoma to deal with multiparametric linear programming problems (see Gal and Nedoma, 1972). Later, this approach was adapted by Zeleny for use in attempting to generate the set XE ∩ Xex (see Zeleny, 1974).

The weight set decomposition approach involves "decomposing" the weight set W^0 = {w = (w_1, ..., w_p)^T | w_j > 0 for all j = 1,...,p} into a finite number of subsets. In Zeleny (1974), the weight set W^0 is decomposed into a finite number of subsets associated with the different efficient bases in the decision space of problem MOLP (the definition of an efficient basis will be given later in Section 4.1). We call this decomposition the decision set-based decomposition of the weight set W^0.

In this chapter, we will decompose the weight set W^0 into a finite number of subsets associated with the different efficient extreme points in the outcome space of problem MOLP, rather than the different efficient bases in the decision space of problem MOLP. We call this decomposition the outcome set-based decomposition of the weight set W^0. Unlike the decision set-based decomposition, our outcome set-based decomposition will establish a one-to-one correspondence between the efficient extreme points in the outcome set and subsets of W^0. Based upon this decomposition, we will then present a new algorithm, called the Basic Weight Set Decomposition Algorithm (BWSDA), for generating the set YE ∩ Yex.









The algorithm BWSDA works in the following way. At each iteration, the algorithm will first either find a weight vector which will lead to an unexplored efficient extreme point in the outcome space or conclude that all points in YE ∩ Yex have been generated. If a new weight vector is found at some iteration, the algorithm will in the next iteration call for solving at most (p+1) linear programs in order to find an unexplored efficient extreme point in the outcome space.

Two different kinds of approaches will be developed for finding, if it exists, a new weight vector in each iteration that leads to an unexplored extreme point in the outcome space. One approach uses a tree search method. The other calls for solving a special concave minimization problem over a polyhedron. These two different approaches yield two versions of the Basic Weight Set Decomposition Algorithm, Weight Set Decomposition Algorithm-I (WSDA-I) and Weight Set Decomposition Algorithm-II (WSDA-II).

This chapter is organized in the following way. In Section 4.1, we will review the decision set-based decomposition of W^0 developed by Zeleny (1974). Our outcome set-based decomposition of W^0 will be given in Section 4.2. The new algorithms for generating the set YE ∩ Yex will be given in Section 4.3. In Section 4.4, we will compare the new algorithms with Zeleny's decision set-based algorithm (see Zeleny, 1974) and with Benson's outcome set-based algorithms (see Benson, 1998a, c). Section 4.5 gives some conclusions.









4.1. Decision Set-Based Decomposition of the Weight Set W^0


As stated in Section 2.1, one common strategy for dealing with problem MOMP is to characterize efficient solutions of problem MOMP in terms of optimal solutions of the weighting problem P(w). For problem MOLP, we use LP(w) to denote the corresponding weighting problem P(w), i.e., LP(w) is given by

LP(w): max  w^T Cx
       s.t. Ax ≤ b
            x ≥ 0

in the case of problem MOLP. From Theorem 2.6 in Yu and Zeleny (1975), a vector x^0 ∈ R^n is an efficient solution of problem MOLP if and only if there exists a vector w^0 ∈ W^0 such that x^0 is an optimal solution for problem LP(w^0). We can therefore, in theory, find the set of all efficient solutions for problem MOLP by solving problem LP(w) for a properly chosen set of weights in W^0.
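To make the weighting-problem mechanism concrete, the following sketch (illustrative only, not code from the dissertation) solves LP(w) for the data of Example 4.2.1 in Section 4.2. Instead of the simplex method, it uses brute-force enumeration of the vertices of X = {x ∈ R^3 | Ax ≤ b, x ≥ 0}, which is adequate for this small instance; the function names are ours.

```python
from itertools import combinations

# Data of Example 4.2.1 (Section 4.2): v-max Cx s.t. Ax <= b, x >= 0.
C = [[1.0, 2.0, 0.0],     # y1 = x1 + 2*x2
     [-2.0, 0.0, 0.0]]    # y2 = -2*x1
A = [[2.0, 0.0, 1.0],     # 2*x1 + x3  <= 6
     [-1.0, 3.0, 0.0],    # -x1 + 3*x2 <= 6
     [-1.0, 1.0, 0.0],    # -x1 + x2   <= 2
     [0.0, 0.0, 1.0]]     # x3         <= 2
b = [6.0, 6.0, 2.0, 2.0]

def solve3(M, r):
    """Solve a 3x3 linear system M z = r by Cramer's rule; None if singular."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(M)
    if abs(d) < 1e-9:
        return None
    z = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = r[i]
        z.append(det(Mj) / d)
    return z

def vertices(A, b):
    """All extreme points of {x in R^3 | Ax <= b, x >= 0}: feasible
    intersections of three active constraints, with duplicates removed."""
    rows = [(row, bi) for row, bi in zip(A, b)]
    rows += [([-1.0 if j == i else 0.0 for j in range(3)], 0.0) for i in range(3)]
    found = []
    for trio in combinations(rows, 3):
        x = solve3([g for g, _ in trio], [h for _, h in trio])
        if x is None:
            continue
        feasible = all(sum(g[j]*x[j] for j in range(3)) <= h + 1e-7 for g, h in rows)
        if feasible and not any(max(abs(x[j]-v[j]) for j in range(3)) < 1e-6 for v in found):
            found.append(x)
    return found

def solve_LP_w(w):
    """Maximize w^T C x over X; returns (optimal value, an optimal vertex)."""
    obj = [sum(w[i]*C[i][j] for i in range(2)) for j in range(3)]  # (w^T C)_j
    best = max(vertices(A, b), key=lambda x: sum(obj[j]*x[j] for j in range(3)))
    return sum(obj[j]*best[j] for j in range(3)), best
```

For w = (1, 1), w^T C = (−1, 2, 0) and the optimal value is 4, attained at the degenerate efficient extreme point x^1 = (0, 2, 0) (equivalently x^2); the corresponding outcome is y^1 = (4, 0).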

In order to introduce the concept of an efficient basis for problem MOLP and to show relationships between efficient bases for problem MOLP and subsets of W^0, we first transform LP(w) to its standard form. By adding slack variables, LP(w) is equivalent to the following problem:

LP(w)': max  w^T C̄ x̄
        s.t. Ā x̄ = b,
             x̄ ≥ 0,

where x̄ = (x^T, s^T)^T, s ∈ R^m, C̄ = (C, 0_{p×m}), and Ā = (A, I_{m×m}).

Let B = B_{m×m} be a nonsingular submatrix of Ā, and let B' denote the remaining submatrix. B is called a basis of problem LP(w)'. If the basic solution determined by B is also an optimal solution, then B is called an optimal basis of problem LP(w)'. If there is a w in W^0 such that B is an optimal basis of problem LP(w)', then it is called an efficient basis of MOLP. Let C_B be the submatrix of C̄ related to B, and let C_{B'} be the remaining submatrix. The simplex tableau T of LP(w)' related to the basis B is of the form

T = ( I   B^{-1}B'   B^{-1}b )
    ( 0   w^T Z      w^T z   ),

where Z = C_B B^{-1}B' − C_{B'} and z = C_B B^{-1}b. Let x(B) denote the basic solution determined by B. If B^{-1}b ≥ 0, then x(B) is a basic feasible solution. In this case, the point determined by x(B) is an extreme point of X. If additionally w^T Z ≥ 0, then the basic feasible solution is an optimal solution for problem LP(w)'. If we further have w > 0, then the basic feasible solution is an efficient extreme point of problem MOLP in the decision space and the basis B is an efficient basis.

Let W = {w = (w_1, ..., w_p)^T | w_j ≥ 0 for all j = 1,...,p}. For any given efficient basis B for problem LP(w)', consider the polyhedral cone W(B) defined in Zeleny (1974) by

W(B) = {w | w^T Z ≥ 0}.

From Theorem 3.6 in Yu and Zeleny (1975), there exists a finite number of efficient bases B_1,..., B_q such that

W^0 ⊂ ∪_{i=1}^{q} W(B_i). (4.1)

As a result, W^0 can be decomposed into a finite number of subsets W^0 ∩ W(B_i), i = 1,...,q, such that for each x' ∈ XE ∩ Xex there exists at least one W(B_i), i ∈ {1,...,q}, such that x' = x(B_i) and x' is an optimal solution of problem LP(w)' for any w in W(B_i).

We will call the decomposition of W^0 which is based upon (4.1) the decision set-based decomposition of W^0.

The weight set W^0 can also be decomposed into subsets associated with different efficient extreme points in the decision space. To see this, for a given point x in X, let W(x) be defined by

W(x) = {w ∈ R^p | w^T Cx ≥ w^T Cx' for all x' in X}. (4.2)

It can be shown that

W^0 ⊂ ∪_{x ∈ XE ∩ Xex} W(x). (4.3)

A similar result can be found on p. 157 of Chankong and Haimes (1983). As a result of (4.3), the weight set W^0 can be decomposed into a union of subsets W^0 ∩ W(x), x ∈ XE ∩ Xex.

Zeleny (1974) discussed the algorithmic possibilities and strategies for locating XE ∩ Xex by using decision set-based decompositions of W^0 based upon (4.1) or (4.3). However, as noted by Zeleny (1973, 1974), there are some difficulties in implementing algorithms that are based upon either of these decompositions. Two of the major difficulties are as follows.

The first difficulty pertains to algorithms that might decompose W^0 using (4.1). In this approach, a difficulty arises due to the fact that the one-to-one correspondence between an efficient basis B and the subset W(B) may be destroyed when degeneracy occurs. When x is a nondegenerate efficient extreme point, there is only one efficient basis B associated with x. In this case, it can be shown that W(x) = W(B). When x is a degenerate efficient extreme point, two or more efficient bases can correspond to x. Suppose that B_1,...,B_l are the distinct efficient bases associated with x. Then, it can be shown that

W(x) = ∪_{i=1}^{l} W(B_i).

In this case, there is a possibility that each W(B_i) is a strict subset of W(x), and that W(B_i) ≠ W(B_j) for any given two different bases B_i and B_j associated with x. This implies that in methods that use a decision set-based decomposition of W^0 based upon (4.1), all efficient extreme points may be discovered before W^0 is fully decomposed.

The second difficulty pertains to approaches that might decompose W^0 based upon (4.3). This difficulty is due to the fact that there need not be a one-to-one correspondence between the efficient extreme points x and the sets W(x). Given two different efficient extreme points x^i and x^j, there is a possibility that W(x^i) = W(x^j). This would imply that in some cases, we may fully decompose W^0 via a decision set-based decomposition method based upon (4.3), yet all efficient extreme points may not be found thereby.



4.2. Outcome Set-Based Decomposition of the Weight Set W^0


In this section, we will present an outcome set-based decomposition of W^0. We will show that the outcome set-based decomposition approach overcomes the difficulties of the decision set-based decomposition approaches. Some useful properties of the outcome set-based decomposition are also given.

In the following, we will first introduce some definitions and notation. All definitions from convex analysis are standard and may be found, for instance, in Rockafellar (1970).

Let y denote a point in Y. Define W(y) by

W(y) = {w ∈ R^p | w^T y ≥ w^T y' for all y' in Y}. (4.4)

The set W(y) is called the normal cone to Y at y (see Rockafellar (1970), for example). It is easy to show that x is an optimal solution to problem LP(w) if and only if w ∈ W(y), where y = Cx.

For a convex set S in R^p, the intersection of all convex cones containing S is also a cone and is called the convex cone generated by S. We denote the convex cone generated by a convex set S by cone (S).

For any convex set U in R^k, the interior and the relative interior of U will be denoted int U and ri U, respectively.

Proposition 4.2.1. For a point y in Y, let Y − {y} = {y' − y | y' ∈ Y}. Then, cone (Y − {y}) is a polyhedral cone containing the origin and

cone (Y − {y}) = {k(y' − y) | y' ∈ Y, k ≥ 0}.

Proof. Since Y is a polyhedron and y ∈ Y, Y − {y} is a polyhedron containing the origin. It follows from Corollary 19.7.1 of Rockafellar (1970) that cone (Y − {y}) is a polyhedral cone containing the origin. Furthermore, by Corollary 2.6.3 of Rockafellar (1970) we have cone (Y − {y}) = {k(y' − y) | y' ∈ Y, k ≥ 0}. □

For a cone K in R^p, the cone {z' ∈ R^p | ⟨z', z⟩ ≤ 0 for any z in K} is called the polar cone of K and is denoted by K°.








Proposition 4.2.2. For each y ∈ Y, the cones cone (Y − {y}) and W(y) are polyhedral sets and are polar to one another.

Proof. The results are immediate from Theorem 2.1 of Varaiya (1967), Theorem 14.1 of Rockafellar (1970), and Corollary 19.2.2 of Rockafellar (1970). □

Given a nonempty convex set K in R^p, let 0⁺K be defined by

0⁺K = {y ∈ R^p | y + K ⊂ K}.

The set 0⁺K is called the recession cone of K. From Theorem 8.1 of Rockafellar (1970), the recession cone 0⁺K is a closed cone containing the origin. If K is a closed convex cone, it is easy to verify that 0⁺K = K. Recall that the lineality of a nonempty convex set K is the dimension of the set (−0⁺K) ∩ 0⁺K.

Proposition 4.2.3. Let y ∈ Yex. Then, the lineality of cone (Y − {y}) equals zero.

Proof. Since y ∈ Yex, it is easy to show that cone (Y − {y}) is a pointed cone. That is,

(−cone (Y − {y})) ∩ cone (Y − {y}) = {0}.

Therefore, the lineality of cone (Y − {y}) equals zero. □

Since Y is a polyhedron, it has a finite number of extreme points. Therefore, Y has at most a finite number of efficient extreme points. Since Y is compact, the problem of maximizing w^T y over Y has at least one extreme point optimal solution for any w > 0. Therefore, Y has at least one efficient extreme point. Without loss of generality, assume that YE ∩ Yex = {y^1,...,y^q}, where q ≥ 1 is an integer.

Notice that the efficient outcome set of problem MOLP is identical to the efficient decision set of the problem

v-max y, s.t. y ∈ Y,

where Y is the outcome set of problem MOLP. By applying (4.3) to the above problem, we obtain the following result.

Theorem 4.2.1. W^0 ⊂ ∪_{i=1}^{q} W(y^i).


Theorem 4.2.1 implies that W^0 can be decomposed into the union of its subsets W^0 ∩ W(y^i), i = 1,...,q. We will call this decomposition of W^0 the outcome set-based decomposition. In the following, we will present some properties of W(y^i). Then we will show that the decomposition of W^0 given above establishes a one-to-one correspondence between the efficient extreme points y^i, i = 1, 2,..., q, of Y and the subsets W^0 ∩ W(y^i), i = 1,...,q, of W^0.

Theorem 4.2.2. For each i = 1, 2,...,q, W^0 ∩ int W(y^i) ≠ ∅.

Proof. Choose i ∈ {1, 2, ..., q}. By Propositions 4.2.1 and 4.2.2, cone (Y − {y^i}) is a polyhedron in R^p containing the origin, and cone (Y − {y^i}) and W(y^i) are polar to each other. Thus, from Corollary 14.6.1 of Rockafellar (1970), the dimension of W(y^i) equals p − k, where k is the lineality of cone (Y − {y^i}). Since y^i ∈ Yex, by Proposition 4.2.3, the lineality of cone (Y − {y^i}) equals zero. Therefore, the dimension of W(y^i) equals p. This implies that the relative interior of W(y^i) equals its interior, i.e.,

ri (W(y^i)) = int (W(y^i)). (4.5)

Since y^i ∈ YE, W^0 ∩ W(y^i) ≠ ∅. Notice that W^0 is an open set. Consequently, by Corollary 6.3.2 of Rockafellar (1970), it follows that

W^0 ∩ ri (W(y^i)) ≠ ∅. (4.6)

From (4.5) and (4.6), we have W^0 ∩ int (W(y^i)) ≠ ∅. □









Remark 4.2.1. For each i = 1, 2, ..., q, since int (W^0 ∩ W(y^i)) = W^0 ∩ int W(y^i), it follows from Theorem 4.2.2 that W^0 ∩ W(y^i) has a nonempty interior. Based upon this result, the outcome set-based decomposition method will subdivide the weight set W^0 into a union of subsets of W^0 with nonempty interiors. We will use Example 4.2.1 later to illustrate this property. The example will also show that this property does not hold for the decision set-based decompositions.

Theorem 4.2.3. Let w ∈ W^0 ∩ int W(y^i) for some i ∈ {1,...,q}. Then y^i is the unique optimal solution to the linear program

max {w^T y | y ∈ Y}.

Proof. Let e^k, k = 1, ..., p, denote the vector in R^p whose k-th component equals one and whose other components equal zero. Since W(y^i) is a polyhedron, it is a convex set. Since w ∈ int W(y^i), this implies that there exists a scalar M > 0 sufficiently large so that for each k = 1,...,p,

w + (w_k/M) e^k ∈ int W(y^i),

where w_k is the k-th component of w. Notice that

Σ_{k=1}^{p} (w + (w_k/M) e^k) = p w + (1/M) w = (p + 1/M) w. (4.7)

Set w^k = w + (w_k/M) e^k, k = 1,...,p, and set t = 1/(p + 1/M). Then t > 0 and, by (4.7),

w = Σ_{k=1}^{p} t w^k. (4.8)

Since w ∈ int W(y^i), y^i is an optimal solution to max {w^T y | y ∈ Y}.

Suppose that there exists a vector ȳ ∈ Y such that w^T ȳ = w^T y^i. By (4.8), we have

w^T y^i = Σ_{k=1}^{p} t (w^k)^T y^i and w^T ȳ = Σ_{k=1}^{p} t (w^k)^T ȳ. (4.9)

Since w^k ∈ int W(y^i) for all k = 1, 2,...,p, we have

(w^k)^T y^i ≥ (w^k)^T y

for each y in Y and for all k = 1,...,p. Since ȳ ∈ Y, this implies that

(w^k)^T y^i ≥ (w^k)^T ȳ

for each k = 1,...,p. It follows from t > 0 that

t (w^k)^T y^i ≥ t (w^k)^T ȳ (4.10)

for each k = 1,...,p.

We now claim that for each k = 1, 2, ..., p,

(w^k)^T y^i = (w^k)^T ȳ.

Suppose, to the contrary, that there exists some k_0 ∈ {1, 2,...,p} such that

(w^{k_0})^T y^i > (w^{k_0})^T ȳ.

Then,

t (w^{k_0})^T y^i > t (w^{k_0})^T ȳ. (4.11)

From (4.9)-(4.11), we see that

w^T y^i = Σ_{k=1}^{p} t (w^k)^T y^i > Σ_{k=1}^{p} t (w^k)^T ȳ = w^T ȳ,

which contradicts the assumption that w^T ȳ = w^T y^i. We have thus proved the claim that for each k = 1, 2, ..., p,

(w^k)^T y^i = (w^k)^T ȳ.

By the claim proven in the previous paragraph and the definitions of w^k, k = 1,...,p,

(w + (w_k/M) e^k)^T y^i = (w + (w_k/M) e^k)^T ȳ

for each k = 1, ..., p. Since w^T ȳ = w^T y^i, this implies that

(w_k/M) (e^k)^T y^i = (w_k/M) (e^k)^T ȳ (4.12)

for each k = 1,...,p. Since w ∈ W^0, we know that w_k > 0, k = 1,...,p. Thus, by (4.12), since M > 0, (e^k)^T y^i = (e^k)^T ȳ for each k = 1,...,p. As a result,

y^i = ȳ.

Consequently, we have proven that, if w^T y^i = w^T ȳ and ȳ ∈ Y, then y^i = ȳ. Therefore, y^i is the unique optimal solution to max {w^T y | y ∈ Y}. □

Remark 4.2.2. Let y^i be an efficient extreme point of problem MOLP in the outcome space. From Theorem 3.1 in Benson (1982), there exists a weight vector w in W^0 such that y^i is the unique optimal solution to the linear program max {w^T y | y ∈ Y}. Furthermore, from Theorems 4.2.2 and 4.2.3, we see that y^i is the unique optimal solution to the linear program max {w^T y | y ∈ Y} for any w ∈ W^0 ∩ int W(y^i), where W^0 ∩ int W(y^i) ≠ ∅. This observation slightly extends the result in Theorem 3.1 of Benson (1982).

Theorem 4.2.4. Suppose that i, j ∈ {1, 2, ..., q} and that y^i ≠ y^j. Then,

W^0 ∩ int W(y^i) ∩ W(y^j) = ∅.

Proof. Suppose that W^0 ∩ int W(y^i) ∩ W(y^j) ≠ ∅. Then, we may choose a point w in W^0 ∩ int W(y^i) ∩ W(y^j). Since w ∈ W^0 ∩ int W(y^i), it follows from Theorem 4.2.3 that y^i is the unique optimal solution to the linear program

max {w^T y | y ∈ Y}.

Since w ∈ W(y^j), y^j is an optimal solution to the linear program

max {w^T y | y ∈ Y}.

Therefore, y^i = y^j. This contradicts y^i ≠ y^j. Thus, W^0 ∩ int W(y^i) ∩ W(y^j) = ∅. □

Remark 4.2.3. Theorem 4.2.4 implies that there exists an outcome set-based decomposition of W^0 that consists of a union of subsets of W^0 with no interior points in common. Later, Example 4.2.1 will illustrate this property.

Theorem 4.2.5. Suppose that i, j ∈ {1, 2,...,q} and that y^i ≠ y^j. Then, W^0 ∩ W(y^i) ≠ W^0 ∩ W(y^j).

Proof. The result follows immediately from Theorem 4.2.2 and Theorem 4.2.4. □

Remark 4.2.4. Theorem 4.2.5 implies that there exists a one-to-one correspondence between the efficient extreme points in the outcome space and the subsets of the corresponding weight set W^0.



Theorem 4.2.6. Let H be a nonempty face of Y, and let ȳ, ỹ be two points in ri H. Then, W(ȳ) = W(ỹ).

Proof. Since H is a nonempty face of Y, it is a convex set. In fact, from Rockafellar (1970), since Y is a polyhedron, H is a nonempty polyhedral set. By Theorem 6.4 of Rockafellar (1970), since ȳ, ỹ ∈ ri H, we may choose t_1, t_2 > 1.0 such that

(1 − t_1) ỹ + t_1 ȳ ∈ H,

(1 − t_2) ȳ + t_2 ỹ ∈ H.

By the definition of W(ȳ), for each w in R^p,

w ∈ W(ȳ) iff w^T (z − ȳ) ≤ 0, for all z ∈ Y,

iff w^T (z − ỹ) + w^T (ỹ − ȳ) ≤ 0, for all z ∈ Y. (4.13)

By choosing z = (1 − t_1) ỹ + t_1 ȳ in (4.13), we obtain

w^T [t_1 (ȳ − ỹ)] + w^T (ỹ − ȳ) ≤ 0.

This implies that

(t_1 − 1) w^T (ȳ − ỹ) ≤ 0.

Since t_1 > 1, we see that

w^T (ȳ − ỹ) ≤ 0. (4.14)

Similarly, by choosing z = (1 − t_2) ȳ + t_2 ỹ in (4.13), we see that

w^T (ỹ − ȳ) ≤ 0. (4.15)

Together (4.14) and (4.15) yield

w^T (ȳ − ỹ) = 0,

so that w^T ȳ = w^T ỹ. By the definition of W(ỹ), this implies that w ∈ W(ȳ) if and only if w ∈ W(ỹ). □

A set S' is said to be a proper subset of S if S' is a nonempty subset of S and S' ≠ S.

Theorem 4.2.7. Let I be a proper subset of {1, 2,..., q}. Then, W^0 ∩ (∪_{i∈I} W(y^i)) is a proper subset of W^0.

Proof. Since I is a proper subset of {1, 2,..., q}, we may choose some j ∈ {1, 2,..., q} such that j ∉ I. By Theorem 4.2.2, we can choose a point w ∈ W^0 ∩ int W(y^j). Since j ∉ I, we have y^j ≠ y^i for each i ∈ I. It follows from Theorem 4.2.4 that w ∉ W(y^i) for each i ∈ I. Therefore, w ∉ W^0 ∩ (∪_{i∈I} W(y^i)). Consequently, W^0 ∩ (∪_{i∈I} W(y^i)) is a proper subset of W^0. □









Remark 4.2.5. Theorem 4.2.7 will be used later to prove that the Basic Weight Set Decomposition Algorithm for finding all efficient extreme points in the outcome space is finite and valid.

The following example is a slight modification of Example 22 on p. 185 of Steuer (1986). It shows that the outcome set-based decomposition establishes a one-to-one correspondence between the efficient extreme points y^i, i = 1, 2,..., q, and the subsets W^0 ∩ W(y^i), i = 1, 2, ..., q, of W^0. It also shows that, for each i ∈ {1, 2, ..., q}, W^0 ∩ W(y^i) has a nonempty interior, and that, for any i ≠ j and i, j ∈ {1, 2, ..., q}, W^0 ∩ W(y^i) and W^0 ∩ W(y^j) have no interior points in common.

Example 4.2.1. Consider the MOLP problem

max  x_1 + 2x_2 = y_1
     −2x_1      = y_2
s.t. 2x_1 + x_3 ≤ 6
     −x_1 + 3x_2 ≤ 6
     −x_1 + x_2 ≤ 2
     x_3 ≤ 2
     x_1, x_2, x_3 ≥ 0.

The decision set X (see Figure 4.1) has eight extreme points. These are x^1 = (0, 2, 0)^T, x^2 = (0, 2, 2)^T, x^3 = (3, 3, 0)^T, x^4 = (2, 8/3, 2)^T, x^5 = (0, 0, 0)^T, x^6 = (0, 0, 2)^T, x^7 = (2, 0, 2)^T and x^8 = (3, 0, 0)^T. Here, x^1 and x^2 are each degenerate extreme points, and x^3, x^4, ..., x^8 are nondegenerate extreme points. It can be shown that in this problem Xex ∩ XE = {x^1, x^2, x^3, x^4}. The outcome set Y (see Figure 4.2) has four extreme points. These are y^1 = (4, 0)^T, y^3 = (9, −6)^T, y^8 = (3, −6)^T and y^5 = (0, 0)^T. In the outcome space, the set of efficient extreme points is given by Yex ∩ YE = {y^1, y^3}. In Figure 4.2, for each i = 1, 2, ..., 8, y^i is the outcome point corresponding to x^i.



















Figure 4.1. Decision Set X


Figure 4.2. Outcome Set Y


It can be shown that

W(x^1) = W(x^2) = {w ∈ R^2 | w_1 ≥ 0, 5w_1 − 6w_2 ≤ 0},

W(x^3) = {w ∈ R^2 | w_1 ≥ 0, 5w_1 − 6w_2 ≥ 0},

and

W(x^4) = {w ∈ R^2 | w_1 ≥ 0, 5w_1 − 6w_2 = 0}.

It is obvious that the correspondence between x^i and W^0 ∩ W(x^i), i = 1, 2, ..., 8, is not a one-to-one correspondence. Notice that int W(x^4) ∩ W^0 = ∅ and that int (W^0 ∩ W(x^1)) = int (W^0 ∩ W(x^2)).

For y^1 and y^3, it is not difficult to show that

W(y^1) = {w ∈ R^2 | w_1 ≥ 0, −5w_1 + 6w_2 ≥ 0},

and

W(y^3) = {w ∈ R^2 | w_1 ≥ 0, −5w_1 + 6w_2 ≤ 0}.

It is obvious that the correspondence between y^i and W^0 ∩ W(y^i), i = 1, 3, is a one-to-one correspondence. Notice also from the above observations that while int W(x^4) ∩ W^0 = ∅, int W(y^1) ∩ W^0 ≠ ∅ and int W(y^3) ∩ W^0 ≠ ∅. Furthermore, notice that W^0 = (W^0 ∩ W(y^1)) ∪ (W^0 ∩ W(y^3)), and that the subsets W^0 ∩ int W(y^1) and W^0 ∩ int W(y^3) of W^0 have no points in common.
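The cone computations above can be checked numerically. The snippet below is an illustrative check we add here, not part of the dissertation; it hard-codes the extreme points of Y and the inequality descriptions of W(y^1) and W(y^3) from the example, and confirms that a weight w maximizes w^T y at y^1 or y^3 exactly as cone membership predicts.

```python
# Extreme points of the outcome set Y in Example 4.2.1.
Y_ex = {"y1": (4.0, 0.0), "y3": (9.0, -6.0), "y8": (3.0, -6.0), "y5": (0.0, 0.0)}

def in_W_y1(w):
    # W(y1) = {w in R^2 | w1 >= 0, -5*w1 + 6*w2 >= 0}
    return w[0] >= 0 and -5*w[0] + 6*w[1] >= 0

def in_W_y3(w):
    # W(y3) = {w in R^2 | w1 >= 0, -5*w1 + 6*w2 <= 0}
    return w[0] >= 0 and -5*w[0] + 6*w[1] <= 0

def maximizers(w):
    """Names of the extreme points of Y attaining max w^T y.  Since Y is a
    polytope, the maximum of a linear function over Y is attained at an
    extreme point, so scanning the extreme points suffices."""
    vals = {name: w[0]*y[0] + w[1]*y[1] for name, y in Y_ex.items()}
    best = max(vals.values())
    return {name for name, v in vals.items() if abs(v - best) < 1e-9}
```

For w = (1, 1) we have −5w_1 + 6w_2 = 1 > 0, so w ∈ int W(y^1) and y^1 is the unique maximizer, in agreement with Theorem 4.2.3; the boundary weight w = (6, 5) satisfies −5w_1 + 6w_2 = 0, lies in both cones, and both y^1 and y^3 attain the maximum.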



4.3. The Basic Weight Set Decomposition Algorithm


The algorithm BWSDA is based on the decomposition of the weight set W^0. At each iteration, the algorithm will first either find a weight vector which will lead to an unexplored efficient extreme point in the outcome space or conclude that all points in Yex ∩ YE have been found.

Two questions arise. One is how to find a weight vector which will lead us to an unexplored point in Yex ∩ YE, or to determine that such a weight vector does not exist. We will address this question later. The other question is how to find a point in Yex ∩ YE, given a weight vector in W^0. The following two results will answer the latter question.

Theorem 4.3.1. Assume that w is a weight vector in Wo. If x* is the unique optimal

solution to problem LP(w), then y* = Cx* is an efficient extreme point of problem MOLP

in the outcome space RP.









Proof. Since x* is the unique optimal solution to problem LP(w), y* is the unique optimal solution to the problem max {w^T y | y ∈ Y}. By linear programming theory, this implies that y* is an extreme point of Y. Since w ∈ W^0, it follows from Geoffrion (1968) that x* is an efficient solution of problem MOLP. Therefore, y* is an efficient outcome of problem MOLP. Consequently, y* is an efficient extreme point of problem MOLP in the outcome space R^p. □
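Theorem 4.3.1 suggests the standard weighted-sum computation: solve LP(w) for a strictly positive weight vector and map the optimal solution into the outcome space. The sketch below uses scipy on small hypothetical data (C, A and b are ours for illustration, not the example of Figures 4.1 and 4.2); for this data LP(w) has a unique optimum, so by the theorem y* = Cx* is an efficient extreme point of Y.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical MOLP data: two objectives (the rows of C) over the
# compact decision set X = {x in R^2 | Ax <= b, x >= 0}.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0])

def solve_LP(w):
    """LP(w): max w^T C x over X (linprog minimizes, hence the sign flip)."""
    res = linprog(-(C.T @ w), A_ub=A, b_ub=b, bounds=[(0, None)] * A.shape[1])
    assert res.success
    return res.x

w = np.array([0.7, 0.3])      # a strictly positive weight vector in W^0
x_star = solve_LP(w)          # unique optimum for this w
y_star = C @ x_star           # y* = C x*: an efficient extreme point of Y
print(x_star, y_star)
```

Here the simplex-type solver returns an optimal vertex of X; uniqueness must still be checked separately, which is exactly the situation Theorem 4.3.2 and Subprocedure I below are designed to handle.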

Theorem 4.3.2. Assume that w is a weight vector in W^0 and that x* is an optimal solution to problem LP(w). If y' is an extreme point of the set Y ∩ {y ∈ R^p | w^T y = w^T Cx*}, then y' is also an efficient extreme point of problem MOLP in the outcome space R^p.

Proof. Notice that, as in the proof of Theorem 4.3.1, it follows that x* is an efficient solution of problem MOLP and y' is an efficient outcome of problem MOLP. Thus, we only need to prove that y' is an extreme point of Y.

Suppose, to the contrary, that y' is not an extreme point of Y. Then, there exist y^1, y^2 ∈ Y, y^1 ≠ y^2, such that

y' = ty^1 + (1−t)y^2

for some t such that 0 < t < 1. Choose such a t. Then

w^T y' = tw^T y^1 + (1−t)w^T y^2.    (4.16)

We claim that w^T y^1 = w^T y^2. To show this claim, suppose that w^T y^1 ≠ w^T y^2. Without loss of generality, suppose that w^T y^1 < w^T y^2. It then follows from (4.16) that

w^T y' < tw^T y^2 + (1−t)w^T y^2 = w^T y^2.    (4.17)







Since y' is in Y ∩ {y ∈ R^p | w^T y = w^T Cx*}, w^T y' = w^T Cx*. Because y^2 ∈ Y, we may choose a vector x^2 ∈ X such that y^2 = Cx^2. As a result, by (4.17), it follows that

w^T Cx* < w^T Cx^2.

But since x^2 ∈ X, this contradicts that x* is an optimal solution to problem LP(w), so the claim must hold.

By (4.16) and the claim proven in the paragraph above,

w^T Cx* = w^T y' = w^T y^1 = w^T y^2.

Therefore, y^1, y^2 ∈ Y ∩ {y ∈ R^p | w^T y = w^T Cx*}. This contradicts that y' is an extreme point of Y ∩ {y ∈ R^p | w^T y = w^T Cx*}. Hence, our assumption that y' is not an extreme point of Y must be false. □

Basic Weight Set Decomposition Algorithm (BWSDA)

STEP 1. Set W^1 = W^0. Choose any point w^1 ∈ W^1. Find any optimal extreme point solution x^1 to problem LP(w^1). Set k := 1, EX^0 := ∅, EY^0 := ∅.

STEP 2. If x^k is the unique optimal solution to problem LP(w^k), then set x̄^k = x^k and ȳ^k = Cx̄^k. Otherwise, find any extreme point ȳ^k of the set Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k}, and any extreme point x̄^k of X such that ȳ^k = Cx̄^k. Set EX^k = EX^(k−1) ∪ {x̄^k}, EY^k = EY^(k−1) ∪ {ȳ^k}, and W^(k+1) = W^k \ W(ȳ^k).

STEP 3. Find any point w^(k+1) in W^(k+1). If such a point w^(k+1) does not exist, stop: Y_ex ∩ Y_E = EY^k. Otherwise, go to STEP 4.

STEP 4. Find any optimal extreme point solution x^(k+1) to problem LP(w^(k+1)). Set k := k+1 and go to STEP 2.
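The loop structure of BWSDA can be sketched in a few lines of Python. The sketch below is deliberately not the full algorithm: in place of the exact STEP 3 procedures developed later in this chapter, the helper find_weight simply scans a finite grid of weight vectors, so in general it can miss efficient points whose weight-set cells avoid the grid. The instance data are hypothetical; for them the sketch recovers all three efficient extreme outcomes.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical MOLP: y = Cx over X = {x >= 0 | Ax <= b}.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0])

def solve_LP(w):
    res = linprog(-(C.T @ w), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    return res.x, -res.fun            # optimal vertex and optimal value

def find_weight(EY, grid):
    # Stand-in for STEP 3: a grid weight w is "uncovered" when every found
    # outcome y^i is strictly beaten by the optimal value of LP(w), i.e.
    # w lies in none of the sets W(y^i) already removed from the weight set.
    for w in grid:
        _, val = solve_LP(w)
        if all(w @ y < val - 1e-9 for y in EY):
            return w
    return None

grid = [np.array([t, 1.0 - t]) for t in np.linspace(0.05, 0.95, 19)]
EY, w = [], grid[0]
while w is not None:                  # the BWSDA loop: STEPs 2-4
    x, _ = solve_LP(w)
    y = C @ x
    if not any(np.allclose(y, yi) for yi in EY):
        EY.append(y)                  # EY^k = EY^(k-1) union {y^k}
    w = find_weight(EY, grid)
print(sorted(map(tuple, np.round(EY, 4))))
```

The finiteness argument of Theorem 4.3.3 is visible in the loop: each weight returned by find_weight is uncovered by all sets W(y^i) found so far, so the next LP solve must produce a new efficient extreme outcome.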








Notice that the feasible region of problem LP(w) is a nonempty compact polyhedral set. Thus, for any point w in R^p, problem LP(w) has at least one optimal extreme point solution, and STEP 1 and STEP 4 are well defined.

In STEP 2, if x^k is the unique optimal solution to problem LP(w^k), then it follows from Theorem 4.3.1 that ȳ^k = Cx^k is an efficient extreme point of problem MOLP in the outcome space R^p. Otherwise, the algorithm calls for finding an extreme point of the compact polyhedral set Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k}. By the theory of linear programming, such an extreme point ȳ^k always exists. Therefore, STEP 2 is well defined. Furthermore, by Theorem 4.3.2, ȳ^k is an efficient extreme point of problem MOLP in the outcome space R^p.

In STEP 3, if W^(k+1) = ∅, then the algorithm stops. Otherwise, the algorithm calls for finding any point in W^(k+1). Therefore, STEP 3 is also well defined.

The following result shows that the algorithm is finite and valid.

Theorem 4.3.3. The Basic Weight Set Decomposition Algorithm terminates at iteration q ≥ 1 with EY^q = Y_ex ∩ Y_E, where q is the number of points in Y_ex ∩ Y_E.

Proof. Since Y is nonempty and compact, problem MOLP has at least one efficient extreme point in the outcome space. Thus, q ≥ 1. Since X and Y are nonempty and compact, we can always find the point ȳ^1 called for in STEP 2 of the first iteration of algorithm BWSDA.

Suppose that k ≥ 1 and EY^k = {ȳ^1, ..., ȳ^k}. It follows from Theorems 4.3.1 and 4.3.2 that, for each 1 ≤ i ≤ k, ȳ^i is an efficient extreme point of problem MOLP in the outcome space. Thus, EY^k is a set of efficient extreme points of problem MOLP in the outcome space. Suppose that EY^k ≠ Y_ex ∩ Y_E. In Theorem 4.2.7, let I = {1, 2, ..., k}. Then, we have that W^0 ∩ (∪_{i=1}^{k} W(ȳ^i)) is a proper subset of W^0. From this, there is a point w in W^0 such that w ∉ ∪_{i=1}^{k} W(ȳ^i), i.e., such that w ∈ W^(k+1). Therefore, when EY^k ≠ Y_ex ∩ Y_E, the algorithm can always find a point w^(k+1) ∈ W^(k+1) in STEP 3 of iteration k, and the algorithm will continue at least to STEP 2 of iteration k+1. In STEP 2 of iteration k+1, the algorithm will then find a new point ȳ^(k+1) ∈ Y_ex ∩ Y_E such that w^(k+1) ∈ W(ȳ^(k+1)). Since w^(k+1) ∉ ∪_{i=1}^{k} W(ȳ^i), w^(k+1) ∉ W(ȳ^i) for any 1 ≤ i ≤ k. Therefore, ȳ^(k+1) ≠ ȳ^i for all 1 ≤ i ≤ k. This implies that when EY^k ≠ Y_ex ∩ Y_E, the algorithm will continue to the next iteration, where it will find an unexplored point in Y_ex ∩ Y_E. Since the number of points in Y_ex ∩ Y_E is q, the algorithm will eventually stop in iteration q with EY^q = Y_ex ∩ Y_E. □

Now, we address the implementation of BWSDA.

In STEP 1 or STEP 4, we need to find any optimal extreme point solution, denoted by x^k, to the linear programming problem LP(w^k) for some k ≥ 1. This can be accomplished, for instance, by using the simplex method of linear programming.

In STEP 2 of BWSDA, for any given k, it may be necessary to find an extreme point ȳ^k of the set Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k}, and any extreme point x̄^k of X such that ȳ^k = Cx̄^k. We explain now one method for accomplishing this.

By definition of Y, Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k} = {Cx | Ax ≤ b, x ≥ 0, (w^k)^T Cx = (w^k)^T Cx^k}. Let

D = [C, 0_(p×m)],

let Ā denote the (m+1) × (n+m) matrix obtained by appending the row [(w^k)^T C, 0_(1×m)] beneath the block [A, I_(m×m)], let

b̄ = (b^T, (w^k)^T Cx^k)^T,

and let

Z = {z | Āz = b̄, z ≥ 0},

where z^T = (x^T, s^T) ∈ R^(n+m). Then

Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k} = DZ = {Dz | z ∈ Z}.

Since x^k is an extreme point of {x ∈ R^n | Ax ≤ b, x ≥ 0}, it is also an extreme point of {x ∈ R^n | Ax ≤ b, x ≥ 0, (w^k)^T Cx = (w^k)^T Cx^k}. It follows that ((x^k)^T, (s^k)^T)^T is an extreme point of Z, where s^k = b − Ax^k. We need to find an extreme point ȳ^k of DZ and an extreme point x̄^k of X such that ȳ^k = Cx̄^k. Starting with z^0 = ((x^k)^T, (s^k)^T)^T, we can find such points ȳ^k and x̄^k by using an algorithm in Benson and Sun (2000). Using this approach, we obtain the following procedure.

Procedure for finding ȳ^k and x̄^k in STEP 2 of BWSDA (Subprocedure I):

Step 1. Find any optimal extreme point solution, denoted by z̄^1 = ((x̄^1)^T, (s̄^1)^T)^T, to the following linear program LPD_1:

LPD_1: min ⟨D_1, z⟩, s.t. z ∈ Z,

where D_1 denotes the first row of D. If z̄^1 is the unique optimal solution to LPD_1, then set x̄^k = x̄^1, ȳ^k = Cx̄^1 and stop. Otherwise, let v_1 denote the optimal value of LPD_1, set i = 2, and go to Step i.

Step i (i = 2, 3, ..., p). Find any optimal extreme point solution, denoted by z̄^i = ((x̄^i)^T, (s̄^i)^T)^T, to the following linear program LPD_i:

LPD_i: min ⟨D_i, z⟩,
       s.t. ⟨D_t, z⟩ = v_t, t = 1, 2, ..., i−1,
            z ∈ Z,

where D_t denotes the t-th row of D, and v_t is the optimal value of LPD_t. If z̄^i is the unique optimal solution to LPD_i, or if i = p, then set x̄^k = x̄^i, ȳ^k = Cx̄^i and stop. Otherwise, let v_i denote the optimal value of LPD_i, set i = i + 1, and go to Step i.

From Benson and Sun (2000), the point ȳ^k obtained by the above procedure is an extreme point of DZ = Y ∩ {y ∈ R^p | (w^k)^T y = (w^k)^T Cx^k}. Notice that Subprocedure I involves solving at most p linear programming problems. The following result shows that the point x̄^k obtained by the procedure is an extreme point of X.

Theorem 4.3.4. The point x̄^k obtained by Subprocedure I is an extreme point of X.

Proof. It is obvious, whenever the procedure stops at some Step i, i ≥ 1, that x̄^k ∈ X. Suppose, to the contrary, that x̄^k is not an extreme point of X. Then, there exist x', x'' in X such that x' ≠ x'' and

x̄^k = αx' + (1 − α)x''    (4.18)

for some α such that 0 < α < 1. Since z̄^i = ((x̄^i)^T, (s̄^i)^T)^T ∈ Z and x̄^k = x̄^i, it follows that

(w^k)^T Cx̄^k = (w^k)^T Cx^k.

This implies that x̄^k is an optimal solution to problem LP(w^k), since x^k is an optimal solution to this problem. This together with (4.18) yields that x' and x'' are also optimal solutions to problem LP(w^k). Therefore,

(w^k)^T Cx' = (w^k)^T Cx̄^k = (w^k)^T Cx'' = (w^k)^T Cx^k.

Notice that x' and x'' are in X. Thus, x' and x'' are distinct points in {x ∈ R^n | Ax ≤ b, x ≥ 0, (w^k)^T Cx = (w^k)^T Cx^k}. Let s' = b − Ax', s'' = b − Ax'', z' = ((x')^T, (s')^T)^T and z'' = ((x'')^T, (s'')^T)^T. We then have that z' and z'' are two distinct points in Z such that

z̄^i = αz' + (1 − α)z''.    (4.19)

Therefore,

Dz̄^i = αDz' + (1 − α)Dz''.

From this equation, since z̄^i is an optimal solution to problem LPD_i, we obtain that for each t = 1, 2, ..., i,

v_t = ⟨D_t, z̄^i⟩ = α⟨D_t, z'⟩ + (1 − α)⟨D_t, z''⟩.    (4.20)

With t = 1, since z', z'' ∈ Z and 0 < α < 1, (4.20) implies that z' and z'' are distinct optimal solutions to problem LPD_1. As a result, z' and z'' are feasible solutions to problem LPD_2. By setting t = 2 in (4.20), since 0 < α < 1, this implies that z' and z'' are distinct optimal solutions to problem LPD_2. As a result, z' and z'' are feasible solutions to problem LPD_3. With t = 3 in (4.20), this implies in a similar manner that z' and z'' are distinct optimal solutions to problem LPD_3. By continuing in this fashion, we see that z' and z'' are distinct feasible solutions to problem LPD_i. This together with (4.19) yields that z̄^i is not an extreme point of the feasible region of problem LPD_i. This contradicts that z̄^i is an optimal extreme point solution of problem LPD_i. □

In STEP 3, we need to find, if it exists, a point w^(k+1) ∈ W^(k+1), where W^(k+1) is usually a non-closed, nonconvex cone. We will present two approaches that either find a point w^(k+1) ∈ W^(k+1) or show that no such point exists. These two approaches will yield two versions of the Basic Weight Set Decomposition Algorithm, WSDA-I and WSDA-II.



4.3.1. Tree Search Approach for STEP 3 of BWSDA

Let k ∈ {1, 2, ..., q}. Notice that W^(k+1) = W^0 \ ∪_{i=1}^{k} W(y^i). To introduce the first approach, we will first present some necessary and sufficient conditions for w ∉ W(y^i), where i ∈ {1, 2, ..., q}.

Suppose that x ∈ X, y ∈ Y, and y = Cx. Notice that W(y) = {w ∈ R^p | w^T y ≥ w^T y' for all y' in Y} = {w ∈ R^p | w^T Cx ≥ w^T Cx' for all x' in X}. Therefore, w ∈ W(y) if and only if x is an optimal solution to problem LP(w). By the duality theory of linear programming, this implies that w ∈ W(y) if and only if there exists a point u ∈ R^m such that

C^T w − A^T u ≤ 0,    (4.21)

u^T(Ax − b) = 0,    (4.22)

x^T(C^T w − A^T u) = 0,    (4.23)

u ≥ 0.    (4.24)

Let A_i be the i-th row of A for i = 1, 2, ..., m, and let E_i be the i-th row of the n×n identity matrix for i = 1, 2, ..., n. For each given pair (x^i, y^i), i = 1, 2, ..., q, where x^i is an extreme point of X and y^i = Cx^i, let

I_D(x^i) = {j ∈ {1, 2, ..., m} | A_j x^i = b_j}.

For each i = 1, 2, ..., q, let A^i be the matrix whose rows are A_j, j ∈ I_D(x^i); if I_D(x^i) = ∅, let A^i equal the scalar 0. For each i = 1, 2, ..., q, let

I_0(x^i) = {j ∈ {1, 2, ..., n} | x_j^i = 0},

and let E^i be the matrix whose rows are E_l, l ∈ I_0(x^i); if I_0(x^i) = ∅, let E^i equal the scalar 0.

Let i ∈ {1, 2, ..., k}. From (4.21)–(4.24), we know that w ∈ W(y^i) if and only if there exist a vector u^i ≥ 0 and a vector v^i ≥ 0 such that

C^T w − (A^i)^T u^i + (E^i)^T v^i = 0.    (4.25)

Therefore, w ∉ W(y^i) if and only if the linear system (4.25) of n equations has no solution (u^i, v^i) ≥ 0. By Farkas' Lemma, this implies that w ∉ W(y^i) if and only if there exists a point d^i ∈ R^n such that

w^T Cd^i > 0,

A^i d^i ≤ 0, E^i d^i ≥ 0.

Let

D(i) = {d^i ∈ R^n | A^i d^i ≤ 0, E^i d^i ≥ 0},

and let CD(i) be the image of D(i) under C. We immediately obtain the following result.

Theorem 4.3.5. Let i ∈ {1, 2, ..., q}. For a given w ∈ R^p, w ∉ W(y^i) if and only if there exists d^i ∈ D(i) such that w^T Cd^i > 0.
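Theorem 4.3.5 turns the membership test into a one-LP computation: w ∉ W(y^i) exactly when the conic program max{w^T Cd | A^i d ≤ 0, E^i d ≥ 0} has a positive optimum. A sketch on hypothetical data follows; the box −1 ≤ d_j ≤ 1 is our added normalization (not from the text) so that the cone problem cannot be unbounded, which does not change the sign of the optimal value.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: X = {x >= 0 | Ax <= b}, outcome map y = Cx.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0])

def not_in_W(w, x_i):
    """True iff w is NOT in W(y^i), where y^i = C x_i (Theorem 4.3.5)."""
    Ai = A[np.isclose(A @ x_i, b)]                # rows indexed by I_D(x^i)
    Ei = np.eye(len(x_i))[np.isclose(x_i, 0.0)]   # rows indexed by I_0(x^i)
    A_ub = np.vstack([Ai, -Ei])                   # A^i d <= 0 and -E^i d <= 0
    res = linprog(-(C.T @ w), A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
                  bounds=[(-1, 1)] * len(x_i))    # normalization box (ours)
    return -res.fun > 1e-9                        # positive max => w escapes

x1 = np.array([2.0, 0.0])                     # an extreme point of X
print(not_in_W(np.array([0.9, 0.1]), x1))     # x^1 optimal for this w
print(not_in_W(np.array([0.1, 0.9]), x1))     # an improving direction exists
```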

The following theorem gives a way to generate D(i) and CD(i) for each i = 1, 2, ..., q. For any set T ⊆ R^n, let cone T = {αt | α ≥ 0, t ∈ T}.

Theorem 4.3.6. For each i = 1, 2, ..., q, we have

(1) D(i) = cone (X − {x^i});

(2) CD(i) = cone (Y − {y^i}).








Proof. (1). (a) First, we will show D(i) ⊆ cone (X − {x^i}) for any given i ∈ {1, 2, ..., q}.

Let i ∈ {1, 2, ..., q}, and let d ∈ D(i). Then, A^i d ≤ 0, E^i d ≥ 0. Let t be any positive number. It follows immediately that A^i(td) ≤ 0, E^i(td) ≥ 0. Therefore, A^i(td + x^i − x^i) ≤ 0 and E^i(td + x^i − x^i) ≥ 0. This implies that

A^i(td + x^i) ≤ A^i x^i = b^i,    (4.26)

E^i(td + x^i) ≥ E^i x^i = 0,    (4.27)

where b^i is the vector whose components are b_j, j ∈ I_D(x^i). For each j ∉ I_D(x^i), we have

A_j x^i < b_j.

Thus, for t sufficiently small, we have

A_j(td + x^i) < b_j    (4.28)

for all j ∉ I_D(x^i). By (4.26) and (4.28),

A(td + x^i) ≤ b,    (4.29)

for t sufficiently small. Similarly, notice that, for any j ∉ I_0(x^i), x_j^i > 0. Thus, when t is sufficiently small, we have

td_j + x_j^i > 0,    (4.30)

for all j ∉ I_0(x^i). By (4.27) and (4.30), we have

td + x^i ≥ 0,    (4.31)

for t sufficiently small. From (4.29) and (4.31), we may choose a t > 0 such that td + x^i ∈ X. Then, td ∈ X − {x^i}. Since cone(X − {x^i}) is a cone and t > 0, this implies that d ∈ cone(X − {x^i}). We have thus proven that D(i) ⊆ cone (X − {x^i}).

(b) We now will prove the opposite inclusion. Let d ∈ cone (X − {x^i}). Then, we may choose x ∈ X and t > 0 such that

d = t(x − x^i).

By the definitions of A^i and b^i, since x ∈ X, this implies that A^i d = t(A^i x − A^i x^i) = t(A^i x − b^i) ≤ 0. Furthermore, since x ∈ X, this also implies by definition of E^i that E^i d = t(E^i x − E^i x^i) = tE^i x ≥ 0. Therefore, d ∈ D(i). We have thus proved that cone (X − {x^i}) ⊆ D(i).

By (a) and (b), D(i) = cone (X − {x^i}).

(2). Let i ∈ {1, 2, ..., q}. It is obvious that

C(cone (X − {x^i})) = cone (Y − {y^i}).

This implies from the first part of this theorem that

CD(i) = cone (Y − {y^i}). □
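Part (b) of the proof is easy to spot-check numerically: every direction x − x^i with x ∈ X must satisfy A^i(x − x^i) ≤ 0 and E^i(x − x^i) ≥ 0. A quick randomized check on hypothetical data (the polytope and the extreme point below are ours for illustration):

```python
import numpy as np

# Hypothetical decision set X = {x >= 0 | Ax <= b} and extreme point x^i.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0])
xi = np.array([2.0, 0.0])

Ai = A[np.isclose(A @ xi, b)]            # rows binding at x^i (the set I_D)
Ei = np.eye(2)[np.isclose(xi, 0.0)]      # identity rows with x_j^i = 0 (I_0)

# Sample points of X by rejection and verify that d = x - x^i lies in
# D(i), i.e. A^i d <= 0 and E^i d >= 0 (the inclusion of part (b)).
rng = np.random.default_rng(0)
ok, count = True, 0
while count < 200:
    x = rng.uniform(0.0, 2.0, size=2)
    if np.all(A @ x <= b):
        d = x - xi
        ok = ok and np.all(Ai @ d <= 1e-12) and np.all(Ei @ d >= -1e-12)
        count += 1
print(ok)
```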

Let i ∈ {1, 2, ..., q}. Since X is a polyhedral convex set and x^i is an extreme point of X, cone(X − {x^i}) is a polyhedral convex cone generated by the edges of X emanating from x^i. It follows that D(i) is a polyhedral convex cone generated by the edges of X emanating from x^i. Since X is compact, all edges of X emanating from x^i can be exactly determined by determining all extreme points of X adjacent to x^i.

Let Sat(x^i) denote the set of all extreme points adjacent to x^i in X. Then, D(i) can be generated by determining all points in the set Sat(x^i) − {x^i}. The set Sat(x^i) can be generated by using the simplex method. Similarly, CD(i) is a polyhedral convex cone generated by the edges of Y emanating from y^i. If we let Sat(y^i) denote the set of all extreme points adjacent to y^i in Y, then CD(i) can be generated by determining all points in Sat(y^i) − {y^i}. The set Sat(y^i) can be generated by using the algorithm in Benson and Sun (2000).









Theorem 4.3.7. For each i = 1, 2, ..., q, w ∉ W(y^i) if and only if there exists a point κ^i ∈ Sat(x^i) − {x^i} such that w^T Cκ^i > 0.

Proof. Without loss of generality, assume for some positive integer l_i that Sat(x^i) = {x^(i,1), x^(i,2), ..., x^(i,l_i)}. Since D(i) can be generated by all points in Sat(x^i) − {x^i}, we have

D(i) = {Σ_{t=1}^{l_i} α_t(x^(i,t) − x^i) | α_t ≥ 0, t = 1, 2, ..., l_i}.

By Theorem 4.3.5, w ∉ W(y^i) if and only if there exists a point d^i ∈ D(i) such that w^T Cd^i > 0.

Suppose that d^i = Σ_{t=1}^{l_i} α_t(x^(i,t) − x^i), where α_t ≥ 0, t = 1, 2, ..., l_i. Then,

w^T Cd^i = Σ_{t=1}^{l_i} α_t w^T C(x^(i,t) − x^i).

If w^T Cd^i > 0, then from the above equation, there exists at least one j ∈ {1, 2, ..., l_i} such that w^T C(x^(i,j) − x^i) > 0. Set κ^i = x^(i,j) − x^i. Then w^T Cκ^i > 0.

On the other hand, if κ^i ∈ Sat(x^i) − {x^i} and w^T Cκ^i > 0, then by Theorem 4.3.5, since κ^i ∈ D(i), w ∉ W(y^i). □

Remark 4.3.1. Let k ∈ {1, 2, ..., q}. From Theorem 4.3.7, w ∉ ∪_{i=1}^{k} W(y^i) if and only if there are points κ^i ∈ Sat(x^i) − {x^i}, i = 1, 2, ..., k, such that w^T Cκ^i > 0 for all i = 1, 2, ..., k. Notice that W^(k+1) = W^0 \ ∪_{i=1}^{k} W(y^i). So, w ∈ W^(k+1) if and only if w ∈ W^0 and there are points κ^i ∈ Sat(x^i) − {x^i}, i = 1, 2, ..., k, such that w^T Cκ^i > 0 for all i = 1, 2, ..., k.
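Remark 4.3.1 reduces the test w ∈ W^(k+1) to finitely many inner products once the adjacent extreme points Sat(x^i) are known. A sketch on hypothetical data, with the adjacency list supplied by hand rather than by the simplex-based enumeration the text describes, and with W^0 taken here as the strictly positive orthant (an assumption for this sketch):

```python
import numpy as np

# Hypothetical instance: y = Cx with C = I, and one efficient extreme
# point x^1 = (2, 0) found so far.  Its adjacent extreme points in X are
# listed by hand here; the text obtains Sat(x^i) via the simplex method.
C = np.eye(2)
found = [np.array([2.0, 0.0])]
Sat = {0: [np.array([0.0, 0.0]), np.array([4.0 / 3.0, 4.0 / 3.0])]}

def in_W_next(w):
    """Membership test of Remark 4.3.1: w lies in W^(k+1) iff w is in W^0
    (taken here as w > 0 componentwise) and, for every found x^i, some edge
    direction kappa^i = x_adj - x^i satisfies w^T C kappa^i > 0, i.e. w
    escapes every cone W(y^i)."""
    if not np.all(w > 0):
        return False
    for i, xi in enumerate(found):
        if not any(w @ C @ (xa - xi) > 1e-9 for xa in Sat[i]):
            return False          # no improving edge: w is covered by W(y^i)
    return True

print(in_W_next(np.array([0.2, 0.8])))   # escapes W(y^1): uncovered weight
print(in_W_next(np.array([0.8, 0.2])))   # x^1 stays optimal: covered
```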



