
On the Linear Programming and Group Relaxations of the Uncapacitated Facility Location Problem

Permanent Link: http://ufdc.ufl.edu/UFE0042232/00001

Material Information

Title: On the Linear Programming and Group Relaxations of the Uncapacitated Facility Location Problem
Physical Description: 1 online resource (120 p.)
Language: english
Creator: Khalil, Mohammad
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2010

Subjects

Subjects / Keywords: facility, group, location, relaxations, uncapacitated
Industrial and Systems Engineering -- Dissertations, Academic -- UF
Genre: Industrial and Systems Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: The uncapacitated facility location problem (UFLP) is a classical problem in the Operations Research community that has been studied extensively. Various approaches have been proposed for its solution. In this thesis we seek to determine if the group approach to mixed integer linear programming (MILP) could lead to new advances in the solution of UFLP. To determine whether group relaxations of UFLP can be solved efficiently, we first determine the maximum possible determinant (MPD) of a basis of its linear programming relaxation (LPR). This motivates the investigation of the bases of the LPR of UFLP. Although we show that the MPD is exponential (in terms of the number of customers and the number of facilities), we also show that most bases have small determinants. We give several algorithms to construct bases of the LPR of UFLP. In particular, we present three algorithms to construct unimodular bases, one algorithm to construct bases with desired determinant, and one algorithm to construct bases with the MPD. We show that the solutions corresponding to the bases of the LPR of UFLP with MPD we describe are feasible. We also show that the corresponding linear programming (LP) solution is not very fractional. Finally, we use the above results to study two small instances of UFLP. In the first, we assume that we have two customers and/or two facilities and we show that the LPR of UFLP always describes the convex hull of its integer solutions. In the second, we assume that we have three customers and three facilities and we show that the convex hull of integer solutions can be obtained by adding six inequalities to its LPR. This thesis is organized as follows. In Chapter 1, we give a brief introduction to UFLP and to group relaxations in MILP. In Chapter 2, we obtain some results about the polyhedral structure of the LPR of UFLP. In Chapter 3, we determine the MPD of a basis of the LPR of UFLP, we discuss the feasibility of the solutions corresponding to the bases of the LPR of UFLP with MPD we construct, and conclude with comments about the efficiency of using group relaxations to solve UFLP. In Chapter 4, we study two instances of UFLP with small number of customers and/or facilities. In the first, we assume that we have two customers and/or two facilities while in the second we assume that we have three customers and three facilities. We obtain convex hull descriptions for the set of integer solutions to these problems. In Chapter 5, we describe experimental results on solving the group relaxations of UFLP. Finally, we give a conclusion and discuss future research directions in Chapter 6.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Mohammad Khalil.
Thesis: Thesis (M.S.)--University of Florida, 2010.
Local: Adviser: Richard, Jean-Philippe.
Electronic Access: RESTRICTED TO UF STUDENTS, STAFF, FACULTY, AND ON-CAMPUS USE UNTIL 2012-08-31

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2010
System ID: UFE0042232:00001





Full Text

PAGE 1

ON THE LINEAR PROGRAMMING AND GROUP RELAXATIONS OF THE UNCAPACITATED FACILITY LOCATION PROBLEM By MOHAMMAD KHALIL A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE UNIVERSITY OF FLORIDA 2010

PAGE 2

© 2010 Mohammad Khalil

PAGE 3

This work is dedicated to my parents, my wife, Rania, my daughters, Habiba and Jana, and my siblings, Shaimaa, Sara, and Ahmed, for their continuous love and support.

PAGE 4

ACKNOWLEDGMENTS
First of all, I would like to express my sincere appreciation to my advisor, Dr. Jean-Philippe Richard, for his invaluable guidance. In addition to providing vision and encouragement, Dr. Richard gave me unlimited freedom to explore new avenues for research. He was always available to discuss my ideas and give me excellent feedback. His enthusiasm encouraged me to produce better work. He has taught me not only Operations Research, but also how to communicate effectively and how to write technical work. His devotion to teaching me, to paying close attention to my thoughts (even when they are wrong), and to making this thesis the best it can be has been and will continue to be an inspiration to me. Without his help this work would not have been possible. Second, I would like to thank Dr. Youngpei Guan, the member of my examining committee, for the time he spent reviewing this thesis. I am grateful to Dr. Attia Gomaa, the former chair of the Industrial Engineering Department at Fayoum University, who was the first person to introduce Operations Research to me and who encouraged me to pursue an academic career. His support is enduring. I am also indebted to Mr. Mahmoud Talaat, who built my basic mathematical skills and inspired me to study engineering. Finally, I would like to thank my wife and daughters for their patience, kindness, and continuous support during my studies at the University of Florida.

PAGE 5

TABLE OF CONTENTS
ACKNOWLEDGMENTS ... 4
LIST OF TABLES ... 7
LIST OF FIGURES ... 8
LIST OF ALGORITHMS ... 9
LIST OF ABBREVIATIONS ... 10
LIST OF NOTATIONS ... 11
ABSTRACT ... 12
CHAPTER
1 INTRODUCTION ... 14
1.1 The Facility Location Problem ... 15
1.2 Literature Review ... 16
1.3 The Uncapacitated Facility Location Problem ... 17
1.4 Group Relaxations of Mixed Integer Programming ... 20
2 ON THE POLYHEDRAL STRUCTURE OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM ... 28
2.1 Introduction ... 28
2.2 Case 1: m_y + m_t < n_F ... 31
2.3 Case 2: m_y + m_t = n_F ... 32
2.4 Case 3: m_y + m_t > n_F ... 45
2.5 Constructing UFLP Instances of Desired Determinant ... 62
3 MAXIMUM POSSIBLE DETERMINANT OF BASES OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM ... 79
3.1 Computing the MPD of ±1 Matrices ... 79
3.2 Computing the MPD of (0,1) Matrices ... 82
3.3 Computing the MPD of Bases of the LPR of UFLP ... 84
3.4 On the Feasibility of the LP Solution to UFLP that has the MPD ... 85
3.5 Solving Group Relaxations of UFLP ... 95

PAGE 6

4 SPECIAL CASES ... 102
4.1 Case 1: Two Customers and/or Two Facilities ... 102
4.2 Case 2: Three Customers and Three Facilities ... 102
5 EXPERIMENTAL RESULTS ... 111
6 CONCLUSION AND FUTURE RESEARCH ... 116
LIST OF REFERENCES ... 117
BIOGRAPHICAL SKETCH ... 120

PAGE 7

LIST OF TABLES
3-1 Maximum possible determinant of a ±1 square matrix of size h × h ... 80
3-2 MPD of a (0,1) square matrix of size h × h and number of square (0,1) matrices that have the MPD ... 82
3-3 Maximum possible determinant of B and T_k for given number of columns corresponding to y_i and t_i variables ... 98
3-4 Probability that the MPD of pseudo bases of the LPR of the UFLP for given n_C and n_F is less than or equal to h_U ... 100
5-1 Selection of the parameters n_C and n_F in the construction of UFLP experiments ... 111
5-2 Experimental results ... 114

PAGE 8

LIST OF FIGURES
1-1 Constraint matrix A of the LPR of the UFLP formulation shown in (1.3) ... 19
1-2 The group network associated with the MIP problem in Example 1.1 ... 26
2-1 Matrix A in Example 2.2 ... 30
2-2 Basis B that was obtained from matrix A in Example 2.2 ... 30
2-3 Illustration of the different cases encountered in the proof of Lemma 2.5 ... 43
2-4 Final arrangement of columns included in B if m_y + m_t > n_F ... 46
2-5 Illustration of how to apply ERO (2.5) using (2.7) in Example 2.2 ... 51
2-6 B in Example 2.2 after column permutations in accordance with Figure 2-4 ... 52
2-7 Submatrix G reflects the selected x_ij columns in Example 2.2 ... 52
2-8 Illustration of Example 2.5: a) matrix B and b) matrix B' ... 55
4-1 The six bases of the LPR of UFLP (with n_C = n_F = 3) that have determinant absolute values equal to 2 ... 104
4-2 A complete bipartite graph between the set of facilities and the set of customers for UFLP with n_C = n_F = 3 ... 108
4-3 A complete bipartite graph between the set of facilities and the set of customers that corresponds to the matching associated with each of the inequalities in (4.10) ... 109

PAGE 9

LIST OF ALGORITHMS
UFLP-UNI-1 ... 44
UFLP-UNI-2 ... 44
UFLP-UNI-3 ... 44
UFLP-DET ... 63
UFLP-BASIS ... 64
UFLP-INSTANCE ... 78
HADAMARD ... 80
BINARY ... 83

PAGE 10

LIST OF ABBREVIATIONS
LP    Linear Programming.
IP    Integer Programming.
MIP   Mixed Integer Programming.
MILP  Mixed Integer Linear Programming.
LPR   Linear Programming Relaxation.
FLP   Facility Location Problem.
UFLP  Uncapacitated Facility Location Problem.
MPD   Maximum Possible Determinant.
ERO   Elementary Row Operation.
ECO   Elementary Column Operation.

PAGE 11

LIST OF NOTATIONS
F        Set of potential locations where facilities can be opened.
C        Set of customers.
O        Set of opened facilities, where O ⊆ F.
n_F      |F|.
n_C      |C|.
n        min(n_C, n_F).
f_i      Cost of opening facility i, i ∈ F.
c_ij     Cost of assigning customer j to facility i, i ∈ F, j ∈ C.
c, A, b  The cost vector, constraint matrix, and right-hand-side vector in the standard LP form min{cx : Ax = b, x ≥ 0}.
I        Identity matrix.
0        Matrix of zeros.
E        Matrix of ones.
C_r^n    n-combination of r.
A_(h,k)  Matrix A with h rows and k columns.
A(h,k)   Component in row h and column k of matrix A.
A(.,h)   The h-th column of matrix A.
A(k,.)   The k-th row of matrix A.

PAGE 12

12 Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science ON THE LINEAR PROGRAMMING AND GROUP RELAXATIONS OF THE UNCAPACITATED FACILITY LOCATION PROBLEM By Mohammad Khalil August 2010 Chair: Jean-Philippe Richard Major: Industrial and Systems Engineering The uncapacitated facility location problem (UFLP) is a classical problem in the Operations Research community that has been studied extensively. Various approaches have been proposed for its solution. In this thesis we seek to determine if the group approach to mixed integer linear programming (MILP) could lead to new advances in the solution of UFLP. To determine whether group relaxations of UFLP can be solved efficiently, we first determine the maximum possible determinant (MPD) of a basis of its linear programming relaxation (LPR). This motivates the investigation of the bases of the LPR of UFLP. Although we show that the MPD is exponential (in terms of the number of customers and the number of facilities), we also show that most bases have small determinants. We give several algorithms to construct bases of the LPR of UFLP. In particular, we present three algorithms to construct unimodular bases, one algorithm to construct bases with desired determinant, and one algorithm to construct bases with the MPD. We show that the solutions corresponding to the bases of the LPR of UFLP with MPD

PAGE 13

13 we describe are feasible. We also show that the corresponding linear programming (LP) solution is not very fractional. Finally, we use the above results to study two small instances of UFLP. In the first, we assume that we have two customers and/or two facilities and we show that the LPR of UFLP always describes the convex hull of its integer solutions. In the second, we assume that we have three customers and three facilities and we show that the convex hull of integer solutions can be obtained by adding six inequalities to its LPR. This thesis is organized as follows. In Chapter 1, we give a brief introduction to UFLP and to group relaxations in MILP. In Chapter 2, we obtain some results about the polyhedral structure of the LPR of UFLP. In Chapter 3, we determine the MPD of a basis of the LPR of UFLP, we discuss the feasibility of the solutions corresponding to the bases of the LPR of UFLP with MPD we construct, and conclude with comments about the efficiency of using group relaxations to solve UFLP. In Chapter 4, we study two instances of UFLP with small number of customers and/or facilities. In the first, we assume that we have two customers and/or two facilities while in the second we assume that we have three customers and three facilities. We obtain convex hull descriptions for the set of integer solutions to these problems. In Chapter 5, we describe experimental results on solving the group relaxations of UFLP. Finally, we give a conclusion and discuss future research directions in Chapter 6.

PAGE 14

CHAPTER 1
INTRODUCTION
Many commercial companies, service organizations, and public sector agencies face the problem of deciding where to locate their facilities. The facilities can be factories, warehouses, retailers, hospitals, schools, etc., or even routers and caches for firms working in web services. This problem is known in the Operations Research community as the facility location problem (FLP). It is considered to be a long-term decision problem that can affect the success of an organization. FLP has vast applications and it is often considered to be a pillar of supply chain management. As a result, it has been studied extensively in the literature. Numerous variants of the problem have been considered and a vast array of solution methodologies has been proposed. Variants of the problem include situations where customers' demands are deterministic or stochastic, the capacities of facilities are finite or infinite, the potential locations are continuous or discrete, etc. In this thesis we study a variant of the problem that is known as the uncapacitated facility location problem (UFLP), in which demand is deterministic, the location of facilities must be chosen from a given set, and the capacity of these facilities is sufficiently large to handle all customers' demands. Solution methodologies for UFLP are diverse and include enumeration techniques, cutting plane approaches, approximation algorithms, and heuristics. In this thesis, our goal is to determine whether the group approach to mixed integer linear programming (MILP) could be useful in studying UFLP. The remainder of this chapter is organized as follows. In Section 1.1, we describe different variants of FLP. In Section 1.2, we give a brief literature review of studies of

PAGE 15

FLP. In Section 1.3, we present the classical mixed integer programming formulation of UFLP and its linear programming relaxation (LPR). Finally, in Section 1.4 we give an introduction to group relaxations in mixed integer programming.
1.1 The Facility Location Problem
Given a set of potential locations and a set of customers, the facility location problem (FLP) seeks to determine how many facilities should be opened, where the open facilities should be located, and what customers should be assigned to what open facility, so that the total cost associated with opening facilities and assigning customers to open facilities is minimized. The study of FLP dates back to the 1960s [1,2,3,4,5,6]. Different variants have been considered over the years. We next review some of the common variants of the problem that have been studied. We refer to [7,8,9] for detailed discussions. The uncapacitated facility location problem (UFLP) assumes that the capacities of the facilities are infinite, while the capacitated facility location problem (CFLP) assigns a maximum capacity limit to each facility. The p-center problem is similar to UFLP except that in the p-center problem the number of facilities that should be opened is fixed and the problem minimizes the maximum distance between a customer and its assigned facility. We say that a customer is "covered" if the distance between this customer and its assigned facility is less than or equal to a given distance. The p-median problem is a variant that minimizes the number of uncovered customers under the restriction that the number of opened facilities is at most equal to p. If each customer's demand must be entirely satisfied from a single facility, the problem is called a single-sourcing facility location problem. Otherwise, customers'

PAGE 16

demands can be satisfied from multiple facilities, yielding multi-sourcing facility location problems. The discrete facility location problem is a variant of the problem where facility locations must be selected from a discrete set. However, if the locations are given as coordinates that are continuous, the problem is called a planar facility location problem. The deterministic facility location problem assumes that the customers' demands are deterministic and known beforehand, while the stochastic facility location problem assumes that the customers' demand is known only through a probability distribution.
1.2 Literature Review
The use of exact solution algorithms (branch-and-bound and cutting plane algorithms) to solve FLP is discussed in [10,11]. The polyhedral structure of the FLP is studied in [12,13,14,15,16,17,18]. Cho, Padberg, and Rao [15] showed that for two customers and/or two facilities, the LPR of UFLP always has an integer optimal solution. In Chapter 4, we prove the same result by different means. As far as approximation algorithms are concerned, different greedy algorithms have been developed in [7,19], linear programming (LP) rounding algorithms were provided in [20,21,22], and primal-dual algorithms were given in [23,24]. Guha and Khuller [25] proved that there is no approximation algorithm for the UFLP that can give a better approximation factor than 1.463 (unless P = NP). The best approximation factor known is 1.52 and was obtained by Mahdian and Zhang [26]. Other heuristics have also been proposed for this problem, including local search, simulated annealing, and variable neighborhood search; see [27,28].

PAGE 17

1.3 The Uncapacitated Facility Location Problem
Given a set F of n_F potential locations and a set C of n_C customers, UFLP is concerned with finding a subset O ⊆ F of facilities to open and an assignment of customers to open facilities that minimizes total cost. Each facility is assumed to have an infinite capacity; hence, an open facility can satisfy the demand of all customers assigned to it. Further, we assume that each customer's demand is entirely satisfied from a single facility. We next present the classical mixed integer programming (MIP) formulation of UFLP and its LPR [7].
Inputs:
f_i: cost of opening facility i, i ∈ F.
c_ij: cost of assigning customer j to facility i, i ∈ F, j ∈ C.
Decision variables:
y_i = 1 if facility i is opened, 0 otherwise, i ∈ F.
x_ij = 1 if customer j is assigned to facility i, 0 otherwise, i ∈ F, j ∈ C.
UFLP can be formulated using MIP as follows:

  min  Σ_{i∈F, j∈C} c_ij x_ij + Σ_{i∈F} f_i y_i      (1.1.a)
  subject to
    Σ_{i∈F} x_ij = 1,      j ∈ C,                    (1.1.b)
    x_ij ≤ y_i,            i ∈ F, j ∈ C,             (1.1.c)
    y_i ∈ {0,1},           i ∈ F,                    (1.1.d)
    x_ij ∈ {0,1},          i ∈ F, j ∈ C.             (1.1.e)
(1.1)
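Formulation (1.1) can be checked against a brute-force enumeration: once the set O of open facilities is fixed, constraints (1.1.b)-(1.1.c) force every customer to its cheapest open facility. The following Python sketch is ours, not part of the thesis, and uses invented instance data purely for illustration.

    from itertools import combinations

    # Hypothetical toy data (not from the thesis): 3 facilities, 4 customers.
    f = [10, 12, 9]                                # f_i: opening costs
    c = [[3, 5, 4, 6],                             # c_ij: assignment costs,
         [4, 2, 6, 3],                             #   row i = facility, column j = customer
         [6, 4, 2, 5]]
    n_F, n_C = len(f), len(c[0])

    best_cost, best_open = float("inf"), None
    # Enumerate every nonempty set O of open facilities; with O fixed, the optimal x
    # assigns each customer to its cheapest open facility (single-sourcing).
    for k in range(1, n_F + 1):
        for O in combinations(range(n_F), k):
            cost = sum(f[i] for i in O) + sum(min(c[i][j] for i in O) for j in range(n_C))
            if cost < best_cost:
                best_cost, best_open = cost, O

    print("open facilities:", best_open, "total cost:", best_cost)

For instances of realistic size this enumeration is of course impractical, which is why the LP-based and group-relaxation machinery described next is of interest.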

PAGE 18

The objective function (1.1.a) minimizes the total cost associated with opening facilities (such as construction costs) and assigning customers to facilities (such as transportation or routing costs). Constraint (1.1.b) requires that each customer j ∈ C is assigned to exactly one facility. The second constraint (1.1.c) ensures that customer j is assigned to facility i only if facility i is open (in other words, constraint (1.1.c) imposes that x_ij = 0 whenever y_i = 0). Constraints (1.1.d) and (1.1.e) enforce the binary nature of the variables x_ij and y_i. In particular, the fact that x_ij ∈ {0,1} imposes single-sourcing of customers' demand. The LPR of the previous formulation (1.1) can be obtained by relaxing the decision variables from binary to continuous, as shown in (1.2):

  min  Σ_{i∈F, j∈C} c_ij x_ij + Σ_{i∈F} f_i y_i      (1.2.a)
  subject to
    Σ_{i∈F} x_ij = 1,      j ∈ C,                    (1.2.b)
    x_ij ≤ y_i,            i ∈ F, j ∈ C,             (1.2.c)
    y_i ≤ 1,               i ∈ F,                    (1.2.d)
    x_ij, y_i ≥ 0,         i ∈ F, j ∈ C.             (1.2.e)
(1.2)

In (1.2), the relaxed constraint x_ij ≤ 1 was removed since it is implied by x_ij ≤ y_i and y_i ≤ 1. LPR (1.2) can be transformed to a standard LP

  min {cx : Ax = b, x ≥ 0}                           (1.3)

by introducing slack variables s_ij in (1.2.c) and t_i in (1.2.d), yielding

PAGE 19

  min  Σ_{i∈F, j∈C} c_ij x_ij + Σ_{i∈F} f_i y_i      (1.4.a)
  subject to
    Σ_{i∈F} x_ij = 1,            j ∈ C,              (1.4.b)
    x_ij + s_ij − y_i = 0,       i ∈ F, j ∈ C,       (1.4.c)
    y_i + t_i = 1,               i ∈ F,              (1.4.d)
    x_ij, s_ij, y_i, t_i ≥ 0,    i ∈ F, j ∈ C.       (1.4.e)
(1.4)

The LPR in standardized form, (1.4), can be used to describe matrix A and vectors b and c in (1.3). Let I_(n,n) denote the identity matrix of size (n,n) and E_(n,n) denote the matrix of all ones of size (n,n). The structure of the matrix A_(n_C + n_Cn_F + n_F, 2n_Cn_F + 2n_F) is shown in Figure 1-1, where all empty spaces are zeros.

Figure 1-1. Constraint matrix A of the LPR of the UFLP formulation shown in (1.3). [Each facility's block of x_ij columns carries I_(n_C,n_C) in the first n_C rows and I_(n_C,n_C) in its rows of (1.4.c); the s_ij columns carry I_(n_C,n_C) in the rows of (1.4.c); the y_i columns carry −E_(n_C,1) in their rows of (1.4.c) and I_(n_F,n_F) in the last n_F rows; the t_i columns carry I_(n_F,n_F) in the last n_F rows.]

The objective and the right-hand side are given as follows:

  c_(1, 2n_Cn_F + 2n_F) = ( c_ij, 0_(1, n_Cn_F), f_i, 0_(1, n_F) ),                (1.5)

and

  b_(n_C + n_Cn_F + n_F, 1) = ( E_(n_C,1), 0_(n_Cn_F,1), E_(n_F,1) )^T.            (1.6)
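The block structure of Figure 1-1 and the vectors (1.5)-(1.6) can be assembled mechanically. The NumPy sketch below is our illustration of that assembly (the function and variable names are ours); it only reproduces the structure described above.

    import numpy as np

    def uflp_lp_data(n_C, n_F, c_assign, f_open):
        """Build A, b, c of the standard-form LPR (1.4)/(1.3).

        Column order: all x_ij, then all s_ij, then y_i, then t_i,
        with x_ij sorted by facility i and then customer j.
        """
        n_rows = n_C + n_C * n_F + n_F
        A = np.zeros((n_rows, 2 * n_C * n_F + 2 * n_F))
        x0, s0, y0, t0 = 0, n_C * n_F, 2 * n_C * n_F, 2 * n_C * n_F + n_F
        for i in range(n_F):
            for j in range(n_C):
                col_x, col_s = x0 + i * n_C + j, s0 + i * n_C + j
                A[j, col_x] = 1                      # assignment row (1.4.b)
                A[n_C + i * n_C + j, col_x] = 1      # x_ij + s_ij - y_i = 0  (1.4.c)
                A[n_C + i * n_C + j, col_s] = 1
                A[n_C + i * n_C + j, y0 + i] = -1
            A[n_C + n_C * n_F + i, y0 + i] = 1       # y_i + t_i = 1  (1.4.d)
            A[n_C + n_C * n_F + i, t0 + i] = 1
        b = np.concatenate([np.ones(n_C), np.zeros(n_C * n_F), np.ones(n_F)])          # (1.6)
        c = np.concatenate([np.asarray(c_assign).reshape(-1), np.zeros(n_C * n_F),
                            np.asarray(f_open), np.zeros(n_F)])                        # (1.5)
        return A, b, c

    # Quick shape check for n_C = n_F = 4 (cf. Example 2.1: A is 24 x 40).
    A, b, c = uflp_lp_data(4, 4, np.ones((4, 4)), np.ones(4))
    print(A.shape)   # (24, 40)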

PAGE 20

1.4 Group Relaxations of Mixed Integer Programming
Many practical applications of Operations Research can be modeled using MIP. Two main families of methods are used to solve MIP problems in practice: branch-and-bound and cutting plane algorithms. The two algorithms rely on solving the LPR of a MIP problem. The group relaxation approach (also known as corner relaxation) was introduced by Gomory [29]. Group relaxations can be used to replace the LPR in the branch-and-bound algorithm because it is "simple" to optimize linear functions over its feasible region [30]. We next describe how to construct the group relaxation of a MIP problem. Consider the following MIP

  min {cx : Ax = b, x ∈ Z^n_+},  where A ∈ Z^(m,n), b ∈ Z^(m,1), c ∈ R^(1,n).      (1.7)

To obtain its corner relaxation, we first remove the integrality constraint and hence obtain its linear relaxation

  min {cx : Ax = b, x ∈ R^n_+}.                                                    (1.8)

We next solve the LPR in (1.8) using the simplex algorithm. Let x_B and x_N denote the basic variables and the non-basic variables of the optimal LP solution obtained by simplex. Also, let A_B denote the submatrix of A corresponding to the basic variables and A_N the submatrix of A corresponding to the non-basic variables. Similarly, denote by c_B and c_N the subvectors of c corresponding to the cost elements of x_B and x_N. We rewrite (1.8) as

PAGE 21

  min {c_B x_B + c_N x_N : A_B x_B + A_N x_N = b, x_B ≥ 0, x_N ≥ 0}.               (1.9)

The optimal solution x* of the LPR and the optimal value z* are given by

  x*_B = A_B^(-1) b,  x*_N = 0,                                                    (1.10)

and

  z* = c_B x*_B.                                                                   (1.11)

To obtain the corner relaxation of (1.7), we create the problem

  min {c_B x_B + c_N x_N : A_B x_B + A_N x_N = b, x_B ∈ Z^m, x_N ∈ Z^(n−m)_+},     (1.12)

where the nonnegativity of the basic variables x_B has been relaxed. It is possible to reformulate the problem (1.12) using the Smith Normal Form of A_B.
Lemma 1.1 [31]: Given a nonsingular integer matrix A of size (n,n), there exist (n,n) unimodular matrices U_1 and U_2 such that SNF(A) = U_1 A U_2 is a diagonal matrix with positive elements d_1, ..., d_n such that d_i divides d_(i+1), for i = 1, ..., n−1. SNF(A) is called the Smith Normal Form of A.
To reformulate (1.12), we compute the Smith Normal Form of the optimal LP basis A_B,

  S = SNF(A_B) = U_1 A_B U_2.                                                      (1.13)

Then, we multiply the constraints in (1.12) by U_1 on the left and write

  min {z* + c̄_N x_N : U_1 A_B U_2 U_2^(-1) x_B + U_1 A_N x_N = U_1 b, x_B ∈ Z^m, x_N ∈ Z^(n−m)_+},   (1.14)

where c̄_N is the reduced cost of variables x_N in (1.9), i.e., c̄_N = c_N − c_B A_B^(-1) A_N.

PAGE 22

After substituting U_1 A_B U_2 by S in (1.13), we obtain

  min {z* + c̄_N x_N : S U_2^(-1) x_B + U_1 A_N x_N = U_1 b, x_B ∈ Z^m, x_N ∈ Z^(n−m)_+}.   (1.15)

Since Lemma 1.1 states that U_2 is a unimodular matrix, U_2^(-1) is also a unimodular matrix. It follows that U_2^(-1) x_B ∈ Z^m for every x_B ∈ Z^m. Therefore (1.15) reduces to

  min {z* + c̄_N x_N : U_1 A_N x_N ≡ U_1 b (mod S), x_N ∈ Z^(n−m)_+}.               (1.16)

Let d_i be the diagonal elements of S for i = 1, ..., m. Then the i-th row of (1.16) is simply considered modulo d_i. For every diagonal element in the i-th row of S that is equal to one, it is clear that the corresponding i-th row of (1.16) can be removed. After removing these rows, (1.16) represents the corner relaxation problem or group minimization problem associated with the MIP in (1.7). We next present an example to illustrate the aforementioned steps.
Example 1.1: Consider the following MIP,

  min  8x_1 + 42x_2 + 32x_3 + 186x_4 + 129x_5
  subject to
    2x_1 + 2x_2 + 2x_3 + x_4 + 3x_5 = 62,
    x_1 + x_2 + 3x_3 + 4x_4 = 31,
    3x_1 + x_2 + 5x_3 − 3x_4 − 2x_5 = 62,
    x ∈ Z^5_+.
(1.17)

The constraint matrix A, the objective c, and the right-hand-side b are

  A = [ 2  2  2  1  3 ]
      [ 1  1  3  4  0 ],   c = ( 8  42  32  186  129 ),   b = ( 62, 31, 62 )^T.     (1.18)
      [ 3  1  5 -3 -2 ]

Relaxing the integrality constraint, we obtain the following LP

PAGE 23

  min  8x_1 + 42x_2 + 32x_3 + 186x_4 + 129x_5
  subject to
    2x_1 + 2x_2 + 2x_3 + x_4 + 3x_5 = 62,
    x_1 + x_2 + 3x_3 + 4x_4 = 31,
    3x_1 + x_2 + 5x_3 − 3x_4 − 2x_5 = 62,
    x ∈ R^5_+.
(1.19)

For this linear program, the basic and non-basic variables in the optimal solution are given by

  x_B = (x_1, x_2, x_3)  and  x_N = (x_4, x_5).                                     (1.20)

Therefore,

  A_B = [ 2 2 2 ]        A_N = [ 1  3 ]
        [ 1 1 3 ],             [ 4  0 ],   c_B = ( 8  42  32 ),  c_N = ( 186  129 ).   (1.21)
        [ 3 1 5 ]              [-3 -2 ]

The optimal solution and the optimal value of the LP (1.19) are

  x*_B = A_B^(-1) b = ( 15.5, 15.5, 0 ),   x*_N = 0,   and   z* = c_B x*_B = 775.   (1.22)

The simplex tableau corresponding to this solution is

  min  775 + 4x_4 + 50x_5
  subject to
    x_1 − 5.25x_4 − 0.25x_5 = 15.5,
    x_2 + 4x_4 + 2.5x_5 = 15.5,
    x_3 + 1.75x_4 − 0.75x_5 = 0.
(1.23)

The Smith Normal Form for the optimal basis A_B can be verified to be

  U_1 = [ 2  0 -1 ]        S = [ 1 0 0 ]        U_2 = [ 1 -2 -1 ]
        [ 1  1 -1 ],           [ 0 2 0 ],             [ 0  1  0 ].                  (1.24)
        [ 3  0 -2 ]            [ 0 0 4 ]              [ 0  1 -1 ]
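The factorization (1.24) is easy to verify numerically. The following NumPy check is ours; it restates the matrices as reconstructed in (1.18) and (1.24), confirms that U_1 and U_2 are unimodular and that U_1 A_B U_2 = diag(1,2,4), and reduces U_1 A_N and U_1 b modulo the diagonal, which reproduces the group constraints derived below in (1.25)-(1.26).

    import numpy as np

    A_B = np.array([[2, 2, 2], [1, 1, 3], [3, 1, 5]])
    A_N = np.array([[1, 3], [4, 0], [-3, -2]])
    b   = np.array([62, 31, 62])
    U1  = np.array([[2, 0, -1], [1, 1, -1], [3, 0, -2]])
    U2  = np.array([[1, -2, -1], [0, 1, 0], [0, 1, -1]])

    S = U1 @ A_B @ U2
    print(S)                                                   # diag(1, 2, 4), so det(A_B) = 8
    print(round(np.linalg.det(U1)), round(np.linalg.det(U2)))  # both +/-1: unimodular

    d = np.diag(S)                                             # moduli d_i
    print(np.mod(U1 @ A_N, d[:, None]))                        # rows 2,3 give [[0,1],[1,1]] of (1.26)
    print(np.mod(U1 @ b, d))                                   # rows 2,3 give the right-hand side (1,2)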

PAGE 24

We then compute U_1 A_N x_N ≡ U_1 b (mod S) as in (1.16), to obtain

  min  775 + 4x_4 + 50x_5
  subject to
    5x_4 + 8x_5 ≡ 62 (mod 1),
    8x_4 + 5x_5 ≡ 31 (mod 2),
    9x_4 + 13x_5 ≡ 62 (mod 4),
    x_4, x_5 ∈ Z_+.
(1.25)

The first row of (1.25) is removed since S(1,1) = 1, and (1.25) reduces to

  min  775 + 4x_4 + 50x_5
  subject to
    x_5 ≡ 1 (mod 2),
    x_4 + x_5 ≡ 2 (mod 4),
    x_4, x_5 ∈ Z_+.
(1.26)

We refer to (1.26) as the corner relaxation problem or group minimization problem associated with (1.17). To solve the corner relaxation problem associated with a MIP problem, a directed graph G = (V, E), called the group network, is created. This network representation was first introduced by Shapiro [32]. Each vertex of the network corresponds to a vector α = (α_1, ..., α_m) where α_i ∈ Z and 0 ≤ α_i < d_i for all i = 1, ..., m. The number of vertices in the group network is therefore equal to the absolute value of the determinant of the optimal basis, |det(A_B)| = Π_{i=1}^m d_i. For each column j corresponding to a non-basic variable, and for every vertex g_k ∈ V, where k ∈ {0, ..., |det(A_B)| − 1}, we create a directed arc from the vertex g_k to the vertex (g_k + U_1 A(.,j)) mod S. The cost associated with this arc is equal to c̄_j. Problem (1.16) then reduces to finding the shortest path

PAGE 25

from the vertex (0, 0, ..., 0) to the vertex corresponding to U_1 b mod S. Any appropriate shortest path algorithm can be used to solve this problem. The solution of the shortest path algorithm yields optimal values for the non-basic variables in an optimal solution of (1.16). We then substitute these values into (1.15) to obtain an integer solution of (1.7). Although this solution is integer, it is not necessarily nonnegative. However, if the obtained integer solution is indeed feasible (i.e., nonnegative), then it is an optimal solution for (1.7). We refer to [30] for more information and show next an example that illustrates the use of a shortest path algorithm to solve the group minimization problem.
Example 1.1-continued: Since |det(A_B)| = 8, the group network has 8 vertices that are arranged along 2 dimensions (because we have only two elements different from one on the diagonal of matrix S). For each of the eight vertices, we draw two arcs corresponding to non-basic variables x_4 and x_5. We represent the arcs corresponding to x_4 by continuous lines and the arcs corresponding to x_5 by dotted lines. The cost of all the arcs drawn with solid lines is c̄_4 = 4, while the cost of the arcs drawn with dashed lines is c̄_5 = 50. In this network, we are looking for the shortest path from node (0,0) to node (1,2). Figure 1-2 illustrates the group network, where the source and destination nodes are highlighted in black. Solving the shortest path problem, we obtain a shortest path from (0,0) to (1,2) that visits vertex (0,1). This solution is represented by heavy lines in Figure 1-2. The first arc corresponds to non-basic variable x_4 while the second arc corresponds to non-basic variable x_5. As a result, an optimal solution to the group minimization problem is

PAGE 26

Figure 1-2. The group network associated with the MIP problem in Example 1.1.
x_4 = 1 and x_5 = 1. Substituting into (1.23) we obtain

  x_1 = 21,  x_2 = 9,  x_3 = −1,  x_4 = 1,  and  x_5 = 1.                           (1.27)

We have an integer solution to the MIP in (1.17) whose objective value is equal to 829. Although this solution is integer, it is not optimal for (1.17) since it is infeasible, as x_3 < 0. Further, since the group minimization problem is a relaxation of (1.17), we have that the optimal value of (1.17) is at least 829. Given a MIP problem, the number of arcs that originate from each vertex in the group network of its corner relaxation is equal to the number of non-basic variables of its LPR. Further, the number of vertices in the group network is equal to the absolute value of the determinant of the optimal basis of its LPR. Therefore, the size of the group network is a direct function of the determinant of the optimal basis of the LPR of this MIP problem. Since the running time of shortest path algorithms is a function of the number of vertices and arcs of the network, it follows that the difficulty of solving

PAGE 27

the group relaxations of a MIP problem is intimately related to the maximum possible determinant (MPD) of the bases of its LPR. Therefore, to determine the difficulty of solving group relaxations of UFLP, it is important to determine the MPD of the bases of its LPR.
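To make the shortest-path computation concrete, the sketch below is our illustration (not taken from the thesis); it hard-codes the group data of Example 1.1 as stated in (1.26) together with the reduced costs 4 and 50, builds the eight-vertex group network, and runs Dijkstra's algorithm from (0,0) to (1,2), recovering x_4 = x_5 = 1 and the lower bound 829.

    import heapq
    from itertools import product

    d = (2, 4)                                   # diagonal entries of S that exceed 1
    columns = {"x4": ((0, 1), 4),                # (U_1 A_N column mod S, reduced cost)
               "x5": ((1, 1), 50)}
    source, target = (0, 0), (1, 2)              # target = U_1 b mod S on the same rows

    # Dijkstra over the |det(A_B)| = 8 group elements.
    dist = {v: float("inf") for v in product(range(d[0]), range(d[1]))}
    dist[source], pred = 0, {}
    heap = [(0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue
        for name, (step, cost) in columns.items():
            v = tuple((u[k] + step[k]) % d[k] for k in range(2))
            if du + cost < dist[v]:
                dist[v], pred[v] = du + cost, (u, name)
                heapq.heappush(heap, (dist[v], v))

    # Count how many times each non-basic variable is used on the shortest path.
    counts, v = {"x4": 0, "x5": 0}, target
    while v != source:
        v, name = pred[v]
        counts[name] += 1
    print(counts, "bound:", 775 + dist[target])   # {'x4': 1, 'x5': 1} bound: 829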

PAGE 28

CHAPTER 2
ON THE POLYHEDRAL STRUCTURE OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM
2.1 Introduction
As mentioned in Section 1.4, the difficulty of solving the group relaxation of a MIP problem is directly related to the determinant of the optimal basis of its LPR. Therefore, it is important to determine the bases with MPD. It is also important to obtain information about its unimodular bases since they yield integer solutions. This motivates the investigation of the polyhedral structure of the LPR of UFLP.
Assumptions: The formulation shown in (1.4) is the one for which we study bases. Unless otherwise mentioned, the variables in the different bases we discuss are arranged from left to right in the following order: x_ij, s_ij, y_i, then t_i, i ∈ F, j ∈ C. More precisely, x_ij comes before x_i*j* if i < i*, or if i = i* and j < j*. The same ordering is applied to s_ij. Similarly, y_i comes before y_i* and t_i comes before t_i* if i < i*.
First we introduce two lemmas that will be used in the remainder of this chapter to compute determinants and to obtain the inverse of block matrices.
Lemma 2.1 [33]: Let P ∈ R^(q,q), Q ∈ R^(q,p), R ∈ R^(p,q), and V ∈ R^(p,p). If V is an invertible matrix, then

  det [ P  Q ] = det(V) det(P − Q V^(-1) R).                                        (2.1)
      [ R  V ]

Lemma 2.2 [34]: Let P ∈ R^(q,q), Q ∈ R^(q,p), R ∈ R^(p,q), and V ∈ R^(p,p). If P and V are invertible matrices, then

PAGE 29

  [ P  Q ]^(-1)   [ (P − Q V^(-1) R)^(-1)              −(P − Q V^(-1) R)^(-1) Q V^(-1) ]
  [ R  V ]      = [ −(V − R P^(-1) Q)^(-1) R P^(-1)     (V − R P^(-1) Q)^(-1)          ].   (2.2)

A basis of A_(n_C + n_Cn_F + n_F, 2n_Cn_F + 2n_F) is a square submatrix of A_(n_C + n_Cn_F + n_F, 2n_Cn_F + 2n_F) that is invertible and that has n_C + n_Cn_F + n_F rows and columns. The total number T of bases that may be obtained from matrix A is bounded above by

  T ≤ C_(n_C + n_Cn_F + n_F)^(2n_Cn_F + 2n_F) = (2n_Cn_F + 2n_F)! / [ (n_C + n_Cn_F + n_F)! (n_Cn_F + n_F − n_C)! ].   (2.3)

Example 2.1: For UFLP with n_C = 4 and n_F = 4, the total number of different bases of the LPR is less than or equal to C_24^40 ≈ 6.2852 × 10^10.
Let B denote an arbitrary square submatrix of A of size n_C + n_Cn_F + n_F. For all i ∈ F we define M^i_x and M^i_s to be the sets of indices j of variables x_ij and s_ij whose associated columns are included in B. Similarly, we denote by M_y and M_t the sets of indices of variables y_i and t_i whose associated columns are included in B. Finally, let m^i_x, m^i_s, m_y, and m_t denote the cardinality of the sets M^i_x, M^i_s, M_y, and M_t, and let m_x and m_s be the sums of m^i_x and m^i_s respectively, for i ∈ F, i.e.,

  m_x = Σ_{i∈F} m^i_x = Σ_{i∈F} |M^i_x|,   m_s = Σ_{i∈F} m^i_s = Σ_{i∈F} |M^i_s|,   m_y = |M_y|,   m_t = |M_t|.   (2.4)

Example 2.2: For UFLP with n_C = 4 and n_F = 4, the constraint matrix A is given in Figure 2-1. The basis B presented in Figure 2-2 is obtained by selecting the marked columns in Figure 2-1. Using our notation, we write
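The bound of Example 2.1 is a single binomial coefficient; the following two-line check (ours, for convenience) uses Python's math.comb:

    from math import comb

    n_C, n_F = 4, 4
    rows = n_C + n_C * n_F + n_F            # 24 rows = size of a basis
    cols = 2 * n_C * n_F + 2 * n_F          # 40 columns of A
    print(comb(cols, rows))                 # 62852101650, i.e. about 6.2852e10 (Example 2.1)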

PAGE 30

Figure 2-1. Matrix A in Example 2.2.
Figure 2-2. Basis B that was obtained from matrix A in Example 2.2.

PAGE 31

  M^1_x = {1,2,3},  M^2_x = {1,2,4},  M^3_x = {1,3,4},  M^4_x = {2,3,4},
  M^1_s = {4},  M^2_s = {3},  M^3_s = {2},  M^4_s = {1},  and  M_y = M_t = {1,2,3,4}.

Further, m^i_x = 3 and m^i_s = 1 for i = 1, ..., 4, m_x = 12, and m_s = m_y = m_t = 4.
To simplify the study of the different bases of the LPR of UFLP, we divide the discussion into three main sections: m_y + m_t < n_F, m_y + m_t = n_F, and m_y + m_t > n_F.
2.2 Case 1: m_y + m_t < n_F
Lemma 2.3: Let B be an arbitrary square submatrix of A of size n_C + n_Cn_F + n_F that is such that m_y + m_t < n_F. Then det(B) = 0.
Proof: Let B be any submatrix of A of size n_C + n_Cn_F + n_F. We consider now the last n_F rows of B. Denote by r_i the number of elements that are equal to one in the i-th of these rows. Clearly r_i ∈ {0,1,2}, i = 1, ..., n_F; see Figure 1-1. Further, Σ_{i=1}^{n_F} r_i = m_y + m_t < n_F. Therefore, there exists i with r_i = 0, i.e., the i-th of the last n_F rows of B is identically zero. It follows that det(B) = 0 since B contains a zero row.
Example 2.3: When n_C = 4 and n_F = 4, any submatrix B of A of size (24,24) that has M_y = {1,2} and M_t = {4} is singular because the 23rd row is identically zero, i.e.,

PAGE 32

  B(n_C + n_Cn_F + 3, .) = B(23, .) = 0_(1,24).

Similarly, we obtain the following result.
Lemma 2.4: Let B be an arbitrary square submatrix of A of size n_C + n_Cn_F + n_F such that M_y ∪ M_t ≠ F. Then det(B) = 0.
Proof: We consider the last n_F rows of B. If i ∉ M_y ∪ M_t, then B(n_C + n_Cn_F + i, .) is identically zero, showing that det(B) = 0.
If the condition of Lemma 2.3, m_y + m_t < n_F, holds, then the condition of Lemma 2.4, M_y ∪ M_t ≠ F, also holds. Therefore, Lemma 2.4 is a strict generalization of Lemma 2.3 because it can also be applied when m_y + m_t ≥ n_F.
Example 2.3-continued: Any submatrix B of A of size (24,24) that has M_y = {1,2} and M_t = {1,4} is singular because B(23, .) = 0_(1,24).
2.3 Case 2: m_y + m_t = n_F
Before proceeding with the discussion of the case where m_y + m_t = n_F, we introduce a new notation. In particular, we introduce an elementary row operation (ERO) that will modify the structure of B by eliminating some of its elements (converting nonzero elements to zero).

PAGE 33

In the matrix A, every column corresponding to variable x_ij has exactly two components equal to one: one in the j-th row and one in the (i·n_C + j)-th row; see Figure 1-1. Columns corresponding to variables s_ij, however, have exactly one component equal to one, which is located in the same row as the second component equal to one in the x_ij column; i.e., it is located in the (i·n_C + j)-th row. In summary, if A(., x_ij) and A(., s_ij) denote the columns in matrix A corresponding to variables x_ij and s_ij, then

  A(h, x_ij) = 1 if h ∈ {j, i·n_C + j},   (2.5.a)
             = 0 otherwise,               (2.5.b)
  for i ∈ F, j ∈ C,                                                                 (2.5)

and

  A(h, s_ij) = 1 if h = i·n_C + j,        (2.6.a)
             = 0 otherwise,               (2.6.b)
  for i ∈ F, j ∈ C.                                                                 (2.6)

Example 2.2-continued: Column x_24 has exactly two components equal to one, in the 4th and the 12th rows of A. Also, column s_24 has a single one in the 12th row.
For every x_ij column that has been selected in B, i ∈ F, j ∈ C, we may subtract from the row that has the first one component the row that has the second one component. The corresponding ERO is then described as

  B(h, .) ← B(h, .) − B(i·n_C + h, .),   i ∈ F, h ∈ M^i_x.                          (2.7)

ERO (2.7) can also be obtained by multiplying matrix B on the left with a simple matrix. More precisely, we multiply B with matrix ER^i_1 to perform elimination in the first upper n_C components of column x_ij, where

PAGE 34

  ER^i_1(h, h) = 1,                1 ≤ h ≤ n_C + n_Cn_F + n_F,   (2.8.a)
  ER^i_1(h, i·n_C + h) = −1,       h ∈ M^i_x,                    (2.8.b)
  ER^i_1(h, k) = 0 otherwise,                                    (2.8.c)
  for i ∈ F.                                                                         (2.8)

Note that det(ER^i_1) = 1. Further, applying this transformation for all i ∈ F, we obtain

  B' = ( Π_{i∈F} ER^i_1 ) B.                                                         (2.9)

Clearly,

  det(B') = det(B).                                                                  (2.10)

Because ERO (2.7) is made only to rows B(h, .), where h ∈ M^i_x, it only affects the first upper n_C rows while the remaining n_Cn_F + n_F rows remain unchanged. As a result, every one component in the first n_C rows and in the columns associated with the x_ij will be eliminated and a block of zeros will be obtained, 0_(n_C, m_x). For the columns associated with the variables s_ij, if B has an s_ij column that has the same indices i and j as an x_ij column that also belongs to B, then (2.7) will cause the entry in row j of that s_ij column to take on a coefficient −1 instead of zero. Otherwise this entry remains unchanged and so equal to zero. In summary,

  B'(1:n_C, 1:m_x) = 0_(n_C, m_x),                                                   (2.11)

and

  B'(j, s_ij) = −1 if j ∈ M^i_x,   (2.12.a)
              = 0 if j ∉ M^i_x,    (2.12.b)
  for i ∈ F, j ∈ M^i_s.                                                              (2.12)
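As a sanity check on (2.8)-(2.10), the following sketch (ours; it rebuilds the relevant columns of A rather than reusing Figure 2-1) forms the matrices ER^i_1 for a small instance, and confirms that their product has determinant 1 and zeroes out the first n_C rows of the selected x_ij columns, as stated in (2.11).

    import numpy as np

    n_C, n_F = 3, 2
    n_rows = n_C + n_C * n_F + n_F

    def x_col(i, j):                       # x_ij column of A, i = 1..n_F, j = 1..n_C, cf. (2.5)
        col = np.zeros(n_rows)
        col[j - 1] = 1
        col[i * n_C + j - 1] = 1
        return col

    # Suppose the basis contains x_ij for all (i, j): M_x^i = C for every i.
    M_x = {i: list(range(1, n_C + 1)) for i in range(1, n_F + 1)}

    ER = np.eye(n_rows)
    for i, rows in M_x.items():
        ER_i = np.eye(n_rows)
        for h in rows:
            ER_i[h - 1, i * n_C + h - 1] = -1     # (2.8.b): subtract the row of the second 1
        ER = ER_i @ ER

    print(round(np.linalg.det(ER)))               # 1, so determinants are preserved (2.10)
    X = np.column_stack([x_col(i, j) for i in M_x for j in M_x[i]])
    print(np.allclose((ER @ X)[:n_C, :], 0))      # True: zero block of (2.11)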

PAGE 35

Lemma 2.5: Let B be an arbitrary square submatrix of A of size n_C + n_Cn_F + n_F such that m_y + m_t = n_F and M_y ∪ M_t = F. Then |det(B)| ∈ {0, 1}.
Proof: If we consider the last n_F rows of B, since m_y + m_t = n_F and M_y ∪ M_t = F, then we know that every row B(n_C + n_Cn_F + h, .), h ∈ F, has exactly one component that is equal to one. With column permutations we can obtain an identity matrix of size (n_F, n_F) in the lower right corner of B (note that the column permutations only affect the sign of the determinant while our discussion is concerned with the absolute value of the determinant), i.e.,

  B(n_C + n_Cn_F + 1 : n_C + n_Cn_F + n_F, n_C + n_Cn_F + 1 : n_C + n_Cn_F + n_F) = I_(n_F, n_F).   (2.13)

Moreover, the first n_C + n_Cn_F columns of B are a combination of x_ij and s_ij columns (since the y_i and t_i columns form the last n_F columns of B, as m_y + m_t = n_F). It follows that the last n_F rows of the first n_C + n_Cn_F columns are all zeros, in other words,

  B(n_C + n_Cn_F + 1 : n_C + n_Cn_F + n_F, 1 : n_C + n_Cn_F) = 0_(n_F, n_C + n_Cn_F).               (2.14)

We now use these observations to decompose B into blocks of matrices to ease the calculation of its determinant,

  det(B) = det [ B_1  B_2 ],                                                                          (2.15)
               [ B_3  B_4 ]

where B_1 is of size (n_C + n_Cn_F, n_C + n_Cn_F), B_2 of size (n_C + n_Cn_F, n_F), B_3 of size (n_F, n_C + n_Cn_F), and B_4 of size (n_F, n_F).

PAGE 36

36 Vertically, 13 TBB corresponds to ij x and ijs columns while 24 TBB is associated with iy and it columns. The rows are divided into two blocks, one that has the upper CCFnnn rows and the other contains the lower Fn rows. Using (2.13) and (2.14) we can substitute 3B and 4B as follows 12 ,, ,,detdet.CCFCCFCCFF FCCFFFnnnnnnnnnn nnnnnnBB B 0I (2.16) Using Lemma 2.1, we obtain that 121 ,,,,,detdetdet.FFCCFCCFCCFFFFFCCFnnnnnnnnnnnnnnnnnnBIBBI0 (2.17) This expression reduces to 1 ,detdet.CCFCCFnnnnnnBB (2.18) In (2.18), 1B is composed of the first CCFnnn rows of a set of ij x and ijs columns only, ,.iFjC Because the total number of ij x columns in A is CFnn and the total number of ijs columns in A is also equal to CFnn (which is less than CCFnnn), then the columns of 1B cannot be composed solely of ij x or of ijs columns. Next we study different cases based on the choice of ij x and ijs columns in 1B : Case 1: All ij x columns are included in 1, B i.e., .xCFsCmnnmn We decompose 1B into blocks of matrices to ease the calculation of its determinant as follows: 11 ,, 1 11 ,,detdet.CCFCC CFCFCFCxs nnnnn xs nnnnnnnBB B BB (2.19)

PAGE 37

37 In (2.19), the matrices 11T xxBB contain all ij x columns while matrices 11T ssBB contain all ijs columns. The upper block 11xsBB contains the upper Cn rows of 1. B Since all ij x columns are included in 1, B then 1 ,,,CFCFCFCFx nnnnnnnnBI see (2.5). We apply ERO (2.7) on (2.19). As a result, we obtain 1B instead of 1B with 11detdet BB. Since ERO (2.7) transform 1 ,CCFx nnnB into ,CCFnnn0, we obtain 111 ,,,, 1 111 ,,,,detdetdet.CCFCCCCFCC CFCFCFCCFCFCFCxss nnnnnnnnnn xss nnnnnnnnnnnnnnBB0B B BBIB (2.20) We then permute the positions of the blocks of 1B so that the invertible square block ,CFCFnnnnI is moved to the lower right corner of 1B i.e., 1 ,, 1 1 ,,detdet.CCCCF CFCCFCFs nnnnn s nnnnnnnB0 B BI (2.21) Using Lemma 2.1 we obtain that 11111 ,,,,,,detdetdetdet.CFCFCCCCFCFCFCFCCCsss nnnnnnnnnnnnnnnnnnBIB0IBB (2.22) Since all the ij x columns are included in 1B then all the ijs columns have the same indices as some of the ij x columns. It follows that from (2.12.a), 1 ,CCs nnB is a matrix where all elements are zeros except for exactly Cn components that are equal to -1, one in each column. It is easily verified that, depending on the arrangements of the -1 components in 1 ,,CCs nnB 1 ,detCCs nnB will be either 0 or 1. In particular, 1 ,detCCs nnB will be equal to 1 only if there is a -1 component in every row and in every column of 1 ,.CCs nnB

PAGE 38

38 This will happen when the indices j of the ijs columns covers the range 1,…,.Cn Using (2.10), (2.18) and (2.22) we therefore conclude that 11 ,1if ,2.23.a detdetdet 0otherwise.2.23.bCCi s s iF nnC-M BBB (2.23) Case 2: All ijs columns are included in 1, B i.e., ,sCFmnn .xCmn Decomposing 1B as described in Case 1, we obtain 11 ,, 1 11 ,,detdet,CCCCF CFCCFCFxs nnnnn xs nnnnnnnBB B BB (2.24) where the blocks have sizes different from those in (2.19). Since every ijs column has a single component equal to one and all ijs columns are included in 1, B then from (2.6) we know that 1 ,,CCFCCFs nnnnnnB0 and 1 ,,.CFCFCFCFs nnnnnnnnBI Using these observations in (2.24) we obtain 111 ,,,, 1 111 ,,,,detdetdet.CCCCFCCCCF CFCCFCFCFCCFCFxsx nnnnnnnnnn xsx nnnnnnnnnnnnnnBBB0 B BBBI (2.25) We then apply Lemma 2.1 to obtain 11111 ,,,,,,detdetdetdet.CFCFCCCCFCFCFCFCCCxxx nnnnnnnnnnnnnnnnnnBIB0IBB (2.26) Now observe that 1 ,CCx nnB is a square matrix that has exactly Cn components that are equal to one, one in each column. Similar to (2.23) in Case 1, we write that 11 ,1if ,2.27.a detdetdet 0otherwise.2.27.bCCi x x iF nnC-M BBB (2.27)

PAGE 39

39 Case 3: Not all the ij x nor all the ijs columns are included in 1, B i.e., ,sXCCFmmnnn ,CsCFnmnn ,CxCFnmnn and there is in 1B a subset of the ijs columns that can be combined with the ij x columns to obtain an identity matrix of size ,CFCFnnnn in the lower left square corner of 1, B i.e., we can obtain an identity matrix in 11:,1:CCCFCCCFBnnnnnnnn that is solely composed of ij x and ijs columns. As mentioned before, every ij x column has two components that are equal to one. Denote by xV the set of indices of rows that have the second one component among the ij x columns in 1. B Also, let sV denotes the set of indices of rows that have the one element among the ijs columns in 1.B Since, for any given ,i i xM and i sM have all the indices j of ij x and ijs that belong to 1, B then using (2.5) and (2.6) we obtain i xxC iF jMVinj (2.28) and .i ssC iF jMVinj (2.29) An identity matrix of size ,CFCFnnnn in the lower left corner of 1B can be obtained only if we have exactly one component in each row k, for 1,…,CCCFknnnn (regardless of whether it is associated with ij x or ijs). In other words, we will see such an identity matrix if 1,…,.CCCFxsnnnnVV (2.30)

PAGE 40

40 When creating the above-mentioned identity matrix, we first select all ij x columns present in 1,B and then add ijs columns when needed. Define xsV to be the set of indices of the rows that have no one component associated with any of the ij x columns. We need a ijs column for each of these rows, where 1,…,.xsCCCFxVnnnnV (2.31) For all ,iF we define ii xssMM to be the set of indices j of the ijs columns that will be combined with the ij x columns to obtain the aforementioned identity matrix and denote by i sM the set of indices j of the remaining ijs columns. Note that .iii ssxsMMM Further, let xsm denote the sum of the cardinality of ,i xsM i.e., .i xsxs iFmM We have that .xxsCFmmnn Let 1, ,,xsmxsvvV for 1,,,xskm kv denote an index of a row that has no one component associated with any of the ij x columns. For every ,kv we can use (2.6) to determine the indices i and j of a ijs column that gives one component in the row with index .kv Those ijs columns will be selected to complete the desired identity matrix and hence, they are associated with .i xsM Let where, .i xsCxs Cv j M ijvinvV n (2.32) We perform column permutations and decompose 1B in such a way that the left blocks contain the ijs columns of i xsM and all the ij x columns while the right block is composed of the remaining ijs columns.

PAGE 41

41 .11 ,, 1 11 ,,detdetCCFCC CFCFCFCxss nnnnn xss nnnnnnnBB B BB (2.33) We then apply ERO (2.7) on (2.33), so as to obtain 1B instead of 1.B Given the condition that was set in the definition of Case 3 and our above selection of the columns, we have that 1 ,,CFCFCFCFxs nnnnnnnnBI and 1 ,,,CCFCCFxs nnnnnnB0 and so 1 ,, 1 1 ,,detdet.CCFCC CFCFCFCs nnnnn s nnnnnnn0B B IB (2.34) We then follow the steps that we used in Case 1. The result is similar except that because not all of the ij x columns are included in 1B we cannot claim that (2.12.a) hold. It may be that some of the ijs columns share indices with the ij x columns (2.12.a), or the reverse case (2.12.b). In 1 ,,CCs nnB the number of components that are equal to -1 can only be shown to be less than or equal to Cn and no more than one nonzero element is present in a single column. It follows that the value 1 ,detCCs nnB cannot be greater than one. If all the ijs columns included in 1 ,CCs nnB (the columns associated with i sM) share the indices j and i with ij x columns in 1 ,,CCFxs nnnB then we have exactly Cn components that are equal to -1. Then whether 1 ,det1CCFxs nnnB depends on the arrangements of these components, i.e., 1if &, &,2.35.a det 0otherwise.2.35.biiii xsxs iFiFMCiFjMjMMC B (2.35)

PAGE 42

42 Case 4: Not all the ij x nor all the ijs columns are included in 1,B i.e., ,sXCCFmmnnn ,CsCFnmnn ,CxCFnmnn and there is no way to find in 1B a subset of the ijs columns that can be combined with the ij x columns to obtain an identity matrix of size ,CFCFnnnn in the lower left square corner of 1,B i.e., we cannot obtain an identity matrix in 11:,1:CCCFCCCFBnnnnnnnn that is solely composed of ij x and ijs columns: Compared to Case 3, we have at least one row that has no one components corresponding to any of the ij x nor the ijs columns, 1,…,.CCCFxsnnnnVV (2.36) In this case, because there is no way to make 1 ,,,CFCFCFCFxs nnnnnnnnBI there is at least one row in 1 ,CFCFxs nnnnB that has all of its elements equal to zero, and hence 1B is singular, i.e., det0.B Example 2.4: For 4,Cn and 4,Fn we illustrate in Figure 2-3 the different cases considered in the proof of Lemma 2.5. For each of these cases, we give an example of a submatrix B of A satisfying the corresponding conditions. Submatrix B is obtained by selecting the columns marked with (). Proposition 2.1: In the first three cases of the proof of Lemma 2.5, we obtained conditions that make det1;B (2.23.a), (2.27.a), and (2.35.a). We can use these conditions to construct unimodular bases of the LPR of UFLP. Algorithms UFLP-UNI-1,

PAGE 43

Figure 2-3. Illustration of the different cases encountered in the proof of Lemma 2.5: a) Case 1, det(B) = 1; b) Case 1, det(B) = 0; c) Case 2, det(B) = 1; d) Case 2, det(B) = 0; e) Case 3, det(B) = 1; f) Case 3, det(B) = 0; and g) Case 4, det(B) = 0.

PAGE 44

UFLP-UNI-2, and UFLP-UNI-3 (corresponding to Cases 1 to 3) describe how to select variables x_ij, s_ij, y_i, and t_i with indices i and j that correspond to M^i_x, M^i_s, M_y, and M_t to obtain unimodular bases.

Algorithm UFLP-UNI-1.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_ij, s_ij, y_i, and t_i (M^i_x, M^i_s, M_y, and M_t) that yield a basis B with det(B) = 1.
1: M^i_x = C for all i ∈ F.
2: For all i ∈ F, let M^i_s ⊆ C be such that Σ_{i∈F} |M^i_s| = n_C and ∪_{i∈F} M^i_s = C.
3: Choose M_y ⊆ F.
4: M_t = F \ M_y.

Algorithm UFLP-UNI-2.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_ij, s_ij, y_i, and t_i (M^i_x, M^i_s, M_y, and M_t) that yield a basis B with det(B) = 1.
1: For all i ∈ F, let M^i_x ⊆ C be such that Σ_{i∈F} |M^i_x| = n_C and ∪_{i∈F} M^i_x = C.
2: M^i_s = C for all i ∈ F.
3: Choose M_y ⊆ F.
4: M_t = F \ M_y.

Algorithm UFLP-UNI-3.
Input: F, n_F, C, and n_C.
Output: Sets of indices of variables x_ij, s_ij, y_i, and t_i (M^i_x, M^i_s, M_y, and M_t) that yield a basis B with det(B) = 1.
1: For all i ∈ F, let M^i_x ⊆ C be such that ∪_{i∈F} M^i_x = C.
2: For all i ∈ F, set M^i_xs = C \ M^i_x.
3: For all i ∈ F, let M'^i_s ⊆ M^i_x be such that ∪_{i∈F} M'^i_s = C and Σ_{i∈F} |M'^i_s| = n_C.
4: For all i ∈ F, M^i_s = M^i_xs ∪ M'^i_s.
5: Choose M_y ⊆ F.
6: M_t = F \ M_y.
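A numerical spot check of Algorithm UFLP-UNI-1 (ours, under the reading of the index sets given above): build the constraint matrix of (1.4) for a small instance, select the columns prescribed by the algorithm, and verify that the resulting square submatrix has determinant ±1.

    import numpy as np

    n_C, n_F = 3, 2

    # Rebuild the standard-form constraint matrix A of (1.4) (columns: x, s, y, t).
    n_rows = n_C + n_C * n_F + n_F
    A = np.zeros((n_rows, 2 * n_C * n_F + 2 * n_F))
    x0, s0, y0, t0 = 0, n_C * n_F, 2 * n_C * n_F, 2 * n_C * n_F + n_F
    for i in range(n_F):
        for j in range(n_C):
            A[j, x0 + i * n_C + j] = 1
            A[n_C + i * n_C + j, x0 + i * n_C + j] = 1
            A[n_C + i * n_C + j, s0 + i * n_C + j] = 1
            A[n_C + i * n_C + j, y0 + i] = -1
        A[n_C + n_C * n_F + i, y0 + i] = 1
        A[n_C + n_C * n_F + i, t0 + i] = 1

    # UFLP-UNI-1: take every x_ij; split C among the M_s^i; pick any M_y (0-based indices).
    M_x = {i: set(range(n_C)) for i in range(n_F)}
    M_s = {0: {0, 1}, 1: {2}}        # disjoint, union = C, total size n_C
    M_y, M_t = {0}, {1}              # M_t = F \ M_y

    cols = ([x0 + i * n_C + j for i in M_x for j in sorted(M_x[i])] +
            [s0 + i * n_C + j for i in M_s for j in sorted(M_s[i])] +
            [y0 + i for i in sorted(M_y)] + [t0 + i for i in sorted(M_t)])
    B = A[:, cols]
    print(B.shape, round(abs(np.linalg.det(B))))   # (11, 11) 1 -> unimodular basis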

2.4 Case 3: m_y + m_t > n_F

Lemmas 2.3, 2.4 and 2.5 discuss situations where m_y + m_t <= n_F. We now investigate submatrices for which m_y + m_t > n_F. When performing this study, it suffices to consider situations where M_y \cup M_t = F, otherwise B is singular; see Lemma 2.4. In the following discussion, we will have to consider extra columns corresponding to the y_i and t_i variables as compared to the cases we considered above. We first consider the effect of ERO (2.7) on such columns y_i and t_i. When m_y + m_t > n_F, we will first permute the columns to obtain an identity matrix of size (n_F, n_F) in the lower right square corner of B. We choose this identity matrix to be composed of all the t_i columns, supplemented by y_i columns when necessary. Let M_{ty} denote the set of indices i of the y_i columns that will supplement the t_i columns, and denote by \bar{M} the set of indices i of the remaining y_i columns. Finally, let m_{ty} and \bar{m} denote the cardinality of M_{ty} and \bar{M}, i.e.,

M_{ty} = F \ M_t,   (2.37)
\bar{M} = M_y \ M_{ty},   (2.38)
and m_{ty} = |M_{ty}|, \bar{m} = |\bar{M}|.   (2.39)

Next, we select all the x_{ij} columns and add, if needed, some s_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F), as we did in Case 3 of the proof of Lemma 2.5 (we use the same notation here). Then we permute the columns again so that the final arrangement of the columns from left to right is as follows: the y_i columns that are associated with \bar{M}, the s_{ij} columns that are associated with \bigcup_i M_s^i, the x_{ij} and the s_{ij} columns corresponding to \bigcup_i M_{xs}^i, then the t_i columns and the y_i columns corresponding to M_{ty}. Figure 2-4 describes the final arrangement of columns; the sets of columns and the number of columns in each set are given in the first and second rows, respectively.

Figure 2-4. Final arrangement of the columns included in B when m_y + m_t > n_F: the y_i columns associated with \bar{M} and the s_{ij} columns corresponding to \bigcup_i M_s^i (\bar{m} + m_s - m_{xs} = n_C columns), then all x_{ij} columns and the s_{ij} columns corresponding to \bigcup_i M_{xs}^i (m_x + m_{xs} = n_C n_F columns), and finally all t_i columns and the y_i columns associated with M_{ty} (m_t + m_{ty} = n_F columns).

The first n_C components in every y_i column are zeros. There are exactly n_C components equal to -1 and exactly one component equal to one in the rest of the column, i.e.,

y_i(h) = -1  if h = i n_C + k, k in C,   (2.40.a)
       =  1  if h = n_C + n_C n_F + i,   (2.40.b)
       =  0  otherwise.   (2.40.c)   (2.40)

For every y_i column in B, ERO (2.7) subtracts a row that has a -1 component (2.40.a) from a row that has a zero component (2.40.c). ERO (2.7) is applied only to the rows with indices j that correspond to x_{ij} columns in B, i.e., j in M_x^i; see (2.8). This leads to a specific structure of the first n_C rows of the y_i columns after ERO (2.7). Let the submatrix G_{n_C,m_y} represent the upper n_C rows of the y_i columns in \bar{B}. For all i in M_y, the structure of the submatrix G_{n_C,m_y} reflects the x_{ij} columns that have been included in B.

Specifically, for every x_{ij} column that is in B, i.e., j in M_x^i, we have G(j,i) = 1, and for every x_{ij} that is not in B, i.e., j not in M_x^i, we have G(j,i) = 0 (and in this case s_{ij} is in B), i.e., for all i in M_y,

G(h,i) = 1  if h in M_x^i,   (2.41.a)
       = 0  otherwise.   (2.41.b)   (2.41)

It should be noted that the column arrangement given in Figure 2-4 locates the y_i variables in two different parts of the matrix B. Therefore a vertical decomposition is also applied to G_{n_C,m_y}. We denote by G^1_{n_C,\bar{m}} and G^2_{n_C,m_{ty}} the submatrices made of the first n_C rows of the y_i columns corresponding to \bar{M} and to M_{ty}, respectively. Further, the first n_C columns of \bar{B} (see Figure 2-4) are composed of y_i and s_{ij} columns that are associated with \bigcup_i M_s^i. We define the submatrix L_{n_C,m_s-m_{xs}} to denote the first n_C rows of the s_{ij} columns corresponding to \bigcup_i M_s^i (the s_{ij} columns that are part of the first n_C columns of \bar{B}). Clearly, the upper left square corner of \bar{B} of size (n_C,n_C) can be written as

\bar{B}(1:n_C, 1:n_C) = [ G^1_{n_C,\bar{m}}  L_{n_C,m_s-m_{xs}} ].   (2.42)

From (2.12), we know that after applying ERO (2.7) the s_{ij} columns may have a -1 or a zero component in the first n_C rows. It follows that the number of -1 components in L_{n_C,m_s-m_{xs}}, say r, is less than or equal to m_s - m_{xs} (the number of its columns), since no more than one -1 component can be present per column. If r < m_s - m_{xs}, then at least one column in L_{n_C,m_s-m_{xs}} is identically zero and therefore \bar{B}(1:n_C,1:n_C) is singular. Similarly, if r = m_s - m_{xs} but we find more than one -1 component in the same row, then \bar{B}(1:n_C,1:n_C) is singular. Therefore, \bar{B}(1:n_C,1:n_C) can only be nonsingular if r = m_s - m_{xs} and every -1 component has a unique row index. Observe that \bigcup_{i \in F} M_s^i gives the indices j that correspond to the s_{ij} columns in L_{n_C,m_s-m_{xs}}, and hence the indices of the rows of L_{n_C,m_s-m_{xs}} that have -1 components (depending on whether 2.12.a or 2.12.b holds). Therefore, H = C \ \bigcup_{i \in F} M_s^i represents the indices of the rows of L_{n_C,m_s-m_{xs}} that do not have -1 components. Since we are only interested in the case where \bar{B}(1:n_C,1:n_C) is nonsingular, we know that r = m_s - m_{xs}, and it follows that |\bigcup_{i \in F} M_s^i| = m_s - m_{xs} and |H| = |C \ \bigcup_{i \in F} M_s^i| = n_C - m_s + m_{xs} = \bar{m}.

Next, we decompose the submatrix \bar{B}(1:n_C,1:n_C) in (2.42) horizontally, such that its upper rows are associated with \bigcup_{i \in F} M_s^i and its lower rows are the rows corresponding to H, i.e.,

\bar{B}(1:n_C,1:n_C) = [ G^1_{m_s-m_{xs},\bar{m}}  L_{m_s-m_{xs},m_s-m_{xs}} ; D_{\bar{m},\bar{m}}  L_{\bar{m},m_s-m_{xs}} ],   (2.43)

where we denote by D_{\bar{m},\bar{m}} the submatrix composed of the rows of G^1_{n_C,\bar{m}} that have indices in H. We next illustrate our notation on an example.

Example 2.2-continued: Applying ERO (2.7) on the basis B given in Figure 2-2 produces the following sequence of matrix operations.

i = 1: B(1,.) = B(1,.) - B(5,.),  B(2,.) = B(2,.) - B(6,.),  B(3,.) = B(3,.) - B(7,.)
i = 2: B(1,.) = B(1,.) - B(9,.),  B(2,.) = B(2,.) - B(10,.), B(4,.) = B(4,.) - B(12,.)
i = 3: B(1,.) = B(1,.) - B(13,.), B(3,.) = B(3,.) - B(15,.), B(4,.) = B(4,.) - B(16,.)
i = 4: B(2,.) = B(2,.) - B(18,.), B(3,.) = B(3,.) - B(19,.), B(4,.) = B(4,.) - B(20,.)

The result can also be obtained by applying (2.9). Matrices ER_1^1 to ER_1^4 are shown in Figure 2-5 (a) to (d). After the column permutation is done according to the order of Figure 2-4, \bar{B} is given in Figure 2-6, where the shaded area represents the submatrix G_{n_C,m_y}. Figure 2-7 then illustrates how the structure of the submatrix G_{n_C,m_y} reflects the selection of the x_{ij} columns in B. Using the symbols introduced above, we write that M_t = {1,2,3,4}, M_{ty} = F \ M_t = {}, \bar{M} = M_y \ M_{ty} = {1,2,3,4}, \bigcup_{i \in F} M_{xs}^i = {1,2,3,4}, \bigcup_{i \in F} M_s^i = {}, and H = C \ \bigcup_{i \in F} M_s^i = {1,2,3,4}.

Because M_{ty} = {}, we have G = G^1. Further, since \bigcup_{i \in F} M_s^i = {}, \bar{B}(1:n_C,1:n_C) = G = G^1, and

D_{\bar{m},\bar{m}} = G^1 = [ 1 1 1 0 ; 1 1 0 1 ; 1 0 1 1 ; 0 1 1 1 ].

Figure 2-5. Illustration of how to apply ERO (2.5) using (2.7) in Example 2.2: a) ER_1^1, b) ER_1^2, c) ER_1^3, and d) ER_1^4. Each panel shows the corresponding (24, 24) elementary row operation matrix, which is an identity matrix with additional -1 entries that subtract the linking rows indexed by M_x^i from the assignment rows.

Figure 2-6. \bar{B} in Example 2.2 after column permutations in accordance with Figure 2-4; the shaded area represents the submatrix G_{n_C,m_y}.

Figure 2-7. The submatrix G reflects the selected x_{ij} columns in Example 2.2:

G = [ 1 1 1 0 ; 1 1 0 1 ; 1 0 1 1 ; 0 1 1 1 ].

We illustrate the same result on another example where the number of facilities and the number of customers are different.

Example 2.5: For n_C = 5 and n_F = 3, we show in Figure 2-8 (b) the matrix \bar{B} corresponding to the submatrix B obtained by selecting the columns marked with a bullet in Figure 2-8 (a). Using the symbols introduced above, we have M_t = {1,3}, M_{ty} = {2}, \bar{M} = M_y \ M_{ty} = {1,3}, \bigcup_{i \in F} M_{xs}^i = {3,4,5}, \bigcup_{i \in F} M_s^i = {1,2,3}, and H = C \ \bigcup_{i \in F} M_s^i = {4,5}.

The upper n_C rows of the first n_C columns of \bar{B} can be computed to be

\bar{B}(1:n_C, 1:n_C) = [ 1 1 -1 0 0 ; 1 1 0 -1 0 ; 1 0 0 0 -1 ; 1 1 0 0 0 ; 0 1 0 0 0 ].

\bar{B}(1:n_C,1:n_C) decomposes into the submatrices G^1 and L corresponding to the y_i and the s_{ij} columns. In Figure 2-8 (b), the submatrix G_{n_C,m_y} is shaded inside of \bar{B}. We know that G = [ G^1  G^2 ], where G^1 contains the columns associated with the y_i variables with indices i in \bar{M} and G^2 contains the columns associated with the remaining y_i variables, i.e.,

G^1 = [ 1 1 ; 1 1 ; 1 0 ; 1 1 ; 0 1 ],   G^2 = [ 1 ; 1 ; 1 ; 0 ; 1 ],   and   L = [ -1 0 0 ; 0 -1 0 ; 0 0 -1 ; 0 0 0 ; 0 0 0 ].

Finally, the submatrix D_{\bar{m},\bar{m}} is composed of the rows of G^1 that have indices i in H = {4,5},

D_{\bar{m},\bar{m}} = G^1(H, .) = [ 1 1 ; 0 1 ].

We now use the above derivation to compute the determinant of bases of the LPR of UFLP.

Lemma 2.6: Let B be an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F, n_C + n_C n_F + n_F) such that m_y + m_t > n_F and M_y \cup M_t = F. If B is nonsingular, then |det B| = |det D_{\bar{m},\bar{m}}|.

Proof: The condition M_y \cup M_t = F ensures that we can obtain an identity matrix of size (n_F, n_F) in the lower right corner of B, possibly after permuting columns. Next, we permute the columns in accordance with Figure 2-4 and apply ERO (2.7). There are two cases, which are similar to Cases 3 and 4 in the proof of Lemma 2.5:

Case 1: There is in B a subset of the s_{ij} columns that can be combined with the x_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F) in \bar{B}(n_C+1 : n_C+n_C n_F, n_C+1 : n_C+n_C n_F), i.e., we can obtain an identity matrix in that block that is solely composed of x_{ij} and s_{ij} columns.

Figure 2-8. Illustration of Example 2.5: a) matrix B, shown as the columns of A marked with a bullet, and b) matrix \bar{B} after ERO (2.7) and column permutation, with the submatrix G shaded.

We decompose the columns of \bar{B} into the three blocks described in Figure 2-4. We also decompose the rows of \bar{B} into three blocks: the first n_C rows, the middle n_C n_F rows, and the last n_F rows. We obtain that

det B = det [ \bar{B}^{y,s}_{n_C,n_C}  \bar{B}^{x,s}_{n_C,n_C n_F}  \bar{B}^{t,y}_{n_C,n_F} ; \bar{B}^{y,s}_{n_C n_F,n_C}  \bar{B}^{x,s}_{n_C n_F,n_C n_F}  \bar{B}^{t,y}_{n_C n_F,n_F} ; \bar{B}^{y,s}_{n_F,n_C}  \bar{B}^{x,s}_{n_F,n_C n_F}  \bar{B}^{t,y}_{n_F,n_F} ].   (2.44)

Because our first step was to permute the columns to obtain an identity matrix of size (n_F,n_F) in the lower right square corner of \bar{B}, we know that \bar{B}^{t,y}_{n_F,n_F} = I_{n_F,n_F}. Also, since the block \bar{B}^{x,s}_{n_F,n_C n_F} is composed of x_{ij} and s_{ij} columns only, \bar{B}^{x,s}_{n_F,n_C n_F} = 0_{n_F,n_C n_F}; see (2.5) and (2.6). Further, given the condition that was set by the definition of Case 1 (refer also to (2.30) to verify when this condition holds), \bar{B}^{x,s}_{n_C n_F,n_C n_F} = I_{n_C n_F,n_C n_F}. The s_{ij} columns in the middle vertical block of \bar{B} that correspond to \bigcup_i M_{xs}^i share no indices with any of the x_{ij} columns (otherwise we would not be able to obtain the identity matrix \bar{B}^{x,s}_{n_C n_F,n_C n_F} = I), and therefore we know from (2.12.b) that \bar{B}^{x,s}_{n_C,n_C n_F} = 0_{n_C,n_C n_F}. The expression (2.44) simplifies to

det B = det [ \bar{B}^{y,s}_{n_C,n_C}  0  \bar{B}^{t,y}_{n_C,n_F} ; \bar{B}^{y,s}_{n_C n_F,n_C}  I_{n_C n_F}  \bar{B}^{t,y}_{n_C n_F,n_F} ; \bar{B}^{y,s}_{n_F,n_C}  0  I_{n_F} ].   (2.45)

Now, we group some of the blocks and introduce the following notation to ease the computation of the determinant of B. We write

det B = det [ B^1  B^2 ; B^3  B^4 ],   (2.46)

where

B^1 = \bar{B}^{y,s}_{n_C,n_C},   (2.47)
B^2 = [ 0_{n_C,n_C n_F}  \bar{B}^{t,y}_{n_C,n_F} ],   (2.48)
B^3 = [ \bar{B}^{y,s}_{n_C n_F,n_C} ; \bar{B}^{y,s}_{n_F,n_C} ],   (2.49)

and

B^4 = [ I_{n_C n_F}  \bar{B}^{t,y}_{n_C n_F,n_F} ; 0  I_{n_F} ].   (2.50)

Applying Lemma 2.1 on (2.50), we obtain

det B^4 = det I_{n_C n_F} det I_{n_F} = 1.   (2.51)

Next, we apply Lemma 2.2 on (2.50) to obtain

(B^4)^{-1} = [ I_{n_C n_F}  -\bar{B}^{t,y}_{n_C n_F,n_F} ; 0  I_{n_F} ].   (2.52)

We next apply Lemma 2.1 on (2.46) to write

det B = det B^4 det( B^1 - B^2 (B^4)^{-1} B^3 ).   (2.53)

From (2.51) we know that det B^4 = 1. Hence, (2.53) reduces to

det B = det( B^1 - B^2 (B^4)^{-1} B^3 ).   (2.54)

We now use (2.48) and (2.52) to calculate the part of the expression enclosed in parentheses in (2.54). Specifically,

B^2 (B^4)^{-1} = [ 0_{n_C,n_C n_F}  \bar{B}^{t,y}_{n_C,n_F} ] [ I_{n_C n_F}  -\bar{B}^{t,y}_{n_C n_F,n_F} ; 0  I_{n_F} ] = [ 0_{n_C,n_C n_F}  \bar{B}^{t,y}_{n_C,n_F} ].   (2.55)

Substituting (2.55) into (2.54), we obtain

det B = det( B^1 - \bar{B}^{t,y}_{n_C,n_F} \bar{B}^{y,s}_{n_F,n_C} ).   (2.56)

From (2.48) and (2.49), we can rewrite (2.56) as

det B = det( \bar{B}^{y,s}_{n_C,n_C} - \bar{B}^{t,y}_{n_C,n_F} \bar{B}^{y,s}_{n_F,n_C} ).   (2.57)

To compute det B, we next investigate the structure of \bar{B}^{t,y}_{n_C,n_F} and \bar{B}^{y,s}_{n_F,n_C}. Submatrix \bar{B}^{t,y}_{n_C,n_F} represents the upper n_C rows of the right vertical block in Figure 2-4, which is a combination of the t_i columns and the y_i columns associated with M_{ty}. We know that the first n_C components of the t_i columns are all zeros, and they are unaffected by ERO (2.7); see Figure 1-1. Although the first n_C components of the y_i columns are all zeros, some of these components may be changed to ones after applying ERO (2.7); see (2.41). In summary, \bar{B}^{t,y}_{n_C,n_F} is composed of m_t columns that are identically zero, with indices i in M_t, and of m_{ty} columns that may have nonzero components, with indices i in M_{ty}. Let u_h be a (0,1) column of \bar{B}^{t,y}_{n_C,n_F}, h <= n_F; then \bar{B}^{t,y}_{n_C,n_F} can be presented as

\bar{B}^{t,y}_{n_C,n_F}(.,h) = 0_{n_C,1}  if h in M_t,   (2.58.a)
                            = u_h       if h in M_{ty}.   (2.58.b)   (2.58)

Submatrix \bar{B}^{y,s}_{n_F,n_C} is formed from the lower n_F rows of the left vertical block in Figure 2-4. It is composed of the y_i columns associated with \bar{M} and the s_{ij} columns corresponding to \bigcup_i M_s^i. From (2.6) we know that the last n_F components of the s_{ij} columns are zero and remain unchanged after applying ERO (2.7). Also, referring to (2.40.b), we observe that the last n_F components of the y_i columns have only one nonzero component, which is equal to one and is not affected by ERO (2.7). In conclusion, \bar{B}^{y,s}_{n_F,n_C} has m_s - m_{xs} columns that are identically zero, with indices j in M_s^i, i in F, and m̄ columns that have exactly one component equal to one, with indices i in \bar{M}. If e_h denotes a column of \bar{B}^{y,s}_{n_F,n_C}, h <= n_C, whose components are all zero except for one component that is equal to one, then we write

\bar{B}^{y,s}_{n_F,n_C}(.,h) = e_h       if h in \bar{M},   (2.59.a)
                            = 0_{n_F,1}  otherwise.   (2.59.b)   (2.59)

Using the information in (2.58) and (2.59), we now compute the product \bar{B}^{t,y}_{n_C,n_F} \bar{B}^{y,s}_{n_F,n_C}. From the structure shown in (2.59), if \bar{M} = {h_1, h_2, ..., h_{\bar{m}}}, we conclude that

\bar{B}^{t,y}_{n_C,n_F} \bar{B}^{y,s}_{n_F,n_C} = [ 0_{n_C,1}, ..., \bar{B}^{t,y}(.,h_1), ..., \bar{B}^{t,y}(.,h_{\bar{m}}), ..., 0_{n_C,1} ],   (2.60)

i.e., the product matrix is obtained by selecting the appropriate columns of \bar{B}^{t,y}_{n_C,n_F}. Since h_1, ..., h_{\bar{m}} belong to \bar{M} and we know from (2.37) that \bar{M} \cap M_{ty} = {}, it is clear that h_1, ..., h_{\bar{m}} do not belong to M_{ty}. Further, because h_1, ..., h_{\bar{m}} are not in M_{ty}, using (2.58) it is simple to verify that \bar{B}^{t,y}_{n_C,n_F}(.,k) = 0_{n_C,1} for k in {h_1, ..., h_{\bar{m}}}. We conclude that

\bar{B}^{t,y}_{n_C,n_F} \bar{B}^{y,s}_{n_F,n_C} = 0_{n_C,n_C}.   (2.61)

Substituting (2.61) into (2.57), we obtain

det B = det( B^1 - 0_{n_C,n_C} ) = det B^1_{n_C,n_C}.   (2.62)

Matrix B^1_{n_C,n_C} is formed from the first n_C rows of the left vertical block of Figure 2-4. The columns of B^1_{n_C,n_C} are a combination of the y_i columns associated with \bar{M} and the s_{ij} columns corresponding to \bigcup_i M_s^i. We discussed the structure of B^1_{n_C,n_C} in the section preceding this proof, and we apply the decomposition in (2.43) using the same notation here. As mentioned previously, after applying ERO (2.7) every s_{ij} column may have a -1 or a zero component in the first n_C rows; see (2.12). We define r to be the number of -1 components in L_{n_C,m_s-m_{xs}}. If r < m_s - m_{xs}, then at least one column in L_{n_C,m_s-m_{xs}} is identically zero and therefore \bar{B}(1:n_C,1:n_C) is singular. Similarly, if r = m_s - m_{xs} but we find more than one -1 component in the same row, \bar{B}(1:n_C,1:n_C) is singular. Therefore, \bar{B}(1:n_C,1:n_C) can only be nonsingular if r = m_s - m_{xs} and every -1 component has a unique row index. This case holds only if all the s_{ij} columns corresponding to \bigcup_i M_s^i share indices with the x_{ij} columns. The s_{ij} columns then form an identity matrix of size (m_s-m_{xs}, m_s-m_{xs}), with -1 coefficients, in the rows with indices j in M_s^i, i in F; therefore L_{m_s-m_{xs},m_s-m_{xs}} = -I_{m_s-m_{xs},m_s-m_{xs}}. The remaining rows (with indices j in H = C \ \bigcup_i M_s^i) are identically zero, L_{\bar{m},m_s-m_{xs}} = 0_{\bar{m},m_s-m_{xs}}. Now (2.43) can be rewritten as

B^1_{n_C,n_C} = [ G^1_{m_s-m_{xs},\bar{m}}  -I_{m_s-m_{xs}} ; D_{\bar{m},\bar{m}}  0_{\bar{m},m_s-m_{xs}} ].   (2.63)

We permute the blocks of B^1_{n_C,n_C} in (2.63) so that the invertible square matrix -I_{m_s-m_{xs}} is located in the lower right corner,

B^1_{n_C,n_C} = [ D_{\bar{m},\bar{m}}  0_{\bar{m},m_s-m_{xs}} ; G^1_{m_s-m_{xs},\bar{m}}  -I_{m_s-m_{xs}} ].   (2.64)

We next apply Lemma 2.1 on (2.64) to obtain

|det B^1_{n_C,n_C}| = |det(-I_{m_s-m_{xs}})| |det( D_{\bar{m},\bar{m}} - 0 (-I)^{-1} G^1 )| = |det D_{\bar{m},\bar{m}}|.   (2.65)

Then, using (2.62) and (2.65), we write

|det B| = |det B^1_{n_C,n_C}| = |det D_{\bar{m},\bar{m}}|.   (2.66)

Case 2: There is no way to find in B a subset of the s_{ij} columns that can be combined with the x_{ij} columns to obtain an identity matrix of size (n_C n_F, n_C n_F) in B(n_C+1 : n_C+n_C n_F, n_C+1 : n_C+n_C n_F), i.e., we cannot obtain an identity matrix in that block that is solely composed of x_{ij} and s_{ij} columns. In this case, we apply the same re-ordering of the columns as in Case 1 but we do not perform ERO (2.7). Considering B instead of \bar{B}, we obtain

det B = det [ B^{y,s}_{n_C,n_C}  B^{x,s}_{n_C,n_C n_F}  B^{t,y}_{n_C,n_F} ; B^{y,s}_{n_C n_F,n_C}  B^{x,s}_{n_C n_F,n_C n_F}  B^{t,y}_{n_C n_F,n_F} ; B^{y,s}_{n_F,n_C}  B^{x,s}_{n_F,n_C n_F}  B^{t,y}_{n_F,n_F} ].   (2.67)

Using the same notation as in (2.46), we obtain

det B = det [ B^1  B^2 ; B^3  B^4 ],   (2.68)

with the blocks defined as in (2.47)-(2.50) but without applying ERO (2.7). Since we are not able to create B^{x,s}_{n_C n_F,n_C n_F} = I (refer to (2.36) in Case 4 of the proof of Lemma 2.5), clearly det B^4 = 0. Further, because we did not proceed with ERO (2.7), we know that B^{y,s}_{n_C,n_C}, which is composed of the first n_C components of the s_{ij} and y_i columns, is a matrix of zeros, i.e., B^1 = 0_{n_C,n_C}; see (2.6.b) and (2.40.c). It follows that B in (2.68) is composed of blocks where the blocks on the diagonal, B^1 and B^4, have determinant zero. Therefore, B is singular. This concludes the proof.

2.5 Constructing UFLP Instances of Desired Determinant

Theorem 2.1: Let B be an arbitrary square submatrix of A of size (n_C + n_C n_F + n_F, n_C + n_C n_F + n_F) such that: m_y + m_t > n_F; M_y \cup M_t = F, i.e., it is possible to obtain an identity matrix of size (n_F,n_F) in the lower right corner of B; and V_x \cup V_s = {n_C+1, ..., n_C+n_C n_F}, i.e., we can obtain an identity matrix of size (n_C n_F, n_C n_F) in B(n_C+1 : n_C+n_C n_F, n_C+1 : n_C+n_C n_F). Then |det B| = |det D|. Further, Algorithm UFLP-DET describes how to compute the matrix D from the submatrix B.

Algorithm UFLP-DET.
Input: A square submatrix B of A of size (n_C + n_C n_F + n_F, n_C + n_C n_F + n_F) such that m_y + m_t > n_F, M_y \cup M_t = F, and V_x \cup V_s = {n_C+1, ..., n_C+n_C n_F}; the sets of indices of the variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that compose B.
Output: Matrix D such that |det B| = |det D|.
1: M_{ty} = F \ M_t
2: \bar{M} = M_y \ M_{ty}
3: For all i in F, M_{xs}^i = C \ M_x^i
4: For all i in F, M_s^i = M_s^i \ M_{xs}^i
5: For all i in F and h in M_x^i, set B(h,.) = B(h,.) - B(i n_C + h, .) (i.e., apply ERO (2.7))
6: H = C \ \bigcup_{i \in F} M_s^i
7: Consider the y_i columns with indices i in \bar{M}; D is obtained by selecting the rows with indices j in H from these columns.

Algorithm UFLP-DET mimics the steps of the proof of Lemma 2.6 and of the section that precedes it; see also Example 2.5. It should be noted that if the first condition of Theorem 2.1 does not hold, then B is singular or unimodular; see Lemma 2.3, (2.23), (2.27), and (2.35). Further, if either of the last two conditions of the same theorem does not hold, then B is singular. Next, we determine the maximum size of matrix D. Then we give an algorithm to produce bases of UFLP of desired determinant.

Corollary 2.1: For given n_C and n_F, the maximum size of D is (n,n), where n = min(n_C, n_F).

Proof: From (2.43), D_{\bar{m},\bar{m}} is a submatrix of \bar{B}(1:n_C,1:n_C) that is formed by the lower block of its y_i columns. Hence, the maximum number of columns that D_{\bar{m},\bar{m}} may have is n_F (the maximum number of y_i columns). Further, D_{\bar{m},\bar{m}} is a submatrix of \bar{B}(1:n_C,1:n_C); therefore the maximum number of rows that D_{\bar{m},\bar{m}} may have is n_C (the size of \bar{B}(1:n_C,1:n_C)). Also, D_{\bar{m},\bar{m}} is a square matrix of size (\bar{m},\bar{m}). It follows that the maximum size of D is (n,n), where n = min(n_C,n_F).

Algorithm UFLP-BASIS.
Input: A nonsingular (0,1) matrix D_{m,m} such that |det D_{m,m}| = d; parameters n_C and n_F such that n_C >= m and n_F >= m.
Output: Sets of indices of variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that yield a basis B with |det B| = d.
1: Let \bar{M} \subseteq F be such that |\bar{M}| = m
2: Let H \subseteq C be such that |H| = m
3: M = F \ \bar{M}
4: M_1 \subseteq M (M_1 is a subset of M that can be chosen of any size)
5: M_t = \bar{M} \cup M_1
6: M_{ty} = M \ M_1
7: M_y = \bar{M} \cup M_{ty}
8: Let H = {h_1, h_2, ..., h_m} and \bar{M} = {k_1, k_2, ..., k_m}
9: For i = 1 to m
10:   For j = 1 to m
11:     If D(j,i) = 1, then let M_x^{k_i} = M_x^{k_i} \cup {h_j}; else let M_s^{k_i} = M_s^{k_i} \cup {h_j}
12:   End For
13: End For
14: \bar{H} = C \ H
15: For all j in \bar{H}, let M_x^r = M_x^r \cup {j} and M_s^r = M_s^r \cup {j}, where r in F (r can be any element in F)
16: V_x = \bigcup_{i \in F} { i n_C + j : j in M_x^i }
17: V_s = \bigcup_{i \in F} { i n_C + j : j in M_s^i }
18: V = {n_C+1, ..., n_C+n_C n_F} \ (V_x \cup V_s)
19: For all v in V, let M_x^i = M_x^i \cup {j} or M_s^i = M_s^i \cup {j}, where i and j satisfy v = i n_C + j with j in C (for every such i and j we can choose the x_{ij} variable or the s_{ij} variable, but we cannot choose both).

Theorem 2.2: Let D_{m,m} be an arbitrary (0,1) nonsingular matrix such that |det D_{m,m}| = d. Then for any n_C >= m and n_F >= m, the basis B constructed by Algorithm UFLP-BASIS is such that |det B| = d.

We next explain the steps of Algorithm UFLP-BASIS. Firstly, we emphasize that we will use D as a submatrix of the upper left corner of \bar{B} of size (n_C,n_C); see (2.43). In Step 1, we specify a set \bar{M} of indices i of y_i variables, i in F, such that |\bar{M}| = m. The basis B obtained as output of Algorithm UFLP-BASIS will have y_i columns with indices i in \bar{M}, and matrix D will be a submatrix of these columns. In Step 2, we perform a similar operation in terms of rows instead of columns, i.e., we determine a set H of row indices such that D will be a submatrix of these rows. Steps 1 and 2 are concerned with finding the left vertical block of B in Figure 2-4. Steps 3 to 7 focus on constructing the right vertical block of B in the same figure. We select columns corresponding to t_i variables that have the same indices as the y_i variables that were selected. To obtain an identity matrix of size (n_F,n_F) in the lower right corner of B, we supplement (if necessary) the selected t_i variables by extra columns that may be either t_i or y_i. Clearly, the indices i of the additional columns should be chosen from F \ \bar{M}.

Steps 8 to 13 use the elements of D to select the variables x_{ij} and s_{ij}: if D(j,i) = 1, then we select the variable x_{ij} to be included in the basis, and if D(j,i) = 0, we select the variable s_{ij} to be included in B. The indices i and j of the x_{ij} and s_{ij} variables depend on the elements of \bar{M} and H. In Step 14, we determine the set \bar{H} of indices of the rows of \bar{B}(1:n_C,1:n_C) that are not associated with D (where we should have an identity matrix on the same diagonal of \bar{B}(1:n_C,1:n_C) as D). Step 15 obtains such an identity matrix by selecting two variables x_{ij} and s_{ij} with the same indices i and j for each row corresponding to \bar{H}, so that (2.12.a) holds. For the middle vertical block of B in Figure 2-4, Steps 16 to 18 determine the subset of row indices, in the range {n_C+1, ..., n_C+n_C n_F}, that have no component equal to one associated with any of the selected x_{ij} or s_{ij} variables. Finally, Step 19 selects x_{ij} or s_{ij} variables to supplement that range of rows, so that there is one component equal to one in each row.

When n_C = n_F = m, the above algorithm simplifies tremendously. In this case, the algorithm simply selects all the y_i and t_i variables to be included in B. Further, for each D(j,i) = 1 we select the column corresponding to variable x_{ij}, and for each D(j,i) = 0 the column corresponding to variable s_{ij}, i in F, j in C. We demonstrate the use of Algorithm UFLP-BASIS in the following example.

Example 2.6: Let

D = [ 1 1 0 ; 1 0 1 ; 0 1 1 ].

Clearly, |det D| = 2. We next show how to construct a basis B of UFLP with n_C = 5 and n_F = 4 such that |det B| = 2. We first choose \bar{M} = {1,2,3}, H = {3,4,5}, and M_1 = {4}. Since M_t = {1,2,3,4}, we select t_1, t_2, t_3, t_4 for B. Because M_{ty} = {} and M_y = {1,2,3}, we choose y_1, y_2, y_3 for B. We now consider matrix D. As D(.,1) = (1,1,0)^T, we set M_x^1 = {3,4} and M_s^1 = {5}, i.e., x_{13}, x_{14}, s_{15} are in B. Similarly, for D(.,2) = (1,0,1)^T we select M_x^2 = {3,5}, M_s^2 = {4}, which implies that x_{23}, x_{25}, s_{24} are in B. Also, for D(.,3) = (0,1,1)^T we select M_x^3 = {4,5}, M_s^3 = {3}, which implies that x_{34}, x_{35}, s_{33} are in B. Finally, we write \bar{H} = {1,2}, M_x^4 = {1,2}, M_s^4 = {1,2}, so x_{41}, x_{42}, s_{41}, s_{42} are in B, and V_x = {8,9,13,15,19,20,21,22}, V_s = {10,14,18,21,22} with V = {6,7,11,12,16,17,23,24,25}; we then choose x_{11}, s_{12}, x_{21}, s_{22}, x_{31}, s_{32}, x_{43}, x_{44}, x_{45} for B.
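For the square case n_C = n_F = m described before Example 2.6, the construction reduces to a direct translation of D, which makes Theorem 2.2 easy to check numerically. The sketch below is an illustration only: it reuses the assumed LPR formulation from the earlier sketch (assignment rows, linking rows x_{ij} + s_{ij} - y_i = 0, and rows y_i + t_i = 1), and all names are hypothetical.

import numpy as np

def uflp_lpr_matrix(nC, nF):
    # Assumed LPR constraint matrix; see the sketch after Algorithm UFLP-UNI-3.
    A = np.zeros((nC + nC * nF + nF, 2 * nC * nF + 2 * nF))
    x = lambda i, j: i * nC + j
    s = lambda i, j: nC * nF + i * nC + j
    y = lambda i: 2 * nC * nF + i
    t = lambda i: 2 * nC * nF + nF + i
    for i in range(nF):
        for j in range(nC):
            A[j, x(i, j)] = 1
            r = nC + i * nC + j
            A[r, x(i, j)] = 1; A[r, s(i, j)] = 1; A[r, y(i)] = -1
        A[nC + nC * nF + i, y(i)] = 1
        A[nC + nC * nF + i, t(i)] = 1
    return A, x, s, y, t

def square_basis_from_D(D):
    # Square case of Algorithm UFLP-BASIS (n_C = n_F = m): take every y_i and t_i,
    # and for each pair (i, j) take x_ij if D(j, i) = 1 and s_ij otherwise.
    m = D.shape[0]
    A, x, s, y, t = uflp_lpr_matrix(m, m)
    cols = [x(i, j) if D[j, i] == 1 else s(i, j) for i in range(m) for j in range(m)]
    cols += [y(i) for i in range(m)] + [t(i) for i in range(m)]
    return A[:, cols]

D = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])   # matrix of Example 2.6, |det D| = 2
B = square_basis_from_D(D)
print(abs(round(np.linalg.det(D))), abs(round(np.linalg.det(B))))   # expected: 2 2

Under the stated formulation assumption, the printed determinants coincide, as Theorem 2.2 predicts.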

Now we are able to use Algorithm UFLP-BASIS to construct UFLP bases from any given nonsingular (0,1) matrix D_{m,m} for instances of UFLP with parameters n_C >= m and n_F >= m. Given D, n_C, and n_F, we next determine the number of different bases of UFLP (with given n_C and n_F) that we can construct from the same matrix D using Algorithm UFLP-BASIS. In other words, we want to determine the number of different bases of UFLP (with given n_C and n_F) that are such that Algorithm UFLP-DET produces matrix D when applied to them.

Proposition 2.2: Let D_{m,m} be an arbitrary (0,1) nonsingular matrix. Then for given n_C and n_F such that n_C >= m and n_F >= m, the number of different UFLP bases that we can construct that produce matrix D_{m,m} when input to Algorithm UFLP-DET is equal to

R = C^{n_F}_m C^{n_C}_m 2^{n_F - m} n_F^{n_C - m} 2^{n_C n_F - m^2 - n_C + m}.

Proof: Denote by R the number of different UFLP bases that we can construct that have matrix D_{m,m}; these bases can be computed from Algorithm UFLP-BASIS. We next consider the steps of this algorithm. For these steps we compute the numbers of choices R_1, R_2, ..., R_5; we then compute R as the product of the R_h, h = 1, ..., 5.

In Step 1, we select a subset \bar{M} of F. The number of ways to select \bar{M} equals C^{n_F}_{|\bar{M}|}, and therefore R_1 = C^{n_F}_m. In Step 2, we perform the same operation for H and C, and so R_2 = C^{n_C}_m.

From Steps 3 to 7, we decide whether the elements of M = F \ \bar{M} will correspond to t_i or to y_i variables. Since we have two choices for each element that belongs to M, we write R_3 = 2^{|M|} = 2^{n_F - m}.

To obtain an identity matrix of size (n_C n_F, n_C n_F) in \bar{B}(n_C+1 : n_C+n_C n_F, n_C+1 : n_C+n_C n_F), at least one component must equal one in each of the rows with indices {n_C+1, ..., n_C+n_C n_F}. Every column corresponding to either an x_{ij} or an s_{ij} variable gives one component in one of those rows. Steps 8 to 13 select m^2 columns corresponding to either x_{ij} or s_{ij} variables, depending on the values of the components of D. At this point, the number of rows that have at least one component equal to one corresponding to either x_{ij} or s_{ij} variables is m^2.

In Steps 14 and 15 we select columns corresponding to x_{ij} and s_{ij} variables with the same indices. The indices j are determined to be the elements of \bar{H}, while the indices i are chosen from F. It follows that we have n_F^{|\bar{H}|} = n_F^{n_C - m} ways to choose the indices i and j, and so R_4 = n_F^{n_C - m}. The number of columns corresponding to x_{ij} and s_{ij} that are selected in Steps 14 and 15 is equal to 2|\bar{H}| = 2(n_C - m). The x_{ij} and s_{ij} variables with the same indices give two components equal to one in the same row, and we are counting the rows that have at least one component equal to one; hence only half of those columns are counted. The number of rows that now have at least one component equal to one is m^2 + n_C - m.

For the remaining n_C n_F - m^2 - n_C + m rows, Steps 16 to 19 may select either x_{ij} or s_{ij} columns to supplement these rows. It follows that we have two choices for each of these rows, and so R_5 = 2^{n_C n_F - m^2 - n_C + m}. Finally, we write

R = R_1 R_2 R_3 R_4 R_5 = C^{n_F}_m C^{n_C}_m 2^{n_F - m} n_F^{n_C - m} 2^{n_C n_F - m^2 - n_C + m},   (2.69)

yielding the result.

It should be noted that for a given matrix D and given n_C and n_F, Algorithm UFLP-BASIS produces R different bases of UFLP. The fact that they are different is due to the following argument. In Steps 1, 2, and 4 of Algorithm UFLP-BASIS, we choose arbitrary elements for the subsets \bar{M}, H, and M_1. Also, in Step 15 we choose an arbitrary value for r. The elements of \bar{M}, H, and M_1 and the value of r determine the indices of the variables that will be selected to be included in B; therefore, every setting of these parameters produces a unique basis. It follows that for a given input matrix, all the output bases are different from each other. Further, for two input matrices we may have two identical settings of these parameters. However, every input matrix produces bases that are different from the bases produced by the other matrix as long as the two input matrices are not identical. This is because the components of the input matrices determine whether the selected variables are x_{ij} or s_{ij} (x_{ij} for every component equal to one and s_{ij} for every component equal to zero). Since the two input matrices are not identical, identical settings of the

abovementioned parameters determine identical indices but different types of variables (x_{ij} or s_{ij}).

Example 2.6-continued: Given D as defined before, the number of different UFLP bases B with n_C = 5 and n_F = 3 that we can construct from matrix D is equal to R = C^3_3 C^5_3 2^0 3^2 2^4 = 1440.

It follows that if we know all (0,1) matrices of size (m,m), for m <= min(n_C,n_F), that have determinant absolute value equal to d, then we are able to compute the number of UFLP bases with given n_C and n_F (such that n_C >= m and n_F >= m) that have the same determinant absolute value.

Theorem 2.3: For given n_C and n_F, let N(m,d) denote the function that returns the number of different (0,1) nonsingular matrices D_{m,m} of size (m,m), for m <= min(n_C,n_F), that have determinant absolute value equal to d. Then the number of different UFLP bases B with the given n_C and n_F that are such that |det B| = d is

C^{n_F}_m C^{n_C}_m 2^{n_F - m} n_F^{n_C - m} 2^{n_C n_F - m^2 - n_C + m} N(m,d).

Using Theorem 2.2, for given n_C and n_F we can construct UFLP bases from any nonsingular (0,1) matrix D of size (m,m) such that n_C >= m and n_F >= m. Also, the absolute value of the determinant of a basis of the LPR of UFLP can be calculated by computing its matrix D using Theorem 2.1. We use these results to obtain information about the maximum possible determinant of bases of the LPR of UFLP.

Corollary 2.2: Given n_C and n_F, the absolute value of the maximum possible determinant of a basis of the LPR of UFLP is equal to the absolute value of the maximum determinant of a (0,1) matrix of size (n,n), where n = min(n_C,n_F).

The proof simply uses Algorithm UFLP-BASIS. In Chapter 3, we investigate how efficient it is to solve the group relaxations of UFLP using standard algorithms. The running time of these algorithms is directly affected by the absolute value of the determinant of the LP optimal basis. In turn, the value of this determinant is a function of m, the size of the matrix D; see Theorem 2.1.

Corollary 2.3: Given n_C, n_F, and m such that n_F >= m and n_C >= m, the absolute value of the maximum possible determinant of a basis of the LPR of UFLP that has a D matrix of size (m,m) is equal to the absolute value of the maximum determinant of a (0,1) matrix of size (m,m).

We next give (0,1) matrices of size (h,h), h >= 2, that can be used to construct UFLP bases with determinant equal to 1, 2, ..., h-1 for any instance with n_C >= h and n_F >= h. Let the matrix U_{h,h} be formed by subtracting an identity matrix from a matrix of ones of the same size, i.e.,

U_{h,h} = E_{h,h} - I_{h,h} = [ 0 1 ... 1 ; 1 0 ... 1 ; ... ; 1 1 ... 0 ].   (2.70)

We next introduce some results that we will use to compute the determinant of U_{h,h}.

Lemma 2.7: Let T = E_{h,q} and W = E_{q,k} be matrices of ones. Then T W = q E_{h,k}.

Proof: Define Q = T W. For any i in {1,...,h} and j in {1,...,k}, we write Q(i,j) = \sum_{r=1}^{q} T(i,r) W(r,j) = q.

Lemma 2.8: Let U_{h,h} = E_{h,h} - I_{h,h}. Then

U_{h,h}^{-1} = E_{h,h}/(h-1) - I_{h,h}.

Proof: Define Q = E_{h,h}/(h-1) - I_{h,h}. We write

U_{h,h} Q = (E_{h,h} - I_{h,h})(E_{h,h}/(h-1) - I_{h,h}) = E_{h,h}^2/(h-1) - E_{h,h} - E_{h,h}/(h-1) + I_{h,h}.   (2.71)

Using Lemma 2.7 to compute E_{h,h}^2 = h E_{h,h}, (2.71) can be rewritten as

U_{h,h} Q = h E_{h,h}/(h-1) - E_{h,h} - E_{h,h}/(h-1) + I_{h,h}.   (2.72)

Expression (2.72) then reduces to

U_{h,h} Q = ( h/(h-1) - 1 - 1/(h-1) ) E_{h,h} + I_{h,h} = I_{h,h}.   (2.73)

Lemma 2.9: Let h >= 2. If U_{h,h} = E_{h,h} - I_{h,h}, then |det U_{h,h}| = h - 1.

Proof: We prove this lemma by induction. For h = 1, U_{1,1} = 0, and therefore

det U_{1,1} = 0.   (2.74)

We now assume the result holds for h = 1, ..., k and prove that it holds for h = k+1. We have

U_{k+1,k+1} = E_{k+1,k+1} - I_{k+1,k+1}.   (2.75)

Next, we decompose U_{k+1,k+1} into blocks so as to isolate a matrix U_{k,k} in the lower right block, i.e.,

U_{k+1,k+1} = [ 0  E_{1,k} ; E_{k,1}  U_{k,k} ].   (2.76)

Applying Lemma 2.1 on (2.76), we obtain

det U_{k+1,k+1} = det U_{k,k} det( 0 - E_{1,k} U_{k,k}^{-1} E_{k,1} ).   (2.77)

From our induction hypothesis, we know that |det U_{k,k}| = k-1. Further, Lemma 2.8 gives an exact form for U_{k,k}^{-1}, which we substitute in (2.77) to obtain

|det U_{k+1,k+1}| = (k-1) | 0 - E_{1,k} ( E_{k,k}/(k-1) - I_{k,k} ) E_{k,1} | = (k-1) | E_{1,k} E_{k,1} - E_{1,k} E_{k,k} E_{k,1}/(k-1) |.   (2.78)

We next compute the part enclosed in the absolute value in expression (2.78). From Lemma 2.7 we know that E_{1,k} E_{k,1} = k, and so (2.78) reduces to

|det U_{k+1,k+1}| = (k-1) | k - E_{1,k} E_{k,k} E_{k,1}/(k-1) |.   (2.79)

Again, using Lemma 2.7, we know that E_{1,k} E_{k,k} = k E_{1,k}. Therefore E_{1,k} E_{k,k} E_{k,1} = k E_{1,k} E_{k,1} = k^2. Substituting in (2.79), we obtain

|det U_{k+1,k+1}| = (k-1) | k - k^2/(k-1) | = (k-1) k/(k-1) = k.   (2.80)

For h >= 2 and k = 1, ..., h-1, define U^k_{h,h} to be a matrix obtained from U_{h,h} by replacing h-1-k of the one elements of its first column with zeros. The number of one elements in the first column of U^k_{h,h} is therefore equal to k. Matrix U^k_{h,h} can be used to construct UFLP bases with determinant equal to 1, 2, ..., h-1 for all instances of UFLP with n_C >= h and n_F >= h.

Lemma 2.10: For h >= 2 and k = 1, ..., h-1, |det U^k_{h,h}| = k.

Proof: We decompose the matrix U^k_{h,h} as in (2.76), i.e.,

U^k_{h,h} = [ 0  E_{1,h-1} ; E^k_{h-1,1}  U_{h-1,h-1} ].   (2.81)

In (2.81), E^k_{h-1,1} is a (0,1) vector of size (h-1,1) that has exactly k one elements. We apply Lemma 2.1 on (2.81) to obtain

det U^k_{h,h} = det U_{h-1,h-1} det( 0 - E_{1,h-1} U_{h-1,h-1}^{-1} E^k_{h-1,1} ).   (2.82)

Again, using Lemmas 2.8 and 2.9, we write

|det U^k_{h,h}| = (h-2) | 0 - E_{1,h-1} ( E_{h-1,h-1}/(h-2) - I_{h-1,h-1} ) E^k_{h-1,1} | = (h-2) | E_{1,h-1} E^k_{h-1,1} - E_{1,h-1} E_{h-1,h-1} E^k_{h-1,1}/(h-2) |.   (2.83)

Let Q = E_{1,h-1} E^k_{h-1,1}. Since E^k_{h-1,1} has only k one elements and the remaining elements are all zero, Q = \sum_{r=1}^{h-1} E_{1,h-1}(1,r) E^k_{h-1,1}(r,1) = k. Substituting in (2.83), we obtain

|det U^k_{h,h}| = (h-2) | k - E_{1,h-1} E_{h-1,h-1} E^k_{h-1,1}/(h-2) |.   (2.84)

Similarly, let Q' = E_{h-1,h-1} E^k_{h-1,1}. Then, for j = 1, ..., h-1,

Q'(j,1) = \sum_{r=1}^{h-1} E_{h-1,h-1}(j,r) E^k_{h-1,1}(r,1) = k.   (2.85)

Hence Q' = k E_{h-1,1}. Also, if W = E_{1,h-1} E_{h-1,h-1} E^k_{h-1,1} = E_{1,h-1} Q', then from Lemma 2.7, W = k E_{1,h-1} E_{h-1,1} = k(h-1). Finally, we write

|det U^k_{h,h}| = (h-2) | k - k(h-1)/(h-2) | = (h-2) k/(h-2) = k.   (2.86)

It should be noted that Lemma 2.9 is a special case of Lemma 2.10, as U^{h-1}_{h,h} = U_{h,h}, and it follows that |det U^{h-1}_{h,h}| = |det U_{h,h}| = h-1. Further, the result is not limited to changing values of elements in the first column of U_{h,h}; it can be verified that the same result holds when applying the same changes to any single row or any single column of the matrix. The following example illustrates Lemmas 2.9 and 2.10.

Example 2.7: For h = 4, U_{4,4} = E_{4,4} - I_{4,4}, i.e.,

U_{4,4} = [ 0 1 1 1 ; 1 0 1 1 ; 1 1 0 1 ; 1 1 1 0 ].

From Lemmas 2.9 and 2.10, |det U_{4,4}| = |det U^3_{4,4}| = 3. For each one element in the first column that is changed to zero, the determinant value drops by one. For k = 2 and k = 1, we write

U^2_{4,4} = [ 0 1 1 1 ; 0 0 1 1 ; 1 1 0 1 ; 1 1 1 0 ],

for which it can be verified that |det U^2_{4,4}| = 2, and

U^1_{4,4} = [ 0 1 1 1 ; 0 0 1 1 ; 0 1 0 1 ; 1 1 1 0 ],

for which we have |det U^1_{4,4}| = 1.
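Lemmas 2.9 and 2.10 and Example 2.7 can be checked numerically in a few lines. The sketch below is an illustration only; the matrices are built exactly as in (2.70) and in the definition of U^k.

import numpy as np

def U(h, k=None):
    # U_{h,h} = E - I; for U^k, zero out first-column ones until exactly k remain.
    M = np.ones((h, h)) - np.eye(h)
    if k is not None:
        M[1:h - k, 0] = 0
    return M

for k in (3, 2, 1):
    print(k, abs(round(np.linalg.det(U(4, k)))))   # expected: 3, 2, 1 for h = 4
print(abs(round(np.linalg.det(U(10)))))            # expected: 9 = h - 1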

Proposition 2.3: For h >= 2 and k = 1, ..., h-1, we can construct UFLP bases with determinant absolute value equal to k for any instance with n_C >= h and n_F >= h. Algorithm UFLP-INSTANCE describes how to obtain such an instance.

Algorithm UFLP-INSTANCE.
Input: h and k such that h >= 2 and k in {1, ..., h-1}; parameters n_C and n_F such that n_C >= h and n_F >= h.
Output: Sets of indices of variables x_{ij}, s_{ij}, y_i, and t_i (M_x^i, M_s^i, M_y, and M_t) that yield a basis B with |det B| = k.
1: Let U^k_{h,h} = E_{h,h} - I_{h,h}
2: For i = 2 to h - k
3:   U^k_{h,h}(i,1) = 0
4: End For
5: Apply Algorithm UFLP-BASIS on U^k_{h,h}, n_C, and n_F to obtain basis B.

CHAPTER 3
MAXIMUM POSSIBLE DETERMINANT OF BASES OF THE LINEAR PROGRAMMING RELAXATION OF UNCAPACITATED FACILITY LOCATION PROBLEM

As observed in Corollary 2.2, we need to determine the MPD of a (0,1) matrix in order to determine the MPD of bases of the LPR of UFLP. This is the purpose of this chapter, which is organized as follows. In Section 3.1, we study the MPD of a (+1,-1) matrix, since it is related to the MPD of (0,1) matrices. In Section 3.2, we discuss the MPD of a (0,1) matrix. In Section 3.3, we determine the MPD of bases of the LPR of UFLP. Section 3.4 shows that the solutions corresponding to the bases of the LPR of UFLP with MPD that we create are feasible. In Section 3.5, we conclude with comments on the efficiency of using group relaxations to solve UFLP.

3.1 Computing the MPD of (+1,-1) Matrices

The problem of determining the maximum possible determinant of a matrix whose elements are +1 or -1 is a well-known problem called the Hadamard problem. This name pays tribute to Hadamard [35], who first discussed in 1893 how to obtain upper bounds on the determinant of these matrices.

Lemma 3.1 [35]: Let K_{h,h} be a matrix of size (h,h) whose components are +1 or -1. Then |det K_{h,h}| <= h^{h/2}.

This bound is not attained unless h is equal to 1, 2, or a multiple of 4 [36]. We define Q(h) to be the function that returns the absolute value of the maximum possible determinant of a (+1,-1) matrix K_{h,h} of size (h,h). Table 3-1 shows the exact values of Q(h); see [37].

Table 3-1 [37]. Maximum possible determinant Q(h) of a (+1,-1) square matrix of size (h,h).

h : Q(h)
1 : 1            10 : 73728
2 : 2            11 : 327680
3 : 4            12 : 2985984
4 : 16           13 : 14929920
5 : 48           14 : 77635584
6 : 160          15 : 418037760
7 : 576          16 : 4294967296
8 : 4096         17 : 21474836480
9 : 14336        18 : 14602888806

Further, we refer to a Hadamard matrix as any (+1,-1) matrix that achieves this bound. Golomb and Baumert [38] show how to construct Hadamard matrices whose dimensions are multiples of 4. The construction requires that any two rows or columns have half of their components of the same sign and half of their components of different sign; the rows are therefore pairwise orthogonal. Using a Kronecker product construction [39], Algorithm HADAMARD shows how to construct Hadamard matrices of size (h,h), where h = 2^r, r a nonnegative integer.

Algorithm HADAMARD [39].
Input: h, where h = 2^r, r a nonnegative integer.
Output: Hadamard matrix J_{h,h} such that |det J_{h,h}| = h^{h/2}.
1: J = [1]
2: If log_2 h < 1, return; else
3: For i = 1 to log_2 h
4:   J = [ J  J ; J  -J ]
5: End For

Hadamard matrices have the following interesting property.

Lemma 3.2 [38]: If J_{h,h} is a Hadamard matrix of size (h,h), then J_{h,h} J_{h,h}^T = h I_{h,h}.

Lemma 3.2 holds for all Hadamard matrices. For Hadamard matrices obtained by Algorithm HADAMARD, the following properties also hold.

Corollary 3.1: If J_{h,h} is a Hadamard matrix of size (h,h) that has been obtained using Algorithm HADAMARD, then

J_{h,h}^{-1} = (1/h) J_{h,h},   (3.1)
J_{h,h}(.,1) = E_{h,1} and J_{h,h}(1,.) = E_{1,h},   (3.2)
\sum_{i=1}^{h} J_{h,h}(i,k) = \sum_{i=1}^{h} J_{h,h}(k,i) = 0 for all k = 2, ..., h,   (3.3)

and

\sum_{i=2}^{h} J_{h,h}(i,k) = \sum_{i=2}^{h} J_{h,h}(k,i) = -1 for all k = 2, ..., h.   (3.4)

Example 3.1: For h = 4, the Hadamard matrix obtained using Algorithm HADAMARD is constructed as follows:

J_{1,1} = [1],   J_{2,2} = [ J_{1,1}  J_{1,1} ; J_{1,1}  -J_{1,1} ] = [ 1 1 ; 1 -1 ],

and

J_{4,4} = [ J_{2,2}  J_{2,2} ; J_{2,2}  -J_{2,2} ] = [ 1 1 1 1 ; 1 -1 1 -1 ; 1 1 -1 -1 ; 1 -1 -1 1 ].

Note that |det J_{4,4}| = 16 = Q(4), matching the value in Table 3-1.
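The Kronecker (Sylvester) construction of Algorithm HADAMARD, Lemma 3.2, and the properties of Corollary 3.1 are straightforward to verify numerically. The sketch below is an illustration only.

import numpy as np

def hadamard(h):
    # Sylvester construction: J_1 = [1], J_{2m} = [[J, J], [J, -J]]; h must be a power of 2.
    J = np.array([[1.0]])
    while J.shape[0] < h:
        J = np.block([[J, J], [J, -J]])
    return J

h = 4
J = hadamard(h)
print(np.allclose(J @ J.T, h * np.eye(h)))          # Lemma 3.2: J J^T = h I
print(round(abs(np.linalg.det(J))), h ** (h // 2))  # |det J| = h^{h/2} = 16 = Q(4)
print(J[0, :], J[:, 0])                             # (3.2): first row and first column are all ones
print(J[1:, 1:].sum(axis=0))                        # (3.4): later columns sum to -1 without the first row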

3.2 Computing the MPD of (0,1) Matrices

A problem similar to Hadamard's is the problem of finding the maximum determinant of a square (0,1) matrix. For this problem, we have the following result.

Lemma 3.3 [40]: Let S_{h,h} be a (0,1) matrix of size (h,h). Then

|det S_{h,h}| <= (h+1)^{(h+1)/2} / 2^h.

Let U(h) be the function that returns the absolute value of the MPD of a (0,1) matrix S_{h,h} of size (h,h). Also, denote by N(h) the number of square (0,1) matrices of size (h,h) that have determinant absolute value equal to U(h). Table 3-2 shows the values of U(h) and N(h); these values are obtained from [41] and [42].

Table 3-2 [41 and 42]. MPD of a (0,1) square matrix of size (h,h) and the number of square (0,1) matrices that attain the MPD.

h  : U(h)       : N(h)
1  : 1          : 1
2  : 1          : 6
3  : 2          : 6
4  : 3          : 120
5  : 5          : 7200
6  : 9          : 1058400
7  : 32         : 151200
8  : 56         : 391910400
9  : 144        : 27433728000
10 : 320        :
11 : 1458       :
12 : 3645       :
13 : 9477       :
14 : 25515      :
15 : 131072     :
16 : 327680     :
17 : 1114112    :

In [41] and [42], the authors count the matrices whose determinant equals the positive value U(h). However, we are interested in the number of matrices whose determinant absolute value equals U(h). Therefore, the values of N(h) for h >= 2 in Table 3-2 are double the values reported in [41] and [42]. We denote by W_{h,h} any binary matrix that achieves U(h). If J_{h+1,h+1} is a Hadamard matrix obtained using Algorithm HADAMARD, then a binary matrix W_{h,h} that achieves the maximum possible determinant can be obtained from J_{h+1,h+1} as shown in Algorithm BINARY.

Algorithm BINARY [40].
Input: Hadamard matrix J_{h+1,h+1}, where |det J_{h+1,h+1}| = (h+1)^{(h+1)/2}.
Output: A binary matrix W_{h,h} such that |det W_{h,h}| = (h+1)^{(h+1)/2} / 2^h.
1: For i = 2 to h+1
2:   J(.,i) = J(.,i) - J(.,1)
3: End For
4: J = J / (-2)
5: W_{h,h} = J(2:h+1, 2:h+1)

Steps 1 to 3 in Algorithm BINARY subtract the first column of J from all other columns. From (3.2) we know that the components of the first column of J are all ones. Hence, the components of all columns of J (except the first column) are changed from {-1,1} to {-2,0}. After dividing J by -2 in Step 4, these components change from {-2,0} to {1,0}. Step 5 selects the submatrix of size (h,h) in the lower right corner of J and denotes it by W_{h,h}.
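Algorithm BINARY can be checked directly against Lemma 3.3 and Table 3-2; the minimal sketch below reuses the Sylvester construction from the previous sketch and is an illustration only.

import numpy as np

def hadamard(n):
    J = np.array([[1.0]])
    while J.shape[0] < n:
        J = np.block([[J, J], [J, -J]])
    return J

def binary_from_hadamard(J):
    # Algorithm BINARY: subtract the first column from the others, divide by -2,
    # and keep the lower-right h x h block (h = size of J minus one).
    J = J.copy()
    J[:, 1:] -= J[:, [0]]
    J = J / -2.0
    return J[1:, 1:]

for h in (3, 7, 15):                               # W is h x h, built from J of size h+1
    W = binary_from_hadamard(hadamard(h + 1))
    bound = (h + 1) ** ((h + 1) / 2) / 2 ** h      # Lemma 3.3 bound, attained by W
    print(h, round(abs(np.linalg.det(W))), round(bound))   # expected: 2, 32, 131072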

Example 3.2: We next apply Algorithm BINARY to obtain W_{3,3}. Consider the matrix J_{4,4} obtained by Algorithm HADAMARD in Example 3.1. Steps 1 to 3 of the algorithm subtract the first column of J_{4,4} from all other columns, i.e., J(.,2) = J(.,2) - J(.,1), J(.,3) = J(.,3) - J(.,1), and J(.,4) = J(.,4) - J(.,1). We obtain

J_{4,4} = [ 1 0 0 0 ; 1 -2 0 -2 ; 1 0 -2 -2 ; 1 -2 -2 0 ].

We then divide J_{4,4} by -2 to obtain

J_{4,4} = [ -0.5 0 0 0 ; -0.5 1 0 1 ; -0.5 0 1 1 ; -0.5 1 1 0 ].

Finally, W_{3,3} is obtained by selecting the lower right block of J_{4,4} of size (3,3), i.e.,

W_{3,3} = [ 1 0 1 ; 0 1 1 ; 1 1 0 ].

Note that |det W_{3,3}| = 2 = U(3), matching the value in Table 3-2.

3.3 Computing the MPD of Bases of the LPR of UFLP

Using Corollary 2.2 and Lemma 3.3, we can derive an upper bound for the MPD of an arbitrary basis of the LPR of UFLP.

Theorem 3.1: Given n_C and n_F, the absolute value of the determinant of any basis of the LPR of UFLP is less than or equal to

U(n) <= (n+1)^{(n+1)/2} / 2^n, where n = min(n_C, n_F).

Similarly, if a restriction is made on m, the size of the matrix D (see Corollary 2.3), then the upper bound can be improved as follows.

Theorem 3.2: Given n_C, n_F, and m such that n_F >= m and n_C >= m, the absolute value of the determinant of any basis of the LPR of UFLP for which D has size (m,m) is less than or equal to

U(m) <= (m+1)^{(m+1)/2} / 2^m.

Example 3.3: For n_C = 100 and n_F = 8, the absolute value of the determinant of bases of the LPR of UFLP is less than or equal to U(8) = 56.

Example 3.4: For n_C = 100 and n_F = 8, the absolute value of the determinant of bases of the LPR of UFLP for which m = 3 is less than or equal to U(3) = 2.

3.4 On the Feasibility of the LP Solution to UFLP that has the MPD

In this section we show that the basic solution associated with the basis produced by Algorithm BINARY and Algorithm UFLP-BASIS is feasible for UFLP.

We also show the surprising result that this basic solution is a 1/(h+1) multiple of an integer vector, where h+1 = 2^r, r > 0, showing that, although the determinant is large, the corresponding solution is not very fractional. In Section 3.1, we presented how to obtain Hadamard matrices of size h+1 = 2^r that have the MPD. From these matrices, in Section 3.2, we described a way to obtain (0,1) matrices of size h = 2^r - 1 that have the largest possible determinant. Further, we can use the obtained binary matrices to construct bases of UFLP that have the same determinant using Algorithm UFLP-BASIS. In the remainder of this section, we study the basic feasible solutions associated with these bases.

Algorithm BINARY describes how to obtain a (0,1) matrix W_{h,h} from a Hadamard matrix J_{h+1,h+1} obtained by Algorithm HADAMARD. We observe that using an elementary column operation (ECO) we can achieve the same result as Algorithm BINARY. In particular, we observe that

W_{h,h} = \bar{J}_{h+1,h+1}(2:h+1, 2:h+1),   (3.5)

where

\bar{J}_{h+1,h+1} = -(1/2) J_{h+1,h+1} EC_{h+1,h+1},   (3.6)

and where

EC_{h+1,h+1}(j,j) = 1  for j = 1, ..., h+1,   (3.7.a)
EC_{h+1,h+1}(1,j) = -1 for j = 2, ..., h+1,   (3.7.b)
EC_{h+1,h+1}(i,j) = 0  otherwise.   (3.7.c)   (3.7)

Example 3.2-continued: We show how to obtain \bar{J}_{4,4} from J_{4,4} using (3.6) and (3.7). Consider

J_{4,4} = [ 1 1 1 1 ; 1 -1 1 -1 ; 1 1 -1 -1 ; 1 -1 -1 1 ]   and   EC_{4,4} = [ 1 -1 -1 -1 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ].

Using (3.6) we obtain

\bar{J}_{4,4} = -(1/2) J_{4,4} EC_{4,4} = [ -0.5 0 0 0 ; -0.5 1 0 1 ; -0.5 0 1 1 ; -0.5 1 1 0 ].

Using (3.5) we obtain

W_{3,3} = [ 1 0 1 ; 0 1 1 ; 1 1 0 ],

as desired. We next compute the inverses of \bar{J}_{h+1,h+1} and W_{h,h}, as they will be used later. We emphasize that the Hadamard matrices J_{h+1,h+1} used here are obtained by Algorithm HADAMARD.

\bar{J}_{h+1,h+1} is obtained by subtracting the first column of J_{h+1,h+1} from all the other columns and scaling by -1/2. Given the specific structure of the Hadamard matrices J_{h+1,h+1}, the elements of the first row of \bar{J}_{h+1,h+1} are now all zero except for the first element, and the elements of the first column all become equal to -0.5 after the scaling. We decompose \bar{J}_{h+1,h+1} into four blocks in such a way that we have W_{h,h} in the lower right block, i.e.,

\bar{J}_{h+1,h+1} = [ -0.5  0_{1,h} ; -0.5 E_{h,1}  W_{h,h} ].   (3.8)

We then apply Lemma 2.2 on (3.8) to obtain

\bar{J}_{h+1,h+1}^{-1} = [ -2  0_{1,h} ; -W_{h,h}^{-1} E_{h,1}  W_{h,h}^{-1} ].   (3.9)

Since we do not know W_{h,h}^{-1}, we use (3.6) to compute \bar{J}_{h+1,h+1}^{-1} and hence obtain more information about W_{h,h}^{-1}. We write

\bar{J}_{h+1,h+1}^{-1} = ( -(1/2) J_{h+1,h+1} EC_{h+1,h+1} )^{-1} = -2 EC_{h+1,h+1}^{-1} J_{h+1,h+1}^{-1}.   (3.10)

From Corollary 3.1, we know the relation between J_{h+1,h+1} and J_{h+1,h+1}^{-1}, i.e.,

\bar{J}_{h+1,h+1}^{-1} = -2 EC_{h+1,h+1}^{-1} J_{h+1,h+1}^{-1} = (-2/(h+1)) EC_{h+1,h+1}^{-1} J_{h+1,h+1}.   (3.11)

Further, it is easily verified that the inverse of the simple matrix EC_{h+1,h+1} is equal to

EC_{h+1,h+1}^{-1}(j,j) = 1  for j = 1, ..., h+1,   (3.12.a)
EC_{h+1,h+1}^{-1}(1,j) = 1  for j = 2, ..., h+1,   (3.12.b)
EC_{h+1,h+1}^{-1}(i,j) = 0  otherwise.   (3.12.c)   (3.12)

When we multiply a matrix by EC_{h+1,h+1} on the right, EC_{h+1,h+1} subtracts the first column of that matrix from all other columns. Instead, when multiplying a matrix by EC_{h+1,h+1}^{-1} on the left, all rows are added to the first row while the other rows remain unchanged. Using these observations together with (3.3), we decompose the product in (3.11) as follows:

\bar{J}_{h+1,h+1}^{-1} = (-2/(h+1)) EC_{h+1,h+1}^{-1} J_{h+1,h+1} = [ -2  0_{1,h} ; (-2/(h+1)) E_{h,1}  (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1) ].   (3.13)

Equating (3.9) and (3.13), we have

[ -2  0_{1,h} ; -W_{h,h}^{-1} E_{h,1}  W_{h,h}^{-1} ] = [ -2  0_{1,h} ; (-2/(h+1)) E_{h,1}  (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1) ].   (3.14)

We conclude that

W_{h,h}^{-1} = (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1).   (3.15)

We then use W_{h,h} as input to Algorithm UFLP-BASIS to obtain UFLP bases that have the same determinant absolute value as W_{h,h}. For simplicity we assume that n_C = n_F = h. Therefore, the obtained basis, say B, will have n_C n_F columns corresponding to the x_{ij} and s_{ij} variables, n_F columns corresponding to the y_i variables, and n_F columns corresponding to the t_i variables.
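Relation (3.15), and the fact (used below) that every row of W^{-1} sums to 2/(h+1), can be confirmed numerically. The sketch below is an illustration only and follows the same conventions as Algorithms HADAMARD and BINARY.

import numpy as np

def hadamard(n):
    J = np.array([[1.0]])
    while J.shape[0] < n:
        J = np.block([[J, J], [J, -J]])
    return J

for h in (3, 7):
    J = hadamard(h + 1)
    W = ((J - J[:, [0]]) / -2.0)[1:, 1:]          # Algorithm BINARY applied to J_{h+1}
    lhs = np.linalg.inv(W)
    rhs = (-2.0 / (h + 1)) * J[1:, 1:]            # (3.15): W^{-1} = -2/(h+1) J(2:h+1, 2:h+1)
    print(h, np.allclose(lhs, rhs), np.allclose(lhs.sum(axis=1), 2.0 / (h + 1)))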

We apply ERO (2.7) on B and permute its columns so that we have all the x_{ij} and s_{ij} columns to the left, all the t_i columns to the right, and all the y_i columns in the middle. Then \bar{B} has the following structure:

\bar{B} = [ 0_{n_C,n_C n_F}  W_{h,h}  0_{n_C,n_F} ; I_{n_C n_F,n_C n_F}  -diag(E_{n_C,1}, ..., E_{n_C,1})  0_{n_C n_F,n_F} ; 0_{n_F,n_C n_F}  I_{n_F,n_F}  I_{n_F,n_F} ].   (3.16)

We permute the horizontal blocks to locate an invertible block in the lower right corner. We group some of the blocks and introduce the following notation to ease the computation of the inverse of \bar{B}:

\bar{B} = [ I_{n_C n_F,n_C n_F}  B^1 ; 0_{n_C+n_F,n_C n_F}  B^2 ],   (3.17)

where

B^1 = [ -diag(E_{n_C,1}, ..., E_{n_C,1})  0_{n_C n_F,n_F} ]   (3.18)

and

B^2 = [ W_{h,h}  0_{n_C,n_F} ; I_{n_F,n_F}  I_{n_F,n_F} ].   (3.19)

We apply Lemma 2.2 on (3.19) to obtain

(B^2)^{-1} = [ W_{h,h}^{-1}  0_{n_C,n_F} ; -W_{h,h}^{-1}  I_{n_F,n_F} ].   (3.20)

Again we apply Lemma 2.2 on (3.17) and write

\bar{B}^{-1} = [ I_{n_C n_F,n_C n_F}  -B^1 (B^2)^{-1} ; 0  (B^2)^{-1} ].   (3.21)

We use (3.18) to compute B^1 (B^2)^{-1}. B^1 is formed of n_C n_F rows such that all the elements of every row are zero except for one element that is equal to -1. The product B^1 (B^2)^{-1} is therefore formed of negated rows of (B^2)^{-1}: for the -1 element in the row of B^1 associated with the pair (i,j), which lies in column i, the corresponding row of the product is -(B^2)^{-1}(i,.), i = 1, ..., n_F, i.e.,

B^1 (B^2)^{-1} = -[ (B^2)^{-1}(1,.) ; ... ; (B^2)^{-1}(1,.) ; ... ; (B^2)^{-1}(n_F,.) ; ... ; (B^2)^{-1}(n_F,.) ],   (3.22)

where each row (B^2)^{-1}(i,.) is repeated n_C times, once for each pair (i,j), j in C.

Since we assumed that n_C = n_F = h, we know from (3.20) that the first n_F rows of (B^2)^{-1} are [ W_{h,h}^{-1}  0 ]. Hence,

-B^1 (B^2)^{-1} = [ W_{h,h}^{-1}(1,.)  0_{1,n_F} ; ... ; W_{h,h}^{-1}(1,.)  0_{1,n_F} ; ... ; W_{h,h}^{-1}(h,.)  0_{1,n_F} ; ... ; W_{h,h}^{-1}(h,.)  0_{1,n_F} ],   (3.23)

where row W_{h,h}^{-1}(i,.) is repeated n_C times. Substituting (3.20) and (3.23) into (3.21), we obtain

\bar{B}^{-1} = [ I_{n_C n_F}  (block in (3.23)) ; 0  W_{h,h}^{-1}  0 ; 0  -W_{h,h}^{-1}  I_{n_F} ].   (3.24)

We now compute the associated basic feasible solution by multiplying \bar{B}^{-1} by b in (1.6). Since we permuted the rows in (3.17), we apply the same row permutation on (1.6) to obtain

b = [ 0_{1,n_C n_F}  E_{1,n_C}  E_{1,n_F} ]^T.   (3.25)

The LP solution corresponding to B is obtained by computing X = \bar{B}^{-1} b, i.e.,

X = [ \sum_{j=1}^{h} W_{h,h}^{-1}(1,j) ; ... ; \sum_{j=1}^{h} W_{h,h}^{-1}(h,j) ; \sum_{j=1}^{h} W_{h,h}^{-1}(1,j) ; ... ; \sum_{j=1}^{h} W_{h,h}^{-1}(h,j) ; 1 - \sum_{j=1}^{h} W_{h,h}^{-1}(1,j) ; ... ; 1 - \sum_{j=1}^{h} W_{h,h}^{-1}(h,j) ],   (3.26)

where, in the first block of n_C n_F components, the row sum associated with facility i appears once for each pair (i,j), j in C.

As we have shown in (3.15) that W_{h,h}^{-1} = (-2/(h+1)) J_{h+1,h+1}(2:h+1, 2:h+1), the sum of the elements of the k-th row of W_{h,h}^{-1} is equal to -2/(h+1) times the sum of the elements of the (k+1)-st row of J_{h+1,h+1}, excluding the first element of that row, k = 1, ..., h. We noted in (3.4) that the sum of the elements of the (k+1)-st row of J_{h+1,h+1}, excluding the first element, is equal to -1. It follows that the sum of the elements of the k-th row of W_{h,h}^{-1} is equal to 2/(h+1). We use these observations to simplify (3.26) as follows:

X = [ (2/(h+1)) E_{n_C n_F + n_C, 1} ; (1 - 2/(h+1)) E_{n_F, 1} ].   (3.27)

Theorem 3.3: For h such that h+1 = 2^r, r > 0, let W_{h,h} be a (0,1) matrix obtained from the Hadamard matrix J_{h+1,h+1} using Algorithm BINARY. Also, let B be a basis of the LPR of UFLP obtained from W_{h,h} using Algorithm UFLP-BASIS. Then the basic solution associated with B is feasible and equal to

X = [ (2/(h+1)) E_{n_C n_F + n_C, 1} ; ((h-1)/(h+1)) E_{n_F, 1} ].

As a result, the LP solution to a UFLP instance associated with the matrix produced by Algorithm BINARY is always feasible. It is interesting to observe that, although the MPD is very high, the corresponding LP solution is not very fractional, since it is a multiple of 1/(h+1) while the determinant is equal to (h+1)^{(h+1)/2} / 2^h.

Example 3.5: The binary matrix of size (15,15) that is obtained from the Hadamard matrix of size (16,16) produced by Algorithm HADAMARD has determinant absolute value equal to 131072. A basis of UFLP constructed from that binary matrix with n_C = n_F = 15 has the same determinant. Although the basis has a determinant of absolute value 131072, the LP solution corresponding to that basis is an integer multiple of 1/8:

X = [ (2/(1+h)) E_{n_C n_F + n_C, 1} ; ((h-1)/(1+h)) E_{n_F, 1} ] = [ (1/8) E_{n_C n_F + n_C, 1} ; (7/8) E_{n_F, 1} ].

3.5 Solving Group Relaxations of UFLP

The running time of shortest path algorithms to solve group relaxations of UFLP is determined in large part by the MPD of the optimal basis of the LPR of UFLP. Let g(n_C,n_F) be the function that returns the MPD among the bases of the LPR of UFLP for given n_C and n_F, i.e., g(n_C,n_F) = U(n), where n = min(n_C,n_F). Theorem 3.1 shows that g(n_C,n_F) is exponential. It follows that applying traditional techniques to solve the group relaxations of UFLP yields exponential algorithms. However, we next describe results that show that the determinants of bases of UFLP are typically small.

First, the upper bound defined in Theorem 3.1 is a function of n, where n = min(n_C,n_F). Therefore, no matter how big n_C or n_F is, the MPD depends only on the smaller of the two. As an example, g(10000,5) = g(100,5) = g(5,5) = U(5) = 5.

Next, we give arguments supporting the claim that most of the bases of the LPR of UFLP have small determinants and that the upper bound of Theorem 3.1 is attained by very few bases. As mentioned in Section 2.1, a basis of A_{n_C+n_C n_F+n_F, 2 n_C n_F+2 n_F} is a square submatrix that is invertible and that has n_C+n_C n_F+n_F rows and columns. As before, denote by B a square submatrix of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F). The assumption made at the beginning of Section 2.1 imposes that the first 2 n_C n_F columns of A correspond to x_{ij} and s_{ij} variables and the remaining 2 n_F columns correspond to y_i and t_i variables. Lemmas 2.3, 2.4, 2.5 and 2.6 establish that the determinant of B depends on the way we select the y_i and t_i variables to be included in B. We define k as the number of columns corresponding to y_i and t_i variables included in B, in other words k = m_y + m_t. Also, we define T_k as the total number of submatrices of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F) that have k columns corresponding to y_i and t_i variables. An upper bound on the total number of bases that we may obtain in (2.1) is given by

\sum_{k=0}^{2 n_F} T_k = \sum_{k=0}^{2 n_F} C^{2 n_F}_k C^{2 n_C n_F}_{n_C+n_C n_F+n_F-k}.   (3.28)

The inequality is because the matrices counted by T_k are not required to have determinant different from zero. In (2.1) we obtained a simple formula by counting the total number of ways of selecting a set of columns of A whose number is equal to the number of rows, regardless of which variables correspond to these columns. In (3.28), we obtain the result by counting the number of columns corresponding to x_{ij} and s_{ij} variables separately from the number of columns corresponding to y_i and t_i variables.

We now consider (3.28). For the case where k = 0, the basic columns selected correspond only to x_{ij} and s_{ij}; therefore B is singular according to Lemma 2.3. For k = 1, ..., n_F-1, we have only k columns corresponding to y_i or t_i variables. Similarly, Lemma 2.3 implies that B is singular. It follows that the number of submatrices of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F) that are singular up to this point is equal to T_0 + T_1 + ... + T_{n_F-1}. A nonzero value of det B can only occur when k >= n_F. Lemma 2.5 establishes that |det B| is 0 or 1 when k = n_F. Therefore, there are T_{n_F} submatrices of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F) with determinant absolute value 0 or 1. As k becomes greater than n_F (m_y + m_t > n_F), we know using Lemma 2.6 that we have k - n_F columns corresponding to y_i variables that are associated with \bar{M}, i.e., \bar{m} = k - n_F. Theorem 3.2 implies that the MPD of B is then U(k - n_F).

Table 3-3 shows the MPD of B for different values of k, for k = 0, ..., 2 n_F, together with T_k (the number of submatrices of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F) that have k columns corresponding to y_i and t_i variables).

Table 3-3. Maximum possible determinant of B and T_k for a given number k of columns corresponding to y_i and t_i variables.

k         : MPD of B   : T_k
0         : 0          : C^{2n_F}_0 C^{2 n_C n_F}_{n_C+n_C n_F+n_F}
1         : 0          : C^{2n_F}_1 C^{2 n_C n_F}_{n_C+n_C n_F+n_F-1}
...       : ...        : ...
n_F - 1   : 0          : C^{2n_F}_{n_F-1} C^{2 n_C n_F}_{n_C+n_C n_F+1}
n_F       : 1          : C^{2n_F}_{n_F} C^{2 n_C n_F}_{n_C+n_C n_F}
n_F + 1   : U(1) = 1   : C^{2n_F}_{n_F+1} C^{2 n_C n_F}_{n_C+n_C n_F-1}
n_F + 2   : U(2) = 1   : C^{2n_F}_{n_F+2} C^{2 n_C n_F}_{n_C+n_C n_F-2}
n_F + 3   : U(3) = 2   : C^{2n_F}_{n_F+3} C^{2 n_C n_F}_{n_C+n_C n_F-3}
n_F + 4   : U(4) = 3   : C^{2n_F}_{n_F+4} C^{2 n_C n_F}_{n_C+n_C n_F-4}
...       : ...        : ...
n_F + n   : U(n)       : C^{2n_F}_{n_F+n} C^{2 n_C n_F}_{n_C+n_C n_F-n}

Table 3-3 stops at k = n_F + n assuming that n_C >= n_F, so that n_F + n = 2 n_F. However, if n_C < n_F, i.e., n = n_C, then for k = n_F + n + 1, ..., 2 n_F the MPD remains equal to U(n). It is clear that as we increase k, the MPD either increases or remains unchanged. Hence, for h <= n, the sum over k = 0, ..., n_F + h of T_k is a lower bound on the number of submatrices of A of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F) whose determinant absolute values are less than or equal to U(h). For k > n_F + h, although the MPD is greater than U(h), there is still a number of submatrices of A, say T', whose determinants are less than or equal to U(h).

However, we do not know the value of T'. It follows that the total number of submatrices of A whose determinant absolute value is less than or equal to U(h) is at least \sum_{k=0}^{n_F+h} T_k.

We define a pseudo-basis to be a square submatrix of A_{n_C+n_C n_F+n_F, 2 n_C n_F+2 n_F} of size (n_C+n_C n_F+n_F, n_C+n_C n_F+n_F). Note that every basis is a pseudo-basis, but singular submatrices are pseudo-bases that are not bases. We are interested in the proportion of bases whose determinant absolute values are less than or equal to U(h). As a proxy, we compute the proportion of pseudo-bases that have determinant absolute values less than or equal to U(h). Let p(g(n_C,n_F) <= U(h)) denote the probability that the determinant absolute value of a pseudo-basis of the LPR of UFLP, for given n_C and n_F, chosen uniformly at random, is less than or equal to U(h). Using the observations in Table 3-3 and (2.1), we write

p( g(n_C,n_F) <= U(h) ) = \sum_{k=0}^{n_F+h} C^{2 n_F}_k C^{2 n_C n_F}_{n_C+n_C n_F+n_F-k} / C^{2 n_C n_F+2 n_F}_{n_C+n_C n_F+n_F}   if h < n,   (3.29.a)
                        = 1                                                                                              if h >= n.   (3.29.b)   (3.29)

Table 3-4 shows the value of p(g(n_C,n_F) <= U(h)) for values of n_C and n_F up to 35 and for different values of h.

PAGE 100

100 Table 3-4. Probability that the MPD of pseud o bases of the LPR of the UFLP for given Cn and Fn is less than or equal Uh h 2 3 4 5 6 7 8 9 10 Uh 1 2 3 5 9 32 56 144 320 ,CFgnn 2,2g 1 3,3 g 0.963 1 4,4 g 0.918 0.990 1 5,5g 0.881 0.974 0.998 1 6,6 g 0.850 0.956 0.992 0.999 1 7,7 g 0.825 0.938 0.985 0.998 1 1 8,8g 0.804 0.920 0.976 0.995 0.999 1 1 9,9 g 0.787 0.904 0.966 0.991 0.998 0.999 1 1 10,10 g 0.772 0.890 0.957 0.987 0.997 0.999 1 1 1 11,11g 0.759 0.876 0.947 0.982 0.995 0.999 1 1 1 12,12 g 0.748 0.864 0.938 0.976 0.993 0.998 0.999 1 1 13,13 g 0.738 0.853 0.929 0.971 0.990 0.997 0.999 1 1 14,14g 0.729 0.843 0.920 0.965 0.987 0.996 0.999 1 1 15,15 g 0.721 0.833 0.912 0.959 0.984 0.994 0.998 0.999 1 16,16 g 0.714 0.825 0.904 0.953 0.980 0.993 0.998 0.999 1 17,17g 0.708 0.817 0.896 0.947 0.976 0.991 0.997 0.999 1 18,18 g 0.702 0.809 0.889 0.942 0.973 0.989 0.999 0.999 1 19,19 g 0.696 0.802 0.882 0.936 0.969 0.986 0.995 0.998 0.999 20,20g 0.691 0.795 0.875 0.931 0.965 0.984 0.994 0.998 0.999 21,21g 0.687 0.789 0.869 0.925 0.961 0.982 0.992 0.997 0.999 22,22 g 0.682 0.783 0.862 0.920 0.957 0.979 0.991 0.996 0.999 23,23g 0.678 0.778 0.857 0.915 0.953 0.977 0.989 0.996 0.998 24,24 g 0.674 0.773 0.852 0.910 0.950 0.974 0.988 0.995 0.998 25,25 g 0.671 0.768 0.847 0.905 0.946 0.971 0.986 0.994 0.997 26,26g 0.668 0.763 0.842 0.901 0.942 0.969 0.984 0.993 0.997 27,27g 0.664 0.759 0.837 0.896 0.938 0.966 0.982 0.992 0.996 28,28 g 0.661 0.754 0.832 0.892 0.935 0.963 0.981 0.990 0.996


Table 3-4. Continued.

h           2      3      4      5      6      7      8      9      10
U_h         1      2      3      5      9      32     56     144    320
pg(29,29)   0.658  0.750  0.827  0.887  0.931  0.960  0.979  0.989  0.995
pg(30,30)   0.656  0.747  0.823  0.883  0.927  0.958  0.977  0.988  0.994
pg(31,31)   0.653  0.743  0.819  0.879  0.924  0.955  0.975  0.987  0.994
pg(32,32)   0.651  0.739  0.815  0.875  0.920  0.952  0.973  0.985  0.993
pg(33,33)   0.648  0.736  0.811  0.871  0.917  0.949  0.971  0.984  0.992
pg(34,34)   0.646  0.733  0.807  0.868  0.914  0.947  0.969  0.983  0.991
pg(35,35)   0.644  0.730  0.804  0.864  0.910  0.944  0.967  0.981  0.990

Although Theorem 3.1 implies that the MPD of bases of the LPR of UFLP for $n_C = 35$ and $n_F = 35$ can be as large as $U_{35} \approx 3\times10^{17}$, Table 3-4 shows that more than 99% of the pseudo-bases of the LPR of UFLP for $n_C = 35$ and $n_F = 35$ have determinant less than or equal to $U_{10} = 320$. It should be noted that 320 is not a large number when compared to the size of the matrix $A$ of UFLP for $n_C = 35$ and $n_F = 35$, which is $1295 \times 2520$. Further, as $U_n$ is a function of $n$, where $n = \min\{n_C, n_F\}$, we have that $pg(100000, 35, U_{10})$ and $pg(35, 100000, U_{10})$ are at least as large as $pg(35, 35, U_{10}) \ge 0.99$.
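A minimal sketch (not from the original text) for evaluating the ratio in (3.29) is given below. It relies on the counting formula as reconstructed above, so its output can be compared against the corresponding row of Table 3-4; any mismatch would indicate that the reconstruction differs from the author's exact counting.

```python
from math import comb

def pg(n_C: int, n_F: int, h: int) -> float:
    """Fraction of pseudo-bases whose MPD (Table 3-3) is at most U_h.
    Evaluates the ratio in (3.29) with exact integer arithmetic."""
    n = min(n_C, n_F)
    if h >= n:
        return 1.0
    m = n_C * n_F + n_C + n_F                     # order of a pseudo-basis
    good = sum(comb(2 * n_F, k) * comb(2 * n_C * n_F, m - k)
               for k in range(n_F + h + 1))
    total = comb(2 * n_C * n_F + 2 * n_F, m)
    return good / total

if __name__ == "__main__":
    for h in (2, 6, 10):
        print(h, round(pg(35, 35, h), 3))          # compare with the pg(35,35) row
```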


CHAPTER 4
SPECIAL CASES

In this chapter, we study two small instances of UFLP. In the first, we assume that we have two customers and/or two facilities and we show that the LPR of UFLP always describes the convex hull of its integer solutions. In the second, we assume that we have three customers and three facilities and we show that the convex hull of integer solutions can be obtained by adding six inequalities to its LP formulation.

4.1 Case 1: Two Customers and/or Two Facilities

If $n_C \le 2$ and/or $n_F \le 2$, then $n \le 2$. Using Theorem 3.1 and Table 3-2, we know that the absolute value of the MPD of bases of the LPR of the UFLP with ($n_C \le 2$ and/or $n_F \le 2$) is less than or equal to $U_2 = 1$. This means that as long as $n_C \le 2$ and/or $n_F \le 2$, there always is an optimal solution to the LPR of UFLP that is integer.

Theorem 4.1: If $n_C \le 2$ and/or $n_F \le 2$, then the LPR of UFLP describes the convex hull of its integer solutions.

We note that this result had been obtained before in the literature; see [15].
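As an illustrative check of Theorem 4.1 (not part of the original text), the sketch below builds the LPR of a small instance with $n_F = 2$, solves it, enumerates all integer solutions, and compares the two optimal values. The instance data and the use of SciPy's linprog are assumptions made only for this example.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lpr_value(c, f):
    """Optimal value of the LPR of UFLP: min sum c_ij x_ij + sum f_i y_i
    s.t. sum_i x_ij = 1, x_ij <= y_i, 0 <= y_i <= 1, x >= 0."""
    nF, nC = c.shape
    nx = nF * nC
    cost = np.concatenate([c.ravel(), f])               # variables: x (row-major), then y
    A_eq = np.zeros((nC, nx + nF))
    for j in range(nC):                                  # assignment constraints
        A_eq[j, [i * nC + j for i in range(nF)]] = 1.0
    A_ub = np.zeros((nx, nx + nF))
    for i in range(nF):                                  # x_ij - y_i <= 0
        for j in range(nC):
            A_ub[i * nC + j, i * nC + j] = 1.0
            A_ub[i * nC + j, nx + i] = -1.0
    bounds = [(0, None)] * nx + [(0, 1)] * nF
    res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(nx),
                  A_eq=A_eq, b_eq=np.ones(nC), bounds=bounds, method="highs")
    return res.fun

def ip_value(c, f):
    """Optimal value over integer solutions, by enumerating open-facility sets."""
    nF, nC = c.shape
    best = np.inf
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(nF), r) for r in range(1, nF + 1)):
        val = sum(f[i] for i in S) + sum(min(c[i, j] for i in S) for j in range(nC))
        best = min(best, val)
    return best

rng = np.random.default_rng(0)
c = rng.integers(1, 20, size=(2, 5)).astype(float)       # n_F = 2, n_C = 5 (illustrative)
f = rng.integers(1, 20, size=2).astype(float)
print(lpr_value(c, f), ip_value(c, f))                    # equal up to solver tolerance, per Theorem 4.1
```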


4.2 Case 2: Three Customers and Three Facilities

If $n_C = n_F = 3$, then $n = 3$. Using Theorem 3.1 and Table 3-2, we know that the absolute value of the MPD of bases of the LPR of the UFLP with $n_C = n_F = 3$ is less than or equal to $U_3 = 2$. Further, Table 3-2 establishes that there are exactly six (0,1) matrices of size (3,3) whose determinant has absolute value 2. It follows from Theorem 2.3 that there are exactly six bases of the LPR of UFLP for $n_C = n_F = 3$ that have a determinant whose absolute value is equal to 2. We show next that the basic feasible solutions corresponding to these bases are fractional and feasible. We then construct six inequalities that can be added to the LP formulation of UFLP to remove these fractional solutions and produce the convex hull of integer solutions.

First, we show how to construct the above-mentioned six bases. A (0,1) matrix of size (3,3) whose determinant has absolute value equal to 2 can be obtained using Lemma 2.9:

$U_{3,3} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. \qquad (4.1)$

According to Table 3-2, there are only six (0,1) matrices of size (3,3) whose determinant has absolute value equal to 2. It is easily verified that these six matrices can be obtained by considering either all the possible column permutations or all the possible row permutations of $U_{3,3}$ in (4.1). In fact, since the matrix $U_{3,3}$ has the form of (4.1), all the possible column permutations of $U_{3,3}$ produce six matrices that are identical to the six matrices produced using all the possible row permutations. It follows that the six (0,1) matrices of size (3,3) whose determinant has absolute value equal to 2 are

$\begin{pmatrix} 0&1&1\\1&0&1\\1&1&0\end{pmatrix},\ \begin{pmatrix} 0&1&1\\1&1&0\\1&0&1\end{pmatrix},\ \begin{pmatrix} 1&0&1\\0&1&1\\1&1&0\end{pmatrix},\ \begin{pmatrix} 1&1&0\\0&1&1\\1&0&1\end{pmatrix},\ \begin{pmatrix} 1&0&1\\1&1&0\\0&1&1\end{pmatrix},\ \text{and}\ \begin{pmatrix} 1&1&0\\1&0&1\\0&1&1\end{pmatrix}. \qquad (4.2)$

Given that $n_C = n_F = 3$, we use Algorithm UFLP-BASIS to construct one basis for each of the matrices shown in (4.2); see Figure 4-1. Theorem 2.2 implies that these bases have determinant whose absolute value is equal to 2.
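The claim that the matrices in (4.2) are the only (0,1) matrices of size (3,3) with determinant of absolute value 2 is easy to check by enumeration; a minimal sketch (not from the original text) is given below, together with a check that these matrices are exactly the row permutations of $U_{3,3}$ in (4.1).

```python
import itertools
import numpy as np

# Enumerate all 2^9 binary 3x3 matrices and keep those with |det| = 2.
maximal = []
for bits in itertools.product((0, 1), repeat=9):
    M = np.array(bits, dtype=float).reshape(3, 3)
    if round(abs(np.linalg.det(M))) == 2:
        maximal.append(np.array(bits).reshape(3, 3))
print(len(maximal))                      # expected: 6, per Table 3-2

# The same six matrices arise as row permutations of U_{3,3} in (4.1).
U33 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
perms = {tuple(U33[list(p)].ravel()) for p in itertools.permutations(range(3))}
print(perms == {tuple(M.ravel()) for M in maximal})   # expected: True
```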


Figure 4-1. The six bases of the LPR of UFLP (with $n_C = n_F = 3$) that have determinant absolute values equal to 2. [Each of the six panels marks which fifteen of the 24 columns $x_{11},\ldots,x_{33}$, $s_{11},\ldots,s_{33}$, $y_1, y_2, y_3$, $t_1, t_2, t_3$ are basic.]

For each of the above bases, we compute the corresponding basic solution $B^{-1}b$, where $b$ is the right-hand side shown in (1.6). The basic solution associated with each of the six bases is non-degenerate, and it can be verified that all basic variables have value 0.5. It follows that each of these LP vertices has exactly 9 neighbor vertices. We next show that, for each of these basic solutions, all neighboring basic solutions correspond to integer solutions.

Consider the bases in Figure 4-1; it can be verified that every basis differs from all the other bases in more than two basic variables. We know that two bases are neighbors only if they differ in exactly one basic variable (as we do one pivot in the simplex algorithm to obtain a neighbor basis).


It follows that each of the above “fractional” bases does not have any neighbor that corresponds to another “fractional” basis. Since we know that these six bases are the only fractional bases in the LPR of UFLP and they are not neighbors of each other, we conclude that for each of the six bases in Figure 4-1 all the neighbors in the LPR correspond to integer solutions. For each of these six bases, we can therefore pass a plane through its nine neighbors to obtain a valid inequality for the convex hull of integer solutions of UFLP. It is easily verified that this cut is facet-defining for the convex hull of integer solutions to UFLP.

We next describe how to obtain the cut corresponding to the first basis in Figure 4-1. The same argument can be applied to the other bases after permuting the appropriate indices. The basic solution corresponding to the first basis in Figure 4-1 is as follows:

$x_{11}=x_{12}=x_{21}=x_{23}=x_{32}=x_{33}=s_{13}=s_{22}=s_{31}=y_1=y_2=y_3=t_1=t_2=t_3=0.5. \qquad (4.3)$

The simplex tableau corresponding to this solution, expressing each basic variable in terms of the non-basic variables $x_{13}$, $x_{22}$, $x_{31}$, $s_{11}$, $s_{12}$, $s_{21}$, $s_{23}$, $s_{32}$, $s_{33}$, is

$x_{11} = 0.5 + 0.5(\ x_{13} - x_{22} - x_{31} - s_{11} + s_{12} + s_{21} - s_{23} + s_{32} - s_{33}),$  (4.4.a)
$x_{12} = 0.5 + 0.5(\ x_{13} - x_{22} - x_{31} + s_{11} - s_{12} + s_{21} - s_{23} + s_{32} - s_{33}),$  (4.4.b)
$x_{21} = 0.5 + 0.5(-x_{13} + x_{22} - x_{31} + s_{11} - s_{12} - s_{21} + s_{23} - s_{32} + s_{33}),$  (4.4.c)
$x_{23} = 0.5 + 0.5(-x_{13} + x_{22} - x_{31} + s_{11} - s_{12} + s_{21} - s_{23} - s_{32} + s_{33}),$  (4.4.d)
$x_{32} = 0.5 + 0.5(-x_{13} - x_{22} + x_{31} - s_{11} + s_{12} - s_{21} + s_{23} - s_{32} + s_{33}),$  (4.4.e)
$x_{33} = 0.5 + 0.5(-x_{13} - x_{22} + x_{31} - s_{11} + s_{12} - s_{21} + s_{23} + s_{32} - s_{33}),$  (4.4.f)
$s_{13} = 0.5 + 0.5(-x_{13} - x_{22} - x_{31} + s_{11} + s_{12} + s_{21} - s_{23} + s_{32} - s_{33}),$  (4.4.g)
$s_{22} = 0.5 + 0.5(-x_{13} - x_{22} - x_{31} + s_{11} - s_{12} + s_{21} + s_{23} - s_{32} + s_{33}),$  (4.4.h)
$s_{31} = 0.5 + 0.5(-x_{13} - x_{22} - x_{31} - s_{11} + s_{12} - s_{21} + s_{23} + s_{32} + s_{33}),$  (4.4.i)
$y_{1}\ = 0.5 + 0.5(\ x_{13} - x_{22} - x_{31} + s_{11} + s_{12} + s_{21} - s_{23} + s_{32} - s_{33}),$  (4.4.j)
$y_{2}\ = 0.5 + 0.5(-x_{13} + x_{22} - x_{31} + s_{11} - s_{12} + s_{21} + s_{23} - s_{32} + s_{33}),$  (4.4.k)
$y_{3}\ = 0.5 + 0.5(-x_{13} - x_{22} + x_{31} - s_{11} + s_{12} - s_{21} + s_{23} + s_{32} + s_{33}),$  (4.4.l)
$t_{1}\ = 0.5 + 0.5(-x_{13} + x_{22} + x_{31} - s_{11} - s_{12} - s_{21} + s_{23} - s_{32} + s_{33}),$  (4.4.m)
$t_{2}\ = 0.5 + 0.5(\ x_{13} - x_{22} + x_{31} - s_{11} + s_{12} - s_{21} - s_{23} + s_{32} - s_{33}),$  (4.4.n)
$t_{3}\ = 0.5 + 0.5(\ x_{13} + x_{22} - x_{31} + s_{11} - s_{12} + s_{21} - s_{23} - s_{32} - s_{33}).$  (4.4.o)


When pivoting variable $x_{13}$ into the basis, we obtain that $x_{13} = 1$ while all other non-basic variables stay at zero. A similar observation can be made for each of the other non-basic variables. It follows that the cutting plane we are looking for passes through the nine points, in the space of the non-basic variables $(x_{13}, x_{22}, x_{31}, s_{11}, s_{12}, s_{21}, s_{23}, s_{32}, s_{33})$, in which exactly one of these variables equals 1 and the remaining eight equal 0:

$(1,0,0,0,0,0,0,0,0),\ (0,1,0,0,0,0,0,0,0),\ (0,0,1,0,0,0,0,0,0),\ (0,0,0,1,0,0,0,0,0),\ (0,0,0,0,1,0,0,0,0),$
$(0,0,0,0,0,1,0,0,0),\ (0,0,0,0,0,0,1,0,0),\ (0,0,0,0,0,0,0,1,0),\ (0,0,0,0,0,0,0,0,1). \qquad (4.5)$

Therefore, the inequality we are looking for is of the form

$x_{13} + x_{22} + x_{31} + s_{11} + s_{12} + s_{21} + s_{23} + s_{32} + s_{33} \ge 1. \qquad (4.6)$

We conclude that inequality (4.6) is a facet of the convex hull of integer solutions to UFLP that removes the fractional solution corresponding to the first basis in Figure 4-1. Repeating the aforementioned steps for the other bases in Figure 4-1, we obtain the following inequalities:

$x_{13} + x_{22} + x_{31} + s_{11} + s_{12} + s_{21} + s_{23} + s_{32} + s_{33} \ge 1, \qquad (4.7.\mathrm{a})$
$x_{13} + x_{21} + x_{32} + s_{11} + s_{12} + s_{22} + s_{23} + s_{31} + s_{33} \ge 1, \qquad (4.7.\mathrm{b})$
$x_{12} + x_{23} + x_{31} + s_{11} + s_{13} + s_{21} + s_{22} + s_{32} + s_{33} \ge 1, \qquad (4.7.\mathrm{c})$
$x_{12} + x_{21} + x_{33} + s_{11} + s_{13} + s_{22} + s_{23} + s_{31} + s_{32} \ge 1, \qquad (4.7.\mathrm{d})$
$x_{11} + x_{22} + x_{33} + s_{12} + s_{13} + s_{21} + s_{23} + s_{31} + s_{32} \ge 1, \qquad (4.7.\mathrm{e})$
$x_{11} + x_{23} + x_{32} + s_{12} + s_{13} + s_{21} + s_{22} + s_{31} + s_{33} \ge 1. \qquad (4.7.\mathrm{f})$
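As a quick numeric sanity check (not in the original text), the sketch below enumerates all integer feasible solutions of the instance with $n_C = n_F = 3$, confirms that each inequality in (4.7) holds for them, and confirms that the fractional point (4.3) violates (4.7.a). The variable encodings are chosen only for this illustration.

```python
import itertools

# Index patterns of the x terms in the six cuts of (4.7); the s terms of each
# cut are the complementary positions.
X_PATTERNS = [
    {(1, 3), (2, 2), (3, 1)},  # (4.7.a)
    {(1, 3), (2, 1), (3, 2)},  # (4.7.b)
    {(1, 2), (2, 3), (3, 1)},  # (4.7.c)
    {(1, 2), (2, 1), (3, 3)},  # (4.7.d)
    {(1, 1), (2, 2), (3, 3)},  # (4.7.e)
    {(1, 1), (2, 3), (3, 2)},  # (4.7.f)
]
ALL = {(i, j) for i in (1, 2, 3) for j in (1, 2, 3)}

def lhs(pattern, x, y):
    s = {(i, j): y[i] - x[(i, j)] for (i, j) in ALL}      # slack of x_ij <= y_i
    return sum(x[p] for p in pattern) + sum(s[p] for p in ALL - pattern)

# All integer feasible solutions: open a nonempty facility set and assign
# every customer to some open facility.
valid = True
for open_set in itertools.chain.from_iterable(
        itertools.combinations((1, 2, 3), r) for r in (1, 2, 3)):
    y = {i: int(i in open_set) for i in (1, 2, 3)}
    for assign in itertools.product(open_set, repeat=3):   # facility of customer j
        x = {(i, j): int(assign[j - 1] == i) for (i, j) in ALL}
        valid &= all(lhs(p, x, y) >= 1 for p in X_PATTERNS)
print(valid)                                               # expected: True

# The fractional vertex (4.3): every basic variable equals 0.5.
y_frac = {1: 0.5, 2: 0.5, 3: 0.5}
x_frac = {(i, j): 0.0 if (i, j) in X_PATTERNS[0] else 0.5 for (i, j) in ALL}
print(lhs(X_PATTERNS[0], x_frac, y_frac))                  # expected: 0.0 < 1
```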


In summary, we have established that for the case where $n_C = n_F = 3$, there are exactly six basic feasible solutions of the LPR of UFLP that are fractional. The inequalities in (4.7) remove these fractional solutions and yield the convex hull of integer solutions to UFLP.

Theorem 4.2: If $n_C = n_F = 3$, then the convex hull of integer solutions can be obtained by adding the six inequalities in (4.7) to the LPR of UFLP.

We next analyze the inequalities in (4.7) with the view of generalizing them to other instances of UFLP. To interpret the inequalities in (4.7), we use (1.4.c) to substitute the slack variables $s_{ij}$ by $y_i - x_{ij}$. We write the first inequality (4.7.a) as

$x_{13} + x_{22} + x_{31} - (x_{11} + x_{21}) - (x_{12} + x_{32}) - (x_{23} + x_{33}) + 2y_1 + 2y_2 + 2y_3 \ge 1. \qquad (4.8)$

From (1.4.b) we know that $x_{11} + x_{21} + x_{31} = 1$. Therefore, $x_{11} + x_{21}$ can be substituted by $1 - x_{31}$. Applying a similar transformation to $x_{12} + x_{32}$ and $x_{23} + x_{33}$, we obtain

$x_{13} + x_{22} + x_{31} + y_1 + y_2 + y_3 \ge 2, \qquad (4.9)$

which is equivalent to (4.7.a). The same procedure can be applied to all inequalities in (4.7), resulting in the following six inequalities:

$x_{13} + x_{22} + x_{31} + y_1 + y_2 + y_3 \ge 2, \qquad (4.10.\mathrm{a})$
$x_{13} + x_{21} + x_{32} + y_1 + y_2 + y_3 \ge 2, \qquad (4.10.\mathrm{b})$
$x_{12} + x_{23} + x_{31} + y_1 + y_2 + y_3 \ge 2, \qquad (4.10.\mathrm{c})$
$x_{12} + x_{21} + x_{33} + y_1 + y_2 + y_3 \ge 2, \qquad (4.10.\mathrm{d})$
$x_{11} + x_{22} + x_{33} + y_1 + y_2 + y_3 \ge 2, \qquad (4.10.\mathrm{e})$
$x_{11} + x_{23} + x_{32} + y_1 + y_2 + y_3 \ge 2. \qquad (4.10.\mathrm{f})$
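The equivalence of (4.7.a) and (4.10.a) under the substitutions above can also be checked numerically. The sketch below (illustrative, not from the original) samples points that satisfy the assignment equalities and the slack definition, and verifies the resulting identity between the two left-hand sides.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    y = rng.random(3)                        # arbitrary y_1, y_2, y_3
    x = rng.random((3, 3))
    x /= x.sum(axis=0, keepdims=True)        # enforce sum_i x_ij = 1 for every j
    s = y[:, None] - x                       # slack definition s_ij = y_i - x_ij
    lhs_47a = x[0, 2] + x[1, 1] + x[2, 0] + s.sum() - (s[0, 2] + s[1, 1] + s[2, 0])
    lhs_410a = x[0, 2] + x[1, 1] + x[2, 0] + y.sum()
    # Under the substitutions, LHS(4.7.a) - 1 = 2 * (LHS(4.10.a) - 2).
    assert abs((lhs_47a - 1) - 2 * (lhs_410a - 2)) < 1e-9
print("identity holds on all sampled points")
```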


We interpret the first inequality of (4.10) as follows. A matching of the set of facilities, $F$, to the set of customers, $C$, namely $\{x_{13}, x_{22}, x_{31}\}$, is first selected. Any solution $(x, y)$ for which $x_{13} + x_{22} + x_{31} \ge 1$ clearly satisfies (4.10.a), since we have $y_1 + y_2 + y_3 \ge 1$. Now consider any solution $(x, y)$ with $x_{13} + x_{22} + x_{31} = 0$; then clearly $y_1 + y_2 + y_3 \ge 2$, since no single facility can handle all customers given the condition that $x_{13} + x_{22} + x_{31} = 0$. The remaining inequalities (4.10.b) to (4.10.f) are obtained using different permutations of the matching of $F$ to $C$.

For UFLP with $n_C = n_F = 3$, a complete bipartite graph between the set of facilities, $F$, and the set of customers, $C$, is given in Figure 4-2. Each inequality in (4.10) gives a matching between the set of facilities and the set of customers. We show in Figure 4-3(a) to (f) the bipartite graph between the set of facilities and the set of customers that corresponds to the matching associated with each of the inequalities in (4.10).

Figure 4-2. A complete bipartite graph between the set of facilities and the set of customers for UFLP with $n_C = n_F = 3$.


Figure 4-3. The bipartite graph between the set of facilities and the set of customers that corresponds to the matching associated with each of the inequalities in (4.10): a) (4.10.a), b) (4.10.b), c) (4.10.c), d) (4.10.d), e) (4.10.e), and f) (4.10.f).

Using this interpretation, we next provide a generalization of the inequalities to instances where $n_C \ge 3$ and $n_F \ge 3$.

Definition: Let $G = (F, C, E)$ be the complete bipartite graph between the set of facilities, $F$, and the set of customers, $C$; we denote by $E$ the set of all edges of $G$. We say that $\hat{E} \subseteq E$ "covers the facilities" if for every $i \in F$ there exists $j \in C$ such that $(i, j) \in \hat{E}$.

Note that in each bipartite graph in Figure 4-3(a) to (f), the set of edges covers the facilities.

Theorem 4.3: Assume that $\hat{E}$ covers the facilities. Then, the following inequality


$\sum_{(i,j) \in \hat{E}} x_{ij} + \sum_{i \in F} y_i \ge 2 \qquad (4.11)$

is valid for UFLP.

Proof: Consider any feasible solution $(x, y)$ to UFLP. If any of the $x_{ij}$ variables corresponding to $(i,j) \in \hat{E}$ is equal to 1, then (4.11) is satisfied, as $\sum_{i \in F} y_i \ge 1$ is satisfied by all feasible solutions to UFLP. Further, if $\sum_{(i,j) \in \hat{E}} x_{ij} = 0$, i.e., $x_{ij} = 0$ for all $(i,j) \in \hat{E}$, we claim that $\sum_{i \in F} y_i \ge 2$, showing that (4.11) is satisfied. Assume for contradiction that there exists a feasible solution with $x_{ij} = 0$ for all $(i,j) \in \hat{E}$ and $\sum_{i \in F} y_i \le 1$. Let $k \in F$ be the only index such that $y_k = 1$. Because $\hat{E}$ covers $F$, there exists $j \in C$ such that $(k,j) \in \hat{E}$, and therefore $x_{kj} = 0$. Since $\sum_{i \in F} x_{ij} = \sum_{i \in F \setminus \{k\}} x_{ij} = 1$, there exists $k' \in F$ with $k' \ne k$ such that $x_{k'j} = 1$. It follows that $y_{k'} = 1$, a contradiction to the fact that $\sum_{i \in F} y_i \le 1$.

Theorem 4.3 gives a large family of inequalities for UFLP. This family leads to a convex hull description of the integer solutions to the problem when $n_C = n_F = 3$. An interesting direction of future research is to determine whether it leads to other convex hull descriptions for $n_C \ge 3$ and/or $n_F \ge 3$.
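A brute-force confirmation of Theorem 4.3 for the case $n_C = n_F = 3$ is sketched below (illustrative only, not from the original text): it enumerates every edge set $\hat{E}$ that covers the facilities together with every integer feasible solution, and checks (4.11).

```python
import itertools

F = C = (0, 1, 2)
EDGES = [(i, j) for i in F for j in C]

def covers(E_hat):
    """True if every facility is incident to at least one edge of E_hat."""
    return all(any(i == k for (k, _) in E_hat) for i in F)

def integer_solutions():
    """Yield (x, y) for every integer feasible solution of the 3x3 UFLP."""
    for r in (1, 2, 3):
        for open_set in itertools.combinations(F, r):
            y = [int(i in open_set) for i in F]
            for assign in itertools.product(open_set, repeat=3):
                x = {(i, j): int(assign[j] == i) for (i, j) in EDGES}
                yield x, y

ok = True
for r in range(3, len(EDGES) + 1):
    for E_hat in itertools.combinations(EDGES, r):
        if not covers(E_hat):
            continue
        for x, y in integer_solutions():
            ok &= sum(x[e] for e in E_hat) + sum(y) >= 2
print(ok)   # expected: True, per Theorem 4.3
```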


CHAPTER 5
EXPERIMENTAL RESULTS

In this chapter, we present experimental results on solving the group relaxation of UFLP. We construct multiple instances of different sizes. The parameters $n_C$ and $n_F$ are chosen as follows. First, we choose $n_F$ to take all possible values from 4 to 50. Then, for each value of $n_F$, we create several values of $n_C$. When $n_F = 4, \ldots, 9$, we generate all values of $n_C$ from $n_F$ to $n_F + 5$. When $n_F = 10, \ldots, 50$, we generate all values of $n_C$ from $n_F - 5$ to $n_F + 5$. Table 5-1 describes the selection of the parameters $n_C$ and $n_F$ in the construction of the experiments. (A short sketch that enumerates this grid follows the table.)

Table 5-1. Selection of the parameters $n_C$ and $n_F$ in the construction of UFLP experiments.

n_C  n_F | n_C  n_F | n_C  n_F | n_C  n_F | n_C  n_F
 4    4  |  5    5  |  9    9  |  8   10  | 14   10
 5    4  | ..    5  | ..    9  |  9   10  | 15   10
 6    4  | 10    5  | 14    9  | 10   10  | ..   ..
 7    4  |  6    6  |  5   10  | 11   10  | 45   50
 8    4  | ..   ..  |  6   10  | 12   10  | ..   50
 9    4  | ..   ..  |  7   10  | 13   10  | 55   50
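A minimal sketch (not from the original text) that enumerates the $(n_C, n_F)$ parameter settings described above:

```python
def parameter_grid():
    """Yield (n_C, n_F) pairs as described in Table 5-1."""
    for n_F in range(4, 51):
        lo = n_F if n_F <= 9 else n_F - 5
        for n_C in range(lo, n_F + 6):
            yield n_C, n_F

settings = list(parameter_grid())
print(len(settings), settings[0], settings[-1])   # first is (4, 4), last is (55, 50)
```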


For each setting of the parameters $n_C$ and $n_F$, we generate 1000 instances with different cost vectors. In all instances, the costs satisfy the triangle inequality. For each facility $i \in F$, we generate random coordinates $(h_i, k_i)$ that are uniformly distributed over the range $[0, 10^4]$. Also, for each customer $j \in C$, we generate random coordinates $(h_j, k_j)$ that are uniformly distributed over the range $[0, 10^4]$. Then, the metric distance $d_{ij}$ between each facility $i$ and each customer $j$ is computed as

$d_{ij} = \sqrt{(h_i - h_j)^2 + (k_i - k_j)^2}. \qquad (5.1)$

Similarly, we generate the cost per unit distance and the cost of opening facility $i$, $f_i$, such that they are uniformly distributed over the range $[0, 10^4]$. Finally, we compute the cost of assigning customer $j$ to facility $i$, $c_{ij}$, as the product of the cost per unit distance and $d_{ij}$.
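A sketch of this instance generator is given below (not from the original code; the interval $[0, 10^4]$ and the use of a single per-instance cost-per-unit-distance factor are assumptions based on the description above).

```python
import numpy as np

def generate_instance(n_C, n_F, rng):
    """One random UFLP instance following the construction of Chapter 5.
    Coordinates, opening costs, and the cost per unit distance are drawn
    uniformly from [0, 1e4] (assumed range); c_ij = (cost per unit) * d_ij."""
    fac = rng.uniform(0.0, 1e4, size=(n_F, 2))       # (h_i, k_i) for facilities
    cus = rng.uniform(0.0, 1e4, size=(n_C, 2))       # (h_j, k_j) for customers
    d = np.linalg.norm(fac[:, None, :] - cus[None, :, :], axis=2)   # distances (5.1)
    per_unit = rng.uniform(0.0, 1e4)                  # assumed: one factor per instance
    f = rng.uniform(0.0, 1e4, size=n_F)               # opening cost f_i
    c = per_unit * d                                  # assignment cost c_ij
    return c, f

rng = np.random.default_rng(2024)
c, f = generate_instance(n_C=5, n_F=10, rng=rng)
print(c.shape, f.shape)                               # (10, 5) and (10,)
```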


Let $x^{Lp}$ be an optimal solution of the LPR of UFLP and let $z^{Lp}$ be its associated optimal value. In each instance, if $x^{Lp}$ is integer, then the instance is ignored; otherwise, $x^{Lp}$ is non-integer and we solve the corner relaxation of UFLP corresponding to that instance. In our experiments, LP problems are solved using CPLEX.

For every parameter setting $(n_C, n_F)$, we denote by $R$ the number of instances whose $x^{Lp}$ was non-integer. If $R = 0$, we obtained an integer solution for the LPR of UFLP in all of the 1000 trials; therefore, we do not need to solve any corner relaxation of UFLP, and this parameter setting is omitted from our result table. In Table 5-2 we present results for settings where $R > 0$. Every row represents the instances of a single parameter setting $(n_C, n_F)$. The first two columns present the values of $n_C$ and $n_F$ corresponding to every instance, while the third column gives the $R$ value for this setting.

For any specific instance where $R > 0$, we compute the absolute value of the determinant of the basis corresponding to $x^{Lp}$. The column $D_{\max}$ contains the maximum of these determinants. In each instance where $x^{Lp}$ is non-integer, we compute an optimal solution of the group minimization problem. This is done using our implementation of Dijkstra's algorithm in Matlab. Denote by $x^{Gr}$ and $z^{Gr}$ the solution and objective value corresponding to this problem. For each instance, we check the feasibility of $x^{Gr}$ by checking whether it is nonnegative. The inf column in Table 5-2 reports the number of instances of a parameter setting for which $x^{Gr}$ is infeasible.

Let $x^{Ip}$ be an optimal integer solution of the particular instance of UFLP and denote by $z^{Ip}$ its associated optimal value. If $x^{Gr}$ is feasible, then we know that it is an optimal solution to the MIP formulation of UFLP and hence $x^{Gr} = x^{Ip}$ and $z^{Gr} = z^{Ip}$. Otherwise, we compute it using CPLEX. For each of the fractional instances, the value $(z^{Gr} - z^{Lp})/(z^{Ip} - z^{Lp})$ represents how close $z^{Gr}$ is to $z^{Lp}$. Similarly, $(z^{Ip} - z^{Gr})/(z^{Ip} - z^{Lp})$ represents how close $z^{Gr}$ is to $z^{Ip}$. The last columns of Table 5-2 give the average $\bar{x}$ and standard deviation $\sigma_x$ of $(z^{Gr} - z^{Lp})/(z^{Ip} - z^{Lp})$ and of $(z^{Ip} - z^{Gr})/(z^{Ip} - z^{Lp})$, respectively.
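The two normalized gap measures are straightforward to compute; the sketch below (illustrative, with made-up objective values) shows the statistics reported in the last columns of Table 5-2.

```python
import statistics

def gap_ratios(z_lp, z_gr, z_ip):
    """Return ((z_gr - z_lp)/(z_ip - z_lp), (z_ip - z_gr)/(z_ip - z_lp)).
    Assumes z_lp < z_ip (fractional instance), typically z_lp <= z_gr <= z_ip."""
    gap = z_ip - z_lp
    return (z_gr - z_lp) / gap, (z_ip - z_gr) / gap

# Hypothetical (z_Lp, z_Gr, z_Ip) triples for a few fractional instances.
instances = [(100.0, 103.0, 110.0), (80.0, 81.0, 90.0), (55.0, 59.0, 60.0)]
closed = [gap_ratios(*t)[0] for t in instances]
remaining = [gap_ratios(*t)[1] for t in instances]
print(statistics.mean(closed), statistics.stdev(closed))
print(statistics.mean(remaining), statistics.stdev(remaining))
```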


Next, we interpret the first row of Table 5-2. For the parameter setting $n_C = 5$ and $n_F = 10$, 7 instances ($R = 7$) out of the 1000 instances generated were fractional. Hence, $x^{Gr}$ and $x^{Ip}$ were computed for each of these instances. The group relaxation solution was infeasible for 6 of these instances (inf = 6). On average over these instances, the $z^{Gr}$ value equals $z^{Lp} + 0.3251\,(z^{Ip} - z^{Lp})$, that is, $z^{Ip} - 0.6749\,(z^{Ip} - z^{Lp})$. This shows that group relaxations of problems of these sizes close about 32% of the gap that exists between the LP and IP values when the LPR does not solve the IP.

Table 5-2. Experimental results. The last four columns report the mean and standard deviation of $(z^{Gr}-z^{Lp})/(z^{Ip}-z^{Lp})$, followed by the mean and standard deviation of $(z^{Ip}-z^{Gr})/(z^{Ip}-z^{Lp})$.

n_C  n_F   R   D_max  inf   mean      std       mean      std
5    10    7   2      6     0.3251    0.3841    0.6749    0.3841
9    12    6   2      5     0.071141  0.16158   0.92886   0.16158
38   42    9   2      4     0.19018   0.12144   0.80982   0.12144
24   19    4   2      2     0.19828   0.11136   0.80172   0.11136
33   31    5   2      2     0.28837   0.16097   0.71163   0.16097
49   53    9   2      3     0.009701  0.082563  0.9903    0.082563
16   17    8   2      6     0.28879   0.009713  0.71121   0.009713
4    5     10  2      6     0.4661    0.23046   0.5339    0.23046
40   38    10  2      4     0.053436  0.055581  0.94656   0.055581
19   14    8   2      5     0.36607   0.28093   0.63393   0.28093
30   32    4   2      2     0.48526   0.143     0.51474   0.143
10   5     3   2      1     0.30444   0.11459   0.69556   0.11459
4    8     6   2      5     0.35983   0.26111   0.64017   0.26111
8    10    7   2      3     0.15138   0.26122   0.84862   0.26122
42   37    6   2      5     0.22951   0.060884  0.77049   0.060884
16   21    10  2      5     0.024014  0.041141  0.97599   0.041141
20   15    7   2      3     0.19268   0.060913  0.80732   0.060913
33   35    6   2      2     0.18086   0.21959   0.81914   0.21959
41   43    5   2      3     0.14379   0.29466   0.85621   0.29466
39   43    6   2      1     0.40836   0.17853   0.59164   0.17853
19   15    8   2      4     0.22526   0.058017  0.77474   0.058017
10   13    7   2      2     0.40332   0.25001   0.59668   0.25001
33   28    5   2      3     0.39509   0.31746   0.60491   0.31746
15   20    5   2      3     0.14148   0.076488  0.85852   0.076488
32   29    8   2      6     0.034155  0.30484   0.96585   0.30484
30   27    9   2      6     0.027465  0.074143  0.97253   0.074143
17   21    8   2      8     0.31876   0.29283   0.68124   0.29283
50   48    9   2      8     0.21214   0.30076   0.78786   0.30076
37   38    8   2      6     0.45277   0.18318   0.54723   0.18318
22   19    10  2      6     0.20866   0.16708   0.79134   0.16708
42   38    7   2      2     0.077029  0.31146   0.92297   0.31146
18   22    10  2      1     0.27      0.022777  0.73      0.022777
28   23    3   2      1     0.46855   0.17831   0.53145   0.17831
14   9     7   2      3     0.33048   0.21728   0.66952   0.21728


Table 5-2. Continued.

n_C  n_F   R   D_max  inf   mean      std       mean      std
35   39    4   2      1     0.19733   0.25675   0.80267   0.25675
25   23    8   2      7     0.1295    0.080499  0.8705    0.080499
10   12    10  2      3     0.42396   0.007655  0.57604   0.007655
11   15    6   2      1     0.47253   0.048646  0.52747   0.048646
17   12    3   2      2     0.1885    0.26933   0.8115    0.26933
48   46    6   2      6     0.03364   0.33942   0.96636   0.33942
27   26    10  2      6     0.090791  0.13538   0.90921   0.13538
13   10    7   2      2     0.28787   0.3477    0.71213   0.3477
22   19    8   2      4     0.092943  0.11424   0.90706   0.11424
34   30    8   2      6     0.14572   0.048006  0.85428   0.048006


CHAPTER 6
CONCLUSION AND FUTURE RESEARCH

In this thesis, we obtained results about the linear programming and group relaxations of UFLP. We proved that the maximum possible determinant is exponential in terms of $\min\{n_C, n_F\}$, but we also gave theoretical and experimental arguments for why most of the bases of the LPR of UFLP have small determinants. It follows that the size of the shortest path problem associated with the group relaxation of UFLP is typically small. Moreover, we have shown that even when the bases of the LPR of UFLP have the MPD, the LP solution corresponding to these bases might not always be very fractional.

Based on our results about the LPR of UFLP, we gave a new proof that the LPR of UFLP describes the convex hull of its integer solutions for the case where $n_C \le 2$ and/or $n_F \le 2$. We also gave six inequalities that can be added to the LP formulation of UFLP to obtain the convex hull of integer solutions when $n_C = n_F = 3$. We believe that the methodology used can be extended to find the convex hull of UFLP for more general instances where $\min\{n_C, n_F\} = 3$.

This work opens new avenues of research. One direction is to develop heuristics/approximation algorithms to obtain good-quality feasible solutions based on the optimal solution of the group minimization problem. Another direction is to extend the study of the group relaxations to other variants of the facility location problem. Finally, we could seek to generalize the family of inequalities developed for the case where $n_C = n_F = 3$ to obtain facets of the convex hull of integer solutions of instances where $n_C \ge 3$ and $n_F \ge 3$.


LIST OF REFERENCES

1. Kuehn, A. and Hamburger, M.: A heuristic program for locating warehouses. Manag. Sci. 9, 643-666 (1963)
2. Stollsteimer, J.: A working model for plant numbers and locations. J. Farm Econ. 45, 631-645 (1963)
3. Hakimi, S. L.: Optimum locations of switching centers and the absolute centers and medians of a graph. Oper. Res. 12, 450-459 (1964)
4. Hakimi, S. L.: Optimum distribution of switching centers in a communication network and some related graph theoretic problems. Oper. Res. 13, 462-475 (1965)
5. Kaufman, L., Eede, M. V., and Hansen, P.: A plant and warehouse location problem. 4OR. 28, 547-557 (1977)
6. Maranzana, F. E.: On the location of supply points to minimize transport costs. Oper. Res. 15, 261-270 (1964)
7. Daskin, M. S.: Network and Discrete Location: Models, Algorithms, and Applications. Wiley-Interscience, New York (1995)
8. ReVelle, C. S., Eiselt, H. A., and Daskin, M. S.: A bibliography for some fundamental problem categories in discrete location science. European J. Oper. Res. 184, 817-848 (2008)
9. Farahani, R. Z. and Hekmatfar, M. (eds.): Facility Location: Concepts, Models, Algorithms and Case Studies, Contributions to Management Science. Springer-Verlag, Berlin (2009)
10. Cornuejols, G., Nemhauser, G., and Wolsey, L.: The Uncapacitated Facility Location Problem. In: Mirchandani, P. and Francis, R. (eds.) Discrete Location Theory, pp. 119-171. John Wiley and Sons (1990)
11. Nemhauser, G. and Wolsey, L.: Integer and Combinatorial Optimization. John Wiley and Sons (1990)
12. Guignard, M.: Fractional vertices, cuts and facets of the simple plant location problem. Math. Program. 12, 150-162 (1980)
13. Cornuejols, G. and Thizy, J.-M.: Some facets of the simple plant location polytope. Math. Program. 23, 50-74 (1982)
14. Cho, D. C., Johnson, E. L., Padberg, M., and Rao, M. R.: On the uncapacitated plant location problem-I: Valid inequalities and facets. Math. Oper. Res. 8, 579-589 (1983)


15. Cho, D. C., Padberg, M., and Rao, M. R.: On the uncapacitated plant location problem-II: Facets and lifting theorems. Math. Oper. Res. 8, 590-612 (1983)
16. Leung, J. M. Y. and Magnanti, T. L.: Valid inequalities and facets of the capacitated plant location problem. Math. Program. 44, 271-291 (1989)
17. Aardal, K., Pochet, Y., and Wolsey, L. A.: Capacitated facility location: valid inequalities and facets. Math. Oper. Res. 20, 562-582 (1995)
18. Cánovas, L., Landete, M., and Marín, A.: On the facets of the simple plant location packing polytope. Discrete Appl. Math. 124, 27-53 (2002)
19. Hochbaum, D. S.: Heuristics for the fixed cost median problem. Math. Program. 22, 148-162 (1982)
20. Chudak, F. and Shmoys, D.: Improved approximation algorithms for the uncapacitated facility location problem. Unpublished manuscript (1998)
21. Shmoys, D., Tardos, E., and Aardal, K.: Approximation algorithms for facility location problems. Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pp. 265-274 (1997)
22. Sviridenko, M.: An improved approximation algorithm for the metric uncapacitated facility location problem. Proceedings of the 9th International IPCO Conference on Integer Programming and Combinatorial Optimization, pp. 240-257, May 27-29 (2002)
23. Charikar, M., Khuller, S., Mount, D., and Narasimhan, G.: Facility location with outliers. Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, Washington DC, January (2001)
24. Jain, K. and Vazirani, V.: Primal-dual approximation algorithms for metric facility location and k-median problems. Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pp. 2-13, October (1999)
25. Guha, S. and Khuller, S.: Greedy strikes back: improved facility location algorithms. J. Algorithms. 31, 228-248 (1999)
26. Mahdian, M., Ye, Y., and Zhang, J.: Improved approximation algorithms for metric facility location problems. Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization, pp. 229-242 (2002)
27. Korupolu, M., Plaxton, C., and Rajaraman, R.: Analysis of a local search heuristic for facility location problems. Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1-10, January (1998)
28. Cocking, C.: Solutions to facility location-network design problems. Ph.D. thesis, University of Heidelberg, Germany (2008)


29. Schrijver, A.: Theory of Linear and Integer Programming. John Wiley and Sons, Chichester (1986)
30. Richard, J. P. P. and Dey, S. S.: The Group-Theoretic Approach in Mixed Integer Programming. In: Jünger, M., Liebling, T. M., Naddef, D., Nemhauser, G. L., Pulleyblank, W. R., Reinelt, G., Rinaldi, G., and Wolsey, L. A. (eds.) Fifty Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art. Springer (2009)
31. Kannan, R. and Bachem, A.: Polynomial algorithms for computing the Smith and Hermite normal forms of an integer matrix. SIAM J. Comput. 8, 499-507 (1979)
32. Shapiro, J. F.: Dynamic programming algorithms for the integer programming problem I: The integer programming problem viewed as a knapsack type problem. Oper. Res. 16, 103-121 (1968)
33. Silvester, J. R.: Determinants of block matrices. Gaz. Math. 84, 460-467 (2000)
34. Brookes, M.: The Matrix Reference Manual. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html (2005). Accessed 01 March 2010
35. Hadamard, J.: Résolution d'une question relative aux déterminants. Bull. Sci. Math. 17, 240-246 (1893)
36. Brenner, J.: The Hadamard maximum determinant problem. Amer. Math. Monthly. 79, 626-630 (1972)
37. The On-Line Encyclopedia of Integer Sequences, Sequence A003433. http://www.research.att.com/~njas/sequences/A003433 (2010). Accessed 01 March 2010
38. Golomb, S. W. and Baumert, L. D.: The search for Hadamard matrices. Amer. Math. Monthly. 70, 12-17 (1963)
39. Seberry, J. and Yamada, M.: Hadamard Matrices, Sequences, and Block Designs. In: Dinitz, J. F. and Stinson, D. R. (eds.) Contemporary Design Theory: A Collection of Surveys. Wiley-Interscience (1992)
40. Williamson, J.: Determinants whose elements are 0 and 1. Amer. Math. Monthly. 53, 427-434 (1946)
41. Zivkovic, M.: Classification of small (0,1) matrices. Linear Algebra Appl. 414, 310-346 (2006)
42. The On-Line Encyclopedia of Integer Sequences, Sequence A003432. http://www.research.att.com/~njas/sequences/A003432 (2010). Accessed 01 March 2010


BIOGRAPHICAL SKETCH

Mohammad Khalil graduated from Cairo University at Fayoum (CUF), Egypt, with a B.Sc. degree in Industrial Engineering in May 2004. He then worked as a teaching assistant in the Industrial Engineering Department at CUF. He also held a consulting position in supply chain management and in quality, environmental, and safety management systems. After graduation, Mohammad joined the M.Sc. program at Cairo University in Mechanical Design and Production Engineering. In 2008, Mohammad was awarded a Fulbright fellowship to pursue his M.Sc. at the University of Florida. He decided to pursue his M.Sc. in Industrial and Systems Engineering.

Mohammad is very interested in Operations Research, especially integer programming. He plans to continue with a Ph.D. in the same field. His Ph.D. dissertation will focus on logistics or scheduling problems. His objective is to pursue an academic career in Operations Research.