OPTIMIZING INTEGRATED PRODUCTION, INVENTORY AND DISTRIBUTION PROBLEMS IN SUPPLY CHAINS

By

SANDRA DUNI EKSIOGLU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2002

This work is dedicated to my family.

ACKNOWLEDGMENTS

I would like to thank the people who helped me complete the work contained in this dissertation. The help of my supervisors Panos Pardalos and Edwin Romeijn was of great value. I would like to thank Panos Pardalos for his technical advice, encouragement and insightful comments throughout my dissertation work. I would like to thank Edwin Romeijn for working closely with me. His unconditional support in solving the many details surrounding this dissertation and his valuable feedback are deeply appreciated. I extend my thanks to the members of my committee, Selçuk Erengüç and Joseph Geunes, for their constructive criticism concerning the material of this dissertation.

I also would like to express my appreciation to all my friends at the ISE department and in Gainesville for lightening up my life every day and making graduate school more fun than I am sure it is supposed to be. In particular I would like to thank Adriana and Jorge Jimenez, Mirela and Ilir Bejleri, Ebru and Deniz Erdogmus, Paveena Chaovalitwongse, Bayram Yildirim, Seviye Yörük, Mary and Kevin Taaffe, Olga Perdikaki, Hülya Emir, Sergiy Butenko and Lihui Bai.

I would like to express my special thanks to my parents Leonora and Perikli Duni and my brother Dhimitraq Duni. Their understanding and faith in me and my capabilities, their love, encouragement, and eternal support have motivated me all the time. Last but not least, I would like to thank my husband Burak for his love, patience and continuous support throughout all my years here at the University of Florida.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Supply Chain Management
  1.2 Framework of This Study
  1.3 Objectives and Summary

2 MULTI-FACILITY LOT-SIZING PROBLEM
  2.1 Introduction
  2.2 Literature Review
  2.3 Problem Description
  2.4 Extended Problem Formulation
  2.5 Primal-Dual Based Algorithm
    2.5.1 Intuitive Understanding of the Dual Problem
    2.5.2 Description of the Algorithm
    2.5.3 Running Time of the Algorithm
  2.6 Cutting Plane Algorithm
    2.6.1 Valid Inequalities
    2.6.2 Separation Algorithm
    2.6.3 Facets of the Multi-Facility Lot-Sizing Problem
  2.7 Dynamic Programming Based Heuristic
    2.7.1 Introduction
    2.7.2 Description of the Algorithm
    2.7.3 Running Time of the Algorithm
  2.8 Computational Results
  2.9 Conclusions

3 EXTENSIONS OF THE MULTI-FACILITY LOT-SIZING PROBLEM
  3.1 Multi-Commodity, Multi-Facility Lot-Sizing Problem
    3.1.1 Problem Formulation
    3.1.2 Linear Programming Relaxation
    3.1.3 Valid Inequalities
    3.1.4 Lagrangean Decomposition Heuristic
    3.1.5 Outline of the Algorithm
    3.1.6 Managerial Interpretation of the Decomposition
    3.1.7 Computational Results
  3.2 Single-Commodity, Multi-Retailer, Multi-Facility Lot-Sizing Problem
    3.2.1 Problem Formulation
    3.2.2 Primal-Dual Algorithm
    3.2.3 Intuitive Understanding of the Dual Problem
    3.2.4 Outline of the Primal-Dual Algorithm
    3.2.5 Computational Results
  3.3 Multi-Facility Lot-Sizing Problem with Fixed Charge Transportation Costs
  3.4 Conclusions

4 PRODUCTION-DISTRIBUTION PROBLEM
  4.1 Introduction
  4.2 Problem Formulation
  4.3 Dynamic Slope Scaling Procedure
    4.3.1 Multi-Commodity Network Flow Problem with Fixed Charge Cost Function
    4.3.2 Single-Commodity Case
    4.3.3 Multi-Commodity Case
    4.3.4 Production-Distribution Problem
    4.3.5 Extended Problem Formulation
  4.4 A Lagrangean Decomposition Procedure
  4.5 Computational Results
  4.6 Conclusions

5 CONCLUDING REMARKS

APPENDICES

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

OPTIMIZING INTEGRATED PRODUCTION, INVENTORY AND DISTRIBUTION PROBLEMS IN SUPPLY CHAINS

By

Sandra Duni Eksioglu

December 2002

Chair: Panagote M. Pardalos
Cochair: H. Edwin Romeijn
Major Department: Industrial and Systems Engineering

The goal of this dissertation is the study of optimization models that integrate production, inventory and transportation decisions, in search of opportunities to improve the performance of a supply chain network. We estimate the total costs of a given design of a general supply chain network, including production, inventory and transportation costs. We consider production and transportation costs to be of the fixed charge type. Fixed charge cost functions are linear functions with a discontinuity at the origin.
The main focus of this dissertation is the development of solution procedures for these optimization models. Their computational complexity makes the use of heuristic solution procedures advisable. One of the heuristics we propose is a Multi-Commodity Dynamic Slope Scaling Procedure (MCDSSP). This heuristic makes use of the fact that when minimizing a concave function over a convex set, an extreme point optimal solution exists. The same holds true for linear programs. Therefore, the concave cost function is approximated by a linear function and the corresponding linear program is solved. The slope of the linear function is updated iteratively until no better solution is found. The MCDSSP can be used to solve any multi-commodity network flow problem with fixed charge cost functions.

We also develop a Lagrangean decomposition based heuristic. The subproblems from the decomposition have a special structure. One of the subproblems is the multi-facility lot-sizing problem that we study in detail in Chapter 2. The multi-facility lot-sizing problem is an extension of the economic lot-sizing problem. We add a new dimension to the classical problem: the facility selection decision. We provide the following heuristic approaches to solve this problem: dynamic programming, a primal-dual method, a cutting plane method and a linear programming based algorithm. We propose a set of valid inequalities and show that they are facet defining. We tested the performance of the heuristics on a wide range of randomly generated problems. We also studied other extensions of the multi-facility lot-sizing problem. In Chapter 3 we analyze and provide solution approaches to the multi-commodity and multi-retailer (single-commodity) versions of the problem.

CHAPTER 1
INTRODUCTION

1.1 Supply Chain Management

Most companies nowadays are organized into networks of manufacturing and distribution sites that procure raw materials, process them into finished goods, and distribute the finished goods to customers. The goal is to deliver the right product to the right place at the right time for the right price. These production-distribution networks are what we call "supply chains." Supply chain management is a growing area of interest for both companies and researchers. It first attracted the attention of companies in the 1990s as they started to realize the potential cost benefits of integrating decisions with other members of their supply chain. The primary cost factors within a supply chain fall into the categories of production, transportation and inventory.

The signature of supply chain management is the integration of activities. Effective supply chain members invariably integrate the wishes and concerns of their downstream members into their operations while simultaneously ensuring integration with their upstream members. We concentrate on developing optimization tools to enable companies to take advantage of opportunities to improve their supply chain. For many years companies and researchers failed to take an integrated view of the entire supply chain. They considered only one piece of the overall problem, such as production or distribution submodels. These submodels were optimized separately and the solutions were then joined together to establish operating policies. A number of new developments have had an impact on many companies. For example, increased market responsiveness has intensified the interdependencies within the supply chain (Erengüç et al. [30]); technological innovations have shortened the life span of manufacturing equipment, which in turn increases the cost of manufacturing capacity; and the Internet has enabled high-speed communication (Geunes et al. [50]).
These developments, combined with increased product variety and decreased product volumes, prompt companies to explore new ways of running their business. Experience has shown that a firm's ability to manage its supply chain is a major source of competitive advantage. This realization is the single most important reason for the recent emphasis on supply chain management in industry and academia. To exploit these new opportunities to improve their profitability, companies need decision support tools that evaluate alternatives using optimization models.

Several examples in the literature show that models coordinating at least two stages of the supply chain can detect new opportunities for improving the efficiency of the supply chain. Chandra and Fisher [21] investigated the effect of coordinating production and distribution in a single-plant, multi-commodity, multi-period scenario. In this scenario, the plant produces and stores the products until they are delivered to the customers using a fleet of trucks. They proposed two solution approaches: the first solves the production scheduling and routing problems separately, while the second incorporates both production and routing decisions into a single model. Their computational study showed that the coordinated approach can yield up to 20% in cost savings. Anily and Federgruen [4] considered integrating inventory control and transportation planning decisions, motivated by the trade-off between the size and the frequency of deliveries. Their model considered a single warehouse and multiple retailers, with inventories held only at the retailers, who face constant demand. Burns et al. [18] investigated distribution strategies that minimize transportation and inventory costs. Successful applications of supply chain decision coordination were reported at Xerox [94] and Hewlett Packard [44].
As a result of coordinating decisions on inventories throughout the supply chain, they were able to reduce their inventory levels by 25%.

1.2 Framework of This Study

Companies deliver products to their customers using a logistics distribution network. Such networks typically consist of product flows from the producers to the customers through distribution centers (warehouses) and retailers. Companies generally need to make decisions on production planning, inventory levels, and transportation at each level of the logistics distribution network in such a way that customer demand is satisfied at minimum cost. Coordinating decisions with other members with the aim of higher profits and better customer service is a distinctive feature of supply chain management. Bhatnagar, Chandra, and Goyal [13] distinguished between two broad levels of coordination. At the most general level, coordination can be seen in terms of integrating decisions of different functions (for example, facility location, inventory planning, production planning, distribution planning, etc.). They refer to this level of coordination as "general coordination." At another level, the problem of coordination may be addressed by linking decisions within the same function at different echelons in the organization. They classified the research on general coordination into three categories. These categories represent integration of decision-making related to

(i) supply and production planning;
(ii) inventory and distribution planning;
(iii) production and distribution planning.

Studies on coordination between supplier and buyer have focused on determining the order quantity that is jointly optimal for both. The second category addresses the problem of coordinating inventory planning with distribution planning. This problem emerges when a number of customers must be supplied from one or more warehouses.
The inventory and distribution planning problem decides on the replenishment policy at the warehouse and the distribution schedule for the customers, so that the total of inventory and distribution costs is minimized. The trade-off is a reduction in inventory costs versus an increase in transportation costs. For example, shipping in smaller quantities and with higher frequency reduces the inventory level at the warehouse, but causes higher transportation costs.

The third category of research concentrates on integrating production planning and distribution planning. The production planner is concerned with determining optimal production-inventory levels for each product in every period so that the total cost of production and inventory holding is minimized. On the other hand, the distribution planner must determine schedules for distribution of products to customers so that the total transportation cost is minimized. These two activities can function independently if there is a sufficiently large inventory buffer that completely decouples the two. However, this would lead to increased holding costs and longer lead times, since on one side the distribution planner, in order to minimize transportation costs, would prefer full truck shipments and a minimum number of stops, while on the other side the production planner would prefer fewer machine setups. The pressure to reduce inventory and lead times in the supply chain has forced companies to explore closer coordination between production and distribution.

Our work contributes to this last research category. We consider complex supply chains with centralized decisions on production-inventory and transportation. We model these supply chains as multi-commodity network flow problems with nonconvex cost functions. In particular, we use fixed charge cost functions to model production and transportation costs. We present several optimization techniques to solve these problems.
Related work is the research of King and Love [67] on the coordination of production and distribution systems at Kelly-Springfield, a major tire manufacturer with four factories and nine major distribution centers located throughout the United States. The authors described a coordinated system for manufacturing plants and distribution centers. Implementation of this system resulted in substantial improvements in overall lead times, customer service and average inventory levels. Annual costs were reduced by almost $8 million. Blumenfeld et al. [16] reported on the successful implementation of an optimization model that synchronizes scheduling of production and distribution at the Delco Electronics division of General Motors. Implementation resulted in a 26% reduction in logistics costs. Brown et al. [17] presented a successful implementation of an optimization model that coordinates decisions on production, transportation and distribution at the Kellogg Company. Kellogg operates five plants in the United States and Canada, and it has seven core distribution centers. The model has been used for tactical and operational decisions since 1990. The operational version of the model determines daily production and shipping quantities. The tactical version helps to establish budgets and to make capacity expansion and consolidation decisions. The operational model reduced production, inventory and distribution costs by approximately $4.5 million in 1995. The tactical model recently guided a consolidation of production capacity with projected savings of $35 to $40 million per year.

Models coordinating different stages of the supply chain can be classified as strategic, tactical or operational (Hax and Candea [55]). The model we study helps companies make strategic decisions related to production, inventory and transportation. Several surveys in the literature address coordination issues.
Vidal and Goetschalckx [102] addressed the issue of strategic production-distribution planning with emphasis on global supply chains. Beamon [12] presented models on multi-stage supply chain design and analysis. Erengüç et al. [30] surveyed models that integrate production and distribution planning. Thomas and Griffin [97] surveyed coordination models for strategic and operational planning.

1.3 Objectives and Summary

We propose a class of optimization models that consider coordination of production, transportation and inventory decisions in a particular supply chain. The supply chain we consider consists of a number of facilities and retailers. This model helps to estimate the total cost of a given logistics distribution network, including production, inventory holding and transportation costs. The evaluation takes place over a fixed, finite planning horizon of T periods. The particular scenario presented considers a set of facilities where K different product types can be produced. Products are stored at the facilities until demand occurs. Moreover, retailers are supplied by the facilities and keep no inventories. We do not allow for transportation between facilities. An example of this particular scenario is the supply chain for very expensive items (such as cars). Often, it is not affordable for the retailers to keep inventories of expensive items; therefore they order the commodities as demand occurs (for example, car dealers typically keep no inventories of expensive specialty models).

We consider production and transportation cost functions to be of the fixed charge form. Production costs are often modeled in the literature using this type of cost function because of the special nature of the problem: whenever production of a commodity takes place, a fixed charge is paid to set up the machines, plus a variable cost for every unit produced.
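The fixed charge structure just described is easy to state concretely. The following is a minimal sketch of such a cost function; the function name and parameter names are ours, for illustration only:

```python
def fixed_charge_cost(quantity, setup, unit):
    """Fixed charge cost: zero when nothing is produced, otherwise a fixed
    setup charge plus a linear variable cost -- discontinuous at the origin."""
    if quantity < 0:
        raise ValueError("quantity must be nonnegative")
    return 0.0 if quantity == 0 else setup + unit * quantity
```

For example, with a setup cost of 50 and a unit cost of 2, producing 10 units costs 70, while producing nothing costs 0; the jump at the origin is what makes the total cost function concave rather than linear.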
Transportation costs can also be modeled using fixed charge costs, since usually there is a fixed charge (for example, the cost of the necessary paperwork) to initiate a shipment, plus other costs that depend on the amount shipped. The objective is to find the production, inventory, and transportation quantities that satisfy demand at minimum cost. We formulate this problem as a multi-commodity network flow problem on a directed, single-source graph consisting of T layers (Figure 1-1). Each layer of the graph represents a time period. In each layer, a bipartite graph represents the transportation network between the facilities and retailers. The set of nodes of the network consists of T copies of the F facilities and R retailers, as well as the source node. The set of arcs consists of production arcs (between the source node and a facility at a particular time period), transportation arcs (between a facility and a retailer at a particular time period), and inventory arcs (between two nodes corresponding to a particular facility in consecutive time periods). There are bundle capacities on the transportation and production arcs, and no capacities on the inventory arcs. In multi-commodity network flow problems, bundle capacities tie commodities together by restricting the total flow (of all commodities) on an arc.

Our problem is related to the ones studied by Wu and Golbasi [106]; Freling et al. [43]; Romeijn and Romero Morales [90, 91]; Romero Morales [83]; and Balakrishnan and Geunes [6]. In contrast to our model, Wu and Golbasi [106] assume a fixed-charge cost structure for production, but assume linear transportation costs. Also, they consider in their model the product structure of each end-item. On the other hand, Freling et al.
[43], Romeijn and Romero Morales [90, 91] and Romero Morales [83] consider the case of a single commodity only, but account for the presence of so-called single-sourcing constraints, where each retailer should be supplied from a single facility only. Balakrishnan and Geunes [6] addressed a dynamic requirements-planning problem for two-stage, multi-product manufacturing systems with bill-of-material flexibility (i.e., with options to use substitute components or subassemblies produced by an upstream stage to meet demand in each period at the downstream stage). Their model is similar to the multi-retailer, multi-facility lot-sizing problem we discuss in Chapter 3. However, their formulation is more general and contains extra constraints (similar to the single-sourcing constraints) and extra components in the cost function (penalties for violating the single-sourcing constraints).

Figure 1-1: Two-layer, three-retailer supply chain problem

The supply chain optimization model we discuss is an NP-hard problem. Even a special case of the multi-commodity network flow problem with fixed charge costs, the single-period, single-commodity problem, is NP-hard (Garey and Johnson [45]). The complexity of this problem led us to consider mainly heuristic approaches. The difficulty in solving this problem motivated us to look into some special cases of the model. The knowledge we gained from analyzing the simpler problems gave us insights about how to approach the general model. Below we describe some of the special cases we identified.

The single-retailer, single-facility problem with only fixed charge production costs reduces to the classical lot-sizing problem (Figure 1-2). The retailer in each period has to ship his demand from the facility. Therefore, the only decision to be made in this problem is for the facility to decide on its production schedule: when and how much to produce in every period (Figure 1-3).
The same holds true when the model considers a single facility with multiple retailers.

Figure 1-2: Multi-period, single-facility, single-retailer problem

Figure 1-3: Multi-period, single-facility, single-retailer problem

The single-commodity, multi-facility, single-retailer problem with only production setup costs is discussed in Chapter 2 (Figure 1-4). We refer to this problem as the multi-facility lot-sizing problem. Wu and Golbasi [106] show that the uncapacitated version of this problem with holding costs not restricted in sign is NP-complete. We propose a number of heuristic approaches to solve the problem, such as a dynamic programming based algorithm, a primal-dual heuristic and a cutting plane algorithm. We propose a set of valid inequalities and show that they are facets of the convex hull of the feasible region. We present a different formulation of the problem that we refer to as the extended problem formulation. The linear programming relaxation of the extended formulation gives lower bounds that are at least as good as the bounds from the linear programming relaxation of the original formulation.

Figure 1-4: Multi-period, single-retailer problem

In Chapter 3 we discuss two extensions of the multi-facility lot-sizing problem. First, we discuss the multi-commodity version of the problem. Later we present a model for the single-commodity, multi-retailer, multi-facility problem. We propose a Lagrangean decomposition scheme to solve the multi-commodity problem and a primal-dual algorithm to solve the multi-retailer problem. Finally, in Chapter 4 we discuss the general production-distribution model we presented above. We propose a heuristic approach called the Multi-Commodity Dynamic Slope Scaling Procedure (MCDSSP). This is a general heuristic that can be used to solve any multi-commodity network flow problem with a fixed charge cost function. The MCDSSP is a linear programming based heuristic.
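The slope scaling idea behind the MCDSSP can be illustrated on a toy problem. The sketch below applies it to an uncapacitated, single-period fixed-charge transportation problem, where the linearized relaxation is separable and can be solved by inspection (each retailer simply takes the facility with the smallest current slope). This is our own illustration of the general idea under that simplifying assumption, not the dissertation's implementation, and all names are ours:

```python
def slope_scaling(fixed, var, demand, max_iter=50):
    """Dynamic slope scaling on an uncapacitated fixed-charge transportation
    problem: facilities i, retailers j, arc cost fixed[i][j]*1{x>0} + var[i][j]*x.

    Each iteration linearizes the fixed-charge cost with slopes, solves the
    (here trivial) linear relaxation, then resets the slope of every used arc
    to its exact average unit cost: var + fixed / flow.
    """
    F, R = len(fixed), len(demand)
    total = sum(demand)
    # initial slopes spread each arc's fixed cost over the total demand
    slope = [[var[i][j] + fixed[i][j] / total for j in range(R)] for i in range(F)]
    best_cost, best_assign, prev = float("inf"), None, None
    for _ in range(max_iter):
        # the linearized relaxation is separable here: each retailer takes
        # the facility with the smallest current slope
        assign = [min(range(F), key=lambda i: slope[i][j]) for j in range(R)]
        cost = sum(fixed[i][j] + var[i][j] * demand[j] for j, i in enumerate(assign))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
        if assign == prev:      # relaxation solution stopped changing: done
            break
        prev = assign
        # exact linearization on used arcs: slope equals true average unit cost
        for j, i in enumerate(assign):
            slope[i][j] = var[i][j] + fixed[i][j] / demand[j]
    return best_cost, best_assign
```

The heuristic keeps the best true-cost solution seen, since the iterates may oscillate between candidate solutions before the slopes stabilize.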
We solve the linear programming relaxation of the extended formulation to generate lower bounds and compare the quality of the solutions from the MCDSSP to these bounds. For the production-distribution problem we also present a Lagrangean decomposition based algorithm. This algorithm decomposes the problem into two subproblems. The first subproblem further decomposes by commodity into K single-commodity, multi-facility, multi-retailer problems, which we discuss in Chapter 3. The performance of these algorithms is tested on a large variety of test problems. We present extensive computational results.

CHAPTER 2
MULTI-FACILITY LOT-SIZING PROBLEM

2.1 Introduction

In this chapter we analyze and provide algorithms to solve the multi-facility economic lot-sizing problem. This is an extension of the well-known economic lot-sizing problem, which can be described as follows: given a finite time horizon T and positive demands for a single item in each production period, find a production schedule such that the total costs are minimized. The customers' demand must be satisfied from production in the current period or by inventory from previous periods (that is, no backlogging is allowed). The two kinds of costs considered are production costs and holding costs.

In contrast to the classical economic lot-sizing problem, the multi-facility lot-sizing problem allows demand to be satisfied from multiple facilities. This adds an extra dimension to the classical problem: the facility selection decision. Transportation costs, together with production and inventory costs, are the major components of total costs. Thus, in contrast with the classical lot-sizing problem, this model, in deciding on a production schedule, considers not only production and inventory costs, but also transportation costs. The multi-facility lot-sizing problem finds an optimal production, inventory and transportation schedule that satisfies demand at minimum cost.
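The classical economic lot-sizing problem described above can be solved by the O(T^2) dynamic program of Wagner and Whitin: by the zero-inventory-ordering property, some period j <= t produces for all of periods j through t, so G(t), the minimum cost of meeting demand through period t, satisfies G(t) = min over j of G(j-1) plus the cost of producing periods j..t in period j. A compact sketch, with variable names of our choosing:

```python
def wagner_whitin(demand, setup, unit, hold):
    """O(T^2)-style DP for the uncapacitated economic lot-sizing problem.

    G[t] = minimum cost of satisfying demand for periods 1..t.
    hold[k] is the cost of carrying one unit from period k+1 to k+2 (0-indexed).
    Returns (optimal cost, sorted list of production periods, 1-indexed).
    """
    T = len(demand)
    INF = float("inf")
    G = [0.0] + [INF] * T          # G[0] = 0: nothing left to satisfy
    pred = [0] * (T + 1)
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # last production period j covers j..t
            batch = sum(demand[j - 1:t])
            cost = setup[j - 1] + unit[j - 1] * batch
            # holding: units for period k sit in stock during periods j..k-1
            for k in range(j + 1, t + 1):
                cost += sum(hold[j - 1:k - 1]) * demand[k - 1]
            if G[j - 1] + cost < G[t]:
                G[t] = G[j - 1] + cost
                pred[t] = j - 1
    # recover the production periods by walking predecessors back from T
    plan, t = [], T
    while t > 0:
        plan.append(pred[t] + 1)
        t = pred[t]
    return G[T], sorted(plan)
```

For instance, with demands (60, 100, 140), a setup cost of 100 per period, zero unit cost and unit holding cost 1, the optimal plan produces in periods 1 and 3 at a total cost of 300.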
This problem has many practical applications. For example, a manufacturing company often has multiple facilities with similar production capacities. The need for cross-facility capacity management is most evident in high-tech industries that have capital-intensive equipment and a short technology life cycle. These companies often struggle with their production planning problem in a complex and rapidly changing environment. To better utilize their capital-intensive equipment, they are pressured to produce a variety of products in each of their production facilities. Coordinating the decisions on production, inventory and transportation among all the facilities reduces costs relative to having each facility make its own decisions independently. We present various approaches to solve this problem. The importance of the algorithms we propose lies in the fact that they can be used as subroutines to solve more complex supply chain optimization problems.

In Section 2.2 we present a literature review of economic lot-sizing and related problems. Section 2.3 gives the problem formulation and Section 2.4 discusses an extended formulation of the multi-facility lot-sizing problem. Its linear programming relaxation gives close-to-optimal solutions and the corresponding dual problem has a special structure. In Section 2.5 we discuss a primal-dual based algorithm. We present a set of valid inequalities for the problem and show that they are facet-defining inequalities. The valid inequalities are used in the cutting plane algorithm discussed in Section 2.6. In Section 2.7 we discuss a dynamic programming based algorithm. Finally, Section 2.8 presents some of the computational results and Section 2.9 concludes this chapter.

2.2 Literature Review

This section presents a literature review on the single-item economic lot-sizing problem. The first contribution is the Economic Order Quantity (EOQ) model proposed by Harris [54] in 1913.
This model considers a single commodity with a constant demand rate, production taking place continuously over time, and no capacity limits. A major limitation of the model is the restriction that demand is continuous over time with a constant rate. Manne [81] and Wagner and Whitin [104] considered the lot-sizing problem with a finite time horizon consisting of a number of discrete periods, each with its own deterministic and independent demand. This is the classic economic lot-sizing problem. Wagner and Whitin developed a dynamic programming algorithm for the single-commodity, uncapacitated version of the problem. Their algorithm runs in O(T^2) time. This problem (apparently well solved since 1958) has recently revealed a variety of new results, and practical reasons exist for the continued interest in this model. Almost thirty years later, Wagelmans et al. [103], Aggarwal and Park [1], and Federgruen and Tzur [39] showed that the running time of the dynamic programming algorithm could be reduced to O(T log T) in the general case and to O(T) when the costs have a special structure (h_t + p_t >= p_{t+1}), also referred to as the absence of speculative motives.

The capacitated lot-sizing problem is NP-hard even for many special cases (Florian et al. [41] and Bitran and Yanasse [15]). In 1971, Florian and Klein presented a remarkable result: an O(T^4) algorithm for solving the capacitated lot-sizing problem with equal capacities in all periods. This result uses a dynamic programming approach combined with some important properties of optimal solutions to these problems. Recently, van Hoesel and Wagelmans [99] showed that this algorithm can be improved to O(T^3) if backlogging is not allowed and the holding cost functions are linear.

Several solution approaches have been proposed for NP-hard special cases of the capacitated lot-sizing problem. These methods are typically based on branch-and-bound (see, for instance, Baker et al.
[5] and Erengüç and Aksoy [29]), dynamic programming (see, for instance, Kirca [68] and Chen and Lee [23]) or a combination of the two (see, for instance, Chung and Lin [25] and Lotfi and Yoon [75]). The multi-commodity version of the problem has attracted much attention. Manne [81] used the zero-inventory-ordering (ZIO) property to develop a column generation approach to solve this problem. Barany et al. [10] solved the multi-commodity capacitated lot-sizing problem without setup times optimally, using a cutting plane procedure followed by branch-and-bound.

Other extensions of the classic economic lot-sizing problem consider setup times, backorders and other factors. Zangwill [107] extended Wagner and Whitin's model to allow for backlogging and concave cost functions. Veinott [101] studied an uncapacitated model with convex cost structures. Trigeiro et al. [98] showed that the capacitated lot-sizing problem with setup times is much harder to solve than the capacitated lot-sizing problem without setup times. It is easy to check whether the capacitated lot-sizing problem without setup times has a feasible solution: this can be done by comparing cumulative demand with cumulative capacity. When setup times are considered, however, the feasibility problem is NP-complete; the bin packing problem is a special case of the capacitated lot-sizing problem with setup times (Garey and Johnson [45], p. 226).

Much research on lot-sizing problems has focused on determining a (partial) polyhedral description of the set of feasible solutions and applying branch-and-cut methods (Pochet et al. [89], Leung et al. [74], Barany et al. [10]). The main motivation for studying the polyhedral structure of the single-item lot-sizing problem is to use the results to develop efficient algorithms for problems, such as the multi-commodity economic lot-sizing problem, that contain this model as a substructure.
However, the branch-and-cut approach has not (yet) resulted in competitive algorithms for the single-item lot-sizing problem itself. The reason is that generating a single cut can be as time consuming as solving the whole problem. Barany et al. [9, 10] provided a set of valid inequalities for the single-commodity lot-sizing problem, showed that these inequalities are facets of the convex hull of the feasible region, and furthermore showed that the inequalities fully describe the convex hull of the feasible region. Pereira and Wolsey [88] studied a family of unbounded polyhedra arising in an uncapacitated lot-sizing problem with Wagner-Whitin costs. They completely characterized the bounded faces of maximal dimension and showed that they are integral. For a problem with T periods, they derived an O(T^2) algorithm to express any point within the polyhedron as a convex combination of extreme points and extreme rays. They also observed that for a given objective function the face of optimal solutions can be found in O(T^2) time. Shaw and Wagelmans [93] considered the capacitated lot-sizing problem with piecewise linear production costs and general holding costs. They showed that this is an NP-hard problem and presented an algorithm that runs in pseudo-polynomial time. Wu and Golbasi [106] considered a multi-facility production model where a set of items is to be produced in multiple facilities over multiple periods. They analyzed in depth the product-level (single-commodity, multi-facility) subproblem, proved that the general-cost version of this uncapacitated subproblem is NP-complete, and developed a shortest path algorithm that achieves optimality under special cost structures. Our study is closely related to all the work mentioned in this section. The multi-facility lot-sizing problem is an extension of the lot-sizing problem.
We add the facility selection decision to the classical model and, besides production and inventory costs, we consider transportation costs as well. The results found and the algorithms developed for the single-commodity lot-sizing problem guide our search for solution approaches to the multi-facility lot-sizing problem. The set of valid inequalities that we propose in Section 2.6.1 is a generalization of the valid inequalities proposed by Barany et al. [10]. The extended problem formulation presented in Section 2.4 is inspired by a similar formulation proposed by van Hoesel [100] for the lot-sizing problem. Our study is most closely related to the work of Wu and Golbasi [106]: the multi-facility lot-sizing problem is the same model as theirs, but we propose a wider (and different) range of solution approaches.

2.3 Problem Description

The multi-facility lot-sizing problem studied in this chapter can be formulated using the following notation.

Problem Data:
T           number of periods in the planning horizon
F           number of facilities
p_it        production unit cost at facility i in period t
s_it        production setup cost at facility i in period t
h_it        inventory unit cost at facility i in period t
c_it        transportation unit cost at facility i in period t
Q_it(q_it)  production cost function at facility i in period t
b_t         demand in period t

Decision Variables:
q_it        number of items produced at facility i in period t
x_it        number of items transported from facility i in period t
I_it        number of items in inventory at facility i at the end of period t

The objective is to find a minimum cost production, inventory and transportation plan that fulfills demand when the setup costs and the production, inventory and transportation unit costs can vary from period to period and from facility to facility. The model assumes no starting or ending inventory.
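The notation above can be made concrete with a minimal sketch (all numbers hypothetical) that stores an instance in arrays and prices a candidate production/transportation plan, deriving inventories from the flow balance at each facility and checking that demand is met and that no ending inventory remains.

```python
# Illustrative 2-facility, 3-period instance (all numbers hypothetical).
F, T = 2, 3
p = [[3, 3, 3], [2, 4, 4]]        # p[i][t]: production unit cost
s = [[50, 50, 50], [80, 80, 80]]  # s[i][t]: production setup cost
h = [[1, 1, 1], [1, 1, 1]]        # h[i][t]: inventory unit cost
c = [[2, 2, 2], [1, 1, 1]]        # c[i][t]: transportation unit cost
b = [30, 20, 40]                  # b[t]: demand

def plan_cost(q, x):
    """Total cost of a plan; inventories follow from flow balance."""
    total = 0.0
    for i in range(F):
        inv = 0.0                              # no starting inventory
        for t in range(T):
            inv = inv + q[i][t] - x[i][t]      # I_it = I_{i,t-1} + q_it - x_it
            assert inv >= -1e-9, "shipping more than available"
            if q[i][t] > 0:
                total += s[i][t] + p[i][t] * q[i][t]   # fixed-charge production
            total += c[i][t] * x[i][t] + h[i][t] * inv
        assert abs(inv) < 1e-9, "no ending inventory allowed"
    for t in range(T):
        assert abs(sum(x[i][t] for i in range(F)) - b[t]) < 1e-9, "demand not met"
    return total

# Example plan: facility 1 produces everything in period 1 and ships as needed.
q = [[90, 0, 0], [0, 0, 0]]
x = [[30, 20, 40], [0, 0, 0]]
print(plan_cost(q, x))
```

This is only an evaluator for a given plan, not a solution method; the formulations in the remainder of the section optimize over all such plans.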
This problem can be formulated as a network flow problem as follows:

minimize  sum_{i=1}^{F} sum_{t=1}^{T} ( Q_it(q_it) + c_it x_it + h_it I_it )

subject to (MF)

I_{i,t-1} + q_it = x_it + I_it    i = 1,...,F; t = 1,...,T    (2.1)
sum_{i=1}^{F} x_it = b_t          t = 1,...,T                 (2.2)
I_i0 = 0                          i = 1,...,F                 (2.3)
q_it, x_it, I_it >= 0             i = 1,...,F; t = 1,...,T    (2.4)

Equation (2.1) presents the flow conservation constraints for each facility in each time period, and (2.2) presents the flow conservation constraints for the demand in every time period. The flow conservation constraint for the source, sum_{i=1}^{F} sum_{t=1}^{T} q_it = sum_{t=1}^{T} b_t, is not included in the formulation because it is implied by (2.1) and (2.2). The structure of this network is illustrated in Figure 2-1. The set of nodes in this network consists of T copies of each of the F facilities and the demand node, as well as a source node. Each layer of the network represents a time period. The set of arcs consists of F x T production arcs, F x T transportation arcs and F x (T-1) inventory arcs. The source node s supplies the total demand for the planning horizon. The production arcs connect the source to each facility in each time period. The inventory arcs connect the same facility in successive periods. Transportation arcs connect facilities to the demand node in each period.

Figure 2-1: Network representation of a two-period, three-facility lot-sizing problem

The production cost function is a fixed charge cost function; thus, if production in a time period is initiated, the setup cost plus the production cost for each unit produced has to be paid:

Q_it(q_it) = p_it q_it + s_it  if q_it > 0,  and  Q_it(q_it) = 0  otherwise,

for i = 1,...,F; t = 1,...,T. The standard mixed-integer linear programming (MILP) formulation of this function can be obtained by introducing a binary setup variable y_it corresponding to each production arc. The production cost function can then be replaced by Q_it(q_it) = p_it q_it + s_it y_it, where y_it = 1 if q_it > 0 and y_it = 0 otherwise.
The MILP formulation of the multi-facility lot-sizing problem then reads as follows:

minimize  sum_{i=1}^{F} sum_{t=1}^{T} ( p_it q_it + s_it y_it + c_it x_it + h_it I_it )

subject to (MF*)

I_{i,t-1} + q_it = x_it + I_it    i = 1,...,F; t = 1,...,T    (2.5)
sum_{i=1}^{F} x_it = b_t          t = 1,...,T                 (2.6)
q_it <= b_tT y_it                 i = 1,...,F; t = 1,...,T    (2.7)
q_it, x_it, I_it >= 0             i = 1,...,F; t = 1,...,T    (2.8)
y_it in {0,1}                     i = 1,...,F; t = 1,...,T    (2.9)

where b_tT denotes the demand during the time periods t (t = 1,...,T) to T, i.e., b_tT = sum_{τ=t}^{T} b_τ. Standard solvers such as CPLEX can be used to solve formulation (MF*) of the multi-facility lot-sizing problem. The constraints (2.5) state that the number of items produced in the current period, together with the inventory from previous periods, should equal the ending inventory plus the amount shipped to the retailer. Together with (2.8), they imply

I_it = sum_{τ=1}^{t} ( q_iτ - x_iτ ) >= 0    for i = 1,...,F; t = 1,...,T,

and, using (2.6),

sum_{i=1}^{F} sum_{τ=1}^{t} q_iτ >= b_1t    for t = 1,...,T.

Using these facts, the inventory variables can be eliminated from the formulation, reducing the size of the problem. The following is the MILP formulation of our problem without inventory variables:

minimize  sum_{i=1}^{F} sum_{t=1}^{T} ( p'_it q_it + s_it y_it + c'_it x_it )

subject to (RMF*)

sum_{i=1}^{F} sum_{τ=1}^{t} q_iτ >= b_1t           t = 1,...,T                 (2.10)
sum_{τ=1}^{t} q_iτ >= sum_{τ=1}^{t} x_iτ           i = 1,...,F; t = 1,...,T    (2.11)
sum_{i=1}^{F} x_it = b_t                           t = 1,...,T                 (2.12)
q_it <= b_tT y_it                                  i = 1,...,F; t = 1,...,T    (2.13)
x_it, q_it >= 0                                    i = 1,...,F; t = 1,...,T    (2.14)
y_it in {0,1}                                      i = 1,...,F; t = 1,...,T    (2.15)

where p'_it = p_it + sum_{τ=t}^{T} h_iτ and c'_it = c_it - sum_{τ=t}^{T} h_iτ.

Proposition 2.3.1 (Non-splitting property): There exists an optimal solution to the uncapacitated, single-commodity, multi-facility lot-sizing problem such that the demand in period t is satisfied from either production or the inventory of exactly one of the facilities.
Proof: The multi-facility lot-sizing problem minimizes a concave cost function over a bounded convex set; therefore an optimal solution corresponds to a vertex of the feasible region. Let (q*, y*, x*, I*) be an optimal solution to the multi-facility lot-sizing problem. In an uncapacitated network flow problem, a vertex is represented by a tree solution. The tree representation of the optimal solution implies that demand in every time period will be satisfied by exactly one of the facilities; in other words, x*_it x*_jt = 0 for i ≠ j and t = 1, 2,...,T. Furthermore, for each facility, in each time period, if the inventory level is positive there will be no production, and vice versa. Thus, the zero inventory policy holds for this problem (q*_it I*_{i,t-1} = 0 for i = 1,...,F; t = 1,...,T). This completes the proof that in an optimal solution to our problem demand is satisfied from either production or the inventory of exactly one of the facilities. □

Another characteristic of an optimal solution to the multi-facility lot-sizing problem is the following: every facility in a given time period either does not produce, or produces the demand for a number of periods (the periods do not need to be successive). This property can easily be derived from Proposition 2.3.1 and the tree representation of an optimal solution. Figure 2-8 presents an example of such an extreme flow solution, where demands b_1 and b_3 are satisfied from production at facility 1 in period 1 and demand b_2 is satisfied from production at facility 2 in period 2. Such a production schedule occurs when the setup cost at facility 2 in period 2 is very small (compared to the setup costs at facility 1 in periods 1, 2 and 3 and at facility 2 in periods 1 and 3), but the inventory holding cost at facility 2 in period 2 is very high. For the multi-facility lot-sizing problem there exists an exact algorithm that is polynomial in the number of facilities and exponential in the number of periods (Wu and Golbasi [106]).
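One way such an exponential-in-T exact algorithm can be realized is to enumerate all F^T assignments of periods to facilities and, for each assignment, solve a single-facility uncapacitated lot-sizing problem per facility. The sketch below uses illustrative data and a plain O(T^2)-style inner recursion (rather than a faster O(T log T) one); the per-unit serving cost folds in production, holding and transportation rates.

```python
from itertools import product

# Illustrative 2-facility, 3-period instance (all numbers hypothetical).
F, T = 2, 3
p = [[3, 3, 3], [2, 4, 4]]        # production unit costs
s = [[50, 50, 50], [80, 80, 80]]  # setup costs
h = [[1, 1, 1], [1, 1, 1]]        # holding unit costs
c = [[2, 2, 2], [1, 1, 1]]        # transportation unit costs
b = [30, 20, 40]                  # demands

def single_facility(i, dem):
    """Uncapacitated lot-sizing at facility i for demand vector dem."""
    INF = float("inf")
    V = [0.0] + [INF] * T
    for j in range(T):                        # candidate production period
        var, hold, qty = 0.0, 0.0, 0.0
        for t in range(j, T):                 # last period covered by this lot
            var += (p[i][j] + hold + c[i][t]) * dem[t]
            qty += dem[t]
            setup = s[i][j] if qty > 0 else 0.0   # no setup if nothing produced
            V[t + 1] = min(V[t + 1], V[j] + var + setup)
            hold += h[i][t]
    return V[T]

best = float("inf")
for assign in product(range(F), repeat=T):    # enumerate all F^T assignments
    dems = [[b[t] if assign[t] == i else 0 for t in range(T)] for i in range(F)]
    best = min(best, sum(single_facility(i, dems[i]) for i in range(F)))
print(best)
```

Each assignment fixes which facility serves each period's demand; the non-splitting property guarantees that some assignment yields an overall optimal plan.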
The algorithm enumerates all assignments of the demands b_1,...,b_T to the facilities i = 1,...,F; there are O(F^T) such assignments. Given an assignment, it solves for each facility an uncapacitated, single-item, single-facility lot-sizing problem, which takes F · O(T log T) time per assignment. Wu and Golbasi [106] show that for the special case in which the inventory holding costs are not restricted in sign, the multi-facility lot-sizing problem is NP-complete. They use a reduction from the uncapacitated facility location problem.

2.4 Extended Problem Formulation

The linear programming relaxation of (MF*) is not very tight. This is due to the constraints q_it <= b_tT y_it. In the linear programming relaxation of (MF*), the variable y_it determines the fraction of the demand from periods t to T satisfied from production at facility i in period t. b_tT is usually a very weak upper bound, since the production in a period rarely equals this amount. One way to tighten the formulation is to split the production variables q_it by destination into variables q_itτ (τ = t,...,T), where τ denotes the period for which production takes place (van Hoesel [100]). For the new variables, a trivial and tight upper bound is the demand in period τ (i.e., b_τ). The split of the production variables leads to the following identities:

q_it = sum_{τ=t}^{T} q_itτ                       (2.16)
x_it = sum_{s=1}^{t} q_ist                       (2.17)
I_it = sum_{s=1}^{t} sum_{τ=t+1}^{T} q_isτ       (2.18)

Replacing the production, transportation and inventory decision variables, and after rearranging terms, the objective function becomes

minimize  sum_{i=1}^{F} sum_{t=1}^{T} [ sum_{τ=t}^{T} c_itτ q_itτ + s_it y_it ],

where c_itτ = p_it + c_iτ + sum_{s=t}^{τ-1} h_is. Replacing the decision variables in the constraints of formulation (MF*) with the new variables, we obtain the following. Substituting the identities into constraints (2.5) and rearranging terms shows that these constraints reduce to identities; hence constraints (2.5) are redundant. Constraints (2.6) transform to

sum_{i=1}^{F} sum_{s=1}^{t} q_ist = b_t    for t = 1,...,T.
Constraints (2.7) transform to

q_itτ <= b_τ y_it    for i = 1,...,F; t = 1,...,T; t <= τ <= T.

Finally, constraints (2.8) transform to

q_itτ >= 0    for i = 1,...,F; t = 1,...,T; t <= τ <= T.

The extended problem formulation is the following:

minimize  sum_{i=1}^{F} sum_{t=1}^{T} [ sum_{τ=t}^{T} c_itτ q_itτ + s_it y_it ]

subject to (ExMF)

sum_{i=1}^{F} sum_{t=1}^{τ} q_itτ = b_τ    τ = 1,...,T                          (2.19)
q_itτ - b_τ y_it <= 0                      i = 1,...,F; t = 1,...,T; t <= τ <= T    (2.20)
q_itτ >= 0                                 i = 1,...,F; t = 1,...,T; t <= τ <= T    (2.21)
y_it in {0,1}                              i = 1,...,F; t = 1,...,T,                (2.22)

where c_itτ = p_it + c_iτ + sum_{s=t}^{τ-1} h_is. The above formulation resembles the model for the uncapacitated facility location problem. The relationship with the uncapacitated facility location problem is the following: consider F x T facilities from which customers can be served. The decision to be made is whether or not to open (whether or not to get a shipment from) a certain facility (i,t). The cost of opening a facility (setting up the machines) is s_it. The cost of serving customer (satisfying demand in period) τ (τ >= t) is c_itτ per unit of demand. Our problem resembles the facility location problem, but it is not exactly the same problem: the fact that the costs assigned to the same facility in different periods are related to each other makes the multi-facility lot-sizing problem a special case of the facility location problem. The linear programming relaxation of (ExMF) replaces the constraints y in {0,1} with y >= 0. The binary variables y appear only in constraints (2.20); therefore

y_it >= q_itτ / b_τ    for i = 1,...,F; t = 1,...,T; t <= τ <= T.

Since the setup costs s_it > 0 and (ExMF) is a minimization problem, in an optimal solution we will have y_it = max_{t<=τ<=T} q_itτ / b_τ for i = 1,...,F; t = 1,...,T. The most q_itτ can be is b_τ, in which case y_it = 1; the least q_itτ can be is zero, in which case y_it = 0. This makes the constraint y_it <= 1 redundant. Formulation (LPMF) is the linear programming relaxation of (ExMF).
minimize  sum_{i=1}^{F} sum_{t=1}^{T} [ sum_{τ=t}^{T} c_itτ q_itτ + s_it y_it ]

subject to (LPMF)

sum_{i=1}^{F} sum_{t=1}^{τ} q_itτ = b_τ    τ = 1,...,T
q_itτ - b_τ y_it <= 0                      i = 1,...,F; t = 1,...,T; t <= τ <= T
q_itτ >= 0                                 i = 1,...,F; t = 1,...,T; t <= τ <= T
y_it >= 0                                  i = 1,...,F; t = 1,...,T.

In the special case of only one facility, our problem reduces to the economic lot-sizing problem. The linear programming relaxation of the extended formulation of this special case always has an integer solution; this result is due to Krarup and Bilde [69]. Although this does not hold true for the multi-facility lot-sizing problem, the lower bounds generated by solving (LPMF) are very good and the solutions are close to optimal. In fact, the lower bounds found by solving (LPMF) are much tighter than the lower bounds generated by solving the linear programming relaxation of the original formulation (MF*) of this problem (Proposition 2.4.1). A network representation of the extended formulation is given in Figure 2-2.

Figure 2-2: Network representation of the extended formulation of a two-period, three-facility lot-sizing problem

Formulations (MF*) and (ExMF) of the multi-facility lot-sizing problem are equivalent to each other. The notion of equivalence requires that the optimal solution to both formulations is the same (i.e., once problem (ExMF) is solved in terms of the q_itτ variables, this solution should yield the optimal solution to problem (MF*)). For this to hold true, two conditions must be satisfied: (i) every solution of (ExMF) must correspond to a solution of (MF*) (i.e., no new solutions were created by redefining the decision variables of (MF*)), and (ii) there must be a solution in the feasible region of (ExMF) that corresponds to every extreme point in the convex hull of the set of feasible solutions to (MF*) (i.e., no solutions to formulation (MF*) were lost by redefining the variables). Condition (i) follows directly from the definition of q_itτ.
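Concretely, the translation behind condition (i) is mechanical: given any split-variable solution q_itτ, identities (2.16), (2.17) and (2.18) recover q, x and I. A small sketch with a hypothetical split-variable solution (facility 1 produces in period 1 for periods 1 and 3; facility 2 produces in period 2 for period 2):

```python
F, T = 2, 3
# qs[i][t][r] = q_itr: amount produced at facility i in period t for the
# demand of period r (nonzero only for r >= t). Hypothetical values.
qs = [[[0.0] * T for _ in range(T)] for _ in range(F)]
qs[0][0][0], qs[0][0][2] = 30.0, 40.0   # facility 1, period 1 -> periods 1 and 3
qs[1][1][1] = 20.0                      # facility 2, period 2 -> period 2

# (2.16): q_it = sum over destinations r >= t
q = [[sum(qs[i][t][r] for r in range(t, T)) for t in range(T)] for i in range(F)]
# (2.17): x_it = amount shipped in t = produced in any s <= t for demand t
x = [[sum(qs[i][s][t] for s in range(t + 1)) for t in range(T)] for i in range(F)]
# (2.18): I_it = produced by period t for demand after period t
I = [[sum(qs[i][s][r] for s in range(t + 1) for r in range(t + 1, T))
      for t in range(T)] for i in range(F)]

print(q, x, I)
```

The recovered variables automatically satisfy the flow balance of (MF*), which is why constraints (2.5) become redundant in the extended formulation.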
A solution to (ExMF) can be directly translated into a solution to (MF*) using equations (2.16), (2.17) and (2.18). Condition (ii) is more difficult to argue. An extreme point of the convex hull of the feasible solutions to (MF*) is such that (a) it satisfies the zero inventory property (q_it I_{i,t-1} = 0 for i = 1,...,F and t = 1,...,T); (b) demand in period t (t = 1,...,T) is satisfied from exactly one facility (x_it x_jt = 0 for i,j = 1,...,F, i ≠ j); (c) in period t (t = 1,...,T) a facility either does not produce, or produces the demand for a number of periods, and the periods do not need to be successive (Proposition 2.3.1). One can easily see (Figure 2-2) that an extreme point of (ExMF) satisfies exactly the same conditions. The correspondence between the extreme points of the two formulations shows that there is an extreme flow (tree solution) in formulation (ExMF) for every extreme flow of formulation (MF*).

Proposition 2.4.1 The optimal cost of the linear programming relaxation of the extended formulation of the multi-facility lot-sizing problem (ExMF) is at least as high as the optimal cost of the linear programming relaxation of the original formulation (MF*).

Proof: Every feasible solution to the linear programming relaxation of the extended formulation of the multi-facility lot-sizing problem (ExMF) can be transformed into a solution to the linear programming relaxation of the original problem formulation (MF*) using equations (2.16), (2.17) and (2.18). It follows that the optimal solution of the linear programming relaxation of (ExMF) can be transformed into a feasible solution (not necessarily the optimal solution) to the linear programming relaxation of (MF*). □

2.5 Primal-Dual Based Algorithm

The dual of (LPMF) has a special structure that allows us to develop a primal-dual based algorithm. The following is the formulation of the dual problem:

maximize  sum_{τ=1}^{T} b_τ v_τ

subject to (DMF)

sum_{τ=t}^{T} b_τ w_itτ <= s_it    i = 1,...,F; t = 1,...,T
v_τ - w_itτ <= c_itτ               i = 1,...,
F; t = 1,...,T; t <= τ <= T
w_itτ >= 0                         i = 1,...,F; t = 1,...,T; t <= τ <= T.

In an optimal solution to (DMF), both constraints w_itτ >= 0 and w_itτ >= v_τ - c_itτ should be satisfied. Since w_itτ does not appear in the objective function of (DMF), we can replace it with w_itτ = max(0, v_τ - c_itτ). This leads to the following condensed dual formulation:

maximize  sum_{τ=1}^{T} b_τ v_τ

subject to (DMF*)

sum_{τ=t}^{T} b_τ max(0, v_τ - c_itτ) <= s_it    i = 1,...,F; t = 1,...,T.

Recall that the extended formulation of the multi-facility lot-sizing problem is a special case of the facility location problem. The primal-dual scheme we discuss is, in principle, similar to the primal-dual scheme proposed by Erlenkotter [38] for the facility location problem; however, the implementation of the algorithm is different. Wagelmans et al. [103] consider the extended formulation of the lot-sizing problem and solve the corresponding dual problem in O(T log T) time. They show that the dual variables satisfy a monotonicity property, which they use to prove that the dual ascent algorithm they propose gives the optimal solution to the economic lot-sizing problem.

2.5.1 Intuitive Understanding of the Dual Problem

In this section we give an intuitive interpretation of the relationship between the primal and dual solutions of (ExMF). Suppose the linear programming relaxation of the extended formulation (LPMF) has an optimal solution (q*, y*) that is integral. Let O = { (i,t) : y*_it = 1 } and let (v*, w*) denote an optimal dual solution. The complementary slackness conditions for this problem are

(C1) y*_it [ s_it - sum_{τ=t}^{T} b_τ w*_itτ ] = 0    for i = 1,...,F; t = 1,...,T
(C2) q*_itτ [ c_itτ - v*_τ + w*_itτ ] = 0             for i = 1,...,F; t = 1,...,T; t <= τ <= T
(C3) w*_itτ [ q*_itτ - b_τ y*_it ] = 0                for i = 1,...,F; t = 1,...,T; t <= τ <= T
(C4) v*_τ [ b_τ - sum_{i=1}^{F} sum_{t=1}^{τ} q*_itτ ] = 0    for τ = 1,...,T.

By conditions (C1), if a facility produces in a particular time period, the setup cost must be fully paid (i.e., if (i,t) ∈ O, then s_it = sum_{τ=t}^{T} b_τ w*_itτ). Consider conditions (C3).
Now, if facility i produces in period t but the demand in that period is satisfied from inventory from a previous period (q*_itt = 0 and q*_itt - b_t y*_it ≠ 0), then w*_itt = 0. This implies that the price paid for the product in a given period contributes to the setup cost only of the period in which the product is produced. By conditions (C2), if q*_itτ > 0, then v*_τ = c_itτ + w*_itτ. Thus, we can think of v*_τ as the total cost (per unit of demand) in period τ; of this, c_itτ goes toward production, transportation and inventory holding costs, and w*_itτ is the contribution to the production setup cost.

2.5.2 Description of the Algorithm

The simple structure of the dual problem can be exploited to obtain near-optimal feasible solutions by inspection. Suppose that the optimal values of the first k-1 dual variables v_1,...,v_{k-1} are known. Then, to be feasible, v_k must satisfy the constraints

b_k max(0, v_k - c_itk) <= s_it - sum_{τ=t}^{k-1} b_τ max(0, v_τ - c_itτ) =: s̄_itk    (2.23)

for all i = 1,...,F and t = 1,...,k, where s̄_itk denotes the portion of the setup cost s_it not yet allocated to earlier periods. In order to maximize the dual objective, we should assign to v_k the largest value satisfying these constraints. When b_k > 0, this value is

v_k = min_{i; t <= k} { c_itk + s̄_itk / b_k }.    (2.24)

Note that s̄_itk > 0 implies v_k > c_itk for the minimizing pair (i,t). A dual feasible solution can be obtained simply by calculating the values of the dual variables sequentially using equation (2.24) (Figure 2-3), and a backward construction algorithm can then be used to generate primal feasible solutions (Figure 2-4). For the primal-dual pair of solutions to be optimal, the complementary slackness conditions should be satisfied. However, the primal and dual solutions found do not necessarily satisfy the complementary slackness conditions.

    s̄_it = s_it for i = 1,...,F; t = 1,...,T
    for τ = 1 to T do
        if b_τ = 0 then v_τ = 0
        else v_τ = min_{i; t <= τ} { c_itτ + s̄_it / b_τ }
        for t = 1 to τ do
            for i = 1 to F do
                w_itτ = max{ 0, v_τ - c_itτ }
                s̄_it = max{ 0, s̄_it - b_τ w_itτ }
            enddo
        enddo
    enddo

Figure 2-3: Dual algorithm

Proposition 2.5.1 The solutions obtained using the primal and dual algorithms are feasible and they always satisfy the complementary slackness conditions (C1) and (C2).

Proof: The primal and dual solutions generated are feasible by construction. Since the primal algorithm sets q_itτ > 0 only when c_itτ - v_τ + w_itτ = 0, the solution satisfies conditions (C2). The dual algorithm constructs solutions by making sure that equations (2.23) are satisfied. Therefore, the dual solutions are always such that b_{τ+1} w_{it,τ+1} <= s̄_{it,τ+1}. If s̄_itτ = 0, then w_{it,τ+1} = 0, and since the dual algorithm sets s̄_{it,τ+1} = s̄_itτ - b_{τ+1} w_{it,τ+1}, we also have s̄_{it,τ+1} = 0. Continuing this way, it is clear that if at some point in the calculation we get s̄_itτ = 0, we subsequently obtain s̄_itτ = s̄_{it,τ+1} = ... = s̄_itT = 0 and w_itτ = w_{it,τ+1} = ... = w_itT = 0. The primal algorithm sets y_it = 1 only when the remaining setup cost s̄_it = 0; this implies that conditions (C1) will always be satisfied. □

Hence, in order to determine whether the solution obtained using the primal-dual algorithm is optimal, one can check conditions (C3). Another way to check whether the solution is optimal is by comparing the objective function values from the primal and dual algorithms. Since the dual algorithm gives a dual feasible solution and the primal algorithm gives a primal feasible solution, at optimality the two objective values should be equal.

    y_it = 0, q_itτ = 0 for i = 1,...,F; t = 1,...,T; τ >= t
    P = { τ : b_τ > 0, τ = 1,...,T }
    Start: τ* = max{ τ ∈ P }
    Step 1: for t = τ* down to 1, and i = 1 to F:
                if s̄_it = 0 and c_itτ* - v_τ* + w_itτ* = 0
                then y_it = 1, (i*, t*) = (i, t), go to Step 2
    Step 2: for τ = t* to τ*:
                if c_i*t*τ - v_τ + w_i*t*τ = 0 then P = P \ { τ }, q_i*t*τ = b_τ
    Step 3: if P ≠ ∅ then go to Start, else stop

Figure 2-4: Primal algorithm

2.5.3 Running Time of the Algorithm

Here we discuss the running time of the primal-dual algorithm.
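As a concrete companion to the dual algorithm of Figure 2-3 (and a reference point for the operation counts below), here is a minimal Python sketch of the dual ascent. The single-facility instance and the precomputed per-unit costs c_itτ are illustrative assumptions, not data from the text.

```python
# Illustrative 1-facility, 3-period instance (hypothetical numbers).
F, T = 1, 3
b = [30, 20, 40]                  # demands b_r
s = [[50, 50, 50]]                # setup costs s_it
# ct[i][t][r]: per-unit cost c_itr of serving period r from production at
# facility i in period t (meaningful for t <= r; illustrative values).
ct = [[[5, 6, 7], [0, 5, 6], [0, 0, 5]]]

sbar = [[s[i][t] for t in range(T)] for i in range(F)]   # unallocated setup cost
v = [0.0] * T
for r in range(T):                # assign dual variables sequentially (2.24)
    if b[r] > 0:
        v[r] = min(ct[i][t][r] + sbar[i][t] / b[r]
                   for i in range(F) for t in range(r + 1))
    for t in range(r + 1):        # update w_itr and remaining setup slack
        for i in range(F):
            w = max(0.0, v[r] - ct[i][t][r])
            sbar[i][t] = max(0.0, sbar[i][t] - b[r] * w)
print(v)
```

The three nested loops (over r, t and i) with constant work per iteration are what produce the O(FT^2) terms counted below.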
The total number of logical and arithmetic operations performed in the dual algorithm is:
- FT assignments during initialization
- 2T + FT + FT^2 comparisons
- FT + FT^2 multiplications
- FT^2 additions
- FT^2 + T assignments
- 2FT^2 subtractions

The total number of logical and arithmetic operations in the primal algorithm is:
- FT + FT^2 assignments during initialization
- T + 2FT comparisons
- T + 2FT assignments
- 2T + FT^2 subtractions

Thus, the running time of the primal-dual algorithm is O(FT^2).

    1: Solve the linear programming relaxation of (MF*).
    2: If y* is integral, then STOP: the solution is optimal.
       Else, solve the separation problem.
       If no violated inequality is found, STOP.
       Else, add the violated inequality to the relaxation and go to Step 1.

Figure 2-5: Cutting plane algorithm

2.6 Cutting Plane Algorithm

In this section we derive a set of valid inequalities for the multi-facility economic lot-sizing problem. We show that these inequalities are facets of the feasible region. The inequalities are then used in a cutting plane algorithm that finds tight lower bounds. Figure 2-5 presents the steps of the cutting plane algorithm. The algorithm stops when either (i) the optimal solution to (MF*) is found, or (ii) no more valid inequalities can be generated. Next we discuss in detail the valid inequalities and the separation algorithm, and we show that these inequalities are facets of the feasible region of (MF*).

2.6.1 Valid Inequalities

Theorem 2.6.1: For any 1 <= l <= T and any S ⊆ L = {1,...,l}, the inequalities

sum_{i=1}^{F} ( sum_{t∈S} q_it + sum_{t∈L\S} b_tl y_it ) >= b_1l    (2.25)

are valid inequalities for the multi-facility economic lot-sizing problem.

Proof: In order to prove that (2.25) are valid inequalities, we must show that they are satisfied by all feasible solutions with integer y. Consider formulation (MF*). Given a solution (q, x, I, y), suppose first that y_it = 0 for all i and all t ∈ L \ S. The inequality q_it <= b_tT y_it implies that if y_it = 0, then q_it = 0. Thus,

sum_{i=1}^{F} ( sum_{t∈S} q_it + sum_{t∈L\S} b_tl y_it ) = sum_{i=1}^{F} sum_{t∈S} q_it = sum_{i=1}^{F} sum_{t=1}^{l} q_it.

Consider constraints (2.5). Summing these constraints over all i = 1,...,F and t = 1,...,l, we get

sum_{i=1}^{F} sum_{t=1}^{l} q_it = sum_{i=1}^{F} sum_{t=1}^{l} x_it + sum_{i=1}^{F} I_il    for l = 1,...,T.
Since I_il >= 0 for all l = 1,...,T and sum_{i=1}^{F} x_it = b_t for all t = 1,...,T,

sum_{i=1}^{F} sum_{t=1}^{l} q_it >= b_1l    for l = 1,...,T.

As a result, solutions such that y_it = 0 for all t ∈ L \ S satisfy inequality (2.25). Now let t' = argmin_t { t ∈ L \ S : y_it = 1 for some i }. Then y_it = q_it = 0 for t ∈ (L \ S) ∩ {1,...,t'-1}. Hence,

sum_{i=1}^{F} ( sum_{t∈S} q_it + sum_{t∈L\S} b_tl y_it ) >= sum_{i=1}^{F} sum_{t=1}^{t'-1} q_it + b_t'l >= b_1,t'-1 + b_t'l = b_1l.

This completes the proof that every feasible solution with integer y satisfies the valid inequalities (2.25). □

The intuition behind the inequalities (2.25) is as follows. Assume that no production takes place in the periods in L \ S. Then the full demand b_1l has to be produced in the periods in S, giving sum_{i=1}^{F} sum_{t∈S} q_it >= b_1l. Now suppose that we do produce in some of the periods in L \ S, and let period k be the first such period. The production for periods 1,...,k-1 then has to be done in periods in S. It is possible that the remaining demand b_kl is produced in a single period in L \ S, which explains the coefficients of the y variables.

2.6.2 Separation Algorithm

Let (q*, x*, I*, y*) be the solution to the linear programming relaxation of (MF*). If this solution satisfies the integrality constraints (y ∈ {0,1}), it is the solution to (MF*) as well. However, if it does not, we have to identify a valid inequality that cuts off this solution from the set of feasible solutions to the linear programming relaxation of (MF*). A challenging problem is to identify the sets S and L \ S for which the valid inequality (2.25) is violated. An exponential number of sets S and L \ S exists; however, the following separation algorithm, which runs in polynomial time (O(FT^2)), identifies the sets S and L \ S. Note that, for a given time period l (l = 1,...,T), this procedure identifies the valid inequality that is most violated by the current solution.

For l = 1,...,T:
1. Put t ∈ S_l if sum_{i=1}^{F} q*_it <= sum_{i=1}^{F} b_tl y*_it, and t ∈ L \ S_l if sum_{i=1}^{F} q*_it > sum_{i=1}^{F} b_tl y*_it.
2. Check whether sum_{i=1}^{F} ( sum_{t∈S_l} q*_it + sum_{t∈L\S_l} b_tl y*_it ) < b_1l.
If so, the inequality is violated; otherwise, for each l the following holds:

min_{S⊆L} { sum_{i=1}^{F} ( sum_{t∈S} q*_it + sum_{t∈L\S} b_tl y*_it ) - b_1l } = sum_{i=1}^{F} ( sum_{t∈S_l} q*_it + sum_{t∈L\S_l} b_tl y*_it ) - b_1l >= 0.

If no violation can be found, all valid inequalities of the form (2.25) are satisfied by the current solution (q*, x*, I*, y*). Therefore, either this solution is already integral, or, if it is not, the valid inequalities are not able to cut off this non-integer solution from the feasible region of formulation (MF*). There are two possible reasons for this: either our valid inequalities are not facets of the convex hull of the feasible region, or, in case they are facets, they may not describe the whole feasible region, and other facets are needed as well. In Section 2.6.3 we show that the valid inequalities (2.25) are indeed facets of the convex hull of the feasible region.

2.6.3 Facets of the Multi-Facility Lot-Sizing Problem

Let K be the feasible region of the multi-facility lot-sizing problem, let co(K) be the corresponding convex hull, and let

G = { (q, y) ∈ co(K) : sum_{i=1}^{F} ( sum_{t∈S} q_it + sum_{t∈L\S} b_tl y_it ) = b_1l }.

G consists of all the points in the convex hull of the feasible region that satisfy the valid inequality with equality; in other words, G consists of all the points in the convex hull of the feasible region that lie on the hyperplane defined by inequality (2.25). A common approach to show that an inequality defines a facet of co(K) is to show that there exist precisely dim(co(K)) vectors on the boundary of G that are affinely independent. Note that if the boundary of G does not contain zero, this is equivalent to showing that there exist precisely dim(co(K)) vectors on the boundary of G that are linearly independent (Nemhauser and Wolsey [86], Wolsey [105]). In our problem zero is not a feasible solution. The following is a procedure that helps us determine whether the inequalities (2.25) are facets of co(K).
This means these inequalities are necessary if we wish to describe co(K) by a system of linear inequalities.

Proposition 2.6.1: If b_t > 0 for t = 1,...,T, the dimension of the feasible set of the multi-facility economic lot-sizing problem is dim(K) = 3TF - F - T - 1.

Proof: We show that the dimension of the feasible region of the multi-facility lot-sizing problem is 3TF - F - T - 1 in both cases, whether formulation (MF*) or (RMF*) is considered.

Consider formulation (MF*): a feasible solution in K has a total of 3TF q, x and y variables and F(T-1) I variables. The assumption that b_t > 0 implies that b_1 > 0 and y_i1 = 1 for at least one i = 1,...,F. This reduces the dimension of K by one. The equations sum_{i=1}^{F} x_it = b_t for t = 1,...,T reduce the dimension of K by T. The equations I_{i,t-1} + q_it = x_it + I_it for i = 1,...,F and t = 1,...,T reduce the dimension of K by TF. This shows that the dimension of the feasible region of formulation (MF*) is at most 3TF - F - T - 1.

Consider formulation (RMF*): a feasible solution in K has a total of 3TF q, x and y variables. The assumption that b_t > 0 for t = 1,...,T implies that b_1 > 0 and y_i1 = 1 for at least one i = 1,...,F; therefore, the dimension of K is reduced by one. The equations sum_{i=1}^{F} x_it = b_t for t = 1,...,T reduce the dimension of K by T. The equations sum_{t=1}^{T} q_it = sum_{t=1}^{T} x_it for i = 1,...,F reduce the dimension of K by F. The dimension of the feasible region of formulation (RMF*) is thus at most 3TF - F - T - 1. However, in Theorem 2.6.2 we show that there exist 3TF - F - T affinely independent points in the convex hull of the feasible region of the multi-facility lot-sizing problem that satisfy the valid inequalities (2.25) with equality. This implies that there exist 3TF - F - T affinely independent points in the feasible region of the multi-facility lot-sizing problem. This concludes the proof that the dimension of the feasible region of the multi-facility lot-sizing problem is 3TF - F - T - 1.
□

To illustrate that the above result is correct, consider the following simple example. Barany et al. [10] show that the dimension of the feasible region of the economic lot-sizing problem is 2T - 2. The economic lot-sizing problem is a special case of the multi-facility problem with F = 1; therefore its dimension is 3T - 1 - T - 1 = 2T - 2. The next step is to show that there are 3TF - F - T - 1 affinely independent points in K that satisfy the valid inequality with equality.

Theorem 2.6.2 If b_t > 0 for t = 1,...,T, the (l,S) inequality (2.25) defines a facet of K whenever l < T, 1 ∈ S and L \ S ≠ ∅.

Proof: See Appendix.

2.7 Dynamic Programming Based Heuristic

2.7.1 Introduction

Dynamic programming provides a framework for decomposing certain optimization problems into a nested family of subproblems. The nested substructure suggests a recursive approach for solving the original problem from the solutions of the subproblems. The recursion expresses an intuitive principle of optimality for sequential decision processes: once we have reached a particular state, a necessary condition for optimality is that the remaining decisions must be chosen optimally with respect to that state (Nemhauser and Wolsey [86]). Dynamic programming was originally developed for the optimization of sequential decision processes. A typical example used in the literature to explain how dynamic programming works is the economic lot-sizing problem (Nemhauser and Wolsey [86] and Wolsey [105]). Consider the lot-sizing problem with T time periods (t = 1,...,T). At the beginning of period t, the process is in state I_{t-1}, which depends only on the initial state I_0 (the initial inventory) and the decision variables y_τ and q_τ for τ = 1,...,t-1. The contribution of the current state to the objective function depends on I_{t-1}.
Let us denote by φ_t(I_{t−1}) the value of the optimal decisions in periods t,...,T:

φ_t(I_{t−1}) = min_{q_t, y_t} { p_t q_t + s_t y_t + h_t(I_{t−1} + q_t − b_t) + φ_{t+1}(I_{t−1} + q_t − b_t) }   (2.26)

The difficulty with this algorithm is that, since demand in period t may be satisfied by production in any period τ ≤ t, the level of inventory at the end of period t − 1 can be as large as b_tT, and it appears that a large number of combinations of (q_t, I_{t−1}) must be considered to solve the problem. Fortunately, the following properties of an optimal solution to the economic lot-sizing problem make the problem easier.

Theorem 2.7.1 (Nemhauser and Wolsey [86]) An optimal solution to the economic lot-sizing problem satisfies the following:
q_t I_{t−1} = 0 for t = 1,...,T;
if q_t > 0, then q_t = Σ_{r=t}^{τ} b_r for some t ≤ τ ≤ T;
if I_{t−1} > 0, then I_{t−1} = Σ_{r=t}^{τ} b_r for some t ≤ τ ≤ T.

From Theorem 2.7.1 it follows that at most 2(T − t) combinations of (q_t, I_{t−1}) must be considered to solve the recursive function (2.26). Thus the overall running time is O(T²), and recursive optimization yields a polynomial-time algorithm for the uncapacitated economic lot-sizing problem.

2.7.2 Description of the Algorithm

In this section we present a dynamic programming procedure to solve the multifacility lot-sizing problem. This procedure is a heuristic, and therefore does not consider all solutions, possibly missing the optimal solution to the problem. In Section 2.3 we discussed the non-splitting property of optimal solutions to the multifacility lot-sizing problem. It is important to note that a production, inventory and transportation plan is optimal if and only if the corresponding arcs with positive flow form an arborescence (rooted tree) in the network.
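The single-facility recursion (2.26) combined with the regeneration-point properties of Theorem 2.7.1 can be implemented directly; the sketch below is our own illustration (hypothetical cost and demand vectors, 1-indexed periods mapped onto 0-based Python lists). It transcribes the recursion literally, so the inner sums push the work above O(T²); prefix-summing demands and holding costs would recover the stated bound.

```python
def wagner_whitin(s, p, h, b):
    """Dynamic program for the uncapacitated economic lot-sizing problem.
    Exploits q_t * I_{t-1} = 0 (Theorem 2.7.1): each lot, produced in some
    period j, covers the demand of a consecutive block of periods j..t."""
    T = len(b)
    INF = float("inf")
    F = [0.0] + [INF] * T          # F[t]: min cost of covering periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # the last lot is produced in period j
            lot = sum(b[j - 1:t])  # demand covered by that lot
            cost = s[j - 1] + p[j - 1] * lot
            # units for period r are held in inventory in periods j..r-1
            for r in range(j, t + 1):
                cost += b[r - 1] * sum(h[j - 1:r - 1])
            F[t] = min(F[t], F[j - 1] + cost)
    return F[T]

# Tiny hypothetical instance: producing everything in period 1 is cheapest.
# s=[10,10,10], p=[1,1,1], h=[1,1,1], b=[2,3,2] -> optimal cost 24
print(wagner_whitin([10, 10, 10], [1, 1, 1], [1, 1, 1], [2, 3, 2]))  # 24.0
```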
Important implications of this result are the following: in an optimal production plan, the demand b_t for a given period (t = 1,...,T) will be produced at a single facility in a single time period; in every time period a facility either will not produce, or will produce the demand for a number of periods, and these time periods need not be successive. Let T_it be the set of all periods covered by production at facility i in period t in an optimal arborescence. Then the optimal production plan is

q_it = Σ_{τ=t}^{T} q_itτ = Σ_{τ∈T_it} b_τ,   i = 1,...,F; t = 1,...,T.

The bold arcs in Figures 2-7 to 2-10 represent the extreme flows. As illustrated in Figures 2-7 and 2-8, an extreme flow of the multifacility lot-sizing problem is not necessarily sequential. The example in Figure 2-8 shows an extreme point solution in which demands b_1 and b_3 are satisfied from production at facility 1 in period 1, and demand b_2 is satisfied from production at facility 2 in period 2.

[Figure 2-6: Network representation of a multifacility lot-sizing problem with 4 periods.]

Using this information we can now reduce the multifacility lot-sizing problem to a shortest path problem in an auxiliary network G'. We build G' in the following way: let the total number of nodes in G' be T + 1, one for each time period along with a dummy node T + 1. Traversing arc (τ, τ') ∈ G' represents the choice of producing at a single facility in a single time period t = 1,...,τ to satisfy the demand in periods τ, τ + 1,...,τ' − 1. The cost of arc (τ, τ') is calculated using the following cost function:

g_ττ' = min_{i=1,...,F; t=1,...,τ} { s_it + c_itτ b_τ + c_it,τ+1 b_τ+1 + ... + c_it,τ'−1 b_τ'−1 }   (2.27)

Let V_τ be the minimum cost of satisfying the demand of periods τ,...,T, for τ = 1,...,T and τ' > τ.
[Figure 2-7: Sequential extreme flow.]

The recursion function for the multifacility lot-sizing problem is

V_τ(Σ_{i=1}^F I_{i,τ−1}) = min_{τ'=τ+1,...,T+1} { g_ττ' + V_τ'(Σ_{i=1}^F I_{i,τ'−1}) },   (2.28)

V_T(Σ_{i=1}^F I_{i,T−1}) = g_T,T+1.   (2.29)

[Figure 2-8: Non-sequential extreme flow.]

The total number of arcs in G' is T(T + 1)/2. Given the costs g_ττ' for every arc (τ, τ') ∈ G', the recursive functions (2.28) and (2.29) provide the optimal solution in O(T²). Every (directed) path in G' that connects node 1 to node T + 1 corresponds to a feasible solution to the original problem. The network G' for a 4-period problem is presented in Figure 2-6.

Theorem 2.7.2 For every path in G' that connects node 1 to node T + 1, there exists a corresponding extreme point solution of the extended problem formulation (ExMF), and every sequential extreme point of (ExMF) is represented by a path in G'.

Proof: See Appendix.

Wu and Golbasi [106] propose a similar shortest path algorithm to solve the multifacility lot-sizing problem. They showed that their algorithm gives the optimal solution to the problem if the following conditions hold: (i) no simultaneous production at more than one facility takes place in a given period; in other words, q_it q_jt = 0 for i, j = 1,...,F, i ≠ j and t = 1,...,T; (ii) no production is scheduled at all if there is inventory carried over from the previous period at one of the facilities; in other words, q_it I_{j,t−1} = 0 for i, j = 1,...,F and t = 1,...,T.

[Figure 2-9: Non-sequential flow: Case 1.]

These conditions obviously restrict the search for a solution to sequential extreme flows only. Furthermore, they investigate only part of the sequential extreme flows, namely the ones that satisfy the above conditions. Different from Wu and Golbasi, our procedure considers a wider range of extreme flows: we consider all the sequential extreme flows, although some of them may not satisfy conditions (i) and (ii).
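The whole procedure — arc costs (2.27) followed by the recursion (2.28)–(2.29) — can be sketched compactly. This is our own illustration under assumed inputs: `s[i][t]` plays the role of s_it, `c[i][t][r]` the composite unit cost of producing at facility i in period t for period r (production plus holding plus transportation), and all indices are 0-based, with the dummy node T appearing as the last path node.

```python
def solve_multifacility(s, c, b):
    """Heuristic DP for the multifacility lot-sizing problem: build the
    auxiliary network G' over nodes 0..T (0-based) and find a shortest
    path from node 0 to node T.  Arc (u, v) means: one facility, in a
    single period t <= u, produces the demand of periods u..v-1."""
    F, T = len(s), len(b)
    INF = float("inf")
    # g[u][v]: arc cost, eq. (2.27), minimized over facility i and period t
    g = [[INF] * (T + 1) for _ in range(T)]
    for u in range(T):
        for v in range(u + 1, T + 1):
            for i in range(F):
                for t in range(u + 1):
                    cost = s[i][t] + sum(c[i][t][r] * b[r] for r in range(u, v))
                    g[u][v] = min(g[u][v], cost)
    # backward recursion, eqs. (2.28)-(2.29): V[T] = 0 at the dummy node
    V = [INF] * T + [0.0]
    for u in range(T - 1, -1, -1):
        V[u] = min(g[u][v] + V[v] for v in range(u + 1, T + 1))
    return V[0]

# One facility, two periods: producing both demands in period 0 costs 8.
print(solve_multifacility([[5.0, 5.0]], [[[1.0, 2.0], [0.0, 1.0]]], [1.0, 1.0]))  # 8.0
```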
Figure 2-9 presents a sequential extreme flow that violates condition (i), and Figure 2-10 presents a sequential extreme flow that violates condition (ii).

2.7.3 Running Time of the Algorithm

The above dynamic programming algorithm to find the shortest path in the auxiliary graph G' has running time O(m), where m is the number of arcs in G'. Since the graph is complete,

m = Σ_{t=1}^{T} t = T(T + 1)/2.

Therefore, the running time of our algorithm is O(c + T²), where c denotes the time it takes to calculate the costs of all arcs in G'.

[Figure 2-10: Non-sequential flow: Case 2.]

In calculating the cost of a particular arc (τ, τ') ∈ G', we need to perform a certain number of comparisons, additions and multiplications. For an arc (τ, τ') (where τ = 1,...,T and τ' = τ + 1,...,T + 1), the number of comparisons needed to calculate its cost is τF − 1. The total number of comparisons needed to generate all the arc costs for a problem with T time periods is

Comp. = Σ_{τ=1}^{T} Σ_{τ'=τ+1}^{T+1} (τF − 1),

thus O(FT³). The number of additions required to calculate the cost of arc (τ, τ') is (τ' − τ + 1)τF. The total number of additions is

Add. = Σ_{τ=1}^{T} Σ_{τ'=τ+1}^{T+1} (τ' − τ + 1)τF,

thus O(FT⁴). The number of multiplications required to calculate the cost of arc (τ, τ') is (τ' − τ)τF. The total number of multiplications is

Mltp. = Σ_{τ=1}^{T} Σ_{τ'=τ+1}^{T+1} (τ' − τ)τF,

thus O(FT⁴). This shows that the time complexity of solving the problem using the above dynamic programming algorithm is O(FT⁴).

2.8 Computational Results

To test the performance of the algorithms discussed in this chapter we randomly generated a set of test problems and compared the computation times and solution quality with those of the general purpose solver CPLEX. The algorithms were compiled and executed on an IBM computer with 2 Power3 PC processors, each with a 200 MHz CPU.
The scope of our experiments, other than comparing the algorithms, is to see how different factors (such as the ratio of setup to variable cost, the number of facilities, etc.) affect their performance. We first generate a nominal case problem as follows:

Production setup costs s_it ~ U[1200, 1500]
Production variable costs p_it ~ U[5, 15]
Holding costs h_it ~ U[5, 15]
Demand b_t ~ U[5, 15]
Number of facilities F = 150
Number of periods T = 30

Most of the above parameters are the same as the ones used by Wu and Golbasi [106] for a related problem. To generate meaningful transportation variable costs, we randomly generated the facility and demand point locations from uniformly distributed points on a [0, 10]² square and calculated the corresponding Euclidean distances. We assumed a one-to-one correspondence between the Euclidean distances and the unit transportation costs.

Varying one or more factors from the nominal case, we generated five groups of test problems. In the first group of problems we change the level of production setup costs from the nominal case to the following: s_it ~ U[200, 300], s_it ~ U[200, 900], s_it ~ U[600, 900], s_it ~ U[900, 1500], s_it ~ U[1500, 2000], s_it ~ U[2000, 3000], s_it ~ U[3000, 6000], s_it ~ U[5000, 10000], and s_it ~ U[10000, 20000]. These, together with the nominal case problem, give a total of 10 problem classes (problem classes 1 to 10). In the second group of problems we change the length of the time horizon to 5, 10, 15, 20, 25, 35, 40 (problem classes 11 to 17). In the third group we change the number of facilities to 120, 130, 140, 160, 170, 180, 190, and 200 (problem classes 18 to 25). In the fourth group of problems the level of demand is changed to b_t ~ U[20, 50], b_t ~ U[50, 100], b_t ~ U[100, 200], b_t ~ U[200, 400], and b_t ~ U[400, 1000] (problem classes 26 to 30). Finally, in the fifth group the level of holding costs is changed to h_t ~ U[−20, −10], h_t ~ U[−10, 10], h_t ~ U[10, 20], h_t ~ U[20, 40], and h_t ~ U[40, 100] (problem classes 31 to 35).
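The generation scheme above can be sketched as follows. This is our own illustrative reconstruction (function and parameter names, including the `seed`, are assumptions): costs and demands are uniform draws, and the unit transportation cost of each facility is the Euclidean distance from its random location to the random demand point.

```python
import math
import random

def nominal_instance(F=150, T=30, seed=0):
    """Generate one nominal-case instance: setup, production and holding
    costs and demands are uniform draws; the unit transportation cost of
    facility i is its Euclidean distance to the demand point location."""
    rng = random.Random(seed)
    s = [[rng.uniform(1200, 1500) for _ in range(T)] for _ in range(F)]
    p = [[rng.uniform(5, 15) for _ in range(T)] for _ in range(F)]
    h = [[rng.uniform(5, 15) for _ in range(T)] for _ in range(F)]
    b = [rng.uniform(5, 15) for _ in range(T)]
    demand_pt = (rng.uniform(0, 10), rng.uniform(0, 10))
    fac_pts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(F)]
    c = [math.dist(fp, demand_pt) for fp in fac_pts]  # one unit cost per facility
    return s, p, h, b, c
```

Varying one argument at a time (e.g. `T=40` or `F=200`) reproduces the factor-by-factor structure of the five test groups.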
For each problem class we generate 20 instances. The errors and running times we present for each problem class are averages over the 20 problem instances. A summary of the results from the experiments is presented in Tables 2-2 to 2-5. We do not present the results from implementing the cutting plane algorithm, since in almost all of the problems CPLEX outperformed our algorithm in terms of solution quality and running times. We would like to emphasize that the linear programming relaxation of the extended formulation and the dual algorithm give lower bounds, while the dynamic programming and primal algorithms give feasible solutions to the problems.

[Table 2-1: Problem characteristics — numbers of nodes and arcs in the original formulation, the dynamic programming network, and the extended formulation for each problem class; the tabulated values are not legible in this copy.]

The errors that we present give the deviation of the heuristics and lower bounds from the optimal solution found by solving the MILP formulation using CPLEX. The linear programming relaxation of the extended formulation is solved using the CPLEX callable libraries as well. We measure the tightness of the lower bounds as follows:

Error(%) = (CPLEX − Lower bound) / CPLEX × 100,

and the quality of the heuristics using the following:

Error(%) = (Upper bound − CPLEX) / CPLEX × 100.

For all problem classes except problem class 32, the dynamic programming algorithm gave the optimal solution in all 20 instances. The running time of this algorithm was at most 4.31 CPU seconds. On the other hand, the primal algorithm did not perform
well, as its errors went up to 52% for problem class 32. However, its running time for all problems was less than 1 CPU second. Both algorithms are much faster than CPLEX.

[Table 2-2: Results from upper bound procedures for problem Groups 1 and 2 — errors and running times of the dynamic programming and primal algorithms versus CPLEX; the tabulated values are not legible in this copy.]

The linear programming relaxation of the extended formulation generated solutions that are less than 0.1% from optimal for all problem classes. The dual algorithm provided tight lower bounds as well: for all problem classes except problem class 32, it generated bounds that are less than 2% from optimal. However, the linear programming relaxation of the extended formulation took almost as much time as solving the MILP formulation, while the running time of the dual algorithm for all problem classes was less than 1 CPU second. Based on the results presented in Tables 2-2 to 2-5, we can see that an effective algorithm for our problem would combine the dynamic programming algorithm to generate upper bounds with the dual algorithm to generate lower bounds.

[Table 2-3: Results from upper bound procedures for problem Groups 3, 4 and 5; the tabulated values are not legible in this copy.]

[Table 2-4: Results from lower bound procedures for problem Groups 1 and 2 — errors and running times of the linear programming relaxation and the dual algorithm; the tabulated values are not legible in this copy.]
Now we want to provide more insight into the problem characteristics that affect the performance of the heuristics. It has been shown in other studies (Hochbaum and Segev [60], Eksioglu et al. [34]) that problem difficulty depends on the values of the fixed costs. In problem classes 1 to 10 the level of production setup costs is increased (while everything else is kept the same). Thus, the time it took to solve the MILP formulation of the problem, as well as the time it took to solve the linear programming relaxation of the extended formulation, increased as the level of fixed charge costs increased. However, the computation times of the dual-primal and dynamic programming algorithms were not affected by the change in setup costs. This can be explained by noting that their running times depend only on the number of facilities and the length of the time horizon.

From the results for problem groups 2 and 3, one can see that as the number of time periods and the number of facilities increase, the running time of the dynamic programming algorithm increases. Problem classes 17 and 25 have almost the same number of arcs; however, the time it took to solve problem class 17 is almost twice the time it took to solve problem class 25. The reason is that the number of time periods in problem class 17 is higher (the running time of this algorithm is O(FT⁴)).

Another interesting observation is that all algorithms performed very poorly in solving problem class 32. In this problem class we generated the holding costs in the interval [−10, 10]. For this problem class, the quality of both the lower and upper bounds generated is worse than for the rest of the problems. The dynamic programming algorithm gave solutions that are 1.24% from optimal and the primal algorithm gave solutions 13.19% from optimal.
The lower bounds generated using the linear programming relaxation of the extended formulation were 0.16% from optimal, and the dual algorithm gave an average error of 51.84%. The reason for such performance is that the holding costs are not restricted in sign: Wu and Golbasi [106] show that the multifacility lot-sizing problem is NP-hard in case the holding costs are not restricted in sign.

[Table 2-5: Results from lower bound procedures for problem Groups 3, 4 and 5; the tabulated values are not legible in this copy.]

We wanted to further investigate the performance of the algorithms for the case when the holding costs are not restricted in sign. We generated a sixth group of problems that have holding costs uniformly distributed in the interval [−10, 10]. We reran problem classes 1 to 10, creating 10 new problem classes (problem classes 36 to 45).

[Table 2-6: Results from upper bound procedures for problem Group 6; the tabulated values are not legible in this copy.]

[Table 2-7: Results from lower bound procedures for problem Group 6; the tabulated values are not legible in this copy.]

Tables 2-6 and 2-7 present the results for the sixth group of problems. For this set of problems, the dynamic programming algorithm performed well: its maximum error was less than 1.5% from optimal and its running time averaged 1.40 CPU seconds. The primal algorithm performed very poorly for this group of problems.
The linear programming relaxation of the extended formulation gave tight lower bounds; the maximum error presented for this group of problems is 0.3%. However, the running time of this algorithm is comparable to the time it took CPLEX to solve the corresponding MILP formulation. The dual algorithm also performed poorly.

In our final group of problems (problem group seven), we consider demand that shows a seasonality pattern. Demands are generated as follows:

b_t = 200 + σ z_t + a sin[2π (t + d/4) / d],

where
σ is the standard error of demand,
z_t is an i.i.d. standard normal random variable,
a is the amplitude of the seasonal component,
d is the length of a seasonal cycle in periods.

These demands are generated in the same way as in Baker et al. [5] and Chen et al. [22]. In our test problems we take σ = 67, a = 125 and d = 12. Table 2-8 presents the characteristics of the problems generated. The errors presented in Table 2-9 are calculated as follows:

Gap(%) = (Upper Bound − Lower Bound from Dual Alg.) / (Lower Bound from Dual Alg.) × 100.

For problem classes 46 to 56 we compare the solutions from the dynamic programming and primal algorithms with the corresponding optimal solutions from CPLEX. For these problems, dynamic programming performed very well, as the maximum error presented is below 0.1%. However, the running time of this algorithm is higher than the running time of CPLEX: it took CPLEX on average 53 CPU seconds to run problem class 56, and for the same problem it took the dynamic programming algorithm on average 354 CPU seconds. The primal algorithm also performed very well. The maximum error presented is 0.35%, and the running time of the primal algorithm was less than 1 CPU second for this set of problems. For problem classes 57 to 63 we do not have the optimal solutions: CPLEX failed to solve these problems because of their size.
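The seasonal demand stream described above can be generated as in the following sketch (our own illustration; `sigma` stands for σ, `amp` for the amplitude a, `d` for the cycle length, with the experimental values as defaults).

```python
import math
import random

def seasonal_demand(T, sigma=67.0, amp=125.0, d=12, seed=0):
    """b_t = 200 + sigma * z_t + amp * sin(2*pi*(t + d/4)/d), z_t ~ N(0,1)."""
    rng = random.Random(seed)
    return [200.0 + sigma * rng.gauss(0.0, 1.0)
            + amp * math.sin(2.0 * math.pi * (t + d / 4.0) / d)
            for t in range(1, T + 1)]
```

With `sigma=0` the stream is a pure sinusoid around the base level 200, which is a convenient sanity check before adding noise.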
Since the optimal solutions for these classes are not available, we use the lower bounds generated by the dual algorithm to calculate the error gaps. The dynamic programming algorithm gave very good solutions: the maximum error gap was 0.12%, but the running time of the algorithm went as high as 115,453 CPU seconds. The primal algorithm gave a maximum gap of about 0.2% with a maximum running time of 14.05 CPU seconds.

[Table 2-8: Characteristics of the problems in Group 7 — number of facilities, periods, nodes, and arcs in the original and extended formulations; the tabulated values are not legible in this copy.]

[Table 2-9: Results for problem Group 7; the tabulated values are not legible in this copy.]

The results of Table 2-9 show that the quality of the solutions generated and the quality of the lower bounds are very good. In particular, the running times of the primal algorithm are very small.

2.9 Conclusions

In this chapter we discuss the multifacility lot-sizing problem. We propose the following heuristic approaches to solve the problem: a dynamic programming algorithm, a cutting plane algorithm and a primal-dual algorithm. For this problem we also give a different formulation that we refer to as the extended problem formulation. The linear programming relaxation of the extended formulation gives lower bounds that are at least as high as the lower bounds from the linear programming relaxation of the original problem formulation.
We present a set of valid inequalities for the multifacility lot-sizing problem and show that they are facet-defining inequalities. We tested the performance of the heuristics on a wide range of randomly generated problems. The dynamic programming algorithm gave good quality solutions for all problem instances. The error reported (calculated with respect to the optimal solution, or with respect to a lower bound in case the optimal solution is not available) is less than 1.5%. The running time of the dynamic programming algorithm is O(FT⁴); this explains the relatively high running times of this algorithm for the seventh group of problems. The linear programming relaxation of the extended formulation gave high quality lower bounds: the maximum error reported for problem classes 1 to 45 is 0.3% from optimal. It has been shown that the multifacility lot-sizing problem is NP-hard when holding costs are not restricted in sign (Wu and Golbasi [106]); this explains the fact that all algorithms gave their worst results for problem classes 32 and 36 to 45. In particular, the primal-dual algorithm performed poorly on these types of problems. For problems 57 to 63, CPLEX ran out of memory and failed to provide an integer solution; the primal-dual algorithm, however, gave solutions within a 0.377% error gap in less than 14 CPU seconds.

CHAPTER 3
EXTENSIONS OF THE MULTIFACILITY LOT-SIZING PROBLEM

3.1 Multi-Commodity, Multi-Facility Lot-Sizing Problem

In the previous chapter we discussed the single-commodity, multifacility lot-sizing problem. In practice, management of production, inventory and transportation in a plant typically involves coordinating decisions for a number of commodities. In this section we analyze and propose solution approaches for the multi-commodity, multifacility lot-sizing problem.
The multi-commodity, multifacility lot-sizing problem consists of finding a production and transportation schedule for a number of commodities over a finite time horizon that satisfies known demand requirements without allowing backlogging. The schedule is such that the total production, inventory holding, transportation and setup costs are minimized. These costs may vary by product, facility and time period. Production capacities and joint order costs tie the different commodities together and necessitate careful coordination of their production schedules. It is easy to see that, without the presence of joint order costs or production capacities, this problem can be handled by solving each commodity subproblem separately. In this section we consider the multi-commodity problem with only production capacities (no transportation capacities and no joint order costs) and refer to it as the capacitated multi-commodity, multifacility lot-sizing problem.

A large amount of work has been devoted to the capacitated multi-commodity, single-facility lot-sizing problem, since it is the core problem in Aggregate Production Planning models. Solutions to lot-sizing problems are often inputs to Master Production Schedules and subsequently to Materials Requirements Planning (MRP) systems in a "push" type manufacturing environment (see Bhatnagar et al. [13] and Nahmias [85] for a review of these models). It has been shown that, even in the single-commodity case, the capacitated lot-sizing problem is NP-hard [42]. Bitran and Yanasse [15] showed that the capacitated multi-commodity problem is NP-hard, and Chen and Thizy [24] showed that the problem is strongly NP-hard. Heuristic approaches to solve the problem were developed by Lambrecht and VanderVeken [71], Dixon and Silver [31] and Maes and van Wassenhove [77, 78]. The method proposed by Barany et al. [10] solves the single-facility, capacitated problem optimally using a cutting plane algorithm followed by a branch and bound procedure.
These algorithms do not consider setup times. Three groups of researchers pioneered work on the capacitated economic lot-sizing problem with setup times: Manne [81] used a linear programming model; Dzielinski and Gomory [33] used Dantzig-Wolfe decomposition; and Lasdon and Terjung [72] used a generalized upper bounding procedure. Eppen and Martin [36] provided an alternative formulation of the capacitated, multi-commodity lot-sizing problem known as the shortest path formulation. They showed that the linear programming relaxation of the shortest path formulation is very effective in generating lower bounds, and that the bounds are equal to those that could be generated using Lagrangean relaxation or column generation.

So far, the most promising heuristic approaches to solve the capacitated multi-commodity lot-sizing problem seem to be those based on Lagrangean relaxation. Thizy and van Wassenhove [96] and Trigeiro et al. [98] proposed a Lagrangean relaxation of the capacity constraints. They updated the Lagrangean multipliers using a subgradient approach and proposed a heuristic to obtain feasible solutions. Merle et al. [32] used a Lagrangean relaxation approach as well; however, they updated the Lagrangean multipliers using a cutting plane method. Chen and Thizy [24] analyzed and compared the quality of different lower bounds calculated using relaxation methods, such as Lagrangean relaxation with respect to different sets of constraints and linear programming relaxation. Millar and Yang [82] proposed a Lagrangean decomposition procedure to solve the capacitated multi-commodity lot-sizing problem. Their approach decomposes the problem into a transportation problem and K independent single-commodity lot-sizing problems. Thizy [95] analyzed the quality of solutions from Lagrangean decomposition on the original problem formulation and the shortest path formulation using polyhedral arguments.
3.1.1 Problem Formulation

In many practical situations, coordination of production, inventory and transportation decisions involves different commodities. This complicates the problem considerably. In this section we discuss the multi-commodity, multifacility lot-sizing problem. This is a generalization of the classical capacitated, multi-commodity lot-sizing problem: we add to the classical problem a new dimension, the facility selection problem, and in addition we consider transportation costs and their effect on lot-sizing decisions.

Assume that there are K commodities that need to be produced. Each commodity faces a known demand during each of the next T periods. Note that the commodities share a common production resource with item-specific setup costs. The goal is to decide on the production schedule for each commodity such that production, transportation and inventory costs over all facilities are minimized, demand is satisfied, and capacity constraints are not violated. For each commodity k, we define the following input data:

p_itk: production unit cost for commodity k at facility i in period t
s_itk: production setup cost for commodity k at facility i in period t
h_itk: inventory unit cost for commodity k at facility i in period t
c_itk: transportation unit cost for commodity k from facility i in period t
b_tk: demand in period t for commodity k
v_it: production capacity at facility i in period t

We introduce the following decision variables:

q_itk: production quantity for commodity k at facility i in period t
x_itk: amount of commodity k transported from facility i in period t
I_itk: amount of commodity k in inventory at facility i at the end of period t
y_itk: a binary variable that equals 1 if there is a production setup for commodity k at facility i in period t.
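Two derived quantities recur in the formulations of this section: the cumulative demand b_tTk (total demand for commodity k in periods t through T) and, in the extended formulation, the composite cost c_itτk of producing in period t for period τ (production cost, plus holding through period τ−1, plus transportation in period τ). The helper below is our own illustration, with 0-based indices and the end-of-period holding convention as an assumption.

```python
def extended_costs(p, c, h, b):
    """Return cc[i][t][tau][k] = p[i][t][k] + c[i][tau][k]
    + sum of h[i][s][k] for s = t..tau-1 (meaningful for tau >= t),
    and btT[t][k], the total demand for commodity k in periods t..T-1."""
    F = len(p)
    T = len(b)
    K = len(b[0])
    cc = [[[[p[i][t][k] + c[i][tau][k]
             + sum(h[i][s][k] for s in range(t, tau))
             for k in range(K)]
            for tau in range(T)]
           for t in range(T)]
          for i in range(F)]
    btT = [[sum(b[r][k] for r in range(t, T)) for k in range(K)]
           for t in range(T)]
    return cc, btT
```

The `btT` values are exactly the coefficients used in the variable upper bound constraints linking production to setups.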
An MILP formulation of the problem is the following:

minimize Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (p_itk q_itk + s_itk y_itk + c_itk x_itk + h_itk I_itk)

subject to (MC)

I_{i,t−1,k} + q_itk = x_itk + I_itk,  i = 1,...,F; t = 1,...,T; k = 1,...,K  (3.1)

Σ_{i=1}^F x_itk = b_tk,  t = 1,...,T; k = 1,...,K  (3.2)

Σ_{k=1}^K q_itk ≤ v_it,  i = 1,...,F; t = 1,...,T  (3.3)

q_itk ≤ b_tTk y_itk,  i = 1,...,F; t = 1,...,T; k = 1,...,K  (3.4)

q_itk, I_itk, x_itk ≥ 0,  i = 1,...,F; t = 1,...,T; k = 1,...,K  (3.5)

y_itk ∈ {0,1},  i = 1,...,F; t = 1,...,T; k = 1,...,K  (3.6)

Note that b_tTk is the total demand for commodity k from period t to T. Constraints (3.1) and (3.2) are the flow conservation constraints at the production and demand points, respectively. Constraints (3.3) are the production capacity constraints. These constraints reflect the multi-commodity nature of our problem: if they are absent, the problem can be decomposed into K single-commodity problems. We assume that initial and ending inventories are zero for all items. There is no loss of generality in this assumption, since for each commodity we can reset the given level of initial (ending) inventory I_i1k (I_iTk) to zero by removing I_i1k from the demand of the first periods (adding I_iTk to the demand of the last period), thus obtaining adjusted demands for t = 1,...,T and k = 1,...,K.

We propose a Lagrangean decomposition-based heuristic to solve the multi-commodity problem. The decomposition is performed on the extended problem formulation. The extended formulation, as in the single-commodity problem, requires splitting the production variables q_itk by destination into variables q_itτk (τ = t,...,T), where τ denotes the period for which production takes place. Given this, we have

q_itk = Σ_{τ=t}^T q_itτk,   x_itk = Σ_{s=1}^t q_istk,   I_itk = Σ_{s=1}^t Σ_{τ=t}^T q_isτk − Σ_{s=1}^t q_istk,

where q_itτk denotes the production quantity of commodity k at facility i in time period t for period τ. The extended formulation of (MC) is the following:

minimize Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T c_itτk q_itτk + s_itk y_itk)

subject to (ExMC)

Σ_{i=1}^F Σ_{t=1}^τ q_itτk = b_τk,  τ = 1,...
,T; k = 1,...,K  (3.7)

Σ_{k=1}^K Σ_{τ=t}^T q_itτk ≤ v_it,  i = 1,...,F; t = 1,...,T  (3.8)

q_itτk − b_τk y_itk ≤ 0,  i = 1,...,F; t = 1,...,T; t ≤ τ ≤ T; k = 1,...,K  (3.9)

q_itτk ≥ 0,  i = 1,...,F; t = 1,...,T; t ≤ τ ≤ T; k = 1,...,K  (3.10)

y_itk ∈ {0,1},  i = 1,...,F; t = 1,...,T; k = 1,...,K  (3.11)

Here c_itτk consists of the unit production cost of commodity k at facility i in period t, the unit transportation cost from facility i in period τ, and the total unit cost of holding the item from period t to period τ at facility i (c_itτk = p_itk + c_iτk + Σ_{s=t}^{τ−1} h_isk).

3.1.2 Linear Programming Relaxation

The linear programming relaxation (LP) of (MC) is obtained by replacing constraints (3.6) with y_itk ≥ 0 for i = 1,...,F; t = 1,...,T; k = 1,...,K. The solution to this relaxation is generally far from optimal. We want to give a measure of the quality of the linear relaxation, and specifically display a worst case for the error bound. Note that v(LP) denotes the optimal objective function value of (LP) and v(MC) denotes the optimal objective function value of (MC).

Theorem 3.1.1

v(LP) ≥ Σ_{k=1}^K min_{i=1,...,F; t=1,...,T} s_itk,   (3.12)

v(MC) − v(LP) ≤ Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K s_itk − Σ_{k=1}^K min_{i=1,...,F; t=1,...,T} s_itk.   (3.13)

We present a class of problems for which these bounds are attained asymptotically with respect to ε.

Proof: The fixed charge production cost s_itk is assumed to be positive; thus there exists an optimal solution of problem (LP) such that y_itk = q_itk / b_tTk. Since

y_itk = q_itk / b_tTk ≥ q_itk / b_1Tk,   (3.14)

and Σ_{i=1}^F Σ_{t=1}^T q_itk = b_1Tk for each commodity k, the total setup cost of this solution will be at least Σ_{k=1}^K min_{i=1,...,F; t=1,...,T} s_itk. This proves (3.12). In order to prove (3.13), let (q, I, x, y) be an optimal solution to (LP) and (q, I, x, ⌈y⌉) a feasible solution to (MC) obtained by fixing the fractional components of y to 1.
Then, we have the following relationship:

  v(MC) − v(LP) ≤ Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K s_itk (⌈y_itk⌉ − q_itk/b_tTk)
                ≤ Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K s_itk − min_{i=1,...,F; t=1,...,T} Σ_{k=1}^K s_itk    (due to (3.14)).

Now we present a class of problems for which the bounds derived in (3.12) and (3.13) are attained asymptotically with respect to ε. The class of problems we consider has the following properties:

  Demands: b_tk = ε for t = 1,...,T−1 and b_Tk = 1/ε, for k = 1,...,K.
  Costs: s_1tk = 1, p_1tk = h_1tk = 1, c_1tk = 0 for t = 1,...,T; k = 1,...,K; and s_itk > 1, p_itk ≥ p_1tk, h_itk ≥ h_1tk, c_itk ≥ 0 for i = 2,...,F; t = 1,...,T; k = 1,...,K.
  Capacities: v_1t = Kε for t = 1,...,T−1 and v_1T = K/ε.

The optimal solution for this class of problems is such that q_1tk = b_tk, y_1tk = 1, x_1tk = b_tk for t = 1,...,T; k = 1,...,K, and an optimal solution to the corresponding linear programming relaxation is q_1tk = b_tk, y_1tk = b_tk/b_tTk for t = 1,...,T; k = 1,...,K. Thus,

  y_1tk = 1                      if t = T; k = 1,...,K,
  y_1tk = ε/((T−t)ε + 1/ε)       for t = 1,...,T−1; k = 1,...,K.

As ε approaches zero, we have

  y_1tk → 1    if t = T; k = 1,...,K,
  y_1tk → 0    for t = 1,...,T−1; k = 1,...,K.

Therefore,

  lim_{ε→0} v(LP) = Σ_{k=1}^K s_1Tk = min_{i=1,...,F; t=1,...,T} Σ_{k=1}^K s_itk

and

  v(MC) = Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K s_itk.

We have shown that, for a class of multiretailer, multifacility lot-sizing problems, the right-hand sides of (3.12) and (3.13) are attained asymptotically with respect to ε. □

3.1.3 Valid Inequalities

Consider the formulation (MC) of the multicommodity problem. Let Φ_k denote the set of feasible solutions to the kth single-commodity problem and let LP_k denote the set of feasible solutions to the linear programming relaxation of the kth single-commodity problem. We can restate the capacitated multicommodity problem as follows:

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (p_itk q_itk + s_itk y_itk + c_itk x_itk + h_itk I_itk)

subject to

  (q^k, x^k, I^k, y^k) ∈ Φ_k      k = 1,...,K
  Σ_{k=1}^K q_itk ≤ v_it          i = 1,...,F; t = 1,...,T

or, equivalently,

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (p_itk q_itk + s_itk y_itk + c_itk x_itk + h_itk I_itk)

subject to (MC*)

  (q^k, x^k, I^k, y^k) ∈ LP_k     k = 1,...,K                                   (3.15)
  Σ_{k=1}^K q_itk ≤ v_it          i = 1,...,F; t = 1,...,T                      (3.16)
  y_itk ∈ {0,1}                   i = 1,...,F; t = 1,...,T; k = 1,...,K         (3.17)

where LP_k ∩ (y ∈ {0,1}) = Φ_k. Given a fractional point (q^1, x^1, I^1, y^1),..., (q^K, x^K, I^K, y^K), the goal is to find a valid inequality that cuts off this non-integer point from the feasible region of the linear programming relaxation of (MC*). Ignoring constraints (3.16), the problem decomposes into K single-commodity, multifacility lot-sizing problems. For the single-commodity problem, we proposed in Section 2.6.1 a set of valid inequalities. Let Φ be the feasible region of the multicommodity flow problem; then

  co(Φ) ⊆ ∩_{k=1}^K co(Φ_k).

This implies that the valid inequalities for the single-commodity problem are valid for the multicommodity problem as well. Thus, one can use these inequalities to check for each commodity k whether the point (q^k, x^k, I^k, y^k) can be cut off from co(Φ_k).

3.1.4 Lagrangean Decomposition Heuristic

In this section we discuss the Lagrangean decomposition-based heuristic that we used to solve the capacitated multicommodity, multifacility lot-sizing problem. Lagrangean relaxation is a classical method for solving integer programming problems (Geoffrion [49], Wolsey [105]). This method has been used to solve various network flow problems. Held and Karp [56, 57] successfully used Lagrangean relaxation to solve the traveling salesman problem; Fisher [40] used this method to solve a machine scheduling problem; Ross and Soland [92] applied it to the generalized assignment problem. Holmberg and Yuan [62], Holmberg and Hellstrand [61], and Balakrishnan et al. [8] used Lagrangean relaxation-based approaches for network design problems. We start our discussion with a review of the Lagrangean relaxation approach and its extension, Lagrangean decomposition. We then continue with a detailed description of the Lagrangean decomposition-based heuristic we have used to generate upper and lower bounds for the multicommodity, multifacility lot-sizing problem.

Review of the method.
Geoffrion [49] formally defines a relaxation of an optimization problem (P): min{f(x) | x ∈ X}, as a problem (RP) over the same decision variable x: min{g(x) | x ∈ Y}, such that (i) the feasible set of (RP) contains that of (P), that is, X ⊆ Y, and (ii) over the feasible set of (P) the objective function of (RP) dominates (is at least as good as) that of (P), that is, g(x) ≤ f(x) for all x ∈ X. The importance of a relaxation is the fact that it provides bounds on the optimal value of difficult problems. The solution of the relaxation, although usually infeasible for the original problem, can often be used as a starting point for specialized heuristics. Lagrangean relaxation has been shown to be a powerful family of tools for solving integer programming problems approximately. Assume that problem (P) is of the form min{f(x) | Ax ≥ b, Cx ≥ d, x ∈ X}, where X contains only the integrality constraints. The reason for distinguishing between the two types of constraints is that the first of these (Ax ≥ b) is typically complicated, in the sense that problem (P) without this set of constraints would be much easier to solve. Let λ be a nonnegative vector of weights called Lagrangean multipliers. The Lagrangean relaxation LR(x, λ) of (P) is the problem

  min{f(x) + λ(b − Ax) | Cx ≥ d, x ∈ X},

in which the slack values of the complicating constraints Ax ≥ b have been added to the objective function with weights λ, and the constraints Ax ≥ b have been dropped. LR(x, λ) is a relaxation of (P), since (i) the feasible region of LR(x, λ) contains the feasible region of (P), and (ii) for any x feasible for (P), f(x) + λ(b − Ax) is less than or equal to f(x). Thus, for all λ ≥ 0, the optimal objective function value of LR(x, λ), which depends on λ, is a lower bound on the optimal value of (P). The problem

  max_{λ≥0} v(LR(x, λ))

of finding the highest lower bound is called the Lagrangean dual (D) of (P) with respect to the complicating constraints Ax ≥ b.
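To make these definitions concrete, the following toy sketch (all data invented for illustration; it is not an instance from this dissertation) relaxes the single complicating constraint of a two-variable binary program and evaluates the resulting dual function on a grid of multipliers. The relaxed problem separates by coordinate, which is exactly what makes each bound cheap to compute.

```python
# Toy Lagrangean relaxation (invented data):
# (P)  min 4*x1 + 3*x2   s.t.  2*x1 + x2 >= 2,   x in {0,1}^2.
# Relaxing the complicating constraint with multiplier lam >= 0 gives
#   z(lam) = min over {0,1}^2 of  4*x1 + 3*x2 + lam*(2 - 2*x1 - x2),
# which separates: each x_j is set to 1 exactly when its adjusted cost
# (c_j - lam*a_j) is negative.

def z(lam):
    const = 2.0 * lam
    red1 = 4.0 - 2.0 * lam   # adjusted cost of x1
    red2 = 3.0 - 1.0 * lam   # adjusted cost of x2
    return const + min(red1, 0.0) + min(red2, 0.0)

# Approximate the Lagrangean dual max_{lam>=0} z(lam) on a grid.
best = max(z(l / 100.0) for l in range(0, 501))
print(best)
```

Here v(P) = 4 (attained at x = (1, 0)), and the grid maximum of z(λ) also equals 4: the relaxed set {0,1}² has the integrality property, so the dual bound coincides with the LP bound, as case (i) of the discussion below predicts.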
Let v(·) denote the optimal objective function value of problem (·). The following theorem by Geoffrion is an important result:

Theorem 3.1.2 The Lagrangean dual (D) is equivalent to the following primal relaxation (PR):

  min{f(x) | Ax ≥ b, x ∈ Co{x ∈ X | Cx ≥ d}},

in the sense that v(D) = v(PR).

This result is based on linear programming duality and properties of optimal solutions of linear programs. Let (LP) denote the linear programming relaxation of problem (P). Then the following holds: (i) If Co{x | Cx ≥ d, x ∈ X} = {x | Cx ≥ d}, then v(P) ≥ v(PR) = v(D) = v(LP). In this case the Lagrangean relaxation has the integrality property, and the (D) bound is equal to the (LP) bound. (ii) If Co{x | Cx ≥ d, x ∈ X} ⊂ {x | Cx ≥ d}, then v(P) ≥ v(PR) = v(D) ≥ v(LP), and it is possible that the Lagrangean bound is strictly better than the (LP) bound. This implies that, in deciding how to implement the Lagrangean relaxation method, one should consider the following properties of the set Ξ = {x | Cx ≥ d}: (i) Ξ should be simple enough that the resulting optimization subproblems are not computationally intractable (usually Ξ decomposes into simpler subsets, Ξ = Π_j Ξ_j); (ii) Ξ should be complex enough that the subsets Ξ_j do not have the integrality property; otherwise, the lower bounds generated would be equal to the lower bounds from the corresponding linear programming relaxation. The Lagrangean function z(λ) = v(LR(x, λ)) is an implicit function of λ; z(λ), the lower envelope of a family of linear functions of λ, is a concave function of λ, with breakpoints where it is not differentiable. An extension of Lagrangean relaxation is the Lagrangean decomposition method introduced by Guignard and Kim [52]. Unlike Lagrangean relaxation, Lagrangean decomposition does not remove the complicating constraints, but decomposes the problem into two subproblems that collectively share the constraints of the original problem.
This is achieved by introducing a new set of variables z, such that x = z. Then, problem (P) reads

  min{f(x) | Ax ≥ b, Cz ≥ d, x ∈ X, z ∈ X, z = x}.

Relaxing the "copy" constraints z = x yields a decomposable problem, justifying the name "Lagrangean decomposition" (LD(x, z, λ)) (Lagrangean relaxation refers to the case when only one subset of constraints is relaxed), which is given by

  min_{x,z} {f(x) + λ(z − x) | Ax ≥ b, Cz ≥ d, x ∈ X, z ∈ X}
    = min_x {f(x) − λx | Ax ≥ b, x ∈ X} + min_z {λz | Cz ≥ d, z ∈ X}.

Guignard and Kim [52] show that Lagrangean decomposition can in some cases yield bounds substantially better than "traditional" Lagrangean relaxation. The Lagrangean dual problem (D) is the following:

  max_λ {min_{x,z} LD(x, z, λ)}.

The following theorem by Guignard and Kim is an important result:

Theorem 3.1.3 v(D) = min{f(x) | x ∈ Co{x | Ax ≥ b, x ∈ X} ∩ Co{x | Cx ≥ d, x ∈ X}}.

If one of the subproblems has the integrality property, then v(D) is equal to the better Lagrangean relaxation bound. If both have the integrality property, then v(D) = v(LP). Note that finding a lower bound for problem (P) using a Lagrangean relaxation/decomposition algorithm requires optimally solving the inner minimization problem and the outer maximization problem. Since the Lagrangean function z(λ) = v(LD(x, z, λ)) is a piecewise linear concave function, we use a subgradient optimization algorithm to maximize it. In implementing the subgradient algorithm, an important issue is choosing a step size to move along the subgradient direction that guarantees convergence to the λ that maximizes the dual function (D). Significant for the decomposition is that the inner minimization problem is easier to solve than the original problem (P). Recall that Lagrangean relaxation/decomposition is an iterative method; the inner minimization problem may need to be solved several times.
In the case that the inner minimization problem is still difficult and requires a considerable amount of computational effort to solve to optimality, a practice that can be followed is finding good lower bounds instead. Obviously this will affect the quality of the lower bound from the Lagrangean relaxation/decomposition. Letting v(P) be the optimal value of problem (P), we have v(P) ≥ v(D). Let ω(·) denote a lower bound and Ω(·) an upper bound for problem (·). If, for every λ, we do not solve the Lagrangean decomposition problem LD(x, z, λ) optimally, but rather provide a lower bound, the following holds:

  v(P) ≥ v(D) = max_λ v(LD(x, z, λ)) ≥ max_λ ω(LD(x, z, λ)).

Hence max_λ ω(LD(x, z, λ)) will still be a lower bound for problem (P), though not as good a bound as v(D). In case we use a heuristic procedure to find a feasible solution to the inner optimization problem, then v(D) ≤ max_λ Ω(LD(x, z, λ)). However, we are no longer sure that max_λ Ω(LD(x, z, λ)) gives a lower bound for problem (P), since either of the following may happen: v(P) ≥ max_λ Ω(LD(x, z, λ)) or v(P) < max_λ Ω(LD(x, z, λ)). In the case when a lower bounding procedure is used instead of solving the inner minimization problem exactly, it is important to identify the quality of these bounds compared to the lower bounds from the linear programming relaxation. If the lower bound guarantees that

  max_λ ω(LD(x, z, λ)) ≥ v(LP)

and the running time of this procedure outperforms the linear programming relaxation, one is better off using the Lagrangean relaxation/decomposition method to find lower bounds for problem (P). In the next section we describe the Lagrangean decomposition algorithm we used to solve the multicommodity, multifacility lot-sizing problem.

3.1.5 Outline of the Algorithm

Consider the extended problem formulation (ExMC). The basic idea of our decomposition is to separate the capacitated, multicommodity problem into subproblems that are computationally easier to solve than the original problem.
There are many ways one can do that; however, we aim to decompose the problem in such a way that it has interesting managerial implications as well. We decompose the problem into two subproblems. The first subproblem consists of the flow conservation constraints and the integrality constraints. This subproblem can be further decomposed by commodity. The single-commodity subsubproblems have the special structure of the single-commodity, multifacility lot-sizing problem analyzed in Chapter 2. The second subproblem consists of the flow conservation constraints and the capacity constraints. In this decomposition, the first subproblem consists of a collection of MILPs and the second subproblem is a linear program. Below we give an equivalent formulation of the capacitated multicommodity, multifacility lot-sizing problem. We introduce the continuous variables z_itτk that are simply "copies" of the production variables q_itτk. This allows for the duplication of some of the constraints.

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T c_itτk q_itτk + s_itk y_itk)

subject to

  q_itτk = z_itτk                      i = 1,...,F; t = 1,...,T; t ≤ τ ≤ T; k = 1,...,K   (3.18)
  Σ_{i=1}^F Σ_{t=1}^τ z_itτk = d_τk    τ = 1,...,T; k = 1,...,K                            (3.19)
  Σ_{k=1}^K Σ_{τ=t}^T z_itτk ≤ v_it    i = 1,...,F; t = 1,...,T                            (3.20)
  z_itτk ≥ 0                           i = 1,...,F; t = 1,...,T; t ≤ τ ≤ T; k = 1,...,K   (3.21)

and (3.7), (3.9), (3.10), and (3.11). It is clear that the above formulation is equivalent to formulation (ExMC). Relaxing the "copy" constraints (3.18) and moving them to the objective function yields the following Lagrangean decomposition problem:

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T (c_itτk + λ_itτk) q_itτk + s_itk y_itk − Σ_{τ=t}^T λ_itτk z_itτk)

subject to (LD(x, z, λ))

  (3.7), (3.9), (3.10), (3.11), (3.19), (3.20), and (3.21).
The Lagrangean decomposition problem (LD(x, z, λ)) can now be separated into the following two subproblems:

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T (c_itτk + λ_itτk) q_itτk + s_itk y_itk)

subject to (SP1)

  (3.7), (3.9), (3.10), and (3.11),

and

  minimize  −Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K Σ_{τ=t}^T λ_itτk z_itτk

subject to (SP2)

  (3.19), (3.20), and (3.21).

The corresponding Lagrangean dual (LD) problem is the following:

  max_λ v(LD(x, z, λ)).

Note that under the general framework of the Lagrangean decomposition method, constraints (3.7) do not need to be duplicated for both subproblems. However, computational experience indicates that as long as the added constraints do not add computational burden to the subproblems, constraint duplication improves the speed of convergence and yields better lower bounds (Guignard and Kim [52]). The potential computational savings from using the dual algorithm are significant, since the subgradient search requires solving K single-commodity subproblems in each iteration. If a problem instance considers 30 commodities, for example, this results in up to 15,000 calls to the subproblem (the maximum number of iterations we use is 500). In Section 3.1.7 we compare the performance of the Lagrangean decomposition approach when the subproblems are solved to optimality versus the case when the dual algorithm is used. Subproblem (SP1) can be decomposed by commodity. Each subsubproblem k has the following MILP formulation:

  minimize  Σ_{i=1}^F Σ_{t=1}^T (Σ_{τ=t}^T (c_itτ + λ_itτ) q_itτ + s_it y_it)

subject to (SP1k)

  Σ_{i=1}^F Σ_{t=1}^τ q_itτ = d_τ     τ = 1,...,T
  q_itτ − d_τ y_it ≤ 0                i = 1,...,F; t = 1,...,T; τ ≥ t
  q_itτ ≥ 0                           i = 1,...,F; t = 1,...,T; τ ≥ t
  y_it ∈ {0,1}                        i = 1,...,F; t = 1,...,T

One can see that this formulation is the same as the formulation of the multifacility lot-sizing problem we discussed in the previous chapter.
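For prototyping, a subproblem with this lot-sizing structure can be handed to an off-the-shelf MILP solver. The sketch below solves a toy instance with SciPy's `milp`; the dimensions, costs, and demands are invented (F = 2, T = 2, uniform setup cost), and the multipliers λ are assumed to be already folded into the unit costs c, so this is an illustrative sketch rather than the implementation used in the experiments.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy single-commodity instance of (SP1k): F = 2 facilities, T = 2 periods.
F, T = 2, 2
d = [5.0, 5.0]                               # demand per period (placeholder)
s = 10.0                                     # uniform setup cost (placeholder)
# c[i][t][tau]: unit production + transportation + holding cost of producing
# at facility i in period t to cover period tau (entries with tau < t unused)
c = [[[1.0, 1.5], [None, 1.0]],
     [[2.0, 2.5], [None, 2.0]]]

triples = [(i, t, tau) for i in range(F) for t in range(T) for tau in range(t, T)]
nq, ny = len(triples), F * T                 # variable layout: [q_itτ | y_it]
cost = np.array([c[i][t][tau] for (i, t, tau) in triples] + [s] * ny)

rows, lo, hi = [], [], []
for tau in range(T):                         # demand satisfaction (equalities)
    r = np.zeros(nq + ny)
    for j, (_, _, tt) in enumerate(triples):
        if tt == tau:
            r[j] = 1.0
    rows.append(r); lo.append(d[tau]); hi.append(d[tau])
for j, (i, t, tau) in enumerate(triples):    # setup forcing: q_itτ <= d_τ y_it
    r = np.zeros(nq + ny)
    r[j] = 1.0
    r[nq + i * T + t] = -d[tau]
    rows.append(r); lo.append(-np.inf); hi.append(0.0)

res = milp(cost,
           constraints=LinearConstraint(np.array(rows), lo, hi),
           integrality=np.concatenate([np.zeros(nq), np.ones(ny)]),
           bounds=Bounds(np.zeros(nq + ny),
                         np.concatenate([np.full(nq, np.inf), np.ones(ny)])))
print(res.fun)                               # minimal total cost of the toy
```

With these numbers, a single setup at facility 1 in period 1, producing for both periods, is optimal (cost 10 + 5·1 + 5·1.5 = 22.5), which is the kind of production-for-future-periods decision the extended variables q_itτ make explicit.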
Since for most of the test problems discussed in Chapter 2 the dual algorithm gave close-to-optimal lower bounds and its running time was much smaller than the running time of the linear programming relaxation, we use the dual algorithm to find good lower bounds for the subproblems (SP1k) for all k = 1,...,K. However, it should be recognized that the quality of the lower bounds from the decomposition may not always be as good as in the case that we solve the subproblems optimally. On the other hand, subproblem (SP2) is simply a linear program. We use the CPLEX callable libraries to solve this problem. The solution of (SP2) is feasible for the (ExMC) problem. However, since the setup costs are not considered in its formulation, we use a simple procedure to calculate an upper bound (Figure 3-1).

  Let z* be the optimal solution to subproblem (SP2) in iteration s.
  UB = 0; y_itk = 0 for i = 1,...,F; t = 1,...,T; k = 1,...,K
  for i = 1,...,F; t = 1,...,T; k = 1,...,K do
    for τ = t,...,T do
      if z*_itτk > 0 then UB = UB + c_itτk z*_itτk; y_itk = 1
  for i = 1,...,F; t = 1,...,T; k = 1,...,K do
    if y_itk > 0 then UB = UB + s_itk

Figure 3-1: Upper bound procedure

Proposition 3.1.1 The Lagrangean decomposition of (ExMC) gives lower bounds that are at least as high as those of the corresponding linear programming relaxation of the extended formulation.

Proof: Let Φ denote the set of all feasible solutions to the extended formulation of the multicommodity, multiretailer problem (ExMC). Let Φ1 be the set of feasible solutions corresponding to the linear programming relaxation of subproblem (SP1), and Φ2 be the set of feasible solutions to the linear programming relaxation of subproblem (SP2). Guignard and Kim [52] show that "optimizing the Lagrangean decomposition dual is equivalent to optimizing the primal objective function on the intersection of the convex hulls of the constraint sets" (Theorem 3.1.3).
Therefore, optimizing (LD) is the same as optimizing

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T c_itτk q_itτk + s_itk y_itk)

subject to

  (q, y) ∈ Co{(q, y) | (q, y) ∈ Φ1 ∩ (y ∈ {0,1})}
  q ∈ Co{q | q ∈ Φ2}.

The linear programming relaxation of (ExMC) minimizes the same objective function over the intersection of Φ1 and Φ2:

  minimize  Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K (Σ_{τ=t}^T c_itτk q_itτk + s_itk y_itk)

subject to (LPEx)

  (q, y) ∈ Φ1
  q ∈ Φ2.

Since the q are continuous variables, Co{q | q ∈ Φ2} = Φ2; however, Co{(q, y) | (q, y) ∈ Φ1 ∩ (y ∈ {0,1})} ⊆ Φ1. This shows that the feasible region of the dual problem (LD) is contained in the feasible region of the linear programming relaxation (LPEx). Therefore, the objective function value we get solving (LD) will be at least as high as the objective function value of the linear programming relaxation. □

Proposition 3.1.2 The Lagrangean decomposition of (ExMC) gives lower bounds that are equal to the bounds from the Lagrangean relaxation of (ExMC) with respect to the capacity constraints (3.8).

Proof: Let Φ1 be the set of solutions satisfying constraints (3.7), (3.9), (3.10) and (3.11), and Φ2 be the set of solutions satisfying constraints (3.8) and (3.10). Let V* be the intersection of Φ2 with the convex hull of Φ1. Solving the Lagrangean relaxation with respect to the capacity constraints (3.8) is equivalent to minimizing the objective function of (ExMC) over V* (Theorem 3.1.2). On the other hand, solving the Lagrangean decomposition of (ExMC) is equivalent to minimizing the objective function over the intersection of Ψ = Φ2 ∩ (3.7) with the convex hull of Φ1 (Theorem 3.1.3), or in other words, minimizing the objective function of (ExMC) over the intersection of V* with (3.7). Note that all the solutions in V* satisfy constraints (3.7), since these constraints are contained in Φ1. Therefore, solving the Lagrangean decomposition problem is equivalent to minimizing the objective function of (ExMC) over V*.
This concludes the proof that both the Lagrangean relaxation with respect to the capacity constraints and the Lagrangean decomposition scheme we propose give the same lower bounds for (ExMC). □

The lower bounds generated from the Lagrangean decomposition heuristic are the same as the bounds from the Lagrangean relaxation with respect to the capacity constraints; however, we chose to implement the Lagrangean decomposition approach. There are two reasons for this choice: (i) the decomposition scheme provides feasible solutions to problem (ExMC) at every iteration, and (ii) the decomposition converges faster.

Subgradient optimization algorithm. It is well known (Nemhauser and Wolsey [86]) that the Lagrangean dual function is concave and nondifferentiable. To maximize it, and consequently derive the best Lagrangean lower bound, we use a subgradient optimization method. For more details on the subgradient optimization method see Held, Wolfe and Crowder [58] and Crowder [28], and for a survey of nondifferentiable optimization techniques see Lemaréchal [73]. Subgradient optimization is an iterative method in which steps are taken along a subgradient of the Lagrangean function z(λ) = v(LD(x, z, λ)). At each iteration s, we update the Lagrangean multipliers λ_itτk using the following equation:

  λ^{s+1}_itτk = λ^s_itτk + Δ^s (q^s_itτk − z^s_itτk),   where
  Δ^s = γ^s (minUB − maxLB) / Σ_{i=1}^F Σ_{t=1}^T Σ_{k=1}^K Σ_{τ=t}^T (q^s_itτk − z^s_itτk)².   (3.22)

Here γ^s is a number greater than 0 and less than or equal to 2; γ^s is reduced if the lower bound fails to improve after a fixed number of iterations; maxLB is the best lower bound found up to iteration s and minUB is the best upper bound. Calculating the step size Δ^s using equation (3.22) is a common heuristic rule that has been used in the literature. The step size updating rule (3.22) generally allows convergence to the λ* that maximizes the Lagrangean function z(λ) (Nemhauser and Wolsey [86]).
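The multiplier update (3.22) can be sketched in a few lines; in this illustrative snippet, `q_s` and `z_s` stand for the flattened subproblem solutions at iteration s, and the numbers in the example call are made up.

```python
# One subgradient step following rule (3.22):
#   lam' = lam + step * (q - z),
#   step = gamma * (minUB - maxLB) / sum((q - z)^2).

def update_multipliers(lam, q_s, z_s, gamma, min_ub, max_lb):
    g = [qi - zi for qi, zi in zip(q_s, z_s)]     # subgradient component-wise
    denom = sum(gi * gi for gi in g)
    if denom == 0.0:                               # q == z: copies agree
        return lam
    step = gamma * (min_ub - max_lb) / denom       # step-size rule (3.22)
    return [li + step * gi for li, gi in zip(lam, g)]

# Example with invented values: gamma = 1.8, bounds 110 (UB) and 100 (LB).
lam = update_multipliers([0.0, 0.0], [3.0, 1.0], [2.0, 1.0], 1.8, 110.0, 100.0)
print(lam)
```

Because the step is proportional to the remaining gap minUB − maxLB, the updates automatically become more conservative as the bounds close in on each other.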
In order to find a subgradient direction at each step of the Lagrangean decomposition procedure, we need to find a feasible solution to subproblem (SP1).

  Step 1: Initialize the multipliers λ, minUB, maxLB, γ, and count.
  Step 2: Solve the subproblems (SP1) and (SP2). Calculate the lower bound LB^s for the current iteration s; count = count + 1. If LB^s > maxLB, then maxLB = LB^s.
  Step 3: Calculate an upper bound UB^s. If UB^s < minUB, then minUB = UB^s. If minUB = maxLB, then STOP. If γ ≤ ε, then STOP.
  Step 4: Update the multipliers using equation (3.22).
  Step 5: STOP if the number of iterations (count) reaches the prespecified limit; otherwise go to Step 2.

Figure 3-2: Lagrangean decomposition algorithm

The dual algorithm provides only a lower bound to (ExMC), but does not provide a feasible primal solution. Therefore, we use the primal algorithm (Section 2.5) to find a feasible solution to subproblems (SP1k). In our computational experiments, we terminate the algorithm if one of the following happens: (i) the best lower bound is equal to the best upper bound (the optimal solution is found), (ii) the number of iterations reaches a prespecified bound, (iii) the scalar γ is less than or equal to ε (a small number close to zero). Figure 3-2 presents the steps of the Lagrangean decomposition algorithm.

3.1.6 Managerial Interpretation of the Decomposition

In this section we discuss the managerial insights of the decomposition procedure proposed in Section 3.1.5. The choice of the Lagrangean decomposition scheme we present is motivated not only by its computational capability, but also by its interesting managerial implications. Several studies (Burton and Obel [19] and Jornsten and Leisten [64]) have recognized that mathematical decomposition often leads to insights for general modelling strategies and even new decision structures. In this discussion we refer to subproblem (SP1) as the product (commodity) subproblem and (SP2) as the resource subproblem.
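As a minimal end-to-end illustration of such an iterative scheme, the sketch below runs a subgradient-driven Lagrangean decomposition on an invented one-dimensional toy problem; the copy constraint x = z is relaxed with a free-signed multiplier, and γ is halved after five non-improving iterations. All names and data are illustrative only, not the dissertation's algorithm or instances.

```python
# Toy problem (invented):  min x  s.t.  x in X1 = {0, 1, 2, 3}  (integrality)
# and x in X2 = [2.5, 5]  (a "capacity"-style interval).  A copy z of x is
# introduced and x = z is relaxed, splitting the problem into two subproblems.

def solve_sp1(lam):                       # min (1 - lam) * x  over X1
    x = min((0, 1, 2, 3), key=lambda v: (1.0 - lam) * v)
    return (1.0 - lam) * x, x

def solve_sp2(lam):                       # min lam * z  over X2 = [2.5, 5]
    z = 2.5 if lam >= 0.0 else 5.0
    return lam * z, z

lam, gamma = 0.0, 1.8
max_lb, min_ub = float("-inf"), 3.0       # x = 3 is a known feasible point
stall = 0
for s in range(200):
    lb1, x = solve_sp1(lam)
    lb2, z = solve_sp2(lam)
    lb = lb1 + lb2                        # dual value = lower bound on v(P)
    if lb > max_lb + 1e-12:
        max_lb, stall = lb, 0
    else:                                 # halve gamma after 5 bad iterations
        stall += 1
        if stall == 5:
            gamma, stall = gamma / 2.0, 0
    g = z - x                             # supergradient of the dual at lam
    if g == 0.0 or gamma < 1e-9:
        break
    lam += gamma * (min_ub - max_lb) * g / (g * g)
print(max_lb)
```

By a Theorem 3.1.3-style argument, the dual optimum of this toy is min{x : x ∈ Co(X1) ∩ Co(X2)} = min over [2.5, 3] = 2.5, strictly below v(P) = 3, so the run also illustrates a case with a nonzero duality gap.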
The Lagrangean decomposition we propose helps in understanding and solving managerial issues that arise in multifacility manufacturing planning. Suppose we consider the resource subproblem as a decision problem for a production manager who supervises multiple facilities, and each product subproblem as a decision problem for a product manager. The decomposition can then be viewed as a decision system where product managers compete for resource capacity available from the manufacturing facilities. The production manager, on the other hand, represents the interest of efficiently allocating resources from multiple facilities to the products. Often the solutions proposed by the production manager will not agree with the individual solutions of the product managers. A search based on the Lagrangean multipliers essentially penalizes their differences, while adjusting the penalty vector iteratively. This process stops when the degree of disagreement (the duality gap) is acceptably low, or when further improvement is unlikely. See Wu and Golbasi [106] for a similar discussion on a related problem.

3.1.7 Computational Results

In this section we test the performance of the Lagrangean decomposition algorithm on a large group of randomly generated problems. We use the CPLEX callable libraries to solve the MILP formulation (ExMC). The CPLEX runs were stopped whenever a guaranteed error bound of 1% or less was achieved, allowing for a maximum CPU time of 5,000 seconds (or 10,000 seconds, depending on problem size). We use CPLEX to solve the linear programming relaxation of (ExMC) and subproblem (SP2). One of the factors that affects the problem complexity is the tightness of the upper bounds on the production arcs. If the arcs are very tight, there exist only a few feasible solutions, and this makes the search for the optimal solution easy.
On the other hand, if the arc capacities are very loose, we could remove them from the problem formulation; the problem would then lose its multicommodity nature and could be decomposed into K single-commodity problems.

Table 3-1: Problem characteristics (number of nodes and arcs for each of the 25 problem classes)

In order to create challenging test problems, we generated the upper bounds in the following way. A necessary condition for feasibility of the problem is

  Σ_{i=1}^F Σ_{τ=1}^t v_iτ ≥ Σ_{k=1}^K Σ_{τ=1}^t b_τk    ∀t = 1,...,T.   (3.23)

Under the assumption that all facilities in any time period have the same production capacity v, we can replace Σ_{i=1}^F Σ_{τ=1}^t v_iτ by F t v; thus,

  v ≥ (Σ_{k=1}^K Σ_{τ=1}^t b_τk) / (F t)    ∀t = 1,...,T.

In fact, v should be such that the following is satisfied:

  v ≥ max_{t=1,...,T} (Σ_{k=1}^K Σ_{τ=1}^t b_τk) / (F t).

Letting δ (> 1) be the capacity tightness coefficient, we set

  v = δ max_{t=1,...,T} (Σ_{k=1}^K Σ_{τ=1}^t b_τk) / (F t).

Using this procedure we generated challenging (but feasible) test problems with tight capacity constraints. We start our computational experiments by generating a nominal case problem with the following characteristics:

  Production setup costs s_it ~ U[200, 900]
  Production variable costs p_it ~ U[5, 15]
  Holding costs h_it ~ U[5, 15]
  Demand b_t ~ U[5, 15]
  Number of facilities F = 10
  Number of periods T = 5
  Number of commodities K = 30
  Capacity tightness δ = 1.3

We alter the nominal case by changing problem characteristics to generate additional problems. For each problem class we generate 20 instances. First we change the number of commodities from 30 to 40, 50, 60, 70, and 80, generating six problem classes (problem classes 1 to 6; problem class 1 is the nominal case). In the second group of problems we change the capacity tightness coefficient δ to 1.1, 1.2, 1.4, and 1.5 (problem classes 7 to 10).
In the third group of problems we change the number of facilities from 10 (the nominal case) to 11, 12, 13, 14 and 15 (problem classes 11 to 15). We also ran the program for problems with different lengths of the time horizon: in problem classes 16 to 21 we change the number of periods to 10, 15, 20, 25, 30, and 35. Finally, we change the level of the fixed charge cost to s_it ~ U[200, 300], U[600, 900], U[900, 1500] and U[1200, 1500] (problem classes 22 to 25). In implementing the Lagrangean decomposition algorithm, for all problem instances we set γ = 1.8; γ is reduced by 20% if there is no improvement in the last 5 iterations. We set a limit of 500 iterations for the Lagrangean decomposition algorithm. As we have mentioned earlier in this section, we are interested in the effectiveness of the primal-dual algorithm as a heuristic in the context of the subgradient search algorithm. Therefore, we present results from the Lagrangean decomposition algorithm under three schemes:

  Scheme 1: a lower bound to subproblem (SP1k) is found using the dual algorithm;
  Scheme 2: subproblem (SP1k) is solved to optimality using the exact MILP formulation;
  Scheme 3: a lower bound to subproblem (SP1k) is found by solving its linear programming relaxation.

Table 3-2: Quality of the upper bounds (in %) from Lagrangean decomposition (columns: Problem, CPLEX, Scheme 1, Scheme 2, Scheme 3)

The error bounds from the three schemes, as well as from the linear programming relaxation of (ExMC), are presented in Tables 3-2 and 3-3. The quality of the upper bounds we generated using the Lagrangean decomposition algorithm is measured as follows:

  Error (%) = (Upper Bound − CPLEX Lower Bound) / (CPLEX Lower Bound) × 100,

and the quality of the lower bounds is calculated as follows:

  Error (%) = (Lower Bound − CPLEX Upper Bound) / (CPLEX Upper Bound) × 100.

The results indicated that the quality of the upper bounds generated when we solve the subproblems to optimality (Scheme 2) is almost the same as the quality of the upper bounds when we use the dual algorithm (Scheme 1) or the linear programming relaxation (Scheme 3). The difference in the quality of the solutions was no more than 0.2%. A main incentive for using the primal-dual algorithm to solve the single-commodity, multifacility lot-sizing problems (SP1k) is the potential computational savings when solving the multicommodity problem. The results from Section 2.8 showed that the primal-dual algorithm took little time to solve each single-commodity problem; however, we were interested in gauging the real savings for the multicommodity, multiretailer lot-sizing problem. Table 3-4 presents the CPU running times of the three algorithms. The running times of Scheme 1 were much smaller in all cases. These savings are due to the performance of the dual algorithm. The quality of the lower bounds generated using either scheme of the Lagrangean decomposition algorithm or the linear programming relaxation of the extended problem formulation was almost identical.

Table 3-3: Quality of the lower bounds (in %) from Lagrangean decomposition (columns: Problem, LPEx, Scheme 1, Scheme 2, Scheme 3)

The results indicated that as the number of commodities increased, the error reported by CPLEX and by the Lagrangean decomposition decreased (problem classes 1 to 10).
For these problem classes the solution from the Lagrangean decomposition was less than 2% from optimal, and the running times of Scheme 1 were smaller compared to CPLEX.

Table 3-4: CPU running times (in seconds) (columns: Problem, CPLEX, Scheme 1, Scheme 2, Scheme 3, LPEx)

However, the increase in the number of commodities affected the running time of the Lagrangean decomposition algorithm because of the increase in the number of subproblems (SP1k) to be solved at every iteration. Increasing the capacity tightness coefficient δ (problem classes 7 to 10) made the problems easier. It is well known that the uncapacitated lot-sizing problem is easier to solve (the capacitated lot-sizing problem, different from the uncapacitated problem, is NP-hard). The results showed that problem classes 11 to 15 were quite challenging. As the number of facilities increases, we observed a monotonic increase in the duality gap. The results suggest that adding the facility selection dimension to the classical multicommodity lot-sizing problem has quite an effect on problem complexity. The running times of all algorithms for problem classes 16 to 21 were the highest. One of the reasons for this is the size of the network for these problems (Table 3-1). Setup costs appeared to have a significant impact on the duality gap (problem classes 22 to 25). Problem class 22 presented an average error of 1.21%, compared to 6.4% for problem class 25.
This result is not surprising, since increased setup costs widen the gap between (SP2), which ignores the setup costs, and (SP1). Moreover, since our original problem is a MILP with binary setup variables, as the setup costs increase the problem behaves more like a combinatorial problem than a linear programming problem. 3.2 Single-Commodity, Multi-Retailer, Multi-Facility Lot-Sizing Problem The satisfaction of the demand for products of a set of customers involves several complex processes. In the past, this often forced practitioners and researchers to investigate these processes separately. As mentioned by Erengüç et al. [30], the traditional way of managing operations in a competitive marketplace suggests that companies competing on price will sacrifice their flexibility in offering new products or satisfying new demands from their customers. Competition and the evolution of hardware and software capabilities have offered companies the possibility of coordinating decisions between processes in the supply chain. In this section we propose a class of optimization models that integrate production, transportation and inventory decisions in a dynamic supply chain consisting of a number of retailers and facilities. We call this model the single-commodity, multi-retailer, multi-facility lot-sizing problem. This model estimates the total cost of a given logistics distribution network, including production, inventory holding, and transportation costs. The evaluation is performed for a typical planning period in the future. This problem considers a set of plants where a single product type is produced. This is a special case of the supply chain optimization problems we discussed in Chapter 1. Chapter 4 considers the more general problem, where a number of commodities are produced in the plants and the plants face production and transportation capacity constraints.
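To make the integrated production-inventory-distribution accounting concrete before the formal model is stated, the following Python sketch (our own illustration, not code from the dissertation) evaluates the total cost of a candidate production, setup, inventory and shipment plan and checks the flow-balance, demand-satisfaction, and setup-forcing conditions that appear as constraints (3.24)-(3.26) of the formulation in the next subsection:

```python
def mr_cost_and_feasibility(p, s, h, c, b, q, y, inv, x, tol=1e-9):
    """Objective value and feasibility of a candidate solution to (MR).

    All indices are 0-based lists: p[i][t], s[i][t], h[i][t] are production,
    setup and holding unit costs; c[i][j][t] transportation costs; b[j][t]
    demands; (q, y, inv, x) is the candidate plan (production quantities,
    setup indicators, end-of-period inventories, shipments).
    """
    F, T, R = len(p), len(p[0]), len(b)
    feasible = True
    cost = 0.0
    for i in range(F):
        for t in range(T):
            prev_inv = inv[i][t - 1] if t > 0 else 0.0
            outflow = sum(x[i][j][t] for j in range(R))
            # Flow balance: production + carried inventory = shipments + ending inventory
            if abs(q[i][t] + prev_inv - outflow - inv[i][t]) > tol:
                feasible = False
            # Setup forcing: production only if the setup is paid;
            # big-M taken as total demand from period t onward
            big_m = sum(b[j][tau] for j in range(R) for tau in range(t, T))
            if q[i][t] > big_m * y[i][t] + tol:
                feasible = False
            cost += (p[i][t] * q[i][t] + s[i][t] * y[i][t]
                     + h[i][t] * inv[i][t]
                     + sum(c[i][j][t] * x[i][j][t] for j in range(R)))
    for j in range(R):
        for t in range(T):
            # Each retailer's demand must be met exactly in each period
            if abs(sum(x[i][j][t] for i in range(F)) - b[j][t]) > tol:
                feasible = False
    return cost, feasible
```

With one facility, one retailer and two periods, producing 5 units in period 1 (demand 3 now, 2 held for period 2) yields a total cost that includes one setup, the holding charge, and both shipments.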
3.2.1 Problem Formulation In this section we consider a class of multi-facility, multi-retailer production-distribution problems. Let R denote the number of retailers. The demand of retailer j in period t is given by b_jt. The unit transportation cost from facility i to retailer j in period t is c_ijt. In this discussion we assume that the transportation and inventory cost functions are linear, and the production cost function is of the fixed-charge type. The multi-facility, multi-retailer problem can be formulated as follows:

minimize  Σ_{i=1}^F Σ_{t=1}^T ( p_it q_it + s_it y_it + h_it I_it + Σ_{j=1}^R c_ijt x_ijt )

subject to  (MR)

  q_it + I_{i,t-1} = Σ_{j=1}^R x_ijt + I_it       i = 1,...,F; t = 1,...,T                 (3.24)
  Σ_{i=1}^F x_ijt = b_jt                          j = 1,...,R; t = 1,...,T                 (3.25)
  q_it ≤ ( Σ_{j=1}^R Σ_{τ=t}^T b_jτ ) y_it        i = 1,...,F; t = 1,...,T                 (3.26)
  x_ijt, I_it, q_it ≥ 0                           i = 1,...,F; j = 1,...,R; t = 1,...,T    (3.27)
  y_it ∈ {0,1}                                    i = 1,...,F; t = 1,...,T                 (3.28)

The decision variable x_ijt represents the quantity transported from facility i to retailer j in time period t. Constraints (3.24) model the balance between the inflow, storage, and outflow at facility i in period t. Constraints (3.25) make sure that each retailer's demand is satisfied. Constraints (3.26) relate the fixed and variable production costs (production can be initiated only once the setup cost is paid). Figure 1-1 gives a network representation of this problem. The multi-retailer, multi-facility model we propose helps managers answer questions that arise in managing the production and distribution network. Obviously, the most accurate answers are obtained when the problem is solved to optimality. However, this is a difficult task, since the problem we present is NP-hard. This problem can be classified as a network flow problem with fixed-charge cost functions. Several special cases of the single-commodity network flow problem have been shown to be NP-hard: bipartite networks (Johnson et al.
[63]), single-source networks with constant fixed-to-variable cost ratio (Hochbaum and Segev [60]), and the case of zero variable costs (Lozovanu [76]). For the special case of this model with only one retailer, Wu and Golbasi [106] show that the problem is NP-hard when the holding costs are not restricted in sign. We present the extended formulation of this problem as well. We do this by splitting the variables q_it by destination and delivery period into variables q_ijtτ (τ = t,...,T), where τ denotes the period in which the units are delivered and j the retailer for which production takes place. The split of the production variables implies the following:

  q_it = Σ_{j=1}^R Σ_{τ=t}^T q_ijtτ                  (3.29)
  x_ijt = Σ_{s=1}^t q_ijst                           (3.30)
  I_it = Σ_{j=1}^R Σ_{s=1}^t Σ_{τ=t+1}^T q_ijsτ      (3.31)

We can reformulate problem (MR) as follows:

minimize  Σ_{i=1}^F Σ_{t=1}^T [ Σ_{j=1}^R Σ_{τ=t}^T c̄_ijtτ q_ijtτ + s_it y_it ]

subject to  (ExMR)

  Σ_{i=1}^F Σ_{t=1}^τ q_ijtτ = b_jτ      j = 1,...,R; τ = 1,...,T                            (3.32)
  q_ijtτ ≤ b_jτ y_it                     i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T    (3.33)
  q_ijtτ ≥ 0                             i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T    (3.34)
  y_it ∈ {0,1}                           i = 1,...,F; t = 1,...,T                            (3.35)

where c̄_ijtτ = p_it + c_ijτ + Σ_{s=t+1}^τ h_is. The variable unit cost on the arcs of the extended network consists of the production unit cost at facility i in period t, the transportation unit cost from facility i to retailer j in period τ, and the total unit holding cost at facility i from the production period t to the shipping period τ. Proposition 3.2.1 The optimal cost of the linear programming relaxation of the extended formulation of the multi-facility, multi-retailer lot-sizing problem (ExMR) is at least as high as the optimal cost of the linear programming relaxation of the original formulation (MR). Proof: Every feasible solution to the linear programming relaxation of the extended formulation (ExMR) can be transformed to a solution to the linear programming relaxation of formulation (MR) using equations (3.29), (3.30) and (3.31).
It follows that the optimal solution of the linear programming relaxation of (ExMR) can be transformed to a feasible solution (not necessarily the optimal solution) to the linear programming relaxation of (MR). Computational results show, in fact, that the linear programming relaxation of (ExMR) gives solutions that are close to optimal. □ 3.2.2 Primal-Dual Algorithm The dual of the linear programming relaxation of (ExMR) has a special structure. Below we present the linear programming relaxation and the corresponding dual of formulation (ExMR).

minimize  Σ_{i=1}^F Σ_{t=1}^T [ Σ_{j=1}^R Σ_{τ=t}^T c̄_ijtτ q_ijtτ + s_it y_it ]

subject to  (LPMR)

  (3.32), (3.33), (3.34)
  y_it ≥ 0       i = 1,...,F; t = 1,...,T      (3.36)

The dual problem reads

maximize  Σ_{t=1}^T Σ_{j=1}^R b_jt v_jt

subject to  (DMR)

  Σ_{j=1}^R Σ_{τ=t}^T b_jτ w_ijtτ ≤ s_it      i = 1,...,F; t = 1,...,T
  v_jτ - w_ijtτ ≤ c̄_ijtτ                      i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T
  w_ijtτ ≥ 0                                  i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T

In an optimal solution to (DMR), both constraints w_ijtτ ≥ 0 and w_ijtτ ≥ v_jτ - c̄_ijtτ should be satisfied. Since w_ijtτ does not appear in the objective function, we can replace it with w_ijtτ = max(0, v_jτ - c̄_ijtτ). This leads to the following condensed dual formulation:

maximize  Σ_{t=1}^T Σ_{j=1}^R b_jt v_jt

subject to  (DMR*)

  Σ_{τ=t}^T Σ_{j=1}^R b_jτ max(0, v_jτ - c̄_ijtτ) ≤ s_it      i = 1,...,F; t = 1,...,T.

3.2.3 Intuitive Understanding of the Dual Problem In this section we give an intuitive interpretation of the relationship between primal and dual solutions of (ExMR). Suppose the linear programming relaxation of (ExMR) has an optimal solution (q*, y*) that is integral. Let Θ = {(i,t) : y*_it = 1} and let (v*, w*) denote an optimal dual solution. The complementary slackness conditions for this problem are as follows:

  (C1) y*_it [ s_it - Σ_{j=1}^R Σ_{τ=t}^T b_jτ w*_ijtτ ] = 0     for i = 1,...,F; t = 1,...,T
  (C2) q*_ijtτ [ c̄_ijtτ + w*_ijtτ - v*_jτ ] = 0                 for i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T
  (C3) w*_ijtτ [ b_jτ y*_it - q*_ijtτ ] = 0                      for i = 1,...,F; j = 1,...,R; t = 1,...,T; t ≤ τ ≤ T
  (C4) v*_jτ [ Σ_{i=1}^F Σ_{t=1}^τ q*_ijtτ - b_jτ ] = 0          for j = 1,...,R; τ = 1,...,T.
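The coefficient c̄_ijtτ and the condensed dual constraints of (DMR*) can both be checked mechanically. The following Python sketch (our own illustration with 0-based indices, not code from the dissertation; the holding-cost summation limits follow the definition of c̄_ijtτ above) computes the extended-formulation unit cost and verifies dual feasibility:

```python
def extended_unit_cost(p, c, h, i, j, t, tau):
    """cbar_ijt,tau = p[i][t] + c[i][j][tau] + sum of h[i][s] for s = t+1..tau:
    produce at facility i in period t, ship to retailer j in period tau."""
    return p[i][t] + c[i][j][tau] + sum(h[i][s] for s in range(t + 1, tau + 1))

def dmr_star_feasible(v, cbar, s, b, tol=1e-9):
    """Check (DMR*): for every facility i and period t, the demand-weighted sum
    of positive parts max(0, v[j][tau] - cbar[i][j][t][tau]), over retailers j
    and periods tau >= t, must not exceed the setup cost s[i][t]."""
    F, T, R = len(s), len(s[0]), len(b)
    for i in range(F):
        for t in range(T):
            used = sum(b[j][tau] * max(0.0, v[j][tau] - cbar[i][j][t][tau])
                       for j in range(R) for tau in range(t, T))
            if used > s[i][t] + tol:
                return False
    return True
```

Intuitively, `used` is the portion of the setup cost "paid for" by the dual prices; (DMR*) caps it at s_it.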
By conditions (C1), if a facility produces in a particular time period, the setup cost must be fully paid (i.e., if (i,t) ∈ Θ, then s_it = Σ_{j=1}^R Σ_{τ=t}^T b_jτ w*_ijtτ). Consider conditions (C3). If facility i produces in period t, but the demand of retailer j is satisfied from inventory from a previous period or from production (inventory) at a different facility (so that q*_ijtτ = 0 while b_jτ y*_it > 0), then w*_ijtτ = 0. This implies that the price paid for the product contributes to the setup cost of only the period in which the product is produced. By conditions (C2), if q*_ijtτ > 0, then v*_jτ = c̄_ijtτ + w*_ijtτ. Thus, we can think of v*_jτ as the total cost (per unit of demand) in period τ for retailer j. Of this amount, c̄_ijtτ goes to pay for production, transportation and inventory holding costs, and w*_ijtτ is the contribution to the production setup cost.

Figure 3-3: Dual algorithm [pseudocode not recoverable from the scan; the construction of the dual variables is described in Section 3.2.4].

3.2.4 Outline of the Primal-Dual Algorithm Suppose that the optimal values of the first k-1 dual variables of (DMR) are known. Let the index k be such that k = (τ-1)R + l, where τ = 1,...,T and l = 1,...,R. Then, to be feasible, the k-th dual variable v_lτ must satisfy the constraints

  b_lτ max(0, v_lτ - c̄_iltτ) ≤ s_it - Σ_{(j,σ) fixed earlier} b_jσ max(0, v_jσ - c̄_ijtσ)

for all i = 1,...,F and t = 1,...,τ. In order to maximize the dual objective we should assign v_lτ the largest value satisfying these constraints. When b_lτ > 0, this value is

  v_lτ = min_{i=1,...,F; t≤τ} { c̄_iltτ + Δ_ilt / b_lτ }      (3.38)

where Δ_ilt denotes the remaining slack of the setup constraint of facility i in period t after the first k-1 dual variables have been fixed. Note that Δ_ilt > 0 implies v_lτ > c̄_iltτ. The dual solution found using equation (3.38) may not necessarily satisfy the complementary slackness conditions. However, a dual feasible solution can be obtained simply by calculating the value of the dual
variables sequentially (Figure 3-3). A backward construction algorithm can then be used to generate primal feasible solutions (Figure 3-4).

Figure 3-4: Primal algorithm [pseudocode not recoverable from the scan; the algorithm opens a setup (y_it = 1) when the corresponding constraint of (DMR*) is tight and assigns shipments q_ijtτ only where c̄_ijtτ + w_ijtτ - v_jτ = 0].

For the primal-dual pair of solutions to be optimal, the complementary slackness conditions should be satisfied. Proposition 3.2.2 The solutions obtained with the primal and dual algorithms are feasible and they always satisfy the complementary slackness conditions (C1) and (C2). Proof: It is clear that the primal and dual solutions generated are feasible by construction. Since the primal algorithm sets q_ijtτ > 0 only when c̄_ijtτ + w_ijtτ - v_jτ = 0, the solution satisfies conditions (C2). The dual algorithm constructs solutions by making sure that equation (3.38) is satisfied; as a consequence, once some dual variable w_ijtτ becomes zero, the dual variables determined afterwards for the same facility-period pair (i,t) are zero as well. The primal algorithm sets y_it = 1 only when the setup constraint of (DMR*) for (i,t) is tight; this implies that conditions (C1) will always be satisfied. □ Hence, one can determine whether the solution obtained with the primal and dual algorithms is optimal by checking whether conditions (C3) are satisfied, or whether the objective function values from the primal and dual algorithms are equal.
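The dual-ascent step of equation (3.38) can be sketched as follows (our own illustration, not code from the dissertation; `slack[i][t]` stands for the portion of the setup cost s_it not yet consumed by the previously fixed dual variables, and all indices are 0-based):

```python
def next_dual_value(cbar, slack, b, l, tau):
    """Largest feasible value for the dual variable v_{l,tau}, per (3.38):
    the minimum over facilities i and periods t <= tau of
    cbar[i][l][t][tau] + slack[i][t] / b[l][tau]."""
    F = len(slack)
    return min(cbar[i][l][t][tau] + slack[i][t] / b[l][tau]
               for i in range(F) for t in range(tau + 1))
```

After v_{l,tau} is fixed, each slack[i][t] would be reduced by b[l][tau] * max(0, v_{l,tau} - cbar[i][l][t][tau]) before the next dual variable is computed.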
3.2.5 Computational Results In order to test the performance of our algorithm, as in previous sections, we randomly generated a set of test problems and compared the computation times and solution quality to those of the general-purpose solver CPLEX. We generated feasible solutions to our problem using the primal algorithm and lower bounds using the dual algorithm. The nominal case problem has the following characteristics: Production setup costs s_it ~ U[200, 300] Production variable costs p_it ~ U[5, 15] Holding costs h_it ~ U[5, 15] Number of retailers R = 60 The transportation variable costs are generated in the same way as described in Section 2.8. Seasonal demands are randomly generated in the same way as presented in Baker et al. [5] and Chen et al. [22]:

  b_t = 200 + σ z_t + a sin[ (2π/d)(t + d/4) ]

In our test problems we take σ = 67, a = 125 and d = 12. The characteristics of the problem classes are presented in Table 3-5. For each problem class we generate 20 problem instances and report the average error bounds and average running times. The error bound for each problem instance is calculated as follows:

  Error (%) = (Primal Solution - Dual Solution) / (Dual Solution) × 100.

Table 3-6 presents the error bounds from the primal-dual algorithm. We were able to obtain the optimal solutions only for problem classes 1 and 2. For these two problem classes the linear programming relaxation of the extended formulation gave the optimal solution. However, because of the size of the problems (problem classes 3 to 12), CPLEX ran out of memory without providing either an integer feasible solution or a lower bound.

Table 3-5: Problem characteristics (facilities, periods, nodes, arcs) [most table entries are not recoverable from the scan].

Results in Table 3-6 indicate that as the value of the setup cost increased, the problems became more difficult.
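Assuming the garbled demand expression above decodes as b_t = 200 + σ z_t + a sin[(2π/d)(t + d/4)], with z_t a random noise draw weighted by σ, the seasonal demand generator can be sketched as follows (our own reading of the formula, not code from the dissertation):

```python
import math

def seasonal_demand(t, base=200.0, sigma=67.0, a=125.0, d=12.0, z=0.0):
    """Seasonal demand b_t = base + sigma * z + a * sin((2*pi/d) * (t + d/4));
    z is the noise draw for period t (z = 0 gives the deterministic part)."""
    return base + sigma * z + a * math.sin((2.0 * math.pi / d) * (t + d / 4.0))
```

With d = 12 the sinusoid completes one cycle per 12 periods, peaking at t = 0 (where the phase shift d/4 puts the argument at π/2).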
This affected the performance of the primal-dual algorithm. For all problem classes, when setup costs are uniformly distributed on the intervals [200, 300] and [200, 900], the error gap was below 1% and the running times were less than 141 CPU seconds. The maximum error reported is 4.07%; this corresponds to problem class 3 with setup costs uniformly distributed in the interval [1200, 1500]. Results in Tables 3-6 and 3-7 indicate that increasing the number of facilities and the length of the time horizon affected the performance of the primal and dual algorithms.

Table 3-6: Error bounds (in %) of the primal-dual heuristic for different setup-cost distributions [most table entries are not recoverable from the scan].

Table 3-7: Running times (in seconds) of the primal-dual heuristic for different setup-cost distributions [most table entries are not recoverable from the scan].

We next randomly generated a second group of problems for which demand is uniformly distributed in the following intervals: [20, 40], [40, 100], [100, 200], [200, 400] and [400, 1000]. We reran problem classes 4, 5 and 6 for each demand distribution (problem classes 13 to 27). For example, for problem classes 13, 14 and 15, demand is uniformly distributed in [20, 40] and the problem characteristics are the same as those of problem classes 4, 5 and 6. This gives a total of 15 problem classes and 300 problem instances. CPLEX failed to solve problem classes 13 to 27. Table 3-8 presents the error bounds and running times of the primal-dual heuristic for these problems.
The average running time of the primal algorithm was less than 5.22 CPU seconds for all problem classes, and that of the dual algorithm less than 3.23 CPU seconds. Notice that the error decreased as the average value of demand increased. For these problem classes, everything else kept the same, an increase in demand affects the ratio of total variable cost to total fixed cost, and increasing this ratio generally makes the problems easier (Hochbaum and Segev [60]).

Table 3-8: Error bounds (in %) and running times (in seconds) of the primal-dual heuristic for problem classes 13 to 27 [most table entries are not recoverable from the scan].

3.3 Multi-Facility Lot-Sizing Problem with Fixed-Charge Transportation Costs In this section we discuss the uncapacitated multi-facility, multi-retailer problem with fixed-charge transportation costs and linear production and inventory costs. Usually, when a shipment is sent from a facility to a retailer, a fixed charge (for example, the cost of the paperwork necessary to initiate the shipment) is paid in addition to a variable cost for every unit transported. Therefore, modelling the transportation cost function as a fixed-charge cost function makes sense. The following is the MILP formulation of the problem: