SOLUTION APPROACHES TO A MULTI-STAGE, MULTI-MACHINE,

MULTI-PRODUCT PRODUCTION SCHEDULING PROBLEM

BY

CHARLES STAFFORD LOVELAND

A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF

THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF

THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

1978

TO

MOM AND DAD

ACKNOWLEDGMENTS

I wish to acknowledge the considerable contributions to this work

by Dr. Thom J. Hodgson, who acted as the Chairman of my Doctoral Committee.

His encouragement, support, and periodic "pep talks" kept me from wavering

from my course. His guidance and friendship are sincerely appreciated.

The insight and experience of Dr. Hodgson contributed considerably to

the form and content of this dissertation and to my education.

I thank the remainder of my committee, Dr. Richard L. Francis,

Dr. Gary J. Koehler, Dr. Timothy J. Lowe, and Dr. H. Donald Ratliff for

their constructive comments and suggestions leading to the completion of

this work. I am also grateful to Dr. James F. Burns, Dr. Donald W. Hearn,

Dr. Eginhard Muth, and Dr. Michael E. Thomas for their assistance and

support throughout my stay at the University of Florida.

I also wish to thank Miss Adrian Agerton, Miss Cheryl Gray,

Mr. Russell Plumb, and Miss Billie Jo Zirger for their many long hours

of physical and artistic assistance in the many tasks in the preparation

of this work.

Thanks are also extended to the office staff of the ISE Department

for their aid and for putting up with my disruption of the normal office

functions in those final hectic days.

I also acknowledge Mrs. Beth Beville for her excellent typing

of the manuscript.

Finally, I thank and lovingly dedicate this dissertation to my

parents, Warren and Barbara Loveland, for their constant understanding,

encouragement, and faith in me throughout the course of this work and

my other endeavors. No son could ask for more.

This dissertation was supported in part under ONR contract number

N00014-76-C-0096.

TABLE OF CONTENTS

PAGE

ACKNOWLEDGMENTS ............................................ iii

LIST OF FIGURES ............................................ vi

LIST OF TABLES ............................................. ix

ABSTRACT ................................................... x

CHAPTER

1 INTRODUCTION AND LITERATURE REVIEW ....................... 1

1.1 Introduction ......................................... 1

1.2 Literature ........................................... 2

1.3 Overview of Dissertation ............................. 9

2 A FRONTAL BOTTLENECK PROBLEM ............................. 11

2.1 Introduction ......................................... 11

2.2 Problem Definition ................................... 12

2.3 A Mathematical Model ................................. 14

2.4 The Backward Solution Technique ...................... 21

2.5 Feasibility and Optimality Conditions for the
    Backward Solution Technique ......................... 27

2.6 Application of the Backward Solution Technique ....... 50

2.7 The Extension to N Stages in Series .................. 52

2.8 Further Extensions of the Backward Solution
    Technique ........................................... 58

2.9 Conclusion ........................................... 61

3 A POSTERIOR BOTTLENECK PROBLEM ........................... 63

3.1 Introduction ......................................... 63

3.2 Formulation .......................................... 64

3.3 Conclusion ........................................... 74

4 HEURISTICS AND TESTING ................................... 76

4.1 Introduction and Formulation ......................... 76

4.2 The Heuristics ....................................... 77

4.3 A Multiple Interchange Method ........................ 81

4.4 The Enumeration Algorithm ............................ 86

4.5 Testing and Results .................................. 98

4.6 Conclusions .......................................... 111

5 SUMMARY AND SUGGESTIONS FOR FUTURE RESEARCH ............. 113

APPENDIX A COMPUTER PROGRAMS ............................... 116

BIBLIOGRAPHY ............................................... 143

BIOGRAPHICAL SKETCH ........................................ 147


LIST OF FIGURES

FIGURE PAGE

1.1 An example network .................................... 3

2.1 Constraint matrix ..................................... 20

2.2 Dorsey's schedule for Table 2.1A ...................... 26

2.3 Dorsey's schedule for Table 2.2A ...................... 29

2.4 Two-stage schedule of product 2 ....................... 30

2.5A Production rate counterexample -- BST solution ........... 33

2.5B Production rate counterexample -- feasible solution ...... 33

2.6A Supplier job counterexample -- BST solution .............. 35

2.6B Supplier job counterexample -- feasible solution ......... 35

2.7A Machine availability counterexample -- BST solution ...... 36

2.7B Machine availability counterexample -- feasible solution . 36

2.8 Portions of a solution for two adjacent stages ............ 39

2.9A Cost counterexample -- BST solution ...................... 47

2.9B Cost counterexample -- optimal solution .................. 47

2.10 Portions of a solution for two adjacent stages of an

N-stage problem .................... ................ 54

2.11 Stage diagram of a general production system .......... 59

3.1A General product system ............................... 65

3.1B Stage diagram for first product ...................... 65

3.1C Stage diagram for second product ...................... 65

3.2 Two stages in series ......................................

3.3A Single product, 3 stages .................................

3.3B Figure 3.2A reduced to 1 stage ...........................

4.1 Two product problem ............................... 78

4.2A Standard 3-way interchange ............................ 82

4.2B Standard 3-way interchange first step ............... 82

4.2C Standard 3-way interchange second step .............. 85

4.2D Standard 2-way interchange ............................ 85

4.3 Search tree ........................................ 92

4.4 Search tree with permutations eliminated .............. 95

4.5 Level of interchange vs. percent optimal -- 4 machines .... 102

4.6 Level of interchange vs. percent error -- 4 machines ...... 103

4.7 Level of interchange vs. average percent error -- 4
    machines .................................................. 104

4.8 Level of interchange vs. percent optimal -- 2 machines .... 106

4.9 Level of interchange vs. percent error -- 2 machines ...... 107

4.10 Level of interchange vs. average percent error -- 2
     machines ................................................. 108

4.11 Level of interchange vs. execution time .................. 110

LIST OF TABLES

TABLE PAGE

2.1A Single-stage example problem demand table ........... 24

2.1B Relative deadlines for the problem in Table 2.1A ...... 24

2.2A First stage demand for Figure 2.2 ..................... 28

2.2B Relative deadlines from Table 2.2A .................... 28

4.1 Statistics from 4 machine problems ........................ 101

4.2 Statistics from 2 machine problems ........................ 105

Abstract of Dissertation Presented to the Graduate Council

of the University of Florida in Partial Fulfillment of

the Requirements for the Degree of Doctor of Philosophy

SOLUTION APPROACHES TO A MULTI-STAGE,

MULTI-MACHINE, MULTI-PRODUCT PRODUCTION SCHEDULING PROBLEM

By

Charles Stafford Loveland

December 1978

Chairman: Dr. Thom J. Hodgson

Major Department: Industrial and Systems Engineering

Consider an M-product, N-stage (acyclic machine network), finite

horizon scheduling problem. Demand for the products over H scheduling

periods is known, but not necessarily constant. There are N_j identical

machines at each stage j. The objective is to schedule this multi-machine

system so as to minimize the sum of production and inventory costs over

the scheduling horizon.

Two cases of the problem are investigated--the frontal bottleneck

and the posterior bottleneck. Both are caused by the production rate of

each product being a monotonic function of its stage of completion. It

is shown that under certain conditions a single-pass algorithm provides

optimal solutions to the frontal bottleneck case. For the posterior

bottleneck an efficient heuristic is developed. Over 98% of the randomly

generated test problems are solved optimally by the heuristic.

CHAPTER 1

INTRODUCTION AND LITERATURE REVIEW

1.1 Introduction

If one considers the structure of a production system, it is

often the case that whatever is produced is processed through several

stages of production. For instance, a gear blank for an automobile

transmission is first turned and given an initial shape on a screw

machine. The gear teeth are cut using a gear hobber. Then the gear

goes through a series of machining operations which result in a finished

gear, ready for assembly into a transmission. It is also many times

the case that more than one product is produced on any given set of

facilities (stage). In this situation it is necessary to schedule the

products to be produced over the stages in order to satisfy the demand

for the products. It is assumed (with little loss in generality) that

time can be discretized for scheduling purposes into periods (i.e., shifts,

days, weeks). It is also assumed that the demand for each product is

known sufficiently well to be used for planning purposes over some horizon.

It is this scenario that provides the setting for this dissertation.

The multi-stage production scheduling problem studied here is

concerned with scheduling the production of a set of products on a set

of machines which must perform their functions in a given order. Each

of these functions is a stage in the completion of the product and

each stage has a given production rate for each product. A machine

can perform only one function; however, there may be more than one

machine at each stage. An in-process inventory is maintained at each

stage to store the goods which have just completed this stage.

The system of stages can be depicted as a directed, acyclic network.

The nodes are the stages. The arcs represent the direction of flow of

unfinished products through the production system. If such a network

has at most one incoming arc at each node, it is called an arborescence,

as in nodes 1 through j-1 in Figure 1.1. If it has at most one outgoing

arc at each node, it is called an assembly network, as in nodes 2

through j in Figure 1.1.
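The two degree conditions above can be checked mechanically. The following sketch (the function name and example data are hypothetical, not from the dissertation) classifies a stage network, given as a list of arcs, by its in- and out-degrees.

```python
# Illustrative sketch (names and data hypothetical, not from the text):
# classify a directed acyclic stage network by the in-/out-degree
# conditions defined above.
def classify_network(arcs):
    """arcs: list of (tail, head) pairs, one per arc of the network."""
    indeg, outdeg = {}, {}
    for tail, head in arcs:
        outdeg[tail] = outdeg.get(tail, 0) + 1
        indeg[head] = indeg.get(head, 0) + 1
    labels = []
    if all(d <= 1 for d in indeg.values()):
        labels.append("arborescence")       # at most one incoming arc per node
    if all(d <= 1 for d in outdeg.values()):
        labels.append("assembly network")   # at most one outgoing arc per node
    return labels or ["general acyclic network"]

# A serial line satisfies both conditions; two stages feeding a common
# successor form an assembly network; one stage feeding two successors
# forms an arborescence.
```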

The objective is usually to schedule and determine lot sizes for

the production of the different products while minimizing some function

of the production costs, set-up costs, and inventory carrying costs. In

some problems backlogging of demand is considered, which causes a

shortage cost.

There exists terminology in the literature which it is convenient

to adopt here. A model is deterministic when the external demand on

the system is known in advance with certainty. In a stochastic model

these demands are random variables having a probability distribution

which is assumed to be known. A stationary model defines its external

demands with parameters which are assumed to be independent of time. In

a nonstationary model, these parameters may vary over time. Backlogging

of demand allows unsatisfied demand to be satisfied by products completed

at a later date. If no backlogging is allowed, sales from unsatisfied

demand are lost.

1.2 Literature

This literature review covers the more recent papers on the

multi-stage production scheduling problem with emphasis on the

Figure 1.1. An example network.

deterministic models. In the process the problems treated by the papers

are categorized in order to delineate the unique area into which this

dissertation falls. The stochastic problems are basically inventory

control problems and have no constraints on resources or on the number

of machines used in any production processes. The deterministic

problems fall into two main categories -- finite and infinite horizon.

With one exception both categories deal with a problem in which a single

product is produced. Excluding Von Lanzenauer [44] among the finite

horizon problems and Jensen and Khan [25] among the infinite horizon

problems, all the deterministic problems assume an infinite resource

supply or enough machines available at each stage to produce the desired

batch size. The problem investigated in this dissertation is deterministic,

has a finite scheduling horizon, deals with multiple products and con-

strains the number of machines at each stage. Only the Von Lanzenauer

paper deals with all four of these properties.

The latest survey of the literature was done by Clark [6] and

covers publications through 1971.

There are four major techniques which are generally employed to

analyze the stochastic multi-stage problem: expected cost analysis,

stationary process analysis, dynamic process analysis, and network theory.

Among the expected cost models, Berman and Clark [3] treated a

single product in a two-level arborescence stage structure. Hadley and

Whitin, [18] and [19], analyzed a single-level system of parallel stages

whose demands are assumed to have an independent, stationary Poisson

distribution. The same problem was investigated by Gross [17] con-

sidering a single time period. Krishnan and Rao [28] also considered

the problem as formulated by Gross.


A stationary process analysis was used by Love [31] on a two-

stages-in-series problem in which demand has a Poisson distribution.

Rosenman and Hockstra [34] investigated a problem for a repairable item

in a two-level supply/repair system having Poisson distributed demands.

Sherbrooke [36] analyzed a multi-product problem which consisted of

repairable items in a two-level, arborescence system of stages. Simon [38]

refined and extended Sherbrooke's model.

Clark [5] and Clark and Scarf [7] used dynamic programming on the

single-product, stages-in-series problem with stochastic demand. In a

later paper, Clark and Scarf [8] extended and refined the model.

Hochstaedter [20] considered the case of a two-level, arborescence

system of stages as an extension of the model. Fukuda [15], [16], extended

the Clark and Scarf series model with the addition of combined ordering

and disposal policies. Zacks [47], [48], formulated a Bayesian model

for a two-level, parallel stages problem. Williams [45] investigated a

stages-in-series problem for the backlogging case with fixed production

cost.

Bessler and Veinott [4] used Veinott's dynamic process analysis

technique on many forms of the stochastic, multi-stage problem. Ignall

and Veinott [24] removed restrictions on initial inventory in the same

formulation.

Connors and Zangwill [9] made an application of network theory to

the stochastic problem, which, in principle, is an extension of the

network analysis on the deterministic problem.

The greatest progress in the deterministic, nonstationary, finite

horizon, multi-stage problem has come in the last ten years. In a

1966 paper, Zangwill [50] linked together single-stage models into an

acyclic network representation. The model, in effect, is a single

product model. He uses the assumptions of concave production costs,

piecewise concave inventory costs, known demand over a finite horizon,

and backlogging to characterize a dominant set of extreme points. For

the cases in which the stages are in series or in parallel networks,

he develops dynamic programming algorithms to search the dominant set

for optimal production schedules.

In a 1969 paper, Zangwill [49] noted that the single-product,

stages-in-series case from his previous paper can be modeled as a network

with conservation of flow constraints for each stage. He shows that

under the concavity conditions on the costs there is an optimal schedule

which is an extreme point of the network problem. Using the property

that in such an extremal solution any node can have at most one incoming

arc with positive flow, he presents a dynamic programming algorithm to

find the optimal extreme point.

Veinott [43] shows that the single-product problem with no backlogging,

as described by Zangwill, can be formulated as a Leontieff substitution

model. Under the assumption of a concave objective function, the optimal

solution is an extreme point. He demonstrates that when there is instant

shipment of goods from one facility to another, Zangwill's network model

with concave costs can be extended to include an arborescence con-

figuration of stages with the amount of computation depending on the

fourth power of the number of time periods. Under some rather severe

assumptions on the cost functions, Veinott presents a simpler and more

efficient procedure for the arborescence model.

Love [32] uses the network model of Zangwill for the one-product,

concave cost, finite horizon, stages-in-series problem. He adds three

additional conditions: production and storage costs are separable;

production costs are nonincreasing over time; and storage costs are

nondecreasing over the stages. Under these conditions the optimal

schedule has the property that if in a given period stage j is in

production, then stage j+1 is also in production. Love uses the

property to develop a more efficient dynamic programming algorithm.

He also considers the stationary case where costs and demands are constant

over time. It is shown that under some additional conditions a periodic

optimal schedule exists, and an algorithm is given for computing it.

Crowston and Wagner [10] studied a single-product, finite horizon

system which has its stages in an assembly network. Its demand is

deterministic and nonstationary. Production costs are assumed to be

concave, while inventory holding costs are linear. The model is an

extension of the series model of Love into an assembly network, except

that it does not have the concave holding costs of Love. They present

two algorithms. One is a dynamic programming algorithm for the general

assembly network for which solution time increases exponentially with

the number of periods but only linearly with the number of stages.

The other is a branch and bound algorithm and is intended for cases

in which there are a large number of time periods and a nearly serial

network structure.

Kalymon's [27] decomposition algorithm applies to single-product,

arborescence problems which are too large for Zangwill's and Veinott's

methods to solve. His model assumes holding costs are linear and that

production costs, except in the latest stage on each path through the

arborescence, are linear with set-up costs. The costs in the latest

stages are general. The decomposition treats the latest stages as

single-stage problems and implicitly enumerates the production set-up

patterns of the other stages. The number of computations increases

exponentially with the number of stages having followers and linearly

with the number of latest stages.

Von Lanzenauer [44] treats a multi-product problem having one

machine per stage and a finite horizon. The production, holding, and

shortage costs are all linear. There is also a set-up cost for each

product and stage. Not every product, however, uses the machines in

the same sequence. The formulation is a 0-1 program and relies on the

available techniques for solution.

The multi-stage problem having an infinite horizon was examined by

Taha and Skeith [41]. Their paper considers one product, the stages in

series with set-up costs and linear holding costs, noninstantaneous

production, delivery lags between stages, and backorders for the

finished product at the final stage. They assume that the batch size

at stage j is exactly enough units to build the number of units in the

batch of the last stage. The optimal batch size of the last stage is

found by enumeration.

Jensen and Khan [25] present a problem having one product, stages

in series, and noninstantaneous production. The production rate at each

stage is greater than the demand rate. The problem is to develop a

start-up, shut-down schedule which equalizes average production and

demand rates while minimizing set-up and inventory cost. Each stage

operates in a periodic manner with a fixed cycle time over the infinite

horizon. The solution technique is a dynamic program.

Crowston, Wagner, and Williams [11] treat a one-product problem

whose stages have at least one incoming arc and at most one outgoing arc.

The other conditions are lot sizes which are time invariant over the

infinite horizon, shipping delays which are independent of lot size, no

backorders, a fixed charge per lot, and a linear holding cost. They

consider only solutions which have a single lot size for each stage over

time. In this situation they show that the optimal lot size for a given

stage is an integer multiple of the optimal lot size of its immediate

successor stage. They use dynamic programming to find the optimal lot sizes.

Schwarz and Schrage [35] use a branch and bound procedure to solve

the same problem. They also present some heuristic procedures to solve

the problem by optimizing pairs of adjacent stages.

Additional contributions on the problem have been made by Evans [14],

Johnson and Montgomery [26], Ratliff [33], Sobel [39], Szendrovits [40],

Thomas [42], and Young [46].

Zangwill, Veinott, Love, Kalymon, and Crowston and Wagner considered

a multi-stage, multi-machine, single-product, deterministic, finite horizon

scheduling problem. In fact, they have no constraint on the number of

machines in a stage. Only Von Lanzenauer treated a multi-product,

finite horizon problem having a constraint on the number of machines in a

stage. The remainder of the papers which have been considered here deal

with a stochastic problem or a deterministic problem with an infinite

horizon. Neither of these problems falls within the category of problems

studied in the following chapters.

1.3 Overview of Dissertation

The following chapters consider a multi-stage, multi-machine,

multi-product scheduling problem which is deterministic and has a finite

horizon. The methods used, unlike those of Von Lanzenauer, will not be

hindered by a dependence on the state of the art of 0-1 integer programming.

The problem is an extension to multiple stages of the work of Dorsey,

Hodgson, and Ratliff [12] on the multi-facility, multi-product problem.

Like that of Dorsey et al., this problem is nonstationary and restricts

the number of machines at each stage.

Two cases of the problem, distinguishable by conditions on the

production rates, receive attention. In Chapter 2, the case in which

a production bottleneck occurs in the initial stages is examined, and

a single pass solution procedure is presented. Chapter 3 addresses

the case in which the production bottleneck occurs in the final stages

and shows how such a problem can be reformulated as a single-stage

problem having precedences among the jobs. Due to the extreme difficulty

in solving this problem, Chapter 4 presents heuristics for solution.

In addition, it develops an enumeration algorithm which is used to test

the accuracy and efficiency of the heuristics. Chapter 5 draws conclusions

and suggests avenues for future research.

CHAPTER 2

A FRONTAL BOTTLENECK PROBLEM

2.1 Introduction

The multi-stage, multi-machine, multi-product production scheduling

problem studied in this chapter is an extension of the multi-facility,

multi-product problem examined by Dorsey, Hodgson, and Ratliff [12].

Where the problem of Dorsey et al. is concerned with different products

manufactured in a single production operation, this problem considers

different products manufactured in a series of production operations.

Each operation is unique and is called a stage of production. Each

product requires the same sequence of production stages in its creation

as every other product. Each machine is used exclusively in a given

production stage.

The gear manufacturing example in Chapter 1 illustrates these

requirements. The different kinds of gears are the different products.

Each kind starts from a different kind of blank. The turning of the

blank and the cutting of the gear teeth are different operations and can

be considered production stages. Each gear must go through both stages

in the same serial sequence. Clearly, the screw machine and the

gear hobber can be used only in their respective production stages.

The problem discussed in this chapter is called a frontal bottleneck

problem due to an assumption that any stage of production has less

production capacity than any of the succeeding stages. This assumption

will cause more time per unit to be devoted to production in the earlier

of any two stages. The result is that the question of schedule

feasibility becomes a question of whether or not there is enough time

or machine capacity to handle the demands placed upon the earliest

stages of the production system--a frontal bottleneck.

The problem discussed first in this chapter has only two stages in

series. One possible solution method is presented which consists of

solving a series of network flow problems. A greedy algorithm, developed

by Dorsey [12], is presented which solves the network flow subproblems.

It is shown that under certain assumptions the technique finds a

feasible solution. With the addition of an ordering assumption on the

cost coefficients, the technique finds an optimal solution. These

results are then shown to extend to N stages in series. Extensions of

the model and of the solution technique to more general production

systems are briefly discussed. Finally, application of the technique

to situations in which demand is uncertain and demand forecasts are used

is considered.

2.2 Problem Definition

The problem to be considered in this chapter can be described as

an industrial process in which M different products are manufactured.

Each product undergoes the same two stages of production in the same

sequence. Within stage j there are N. parallel identical machines

which perform the operation associated with the stage.

All production runs, called jobs, are performed on a single

product and a single machine and have a duration of one time period.

An (i,j) job is a production run of stage j of product i. Jobs of the

same product and stage may be scheduled consecutively or at the same

time on parallel, identical machines. It is assumed either that setup

is included in the production run and is performed for each job or that

setups are performed between periods.

Nonnegative in-process and finished-product inventories must be

maintained throughout the scheduling horizon H. A newly completed

component or finished product is added to the proper inventory at the

end of its production period with no time loss for transportation.

The necessary raw materials are always available. The components of

product i which are used as input for the production of stage 2 of

product i during period t are drawn from the stage 1 in-process inventory

at the end of period t-l.

Considering the gear manufacturing example, after a batch (job)

of type i gear blanks has been turned, the turned blanks are sent to

in-process inventory to await input to the gear-hobber stage. Each

turned blank becomes one gear in finished inventory after processing

by the gear hobber.

The output of stage 2 is the finished product. Demand for each kind

of finished product is assumed known for each time period through the

scheduling horizon. The demand for period t is satisfied from the

finished-product inventory at the end of period t.

The costs of a schedule are incurred in production costs and

inventory carrying costs. The production cost for a given stage and

product is assumed independent of time. The inventory carrying cost for

a given product is assumed a linearly increasing function of time.

As a unit of a product finishes a stage of production, its inventory

carrying cost per unit increases proportionally with the value added

by the stage of production. The objective is to find the production

schedule which minimizes the production and inventory carrying costs

over the horizon H while satisfying the previously mentioned constraints.

In the next section a mathematical model is formulated for the

problem. The model gives the insight that it may be possible to separate

the problem and solve it as a sequence of network flow problems.

Finally, a greedy algorithm, developed by Dorsey [12], is presented

which solves the network flow problems.

2.3 A Mathematical Model

In order to develop some insight into the solution of the problem

described in the previous section, a mathematical model will be

formulated. However, it is necessary, first, to present some notation:

b_i^1              inventory carrying cost per batch (job) per period of
                   stage 1 of product i

b_i^2              incremental inventory carrying cost (value added) per
                   batch (job) per period of stage 2 of product i

c_i^j              production cost per batch (job) of stage j of product i

d_{i,t}            demand for product i in period t

D                  demand matrix having entries d_{i,t}

H                  number of periods in the scheduling horizon

I_i^j              desired level for the final inventory of stage j of
                   product i

I_i^j(0)           initial inventory level of stage j of product i

M                  number of kinds of finished products

N_j                number of identical machines which perform operation j

N(1,2)             minimum number of jobs in stage 1 needed to supply the
                   input for each job in stage 2

p_i^j              production rate (batch size) for stage j of product i

w_{i,t}^j(X^{j+1}) the number of jobs of stage j of product i which must be
                   performed, due to the demand created by X^{j+1}, by the
                   end of period t

x_{i,t}^j          number of machines which produce stage j of product i
                   during period t

X^j                matrix of decision variables for stage j (has entries
                   x_{i,t}^j)

⌈a⌉                the smallest integer no less than a

⌊a⌋                the largest integer no greater than a

π_i^1              inventory carrying cost per unit per period of stage 1
                   of product i

π_i^2              incremental inventory carrying cost (value added) per
                   unit per period of stage 2 of product i

δ_{t,H}            1, if t=H; 0, otherwise

Since a production scheduling problem is being considered, the

objective is to find a schedule which satisfies the demand for finished

products on time and attains the desired levels for the various final

inventories. All this must be accomplished while maintaining nonnegative

inventory levels, using only the available machines, and minimizing

the production and inventory carrying costs.

The variable to be considered here is x_{i,t}^j, the number of machines

which produce stage j of product i during period t. It can also be

interpreted as the number of (i,j) jobs (batches) performed during

period t. The use of this variable allows the formulation of an

integer program which is related to a network flow problem and, under

some limiting assumptions, lends itself to a straightforward solution

technique.

Consider the following definitions. The production rate for stage

j of product i is p_i^j. The initial inventory and desired level of

final inventory for stage j of product i are I_i^j(0) and I_i^j,

respectively. The demand for product i during period t is d_{i,t}.

Finally, the Kronecker delta, δ_{t,H}, is 1 if t=H, and 0 otherwise.

Since the finished products are the output of stage 2, the production

in stage 2, aided by the initial inventory of the finished product,

must satisfy the demand over the scheduling horizon and meet the desired

level of final inventory of finished products. This is expressed in

(2.1) for i=1, ..., M and t=1, ..., H.

$$p_i^2 \sum_{k=1}^{t} x_{i,k}^2 \;\ge\; -I_i^2(0) + \sum_{k=1}^{t} d_{i,k} + \delta_{t,H}\, I_i^2 \qquad (2.1)$$

Since the output of stage 1 supplies the input of stage 2, the

production in stage 1, aided by the initial inventory of stage 1 compo-

nents, must supply the input needs for stage 2 production and meet

the desired level of final inventory of stage 1 components. If this

is accomplished in every period, the in-process inventory level remains

nonnegative. This stage 1 production constraint is expressed in (2.2)

for i=1, ..., M and t=1, ..., H. (Note: x_{i,H+1}^2 = 0.)

p_i^1 \sum_{k=1}^{t} x_{i,k}^1 \ge -I_i^1(0) + p_i^2 \sum_{k=1}^{t+1} x_{i,k}^2 + \delta_{t,H} \hat{I}_i^1    (2.2)

Constraints (2.1) and (2.2) can be simplified and made more

intuitive, if they are expressed in terms of the decision variables alone.

Solving the constraints for \sum_{k=1}^{t} x_{i,k}^2 and \sum_{k=1}^{t} x_{i,k}^1, respectively, leaves

a right-hand side which is a lower bound on the number of (i,j) jobs

which must be completed by the end of period t. (The production in

stage 2 causes a demand on stage 1.) Let D and X^2 be the matrices which

have the entries d_{i,t} and x_{i,t}^2, respectively. Also, let \lceil a \rceil be the

smallest integer no less than a. Then, the minimum number of (i,j)

jobs which must be completed by the end of period t due to the demand in

the argument is expressed by w_{i,t}^j(\cdot), defined by

w_{i,t}^2(D) = \max\{0, \lceil (-I_i^2(0) + \sum_{k=1}^{t} d_{i,k} + \delta_{t,H} \hat{I}_i^2) / p_i^2 \rceil\}

and

w_{i,t}^1(X^2) = \max\{0, \lceil (-I_i^1(0) + p_i^2 \sum_{k=1}^{t+1} x_{i,k}^2 + \delta_{t,H} \hat{I}_i^1) / p_i^1 \rceil\}.

Using the w functions, constraints (2.1) and (2.2) will become

(2.9) and (2.8), respectively, in the complete problem formulation

(P2.1), below.
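For a single product i, these lower bounds can be computed directly from their definitions. The sketch below is illustrative only (the function names are invented); demand and x2 are Python lists in which position k-1 holds period k, so slicing past the end of x2 realizes the convention x_{i,H+1}^2 = 0:

```python
import math

def w_stage2(t, demand, init_inv2, final_inv2, rate2, H):
    # w^2_{i,t}(D): minimum number of (i,2) jobs that must be finished by
    # the end of period t to cover demand (and, at t = H, the desired
    # final inventory).
    need = -init_inv2 + sum(demand[:t]) + (final_inv2 if t == H else 0)
    return max(0, math.ceil(need / rate2))

def w_stage1(t, x2, init_inv1, final_inv1, rate1, rate2, H):
    # w^1_{i,t}(X^2): minimum number of (i,1) jobs finished by the end of
    # period t, given the stage 2 schedule x2 (jobs per period); the slice
    # through t+1 reflects that stage 1 supplies next-period stage 2 input.
    need = -init_inv1 + rate2 * sum(x2[:t + 1]) + (final_inv1 if t == H else 0)
    return max(0, math.ceil(need / rate1))
```

With the demand data of the example in Section 2.4 (demand (0,0,0,3,2,1,2), p^2 = 2, no inventories), w_stage2 gives 2 jobs by period 4 and 4 jobs by period 7.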

A straightforward stage 2 constraint requires that production in

stage 2 during period 1 does not exceed the input available from the

stage 1 initial inventory for all products i=1, ..., M.

p_i^2 x_{i,1}^2 \le I_i^1(0)    (2.3)

Solving for x_{i,1}^2 and letting \lfloor a \rfloor be the largest integer no greater than

a, (2.3) can be transformed (since x_{i,1}^2 is integer) into a tighter

constraint ((2.10) in the complete problem formulation (P2.1), below).

Let N_j be the number of machines available in stage j. Constraint

(2.4) ensures for j=1, 2 and t=1, ..., H that no more than N_j machines

are used.

\sum_{i=1}^{M} x_{i,t}^j \le N_j    (2.4)

Let c_i^j be the production cost per (i,j) job. Also, let \bar{I}_i^j be

the inventory carrying cost (incremental for stage 2) per (i,j)

component per period. The objective function is to minimize total

cost and can be expressed as in (2.5).

\min \sum_{i=1}^{M} \sum_{j=1}^{2} \left( c_i^j \sum_{t=1}^{H} x_{i,t}^j + \bar{I}_i^j p_i^j \sum_{t=1}^{H} (H-t+1) x_{i,t}^j \right)    (2.5)

Since total cost is being minimized, the minimum necessary number

of jobs will be performed. Thus, in an optimal solution,

\sum_{t=1}^{H} x_{i,t}^2 = w_{i,H}^2(D) and

\sum_{t=1}^{H} x_{i,t}^1 = w_{i,H}^1(X^2), both constants.

Consequently, with a little manipulation, (2.5) can be restated

as

\min \sum_{i=1}^{M} \sum_{j=1}^{2} \sum_{t=1}^{H} -t \bar{I}_i^j p_i^j x_{i,t}^j + constant    (2.6)

The production costs have dropped out of the optimization, which leaves

only the inventory costs. Let the notation be consolidated by defining

b_i^j = \bar{I}_i^j p_i^j.

The objective function (2.7) in P2.1 is derived from (2.6) and

has the effect of minimizing inventory cost. Note that the objective

tends to schedule all the jobs as late as possible. This objective

function makes sense since the later a job is scheduled, the less time

its output will be held in inventory.

The mathematical model can be expressed as the following integer

program:

\max \sum_{i=1}^{M} \sum_{j=1}^{2} \sum_{t=1}^{H} t b_i^j x_{i,t}^j ;    (2.7)

s.t.

\sum_{i=1}^{M} x_{i,t}^j \le N_j, all j and t;    (2.4)

P2.1    \sum_{k=1}^{t} x_{i,k}^1 \ge w_{i,t}^1(X^2), all i and t;    (2.8)

\sum_{k=1}^{t} x_{i,k}^2 \ge w_{i,t}^2(D), all i and t;    (2.9)

x_{i,1}^2 \le \lfloor I_i^1(0) / p_i^2 \rfloor, all i;    (2.10)

x_{i,t}^j \ge 0 and integer.    (2.11)

For j=2 (thus, considering only the second production stage),

(2.7), (2.4), (2.9), (2.10), and (2.11) define the problem (with a

unimodular constraint matrix) addressed by Dorsey, Hodgson, and

Ratliff [12]. Assuming stage 1 can supply the input required by

stage 2, the stage 2 problem can be solved using a network flow algorithm.

The decision variables from the stage 2 solution define X2. Considering

only j=1 and fixing X2, the remainder of P2.1 ((2.7), (2.4), (2.8),

and (2.11)) defines the stage 1 problem. The stage 1 problem also has

a unimodular constraint matrix and can be solved by a network flow

algorithm.

An example is presented in Figure 2.1. The figure is the

constraint matrix of P2.1 for two products and three periods, assuming

          x^1_{1,1} x^1_{1,2} x^1_{1,3} x^1_{2,1} x^1_{2,2} x^1_{2,3} x^2_{1,1} x^2_{1,2} x^2_{1,3} x^2_{2,1} x^2_{2,2} x^2_{2,3}

1 0 0 1 0 0 0 0 0 0 0 0
0 1 0 0 1 0 0 0 0 0 0 0
0 0 1 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 1 0 0
0 0 0 0 0 0 0 1 0 0 1 0
0 0 0 0 0 0 0 0 1 0 0 1
1 0 0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0 0 0
1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 1 1 0 0 0 0 0 0 0
0 0 0 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 1 1 0
0 0 0 0 0 0 0 0 0 1 1 1
0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0

Figure 2.1. Constraint matrix.

the righthand side of (2.8) is fixed. The first six rows are (2.4), the

second and third sets of six rows are (2.8) and (2.9), respectively.

The final two rows are (2.10). Linear transformations on rows seven

through eighteen can be used to create a network flow constraint

matrix. The second stage can be solved to determine the righthand

side of (2.8). Then stage 1 can be solved.

Clearly, then, as a heuristic approach, it appears that problem

P2.1 can be separated into two subproblems which, if solved in the proper

order, may yield a solution to the full problem. In the next section

an ordering procedure and a greedy algorithm (Dorsey [12]) for solving

the individual stage subproblems are presented.

2.4 The Backward Solution Technique

In the last section it was shown that (as a heuristic) problem

P2.1 can be separated into two parts, each of which can be solved as

a network flow problem (if the part corresponding to stage 2 is

solved first). The problem of stage 2 is the multi-facility, multi-

product production scheduling problem solved by Dorsey, Hodgson, and

Ratliff [12]. After solution of stage 2, stage 1 has the same form.

Dorsey developed a greedy algorithm which is much more efficient than

standard network flow techniques.

The following procedure, called the Backward Solution Technique

or BST, is applied to two-stage production problems.

Step 1. Set j=2 and use the demand for the stage j problem.

Step 2. Solve the network flow problem presented by stage j,

using Dorsey's algorithm [12]. Use the result to define

X^j, the schedule for stage j.

Step 3. If j > 1, set j=j-1, use X^{j+1} as the demand for the stage

j problem, and go to Step 2. Otherwise, stop.
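The three steps can be sketched as a simple backward loop. In this sketch, solve_stage and schedule_to_demand are hypothetical callables supplied by the caller: the first stands in for Dorsey's algorithm [12], the second for the conversion of a stage's schedule into demand on its predecessor.

```python
def backward_solution_technique(num_stages, final_demand,
                                solve_stage, schedule_to_demand):
    # Step 1: start at the last stage with the finished-product demand.
    schedules = {}
    demand = final_demand
    for j in range(num_stages, 0, -1):
        # Step 2: solve stage j (Dorsey's algorithm in the text) to get X^j.
        schedules[j] = solve_stage(j, demand)
        # Step 3: X^{j+1} becomes the demand for the next earlier stage.
        if j > 1:
            demand = schedule_to_demand(j, schedules[j])
    return schedules
```

For a two-stage problem the loop solves stage 2 first and then stage 1, exactly the order Steps 1 through 3 prescribe.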

An application of the BST to the gear manufacturing problem would

cause the teeth-cutting stage to be optimally scheduled first. This

schedule would act as a demand timetable for turned gear blanks to

be used as input for the gear hobbers. The gear-blank-turning stage

is then solved optimally using the demand from the gear hobbers.

It remains to establish conditions for which the BST gives an

optimal, or even a feasible, solution to the scheduling problem P2.1.

First, however, Dorsey's algorithm and its use in the BST need to be

explained. Dorsey's algorithm is presented primarily because much of

the later development in this chapter depends on its structure. (Note

that the following discussion considers a single-stage problem.)

In order to present Dorsey's algorithm, the concept of a relative

deadline for scheduling a job must be explained first. The relative

deadline of a job is that period in the scheduling horizon in which the

first unit of the output of the job is used to satisfy demand. This

period can be determined before scheduling takes place. Thus, a job may

not be scheduled later than its relative deadline.

In order to determine relative deadlines, it is necessary to be

able to distinguish between two jobs of the same product. In some

(not necessarily optimal) schedule for a set of identical jobs, number

the jobs according to their relative positions in the schedule. If

two jobs are in the same period, then the job on the higher-numbered

machine has the higher number. Thus, the (n+1)st job of a product is

"later" than the nth job. Without loss of generality the nth job of the

product in this schedule is the nth job of the product in any optimal

schedule. Also, a first-in-first-out inventory system is assumed for

ease of notation and without loss of generality.

Using numbered jobs and a FIFO inventory system, initial inventory

satisfies the early demand. After that the first unit of demand is

satisfied by the first unit of output from the first job. The period

in which this happens is the relative deadline of the first job, etc.

In making these calculations the desired level of the final inventory

is considered part of the demand in the last period.
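For one product on a single stage, the relative-deadline calculation can be sketched as follows (an illustrative reading of the procedure, with an invented function name); demand[t-1] holds the demand of period t, and the desired final inventory is folded into the last period:

```python
def relative_deadlines(demand, rate, init_inv, final_inv=0):
    # deadlines[t-1] = number of jobs of this product whose relative
    # deadline is period t.  Initial inventory satisfies the earliest
    # demand first (FIFO), as in the text.
    H = len(demand)
    deadlines = [0] * H
    inventory = init_inv
    for t in range(H):
        need = demand[t] + (final_inv if t == H - 1 else 0)
        short = need - inventory          # units the next jobs must supply
        if short > 0:
            jobs = -(-short // rate)      # ceiling division
            deadlines[t] = jobs
            inventory += jobs * rate - need
        else:
            inventory -= need
    return deadlines
```

Applied to the demand rows of the example below (Table 2.1A, rates 2 and 3, product 2 starting with four units), the sketch reproduces the relative-deadline counts of Table 2.1B.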

In his scheduling procedure Dorsey [12] considers the periods one

at a time, starting with the last period. Within the period, he starts

by scheduling the jobs of the highest-numbered product. He schedules

as many as possible of these jobs which have not already been scheduled

and which have relative deadlines no earlier than the period under

consideration. Having scheduled one product, he starts with the next

lower-numbered product. Having completely scheduled a period, he

considers the next earlier period.

Dorsey [12] has shown that the method finds an optimal schedule,

if the products are numbered according to their nondecreasing costs.
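A minimal sketch of this greedy pass, written from the verbal description above rather than from Dorsey's own implementation (dorsey_schedule is an invented name); deadlines[i][t-1] is the number of jobs of product i+1 with relative deadline t, with products already numbered by nondecreasing cost:

```python
def dorsey_schedule(deadlines, machines):
    # Work backward from the last period; within a period, fill machines
    # with the highest-numbered product first, taking any unscheduled job
    # whose relative deadline is no earlier than the current period.
    H = len(deadlines[0])
    pending = [0] * len(deadlines)          # jobs released but not yet placed
    schedule = [[] for _ in range(H)]       # schedule[t-1]: products run in t
    for t in range(H - 1, -1, -1):
        for i in range(len(deadlines)):
            pending[i] += deadlines[i][t]   # jobs whose deadline is period t+1
        for i in reversed(range(len(deadlines))):
            while pending[i] and len(schedule[t]) < machines:
                schedule[t].append(i + 1)
                pending[i] -= 1
    return None if any(pending) else schedule   # None: a job fell before period 1
```

On the relative deadlines of Table 2.1B with two machines, the sketch yields the schedule shown in the Gantt chart of Figure 2.2.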

Consider Table 2.1A for a two-machine example of the use of

Dorsey's method. The entries of the table are the demand (in units)

for the two products in the example. Products 1 and 2 have production

rates of two and three units per period, respectively. Product 2 has an

initial inventory of four units. Since period 4 has a demand for three

units of product 1, it will take two jobs to produce them. So the

first two product 1 jobs have their relative deadline at period 4.

This leaves one unit in product 1 inventory going into period 5.

TABLE 2.1A. Single-stage example problem demand table.

TIME        1   2   3   4   5   6   7

PRODUCT 1   0   0   0   3   2   1   2
PRODUCT 2   0   0   0   8   4   4   3

TABLE 2.1B. Relative deadlines for the problem in Table 2.1A.

TIME        1   2   3   4   5   6   7

PRODUCT 1   0   0   0   2   1   0   1
PRODUCT 2   0   0   0   2   1   1   1

However, that is not enough to cover the demand in period 5--another

product 1 job is needed and has its relative deadline in period 5.

Because of its initial inventory only two jobs are needed to satisfy

demand for product 2 in period 4. Table 2.1B has as its entries the

number of jobs of each product which have their relative deadlines in

the indicated periods. Table 2.1B shows that periods 5 through 7

have enough machines so that their jobs can be scheduled at their

deadlines. However, period 4 has deadlines for four jobs but room

for only two. The scheduling method places the higher-numbered products

in period 4 and considers the others in period 3. Figure 2.2 is a

Gantt chart, the entries of which are the product numbers of the jobs

scheduled in the indicated periods.

Consider again a two-stage problem. In order to use Dorsey's

method in a stage other than the last stage of production, it is

necessary to be able to use the concept of the relative deadline in

an earlier stage. Demand is taken from inventory at the end of the

period in which it appears; however, input to a later stage is taken

from inventory at the end of the period immediately before it is

needed. As a convention to rectify this problem let the input

requirement for stage 2 at time t be regarded as demand on stage 1 at

time t-1. The relative deadline for jobs in stage 1 depends on the

demand placed on stage 1 by the schedule calculated for stage 2.

Dorsey's method can now be applied to stage 1 in a straightforward

manner.
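The t-1 convention can be sketched as a direct translation of a stage 2 schedule into stage 1 demand (illustrative code with invented names; stage2_schedule[t-1] lists the products whose jobs run in period t, and rates2[i-1] is p_i^2):

```python
def stage1_demand(stage2_schedule, rates2, num_products, H, final_inv1=None):
    # Input needed by a stage 2 job in period t is demanded from stage 1
    # in period t-1; a period 1 job draws on initial stage 1 inventory
    # (constraint (2.10)) and places no demand here.  Desired final
    # stage 1 inventories, if any, are added as demand in period H.
    demand = [[0] * H for _ in range(num_products)]
    for t, jobs in enumerate(stage2_schedule, start=1):
        for i in jobs:
            if t > 1:
                demand[i - 1][t - 2] += rates2[i - 1]
    if final_inv1:
        for i in range(num_products):
            demand[i][H - 1] += final_inv1[i]
    return demand
```

Fed the stage 2 schedule of Figure 2.2 (rates 2 and 3, with a desired final stage 1 inventory of one unit of product 2), this reproduces the demands of Table 2.2A.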

Let us use the example in Tables 2.1A and 2.1B and in Figure 2.2

as the second stage of a two-stage problem. Thus, p_1^2 = 2, p_2^2 = 3,

and I_2^2(0) = 4.

TIME        1   2   3   4   5   6   7

MACHINE 1           1   2   1       1
MACHINE 2           1   2   2   2   2

FIGURE 2.2. Dorsey's schedule for the problem in Table 2.1A.

Let stage 1 have two machines, p_1^1 = p_2^1 = 2, and a desired final

stage 1 inventory \hat{I}_2^1 = 1. The

stage 1 "demand" derived from the stage 2 schedule in Figure 2.2 is

in Table 2.2A. The demand translates into the stage 1 relative

deadlines in Table 2.2B. From this, Dorsey's method derives the stage 1

schedule in the Gantt chart of Figure 2.3.

The stage 2 and stage 1 schedules in Figures 2.2 and 2.3,

respectively, represent the BST solution to the two-stage problem which

uses the demands in Table 2.1A.

The Backward Solution Technique and the method used to solve the

individual stages within the BST were presented in this section. The

next section presents the conditions under which the BST finds

feasible and optimal solutions to the two-stage problem.

2.5 Feasibility and Optimality Conditions for the Backward Solution

Technique

The purpose of this section is to develop conditions which enable

the BST to find feasible and (under additional conditions) optimal

solutions to problem P2.1. Three assumptions will be presented. It

will be proven that under the three assumptions, the BST will find a

feasible solution, if one exists. A fourth assumption will be made.

It will be shown that under the four assumptions the BST will find an

optimal solution, if one exists.

Consider the Gantt chart in Figure 2.4. The chart is the combination

of Figures 2.2 and 2.3, but depicts only the schedule of the product 2

jobs in the two-stage problem. (The first row for each stage is

machine 1.) Since p_2^1 = 2 and p_2^2 = 3, the jobs in periods 2 and 3 of

stage 1 furnish the input for the two jobs in period 4 of stage 2.

TABLE 2.2A. First stage demand caused by the stage 2 schedule in Figure 2.2.

TIME        1   2   3   4   5   6   7

PRODUCT 1   0   4   0   2   0   2   0
PRODUCT 2   0   0   6   3   3   3   1

TABLE 2.2B. Relative deadlines for the stage 1 problem in Table 2.2A.

TIME        1   2   3   4   5   6   7

PRODUCT 1   0   2   0   1   0   1   0
PRODUCT 2   0   0   3   2   1   2   0

TIME        1   2   3   4   5   6   7

MACHINE 1   1   1   2   2   1   2
MACHINE 2   1   2   2   2   2   2

FIGURE 2.3. Dorsey's schedule for the stage 1 problem in Table 2.2A.

TIME        1   2   3   4   5   6   7

STAGE 1             2   2       2
                2   2   2   2   2

STAGE 2                 2
                        2   2   2   2

FIGURE 2.4. Two-stage schedule of product 2.

All of the output of the first and third jobs of stage 1 (machine 2

of periods 2 and 3, respectively) is used as input strictly to the

first and second jobs, respectively, of stage 2. The second job of

stage 1 supplies its first unit of output to the first job of stage 2

and its second unit to the second job of stage 2. Therefore, the

relative deadlines of the first and second jobs of stage 1 are

determined by (in fact, are one period earlier than) the period in

which the first job of stage 2 is scheduled. If the first job in

stage 2 moved into period 3, the second job of stage 1, in order to

supply the job, would be forced to move into period 2. If that stage 2

job moved to period 2, it would force the first two stage 1 jobs into

period 1.

The first two stage 1 jobs in Figure 2.4 are supplier jobs of the

first stage 2 job. A job of stage 1 is a supplier job of a stage 2

job if the first unit of output from the stage 1 job is used as input

to the stage 2 job. Also, a job of stage 2 is a consumer job of a

stage 1 job if the stage 2 job uses as input the first unit of output

from the stage 1 job. Consequently, the first stage 2 job in Figure 2.4

is the consumer job of the first two stage 1 jobs, and the second stage

2 job is the consumer job of the third stage 1 job.

The first assumption to be used in the proof that the BST finds

feasible solutions concerns the relative production rates (batch sizes)

of the same product between the two stages.

Assumption 2.1: (production rate) The production rates satisfy p_i^1 \le p_i^2 for

all i.

The consequence of the production rate assumption is that each

(i,2) job requires an input at least as great as the output of one

(i,1) job. In fact, each (i,2) job requires input equivalent to the

output of p_i^2 / p_i^1 (i,1) jobs.

As an example of the possible consequences of violation of

Assumption 2.1, consider the Gantt chart in Figure 2.5A. In this

problem each stage has one machine, 2p_1^1 = p_1^2, p_2^1 = 2p_2^2, and all

stage 2 jobs have their relative deadlines at period 4. There is no

initial or desired final inventory. Scheduling a job in period 0 is

infeasible. The BST solution to the problem is in Figure 2.5A. It

is infeasible; however, the feasible solution is in Figure 2.5B.

Assumption 2.2: (supplier job) Each stage 2 job has at least N(1,2)

stage 1 supplier jobs, where N(1,2) = min_i \lfloor p_i^2 / p_i^1 \rfloor.

When Assumption 2.1 is considered, N(1,2) \ge 1, and Assumption 2.2

states that each stage 2 job has at least one supplier job in stage 1.

In the early periods of the scheduling horizon, demand can cause

the first stage 2 job of a product to be scheduled so early that there

is not enough time to schedule its supplier jobs. Another possibility

is that initial inventory of stage 1 of product i is so high that the

first (i,2) job does not need supplier jobs for its input. The supplier

job assumption avoids these possibilities by forcing each stage 2 job

to have at least N(1,2) supplier jobs. Note that this assumption

effectively places an upper bound on initial in-process inventories.

Practical considerations of this upper bound will be discussed in a

later section. Furthermore, Assumption 2.2 ensures that if a product

is produced in stage 2, it has a component produced in stage 1, since

it must have a supplier job.

TIME      0   1   2   3   4

STAGE 1   1   1   2
STAGE 2           1   2   2

FIGURE 2.5A. Production rate counterexample-BST solution.

TIME      0   1   2   3   4

STAGE 1       2   1   1
STAGE 2           2   2   1

FIGURE 2.5B. Production rate counterexample-feasible solution.

Figure 2.6A presents an example of the violation of the supplier

job assumption. Each stage has one machine, p_1^1 = p_1^2, p_2^1 = p_2^2,

I_2^1(0) = p_2^2, and all stage 2 jobs have their relative deadlines at

period 3. Since I_2^1(0) = p_2^2, the first (2,2) job has no supplier job.

The BST solution is in Figure 2.6A, and the feasible solution is in

Figure 2.6B.

Assumption 2.3: (machine availability) The number of stage 1 machines

satisfies N_1 \le N_2 N(1,2).

Without the truncation in N(1,2), Assumptions 2.2 and 2.3 state

N_1 \le N_2 min_i (p_i^2 / p_i^1), or p_i^1 N_1 \le p_i^2 N_2 for all i. Thus, the one-period

production capability of stage 1 is no greater than the one-period

production capability of stage 2 for any product and the production

bottleneck is maintained in stage 1. The consequence of the machine

availability assumption, when combined with Assumptions 2.1 and 2.2,

is that there are no more machines available in any period t of stage 1

than are necessary to execute the minimum number of supplier jobs which

N2 jobs in period t+l of stage 2 can have. In other words, if there

are no idle machines in period t+l of stage 2 and all stage 1 jobs are

scheduled as late as possible, there are no idle machines in period t

of stage 1.

In the example of Figure 2.7A stage 1 has too many machines, in

violation of Assumption 2.3. Stage 1 has two machines, stage 2 has one

machine, and all stage 2 jobs have their relative deadlines at period 3.

The production rates satisfy 3p_1^1 = p_1^2 and p_2^1 = p_2^2; thus, N(1,2) = 1. Figure 2.7A

presents the BST solution, and Figure 2.7B presents the feasible

solution.

TIME      0   1   2   3

STAGE 1   1       2
STAGE 2       1   2   2

FIGURE 2.6A. Supplier job counterexample-BST solution.

TIME      0   1   2   3

STAGE 1       1   2
STAGE 2       2   1   2

FIGURE 2.6B. Supplier job counterexample-feasible solution.

TIME      0   1   2   3

STAGE 1   1   1
              1   2

STAGE 2           1   2

FIGURE 2.7A. Machine availability counterexample-BST solution.

TIME      0   1   2   3

STAGE 1       2   1
              1   1

STAGE 2           2   1

FIGURE 2.7B. Machine availability counterexample-feasible solution.

It has been shown that violation of any of the Assumptions 2.1,

2.2, or 2.3 can result in the BST failing to find a feasible solution,

when one exists. It will be shown that when all three assumptions are

met, the BST finds a feasible solution, if one exists. First, however,

some lemmas must be proven.

The first lemma characterizes a solution to a single-stage

problem obtained by Dorsey's method. It concerns the ordering of the

jobs by product number within a schedule.

Lemma 2.1: Consider a schedule, found by Dorsey's algorithm, to a

single-stage problem. If a job of some product i is scheduled

in period t1 earlier than its relative deadline t2, then every

machine in the interval [t1 + 1, t2] is utilized by the schedule,

and every job in the interval has a product number which is at

least as high as i.

Proof: Assume for the purpose of contradiction that a machine in some

period t in the interval [t1 + 1, t2] is idle or scheduled to process

a job of product i1, where i1 < i. When Dorsey's algorithm scheduled

period t, it assigned to t all product i jobs which were unscheduled

and had relative deadlines no earlier than t. This assignment was

made before any jobs of product i1 were assigned to t and before any

decision was made to leave a machine in t idle. Consequently, no job

of product i which has a relative deadline in period t or later is

scheduled earlier than period t. This contradicts the existence of

the product i job referred to in the hypothesis of the lemma.

Therefore, every machine in period t is utilized, and every job in

period t has a product number which is at least as high as i, for all

t in [t1 + 1, t2]. Q.E.D.

The insight to be found in Lemma 2.1 is that in a Gantt chart of

Dorsey's solution all the machine blocks between a given job and its

relative deadline contain jobs having product numbers no lower than

the product number of the job under consideration.

Lemma 2.2: If there exists a feasible solution to a single-stage

problem, then Dorsey's algorithm will find a unique, feasible

solution.

Proof: Follows from Dorsey [12]. Q.E.D.

The preceding assumptions and lemmas are used in Lemma 2.3, which

shows that, under certain conditions, a pairwise interchange of two

jobs in the second stage of a feasible schedule for a two-stage problem

can be made so that the resulting schedule is feasible.

In considering Lemma 2.3, refer to the Gantt chart in Figure 2.8.

Unlike previous Gantt charts in this chapter, each stage has an un-

specified number of machines, except that they satisfy the machine

availability assumption. The i3 in some of the periods represents a

class of products (not necessarily all of which have the same product

number) all of which have higher product numbers than i1 and i2. The

i4 in other periods represents another class of products. The i1 and

i2 in the chart indicate that each of those periods contains at least

one job of product i1 or i2, among other jobs.

Note in the proof of the lemma that all changes in the solution

are accomplished by applying Dorsey's algorithm to a stage or by a

pairwise interchange of jobs in which the job of the higher-numbered

product moves to a later period and the job of the lower-numbered

product moves to an earlier period.

TIME      t1   t1+1  ...  t2   t2+1  ...  t4-1  t4   ...  t5

STAGE 1   i2   i4    ...  i4   i4    ...  i4
STAGE 2              ...  i2   i3    ...  i3    i1

FIGURE 2.8. Portions of a solution for two adjacent stages.

Lemma 2.3: Consider the Gantt chart of a feasible solution to a two-

stage problem in which the first stage has a feasible solution

found by Dorsey's algorithm and in which each stage 2 job is

scheduled so that there are no idle machines between it and its

relative deadline. For any two products i1 and i2 such that

i1 < i2, consider an (i1,2) job in some period t4 and an (i2,2)

job in some period t2 (t2 < t4) both of which have relative

deadlines no earlier than t4. Furthermore, let i1 be the lowest

product number in period t4 of stage 2 and let this (i1,2) job be

the "earliest" (lowest job number) of the (i1,2) jobs in period t4.

Finally, let the (i2,2) job be the "latest" (highest job number)

of the (i2,2) jobs in period t2. If

(a) the jobs in the interval [t2+1, t4-1] of stage 2 are of the

product class i3,

(b) the (i1,2) job and the (i2,2) job are interchanged in the

schedule, and

(c) Assumptions 2.1, 2.2, and 2.3 are met,

then there exists a feasible solution which is identical to the

new solution in stage 2 and has a feasible solution from Dorsey's

algorithm in stage 1.

Proof: It will be shown that all the (i1,1) supplier jobs of the (i1,2)

job which are in the interval [t2, t4-1] are actually in period t4-1.

It will also be shown that there are no more than N(1,2) of these

(i1,1) supplier jobs in period t4-1. Thus, when the interchange of

their (i1,2) consumer job and the (i2,2) job is made, the N(1,2) (i1,1)

supplier jobs in t4-1 can interchange feasibly with N(1,2) (i2,1)

supplier jobs of the (i2,2) job in periods earlier than t2. The

resulting solution is feasible. Finally, Dorsey's algorithm is applied

to the new stage 1 solution.

Consider Figure 2.8. Let period t1 (t1 < t2) be the latest period

containing (i2,1) supplier jobs of the (i2,2) job in period t2. Define

all the stage 1 jobs in the interval [t1+1, t4-2] to be in the product

class i4. The (i2,1) supplier job in period t1 has its relative

deadline in period t2-1. By Lemma 2.1, each job in the interval

[t1+1, t2-1] has a product number no lower than i2, therefore, higher

than i1. By Assumptions 2.1 and 2.2, each stage 2 job in the interval

[t2+1, t4-1] has at least N(1,2) supplier jobs. Then, by Assumption 2.3

and Lemma 2.1, all stage 1 jobs in the interval [t2, t4-2] have product

numbers at least as high as those of the products in class i3, which

are higher than i1 and i2. Therefore, all products in the class i4

have numbers at least as high as i2 and higher than i1. Consequently,

there are no (i1,1) jobs in the interval [t1+1, t4-2].

Since the (i1,2) job of interest in period t4 is the lowest-

numbered job of the lowest-numbered product in period t4 and, by

hypothesis, there are no idle machines between the (i2,2) job in

period t2 and its relative deadline, then period t4 of stage 2 has N2

jobs, N2-1 of which have higher product numbers or higher job numbers

than the (i1,2) job under consideration. By all three assumptions

and Lemma 2.1, there are at least (N2-1)N(1,2) supplier jobs which

have higher product or job numbers than and are scheduled later than

the supplier jobs of the (i1,2) job (N(1,2) for each of the other jobs

in period t4 of stage 2). Therefore, period t4-1 contains at most

N(1,2) supplier jobs of the "earliest" (lowest-numbered) (i1,2) job

in period t4.

Interchange the highest-numbered (i2,2) job in period t2 and the

lowest-numbered (i1,2) job in period t4. (The interchange is feasible,

since each job has its relative deadline at least as late as period t4.)

It has been shown that the (i1,2) job in the interchange has at

most N(1,2) supplier jobs in the interval [t1+1, t4-1] (all in period

t4-1). By the supplier job assumption, the (i2,2) job in the inter-

change has at least N(1,2) supplier jobs in period t1 or earlier. In

order to maintain feasibility in stage 1 in conjunction with the

stage 2 interchange, the period t4-1 supplier jobs of the (i1,2) job

in the stage 2 interchange must be interchanged with the "latest"

supplier jobs of the (i2,2) job in the stage 2 interchange.

These stage 1 interchanges must preserve the ordering of the

jobs of the same product. There are two cases to be considered.

Case 1: There are no (i2,1) jobs in the interval [t1, t4-2] which are

"later" (higher-numbered) than the supplier jobs of the (i2,2)

job in the interchange.

Case 2: There are (i2,1) jobs in the interval [t1, t4-2] which are

"later" (higher-numbered) than the supplier jobs of the (i2,2)

job in the interchange.

If Case 1 is true, moving the (i2,1) supplier jobs to period t4-1

does not alter the ordering of the (i2,1) jobs. By Lemma 2.1, no

(i1,1) jobs lie between the supplier jobs of the (i2,2) job in the

stage 2 interchange and the supplier jobs' former relative deadline,

period t2. Consequently, moving the (i1,1) supplier jobs to the

locations of the (i2,1) supplier jobs does not alter the ordering of

the (i1,1) jobs. Therefore, make the necessary stage 1 pairwise

interchanges.

If Case 2 is true, it is shown in Lemma 2.1 that there are no

(i1,1) jobs in period t4-1. No stage 1 interchanges are necessary.

After the stage 2 interchange and the stage 1 interchanges (if

any are necessary) are made, both stages 1 and 2 have feasible schedules.

The only thing which remains to be done is to convert the stage 1

feasible solution to a feasible solution based on Dorsey's algorithm.

By Lemma 2.2, applying Dorsey's algorithm to the stage 1 problem

created by the new stage 2 solution completes the proof.

Q.E.D.

The results of Lemma 2.3 form the cornerstone of the proof of

the following theorem:

Theorem 2.1: If there exists a feasible solution to the two-stage

problem and if Assumptions 2.1, 2.2, and 2.3 are met, the Backward

Solution Technique finds a feasible solution.

Proof: The proof consists of assuming the existence of a feasible

solution to a two-stage problem and then converting it to a feasible

solution both stages of which are Dorsey solutions. After transforming

stage 1 into a Dorsey solution, the stage 2 feasible solution is compared

to a Dorsey solution of stage 2. Starting with the last period,

the stage 2 solution is converted by pairwise interchanges to the Dorsey

solution, period by period. After each interchange the stage 1

Dorsey solution is updated to maintain feasibility.

Consider any feasible solution to the two-stage problem. By

Lemma 2.2, stage 1 of the solution can be converted to a Dorsey solution.

The solution now is feasible in stage 2 and has a Dorsey solution in

stage 1. Order the jobs on the machines in each period of stage 2

by increasing product and job number. If there are any idle machines

in stage 2 between a job and its relative deadline, move the job to the

latest such machine.

Also consider the Dorsey solution to stage 2. Compare the

stage 2 feasible and Dorsey solutions. (The conditions of Figure 2.8

and Lemma 2.3 will now be created.) Find the latest period in which

the two stage 2 Gantt charts don't agree. Call that period t5. In

period t5 of the stage 2 Dorsey schedule find the highest-numbered

job of the highest-numbered product which is not in period t5 of the

stage 2 feasible solution. Call that job n2 of product i2. Find the

period in the stage 2 feasible solution which contains job n2 of

product i2. Call that period t2. Since the two stage 2 schedules

agree in the interval [t5+1, H], t2 < tS. By the ordering of job

numbers and the method of choice of job n2, job n2+1 of product i2

is in the interval [tS, H] in both stage 2 schedules. Thus, job n2 of

product i2 is the highest-numbered (i2,2) job in period t2 of the

feasible schedule. Job n2 of product i2 must now work its way, through

a series of pairwise interchanges, to period t5 of stage 2 of the

feasible solution. (Until it reaches t5, all discussion concerns the

feasible solution only.)

The next step is to find a job with which to interchange job n2

of product i2. Find the earliest period which is later than t2 and

which contains a product number smaller than i2. Call the period t4

and call the product number i1 (t2 < t4 and i1 < i2). Let job n1

be the lowest-numbered (i1,2) job in period t4. Let i3 be the

class of products in the stage 2 interval [t2+1, t4-1]. By the choice

of t4 and job number ordering, all the products in the class i3 have

higher numbers than i2. Interchange job n2 of product i2 and job n1

of product i1. Find the new feasible Dorsey schedule for stage 1,

the existence of which is shown in Lemma 2.3.

If t4 < t5, job n2 is the only (i2,2) job in period t4 (job n2+1

is in period t5 or later). Redefine t2 to be the old t4. Search for

a new period t4, product il, and job nl.

When job n2 reaches period t5 (t4 = t5), find a new period t5,

product i2, and job n2. The search must eventually end with the

conversion of period 1 into period 1 of the stage 2 Dorsey solution,

because the interval [t5,H] has no schedule alterations (there is no

cycling in the search).
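The mechanics of this backward conversion can be sketched for a single-machine sequence. This is only an illustration of the interchange bookkeeping; the restoration of stage 1 feasibility via Lemma 2.3 after each interchange is omitted.

```python
def convert_to_target(schedule, target):
    """Convert `schedule` into `target` (a permutation of the same jobs)
    by pairwise interchanges, working backward from the horizon: once a
    period agrees with the target, it is never touched again, so the
    search cannot cycle."""
    schedule = list(schedule)
    swaps = 0
    # t5 scans from the latest period toward period 1.
    for t5 in range(len(schedule) - 1, -1, -1):
        if schedule[t5] == target[t5]:
            continue
        t2 = schedule.index(target[t5])  # period holding the needed job (t2 < t5)
        for t in range(t2, t5):          # walk the job later, one interchange at a time
            schedule[t], schedule[t + 1] = schedule[t + 1], schedule[t]
            swaps += 1
    return schedule, swaps

seq, n = convert_to_target([3, 1, 2], [1, 2, 3])
print(seq, n)   # [1, 2, 3] 2
```

Because the suffix [t5, H] already agrees with the target when period t5 is processed, the needed job is always found in an earlier period, which is the no-cycling argument of the proof.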

The feasible solution now is a Dorsey solution in both stages.

Lemma 2.2 shows that Dorsey's algorithm finds unique solutions. Since

the BST finds Dorsey solutions at each stage, this two-stage solution

is the one the BST would find.

Q.E.D.

Since no mention was made of an objective function in the proof,

there is the following corollary:

Corollary 2.1: If there is a feasible solution to the two-stage

problem and if Assumptions 2.1, 2.2, and 2.3 are met, then the

BST finds a feasible solution for any objective function.

It remains to be shown under what additional conditions the BST

will find an optimal solution. The fourth assumption requires that

the products can be ordered equivalently in each stage by their cost
coefficients b_i^j.

Assumption 2.4: (cost) The inventory cost coefficients satisfy
b_i^j ≤ b_{i+1}^j for i = 1, ..., M-1 and j = 1, 2.

The meaning of the cost assumption in the first stage is that

the inventory carrying cost of a batch of product i is no greater

than the inventory carrying cost of a batch of product i+1. In stage 2

the cost is the value added to a batch, but the relationship between

products must still hold.

The assumption of the relationship between costs, when considered

for an individual stage, is the same assumption Dorsey [12] makes to

ensure that his method finds an optimal solution. Thus, under the

cost assumption, the BST finds a solution which is optimal in each stage,

if a solution exists for each stage. However, that does not mean the

total solution is optimal.

To show that the BST may not find an optimal solution if Assumption

2.4 is not met, consider a two-stage problem. Each stage has one

machine, each stage 2 job has its relative deadline at the end of

period 3, p_1^1 = p_2^1, and p_1^2 = p_2^2. Let b_1^1 = 3, b_2^1 = 1,
b_1^2 = 1, and b_2^2 = 2. Figure 2.9A contains the BST solution, which
has an objective value of 13. The optimal solution, in Figure 2.9B, has
an objective value of 14. (Remember that the objective is to maximize.)

TIME      1   2   3
STAGE 1   1   2
STAGE 2       1   2

FIGURE 2.9A. Cost counterexample-BST solution.

TIME      1   2   3
STAGE 1   2   1
STAGE 2       2   1

FIGURE 2.9B. Cost counterexample-optimal solution.
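The two objective values can be recomputed directly. The costs below are the reconstructed values (stage 1 costs of 3 and 1, stage 2 costs of 1 and 2 for products 1 and 2); Assumption 2.4 fails because 3 > 1 in stage 1.

```python
# b[stage][product]: reconstructed counterexample costs.
b = {1: {1: 3, 2: 1},
     2: {1: 1, 2: 2}}

def objective(schedule):
    """schedule: (stage, product, period) triples; value is the sum of t * b."""
    return sum(t * b[j][i] for (j, i, t) in schedule)

bst = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 3)]       # Figure 2.9A
optimal = [(1, 2, 1), (1, 1, 2), (2, 2, 2), (2, 1, 3)]   # Figure 2.9B

print(objective(bst), objective(optimal))   # 13 14
```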

Theorem 2.2: If the Assumptions 2.1, 2.2, 2.3, and 2.4 are met and

if a feasible solution exists to the two-stage problem, then the

Backward Solution Technique finds an optimal solution.

Proof: Consider any optimal solution. Clearly, since it is optimal,

no idle machines lie between a job and its relative deadline.

Convert the optimal solution to a solution which has Dorsey solutions

in each stage by the same conversion method as is used in the proof of

Theorem 2.1. There are four ways in which the solution is changed.

Reordering the jobs within a stage does not change their objective

value. Since there are no idle machines between a job and its relative

deadline, the jobs of stage 2 will not be moved later to fill idle

machines. In the proofs of Lemma 2.3 and Theorem 2.1 all pairwise

interchanges moved a job of a higher-numbered product (i2) later and

a job of a lower-numbered product (i1) earlier, for a net objective
value change per period moved of b_{i2}^j − b_{i1}^j ≥ 0 (by Assumption 2.4).

Finally, stage 1 is periodically solved using Dorsey's algorithm.

Dorsey [12] showed that, under Assumption 2.4 applied to a single stage,

his algorithm finds an optimal solution. Thus, under Assumption 2.4,

applying Dorsey's method to a solution in stage 1 does not worsen the

solution.

It has been shown that each alteration which must be made to an

optimal solution to a two-stage problem in order to convert it to a

solution having Dorsey solutions in each stage results in a solution

at least as good as the optimal solution. Consequently, under

Assumptions 2.1, 2.2, 2.3, and 2.4, a solution which has a Dorsey

solution in both of its stages is optimal (if a feasible solution exists).

Since the BST generates unique Dorsey solutions in each stage and

finds a feasible solution (Lemma 2.2 and Theorem 2.1), it finds that

optimal solution.

Q.E.D.

It has been shown that under the production rate, supplier job,

and machine availability assumptions the BST finds a feasible solution

to the two-stage problem (if a feasible solution exists), regardless

of objective function. With the addition of the cost assumption, the

BST finds the optimal solution. The combination of the production

rate and machine availability assumptions causes the frontal bottleneck

discussed earlier. The posterior bottleneck, for which p_i^1 ≥ p_i^2 for
all i and for which there is a lower bound on the number of machines
in stage 1, is discussed in the next chapter. The cases in which
p_i^1 < p_i^2 for some i and p_i^1 > p_i^2 for other i are very difficult
to analyze and are not discussed here. If p_i^1 > p_i^2, there are some product i

jobs in stage 2 which do not require supplier jobs (the supplier job

assumption would not hold). All the other assumptions are independent

of each other. The supplier job assumption affects the allowable

level of initial inventory and is very restrictive. It is shown,

however, in the next section that violation of the supplier job

assumption results in start-up problems, which are resolved after a

short interval. The cost assumption appears to be reasonable in that

an expensive (or valuable) product tends to remain expensive relative

to the other products as it goes through the stages of production.

In the next section, practical applications of the BST are considered,

such as problems in which demand may fluctuate and/or initial inventory

levels may cause the violation of the supplier job assumption.

2.6 Application of the Backward Solution Technique

The normal use of a scheduling technique like the BST is to

schedule production over a horizon using all available information on

demand and inventory. After performing one period's production, the

schedule is then recomputed using updated demand and inventory

information. This cycle is repeated for the remainder of the pro-

duction process.
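The rolling cycle just described can be sketched as a loop. The scheduler below is a deliberately simple lot-for-lot stand-in, not the BST itself; in practice the BST would be rerun over the remaining horizon each period.

```python
def roll_forward(demand, inventory, schedule_fn):
    """Rolling-horizon cycle: recompute the schedule from updated data,
    perform one period's production, update inventory, and repeat."""
    produced = []
    for t in range(len(demand)):
        plan = schedule_fn(demand[t:], inventory)   # recompute from current data
        produced.append(plan[0])                    # execute one period only
        inventory += plan[0] - demand[t]            # update inventory position
    return produced, inventory

# Stand-in scheduler: meet each period's demand, netting initial inventory.
lot_for_lot = lambda remaining, inv: [max((d - inv) if k == 0 else d, 0)
                                      for k, d in enumerate(remaining)]

out, inv = roll_forward([4, 2, 3], 1, lot_for_lot)
print(out, inv)   # [3, 2, 3] 0
```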

In computing the schedules it becomes apparent that initial,

in-process inventory may cause Assumption 2.2, that each (i,j+1) job has
at least N(j,j+1) supplier (i,j) jobs, not to be realized for early
jobs. In a practical sense, however, this situation is a start-up
problem which resolves itself after a period of time. Let H1 be the

period in the first schedule by which each product has had completed

at least one stage 2 job having a supplier job. From no later than H1

on, Assumption 2.2 is satisfied in the first schedule, which must be

optimal for all stage 2 jobs scheduled after H1 and for their suppliers.

Thus, the start-up period ends no later than period H1.

When demand remains unchanged, it is easily seen that any schedule

computed after period H1 is optimal. This scheduling technique loses

its effectiveness when large fluctuations in demand are encountered

from schedule to schedule. Under the assumption of reasonable accuracy

in the forecast of demand, it will be shown that the Backward Solution

Technique creates optimal schedules.

When computing a new schedule at time t, compute it from time 0.

Then use the portion from t to H. The inventory at time t is the new

initial inventory. The schedule from H1 to H satisfies Assumption 2.2

and is optimal. Thus, the schedule from t to H is optimal. Let ti

be the earliest period in this schedule in which product i has had

completed at least one stage 2 job having a supplier job. The half-

open interval [t,ti) is the interval of accuracy for the demand fore-

cast for product i.

An increase in demand has two possible impacts on the schedule.

There may be enough idle time in the necessary places to handle the

added load; thus, Assumption 2.2 is maintained. Since the addition

of a new job to a BST schedule can cause the jobs of lower-numbered

products to be pushed to earlier periods, the alternative impact of an

increase in demand is the creation of an infeasible problem. If an

increase in demand is for a period earlier than H1 and is feasible, it

may cause a redefinition of H1 to an earlier period. The same is true

for period ti.

A decrease in demand of one or fewer stage 2 jobs between schedules

also has two possible impacts on the schedule. If the decrease in

demand for product i applies to a period no earlier than ti, it is

easily shown that the affected jobs in the interval [t,ti) will

maintain the same relative ordering as they advance in the schedule,

as a result of the removal of the product i jobs. Thus, the schedule

remains optimal. If the decrease in demand for product i occurs for a

period earlier than ti, the schedule may revert to a start-up mode.

In order for the scheduling method to maintain optimality after

the start-up period, all decreases in demand for product i must occur

no earlier than ti and must be satisfiable by the cancellation

of jobs which have a supplier job. This cancellation is accomplished

when the demand forecast for product i is a lower bound on actual demand

in the interval.

Conditions have been developed under which the BST gives an

optimal solution. It was shown using worst case analysis that after

an initial start-up period the method can accommodate normal changes

in demand. In the next section the notation and assumptions will be

extended to cover N stages in series. Lemma 2.3 and Theorems 2.1

and 2.2 will be extended to N stages in series.

2.7 The Extension to N Stages in Series

Up to this point the discussion has concerned a two-stage problem.

A natural extension of this problem is to N stages in series. There

is no intuitive difference between the two problems. All facets of

the definition and solution of the new problem are the same, except

that there are N stages to consider.

In the notation, the range of the stage indicator j is now from

1 to N. The incremental carrying cost coefficients, π_i^2 and b_i^2, now
become π_i^j and b_i^j. The minimum number of supplier jobs in stage j
for each job in stage j+1 is N(j,j+1) and is defined by

N(j,j+1) = min_i ⌈ p_i^{j+1} / p_i^j ⌉ .

The functions w_{i,t}^j(D) and w_{i,t}^j(X^{j+1}) replace w_{i,t}(D) and
w_{i,t}(X^2), respectively. The Backward Solution Technique is unchanged, except

that in Step 1 the stage counter j is initialized at N.
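Assuming N(j,j+1) is defined as the minimum over products i of ⌈p_i^{j+1} / p_i^j⌉ (a reconstruction of the definition above, so treat it as an assumption), it can be computed as:

```python
from math import ceil

def min_supplier_jobs(p_j, p_j1):
    """N(j, j+1): the minimum, over products i, of ceil(p_i^{j+1} / p_i^j),
    the number of stage-j jobs needed to feed one stage-(j+1) job of
    product i."""
    return min(ceil(q / p) for p, q in zip(p_j, p_j1))

# Hypothetical rates for two products, with p^j <= p^{j+1} (Assumption 2.5).
print(min_supplier_jobs([2, 3], [5, 3]))   # min(ceil(5/2), ceil(3/3)) = 1
```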

Assumption 2.1 (production rate) becomes

Assumption 2.5: The production rate p_i^j ≤ p_i^{j+1} for all i and for

j = 1, ..., N-1.

Assumption 2.2 (supplier job) becomes

Assumption 2.6: Each stage j+1 job has at least N(j,j+1) stage j

supplier jobs, for j = 1, ..., N-1.

Assumption 2.3 (machine availability) becomes

Assumption 2.7: The number of stage j machines

N^j ≥ N^{j+1} N(j,j+1) for j = 1, ..., N-1.

Lemma 2.3 becomes Lemma 2.4 in which i1, i2, i3, i4, t1, t2,

and t4 are defined exactly as they are for Lemma 2.3, except that they

are in stages j and j+1 (Figure 2.10).

Lemma 2.4: Consider the Gantt chart of a feasible solution to an

N-stages-in-series problem in which the first j stages each have

a Dorsey solution and in which each stage j+1 job is scheduled

so that there are no idle machines between it and its relative

deadline. If

(a) the jobs in the interval [t2+1, t4-1] of stage j+1 are of

the product class i3,

(b) the (i1, j+1) job and the (i2, j+1) job are interchanged in

the schedule, and

(c) Assumptions 2.5, 2.6, and 2.7 are met,

then there exists a feasible solution which is identical to the

new solution in stage j+1 and has a Dorsey solution in the first

j stages, for j = 1, ..., N-1.

Proof Outline: The proof is by induction on j. Note that after

each interchange stages j+2 through N remain unchanged and, therefore,

feasible.

TIME        t1  ...  t2   ...  t3-1  t3  ...  t4-1  t4
STAGE j     i2  ...  i4   i4   i4    i1
STAGE j+1            i2   ...  i3    i3  ...  i3    i1

FIGURE 2.10. Portions of a solution for two adjacent
stages of an N-stage problem.

For j = 1, Lemma 2.3 is the proof.

Make the induction assumption that Lemma 2.4 is true for j = K ≥ 1.

For j = K+1, make the interchange of the (i1, j+1) and (i2, j+1)

jobs, as in the proof of Lemma 2.3. Then, as each pair of their stage j

supplier jobs are interchanged to re-establish stage j feasibility,

invoke the induction assumption to create feasible Dorsey solutions

in stages 1 through j-1. After establishing a feasible solution in

stage j, all that remains is to convert stage j to a feasible Dorsey

solution while maintaining feasible Dorsey solutions in stages 1

through j-1. This is accomplished by means of a period by period

conversion of stage j, like the one used on stage 2 in the proof of

Theorem 2.1. After each interchange, invoke the induction assumption

to maintain feasible Dorsey solutions in stages 1 through j-1. This

concludes the induction and proves the lemma.

Q.E.D.

The results of Lemma 2.4 form the cornerstone of the proof of

Theorem 2.3 (the successor of Theorem 2.1).

Theorem 2.3: If there exists a feasible solution to the N-stage

problem and if Assumptions 2.5, 2.6, and 2.7 are met, then the

Backward Solution Technique finds a feasible solution.

Proof Outline: The proof is by induction on N.

For N = 2, Theorem 2.1 is the proof.

Make the induction assumption that Theorem 2.3 is true for N = K.

For N = K+1, if there is a feasible solution, the first N-1 stages

have, by the induction assumption, a feasible solution (found by the

BST) which has a Dorsey solution in each stage. With this feasible

solution, all that remains is to convert stage N to a feasible Dorsey

solution while maintaining feasible Dorsey solutions in stages 1

through N-1. This is accomplished by means of a period by period

conversion of stage N to a feasible Dorsey solution, like the one used on

stage 2 in the proof of Theorem 2.1. After each interchange invoke

Lemma 2.4 to re-establish feasible Dorsey solutions in stages 1 through

N-1. This provides a feasible solution to the N-stage problem which

has a Dorsey solution in each stage. Since Dorsey solutions are

unique, this is the solution found by the BST. Thus, the induction

proof is complete.

Q.E.D.

To show that the BST finds an optimal solution, Assumption 2.4

becomes

Assumption 2.8: The inventory cost coefficients satisfy b_i^j ≤ b_{i+1}^j for i = 1, ..., M-1

and all j.

In order to show that the BST finds an optimal solution, it is

necessary to prove an additional lemma. The lemma extends Lemma 2.4

to finding a new feasible solution which has an objective value at

least as large as that of the original feasible solution. The lemma

again refers to Figure 2.10.

Lemma 2.5: If the hypothesis of Lemma 2.4 and Assumption 2.8 are met,

then there exists a feasible solution which has an objective value

at least as great as that of the original feasible solution

and which is identical to the new solution in stage j+1 and has a

Dorsey solution in the first j stages, for j = 1, ..., N-1.

Proof Outline: The proof is by induction.

For j = 1, since the interchange and the determination of the

Dorsey solution in the proof of Lemma 2.3 cannot decrease the objective

value, the first induction step is proven.

Make the induction assumption that Lemma 2.5 is true for j = K.

For j = K+1, use the same proof as was used in the last step of

the induction proof of Lemma 2.4. Invoke the induction assumption of

the present proof in the place of the induction assumption of the

proof of Lemma 2.4. The only other movement of jobs is by pairwise

interchange. No interchange, by Assumption 2.8, decreases the objective

value. Thus, the objective value of the feasible solution found

during the induction is at least as great as that of the original

feasible solution.

Q.E.D.

To show that the BST finds an optimal solution, Theorem 2.2

becomes

Theorem 2.4: If Assumptions 2.5, 2.6, 2.7, and 2.8 are met and if a

feasible solution exists to the N-stage problem, then the Backward

Solution Technique finds an optimal solution.

Proof Outline: The proof is by induction.

For N = 2, Theorem 2.2 is the proof.

Make the induction assumption that Theorem 2.4 is true for N = K.

For N = K+1, assume an optimal solution for the N-stage problem.

By invoking the induction assumption, convert the first N-1 stages to

a BST solution. This N-stage solution is still optimal and has Dorsey

solutions in the first N-1 stages. Convert stage N to a Dorsey solution

by using the same period by period conversion used in the proof of
Theorem 2.3. Instead of invoking Lemma 2.4 after each stage N

interchange, use Lemma 2.5. The conversion results in a feasible

solution which has a Dorsey solution in each stage and which has

an objective value as great as that of the original optimal solution.

Since this is a maximization problem and Dorsey solutions are unique,
the BST finds this optimal solution.

Q.E.D.

It has been shown that the BST, under certain assumptions, finds

an optimal solution, if a feasible solution exists. A more general

production system is presented in the next section. Conditions

under which the BST can find an optimal solution are discussed.

2.8 Further Extensions of the Backward Solution Technique

Until this section the discussion has concerned only problems in

which the production system consisted of stages in series. To consider

more complex systems, it is necessary to introduce the concept of a

stage diagram. The stage diagram is a network representation of the

production system. Each node in the diagram represents a production

stage. The directed arcs connecting the nodes indicate a possible

path a product can take through the system on its way to completion.

In the stage diagram in Figure 2.11, the arcs show that a component

produced in stage 3 must be used in at least one of the stages 6

through N-3. Finished products are output from both stages N-1 and N.

A supplier stage of stage j is any stage in the diagram from which

stage j has an incoming arc. Let S(j) be the set of supplier stages

of stage j. A consumer stage of stage j is any stage in the diagram

to which stage j has an outgoing arc. Let C(j) be the set of all

FIGURE 2.11. Stage diagram of a general production system.

consumer stages of stage j. In this terminology the production rate

and supplier job assumptions are concerned with a stage and its supplier

stages. Those two assumptions, along with the cost assumption, remain

unchanged. The machine availability assumption, because a stage can have

more than one consumer stage, becomes

Assumption 2.9: The number of machines

N^j ≥ min_{j1 ∈ C(j)} N^{j1} N(j,j1) .

A result of the supplier job assumption, which becomes more

apparent for the general production system, is that any product

produced in stage j must be produced in each of its supplier stages.

However, a product produced in stage j is only required to be pro-

duced in at most one of its consumer stages. As an example from

Figure 2.11, a product produced in stage N-3 must be produced in

stages 3 and 4 but is only required to be produced in at most one of

stages N-1 and N. Thus, there are only two possible paths for a

product to follow through the production system represented in

Figure 2.11. One path ends in stage N-1 and includes stages 1, 3, 4, and

6 through N-3. The other path ends in stage N and includes stages

1, 2, 3, 4, 5, N-3, and N-2.

The production system under consideration for the BST must be

representable as an acyclic, directed network. With the

exception of the replacement of Assumption 2.7 by Assumption 2.9,

the feasibility and optimality theorems and their proofs are

essentially unchanged. The BST would solve the stages of the system

in Figure 2.11 in the order of their decreasing stage numbers.

A general production system was described. The assumption

alterations necessary to accommodate the system were explained.

Finally, it was noted without proof that with these changes the BST

finds an optimal solution to the general production system, if a feasible

solution exists.

2.9 Conclusion

An extension of a single-stage, multi-machine, multi-product

scheduling problem into a multi-stage problem was discussed in this

chapter. The two-stage problem was considered first. A method, called

the Backward Solution Technique, was introduced which solves each stage

with a greedy algorithm developed by Dorsey [12]. It was found that

when the bottleneck occurs in the initial production stage (the pro-

duction rate and machine availability assumptions) and when production

start-up effects are over (the supplier job assumption), the BST finds

a feasible solution, if one exists. With the addition of the cost

assumption, the BST finds an optimal solution. It was shown by

counter-example that relaxation of the assumptions can lead to failure

to find the desired solution. Practical application of the BST was

discussed. It was found that after a start-up period and with

reasonably accurate demand forecasts, the production system met the

requirements for the effective use of the BST.

After the two-stage problem an extension to N stages in series

was presented. In the final extension, the production system formed

a general, directed, acyclic network. With few basic changes, the

assumptions, lemmas, and theorems remained the same for each of the

extensions.


The next logical avenue of exploration is the case in which

p_i^j ≥ p_i^{j1} for j1 ∈ C(j) and all j. This moves the production bottle-

neck to the last stage. This problem is considered in the next

chapter.

CHAPTER 3

A POSTERIOR BOTTLENECK PROBLEM

3.1 Introduction

In Chapter 2 the frontal bottleneck problem was discussed. The

cause of the frontal bottleneck was that the production rate for each

product was a nondecreasing function of its level within the network

of stages. As a result of the production rate at least one supplier

job exists in each supplier stage for every job in a given stage. Such

a property along with properties on the inventory carrying cost and the

number of machines in successive stages allowed the development of a

single-pass algorithm for an optimal solution.

In this chapter the case in which the production rate for each

product is a nonincreasing function of its level within the network of

stages is dealt with. This causes the existence of no more than one

supplier job in each supplier stage for every job in a given stage. In

fact, some jobs may have no supplier jobs at all and may depend upon

receiving their input from the supplier job of some other job in their

stage. This creates a potential production bottleneck in the final

stages of production.

In this chapter it is shown that under certain conditions this

problem can be reformulated as a single-stage, multi-machine, multi-

product scheduling problem in which there are precedence constraints

among some of the jobs. Each job will have linear deferral costs and

an availability time which may differ from job to job.

It may appear that this is a relatively easy problem to solve.

However, Lenstra [30] has shown that the problem is NP-complete. This

caused Elmaghraby and Sarin [13] to examine the bounds on a heuristic

for a relaxed version of the problem.

Other papers deal with further relaxations of the problem in

which each job has the same availability time and there is a single

machine. Horn [23] solves the problem in which the precedences are

sets of arborescences (forests). Adolphson and Hu [1] solve the problem

for an arborescence in worst case time O(n log n) and shorten Horn's proof.

Lawler [29] solves the problem in which the precedences involve parallel

series of jobs and realizes a worst case time of O(n log n). Sidney [37]

develops an algorithm which decomposes and solves general, acyclic

networks of jobs.

Hodgson and Loveland [21,22] address a multi-machine problem which

has similar structure but minimizes the completion time of the latest

job. Bartholdi, Martin-Vega, and Ratliff [2] survey parallel processor

scheduling problems.

3.2 Formulation

In this section the posterior bottleneck case of the multi-stage,

multi-machine, multi-product scheduling problem is formulated. The

conditions under which the problem can be reformulated as a single-stage

problem are developed. Since even the single-stage problem is difficult,

there is a discussion of solutions to relaxed versions of the problem

in preparation for the heuristics of Chapter 4.

Consider a multi-stage, multi-machine, multi-product scheduling

problem in which all the stages in the whole production system must form

an assembly network. An example of such a system is in Figure 3.1A.

Figure 3.1A. General production system.

Figure 3.1B. Stage diagram for first product.

Figure 3.1C. Stage diagram for second product.

Each node is a production stage and is in one of L production levels.

A product does not have to go through each stage. In this example there

are two products. The first product only uses the stages shown in

Figure 3.1B; the second product uses the nodes in Figure 3.1C.

The notation and terminology are the same as those of Chapter 2.

The mathematical programming formulation is

Max  Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{t=1}^{H} t b_i^j x_{i,t}^j                (3.1)

s.t.

P3.1

Σ_{i=1}^{M} x_{i,t}^j ≤ N^j ,
    j = 1, ..., N and t = 1, ..., H                                        (3.2)

Σ_{k=1}^{t} p_i^j x_{i,k}^j ≥ Σ_{k=1}^{t+1} p_i^{j1} x_{i,k}^{j1} − I_i^j(0) + δ_{tH} I_i^j(H) ,
    i = 1, ..., M, t = 1, ..., H,
    j1 a consumer stage of stage j,
    and j not in level L                                                   (3.3)

p_i^{j1} x_{i,1}^{j1} ≤ I_i^j(0) ,
    i = 1, ..., M, j1 a consumer stage                                     (3.4)
    of stage j, and j not in level L

Σ_{k=1}^{t} p_i^j x_{i,k}^j ≥ Σ_{k=1}^{t} d_{i,k} − I_i^j(0) + δ_{tH} I_i^j(H) ,
    i = 1, ..., M, t = 1, ..., H,                                          (3.5)
    and j in level L

x_{i,t}^j ≥ 0 and integer.

(Note: x_{i,H+1}^j ≡ 0 and p_i^j ≡ 0 for i ∉ G(j);
δ_{tH} = 1 if t = H and 0 otherwise.)

As in Chapter 2, the objective function (3.1) is the nonconstant

portion of the inventory carrying cost. Since the cost is nondecreasing

over time, the jobs are forced to be scheduled as late as possible.

Constraint (3.2) ensures that no more jobs are scheduled in a period

than there are machines for that stage. The purpose of constraint (3.3)

is to force the production of the first t periods of stage j to be great

enough to supply that part of the input requirements of the first t+1

periods of stage jl production which is not supplied by stage j initial

inventory. The same constraint causes the desired level of final

inventory for stage j to be reached. Constraint (3.4) shows that pro-

duction in the first period of stage jl is limited by the initial

inventory of stage j. The effect of constraint (3.5) is the same as

that of constraint (3.3), except that the demand for finished goods is

satisfied instead of the input requirements of a consumer stage.

Consider problem P3.1 when

p_i^j ≥ p_i^{j1} for j1 a consumer stage of j.

Under these circumstances an (i,j) job produces at least as many units

as an (i,j1) job requires. Assumption 2.2 of Chapter 2 no longer is

satisfied. Each (i,j1) job has no more than one supplier job in stage j.

The production bottleneck is in the final stages. As a result, the

analysis of Chapter 2 does not apply here.

For two stages in series, if the supplier stage has as many

machines as the consumer stage, the supplier jobs are able to be

scheduled at their relative deadlines. Figure 3.2 shows two such stages,

each of which has two machines. The product 2 job in period 2

of stage 1 supplies the input for both product 2 jobs in stage 2.


Time      1   2   3   4
Stage 1   1   2
Stage 2       1   2   2

Figure 3.2. Two stages in series in which p_1^1 = p_1^2 and p_2^1 = 2 p_2^2.

Its consumer job is the product 2 job in period 3 of stage 2. Thus,

the stage 1 job has its relative deadline at period 2. Since each

stage 2 job has at most one supplier job in stage 1 and since stage 1

has at least as many machines as stage 2, every feasible schedule

can have the jobs of stage 1 at their relative deadlines. In particular,

the optimal solution has the jobs of stage 1 at their relative dead-

lines. The position of the consumer job completely determines that of

its supplier.

It is easily seen that for a set of stages in series, in which the

number of machines in a stage is at least as great as the number in its

consumer stage, the positions of the jobs in the final stage of an optimal

solution completely determine the positions of the jobs in all the earlier

stages. This leads to the following theorem.

Theorem 3.1: If p_i^j ≥ p_i^{j1} and N^j ≥ N^{j1} for j ∈ S(j1) and the stages of the

production system form an assembly network, the positions of the

jobs in the final stage of the optimal solution of problem P3.1

completely determine the optimal positions of the jobs in the

earlier stages.

Proof: By the above argument, the last stage of the optimal schedule

determines the optimal position of all the jobs in any path (i.e., series

of stages) through the network. Since each stage in a given level of

the assembly network is independent of every other stage in that level,

the terminal stage of the optimal schedule determines the optimal

position (one period earlier than their consumer jobs) of all the jobs

in every path through the network.

Q.E.D.
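The determination described by Theorem 3.1 is purely mechanical: each supplier job is placed one period before its consumer, working back from the terminal stage. A minimal sketch, using hypothetical job names:

```python
def supplier_schedule(terminal_periods, suppliers):
    """Place every supplier job one period before its consumer, per
    Theorem 3.1: the terminal-stage schedule fixes all earlier stages."""
    periods = dict(terminal_periods)
    frontier = list(terminal_periods)
    while frontier:
        job = frontier.pop()
        for s in suppliers.get(job, []):
            periods[s] = periods[job] - 1   # supplier at its relative deadline
            frontier.append(s)
    return periods

# Hypothetical three-stage series: terminal job c1 in period 3 consumes b1,
# which consumes a1.
print(supplier_schedule({'c1': 3}, {'c1': ['b1'], 'b1': ['a1']}))
# {'c1': 3, 'b1': 2, 'a1': 1}
```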

By Theorem 3.1 optimally solving this multi-stage problem is

equivalent to finding the schedule of the stage in level L of its

optimal solution. In every optimal solution each job attempts to be at

its deadline. Since there are always enough machines in the supplier

stages for that purpose, each optimal solution to the problem has the

jobs in stages of level less than L at their relative deadlines. Because

of this, all the supplier jobs and consumer jobs of every job can be

designated before any solution procedure starts.

All of this requires, without loss of generality, an ordering among

the jobs of each product in the last stage (level L). The ordering does

not change the optimal schedule. It merely numbers the jobs ahead of

time and ensures that they are in order in all, including the optimal,

schedules. The effect of the ordering, for some stage j in level L, is

to ensure that the first (i,j) job can be scheduled no later than the

second (i,j) job, etc. An example of this appears in Figure 3.3A.

Figure 3.3A shows a production system consisting of three stages

in series, each having one machine. There is a single product which

uses all three stages. Figure 3.3A is a Gantt chart in which the numbers

are job numbers for that stage of the single product. The production rates,

as shown in Figure 3.3A, are such that the stage 2 jobs in periods

2 and 3 are supplier jobs of jobs 1 and 2, respectively, in stage 3.

The job in stage 1 is a supplier of job 1 in stage 2. Actually,

job 1 in stage 1 supplies the input to both jobs 1 and 2 in stage 2.

Thus, it must be scheduled earlier than the earlier of those two jobs

in stage 2. A precedence constraint would require job 1 to be

performed no later than job 2 in stage 3. If, in addition to the

Time      1   2   3   4
Stage 1   1
Stage 2       1   2
Stage 3           1   2

p_1^1 = 2, b_1^1 = 5;  p_1^2 = 1, b_1^2 = 20;  p_1^3 = 1, b_1^3 = 30

Figure 3.3A. Solution for single product, 3 stages.

Time      1   2   3   4
Stage 3           1   2

p_1^3 = 1, b_{1,1} = 55, b_{1,2} = 50

Figure 3.3B. Solution of Figure 3.3A reduced to 1 stage.

precedence, the relative deadlines of the jobs are observed, the jobs

in each stage will be ordered by their job numbers.

If the above ordering is kept, a job in some level k (k < L) is in

period t if and only if its consumer job is in period t+1. Thus,

moving a job in a stage in level L causes all of its supplier jobs to

move with it by the same number of periods. It is reasonable, there-

fore, to add the cost of the level k job to that of its consumer job

and remove the level k job from the problem. If this is done for all

level k jobs and for all levels k < L,

this multi-stage problem becomes a single-stage problem which has

precedence relationships among the jobs of the same product. Figure 3.3B

demonstrates the reduction of the problem of Figure 3.3A from three

stages to a single stage problem having cumulative costs and a precedence

constraint between jobs 1 and 2.
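The cost-accumulation step of the reduction can be sketched as follows (a minimal Python illustration; the job names, the dictionary encoding, and the restriction that each supplier job folds into a single consumer job are assumptions of this sketch, not the dissertation's notation):

```python
def reduce_to_single_stage(costs, consumer_of, terminal_jobs):
    """Fold each supplier job's cost into its terminal-stage job by following
    the chain of consumer jobs down to level L.  costs maps job -> deferral
    cost; consumer_of maps each non-terminal job to its consumer job."""
    combined = {j: 0.0 for j in terminal_jobs}
    for job, c in costs.items():
        t = job
        while t not in terminal_jobs:      # walk the consumer chain to level L
            t = consumer_of[t]
        combined[t] += c                   # cumulative cost, as in Figure 3.3B
    return combined
```

With three stages in series, the stage-1 and stage-2 costs accumulate onto the corresponding terminal-stage jobs, leaving a single-stage problem with combined cost coefficients.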

The following theorem formally states the conditions for this stage

reduction.

Theorem 3.2: If (a) p_i^j ≥ p_i^j' and N_j ≥ N_j' for j ∈ S(j'), (b) the production

system is an assembly network, (c) the production stages of each

product are connected by arcs, and (d) a series ordering is placed

on the jobs of the same product within the terminal stage, then

problem P3.1 can be reformulated as a single-stage problem which

has precedence constraints among the jobs of the same product.

Proof: The proof was established in the preceding discussion.

Q.E.D.

In this new single-stage problem the jobs are distinguishable because

of the precedence constraints. As a result, the demand and the time at

which it occurs can be translated into due dates for the individual jobs.

In order to conform with similar problems in the literature, the

time frame of the single-stage problem will be reversed. This translates

the problem into one of minimization, thus forcing the jobs as early

as possible. The due dates become availability times.

The notation for the new problem has c_{i,k} as the combined cost

coefficient of suppliers for the kth job of product i. The constant

M_i is the number of jobs of product i. Also, M, H, and N are the number

of products, the horizon, and the number of machines, respectively. The

term a_{i,k} is the availability time of the kth job of product i. The

decision variable x_{i,k,t} is 1 if the kth job of product i is scheduled

in period t and is 0 otherwise. The new problem has the following

mathematical programming formulation for the terminal stage:

                M  M_i     H
     min        Σ   Σ      Σ      t c_{i,k} x_{i,k,t}                    (3.6)
               i=1 k=1  t=a_{i,k}

     s.t.

                   H
     P3.2          Σ      x_{i,k,t} = 1,    i = 1, ..., M and            (3.7)
               t=a_{i,k}                    k = 1, ..., M_i

                M  M_i
                Σ   Σ   x_{i,k,t} ≤ N,      t = 1, ..., H                (3.8)
               i=1 k=1

                            H
               x_{i,k,t} -  Σ  x_{i,k+1,r} ≤ 0,   i = 1, ..., M;        (3.9)
                           r=t                    k = 1, ..., M_i - 1;
                                                  t = a_{i,k}, ..., H

               x_{i,k,t} = 0 or 1.

(Note: x_{i,k,t} = 0 for t < a_{i,k}.)

The objective function of P3.2, represented by (3.6), is a

linearly increasing function over time and, thus, forces the jobs to

be scheduled as early as possible. Constraint (3.7) ensures that each

job is scheduled exactly once and in its interval of availability.

Constraint (3.8) limits the number of jobs in a period to the number

of available machines. Constraint (3.9) is a precedence constraint

which maintains the ordering relationship among the jobs of the same

product.

As is pointed out in the literature review, Lenstra shows this

problem is NP-complete. Therefore, an attempt to find a way to solve

it efficiently would appear fruitless. A more promising approach seems

to be to find a good heuristic. A place to look is at optimal solution

methods for relaxed versions of P3.2.

Clearly, P3.2 without the constraints (3.9) is the well known

transportation problem. It is easily and efficiently solved. However,

because the cost coefficients may fluctuate so greatly from job to job,

it is doubtful that a network solution would show much resemblance to

an optimal solution to P3.2.

If the number of machines, N, is dropped to one and the availability

times become zero, the problem is well solved. Sidney has the most

generally applicable method for optimally solving the single-machine

problem. His method is the basis for one of the heuristics of the next

chapter.

3.3 Conclusion

Another case, the posterior bottleneck, of the multi-stage, multi-

machine, multi-product scheduling problem was examined in this chapter.

The bottleneck exists when the production rate for any given product is

a nonincreasing function of the level of the stage to which it applies

(p_i^j ≥ p_i^j' for j ∈ S(j')). Unlike Chapter 2, no restrictions were placed

on the cost coefficients of the jobs. The production system was an

assembly network of stages.

It was explained that numbering the jobs of each product of the

terminal stage and requiring the jobs to be scheduled in order does not

cause a loss in the generality of the solution. It merely allows the

identification of a job's supplier and consumer jobs before the solution

procedure begins. As a result, it was shown that, with the proper lower

bound on the number of machines in the stages outside level L, the

problem can be reformulated as a single-stage problem having serial

precedence constraints among the jobs of the same product.

Solution approaches for the problem and its relaxed versions were

discussed at the end of the second section and in the literature review

of the first section of the chapter. It was decided that since the

problem is NP-complete, heuristics, one of which will be based upon

Sidney's algorithm for the single-machine relaxation of the problem,

will be developed to solve the problem. The two heuristics of the next

chapter include an efficient method for the multiple interchange of

jobs in a feasible schedule. An enumeration algorithm is developed which

takes advantage of many properties of the problem to improve its

efficiency. The enumeration algorithm is used to test the accuracy of

the heuristics.

The results of 200 randomly generated problems are presented in the

latter part of the chapter. Statistics are developed to measure the

efficiency and accuracy of the heuristics.

CHAPTER 4

HEURISTICS AND TESTING

4.1 Introduction and Formulation

Consider the problem of scheduling M products on N identical

machines over a horizon of length H. Each product i has M_i unit-duration

jobs. Each job k of product i has a deferral cost of ci,k units per

period. In addition, each job k has an availability time ai,k. Within

the products the jobs are ordered by serial precedence constraints

(job number). For product i, job k can be scheduled no later than

job k+1. Let x_{i,k,t} be the 0-1 decision variable which is 1 if job k

of product i is scheduled in period t and 0 otherwise.

The objective of the problem is to schedule all the jobs within the

horizon while maintaining the precedence constraints and minimizing the

deferral cost. The mathematical programming formulation of the

problem is

                M  M_i     H
     min        Σ   Σ      Σ      t c_{i,k} x_{i,k,t}                    (4.1)
               i=1 k=1  t=a_{i,k}

     s.t.

                   H
     P4.1          Σ      x_{i,k,t} = 1,    i = 1, ..., M and            (4.2)
               t=a_{i,k}                    k = 1, ..., M_i

                M  M_i
                Σ   Σ   x_{i,k,t} ≤ N,      t = 1, ..., H                (4.3)
               i=1 k=1

                            H
               x_{i,k,t} -  Σ  x_{i,k+1,r} ≤ 0,   i = 1, ..., M;        (4.4)
                           r=t                    k = 1, ..., M_i - 1;
                                                  t = a_{i,k}, ..., H

               x_{i,k,t} = 0 or 1.

(Note: x_{i,k,t} ≡ 0 for t < a_{i,k}.)

The problem P4.1 without constraint (4.4) can be solved as a network

flow problem. Thus, problem P4.1 can be viewed as a network problem

having additional complicating constraints. Problem P4.1 has been shown

to be NP-complete. For that reason another approach must be taken to

the solution of the problem.

The following sections are used to develop and test two heuristics

augmented by a multiple interchange method. An enumeration algorithm

is developed to test the accuracy of the heuristics. This and the

heuristics are combined as subroutines into a Fortran program for the

purpose of testing on randomly generated problems. The final section

describes the testing and results.

4.2 The Heuristics

Since the problem P4.1 is a very difficult problem to solve, two

heuristics, SCHED1 and SCHED2, have been developed and programmed in

Fortran IV. To augment the heuristics, a method of performing multiple

interchanges on the jobs in a schedule has also been programmed. The

method is applied to the solution found by each heuristic in an attempt

to improve it. This section describes the heuristics and the multiple

interchange method applied to them.

Define a feasible substring of a job string to be a substring which

contains the first job on that job string and consecutive jobs which are

available to successive machines. Use Figure 4.1 as an example and

let there be two machines in each period. Also, let the next available

machine be the first machine in period 1. There are three feasible

[Figure: two job strings drawn against availability times 1, 2, and 3; the first string consists of jobs 1 through 3 and the second of jobs 4 through 7.]

Figure 4.1. Availability times and precedences for a two product

problem.

substrings of the first job string. Job 1 is a feasible substring because

it is available to that first machine in the first period. Jobs 1 and 2

form a feasible substring because job 2 is available for the second

machine of period 1 while machine 1 does job 1. Jobs 1, 2, and 3

form a feasible substring. Since jobs 1 and 2 can occupy the two machines

in period 1, the first machine in period 2 becomes the next available

machine. Job 3 is available for it. For similar reasons, other feasible

substrings are job 4; jobs 4 and 5; and jobs 4, 5, and 6. Jobs 4 through

7 do not comprise a feasible substring. After jobs 4 through 6 are

assigned to the machines of period 1 and the first machine of period 2,

the second machine of period 2 becomes the next available machine. Job 7

is not available for that machine and cannot be included in the substring.

If, however, the jobs in Figure 4.1 had machine 2 of period 1 as the next

available machine, machine 1 of period 3 would be the next available

machine after scheduling jobs 4 through 6. Job 7 is available for that

machine. Thus, jobs 4 through 7 would comprise a feasible substring.
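The feasible-substring test just illustrated can be sketched as follows (Python; the global slot-index encoding of machines, with N machines per period filled in chronological order, is an assumption of this sketch):

```python
def period_of(slot, N):
    """Slots count machines chronologically: slots 0..N-1 are period 1,
    N..2N-1 are period 2, and so on."""
    return slot // N + 1

def feasible_substrings(string, avail, slot0, N):
    """All feasible substrings of a job string: each begins with the string's
    first job, and its d-th job must be available to the (slot0 + d)-th
    machine, where slot0 is the next available machine."""
    subs = []
    for m in range(1, len(string) + 1):
        if all(avail[string[d]] <= period_of(slot0 + d, N) for d in range(m)):
            subs.append(string[:m])
        else:
            break                  # longer prefixes cannot be feasible either
    return subs
```

With the availability times of Figure 4.1 (jobs 1, 2, 4, 5, 6 in period 1; job 3 in period 2; job 7 in period 3) and two machines per period, starting at machine 1 of period 1 reproduces the substrings listed above, while starting at machine 2 of period 1 makes jobs 4 through 7 feasible.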

The first heuristic (called SCHED1) is based on a measure of the

potential penalty of not scheduling a given job on the first available

machine. The calculation is made for the first job, if available, of

every job string. Considering the job for scheduling on the second

available machine, not the first available machine, the longest feasible

substring of that job string is determined. Analysis is performed as if

the substring were to be scheduled next, starting on the second available

machine. The jobs of interest are those which fall on the first machine

in a period. Either their availability time is that period or they

would be scheduled in the preceding period, if the string's first job

were scheduled on the first available machine. If that job on the first

machine of the period is at its availability time, neither it nor its

successor jobs are considered in this analysis. If, however, it is

not at its availability time, the job has been forced one period later

by scheduling the string's first job on the second available machine.

This job and all like it would cause the penalty incurred by not

scheduling the string's first job on the first available machine. Thus,

the penalty cost on the first job of the string is defined as the sum

of the deferral costs of all such jobs which would be forced a period

later if the string's first job is scheduled on the second available

machine.

The analysis is done for every job string. The available first job

having the greatest penalty is actually scheduled on the first available

machine and removed from its job string. The analysis is then repeated

with the new first available machine for all the job strings until all

jobs have been scheduled.
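A simplified reading of SCHED1's penalty calculation might look like this (a Python sketch, not the dissertation's Fortran routine; the slot encoding of machines and the function name are assumptions):

```python
def sched1_penalty(string, avail, cost, slot0, N):
    """Penalty of deferring the string's first job from the first available
    machine (global slot index slot0) to the second (slot0 + 1).  Slot s lies
    in period s // N + 1 and is a period's first machine when s % N == 0."""
    penalty = 0
    for d, job in enumerate(string):
        s = slot0 + 1 + d                  # job's slot if the string is deferred
        period = s // N + 1
        if avail[job] > period:            # the feasible substring ends here
            break
        if s % N == 0:                     # job falls on a period's first machine
            if avail[job] == period:
                break                      # at its availability time: it and its
                                           # successors are unaffected
            penalty += cost[job]           # forced one period later
    return penalty
```

The driver then schedules, on the first available machine, the available first job whose string has the greatest penalty, and repeats.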

The second heuristic (called SCHED2) is an adaptation of Sidney's

algorithm [37] for the one-machine problem in which all jobs have the

same availability time. Each product is represented by a string of jobs

in series. Their order on the string defines the precedences between

them.

As an example, consider the strings in Figure 4.1. Each node is a

job. Jobs 1, 2, 4, 5, and 6 become available in period 1. Job 1 has

precedence over job 2 which means job 2 can be scheduled no earlier than

job 1. Each job string corresponds to a product. Thus, jobs 1, 2, and

3 produce one product and jobs 4 through 7 produce another.

SCHED2 generates all the feasible substrings for a given next

available machine. For each such substring the ratio of the cumulative

deferral costs to the number of member jobs is determined. The

feasible substring having the largest ratio is the candidate for

scheduling. If there is a tie between substrings of the same string,

the first substring calculated is used. In order to conform with Sidney's

tie breaking rule, all ties should go to the shortest substring.

After the final candidate substring has been selected, its jobs

are scheduled on the earliest available machines. This creates a new

next available machine. The jobs which have just been scheduled are

deleted from their job string. New feasible substring and ratio cal-

culations are then made to determine the next substring to be scheduled.

If no feasible substrings exist but some jobs remain to be scheduled,

the first machine in the next period becomes the next available machine.

This method is repeated until all the jobs have been scheduled.
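One selection step of SCHED2 can be sketched as follows (Python; the slot encoding and the resolution of ties across different strings are assumptions of this illustration):

```python
def sched2_select(strings, avail, cost, slot0, N):
    """Enumerate every feasible substring for the next available machine
    (slot0, with N machines per period) and return the one with the largest
    ratio of cumulative deferral cost to number of member jobs.  The strict
    '>' keeps the first (shortest) substring on ties within a string,
    matching Sidney's tie-breaking rule."""
    best, best_ratio = None, float("-inf")
    for string in strings:
        total = 0.0
        for m, job in enumerate(string, start=1):
            period = (slot0 + m - 1) // N + 1
            if avail[job] > period:        # the substring stops being feasible
                break
            total += cost[job]
            if total / m > best_ratio:
                best, best_ratio = string[:m], total / m
    return best
```

The selected substring is scheduled on the earliest available machines, its jobs are deleted from their string, and the selection is repeated from the new next available machine.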

4.3 A Multiple Interchange Method

After a schedule has been completed by SCHED1 or SCHED2, it may be

beneficial to try to improve the solution by interchanging two or more

jobs. In the present case, two-job, three-job, and four-job interchanges

are used. What follows is a discussion of the mechanics of applying

this concept to the problem at hand.

A two-way interchange of jobs kl and k2 in a schedule would replace

kl on its machine with job k2 and place job kl on the machine formerly

occupied by k2. An example of a two-way interchange is presented in

Figure 4.2B in which jobs 1 and 2 are interchanged. A three-way interchange

involves three jobs switching places in a schedule, as in

Figure 4.2A.

[Figure: three jobs occupying periods t, t+t1, and t+t2 of a schedule.]

Figure 4.2A. Standard 3-way interchange

[Figure: the same schedule after the pairwise interchange of jobs 1 and 2 between periods t and t+t1.]

Figure 4.2B. Standard 3-way interchange--first step

Multiple interchanges of jobs are used to improve the heuristic

solutions. For experimentation, the interchange method accommodates

two, three, and four-way interchanges.

The interchange method has the following form for a k-way inter-

change:

Step 1. Set i = 2 and make all i-way interchanges of jobs which

improve the solution.

Step 2. If i < k, set i = i+1. Otherwise, stop.

Step 3. If an i-way interchange which would improve the solution

exists, make it and go to Step 1. Otherwise, go to

Step 2.

By the time the interchange method has been completed, there are

no two-way through k-way interchange improvements to be made in the final

solution.
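The three steps above can be sketched as a driver loop (Python; the single-machine schedule encoding, the cost function, and the omission of precedence checks are assumptions of this sketch):

```python
from itertools import combinations, permutations

def schedule_value(sched, costs):
    """Deferral cost of a single-machine schedule: job in position t-1 is
    performed in period t."""
    return sum((t + 1) * costs[j] for t, j in enumerate(sched))

def find_improving(sched, costs, i):
    """Return an improving i-way interchange as (positions, permutation),
    or None if no i-way interchange improves the schedule."""
    base = schedule_value(sched, costs)
    for pos in combinations(range(len(sched)), i):
        for perm in permutations(pos):
            trial = sched[:]
            for src, dst in zip(pos, perm):
                trial[dst] = sched[src]    # job at src moves to dst
            if schedule_value(trial, costs) < base:
                return pos, perm
    return None

def multi_interchange(sched, costs, k):
    """Steps 1-3 of the k-way interchange method: exhaust 2-way moves,
    escalate the interchange order, and restart at 2-way after any
    improvement, stopping when no 2-way through k-way move helps."""
    i = 2
    while True:
        move = find_improving(sched, costs, i)
        if move is not None:
            pos, perm = move
            old = sched[:]
            for src, dst in zip(pos, perm):
                sched[dst] = old[src]
            i = 2                          # Step 3 returns to Step 1
        elif i < k:
            i += 1                         # Step 2 escalates the order
        else:
            return sched
```

This brute-force search is for illustration only; as the surrounding discussion explains, the dissertation's routines avoid it by detecting only the interchange cases that cannot be decomposed into cheaper pairwise moves.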

The form of a two-way interchange needs no further explanation.

The standard three-way interchange is demonstrated in Figure 4.2A. Job 1

replaces job 3 in period t+t2. Job 3 replaces job 2 in period t+tl which,

in turn, replaces job 1 in period t. The only other three-way inter-

change (a mirror image) occurs when job 1 replaces job 2 in period t+tl;

job 2 replaces job 3 in period t+t2; and job 3 replaces job 1 in period t.

The standard three-way interchange generates four cases which improve

the solution. Only a subset of those cases needs to be considered, as

is shown later. Of those four cases, three can be decomposed into

two-way interchanges which improve the solution at least as much. Thus,

only one case requires an actual three-way interchange be performed to

accomplish its goals. (As will be seen in later discussion, this can

result in computational savings.)

Case 1 involves either a precedence constraint between jobs 2

and 3 or a higher cost ci,k for job 2 than job 3. In this case both

jobs 2 and 3 have higher costs ci,k than job 1. Therefore, the inter-

change can be decomposed into the two-way interchange of Figure 4.2B,

followed by that of Figure 4.2C. Both two-way interchanges improve

the solution.

Case 2 involves no precedence constraint between jobs 2 and 3 and

a higher cost for job 3 than job 2. Since job 3 also has a higher cost

than job 1, the two-way interchange of Figure 4.2D would be more

advantageous and would save the inevitable interchange of jobs 2 and 3.

Case 3 involves a cost for job 3 no greater than that of job 1.

In this case, the two-way interchange of Figure 4.2B accomplishes the

improvement and saves a possible interchange of jobs 1 and 3.

Case 4 is the genuine three-way interchange of Figure 4.2A. It

requires a precedence constraint between jobs 2 and 3 and a cost no

higher for job 2 than job 1. It is this case and its mirror image

which the three-way interchange routine is programmed to detect.

Whereas there are only two standard three-way interchanges (including

mirror images), the four-way interchange has five standard forms

(including mirror images). Two of those and part of a third are,

unconditionally, sets of pairwise interchanges. Other parts of that

third form are a combination of a three-way and two-way interchange.

The result is that instead of programming five different searches, only

two full searches and a subset of a third are necessary. It appears

that these forms have the smallest number of potential interchanges.

The result of all this is that the computational expense which is

expected in combinatoric problems having this complexity can be

[Figure: the schedule after the second pairwise step, interchanging jobs 1 and 3 between periods t+t1 and t+t2.]

Figure 4.2C. Standard 3-way interchange--second step

[Figure: jobs 1 and 3 interchanged directly between periods t and t+t2.]

Figure 4.2D. Standard 2-way interchange

alleviated for these multiple interchange problems. With some insight

into the problem being solved and some analysis of the component parts

of the interchanges, the more complex interchanges can be decomposed into

lower order, profitable interchanges which are more easily detected and

performed.

Two heuristics and a multiple interchange method are described in

these sections. They are used to solve problem P4.1, which has the form

of a network problem with additional complicating constraints.

The following sections will cover an enumeration algorithm which

takes advantage of some useful properties of the problem and testing

of the heuristics. The enumeration algorithm is used in the testing

in order to compare the heuristics against optimality.

4.4 The Enumeration Algorithm

An enumeration routine is needed to test the accuracy and effi-

ciency of the heuristics. In order to make testing of a large number

of problems feasible, the enumeration algorithm must be as efficient

as possible.

The enumeration algorithm is developed in this section after the

presentation of some properties of the problem which are useful in

paring down the search tree. Separability and a relaxed version of the

problem which acts as a lower bound are presented first.

The precedence constraints used in the problem are of a looser

form than are normally seen. Rather than constraining job 2 to being

done after job 1, they cause job 2 to be done no earlier than job 1.

These looser precedences lead to a very useful property.

Property 4.1: A machine is empty in the heuristic solution iff it is

empty in the optimal solution.

Proof: The proof is by contradiction. It will be sufficient to look

at the earliest instance in which empty machines in the heuristic and

optimal solution do not coincide. Clearly, if no such earliest instance

occurs, the proof is complete. As a matter of terminology, for jobs 1

and 2 scheduled in the same period, job 1 is earlier than job 2 if it

is on a lower numbered machine.

Assume that the optimal solution has the earliest unmatched empty

machine. There must exist a job in the heuristic schedule which is no

later than the empty machine but is later than that machine in the

optimal schedule. Thus, its availability time and those of its prede-

cessors are no later than the period of the optimal solution's empty

machine. Consider the job's earliest predecessor which is scheduled

later than the empty machine in the optimal solution. This job could

be feasibly shifted into the empty machine resulting in an improvement of

the objective function. This contradicts the optimality of the solution,

and shows that the optimal solution will not have the earliest unmatched

empty machine.

Assume that the heuristic solution has the earliest unmatched empty

machine. By the very nature of the heuristic (either SCHED1 or SCHED2),

that machine would be full unless each job scheduled later than it is

not available until after that period. However, there must exist a job

in the optimal solution which is no later than the empty machine but is

later than that machine in the heuristic schedule. Thus, this job must

be available by the period of the empty machine. This contradicts this

solution being a product of the heuristic. Thus, the heuristic solution

will not have the earliest unmatched empty machine.

There can be no earliest unmatched empty machine.

Q.E.D.

Clearly, empty machines in the optimal solution delineate prob-

lem sections which could be solved independently. The problem is

separable. Property 4.1 shows that these empty machines and separable

sections can be identified by the heuristics.
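Given Property 4.1, a heuristic schedule can be split at its empty machines into independently solvable sections. A minimal Python sketch (the slot-list encoding, with None marking an empty machine, is an assumption of this illustration):

```python
def split_sections(slots):
    """Split a chronologically ordered list of machine slots into separable
    sections at empty machines; Property 4.1 guarantees that the same
    machines are empty in an optimal schedule."""
    sections, current = [], []
    for job in slots:
        if job is None:                # an empty machine ends a section
            if current:
                sections.append(current)
                current = []
        else:
            current.append(job)
    if current:
        sections.append(current)
    return sections
```

Each returned section can then be solved as a smaller instance of the problem.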

A separable section of the single-stage problem considered here

has a property which results from the precedence constraints. No job

can be scheduled on the first machine in its predecessor's availability

time period. This property would be redundant as a constraint. However,

consider it as a constraint, which creates a revised problem.

Clearly, the revised problem has the same optimal solution set as

the original problem. Thus, relaxing the precedence constraints but

maintaining the new constraint supplies a lower bound on the original

problem. To break ties, jobs with equal deferral costs are scheduled

by increasing job number. A desired characteristic of a lower bound is

that it also supplies a lower bound on the completion of any partial

optimal schedule.

In solving this relaxed, revised problem, the machines within a

time period are scheduled in the order of their increasing number.

The effect is that in period t the job on machine i has a deferral

cost no less than that of the job on machine i+l, unless it violates

the new constraint of the revised problem.
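The solution procedure just described can be sketched greedily (Python; the job encoding, with each job carrying its predecessor's availability time so the new constraint can be checked at a period's first machine, is an assumption of this sketch):

```python
def relaxed_revised_bound(jobs, N, H):
    """Solve the relaxed, revised problem: precedence constraints are dropped,
    but no job may occupy a period's first machine in its predecessor's
    availability period.  Machines within each period are filled in order of
    increasing number, always taking the costliest available job, with ties
    broken by increasing job number.
    jobs: {job_number: (deferral_cost, avail_time, pred_avail_or_None)}."""
    remaining = dict(jobs)
    bound, sched = 0.0, {}
    for slot in range(N * H):
        t = slot // N + 1
        first_machine = slot % N == 0
        candidates = [
            (-c, j) for j, (c, a, pa) in remaining.items()
            if a <= t and not (first_machine and pa == t)
        ]
        if not candidates:
            continue                       # machine stays empty
        _, j = min(candidates)             # max cost; ties to the lowest number
        c, _, _ = remaining.pop(j)
        sched[j] = t
        bound += t * c
    return bound, sched
```

The returned value serves as a lower bound on the original problem, since every feasible schedule for the original problem is feasible for the relaxed, revised one.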

Property 4.2: For any partially completed optimal schedule, the optimal

solution to the relaxed, revised problem provides a lower bound

on the value of the final portion of the optimal schedule.

Proof: In this proof an arbitrary machine and period will be chosen

in the optimal schedule. This machine will be considered the last

scheduled machine in the partially completed optimal schedule. In the

next step another schedule, based on the precedence constraint relaxation,

is developed. This second schedule consists of an optimal solution to

the relaxation of the partially completed optimal schedule and an optimal

solution to the relaxation of the remainder of the optimal schedule. By

a series of pairwise interchanges, the second schedule will be converted

to a schedule containing all jobs in the same sections as those in the

optimal solution to the relaxed, revised problem. It will be shown that

the portion of the optimal relaxed, revised solution which succeeds the

arbitrary machine chosen above provides a lower bound to the value of the

corresponding portion of the optimal schedule.

Consider an optimal schedule S to the problem. Next, arbitrarily

choose machine k in period t. Consider the jobs in the first t-1 periods

and the first k machines of period t of S and call that schedule S1. The

remainder of schedule S is called schedule S2. Solve S1 and S2 as relaxed,

revised problems having optimal schedules Sr1 and Sr2, respectively.

Clearly, Sr1 and Sr2 are lower bounds for S1 and S2, respectively.

Let Sr* be an optimal schedule for the relaxed, revised problem

corresponding to the whole schedule S. Sr1* and Sr2* are the two sections

of Sr* which cover the same periods and machines as Sr1 and Sr2, respectively.

If Sr1* and Sr1 contain the same jobs, then Sr1* = Sr1 and Sr2* = Sr2.

A property of Sr* can be easily seen. Let q1 be any job in Sr* and

let job q2 be any job in Sr* which is between q1 and the first machine

available to q1. The property is that the deferral cost of q1 is no

greater than that of q2. The same property holds if q1 and q2 are both

in Sr1* or Sr2*. The same property, clearly, holds for Sr1 and Sr2.

The next step is to convert Sr1 and Sr2 into schedules having the same

jobs as Sr1* and Sr2*, respectively. This is done by converting Sr1 into

Sr1*.

Find the earliest machine in Sr1 which contains a job different

from the one in that machine in Sr1*. Let that be machine k1 in period

t1. Let job j1 be in machine k1 in Sr1 and job j2 in Sr1*. Thus, j2

is later than j1 in Sr1 or is in Sr2. Clearly, the first machine

available to job j1 is no later than machine k1 in period t1. The same

is true for job j2. Since j2 is earlier than j1 in Sr*, j2 has a deferral

cost no less than that of j1.

If job j2 is in Sr1, then all the jobs, including j1, between it and

job j1 have at least as high a deferral cost as job j2. Thus, j1 and j2

have the same deferral cost. They can be interchanged in Sr1, forming a

new schedule, with no change in objective value; Sr2 is unchanged.

If, instead, job j2 is in Sr2, let it be in machine k2 in period t2.

Move job j2 forward into machine k1 in period t1. Since the cost of j2

is no less than that of j1, no job in Sr1 which is later than j1 and is

available at machine k1 in t1 is more costly than j2. Move job j1 later in

Sr1 until it reaches a job less costly than it. Replace that job with j1

and, in turn, move it later in Sr1 until a cheaper job is found. Continue

this string of exchanges until the moving job moves later than machine k

in period t (into Sr2). Call this job jn. Since job jn is less expensive

than j2, move job jn into machine k2 in period t2. The net result in Sr2

is that job jn replaces j2, causing a decrease in the value of Sr2. Make

a pairwise interchange between job jn and the next later job available at

the time and machine of jn until a cheaper job is reached. No such

interchange increases the value of Sr2. The new schedules created from