BATCH PRODUCTION SMOOTHING WITH
VARIABLE i i AND PRO(" i i ii.
By
i YAVUZ
A E. i i. i TION P F T) TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFTT T T
OF THE RF^UI:'T iTS FOR Tii DEC. OF
DOCTOR, OF PHILOSOPHY
UNIVEE ii OF FLORIDA
G17.
by
i\Iesut Yax uz
I dedicate this work to .: f, ., and :. beloved girlfriend Deniz Kazanci.
AC T NOWLEDC. i TS
I sincerely thank Suleyman T.::' : for u' ": i i: ,: :  : advisor. Looking
back, I could not have asked i : a better person to be : thesis advisor than him.
He has been a great mentor 'n my .: .: career in academia, and T thank him for
all of the i. ;.::. we have shared. He :ii. i me the space to think creatively,
and he was always there when I needed .. "L. and guidance on both academic and
.: : matters. His fatherly and :: :: ,. attitude has made him very
special for me. I hope we will maintain this sincere rc 1.'^'.^ for a lifetime.
I wish to :... :.. .. :i ::. to the members of i:i : "r~.ry committee,
Elif Akcali, Panos Pardalos, Joseph Geunes and Haldun Aytug, for their assistance
and guidance. I esp : : iy thank Elif Ai :i, for her guidance in my work related to
metaheuristics.
Deniz :,.. deserves the warmest thanks for being my ;1:.. ..... i and
S. sorting me continuously <:::::, the i::: year of :: studies. Without her love
and support, I would not have been able to finish my work on time. At the times
I felt stressed out and :tc, she was always there to ': 1i me, to motivate
me and to get me on track. Also, I thank my parents and my dear friend Umrut
Inan for ::: ...:' :: me constantly. T: .::)rt :: 1 .: 1 me in overcoming the
difficulties of studying overseas and fc..': on my research, especially in the first
two years.
TABLE OF CONTENTS
page
ACKNOWLEDGMENTS ................... ....... iv
LIST OF TABLES ...................... ......... ix
LIST OF FIGURES ..................... ......... xii
ABSTRACT .. .. ... .. .. .. .. ... .. .. .. ... .. .. .. .. xv
CHAPTER
1 INTRODUCTION .................... ..... 1
1.1 Batching Decisions in Production ......... ......... 3
1.2 The Toyota Production System ................... 8
1.3 Production Smoothing .......... .............. 12
1.3.1 Demand Stabilization ................. .... 14
1.3.2 Batch Method for Production Smoothing .......... 18
1.3.3 Heijunka ........... . . .... 19
1.4 Toyota's Way for Production Smoothing . . 19
1.5 Manufacturing Environment Types .... . . 22
1.6 Contribution of the Dissertation .................. .. 25
2 SINGLEMACHINE SINGLELEVEL MODEL . . 29
2.1 Literature Review .................. ........ .. 31
2.2 2nd Phase Formulation .................. ... .. 36
2.3 Exact Methods for the 2nd phase problem . . ... 38
2.4 Problem Specific Heuristics for the 2nd Phase Problem ...... 44
2.5 MetaHeuristics for the 2nd Phase Problem . . ... 51
2.6 1st Phase Problem Formulation .................. .. 57
2.7 Structural Properties of the 1st Phase Problem . .... 62
2.8 Exact Methods for the 1st Phase Problem . . ... 66
2.8.1 Dynamic Programming Formulation . . ... 67
2.8.2 Bounding Strategies ................. . .. 71
2.8.3 Numerical Example . . ... .... 73
2.9 Problem Specific Heuristics for the 1st Phase Problem . 76
2.10 MetaHeuristics for the 1st Phase Problem . . ... 82
2.10.1 Neighborhood Structure ....... . . .. 83
2.10.2 Strategic Oscillation ................. . .. 84
2.10.3 Scatter Search and Path Relinking . . .. 85
2.11 Comparative Study ............... ..... .. 91
2.11.1 Design of Experiments ............. .. .. 91
2.11.2 Methods ............... ....... .. 92
2.11.3 Results and Discussion .... . . 109
3 FLOWSHOP SINGLELEVEL MODEL ..... . . 112
3.1 1st Phase Problem Formulation .... . . 114
3.2 Structural Properties of the 1st Phase Problem . .... 116
3.3 Exact Methods for the 1st Phase Problem . . ... 118
3.3.1 Dynamic Programming Formulation . . .... 119
3.3.2 Bounding Strategies . . ....... .. 123
3.4 Problem Specific Heuristics for the 1st Phase Problem . 124
3.5 MetaHeuristics for the 1st Phase Problem . . .... 130
3.5.1 Neighborhood Structure .... . . .. 130
3.5.2 Path Relinking .................. ... 131
3.6 Comparative Study .................. .. ..... .. 134
3.6.1 Design of Experiments ..... . . .. 134
3.6.2 Methods ....... ...... .............. 135
3.6.3 Results and Discussion ..... . . .. 138
4 SINGLEMACHINE MULTILEVEL MODEL . . 141
4.1 Literature Review .................. ....... .. 143
4.2 2nd Phase Formulation .................. ..... .. 149
4.3 Exact Methods for the 2nd phase problem . . .... 153
4.4 Problem Specific Heuristics for the 2nd Phase Problem ....... 155
4.5 MetaHeuristics for the 2nd Phase Problem . . .... 155
4.6 1st Phase Problem Formulation ..... . . 157
4.7 Structural Properties of the 1st Phase Problem . .... 160
4.8 Exact Methods for the 1st Phase Problem . . .... 160
4.8.1 Dynamic Programming Formulation . . ... 161
4.8.2 Bounding Strategies . . ....... .. 163
4.9 Problem Specific Heuristics for the 1st Phase Problem . 165
4.10 MetaHeuristics for the 1st Phase Problem . . .... 171
4.10.1 Neighborhood Structure .... . . .. 171
4.10.2 Path Relinking .................. .... 172
4.11 Comparative Study .................. .. .... .. .. 175
4.11.1 Research Questions ................... ... 175
4.11.2 Design of Experiments ..... . . .. 175
4.11.3 M ethods .................. ....... .. .. 177
4.11.4 Results and Discussion ..... . . .. 181
5 FLOWSHOP MULTILEVEL MODEL ...........
5.1 1st Phase Problem Formulation ....... .......... 193
5.2 Structural Properties of the 1st Phase Problem . .... 195
5.3 Exact Methods for the 1st Phase Problem . . .... 196
5.3.1 Dynamic Programming Formulation . . .... 196
5.3.2 Bounding Strategies . . ....... .. 201
5.4 Problem Specific Heuristics for the 1st Phase Problem . 202
5.5 MetaHeuristics for the 1st Phase Problem . . .... 208
5.5.1 Neighborhood Structure .... . . ... 208
5.5.2 Path Relinking .................. ... 208
5.6 Comparative Study .................. .. .... .. .. 212
5.6.1 Research Questions ................... .. 212
5.6.2 Design of Experiments ..... . . ... 213
5.6.3 Methods ............... ... .. ... .... 214
5.6.4 Results and Discussion ..... . . .. 217
6 SUMMARY AND CONCLUSIONS ............. .. .. ..221
REFERENCES ........... .. .. .......... .. .... 225
APPENDIX
A DERIVING OBJECTIVES FROM GOALS FOR THE SINGLELEVEL
M ODELS ....... ... .......... .......... ......... 232
A.1 Deriving F1 From G1 .................. .... 232
A.2 Exploiting G2 .................. .......... .. 234
A.3 Exploiting G3 .................. .......... .. 234
A.4 Exploiting G4 .................. .......... .. 235
A.5 Exploiting G5 .................. .......... .. 235
A.6 Exploiting G6 .................. .......... .. 235
B LOWER BOUND FOR THE FUTURE PATH IN THE DP FORMU
LATION OF THE SINGLELEVEL MODELS . . .. 237
C FINE TUNING THE METAHEURISTIC METHODS FOR THE SMSL
MODEL ................... ... ......... 240
C.1 Strategic Oscillation .................. .... .. 240
C.2 Scatter Search .................. .......... .. 243
C.3 Path Relinking .................. ......... .. 248
D FINE TUNING THE PATH RELINKING METHOD IN THE FSSL
MODEL ................... ... ......... 251
E DERIVING OBJECTIVES FROM GOALS FOR THE MULTILEVEL
M ODELS ........... ............ ............ 253
F LOWER BOUND FOR THE FUTURE PATH IN THE DP FORMU
LATION OF THE MULTILEVEL MODELS ....... ....... 255
G FINE TUNING THE PATH RELINKING METHOD IN THE SMML
M ODEL ................... ....... ....... 258
H FINE TUNING THE PATH RELINKING METHOD IN THE FSMIL
M ODEL ................... ....... ....... 260
BIOGRAPHICAL SKETCH ................... ....... 261
LIST OF TABLES
Table page
21 Example for Algorithm Nearest Point ................ ..40
22 Sequence Found by Algorithm Nearest Point . . .... 47
23 Sequence Found by Algorithm MH1 ................ . 48
24 Sequence Found by Algorithm MH2 ................ 50
25 Sequence Found by Algorithm Twostage ............... ..52
26 Correlations Between Alternative 1st Phase Objectives and Z* . 61
27 Summary of the Fine Tuning Process for the SO Method ...... ..106
28 Summary of the Fine Tuning Process for the SS Method . ... 107
29 Summary of the Fine Tuning Process for the PR Method ...... ..108
210 Summary of Results .................. ......... .. 110
31 Summary of the Fine Tuning Process for the PR Method ...... ..138
32 Summary of Results .................. ......... .. 139
41 Summary of the Fine Tuning Process for the PR Method ...... ..181
42 Summary of Results .................. ......... .. 182
43 Summary of Supermarket Inventory Levels . . ..... 183
51 Summary of the Fine Tuning Process for the PR Method ...... ..217
52 Summary of Results .................. ......... .. 218
53 Summary of Supermarket Inventory Levels . . ..... 219
61 Problem Complexities and WorstCase Time Complexities of the Ex
act Solution Methods .................. ....... .. 224
62 Average Performance of our Heuristic Methods on the 1st Phase Prob
lem ................... .. .... ........... 224
C1 Analysis of the Parameters R,,i/.1 and Iterative for the SO Method 240
C2 ttest Results of the Parameters R.,,,I./. and Iterative for the SO Method241
C3 Analysis of the Parameters Maxlters, NFM and NIM for the SO
M ethod . . . . . . . .. 241
C4 ttest Results of the Parameters Maxlters, NFM and NIM for the
SO Method ....... ........ .......... ...... 242
C5 Analysis of the Parameters NFM and Relativelmprovement for the
SO Method ....... ........ .......... ...... 242
C6 ttest Results of the Parameters NFM and Relativelmprovement for
the SO Method. .................. ........... .. 242
C7 Analysis of the Parameters PSHMethods and Diversification for
the SS M ethod .. .. .. .. ... .. .. .. ... .. .. ... .. 243
C8 ttest Results of the Parameters PSHMethods and Diversification
for the SS Method .................. .......... .. 244
C9 Analysis of the Parameters LSinPreProcess and LStoRefSetPP for
the SS M ethod .. .. .. .. ... .. .. .. ... .. .. ... .. 244
ClOttest Results of the Parameters LSinPreProcess and LStoRefSetPP
for the SS M ethod .................. ........ .. .. 244
C11Analysis of the Parameters LSinIterations and LStoRefSetlters for
the SS Method .................. ............ .. 245
C12ttest Results of the Parameters LSinIterations and LStoRefSetlters
for the SS M ethod .................. ........ .. .. 245
C13Analysis of the Parameters SubsetSize and NIC for the SS Method 246
C14Analysis of the Parameter NEC for the SS Method . ... 246
C15ttest Results of the Parameter NEC for the SS Method . ... 247
C16Analysis of the Parameter b for the SS Method . . ... 247
C17ttest Results of the Parameter b for the SS Method . ... 247
C18Analysis of the Parameters LSinIterations and LStoRefSetlters for
the PR M ethod .................. ......... .. .. 248
C19ttest Results of the Parameters LSinIterations and LStoRefSetlters
for the PR Method .................. ......... .. 248
C20Analysis of the Parameters b and NTS for the PR Method . 249
C21ttest Results of the Parameter NTS for the PR Method ...... ..249
C22Extended Analysis of the Parameter b for the PR Method ...... 250
C23ttest Results of the Parameter b for the PR Method . ... 250
D1 Analysis of the Parameters b and NTS for the PR Method . 251
D2 ttest Results of the Parameters b and NTS for the PR Method .. 252
G1 Analysis of the Parameters b and NTS for the PR Method . 258
G2 ttest Results of the Parameters b and NTS for the PR Method .. 258
G3 Analysis of the Parameters b, NTS and PSHMethods for the PR
M ethod . . . . . .. .. ... 259
G4 ttest Results of the Parameters b and NTS for the PR Method .. 259
H1 Analysis of the Parameters b and NTS for the PR Method . 260
H2 ttest Results of the Parameters b and NTS for the PR Method . 260
Figure
11
LIST OF FIGURES
Automotive Pressure Hose Manufacturing Process Outline .
12 RiverInventory Analogy .. . ...................
13 Demand Stabilization Over Time .. ...............
14 Effect of Production Smoothing on Inventory Level .. .......
15 Ideal and Actual Consumptions .. ................
21 Pseudocode for Algorithm Nearest Point .. ...........
22 Pseudocode for Algorithm Modified Nearest Point .. .......
23 Graph Representation of the Assignment Problem Formulation .
24 Pseudocode for Algorithm MH1 .. . ................
25 Pseudocode for Algorithm MH2 .. .................
26 Pseudocode for Algorithm Twostage .. ..............
27 Pseudocode for Algorithm Find Acceptable Values .. .......
28 Network Representation of the Problem .. .............
29 Pseudocode for Algorithm Forward Recursion .....
210 Pseudocode for Algorithm Solve with DP .. ...........
211 Pseudocode for Algorithm Solve with BDP ......
212 Pseudocode for Algorithm Bounded Forward Recursion .. .....
213 Network Representation of the Example .......
214 DP Solution to the Numerical Example .. .............
Pseudocode
Pseudocode
for Algorithm
for Algorithm
NE Feasible Solution
Parametric Heuristic
217 Examples for Feasible Regions
lation .. ...
that Can Benefit from Strategic
page
2
Search
Search
Oscil
218 Pseudocode for Algorithm SO ............ ... . .. 104
219 Example for SS and PR Methods .... . . .. 104
220 Pseudocode for Algorithm SS/PR .... . . .. 105
31 Network Representation of the Problem . . ...... 120
32 Pseudocode for Algorithm Forward Recursion . . .... 121
33 Pseudocode for Algorithm Solve with DP . . . 122
34 Pseudocode for Algorithm Solve with BDP . . .. 125
35 Pseudocode for Algorithm Bounded Forward Recursion ...... ..126
36 Pseudocode for Algorithm NE Feasible Solution Search . 127
37 Pseudocode for Algorithm Parametric Heuristic Search . 129
38 Pseudocode for Algorithm PR ............ .... ... .. 132
41 Pseudocode for Algorithm One Stage . . ...... 185
42 Network Representation of the Problem . . ...... 186
43 Pseudocode for Algorithm Forward Recursion . . .... 187
44 Pseudocode for Algorithm Solve with DP . . . 187
45 Pseudocode for Algorithm Solve with BDP . . .. 188
46 Pseudocode for Algorithm Bounded Forward Recursion ...... ..189
47 Pseudocode for Algorithm NE Feasible Solution Search . 189
48 Pseudocode for Algorithm Parametric Heuristic Search . 190
49 Pseudocode for Algorithm PR ............ .... . .. 191
51 Network Representation of the Problem . . ...... 197
52 Pseudocode for Algorithm Forward Recursion . . .... 198
53 Pseudocode for Algorithm Solve with DP . . . 200
54 Pseudocode for Algorithm Solve with BDP . . .. 203
55 Pseudocode for Algorithm Bounded Forward Recursion ...... ..204
56 Pseudocode for Algorithm NE Feasible Solution Search . 205
57 Pseudocode for Algorithm Parametric Heuristic Search . 207
58 Pseudocode for Algorithm PR ................... ... 209
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial l ..:i.:i:::. :: of the
R .::= :..... :! for the Degree of Doctor of pl. 1... ....y
BAT( i PRODUCTION i)0 ii G. .
VARIABLE SETUP AD' PROC .. .iNG TI 
By
Mesut Yavuz
S : Su : T ::
... PD. tment: Industrial and Systems i .....e ering
Many .:::. .::' use mixedmodel : duction systems, :::: .: under the
justintime (JIT) i.?.:. .*y, in order to. ..... iy meet customer demands for
a variety of :: :: ts. Such systems : *ce demand be stable and production
sequence be leveled. Ti.. production .......i.:.. !...iblem (i i) aims to find level
schedules at the il:: : level of a multilevel manufacturing system. The products
in a level schedule are 1: .. .1 over the horizon as uniformly as possible. In
this area, most research has focused on *: ::. :' JIT mixedmodel assembly
where r ::, and changeover times are assumed :.. 1l :1.o. However, in many
Sw lines in the real life, a : :c. .. amount of time needs to be dedicated to
!::: /changeover among 0:: : .: products. T :.i..'e, for such systems the
S: '. 1:' ature falls short of helping to smooth production.
We consider two alternative i:: :::: : :::::::; environments, a 1: :' machine
or a ..... at each level of the r.. '..... system; and study both single
level and multilevel versions of the PSP. We allow the products to have arbitrary
nonzero prc ... and setup time requirements on the machines, where the total
productive time is :: ::ied. Here, one must decide on batch sizes and number of
batches for each product, before ... :. the batches. We .1. .1..: a two ,:'
solution '.:i :. : 1 that is "i'i I. ..T1 on all four models. The :: phase ::
...:... .' batch sizes for the products and the second phase finds a level sequence
of the batches of products. '.'. relate the second phase problem to the : :
solution methods available in the literature, and focus on the :: : 1.1 .:.." 1
.. build an optimization model for the first phase problem; show that it is
"Pcomplete; devise heuristic methods for its solution; implement metaheuristic
techniques; and i" .1. exact solution ...... .'...'es based on dynamic i jamming
(DP) and branchandbound (B&B) methods. Ti:: : ': ::: ::r :: :: experi
ments, we compare the .. .... .... of our solution methods. The results show that
our exact methods are :.: ::i in solving mediumsized instances of the problem.
Also, our metaheuristic implementations yield near .:.. 1 solutions in almost
realtime.
CHAPTER 1
INTRODUCTION
This dissertation aims to solve a production smoothing problem (PSP) en
countered in mixedmodel manufacturing environments which are operated under
the justintime (JIT) philosophy, where different products may have arbitrary
setup and processing times. Mixedmodel manufacturing systems are widely used
by manufacturers, in order to meet customer demands efficiently for a variety of
products. Operating such mixedmodel systems with JIT principles requires de
mand to be stable and production sequence to be leveled. The PSP aims at finding
smoothed or leveled production schedules. A majority of the current literature
focuses on special systems, such as the Toyota's production system, which is a
synchronized assembly line. However, in reallife, there exist many mixedmodel
manufacturing systems which are far from being synchronized assembly lines
or even assembly lines. Therefore, the wellestablished literature of production
smoothing is not directly applicable in these systems. We present such an example
from the automotive industry.
A leading automotive pressure hose manufacturer runs a production facility
in Ocala, Florida, to produce various types of pressure hoses for the automotive
industry. Production of these hoses is achieved through a sixstage process, where
the first three stages are heavier processes and are operated in large batchsizes,
whereas the latter three stages are more productspecific operations and are
operated in smaller batchsizes. Also, the latter stages are separated from the
former ones by an inventory of partiallyprocessed goods. The outline of the system
is depicted in Figure 11.
i ,ure 1 1: Automotive Pressure Hose I ::: ::::.:: Process Outline
The latter three I : constitute a flowline. However, the mold operation
uses the vi.'...: :.. oven, which is a batch processor. Moreover, the setup and
Time : :: : of the : ducts Ii : both .::::::the stages and
among the products. .. *...e, small inventory .:,*. are created between the
consecutive .
The described reallife :. is far from 1 :: a :. :onized assembly line.
S. the operation of the system under the JTT p1'1 ... 1 necessitates a new
Seduction smic i: method. In this case, our method : ses focusing on the
assembly stage (which can be seen as a :.. .rmachine without loss of generality)
and establishing the 1 :' 1: policy and the schedule with respect to this stage.
T, employment of a pulltype shop H ..... control .....1 the schedule set for the
assembly stage to the entire ; ::: Also, since we are focusing on the bottleneck
of the system, the other stages can be subordinated with relative ease.
The above .:::: states a need for alternative JIT : : duction
methods which can be used in complex manufacturing systems, and motivates
our study. To assure a clear understand. of our research, we proceed with each
to a decrease in customer satisfaction. Larger batches further exacerbate q(i.lil v
related problems.
High inventory levels in a manufacturing system may hide some problems
existing in the system. A popular analogy is to compare a production process
with a river and the level of inventory with the water level in the river (see Figure
12). When the water level is high, the water will cover the rocks. Likewise, when
inventory levels are high, problems are masked. However, when the level (i.e., the
inventory) is low, the rocks (i.e., the problems) are evident. The healthy approach
is not to use high inventory levels to avoid problems, but to solve root problems
and later decrease the inventory levels.
Work flow
Inventory
level
Quay Lead Setup Machine Product
times times types design
Figure 12: RiverInventory Analogy
High inventory levels cause waste for the manufacturer. The reasons for this
include high carrying costs and long lead times. One can extend the disadvantages
of inventories, but just like setups, they are unavoidable. In other words, unless
one reduces setup times and costs to a negligible level, the capacity constraint
will not allow the ideal onepieceflow, in which all the products are manufactured
and carried in batches of one. This brings a tradeoff problem between setups and
inventories. This tradeoff has captured a lot of academic interest, under the broad
of lotsizing.
Research on lotsizing traces back to the classical economic order quantity
(EOQ) model of : .... (1913). Harris derived the optimal order quantity a
:: 1 item, considering a stable demand and (1 a inventory ..: costs.
Later, variations of the EOQ model have been developed : determining economic
:::: : : :: ::. : :' i ( .::: : : :i ). D ue to single item :: :: : I : ',cd
manufacturing ... *itions, optimal solution calculation is an easy task.
A variation of the classical EOQ problem is the economic lot :* :::::;; prob
lem (T I w') (1 .. .by, ) which takes capacity constraints into account
in a multiitem :: ::: ': ': environment., i too assumes a static, known
demand and an :::.:: .:. :. horizon. An :. ..tant variant of the ELSP is the
common 7 :. ::. 1:,,:: problem (C( '') (iL :i1 : r and i :::.i. :, 1.: ) In
the CCSP. the i : .:.'. horizon is ( : i i into fixed th periods ( 7 ) and
the products are : ed to these cycles. Some products may be produced once in
several cycles, whereas some others : : be scheduled several batches in a cycle.
An important variation of the EOQ problem is the so called '.'. .. iWhitin
problems (WW) ('.': ::. :. and Whitin, 1'..S). Ti:. \W ." problem assumes a finite
S... :.. horizon divided into several discrete .. .... s with demands.
However, the WW model ignores capacity constraints, is concerned with . i :
.. ''item structures, and hence is not applicable in many reallife situations.
A more comprehensive model can include capacity constraints in a multiitem
environment, considering .1 :. .. 1.: demand for several discrete time .. : ,ds.
An extension to the '.: .'. model i : i :i : v(y constraints is the capac
itated lot : ..." 1.:: (C: i') (Barany, van R.. and W 1 A, i'~ i) In the
Cl i', one decides what to produce in each period. Several items can be produced
in the same period, if the capacity allows. Ti .. items can be ... duced in advance
in order to save setup costs o' ::: the capacity constraints.
A variant of the CLSP is the d: lot 1 :: and scheduling problem
(Di :') (Salomon, Kroon, Kuik and .'. ....ve, 1991). The Di :' divides each
S: ,d into several micro i* '1.' and assumes allornothing :.:.. uct ion for these
micro periods. A product can be produced during several consecutive ods with
only one setup.
TI.. allornothing assumption of the DLSP is found restrictive and a more
realistic problem i ::::::::: :: (the continuous setup lot p 1::". problem, I
(Karmarkar and Schrage, 1985)) has been defined by relaxing this ... tion. At
I: : the i:i.: ::* between the DLSP and the < iP looks : but it
becomes important when one schedules a product for two distinct .. )(i>ds and no
other i: .duct is scheduled in between these two periods. In this case, the setup
cost is incurred twice in the DLSP, but only once in the CSLP.
The .". *' drawback of the C. i is that if the capacity of a period is not used
: iy for the ...duct scheduled in that 0d, then the remaining i... y y is 1i
unused. In the proportional lotsizing and ..1.'. .':.. problem (PLSP) (Drexl and
HIaase, 1 '.) two distinct products can be scheduled for .: : :I :1, so that the
capacity is fully utilized. In the PLSP, only one setup per period is allowed.
Although the allornothing : ::::: : :: (introduced with the D:i i) provides
computational ....... it is impractical for real world problems, since the number
of i : s ds : : easily become r' ::' :. : large. As a solution to this trade i
between ..:1 :I with : or many : .ds, the general lot sizing and scheduling
problem (( i P) (i ::::.::: and Meyr, 1'c:) has been developed. In the (i '', a
user i :: parameter restricts the number of lots ( ::. ,) : .: );d.
Among the discretetime lotsizing models = '1 is known as a bigbucket
model and the rest (DLSP through GSLP above) are smallbucket models. A
combination of the 1.' and smallbucket models is the socalled capacitated lot
I ::problem with ]I::i .1 lot sizes (CLSLP) (Suerie and Stadtler, ::::3). i
S.ducts can be .. lduced in a = .d (which is the y. .... : y of bigbucket models)
and a setup state can be carried over .... one ( :od to the next (which is a
:... y seen in :: .' bucket models).
Among the problems mentioned so i only EOQ and i : are based on
continuous time : ::::: ition. All the other problems have a discrete time structure.
An important variant of the ELSP is the socalled watching and :" .
problem (B: : ) (Jordan, :: ). In the i '', dynamic demand is allowed but each
order is characterized by its ,1. ..i":.. and size; in other words, orders are taken and
processed as jobs on a single machine. One : ::. ::::. : i : : ::: : of the i' is
that the ....1 of tie machine does not i. i :..1 on time; thus ... : times of
the i 1 do notI : : on the schedule. Another important :::: i on is that jobs
are not .i. wed to 1: but jobs can be combined to avoid setups.
The follow: .':' can be I to for further r; .'; on the topic.
Drexl and Kimms (.'" .) provide an extensive literature review on lot sizing and
scheduling problems. Potts and van Wassenhove (1':: ) point to the importance of
i::, .: :'::: scheduling methods with 1 :I 7:::: and lot ":: methods and r:: t on
tie complexities of the ; 1...'..
Unfortunately all the literature mentioned above is related to tr,: i :
manufacturing methods and MRP. Each problem tries to calculate optimal batch
sizes in order to minimize the total cost. These problems are i:: : based on the
existing :. .. of the m ;i :. :r, : := 1 .. under consideration. T:... :.. 'e,
none of them can create a breal I :: .:, 1: solution. Their contribution is :::: i to
( : :. ::.; the MIRP literature to some new .. .: ::. ::: :: ,; systems.
the simplest form of the EOQ model gives us a direction. Tf K is the
:: (ordering) cost, D is demand .. : period, h :,..i ::.; cost per unit per :1.d
Production ... ( :onden, i ::), is recognized as a JIT classic. Dr. Monden
gained valuable knowledge and experience from his research and related activities
in the T . ... automobile in:: i ie has also made valuable contributions to
the soi i i T Aota " 1': '" i ....:....:t this section, we will mainly
be referring to these two books in order to :.1 .:: the T ... production .
(TPS),and we be using the words JTT and a .1 nonymously.
In his book (Monden, 1 '. p.1) Monden wrote:
Ti:. T,.. 1 .. : duction ; ::: is a viable method i .7 :: i. .ducts
because it is an : : ve tool for producing the ultimate ..p... ..:. To
achieve this t:.:::... .. the primary goal of the Ti.. ..d.uction ; .
is cost reduction, or improvement of productivity. : '.. reduction and
.:idu. I improvement are attained th ..:: 1. the elimination of
various wastes such as excessive ::: ::' : and excessive work force.
T.. treats waste elimination as a : .I. / for continuous: :::
and the concept of waste is very broad. '. sources in manufacturing ... vary
within and across : .: 1: : but the ) ii wing (first described by Ohno) are the
most common ones:
Waste frornm : auctionn :i
Waste in transportation
WVaste : ::: inventory
Waste '...... overproduction
"' of : time
Waste of :
'.' .. *: of motion
The 1 elements in ::.:::: t:: the i .. : i .1 waste are included in JIT.
E'' .. t .: authors have emphasized l' :i ... : tools of JIT; some ? .. >ioned the
Kanban system while the others favored setup time reduction ( i )) or produc
tion r.......i T. following tools are described in the TPS.
Kanban system
Production ...... .
Capacity i
S.. :: reduction
:' ... raining and plant 1 ut
Total : :. :: .. .
The essence of i .an is its being a pull system, in contrast to traditional
1: systems such as MRP. In a :: 1: system one forecasts the demand and then,
.: with the final e, i...... .. .. in detail. T
decision is materials release for each operation. Due to uncertainties in the i::
these strict :.7 .. are likely to fail or become obsolete in a short time. To overcome
problems and keep the : :::: : 1::::::; system running, ::: :: : generally choose
S high inventory levels, which defeats the .==::~ose of. := : .. production in
the:: :: .
On the other hand, in a pull system, one forecasts the demand and 1 :. only
the i stage. A production unit (a machine, a cell, a line, etc.) produces the
.:'ts demanded 1. the successor unit. Ti: : if no demand comes for a i: :.f.
Then no extra units are ;....duced. This way, production systems can respond
to demand changes much quicker than a :: I: system, and no extra inventories are
created.
T ta implements the Kanban I. .:: : i y, but it .:. not be the
best .. of contrc.11:. .. .. : .. *; operations. In the western world some new
.:i i::'' like i i.i, and Spearman's (2: :::) Con'. : which stands for constant
work :. :'ocess, and Goldratt's (1 ) TOC, which stands for the theory of
constraints, have been developed and successfully implemented.
Many scholars in the area have misintl :: \.\. .1 the JIT 1 :.: 1 narrowing
the ideal of the system to inventories." This misinterpretation probably stems
: Robert Hall. In his book (Hall, '""., p.3), Hall wrote:
Zero Inventories connotes to a level of i : actionn not ever attainable in
a production process. However, the ....... t of a : i. level of excellence
is important because it stimulates a :: i : constant improvement
through ... :ve attention to both the overall task and to the
minute details.
As Hall .. :. zero invrc'. . ideal is not achievable. However, in order
to stimulate continuous improvement in a ) : : ::: : :::::: system, this ideal 1 
an r:.. .'tant role. Edwards (1 .) pushed the use of absolute ideals to its
SI :::., the .: of JIT in terms of the seven zeros, which are required to
achieve zero inventories. Ti.. are = below:
.'o defects
Zero (excess) lot size
Zero .: .
Zero breakdowns
Zero ..
Zero lead time
Zero sur; :.
Obviously, the seven zeros are no more achievable in practice than is zero
inventory. Zero lead time with no inventory iy means instantaneous ...duc
tion, which is .1: : .:iiy ::: possible. i i :: :ose of such goals, .: .... .1:: : to the
JIT ...: ..::. who make use of them, is to inspire an environment of continuous
improvement. No matter how a .... .. :....: system is running, there is
:1". : room for : :.. .. :
JIT is more than a system of frequent materials delivery or the use of kanban
to control work releases. At the heart of the ::: ::::: ::: :; systems developed
by Toyota and other Japanese firms is a careful .. :..... of the production
environment. Ohno ('": p.11) was r clear about this:
Production equals demand
The first : ::.: ::: :I pertains to product demand, something over which a
company might have control, but the other two are the things that a company
can and must control. The first requirement clearly pinpoints the *1: ':
F.. .......... or : :. : ana1 I. we c :: three production : .1 .1
maketostock ( ), assembletoorder (ATO), and maketoorder ( : 0).
MTS : ; make .: ducts in :::' i : :: of demand. A. : i::::: to
demand forecast, the length of the i. .. horizon (for which a stable demand
will be assumed) and the demand level for the horizon for each product can be
determined simultaneously. This ,. 1. .. ... (demand stabilization) is defined in
subsection 1.3.1.
ATO ..... .: i:. iduce sli ...:. according to forecasts, then combine
these s : :::1.1.i : in .::: : : combinations as requested customers. A 1.:
variety of different products can be produced ..: ... : i : .. :. combinations
of relatively kinds of subassemblies. For example, if a company produces 3
: i. .::. that into a .:.. 7::. I and each subassembly has 5 I :. :: models,
than the number of the ....'. combinations is 5 125. Computers and cars are
S::y :.1 this i.i:.. : y. In ATO ..:: panies production smoothing can
be implemented by 1.::.. focus from the final assembly level to the subassemblies
level. For each subassembly, demand stabilization should be done iately.
MTO companies : ...duce products in r :..... to actual customer orders
and hence are the most : i : ;, class for stabilizing the demand for production
smoothing. Because of the large number of possible : .:. ducts and possible small
demand for each i : duct, it is impossible to forecast demand for products. In
the best case, a company :. .; :...::. [:.)ducts according to subassembly and
component requirements, and create product i ..... i ... the problem becomes
somewhat similar to the i.:.,blem of ATOs. If the company cannot create i...duct
families, or the number of families created is large, the only option may be to deal
with the firm (future and backlogged) demand. Since the company will try to
minimize a measure of tardiness or completion times of the orders, the problem
here is a classical scheduling problem; production smoothing is not the primary
concern. Even if it were, due to the uncertainty of the future demands, it would be
inapplicable.
In the next subsection we explain a technique which stabilizes the demand for
a single product over a finite horizon. The length of the horizon is also an output
of the technique.
1.3.1 Demand Stabilization
If a company could have a stable demand over a very long horizon, e.g., five
years, this would make smoothing and planning the production system much easier.
However, most companies are likely to face fluctuating demand with seasonal
effects. Then, in order to meet the prerequisite condition of production smoothing,
the company should define a planning horizon with a stable, continuous demand,
based on demand forecast.
If the planning horizon is too long, then the reliability of the forecast is low,
and the plan may need to be modified before the end of the horizon. On the other
hand, if it is too short, then the purpose of production smoothing is lost. We
demonstrate the effect of choosing the appropriate length for the planning horizon
in Figure 13.
For the demand pattern in Figure 13, the 2year period is inappropriate for
several reasons. First, since the level of demand for the first year is mostly higher
than the production level, an initial stock of inventory must be available to meet
the demand, and the inventory investment for this first year will be high. Second,
the production level throughout the second year exceeds demand, yet the demand
seems to be decreasing. As a result we will most likely have a large stock of
Demand
6month plans
2year plan
Year 1 Year 2 Time
iure 13: Demand Stabilization Over Time
inventory at hand at the end of the second year. i ... i, since the demand pattern
:.::': the first :'s i.:duction rate using the data for both years
is risky. In this example a series of 6month plans looks more reasonable. After the
second J : 1.:t, we need to replan with a new demand forecast.
TI. level of I ... auction cannot be chosen simply as the average demand over
the planning horizon. The level chosen must account : preexisting stock and be
able to satisfy periods of peak demand. Also the length of the horizon should not
cause unacceptably 1 1: inventory accumulations throughout the period.
Although the intent of production leveling is to keep production uniform for
as .;:..' as possible, if forecasted demand shows 1: 1: variation between seasons or
months, then the level of i:.,duction should be ..1: i..1 seasonally or monthly (to
assure that :... 1 ': equals demand).
U I: continuous, stable demand for :.ducts, the i:.,duction smoothing
problem aims to find a production plan which disperses the production of each
..: uct uniformly over time. In the absence of : !:: :tion smoothing a ::i .:
may face with two ... : ..... either some work units starve or to prevent
starvation, :i 1: amount of inventories are carried. Let us explain these problems
with an example.
A ..: .. :...... I .. :y i... ",1 2 (i : i .. .: products, A and B. Dem and
: both i ..ducts is identical and 200 units/month. Product A constitutes of 2
units of part a and 2 units of :.t Q, product B constitutes of 2 units of : o ,t and
4 units of :I 7. Parts a, 3 and 7 are i : :uced on e :t work units, X,Y and
Z resp(. i' all : a assembly station (S), where products A and B are
assembled. T1:. demand values parts are ': x 2 1 200 x 2 = .:: ''::, x 2 = ::
and ::i x 2 = ::' i units, for a, and 7 respectively. Setup times on the assembly
station are negligible. A :: :: .::1 1 : per month and :i: minutes per ,
total l .. ductive time is : i :i minutes month, on every manufacturing unit.
i :. !: bonds to a time of 20 minutes on the assembly station. Processing
times of the 'ts o, f and 7 are given as 8, 15 and 10 minutes, respectively.
The total time needed to produce all parts are calculated as ::: x 8 = t I.
.i\ x 15 = =. : and 800 x 10 = i minutes, on X, Y and Z, :. .. ];. 1 X and
Y have excess I : but we're not so lucky with Z.
For X, the schedule of the final assembly station does not make a : :.:
since it has to produce 2 units of a 20 minutes, no matter what is being
assembled on S. Y has excess < : : and this slack can be used effectively with
good .....:... T. .: is, if a good schedule can be found, the inventory level for
3 can be I low. However, for 7, ::.:. assembly sequence is of vital :: :iortance.
Let us consider two extreme sequences for S. In the first scenario, we : t J duce
::: units of A and then 200 units of B. In this scenario there is no demand for
.:'t 7 in the first half of the month. Ti::: if Z i.:..duces 7 in the :: : half, then
a huge ... of part 7 accumulates. If Z does not produce 7 in the : : I. i, on the
other hand, then S will starve in the second half of the month, when it needs part
7 to assemble product B. In the second scenario the assembly plan continuously
shifts from product A to B, then to A again and so on. With this plan 4 units of
part 7 are required every 40 minutes, and the inventory of 7 never exceeds 4 units.
The inventorytime graph in Figure 14 demonstrates the effect of final assembly
schedule for this 2product example. As the number of products and diversity in
part requirements grows, the effect of production smoothing becomes more vital.
Inventory
Scenario 1: Largest Batches
Time
Inventory
Scenario 2: Smallest Batches
Figure 14: Effect of Production Smoothing on Inventory Level
In practice we may not be able to smooth the production perfectly, as in the
example. Significantly large setup times may imply watching. In this case, one may
APT7fIT
AA1A
Time
MO.
In the Kanban systems used in Toyota, preceding processes supplying
the various parts or materials to the line are given greatest attention.
Under this "pulling" system, the variation in production quantities or
c'r.i .i,i'. times must be minimized. Also, their respective workin
process inventories must be minimized. To do so, the quantity used per
hour (i.e., consumption speed) for each part in the mixedmodel line
must be kept as constant as possible.
Keeping the quantity used per hour constant can be interpreted as keeping
the cumulative quantity used proportional to time elapsed. In Figure 15 the
straight line demonstrates the ideal consumption for a part (or product itself).
Consumption rate is constant, in other words consumption is proportional to
time. The actual consumption is a step function, increasing by the amount the
part is consumed by the product at that spot of the sequence. The chart may
be drawn for products, as well. In this case, the actual consumption may only
increase by one if that product is assigned to that spot of the sequence. Actual
consumption may be above or below the ideal consumption. If there are more than
one products in the system considered, the actual cannot be ,,l.i\, equal to the
ideal, which means there occurs a gap between the two. The smaller the gap, the
more successful the sequence.
Gap is calculated for every part (or product) and for every spot in the
sequence. Toyota's objective is to minimize the total of the squared gaps. Since
plus and minus gaps diminishes each other, using squared gaps seems appropriate.
Toyota is interested in smoothing the parts usage at the subassemblies level
only. Therefore, till \ calculate the gaps for only subassemblies used by the end
products in the sequence. Also, in the literature there exist examples where only
product level, or up to 4 different levels (including product level) are considered. If
a method is about sequencing the products in a way that only the variation in the
product level is considered, we call it a singlelevel production smoothing problem,
otherwise we call it a multilevel production smoothing problem.
EW
Ideal Production
rTime
End of the
Planning Horizon
Toyota's goal chasing method (GCM) works as follows. First no product is
assigned to any stage in the sequence. Starting with the first stage, one stage is
analyzed at every iteration. At an iteration, a product is assigned to the stage, and
the assignment is never taken back. The algorithm terminates when all the stages
are considered, yielding a complete sequence.
At each iteration, every product (if all units of a product has been assigned to
some stages in the sequence, that product is ignored) is considered separately and
the total gap is calculated for the product being considered. The product which
gives the smallest total gap is assigned to the stage.
The GCM is a onepass greedy heuristic. It is quite efficient in terms of time,
but frequently fails to find the optimum sequence. Several algorithms have been
developed and found superior to the GC'\ I. These methods will be discussed later,
in this dissertation.
Toyota's manufacturing system structure has major advantages for production
smoothing, that is, each product takes one unit of time on the line and changeover
machine environments are often 1. ..... ..... into subproblems that deal
with single machines. For example, a cc::: 1' :i 1 machine environment
with a .. 1. bottleneck give rise to a machine model.
From production smoothing point of view, even the single machine model
is not a trivial problem to solve. Solutions to some other models (:. :.. *, and
assembly ::: ..) can be obtained from single machine techniques by minor modifica
tions. If one can : that the; i :.. being considered has one bottleneck, and
.. schedule that is ... for this bottleneck is ... for the rest of the system,
then a w. i ..::: .::. .: :: ::::: : 1.: (i K w 1 job 1 .. or .. : shop)
can be reduced to a .. machine tem.
In a parallel machine environment there is only one task to be I. :.. .
for each ':* (product) but there are a number of machines that can do this task.
The machines might be identical. In theory, parallel machine models are the
generalizations of single machine models. In i .. ice, generally there are more
than one resources which can be used for the same task, therefore parallel machine
systems are as common as single machine systems.
S: production smoothing point of view, if the .::'allel machines are identical
and each : duct can be produced on .. of the machines, the problem can be
tr ... ...... i to a single machine e problem. i other cases where machines are not
identical in .: 1 and each .: duct can not be processed on each machine, problem
is much harder.
Flow 1: are :: :::: : ::: :: systems which consist of a number single
machines, :.1.. .. serially. Each product has to follow the same route, but may
have different setup and prc t :: ". times on each machine. A ::\iw Ii where a
product's ..... .. and setup times are the same on each machine, is identical to
a single machine; I :::
A .:. ::.:. flow shop is a combination of a .'allel machines in a flow 1
":: : : are a number operations to be .. :.: : ,1 on each i: i duct, : I ,: 1 ts
must .H... w the same route and there .. be more than one machines used
for the same operation, in par: i. i i i : p.::.:.i I machines r: i :1 be identical.
Other names for a flexible i w w ... in the ::. 1:::. are ..:.. ,und fl.. shop,
multiprocessor flow shop and 1.: flow shop.
Flow 1:.. and flexible flow 1:.. are common .::.. ::::::. systems,
Where :i can be grouped into product i .t .. : ....... to
S:::::: :: : : in : :: orders. If a ; : duct is not processed on one of the
serial machines, than !.; and ..... .. time can be assumed zero for that
product on that machine and the i. ::: can be treated as a flow 1:
Assembly are similar to i.. 1... Generally on flow shops unique
batches are processes, the demand i each product :.:. not be large and there
are relatively :. 1H number of operations. On assembly lines, relatively smaller
number of products (the (.: '::..1 idea was to 1. : :: an assembly i :: for a single
duct); with larger demands and more operation : ...: eventsts. According to
demand data, a ,1. time is found for the line and tasks are .... :' i to stations.
Each product should be processed on each station within the. '. time. In multi
..1. .:t lines this rule is generally too restrictive, thus a more relaxed rule such as
:. : processing time of all .: ducts on a station should be less than or .:: :' to
the cycle time, is used instead.
Since the first production smoothing system was developed : an assembly
(in Toyota) and has a wide ." area, the assembly line model is
important for : duction :::. .1:: studies. An assembly line on which processing
and ':: times do not ( .c: ::..: station to station is identical to a : 1, machine
system.
In a job 1:.. there is a fixed route for each i.:.,duct, but routes : :i. :
products are not necessarily the same. i .e :"e, material :i within the system
is :.: 1I ated and is c::: ::: for .1 ::1 .
Starting at Toyota, production smoot'1 ....1. .. has been studied for
decades now. i.:::y researchers have contributed to the literature, with alternative
objective functions or solution m. i1..i ... ;. Unfortunately in all these work the
problem scope has been stuck in Toyota's system. i 'y product is assumed to
take exactly one unit of time on a synchronized assembly line. Ti:: t, he i:: .. :,ure
does not help other .. .: : systems, much.
Lummus (1' :'.) claims that an increase in product variety i:: i.: .the
JIT negatively. . considers a 9 station JIT i.. ...... ,. . em and runs
simulations with (H:ii : .:I and processing time requirements for 2 : 1:
She tests the T. of 3 T. ... ... ... :. methods; ... .. in large batches,
T ta's rule and a random :: :: Results from the simulation study show
that, when there is a :.i:. .. imbalance between time :':..: ements of : .
* 1...:ts on at least one station, sequencing cannot improve system's
In other words, the sequence found with Toyota's rule is not significantly better
than a random sequence. .. result clearly states a need for a method which can
be used in imbalanced, complex :.::::::.': ::::
This dissertation addresses the production smoothing problem where each
: duct ::: : have distinct : : ::. and i:: times on the : ..: (machines
or stations). T . ..... ......... system types, which are subjects of this research.
are primarily .: machine and flow .1: i environments.
This dissertation addresses both '.. .level (considering c.t... the products)
and multilevel (product level being the first, several levels of parts are considered)
versions of the : .iduction smoothing problem.
;ting literature on production smoothing : 1' ::: had the advantage of
ana1' : the .1 .:.::. : horizon in discrete time units. For .:.. [, if the horizon
is 1 month (assuming :::::: minutes productive time : month) and a total of i:::::'
units of 3 0.:. .. ..: products to sequence, one might work with ::: :1 discrete time
units (each equal to 2 minutes), or spots in the
Allowing arbitrary .. : and i= = times i that the .1i :
horizon should be analyzed in a continuous manner. This fact makes the problem
:. ::in several T: : the obi.. ve function must be reformulated.
Since the time is continuous, measured ,i;: over minimal time units should be
time integrated to get the total gap between the ideal and actual : :::: tions.
Second, in a 1. ... ..I products imay have (i. .. ... time requirements on i
machines. T : : 1.; .::::.. a continuous flow of products is a ii::. :: i task.
Even if the environment is not a w x .".v nchronizing the upstream operations
with this continuous time schedule is difficult.
To overcome these I :::. :! : ; and make use of the : i: literature we define
a H: i i.::: th timebucket. F . 1..:t, no matter whether it is produced as a
single unit or in batches, should fit into this timebucket, where the length of this
timebucket is also a decision variable. P .1.:. decisions are made in a .. such
that the total number of batches defines the length of the bucket, in which I
batch should fit.
In this context the problem can be divided into two ::,atc subproblems,
and solved i1 a 2phase approach. In the first phase, the number of batches and
batch sizes for .i product are determined. In the second phase these batches are
sequenced. Since . batch fits into the fixedlength timebucket, the sequencing
problem can be solved 1 using the existing .1; :. time methods available in the
Al'1 ..: 1: the two phases are separate problems, there is a link between them.
Model formulation of the second d 7. h drives formulation of the model in the: :
1. .... ...'e, the second phase should be stated before the first phase.
28
The dissertation is organized as follows. SingleMachine SingleLevel model is
presented in Chapter 2, while chapters 3, 4 and 5 are devoted to FlowShop Single
Level, SingleMachine MultiLevel and FlowShop MultiLevel models, respectively.
Finally, in Chapter 6, we summarize our work, give concluding remarks and discuss
possible future research directions.
Each model will be studied, starting with a literature review, formulation and
solution approaches for the second phase problem (which is mostly based on the
work available in the literature) and formulation and solution approaches for the
first phase problem (which is completely introduced by us).
CHAPTER 2
SINGLEMACHINE SINGLELEVEL MODEL
SingleMachine SingleLevel (SMSL) model has a pivotal importance in this
dissertation. Principles and the approach used in building the model will be
extended to the other models. Furthermore, model properties and findings, as well
as the solution methods, give useful insights for the other models.
The manufacturing environment of interest in this model is a single machine.
This single machine can be interpreted as a final operation that all the products
must go through, or a bottleneck operation which again serves all the products, and
all other operations may be subordinated to the needs of this single machine. In
other words, any feasible sequence for this single machine is feasible for the entire
system. Thus, the entire system can be controlled by setting a sequence for this
single machine. The automotive pressure hose manufacturing facility presented in
the previous chapter (see pages 1 and 2) is a good example for such systems. The
final stage of the manufacturing system in the example (the assembly station) is
identified as the single machine on which we focus.
SingleLevel denotes that only the endproducts level is considered and
variation in product consumption will be minimized. If the part requirements
of different products are somewhat close, than controlling the single level is
appropriate. The idea is, a leveled production schedule will result in leveled
consumption in sublevels as well. Kubiak (1993) calls this singlelevel problem as
product rate variation (PRV) problem.
In the majority of papers in the production smoothing literature, batch sizes
are assumed identical (and equal to one). The advantage of this assumption is
obvious; with this assumption one easily adds setup time to processing time,
therefore eliminates the complexity imposed on this problem by setups and larger
batch sizes. Moreover, til \assume that processing times are identical (and equal
to one) for each product and there is enough time to process all these products in
any sequence. This makes the ideal onepieceflow possible, thereby eliminating the
need to process the products in batches. With this assumption the environment is
defined as a synchronized (perfectly balanced) assembly line.
The models used in the papers mentioned above cannot be used to obtain level
schedules for a single machine environment where processing and setup times vary
for different products and total available time is limited. In this environment one
must decide on batch sizes and the number of batches that will be produced to
meet the demand, for each product, before trying to sequence the batches. This
dissertation is interested in solving this harder and more realistic version of the
mixedmodel sequencing problem.
As the previous chapter has already explained, this dissertation develops a
new structure where demands are met in batches, and each batch can be processed
within a fixed timebucket, which itself is a decision variable. Thus, the problem
can be analyzed in two phases, the first phase is to determine length of the fixed
time bucket, number of batches and batch sizes for each product. Once we solve
the problem of the first phase, the problem of sequencing those batches, which is
the second phase, becomes trivial. Since each batch should be processed in a fixed
time bucket, and total number of batches to produce is known for each product, we
can treat each batch as a single unit of that product. This second phase becomes
similar to models in the literature. Therefore, we can adapt one of the efficient
methods, which are already developed and tested for a problem similar to ours.
This chapter is organized as follows. Section 2.1 presents the currently existing
work in the literature, related to the singlemachine singlelevel model. Sections 2.2
to 2.5 are devoted to the 2nd phase problem, where the problem is formally stated
and exact and heuristic solution approaches are presented. The work related to
the 2nd phase problem mostly relies on the existing work in the literature, therefore
these sections include an extensive literature review. The rest of the chapter is
devoted to the 1st phase problem, the main consideration of the chapter. In section
2.6, we present the mathematical formulation of the problem; and in section 2.7, we
analyze the nature of the problem and draw useful properties about the problem.
Section 2.8 develops exact solution methods for the problem. Sections 2.9 and 2.10
are devoted to heuristic solution procedures, as we devise a parametric heuristic
procedure for the problem and implement three metaheuristic techniques in these
sections, respectively. Finally, Section 2.11 presents a comparative analysis of the
solution approaches developed for the problem.
2.1 Literature Review
Before proceeding into review of related papers in the field, we define our
notation in order to avoid possible confusion due to different notations used in
these papers.
n Number of products to be manufactured
i Product index
k Stage index
si Setup time of product i on the machine
pi Processing time of one unit of product i
di Demand for product i for the planning horizon
Di Total demand of products 1 to i to be manufactured in the planning horizon ( 1 dh)
h=1
bi Batch size of product i
qi Number of batches of product i to be manufactured in the planning horizon
Q Total number of batches to be manufactured in the planning horizon (= 7 qj)
i=
T Total available time, length of the planning horizon
t Length of the timebucket, length of one stage in the sequence
xi,k Cumulative production of product i over stages 1 to k, measured in batches.
The planning horizon (T) is divided into equal length (t) intervals, or stages.
The number of stages is equal to the total number of batches to be manufactured
(Q). This property will allow us to measure the deviation from the ideal produc
tion rates in a discrete manner. Also, xi,k denotes the total number of batches of
product i produced in stages 1,2,..,k. The following recursive equality holds for xi,k.
0, if k = 0
Xi,k Xi,k1 + 1, if product i is sequenced in the kth stage
Xi,k1, O/W.
We have already noted that (see pages 20 and 21), in the ideal schedule in a
JIT environment the production rate should be constant, in other terms cumulative
stock for a product at a given point in time (total number of items produced from
the beginning until this time) should be proportional to the time elapsed since the
beginning of the horizon.
Miltenburg's (1989) work can be seen as the seminal paper in singlelevel
mixedmodel justintime sequencing literature. Miltenburg defines the objec
tive function as the summation of squared deviations from the ideal schedule.
Miltenburg's model can be expressed as follows.
D, D
Minimize Z= (,k k d)2
k l i l
k=1 i= n
S.T.
xi,k = k, k =1,2,..,D
i=
i,k Z+, Vi, Vk
The model looks quite simple. It has only two sets of constraints. These
constraints assure that one and only one product is sequenced at each stage. In
the core of the objective function, is the gap between ideal and actual production
amounts, for a given product at a given stage. xi,k is the actual cumulative
production amount of product i, including stages 1, 2,..., k. As Figure 15 shows,
the ideal cumulative production can be expressed with a straight line. Since the
production amount at the end of the horizon (which is the D stage) must equal
demand for that product (di), the ideal cumulative production in stages 1, 2,..., k
can be calculated as k. Now we see that the core of the objective function is
nothing but the gap between the ideal and actual production amounts (see Figure
15). The objective is formed by summing up squared gaps over the products and
stages.
Miltenburg suggests an algorithm for the problem, which may yield an infea
sible solution. If this algorithm provides a feasible solution, then it is also optimal
to the problem. However, this infeasibility occurs frequently with his algorithm.
He further proposes a rescheduling tool to reach feasibility. The rescheduling
tool given to correct the infeasible solutions is based on enumerating all possible
subschedules, therefore not practical for large problem sizes. Miltenburg also
r 1. two heuristic methods to obtain nearoptimal solutions. The first heuristic
(MI A3H1) is a onestage constructive heuristic with an O(nD,) complexity, while
the second one (MA3H2) is a twostage heuristic with a complexity of O(n2D,)
and a better performance than MA3H1.
Ding and Cheng (1993) develop a new heuristic which has MA3H1's complex
ity (O(nD,)) and MA3H2's solution quality. C'Inig and Ding (1996) analyze a
variation of Miltenburg's problem with product weights. They modify some well
known algorithms for weighted products, and give computational results.
Inman and Bulfin (1991) develop an alternative objective function for Mil
tenburg's original model, where they define due dates (ideal production times) for
each copy of each product. Then, the authors suggest an EDD solution approach
of batches equals total demand (Q = D,). With this assumption our formulation
reduces to Miltenburg's model.
The other alternatives for the second phase model can be summarized by two
major deviations from our model. First, one might adapt a minmax approach
instead of minsum. Second, one might use an absolute measure of the gap between
ideal and actual productions, instead of the squared measure chosen here.
The primary motive in forming the model this way is, that it enables us to
adopt our model to Miltenburg's and make use of the extensive literature.
2.3 Exact Methods for the 2nd phase problem
If we ignore the integrality constraint (2.3), then the problem reduces to
minimizing a convex function subject to a set of linear constraints. The optimal
solution is found straightforwardly: xi,k ,= Z = 0. Solution is feasible since
SXi,k k Lk = k. However, this solution is not feasible for the complete model,
i= 1 i= 1
i.e. violates constraint (2.3). Now, we need to find a tool to convert this solution to
a feasible one.
We define two points in space such that Yk = (l,k, Y2,k, ..n,k) E R" and
Xk (Xl,k, 2,k, ..., *n,k) E Z". Yk is the ideal point which results in zero gap
(yi,i = i = 1,2,..., k and Xk is the "n(;,i i integer point to the point Yk, for the
n
kth stage. Here, "neiii" means minimize (bi(xi,k yi,k))2
i=
Now we suggest Algorithm Nearest Point to find the nearest integer point
to the ideal (infeasible) point. This algorithm is a modified version of Miltenburg's
Algorithm 1 (Miltenburg, 1989, p.195). Note that, in the following algorithm and
all the algorithms presented in this section, products denoted by capital letters (A,
B,...) can be any of the products, not a certain product which is labeled by this
capital letter. The algorithm is given in Figure 21.
We illustrate this algorithm with a small example. Consider three products
such that the number of batches to be produced for each product is given as 5
Algorithm Nearest Point
1. Set k = 1.
2. Find the nearest nonnegative integer xi,k to each coordinate yJk. That is, find
xi,e so that Ix i, yi,k < 1 Vi.
n
3. Calculate k = X Xi,.
i 1
4. (a) If k k = 0, then go to step 7. The nearest integer point is Xk = (l,k, x2,k, ..., Xn,k).
(b) If k k1 > 0, then go to step 5.
(c) If k kI < 0, then go to step 6.
5. Find the coordinate ytk with the smallest '.'(1 + 2(xa,k yk)). Break ties arbitrarily.
Increment the value of this xi,k; xi,k xi, + 1. Go to step 3.
6. Find the coordinate yJ, with the smallest '.(1 2(xke yie,)). Break ties arbi
trarily. Decrement the value of this xi,k; xi,k  xi,k 1. Go to step 3.
7. If k Q stop. Else set k + 1, go to step 2.
Figure 21: Pseudocode for Algorithm Nearest Point
batches for products 1 and 2, and 1 batch for product 3. Batch sizes are 2, 2 and
3 units, for products 1, 2 and 3, respectively. Algorithm Nearest Point is used to
find a sequence which minimizes the objective function.
For k = 1 (at the first stage), ideal point Y1i (0.45, 0.45, 0.09) and
corresponding rounded point is X1 = (0, 0, 0). Clearly X1 is not acceptable
(kx = ~,k = 0 < 1 = k), and step 5 is invoked where a new point X1 = (1,0, 0)
i=l
is found. The algorithm is run for stages 1 to 11, and the solution summarized in
Table 21 is obtained.
To obtain a sequence from this solution we proceed stage by stage. The sub
sequence covering up to 5th stage is: 12123. However, the 6th stage brings
a conflict. It suggests us to produce products 1 and 2 at the same time, while
d1 .1 I \ ii.:4 one unit of product 3 which was produced before. Thus, we see that a
feasible sequence can not be obtained from the solution, for this example.
As this very simple example shows, Algorithm Nearest Point does not always
yield a feasible sequence. For cases in which this infeasibility occurs we propose
40
Table 21: Example for Algorithm Nearest Point
Stage (k) X1,k X2,k X3,k Product Scheduled
1 1 0 0 1
2 1 1 0 2
3 2 1 0 1
4 2 2 0 2
5 2 2 1 3
6 3 3 0 1,2,3
7 3 3 1 3
8 4 3 1 1
9 4 4 1 2
10 5 4 1 1
11 5 5 1 1
Algorithm Modified Nearest Point, which is again adapted from Miltenburg
(1989, p.196) (see Figure 22).
Algorithm Modified Nearest Point
1. Solve the 2nd phase problem using Algorithm Nearest Point, and determine if the se
quence is feasible. If yes, stop. The sequence is the optimal sequence. Otherwise, go to step 2.
2. For the infeasible sequence determined in step 1, find the first (or next) stage I
where xi,1 xi,1 1 < 0. Set 6 = number of products i, for which xi,1 xi,1 1 < 0.
Resequence stages 1 6,1 6 + 1, ..., + 1 by considering all possible subsequences for this
range.
3. Repeat step 2 for other stages where infeasibility occurs.
Figure 22: Pseudocode for Algorithm Modified Nearest Point
Algorithm Modified Nearest Point finds an optimal sequence, but in the
worst case the number of infeasible stages in the sequence and the number of stages
to be resequenced may be as high as the total number of batches, Q. Since the
algorithm uses partial enumeration in step 2, the worst case complexity is O(Q!).
This result proves that neither Algorithm Nearest Point nor Algorithm Modified
Nearest Point can be used to solve the problems found in real life. Now, we will
develop a dynamic programming (DP) procedure for solving the 2nd phase problem
more efficiently.
Also, note that f(Xk1) = ei), where i is the index of product assigned
to stage.
To calculate the ..: ..1v. y for the : i. DP procedure, we first need
to know number of states. Since :,i,k can take values 0,1 there are q, I 1
possible values for i : means, I (qi I 1) distinct states exist. Since at
i= 1
most n decisions are evaluated at each state, the complexity of the procedure is
fi=
o( qj H( 1)).
This co.. .. ty is theoretically lower than the previous method. ... .
it allows us to solve :;: y larger problems optimally, but even for moderate
.......1 this ...'" is not practical. Which means we need to 1. :.. another
method for large problems.
In Kubiak and Sethi's (1991) assignment problem formulation of Miltenburg's
(1: :) problem, the authors further claim that the problem is convertible to an
assignment problem in the presence of: .... i ve ..: duct weights. In what
follows we propose a transformation to convert the second phase problem into
an .. :::::..... problem. F:i we define an ideal position for each. :. of each
..1. .:t and a cost function which increases as a ""py deviates :.. its ideal
position. Let Z b* be the ideal position of" of .: duct i and CO, be the
cost of assigning jth copy of product i to the : of the sequence. T.. the
following formulation is defined.
cjk
Where,
(2j 1)Q
2q
( Z*1
fk
if k < Z*
0, if k = Z
k1
bQ Z?*
\ "/ yi?
(
if k > Zj*
Here, [xl denotes the smallest integer that is greater than or equal to x and
Ix denotes the absolute value of x.
Let Yjk E {0, 1} be the decision variable denoting whether the jth copy of
product i is assigned to the kth stage of the sequence. The assignment problem
formulation of the 2nd phase problem is given as follows.
n q. Q
(AP) Minimize CkYji (2.4)
i=1 j= k= 1
S.T.
Q
k=l
n qn
i= j 1
1, Vi, j
1, Vk
(2.5)
(2.6)
(2.7)
YI e {0,1}, Vi,j, k
Constraint set (2.5) assures that each copy of each product is assigned to
exactly one position. Similarly, constraint set (2.6) assures that exactly one copy
is assigned to each position. A graphical illustration of the assignment problem
formulation is given in Figure 23. As seen from the figure, our assignment
formulation has 2Q nodes.
i,j
k
0
0
Figure 23: Graph Representation of the Assignment Problem Formulation
The assignment problem with 2Q nodes can be solved in O(Q3) time, and is
one of the most efficiently solved problems in the operations research literature.
Solution methods for the assignment problem can be traced back to the well
known Hungarian Method (Kuhn, 1955). Balas, Miller, Pekny and Toth (1991)
give a parallel algorithm that can efficiently solve assignment problems with 900
million variables. This corresponds to 30000 batches in our problem. In a reallife
manufacturing system, planning for a larger number of batches is highly unlikely.
Therefore, the assignment problem formulation for the 2nd phase problem is
practical.
2.4 Problem Specific Heuristics for the 2nd Phase Problem
In the previous section we have reviewed exact algorithms that can be used
to solve the 2nd phase problem optimally. The most efficient method suggested is
a transformation to assignment problem, which then can be solved in O(Q3) time.
If the problem at hand has a total number of batches (Q) in the thousands, then
it will take a significant amount of space to convert the problem to an assignment
problem and a significant amount of time to solve it.
It will be quite useful if we can propose some faster methods which have been
proved to give good (near optimal) results. In the literature one can find many
heuristic procedures addressing Miltenburg's problem. The first set of heuristics are
si..I:, i.1 by Miltenburg (1989), along with the problem.
Miltenburg suggests an algorithm which uses two different heuristic approaches
for the resequencing that stems from the nearest point calculation of his exact
algorithm, as an alternative to partial enumeration. The first heuristic is quite
simple. Here we explain this heuristic with adaptation to our problem and in a
complete algorithmic structure (see Algorithm MH1 in Figure 24).
Algorithm MH1
1. Solve the 2nd phase problem using Algorithm Nearest Point, and determine if the se
quence is feasible. If yes, stop. The sequence is the optimal sequence. Otherwise, go to step 2.
2. For the infeasible sequence determined in step 1, find the first (or next) stage I
where xi, xi,1 1 < 0. Set 6 = number of products i, for which xi,1 xi,1 1 < 0. Re
sequence stages 1 6,1 6 + 1, ..., + 1 by using step 3 for every stage in this range.
3. If the stage to be resequenced is stage k, assign the product with smallest
'.2(1 + 2(xak 1 k )) to this stage k.
4. Repeat step 2 for other stages where infeasibility occurs.
Figure 24: Pseudocode for Algorithm MH1
The justification for the rule used in step 3 is as follows. Consider a stage k.
If product A is assigned to stage k, the variation at this stage is denotes by Vk(A).
Similarly, if product B is assigned, then variation is Vk(B). These variations are
given below.
14(A) b(XA,k1 + 1 kQ)2+ b2(XB,k1 k )2 + bi(xi,k1 k )2
i A,B
V (B) = b(A,k1 )2 + (Bk1 1 k )2 b(i,k k )2
bQ Q + kQ)2
Vk(A) V(B) = b ( + 2(xA,k k A b ) (+ 2(XB,k i k ))
As the difference function shows,
Vk(A) < Vk(B) b ( + 2(xA,k1 kqA)) < b2(1 + 2(B,k1 )).
Q Q
This fact is the rationale behind the rule in step 3.
Algorithm MH1 is a onepass greedy heuristic method. The major advantage
is that it is onepass, and that its time complexity is low. In the worst case,
every stage has to be resequenced. At most n products are considered in step 3,
therefore the algorithm performs O(nQ) operations. This complexity is much lower
than the optimal procedure (0(Q3)).
Now let us explain how the algorithm works, on an example. There are 4
products to be sequenced. The number of batches to produce for each product is 8,
1, 8 and 3, respectively. While batch sizes are 1, 3, 2 and 1 units, respectively. The
total number of batches (Q = q g) is 20, which means, the planning horizon is
i=
divided into 20 stages. The sequence found by using Algorithm Nearest Point is
given in Table 22.
As seen from the table, the resulting sequence is infeasible. First infeasibility
occurs at stage 7 for product 4. According to step 2 we now should trace back to
stage 6 and start resequencing using step 3.
For the sake of simplicity, we'll consider that the heuristic resequencing tool
(step 3) is used to assign products to stages, from the very beginning. Which
47
Table 22: Sequence Found by Algorithm Nearest Point
Stage (k) Xlk X2k X3k X4k Product Assigned to the Stage
1 1 0 0 0 1
2 1 0 1 0 3
3 2 0 1 0 1
4 2 0 1 1 4
5 2 0 2 1 3
6 2 0 2 2 4
7 3 0 3 1 1,3,4
8 3 0 3 2 4
9 4 0 4 1 1,3,4
10 4 1 3 2 2,4,3
11 4 1 4 2 3
12 5 1 5 1 1,3,4
13 5 1 5 2 4
14 6 1 6 1 1,3,4
15 6 1 6 2 4
16 7 1 6 2 1
17 7 1 6 3 4
18 7 1 7 3 3
19 8 1 8 2 1,3,4
20 8 1 8 3 4
means we bypass other steps and emphasize the rule in step 3. The sequence found
by using Algorithm MH1 is given in Table 23.
For the first stages we calculate b2(1 + 2(Xi,k_ kL)) for each product.
Comparing the values (0.2, 8.1, 0.8, 0.7), we assign product 1 to stage 1. A tie can
be broken arbitrarily. In stage 8, we see such a tie where we could choose either
product 1 or product 4, and we chose product 4. The algorithm terminated with a
feasible sequence which has a total variation of 27.85.
The drawback of Algorithm MH1 is, that it considers only the current stage,
therefore it is myopic. The algorithm may not be able to find good solutions, due
to its myopic nature.
Miltenburg suggests another heuristic which promises better results. Here we
explain this second heuristic with adaptation to our problem and in Algorithm MH2
(see Figure 25).
Table 23: Sequence Found by Algorithm MH1
Stage b (1 + 2(,k1 kL)) Product Variation
(k) 1 2 3 4 Assigned (Vk)
1 0.2 8.1 0.8 0.7 1 1.05
2 1.4 7.2 2.4 0.4 3 0.38
3 0.6 6.3 2.4 0.1 4 0.71
4 0.2 5.4 0.8 1.8 3 1.52
5 1.0 4.5 4.0 1.5 1 0.63
6 0.2 3.6 0.8 1.2 1 1.82
7 1.4 2.7 2.4 0.9 3 1.31
8 0.6 1.8 2.4 0.6 4 2.28
9 0.2 0.9 0.8 2.3 3 3.25
10 1.0 0.0 4.0 2.0 1 2.50
11 0.2 0.9 0.8 1.7 2 2.75
12 0.6 16.2 2.4 1.4 3 2.28
13 1.4 15.3 2.4 1.1 1 1.31
14 0.2 14.4 0.8 0.8 3 1.82
15 1.0 13.5 4.0 0.5 1 0.63
16 0.2 12.6 0.8 0.2 1 1.52
17 1.4 11.7 2.4 0.1 3 0.70
18 0.6 10.8 2.4 0.4 4 0.38
19 0.2 9.9 0.8 1.3 3 1.05
20 1.0 9.0 4.0 1.0 1 0.00
Total 27.85
This algorithm is very similar to the previous one.
Yet, it makes use of the
previous algorithm in step 4, to decide on the next product to be assigned. This is
also a myopic algorithm, but its vision is somewhat broader. This advantage results
in ability to find better solutions. As expected, this advantage has a cost which is
paid in time complexity.
Algorithm MH2 resequences Q products in the worst case. Every resequencing
takes O(n2) time. Therefore, the time complexity of the algorithm is O(n2Q).
Applying Algorithm MH2 on the same example gives the sequence in Table (24).
Product pairs for the next two (kth and k + 1st) stages are found by considering
each product for the kth stage, separately, and selecting the product for the k + 1st
stage with the rule given in step 3 of Algorithm MH1. The first element of the pair
Algorithm MH2
1. Solve the 2nd phase problem using Algorithm Nearest Point, and determine if the se
quence is feasible. If yes, stop. The sequence is the optimal sequence. Otherwise, go to step 2.
2. For the infeasible sequence determined in step 1, find the first (or next) stage I
where xil xi, 1 < 0. Set 6 number of products i, for which xil xi, 1 < 0. Re
sequence stages 1 6, 1 + 1, ..., + 1 by using steps 3 to 5 for every stage in this range.
3. If the stage to be resequenced is stage k, assign a product (let us say product
A) to this stage k. Calculate variation caused by this assignment, Vk(A).
4. Assuming product A (from step 3) is assigned to stage k, find the product to be
assigned to stage k + 1 (let us say product B), using the rule in Algorithm 2.3 step 3.
Calculate variation caused by this assignment, V+ 1(B).
5. Perform steps 3 and 4 for each product (to be assigned to stage k). Assign prod
uct A which gives smallest total variation V5+i(A) + V4+i(B), to stage k.
6. Repeat step 2 for other stages where infeasibility occurs.
ure 25: Pseudocode for Algorithm MH2
which leads to the lowest variation value is assigned to that step. As seen from the
table, product pairs (1,3),(2,3),(3,4) and (4,3) give 1.4, 17.1, 2.0 and 2.9 units of
variation, :.. i'vely. C.: .:.:: 1y, product 1 is assigned to stage 1. For the last
stage there is only one alternative product that can be : : ... 1 thus no calculation
is i : ..:::: and product 1 is directly assigned to stage 20.
Ti. ; 1 ...':. .. terminated with a feasible sequence which has a total variation
of 27.35. The f o:: : found is very similar to the sequence found 1 the .i..i
algorithm. Ti difference in the total variation is minimal for this example but the
number of operations performed 1 the :I. : thm is much larger.
selection between algorithms i ii and i is a tradeoff : 1.1. :.. between
time and solution c :.:' If one can find an algorithms p:. .1:: : A' :1; ::: MH2's
solution :: .1 j T in :ii i's time, it will be much better than both MH1 and MH2. I
structure of the problem, Ding and (1 ....' (1993) have developed a new
heuristic algorithm which has this .:.. :ty and ,. :. .::.., Miltenburg's heuris
tics. Ti.' named this new ;1....'i .. as TwoS' Algorithm. Ti. :;1 ..':... is
Table 24: Sequence Found by Algorithm MH2
Stage Product Pair & Variation for Stages (k, k + 1) Product Variation
(k) 1 2 3 4 Assigned (Vk)
1 (1,3) 1.4 (2,3) 17.1 (3,1) 2.0 (4,3) 2.9 1 1.05
2 (1,3) 5.4 (2,3) 16.9 (3,4) 1.1 (4,3) 3.9 3 0.38
3 (1,3) 2.7 (2,1) 14.6 (3,1) 4.5 (4,3) 2.2 4 0.71
4 (1,3) 2.7 (2,3) 13.8 (3,1) 2.1 (4,3) 7.2 3 1.52
5 (1,1) 2.4 (2,1) 11.3 (3,1) 8.0 (4,1) 5.9 1 0.63
6 (1,3) 3.1 (2,3) 9.8 (3,1) 3.7 (4,3) 5.6 1 1.82
7 (1,3) 7.4 (2,3) 9.9 (3,4) 3.6 (4,3) 6.9 3 1.31
8 (1,3) 5.0 (2,3) 7.3 (3,1) 6.8 (4,3) 5.5 1 2.28
9 (1,3) 8.8 (2,3) 6.9 (3,2) 5.2 (4,3) 6.3 3 2.75
10 (1,2) 6.7 (2,4) 5.2 (3,2) 10.3 (4,2) 5.2 2 2.50
11 (1,3) 5.5 (2,3) 39.2 (3,1) 6.1 (4,3) 5.0 4 2.75
12 (1,3) 5.4 (2,3) 38.9 (3,1) 3.6 (4,3) 9.9 3 2.28
13 (1,3) 3.1 (2,1) 35.0 (3,1) 6.9 (4,1) 7.2 1 1.31
14 (1,3) 3.0 (2,3) 32.1 (3,1) 2.4 (4,3) 5.5 3 1.82
15 (1,4) 2.1 (2,1) 29.0 (3,1) 7.7 (4,1) 3.6 1 0.63
16 (1,3) 2.2 (2,3) 26.9 (3,1) 2.8 (4,3) 2.7 1 1.52
17 (1,3) 5.9 (2,3) 26.4 (3,4) 1.1 (4,3) 3.4 3 0.70
18 (1,3) 2.9 (2,3) 23.2 (3,4) 4.2 (4,3) 1.4 4 0.38
19 (1,3) 1.6 (2,3) 21.7 (3,1) 1.0 (4,3) 5.1 3 1.05
20 1 1 0.00
Total 27.35
practical in terms of computation time and number of operations, but complicated
to understand the formulation. The explanation of the mathematical details and
some useful proofs can be found in Ding and C'hi.ig (1993) and C'h.ig and Ding
(1996).
Adapting the two stage algorithm to our problem, we define Algorithm
Twostage as given in Figure 26
Applying this algorithm on the same example results in the sequence given in
Table (25). For the first stage (k
1), Oi values are found as 0.1, 3.8, 0.4 and
0.3 for products 1 to 4, respectively. The lowest of the four is 0.4 ((3), so product
3 is selected as the first candidate, A. Calculating Ais in a similar way, product 1 is
Algorithm Twostage
1. Set k 1.
2. Determine product A that has the lowest = .'(xei 1 (I (k +). Break ties
arbitrarily.
3. Determine product B that has the lowest AiA = .2 ik1 (k + 1) + ). If
AB > b(XAA,k1 (k + 1) + )}, set B A.
4. If A B B and bA(XA,k1 k + ) b (xB,k 1 k + ) > 0 assign
product B to stage k, otherwise assign product A to stage k.
5. Eliminate a product, if the last copy of it has been assigned to stage k, from fur
ther consideration. If all products finished, stop. Otherwise, set k = k + 1 and go to step
2.
Figure 26: Pseudocode for Algorithm Twostage
selected as the second candidate (B). Critical value ) is calculated as 0.30, which
tells us B should be assigned to this stage, so product 1 is assigned to stage 1.
Although the mathematical basics of the algorithm are challenging, the
application is quite simple. The algorithm is a onepass method, which performs
O(n) operations per stage. Therefore, the complexity of the algorithm is O(nQ).
As Table (25) shows, the algorithm gives the same sequence as Algorithm
MH2, for this example. Testing these algorithms in terms of time and solution
quality is beyond the scope of this dissertation. We refer to C'Iwn and Ding
(1996), where the authors analyze three algorithms introduced in this section, plus
two other heuristics and one exact method (assignment problem formulation from
the previous section). Their results show that Algorithm Twostage is the most
favorable heuristic for the problem.
2.5 MetaHeuristics for the 2nd Phase Problem
In this section we will briefly review metaheuristic approaches which have
been applied to Miltenburg's model, therefore can easily be adapted to the 2nd
phase problem.
In terms of solution quality, MH techniques provide better results, than
simple, pr :': l .. :i:, heuristic methods. i :: claim is based on a very ::i :"
that ... ; '.y of MHI techniques take their initial solutions from the best
problemspecific (PS) heuristic techniques d. : ed :.. that problem. .
the solution found 1 the MH i. :1 ::. cannot be worse than the result
by the embedded PS 1. 1 *.: .. If there is no known i heuristic for the problem
studied, one should select an arbitrary initial solution (starting point), and this
may cause poor performance for the IMH technique.
MH t :::. :*. are based on searching the solutions : : with the expecta
tion of finding a very good solution, or the optimal solution. Search is gener .i
directed : ::::iy constraints and objective function. :' : ;y of MH tech
.::. ignore .. :r. i .... .: ties of the problem studied and use a general search
: ':,:'e. i : causes an increase in the run time of the method. Since, better
results are found, this excess time .. ::. )tion is .. i. 1.1 Also, the t; ... ..
between running time and solution quality can be :'". i in the finetuning
Among the best known metaheuristic techniques are Simulated A : :.. >:
Tabu Search and Genetic Algorithms.
P 1 ..:: : a new metaheuristic ; ': for 2d ., I : 1.1 ::: or con :: ::
a .....iative analysis of : :. metaheuristic techniques for Miltenburg's model
is not included in the of this dissertation. Hence, this section aims to provide
a brief review of :'r,. 'tant ,.... in the field.
McMullen (1 "':) implements Tabu Search (i ) technique on Miltenburg's
problem. Hie calls Miltenburg's objective function as U ... Goal and defines a
second objective for the problem. This second objective is to minimize the total
number of ::i .. .. ::: ,::. in the ::.. T i:. two obi. 'Ives are combined
into a objective 1. assigning weights associated with each objective. To
:: weights for each obi.. e ve, McMullen creates random solutions and calculates
Values for the objectives. The we 1.. are ..!d in such a way that,
contributions of the alternative obi '. lives are equal, on the aver. For the
experiments, he tests both extremes (setting one of the weights to zero, hence
working with c 1 one ob :ve), equal ...... i... ... weights, and two other
combinations which come from : :::..;:: one of the ob,. i'ves is three times more
important than the other.
Minimization of total number of ':: is not addressed by our problem, so the
results with the first objective only is important for us. However, what we want
to ( :: : ': : in McMullen's study is the implementation, not the computational
results.
The :: 1 :.1:1 :1: od structure used in this :i :: is a :: 1 one. It is con
structed 1. selecting any two positions in the .. ... and: .i.i: the i;>ducts
in these positions. i::. way, .::: solution has Q x (Q 1) (the i : :: has Q
positions) neighbor solutions. From each solution, a number of neighbor solutions
(this number is a parameter of the method) are randomly selected and tested. The
neighbor which has the best obi.. ve [unction value is selected as the :..
solution for the move. If this candidate solution is prohibited by the tabu list for
this iteration, then the objective value is ..::: :. to the aspiration value. If lower
than the aspiration value, the move is i.wed, otherwise second best solution is
examined. When an opriate solution is found, move is By : :::. i By move we
mean updating current solution, tabu list and aspiration value. Ti.. method stops
when a predetermined number of iterations (a .::'ameter of the method) is reached,
or the best solution has not been ::..1 . .1 during the last several .. .i. : T i.
number is also a parameter for the method.
Tabu Search is known to be an i:: 11i ::: search .1::..ue, which can find
good solutions for the problem. n .... ; the tabu 1: is the 1 element of the
method. T1.. size and content of the i: are : : ortant. Generally, solutions
are not stored in the tabu list. A 1.  characteristic of the move, e.g., positions or
.* '..:ts to swap, are stored in the tabu i: This i : y .:ws the user
a more :". :. search method.
Simulated Annealing (SA) is one of the :. 1. i mnetaheuristic techniques.
.:ameters are restricted to a .. 1: schedule and an acceptance : : .1.1y
of the move considered. : ....j schedule 1. :.... initial temperature and tern
.: ::e::: change rate for an iteration. Ti: method terminates when i.: :i : ,ture
reaches a predetermined lowest level.
To prevent :::: ::e convergence to a local optimum, moves to neighbor
solutions with worse objective values are permitted. The probability to accept such
a move is a : :: : : of the temperature and the( i : b :: between the obh yve
dion values of current solution and candidate solution. When '.i :..= i : e
is 1:' : moves to r'orer quality solutions are likely. '. i: : temperature is low, a
more conservative p:  is .1 cited, where only the moves to better solutions are
accepted.
The 1 to a ::. :: :1.. : .1.: .:. : ..:.: of SA is the c ... schedule. If too
S, the method :... return a .. poor solution; if too slow, the method is .. iy
to return a good solution but take a long time.
McMiullen and Frazier ( : :':) i: SA to the same problem I *..... for Tabu
Search application in McNMullen (:": .). Same neighborhood ::::: don is used.
T .. .. a comparison between performance of the two alternative methods
makes sense.
i n authors claim that SA .. .i f'ormes TS for : ..:.. 'y of the test cases.
i result is important, since it : *. that a simpler technique can be used
instead of a more ..::: :. ,:. one. without a loss in the : p,..:: .::.:e.
Another i" ..: : I 1.. :. is the soi .ii..i Genetic Algorithm (GA). The
idea is keeping a ..:.::1 i :. of solutions at hand, and i: : some mutation and
crossover c.rations on the solutions. As time (iterations) poorer solutions
die and more fit (better objective fe::: : :.:: value) solutions survive. i :: is '
a simulation of living organisms. Parameters for the method are n...,ulation size,
number of iterations : .. termination, mutation pro(. and set of solutions
: crossovers.
The '. *1' : ... of crossover operator selects a crossover i and takes
genes I. : the crossover J ..1::I, : ::: one of the .:eonts, and the rest : ::: the
other ... ..' TI. (. crossover gives two (. i. : which are examined for.
to the next generation. More complicated crossover operators select several
crossover points or use several parents to produce .T ..:
For ':: problems crossover operators gener :1i cause infeasil :: in
.. :: solutions. One can to convert the infeasible ..:. to a feasible
one with some type of neighborhood functions and a search method, or to kill the
infeasible offspring instantly. Both cause r:.. r :. i. increasing the run
time. A more comprehensive .... ... :.?. is to (i* a new operator for the problem
at hand.
McMullen et al. (2000) ":. '.. a specific crossover operator for their study.
Ti select two crossover points, the chromosomes between these : i:. are
preserved in the i a ... as they are. ... remaining chromosomes come fror the
other parent. To avoid :::: : 7 :: ,y, any repeating chromosomes are deleted. This
assures that no chromosome ::1 be represented more than ...... I1 amount, and
the ..i : : yield f .. i 1. solutions. i: drawback of this .: i:. .'. 1: is, that
the ..e are not similar to the i 'ents. Ti : is, the fundamental element of
the GA 7::: is lost.
The authors compare their results with TS and SA results "..: .
(1' ) and i illen and Frazier ( .. ), and claim that GA gives more favorable
results.
may also construct a multiobjective model which reflects two or more of these
goals. We consider four different objective functions.
n, F b2(Q2 qf)
i1 1 )2Q
Lexmin F2 = gi, bi}
i= 1 i 1
n n
Lexmin F3 = { bi, qg}
i= 1 i= 1
Lexmin F4 = I,,,, tii, i ti ini{til}
Note that the last three objective functions are lexicographic objective
functions. In lexicographic optimization the goal is to select from among all
optimal solutions to objective function 1, the one that optimizes the second
objective function. For more on lexicographic optimization see, for example,
Hamacher and Tufekci (1984).
Aigbedo's (2000) lower bound approach is adopted to define the first objective
function, F1 (Details on derivation of this lower bound can be found in Appendix
A). The idea is to use the lower bound of the second phase objective function, as
the objective function of the first phase. Our preliminary experiments have shown
that there is a high correlation between the original deviation function Z and the
lower bound F1. Thus, we expect that optimal solution to the first phase will lead
to a nearoptimal solution for the second phase.
F2 is a lexicographic expression of goals G2 and G6.
F3 is similar to F2 but the priority between the alternate objectives is in
reverse, i.e., F3 is a lexicographic expression of goals G6 and G2.
Details on formalization of these objectives can be found in AppendixA.
The first three objectives may lead to some solutions where batch processing
times (ti) may fluctuate highly. Having production smoothing idea in mind,
we define a fourth objective F4 where the primary objective is to minimize the
maximum of batch processing times, while keeping batch processing times close to
each other. In other terms F4 is a lexicographic expression of goals G3 and G4.
One of the objective functions defined above should be chosen according to the
problem at hand. Each objective leads to a different model, therefore we have four
models to solve.
We randomly create 125 test instances each with four products (n = 4). For
each instance, we enumerate all the solutions and for each feasible solution (each
batch can be processed within t) we calculate four alternative objective function
values, which are noted as independent variables. Then, we use each feasible
solution of the first phase as input to the second phase problem and solve the latter
(sequencing) problem optimally. The objective value of the optimal solution to
the second phase (Z*) is noted as the dependent variable. After completing the
enumeration for each test instance, we compute correlation coefficients for the four
alternative objectives of the first phase and Z*. Results from this analysis show
that there is a higher correlation between F1 and Z* as compared to the other
objective functions. The results of this correlation analysis are given in Table 26.
Table 26: Correlations Between Alternative 1st Phase Objectives and Z*
Correlation Coefficient (R2)
Statistic (F1 vs. Z*) (F2 vs. Z*) (F' vs. Z*) (F4 vs. Z*)
Average 0.9905 0.5399 0.8693 0.6170
Std.Dev. 0.0119 0.2769 0.1230 0.3129
Max. 1.0000 0.8986 0.9959 0.9915
Min. 0.9057 0.3141 0.2219 0.1651
Realizing the high correlation between the lower bound and the optimal
solution of the second phase problem, we decide to use this lower bound function as
the objective function of the first phase problem.
The following optimization model represents the 1st phase problem.
Minimize F = 1)2h (2.8)
i 1 Zqh
h 1
S.T.
si + pi i 1 < T, V1 (2.9)
h 1
di b= V 1 (2.10)
i Vi (2.11)
bi
1 < qi < di, qi integer, Vi (2.12)
Note that in constraints (2.10) and (2.11) bi (batch size for product i) is used
as a state variable. These two constraints assure that excess production is limited
to the minimum. Decreasing bi or qi by 1 would result in under production.
2.7 Structural Properties of the 1st Phase Problem
The 1st phase problem is an Integer NonLinear Programming (INLP) prob
lem. INLP problems inherit the difficulties of two parent problem classes, IP and
NLP. Some very special cases of NLPs where constraints are linear or objective
function is convex can be solved efficiently, but the rest of the class is known to
be very hard. On the other hand, the general IP problems are NPHard. Our
problem, being an INLP, may be NPHard, as well. In the following, we first reduce
the problem to a simpler one and then formally prove its computational complexity.
Proposition 2.7.1 Let A be a given product. Assuming all other variables (qi,i I
N\{A}) and batch size for product A (bA) constant, the objective function F is
monotone increasing in qA.
Proof The objective function can be split into smaller functions Fi,i G N, as
follows.
F bh ((qA +Q2 )
iqA 1 + 1
bf((qA + Q)2 q)
where F A + Q and Q qi
iCN\{A}
Note that, the objective function is differentiable in qA. Nonnegativity of the
first derivative clearly proves the proposition.
bA Q'2
aOFi A+Q' (>o), ifi A
qA Q)2 ((qA+ / + q)), (>0), o/w
OF O9Fi
OqA i 9qA
Similarly, if everything but bA is constant, smaller bA values yield smaller F values.
F, 2bA (QQA), (> 0), if i A aF F F,
bA 0, b/ bA i bA
0, o/w
This information about the derivatives is closely related to constraint sets
(2.10) and (2.11). Now, in order to make use of this information and the con
straints, we introduce the concept of acceptable values.
Acceptable values of a decision variable (qi) are the integer values that satisfy
the equation q, = [di/[di/qi]]. Let Ai {rri, ..., r,ai} be the complete set
of acceptable values of qi where ri,hi is the h h acceptable value and ai is the
cardinilil v of the set. For any qi V Ai, there exists an rj E A, such that r,,j
consumes lesser resource time and yields smaller excess production, therefore
preferred over qi. Algorithm Find Acceptable Values finds all the acceptable
value set Ai for each product i E N (see Figure 27).
Methods for solving MINLPs include innovative approaches and related
techniques taken and extended from Mixed Integer Linear Programming (MILP).
Outer Approximation (OA) methods, BranchandBound (B&B), Extended Cutting
Plane methods, and Generalized Bender's Decomposition (GBD) for solving
MINLPs have been discussed in the literature since the early 1980's. General idea
in these algorithms is to provide overestimators (NLP subproblem) and under
estimators (MILP master problem) for the problem. Algorithms provide guarantee
of convergence in a finite number of iterations for problems with:
Convex objective function
Convex (in)equality constraints
Linear equality constraints
Since our problem has nonsmooth nonlinear functions in both in the objec
tive function and the constraints, we are highly unlikely to successfully apply one
of the above mentioned methods. We propose a bounded dynamic programming
(BDP) solution method that combines features of dynamic programming and
branchandbound methods to successfully handle much larger size problems (see
Morin and Marsten (1976) for details on BDP).
2.8.1 Dynamic Programming Formulation
Given a fixed Q value, the objective function (2.8) simplifies to F'
S(di/qil)2(Q2 q)/Q, which is separable in qi variables. If the vector q*(Q)
(q*, q4 .., q*) is an optimal solution to the problem with 1 q Q, then the
subvector (, q, q,.., q) should be optimal to the problem with =2* Q q,
as well. Otherwise, the vector q*(Q) can not be an optimal solution. Thus, the
principle of optimality holds for the problem and we can build the optimal solution
by consecutively deciding on the qi values. Let Ri be the total number of batches
committed to the first i products. The product index i is the stage index, and the
pair (i, Ri) represents the states of the DP formulation. Figure 28 illustrates the
underlying network structure of the problem.
n,
Figure 28: Network Representation of the Problem
In the network, each node represents a state in the DP formulation and arcs
reflect the acceptable values such that an arc is drawn from node (i 1, Ri1) to
node (i, Ri1 + qi) for each q, E Ai. We define the following recursive equation.
0, if i
mF(i, Ri)
minF(i R ) + 22 q2) i i < if i > 0
Note that, the recursive equation is a function of Q, that can be used for a
given Q value only. Also, the final state is (n, Q), and the solution to the problem,
F(n, Q), can be found with the following forward recursion (see Figure 29).
Algorithm Forward Recursion (Q)
1. Initialize F(0, 0) 0, F(i, Ri) = oo for all i E N, 1 < Ri < Di,
ActiveNodeso { (0, 0)} and ActiveNodesi 0 for all i E N.
2. For i = 1 to n, increase i by 1
{
3. For each node (i 1, Ri1) E ActiveNodesi 1
{
4. For each qi e Ai value that satisfies s, + Pid] <
5. IF (F(i, Rii + qi) > F(i 1, Ri,) ( )2 (Q2 q) /Q) THEN
6. Set F(i, R,i + qi)) F(i 1, R, ,) + ( ])2 (Q2 q2) /Q.
7. Update ActiveNodesi < ActiveNodesi U (i, Ri, + qi)
8. (Q) q
Figure 29: Pseudocode for Algorithm Forward Recursion
When the algorithm terminates, it returns q*(Q) vector that is an optimal
solution for the given Q value and F(n, Q) that is the objective value of this
optimal solution.
As in any DP model, the number of nodes grows exponentially with the
number of stages. In the final (nth) stage, we might have at most HL, ai nodes.
This is a straightforward result of the fact that each node in the (i 1)t stage is
connected to at most a, nodes in the ith stage. However, we also know that the
maximum index for a node in the final level is (n, D,). Therefore, the number of
nodes in the final level is at most min{Hnl a, D, n + 1}. An upper bound for
the total number of nodes in the graph is : 1 min{ 1, a,, Di i + 1}.
In order to derive the computational complexity of algorithm Forward
Recursion, we need to know the number of arcs, as well. The number of arcs
into the ith stage is a function of the number of nodes in the (i )st stage
and ai. An upper bound on this number is aimin{HJ a,Di_1 i + 2}.
Therefore, we claim that the total number of arcs in the network is at most
a i+ 2 ai amint{li1 ai, Di_1i+2}. In the worst case, steps six through eight are
executed as many times as the number of arcs in the network. Therefore, the worst
case time complexity of the algorithm is O(al + L 2 a min{fI a,, Di i + 2}).
Above algorithm solves the problem for a given Q value. However, the
problem does not take a Q value as an input parameter, but returns Q as a
result of the solution vector. Moreover, an arc cost can be calculated only if Q is
known. Therefore, we need to solve a DP for each possible value of Q. We propose
algorithm Solve with DP for the solution of the problem (see Figure 210).
The algorithm identifies all possible values of Q and el!ii1 ,i. algorithm Forward
Recursion successively to solve the emerging subproblems. The algorithm yields
Q* as the optimal Q value, which leads to the optimal solution vector q(Q*) and
also the optimal solution's objective value F(n, Q*).
Steps one through five can be considered as a preprocessing phase where the
reachable nodes are identified. The worst case complexity of this preprocessing
phase depends on the number of arcs in the network representation of the problem,
in that it is equal to that of algorithm Forward Recursion. Since algorithm
Forward Recursion is repetitively invoked in step eight, the preprocessing
phase does not affect the overall time complexity of the algorithm. Steps seven
through nine are repeated for each reachable node at the last stage of the DP
formulation. The number of reachable nodes is bounded above by D, n + 1.
Therefore, algorithm Forward Recursion may be invoked at most D, n + 1
Eliminate intermediate nodes which cannot yield a feasible solution. At
any stage, Ri may increase by at most di and at least 1 units. Therefore, as we
proceed towards the final state, we eliminate the intermediate nodes (i, Ri) with
Ri > Q n + i or Ri < Q D + Di.
Compare lower and upper bounds on total cost. Our preliminary experimen
tation has shown that the overall optimal solution tends to be obtained by higher
Q values. Therefore, we start with the largest possible Q value and proceed with
decreasing it.
We use, the objective function value of the best solution obtained so far,
F(n, Q*), as an upper bound on the optimal solution value. Clearly, the in
termediate solutions (i, Ri) with F(i, Ri) > F(n, Q*) can be immediately
eliminated. Moreover we define a lower bound G(i, Ri) on the objective con
tribution of the remaining part of the solution (subvector (qi+, .., q,)), and
eliminate nodes with F(i,Ri) + G(i,Ri) > F(n, Q*). In AppendixB, we derive
G(i, R) = QUi/(Q R)2 Vi/Q, where parameters Ui and Vi are also defined. Ui
and Vi, for all i E N can be computed in O(n2) time in a preprocessing phase, thus
the lower bounds for the future costs can be obtained in 0(1) time, when needed.
Furthermore, if all the states at a stage are eliminated, then the iteration is
terminated, since there is no way to reach the final state.
A lower limit for Q. Starting with a high value of Q and decreasing it at every
step requires a stopping condition based on a lower limit for Q values. The most
basic lower limit is QL = Ki 1 = n, as the smallest acceptable value is one, for
each i E N. For a better lower limit, we adapt G(i, Ri) to the complete solution
and obtain G(0, 0) = (Uo Vo)/Q. Using F(n, Q*) as the upper bound on the
objective value of the optimal solution, Q > QL (Uo Vo)/F(n, Q*) gives a
lower limit on Q value. Note that as a better solution is found, QL value increases.
Therefore, we update QL every time Q* is updated, and dynamically narrow the
search space on Q.
Incorporating all the bounding strategies developed, we propose algorithm
Solve with BDP (Figure 211) for the solution of the problem, using algorithm
Bounded Forward Recursion (Figure 212) for successively solving emerging DPs.
In the algorithm, [x] denotes the largest integer smaller than or equal to x.
Algorithm Solve with BDP
1. Initialize Q* 0, F(n, Q*) oo, ReachableNodeso { (0,0)} and
ReachableNodesi 0 for all i E N. Also compute Uo and V0.
2. For i = 1 to n, increase i by 1
{
3. For each node (i 1, Ri,) E ReachableNodesi 1
{
4. For each qi E Ai value
{
5. Update ReachableNodesi < ReachableNodesi U (i, Ri + qi)
}
}
6. Compute U, and Vi.
}
7. Set QL 1 and QU [T/max,{s, + p,, i E N}].
8. For each reachable node (n, R,) satisfying QL < R, < QU, in decreasing order
{
9. Set Q < R,.
10. Find the optimal solution for the given Q value using
Algorithm Bounded Forward Recursion.
11. If F(n, Q*) > F(n, Q) THEN
{
12. Update Q* Q.
13. Update QL [ L(Uo Vo)/F(n, Q*)].
}
Figure 211: Pseudocode for Algorithm Solve with BDP
2.8.3 Numerical Example
We illustrate the DP formulation and implementation of the bounding policies
on an example with n = 2 products. Let demand for the products be given by
dl = 15 and d2 = 10. Also, let the processing and setup time data be given by
Algorithm Bounded Forward Recursion (Q)
1. Initialize F(O, 0) 0, F(i, Ri) oc for all i E N and 1 < R, < Di, ActiveNodeso
{(0, 0)} and ActiveNodesi 0 for all i E N.
2. For i = 1 to n, increase i by 1
{
3. For each node (i 1, Ri) E ActiveNodesi 1 that satisfies
((Q D,+D i
(F(i 1, Ri) + G(i 1, Ri) < F(n, Q*)))
{
4. For each qi, A, value that satisfies s, + d] p, < T
{
5. IF (F(i, R, + q4) > F(i 1, Ri) + ( ) (Q2 q) /Q) THEN
{
6. Set F(i, R, + q) F(i 1, R i) + ( ])2 (Q2 q) /Q.
7. Update ActiveNodesi ActiveNodesi U (i, R, + qi)
8. qf (Q) qi
Figure 212: Pseudocode for Algorithm Bounded Forward Recursion
Pi 1, p2 = 2, si = 8 and s2 = 3 minutes. Our goal is to find the optimal bathing
plan for utilizing the total available time, T = 180 minutes.
The acceptable values for the given demand data are ql E A1 {1, 2, 3, 4, 5, 8, 15}
and q2 E A2{1, 2, 3, 4, 5, 10}. The network structure of the DP formulation is de
picted in Figure 213. As seen from the figure, the number of nodes and arcs
increase dramatically with the stages.
The straightforward application of the DP procedure (using algorithm Solve
with DP) requires solving the problem for each possible Q value, starting with
Q = 2. For Q = 2, the optimal costs of reaching the first stage nodes are 337.5,
0, 62.5, 96, 94.5, 120, 110.5 for R1 = 1 through 15, respectively. Here, we see
that the F(i, Ri) formula can yield negative values. However, after the first stage
we see that only the positive values can reach the destination node (2,2). And that
is only possible via node (1, 1) and using the arc that corresponds to q2 = 1. In this
case, the second arc's cost is 150 and the total cost is 487.5 units. This requires
updating Q* = 2, which automatically updates the best objective value obtained as
F(2, 2) 487.5.
For Q = 3, similarly, we calculate the first stage costs as 600, 106.67, 0, 37.33,
48, 73.33, 72. State (2,3) can be reached via two alternative paths, using either
(1, 1) or (1, 2) as the intermediate node. Using the node (1,1) yields a total cost
of 600+41.67 641.67 and the node (1,2) yields 106.67+266.67 373.33. Therefore,
the optimal solution for Q = 3 is q*(3) = (2, 1), with an objective function value
F(2, 3) 373.33. This solution beats the previous one, so Q*  3 is updated.
Same process is iterated until all the possible Q values, and the best solution
obtained is updated several times for F(2, 4) = 267, F(2, 5) = 185, F(2, 6) =
184.5, F(2, 7) = 166.86, F(2, 8) = 150, F(2, 9) = 121, F(2, 10) = 97.5,
F(2,11) = 183.64, F(2,12) 122.67, F(2,13) = 76.62, F(2,14) 212.57,
F(2, 15) = 128.33, F(2, 18) 70.22, F(2, 19) = 170.58 and F(2, 20) = 83.75.
For three of the possible Q values (16, 17 and 25), no feasible solution is found.
The values printed in bold show the solutions that require updating the best
solution found. The algorithm terminates yielding the optimal solution Q* = 18,
q*(18) = (8, 10) and F(2, 18) = 70.22, after solving 17 DPs and updating the best
solution 11 times. Figure 214 demonstrates the iterations and the solution values
found in each iteration.
As the example shows, the straightforward approach requires many calcula
tions that can be avoided. Now, we will demonstrate our bounded DP approach
on the same example. The first vital difference is starting from the highest feasible
Q value. The first bound implies Q < QU = 180/max{9, 5} 20, thus the
procedure starts with evaluating Q = 20. In the first stage, F(1, R1) values for
R1 < 20 10 = 10 and R1 > 20 1 = 19 are eliminated by the second bound, since
these nodes cannot reach the destination node (2, 20). So, the only calculation
describe a parametric heuristic solution procedure that we have developed for the
1st phase problem.
We start with explaining some basic principles which constitute the basis for
our heuristic solution procedure. A solution is a combination of decision variables
qj, i = 1, 2, .., n such that the value of each variable is chosen from the acceptable
values of the variable. In other words, constraint sets (2.10), (2.11) and (2.12) are
satisfied in any solution.
A feasible solution is a solution which satisfies the first constraint set (2.9).
In other words, if all the batches can be processed within a fixedlength time
bucket, then the solution is feasible. Here, the important point is that the length
of the timebucket is a function of the number of batches. That is, increasing the
number of batches for one of the products shortens the timebucket and may cause
infeasibility.
Let A be a selected product (A E N), Q' = iN\{A} qi and oq (q, qO, ...,q)
be a feasible solution. Since the solution is feasible, we know that the left hand side
of the constraint (2.9) is given as follows.
SA PA )(qA + Q') (< T), if i A
+ (q + Q) (< T), o/w
Now, if we increment qA from qA to qj (the smallest acceptable value for qA
which is greater than qg), the following inequalities hold.
qA > qA + 1 and rF < 1
Iq A I IqA
Depending on PA and SA values and the increase in qA, CA may increase
or decrease (CA < Cj5). On the other hand, since every other variable remains
unchanged (q qj, i c N\{A}), Ci (i E N\{A}) will definitely increase
(Cf > C, i e N\{A}). Therefore, this increment in qA may lead to an infeasible
solution (Cf > T for at least one i E N).
This result tells us that any increasing move can convert a feasible solution
to an infeasible one. However, exploiting the special structure of the problem we
develop a quick method which converts an infeasible solution to a feasible one (if
there exists one). The following discussion is the key to the method mentioned.
At this point we define critical constraint as the constraint with the maxi{si +
pi [L, i E N} value. If the solution on hand is feasible, then the critical constraint
is the tightest constraint. Similarly, in an infeasible solution, the critical constraint
is the most violated constraint. Also, critical variable is defined as the product
related to the critical constraint.
If there is a way to convert an infeasible solution to a feasible one by in
creasing the number of batches, it can only be possible by exploiting the critical
constraint.
Let us explain this fact in more detail. Assume that we are given an infeasible
solution q0 = (q0, qO,..., q ), such that infeasibility occurs for only one of the
products, namely A. Then, if we let Q' = ZCEN\{A qf, the left hand side of the
constraint (2.9) is as follows.
,SA + PA A (q +Q'), (>T), if i A
(si + Ti r ) (qi + Q'), (< T), o/w
Here, Cj is the critical constraint. Now, we will analyse the effect of increasing
any qi value to its next acceptable value. The possible outcomes of increasing qA
are:
Cf < T, for all i E N. The solution is feasible.
CA > T, CJ < T, for all i E N\{A}. The solution is still infeasible and the
infeasibility is still caused by product A only.
CA < T, C0 > T, for at least one i E N\{A}. The solution is still infeasible,
but the source of infeasibility has shifted.
CA > T, C1 > T, for at least one i E N\{A}. The solution is still infeasible,
and the sources of infeasibility have increased in number.
The first case occurs when a feasible solution can be reached by one increment
operation. The second case occurs when all the nonviolated constraints have
enough slack, but the violated constraint did not get enough relaxation by the
increment of qA. The third and fourth cases represent another critical situation
which is likely to occur. Since increasing qA increases C, (i E N\{A}) linearly, the
increment operation consumes slacks of the nonviolated constraints. Therefore,
slack in one or more of the nonviolated constraints may be depleted, which in turn
may shift the source of infeasibility or increase the number of violated constraints.
However, increasing a qj (i E N\{A}) value .li\,. yields the following.
CA > CA > T. Therefore, the solution is still infeasible.
Although this move might violate more than one constraint and shift the
critical constraint, we definitely know that this move can not lead to a feasible
solution. This proves that exploiting a noncritical constraint would lead to another
infeasible solution. This fact lets us conclude the following.
Let qO (q, q,..., q4) and q (q, q, ..., q) be two infeasible solutions such
that CA1 is the critical constraint, and ql is reached by increasing qA to qj (the
smallest acceptable value for qA which is greater than ql{) only. If there exists a
feasible solution which can be reached from q0 by increment operations only, then it
can be reached from q1 by increment operations only, as well.
We use this result as a basis in develop Algorithm NE Feasible Solution
Search (Figure 215). The algorithm examines solution space starting from any
given solution, by moving in the NorthEast (NE) direction, and reports the
existence of a feasible solution. Moving in the NE direction means increasing at
least one qc to its next acceptable value. For future use we define '..' corner as the
solution where the variables take their lowest possible values, that is q( 1, Vi, and
NE corner as the solution where qi di, Vi.
The 1 ....'!... .:..>rms exactly one increment c.r:... per iteration.
D. ::. ::: on the sta: :: solution, the algorithm :. .: .,,, at most Z^ '
iterations. i iteration requires :.....i... the critical constraint and checking if
the solution at hand is feasible or not, both these tasks take O((n) time. Ti:
the time complexity of the algorithm is O(n, E aj). C... :0..1 that the ",
direction has at most I I solutions which :: : or ,: not be feasible, the
algorithm scans this ..... significantly fast.
: : compleii of the :I. : thm is also easily calculated. The algorithm
stores the current solution which consists of n decision variables only, I;. :. the
space complexity is 0(n).
i .. algorithm can be reversed so that it scans the solution space in the SW
direction. Although the nature of the problem is quite i' '... this ease in find:
the closest feasible solution in a .. :.. : :. gives us an ad .: I .. to l. .
a : powerful heuristic algorithm.
S:.. I pr ::: with details of the algorithm, we explain the :: i: lorhood
structure used. A solution q1 (q, ',..., q') is a neighbor solution of qg
(q0 .,q ) if and . i. lif exactly one variable ( : gq) value : : in these
solutions, such that q. is the next acceptable value of qt in increasing or ...
direction. i :.:; is, it can be reached '. only one increment or decrement operation.
With this definition, any ... i, .1.. solution has at most 2nr neighbors, n of them
being in the i:: : : :: direction and the other n in the decreasing .1:: :
Now we can i:... .1 with c(. :::::. our heuristic .:.:.oach. The algorithm
takes three parameters; SearchDepth, MoveDepthh and EligibleNeighbors.
ScarchlD :" .:.:. ter denotes (I. .. of the search ;.:.,cess. If ScarchDl = 1,
then only the onestep neighbors are evaluated. If SearchDepth = 2, then the
neighbors' neighbors (the twostep neighbors) are also evaluated, and so on.
When SearchDepth > 1, then MoveDepth becomes an important parame
ter. If MoveDepth = 1, then the search terminates at a onestep neighbor. If
MoveDepth = 2, then the termination is two steps away from the Current Solution,
etc. The last parameter, EligibleNeighbors, denotes the eligible neighbors for
evaluation. If EligibleNeighbors = "f ;,,i1 1. ", then only feasible neighbors are con
sidered. If EligibleNeighbors = "both", then both feasible and infeasible neighbors
are considered for evaluation.
In the algorithm, evaluating a solution means calculating its objective func
tion value. When all the neighbors are evaluated, the following solutions are
identified. The Best Neighbor is a SearchDepthstep neighbor with the lowest
objective value of all the neighbors. The Leading Neighbor is a MoveDepthstep
neighbor which leads to the Best Neighbor. Similarly, the Best Feasible Neigh
bor is a SearchDepthstep feasible neighbor with the lowest objective value of
all the feasible neighbors and the Leading Feasible Neighbor is a MoveDepth
step feasible neighbor which leads to the Best Feasible Neighbor. Note that, if
EligibleNeighbors = "both", then Best Neighbor and Best Feasible Neighbor might
differ. If EligibleNeighbors = "feasible", then these two solutions are the same.
This also holds for the Leading Solution and the Leading Feasible Solution. A
move consists of updating the Current Solution and comparing the objective func
tion value of this solution to the Best Solution. If the solution on hand has a lower
objective value and is feasible, then the Best Solution is updated. Figure 216
shows the pseudocode for our heuristic algorithm, namely Algorithm Parametric
Heuristic Search.
The algorithm ,i1\al\ moves in the NE direction. The total number of
iterations performed by Algorithm 2 is at most i1 ai, where ai is the number of
2.10.1 Neighborhood Structure
We define a solution q = (q, q2, .., ) as a vector of the decision variables
such that all the decision variables take an acceptable value qi E Ai,Vi. We
further distinguish between feasible and infeasible solutions as follows. A solution is
feasible if it satisfies the first constraint set (2.9), otherwise it is iif,,, i.',1,.
Now, consider the following example with n = 2 products. Let dl = 15 and
d2 = 20; s = s2 = pi = 2 1 and T = 50 minutes. The above procedure
proposed for finding the acceptable values implies q1 E A1 {1, 2, 3, 4, 5, 8, 15} and
q2 E A2 = {1, 2, 3, 4, 5, 7, 10, 20}. By the definition of a solution, any pair of these
acceptable values is taken as a solution, for example (1,1), (5,5) and (5,20) are all
solutions. (5,5) is a feasible solution, since the batch sizes are 3 and 4 and these
batches take 4 and 5 minutes, where the length of the time bucket is 50/(5 + 5) = 5,
therefore both batches can be processed within the time bucket. Similarly, (5,20)
requires 4 and 2 minutes to process the batches, however the time bucket is too
short (50/(5+20) 2), thus this solution is infeasible.
A solution q (q, q,..., ql) is a neighbor of qO q..., qoo if and
only if exactly one variable value is different in these vectors, and the categorical
distance between the values of this decision variable is at most p, where p is a
user defined integer that is greater than or equal to one. If we denote the set
of neighbor solutions of a solution q0 with NS(qo, p) and consider q0 = (5, 5)
and p = 2 for example, then the neighbor solutions set of q0 is NS((5, 5), 2) =
{(3, 5),(4, 5),(8, 5),(15, 5),(5, 3),(5, 4),(5, 7),(5, 10)}. With this definition, a solution
may have at most 2 x p x n neighbors.
We identify two particular solutions. The first one is the origin, where each
decision variable takes its lowest possible value, that is qi = ,Vi E N. The second
one is the farthest corner of the solution space, where every decision variable takes
its largest value, that is qi = d, Vi G N. If we relax integrality of batch sizes, and
let ri = qi/Q where 0 < ri < 1 such that ~ieN ri 1 denote the proportion of
the number of batches of a certain product to the total number of batches, and
assume these proportions (ris) are fixed, then the objective function (2.8) becomes
~eN(d /r1)2(1 rf)/Q. This shows that larger Q values are expected to yield
better solutions. We can intuitively argue that the global optimum may be located
in the vicinity of the farthest corner of the solution space. Therefore, guiding the
search process towards this farthest corner might help us in finding the global
optimum.
2.10.2 Strategic Oscillation
The idea behind strategic oscillation (SO) is to drive the search towards and
away from boundaries of feasibility (Kelly, Golden and Assad, 1993; Dowsland,
1998). It operates by moving with local searches until hitting a boundary of feasi
bility. Then, it crosses over the boundary and proceeds into the infeasible region
for a certain number of moves. Then, a search in an opposite direction, which
results in reentering the feasible region, is performed. Crossing the boundary from
feasible to infeasible and from infeasible to feasible regions continuously during the
search process creates some form of oscillation, which gives its name to the method
(Amaral and Wright, 2001; Dowsland, 1998; Glover, 2000; Kelly et al., 1993).
There are several reasons for considering the use of SO in solving combinato
rial optimization problems. Two such cases are depicted in Figure 217. In (a) we
see a case where the feasible region is composed of several convex but disjoint sets,
while in (b) the feasible region is a nonconvex set. In the first case, the only way
to reach the global optimum by maintaining feasibility at all times, is to start from
a solution in the same set with the global optimum, which is highly unlikely. In
the second case, the starting solution may be a local optimum, and we may not be
able to reach the global optimum by a neighborhood search method that maintains
feasibility at all times, due to the shape of the feasible region. However, using SO

PAGE 1
BA TCH PR ODUCTION SMOOTHING WITH V ARIABLE SETUP AND PR OCESSING TIMES By MESUT Y A VUZ A DISSER T A TION PRESENTED TO THE GRADUA TE SCHOOL OF THE UNIVERSITY OF FLORID A IN P AR TIAL FULFILLMENT OF THE REQUIREMENTS F OR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORID A 2005
PAGE 2
Cop yrigh t 2005 b y Mesut Y a vuz
PAGE 3
I dedicate this w ork to m y family and m y belo v ed girlfriend Deniz Kazanci.
PAGE 4
A CKNO WLEDGMENTS I sincerely thank Suleyman T ufek ci for being m y dissertation advisor. Looking bac k, I could not ha v e ask ed for a better person to be m y thesis advisor than him. He has been a great men tor for m y future career in academia, and I thank him for all of the exp eriences w e ha v e shared. He allo w ed me the space to think creativ ely and he w as alw a ys there when I needed help and guidance on both academic and personal matters. His fatherly and compassionate attitude has made him v ery special for me. I hope w e will main tain this sincere relationship for a lifetime. I wish to express m y gratitude to the mem bers of m y supervisory committee, Elif Ak cali, P anos P ardalos, Joseph Geunes and Haldun A ytug, for their assistance and guidance. I especially thank Elif Ak cali for her guidance in m y w ork related to metaheuristics. Deniz Kazanci deserv es the w armest thanks for being m y girlfriend and supporting me con tin uously during the nal y ear of m y studies. Without her lo v e and support, I w ould not ha v e been able to nish m y w ork on time. A t the times I felt stressed out and desperate, she w as alw a ys there to help me, to motiv ate me and to get me on trac k. Also, I thank m y paren ts and m y dear friend Um ut Inan for supporting me constan tly Their support helped me in o v ercoming the diculties of studying o v erseas and focusing on m y researc h, especially in the rst t w o y ears. iv
PAGE 5
T ABLE OF CONTENTS page A CKNO WLEDGMENTS . . . . . . . . . . . . . . iv LIST OF T ABLES . . . . . . . . . . . . . . . . ix LIST OF FIGURES . . . . . . . . . . . . . . . . xii ABSTRA CT . . . . . . . . . . . . . . . . . . xv CHAPTER 1 INTR ODUCTION . . . . . . . . . . . . . . . 1 1.1 Batc hing Decisions in Production . . . . . . . . . 3 1.2 The T o y ota Production System . . . . . . . . . 8 1.3 Production Smoothing . . . . . . . . . . . . 12 1.3.1 Demand Stabilization . . . . . . . . . . 14 1.3.2 Batc h Method for Production Smoothing . . . . . 18 1.3.3 Heijunk a . . . . . . . . . . . . . . 19 1.4 T o y ota's W a y for Production Smoothing . . . . . . . 19 1.5 Man ufacturing En vironmen t T ypes . . . . . . . . 22 1.6 Con tribution of the Dissertation . . . . . . . . . 25 2 SINGLEMA CHINE SINGLELEVEL MODEL . . . . . . . 29 2.1 Literature Review . . . . . . . . . . . . . 31 2.22 ndPhase F orm ulation . . . . . . . . . . . . 36 2.3 Exact Methods for the2 ndphase problem . . . . . . . 38 2.4 Problem Specic Heuristics for the2 ndPhase Problem . . . 44 2.5 MetaHeuristics for the2 ndPhase Problem . . . . . . 51 2.61 stPhase Problem F orm ulation . . . . . . . . . 57 2.7 Structural Properties of the1 stPhase Problem . . . . . 62 2.8 Exact Methods for the1 stPhase Problem . . . . . . . 66 2.8.1 Dynamic Programming F orm ulation . . . . . . 67 2.8.2 Bounding Strategies . . . . . . . . . . . 71 2.8.3 Numerical Example . . . . . . . . . . . 73 2.9 Problem Specic Heuristics for the1 stPhase Problem . . . 76 2.10 MetaHeuristics for the1 stPhase Problem . . . . . . 82 2.10.1 Neigh borhood Structure . . . . . . . . . . 83 2.10.2 Strategic Oscillation . . . . . . . . . . . 84 2.10.3 Scatter Searc h and P ath Relinking . . . . . . 85 v
PAGE 6
2.11 Comparativ e Study . . . . . . . . . . . . . 91 2.11.1 Design of Experimen ts . . . . . . . . . . 91 2.11.2 Methods . . . . . . . . . . . . . . 92 2.11.3 Results and Discussion . . . . . . . . . . 109 3 FLO WSHOP SINGLELEVEL MODEL . . . . . . . . . 112 3.11 stPhase Problem F orm ulation . . . . . . . . . 114 3.2 Structural Properties of the1 stPhase Problem . . . . . 116 3.3 Exact Methods for the1 stPhase Problem . . . . . . . 118 3.3.1 Dynamic Programming F orm ulation . . . . . . 119 3.3.2 Bounding Strategies . . . . . . . . . . . 123 3.4 Problem Specic Heuristics for the1 stPhase Problem . . . 124 3.5 MetaHeuristics for the1 stPhase Problem . . . . . . 130 3.5.1 Neigh borhood Structure . . . . . . . . . . 130 3.5.2 P ath Relinking . . . . . . . . . . . . 131 3.6 Comparativ e Study . . . . . . . . . . . . . 134 3.6.1 Design of Experimen ts . . . . . . . . . . 134 3.6.2 Methods . . . . . . . . . . . . . . 135 3.6.3 Results and Discussion . . . . . . . . . . 138 4 SINGLEMA CHINE MUL TILEVEL MODEL . . . . . . . 141 4.1 Literature Review . . . . . . . . . . . . . 143 4.22 ndPhase F orm ulation . . . . . . . . . . . . 149 4.3 Exact Methods for the2 ndphase problem . . . . . . . 153 4.4 Problem Specic Heuristics for the2 ndPhase Problem . . . 155 4.5 MetaHeuristics for the2 ndPhase Problem . . . . . . 155 4.61 stPhase Problem F orm ulation . . . . . . . . . 157 4.7 Structural Properties of the1 stPhase Problem . . . . . 160 4.8 Exact Methods for the1 stPhase Problem . . . . . . . 160 4.8.1 Dynamic Programming F orm ulation . . . . . . 161 4.8.2 Bounding Strategies . . . . . . . . . . . 163 4.9 Problem Specic Heuristics for the1 stPhase Problem . . . 165 4.10 MetaHeuristics for the1 stPhase Problem . . . . . . 171 4.10.1 Neigh borhood Structure . . . . . . . . . . 171 4.10.2 P ath Relinking . . . . . . . . . . . . 172 4.11 Comparativ e Study . . . . . . . . . . . . . 175 4.11.1 Researc h Questions . . . . . . . . . . . 175 4.11.2 Design of Experimen ts . . . . . . . . . . 175 4.11.3 Methods . . . . . . . . . . . . . . 177 4.11.4 Results and Discussion . . . . . . . . . . 181 vi
PAGE 7
5 FLO WSHOP MUL TILEVEL MODEL . . . . . . . . . 192 5.11 stPhase Problem F orm ulation . . . . . . . . . 193 5.2 Structural Properties of the1 stPhase Problem . . . . . 195 5.3 Exact Methods for the1 stPhase Problem . . . . . . . 196 5.3.1 Dynamic Programming F orm ulation . . . . . . 196 5.3.2 Bounding Strategies . . . . . . . . . . . 201 5.4 Problem Specic Heuristics for the1 stPhase Problem . . . 202 5.5 MetaHeuristics for the1 stPhase Problem . . . . . . 208 5.5.1 Neigh borhood Structure . . . . . . . . . . 208 5.5.2 P ath Relinking . . . . . . . . . . . . 208 5.6 Comparativ e Study . . . . . . . . . . . . . 212 5.6.1 Researc h Questions . . . . . . . . . . . 212 5.6.2 Design of Experimen ts . . . . . . . . . . 213 5.6.3 Methods . . . . . . . . . . . . . . 214 5.6.4 Results and Discussion . . . . . . . . . . 217 6 SUMMAR Y AND CONCLUSIONS . . . . . . . . . . 221 REFERENCES . . . . . . . . . . . . . . . . . 225 APPENDIX A DERIVING OBJECTIVES FR OM GO ALS F OR THE SINGLELEVEL MODELS . . . . . . . . . . . . . . . . . 232 A.1 DerivingF 1F romG 1 . . . . . . . . . . . . 232 A.2 ExploitingG 2 . . . . . . . . . . . . . . 234 A.3 ExploitingG 3 . . . . . . . . . . . . . . 234 A.4 ExploitingG 4 . . . . . . . . . . . . . . 235 A.5 ExploitingG 5 . . . . . . . . . . . . . . 235 A.6 ExploitingG 6 . . . . . . . . . . . . . . 235 B LO WER BOUND F OR THE FUTURE P A TH IN THE DP F ORMULA TION OF THE SINGLELEVEL MODELS . . . . . . . 237 C FINE TUNING THE MET AHEURISTIC METHODS F OR THE SMSL MODEL . . . . . . . . . . . . . . . . . . 240 C.1 Strategic Oscillation . . . . . . . . . . . . 240 C.2 Scatter Searc h . . . . . . . . . . . . . . 243 C.3 P ath Relinking . . . . . . . . . . . . . . 248 D FINE TUNING THE P A TH RELINKING METHOD IN THE FSSL MODEL . . . . . . . . . . . . . . . . . . 251 E DERIVING OBJECTIVES FR OM GO ALS F OR THE MUL TILEVEL MODELS . . . . . . . . . . . . . . . . . 253 vii
PAGE 8
F LO WER BOUND F OR THE FUTURE P A TH IN THE DP F ORMULA TION OF THE MUL TILEVEL MODELS . . . . . . . 255 G FINE TUNING THE P A TH RELINKING METHOD IN THE SMML MODEL . . . . . . . . . . . . . . . . . . 258 H FINE TUNING THE P A TH RELINKING METHOD IN THE FSML MODEL . . . . . . . . . . . . . . . . . . 260 BIOGRAPHICAL SKETCH . . . . . . . . . . . . . . 261 viii
PAGE 9
LIST OF T ABLES T able page 21 Example for Algorithm Nearest Point . . . . . . . . 40 22 Sequence F ound b y Algorithm Nearest Point . . . . . . 47 23 Sequence F ound b y Algorithm MH1 . . . . . . . . . . 48 24 Sequence F ound b y Algorithm MH2 . . . . . . . . . . 50 25 Sequence F ound b y Algorithm Twostage . . . . . . . . 52 26 Correlations Bet w een Alternativ e1 stPhase Objectiv es andZ . . 61 27 Summary of the Fine T uning Process for the SO Method . . . 106 28 Summary of the Fine T uning Process for the SS Method . . . . 107 29 Summary of the Fine T uning Process for the PR Method . . . 108 210 Summary of Results . . . . . . . . . . . . . . 110 31 Summary of the Fine T uning Process for the PR Method . . . 138 32 Summary of Results . . . . . . . . . . . . . . 139 41 Summary of the Fine T uning Process for the PR Method . . . 181 42 Summary of Results . . . . . . . . . . . . . . 182 43 Summary of Supermark et In v en tory Lev els . . . . . . . 183 51 Summary of the Fine T uning Process for the PR Method . . . 217 52 Summary of Results . . . . . . . . . . . . . . 218 53 Summary of Supermark et In v en tory Lev els . . . . . . . 219 61 Problem Complexities and W orstCase Time Complexities of the Exact Solution Methods . . . . . . . . . . . . . 224 62 A v erage P erformance of our Heuristic Methods on the1 stPhase Problem . . . . . . . . . . . . . . . . . . 224 C1 Analysis of the P arametersRangeandIterativefor the SO Method 240 C2 t test Results of the P arametersRangeandIterativefor the SO Method 241 ix
PAGE 10
C3 Analysis of the P arametersMaxIters,NFMandNIMfor the SO Method . . . . . . . . . . . . . . . . . 241 C4 t test Results of the P arametersMaxIters,NFMandNIMfor the SO Method . . . . . . . . . . . . . . . . 242 C5 Analysis of the P arametersNFMandRelativeImprovementfor the SO Method . . . . . . . . . . . . . . . . 242 C6 t test Results of the P arametersNFMandRelativeImprovementfor the SO Method . . . . . . . . . . . . . . . 242 C7 Analysis of the P arametersPSHMethodsandDiversificationfor the SS Method . . . . . . . . . . . . . . . 243 C8 t test Results of the P arametersPSHMethodsandDiversificationfor the SS Method . . . . . . . . . . . . . . 244 C9 Analysis of the P arametersLSinPreProcessandLStoRefSetPPfor the SS Method . . . . . . . . . . . . . . . 244 C10 t test Results of the P arametersLSinPreProcessandLStoRefSetPPfor the SS Method . . . . . . . . . . . . . . 244 C11Analysis of the P arametersLSinIterationsandLStoRefSetItersfor the SS Method . . . . . . . . . . . . . . . 245 C12 t test Results of the P arametersLSinIterationsandLStoRefSetItersfor the SS Method . . . . . . . . . . . . . . 245 C13Analysis of the P arametersSubsetSizeandNICfor the SS Method 246 C14Analysis of the P arameterNECfor the SS Method . . . . . 246 C15 t test Results of the P arameterNECfor the SS Method . . . . 247 C16Analysis of the P arameterbfor the SS Method . . . . . . 247 C17 t test Results of the P arameterbfor the SS Method . . . . . 247 C18Analysis of the P arametersLSinIterationsandLStoRefSetItersfor the PR Method . . . . . . . . . . . . . . . 248 C19 t test Results of the P arametersLSinIterationsandLStoRefSetItersfor the PR Method . . . . . . . . . . . . . . 248 C20Analysis of the P arametersbandNTSfor the PR Method . . . 249 C21 t test Results of the P arameterNTSfor the PR Method . . . 249 C22Extended Analysis of the P arameterbfor the PR Method . . . 250 x
PAGE 11
C23 t test Results of the P arameterbfor the PR Method . . . . . 250 D1 Analysis of the P arametersbandNTSfor the PR Method . . . 251 D2 t test Results of the P arametersbandNTSfor the PR Method . . 252 G1 Analysis of the P arametersbandNTSfor the PR Method . . . 258 G2 t test Results of the P arametersbandNTSfor the PR Method . . 258 G3 Analysis of the P arametersb,NTSandPSHMethodsfor the PR Method . . . . . . . . . . . . . . . . . 259 G4 t test Results of the P arametersbandNTSfor the PR Method . . 259 H1 Analysis of the P arametersbandNTSfor the PR Method . . . 260 H2 t test Results of the P arametersbandNTSfor the PR Method . . 260 xi
PAGE 12
LIST OF FIGURES Figure page 11 Automotiv e Pressure Hose Man ufacturing Process Outline . . . 2 12 Riv erIn v en tory Analogy . . . . . . . . . . . . 4 13 Demand Stabilization Ov er Time . . . . . . . . . . 15 14 Eect of Production Smoothing on In v en tory Lev el . . . . . 17 15 Ideal and A ctual Consumptions . . . . . . . . . . 21 21 Pseudocode for Algorithm Nearest Point . . . . . . . 39 22 Pseudocode for Algorithm Modified Nearest Point . . . . 40 23 Graph Represen tation of the Assignmen t Problem F orm ulation . . 44 24 Pseudocode for Algorithm MH1 . . . . . . . . . . . 45 25 Pseudocode for Algorithm MH2 . . . . . . . . . . . 49 26 Pseudocode for Algorithm Twostage . . . . . . . . . 51 27 Pseudocode for Algorithm Find Acceptable Values . . . . 64 28 Net w ork Represen tation of the Problem . . . . . . . . 68 29 Pseudocode for Algorithm Forward Recursion . . . . . . 69 210 Pseudocode for Algorithm Solve with DP . . . . . . . 71 211 Pseudocode for Algorithm Solve with BDP . . . . . . . 73 212 Pseudocode for Algorithm Bounded Forward Recursion . . . 74 213 Net w ork Represen tation of the Example . . . . . . . . 101 214 DP Solution to the Numerical Example . . . . . . . . 102 215 Pseudocode for Algorithm NE Feasible Solution Search . . . 102 216 Pseudocode for Algorithm Parametric Heuristic Search . . . 103 217 Examples for F easible Regions that Can Benet from Strategic Oscillation . . . . . . . . . . . . . . . . . 103 xii
PAGE 13
218 Pseudocode for Algorithm SO . . . . . . . . . . . 104 219 Example for SS and PR Methods . . . . . . . . . . 104 220 Pseudocode for Algorithm SS/PR . . . . . . . . . . 105 31 Net w ork Represen tation of the Problem . . . . . . . . 120 32 Pseudocode for Algorithm Forward Recursion . . . . . . 121 33 Pseudocode for Algorithm Solve with DP . . . . . . . 122 34 Pseudocode for Algorithm Solve with BDP . . . . . . . 125 35 Pseudocode for Algorithm Bounded Forward Recursion . . . 126 36 Pseudocode for Algorithm NE Feasible Solution Search . . . 127 37 Pseudocode for Algorithm Parametric Heuristic Search . . . 129 38 Pseudocode for Algorithm PR . . . . . . . . . . . 132 41 Pseudocode for Algorithm One Stage . . . . . . . . . 185 42 Net w ork Represen tation of the Problem . . . . . . . . 186 43 Pseudocode for Algorithm Forward Recursion . . . . . . 187 44 Pseudocode for Algorithm Solve with DP . . . . . . . 187 45 Pseudocode for Algorithm Solve with BDP . . . . . . . 188 46 Pseudocode for Algorithm Bounded Forward Recursion . . . 189 47 Pseudocode for Algorithm NE Feasible Solution Search . . . 189 48 Pseudocode for Algorithm Parametric Heuristic Search . . . 190 49 Pseudocode for Algorithm PR . . . . . . . . . . . 191 51 Net w ork Represen tation of the Problem . . . . . . . . 197 52 Pseudocode for Algorithm Forward Recursion . . . . . . 198 53 Pseudocode for Algorithm Solve with DP . . . . . . . 200 54 Pseudocode for Algorithm Solve with BDP . . . . . . . 203 55 Pseudocode for Algorithm Bounded Forward Recursion . . . 204 56 Pseudocode for Algorithm NE Feasible Solution Search . . . 205 57 Pseudocode for Algorithm Parametric Heuristic Search . . . 207 xiii
PAGE 14
58 Pseudocode for Algorithm PR . . . . . . . . . . . 209 xiv
PAGE 15
Abstract of Dissertation Presen ted to the Graduate Sc hool of the Univ ersit y of Florida in P artial F ulllmen t of the Requiremen ts for the Degree of Doctor of Philosoph y BA TCH PR ODUCTION SMOOTHING WITH V ARIABLE SETUP AND PR OCESSING TIMES By Mesut Y a vuz Ma y 2005 Chair: Suleyman T ufek ci Major Departmen t: Industrial and Systems Engineering Man y companies use mixedmodel production systems, running under the justintime (JIT) philosoph y in order to ecien tly meet customer demands for a v ariet y of products. Suc h systems require demand be stable and production sequence be lev eled. The production smoothing problem (PSP) aims to nd lev el sc hedules at the nal lev el of a m ultilev el man ufacturing system. The products in a lev el sc hedule are dispersed o v er the horizon as uniformly as possible. In this area, most researc h has focused on sequencing JIT mixedmodel assem bly lines where setup and c hangeo v er times are assumed negligible. Ho w ev er, in man y o w lines in the real life, a signican t amoun t of time needs to be dedicated to setup/c hangeo v er among dieren t products. Therefore, for suc h systems the existing literature falls short of helping to smooth production. W e consider t w o alternativ e man ufacturing en vironmen ts, a single mac hine or a o wshop, at eac h lev el of the man ufacturing system; and study both singlelev el and m ultilev el v ersions of the PSP W e allo w the products to ha v e arbitrary nonzero processing and setup time requiremen ts on the mac hines, where the total productiv e time is limited. Here, one m ust decide on batc h sizes and n um ber of xv
PAGE 16
batc hes for eac h product, before sequencing the batc hes. W e dev elop a t w ophase solution approac h that is applicable on all four models. The rst phase nds appropriate batc h sizes for the products and the second phase nds a lev el sequence of the batc hes of products. W e relate the second phase problem to the existing solution methods a v ailable in the literature, and focus on the rst phase problem. W e build an optimization model for the rst phase problem; sho w that it is NPcomplete; devise heuristic methods for its solution; implemen t metaheuristic tec hniques; and dev elop exact solution procedures based on dynamic programming (DP) and branc handbound (B&B) methods. Through computational experimen ts, w e compare the performance of our solution methods. The results sho w that our exact methods are ecien t in solving mediumsized instances of the problem. Also, our metaheuristic implemen tations yield nearoptimal solutions in almost realtime. xvi
PAGE 17
CHAPTER 1 INTR ODUCTION This dissertation aims to solv e a pr o duction smo othing pr oblem (PSP) encoun tered in mixedmodel man ufacturing en vironmen ts whic h are operated under the justintime (JIT) philosoph y where dieren t products ma y ha v e arbitrary setup and processing times. Mixedmodel man ufacturing systems are widely used b y man ufacturers, in order to meet customer demands ecien tly for a v ariet y of products. Operating suc h mixedmodel systems with JIT principles requires demand to be stable and production sequence to be lev eled. The PSP aims at nding smoothed or lev eled production sc hedules. A majorit y of the curren t literature focuses on special systems, suc h as the T o y ota's production system, whic h is a sync hronized assem bly line. Ho w ev er, in reallife, there exist man y mixedmodel man ufacturing systems whic h are far from being sync hronized assem bly lines or ev en assem bly lines. Therefore, the w ellestablished literature of production smoothing is not directly applicable in these systems. W e presen t suc h an example from the automotiv e industry A leading automotiv e pressure hose man ufacturer runs a production facilit y in Ocala, Florida, to produce v arious t ypes of pressure hoses for the automotiv e industry Production of these hoses is ac hiev ed through a sixstage process, where the rst three stages are hea vier processes and are operated in large batc hsizes, whereas the latter three stages are more productspecic operations and are operated in smaller batc hsizes. Also, the latter stages are separated from the former ones b y an in v en tory of partiallyprocessed goods. The outline of the system is depicted in Figure 11 1
PAGE 18
2 Rubberr Mixerr Continuousr Hose Mfg.r Ovenr Curingr Cutr Mold &r Vulcanizer Assemblyr Reeledr Hoser Inventoryr Rubber rawr materialsr Finishedr goodsr Initializationr r Bring reel into the cuttingr stationr r Setup cutting machine for ar particular hose typerFor each item in the batchr r Cut hoses to desired lengthr For each batch of the ovenr r Put correct pins on the rackr for hoses on the cartr r Place hoses over pinsr r Place rack in the vulcanizingr ovenr r Remove vulcanized hosesr from the pinsr Initializationr r Setup the fixture for ar particular hose typerFor each item in the batchr r Place hose on the fixturer r Trim endsr r Print labels on the hoser r Attach two end clips on ther hoser r Replace hoserFigure 11: Automotiv e Pressure Hose Man ufacturing Process Outline The latter three stages constitute a o wline. Ho w ev er, the mold operation uses the vulcanizing o v en, whic h is a batc h processor. Moreo v er, the setup and processing time requiremen ts of the products dier both among the stages and among the products. Therefore, small in v en tory buers are created bet w een the consecutiv e stages. The described reallife system is far from being a sync hronized assem bly line. Therefore, the operation of the system under the JIT philosoph y necessitates a new production smoothing method. In this case, our method proposes focusing on the assem bly stage (whic h can be seen as a singlemac hine without loss of generalit y) and establishing the batc hing policy and the sc hedule with respect to this stage. The emplo ymen t of a pullt ype shopoor con trol spreads the sc hedule set for the assem bly stage to the en tire system. Also, since w e are focusing on the bottlenec k of the system, the other stages can be subordinated with relativ e ease. The abo v e example states a need for alternativ e JIT production planning methods whic h can be used in complex man ufacturing systems, and motiv ates our study T o assure a clear understanding of our researc h, w e proceed with eac h
PAGE 19
3 aspect of the problem one b y one, then com bine these distinct aspects to form a statemen t of the problem of in terest. 1.1 Batc hing Decisions in Production The classical industrial engineering poin t of view treats setups as nonv alueadding operations and tries to minimize the time and resources allocated to these nonv alueadding operations. Some v ery popular managemen t philosophies suc h as lean man ufacturing (LM) and justintime man ufacturing dene setup as a cause of w aste and suggest tools to eliminate this w aste. Singlemin uteexc hangeofdies (SMED) is a tec hnique, frequen tly referred to in lean man ufacturing, that aims at reducing the c hangeo v er (setup) times from one product to another. Ho w ev er, unfortunately the w aste caused b y the setups ma y be una v oidable. T o eliminate w aste, practitioners perform man ufacturing operations in batc hes, that is producing a n um ber of products of the same kind with only one setup. This fact is the most importan t justication of economies of scale in man ufacturing. Ho w ev er, new managemen t paradigms do not praise economies of scale; on the con trary they focus on a v oiding batc h production and try to adopt so called onepie c eow What is wrong in the traditional methods of ac hieving production eciency? What is that they do not lik e in the economies of scale? The answ er to these questions is high in v en tory lev els caused b y batc h production. Competition has forced man ufacturers to increase product v ariet y and shorten lead times. Moreo v er, the demand is highly uctuating for some industries and the future is unpredictable. A drastic c hange in demand migh t cause shortcomings in meeting the demand and losing customers. F urthermore, the man ufacturer in v ests a great amoun t of money and time to a pile of nished goods whic h can not be sold at the time and hence has no economic v alue. Moreo v er, operating with large batc hes results in longer lead times, whic h ma y be equiv alen t
PAGE 20
4 to a decrease in customer satisfaction. Larger batc hes further exacerbate qualit y related problems. High in v en tory lev els in a man ufacturing system ma y hide some problems existing in the system. A popular analogy is to compare a production process with a riv er and the lev el of in v en tory with the w ater lev el in the riv er (see Figure 12 ). When the w ater lev el is high, the w ater will co v er the roc ks. Lik ewise, when in v en tory lev els are high, problems are mask ed. Ho w ev er, when the lev el (i.e., the in v en tory) is lo w, the roc ks (i.e., the problems) are eviden t. The health y approac h is not to use high in v en tory lev els to a v oid problems, but to solv e root problems and later decrease the in v en tory lev els. Qualityr Leadr timesr Setupr timesr Machiner typesr Productr designr Work flowr Inventoryr levelr Figure 12: Riv erIn v en tory Analogy High in v en tory lev els cause w aste for the man ufacturer. The reasons for this include high carrying costs and long lead times. One can extend the disadv an tages of in v en tories, but just lik e setups, they are una v oidable. In other w ords, unless one reduces setup times and costs to a negligible lev el, the capacit y constrain t will not allo w the ideal onepieceo w, in whic h all the products are man ufactured and carried in batc hes of one. This brings a tradeo problem bet w een setups and
PAGE 21
5 in v en tories. This tradeo has captured a lot of academic in terest, under the broad topic of lotsizing. Researc h on lotsizing traces bac k to the classical economic order quan tit y (EOQ) model of Harris (1913). Harris deriv ed the optimal order quan tit y for a single item, considering a stable demand and ordering and in v en tory holding costs. Later, v ariations of the EOQ model ha v e been dev eloped for determining economic man ufacturing quan tities (Nahmias, 2001). Due to single item, uncapacitated man ufacturing assumptions, optimal solution calculation is an easy task. A v ariation of the classical EOQ problem is the economic lot sc heduling problem (ELSP) (Elmaghrab y 1978) whic h tak es capacit y constrain ts in to accoun t in a m ultiitem man ufacturing en vironmen t. ELSP too assumes a static, kno wn demand and an innite planning horizon. An importan t v arian t of the ELSP is the common cycle sc heduling problem (CCSP) (ElNajda wi and Kleindorfer, 1993) In the CCSP the planning horizon is divided in to xedlength periods (cycles) and the products are assigned to these cycles. Some products ma y be produced once in sev eral cycles, whereas some others ma y be sc heduled sev eral batc hes in a cycle. An importan t v ariation of the EOQ problem is the so called W agnerWhitin problem (WW) (W agner and Whitin, 1958). The WW problem assumes a nite planning horizon divided in to sev eral discrete periods with v arying demands. Ho w ev er, the WW model ignores capacit y constrain ts, is concerned with only singleitem structures, and hence is not applicable in man y reallife situations. A more comprehensiv e model can include capacit y constrain ts in a m ultiitem en vironmen t, considering dynamic demand for sev eral discrete time periods. An extension to the WW model including capacit y constrain ts is the capacitated lot sizing problem (CLSP) (Baran y v an Ro y and W olsey 1984) In the CLSP one decides what to produce in eac h period. Sev eral items can be produced
PAGE 22
6 in the same period, if the capacit y allo ws. Th us, items can be produced in adv ance in order to sa v e setup costs obeying the capacit y constrain ts. A v arian t of the CLSP is the discrete lot sizing and sc heduling problem (DLSP) (Salomon, Kroon, Kuik and W assenho v e, 1991). The DLSP divides eac h period in to sev eral micro periods and assumes allornothing production for these micro periods. A product can be produced during sev eral consecutiv e periods with only one setup. The allornothing assumption of the DLSP is found restrictiv e and a more realistic problem form ulation (the con tin uous setup lot sizing problem, CSLP (Karmark ar and Sc hrage, 1985)) has been dened b y relaxing this assumption. A t rst sigh t the dierence bet w een the DLSP and the CSLP looks negligible, but it becomes importan t when one sc hedules a product for t w o distinct periods and no other product is sc heduled in bet w een these t w o periods. In this case, the setup cost is incurred t wice in the DLSP but only once in the CSLP The major dra wbac k of the CSLP is that if the capacit y of a period is not used fully for the product sc heduled in that period, then the remaining capacit y is left un used. In the proportional lotsizing and sc heduling problem (PLSP) (Drexl and Haase, 1995) t w o distinct products can be sc heduled for an y period, so that the capacit y is fully utilized. In the PLSP only one setup per period is allo w ed. Although the allornothing assumption (in troduced with the DLSP) pro vides computational eciency it is impractical for real w orld problems, since the n um ber of periods ma y easily become prohibitiv ely large. As a solution to this tradeo bet w een w orking with few or man y periods, the general lot sizing and sc heduling problem (GLSP) (Fleisc hman and Meyr, 1997) has been dev eloped. In the GLSP a userdened parameter restricts the n um ber of lots (setups) per period. Among the discretetime lotsizing models CLSP is kno wn as a bigbucket model and the rest (DLSP through GSLP abo v e) are smal lbucket models. A
PAGE 23
7 com bination of the big and smallbuc k et models is the socalled capacitated lotsizing problem with link ed lot sizes (CLSLP) (Suerie and Stadtler, 2003). Multiple products can be produced in a period (whic h is the propert y of bigbuc k et models) and a setup state can be carried o v er from one period to the next (whic h is a propert y seen in smallbuc k et models). Among the problems men tioned so far, only EOQ and ELSP are based on con tin uous time assumption. All the other problems ha v e a discrete time structure. An importan t v arian t of the ELSP is the socalled batc hing and sequencing problem (BSP) (Jordan, 1996). In the BSP dynamic demand is allo w ed but eac h order is c haracterized b y its deadline and size; in other w ords, orders are tak en and processed as jobs on a single mac hine. One fundamen tal assumption of the BSP is that the speed of the mac hine does not depend on time; th us processing times of the jobs do not depend on the sc hedule. Another importan t assumption is that jobs are not allo w ed to split, but jobs can be com bined to a v oid setups. The follo wing papers can be referred to for further reading on the topic. Drexl and Kimms (1997) pro vide an extensiv e literature review on lot sizing and sc heduling problems. P otts and v an W assenho v e (1992) poin t to the importance of in tegrating sc heduling methods with batc hing and lot sizing methods and report on the complexities of the algorithms. Unfortunately all the literature men tioned abo v e is related to traditional man ufacturing methods and MRP Eac h problem tries to calculate optimal batc h sizes in order to minimize the total cost. These problems are dened based on the existing properties of the man ufacturing system under consideration. Therefore, none of them can create a breakthrough solution. Their con tribution is limited to extending the MRP literature to some new man ufacturing systems. Ev en the simplest form of the EOQ model giv es us a direction. If K is the setup (ordering) cost, D is demand per period, h holding cost per unit per period
PAGE 24
8 andQ is the optimal batc h size, then the optimal batc h size is giv en b yQ = r 2 KD h :If w e can eliminate the setup cost (i.e., decrease it to zero lev el) then the optim um batc h size will be zero, as w ell. Zero batc h size is not applicable; th us w e in terpret it as ordering one unit at a time, or onepieceo w. This basic fact w as w ell realized b y Japanese man ufacturers and they ha v e been able to dev elop a breakthrough in man ufacturing managemen t philosoph y Decades ago, cost adv an tage and competitiv eness of Japanese man ufacturers raised the question if their systems w ere better than the alternativ e systems applied in the w est. The studies pro v ed strengths of Japanese man ufacturers, especially the T oyota pr o duction system The endea v or to formalize the strengths of eastern man ufacturers in troduced the justintime (JIT) philosoph y In the follo wing section, w e analyze the cornerstones of the T o y ota production system that ha v e also been stated in JIT. 1.2 The T o y ota Production System T aiic hi Ohno, starting in the 1940s, began to dev elop a system that could compete with American automotiv e man ufacturers. The system w as so successful that in the 1970s the W estern W orld w as a w are of this successful system and named this system the T o y ota production system or JIT. Ohno is credited with dev eloping and perfecting JIT for T o y ota's man ufacturing plan ts in Japan. Ohno can also be seen as the rst author who has written on T o y ota's system. Almost ev ery book on JIT refers to Ohno's (1988) famous book whic h w as written in 1978 and translated in to English in 1988. Y asuhiro Monden, a professor of production managemen t and managerial accoun ting at the Univ ersit y of T sukuba, Japan, is another recognized author in in troducing the JIT production system to the United States. His book, T oyota
PAGE 25
9 Pr o duction System (Monden, 1998), is recognized as a JIT classic. Dr. Monden gained v aluable kno wledge and experience from his researc h and related activities in the Japanese automobile industry He has also made v aluable con tributions to the socalled T o y ota production system. Throughout this section, w e will mainly be referring to these t w o books in order to explain the T o y ota production system (TPS),and w e will be using the w ords JIT and TPS synon ymously In his book (Monden, 1998, p.1) Monden wrote: The T o y ota production system is a viable method for making products because it is an eectiv e tool for producing the ultimate goalprot. T o ac hiev e this purpose, the primary goal of the T o y ota production system is cost reduction, or impro v emen t of productivit y Cost reduction and productivit y impro v emen t are attained through the elimination of v arious w astes suc h as excessiv e in v en tory and excessiv e w ork force. T o y ota treats w aste elimination as a strategy for con tin uous impro v emen t, and the concept of w aste is v ery broad. W aste sources in man ufacturing ma y v ary within and across organizations, but the follo wing (rst described b y Ohno) are the most common ones:W aste from production defectsW aste in transportationW aste from in v en toryW aste from o v erproductionW aste of w aiting timeW aste of processingW aste of motion The k ey elemen ts in eliminating the poten tial w aste are included in JIT. Dieren t authors ha v e emphasized dieren t tools of JIT; some c hampioned the Kan ban system while the others fa v ored setup time reduction (SMED) or production smoothing. The follo wing tools are described in the TPS.Kan ban system
PAGE 26
10Production smoothingCapacit y buersSetup reductionCrosstraining and plan t la y outT otal qualit y managemen t The essence of Kan ban is its being a pull system, in con trast to traditional push systems suc h as MRP In a push system one forecasts the demand and then, starting with the nal stage, plans ev ery single operation in detail. The nal decision is materials release for eac h operation. Due to uncertain ties in the system these strict plans are lik ely to fail or become obsolete in a short time. T o o v ercome problems and k eep the man ufacturing system running, managers generally c hoose carrying high in v en tory lev els, whic h defeats the purpose of ecien t production in the rst place. On the other hand, in a pull system, one forecasts the demand and plans only the nal stage. A production unit (a mac hine, a cell, a line, etc.) produces the parts demanded b y the successor unit. Th us, if no demand comes for a specic part, then no extra units are produced. This w a y production systems can respond to demand c hanges m uc h quic k er than a push system, and no extra in v en tories are created. T o y ota implemen ts the Kan ban system successfully but it ma y not be the best w a y of con trolling man ufacturing operations. In the w estern w orld some new approac hes lik e Hopp and Spearman's (2000) Con Wip, whic h stands for constan t w orkinprocess, and Goldratt's (1999) TOC, whic h stands for the theory of constrain ts, ha v e been dev eloped and successfully implemen ted. Man y sc holars in the area ha v e misin terpreted the JIT system b y narro wing the ideal of the system to zero in v en tories. This misin terpretation probably stems from Robert Hall. In his book (Hall, 1983, p.3), Hall wrote:
PAGE 27
11 Zero In v en tories connotes to a lev el of perfection not ev er attainable in a production process. Ho w ev er, the concept of a high lev el of excellence is importan t because it stim ulates a quest for constan t impro v emen t through imaginativ e atten tion to both the o v erall task and to the min ute details. As Hall indicates, zero in v en tory ideal is not ac hiev able. Ho w ev er, in order to stim ulate con tin uous impro v emen t in a man ufacturing system, this ideal pla ys an importan t role. Edw ards (1983) pushed the use of absolute ideals to its limit b y describing the goals of JIT in terms of the sev en zeros, whic h are required to ac hiev e zero in v en tories. These are listed belo w:Zero defectsZero (excess) lot sizeZero setupsZero breakdo wnsZero handlingZero lead timeZero surging Ob viously the sev en zeros are no more ac hiev able in practice than is zero in v en tory Zero lead time with no in v en tory literally means instan taneous production, whic h is ph ysically impossible. The purpose of suc h goals, according to the JIT proponen ts who mak e use of them, is to inspire an en vironmen t of con tin uous impro v emen t. No matter ho w w ell a man ufacturing system is running, there is alw a ys room for impro v emen t. JIT is more than a system of frequen t materials deliv ery or the use of k an ban to con trol w ork releases. A t the heart of the man ufacturing systems dev eloped b y T o y ota and other Japanese rms is a careful restructuring of the production en vironmen t. Ohno (1988, p.11) w as v ery clear about this:
PAGE 28
12 Kan ban is a tool of realizing JIT. F or this tool to w ork fairly w ell, the production process m ust be managed to o w as m uc h as possible. This is really the basic condition. Other importan t conditions are lev eling production as m uc h as possible and alw a ys w orking in accordance with standard w ork methods. F or further information on the T o y ota production system w e refer the reader to Monden (1998), and for more information on JIT philosoph y and implemen tation guidelines Hall (1983), Hopp and Spearman (2000), W alleigh (1986), Nic holas (1998) and Ohno (1988) are good. 1.3 Production Smoothing In the previous section w e ha v e already stated that, in order to successfully implemen t JIT, an organization m ust adapt to the culture of c hange and use all the k ey elemen ts of JIT. And one of the k ey elemen ts is socalled production smoothing. W alleigh (1986, p.1) emphasizes the importance of production smoothing b y claiming it is lik ely to be the rst step in transformation to JIT: Excuse n um ber 1: `Our suppliers will not support JIT b y deliv ering our ra w material in small batc hes on a daily basis' Asking suppliers to mak e daily deliv eries is a common mistak e of managers who focus on the in v en toryreduction benets of JIT. Ultimately this is the righ t thing to do, but it is the wrong place to start. If manufacturing executiv es recognize JIT as a problem solving tec hnique rather than an in v en tory reduction plan, the starting poin t will be clearer. JIT should be adopted and practiced inside the factory where the compan y can con trol an y problems.:::F urthermore, JIT is a demandpull system. Eac h operation produces only what is necessary to satisfy the demand of the succeeding operation. ... Final assem bly is the con trol poin t for the en tire man ufacturing process, and it is the place to start implemen ting JIT. F or unin terrupted o ws in a demand pull en vironmen t, the sc hedule for nal assem bly m ust be smoothed out. T o ac hiev e successful production smoothing a compan y m ust mak e sure that these three requiremen ts are met (Nic holas, 1998, p.568):Con tin uous, stable demandShort setup times
PAGE 29
13Production equals demand The rst requiremen t pertains to product demand, something o v er whic h a compan y migh t ha v e con trol, but the other t w o are the things that a compan y can and m ust con trol. The rst requiremen t clearly pinpoin ts the production smoothing process. F or further analysis w e dene three production philosophies: mak etostoc k (MTS), assem bletoorder (A TO), and mak etoorder (MTO). MTS companies mak e products in an ticipation of demand. A ccording to demand forecast, the length of the planning horizon (for whic h a stable demand will be assumed) and the demand lev el for the horizon for eac h product can be determined sim ultaneously This tec hnique (demand stabilization) is dened in subsection 1.3.1. A TO companies produce subassem blies according to forecasts, then com bine these subassem blies in unique com binations as requested b y customers. A large v ariet y of dieren t products can be produced b y com bining dieren t com binations of relativ ely few kinds of subassem blies. F or example, if a compan y produces 3 subassem blies that go in to a product, and eac h subassem bly has 5 dieren t models, than the n um ber of the unique com binations is5 3 =125. Computers and cars are primary examples of this philosoph y In A TO companies production smoothing can be implemen ted b y shifting focus from the nal assem bly lev el to the subassem blies lev el. F or eac h subassem bly demand stabilization should be done separately MTO companies produce products in response to actual customer orders and hence are the most dicult class for stabilizing the demand for production smoothing. Because of the large n um ber of possible products and possible small demand for eac h product, it is impossible to forecast demand for products. In the best case, a compan y ma y group products according to subassem bly and componen t requiremen ts, and create product families. Then the problem becomes somewhat similar to the problem of A TOs. If the compan y cannot create product
PAGE 30
14 families, or the n um ber of families created is large, the only option ma y be to deal with the rm (future and bac klogged) demand. Since the compan y will try to minimize a measure of tardiness or completion times of the orders, the problem here is a classical sc heduling problem; production smoothing is not the primary concern. Ev en if it w ere, due to the uncertain t y of the future demands, it w ould be inapplicable. In the next subsection w e explain a tec hnique whic h stabilizes the demand for a single product o v er a nite horizon. The length of the horizon is also an output of the tec hnique. 1.3.1 Demand Stabilization If a compan y could ha v e a stable demand o v er a v ery long horizon, e.g., v e y ears, this w ould mak e smoothing and planning the production system m uc h easier. Ho w ev er, most companies are lik ely to face uctuating demand with seasonal eects. Then, in order to meet the prerequisite condition of production smoothing, the compan y should dene a planning horizon with a stable, con tin uous demand, based on demand forecast. If the planning horizon is too long, then the reliabilit y of the forecast is lo w, and the plan ma y need to be modied before the end of the horizon. On the other hand, if it is too short, then the purpose of production smoothing is lost. W e demonstrate the eect of c hoosing the appropriate length for the planning horizon in Figure 13 F or the demand pattern in Figure 13 the 2y ear period is inappropriate for sev eral reasons. First, since the lev el of demand for the rst y ear is mostly higher than the production lev el, an initial stoc k of in v en tory m ust be a v ailable to meet the demand, and the in v en tory in v estmen t for this rst y ear will be high. Second, the production lev el throughout the second y ear exceeds demand, y et the demand seems to be decreasing. As a result w e will most lik ely ha v e a large stoc k of
PAGE 31
15 Year 1r Year 2rTimer 2year planr 6month plansr Demandr Figure 13: Demand Stabilization Ov er Time in v en tory at hand at the end of the second y ear. Third, since the demand pattern ma y c hange, planning the rst y ear's production rate using the data for both y ears is risky In this example a series of 6mon th plans looks more reasonable. After the second period, w e need to replan with a new demand forecast. The lev el of production cannot be c hosen simply as the a v erage demand o v er the planning horizon. The lev el c hosen m ust accoun t for preexisting stoc k and be able to satisfy periods of peak demand. Also the length of the horizon should not cause unacceptably high in v en tory accum ulations throughout the period. Although the in ten t of production lev eling is to k eep production uniform for as long as possible, if forecasted demand sho ws high v ariation bet w een seasons or mon ths, then the lev el of production should be adjusted seasonally or mon thly (to assure that production equals demand). Using con tin uous, stable demand for products, the production smoothing problem aims to nd a production plan whic h disperses the production of eac h product uniformly o v er time. In the absence of production smoothing a compan y
PAGE 32
16 ma y face with t w o ineciencies, either some w ork units starv e or to prev en t starv ation, high amoun t of in v en tories are carried. Let us explain these problems with an example. A man ufacturing facilit y produces 2 dieren t products, sa y A and B. Demand for both products is iden tical and 200 units=mon th. Product A constitutes of 2 units of partand 2 units of part, product B constitutes of 2 units of partand 4 units of partr. P arts,andrare produced on separate w ork units, X,Y and Z respectiv ely all feeding a nal assem bly station (S), where products A and B are assem bled. The demand v alues for parts are200 2+200 2=800,200 2=400and200 2=400units, for,andrrespectiv ely Setup times on the assem bly station are negligible. Assuming 20 w orkda ys per mon th and 400 min utes per da y total productiv e time is 8000 min utes per mon th, on ev ery man ufacturing unit. This corresponds to a cycle time of 20 min utes on the assem bly station. Processing times of the parts,andrare giv en as 8, 15 and 10 min utes, respectiv ely The total time needed to produce all parts are calculated as800 8=6400,400 15=6000and800 10=8000min utes, on X, Y and Z, respectiv ely X and Y ha v e excess capacit y but w e're not so luc ky with Z. F or X, the sc hedule of the nal assem bly station does not mak e a dierence, since it has to produce 2 units ofev ery 20 min utes, no matter what is being assem bled on S. Y has excess capacit y and this slac k can be used eectiv ely with good sc heduling. That is, if a good sc hedule can be found, the in v en tory lev el forcan be k ept lo w. Ho w ev er, forr, nal assem bly sequence is of vital importance. Let us consider t w o extreme sequences for S. In the rst scenario, w e rst produce 200 units of A and then 200 units of B. In this scenario there is no demand for partrin the rst half of the mon th. Th us, if Z producesrin the rst half, then a h uge pile of partraccum ulates. If Z does not producerin the rst half, on the other hand, then S will starv e in the second half of the mon th, when it needs part
PAGE 33
17rto assem ble product B. In the second scenario the assem bly plan con tin uously shifts from product A to B, then to A again and so on. With this plan 4 units of partrare required ev ery 40 min utes, and the in v en tory ofrnev er exceeds 4 units. The in v en torytime graph in Figure 14 demonstrates the eect of nal assem bly sc hedule for this 2product example. As the n um ber of products and div ersit y in part requiremen ts gro ws, the eect of production smoothing becomes more vital. Inventoryr Timer Scenario 2: Smallest Batchesr Inventoryr Timer Scenario 1: Largest Batchesr . .r 4r 4r0r0rFigure 14: Eect of Production Smoothing on In v en tory Lev el In practice w e ma y not be able to smooth the production perfectly as in the example. Signican tly large setup times ma y imply batc hing. In this case, one ma y
PAGE 34
18 c hoose one of the t w o alternativ e methods to create a smooth production plan. These t w o methods are namely batc h and Heijunk a methods. The methods are briey explained in the follo wing subsections. 1.3.2 Batc h Method for Production Smoothing Let us consider a man ufacturing facilit y with 3 products, A, B and C. F or a giv en mon th, demands are giv en as 4000, 2000 and 1000 units respectiv ely The previous subsection has already sho wn ineciency of w orking with large batc hes. So, w e decide to produce 4As, 2Bs and a C in a cycle and repeat this cycle 1000 times. The sequence is 4A 2B C 4A 2B C 4A. F or the sak e of simplicit y w e assume assem bly operations are performed on a single station S, with processing times 1 min ute=unit for eac h product. Setup times on S are giv en as 3, 4 and 3 min utes=batc h, for products A, B and C respectiv ely In eac h cycle w e ha v e one setup for eac h product, 3 setups in total whic h means w e dev ote 10 min utes for setup operations in eac h cycle. T otal productiv e time per mon th is 8000 min utes. Since w e need 7000 min utes for processing times, w e can allocate at most 1000 min utes for setups. By dividing total time for setups b y setup time per cycle (1000 = 10=100), w e see that at most 100 cycles can be done, therefore the cycle with smallest batc hes (4A 2B C) is not feasible, since this cycle has to be repeated 1000 times. The allo w ed n um ber of cycles is 100, then the new cycle is calculated b y m ultiplying batc h sizes in the smallest cycle b y 10 (=1000 = 100). The batc h size for product A in the new cycle should be10 4=40. Similarly batc h size for product B is10 2=20, and for product B it is10 1=10. So, the new cycle is: 40A 20B 10C, whic h should be repeated 100 times. This method is quite simple to implemen t but since products are still being produced in batc hes in ev ery cycle, it ma y not result in the smoothest production plan possible.
PAGE 35
19 1.3.3 Heijunk a Heijunk a is a Japanese w ord whic h means smoothing in production systems. It has been dev eloped and used in T o y ota for man y decades. Although it is a bit more complicated (compared to the batc h method), it is more useful in man y cases. T o explain the heijunk a method, w e will use the same example discussed for the batc h method. The method is similar to the Batc h Method in general. The main dierence is in nding the smallest cycle. W e try to eliminate batc hes, or distribute products as uniformly as possible o v er the cycle. A heijunk a cycle migh t be ABAABAC for the example, whic h should be repeated 100 times. No w w e ha v e a total of 6 setups (ev ery cycle starts with a switc ho v er from C to A, therefore requires a setup for A) in the cycle, requiring 20 min utes per cycle. W e divide total time a v ailable for setups (1000 min utes) in to the setup requiremen t of the smallest cycle (20 min utes), and w e see that at most 50 cycles can t in to the planning horizon. Therefore, the smallest cycle should be extended, b y m ultiplying the batc h sizes of eac h product b y 1000/50 = 20 (the n um ber of cycles required b y the original smallest cycle / the maxim um allo w able n um ber of cycles). Then, the smallest feasible cycle is found straigh tforw ardly: 20A 20B 20A 20A 20B 20A 20C. Distributing the products uniformly o v er the cycle is not alw a ys as simple as sho wn in this example. If demands are unpleasan t n um bers (all in prime n um bers or ha v e no common divisor), one ma y not see the smoothest sequence for the cycle easily In this case, one ma y need a mathematical tool. T o y ota's tool is described in the next section. 1.4 T o y ota's W a y for Production Smoothing T o y ota has dev eloped socalled Goal Chasing Method for uniformly distributing product occurrences in a sequence, o v er the planning horizon. Monden describes the essence of the method as follo ws: (Monden, 1998, p.254)
PAGE 36
20 In the Kan ban systems used in T o y ota, preceding processes supplying the v arious parts or materials to the line are giv en greatest atten tion. Under this "pulling" system, the v ariation in production quan tities or con v ey ance times m ust be minimized. Also, their respectiv e w orkinprocess in v en tories m ust be minimized. T o do so, the quan tit y used per hour (i.e., consumption speed) for eac h part in the mixedmodel line m ust be k ept as constan t as possible. Keeping the quan tit y used per hour constan t can be in terpreted as k eeping the cum ulativ e quan tit y used proportional to time elapsed. In Figure 15 the straigh t line demonstrates the ideal consumption for a part (or product itself). Consumption rate is constan t, in other w ords consumption is proportional to time. The actual consumption is a step function, increasing b y the amoun t the part is consumed b y the product at that spot of the sequence. The c hart ma y be dra wn for products, as w ell. In this case, the actual consumption ma y only increase b y one if that product is assigned to that spot of the sequence. A ctual consumption ma y be abo v e or belo w the ideal consumption. If there are more than one products in the system considered, the actual cannot be alw a ys equal to the ideal, whic h means there occurs a gap bet w een the t w o. The smaller the gap, the more successful the sequence. Gap is calculated for ev ery part (or product) and for ev ery spot in the sequence. T o y ota's objectiv e is to minimize the total of the squared gaps. Since plus and min us gaps diminishes eac h other, using squared gaps seems appropriate. T o y ota is in terested in smoothing the parts usage at the subassem blies lev el only Therefore, they calculate the gaps for only subassem blies used b y the end products in the sequence. Also, in the literature there exist examples where only product lev el, or up to 4 dieren t lev els (including product lev el) are considered. If a method is about sequencing the products in a w a y that only the v ariation in the product lev el is considered, w e call it a singlelev el production smoothing problem, otherwise w e call it a m ultilev el production smoothing problem.
PAGE 37
21 End of ther Planning HorizonrDrermrarnrdr rirnr trhrer rHrorrr.rTimer Quantityr Actual Productionr Ideal Productionr Gapr Figure 15: Ideal and A ctual Consumptions T o y ota's goal c hasing method (GCM) w orks as follo ws. First no product is assigned to an y stage in the sequence. Starting with the rst stage, one stage is analyzed at ev ery iteration. A t an iteration, a product is assigned to the stage, and the assignmen t is nev er tak en bac k. The algorithm terminates when all the stages are considered, yielding a complete sequence. A t eac h iteration, ev ery product (if all units of a product has been assigned to some stages in the sequence, that product is ignored) is considered separately and the total gap is calculated for the product being considered. The product whic h giv es the smallest total gap is assigned to the stage. The GCM is a onepass greedy heuristic. It is quite ecien t in terms of time, but frequen tly fails to nd the optim um sequence. Sev eral algorithms ha v e been dev eloped and found superior to the GCM. These methods will be discussed later, in this dissertation. T o y ota's man ufacturing system structure has major adv an tages for production smoothing, that is, eac h product tak es one unit of time on the line and c hangeo v er
PAGE 38
22 time is negligible. W e call this a sync hronized assem bly line. This is actually a v ery special case, and th us, another compan y ma y not be able to use the GCM for smoothing its production. Assem bly lines are kno wn to be ecien t man ufacturing systems, especially for singleproduct man ufacturing. This dissertation is not in terested only in assem bly line but some other man ufacturing systems, as w ell. The follo wing section giv es a brief review of dieren t man ufacturing systems. 1.5 Man ufacturing En vironmen t T ypes Sc heduling theory has ev olv ed o v er man y decades no w. Numerous researc hers ha v e been w orking on impro ving the performance of some t ype of man ufacturing systems. The most common t ypes are:Single Mac hineP arallel Mac hinesFlo w ShopFlexible Flo w ShopAssem bly LineJob ShopOpen Shop Single Mac hine models are the simplest man ufacturing models. Ho w ev er, due to the complexit y of the problem, man y single mac hine sc heduling problems are NPhard problems. Pinedo (2001, p.33) explains importance of single mac hine models as follo ws: Single mac hine models are importan t for v arious reasons. The single mac hine en vironmen t is v ery simple and a special case of all other en vironmen ts. Single mac hine models often ha v e properties that neither mac hines in parallel nor mac hines in series ha v e. The results that can be obtained for single mac hine models not only pro vide insigh ts in to the single mac hine en vironmen t, but they also pro vide a basis for heuristics that are applicable to more complicated mac hine en vironmen ts. In practice, sc heduling problems in more complicated
PAGE 39
23 mac hine en vironmen ts are often decomposed in to subproblems that deal with single mac hines. F or example, a complicated mac hine en vironmen t with a single bottlenec k ma y giv e rise to a single mac hine model. F rom production smoothing poin t of view, ev en the single mac hine model is not a trivial problem to solv e. Solutions to some other models (o w shop and assem bly line) can be obtained from single mac hine tec hniques b y minor modications. If one can pro v e that the system being considered has one bottlenec k, and an y sc hedule that is feasible for this bottlenec k is feasible for the rest of the system, then a v ery complicated man ufacturing system (o w shop, job shop or open shop) can be reduced to a single mac hine system. In a parallel mac hine en vironmen t there is only one task to be performed for eac h job (product) but there are a n um ber of mac hines that can do this task. The mac hines migh t be iden tical. In theory parallel mac hine models are the generalizations of single mac hine models. In practice, generally there are more than one resources whic h can be used for the same task, therefore parallel mac hine systems are as common as single mac hine systems. F rom production smoothing poin t of view, if the parallel mac hines are iden tical and eac h product can be produced on an y of the mac hines, the problem can be transformed to a single mac hine problem. In other cases where mac hines are not iden tical in speed and eac h product can not be processed on eac h mac hine, problem is m uc h harder. Flo w shops are man ufacturing systems whic h consist of a n um ber of single mac hines, placed serially Eac h product has to follo w the same route, but ma y ha v e dieren t setup and processing times on eac h mac hine. A o w shop where a product's processing and setup times are the same on eac h mac hine, is iden tical to a single mac hine system. A exible o w shop is a com bination of a parallel mac hines in a o w shop setting. There are a n um ber operations to be performed on eac h product, products
PAGE 40
24 m ust follo w the same route and there ma y be more than one mac hines used for the same operation, in parallel. These parallel mac hines migh t be iden tical. Other names for a exible o w shop in the literature are compound o w shop, m ultiprocessor o w shop and h ybrid o w shop. Flo w shops and exible o w shops are common man ufacturing systems, especially where products can be grouped in to product families, according to commonalities in operation orders. If a product is not processed on one of the serial mac hines, than setup and processing time can be assumed zero for that product on that mac hine and the system can be treated as a o w shop. Assem bly lines are similar to o w shops. Generally on o w shops unique batc hes are processes, the demand for eac h product ma y not be large and there are relativ ely small n um ber of operations. On assem bly lines, relativ ely smaller n um ber of products (the original idea w as to design an assem bly line for a single product), with larger demands and more operation requiremen ts. A ccording to demand data, a cycle time is found for the line and tasks are allocated to stations. Eac h product should be processed on eac h station within the cycle time. In m ultiproduct lines this rule is generally too restrictiv e, th us a more relaxed rule suc h as a v erage processing time of all products on a station should be less than or equal to the cycle time, is used instead. Since the rst production smoothing system w as dev eloped for an assem bly line (in T o y ota) and has a wide application area, the assem bly line model is importan t for production smoothing studies. An assem bly line on whic h processing and setup times do not dier from station to station is iden tical to a single mac hine system. In a job shop there is a xed route for eac h product, but routes for dieren t products are not necessarily the same. Therefore, material o w within the system is complicated and is dicult for sc heduling.
PAGE 41
25 F rom production smoothing poin t of view, if the last operation of ev ery product is the same, then this job shop system is iden tical to a single mac hine system. If not, it migh t be v ery dicult to calculate the eect of the sc hedule on product nishing times. If the products ha v e certain operations but the order (route) of these operations is not xed, then the resulting man ufacturing system is an open shop. Open shops are the most general man ufacturing systems men tioned in this section, th us the most dicult ones to handle. Ho w ev er, this exibilit y of deciding on the product routes pro vides us an opportunit y to x the last operation for eac h product to a certain operation, th us transform open shop model to a single mac hine model. 1.6 Con tribution of the Dissertation Majorit y of researc h on Sc heduling Theory aims to sc hedule the jobs in suc h a w a y that a cost function (a function of completion times and due dates) is minimized. Jobs are discrete, i.e. man ufacturing a n um ber of units of a product is tak en as a job. Th us, setup times are considered em bedded in the job processing times. Some w ork com bining sc heduling and batc hing decisions exist, but in these papers the objectiv e is minimizing a cost function, where cost stems from setups and in v en tory carrying. The ultimate goal is not setting up a con tin uous o w of materials and products but just lo w ering cost in a v ery isolated, local part of the en tire man ufacturing system. Production smoothing problem addresses another question. The ultimate goal is spreading products o v er the horizon uniformly suc h that the upstream processes or sublev els of the man ufacturing system ha v e stable, con tin uous demand and can practice JIT eectiv ely
PAGE 42
26 Starting at T o y ota, production smoothing problem has been studied for decades no w. Man y researc hers ha v e con tributed to the literature, with alternativ e objectiv e functions or solution methodologies. Unfortunately in all these w ork the problem scope has been stuc k in T o y ota's system. Ev ery product is assumed to tak e exactly one unit of time on a sync hronized assem bly line. Th us, the literature does not help other man ufacturing systems, m uc h. Lumm us (1995) claimes that an increase in product v ariet y impacts the JIT negativ ely She considers a 9 station JIT man ufacturing system and runs sim ulations with dieren t setup and processing time requiremen ts for 2 products. She tests the eect of 3 dieren t sequencing methods; sequencing in large batc hes, T o y ota's rule and a random sequence. Results from the sim ulation study sho w that, when there is a signican t im balance bet w een time requiremen ts of dieren t products on at least one station, sequencing cannot impro v e system's performance. In other w ords, the sequence found with T o y ota's rule is not signican tly better than a random sequence. This result clearly states a need for a method whic h can be used in im balanced, complex man ufacturing systems. This dissertation addresses the production smoothing problem where eac h product ma y ha v e distinct processing and setup times on the processors (mac hines or stations). The man ufacturing system t ypes, whic h are subjects of this researc h, are primarily single mac hine and o w shop en vironmen ts. This dissertation addresses both singlelev el (considering only the products) and m ultilev el (product lev el being the rst, sev eral lev els of parts are considered) v ersions of the production smoothing problem. Existing literature on production smoothing problem had the adv an tage of analyzing the planning horizon in discrete time units. F or example, if the horizon is 1 mon th (assuming 8000 min utes productiv e time per mon th) and a total of 4000
PAGE 43
27 units of 3 dieren t products to sequence, one migh t w ork with 4000 discrete time units (eac h equal to 2 min utes), or spots in the sequence. Allo wing arbitrary processing and setup times implies that the planning horizon should be analyzed in a con tin uous manner. This fact mak es the problem v ery dicult in sev eral w a ys. First, the objectiv e function m ust be reform ulated. Since the time is con tin uous, measured gaps o v er minimal time units should be time in tegrated to get the total gap bet w een the ideal and actual consumptions. Second, in a o w shop, products ma y ha v e dieren t time requiremen ts on dieren t mac hines. Therefore, establishing a con tin uous o w of products is a dicult task. Ev en if the en vironmen t is not a o w shop, sync hronizing the upstream operations with this con tin uous time sc hedule is dicult. T o o v ercome these diculties and mak e use of the existing literature w e dene a xedlength timebuc k et. Ev ery product, no matter whether it is produced as a single unit or in batc hes, should t in to this timebuc k et, where the length of this timebuc k et is also a decision v ariable. Batc hing decisions are made in a w a y suc h that the total n um ber of batc hes denes the length of the buc k et, in whic h ev ery batc h should t. In this con text the problem can be divided in to t w o separate subproblems, and solv ed b y a 2phase approac h. In the rst phase, the n um ber of batc hes and batc h sizes for ev ery product are determined. In the second phase these batc hes are sequenced. Since ev ery batc h ts in to the xedlength timebuc k et, the sequencing problem can be solv ed b y using the existing discrete time methods a v ailable in the literature. Although the t w o phases are separate problems, there is a link bet w een them. Model form ulation of the second phase driv es form ulation of the model in the rst phase. Therefore, the second phase should be stated before the rst phase.
PAGE 44
28 The dissertation is organized as follo ws. SingleMac hine SingleLev el model is presen ted in Chapter 2 while c hapters 3 4 and 5 are dev oted to Flo wShop SingleLev el, SingleMac hine MultiLev el and Flo wShop MultiLev el models, respectiv ely Finally in Chapter 6 w e summarize our w ork, giv e concluding remarks and discuss possible future researc h directions. Eac h model will be studied, starting with a literature review, form ulation and solution approac hes for the second phase problem (whic h is mostly based on the w ork a v ailable in the literature) and form ulation and solution approac hes for the rst phase problem (whic h is completely in troduced b y us).
PAGE 45
CHAPTER 2 SINGLEMA CHINE SINGLELEVEL MODEL SingleMac hine SingleLev el (SMSL) model has a piv otal importance in this dissertation. Principles and the approac h used in building the model will be extended to the other models. F urthermore, model properties and ndings, as w ell as the solution methods, giv e useful insigh ts for the other models. The man ufacturing en vironmen t of in terest in this model is a single mac hine. This single mac hine can be in terpreted as a nal operation that all the products m ust go through, or a bottlenec k operation whic h again serv es all the products, and all other operations ma y be subordinated to the needs of this single mac hine. In other w ords, an y feasible sequence for this single mac hine is feasible for the en tire system. Th us, the en tire system can be con trolled b y setting a sequence for this single mac hine. The automotiv e pressure hose man ufacturing facilit y presen ted in the previous c hapter (see pages 1 and 2 ) is a good example for suc h systems. The nal stage of the man ufacturing system in the example (the assem bly station) is iden tied as the single mac hine on whic h w e focus. SingleLev el denotes that only the endproducts lev el is considered and v ariation in product consumption will be minimized. If the part requiremen ts of dieren t products are somewhat close, than con trolling the single lev el is appropriate. The idea is, a lev eled production sc hedule will result in lev eled consumption in sublev els as w ell. Kubiak (1993) calls this singlelev el problem as product rate v ariation (PR V) problem. In the majorit y of papers in the production smoothing literature, batc h sizes are assumed iden tical (and equal to one). The adv an tage of this assumption is ob vious; with this assumption one easily adds setup time to processing time, 29
PAGE 46
30 therefore eliminates the complexit y imposed on this problem b y setups and larger batc h sizes. Moreo v er, they assume that processing times are iden tical (and equal to one) for eac h product and there is enough time to process all these products in an y sequence. This mak es the ideal onepieceo w possible, thereb y eliminating the need to process the products in batc hes. With this assumption the en vironmen t is dened as a sync hronized (perfectly balanced) assem bly line. The models used in the papers men tioned abo v e cannot be used to obtain lev el sc hedules for a single mac hine en vironmen t where processing and setup times v ary for dieren t products and total a v ailable time is limited. In this en vironmen t one m ust decide on batc h sizes and the n um ber of batc hes that will be produced to meet the demand, for eac h product, before trying to sequence the batc hes. This dissertation is in terested in solving this harder and more realistic v ersion of the mixedmodel sequencing problem. As the previous c hapter has already explained, this dissertation dev elops a new structure where demands are met in batc hes, and eac h batc h can be processed within a xed timebuc k et, whic h itself is a decision v ariable. Th us, the problem can be analyzed in t w o phases, the rst phase is to determine length of the xed time buc k et, n um ber of batc hes and batc h sizes for eac h product. Once w e solv e the problem of the rst phase, the problem of sequencing those batc hes, whic h is the second phase, becomes trivial. Since eac h batc h should be processed in a xed time buc k et, and total n um ber of batc hes to produce is kno wn for eac h product, w e can treat eac h batc h as a single unit of that product. This second phase becomes similar to models in the literature. Therefore, w e can adapt one of the ecien t methods, whic h are already dev eloped and tested for a problem similar to ours. This c hapter is organized as follo ws. Section 2.1 presen ts the curren tly existing w ork in the literature, related to the singlemac hine singlelev el model. Sections 2.2 to 2.5 are dev oted to the2 ndphase problem, where the problem is formally stated
PAGE 47
31 and exact and heuristic solution approac hes are presen ted. The w ork related to the2 ndphase problem mostly relies on the existing w ork in the literature, therefore these sections include an extensiv e literature review. The rest of the c hapter is dev oted to the1 stphase problem, the main consideration of the c hapter. In section 2.6 w e presen t the mathematical form ulation of the problem; and in section 2.7 w e analyze the nature of the problem and dra w useful properties about the problem. Section 2.8 dev elops exact solution methods for the problem. Sections 2.9 and 2.10 are dev oted to heuristic solution procedures, as w e devise a parametric heuristic procedure for the problem and implemen t three metaheuristic tec hniques in these sections, respectiv ely Finally Section 2.11 presen ts a comparativ e analysis of the solution approac hes dev eloped for the problem. 2.1 Literature Review Before proceeding in to review of related papers in the eld, w e dene our notation in order to a v oid possible confusion due to dieren t notations used in these papers. n Num ber of products to be man ufactured i Product index k Stage indexs iSetup time of product i on the mac hinep iProcessing time of one unit of product id iDemand for product i for the planning horizonD iT otal demand of products 1 toito be man ufactured in the planning horizon (= n P h =1 d h)b iBatc h size of product iq iNum ber of batc hes of product i to be man ufactured in the planning horizon Q T otal n um ber of batc hes to be man ufactured in the planning horizon (= n P i =1 q i) T T otal a v ailable time, length of the planning horizon t Length of the timebuc k et, length of one stage in the sequence
PAGE 48
32x i;kCum ulativ e production of product i o v er stages 1 to k measured in batc hes. The planning horizon ( T ) is divided in to equal length ( t ) in terv als, or stages. The n um ber of stages is equal to the total n um ber of batc hes to be man ufactured (Q). This propert y will allo w us to measure the deviation from the ideal production rates in a discrete manner. Also,x i;kdenotes the total n um ber of batc hes of product i produced in stages 1,2,..,k. The follo wing recursiv e equalit y holds forx i;k.x i;k = 8>>>><>>>>: 0 ;if k = 0x i;k 1 +1 ;if product i is sequenced in thek thstagex i;k 1 ;o/w. W e ha v e already noted that (see pages 20 and 21 ), in the ideal sc hedule in a JIT en vironmen t the production rate should be constan t, in other terms cum ulativ e stoc k for a product at a giv en poin t in time (total n um ber of items produced from the beginning un til this time) should be proportional to the time elapsed since the beginning of the horizon. Milten burg's (1989) w ork can be seen as the seminal paper in singlelev el mixedmodel justintime sequencing literature. Milten burg denes the objectiv e function as the summation of squared deviations from the ideal sc hedule. Milten burg's model can be expressed as follo ws. MinimizeZ = D n X k =1 n X i =1 ( x i;k k d i D n ) 2S.T.n X i =1 x i;k = k;k =1 ; 2 ;::;D x i;k 2 Z + ; 8 i; 8 kThe model looks quite simple. It has only t w o sets of constrain ts. These constrain ts assure that one and only one product is sequenced at eac h stage. In
PAGE 49
33 the core of the objectiv e function, is the gap bet w een ideal and actual production amoun ts, for a giv en product at a giv en stage.x i;kis the actual cum ulativ e production amoun t of producti, including stages1 ; 2 ;:::;k. As Figure 15 sho ws, the ideal cum ulativ e production can be expressed with a straigh t line. Since the production amoun t at the end of the horizon (whic h is theD th nstage) m ust equal demand for that product (d i), the ideal cum ulativ e production in stages1 ; 2 ;:::;kcan be calculated ask d i D n. No w w e see that the core of the objectiv e function is nothing but the gap bet w een the ideal and actual production amoun ts (see Figure 15 ). The objectiv e is formed b y summing up squared gaps o v er the products and stages. Milten burg suggests an algorithm for the problem, whic h ma y yield an infeasible solution. If this algorithm pro vides a feasible solution, then it is also optimal to the problem. Ho w ev er, this infeasibilit y occurs frequen tly with his algorithm. He further proposes a resc heduling tool to reac h feasibilit y The resc heduling tool giv en to correct the infeasible solutions is based on en umerating all possible subsc hedules, therefore not practical for large problem sizes. Milten burg also suggests t w o heuristic methods to obtain nearoptimal solutions. The rst heuristic (MA3H1) is a onestage constructiv e heuristic with anO ( nD n )complexit y while the second one (MA3H2) is a t w ostage heuristic with a complexit y ofO ( n 2 D n )and a better performance than MA3H1. Ding and Cheng (1993) dev elop a new heuristic whic h has MA3H1's complexit y (O ( nD n )) and MA3H2's solution qualit y Cheng and Ding (1996) analyze a v ariation of Milten burg's problem with product w eigh ts. They modify some w ell kno wn algorithms for w eigh ted products, and giv e computational results. Inman and Buln (1991) dev elop an alternativ e objectiv e function for Milten burg's original model, where they dene due dates (ideal production times) for eac h cop y of eac h product. Then, the authors suggest an EDD solution approac h
PAGE 50
34 for this model, whic h is guaran teed to nd the optimal solution for this modied objectiv e function and expected to nd good (nearoptimal) solutions to the original problem as w ell. This EDD algorithm runs inO ( nD n )time. Another alternativ e objectiv e function based on the due dates is used in Zhang, Luh, Y oneda, Kano and Ky o y a (2000). The authors measure both the tardiness and earliness of eac h product with respect to the due dates and minimize the w eigh ted sum of earliness and tardiness. The authors propose a Lagrangian relaxation tec hnique to solv e the model. Milten burg, Steiner and Y eomans (1990) dev elop a dynamic programming procedure and sho w that the problem is solv able inO ( n n Q i =1 ( d i +1))time. Although this is the rst exact algorithm devised for the problem, w e see that it has an exponen tial w orst case complexit y ofO ( nd nmax ), whered max d i ; 8 i. Therefore, the use of this dynamic programming procedure is limited to smallsized instances of the problem. Kubiak and Sethi (1991) note that Milten burg's model can be seen as an assignmen t problem. They reform ulate the model based on ideal positions for eac h item of eac h product. The objectiv e function measures total squared deviation from the ideal positions. Kubiak (1993) notes that this assignmen t problem is solv able inO ( D 3 )time. Aigbedo (2000) dev elops a tigh t lo w er bound on Milten burg's original objectiv e function. A ccording to his w ork, in ideal sc hedule (with in tegralit y constrain ts) the objectiv e function can be appro ximated b yF = n X i =1 ( D 2 n d 2i ) 12 D n Z:All the papers men tioned abo v e use Milten burg's original objectiv e function or a sligh t modication thereof, ho w ev er the idea of measuring total deviation from the ideal sc hedule remains in tact in all of them. McMullen and his cow ork ers
PAGE 51
35 dene a model where w eigh ted sum of Milten burg's objectiv e and the total n um ber of setups is adopted as an additional alternativ e objectiv e (McMullen, 1998; McMullen, 2001a; McMullen, 2001b; McMullen, 2001c; McMullen and T arasewic h, 2005; McMullen, T arasewic h and F razier, 2000; McMullen and F razier, 2000). The model men tioned here is giv en belo w.MinimizeE = w S S + w U UWhere,U = D X k =1 n X i =1 ( x i;k k d i D ) 2 S =1+ D X k =2 s k s k = 8><>: 1 ;if setup required fork thitem;0 ;o/w. Here,w Sandw Uare the w eigh t factors associated with alternativ e objectiv e functions. Also note that U is exactly the same as Milten burg's original objectiv e function and S is the total n um ber of setups generated b y the sc hedule. Th us,w Scan be seen as cost per setup, thenw Ubecomes a penalt y for the total deviation form the ideal sc hedule. In order to use this model one should c hoose these t w o parameters carefully McMullen, et al. implemen t v arious heuristic searc h tec hniques for this model and compare the results after running n umerous experimen ts. In addition to McMullen's et al. papers, Cho, P aik, Y oon and Kim (2005) proposes an impro v ed sim ulated annealing approac h and Mansouri (2005) proposes a m ultiobjectiv e genetic algorithm solution to the problem.
PAGE 52
36 Steiner and Y eomans (1993) use the minmax form ulation of the problem (dened in Milten burg (1989)) and sho w that this form ulation is reducible to Release Date/Due Date Decision Problem and can be solv ed inO ( nD n )time with an earliest due date (EDD) algorithm. They further sho w that a sc hedule alw a ys exists, suc h that, at eac h stage, the absolute deviation of the actual production from the ideal production for an y product is not more than one unit. The papers men tioned in this section will be analyzed in detail in follo wing sections, as needed. 2.2 2 ndPhase F orm ulation In Milten burg's w ork follo wing assumptions hold:Eac h product tak es exactly one unit processing time,One piece o w is possible, hence no need for batc hing. In our w ork, w e use the follo wing assumptions instead:Processing and setup times v ary from product to product,T otal a v ailable time is limited, hence batc hing is necessary for certain products, if not all. Measuring the deviation where eac h product tak es a v ariable time is the major concern. After dening an objectiv e function whic h measures the deviation eectiv ely dev eloping a method to sequence the products with v ariable processing time comes in to vision as a tough problem. Most of the literature relies on Milten burg's w ork where concen tration is on units of products. So, if w e transform the problem with v ariable batc h sizes and processing and setup times to the problem with unit products and xed processing times, w e can mak e use of the curren t literature. Under these new assumptions our idea is to nd a xed timebuc k et and calculate the size of the batc hes that can be processed within this xed timebuc k et. Main c haracteristics of the model are:Fixed timebuc k et ( t )
PAGE 53
37Multiple products in a singlemac hine en vironmen t As stated earlier, in this new structure the problem has t w o phases. In the rst phase w e try to determine a xedlength timebuc k et ( t ), and the n um ber of batc hes and their associated batc h sizes, that should be produced in order to meet demand, for eac h product. Then the second phase is sequencing these batc hes so that the total deviation from the ideal sc hedule (uniform distribution of batc hes o v er the planning horizon) is minim um. Using a xed timebuc k et for processing batc hes mak es the second phase trivial. Oncet(length of the xed timebuc k et) andq is (n um bers of batc hes to produce) are found,tcan be seen as unit time and eac h batc h can be seen as one product. This second phase is almost iden tical to Milten burg's original model, the only dierence is that our problem includes w eigh ts associated with the products. Second phase problem (sequencing) can be stated as follo ws. MinimizeZ = Q X k =1 n X i =1 b i x i;k kq i Q 2(2.1) S.T.n X i =1 x i;k = k;k =1 ; 2 ;:::;Q(2.2)x i;k 2 Z + ; 8 i; 8 k(2.3) F or this phase, Milten burg's objectiv e function is used with a modication. In our objectiv e, the batc h sizes are tak en in to accoun t as product w eigh ts. This giv es higher priorit y to the products whic h ha v e larger batc h sizes. Another modication is that demand is adjusted before sequencing batc hes. Solution from the rst phase (q iandb iv alues) determines the new demand,d 0i = q i b i. A t the beginning of the next sc heduling period, demand should be adjusted to neutralize the eects of o v er production of this period. If w e assume that w e ha v e enough time to ac hiev e onepieceo w (b i =1 ; 8 i), then n um ber of batc hes for the period equals demand (q i = d i ; 8 i) and total n um ber
PAGE 54
38 of batc hes equals total demand (Q = D n). With this assumption our form ulation reduces to Milten burg's model. The other alternativ es for the second phase model can be summarized b y t w o major deviations from our model. First, one migh t adapt a minmax approac h instead of minsum. Second, one migh t use an absolute measure of the gap bet w een ideal and actual productions, instead of the squared measure c hosen here. The primary motiv e in forming the model this w a y is, that it enables us to adopt our model to Milten burg's and mak e use of the extensiv e literature. 2.3 Exact Methods for the2 ndphase problem If w e ignore the in tegralit y constrain t ( 2.3 ), then the problem reduces to minimizing a con v ex function subject to a set of linear constrain ts. The optimal solution is found straigh tforw ardly:x i;k = kq i Q,Z =0. Solution is feasible sincen P i =1 x i;k = k n P i =1 q i Q = k. Ho w ev er, this solution is not feasible for the complete model, i.e. violates constrain t ( 2.3 ). No w, w e need to nd a tool to con v ert this solution to a feasible one. W e dene t w o poin ts in space suc h thatY k =( y 1 ;k ;y 2 ;k ;:::;y n;k ) 2 R nandX k =( x 1 ;k ;x 2 ;k ;:::;x n;k ) 2 Z n.Y kis the ideal poin t whic h results in zero gap (y i;l = lq i Q ;l =1 ; 2 ;:::;kandX kis the nearest in teger poin t to the poin tY k, for thek thstage. Here, nearest means minimizen P i =1 ( b i ( x i;k y i;k )) 2. No w w e suggest Algorithm Nearest Point to nd the nearest in teger poin t to the ideal (infeasible) poin t. This algorithm is a modied v ersion of Milten burg's Algorithm 1 (Milten burg, 1989, p.195). Note that, in the follo wing algorithm and all the algorithms presen ted in this section, products denoted b y capital letters (A,B,...) can be an y of the products, not a certain product whic h is labeled b y this capital letter. The algorithm is giv en in Figure 21 W e illustrate this algorithm with a small example. Consider three products suc h that the n um ber of batc hes to be produced for eac h product is giv en as 5
PAGE 55
39 Algorithm NearestPoint 1.Set k =1. 2.Findthenearestnonnegativeinteger x i;k toeachcoordinate y i;k .Thatis,nd x i;k sothat j x i;k Ã‚Â¡ y i;k j 1 2 ; 8 i 3.Calculate k x = n P i =1 x i;k 4.(a)If k Ã‚Â¡ k x =0,thengotostep7.Thenearestintegerpointis X k =( x 1 ;k ;x 2 ;k ;:::;x n;k ). (b)If k Ã‚Â¡ k x > 0,thengotostep5. (c)If k Ã‚Â¡ k x < 0,thengotostep6. 5.Findthecoordinate y i;k withthesmallest b 2i (1+2( x i;k Ã‚Â¡ y i;k )).Breaktiesarbitrarily. Incrementthevalueofthis x i;k ; x i;k x i;k +1.Gotostep3. 6.Findthecoordinate y i;k withthesmallest b 2i (1 Ã‚Â¡ 2( x i;k Ã‚Â¡ y i;k )).Breaktiesarbitrarily.Decrementthevalueofthis x i;k ; x i;k x i;k Ã‚Â¡ 1.Gotostep3. 7.If k = Q stop.Elseset k = k +1,gotostep2. Figure 21: Pseudocode for Algorithm Nearest Point batc hes for products 1 and 2, and 1 batc h for product 3. Batc h sizes are 2, 2 and 3 units, for products 1, 2 and 3, respectiv ely Algorithm Nearest Point is used to nd a sequence whic h minimizes the objectiv e function. F ork =1(at the rst stage), ideal poin tY 1 =(0 : 45 ; 0 : 45 ; 0 : 09)and corresponding rounded poin t isX 1 =(0 ; 0 ; 0). ClearlyX 1is not acceptable (k x = n P i =1 x i;k =0 < 1= k), and step 5 is in v ok ed where a new poin tX 1 =(1 ; 0 ; 0)is found. The algorithm is run for stages 1 to 11, and the solution summarized in T able 21 is obtained. T o obtain a sequence from this solution w e proceed stage b y stage. The subsequence co v ering up to 5th stage is: 12123. Ho w ev er, the 6th stage brings a conict. It suggests us to produce products 1 and 2 at the same time, while destro ying one unit of product 3 whic h w as produced before. Th us, w e see that a feasible sequence can not be obtained from the solution, for this example. As this v ery simple example sho ws, Algorithm Nearest Point does not alw a ys yield a feasible sequence. F or cases in whic h this infeasibilit y occurs w e propose
PAGE 56
40 T able 21: Example for Algorithm Nearest Point Stage( k ) x 1 ;k x 2 ;k x 3 ;kProduct Sc heduled 1 1 0 0 1 2 1 1 0 2 3 2 1 0 1 4 2 2 0 2 5 2 2 1 3 6 3 3 0 1,2,3 7 3 3 1 3 8 4 3 1 1 9 4 4 1 2 10 5 4 1 1 11 5 5 1 1 Algorithm Modified Nearest Point whic h is again adapted from Milten burg (1989, p.196) (see Figure 22 ). Algorithm ModifiedNearestPoint 1.Solvethe2 nd phaseproblemusingAlgorithm NearestPoint ,anddetermineifthesequenceisfeasible.Ifyes,stop.Thesequenceistheoptimalsequence.Otherwise,gotostep2 2.Fortheinfeasiblesequencedeterminedinstep1,ndtherst(ornext)stage l where x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0.Set =numberofproducts i ,forwhich x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0. Resequencestages l Ã‚Â¡ ;l Ã‚Â¡ +1 ;:::;l +1byconsideringallpossiblesubsequencesforthis range.3.Repeatstep2forotherstageswhereinfeasibilityoccurs. Figure 22: Pseudocode for Algorithm Modified Nearest Point Algorithm Modified Nearest Point nds an optimal sequence, but in the w orst case the n um ber of infeasible stages in the sequence and the n um ber of stages to be resequenced ma y be as high as the total n um ber of batc hes,Q. Since the algorithm uses partial en umeration in step 2, the w orst case complexit y isO ( Q !). This result pro v es that neither Algorithm Nearest Point nor Algorithm Modified Nearest Point can be used to solv e the problems found in real life. No w, w e will dev elop a dynamic programming (DP) procedure for solving the2 ndphase problem more ecien tly
PAGE 57
41 DP ma y be used as an ecien t optimization tool for some problems where a n um ber of decisions ha v e to be made in sequen tial order, eac h decision denotes a transition from a state to another and objectiv e function can be expressed as a recursiv e equation. F or our problem w e kno w that the solution consists of a n um ber of decisions, i.e. whic h product w e should produce at whic h stage of the planning horizon. If w e ha v e decided on the subsequence up to a giv en stage, w e can form ulate the eect of the next stage's decision on the objectiv e function, using the subsequence at hand. The nal state for our problem is the state, in whic h ev ery batc h has been assigned to a stage of the sequence. The initial state is the state in whic h no batc h is assigned to an y stage. W e are trying to nd the most ecien t w a y to get from the initial state to the nal state. W e'll notate states with n v ectors,X k =( x 1 ;k ;x 2 ;k ;:::;x n;k ) 2 Z nsuc h thatn P i =1 x i;k = k, and0 x i;k q i ; 8 i. A t eac h stage w e should decide on what to produce in that stage. Therefore, a decision is simply a selection of one of the n products to produce. F or some states, since the requiremen t for some products ha v e already been met, the n um ber of possible decisions is less than n The follo wing recursiv e equation sho ws the impact of decisions on the objectiv e function and relationships bet w een neigh bor states.f ( X k )=min i f f ( X k e i )+ g ( X k ) j x i;k 1 0 g f ( X 0 )= f (0 ; 0 ;:::; 0)=0Where,e j = j thunit v ector; with n en tries, all of whic h are zero except a single 1 in thej thplaceg ( X k )= n X i =1 ( x i;k k q i Q ) 2
PAGE 58
42 Also, note thatf ( X k 1 )= f ( X k e i ), whereiis the index of product assigned tok thstage. T o calculate the complexit y for the suggested DP procedure, w e rst need to kno w n um ber of states. Sincex i;kcan tak e v alues 0,1,...,q i, there areq i +1possible v alues forx i;k. This means,n Q i =1 ( q i +1)distinct states exist. Since at most n decisions are ev aluated at eac h state, the complexit y of the procedure isO ( n n Q i =1 ( q i +1)). This complexit y is theoretically lo w er than the previous method. Therefore, it allo ws us to solv e sligh tly larger problems optimally but ev en for moderate problems, this complexit y is not practical. Whic h means w e need to design another method for large problems. In Kubiak and Sethi's (1991) assignmen t problem form ulation of Milten burg's (1989) problem, the authors further claim that the problem is con v ertible to an assignmen t problem in the presence of nonnegativ e product w eigh ts. In what follo ws w e propose a transformation to con v ert the second phase problem in to an assignmen t problem. First, w e dene an ideal position for eac h cop y of eac h product and a cost function whic h increases as a cop y deviates from its ideal position. LetZ i jbe the ideal position ofj thcop y of productiandC i jkbe the cost of assigningj thcop y of productito thek thstage of the sequence. Then, the
PAGE 59
43 follo wing form ulation is dened.Z i j = (2 j 1) Q 2 q i C i jk = 8>>>>>>>>>>>>><>>>>>>>>>>>>>: b 2i Z i j 1 P l = k ijl ;ifkZ i jWhere, ijl = j l q i Q 2 j 1 l q i Q 2 Here,d x edenotes the smallest in teger that is greater than or equal toxandj x jdenotes the absolute v alue ofx. LetY i jk 2f 0 ; 1 gbe the decision v ariable denoting whether thej thcop y of productiis assigned to thek thstage of the sequence. The assignmen t problem form ulation of the2 ndphase problem is giv en as follo ws.( AP )Minimizen X i =1 q n X j =1 Q X k =1 C i jk Y i jk(2.4) S.T.Q X k =1 C i jk Y i jk =1 ; 8 i;j(2.5)n X i =1 q n X j =1 C i jk Y i jk =1 ; 8 k(2.6)Y i jk 2f 0 ; 1 g ; 8 i;j;k(2.7) Constrain t set ( 2.5 ) assures that eac h cop y of eac h product is assigned to exactly one position. Similarly constrain t set ( 2.6 ) assures that exactly one cop y is assigned to eac h position. A graphical illustration of the assignmen t problem
PAGE 60
44 form ulation is giv en in Figure 23 As seen from the gure, our assignmen t form ulation has2 Qnodes. 1,1r Qr 2r 1r n,qr nr 2,1r 1,qr 1r 1,2ri,jr kr .r.r.r.r.r .r.r.r .r.r.r .r.r.rFigure 23: Graph Represen tation of the Assignmen t Problem F orm ulation The assignmen t problem with2 Qnodes can be solv ed inO ( Q 3 )time, and is one of the most ecien tly solv ed problems in the operations researc h literature. Solution methods for the assignmen t problem can be traced bac k to the w ellkno wn Hungarian Method (Kuhn, 1955). Balas, Miller, P ekn y and T oth (1991) giv e a parallel algorithm that can ecien tly solv e assignmen t problems with 900 million v ariables. This corresponds to 30000 batc hes in our problem. In a reallife man ufacturing system, planning for a larger n um ber of batc hes is highly unlik ely Therefore, the assignmen t problem form ulation for the2 ndphase problem is practical. 2.4 Problem Specic Heuristics for the2 ndPhase Problem In the previous section w e ha v e review ed exact algorithms that can be used to solv e the2 ndphase problem optimally The most ecien t method suggested is
PAGE 61
45 a transformation to assignmen t problem, whic h then can be solv ed inO ( Q 3 )time. If the problem at hand has a total n um ber of batc hes (Q) in the thousands, then it will tak e a signican t amoun t of space to con v ert the problem to an assignmen t problem and a signican t amoun t of time to solv e it. It will be quite useful if w e can propose some faster methods whic h ha v e been pro v ed to giv e good (near optimal) results. In the literature one can nd man y heuristic procedures addressing Milten burg's problem. The rst set of heuristics are suggested b y Milten burg (1989), along with the problem. Milten burg suggests an algorithm whic h uses t w o dieren t heuristic approac hes for the resequencing that stems from the nearest poin t calculation of his exact algorithm, as an alternativ e to partial en umeration. The rst heuristic is quite simple. Here w e explain this heuristic with adaptation to our problem and in a complete algorithmic structure (see Algorithm MH1 in Figure 24 ). Algorithm MH1 1.Solvethe2 nd phaseproblemusingAlgorithm NearestPoint ,anddetermineifthesequenceisfeasible.Ifyes,stop.Thesequenceistheoptimalsequence.Otherwise,gotostep2 2.Fortheinfeasiblesequencedeterminedinstep1,ndtherst(ornext)stage l where x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0.Set =numberofproducts i ,forwhich x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0.Resequencestages l Ã‚Â¡ ;l Ã‚Â¡ +1 ;:::;l +1byusingstep3foreverystageinthisrange. 3.Ifthestagetoberesequencedisstage k ,assigntheproductwithsmallest b 2i (1+2( x i;k Ã‚Â¡ 1 Ã‚Â¡ k q i Q ))tothisstage k 4.Repeatstep2forotherstageswhereinfeasibilityoccurs. Figure 24: Pseudocode for Algorithm MH1 The justication for the rule used in step 3 is as follo ws. Consider a stagek. If productAis assigned to stagek, the v ariation at this stage is denotes b yV k ( A ). Similarly if productBis assigned, then v ariation isV k ( B ). These v ariations are
PAGE 62
46 giv en belo w.V k ( A )= b 2A ( x A;k 1 +1 k q A Q ) 2 + b 2B ( x B;k 1 k q B Q ) 2 + X i 6 = A;B b 2i ( x i;k 1 k q i Q ) 2 V k ( B )= b 2A ( x A;k 1 k q A Q ) 2 + b 2B ( x B;k 1 +1 k q B Q ) 2 + X i 6 = A;B b 2i ( x i;k 1 k q i Q ) 2The dierence,V k ( A ) V k ( B )= b 2A (1+2( x A;k 1 k q A Q )) b 2B (1+2( x B;k 1 k q B Q ))As the dierence function sho ws,V k ( A )
PAGE 63
47 T able 22: Sequence F ound b y Algorithm Nearest Point Stage (k) x 1 k x 2 k x 3 k x 4 k Product Assigned to the Stage 1 1 0 0 0 1 2 1 0 1 0 3 3 2 0 1 0 1 4 2 0 1 1 4 5 2 0 2 1 3 6 2 0 2 2 4 7 3 0 3 1 1,3,4 8 3 0 3 2 4 9 4 0 4 1 1,3,4 10 4 1 3 2 2,4,3 11 4 1 4 2 3 12 5 1 5 1 1,3,4 13 5 1 5 2 4 14 6 1 6 1 1,3,4 15 6 1 6 2 4 16 7 1 6 2 1 17 7 1 6 3 4 18 7 1 7 3 3 19 8 1 8 2 1,3,4 20 8 1 8 3 4 means w e b ypass other steps and emphasize the rule in step 3. The sequence found b y using Algorithm MH1 is giv en in T able 23 F or the rst stages w e calculateb 2i (1+2( x i;k 1 k q i Q ))for eac h product. Comparing the v alues (0.2, 8.1, 0.8, 0.7), w e assign product 1 to stage 1. A tie can be brok en arbitrarily In stage 8, w e see suc h a tie where w e could c hoose either product 1 or product 4, and w e c hose product 4. The algorithm terminated with a feasible sequence whic h has a total v ariation of 27.85. The dra wbac k of Algorithm MH1 is, that it considers only the curren t stage, therefore it is m y opic. The algorithm ma y not be able to nd good solutions, due to its m y opic nature. Milten burg suggests another heuristic whic h promises better results. Here w e explain this second heuristic with adaptation to our problem and in Algorithm MH2 (see Figure 25 ).
PAGE 64
48 T able 23: Sequence F ound b y Algorithm MH1 Stage b 2i (1+2( x i;k 1 k q i Q )) Product V ariation (k) 1 2 3 4 Assigned (V k) 1 0.2 8.1 0.8 0.7 1 1.05 2 1.4 7.2 2.4 0.4 3 0.38 3 0.6 6.3 2.4 0.1 4 0.71 4 0.2 5.4 0.8 1.8 3 1.52 5 1.0 4.5 4.0 1.5 1 0.63 6 0.2 3.6 0.8 1.2 1 1.82 7 1.4 2.7 2.4 0.9 3 1.31 8 0.6 1.8 2.4 0.6 4 2.28 9 0.2 0.9 0.8 2.3 3 3.25 10 1.0 0.0 4.0 2.0 1 2.50 11 0.2 0.9 0.8 1.7 2 2.75 12 0.6 16.2 2.4 1.4 3 2.28 13 1.4 15.3 2.4 1.1 1 1.31 14 0.2 14.4 0.8 0.8 3 1.82 15 1.0 13.5 4.0 0.5 1 0.63 16 0.2 12.6 0.8 0.2 1 1.52 17 1.4 11.7 2.4 0.1 3 0.70 18 0.6 10.8 2.4 0.4 4 0.38 19 0.2 9.9 0.8 1.3 3 1.05 20 1.0 9.0 4.0 1.0 1 0.00 T otal 27.85 This algorithm is v ery similar to the previous one. Y et, it mak es use of the previous algorithm in step 4, to decide on the next product to be assigned. This is also a m y opic algorithm, but its vision is somewhat broader. This adv an tage results in abilit y to nd better solutions. As expected, this adv an tage has a cost whic h is paid in time complexit y Algorithm MH2 resequencesQproducts in the w orst case. Ev ery resequencing tak esO ( n 2 )time. Therefore, the time complexit y of the algorithm isO ( n 2 Q ). Applying Algorithm MH2 on the same example giv es the sequence in T able ( 24 ). Product pairs for the next t w o (k thandk +1 st) stages are found b y considering eac h product for thek thstage, separately and selecting the product for thek +1 ststage with the rule giv en in step 3 of Algorithm MH1 The rst elemen t of the pair
PAGE 65
49 Algorithm MH2 1.Solvethe2 nd phaseproblemusingAlgorithm NearestPoint ,anddetermineifthesequenceisfeasible.Ifyes,stop.Thesequenceistheoptimalsequence.Otherwise,gotostep2 2.Fortheinfeasiblesequencedeterminedinstep1,ndtherst(ornext)stage l where x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0.Set =numberofproducts i ,forwhich x i;l Ã‚Â¡ x i;l Ã‚Â¡ 1 < 0.Resequencestages l Ã‚Â¡ ;l Ã‚Â¡ +1 ;:::;l +1byusingsteps3to5foreverystageinthisrange. 3.Ifthestagetoberesequencedisstage k ,assignaproduct(letussayproduct A)tothisstage k .Calculatevariationcausedbythisassignment, V k ( A ). 4.AssumingproductA(fromstep3)isassignedtostage k ,ndtheproducttobe assignedtostage k +1(letussayproductB),usingtheruleinAlgorithm2.3step3. Calculatevariationcausedbythisassignment, V k +1 ( B ). 5.Performsteps3and4foreachproduct(tobeassignedtostage k ).AssignproductAwhichgivessmallesttotalvariation V k +1 ( A )+ V k +1 ( B ),tostage k 6.Repeatstep2forotherstageswhereinfeasibilityoccurs. Figure 25: Pseudocode for Algorithm MH2 whic h leads to the lo w est v ariation v alue is assigned to that step. As seen from the table, product pairs (1,3),(2,3),(3,4) and (4,3) giv e 1.4, 17.1, 2.0 and 2.9 units of v ariation, respectiv ely Consequen tly product 1 is assigned to stage 1. F or the last stage there is only one alternativ e product that can be assigned, th us no calculation is performed and product 1 is directly assigned to stage 20. The algorithm terminated with a feasible sequence whic h has a total v ariation of 27.35. The sequence found is v ery similar to the sequence found b y the previous algorithm. The dierence in the total v ariation is minimal for this example but the n um ber of operations performed b y the algorithm is m uc h larger. The selection bet w een algorithms MH1 and MH2 is a tradeo problem bet w een time and solution qualit y If one can nd an algorithm pro viding Algorithm MH2 's solution qualit y in MH1 's time, it will be m uc h better than both MH1 and MH2 Using specic structure of the problem, Ding and Cheng (1993) ha v e dev eloped a new heuristic algorithm whic h has this propert y and outperforms Milten burg's heuristics. They named this new algorithm as T w oStage Algorithm. This algorithm is
PAGE 66
50 T able 24: Sequence F ound b y Algorithm MH2 Stage Product P air & V ariation for Stages (k;k +1) Product V ariation (k) 1 2 3 4 Assigned (V k) 1 (1,3) 1.4 (2,3) 17.1 (3,1) 2.0 (4,3) 2.9 1 1.05 2 (1,3) 5.4 (2,3) 16.9 (3,4) 1.1 (4,3) 3.9 3 0.38 3 (1,3) 2.7 (2,1) 14.6 (3,1) 4.5 (4,3) 2.2 4 0.71 4 (1,3) 2.7 (2,3) 13.8 (3,1) 2.1 (4,3) 7.2 3 1.52 5 (1,1) 2.4 (2,1) 11.3 (3,1) 8.0 (4,1) 5.9 1 0.63 6 (1,3) 3.1 (2,3) 9.8 (3,1) 3.7 (4,3) 5.6 1 1.82 7 (1,3) 7.4 (2,3) 9.9 (3,4) 3.6 (4,3) 6.9 3 1.31 8 (1,3) 5.0 (2,3) 7.3 (3,1) 6.8 (4,3) 5.5 1 2.28 9 (1,3) 8.8 (2,3) 6.9 (3,2) 5.2 (4,3) 6.3 3 2.75 10 (1,2) 6.7 (2,4) 5.2 (3,2) 10.3 (4,2) 5.2 2 2.50 11 (1,3) 5.5 (2,3) 39.2 (3,1) 6.1 (4,3) 5.0 4 2.75 12 (1,3) 5.4 (2,3) 38.9 (3,1) 3.6 (4,3) 9.9 3 2.28 13 (1,3) 3.1 (2,1) 35.0 (3,1) 6.9 (4,1) 7.2 1 1.31 14 (1,3) 3.0 (2,3) 32.1 (3,1) 2.4 (4,3) 5.5 3 1.82 15 (1,4) 2.1 (2,1) 29.0 (3,1) 7.7 (4,1) 3.6 1 0.63 16 (1,3) 2.2 (2,3) 26.9 (3,1) 2.8 (4,3) 2.7 1 1.52 17 (1,3) 5.9 (2,3) 26.4 (3,4) 1.1 (4,3) 3.4 3 0.70 18 (1,3) 2.9 (2,3) 23.2 (3,4) 4.2 (4,3) 1.4 4 0.38 19 (1,3) 1.6 (2,3) 21.7 (3,1) 1.0 (4,3) 5.1 3 1.05 20 1 1 0.00 T otal 27.35 practical in terms of computation time and n um ber of operations, but complicated to understand the form ulation. The explanation of the mathematical details and some useful proofs can be found in Ding and Cheng (1993) and Cheng and Ding (1996). A dapting the t w o stage algorithm to our problem, w e dene Algorithm Twostage as giv en in Figure 26 Applying this algorithm on the same example results in the sequence giv en in T able ( 25 ). F or the rst stage (k =1), iv alues are found as 0.1, 3.8, 0.4 and 0.3 for products 1 to 4, respectiv ely The lo w est of the four is 0.4 ( 3), so product 3 is selected as the rst candidate,A. Calculating is in a similar w a y product 1 is
PAGE 67
51 Algorithm Twostage 1.Set k =1. 2.Determineproduct A thathasthelowestÃ‚Â£ i = b 2i ( x i;k Ã‚Â¡ 1 Ã‚Â¡ ( k + 1 2 ) q i Q + 1 2 ).Breakties arbitrarily.3.Determineproduct B thathasthelowestÃ‚Â¤ i 6 = A = b 2i ( x i;k Ã‚Â¡ 1 Ã‚Â¡ ( k +1) q i Q + 1 2 ).If Ã‚Â¤ B >b 2A ( x A;k Ã‚Â¡ 1 Ã‚Â¡ ( k +1) q A Q + 3 2 ) g ,set B = A 4.If A 6 = B and= b 2A ( x A;k Ã‚Â¡ 1 Ã‚Â¡ k q A Q + 1 2 ) Ã‚Â¡ b 2B ( x B;k Ã‚Â¡ 1 Ã‚Â¡ k q B Q + 1 2 ) > 0assign product B tostage k ,otherwiseassignproduct A tostage k 5.Eliminateaproduct,ifthelastcopyofithasbeenassignedtostage k ,fromfurtherconsideration.Ifallproductsnished,stop.Otherwise,set k = k +1andgotostep 2. Figure 26: Pseudocode for Algorithm Twostage selected as the second candidate (B). Critical v alueis calculated as 0.30, whic h tells usBshould be assigned to this stage, so product 1 is assigned to stage 1. Although the mathematical basics of the algorithm are c hallenging, the application is quite simple. The algorithm is a onepass method, whic h performsO ( n )operations per stage. Therefore, the complexit y of the algorithm isO ( nQ ). As T able ( 25 ) sho ws, the algorithm giv es the same sequence as Algorithm MH2 for this example. T esting these algorithms in terms of time and solution qualit y is bey ond the scope of this dissertation. W e refer to Cheng and Ding (1996), where the authors analyze three algorithms in troduced in this section, plus t w o other heuristics and one exact method (assignmen t problem form ulation from the previous section). Their results sho w that Algorithm Twostage is the most fa v orable heuristic for the problem. 2.5 MetaHeuristics for the2 ndPhase Problem In this section w e will briey review metaheuristic approac hes whic h ha v e been applied to Milten burg's model, therefore can easily be adapted to the2 ndphase problem.
PAGE 68
52 T able 25: Sequence F ound b y Algorithm Twostage Stage i i Product V ariation (k) 1 2 3 4 A 1 2 3 4 B Assigned (V k) 1 0.1 3.8 0.4 0.3 3 0.3 3.6 2.8 0.2 1 0.30 1 1.05 2 0.5 3.4 2.0 0.1 3 0.3 3.2 1.2 0.1 4 1.40 3 0.38 3 0.1 2.9 0.4 0.0 4 0.1 2.7 0.4 0.9 3 1.15 4 0.71 4 0.3 2.5 1.2 0.8 3 0.5 2.3 2.0 0.8 1 0.30 3 1.52 5 0.7 2.0 1.2 0.7 1 0.1 1.8 0.4 0.6 1 1 0.63 6 0.1 1.6 0.4 0.5 3 0.3 1.4 2.8 0.5 1 0.30 1 1.82 7 0.5 1.1 2.0 0.4 3 0.3 0.9 1.2 0.3 1 1.90 3 1.31 8 0.1 0.7 0.4 0.2 1 0.9 0.5 0.4 0.2 3 0.90 1 2.28 9 0.7 0.2 1.2 0.1 3 0.5 0.0 2.0 0.0 4 0.55 3 2.75 10 0.3 0.2 1.2 0.1 2 0.1 8.6 0.4 0.2 4 0.00 2 2.50 11 0.1 8.3 0.4 0.2 3 0.3 8.1 2.8 0.3 4 0.55 4 2.75 12 0.5 7.9 2.0 0.6 3 0.7 7.7 1.2 0.6 1 0.90 3 2.28 13 0.9 7.4 0.4 0.5 1 0.1 7.2 0.4 0.4 3 1.90 1 1.31 14 0.3 7.0 1.2 0.3 3 0.5 6.8 2.0 0.3 1 0.30 3 1.82 15 0.7 6.5 1.2 0.2 1 0.1 6.3 0.4 0.1 4 0.75 1 0.63 16 0.1 6.1 0.4 0.0 3 0.3 5.9 2.8 0.0 1 0.30 1 1.52 17 0.5 5.6 2.0 0.1 3 0.3 5.4 1.2 0.2 4 1.15 3 0.70 18 0.1 5.2 0.4 0.3 4 0.1 5.0 0.4 0.7 3 1.40 4 0.38 19 0.3 4.7 1.2 0.6 3 0.5 4.5 2.0 0.5 1 0.30 3 1.05 20 0.7 4.3 1.2 0.4 1 0.1 4.1 0.4 0.4 1 1 0.00 T otal 27.35 Metaheuristic (MH) tec hniques are in the set of general heuristic tec hniques, whic h can be tailored for a v ery broad v ariet y of problems with relativ ely less eort. The core of a metaheuristic tec hnique remains in tact in all applications, but user has the exibilit y to c hange parameters and c hoose a policy in time / solution qualit y tradeo. This exibilit y allo ws a MH tec hnique to be successfully applied to v ery dieren t problems. On the other hand, this exibilit y sometimes creates an a wkw ard situation for the user. W rong selection of the parameters can lead to either premature con v ergence or long computation times. In order to get desired results from a MH tec hnique, user should ne tune the tec hnique to the problem on hand.
PAGE 69
53 In terms of solution qualit y MH tec hniques pro vide better results, than simple, problemspecic heuristic methods. This claim is based on a v ery simple fact, that majorit y of MH tec hniques tak e their initial solutions from the best problemspecic (PS) heuristic tec hniques designed for that problem. Therefore, the solution found b y the MH tec hnique cannot be w orse than the result found b y the em bedded PS tec hnique. If there is no kno wn PS heuristic for the problem studied, one should select an arbitrary initial solution (starting poin t), and this ma y cause poor performance for the MH tec hnique. MH tec hniques are based on searc hing the solutions space, with the expectation of nding a v ery good solution, or the optimal solution. Searc h is generally directed b y feasibilit y constrain ts and objectiv e function. Majorit y of MH tec hniques ignore specic properties of the problem studied and use a general searc h procedure. This causes an increase in the run time of the method. Since, better results are found, this excess time consumption is acceptable. Also, the tradeo bet w een running time and solution qualit y can be handled in the netuning phase. Among the best kno wn metaheuristic tec hniques are Sim ulated Annealing, T abu Searc h and Genetic Algorithms. Dev eloping a new metaheuristic approac h for2 ndphase problem or conducting a comparativ e analysis of existing metaheuristic tec hniques for Milten burg's model is not included in the scope of this dissertation. Hence, this section aims to pro vide a brief review of importan t papers in the eld. McMullen (1998) implemen ts T abu Searc h (TS) tec hnique on Milten burg's problem. He calls Milten burg's objectiv e function as Usage Go al and denes a second objectiv e for the problem. This second objectiv e is to minimize the total n um ber of setups occurring in the sequence. These t w o objectiv es are com bined in to a single objectiv e b y assigning w eigh ts associated with eac h objectiv e. T o nd w eigh ts for eac h objectiv e, McMullen creates random solutions and calculates
PAGE 70
54 a v erage v alues for the objectiv es. The w eigh ts are assigned in suc h a w a y that, con tributions of the alternativ e objectiv es are equal, on the a v erage. F or the experimen ts, he tests both extremes (setting one of the w eigh ts to zero, hence w orking with only one objectiv e), equal con tribution w eigh ts, and t w o other com binations whic h come from assuming one of the objectiv es is three times more importan t than the other. Minimization of total n um ber of setups is not addressed b y our problem, so the results with the rst objectiv e only is importan t for us. Ho w ev er, what w e w an t to emphasize in McMullen's study is the implemen tation, not the computational results. The neigh borhood structure used in this paper, is a simple one. It is constructed b y selecting an y t w o positions in the sequence and sw apping the products in these positions. This w a y an y solution hasQ ( Q 1)(the sequence hasQpositions) neigh bor solutions. F rom eac h solution, a n um ber of neigh bor solutions (this n um ber is a parameter of the method) are randomly selected and tested. The neigh bor whic h has the best objectiv e function v alue is selected as the candidate solution for the mo v e. If this candidate solution is prohibited b y the tabu list for this iteration, then the objectiv e v alue is compared to the aspiration v alue. If lo w er than the aspiration v alue, the mo v e is allo w ed, otherwise second best solution is examined. When an appropriate solution is found, mo v e is performed. By mo v e w e mean updating curren t solution, tabu list and aspiration v alue. The method stops when a predetermined n um ber of iterations (a parameter of the method) is reac hed, or the best solution has not been updated during the last sev eral operations. This n um ber is also a parameter for the method. T abu Searc h is kno wn to be an in telligen t searc h tec hnique, whic h can nd good solutions for the problem. Managing the tabu list is the k ey elemen t of the method. The size and con ten t of the list are v ery importan t. Generally solutions
PAGE 71
55 are not stored in the tabu list. A k ey c haracteristic of the mo v e, e.g., positions or products to sw ap, are stored in the tabu list. This propert y allo ws the user design a more ecien t searc h method. Sim ulated Annealing (SA) is one of the simplest metaheuristic tec hniques. The parameters are restricted to a cooling sc hedule and an acceptance probabilit y of the mo v e considered. Cooling sc hedule denes initial temperature and temperature c hange rate for an iteration. The method terminates when temperature reac hes a predetermined lo w est lev el. T o prev en t premature con v ergence to a local optim um, mo v es to neigh bor solutions with w orse objectiv e v alues are permitted. The probabilit y to accept suc h a mo v e is a function of the temperature and the dierence bet w een the objectiv e function v alues of curren t solution and candidate solution. When temperature is high, mo v es to poorer qualit y solutions are lik ely When temperature is lo w, a more conserv ativ e policy is adapted, where only the mo v es to better solutions are accepted. The k ey to a successful implemen tation of SA is the cooling sc hedule. If too fast, the method ma y return a v ery poor solution; if too slo w, the method is lik ely to return a good solution but tak e a v ery long time. McMullen and F razier (2000) apply SA to the same problem dened for T abu Searc h application in McMullen (1998). Same neigh borhood function is used. Therefore, a comparison bet w een performance of the t w o alternativ e methods mak es sense. The authors claim that SA outperformes TS for majorit y of the test cases. This result is importan t, since it pro v es that a simpler tec hnique can be used instead of a more complicated one, without a loss in the performance. Another popular MH tec hnique is the socalled Genetic Algorithm (GA). The idea is k eeping a population of solutions at hand, and perform some m utation and
PAGE 72
56 crosso v er operations on the solutions. As time (iterations) passes, poorer solutions die and more t (better objectiv e function v alue) solutions surviv e. This is actually a sim ulation of living organisms. P arameters for the method are population size, n um ber of iterations before termination, m utation probabilit y and set of solutions for crosso v ers. The simplest form of crosso v er operator selects a crosso v er poin t and tak es genes before the crosso v er poin t from one of the paren ts, and the rest from the other paren t. This crosso v er giv es t w o osprings, whic h are examined for surviving to the next generation. More complicated crosso v er operators select sev eral crosso v er poin ts or use sev eral paren ts to produce osprings. F or sequencing problems crosso v er operators generally cause infeasibilit y in ospring solutions. One can try to con v ert the infeasible ospring to a feasible one with some t ype of neigh borhood functions and a searc h method, or to kill the infeasible ospring instan tly Both w a ys cause ineciency b y increasing the run time. A more comprehensiv e approac h is to dene a new operator for the problem at hand. McMullen et al. (2000) dene a specic crosso v er operator for their study They select t w o crosso v er poin ts, the c hromosomes bet w een these poin ts are preserv ed in the osprings as they are. The remaining c hromosomes come from the other paren t. T o a v oid infeasibilit y an y repeating c hromosomes are deleted. This assures that no c hromosome will be represen ted more than required amoun t, and the osprings will yield feasible solutions. The dra wbac k of this approac h is, that the osprings are not similar to the paren ts. That is, the fundamen tal elemen t of the GA tec hnique is lost. The authors compare their results with TS and SA results from McMullen (1998) and McMullen and F razier (2000), and claim that GA giv es more fa v orable results.
PAGE 73
57 Indeed, it is dicult to claim that a metaheuristic method consisten tly outperforms other metaheuristic methods. One ma y w ork v ery w ell on some instances, while the other w orks w ell with some other instances of the same problem. And the sensitivit y and criticalit y of the netuning phase mak es a tec hnique v ery vulnerable. P oor results ma y be because of selecting a wrong set of parameters, rather than the w eakness of the tec hnique. F or further reading on metaheuristic applications on Milten burg's problem, w e refer the reader to Cho et al. (2005), Mansouri (2005), McMullen (2001a), McMullen (2001b), McMullen (2001c) and McMullen and T arasewic h (2005). 2.6 1 stPhase Problem F orm ulation W e start building the model b y dening constrain ts. First constrain t set comes from the need to meet the demand. Since the batc h sizes, the n um ber of batc hes and the demand are all in tegers and the batc h sizes are not allo w ed to c hange within the planning horizon, only few batc h sizes and n um bers of batc hes can meet the demand in exact quan tities. In this case, one should allo w either under or excess production. W e c hoose to allo w excess production in the minimal quan tities. These excess amoun ts can be used to adjust demands in the next planning horizon. The second set of constrain ts come from feasibilit y concerns. Since w e dene a xed timebuc k et ( t ), w e ha v e to assure that all batc hes can be processed within this xed timebuc k et.
PAGE 74
58 W e form ulate the constrain ts as follo ws.b i = d i q i ;i =1 ;::;n s i + p i b i t;i =1 ;::;n t n X i =1 q i = T t 0 b i ;q i 1 ;in teger.i =1 ;::;nThe parameters and v ariables used in the form ulation are dened as follo ws. n n um ber of products T total a v ailable time i product index (i =1 ;::;n)q in um ber of batc hes for product ib ibatc h size for product id idemand for product iD icum ulativ e demand of products 1 toi s isetup time for product ip iprocessing time for product i t length of xed time in terv al(= T= n P i =1 q i ) t itime required to process one batc h of producti (= s i + p i b i )Q total n um ber of batc hes(= n P i =1 q i )W e ha v e three decision v ariables in the model. Batc h size (b i) is expressed as a function of n um ber of batc hes (q i), so w e can eliminateb iand also the rst constrain t set from the model. By using the third constrain t w e can com bine the second set of constrain ts and third constrain t in to a single constrain t and t can be eliminated from the decision v ariables. The resulting set of constrain ts is giv en
PAGE 75
59 belo w. s i + p i d i q i n X h =1 q h T;i =1 ;::;n q i 1 ;in teger;i =1 ;::;nHa ving constructed the necessary constrain ts, w e no w can adv ance to dening the objectiv e function. The o v erall objectiv e of the model is to minimize the deviation bet w een the sequence found at the end of the second phase and the ideal sc hedule. So, the t w o phases should be considered together. The goal for the rst phase can be one of the follo wings.G 1Dev elop a lo w er bound for the resulting deviation. Minimize this lo w er bound b y c hoosing appropriate n um ber of batc hes.G 2Since ac hieving smallest batc h sizes (to w ards onepieceo w) lies in the philosoph y of JIT, w e should try to maximize total n um ber of batc hes to be sequenced. (The more the n um ber of batc hes the smaller the batc h sizes.)G 3Similar toG 2, w e can try to minimize length of the xed timebuc k et ( t ) to reduce cycle time and get closer to onepieceo w.G 4Considering the possibilit y thatG 3ma y giv e lo w utilization for the mac hine, i.e. one of the batc hes ma y use a v ery lo w percen tage of t W e ma y try to maximize the ratio of the minim um time tak en b y a batc h to the cycle length t .G 5Similar toG 4in order to impro v e a v erage mac hine utilization, w e ma y maximize the a v erage time usage ratio of all products.G 6Another importan t principal for JIT is to k eep lo w WIP in v en tory minimizing a v erage WIP in the system ma y be used as an objectiv e. W e can select one of the goals described abo v e, and form ulate it as the objectiv e function for the rst phase. Eac h one of these goals is justiable and w e
PAGE 76
60 ma y also construct a m ultiobjectiv e model whic h reects t w o or more of these goals. W e consider four dieren t objectiv e functions.MinF 1 = n X i =1 b 2i ( Q 2 q 2 i ) 12 Q LexminF 2 = f n X i =1 q i ; n X i =1 b i g LexminF 3 = f n X i =1 b i ; n X i =1 q i g LexminF 4 = f max i f t i g ;max i f t i g min i f t i ggNote that the last three objectiv e functions are lexicographic objectiv e functions. In lexicographic optimization the goal is to select from among all optimal solutions to objectiv e function 1, the one that optimizes the second objectiv e function. F or more on lexicographic optimization see, for example, Hamac her and T ufek ci (1984). Aigbedo's (2000) lo w er bound approac h is adopted to dene the rst objectiv e function,F 1(Details on deriv ation of this lo w er bound can be found in AppendixA ). The idea is to use the lo w er bound of the second phase objectiv e function, as the objectiv e function of the rst phase. Our preliminary experimen ts ha v e sho wn that there is a high correlation bet w een the original deviation function Z and the lo w er boundF 1. Th us, w e expect that optimal solution to the rst phase will lead to a nearoptimal solution for the second phase.F 2is a lexicographic expression of goalsG 2andG 6.F 3is similar toF 2but the priorit y bet w een the alternate objectiv es is in rev erse, i.e.,F 3is a lexicographic expression of goalsG 6andG 2. Details on formalization of these objectiv es can be found in AppendixA The rst three objectiv es ma y lead to some solutions where batc h processing times( t i )ma y uctuate highly Ha ving production smoothing idea in mind, w e dene a fourth objectiv eF 4where the primary objectiv e is to minimize the
PAGE 77
61 maxim um of batc h processing times, while k eeping batc h processing times close to eac h other. In other termsF 4is a lexicographic expression of goalsG 3andG 4. One of the objectiv e functions dened abo v e should be c hosen according to the problem at hand. Eac h objectiv e leads to a dieren t model, therefore w e ha v e four models to solv e. W e randomly create 125 test instances eac h with four products (n =4). F or eac h instance, w e en umerate all the solutions and for eac h feasible solution (eac h batc h can be processed withint) w e calculate four alternativ e objectiv e function v alues, whic h are noted as independen t v ariables. Then, w e use eac h feasible solution of the rst phase as input to the second phase problem and solv e the latter (sequencing) problem optimally The objectiv e v alue of the optimal solution to the second phase (Z ) is noted as the dependen t v ariable. After completing the en umeration for eac h test instance, w e compute correlation coecien ts for the four alternativ e objectiv es of the rst phase andZ Results from this analysis sho w that there is a higher correlation bet w eenF 1andZ as compared to the other objectiv e functions. The results of this correlation analysis are giv en in T able 26 T able 26: Correlations Bet w een Alternativ e1 stPhase Objectiv es andZ Correlation Coecien t (R 2) Statistic (F 1vs.Z ) (F 2vs.Z ) (F 3vs.Z ) (F 4vs.Z ) A v erage 0.9905 0.5399 0.8693 0.6170 Std.Dev. 0.0119 0.2769 0.1230 0.3129 Max. 1.0000 0.8986 0.9959 0.9915 Min. 0.9057 0.3141 0.2219 0.1651 Realizing the high correlation bet w een the lo w er bound and the optimal solution of the second phase problem, w e decide to use this lo w er bound function as the objectiv e function of the rst phase problem.
PAGE 78
62 The follo wing optimization model represen ts the1 stphase problem. MinimizeF = n X i =1 l d i q i m 2 n P h =1 q h 2 q 2 i n P h =1 q h(2.8) S.T. s i + p i d i q i n X h =1 q h T; 8 i(2.9) d i q i = b i ; 8 i(2.10)q i = d i b i ; 8 i(2.11)1 q i d i ;q iin teger; 8 i(2.12) Note that in constrain ts ( 2.10 ) and ( 2.11 )b i(batc h size for producti) is used as a state v ariable. These t w o constrain ts assure that excess production is limited to the minim um. Decreasingb iorq ib y 1 w ould result in under production. 2.7 Structural Properties of the1 stPhase Problem The1 stphase problem is an In teger NonLinear Programming (INLP) problem. INLP problems inherit the diculties of t w o paren t problem classes, IP and NLP Some v ery special cases of NLPs where constrain ts are linear or objectiv e function is con v ex can be solv ed ecien tly but the rest of the class is kno wn to be v ery hard. On the other hand, the general IP problems are NPHard. Our problem, being an INLP ma y be NPHard, as w ell. In the follo wing, w e rst reduce the problem to a simpler one and then formally pro v e its computational complexit y Prop osition 2.7.1 L etAb e a given pr o duct. Assuming all other variables ( q i ;i 2 N nf A g) and b atch size for pr o ductA(b A) c onstant, the obje ctive functionFis monotone incr e asing inq A.
PAGE 79
63 Proof The objectiv e function can be split in to smaller functionsF i ;i 2 N, as follo ws.F = n X i =1 b 2i (( q A + Q 0 ) 2 q 2 i ) q A + Q 0 = n X i =1 F iwhereF i = b 2i (( q A + Q 0 ) 2 q 2 i ) q A + Q 0andQ 0 = X i 2 N nf A g q iNote that, the objectiv e function is dieren tiable inq A. Nonnegativit y of the rst deriv ativ e clearly pro v es the proposition.@F i @q A = 8>><>>: b A Q 0 q A + Q 0 2 ; ( 0) ;ifi = A b i q A + Q 0 2 (( q A + Q 0 ) 2 + q 2 i ) ; ( 0) ;o/w) @F @q A = n X i =1 @F i @q A 0 Similarly if ev erything butb Ais constan t, smallerb Av alues yield smallerFv alues.@F i @b A = 8>><>>: 2 b A ( Q 2 q 2 A ) Q ; ( 0) ;ifi = A 0 ;o/w) @F @b A = n X i =1 @F i @b A 0This information about the deriv ativ es is closely related to constrain t sets ( 2.10 ) and ( 2.11 ). No w, in order to mak e use of this information and the constrain ts, w e in troduce the concept of ac c eptable values A cceptable v alues of a decision v ariable (q i) are the in teger v alues that satisfy the equationq i = d d i = d d i =q i ee. LetA i = f r i; 1 ;:::;r i;a i gbe the complete set of acceptable v alues ofq iwherer i;h iis theh thiacceptable v alue anda iis the cardinalit y of the set. F or an yq i = 2 A i, there exists anr i;j 2 A isuc h thatr i;jconsumes lesser resource time and yields smaller excess production, therefore preferred o v erq i. Algorithm Find Acceptable Values nds all the acceptable v alue setA ifor eac h producti 2 N(see Figure 27 ).
PAGE 80
64 Algorithm FindAcceptableValues Foreach i 2 N f 1.Startwithanemptylist: A i = ? a i =0.Set q i =1and b i = d i 2.Add q i tothelist: a i a i +1, r i;a i q i A i A i [f r i;a i g IF( q i = d i )THENSTOP. 3.IF( d d i = ( q i +1) e 55, the demand is met with o v erproduction. Ho w ev er, forb i =7, a better solution (q i =8) exists with only one unit of o v erproduction (8 7=56 > 55). A quic k er w a y to sho w thatq i =9is not an acceptable v alue is calculatingd d i = d d i =q i ee = d 55 = d 55 = 9 ee6 =8= q iAlso note that, the v alues sho w a symmetric pattern. The condition in Step 2 will be true in at mostp d iiterations. Using the symmetry relationship bet w een the acceptableq iand correspondingb iv alues, it is clear that the condition in Step 2 will be false in at mostp d iiterations. Therefore, the procedure terminates after at most2 p d iiterations. Since eac h iteration
PAGE 81
65 generates exactly one acceptable v alue, denoting the n um ber of acceptable v alues forq iwitha i, w e claim thata i 2 p d i. Using the acceptable v alues, w e dene a simpler v ersion of the problem. Note that, if w e could assumeQconstan t, then the objectiv e function w ould be m uc h easier to handle (P ni =1 b 2i ( Q 2 q 2 i )). Let us dene the con tribution of productito the objectiv e, ifh thiacceptable v alue is selected, b yf i;h i = d d i =r i;h i e 2 ( Q 2 r 2 i;h i ). Dening a new decision v ariabley i;h i 2f 0 ; 1 gdenoting whether an acceptable v alue is selected or not, the problem with a constan tQis reduced to the follo wing.( MP 1)Minimizen X i =1 a i X h i =1 f i;h i y i;h i(2.13) S.T.a i X h i =1 y i;h i =1 8 i(2.14)n X i =1 a i X h i =1 r i;h i y i;h i = Q(2.15)y i;h i 2f 0 ; 1 g8 i;h i(2.16) Theorem 2.7.1 MP1 is NPc omplete. Proof W e rst describeMP 1v erbally F or sak e of simplicit y w e w ork with the decision v ersion of the problem. Giv en a nite setA, a partition of this set in tondisjoin t subsetsA i ;i 2 N,j A i j = a i; sizesr i;h i 2 Z +and w eigh tsf i;h i 2 R +associated with eac h elemen th i 2 A i ;i 2 N, a positiv e in tegerQand a positiv e real n um berY; is there a subsetA 0 Awhic h includes exactly one elemen t from eac h subsetA i, suc h that the sum of sizes of the elemen ts inA 0is exactlyQunits and sum of w eigh ts of the elemen ts inA 0is less than or equal toY? Guessing a solution b y selecting an elemen t from eac h subsetA iand v erifying if the solution satises the conditions can be performed in polynomial time, th us
PAGE 82
66 the problem is in the set NP Ho w ev er, the harder part of the proof is nding an NPcomplete problem whic h can be reduced to a special case of the problem on hand (MP 1) in polynomial time. W e select the Subset Sum Problem (SSP) (Garey and Johnson, 1979, p.223): Giv en a nite setC, sizez c 2 Z +for eac hc 2 Cand a positiv e in tegerL, is there a subsetC 0 Csuc h that the sum of the sizes of the elemen ts inC 0is exactlyL? F or an y instance of theSSP, w e are giv enj C jelemen ts and w e createj C jdumm y elemen ts whic h ha v e 0 size,z c =0 ;c = j C j +1 ;::; 2 j C j. Then w e assign zero w eigh ts to ev ery elemen t (both the originals and dummies),f c =0 ;c =1 ;::; 2 j C j. The last step is to formj C jdisjoin t subsets consisting of exactly one original and one dumm y elemen t (n = j C j). SettingQ = Lclearly states that theSSPis reduced toMP 1. F or an y solution, if the answ er forMP 1is y es, then the answ er for theSSPinstance at hand is y es, as w ell. And this holds for no as w ell. Therefore, the general case of theSSPis iden tical to this special case of theMP 1and an y instance of theSSPcan be form ulated and solv ed as anMP 1. F urther, the reduction in v olv esO ( j C j )operations, and is a polynomial time reduction. If one can nd a polynomial time algorithm for theMP 1, it can be used to solv e theSSPas w ell. Since theSSPis NPcomplete (Karp, 1972), so isMP 1. Corollary 2.7.1 The1 stphase pr oblem is NPc omplete. Proof Kno wing that the modied problemMP 1is NPcomplete, being its general case, so m ust be the original problem (The1 stphase problem). 2.8 Exact Methods for the1 stPhase Problem In the literature, INLP problems are generally treated in a broader class of Mixed In teger NonLinear Programming (MINLP) problems. The dierence in MINLPs is some of the v ariables are con tin uous and some are in teger. Our problem can be seen as a MINLP problem with zero con tin uous v ariables.
PAGE 83
67 Methods for solving MINLPs include inno v ativ e approac hes and related tec hniques tak en and extended from Mixed In teger Linear Programming (MILP). Outer Appro ximation (O A) methods, Branc handBound (B&B), Extended Cutting Plane methods, and Generalized Bender's Decomposition (GBD) for solving MINLPs ha v e been discussed in the literature since the early 1980's. General idea in these algorithms is to pro vide o v erestimators (NLP subproblem) and underestimators (MILP master problem) for the problem. Algorithms pro vide guaran tee of con v ergence in a nite n um ber of iterations for problems with:Con v ex objectiv e functionCon v ex (in)equalit y constrain tsLinear equalit y constrain ts Since our problem has nonsmooth nonlinear functions in both in the objectiv e function and the constrain ts, w e are highly unlik ely to successfully apply one of the abo v e men tioned methods. W e propose a bounded dynamic programming (BDP) solution method that com bines features of dynamic programming and branc handbound methods to successfully handle m uc h larger size problems (see Morin and Marsten (1976) for details on BDP). 2.8.1 Dynamic Programming F orm ulation Giv en a xedQv alue, the objectiv e function ( 2.8 ) simplies toF 0 = P ni =1 ( d d i =q i e ) 2 ( Q 2 q 2 i ) =Q, whic h is separable inq iv ariables. If the v ectorq ( Q )= ( q 1 ;q 2 ;::;q n )is an optimal solution to the problem withP ni =1 q i = Q, then the subv ector( q 2 ;q 3 ;::;q n )should be optimal to the problem withP ni =2 q i = Q q 1, as w ell. Otherwise, the v ectorq ( Q )can not be an optimal solution. Th us, the principle of optimalit y holds for the problem and w e can build the optimal solution b y consecutiv ely deciding on theq iv alues. LetR ibe the total n um ber of batc hes committed to the rstiproducts. The product indexiis the stage index, and the
PAGE 84
68 pair( i;R i )represen ts the states of the DP form ulation. Figure 28 illustrates the underlying net w ork structure of the problem. n,Qr 0,0r 1,1r 1,2r 1,Dr 1r n,nr n,Dr nr 2,2r 2,3r 2,Dr 2r .r.r.r .r.r.r .r.r.r .r.r.r Figure 28: Net w ork Represen tation of the Problem In the net w ork, eac h node represen ts a state in the DP form ulation and arcs reect the acceptable v alues suc h that an arc is dra wn from node (i 1 ;R i 1) to node (i;R i 1 + q i) for eac hq i 2 A i. W e dene the follo wing recursiv e equation.F ( i;R i )= 8>><>>: 0 ;ifi =0 min q i f F ( i 1 ;R i q i )+ l d i q i m 2 ( Q 2 q 2 i ) =Q j s i + l d i q i m p i T Q g ;ifi> 0
PAGE 85
69 Note that, the recursiv e equation is a function ofQ, that can be used for a giv enQv alue only Also, the nal state is( n;Q ), and the solution to the problem,F ( n;Q ), can be found with the follo wing forw ard recursion (see Figure 29 ). Algorithm ForwardRecursion( Q ) 1.Initialize F (0 ; 0)=0, F ( i;R i )= 1 forall i 2 N ,1 R i D i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i valuethatsatises s i + l d i q i m p i T Q f 5.IF( F ( i;R i Ã‚Â¡ 1 + q i ) >F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q )THEN f 6.Set F ( i;R i Ã‚Â¡ 1 + q i )) F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q 7.Update ActiveNodes i ActiveNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) 8. q Ã‚Â¤ i ( Q ) q i g g g g Figure 29: Pseudocode for Algorithm Forward Recursion When the algorithm terminates, it returnsq i ( Q )v ector that is an optimal solution for the giv enQv alue andF ( n;Q )that is the objectiv e v alue of this optimal solution. As in an y DP model, the n um ber of nodes gro ws exponen tially with the n um ber of stages. In the nal (n th) stage, w e migh t ha v e at mostQ ni =1 a inodes. This is a straigh tforw ard result of the fact that eac h node in the( i 1) ststage is connected to at mosta inodes in thei thstage. Ho w ev er, w e also kno w that the maxim um index for a node in the nal lev el is (n;D n). Therefore, the n um ber of nodes in the nal lev el is at mostmin f Q ni =1 a i ;D n n +1 g. An upper bound for the total n um ber of nodes in the graph isP ni =1 min f Q il =1 a l ;D i i +1 g.
PAGE 86
70 In order to deriv e the computational complexit y of algorithm Forward Recursion w e need to kno w the n um ber of arcs, as w ell. The n um ber of arcs in to thei thstage is a function of the n um ber of nodes in the( i 1) ststage anda i. An upper bound on this n um ber isa i min f Q i 1 l =1 a l ;D i 1 i +2 g. Therefore, w e claim that the total n um ber of arcs in the net w ork is at mosta 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g. In the w orst case, steps six through eigh t are executed as man y times as the n um ber of arcs in the net w ork. Therefore, the w orst case time complexit y of the algorithm isO ( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g ). Abo v e algorithm solv es the problem for a giv enQv alue. Ho w ev er, the problem does not tak e aQv alue as an input parameter, but returnsQas a result of the solution v ector. Moreo v er, an arc cost can be calculated only ifQis kno wn. Therefore, w e need to solv e a DP for eac h possible v alue ofQ. W e propose algorithm Solve with DP for the solution of the problem (see Figure 210 ). The algorithm iden ties all possible v alues ofQand emplo ys algorithm Forward Recursion successiv ely to solv e the emerging subproblems. The algorithm yieldsQ as the optimalQv alue, whic h leads to the optimal solution v ectorq ( Q )and also the optimal solution's objectiv e v alueF ( n;Q ). Steps one through v e can be considered as a preprocessing phase where the reac hable nodes are iden tied. The w orst case complexit y of this preprocessing phase depends on the n um ber of arcs in the net w ork represen tation of the problem, in that it is equal to that of algorithm Forward Recursion Since algorithm Forward Recursion is repetitiv ely in v ok ed in step eigh t, the preprocessing phase does not aect the o v erall time complexit y of the algorithm. Steps sev en through nine are repeated for eac h reac hable node at the last stage of the DP form ulation. The n um ber of reac hable nodes is bounded abo v e b yD n n +1. Therefore, algorithm Forward Recursion ma y be in v ok ed at mostD n n +1
PAGE 87
71 Algorithm SolvewithDP 1.Initialize Q Ã‚Â¤ =0, F ( n;Q Ã‚Â¤ )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) g g g6.Foreachreachablenode( n;R n ) f 7.Set Q R n 8.Findtheoptimalsolutionforthegiven Q valueusingAlgorithm ForwardRecursion 9.If F ( n;Q Ã‚Â¤ ) >F ( n;Q )THEN f 10.Update Q Ã‚Â¤ Q g g Figure 210: Pseudocode for Algorithm Solve with DP times, yielding an o v erall w orst case time complexit y ofO (( D n n +1)( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g ))This time complexit y sho ws that, the computational requiremen t of the DP procedure depends on external parameters suc h asd is anda is. Therefore, the procedure ma y be impractical for large size problems. In the next subsection, w e dev elop sev eral bounding strategies to reduce the computational burden of the DP procedure. 2.8.2 Bounding Strategies An upper limit forQ. Noting that the length of the takttime cannot be smaller than the sum of processing and setup times of an y batc h leads to the follo wing upper bound for the possibleQv alues.T Q s i + p i ; 8 i ) Q Q U = T max i f s i + p i ;i 2 N g
PAGE 88
72 Eliminate in termediate nodes whic h cannot yield a feasible solution A t an y stage,R ima y increase b y at mostd iand at least 1 units. Therefore, as w e proceed to w ards the nal state, w e eliminate the in termediate nodes (i;R i) withR i >Q n + iorR i
PAGE 89
73 Therefore, w e updateQ Lev ery timeQ is updated, and dynamically narro w the searc h space onQ. Incorporating all the bounding strategies dev eloped, w e propose algorithm Solve with BDP (Figure 211 ) for the solution of the problem, using algorithm Bounded Forward Recursion (Figure 212 ) for successiv ely solving emerging DPs. In the algorithm,d x edenotes the largest in teger smaller than or equal tox. Algorithm SolvewithBDP 1.Initialize Q Ã‚Â¤ =0, F ( n;Q Ã‚Â¤ )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N .Alsocompute U 0 and V 0 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) g g6.Compute U i and V i g7.Set Q L =1and Q U = b T= max i f s i + p i ;i 2 N gc 8.Foreachreachablenode( n;R n )satisfying Q L R n Q U ,indecreasingorder f 9.Set Q R n 10.Findtheoptimalsolutionforthegiven Q valueusing Algorithm BoundedForwardRecursion 11.If F ( n;Q Ã‚Â¤ ) >F ( n;Q )THEN f 12.Update Q Ã‚Â¤ Q 13.Update Q L b ( U 0 Ã‚Â¡ V 0 ) =F ( n;Q Ã‚Â¤ ) c g g Figure 211: Pseudocode for Algorithm Solve with BDP 2.8.3 Numerical Example W e illustrate the DP form ulation and implemen tation of the bounding policies on an example withn =2products. Let demand for the products be giv en b yd 1 =15andd 2 =10. Also, let the processing and setup time data be giv en b y
PAGE 90
74 Algorithm BoundedForwardRecursion( Q ) 1.Initialize F (0 ; 0)=0, F ( i;R i )= 1 forall i 2 N and1 R i D i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 thatsatises (( Q Ã‚Â¡ D n + D i Ã‚Â¡ 1 R i Ã‚Â¡ 1 Q Ã‚Â¡ n + i +1)AND ( F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ G ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) F ( n;Q Ã‚Â¤ ))) f 4.Foreach q i 2 A i valuethatsatises s i + l d i q i m p i T Q f 5.IF( F ( i;R i Ã‚Â¡ 1 + q i ) >F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q )THEN f 6.Set F ( i;R i Ã‚Â¡ 1 + q i ) F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q 7.Update ActiveNodes i ActiveNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) 8. q Ã‚Â¤ i ( Q ) q i g g g g Figure 212: Pseudocode for Algorithm Bounded Forward Recursionp 1 =1,p 2 =2,s 1 =8ands 2 =3min utes. Our goal is to nd the optimal batc hing plan for utilizing the total a v ailable time,T =180min utes. The acceptable v alues for the giv en demand data areq 1 2 A 1 = f 1 ; 2 ; 3 ; 4 ; 5 ; 8 ; 15 gandq 2 2 A 2 f 1 ; 2 ; 3 ; 4 ; 5 ; 10 g. The net w ork structure of the DP form ulation is depicted in Figure 213 As seen from the gure, the n um ber of nodes and arcs increase dramatically with the stages. The straigh tforw ard application of the DP procedure (using algorithm Solve with DP ) requires solving the problem for eac h possibleQv alue, starting withQ =2. F orQ =2, the optimal costs of reac hing the rst stage nodes are 337.5, 0, 62.5, 96, 94.5, 120, 110.5 forR 1 =1through 15, respectiv ely Here, w e see that theF ( i;R i )form ula can yield negativ e v alues. Ho w ev er, after the rst stage w e see that only the positiv e v alues can reac h the destination node (2,2). And that is only possible via node(1 ; 1)and using the arc that corresponds toq 2 =1. In this
PAGE 91
75 case, the second arc's cost is 150 and the total cost is 487.5 units. This requires updatingQ =2, whic h automatically updates the best objectiv e v alue obtained asF (2 ; 2)=487 : 5. F orQ =3, similarly w e calculate the rst stage costs as 600, 106.67, 0, 37.33, 48, 73.33, 72. State (2,3) can be reac hed via t w o alternativ e paths, using either(1 ; 1)or(1 ; 2)as the in termediate node. Using the node (1,1) yields a total cost of 600+41.67=641.67 and the node (1,2) yields 106.67+266.67=373.33. Therefore, the optimal solution forQ =3isq (3)=(2 ; 1), with an objectiv e function v alueF (2 ; 3)=373 : 33. This solution beats the previous one, soQ 3is updated. Same process is iterated un til all the possibleQv alues, and the best solution obtained is updated sev eral times forF (2 ; 4)=267 ,F (2 ; 5)=185 ,F (2 ; 6)=184.5 ,F (2 ; 7)=166.86 ,F (2 ; 8)=150 ,F (2 ; 9)=121 ,F (2 ; 10)=97.5 ,F (2 ; 11)=183 : 64,F (2 ; 12)=122 : 67,F (2 ; 13)=76.62 ,F (2 ; 14)=212 : 57,F (2 ; 15)=128 : 33,F (2 ; 18)=70.22 ,F (2 ; 19)=170 : 58andF (2 ; 20)=83 : 75. F or three of the possibleQv alues (16, 17 and 25), no feasible solution is found. The v alues prin ted in bold sho w the solutions that require updating the best solution found. The algorithm terminates yielding the optimal solutionQ =18,q (18)=(8 ; 10)andF (2 ; 18)=70 : 22, after solving 17 DPs and updating the best solution 11 times. Figure 214 demonstrates the iterations and the solution v alues found in eac h iteration. As the example sho ws, the straigh tforw ard approac h requires man y calculations that can be a v oided. No w, w e will demonstrate our bounded DP approac h on the same example. The rst vital dierence is starting from the highest feasibleQv alue. The rst bound impliesQ Q U =180 = max f 9 ; 5 g =20, th us the procedure starts with ev aluatingQ =20. In the rst stage,F (1 ;R 1 )v alues forR 1 < 20 10=10andR 1 > 20 1=19are eliminated b y the second bound, since these nodes cannot reac h the destination node(2 ; 20). So, the only calculation
PAGE 92
76 made in the rst stage areF (1 ; 15)=8 : 75. In the second stage, w e calculate the con tributions ofq 2 =5only whic h is75. Here, w e see that,Q =20yields a feasible solution andQ needs to be updated,Q 20andF (2 ; 20)=83 : 75. F rom this poin t on w e can also see the impacts of the third and fourth bounds. Ha ving calculatedU iandV iv alues asU 0 =1233 : 23,U 1 =100,V 0 =325andV 1 =100, forF ( n;Q )=83 : 75, the fourth bound tells usQ Q L =908 : 23 = 83 : 75= 10 : 84. This lets us prune a signican t portion of possibleQv alues (Q 10) at this step. F orQ =19, the second bound implies9 R 1 18, so w e ha v e only one state (1,15) at the rst stage, and calculateF (1 ; 15)=7 : 16. The third bound yieldsG (1 ; 15)=113 : 49. SinceF (1 ; 15)+ G (1 ; 15)=7 : 16+113 : 49=120 : 65 >F ( n;Q )= 83 : 75, the state (1,15) is eliminated b y the third bound. Therefore, all the nodes in the rst stage are terminated and since there is no w a y to reac h the nal state, the problem forQ =19is not solv ed completely F orQ =18, the DP is completely solv ed and a better solutionq 1 =8,q 2 =10,F (2 ; 18)=70 : 22is obtained. This requires updatingQ Las w ell. The new lo w er limit for theQv alue,Q L 908 : 23 = 70 : 22=12 : 93. This further prunes t w o possibleQv alues (11 and 12). In the rest of the process no other DP is solv ed completely That is, the BDP procedure solv ed the problem with attempting eigh t DPs, and solv ed t w o of them completely This example sho ws that the bounding policies can be v ery eectiv e in reducing the n um ber of DPs completely solv ed, or ev en attempted. 2.9 Problem Specic Heuristics for the1 stPhase Problem The complexities of the exact methods proposed for the problem imply that w e ma y not be able to solv e largesized instances with these exact methods. Therefore, w e dev elop heuristic algorithms whic h do not guaran tee to nd an optimal solution but lik ely to nd good solutions in a reasonable amoun t of time. In this section w e
PAGE 93
77 describe a parametric heuristic solution procedure that w e ha v e dev eloped for the1 stphase problem. W e start with explaining some basic principles whic h constitute the basis for our heuristic solution procedure. A solution is a com bination of decision v ariablesq i,i =1 ; 2 ;::;nsuc h that the v alue of eac h v ariable is c hosen from the acceptable v alues of the v ariable. In other w ords, constrain t sets ( 2.10 ), ( 2.11 ) and ( 2.12 ) are satised in an y solution. A fe asible solution is a solution whic h satises the rst constrain t set ( 2.9 ). In other w ords, if all the batc hes can be processed within a xedlength timebuc k et, then the solution is feasible. Here, the importan t poin t is that the length of the timebuc k et is a function of the n um ber of batc hes. That is, increasing the n um ber of batc hes for one of the products shortens the timebuc k et and ma y cause infeasibilit y LetAbe a selected product (A 2 N),Q 0 = P i 2 N nf A g q iandq 0 =( q 0 1 ;q 0 2 ;:::;q 0 n )be a feasible solution. Since the solution is feasible, w e kno w that the left hand side of the constrain t ( 2.9 ) is giv en as follo ws.C 0 i = 8>><>>: s A + p A d d A q 0 A e ( q 0 A + Q 0 ) ; ( T ) ;ifi = A s i + p i d d i q 0 i e ( q 0 A + Q 0 ) ; ( T ) ;o/w No w, if w e incremen tq Afromq 0 Atoq 1 A(the smallest acceptable v alue forq Awhic h is greater thanq 0 A), the follo wing inequalities hold.q 1 A q 0 A +1and d A q 1 A d A q 0 A 1Depending onp Aands Av alues and the increase inq A,C Ama y increase or decrease (C 1 A S C 0 A). On the other hand, since ev ery other v ariable remains unc hanged (q 1 i = q 0 i ;i 2 N nf A g),C i(i 2 N nf A g) will denitely increase
PAGE 94
78 (C 1 i >C 0 i ;i 2 N nf A g). Therefore, this incremen t inq Ama y lead to an infeasible solution (C 1 i >Tfor at least onei 2 N). This result tells us that an y increasing mo v e can con v ert a feasible solution to an infeasible one. Ho w ev er, exploiting the special structure of the problem w e dev elop a quic k method whic h con v erts an infeasible solution to a feasible one (if there exists one). The follo wing discussion is the k ey to the method men tioned. A t this poin t w e dene critic al c onstr aint as the constrain t with themax i f s i + p i d d i q i e ;i 2 N gv alue. If the solution on hand is feasible, then the critical constrain t is the tigh test constrain t. Similarly in an infeasible solution, the critical constrain t is the most violated constrain t. Also, critic al variable is dened as the product related to the critical constrain t. If there is a w a y to con v ert an infeasible solution to a feasible one b y increasing the n um ber of batc hes, it can only be possible b y exploiting the critical constrain t. Let us explain this fact in more detail. Assume that w e are giv en an infeasible solutionq 0 =( q 0 1 ;q 0 2 ;:::;q 0 n ), suc h that infeasibilit y occurs for only one of the products, namelyA. Then, if w e letQ 0 = P i 2 N nf A g q 0 i, the left hand side of the constrain t ( 2.9 ) is as follo ws.C 0 i = 8>><>>: s A + p A d d A q 0 A e ( q 0 A + Q 0 ) ; ( >T ) ;ifi = A s i + p i d d i q 0 i e ( q 0 A + Q 0 ) ; ( T ) ;o/w Here,C 0 Ais the critical constrain t. No w, w e will analyse the eect of increasing an yq iv alue to its next acceptable v alue. The possible outcomes of increasingq Aare: C 1 i T, for alli 2 N. The solution is feasible. C 1 A >T,C 1 i T, for alli 2 N nf A g. The solution is still infeasible and the infeasibilit y is still caused b y productAonly
PAGE 95
79 C 1 A T,C 1 i >T, for at least onei 2 N nf A g. The solution is still infeasible, but the source of infeasibilit y has shifted. C 1 A >T,C 1 i >T, for at least onei 2 N nf A g. The solution is still infeasible, and the sources of infeasibilit y ha v e increased in n um ber. The rst case occurs when a feasible solution can be reac hed b y one incremen t operation. The second case occurs when all the nonviolated constrain ts ha v e enough slac k, but the violated constrain t did not get enough relaxation b y the incremen t ofq A. The third and fourth cases represen t another critical situation whic h is lik ely to occur. Since increasingq AincreasesC i(i 2 N nf A g) linearly the incremen t operation consumes slac ks of the nonviolated constrain ts. Therefore, slac k in one or more of the nonviolated constrain ts ma y be depleted, whic h in turn ma y shift the source of infeasibilit y or increase the n um ber of violated constrain ts. Ho w ev er, increasing aq i(i 2 N nf A g) v alue alw a ys yields the follo wing. C 1 A >C 0 A >T. Therefore, the solution is still infeasible. Although this mo v e migh t violate more than one constrain t and shift the critical constrain t, w e denitely kno w that this mo v e can not lead to a feasible solution. This pro v es that exploiting a noncritical constrain t w ould lead to another infeasible solution. This fact lets us conclude the follo wing. Letq 0 =( q 0 1 ;q 0 2 ;:::;q 0 n )andq 1 =( q 1 1 ;q 1 2 ;:::;q 1 n )be t w o infeasible solutions suc h thatC 0 Ais the critical constrain t, andq 1is reac hed b y increasingq 0 Atoq 1 A(the smallest acceptable v alue forq Awhic h is greater thanq 0 A) only If there exists a feasible solution whic h can be reac hed fromq 0b y incremen t operations only then it can be reac hed fromq 1b y incremen t operations only as w ell. W e use this result as a basis in dev elop Algorithm NE Feasible Solution Search (Figure 215 ). The algorithm examines solution space starting from an y giv en solution, b y mo ving in the NorthEast (NE) direction, and reports the existence of a feasible solution. Mo ving in the NE direction means increasing at
PAGE 96
80 least oneq ito its next acceptable v alue. F or future use w e dene SW corner as the solution where the v ariables tak e their lo w est possible v alues, that isq i =1 ; 8 i, and NE corner as the solution whereq i = d i ; 8 i. The algorithm performs exactly one incremen t operation per iteration. Depending on the starting solution, the algorithm performs at mostP ni =1 a iiterations. Eac h iteration requires nding the critical constrain t and c hec king if the solution at hand is feasible or not, both these tasks tak eO ( n )time. Therefore, the time complexit y of the algorithm isO ( n P ni =1 a i ). Considering that the NE direction has at mostQ ni =1 a isolutions whic h ma y or ma y not be feasible, the algorithm scans this space signican tly fast. Space complexit y of the algorithm is also easily calculated. The algorithm stores the curren t solution whic h consists ofndecision v ariables only therefore the space complexit y isO ( n ). The algorithm can be rev ersed so that it scans the solution space in the SW direction. Although the nature of the problem is quite dicult, this ease in nding the closest feasible solution in a specic direction giv es us an adv an tage to dev elop a po w erful heuristic algorithm. Before proceeding with details of the algorithm, w e explain the neigh borhood structure used. A solutionq 1 =( q 1 1 ;q 1 2 ;:::;q 1 n )is a neighb or solution ofq 0 = ( q 0 1 ;q 0 2 ;:::;q 0 n )if and only if exactly one v ariable (sa yq A) v alue diers in these solutions, suc h thatq 1 Ais the next acceptable v alue ofq 0 Ain increasing or decreasing direction. That is, it can be reac hed b y only one incremen t or decremen t operation. With this denition, an y acceptable solution has at most2 nneigh bors,nof them being in the increasing direction and the othernin the decreasing direction. No w w e can proceed with dening our heuristic approac h. The algorithm tak es three parameters;SearchDepth,MoveDepthandEligibleNeighbors.SearchDepthparameter denotes depth of the searc h process. IfSearchDepth =1,
PAGE 97
81 then only the onestep neigh bors are ev aluated. IfSearchDepth =2, then the neigh bors' neigh bors (the t w ostep neigh bors) are also ev aluated, and so on. WhenSearchDepth> 1, thenMoveDepthbecomes an importan t parameter. IfMoveDepth =1, then the searc h terminates at a onestep neigh bor. IfMoveDepth =2, then the termination is t w o steps a w a y from the Curren t Solution, etc. The last parameter,EligibleNeighbors, denotes the eligible neigh bors for ev aluation. IfEligibleNeighbors= feasible, then only feasible neigh bors are considered. IfEligibleNeighbors= both, then both feasible and infeasible neigh bors are considered for ev aluation. In the algorithm, ev aluating a solution means calculating its objectiv e function v alue. When all the neigh bors are ev aluated, the follo wing solutions are iden tied. The Best Neigh bor is aSearchDepthstep neigh bor with the lo w est objectiv e v alue of all the neigh bors. The Leading Neigh bor is aMoveDepthstep neigh bor whic h leads to the Best Neigh bor. Similarly the Best F easible Neighbor is aSearchDepthstep feasible neigh bor with the lo w est objectiv e v alue of all the feasible neigh bors and the Leading F easible Neigh bor is aMoveDepthstep feasible neigh bor whic h leads to the Best F easible Neigh bor. Note that, ifEligibleNeighbors= both, then Best Neigh bor and Best F easible Neigh bor migh t dier. IfEligibleNeighbors= feasible, then these t w o solutions are the same. This also holds for the Leading Solution and the Leading F easible Solution. A mo v e consists of updating the Curren t Solution and comparing the objectiv e function v alue of this solution to the Best Solution. If the solution on hand has a lo w er objectiv e v alue and is feasible, then the Best Solution is updated. Figure 216 sho ws the pseudocode for our heuristic algorithm, namely Algorithm Parametric Heuristic Search The algorithm alw a ys mo v es in the NE direction. The total n um ber of iterations performed b y Algorithm 2 is at mostP ni =1 a i, wherea iis the n um ber of
PAGE 98
82 acceptable v alues for the decision v ariableq i. A t eac h iteration, if Algorithm 1 is not in v ok ed, at mostn SearchDepthneigh bors are ev aluated. W e already kno w that an iteration with Algorithm 1 tak esO ( n )time. SinceO ( n ) O ( n SearchDepth ), the time complexit y of the algorithm isO ( n SearchDepth P ni =1 a i ). Space complexit y of the algorithm is rather easy to calculate. The algorithm stores a constan t n um ber of solutions (Curren t Solution, Best Solution, etc.) during the iterations. Eac h solution consists ofnv ariable v alues. So, the space complexit y of the algorithm isO ( n ). 2.10 MetaHeuristics for the1 stPhase Problem Metaheuristic tec hniques are based on local searc h strategies, and require rst nding an initial solution and then mo ving to an impro ving neigh bor solution through a local searc h framew ork. In con trast to local searc h approac hes, metaheuristics do not necessarily stop when no impro ving neigh bor solutions can be found. They perform mo v es to w orsening solutions in order to prev en t premature con v ergence to a local optim um solution. Metaheuristics can emplo y some problemspecic heuristics in the initialization step, and use these solutions obtained as starting poin ts for the local searc hes. F or more on metaheuristic tec hniques, w e refer the reader to Osman and Laporte (1996) and Reev es (1993). The1 stphase problem is an in teger nonlinear programming problem with nonsmooth functions. Existence of nonsmooth, nonlinear functions in the constrain ts mak es the feasible region noncon v ex. Therefore, it is highly probable that most of the neigh bor solutions are infeasible, and nding a feasible neigh bor to ev aluate becomes a dicult task. In the literature, there exist some metaheuristic tec hniques whic h are able to w ork w ell with sparse and noncon v ex feasible regions. W e consider strategic oscillation, scatter searc h and path relinking metaheuristic tec hniques.
PAGE 99
83 2.10.1 Neigh borhood Structure W e dene a solutionq =( q 1 ;q 2 ;::;q n )as a v ector of the decision v ariables suc h that all the decision v ariables tak e an acceptable v alueq i 2 A i ; 8 i. W e further distinguish bet w een feasible and infeasible solutions as follo ws. A solution is fe asible if it satises the rst constrain t set ( 2.9 ), otherwise it is infe asible No w, consider the follo wing example withn =2products. Letd 1 =15andd 2 =20;s 1 = s 2 =1,p 1 = p 2 =1andT =50min utes. The abo v e procedure proposed for nding the acceptable v alues impliesq 1 2 A 1 = f 1 ; 2 ; 3 ; 4 ; 5 ; 8 ; 15 gandq 2 2 A 2 = f 1 ; 2 ; 3 ; 4 ; 5 ; 7 ; 10 ; 20 g. By the denition of a solution, an y pair of these acceptable v alues is tak en as a solution, for example (1,1), (5,5) and (5,20) are all solutions. (5,5) is a feasible solution, since the batc h sizes are 3 and 4 and these batc hes tak e 4 and 5 min utes, where the length of the time buc k et is50 = (5+5)=5, therefore both batc hes can be processed within the time buc k et. Similarly (5,20) requires 4 and 2 min utes to process the batc hes, ho w ev er the time buc k et is too short (50/(5+20)=2), th us this solution is infeasible. A solutionq 1 =( q 1 1 ;q 1 2 ;:::;q 1 n )is a neighb or ofq 0 =( q 0 1 ;q 0 2 ;:::;q 0 n )if and only if exactly one v ariable v alue is dieren t in these v ectors, and the categorical distance bet w een the v alues of this decision v ariable is at most, whereis a user dened in teger that is greater than or equal to one. If w e denote the set of neigh bor solutions of a solutionq 0withNS ( q 0 ; )and considerq 0 =(5 ; 5)and =2for example, then the neigh bor solutions set ofq 0isNS ((5 ; 5) ; 2)= f (3 ; 5) ; (4 ; 5) ; (8 ; 5) ; (15 ; 5) ; (5 ; 3) ; (5 ; 4) ; (5 ; 7) ; (5 ; 10) g. With this denition, a solution ma y ha v e at most2 nneigh bors. W e iden tify t w o particular solutions. The rst one is the origin where eac h decision v ariable tak es its lo w est possible v alue, that isq i =1 ; 8 i 2 N. The second one is the farthest c orner of the solution sp ac e where ev ery decision v ariable tak es its largest v alue, that isq i = d i ; 8 i 2 N. If w e relax in tegralit y of batc h sizes, and
PAGE 100
84 letr i = q i =Qwhere0 r i 1suc h thatP i 2 N r i =1denote the proportion of the n um ber of batc hes of a certain product to the total n um ber of batc hes, and assume these proportions (r is) are xed, then the objectiv e function ( 2.8 ) becomesP i 2 N ( d i =r i ) 2 (1 r 2 i ) =Q. This sho ws that largerQv alues are expected to yield better solutions. W e can in tuitiv ely argue that the global optim um ma y be located in the vicinit y of the farthest corner of the solution space. Therefore, guiding the searc h process to w ards this farthest corner migh t help us in nding the global optim um. 2.10.2 Strategic Oscillation The idea behind strategic oscillation (SO) is to driv e the searc h to w ards and a w a y from boundaries of feasibilit y (Kelly Golden and Assad, 1993; Do wsland, 1998). It operates b y mo ving with local searc hes un til hitting a boundary of feasibilit y Then, it crosses o v er the boundary and proceeds in to the infeasible region for a certain n um ber of mo v es. Then, a searc h in an opposite direction, whic h results in reen tering the feasible region, is performed. Crossing the boundary from feasible to infeasible and from infeasible to feasible regions con tin uously during the searc h process creates some form of oscillation, whic h giv es its name to the method (Amaral and W righ t, 2001; Do wsland, 1998; Glo v er, 2000; Kelly et al., 1993). There are sev eral reasons for considering the use of SO in solving com binatorial optimization problems. T w o suc h cases are depicted in Figure 217 In (a) w e see a case where the feasible region is composed of sev eral con v ex but disjoin t sets, while in (b) the feasible region is a noncon v ex set. In the rst case, the only w a y to reac h the global optim um b y main taining feasibilit y at all times, is to start from a solution in the same set with the global optim um, whic h is highly unlik ely In the second case, the starting solution ma y be a local optim um, and w e ma y not be able to reac h the global optim um b y a neigh borhood searc h method that main tains feasibilit y at all times, due to the shape of the feasible region. Ho w ev er, using SO
PAGE 101
85 ma y allo w us to nd some paths reac hing the global optim um, as sho wn in the gure. W e implemen t the SO method in a m ultistart manner, where the starting solutions are called as seed solutions. F or the generation of these seed solutions w e apply all four problemspecic heuristic methods proposed in Y a vuz and T ufek ci (2004a). F urthermore, w e tak e the farthest corner of the solutions space dened in Section 2.10.1 as an additional seed solution. These seed solutions are compared to eac h other and duplicates are eliminated before the searc h process. During the searc h process, the n um ber of mo v es in the feasible and infeasible regions are limited with the parametersNFMandNIM, respectiv ely F or the termination of the method, w e use t w o criteria sim ultaneously The rst criterion is set on the n um ber of iterations, suc h that at leastMaxItersiterations are performed. The second criterion is based on the relativ e impro v emen t obtained in the most recen t iterations, suc h that if the iterations are pro viding signican t impro v emen ts the procedure does not terminate ev en if the n um ber of iterations has exceededMaxIters. This criterion is iden tied b y parameterRelativeImprovementwhic h is actually a pair of t w o subparameters( Number;Percentage )whereNumberis the n um ber of the most recen t iterations to be follo w ed andPercentageis the limit of percen t relativ e impro v emen t. Another importan t parameter in our SO implemen tation is related to the local searc h method emplo y ed. W e consider only onestep neigh bors, but the neigh borhood size is con trolled b y a parameter,Range. The parameterRangecorresponds topresen ted in the denition of the neigh bor structure. W e presen t Algorithm SO in Figure 218 2.10.3 Scatter Searc h and P ath Relinking Scatter Searc h (SS) and P ath Relinking (PR) are ev olutionary methods whic h ha v e been pro v en to yield promising results in solving com binatorial optimization
PAGE 102
86 problems. Both ha v e been successfully applied to problems suc h as graph coloring, jobshop sc heduling, neural net w ork training and v arious net w ork design and assignmen t problems (Aiex and Resende, In press; Alfandari, Plateau and T olla, 2003; ElF allahi and Mart, 2003; Ghamlouc he, Crainic and Gendreau, 2004; Hamiez and Hao, 2002; Oliv eira, P ardalos and Resende, 2003; Souza, Duhamel and Ribeiro, 2003; Y agiura, Ibaraki and Glo v er, 2002), in addition to others cited in Glo v er (1998) and Glo v er (1999). SS is an ev olutionary method that constructs new solutions b y com bining curren tly kno wn solutions. It k eeps a population of solutions on hand, creates candidate solutions using the curren t population on eac h iteration, and selects the ttest solutions to be k ept in the population for the next iteration. The original form of SS, as proposed in Glo v er (1977), consists of three stages. First, a starting set of solution v ectors is generated b y using heuristic methods designed for the problem and a subset of the best v ectors is selected to be the r efer enc e set Second, subsets of the reference set are used to create new solutions as linear com binations of the curren t reference solutions. The linear com binations are c hosen to produce poin ts both inside and outside the con v ex regions spanned b y the reference solutions. Suc h linear com binations generally yield nonin tegral solutions, therefore a rounding process is emplo y ed to get in teger v alued solution v ectors when necessary Third and nally the candidate solutions created in the second phase and the original reference solutions are ev aluated and the ttest solutions are selected for the next iteration's reference set. The last t w o phases are repeatedly applied for a predetermined n um ber of iterations. Glo v er (1998) denes sev eral em bellishmen ts to this approac h. A Diversification Generator can be dev eloped to generate a collection of div erse trial solutions, using an arbitrary trial solution (or seed solution) as an input. The div ersication generator is emplo y ed in the rst phase where the solutions from the problem
PAGE 103
87 specic heuristic methods are processed to obtain a ric h initial reference set. An Improvement Method can be used to transform a trial solution in to one or more enhanced trial solutions. This impro v emen t method can be a local searc h procedure whic h is successiv ely applied un til a local optimal solution is found. A Reference Set Update Method can be emplo y ed to build and main tain a reference set consisting of the b best solutions found, organized to pro vide ecien t accessing b y other parts of the method. A Subset Generation Method can be used to operate on the reference set, to produce a subset of its solutions as a basis for creating com bined solutions. Finally a Solution Combination Method can be built to transform a giv en subset of solutions produced b y the Subset Generation Method in to one or more com bined solution v ectors. W e no w giv e a description of our SS and PR implemen tation. F or initialization, emplo ymen t of the problem specic heuristic methods is represen ted b y a parameter,PSHMethods. W e consider the problemspecic heuristic methods in order of their time consumption, as reported in Y a vuz and T ufek ci (2004a). IfPSHMethods =1, then the rst and fourth methods are emplo y ed. IfPSHMethods =2, then method 2 is emplo y ed in addition to the other t w o. Finally ifPSHMethods =3, all four methods are emplo y ed. Ha ving established a set of seed solutions, the div ersication generator processes these seed solutions and creates the initial reference set. W e use t w o alternativ e modes of the div ersication generator. The rst mode is similar to the m utation operator used in genetic algorithms (Goldberg, 1989; Holland, 1975; Reev es, 1997). That is, the seed solution v ector is tak en as the input and starting with the rst v ariable, a div ersied solution is created for eac h v ariable. This is ac hiev ed b y replacing the v ariable's v alue with its100 thnext acceptable v alue. Ifa i < 100, the mod operator is used in order to obtain an acceptable v alue with index v alue bet w een one anda i. Here 100 is arbitrarily selected, an y signican tly
PAGE 104
88 large in teger suc h as 50, 200 or 500 could be c hosen. The second mode, on the other hand, does not process seed solution v ectors. It performs a local searc h for eac h decision v ariable and iden ties solutions that maximize the v alue of that certain decision v ariable. This mode of div ersication yields a total ofnalternativ e solutions and enables us to explore extreme corners of the feasible region. The parameter represen ting the selection of the div ersication mode isDiversification, and it has four lev els. A t lev el 1 no div ersication is applied, at lev el 2 only corner searc h is applied, at lev el 3 only the div ersication generator is used and nally at lev el 4 both modes are used. Depending on the mode selection in the application of the algorithm, the n um ber of div ersied solutions ma y be less than the size of the reference set. In this case, the empt y slots in the reference set can be lled in the consecutiv e iterations. The size of the reference set is represen ted b y parameterb. In our implemen tation w e k eep one infeasible solution in the reference set, at all times. This infeasible solution is the farthest corner of the solution space discussed in Section 2.10.1 The most importan t aspect of subset generation is the subset size, whic h determines the n um ber of subsets generated. W e create all subsets with t w o elemen ts, subsets with three elemen ts whic h con tain the best solution, subsets with four elemen ts whic h con tain the best t w o solutions and subsets withkelemen ts (5 k b) whic h con tainkbest solutions. Use of these subset t ypes is con trolled b y parametersSubsetType 1throughSubsetType 4, whic h tak etrueorfalsev alues. If a parameter tak es v aluefalse, the associated subset t ype is not generated and if it tak es v aluetrue, all the subsets of that t ype are generated. F or the solution com bination mec hanism used in SS, w e tak e the w eigh t cen ter of the solutions in the subset under consideration, as the basis. Eac h solution in the subset is treated as an original solution and a line from the w eigh t cen ter to the original solution, the inner line se gment is dra wn. The line is extended as long
PAGE 105
89 as its length, yielding the external line se gment and the end poin t. Let parametersNICandNECrepresen t the n um ber of in ternal and external com binations, respectiv ely .NICequidistan t linear com binations on the inner line segmen t andNECequidistan t linear com binations on the external line segmen t are created. The w eigh t cen ter, the external end poin ts, in ternal and external com binations giv e a total of( NIC + NEC +1) SubsetSize +1com bination solutions, for a certainSubsetSize. Using the impro v emen t method on com bined solutions and updating the reference set are common in both the initial and iterativ e phases. Ho w ev er, performing a local searc h on ev ery solution obtained ma y be impractical.LSinPreProcessis the parameter that represen ts local searc h usage in the initial phase. IfLSinPreProcess=0, no local searc h is applied. IfLSinPreProcess=1, local searc h is only applied at the end of the initial phase on the solutions that are stored in the reference set. IfLSinPreProcess=2, a local searc h is applied for ev ery trial solution considered.LStoRefSetPPis the parameter represen ting the update frequency of the reference set and tak es the v alues oftrueorfalse. IfLStoRefSetPP = true, ev ery time a solution is ev aluated, it is compared to the solutions in the reference set and if necessary the reference set is updated. This requires that ev ery mo v e performed during the local searc h is considered for the reference set. IfLStoRefSetPP = false, only the nal result of the local searc h, a local optim um, is tried for the reference set. P arametersLSinIterationsandLStoRSItersha v e the same denition and lev els, applied to the iterativ e phase. F or the termination of the algorithm w e ha v e one criterion only If the reference set is not modied on a giv en iteration, it cannot be modied on the later iterations, either. Therefore, w e k eep trac k of the solutions in the reference set and immediately terminate if the reference set is the same before and after an iteration. This criterion does not require a parameter.
PAGE 106
90 W e demonstrate the dierence bet w een the SS and PR methods in Figure 219 In (a), sev eral candidate solutions are created from the linear com binations of some selected reference solutions. In con trast, in (b), the neigh borhood structure is exploited to generate a path bet w een t w o selected reference solutions. W e use the same structure for our SS and PR implemen tations. Ho w ev er, there are some sligh t dierences bet w een the t w o methods. The subset generation mec hanism used for PR considers the subsets with t w o solutions only These solutions are used as origin and destination poin ts in the solution com bination mec hanism. Based on the acceptable v alues, w e measure the distance bet w een the origin and the destination with a categorical distance measure. Ifq 1andq 2are the origin and destination v ectors, and w e dene the functionPosition ( q i )as an in teger function whic h returns the position of v ariablei's v alue inA i, then the distance bet w een these t w o v ectors is dened asP i 2 N j Position ( q 1 i ) Position ( q 2 i ) j, wherej x jis the absolute v alue ofx. Starting from the origin, the neigh bor solutions whic h decrease the distance b y one are considered and the bestNTSsolutions are stored in a list, whereNTSis the parameter standing for the n um ber of temporary solutions. In the next step, eac h solution in this list is considered as the origin, and again the neigh bor solutions that decrease the distance b y one are ev aluated. This is repeated un til the destination solution is reac hed, while k eepingNTSbest solutions bet w een the steps.NTS =1represen ts a single path bet w een the origin and the destination. Ho w ev er,NTS> 1can be considered asNTSparallel paths that are built bet w een the origin and the destination solutions. All the other mec hanisms explained for SS, and the associated parameters are used for the PR method. The follo wing generic algorithm (Algorithm SS/PR ) presen ts our implemen tation of the SS and PR methods (see Figure 220 ).
PAGE 107
91 2.11 Comparativ e Study 2.11.1 Design of Experimen ts In our study w e consider 10, 15 and 20 product problems with a v erage demand of 750, 500 and 375, respectiv ely whic h can be solv ed b y the dynamic programming procedure in reasonable times. W e use three experimen tal factors,s i =p iratio, T relaxation percen tageand div ersication lev elr.r 2f 0 ; 1 gis used to create test cases in whic h dieren t products are div ersied in terms of demand, processing time and setup time.r =1reects the div ersied case, andr =0reects the undiv ersied case where the products are v ery similar to eac h other. Demand v alues are randomly and uniformly generated bet w een the minim um and maxim um v alues, where maxim um demand is t wice as large as the a v erage demand for div ersied instances and 20% o v er the a v erage demand for the instances with similar products. The ratio of maxim um demand to minim um demand is 50 and 1.5 for these t w o t ypes of instances, respectiv ely W e useto denote the ratio bet w een the expected v alues ofs iandp ifor the div ersied instances. W e rst createp iaccording to uniform distribution bet w een (0,5] min utes, and thens iaccording to uniform distribution bet w een [ (1 0 : 1 r ) p i ; (1+0 : 1 r ) p i ] min utes. W e let 2f 100 ; 10 ; 1 gfor our experimen ts. T otal a v ailable time should allo w at least one setup per product, that isT T LB = P i 2 N ( d i p i + s i ). On the other hand,Tshould be limited withT
PAGE 108
92 25 instances are created for eac h problem set, giving a total of 450 test instances for eac hnv alue and 1350 test instances in total. 2.11.2 Methods Our heuristic procedure, specically designed for the1 stphase problem, tak es three parameters. The com bination of the parameters aect the beha vior of the procedure. Among man y possible com binations of the parameter v alues, w e select four whic h w e believ e to be the most ecien t ones. Complexit y analysis of the algorithm sho ws thatSearchDepthparameter is critical in time requiremen t. Our preliminary results sho w that settingSearchDepth> 2causes extensiv e time consumption but not yielding a signican t impro v emen t in solution qualit y Therefore w e narro wSearchDepth 2f 1 ; 2 g. If only onestep neigh bors are considered, then theMoveDepthparameter is xed to one. Ho w ev er, ifSearchDepth =2, then w e migh t speed up the algorithm b y mo ving directly to the best neigh bor found (MoveDepth =2). Therefore, w e test both lev els of this parameter. F or the combinations ev aluating the infeasible neigh bors as w ell, w e do not w an t to allo w the searc h to mo v e too far deep in to the infeasible region, but k eep the mo v es within onestep neigh borhood of the feasible region. Therefore, w e xSearchDepth =1for suc h com binations. The methods tested are: Method P arameter Com bination # (SearchDepth;MoveDepth;EligibleNeighbors) PSH1 (1, 1, feasible) PSH2 (2, 1, feasible) PSH3 (2, 2, feasible) PSH4 (1, 1, both) W e see the same parametric structure in our metaheuristic methods, as w ell. The parametric structure of our computer code is quite exible in terms of testing alternativ e strategies for a method. Ho w ev er, when the n um ber of parameters is
PAGE 109
93 large, an enormous n um ber of com binations of algorithm parameters exist. Finding the most eectiv e com bination is itself a com binatorial optimization problem. W e adopt a heuristic approac h to this problem; at eac h stage w e x some of the parameters to predetermined v alues and perform full factorial experimen ts on the rest of the parameters. The results obtained at eac h stage are used to x the tested parameters, so that the next stage performs experimen ts on some other parameters. Fixing a parameter to a tested v alue is sometimes based on the limits on time consumption or solution qualit y W e aim at nding methods that solv e the 20product problems in 200 seconds on a v erage and 500 seconds in the w orst case. If, at an y stage, w e see that a tested com bination tak es longer, w e eliminate that com bination. Similarly if a tested com bination is ob viously w orse than a previously tested one in terms of solution qualit y w e eliminate it. F or the signicance of the dierence bet w een the tested lev els of a parameter, w e apply pairedttests. W e denote the mean v alues of computation time and relativ e deviation from the optimal solution measures with tland dl, respectiv ely for thel thlev el of the parameter. If there are only t w o lev els for a parameter then one h ypothesis per measure is built. If, ho w ev er, there are more than t w o lev els, then the n um ber of h ypotheses to be built depends on the relationship bet w een the lev els of the parameter. F or some parameters, b y its role in the algorithm, w e kno w that the solution qualit y impro v es and the computational time increases with the lev els. F or example, if w e tak e the size of the reference set as a parameter, w e expect larger reference set sizes to require longer computational times and yield better results. In suc h cases, w e build h ypotheses on the dierence of adjacen t lev els in pairs. If all the adjacen t lev els are signican tly dieren t and a monotone order of the lev els is found, w e do not construct h ypotheses for eac h possible pair of labels. Otherwise, depending on the results obtained, w e ma y w an t to distinguish bet w een nonadjacen t lev els of the parameter and built h ypotheses for them. F or
PAGE 110
94 some other parameters, on the other hand, the results are not expected to be in suc h an order. Th us, w e build h ypotheses and applyttests for ev ery possible pair of the lev els of the parameter. F or allttests, w e use a condence lev el of95%. The ne tuning process terminates when all the parameters are considered. The ne tuning process can be seen as a supervised learning process. W e use 20% of the test instances (v e problems for eac h problem setting presen ted in the previous section) for ne tuning. That is, the most promising methods according to their performance on the fraction of the test instance will be used on the en tire set of test instances. Strategic oscillation Com bining all the parameters w e denote the SO method with SO(MaxIters,NFM,NIM,Range,( Number;Percentage )). In the rst stage w e focus on the neigh borhood used in the local searc h module and the structure of the iterativ e phase. The searc h depth for the local searc h is xed to one, that is only onestep neigh bors are ev aluated. The n um ber of neigh bor solutions ev aluated is a function of the rst parameter considered in this stage,Range. The parameterRangecorresponds to the parameterdened in the neigh borhood structure, and tak es 3 lev els,Range 2f 1 ; 2 ; 3 g. The second parameter w e consider is com bination of three parameters,MaxIters,NFMandNIM. W e name this parameter asIterativeand test it in three lev els, suc h that( MaxIters;NFM;NIM ) 2f ( d N= 2 e ; 2 N +50 ;N ) ; ( N;N +25 ; d N= 2 e ) ; (2 N; d ( N + 25 = 2) e ; d N= 4 e ) g. The rst lev el represen ts a case where the n um ber of iterations is small, but the n um ber of mo v es in an iteration is large, the third lev el is the opposite and the second lev el is bet w een the other t w o. Results from T ables C1 and C2 sho w that for the parameterRange t1 < t2 < t3and d1 > f d2 ; d3 g. Bet w een the t w o alternativ es yielding the better solution qualit y w e select the one with lo w er time consumption and setRange =2. Similarly among the three lev els ofIterativew e see that t1 < t2 < t3and
PAGE 111
95 d1 > f d2 ; d3 g. With respect to the tradeo bet w een solution qualit y and time consumption, w e can setIterativeto its rst lev el. Ho w ev er, taking a closer look at the a v erage deviation results, w e see that the most promising lev el of this parameter v aries for dieren tNv alues. So, w e analyse the componen ts of this parameter separately and with dieren t settings that reect the most promising domains for dieren tNv alues. In the second stage, w e analyse the parametersMaxIters,NFMandNIMeac h with t w o lev els,MaxIters 2f 3 N 20 ; 4 N 30 g ;NFM 2f 50 N; 50 2 N gandNIM 2f 5 ; 8 g. The results in T ables C3 and C4 sho w that the rst lev el ofMaxItersis dominan t o v er its second lev el, yielding exactly the same solution qualit y in less time. F or the parameterNFM, w e see that rst lev el is more time consuming, but yields better solution qualit y The time consumption of the rst lev el is acceptable, therefore w e setNFMto its rst lev el. F or the third parameter w e see that solution qualit y bet w een its alternativ e lev els is exactly same and time consumption is statistically indieren t. In this case, one can arbitrarily c hoose bet w een the alternativ es, w e setNIMto its rst lev el. In the third stage, w e analyse the parameterRelativeImprovementand further ne tuneNFM. W e test three lev els ofNFM 2f 45 N; 50 N; 55 N gand t w o lev elsRelativeImprovement 2f (1 ; 100%) ; (5 ; 0 : 01%) g. The results in T ables C5 and C6 sho w that the t w o tested lev els ofTerRelImpare statistically indieren t. W e xRelativeImprovementto its rst lev el. Also,NFM =45 Nis dominan t o v er the other t w o lev els with its better solution qualit y and shorter time consumption. A t this poin t, w e x the v alue of this parameter and conclude the ne tuning of the SO method. Our heuristic netuning approac h has determined a promising com bination of the parameters for SO in three stages. The process is summarized in T able 27 The table sho ws the parameters tested at eac h stage with corresponding lev els, the
PAGE 112
96 n um ber of alternativ e com binations and total time spen t for the stage in hours. The parameter lev els in bold sho w the parameters and their lev els set at that stage. A total of 23 methods ha v e been tested, whic h reects a v ery small portion of all the com binations in the parameter space. Scatter searc h. W e represen t the SS method with SS(PSHMethods,Diversification,b,SubsetType 1,SubsetType 2,SubsetType 3,SubsetType 4,NIC,NEC,LSinPreProcess,LStoRefSetPP,LSinIterations,LStoRSIters). Here, w e ha v e a total of 13 parameters. In the rst stage, w e examine the use of the problemspecic heuristic methods and lev el of div ersication used in the initialization phase. In this structure, w e ha v e 12 alternativ e com binations. The results of the rst stage are presen ted in T ables C7 and C8 Our rst observ ation is that lev els 2 and 4 ofDiversificationparameter increase time requiremen t signican tly Within the other t w o lev els, w e see that lev el 3 is superior to lev el 1 in terms of a v erage deviation and since its time consumption is acceptable, w e xDiversificationlev el to 3. Similarly w e see that the time consumption increases withPSHMethodslev el ( t1 < t2 < t3) while the a v erage deviation decreases ( d1 > d2 > d3). W e in v est the extra time that is required b y lev el 2 for the sak e of its con tribution to the solution qualit y and x the parameterPSHMethodsto lev el 2. The extra time required b y the lev el 3 is a v oided in an attempt to compensate its con tribution to the solution qualit y b y in v esting that time in the iterativ e searc h phase. In the second stage, w e focus on the other t w o parameters related to the initialization phase only ,LSinPreProcessandLStoRefSetPP. Three alternativ e v alues ofLSinPreProcess(0,1 and 2) and t w o alternativ e v alues ofLStoRefSetPP(falseandtrue) are tested, that is w e ha v e 6 alternativ e com binations. The results are summarized in T ables C9 and C10 .LSinPreProcess =2consumes a signican t amoun t of time for 20 product instances. Within the other
PAGE 113
97 t w o lev els, w e see thatLSinPreProcess =1promises higher solution qualit y although it requires more solution time ( t0 < t1 ; d0 > d1). Since its computational time is acceptable, w e xLSinPreProcess =1for the future stages. F or the parameterLStoRefSetPP, solution qualit y is indieren t for its t w o lev els ( d1 = d2). With strictly less time consumption ( d1 > d2),LStoRefSetPP = trueis dominan t o v erLStoRefSetPP = false, so w e xLStoRefSetPP = true. In the third stage, w e test parametersLSinIterationsandLStoRefSetIters. As w e see from T ables C11 and C12 the tested lev els of the parameterLStoRefSetItersare statistically indieren t. In this case, one can arbitrarily x the parameter to an y lev el. W e xLStoRefSetIters = false. F or the parameterLSinIterations, w e see that as the both the solution qualit y and time consumption increase with lev el. Since the a v erage computational time obtained forLSinIterations =2is acceptable, w e xLSinIterations =2. In the fourth stage w e focus on the parametersSubsetType 1throughSubsetType 4andNIC. W e test the creation of alternativ e subset t ypes one at a time, and create an articial parameterSubsetSizeto denote subsets of whic h t ype are being created. The results of testingSubSetSize 2f 2 ; 3 ; 4 ; 5+ gandNIC 2f 1 ; 3 ; 5 ; 10 gare summarized in T able C13 The results forN =10sho w thatSubsetSize> 2tak e a signican t amoun t of time and do not yield as good results asSubsetSize =2. Kno wing that the n um ber of trial solutions created for a subset is a linear function ofSubsetSizeand also the reference set size parameterbis a linear function ofN, w e eliminate the more time consuming alternativ e v alues of this parameter and xSubsetSize =2. Con tin uing the stage forN =15and 20, w e see that higher v alues of the parameterNICpromise better solution qualit y at the expense of m uc h longer solution times. With the expectation of reducing the time consumption b y trying smallerbv alues, w e setNIC =1.
PAGE 114
98 In the fth stage w e consider the parameterNEConly Results from testingNEC 2f 1 ; 3 ; 5 gin this stage and the results obtained forNEC =0in the previous stage are summarized in T ables C14 and C15 W e see that computational time signican tly increases withNECv alue, but the solution qualit y is indieren t for the last three v alues. Computational time ofNEC =1is acceptable, therefore w e xNEC =1for the future stages. Finally in the sixth stage, w e consider the parameterbwith three lev els;b 2f N;N +10 ;N +20 g. F rom the results summarized in T ables C16 and C17 w e see that solution qualit y impro v es withb, but the only reasonable time consumption is obtained b y the rst lev el, that is, w e xb = N. The process of ne tuning the SS method is summarized in T able 28 Similar to the one presen ted for the SO method, the table sho ws the parameters tested at eac h stage with tested lev els and the n um ber of alternativ e com binations and total time spen t for the stage in hours, and the parameter lev els in bold sho w the parameters and their lev els set at that stage. P ath relinking W e represen t the PR method with PR(PSHMethods,Diversification,b,NTS,LSinPreProcess,LStoRefSetPP,LSinIterations,LStoRSIters). Here, w e ha v e a total of 8 parameters, most of whic h are common with the SS method. W e use the same procedures for the initialization phase of both SS and PR methods. Therefore, w e use the results from the rst t w o stages of ne tuning of the SS method. This lets us to start with the parametersPSHMethods,Diversification,LSinPreProcessandLStoRefSetPPalready xed. In the rst stage w e test the parametersLSinIterationsandLStoRefSetIters, with t w o alternativ e v alues eac h, four alternativ e com binations in total. The reason w e do not testLSinIterations =2is that performing a local searc h on an in termediate solution on the path connecting the origin and destination solutions
PAGE 115
99 w ould tak e a signican t amoun t of time, because of the n um ber of in termediate solutions. As w e see from T ables C18 and C19 ,LSinIterations =1is preferable o v erLSinIterations =0. It yields a higher solution qualit y in acceptably longer times. F or the t w o parameterLStoRefSetIters, deviation measure is found lev elin v arian t, therefore thettest is applicable only on the time measure. Also, the tested lev els of this parameter are indieren t in terms of solution time. In this case, one can arbitrarily x the lev el of these parameters. W e xLStoRefSetIters = false. The remaining t w o parameters (bandNTS) are probably the most dicult ones, because their domain sets are not limited to t w o or three lev els and they should be expressed as functions ofn. In the fourth stage, w e test these t w o parameters in three lev els eac h. T estingb 2f N;N +10 ;N +20 gandNTS 2 fd 2 n 8 e ; d 3 n 8 e ; d 4 n 8 eg, w e obtain the results giv en in T ables C20 and C21 The results sho w that diminishing returns patterns exist for both parameters, that is, the impro v emen t obtained b y switc hing from lev el 2 to lev el 3 is not greater than the impro v emen t obtained b y switc hing from lev el 1 to lev el 2, t1 < t2 < t3and d1 > f d2 ; d3 g. F orNTS, w e see that lev el 3 yields a statistically indieren t deviation from lev el 2, although it tak es more time. Therefore, the third lev el can be eliminated. W e also see that the second lev el of the parameter yields better solutions than the rst lev el, in reasonably longer times. So, from this poin t on, w e xNTSto its second lev el tested (NTS = d 3 n 8 e). Although the same diminishing returns relationship is observ ed for the parameterb, w e see that largerbv alues are still promising forn =20problems. Before coming to a nal decision w e extend the second stage with t w o additionalbv alues (N +15 ;N +25). Using the detailed results from the second stage and this extension, w e get the results summarized in T ables C22 and C23 F rom these results, w e see that t1 < t2 < t3 < t4 < t5and d1 > d2 > f d3 ; d4 ; d5 g. The third
PAGE 116
100 lev el (b = N +15) is dominan t o v er the fourth and fth lev els. And relying on the tradeo bet w een time consumption and solution qualit y and the reasonable time consumption of the third lev el, w e xb = N +15and conclude the ne tuning of the PR method. The process of ne tuning the PR method is summarized in T able 29
PAGE 117
101 2,25r 2,15r 2,16r 2,17r 2,18r 2,19r 2,20r 2,14r 2,5r 2,6r 2,7r 2,8r 2,9r 2,10r 2,11r 2,12r 2,13r 2,4r 2,2r 2,3r ir 1r 25r20r15r10r 5r 2r Rr ir 0,0r 1,5r 1,8r 1,15r 1,4r 1,1r 1,2r 1,3rFigure 213: Net w ork Represen tation of the Example
PAGE 118
102 6r 500r100r 300r 400r Qr Fr (2,r Qr )r 19r 18r 17r 7r 5r 4r 3r 2r 16r 15r 14r 13r 12r 11r 10r 9r 8r 25r 20r 200r Figure 214: DP Solution to the Numerical Example Algorithm NEFeasibleSolutionSearch 1.Startfromagivensolution, q (= q 1 ;q 2 ;::;q n ).Declare q astheCurrentSolution. 2.CheckfeasibilityoftheCurrentSolution.Iffeasible,thenstopandreturntheCurrentSolution.Otherwisegotostep3.3.Findthecriticalconstraint(max i f s i + p i d d i q i e ;i 2 N g )andthecriticalvariable q A ( s A + p A d d A q A e =max i f s i + p i d d i q i e ;i 2 N g ).Ifthecriticalvariableisnotincreasable ( q A = d A ),thenstopandreturnnullsolution;nofeasiblesolutionexistsinthesolut ion space.Otherwise,increasethecriticalvariabletoitsnextacceptablevalueandgoto step2. Figure 215: Pseudocode for Algorithm NE Feasible Solution Search
PAGE 119
103 Algorithm ParametricHeuristicSearch( SearchDepth;MoveDepth;Eligible ) 1.SetCurrentSolutionasSWcornerandperformaNEfeasiblesolutionsearch(us ing Algorithm NEFeasibleSolutionSearch ).Ifnofeasiblesolutionscanbefound,stop. OtherwisesetCurrentSolutionasthissolution.2.Evaluateall SearchDepth step EligibleNeighbors oftheCurrentSolution.Ifthe BestNeighborisnotnull,thenmovetotheLeadingNeighbor.Ifthisnewsoluti onis feasiblethenrepeatstep2.Otherwisegotostep3.3.CheckifanyfeasiblesolutionexistsintheNEdirection,byemployingAlg orithm NEFeasibleSolutionSearch .Ifyes,thenmovetothatfeasiblesolution,andgotostep 2.Otherwisegotostep4.4.Returntothelastvisitedfeasiblesolution.IftheBestFeasibleNeighb orisnot null,thenmovetotheLeadingFeasibleNeighbor,andgotostep2.Otherwisest opand returntheBestSolution. Figure 216: Pseudocode for Algorithm Parametric Heuristic Search (a)r (b)r Optimal solutionsr Intermediate solutionsr Starting solutionsrFigure 217: Examples for F easible Regions that Can Benet from Strategic Oscillation
PAGE 120
104 AlgorithmSO1.Findseedsolutionsusingproblemspecicheuristics.Addthefarthestcornerofthesolutionspacetothelistofseedsolutions.2.Foreachseedsolutionf 3.Repeatuntil TerminationCriteria aresatised. f 4.Performalocalsearchfor NFM moves. 5.Crossovertheboundaryandperformalocalsearchfor NIM movesinto theinfeasibleregion.6.Performalocalsearchforafeasiblesolution. g7.Performsuccessivelocalsearchesonthecurrentsolutiontondalocaloptimum g Figure 218: Pseudocode for Algorithm SO qr1rqr2r 1r 2r 3r 4r 5r 10r 1r2r3r4r5r8r 15r qr1r 1r 2r 3r 4r 5r 10r 1r2r3r4r5r8r 15r qr2r (a)r (b)r Solutions generated as a combinationrof the reference solutionsr Reference solutionsrFigure 219: Example for SS and PR Methods
PAGE 121
105 Algorithm SS/PR Initialization 1.Initializethe ReferenceSet withseedsolutions,usingproblemspecicheuristics. 2.Foreachseedsolutionf 3.Createalldiversiedsolutionsoftheseedsolutiononhand.4.Foreachdiversiedsolutionf 5.Findalocaloptimumusingthe ImprovementMethod 6.Updatethe ReferenceSet g gImprovement 7.Generatesubsetsofthe ReferenceSet 8.Foreachsubsetf 9.Createcombinationsofthesolutionsinthesubset.10.Foreachcombinationf 11.Findalocaloptimumusingthe ImprovementMethod 12.Updatethe ReferenceSet g g13.Iterateuntil TerminationCriteria aresatised. Figure 220: Pseudocode for Algorithm SS/PR
PAGE 122
106T able 27: Summary of the Fine T uning Process for the SO Method Stage P arameters # of Methods Time # MaxIters NFM NIM Range ( Number;Percentage ) T ested (hrs) 1 ( d N= 2 e ; 2 N +50 ;N ) 1 (1,100.0%) 9 81.8 ( N;N +25 ; d N= 2 e ) 2 (2 N; d ( N +25 = 2) e ; d N= 4 e ) 3 2 3 N 20 50 N 5 2 (1,100.0%) 8 30.1 4 N 30 50 2 N 8 3 3 N 20 45 N 5 2 (1,100.0%) 6 85.6 50 N (5,0.01%) 55 N T otal 23 197.5
PAGE 123
107T able 28: Summary of the Fine T uning Process for the SS Method Stage P arameters # of Time SubsetType Methods # P D b 1 2 3 4 NIC NEC L 1 L 2 L 3 L 4 T ested (hrs) 1 1 1 N +10 0 false 12 86.3 2 2 3 3 4 2 2 3 N +10 0 false 6 10.4 1 true 2 3 2 3 N +10 true false false false 1 0 1 true 0 false 6 10.1 1 true 2 4 2 3 N +10 ( true;false;false;false ) 1 0 1 true 0 false 16 35.2 ( false;true;false;false ) 3 0 ( false;false;true;false ) 5 0 ( false;false;false;true ) 10 0 5 2 3 N +10 true false false false 1 0 1 true 0 false 4 27.7 1 3 5 6 2 3 N true false false false 1 1 1 true 0 false 3 13.8 N +10 N +20 P : PSHMethods;D : Diversification;L 1 : LSinPreProcess T otal 47 182.5 L 2 : LStoRefSetPP;L 3 : LSinIterations;L 4 : LStoRSIters
PAGE 124
108T able 29: Summary of the Fine T uning Process for the PR Method Stage P arameters # of Time Methods # P D b NTS L 1 L 2 L 3 L 4 T ested (hrs) 1 2 3 N +10 d N= 2 e 1 true 0 false 4 12.0 1 true 2 2 3 N d 2 n 8 e 1 true 1 false 9 28.2 N +10 d 3 n 8 e N +20 d 4 n 8 e 2 2 3 N +15 d 3 n 8 e 1 true 1 false 2 6.6 Ext. N +25 P : PSHMethods;D : Diversification T otal 15 46.8 L 1 : LSinPreProcess;L 2 : LStoRefSetPP L 3 : LSinIterations;L 4 : LStoRSIters
PAGE 125
109 The last t w o methods included in this comparativ e study are our dynamic programming (DP) and bounded dynamic programming (BDP) methods. In total, w e ha v e nine methods in our comparativ e study In the follo wing subsection, these methods are denoted with DP BDP PSH1, PSH2, PSH3, PSH4, SO, SS and PR. 2.11.3 Results and Discussion In ev aluating the computational performance of our solution methods, w e consider t w o performance measures, namely computational time and percen t deviation from the optimal solution. These t w o measures represen t the tradeo bet w een solution qualit y and time. Results from solving the test instances with all the methods considered, including three metaheuristic methods, four problemspecic heuristic methods and t w o exact methods, for the computation time and percen t deviation from the optim um measures, are summarized in T able 210 W e analyze the dierence of the methods pair b y pair, for both computational time and percen t deviation from the optimal solution measures. A total of 71 n ull h ypotheses are built and all of them are rejected, at a 95% condence lev el b y 2tailed pairedttests. That is, all the methods are signican tly dieren t from eac h other in terms of both the solution time and deviation from the optimal solution. Note that, the only comparison that is excluded from thettests is DPBDP comparison on the deviation measure. The ordering of the methods is tDP > tBDP > tSO > tSS > tPR > tPSH 2 > tPSH 3 > tPSH 1 > tPSH 4for the solution time and tDP = tBDP < tPR < tSS < tSO < tPSH 2 < tPSH 3 < tPSH 1 < tPSH 4for the deviation measure. The solution time and deviation orders sho w that the methods are divided in to three classes. These classes are the exact methods, the metaheuristic methods and the problemspecic heuristic methods. The dynamic programming procedure proposed for the exact solution of the problem requires extensiv e computational time. In the w orst case, w e see that, it solv es 20 product problems in appro ximately 2.5 hours. The bounded dynamic
PAGE 126
110 T able 210: Summary of Results Time (seconds) Deviation (%) n Method A vg. Min. Max. A vg. Max. 10 DP 1,208.27 26.54 5,116.26 BDP 274.88 2.31 1,675.82 PSH1 0.35 0.17 0.46 0.556 15.374 PSH2 3.07 1.35 4.10 0.377 15.374 PSH3 1.62 0.89 2.26 0.362 15.374 PSH4 0.30 0.06 0.41 0.666 10.233 SO 8.67 4.84 17.36 0.133 4.219 SS 6.57 2.69 27.27 0.091 3.153 PR 5.83 2.39 17.52 0.015 2.897 15 DP 1,928.12 60.36 7,398.65 BDP 424.37 12.44 2,033.06 PSH1 0.89 0.29 1.10 0.570 9.773 PSH2 11.63 3.40 14.87 0.414 9.773 PSH3 6.14 3.06 8.01 0.454 9.333 PSH4 0.75 0.08 1.01 0.678 9.773 SO 37.75 15.05 87.87 0.192 6.227 SS 30.13 8.01 181.67 0.092 6.227 PR 24.84 8.15 71.56 0.013 1.293 20 DP 2,527.53 44.74 8,990.55 BDP 543.59 7.54 2,262.12 PSH1 1.72 0.39 2.10 0.607 14.006 PSH2 29.96 5.99 37.63 0.403 12.923 PSH3 15.78 5.61 19.01 0.484 14.006 PSH4 1.44 0.14 1.89 0.768 14.006 SO 97.42 28.88 262.28 0.212 3.845 SS 81.44 8.58 423.65 0.089 3.138 PR 76.34 10.72 278.83 0.044 1.638 programming procedure signican tly reduces this time requiremen t, in that it requires appro ximately 38 min utes in the w orst case, for the 20 product problems. Among the metaheuristic methods w e consider in this study w e see that PR is dominan t o v er SS and SO with its better performance in terms of solution qualit y and shorter solution time. F or SS to ac hiev e the same solution qualit y more iterations w ould be required, whic h w ould increase the computational time signican tly This is due to the rounding and local searc h operations that SS method performs on the in termediate solutions. PR is expected to w ork faster
PAGE 127
111 because it exploits the special neigh borhood structure of the problem, and does not perform an y local searc hes on the in termediate solutions. SO, on the other hand, has not yielded as good solution qualit y as the other t w o, despite the fact that it tak es longer solution times. F or the class of problemspecic heuristics, the results sho w that the four alternativ e methods are signican tly dieren t in terms of both solution qualit y and solution time measures. The orderings of the methods emphasize the tradeo bet w een the solution time and the solution qualit y none of the four methods is dominan t o v er the others. The better solution qualit y w e expect, the longer the time w e should dev ote to the method.
PAGE 128
CHAPTER 3 FLO WSHOP SINGLELEVEL MODEL The Flo wShop SingleLev el (FSSL) model is v ery similar to the SingleMac hine SingleLev el model, whic h is extensiv ely discussed in the previous c hapter. The man ufacturing en vironmen t of in terest in the FSSL model is a o wshop. All the products go through a set of mac hines, follo wing the same routing. This o wshop en vironmen t can be the nal stage of a m ultistage man ufacturing system. Examples of suc h a system arise from the electronics man ufacturing, where the endproducts are man ufactured in a o wshop. Due to the lo wv olume highmix structure of the productmix, the mac hines can not be dedicated to a certain product, but ha v e to be shared among a v ariet y of the products. Our motiv ating example for the FSSL model comes from the St. P etersburg facilit y of a leading electronic con tract man ufacturer. A m ultinational corporation in the business of con tract man ufacturing for electronic componen ts operates the facilit y under consideration. Recen t trends in the mark et indicate that the con tract man ufacturing industry is transforming from the traditional high v olume lo w mix production, to lo w v olume high mix production. The equipmen t used in the processes are w orth millions of dollars, th us dedicating separate lines to dieren t products is not protable. Also, the v olumes are so lo w that no distinct product can be man ufactured on a line for a long period, suc h as a w eek. On the con trary the production runs are short enough to require sev eral c hangeo v ers in a da y St. P etersburg facilit y has t w o main o w lines, where the nal products are assem bled. The o w lines are iden tical and eac h consist of the follo wing units: board loader, board prin ter, glue dots, HSP #1, HSP #2, GSM #1, GSM #2 112
PAGE 129
113 and reo w. Although there exists a line structure and the line performs the assem bly of the nal products, these lines are not assem blylines in the traditional sense. Dieren t mac hines ha v e dieren t setup and processing times for dieren t products. F urthermore, setup times are signican t and there exist storage areas bet w een some of the successiv e mac hines. In this en vironmen t, the traditional production smoothing methods are not applicable. Therefore, there exists a need for new and ecien t methods. In this c hapter, w e address the singlelev el o wshop en vironmen ts similar to that of the electronics con tract man ufacturing discussed abo v e. SingleLev el denotes that only the endproducts lev el is considered. If the part requiremen ts of dieren t products are somewhat close, then con trolling the single lev el is appropriate. The idea is, a lev eled production sc hedule will result in lev eled consumption in sublev els as w ell. As the previous c hapters ha v e already explained, this dissertation dev elops a new structure where demand is met in batc hes, and eac h batc h can be processed within a xed timebuc k et, whic h itself is a decision v ariable. Th us, the problem is analyzed in t w o phases, the rst phase is to determine length of the xed time buc k et, and the n um ber of batc hes and batc h sizes for eac h product. Once w e solv e the problem of the rst phase, the problem of sequencing those batc hes, whic h is the second phase, becomes trivial. Since eac h batc h should be processed in a xed timebuc k et, and total n um ber of batc hes to produce is kno wn for eac h product, w e can treat eac h batc h as a single unit of that product. All the batc hes should t in to the timebuc k et (the length of whic h isttime units) on all the mac hines. Therefore, once the timebuc k et is established, all the batc hes mo v e one mac hine do wnstream, in ev eryt. As a result, a batc h is completed in ev eryttime units. As far as the sequencing of the batc hes (the2 ndphase problem) is considered, the FSSL model is iden tical to the SMSL model.
PAGE 130
114 Th us, w e refer to the SMSL model for the2 ndphase problem and focus on the1 stphase problem only This c hapter is organized as follo ws. In section 3.1 w e presen t the mathematical form ulation of the1 stphase problem. In section 3.2 w e analyze the nature of the problem and dra w useful properties about the problem. Section 3.3 dev elops exact solution methods for the problem. Sections 3.4 and 3.5 are dev oted to heuristic solution procedures, as w e devise a heuristic algorithm for the problem and implemen t three metaheuristic tec hniques in these sections, respectiv ely Finally Section 3.6 presen ts a comparativ e analysis of the solution approac hes dev eloped for the problem. 3.1 1 stPhase Problem F orm ulation The parameters and v ariables used in the form ulation are dened as follo ws. n n um ber of products m n um ber of mac hines T total a v ailable time i product index (i =1 ;::;n) j mac hine index (i =1 ;::;m) N set of products (= f 1 ;::;n g) M set of mac hines (= f 1 ;::;m g)q in um ber of batc hes for product ib ibatc h size for product id idemand for product iD itotal demand for products 1 to i (=i P h =1 d h)s i;jsetup time for product i on mac hinej p i;jprocessing time for product i on mac hinejt length of xed time in terv al(= T= n P i =1 q i )
PAGE 131
115 Q total n um ber of batc hes(= n P i =1 q i )Similar to our w ork in the SMSL model, w e start building the model b y dening constrain ts. First constrain t set comes from the need to meet the demands. W e do not allo w production under the demand, instead w e allo w excess production within a tolerance limit. These excess amoun ts can be used to adjust demands of the next planning horizon. The second set of constrain ts comes from feasibilit y concerns. Since w e dene a xed timebuc k et ( t ), w e ha v e to assure that all batc hes can be processed within this xed timebuc k et, on eac h mac hine. W e form ulate the constrain ts as follo ws.b i = d i q i ;i =1 ;::;n s i;j + p i;j b i t;i =1 ;::;n;j =1 ; 2 ;::;m t n X i =1 q i = T t 0 b i ;q i 1 ;in teger.i =1 ;::;nW e ha v e three decision v ariables in the model. Batc h size (b i) is expressed as a function of n um ber of batc hes (q i), so w e can eliminateb ifrom the system and also eliminate the rst constrain t set from the model. By using the third constrain t w e can com bine the second set of constrain ts and third constrain t in to a single set of constrain ts and t can be eliminated from the decision v ariables. The resulting set of constrain ts is giv en belo w. s i;j + p i;j d i q i n X h =1 q h T;i =1 ;::;n;j =1 ; 2 ;::;m q i 1 ;in teger;i =1 ;::;n
PAGE 132
116 The objectiv e function of the model is the same as that of the SMSL model. Again, w e adopt the lo w er bound of the2 ndphase problem as the objectiv e function of the1 stphase problem. The follo wing optimization model represen ts the1 stphase problem in the FSSL model. MinimizeF = n X i =1 l d i q i m 2 n P h =1 q h 2 q 2 i n P h =1 q h(3.1) S.T. s i;j + p i;j d i q i n X h =1 q h T; 8 i; 8 j(3.2) d i q i = b i ; 8 i(3.3)q i = d i b i ; 8 i(3.4)1 q i d i ;q iin teger; 8 i(3.5) Note that in constrain ts ( 3.3 ) and ( 3.4 )b i(batc h size for producti) is used as a state v ariable. These t w o constrain ts assure that excess production is limited to the minim um. Decreasingb iorq ib y 1 w ould result in under production. 3.2 Structural Properties of the1 stPhase Problem Abo v e form ulation sho ws that the decision v ariables are common in the SMSL and FSSL models, for the1 stphase problem. Therefore, w e refer the reader to the denition of acceptable v alues and related discussions in the SMSL model (Section 2.7 ), and proceed with dening a simpler v ersion of the problem. Note that, if w e could assumeQconstan t, then the objectiv e function w ould be m uc h easier to handle (P ni =1 d d i =q i e 2 ( Q 2 q 2 i )). LetA ibe the set of acceptable v alues of v ariableq i,a ibe the cardinalit y ofA iandr i;h ibe theh thiacceptable v alue of the v ariableq i. Let us dene the con tribution of productito the objectiv e, ifh thiacceptable v alue is selected, b y
PAGE 133
117f i;h i = d d i =r i;h i e 2 ( Q 2 r 2 i;h i ). Dening a new decision v ariabley i;h i 2f 0 ; 1 gdenoting whether an acceptable v alue is selected or not, the problem with a constan tQis reduced to the follo wing.( MP 1)Minimizen X i =1 a i X h i =1 f i;h i y i;h i(3.6) S.T.a i X h i =1 y i;h i =1 8 i(3.7)n X i =1 a i X h i =1 r i;h i y i;h i = Q(3.8)y i;h i 2f 0 ; 1 g8 i;h i(3.9) Theorem 3.2.1 MP1 is NPc omplete. Proof W e rst describeMP 1v erbally F or sak e of simplicit y w e w ork with the decision v ersion of the problem. Giv en a nite setA, a partition of this set in tondisjoin t subsetsA i ;i 2 N,j A i j = a i; sizesr i;h i 2 Z +and w eigh tsf i;h i 2 R +associated with eac h elemen th i 2 A i ;i 2 N, a positiv e in tegerQand a positiv e real n um berY; is there a subsetA 0 Awhic h includes exactly one elemen t from eac h subsetA i, suc h that the sum of sizes of the elemen ts inA 0is exactlyQunits and sum of w eigh ts of the elemen ts inA 0is less than or equal toY? Guessing a solution b y selecting an elemen t from eac h subsetA iand v erifying if the solution satises the conditions can be performed in polynomial time, th us the problem is in the set NP Ho w ev er, the harder part of the proof is nding an NPcomplete problem whic h can be reduced to a special case of the problem at hand (MP 1) in polynomial time. W e select the Subset Sum Problem (SSP) (Garey and Johnson, 1979, p.223):
PAGE 134
118 Giv en a nite setC, sizez c 2 Z +for eac hc 2 Cand a positiv e in tegerL, is there a subsetC 0 Csuc h that the sum of the sizes of the elemen ts inC 0is exactlyL? F or an y instance of theSSP, w e are giv enj C jelemen ts and w e createj C jdumm y elemen ts whic h ha v e 0 size,z c =0 ;c = j C j +1 ;::; 2 j C j. Then w e assign zero w eigh ts to ev ery elemen t (both the originals and dummies),f c =0 ;c =1 ;::; 2 j C j. The last step is to formj C jdisjoin t subsets consisting of exactly one original and one dumm y elemen t (n = j C j). SettingQ = Lclearly states that theSSPis reduced toMP 1. F or an y solution, if the answ er forMP 1is y es, then the answ er for theSSPinstance at hand is y es, as w ell. And this holds for no as w ell. Therefore, the general case of theSSPis iden tical to this special case of theMP 1and an y instance of theSSPcan be form ulated and solv ed as anMP 1. F urther, the reduction in v olv esO ( j C j )operations, and is a polynomial time reduction. If one can nd a polynomial time algorithm for theMP 1it can be used to solv e theSSPas w ell. Since theSSPis NPcomplete (Karp, 1972), so isMP 1. This modied problem actually is iden tical to the one discussed in the SMSL model. This pro v es that the1 stphase problems faced in SMSL and FSSL models reduce to the same problem. This lead us to the follo wing result. Corollary 3.2.1 The1 stphase pr oblem in the FSSL mo del is NPc omplete. Proof Kno wing that the modied problemMP 1in the FSSL model is NPcomplete, being its general case, so m ust be the original problem (the1 stphase problem). 3.3 Exact Methods for the1 stPhase Problem The dynamic programming procedure and its bounded v ersion dev eloped for the SMSL model are not directly applicable to the FSSL model. In the follo wing discussion, w e propose a bounded dynamic programming (BDP) solution method that inherits majorit y of its componen ts from its SMSL coun terpart and com bines
PAGE 135
119 features of dynamic programming and branc handbound methods to successfully solv e the1 stphase problem in the FSSL model. 3.3.1 Dynamic Programming F orm ulation Giv en a xedQv alue, the objectiv e function ( 3.1 ) simplies toF 0 = P ni =1 ( d d i =q i e ) 2 ( Q 2 q 2 i ) =Q, whic h is separable inq iv ariables. If the v ectorq ( Q )=( q 1 ;q 2 ;::;q n )is an optimal solution to the problem withP ni =1 q i = Q, then the subv ector( q 2 ;q 3 ;::;q n )should be optimal to the problem withP ni =2 q i = Q q 1, as w ell. Otherwise, the v ectorq ( Q )can not be an optimal solution. Th us, the principle of optimalit y holds for the problem and w e can build the optimal solution b y consecutiv ely deciding onq iv alues. LetR ibe the total n um ber of batc hes committed to the rstiproducts. The product indexiis the stage index, and the pair( i;R i )represen ts the states of the DP form ulation. Figure 31 illustrates the underlying net w ork structure of the problem. In the net w ork, eac h node represen ts a state in the DP form ulation and arcs reect the acceptable v alues suc h that an arc is dra wn from node (i 1 ;R i 1) to node (i;R i 1 + q i) for eac hq i 2 A i. W e dene the follo wing recursiv e equation.F ( i;R i )= 8>><>>: 0 ;ifi =0 min q i f F ( i 1 ;R i q i )+ l d i q i m 2 ( Q 2 q 2 i ) =Q j s i;j + l d i q i m p i;j T Q ; 8 j g ;ifi> 0Note that, the recursiv e equation is a function ofQ, that can be used for a giv enQv alue only Also, the nal state is( n;Q ), and the solution to the problem,F ( n;Q ), can be found with the forw ard recursiv e algorithm presen ted in Figure 32 When the algorithm terminates, it returnsq i ( Q )v ector that is an optimal solution for the giv enQv alue andF ( n;Q )that is the objectiv e v alue of this optimal solution.
PAGE 136
120 n,Qr 0,0r 1,1r 1,2r 1,Dr 1r n,nr n,Dr nr 2,2r 2,3r 2,Dr 2r .r.r.r .r.r.r .r.r.r .r.r.r Figure 31: Net w ork Represen tation of the Problem As in an y DP model, the n um ber of nodes gro ws exponen tially with the n um ber of stages. In the nal (n th) stage, w e migh t ha v e at mostQ ni =1 a inodes. This is a straigh tforw ard result of the fact that eac h node in the( i 1) ststage is connected to at mosta inodes in thei thstage. Ho w ev er, w e also kno w that the maxim um index for a node in the nal lev el is (n;D n). Therefore, the n um ber of nodes in the nal lev el is at mostmin f Q ni =1 a i ;D n n +1 g. An upper bound for the total n um ber of nodes in the graph isP ni =1 min f Q il =1 a l ;D i i +1 g. In order to deriv e the computational complexit y of algorithm Forward Recursion w e need to kno w the n um ber of arcs, as w ell. The n um ber of arcs in to thei thstage is a function of the n um ber of nodes in the( i 1) ststage
PAGE 137
121 Algorithm ForwardRecursion( Q ) 1.Initialize F (0 ; 0)=0, F ( i;R i )= 1 forall i 2 N ,1 R i D i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i valuethatsatises s i;j + l d i q i m p i;j T Q forall j 2 M f 5.IF( F ( i;R i Ã‚Â¡ 1 + q i ) >F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q )THEN f 6.Set F ( i;R i Ã‚Â¡ 1 + q i )) F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q 7.Update ActiveNodes i ActiveNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) 8. q Ã‚Â¤ i ( Q ) q i g g g g Figure 32: Pseudocode for Algorithm Forward Recursion anda i. An upper bound on this n um ber isa i min f Q i 1 l =1 a l ;D i 1 i +2 g. Therefore, w e claim that the total n um ber of arcs in the net w ork is at mosta 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g. F or eac h arc, it tak esO ( m )time to c hec k feasibilit y of the arc. In the w orst case, steps six through eigh t are executed as man y times as the n um ber of arcs in the net w ork. Therefore, the w orst case time complexit y of the algorithm isO ( m ( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g )). Abo v e algorithm solv es the problem for a giv enQv alue. Ho w ev er, the problem does not tak e aQv alue as an input parameter, but returnsQas a result of the solution v ector. This and the fact that an arc cost can be calculated only ifQis kno wn, w e need to solv e a DP for eac h possible v alue ofQ. W e propose algorithm Solve with DP for the solution of the problem (see Figure 33 ). The algorithm iden ties all possible v alues ofQand emplo ys algorithm Forward Recursion successiv ely to solv e the emerging subproblems. The algorithm yieldsQ as the
PAGE 138
122 optimalQv alue, whic h leads to the optimal solution v ectorq ( Q )and also the optimal solution's objectiv e v alueF ( n;Q ). Algorithm SolvewithDP 1.Initialize Q Ã‚Â¤ =0, F ( n;Q Ã‚Â¤ )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) g g g6.Foreachreachablenode( n;R n ) f 7.Set Q R n 8.Findtheoptimalsolutionforthegiven Q valueusingAlgorithm ForwardRecursion 9.If F ( n;Q Ã‚Â¤ ) >F ( n;Q )THEN f 10.Update Q Ã‚Â¤ Q g g Figure 33: Pseudocode for Algorithm Solve with DP Steps one through v e can be considered as a preprocessing phase where the reac hable nodes are iden tied. The w orst case complexit y of this preprocessing phase depends on the n um ber of arcs in the net w ork represen tation of the problem, in that it is equal to that of algorithm Forward Recursion Since algorithm Forward Recursion is repetitiv ely in v ok ed in step eigh t, the preprocessing phase does not aect the o v erall time complexit y of the algorithm. Steps sev en through nine are repeated for eac h reac hable node at the nal stage of the DP form ulation. The n um ber of reac hable nodes is bounded abo v e b yD n n +1. Therefore, algorithm Forward Recursion ma y be in v ok ed at mostD n n +1times, yielding an o v erall w orst case time complexit y ofO (( D n n +1) m ( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g ))
PAGE 139
123 This time complexit y sho ws that, the computational requiremen t of the DP procedure depends on external parameters suc h asd is anda is. Therefore, the procedure ma y be impractical for large size problems. In the next subsection, w e dev elop sev eral bounding strategies to reduce the computational burden of the DP procedure. 3.3.2 Bounding Strategies An upper limit forQ. Noting that the length of the takttime cannot be smaller than the sum of processing and setup times of an y batc h leads to the follo wing upper bound for the possibleQv alues.T Q s i;j + p i;j ; 8 i; 8 j ) Q Q U = T max i;j f s i;j + p i;j ;i 2 N;j 2 M g Eliminate in termediate nodes whic h cannot yield a feasible solution A t an y stage,R ima y increase b y at mostd iand at least 1 units. Therefore, as w e proceed to w ards the nal state, w e eliminate the in termediate nodes (i;R i) withR i >Q n + iorR i
PAGE 140
124 andV i, for alli 2 Ncan be computed inO ( n 2 )time in a preprocessing phase, th us the lo w er bounds for the future costs can be obtained inO (1)time, when needed. F urthermore, if all the states at a stage are eliminated, then the iteration is terminated, since there is no w a y to reac h the nal state. A lo w er limit forQ. Starting with a high v alue ofQand decreasing it at ev ery step requires a stopping condition based on a lo w er limit forQv alues. The most basic lo w er limit isQ L = P i 2 N 1= n, as the smallest acceptable v alue is one, for eac hi 2 N. F or a better lo w er limit, w e adaptG ( i;R i )to the complete solution and obtainG (0 ; 0)=( U 0 V 0 ) =Q. UsingF ( n;Q )as the upper bound on the objectiv e v alue of the optimal solution,Q Q L =( U 0 V 0 ) =F ( n;Q )giv es a lo w er limit onQv alue. Note that as a better solution is found,Q Lv alue increases. Therefore, w e updateQ Lev ery timeQ is updated, and dynamically narro w the searc h space onQ. Incorporating all the bounding strategies dev eloped, w e propose algorithm Solve with BDP (Figure 34 ) for the solution of the problem, using algorithm Bounded Forward Recursion (Figure 35 ) for successiv ely solving emerging DPs. The proposed dynamic programming procedure and the bounding policies sho w sligh t dierences from the ones proposed for the SMSL model. The w orst case complexities aremtimes greater in the FSSL model. This dierence is due to the ev aluation of the candidate arcs connecting states in t w o adjacen t stages in the form ulation. In the singlemac hine case, feasibilit y of an arc is tested with respect to only one mac hine, whereas in the o wshop casemmac hines are in v olv ed in the calculations. Therefore, w e predict that the computational burden of the algorithms will increase with the n um ber of mac hines. 3.4 Problem Specic Heuristics for the1 stPhase Problem The complexit y of the dynamic programming approac h proposed for the problem implies that w e ma y not be able to solv e largesized instances with these
PAGE 141
125 Algorithm SolvewithBDP 1.Initialize Q Ã‚Â¤ =0, F ( n;Q Ã‚Â¤ )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N .Alsocompute U 0 and V 0 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q i 2 A i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) g g6.Compute U i and V i g7.Set Q L =1and Q U = b T= max i;j f s i;j + p i;j ;i 2 N;j 2 M gc 8.Foreachreachablenode( n;R n )satisfying Q L R n Q U ,indecreasingorder f 9.Set Q R n 10.Findtheoptimalsolutionforthegiven Q valueusing Algorithm BoundedForwardRecursion 11.If F ( n;Q Ã‚Â¤ ) >F ( n;Q )THEN f 12.Update Q Ã‚Â¤ Q 13.Update Q L b ( U 0 Ã‚Â¡ V 0 ) =F ( n;Q Ã‚Â¤ ) c g g Figure 34: Pseudocode for Algorithm Solve with BDP exact methods. Therefore, w e dev elop heuristic algorithms whic h do not guaran tee to nd an optimal solution but lik ely to nd good solutions in a reasonable amoun t of time. In this section w e describe a parametric heuristic solution procedure that w e ha v e dev eloped for the1 stphase problem. The basic principles whic h constitute the basis for our heuristic solution procedure are mostly similar to those discussed in the SMSL model. Here, w e rebuild our algorithms in order to incorporate the c hanges required b y the FSSL model. The only modication required in the feasible solution searc h method is the denition of the critic al c onstr aint Critical constrain t is the constrain t with themax i f max j f s i;j + p i;j d d i q i e ;j 2 M g ;i 2 N gv alue. If the solution on hand is feasible, then the critical constrain t is the tigh test constrain t. Similarly in an
PAGE 142
126 Algorithm BoundedForwardRecursion( Q ) 1.Initialize F (0 ; 0)=0, F ( i;R i )= 1 forall i 2 N and1 R i D i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N 2.For i =1to n ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 thatsatises (( Q Ã‚Â¡ D n + D i Ã‚Â¡ 1 R i Ã‚Â¡ 1 Q Ã‚Â¡ n + i +1)AND ( F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ G ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 ) F ( n;Q Ã‚Â¤ ))) f 4.Foreach q i 2 A i valuethatsatises s i;j + l d i q i m p i;j T Q forall j 2 M f 5.IF( F ( i;R i Ã‚Â¡ 1 + q i ) >F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q )THEN f 6.Set F ( i;R i Ã‚Â¡ 1 + q i ) F ( i Ã‚Â¡ 1 ;R i Ã‚Â¡ 1 )+ l d i q i m 2 Ã‚Â¡ Q 2 Ã‚Â¡ q 2 i Ã‚Â¢ =Q 7.Update ActiveNodes i ActiveNodes i [ ( i;R i Ã‚Â¡ 1 + q i ) 8. q Ã‚Â¤ i ( Q ) q i g g g g Figure 35: Pseudocode for Algorithm Bounded Forward Recursion infeasible solution, the critical constrain t is the most violated constrain t. Also, critic al variable is dened as the product related to the critical constrain t. The discussions giv en in the SMSL model hold for the FSSL model, as w ell. Therefore, w e use the results obtained for the SMSL model and build Algorithm NE Feasible Solution Search (Figure 36 ). The algorithm examines solution space starting from an y giv en solution, b y mo ving in the NorthEast (NE) direction, and reports the existence of a feasible solution. Mo ving in the NE direction means increasing at least oneq ito its next acceptable v alue. F or future use w e dene SW corner as the solution where the v ariables tak e their lo w est possible v alues, that isq i =1 ; 8 i, and NE corner as the solution whereq i = d i ; 8 i. The algorithm performs exactly one incremen t operation per iteration. Depending on the starting solution, the algorithm performs at mostP ni =1 a iiterations. Eac h iteration requires nding the critical constrain t and c hec king if the
PAGE 143
127 Algorithm NEFeasibleSolutionSearch 1.Startfromagivensolution, q (= q 1 ;q 2 ;::;q n ).Declare q astheCurrentSolution. 2.CheckfeasibilityoftheCurrentSolution.Iffeasible,thenstopandreturntheCurrentSolution.Otherwisegotostep3.3.Findthecriticalconstraint(max i f max j f s i;j + p i;j d d i q i e ;j 2 M g ;i 2 N g )andthecritical variable q A (max j f s A;j + p A;j d d A q A e ;j 2 M g =max i f max j f s i;j + p i;j d d i q i e ;j 2 M g ;i 2 N g ). Ifthecriticalvariableisnotincreasable( q A = d A ),thenstopandreturnnullsolution;no feasiblesolutionexistsinthesolutionspace.Otherwise,increasethecritical variabletoits nextacceptablevalueandgotostep2. Figure 36: Pseudocode for Algorithm NE Feasible Solution Search solution at hand is feasible or not, both these tasks tak eO ( nm )time. Therefore, the time complexit y of the algorithm isO ( nm P ni =1 a i ). Considering that the NE direction has at mostQ ni =1 a isolutions whic h ma y or ma y not be feasible, the algorithm scans this space signican tly fast. Space complexit y of the algorithm is also easily calculated. The algorithm stores the curren t solution whic h consists ofndecision v ariables only therefore the space complexit y isO ( n ). The algorithm can be rev ersed so that it scans the solution space in the SW direction. Although the nature of the problem is quite dicult, this ease in nding the closest feasible solution in a specic direction giv es us an adv an tage to dev elop a po w erful heuristic algorithm. Before proceeding with details of the algorithm, w e explain the neigh borhood structure used. A solutionq 1 =( q 1 1 ;q 1 2 ;:::;q 1 n )is a neighb or solution ofq 0 = ( q 0 1 ;q 0 2 ;:::;q 0 n )if and only if exactly one v ariable (sa yq A) v alue diers in these solutions, suc h thatq 1 Ais the next acceptable v alue ofq 0 Ain increasing or decreasing direction. That is, it can be reac hed b y only one incremen t or decremen t operation. With this denition, an y acceptable solution has at most2 nneigh bors,nof them being in the increasing direction and the othernin the decreasing direction.
PAGE 144
128 No w w e can proceed with dening our heuristic approac h. The algorithm tak es three parameters;SearchDepth,MoveDepthandEligibleNeighbors.SearchDepthparameter denotes depth of the searc h process. IfSearchDepth =1, then only the onestep neigh bors are ev aluated. IfSearchDepth =2, then the neigh bors' neigh bors (the t w ostep neigh bors) are also ev aluated, and so on. WhenSearchDepth> 1, thenMoveDepthbecomes an importan t parameter. IfMoveDepth =1, then the searc h terminates at a onestep neigh bor. IfMoveDepth =2, then the termination is t w o steps a w a y from the Curren t Solution, etc. The last parameter,EligibleNeighbors, denotes the eligible neigh bors for ev aluation. IfEligibleNeighbors= feasible, then only feasible neigh bors are considered. IfEligibleNeighbors= both, then both feasible and infeasible neigh bors are considered for ev aluation. In the algorithm, ev aluating a solution means calculating its objectiv e function v alue and determining if it is feasible. When all the neigh bors are ev aluated, the follo wing solutions are iden tied. The Best Neigh bor is aSearchDepthstep neighbor with the lo w est objectiv e v alue of all the neigh bors. The Leading Neigh bor is aMoveDepthstep neigh bor whic h leads to the Best Neigh bor. Similarly the Best F easible Neigh bor is aSearchDepthstep feasible neigh bor with the lo w est objectiv e v alue of all the feasible neigh bors and the Leading F easible Neigh bor is aMoveDepthstep feasible neigh bor whic h leads to the Best F easible Neigh bor. Note that, ifEligibleNeighbors= both, then Best Neigh bor and Best F easible Neigh bor migh t dier. IfEligibleNeighbors= feasible, then these t w o solutions are the same. This also holds for the Leading Solution and the Leading F easible Solution. A mo v e consists of updating the Curren t Solution and comparing the objectiv e function v alue of this solution to the Best Solution. If the solution at hand has a lo w er objectiv e v alue and is feasible, then the Best Solution is updated.
PAGE 145
129 Figure 37 sho ws the pseudocode for our heuristic algorithm, namely Algorithm Parametric Heuristic Search Algorithm ParametricHeuristicSearch( SearchDepth;MoveDepth;Eligible ) 1.SetCurrentSolutionasSWcornerandperformaNEfeasiblesolutionsearch(us ing Algorithm NEFeasibleSolutionSearch ).Ifnofeasiblesolutionscanbefound,stop. OtherwisesetCurrentSolutionasthissolution.2.Evaluateall SearchDepth step EligibleNeighbors oftheCurrentSolution.Ifthe BestNeighborisnotnull,thenmovetotheLeadingNeighbor.Ifthisnewsoluti onis feasiblethenrepeatstep2.Otherwisegotostep3.3.CheckifanyfeasiblesolutionexistsintheNEdirection,byemployingAlg orithm NEFeasibleSolutionSearch .Ifyes,thenmovetothatfeasiblesolution,andgotostep 2.Otherwisegotostep4.4.Returntothelastvisitedfeasiblesolution.IftheBestFeasibleNeighb orisnot null,thenmovetotheLeadingFeasibleNeighbor,andgotostep2.Otherwisest opand returntheBestSolution. Figure 37: Pseudocode for Algorithm Parametric Heuristic Search The algorithm alw a ys mo v es in the NE direction. The total n um ber of iterations performed b y Algorithm Parametric Heuristic Search is at mostP ni =1 a i, wherea iis the n um ber of acceptable v alues for the decision v ariableq i. A t eac h iteration, if Algorithm NE Feasible Solution Search is not in v ok ed, at mostn SearchDepthneigh bors are ev aluated, inO ( mn SearchDepth )time. W e already kno w that an iteration with Algorithm NE Feasible Solution Search tak esO ( nm )time. SinceO ( mn ) O ( mn SearchDepth ), the time complexit y of the algorithm isO ( mn SearchDepth P ni =1 a i ). Space complexit y of the algorithm is rather easy to calculate. The algorithm stores a constan t n um ber of solutions (Curren t Solution, Best Solution, etc.) during the iterations. Eac h solution consists ofnv ariable v alues. So, the space complexit y of the algorithm isO ( n ). The parametric heuristic procedure giv en here is v ery similar to that of the SMSL model. Again, w e note that the computational burden of the procedure will
PAGE 146
130 be higher than its SMSL equiv alen t. This is due to the time required to assess feasibilit y of the solutions. 3.5 MetaHeuristics for the1 stPhase Problem Our implemen tation of three metaheuristic methods on the SMSL model sho ws that the path relinking method suits the problem best, among the three methods. In the FSSL model, w e tak e this result in to accoun t and limit our implemen tation to path relinking method only 3.5.1 Neigh borhood Structure W e dene a solutionq =( q 1 ;q 2 ;::;q n )as a v ector of the decision v ariables suc h that all the decision v ariables tak e an acceptable v alueq i 2 A i ; 8 i. W e further distinguish bet w een feasible and infeasible solutions as follo ws. A solution is fe asible if it satises the rst constrain t set ( 3.2 ), otherwise it is infe asible A solutionq 1 =( q 1 1 ;q 1 2 ;:::;q 1 n )is a neighb or ofq 0 =( q 0 1 ;q 0 2 ;:::;q 0 n )if and only if exactly one v ariable v alue is dieren t in these v ectors, and the categorical distance bet w een the v alues of this decision v ariable is at most, whereis a user dened in teger that is greater than or equal to one. With this denition, a solution ma y ha v e at most2 nneigh bors. W e iden tify t w o particular solutions. The rst one is the origin where eac h decision v ariable tak es its lo w est possible v alue, that isq i =1 ; 8 i 2 N. The second one is the farthest c orner of the solution sp ac e where ev ery decision v ariable tak es its largest v alue, that isq i = d i ; 8 i 2 N. If w e relax in tegralit y of batc h sizes, and letr i = q i =Qwhere0 r i 1suc h thatP i 2 N r i =1denote the proportion of the n um ber of batc hes of a certain product to the total n um ber of batc hes, and assume these proportions (r is) are xed, then the objectiv e function ( 3.1 ) becomesP i 2 N ( d i =r i ) 2 (1 r 2 i ) =Q. This sho ws that largerQv alues are expected to yield better solutions. W e can in tuitiv ely argue that the global optim um ma y be located in the vicinit y of the farthest corner of the solution space. Therefore, guiding the
PAGE 147
131 searc h process to w ards this farthest corner migh t help us in nding the global optim um. 3.5.2 P ath Relinking In the follo wing, w e giv e a description of our PR implemen tation in the FSSL model. W e use the generic algorithm presen ted in Figure 38 F or initialization, emplo ymen t of the problem specic heuristic methods is represen ted b y a parameter,PSHMethods. W e consider the problemspecic heuristic methods in order of their time consumption, as reported in Y a vuz and T ufek ci (2004a). IfPSHMethods =1, then the rst and fourth methods are emplo y ed. IfPSHMethods =2, then method 2 is emplo y ed in addition to the other t w o. Finally ifPSHMethods =3, all four methods are emplo y ed. Ha ving established a set of seed solutions, the div ersication generator processes these seed solutions and creates the initial reference set. W e use t w o alternativ e modes of the div ersication generator. The rst mode is similar to the m utation operator used in genetic algorithms (Goldberg, 1989; Holland, 1975; Reev es, 1997). That is, the seed solution v ector is tak en as the input and starting with the rst v ariable, a div ersied solution is created for eac h v ariable. This is ac hiev ed b y replacing the v ariable's v alue with its100 thnext acceptable v alue. Ifa i < 100, the mod operator is used in order to obtain an acceptable v alue with index v alue bet w een one anda i. Here 100 is arbitrarily selected, an y signican tly large in teger suc h as 50, 200 or 500 could ha v e been c hosen. The second mode, on the other hand, does not process seed solution v ectors. It performs a local searc h for eac h decision v ariable and iden ties solutions that maximize the v alue of that certain decision v ariable. This mode of div ersication yields a total ofnalternativ e solutions and enables us to explore extreme corners of the feasible region. The parameter represen ting the selection of the div ersication mode isDiversification, and it has four lev els. A t lev el 1 no div ersication is applied, at lev el 2 only corner
PAGE 148
132 Algorithm PR Initialization 1.Initializethe ReferenceSet withseedsolutions,usingproblemspecicheuristics. 2.Foreachseedsolutionf 3.Createalldiversiedsolutionsoftheseedsolutiononhand.4.Foreachdiversiedsolutionf 5.Findalocaloptimumusingthe ImprovementMethod 6.Updatethe ReferenceSet g gImprovement 7.Generatesubsetsofthe ReferenceSet 8.Foreachsubsetf 9.Createcombinationsofthesolutionsinthesubset.10.Foreachcombinationf 11.Findalocaloptimumusingthe ImprovementMethod 12.Updatethe ReferenceSet g g13.Iterateuntil TerminationCriteria aresatised. Figure 38: Pseudocode for Algorithm PR searc h is applied, at lev el 3 only the div ersication generator is used and nally at lev el 4 both modes are used. Depending on the mode selection in the application of the algorithm, the n um ber of div ersied solutions ma y be less than the size of the reference set. In this case, the empt y slots in the reference set can be lled in the consecutiv e iterations. The size of the reference set is represen ted b y parameterb. In our implemen tation w e k eep one infeasible solution in the reference set, at all times. This infeasible solution is the farthest corner of the solution space discussed in Section 3.5.1
PAGE 149
133 The subset generation mec hanism used for PR considers the subsets with t w o solutions only These solutions are used as origin and destination poin ts in the solution com bination mec hanism. Based on the acceptable v alues, w e measure the distance bet w een the origin and the destination with a categorical distance measure. Ifq 1andq 2are the origin and destination v ectors, and w e dene the functionPosition ( q i )as an in teger function whic h returns the position of v ariablei's v alue inA i, then the distance bet w een these t w o v ectors is dened asP i 2 N j Position ( q 1 i ) Position ( q 2 i ) j, wherej x jis the absolute v alue ofx. Starting from the origin, the neigh bor solutions whic h decrease the distance b y one are considered and the bestNTSsolutions are stored in a list, whereNTSis the parameter standing for the n um ber of temporary solutions. In the next step, eac h solution in this list is considered as the origin, and again the neigh bor solutions that decrease the distance b y one are ev aluated. This is repeated un til the destination solution is reac hed, while k eepingNTSbest solutions bet w een the steps.NTS =1represen ts a single path bet w een the origin and the destination. Ho w ev er,NTS> 1can be considered asNTSparallel paths that are built bet w een the origin and the destination solutions. Using the impro v emen t method on com bined solutions and updating the reference set are common in both the initial and iterativ e phases. Ho w ev er, performing a local searc h on ev ery solution obtained ma y be impractical.LSinPreProcessis the parameter that represen ts local searc h usage in the initial phase. IfLSinPreProcess= 0, no local searc h is applied. IfLSinPreProcess= 1, local searc h is only applied at the end of the initial phase on the solutions that are stored in the reference set. IfLSinPreProcess= 2, a local searc h is applied for ev ery trial solution considered.LStoRefSetPPis the parameter represen ting the update frequency of the reference set and tak es the v alues oftrueorfalse. IfLStoRefSetPP = true, ev ery time a solution is ev aluated, it is compared to the
PAGE 150
134 solutions in the reference set and if necessary the reference set is updated. This requires that ev ery mo v e performed during the local searc h is considered for the reference set. IfLStoRefSetPP = false, only the nal result of the local searc h, a local optim um, is tried for the reference set. P arametersLSinIterationsandLStoRSItersha v e the same denition and lev els, applied to the iterativ e phase. F or the termination of the algorithm w e ha v e one criterion only If the reference set is not modied on a giv en iteration, it cannot be modied on the later iterations, either. Therefore, w e k eep trac k of the solutions in the reference set and immediately terminate if the reference set is the same before and after an iteration. This criterion does not require a parameter. 3.6 Comparativ e Study 3.6.1 Design of Experimen ts In our study w e consider 10 product problems with a v erage demand of 750 units, whic h can be solv ed b y the dynamic programming procedure in reasonable times. W e consider 2, 5 and 10 mac hine o wshops (m 2f 2 ; 5 ; 10 g). W e use three experimen tal factors,s i;j =p i;jratio, T relaxation percen tageand div ersication lev elr.r 2f 0 ; 1 gis used to create test cases in whic h dieren t products are div ersied in terms of demand, processing time and setup time.r =1reects the div ersied case, andr =0reects the undiv ersied case where the products are v ery similar to eac h other. Demand v alues are randomly and uniformly generated bet w een the minim um and maxim um v alues, where maxim um demand is t wice as large as the a v erage demand for div ersied instances and 20% o v er the a v erage demand for the instances with similar products. The ratio of maxim um demand to minim um demand is 50 and 1.5 for these t w o t ypes of instances, respectiv ely W e useto denote the ratio bet w een the expected v alues ofs iandp ifor the div ersied instances. W e rst createp i;jaccording to uniform distribution bet w een
PAGE 151
135 (0,5], and thens i;jaccording to uniform distribution bet w een [(1 0 : 1 r ) p i;j ; (1+0 : 1 r ) p i;j ]. W e let 2f 100 ; 10 ; 1 gfor our experimen ts. T otal a v ailable time should allo w at least one setup per product, that isT T LB = P i 2 N max j f d i p i;j + s i;j j j 2 M g. On the other hand,Tshould be limited withT 2causes extensiv e time consumption but not yielding a signican t impro v emen t in solution qualit y Therefore w e narro wSearchDepth 2f 1 ; 2 g. If only onestep neigh bors are considered, then theMoveDepthparameter is xed to one. Ho w ev er, ifSearchDepth =2, then w e migh t speed up the algorithm b y mo ving directly to the best neigh bor found (MoveDepth =2). Therefore, w e test both lev els of this parameter. F or the combinations ev aluating the infeasible neigh bors as w ell, w e do not w an t to allo w the searc h to mo v e too far deep in to the infeasible region, but k eep the mo v es within
PAGE 152
136 onestep neigh borhood of the feasible region. Therefore, w e xSearchDepth =1for suc h com binations. The methods tested are: Method P arameter Com bination # (SearchDepth;MoveDepth;EligibleNeighbors) PSH1 (1, 1, feasible) PSH2 (2, 1, feasible) PSH3 (2, 2, feasible) PSH4 (1, 1, both) W e see the same parametric structure in our path relinking implemen tation, as w ell. The parametric structure of our computer code is v ery exible in terms of testing alternativ e strategies for a method. Ho w ev er, when the n um ber of parameters is large, an enormous n um ber of com binations of algorithm parameters exist. Finding the most eectiv e com bination is itself a com binatorial optimization problem. W e adopt the same heuristic approac h as of the previous c hapter to this problem; at eac h stage w e x some of the parameters to predetermined v alues and perform full factorial experimen ts on the rest of the parameters. F or the signicance of the dierence bet w een the tested lev els of a parameter, w e apply pairedttests. W e denote the mean v alues of computation time and relativ e deviation from the optimal solution measures with tland dl, respectiv ely for thel thlev el of the parameter. If there are only t w o lev els for a parameter then one h ypothesis per measure is built. If, ho w ev er, there are more than t w o lev els, then the n um ber of h ypotheses to be built depends on the relationship bet w een the lev els of the parameter. F or some parameters, b y its role in the algorithm, w e kno w that the solution qualit y impro v es and the computational time increases with the lev els. F or example, if w e tak e the size of the reference set as a parameter, w e expect larger reference set sizes to require longer computational times and yield better results. In suc h cases, w e build h ypotheses on the dierence of adjacen t
PAGE 153
137 lev els in pairs. If all the adjacen t lev els are signican tly dieren t and a monotone order of the lev els is found, w e do not construct h ypotheses for eac h possible pair of labels. Otherwise, depending on the results obtained, w e ma y w an t to distinguish bet w een nonadjacen t lev els of the parameter and built h ypotheses for them. F or some other parameters, on the other hand, the results are not expected to be in suc h an order. Th us, w e build h ypotheses and applyttests for ev ery possible pair of the lev els of the parameter. F or allttests, w e use a condence lev el of95%. The ne tuning process terminates when all the parameters are considered. The ne tuning process can be seen as a supervised learning process. W e use 20% of the test instances (v e problems for eac h problem setting presen ted in the previous section) for ne tuning. That is, the most promising methods according to their performance on the fraction of the test instance will be used on the en tire set of test instances. W e represen t the PR method with PR(PSHMethods,Diversification,b,NTS,LSinPreProcess,LStoRefSetPP,LSinIterations,LStoRSIters). Here, w e ha v e a total of 8 parameters. W e use the ne tuning results for the PR method, in the SMSL model as a starting poin t. That is, w e start with an initial com bination of the parameters of PR(2, 3,n +15,d 3 n 8 e, 1,true, 1,false). In the rst stage w e test the parametersbandNTS, with three and four alternativ e v alues, respectiv ely This giv es us 12 com binations in total. As w e see from T ables D1 and D2 ,b =40andNTS =8are preferable o v er the other lev els of these parameters. A t this poin t, w e see that the results are satisfactory in terms of both the deviation and solution time measures. Therefore, w e stop ne tuning and select PR(2, 3, 40, 8, 1,true, 1,false) as the com bination to be tested with the comparativ e analysis. The process of ne tuning the PR method is summarized in T able 31
PAGE 154
138 T able 31: Summary of the Fine T uning Process for the PR Method Stage P arameters # of Time Methods # P D b NTS L 1 L 2 L 3 L 4 T ested (hrs) 1 2 3 20 4 1 true 1 false 12 12.3 30 6 40 8 10 P : PSHMethods;D : Diversification T otal 12 12.3 L 1 : LSinPreProcess;L 2 : LStoRefSetPP L 3 : LSinIterations;L 4 : LStoRSItersThe last t w o methods included in this comparativ e study are our dynamic programming (DP) and bounded dynamic programming (BDP) methods. In total, w e ha v e six methods in our comparativ e study In the follo wing subsection, these methods are denoted with DP BDP PSH1, PSH2, PSH3, PSH4 and PR. 3.6.3 Results and Discussion In ev aluating the computational performance of our solution methods, w e consider t w o performance measures, namely computational time and percen t deviation from the optimal solution. These t w o measures represen t the tradeo bet w een solution qualit y and time. Results from solving the test instances with all the methods considered, including three metaheuristic methods, four problemspecic heuristic methods and an exact method, for the computation time and percen t deviation from the optim um measures, are summarized in T able 32 W e analyze the dierence of the methods pair b y pair, for both computational time and percen t deviation from the optimal solution measures. A total of 41 n ull h ypotheses are built and all but one of them are rejected, at a 95% condence lev el b y 2tailed pairedttests. The only h ypothesis that w e can not reject states that PR method is indieren t from the BDP (or DP) in terms of deviation. That is, PR yields an excellen t solution qualit y All the methods are signican tly dieren t from eac h other in terms of both the solution time and deviation from the optimal
PAGE 155
139 T able 32: Summary of Results Time (seconds) Deviation (%) m Method A vg. Min. Max. A vg. Max. 2 DP 5,606.51 841.41 17,172.10 BDP 287.57 15.64 942.57 PSH1 0.38 0.18 0.51 0.782 23.697 PSH2 3.70 1.38 5.00 0.624 23.697 PSH3 1.92 0.91 2.63 0.654 23.697 PSH4 0.34 0.06 0.49 0.806 23.697 PR 12.90 1.39 47.52 0.0063 0.300 5 DP 7,715.82 960.36 32,945.12 BDP 328.83 14.25 1,190.13 PSH1 0.48 0.21 0.67 0.752 26.892 PSH2 4.64 1.40 6.50 0.552 11.706 PSH3 2.39 1.07 3.25 0.602 13.758 PSH4 0.44 0.08 0.64 0.702 26.892 PR 15.82 1.78 56.59 0.0139 0.144 10 DP 10,903.43 1,644.74 45,629.51 BDP 469.00 14.88 1,554.59 PSH1 0.64 0.31 1.76 0.726 25.233 PSH2 6.22 1.91 8.62 0.683 25.233 PSH3 3.20 1.16 6.59 0.647 25.233 PSH4 0.59 0.11 1.76 0.708 25.233 PR 26.10 3.36 174.28 0.0047 0.161 solution. Note that, the only comparison that is excluded from thettests is DPBDP comparison on the deviation measure. The ordering of the methods is tDP > tBDP > tPR > tPSH 2 > tPSH 3 > tPSH 1 > tPSH 4for the solution time and tDP = tBDP = tPR < tPSH 2 < tPSH 3 < tPSH 1 < tPSH 4for the deviation measure. The solution time and deviation orders sho w that the methods are divided in to t w o classes. The rst class includes the exact methods and PR, and the second class includes the problemspecic heuristics. The dynamic programming procedure proposed for the exact solution of the problem requires extensiv e computational time. In the w orst case, w e see that, it solv es 10 mac hine problems in appro ximately 12.5 hours. The bounded dynamic
PAGE 156
140 programming procedure signican tly reduces this time requiremen t, in that it requires less than an hour in the w orst case, for the 10 mac hine problems. W e see that PR is dominan t o v er both of the exact methods, as it is indieren t in solution qualit y and tak es signican tly less time. This pro v es that our PR implemen tation for the problem is v ery successful. F or the class of problemspecic heuristics, our ndings for the FSSL model are parallel to those of the SMSL model. The results sho w that the four alternativ e methods are signican tly dieren t in terms of both solution qualit y and solution time measures. The orderings of the methods emphasize the tradeo bet w een the solution time and the solution qualit y none of the four methods is dominan t o v er the others. The better solution qualit y w e expect, the longer the time w e should dev ote to the method.
PAGE 157
CHAPTER 4 SINGLEMA CHINE MUL TILEVEL MODEL The man ufacturing en vironmen t of in terest in the singlemac hine m ultilev el (SMML) model is a single mac hine, whic h is common with the SMSL model discussed in Chapter 2 The main dierence bet w een the SMSL and SMML models is the n um ber of lev els considered. MultiLev el denotes that not only the endproducts lev el but also the sublev els of subassem blies, parts and ra wmaterial are considered. The v ariation in endproducts' appearances in the nal sc hedule will be minimized, through a complex objectiv e function whic h tak es the sublev el material requiremen ts of the endproducts as parameters. F or an example of the man ufacturing system w e are concerned with in this c hapter, w e refer the reader to the in troduction c hapter (page 1 ), where w e discuss a case from automotiv e pressure hose man ufacturing industry (see Figure 11 ). In the m ultilev el v ersion of the problem, not only the latter stages of the process are considered, but also the former stages are tak en in to accoun t. In this case the assembly stage can represen t the rst lev el and oven curing stage can represen t the second lev el of this m ultilev el production system. In this example, reeled hoses are the componen ts required to man ufacture the endproducts at the nal lev el. Pressure hoses require v ariable setup and processing times on the assem bly station. Similarly semiprocessed hoses require v ariable setup and processing times on the o v en curing stage. In this example, one endproduct requires one t ype of componen t (reeled hose) only whereas one componen t can be used in sev eral dieren t endproducts. Ho w ev er, in our model, w e consider the general case where an y product can require an y componen t. W e aim to smooth the production at all 141
PAGE 158
142 lev els, so that the demand for parts/products at all sublev els are ev enly dispersed o v er the planning horizon and the in v en tory lev els can be lo w. In the majorit y of papers in production smoothing literature, batc h sizes are assumed iden tical (and equal to one). The adv an tage of this assumption is ob vious; with this assumption one easily adds setup time to processing time, therefore eliminates the complexit y imposed on this problem b y setups and larger batc h sizes. Moreo v er they assume that processing times are iden tical (and equal to one) for eac h product and there is enough time to process all these products in an y sequence. This mak es the ideal onepieceo w possible, thereb y eliminating the need to process the products in batc hes. With this assumption the en vironmen t is dened as a sync hronized (perfectly balanced) assem bly line. The models used in the papers men tioned abo v e cannot be used to obtain lev el sc hedules for a single mac hine en vironmen t where processing and setup times v ary for dieren t products and total a v ailable time is limited. In this en vironmen t one m ust decide on batc h sizes and n um ber of batc hes that will be produced to meet the demand, for eac h product, before trying to sequence the batc hes. This dissertation dev elops a new structure where demands are met in batc hes, and eac h batc h can be processed within a xed timebuc k et, whic h itself is a decision v ariable. Th us, the problem can be analyzed in t w o phases, the rst phase is to determine length of the xed timebuc k et, n um ber of batc hes and batc h sizes for eac h product. Once w e solv e the problem of the rst phase, the problem of sequencing those batc hes, whic h is the second phase, becomes somewhat easier. Since eac h batc h should be processed in a xed timebuc k et, and total n um ber of batc hes to produce is kno wn for eac h product, w e can treat eac h batc h as a single unit of that product. This second phase is similar to models in the literature. Therefore, w e can adapt one of the ecien t methods, whic h are already dev eloped and tested for a problem similar to ours.
PAGE 159
143 F urthermore, the curren t literature on the m ultilev el v ersion of the problem focuses on sc heduling the nal lev el only in trinsically assuming that the sublev els can be subordinated to the nal lev el. W e see this as a shortcoming of the literature and dev elop a methodology that sc hedules the sublev els, as w ell. W e start with the nal lev el (the lev el of the end products), con tin ue with the second lev el (where the subassem blies required b y the nal lev el are man ufactured) and the other sublev els, in the hierarc hical order. W e exclude the ra w materials lev el, as the optimal planning of the ra w material purc hases brings a dynamic lotsizing problem whic h is bey ond the scope of this dissertation. This c hapter is organized as follo ws. Section 4.1 presen ts the curren tly existing w ork in the literature, related to the SMML model. Sections 4.2 to 4.5 are dev oted to the2 ndphase problem, where the problem is formally stated and exact and heuristic solution approac hes are presen ted. The w ork related to the2 ndphase problem mostly relies on the existing w ork in the literature, therefore these sections include an extensiv e literature review. The rest of the c hapter is dev oted to the1 stphase problem, the main consideration of the c hapter. In section 4.6 w e presen t the mathematical form ulation of the problem; and in section 4.7 w e dra w useful properties about the problem. Section 4.8 dev elops exact solution methods for the problem. Sections 4.9 and 4.10 are dev oted to heuristic solution procedures, as w e devise a parametric heuristic algorithm for the problem and implemen t a metaheuristic tec hnique in these sections, respectiv ely Finally Section 4.11 presen ts a comparativ e analysis of the solution approac hes dev eloped for the problem. 4.1 Literature Review Before proceeding in to review of related papers in the eld, w e dene our notation in order to a v oid possible confusion due to dieren t notations used in these papers. L Num ber of lev els in the man ufacturing system
PAGE 160
144 l Lev el indexn lNum ber of products/parts to be man ufactured at thel thlev elN lSet of products/parts to be man ufactured at thel thlev el (= f 1 ; 2 ;::;n l g) i Product index k Stage indexs l;iSetup time of product i on the mac hine, at thel thlev elp l;iProcessing time of one unit of product i at thel thlev elr u;v;l;iAmoun t of partvat the lev elurequired to produce one unit of partiat thel thlev eld l;iDemand for product i at thel thlev el, for the planning horizonD l;iT otal demand of products 1 toito be man ufactured in the planning horizon, at thel thlev el (= i P h =1 d l;i)b l;iBatc h size of product i at thel thlev elq l;iNum ber of batc hes of product i at thel thlev el, to be man ufactured in the planning horizonQ lT otal n um ber of batc hes to be man ufactured in the planning horizon, at thel thlev el (= n l P i =1 q l;i) T T otal a v ailable time, length of the planning horizont lLength of the timebuc k et, at thel thlev elx l;i;kCum ulativ e production of product i at thel thlev el, o v er stages 1 to k measured in batc hes. The planning horizon ( T ) is divided in to equal length in terv als, or stages. The n um ber of stages at a certain lev el (l) is equal to the total n um ber of batc hes to be man ufactured at that lev el (Q l). This propert y will allo w us to measure the deviation from the ideal production rates in a discrete manner. Also, (x l;i;k) denotes the total n um ber of batc hes of product i at thel thlev el, produced in
PAGE 161
145 stages 1,2,..,k. The follo wing recursiv e equalit y holds forx l;i;k.x l;i;k = 8>>>><>>>>: 0 ;if k = 0x l;i;k 1 +1 ;if product i is sequenced in thek thstagex l;i;k 1 ;o/w. W e ha v e already noted that, in the ideal sc hedule in a JIT en vironmen t the production rate should be constan t, in other terms cum ulativ e stoc k for a product at a giv en poin t in time (total n um ber of items produced from the beginning un til this time) should be proportional to the time elapsed since the beginning of the horizon. This ideal sc hedule concept applies to not only the endproducts but also subassem blies, componen ts, parts and ra w materials. Monden (1998) reports that, in T o y ota, the smoothness of the nal sc hedule is assessed with regard to the part consumptions at the second lev el. Monden denes the objectiv e function as the summation of squared deviations from the ideal sc hedule, for all the parts at the second lev el. MinimizeZ = D 1 ;n 1 X k =1 n 2 X i =1 ( x 2 ;i;k k d 2 ;i D 1 ;n 1 ) 2The objectiv e is formed b y summing up squared gaps o v er the parts and stages. Milten burg and Sinnamon (1989) generalize Monden's form ulation of the problem, in that they measure the gap bet w een the ideal and actual sc hedules at four lev els. Their objectiv e function incorporates all the four lev els, with respect to the w eigh ts assigned to eac h lev el. Also, Milten burg and Sinnamon's objectiv e function denes the ideal consumption amoun t using the demand rate of a certain part (r i;j) at a certain lev el and the total production amoun t (X l;k) at that lev el, up
PAGE 162
146 to a certain stage in the sequence. MinimizeZ = D 1 ;n 1 X k =1 4 X l =1 n l X i =1 w l ( x l;i;k X l;k r l;i ) 2These t w o alternativ e models are the main sc hools of though t in m ultilev el mixedmodel sc heduling literature. W e will refer to these t w o models b y Monden model and Milten burg and Sinnamon model, respectiv ely Monden (1998) presen ts t w o heuristic algorithms, namely the Goal Chasing Method (GCM) and the Goal Chasing Method 2 (GCM2), for the nearoptimal solution of the Monden model. Both heuristics are m y opic onepass constructiv e heuristics that ev aluate alternativ e products that can be sc heduled imminen tly and select the product with the lo w est con tribution to the objectiv e function (deviation from the ideal sc hedule). Milten burg and Sinnamon also suggest t w o heuristic algorithms for their model. Their rst heuristic considers the immediate stage in the sequence, whereas the second heuristic considers the follo wing t w o stages, hence is less m y opic. Both methods are onepass constructiv e heuristics. The computational complexities for the methods areO ( D 1 ;n 1 n 1 ( n 2 + n 3 + :: + N L ))andO ( D 1 ;n 1 n 21 ( n 2 + n 3 + :: + N L )), respectiv ely Milten burg and Goldstein (1991) extend Milten burg and Sinnamon model, in that they call the objectiv e function in this model as the usage goal and dene a loading goal in addition. The authors build a m ultiobjectiv e optimization model and adapt Milten burg and Sinnamon's (1989) heuristics to this m ultiobjectiv e problem. Milten burg and Sinnamon (1992) extend their seminal w ork, in that they adapt a nearest poin t approac h and dev elop a new heuristic procedure to yield good solutions to the Milten burg and Sinnamon model.
PAGE 163
147 Inman and Buln (1992) propose an alternativ e objectiv e function for Milten burg and Sinnamon model, where they dene due dates (ideal production times) for eac h cop y of eac h product, and totally ignore the sublev els. Then, the authors suggest an EDD solution approac h for this model, whic h is guaran teed to nd the optimal solution for this modied objectiv e function and expected to nd good (nearoptimal) solutions to the original problem as w ell. This EDD algorithm runs inO ( D 1 ;n 1 log D 1 ;n 1 )time. Cakir and Inman (1993) address the Monden model and analyze GCM2 method in detail. The authors argue that the GCM2 method is not applicable to cases where products ha v e nonzero/one part requiremen ts. They propose a new heuristic to close this gap and also sho w that their heuristic method runs faster than the GCM2 method. Kubiak (1993) giv es a general form ulation whic h represen ts both Monden and Milten burg and Sinnamon models and calls the problem as the output rate v ariation (OR V) problem. He sho ws that the problem in the general case is NPHard. Morabito and Kraus (1995) underline the importance of feasibilit y and claim that Milten burg and Sinnamon model lac ks a necessary constrain t, therefore the heuristics proposed in Milten burg and Sinnamon (1989) can yield infeasible solutions. They modify the model b y adding a constrain t to assure feasibilit y and also modify the heuristic algorithms accordingly Milten burg and Sinnamon (1995) answ er the critiques regarding feasibilit y and state the importance of w eigh ts associated with the lev els. The authors argue that their model allo ws the user to giv e higher priorit y to smoothing the part consumptions at the sublev els than meeting the demand for endproducts in exact quan tities. They further argue that
PAGE 164
148 in some industries smoothing the part consumption can be far more importan t than meeting the demand in exact quan tities. Bautista, Compan ys and Corominas (1996) address the Monden model and state the m y opic beha vior of the GCM method. The authors propose an impro v ed heuristic and also an exact solution procedure that is based on dynamic programming and branc h and bound methods. Leu, Matheson and Rees (1996) address the Monden model. The authors dev elop a genetic algorithm for the problem's solution. The computational study sho ws that, with randomly generated initial populations, the genetic algorithm yields better solutions than those of the GCM method in the majorit y of the test instances. Leu, Huang and Russell (1997) adopt a beam searc h approac h to solv e a generalized model whic h co v ers both Monden and Milten burg and Sinnamon models. Through a computational study with 400 test problems, the authors sho w that their beam searc h approac h yields better results than both the GCM method and Milten burg and Sinnamon's heuristic. Duplaga, Hahn and Hur (1996) are concerned with reallife applications of mixedmodel assem bly line sequencing. The authors describe the model and solution method used b y Hyundai, with comparison to the Monden model and GCM and GCM2 methods, used b y T o y ota. Duplaga and Bragg (1998) conduct a comparativ e study in whic h they review heuristic methods used in the eld, namely the GCM and GCM2 methods used b y T o y ota, the Hyundai method and Milten burg and Sinnamon's (1989) heuristic methods. The results of the computational study sho w that GCM2 and Milten burg and Sinnamon's second heuristic yield better solutions than the others. Aigbedo and Monden (1997) generalize the Monden model to incorporate sev eral lev els of parts, not only the immediate sublev el of the nallev el. They
PAGE 165
149 also call the objectiv e function of the Monden model as the usage goal and dene a loading goal in addition. The study includes four objectiv e functions and proposes a m ultiobjectiv e optimization model. Kubiak, Steiner and Y eomans (1997) study a v arian t of the problem in whic h they use absolute v alue in the core of the Milten burg and Sinnamon measure and build a minmax objectiv e function. The authors propose a dynamic programming procedure for the problem's solution and sho w that the procedure can be used to solv e the minsum form ulation, as w ell. The computational complexit y of the DP procedure is giv en asO ( n 1 ( n 1 + n 2 + :: + n L ) Q n 1 i =1 ( d 1 ;i +1)). Y a vuz and T ufek ci (2004b) dev elop lo w er bounds on both Monden and Milten burg and Sinnamon models' objectiv e functions. The authors also consider a batc h production case, and extend the models to incorporate the batc h sizes and dev elop lo w er bounds for these cases. The papers men tioned in this section will be analyzed in detail in follo wing sections, as needed. 4.2 2 ndPhase F orm ulation The t w o sc hools of though t in production smoothing stem from Monden (1998) and Milten burg and Sinnamon (1989). In this section, w e discuss both of these models and build our o wn model. The Monden model is form ulated as follo ws (Monden, 1998, Chapter 16).
PAGE 166
150 MinimizeZ = D 1 ;n 1 X k =1 n 2 X h =1 x 2 ;h;k kd 2 ;h D 1 ;n 1 2(4.1) S.T.n 1 X i =1 x 1 ;i;k r 2 ;h; 1 ;i = x 2 ;h;k ;h =1 ; 2 ;::;n 2 ;k =1 ; 2 ;::;D 1 ;n 1(4.2)n 1 X i =1 x 1 ;i;k = k;k =1 ; 2 ;::;D 1 ;n 1(4.3)x 1 ;i;k x 1 ;i;k 1 2f 0 ; 1 g ;k =2 ; 3 ;::;D 1 ;n 1(4.4)x 1 ;i;D 1 ;n 1 = d 1 ;i ;i =1 ; 2 ;::;n 1(4.5)x 1 ;i;k 2 Z + ;i =1 ; 2 ;::;n 1 ;k =1 ; 2 ;::;D 1 ;n 1(4.6) In the core of the objectiv e function ( 4.1 ), w e see the ideal consumption rate for a part at the second lev el and its actual amoun t that is consumed b y the end products, according to the sc hedule of the rst lev el. The objectiv e function measures the deviation at the second lev el only This is the main c haracteristic of T o y ota's model. The lo w errank ed sublev els of parts and ra w materials are ignored, as w ell as the nal (rst) lev el. The rst constrain t set ( 4.2 ) establishes the connection bet w een the actual consumption of a part and the actual sc heduled amoun ts of the endproducts at the nal lev el. The second and third constrain t sets ( 4.3 )( 4.4 ) assure that one and only one product is assigned to eac h stage of the sequence. The fourth constrain t set ( 4.5 ) assures that the demand for the end products is met in exact quan tities. Finally the last constrain t set ( 4.6 ) dene the decision v ariables as in teger n um bers. The alternativ e to the abo v e model in the literature is Milten burg and Sinnamon's (1989, p. 1494) model.
PAGE 167
151 MinimizeZ = D 1 ;n 1 X k =1 4 X l =1 n l X h =1 w l x l;h;k X l;k d l;h D 1 ;n 1 2(4.7) S.T.n l X h =1 x l;h;k = X l;k ;l =1 ; 2 ;::; 4 ;k =1 ; 2 ;::;D 1 ;n 1(4.8)n 1 X i =1 x 1 ;i;k r l;h; 1 ;i = x l;h;k ;l =1 ; 2 ;::; 4 ;h =1 ; 2 ;::;n l ;k =1 ; 2 ;::;D 1 ;n 1(4.9)x 1 ;i;k x 1 ;i;k 1 2f 0 ; 1 g ;k =2 ; 3 ;::;D 1 ;n 1(4.10)n 1 X i =1 x 1 ;i;k = k;k =1 ; 2 ;::;D 1 ;n 1(4.11)x 1 ;i;D 1 ;n 1 = d 1 ;i ;i =1 ; 2 ;::;n 1(4.12)x 1 ;i;k 2 Z + ;i =1 ; 2 ;::;n 1 ;k =1 ; 2 ;::;D 1(4.13) The objectiv e function measures deviations at four lev els of products, parts and ra w materials. The objectiv e is built upon an actual consumption amoun t and an ideal consumption amoun t, ho w ev er the ideal amoun t is dened in a dieren t w a y The ideal consumption is calculated with respect to the total consumption at that lev el up to that stage (X l;k) and the demand ratio of that part in the total demand of that lev el. In this w a y Milten burg and Sinnamon puts more emphasis on the balance within a certain lev el's sc hedule, rather than an ideal balance. The rst constrain t set ( 4.8 ) establishes the total consumption amoun ts (X l;k). The remaining constrain ts ( 4.9 )( 4.13 ) are similar to the constrain ts ( 4.2 )( 4.6 ) in the Monden model. The k ey dierences bet w een the t w o alternativ e models presen ted abo v e are the lev els considered and the w a y the objectiv e function is constructed. W e adopt Monden's objectiv e function in whic h the ideal consumptions are calculated
PAGE 168
152 according to the straigh t line depicted in Figure 15 F or the n um ber of lev els, on the other hand, w e adopt Milten burg and Sinnamon's m ultilev el structure and extend it toLlev els, whereLis a userdened parameter. Both of the existing models assume onepieceo w and ignore batc h sizes. W e incorporate the batc h sizes (b l;i) and the n um ber of batc hes (q l;i) in to our model. The n um ber of batc hes (q l;i) replace the demand v alues (d l;i), and the batc h sizes (b l;i) are used in calculating the total demand and actual consumption of the parts. In our solution approac h, w e do not sc hedule the nal lev el only but all the lev els of componen ts, parts and ra w materials. W e build our model to be used in an y lev el (l). The follo wing optimization model will be used as the2 ndphase problem, throughout this c hapter. MinimizeZ = Q l X k =1 L X u = l n u X h =1 w u x u;h;k kb u;h q u;h Q l 2(4.14) S.T.n l X i =1 x l;i;k b l;i r u;h;l;i = x u;h;k ;u = l;l +1 ;::;L;h =1 ; 2 ;::;n u ;k =1 ; 2 ;::;Q l(4.15)n l X i =1 x l;i;k = k;k =1 ; 2 ;::;Q l(4.16)x 1 ;i;k x 1 ;i;k 1 2f 0 ; 1 g ;k =2 ; 3 ;::;Q l(4.17)x l;i;Q l = q l;i ;i =1 ; 2 ;::;n l(4.18)x l;i;k 2 Z + ;i =1 ; 2 ;::;n l ;k =1 ; 2 ;::;Q l(4.19) The objectiv e function ( 4.14 ) measures the deviations from the ideal consumptions of all parts at all lev els and at eac h stage of the sequence. The rst constrain t set ( 4.15 ) links the decision v ariable (x l;i;k) to the actual consumption amoun ts at sublev els. Constrain t sets ( 4.16 ) and ( 4.17 ) assure that one and only one batc h is
PAGE 169
153 assigned to ev ery stage of the sequence. Constrain t set ( 4.18 ) assures that all the batc hes of ev ery product are assigned to a stage in the sequence. Finally the last constrain t set ( 4.19 ) denes the decision v ariable (x l;i;k) as a nonnegativ e in teger. 4.3 Exact Methods for the2 ndphase problem Kubiak (1993) expresses both Monden and Milten burg and Sinnamon models in a single form ulation, where in the core of the objectiv e function, he denes a con v ex, unimodal function whic h tak es v alue zero at zero (F (0)=0, see (Kubiak, 1993, page 267)). He states that the problem with suc h an objectiv e function is NPHard. This applies to our model, in that our objectiv e function is the summation of squared deviations (w u ( x u;h;k ( kb u;h q u;h ) =Q l ) 2), whic h is a con v ex unimodal function taking v alue zero at zero. That is, if the deviation measured for a certain part at a certain stage in the sequence is zero (x u;h;k ( kb u;h q u;h ) =Q l =0), then the con tribution of that part at that stage of the sequence to the objectiv e function is zero. Kubiak et al. (1997) propose a dynamic programming based solution method to a v arian t of this problem. Their model is built upon absolute v alues of the deviation, in con trast to our squared measure. In this section, w e dev elop a dynamic programming solution to our model. DP ma y be used as an ecien t optimization tool for some problems where a n um ber of decisions ha v e to be made in sequen tial order, eac h decision denotes a transition from a state to another and objectiv e function can be expressed as a recursiv e equation. In our problem, the solution consists of a n um ber of decisions, i.e. whic h product w e should produce at whic h stage of the planning horizon. If w e ha v e decided on the subsequence up to a giv en stage, w e can form ulate the eect of the next stage's decision on the objectiv e function, using the subsequence at hand. The nal state for our problem is the state, in whic h ev ery batc h at a certain lev el (l) has been assigned to a stage of the horizon. The initial state is the state
PAGE 170
154 in whic h no batc h is assigned to an y stage. W e w an t to nd the most ecien t w a y to get from the initial state to the nal state. W e notate states withn lv ectors,X l;k =( x l; 1 ;k ;x l; 2 ;k ;:::;x l;n l ;k ) 2 Z n lsuc h thatn l P i =1 x l;i;k = k, andx l;i;k q l;i ; 8 i. A t eac h stage w e should decide on what to produce in that stage. Therefore, a decision is simply a selection of one of then lproducts to produce. F or some states, since the requiremen t for some products ha v e already been met, the n um ber of possible decisions is less thann l. The follo wing recursiv e equation sho ws the impact of decisions on the objectiv e function and relationships bet w een neigh bor states.f ( X l;k )=min i f f ( X l;k e i )+ g ( X l;k ) j x l;i;k 1 0 g f ( X 0 )= f (0 ; 0 ;:::; 0)=0Where,e j = j thunit v ector; withn len tries, all of whic h are zero except a single 1 in thej thplaceg ( X l;k )= w l n l X i =1 ( x l;i;k k b l;i q l;i Q l ) 2 + L X u = l +1 n u X h =1 w u n l X i =1 x l;i;k b l;i r u;h;l;i kb u;h q u;h Q l 2Also, note thatf ( X l;k 1 )= f ( X l;k e i ), whereiis the index of product assigned to thek thstage. T o calculate the complexit y for the suggested DP procedure, w e rst need to kno w n um ber of states. Sincex l;i;kcan tak e v alues 0,1,...,q l;i, there areq l;i +1possible v alues forx l;i;k. This means,n l Q i =1 ( q l;i +1)distinct states exist. Since at mostn ldecisions are ev aluated at eac h state and an ev aluation tak esO ( n l L P u = l +1 n u )time, the complexit y of the procedure isO ( n 2l ( L P u = l +1 n u ) n l Q i =1 ( q l;i +1)). This complexit y is v ery high. Therefore, w e need to use heuristic methods for large problems.
PAGE 171
155 4.4 Problem Specic Heuristics for the2 ndPhase Problem In the literature, heuristic algorithms proposed for the solution of m ultilev el production smoothing models are due to the leaders of the t w o alternativ e sc hools, namely Monden (1998) and Milten burg and Sinnamon (1989). Monden's heuristics are kno wn as the Goal Chasing Methods, whereas Milten burg and Sinnamon's heuristics are called onestage and t w ostage heuristics. All these heuristic methods are m y opic, constructiv e heuristics. In this section, w e dev elop a onestage m y opic heuristic for our model. Our heuristic starts with initializingx l;i; 0v alues to zero, whic h means assuming no in v en tory is a v ailable at the beginning. Then, the method iterates for stages 1 throughQ land calculates the eect of assigning eac h product to that stage. The product that has minim um eect (minim um increase in the objectiv e function) is assigned to that particular stage. The assignmen t of a product is completed with updating the actual production amoun ts at the lev el for whic h the sc hedule is created and all the actual consumption amoun ts at the sublev els. The heuristic algorithm does not necessarily yield an optimal solution. Its main strength is in its being a onepass constructiv e method, for whic h the computational time is negligible for ev en v ery large problems. The w orst case complexit y of this heuristic isO ( n l L P u = l n u ), as at mostn lalternativ es are ev aluated at eac h stage and an ev aluation consists ofL P u = l n ucalculations. W e presen t the algorithm in Figure 41 4.5 MetaHeuristics for the2 ndPhase Problem Leu et al. (1996) applied genetic algorithms (GA) on Monden's problem. GA is one of the most popular metaheuristic tec hniques, that k eeps a population of solutions on hand, and performs m utation and crosso v er operations on the solutions. As time (iterations) passes, poorer solutions die and more t solutions surviv e.
PAGE 172
156 The simplest form of crosso v er operator selects a crosso v er poin t and tak es genes before the crosso v er poin t from one of the paren ts, and the rest from the other paren t. This crosso v er giv es t w o osprings, whic h are examined for surviving to the next generation. More complicated crosso v er operators select sev eral crosso v er poin ts or use sev eral paren ts to produce osprings. F or sequencing problems simple crosso v er operators generally cause infeasibilit y in the ospring solutions. One can con v ert the infeasible ospring to a feasible one with some t ype of neigh borhood functions and a searc h method, or kill the infeasible ospring instan tly Both w a ys cause ineciency b y increasing the run time. A more comprehensiv e approac h is to dene a new operator for the problem at hand. Leu et al. (1996) dene a specic crosso v er operator for their study They select one crosso v er poin t on random, and produce one ospring. The genes to the left of the crosso v er poin t form the head, whereas the genes to the righ t form the tail. Then, the genes (dieren t products) in the tail of the rst paren t are randomly deleted from the second paren t. Then, the remaining genes are used in building the head part of the c hild. The tail of the rst paren t becomes the tail of the c hild also, and the recom bination is completed. With this crosso v er operator, feasibilit y of the c hild solution is assured. Ho w ev er, the dra wbac k of this approac h is, if the crosso v er poin t is close to the left end of the paren ts, then the ospring ma y not be similar to paren ts. That is, the fundamen tal elemen t of the GA tec hnique is lost. The authors also apply a simple m utation operator. A m utation poin t is selected on random and the head and tail of the c hromosome are sw apped. This simple operator assures feasibilit y of the new c hromosome, as w ell.
PAGE 173
157 The authors presen t a computational study with 80 test problems and sho w that the proposed GA approac h is eectiv e in solving the Monden problem, in that it yields solutions superior to those of the goal c hasing method. 4.6 1 stPhase Problem F orm ulation In the m ultilev el v ersions of the problem, w e treat the dieren t lev els of the man ufacturing system independen tly That is, the batc hing policies of the dieren t lev els are established independen t of eac h other. The en tire batc hing problem consists ofL 1iden tical subproblems. The only lev el excluded is theL thlev el of the system, since it is composed of the ra w materials. The production sequence of theL 1 stlev el denes discrete demand times and quan tities for these ra w materials. The optimal purc hasing of the ra w materials can be ac hiev ed through a dynamic lotsizing model, and the lotsizing decisions are bey ond the scope of this dissertation. F or the sak e of simplicit y w e giv e the form ulation in the general form considering only one lev el, namely thel thlev el, that applies to alll =1 ; 2 ;::;L 1. W e start building the model b y dening constrain ts. First constrain t set comes from the need to meet the demands. As discussed in the previous c hapters, w e do not allo w production under the demand, instead w e allo w excess production within a tolerance limit. These excess amoun ts can be used to adjust demands of the next planning horizon. The second set of constrain ts comes from feasibilit y concerns. Since w e dene a xed timebuc k et (t l), w e ha v e to assure that all batc hes can be processed within this xed timebuc k et.
PAGE 174
158 W e form ulate the constrain ts as follo ws.b l;i = d l;i q l;i ;i =1 ;::;n l s l;i + p l;i b l;i t l ;i =1 ;::;n l t l n l X i =1 q l;i = T t l 0 b l;i ;q l;i 2 Z + ;i =1 ;::;n lThe parameters and v ariables used in the form ulation are dened as follo ws. L Num ber of lev els l Lev el indexn ln um ber of products at thel thlev el (l =1 ; 2 ;::;L 1)N lthe set of products at thel thlev el (N l = f 1 ; 2 ;::;n l g), (l =1 ; 2 ;::;L 1) T total a v ailable time i product indexq l;in um ber of batc hes for product i at thel thlev el (l =1 ; 2 ;::;L 1)b l;ibatc h size for product i at thel thlev el (l =1 ; 2 ;::;L 1)d l;idemand for product i at thel thlev el (l =1 ; 2 ;::;L 1)D l;icum ulativ e demand for products 1 through i at thel thlev el (l =1 ; 2 ;::;L 1)(= i P h =1 d l;h ) s l;isetup time for product i at thel thlev el (l =1 ; 2 ;::;L 1)p l;iprocessing time for product i at thel thlev el (l =1 ; 2 ;::;L 1)t llength of xed time in terv al, at thel thlev el (l =1 ; 2 ;::;L 1)(= T= n l P i =1 q l;i ) Q ltotal n um ber of batc hes(= n P i =1 q l;i )W e ha v e three decision v ariables in the model. Batc h size (b l;i) is expressed as a function of n um ber of batc hes (q l;i), th us w e can eliminateb l;ifrom the system and also eliminate the rst constrain t set from the model. Using the third
PAGE 175
159 constrain t, w e com bine the second set of constrain ts and third constrain t in to a single set of constrain ts and eliminatet lfrom the decision v ariables. The resulting constrain ts are giv en belo w. s l;i + p l;i d l;i q l;i n l X h =1 q l;h T;i =1 ;::;n l q l;i 2 Z + ;i =1 ;::;n lHa ving constructed the necessary constrain ts, w e no w adv ance to dening the objectiv e function. The o v erall objectiv e of the model is to minimize the deviation bet w een the sequence found at the end of the second phase and the ideal sc hedule. Therefore, the t w o phases should be considered together. Based on our extensiv e analysis and the results found in the SMSL model (see pages 59 61 ), w e adopt the same lo w er bound approac h in the m ultilev el models, as w ell. A lo w er bound (F) to the objectiv e function of the2 ndphase problem is dev eloped. The details on deriving this lo w er bound are giv en in Appendix E The follo wing optimization model represen ts the1 stphase problem. MinimizeF = n l X i =1 266664 w l b 2l;i n l P h =1 q l;h 2 q 2 l;i 12 n l P h =1 q l;h + q l;i 4 L X u = l +1 n u X v =1 w u 0BB@ b l;i r u;v;l;i d u;v n l P h =1 q l;h 1CCA 2 377775(4.20)
PAGE 176
160 S.T. s l;i + p l;i d l;i q l;i n l X h =1 q l;h T; 8 i(4.21) d l;i q l;i = b l;i ; 8 i(4.22)q l;i = d l;i b l;i ; 8 i(4.23)1 q l;i d l;i ;q l;iin teger; 8 i(4.24) Note that in constrain ts ( 4.22 ) and ( 4.23 )b l;i(batc h size for producti) is used as a state v ariable. These t w o constrain ts assure that excess production is limited to the minim um. Decreasingb l;iorq l;ib y one w ould result in under production. 4.7 Structural Properties of the1 stPhase Problem Abo v e form ulation sho ws that the decision v ariables are common in the SMSL and SMML models, for the1 stphase problem. Therefore, w e refer the reader to the denition of acceptable v alues and related discussions in the SMSL model (Section 2.7 ). The only dierence is in the notation, in that,A l;idenotes the set of the acceptable v alues of the v ariableq l;i, anda l;iis the cardinalit y ofA l;i. No w, w e proceed with dening a simpler v ersion of the problem. If w e assume the w eigh ts of the sublev els zero,w u =0 ;u>l, then the problem reduces to that of the SMSL model. This allo ws us to use complexit y result obtained for the SMSL model. Corollary 4.7.1 The1 stphase pr oblem in the SMML mo del is NP c omplete. Proof Since w e ha v e pro v en that the1 stphase problem in the SMSL model is NPcomplete, so m ust be the1 stphase problem in the SMML model. 4.8 Exact Methods for the1 stPhase Problem The dynamic programming procedure and its bounded v ersion dev eloped for the SMSL model are not directly applicable to the SMML model. In the follo wing discussion, w e propose a bounded dynamic programming (BDP) solution method
PAGE 177
161 that inherits majorit y of its componen ts from its SMSL coun terpart and com bines features of dynamic programming and branc handbound methods to successfully solv e the1 stphase problem in the SMML model. 4.8.1 Dynamic Programming F orm ulation Giv en a xedQ lv alue, the objectiv e function ( 4.20 ) simplies toF 0 = n l X i =1 w l d d l;i =q l;i e 2 Q 2l q 2 l;i 12 Q l + q l;i 4 L X u = l +1 n u X v =1 w u d d l;i =q l;i e r u;v;l;i d u;v Q l 2 # ;whic h is separable inq l;iv ariables. If the v ectorq l ( Q l )=( q l; 1 ;q l; 2 ;::;q l;n l )is an optimal solution to the problem withP n l i =1 q l;i = Q l, then the subv ector( q l; 2 ;q l; 3 ;::;q l;n l )should be optimal to the problem withP n l i =2 q l;i = Q l q l; 1, as w ell. Otherwise, the v ectorq l ( Q l )can not be an optimal solution. Th us, the principle of optimalit y holds for the problem and w e can build the optimal solution b y consecutiv ely deciding onq l;iv alues. LetR l;ibe the total n um ber of batc hes committed to the rstiproducts, at thel thlev el. The product indexiis the stage index, and the pair( i;R l;i )represen ts the states of the DP form ulation. Figure 42 illustrates the underlying net w ork structure of the problem. In the net w ork, eac h node represen ts a state in the DP form ulation and arcs reect the acceptable v alues suc h that an arc is dra wn from node (i 1 ;R l;i 1) to node (i;R l;i 1 + q l;i) for eac hq l;i 2 A l;i. W e dene the follo wing recursiv e equation.F ( i;R l;i )= 8>><>>: 0 ;ifi =0 min q l;i f F ( i 1 ;R l;i q l;i )+ f ( l;i;q l;i ) j s l;i + l d l;i q l;i m p l;i T Q l g ;ifi> 0Where,f ( l;i;q l;i )= w l d d l;i =q l;i e 2 Q 2l q 2 l;i 12 Q l + q l;i 4 L X u = l +1 n u X v =1 w u d d l;i =q l;i e r u;v;l;i d u;v Q l 2 :Note that, the recursiv e equation is a function ofQ l, that can be used for a giv enQ lv alue only Also, the nal state is( n l ;Q l ), and the solution to the
PAGE 178
162 problem,F ( n l ;Q l ), can be obtained with the forw ard recursiv e algorithm presen ted in Figure 43 When the algorithm terminates, it returnsq l;i ( Q l )v ector that is an optimal solution for the giv enQ lv alue andF ( n l ;Q l )that is the objectiv e v alue of this optimal solution. As in an y DP model, the n um ber of nodes gro ws exponen tially with the n um ber of stages. In the nal (n thl) stage, w e migh t ha v e at mostQ n l i =1 a l;inodes. This is a straigh tforw ard result of the fact that eac h node in the( i 1) ststage is connected to at mosta l;inodes in thei thstage. Ho w ev er, w e also kno w that the maxim um index for a node in the nal lev el is (n l ;D l;n). Therefore, the n um ber of nodes in the nal lev el is at mostmin f Q n l i =1 a l;i ;D l;n n l +1 g. An upper bound for the total n um ber of nodes in the graph isP n l i =1 min f Q ih =1 a l;h ;D l;i i +1 g. In order to deriv e the computational complexit y of algorithm Forward Recursion w e need to kno w the n um ber of arcs and the time it tak es to calculate an arc cost, as w ell. The n um ber of arcs in to thei thstage is a function of the n um ber of nodes in the( i 1) ststage anda l;i. An upper bound on this n um ber isa l;i min f Q i 1 h =1 a l;h ;D l;i 1 i +2 g. Therefore, w e claim that the total n um ber of arcs in the net w ork is at mosta l; 1 + P n l i =2 a l;i min f Q i 1 h =1 a l;h ;D l;i 1 i +2 g. Calculating a single arc cost, on the other hand, requires1+ L P u = l +1 n ucomputations. In the w orst case, steps sev en through nine are executed as man y times as the n um ber of arcs in the net w ork. Therefore, the w orst case time complexit y of the algorithm is obtained b y m ultiplying the n um ber of arcs with the time required to calculate an arc cost:O (( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g )(1+ L P u = l +1 n u )). Abo v e algorithm solv es the problem for a giv enQ lv alue. Ho w ev er, the problem does not tak e aQ lv alue as an input parameter, but returnsQ las a result of the solution v ector. This and the fact that an arc cost can be calculated only ifQ lis kno wn, imply that w e need to solv e a DP for eac h possible v alue ofQ l. W e
PAGE 179
163 propose algorithm Solve with DP for the solution of the problem (see Figure 44 ). The algorithm iden ties all possible v alues ofQ land emplo ys algorithm Forward Recursion successiv ely to solv e the emerging subproblems. The algorithm yieldsQ las the optimalQ lv alue, whic h leads to the optimal solution v ectorq l ( Q l )and also the optimal solution's objectiv e v alueF ( n l ;Q l ). Steps one through v e can be considered as a preprocessing phase where the reac hable nodes are iden tied. The w orst case complexit y of this preprocessing phase depends on the n um ber of arcs in the net w ork represen tation of the problem, in that it is equal to that of algorithm Forward Recursion Since algorithm Forward Recursion is repetitiv ely in v ok ed in step eigh t, the preprocessing phase does not aect the o v erall time complexit y of the algorithm. Steps sev en through nine are repeated for eac h reac hable node at the nal stage of the DP form ulation. The n um ber of reac hable nodes is bounded abo v e b yD l;n l n l +1. Therefore, algorithm Forward Recursion ma y be in v ok ed at mostD l;n l n l +1times, yielding an o v erall w orst case time complexit y ofO (( D l;n l n l +1)( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g )(1+ L P u = l +1 n u ))This time complexit y sho ws that, the computational requiremen t of the DP procedure depends on external parameters suc h asd l;is anda l;is. Therefore, the procedure ma y be impractical for large size problems. In the next subsection, w e dev elop sev eral bounding strategies to reduce the computational burden of the DP procedure. 4.8.2 Bounding Strategies An upper limit forQ l. Noting that the length of the timebuc k et cannot be smaller than the sum of processing and setup times of an y batc h leads to the follo wing upper bound for the possibleQ lv alues.T Q l s l;i + p l;i ; 8 i ) Q l Q Ul = T max i f s l;i + p l;i ;i 2 N l g
PAGE 180
164 Eliminate in termediate nodes whic h cannot yield a feasible solution A t an y stage,R l;ima y increase b y at mostd l;iand at least 1 units. Therefore, as w e proceed to w ards the nal state, w e eliminate the in termediate nodes (i;R l;i) withR l;i >Q l n l + iorR l;i
PAGE 181
165 solution is found,Q Llv alue increases. Therefore, w e updateQ Llev ery timeQ lis updated, and dynamically narro w the searc h space onQ l. Incorporating all the bounding strategies dev eloped, w e propose algorithm Solve with BDP (Figure 45 ) for the solution of the problem, using algorithm Bounded Forward Recursion (Figure 46 ) for successiv ely solving emerging DPs. 4.9 Problem Specic Heuristics for the1 stPhase Problem The complexities of the exact methods proposed for the problem imply that w e ma y not be able to solv e largesized instances with these exact methods. Therefore, w e dev elop heuristic algorithms whic h do not guaran tee to nd an optimal solution but lik ely to nd good solutions in a reasonable amoun t of time. In this section w e describe a parametric heuristic solution procedure that w e ha v e dev eloped for the1 stphase problem. W e start with explaining some basic principles whic h constitute the basis for our heuristic solution procedure. A solution is a com bination of decision v ariablesq l;i,i =1 ; 2 ;::;n lsuc h that the v alue of eac h v ariable is c hosen from the acceptable v alues of the v ariable. In other w ords, constrain t sets ( 4.22 ), ( 4.23 ) and ( 4.24 ) are satised in an y solution. A fe asible solution is a solution whic h satises the rst constrain t set ( 4.21 ). In other w ords, if all the batc hes can be processed within a xedlength timebuc k et, then the solution is feasible. Here, the importan t poin t is that the length of the timebuc k et is a function of the n um ber of batc hes. That is, increasing the n um ber of batc hes for one of the products shortens the timebuc k et and ma y cause infeasibilit y LetAbe a selected product (A 2 N l),Q 0l = P i 2 N l nf A g q l;iandq 0 l = ( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l )be a feasible solution. Since the solution is feasible, w e kno w
PAGE 182
166 that the left hand side of the constrain t ( 4.21 ) is giv en belo w.C 0 l;i = 8>><>>: s l;A + p l;A d d l;A q 0 l;A e q 0 l;A + Q 0l ; ( T ) ;ifi = A s l;i + p l;i d d l;i q 0 l;i e q 0 l;A + Q 0l ; ( T ) ;o/w No w, if w e incremen tq l;Afromq 0 l;Atoq 1 l;A(the smallest acceptable v alue forq l;Awhic h is greater thanq 0 l;A), the follo wing inequalities hold.q 1 l;A q 0 l;A +1and& d l;A q 1 l;A & d l;A q 0 l;A 1Depending onp l;Aands l;Av alues and the increase inq l;A,C l;Ama y increase or decrease (C 1 l;A S C 0 l;A). On the other hand, since ev ery other v ariable remains unc hanged (q 1 l;i = q 0 l;i ;i 2 N l nf A g),C l;i(i 2 N l nf A g) will denitely increase (C 1 l;i >C 0 l;i ;i 2 N l nf A g). Therefore, this incremen t inq l;Ama y lead to an infeasible solution (C 1 l;i >Tfor at least onei 2 N l). This result tells us that an y increasing mo v e can con v ert a feasible solution to an infeasible one. Ho w ev er, exploiting the special structure of the problem w e dev elop a quic k method whic h con v erts an infeasible solution to a feasible one (if there exists one). The follo wing discussion is the k ey to the method men tioned. A t this poin t w e dene critic al c onstr aint as the constrain t with themax i f s l;i + p l;i d d l;i q l;i e ;i 2 N l gv alue. If the solution at hand is feasible, then the critical constrain t is the tigh test constrain t. Similarly in an infeasible solution, the critical constrain t is the most violated constrain t. Also, critic al variable is dened as the product related to the critical constrain t. If there is a w a y to con v ert an infeasible solution to a feasible one b y increasing the n um ber of batc hes, it can only be possible b y exploiting the critical constrain t. Let us explain this fact in more detail. Assume that w e are giv en an infeasible solutionq 0 l =( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l ), suc h that infeasibilit y occurs for only one of the
PAGE 183
167 products, namelyA. Then, if w e letQ 0l = P i 2 N l nf A g q 0 l;i, the left hand side of the constrain t ( 4.21 ) is as follo ws.C 0 l;i = 8>><>>: s l;A + p l;A d d l;A q 0 l;A e q 0 l;A + Q 0l ; ( >T ) ;ifi = A s l;i + p l;i d d l;i q 0 l;i e q 0 l;A + Q 0l ; ( T ) ;o/w Here,C 0 l;Ais the critical constrain t. No w, w e will analyze the eect of increasing an yq l;iv alue to its next acceptable v alue. The possible outcomes of increasingq l;Aare: C 1 l;i T, for alli 2 N l. The solution is feasible. C 1 l;A >T,C 1 l;i T, for alli 2 N l nf A g. The solution is still infeasible and the infeasibilit y is still caused b y productAonly C 1 l;A T,C 1 l;i >T, for at least onei 2 N l nf A g. The solution is still infeasible, but the source of infeasibilit y has shifted. C 1 l;A >T,C 1 l;i >T, for at least onei 2 N l nf A g. The solution is still infeasible, and the sources of infeasibilit y ha v e increased in n um ber. The rst case occurs when a feasible solution can be reac hed b y one incremen t operation. The second case occurs when all the nonviolated constrain ts ha v e enough slac k, but the violated constrain t did not get enough relaxation b y the incremen t ofq l;A. The third and fourth cases represen t another critical situation whic h is lik ely to occur. Since increasingq l;AincreasesC l;i(i 2 N l nf A g) linearly the incremen t operation consumes slac ks of the nonviolated constrain ts. Therefore, slac k in one or more of the nonviolated constrain ts ma y be depleted, whic h in turn ma y shift the source of infeasibilit y or increase the n um ber of violated constrain ts. Ho w ev er, increasing aq l;i(i 2 N l nf A g) v alue alw a ys yields the follo wing outcome. C 1 l;A >C 0 l;A >T. Therefore, the solution is still infeasible.
PAGE 184
168 Although this mo v e migh t violate more than one constrain t and shift the critical constrain t, w e denitely kno w that this mo v e can not lead to a feasible solution. This pro v es that exploiting a noncritical constrain t w ould lead to another infeasible solution. This fact lets us conclude the follo wing. Letq 0 l =( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l )andq 1 l =( q 1 l; 1 ;q 1 l; 2 ;:::;q 1 l;n l )be t w o infeasible solutions suc h thatC 0 l;Ais the critical constrain t, andq 1 lis reac hed b y increasingq 0 l;Atoq 1 l;A(the smallest acceptable v alue forq l;Awhic h is greater thanq 0 l;A) only If there exists a feasible solution whic h can be reac hed fromq 0 lb y incremen t operations only then it can be reac hed fromq 1 lb y incremen t operations only as w ell. W e use this result as a basis in dev eloping Algorithm NE Feasible Solution Search (see Figure 47 ). The algorithm presen ted belo w examines solution space starting from an y giv en solution, b y mo ving in the NorthEast (NE) direction, and reports the existence of a feasible solution. Mo ving in the NE direction means increasing at least oneq l;ito its next acceptable v alue. F or future use w e dene SW corner as the solution where the v ariables tak e their lo w est possible v alues, that isq l;i =1 ; 8 i, and NE corner as the solution whereq l;i = d l;i ; 8 i. The algorithm performs exactly one incremen t operation per iteration. Depending on the starting solution, the algorithm performs at mostP n l i =1 a l;iiterations. Eac h iteration requires nding the critical constrain t and c hec king if the solution at hand is feasible or not, both these tasks tak eO ( n l )time. Therefore, the time complexit y of the algorithm isO ( n l P n l i =1 a l;i ). Considering that the NE direction has at mostQ n l i =1 a l;isolutions whic h ma y or ma y not be feasible, the algorithm scans this space signican tly fast. Space complexit y of the algorithm is also easily calculated. The algorithm stores the curren t solution whic h consists ofn ldecision v ariables only therefore the space complexit y isO ( n l ).
PAGE 185
169 The algorithm can be rev ersed so that it scans the solution space in the SW direction. Although the nature of the problem is quite dicult, this ease in nding the closest feasible solution in a specic direction giv es us an adv an tage to dev elop a po w erful heuristic algorithm. Before proceeding with details of the algorithm, w e explain the neigh borhood structure used. A solutionq 1 l =( q 1 l; 1 ;q 1 l; 2 ;:::;q 1 l;n l )is a neighb or solution ofq 0 l = ( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l )if and only if exactly one v ariable (sa yq l;A) v alue diers in these solutions, suc h thatq 1 l;Ais the next acceptable v alue ofq 0 l;Ain increasing or decreasing direction. That is, it can be reac hed b y only one incremen t or decremen t operation. With this denition, an y acceptable solution has at most2 n lneigh bors,n lof them being in the increasing direction and the othern lin the decreasing direction. No w w e can proceed with dening our heuristic approac h. The algorithm tak es three parameters;SearchDepth,MoveDepthandEligibleNeighbors.SearchDepthparameter denotes depth of the searc h process. IfSearchDepth =1, then only the onestep neigh bors are ev aluated. IfSearchDepth =2, then the neigh bors' neigh bors (the t w ostep neigh bors) are also ev aluated, and so on. WhenSearchDepth> 1, thenMoveDepthbecomes an importan t parameter. IfMoveDepth =1, then the searc h terminates at a onestep neigh bor. IfMoveDepth =2, then the termination is t w o steps a w a y from the Curren t Solution, etc. The last parameter,EligibleNeighbors, denotes the eligible neigh bors for ev aluation. IfEligibleNeighbors= feasible, then only feasible neigh bors are considered. IfEligibleNeighbors= both, then both feasible and infeasible neigh bors are considered for ev aluation. In the algorithm, ev aluating a solution means calculating its objectiv e function v alue. When all the neigh bors are ev aluated, the follo wing solutions are iden tied. The Best Neigh bor is aSearchDepthstep neigh bor with the lo w est
PAGE 186
170 objectiv e v alue of all the neigh bors. The Leading Neigh bor is aMoveDepthstep neigh bor whic h leads to the Best Neigh bor. Similarly the Best F easible Neighbor is aSearchDepthstep feasible neigh bor with the lo w est objectiv e v alue of all the feasible neigh bors and the Leading F easible Neigh bor is aMoveDepthstep feasible neigh bor whic h leads to the Best F easible Neigh bor. Note that, ifEligibleNeighbors= both, then Best Neigh bor and Best F easible Neigh bor migh t dier. IfEligibleNeighbors= feasible, then these t w o solutions are the same. This also holds for the Leading Solution and the Leading F easible Solution. A mo v e consists of updating the Curren t Solution and comparing the objectiv e function v alue of this solution to the Best Solution. If the solution at hand has a lo w er objectiv e v alue and is feasible, then the Best Solution is updated. Figure 48 sho ws the pseudocode for our heuristic algorithm, namely Algorithm Parametric Heuristic Search The algorithm alw a ys mo v es in the NE direction. The total n um ber of iterations performed b y Algorithm Parametric Heuristic Search is at mostP n l i =1 a l;i, wherea l;iis the n um ber of acceptable v alues for the decision v ariableq l;i. A t eac h iteration, if Algorithm NE Feasible Solution Search is not in v ok ed, at mostn SearchDepthlneigh bors are ev aluated. W e already kno w that an iteration with Algorithm NE Feasible Solution Search tak esO ( n l )time. SinceO ( n l ) O ( n SearchDepthl ), the n um ber of solution ev aluations the algorithm performs isO ( n SearchDepthl P n l i =1 a l;i ). An ev aluation tak esO (1+ L P u = l +1 n u )time, th us the total time complexit y of the heuristic procedure isO ( n SearchDepthl ( P n l i =1 a l;i )(1+ L P u = l +1 n u )). Space complexit y of the algorithm is rather easy to calculate. The algorithm stores a constan t n um ber of solutions (Curren t Solution, Best Solution, etc.) during the iterations. Eac h solution consists ofn lv ariable v alues. So, the space complexit y of the algorithm isO ( n l ).
PAGE 187
171 4.10 MetaHeuristics for the1 stPhase Problem Our implemen tation of three metaheuristic methods on the SMSL model sho ws that the path relinking method suits the problem best, among the three methods. In the SMML model, w e tak e this result in to consideration and focus on the path relinking method only 4.10.1 Neigh borhood Structure W e dene a solutionq l =( q l; 1 ;q l; 2 ;::;q l;n l )as a v ector of the decision v ariables suc h that all the decision v ariables tak e an acceptable v alueq l;i 2 A l;i ; 8 i. W e further distinguish bet w een feasible and infeasible solutions as follo ws. A solution is fe asible if it satises the rst constrain t set ( 4.21 ), otherwise it is infe asible No w, consider the follo wing example withn l =2products at the nal lev el (l =1). Letd 1 ; 1 =15andd 1 ; 2 =20;s 1 ; 1 = s 1 ; 2 =1,p 1 ; 1 = p 1 ; 2 =1andT =50min utes. The abo v e procedure proposed for nding the acceptable v alues impliesq 1 ; 1 2 A 1 ; 1 = f 1 ; 2 ; 3 ; 4 ; 5 ; 8 ; 15 gandq 1 ; 2 2 A 1 ; 2 = f 1 ; 2 ; 3 ; 4 ; 5 ; 7 ; 10 ; 20 g. By the denition of a solution, an y pair of these acceptable v alues is tak en as a solution, for example (1,1), (5,5) and (5,20) are all solutions. (5,5) is a feasible solution, since the batc h sizes are 3 and 4 and these batc hes tak e 4 and 5 min utes, where the length of the timebuc k et is50 = (5+5)=5, therefore both batc hes can be processed within the timebuc k et. Similarly (5,20) requires 4 and 2 min utes to process the batc hes, ho w ev er the timebuc k et is too short (50/(5+20)=2), th us this solution is infeasible. A solutionq 1 l =( q 1 l; 1 ;q 1 l; 2 ;:::;q 1 l;n l )is a neighb or ofq 0 l =( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l )if and only if exactly one v ariable v alue is dieren t in these v ectors, and the categorical distance bet w een the v alues of this decision v ariable is at most, whereis a user dened in teger that is greater than or equal to one. If w e denote the set of neigh bor solutions of a solutionq 0 lwithNS ( q 0 l ; )and considerq 0 1 =(5 ; 5)and =2for example, then the neigh bor solutions set ofq 0 1isNS ((5 ; 5) ; 2)=
PAGE 188
172f (3 ; 5) ; (4 ; 5) ; (8 ; 5) ; (15 ; 5) ; (5 ; 3) ; (5 ; 4) ; (5 ; 7) ; (5 ; 10) g. With this denition, a solution ma y ha v e at most2 n lneigh bors. W e iden tify t w o particular solutions. The rst one is the origin where eac h decision v ariable tak es its lo w est possible v alue, that isq l;i =1 ; 8 i 2 N l. The second one is the farthest c orner of the solution sp ac e where ev ery decision v ariable tak es its largest v alue, that isq l;i = d l;i ; 8 i 2 N l. This farthest corner is found useful in obtaining the global optim um in the SMSL and FSSL models, therefore w e k eep it in the population of the solution in the SMML model, as w ell. 4.10.2 P ath Relinking In the follo wing, w e giv e a description of our PR implemen tation in the SMSL model. W e use the generic algorithm presen ted in Figure 49 F or initialization, emplo ymen t of the problem specic heuristic methods is represen ted b y a parameter,PSHMethods. W e consider the problemspecic heuristic methods in order of their time consumption, as reported in Y a vuz and T ufek ci (2004a). IfPSHMethods =1, then the rst and fourth methods are emplo y ed. IfPSHMethods =2, then method 2 is emplo y ed in addition to the other t w o. Finally ifPSHMethods =3, all four methods are emplo y ed. Ha ving established a set of seed solutions, the div ersication generator processes these seed solutions and creates the initial reference set. W e use t w o alternativ e modes of the div ersication generator. The rst mode is similar to the m utation operator used in genetic algorithms (Goldberg, 1989; Holland, 1975; Reev es, 1997). That is, the seed solution v ector is tak en as the input and starting with the rst v ariable, a div ersied solution is created for eac h v ariable. This is ac hiev ed b y replacing the v ariable's v alue with its100 thnext acceptable v alue. Ifa l;i < 100, the mod operator is used in order to obtain an acceptable v alue with index v alue bet w een one anda i. Here 100 is arbitrarily selected, an y signican tly large in teger suc h as 50, 200 or 500 could be c hosen. The second mode, on the
PAGE 189
173 other hand, does not process seed solution v ectors. It performs a local searc h for eac h decision v ariable and iden ties solutions that maximize the v alue of that certain decision v ariable. This mode of div ersication yields a total ofnalternativ e solutions and enables us to explore extreme corners of the feasible region. The parameter represen ting the selection of the div ersication mode isDiversification, and it has four lev els. A t lev el 1 no div ersication is applied, at lev el 2 only corner searc h is applied, at lev el 3 only the div ersication generator is used and nally at lev el 4 both modes are used. Depending on the mode selection in the application of the algorithm, the n um ber of div ersied solutions ma y be less than the size of the reference set. In this case, the empt y slots in the reference set can be lled in the consecutiv e iterations. The size of the reference set is represen ted b y parameterb. In our implemen tation w e k eep one infeasible solution in the reference set, at all times. This infeasible solution is the farthest corner of the solution space discussed in Section 4.10.1 The subset generation mec hanism used for PR considers the subsets with t w o solutions only These solutions are used as origin and destination poin ts in the solution com bination mec hanism. Based on the acceptable v alues, w e measure the distance bet w een the origin and the destination with a categorical distance measure. Ifq 1 landq 2 lare the origin and destination v ectors, and w e dene the functionPosition ( q l;i )as an in teger function whic h returns the position of v ariablei's v alue inA l;i, then the distance bet w een these t w o v ectors is dened asP i 2 N l j Position ( q 1 l;i ) Position ( q 2 l;i ) j, wherej x jis the absolute v alue ofx. Starting from the origin, the neigh bor solutions whic h decrease the distance b y one are considered and the bestNTSsolutions are stored in a list, whereNTSis the parameter standing for the n um ber of temporary solutions. In the next step, eac h solution in this list is considered as the origin, and again the neigh bor solutions that decrease the distance b y one are ev aluated. This is repeated un til
PAGE 190
174 the destination solution is reac hed, while k eepingNTSbest solutions bet w een the steps.NTS =1represen ts a single path bet w een the origin and the destination. Ho w ev er,NTS> 1can be considered asNTSparallel paths that are built bet w een the origin and the destination solutions. Using the impro v emen t method on com bined solutions and updating the reference set are common in both the initial and iterativ e phases. Ho w ev er, performing a local searc h on ev ery solution obtained ma y be impractical.LSinPreProcessis the parameter that represen ts local searc h usage in the initial phase. IfLSinPreProcess =0, no local searc h is applied. IfLSinPreProcess =1, local searc h is only applied at the end of the initial phase on the solutions that are stored in the reference set. IfLSinPreProcess =2, a local searc h is applied for ev ery trial solution considered.LStoRefSetPPis the parameter represen ting the update frequency of the reference set and tak es the v alues oftrueorfalse. IfLStoRefSetPP = true, ev ery time a solution is ev aluated, it is compared to the solutions in the reference set and if necessary the reference set is updated. This requires that ev ery mo v e performed during the local searc h is considered for the reference set. IfLStoRefSetPP = false, only the nal result of the local searc h, a local optim um, is tried for the reference set. P arametersLSinIterationsandLStoRSItersha v e the same denition and lev els, applied to the iterativ e phase. F or the termination of the algorithm w e ha v e one criterion only If the reference set is not modied on a giv en iteration, it cannot be modied on the later iterations, either. Therefore, w e k eep trac k of the solutions in the reference set and immediately terminate if the reference set is the same before and after an iteration. This criterion does not require a parameter.
PAGE 191
175 4.11 Comparativ e Study 4.11.1 Researc h Questions This dissertation proposes a new planning tool for m ultilev el justintime man ufacturing systems. In the SMML model, w e address the batc hing and sequencing decisions and propose exact and heuristic methods to solv e the arising problems. W e rst focus on the nal (endproducts) lev el and solv e the batc hing and sequencing problems at this lev el. Then, w e proceed with the sublev els, except for the ra w materials lev el, whic h is not included in the study The sequence built for the nal (rst) lev el determines the demands at the second lev el, as the production at the rst lev el (end products) consume output of the second lev el (componen ts). Similarly the sequence built for the second lev el determines the demands at the third lev el. Therefore, the sequences of the subsequen t lev els dene the in v en tory lev els at the supermark et in v en tory buers bet w een the lev els. In this comparativ e study w e address t w o researc h questions. Our rst researc h question is similar to our w ork in the previous c hapters, as w e ev aluate the performance of our solution methods for the batc hing problem. The second researc h question, on the other hand, addresses the supermark et in v en tory lev els. The lo w er the in v en tory lev els are, the more suitable our planning approac h is, according to JIT principles. The researc h questions are:Ho w do the alternativ e solution methods perform on the test instances? Does an y of the methods perform signican tly better than the others in terms of solution qualit y and solution time measures?What are the appropriate supermark et in v en tory lev els, that should be k ept? 4.11.2 Design of Experimen ts In our study w e consider fourlev el problems (L =4) with v e products/parts at eac h lev el (n l =5 ;l =1 ;::; 4). The rst lev el is the endproducts lev el, the
PAGE 192
176 remaining lev els are the componen ts, parts and ra w materials lev els, respectiv ely The a v erage demand for an endproduct is 750 units. The demands for the parts and ra w materials at the sublev els depend on the endproduct demands and billsofmaterial. W e generate the bills of material (r u;v;l;iv alues) in suc h a w a y that exactly t w o units of a part is needed b y the parts/products at its immediate superlev el. Therefore, the a v erage demands for the parts increase exponen tially with the lev el n um ber; 1500 at the second lev el, 3000 at the third lev el, etc. W e use four experimen tal factors,s l;i =p l;iratio, T relaxation percen tage, relaxation c hange bet w een the lev els ('), and div ersication lev elr.r 2f 0 ; 1 gis used to create test cases in whic h dieren t products are div ersied in terms of demand, processing time and setup time.r =1reects the div ersied case, andr =0reects the undiv ersied case where the products are v ery similar to eac h other. Demand v alues are randomly and uniformly generated bet w een the minim um and maxim um v alues, where maxim um demand is t wice as large as the a v erage demand for div ersied instances and 20% o v er the a v erage demand for the instances with similar products. The ratio of maxim um demand to minim um demand is 50 and 1.5 for these t w o t ypes of instances, respectiv ely W e useto denote the ratio bet w een the expected v alues ofs l;iandp l;ifor the div ersied instances. W e rst createp l;iaccording to uniform distribution bet w een (0,5] min utes, and thens l;iaccording to uniform distribution bet w een [(1 0 : 1 r ) p l;i ; (1+0 : 1 r ) p l;i ]. W e let 2f 100 ; 10 ; 1 gfor our experimen ts. W e create the total a v ailable time v ariable immediately after creating the rst lev el data. T otal a v ailable time should allo w at least one setup per product, that isT T LB = P i 2 N 1 ( d 1 ;i p 1 ;i + s 1 ;i ). On the other hand,Tshould be limited withT
PAGE 193
177 full planning horizon suc h as a mon th, t w o mon ths, a quarter or half a y ear long. Afterw ards, w e normalize the setup and processing times to be consisten t with the length of the horizon andparameter. W e con tin ue with creating the sublev els' data in the same w a y and normalize them to reect the'parameter, in addition.'represen ts the ratio of eectiv e relaxation percen tage in a certain lev el, to that of its immediate higher lev el. F or example if' =0 : 8and =0 : 8, then at the rst lev el eectiv e relaxation percen tage is 0.8, at the second lev el it is0 : 8 0 : 8=0 : 64, and at third lev el it is0 : 8 0 : 64=0 : 512. The eectiv e relaxation percen tage at thel thlev el can be expressed as l 1. In our computational study w e test' 2f 1 : 0 ; 0 : 8 ; 0 : 6 g. Since the fourth lev el is the ra w materials lev el, w e do not generate processing and setup times for this lev el. The batc hing and sequencing problems occur only at the rst three lev els. Allo wing three dieren t v alues for the parameters,and'and t w o dieren t v alues forrresults in 54 dieren t problem sets. 10 instances are created for eac h problem set, giving a total of 540 test instances. Since w e solv e three separate batc hing and sequencing problems for three lev els, w e ha v e 1620 test instances in total. 4.11.3 Methods The rst researc h question is addressed through a detailed study where w e implemen t our exact and heuristic solution methods, dev eloped for the1 stphase problem. Our heuristic procedure, specically designed for the1 stphase problem, tak es three parameters. The com bination of the parameters aect the beha vior of the procedure. Among man y possible com binations of the parameter v alues, w e select four whic h w e believ e to be the most ecien t ones. Complexit y analysis of the algorithm sho ws thatSearchDepthparameter is critical in time requiremen t. Our
PAGE 194
178 preliminary results sho w that settingSearchDepth> 2causes extensiv e time consumption but not yielding a signican t impro v emen t in solution qualit y Therefore w e narro wSearchDepth 2f 1 ; 2 g. If only onestep neigh bors are considered, then theMoveDepthparameter is xed to one. Ho w ev er, ifSearchDepth =2, then w e migh t speed up the algorithm b y mo ving directly to the best neigh bor found (MoveDepth =2). Therefore, w e test both lev els of this parameter. F or the combinations ev aluating the infeasible neigh bors as w ell, w e do not w an t to allo w the searc h to mo v e too far deep in to the infeasible region, but k eep the mo v es within onestep neigh borhood of the feasible region. Therefore, w e xSearchDepth =1for suc h com binations. The methods tested are: Method P arameter Com bination # (SearchDepth;MoveDepth;EligibleNeighbors) PSH1 (1, 1, feasible) PSH2 (2, 1, feasible) PSH3 (2, 2, feasible) PSH4 (1, 1, both) W e see the same parametric structure in our path relinking implemen tation, as w ell. The parametric structure of our computer code is v ery exible in terms of testing alternativ e strategies for a method. Ho w ev er, when the n um ber of parameters is large, an enormous n um ber of com binations of algorithm parameters exist. Finding the most eectiv e com bination is itself a com binatorial optimization problem. W e adopt the same heuristic approac h as of the previous c hapters to this problem; at eac h stage w e x some of the parameters to predetermined v alues and perform full factorial experimen ts on the rest of the parameters. F or the signicance of the dierence bet w een the tested lev els of a parameter, w e apply pairedttests. W e denote the mean v alues of computation time and relativ e deviation from the optimal solution measures with trand dr, respectiv ely
PAGE 195
179 for ther thlev el of the parameter. If there are only t w o lev els for a parameter then one h ypothesis per measure is built. If, ho w ev er, there are more than t w o lev els, then the n um ber of h ypotheses to be built depends on the relationship bet w een the lev els of the parameter. F or some parameters, b y its role in the algorithm, w e kno w that the solution qualit y impro v es and the computational time increases with the lev els. F or example, if w e tak e the size of the reference set as a parameter, w e expect larger reference set sizes to require longer computational times and yield better results. In suc h cases, w e build h ypotheses on the dierence of adjacen t lev els in pairs. If all the adjacen t lev els are signican tly dieren t and a monotone order of the lev els is found, w e do not construct h ypotheses for eac h possible pair of labels. Otherwise, depending on the results obtained, w e ma y w an t to distinguish bet w een nonadjacen t lev els of the parameter and built h ypotheses for them. F or some other parameters, on the other hand, the results are not expected to be in suc h an order. Th us, w e build h ypotheses and applyttests for ev ery possible pair of the lev els of the parameter. F or allttests, w e use a condence lev el of95%. The ne tuning process terminates when all the parameters are considered. The ne tuning process can be seen as a supervised learning process. W e use 20% of the test instances (v e problems for eac h problem setting presen ted in the previous section) for ne tuning. That is, the most promising methods according to their performance on the fraction of the test instance will be used on the en tire set of test instances. W e represen t the PR method with PR(PSHMethods,Diversification,b,NTS,LSinPreProcess,LStoRefSetPP,LSinIterations,LStoRSIters). Here, w e ha v e a total of 8 parameters. W e use the ne tuning results for the PR method, in the SMSL model as a starting poin t. That is, w e start with an initial com bination of the parameters of PR(2, 3,n l +15,d 3 n l 8 e, 1,true, 1,false).
PAGE 196
180 In the rst stage w e test the parametersbandNTS, with three alternativ e v alues eac h. This giv es us 9 com binations in total. The results are presen ted in T ables G1 and G2 The results sho w that solution time signican tly dier with both parameters, while the relativ e deviation tends to be indieren t. W e decide to focus on lo w er v alues of both parameters and deepen our in v estigation. In the second stage, w e test the parametersbandNTSin four and v e lev els, respectiv ely W e also includePSHMethodsparameter in this stage. The results are summarized in T ables G3 and G4 The results sho w that the parameterPSHMethodsdoes not mak e a dierence in either measures. Also, solution time signican tly increases withNTS, but the percen t deviation does not signican tly impro v e. Therefore, w e set these t w o parameters to their lo w est tested lev els (PSHMethods =1 ;NTS =1). F or parameterbon the other hand, w e see that dieren t lev els yield signican tly dieren t solution time and qualit y Since the solution time of the parameter lev el that yields best solution qualit y (b =20) is acceptable, w e setb =20and conclude the second stage of the ne tuning process. A t this poin t, w e see that the results are satisfactory in terms of both the deviation and solution time measures. Therefore, w e stop ne tuning and select PR(1, 3, 20, 1, 1,true, 1,false) as the com bination to be tested with the comparativ e analysis. The process of ne tuning the PR method is summarized in T able 41 The last methods included in this comparativ e study is our bounded dynamic programming (BDP) method. In total, w e ha v e v e methods in our comparativ e study In the follo wing subsection, these methods are denoted with BDP PSH1, PSH2, PSH3, PSH4 and PR. In order to address the second researc h question w e use the batc hes obtained with the BDP method for the1 stphase problem. W e sequence the resulting batc hes using our heuristic method dev eloped for the2 ndphase problem in Section
PAGE 197
181 T able 41: Summary of the Fine T uning Process for the PR Method Stage P arameters # of Time Methods # P D b NTS L 1 L 2 L 3 L 4 T ested (hrs) 1 2 3 15 1 1 true 1 false 9 5.8 25 3 35 5 2 1 3 5 1 1 true 1 false 40 11.7 2 10 2 15 3 20 4 5 P : PSHMethods;D : Diversification T otal 49 17.5 L 1 : LSinPreProcess;L 2 : LStoRefSetPP L 3 : LSinIterations;L 4 : LStoRSIters 4.4 The sequences pro vide the demand and supply information for the componen t and part lev els. W e com bine the supply and demand information in a single arra y and sort it c hronologically Starting with a zero initial in v en tory for eac h part, w e scan the arra y and subtract demands from the in v en tory and add the supplies. The most negativ e v alue of the in v en tory lev el obtained giv es us the minimal in v en tory lev el that should be k ept to a v oid starving of the do wnstream operation. 4.11.4 Results and Discussion In ev aluating the computational performance of our solution methods, w e consider t w o performance measures, namely computational time and percen t deviation from the optimal solution. These t w o measures represen t the tradeo bet w een solution qualit y and time. Results from solving the test instances with all the methods considered, including three metaheuristic methods, four problemspecic heuristic methods and an exact method, for the computation time and percen t deviation from the optim um measures, are summarized in T able 42 W e analyze the dierence of the methods pair b y pair, for both computational time and percen t deviation from the optimal solution measures. A total of 30 n ull h ypotheses are built and all but one of them are rejected, at a 95% condence lev el
PAGE 198
182 T able 42: Summary of Results Time (seconds) Deviation (%) Method A vg. Min. Max. A vg. Max. BDP 653.98 6.68 12484.90 PSH1 0.19 0.02 0.57 3.697 80.380 PSH2 0.91 0.05 2.60 2.577 73.451 PSH3 0.53 0.06 1.03 3.099 77.694 PSH4 0.13 0.01 0.64 3.913 95.327 PR 0.92 0.09 4.26 0.051 30.800 b y 2tailed pairedttests. There are only t w o h ypotheses that w e can not reject. They state that PR method is indieren t fromPSH 2in terms of solution time andPSH 1andPSH 4are indieren t in terms of deviation. The ordering of the methods is tBDP > tPR = tPSH 2 > tPSH 3 > tPSH 1 > tPSH 4for the solution time and dBDP < dPR < dPSH 2 < dPSH 3 < dPSH 1 = dPSH 4for the deviation measure. The bounded dynamic programming procedure requires appro ximately four hours in the w orst case. This time requiremen t is extensiv ely large, th us the solution qualit y of the heuristic methods becomes extremely importan t. F or the problemspecic heuristics, the results sho w that the four alternativ e methods are signican tly dieren t in terms of solution time measure. F or the solution qualit y measure, w e see that there exists only one indieren t pair of methods. Ho w ev er, the solution qualit y of the problemspecic heuristic methods can not compete with that of the PR method. F urthermore, the time requiremen t of the PR method is negligibly small, as it is statistically indieren t from thePSH 2method. This result sho ws that our PR implemen tation to the SMML model is v ery successful. With respect to its solution qualit y and time performance, w e argue that it can be used b y the practitioners in the eld, in almost real time. W e answ er the second researc h question through T able 43 Our rst observ ation is that the a v erage percen t in v en tory lev els are appro ximately %1 at both lev els. This result sho ws that our solution approac h of solving the batc hing
PAGE 199
183 problems independen tly is acceptable, as it requires lo w in v en tory lev els bet w een successiv e lev els. T able 43: Summary of Supermark et In v en tory Lev els Design % In v en tory Lev el F actor V alue Lev el A v erage Max. Ov erall 2 1.16 23.33 3 0.72 11.86 1.0 2 1.01 6.94 3 0.49 11.86 0.8 2 1.12 23.33 3 0.65 3.13 0.6 2 1.34 20.55 3 1.03 11.49 r 0 2 1.49 23.33 3 0.93 11.49 1 2 0.82 20.55 3 0.52 11.86 10.0 2 0.66 20.55 3 0.36 1.97 1.0 2 0.71 3.16 3 0.46 2.41 0.1 2 2.10 23.33 3 1.35 11.86 0.4 2 1.48 20.55 3 0.92 11.49 0.6 2 1.24 23.33 3 0.69 6.98 0.8 2 0.74 4.41 3 0.56 11.86 In order to commen t on the eect of the design factors on the a v erage in v entory lev els, w e conductttests and compare alternativ e v alues of eac h parameter, in pairs. The results from thettests state that all the alternativ e lev els of all parameters are signican tly dieren t in terms of the resulting a v erage in v en tory lev els. As'decreases, the a v erage in v en tory lev el increases. This is due to the limitation created on the total a v ailable time. As'decreases, less time can be dev oted to the setups at the sublev els, the batc h sizes at the sublev els increase
PAGE 200
184 and the in v en tories at the supermark et are replenished less frequen tly As a result, the in v en tory that should be k ept at the supermark et increases. W e see the same relationship with, as w ell. The smallerv alues require higher in v en tory lev els. F orr =0the products/parts are more div ersied and forr =1the products/parts are more similar to eac h other. As the systems consists of similar items, larger n um ber of batc hing options arise and a smoother con trol of the system can be established. As a result, the in v en tory lev els at the supermark ets are lo w er. As thev alue decreases, the setup requiremen ts also decrease, the processing time become more importan t and few er n um ber of batc hing options exist. As a result, suc h smooth sc hedules can not be established and the supermark et in v en tories that should be k ept become higher.
PAGE 201
185 Algorithm OneStage( l ) 1.Initialize xu;h; 0=0,forall u = l;l +1 ;::;L and h 2 Nu. 2.For k =1to Ql,increase k by1 f 3.Set Fk= 1 and Assignk=0. 4.For i =1to nl,increase i by1 f 5.Calculate fl;i;k= wl" b2l;i xl;i;k Ã‚Â¡ 1+1 Ã‚Â¡kql;i Ql 2+ Ph 2 Nlnf i gb2l;h xl;h;k Ã‚Â¡ 1Ã‚Â¡kqh;i Ql 2# +LPu = l +1 nu Ph =1wu xu;h;k Ã‚Â¡ 1+ bl;iru;h;l;iÃ‚Â¡kbu;hqu;h Ql 26.If Fk>fl;i;k,then f 7.Update Assignk i and Fk fl;i;k. g g8.For i =1to nl,increase i by1 f 9.If i = Assignk,then f 10.Update xl;i;k xl;i;k Ã‚Â¡ 1+1. g11.Elsef 12.Update xl;i;k xl;i;k Ã‚Â¡ 1. g g13.For u = l +1to L ,increase u by1 f 14.For h =1to nu,increase h by1 f 15.Update xu;h;k xu;h;k Ã‚Â¡ 1+ bl;Assignk ru;h;l;Assighnk g g g Figure 41: Pseudocode for Algorithm One Stage
PAGE 202
186 nr lr ,Qr lr 0,0r 1,1r 1,2r 1,Dr l,1r nr lr ,nr lr nr lr ,Dr l,nr 2,2r 2,3r 2,Dr l,2r .r.r.r .r.r.r .r.r.r .r.r.r Figure 42: Net w ork Represen tation of the Problem
PAGE 203
187 Algorithm ForwardRecursion( l;Q l ) 1.Initialize F (0 ; 0)=0, F ( i;R l;i )= 1 forall i 2 N l ,1 R l;i D l;i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i valuethatsatises s l;i + l d l;i q l;i m p l;i T Q l f 5.Calculate f ( l;i;q l;i )= w l d d l;i =q l;i e 2 ( Q 2l Ã‚Â¡ q 2 l;i ) 12 Q l + q l;i 4 L P u = l +1 n u P v =1 w u d d l;i =q l;i e r u;v;l;i Ã‚Â¡ d u;v Q l 2 6.IF( F ( i;R l;i Ã‚Â¡ 1 + q l;i ) >F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ))THEN f 7.Set F ( i;R l;i Ã‚Â¡ 1 + q l;i )) F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ). 8.Update ActiveNodes i ActiveNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) 9. q Ã‚Â¤ l;i ( Q l ) q l;i g g g g Figure 43: Pseudocode for Algorithm Forward Recursion Algorithm SolvewithDP ( l ) 1.Initialize Q Ã‚Â¤l =0, F ( n l ;Q Ã‚Â¤l )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) g g g6.Foreachreachablenode( n l ;R l;n l ) f 7.Set Q l R l;n l 8.Findtheoptimalsolutionforthegiven Q l valueusingAlgorithm ForwardRecursion 9.If F ( n l ;Q Ã‚Â¤l ) >F ( n l ;Q l )THEN f 10.Update Q Ã‚Â¤l Q l g g Figure 44: Pseudocode for Algorithm Solve with DP
PAGE 204
188 Algorithm SolvewithBDP ( l ) 1.Initialize Q Ã‚Â¤l =0, F ( n l ;Q Ã‚Â¤l )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N l .Alsocompute U l; 0 and V l; 0 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) g g6.Compute U l;i and V l;i g7.Set Q Ll =1and Q Ul = b T= max i f s l;i + p l;i ;i 2 N l gc 8.Foreachreachablenode( n l ;R l;n )satisfying Q Ll R l;n Q Ul ,indecreasingorder f 9.Set Q l R l;n 10.Findtheoptimalsolutionforthegiven Q l valueusing Algorithm BoundedForwardRecursion 11.If F ( n l ;Q Ã‚Â¤l ) >F ( n l ;Q l )THEN f 12.Update Q Ã‚Â¤l Q l 13.Update Q Ll b ( U l; 0 Ã‚Â¡ V l; 0 ) =F ( n l ;Q Ã‚Â¤l ) c g g Figure 45: Pseudocode for Algorithm Solve with BDP
PAGE 205
189 Algorithm BoundedForwardRecursion( l;Q l ) 1.Initialize F (0 ; 0)=0, F ( i;R l;i )= 1 forall i 2 N l and1 R l;i D l;i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 thatsatises (( Q l Ã‚Â¡ D l;n + D l;i Ã‚Â¡ 1 R l;i Ã‚Â¡ 1 Q l Ã‚Â¡ n l + i +1)AND ( F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ G ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) F ( n l ;Q Ã‚Â¤l ))) f 4.Foreach q l;i 2 A l;i valuethatsatises s l;i + l d l;i q l;i m p l;i T Q l f 5.Calculate f ( l;i;q l;i )= w l d d l;i =q l;i e 2 ( Q 2l Ã‚Â¡ q 2 l;i ) 12 Q l + q l;i 4 L P u = l +1 n u P v =1 w u d d l;i =q l;i e r u;v;l;i Ã‚Â¡ d u;v Q l 2 6.IF( F ( i;R l;i Ã‚Â¡ 1 + q l;i ) >F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ))THEN f 7.Set F ( i;R l;i Ã‚Â¡ 1 + q l;i ) F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ). 8.Update ActiveNodes i ActiveNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) 9. q Ã‚Â¤ l;i ( Q l ) q l;i g g g g Figure 46: Pseudocode for Algorithm Bounded Forward Recursion Algorithm NEFeasibleSolutionSearch ( l ) 1.Startfromagivensolution, q l (= q l; 1 ;q l; 2 ;::;q l;n l ).Declare q l astheCurrentSolution. 2.CheckfeasibilityoftheCurrentSolution.Iffeasible,thenstopandreturntheCurrentSolution.Otherwisegotostep3.3.Findthecriticalconstraint(max i f s l;i + p l;i d d l;i q l;i e ;i 2 N l g )andthecriticalvariable q l;A ( s l;A + p l;A d d l;A q l;A e =max i f s l;i + p l;i d d l;i q l;i e ;i 2 N l g ).Ifthecriticalvariableisnot increasable( q l;A = d l;A ),thenstopandreturnnullsolution;nofeasiblesolutionexistsin thesolutionspace.Otherwise,increasethecriticalvariabletoitsnextacceptablev alueand gotostep2. Figure 47: Pseudocode for Algorithm NE Feasible Solution Search
PAGE 206
190 Algorithm ParametricHeuristicSearch( SearchDepth;MoveDepth;Eligible;l ) 1.SetCurrentSolutionasSWcornerofthe l th levelandperformaNEfeasiblesolution search(usingAlgorithm NEFeasibleSolutionSearch ).Ifnofeasiblesolutionscanbe found,stop.OtherwisesetCurrentSolutionasthissolution.2.Evaluateall SearchDepth step EligibleNeighbors oftheCurrentSolution.Ifthe BestNeighborisnotnull,thenmovetotheLeadingNeighbor.Ifthisnewsoluti onis feasiblethenrepeatstep2.Otherwisegotostep3.3.CheckifanyfeasiblesolutionexistsintheNEdirection,byemployingAlg orithm NEFeasibleSolutionSearch .Ifyes,thenmovetothatfeasiblesolution,andgotostep 2.Otherwisegotostep4.4.Returntothelastvisitedfeasiblesolution.IftheBestFeasibleNeighb orisnot null,thenmovetotheLeadingFeasibleNeighbor,andgotostep2.Otherwisest opand returntheBestSolution. Figure 48: Pseudocode for Algorithm Parametric Heuristic Search
PAGE 207
191 Algorithm PR Initialization 1.Initializethe ReferenceSet withseedsolutions,usingproblemspecicheuristics. 2.Foreachseedsolutionf 3.Createalldiversiedsolutionsoftheseedsolutiononhand.4.Foreachdiversiedsolutionf 5.Findalocaloptimumusingthe ImprovementMethod 6.Updatethe ReferenceSet g gImprovement 7.Generatesubsetsofthe ReferenceSet 8.Foreachsubsetf 9.Createcombinationsofthesolutionsinthesubset.10.Foreachcombinationf 11.Findalocaloptimumusingthe ImprovementMethod 12.Updatethe ReferenceSet g g13.Iterateuntil TerminationCriteria aresatised. Figure 49: Pseudocode for Algorithm PR
PAGE 208
CHAPTER 5 FLO WSHOP MUL TILEVEL MODEL The Flo wShop MultiLev el (FSML) model is v ery similar to the SingleMac hine MultiLev el model, whic h is extensiv ely discussed in the previous c hapter. The man ufacturing en vironmen t of in terest in the FSML model consists of a collection of o wshops, one at eac h lev el. A t eac h lev el, all the products go through a set of mac hines, with the same routing. Examples of suc h systems arise from the electronics man ufacturing, where the endproducts are man ufactured in a o wshop and the sublev els of the operations are also performed in o wshops. MultiLev el denotes that not only the endproducts lev el but also the sublev els of subassem blies, parts and ra wmaterial are considered. The v ariation in endproducts' appearances in the nal sc hedule will be minimized, through a complex objectiv e function whic h tak es the sublev el material requiremen ts of the endproducts as parameters. The curren t literature on the m ultilev el v ersion of the production smoothing problem focuses on sc heduling the nal lev el only in trinsically assuming that the sublev els can be subordinated to the nal lev el. W e see this as a shortcoming of the literature and dev elop a methodology that sc hedules the sublev els, as w ell. W e start with the nal lev el (the lev el of the end products), con tin ue with the second lev el (where the subassem blies required b y the nal lev el are man ufactured) and the other sublev els, in the hierarc hical order. W e exclude the ra w materials lev el, since the optimal planning of the ra w material purc hases brings a dynamic lotsizing problem whic h is bey ond the scope of this dissertation. As the previous c hapters ha v e already explained, this dissertation dev elops a new structure where demands are met in batc hes, and eac h batc h can be processed 192
PAGE 209
193 within a xed timebuc k et, whic h itself is a decision v ariable. Th us, the problem can be analyzed in t w o phases, the rst phase is to determine length of the xed time buc k et, n um ber of batc hes and batc h sizes for eac h product. Once w e solv e the problem of the rst phase, the problem of sequencing those batc hes, whic h is the second phase, becomes somewhat easier. Since eac h batc h should be processed in a xed timebuc k et, and total n um ber of batc hes to produce is kno wn for eac h product, w e can treat eac h batc h as a single unit of that product. All the batc hes should t in to the timebuc k et of the lev el considered (the length of whic h ist ltime units) on all the mac hines. Therefore, once the timebuc k et is established, all the batc hes mo v e one mac hine do wnstream, in ev eryt l. As a result, a batc h is completed in ev eryt ltime units. As far as the sequencing of the batc hes (the2 ndphase problem) is considered, the FSML model is iden tical to the SMML model. Th us, w e refer to the SMML model for the2 ndphase problem and focus on the1 stphase problem only This c hapter is organized as follo ws. In section 5.1 w e presen t the mathematical form ulation of the1 stphase problem. In section 5.2 w e dra w useful properties about the problem. Section 5.3 dev elops exact solution methods for the problem. Sections 5.4 and 5.5 are dev oted to heuristic solution procedures, as w e devise a heuristic algorithm for the problem and implemen t three metaheuristic tec hniques in these sections, respectiv ely Finally Section 5.6 presen ts a comparativ e analysis of the solution approac hes dev eloped for the problem. 5.1 1 stPhase Problem F orm ulation The parameters and v ariables used in the form ulation are dened as follo ws. L n um ber of lev els l lev el index (i =1 ; 2 ;::;L)n ln um ber of products at thel thlev elN lset of products (= f 1 ; 2 ;::;n l g) at thel thlev el
PAGE 210
194m ln um ber of mac hines at thel thlev elM lset of mac hines (= f 1 ;::;m l g) at thel thlev el T total a v ailable time i product index (i =1 ;::;n l) j mac hine index (i =1 ;::;m l)s l;i;jSetup time of product i on mac hinej, at thel thlev elp l;i;jProcessing time of one unit of product i on mac hinej, at thel thlev elr u;v;l;iAmoun t of partvat the lev elurequired to produce one unit of partiat thel thlev eld l;iDemand for product i at thel thlev el, for the planning horizonD l;iT otal demand of products 1 toito be man ufactured in the planning horizon, at thel thlev el (= i P h =1 d l;i)b l;iBatc h size of product i at thel thlev elq l;iNum ber of batc hes of product i at thel thlev el, to be man ufactured in the planning horizonQ lT otal n um ber of batc hes to be man ufactured in the planning horizon, at thel thlev el (= n P i =1 q l;i) T T otal a v ailable time, length of the planning horizont lLength of the timebuc k et, at thel thlev el, length of one stagex l;i;kCum ulativ e production of product i at thel thlev el, o v er stages 1 to k measured in batc hes. W e express the1 stphase problem with an optimization model similar to the ones discussed in the previous c hapters. The constrain ts reect the demand satisfaction and tting in to the timebuc k et concerns. W e form ulate the model as follo ws.
PAGE 211
195 MinimizeF = n l X i =1 266664 w l b 2l;i n l P h =1 q l;h 2 q 2 l;i 12 n l P h =1 q l;h + q l;i 4 L X u = l +1 n u X v =1 w u 0BB@ b l;i r u;v;l;i d u;v n l P h =1 q l;h 1CCA 2 377775(5.1) S.T. s l;i;j + p l;i;j d l;i q l;i n l X h =1 q l;h T; 8 i;j(5.2) d l;i q l;i = b l;i ; 8 i(5.3)q l;i = d l;i b l;i ; 8 i(5.4)1 q l;i d l;i ;q l;iin teger; 8 i(5.5) The objectiv e function ( 5.1 ) is a lo w er bound on the objectiv e function of the2 ndphase problem. F or the details on deriv ation of the lo w er bound, w e refer the reader to AppendixE The rst constrain t set ( 5.2 ) assures that ev ery batc h can be processed within the timebuc k et. The constrain t sets ( 5.3 ) and ( 5.4 ) assure that the o v erproduction is allo w ed in minimal quan tities, or synon ymously the decision v ariable (q l;i) v alues are restricted to the acceptable v alues. Finally the last constrain t set ( 5.5 ) denes the decision v ariable as an in teger v ariable. 5.2 Structural Properties of the1 stPhase Problem Abo v e form ulation sho ws that the decision v ariables are common in the FSSL and FSML models, for the1 stphase problem. Therefore, w e refer the reader to the denition of acceptable v alues and related discussions in the FSSL model (Section 3.2 ). The only dierence is in the notation, in that,A l;idenotes the set of the acceptable v alues of the v ariableq l;i, anda l;iis the cardinalit y ofA l;i. No w, w e proceed with dening a simpler v ersion of the problem.
PAGE 212
196 If w e could assume the w eigh t of the sublev els zero,w u =0 ;u>l, then the problem reduces to that of the FSSL model. This allo ws us to use complexit y results obtained for the FSSL model. Corollary 5.2.1 The1 stphase pr oblem in the FSML mo del is NPc omplete. Proof Since w e ha v e pro v en that the1 stphase problem in the FSSL model is NPcomplete, so m ust be the1 stphase problem in the FSML model. 5.3 Exact Methods for the1 stPhase Problem The dynamic programming procedure and its bounded v ersion dev eloped for the SMML model are not directly applicable to the FSML model. In the follo wing discussion, w e propose a bounded dynamic programming (BDP) solution method that inherits majorit y of its componen ts from its SMML coun terpart and com bines features of dynamic programming and branc handbound methods to successfully solv e the1 stphase problem in the FSML model. 5.3.1 Dynamic Programming F orm ulation Giv en a xedQ lv alue, the objectiv e function ( 5.1 ) simplies toF 0 = n l X i =1 w l d d l;i =q l;i e 2 Q 2l q 2 l;i 12 Q l + q l;i 4 L X u = l +1 n u X v =1 w u d d l;i =q l;i e r u;v;l;i d u;v Q l 2 # ;whic h is separable inq l;iv ariables. If the v ectorq l ( Q l )=( q l; 1 ;q l; 2 ;::;q l;n l )is an optimal solution to the problem withP n l i =1 q l;i = Q l, then the subv ector( q l; 2 ;q l; 3 ;::;q l;n l )should be optimal to the problem withP n l i =2 q l;i = Q l q l; 1, as w ell. Otherwise, the v ectorq l ( Q l )can not be an optimal solution. Th us, the principle of optimalit y holds for the problem and w e can build the optimal solution b y consecutiv ely deciding onq l;iv alues. LetR l;ibe the total n um ber of batc hes committed to the rstiproducts, at thel thlev el. The product indexiis the stage index, and the pair( i;R l;i )represen ts the states of the DP form ulation. Figure 51 illustrates the underlying net w ork structure of the problem.
PAGE 213
197 nr lr ,Qr lr 0,0r 1,1r 1,2r 1,Dr l,1r nr lr ,nr lr nr lr ,Dr l,nr 2,2r 2,3r 2,Dr l,2r .r.r.r .r.r.r .r.r.r .r.r.r Figure 51: Net w ork Represen tation of the Problem In the net w ork, eac h node represen ts a state in the DP form ulation and arcs reect the acceptable v alues suc h that an arc is dra wn from node (i 1 ;R l;i 1) to node (i;R l;i 1 + q l;i) for eac hq l;i 2 A l;i. W e dene the follo wing recursiv e equation.F ( i;R l;i )= 8>><>>: 0 ;ifi =0 min q l;i f F ( i 1 ;R l;i q l;i )+ f ( l;i;q l;i ) j s l;i;j + l d l;i q l;i m p l;i;j T Q l ; 8 j g ;ifi> 0Where,f ( l;i;q l;i )= w l d d l;i =q l;i e 2 Q 2l q 2 l;i 12 Q l + q l;i 4 L X u = l +1 n u X v =1 w u d d l;i =q l;i e r u;v;l;i d u;v Q l 2 :
PAGE 214
198 Note that, the recursiv e equation is a function ofQ l, that can be used for a giv enQ lv alue only Also, the nal state is( n l ;Q l ), and the solution to the problem,F ( n l ;Q l ), can be found with the forw ard recursiv e algorithm presen ted in Figure 52 Algorithm ForwardRecursion( l;Q l ) 1.Initialize F (0 ; 0)=0, F ( i;R l;i )= 1 forall i 2 N l ,1 R l;i D l;i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i valuethatsatises s l;i;j + l d l;i q l;i m p l;i;j T Q l ; 8 j 2 M l f 5.Calculate f ( l;i;q l;i )= w l d d l;i =q l;i e 2 ( Q 2l Ã‚Â¡ q 2 l;i ) 12 Q l + q l;i 4 L P u = l +1 n u P v =1 w u d d l;i =q l;i e r u;v;l;i Ã‚Â¡ d u;v Q l 2 6.IF( F ( i;R l;i Ã‚Â¡ 1 + q l;i ) >F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ))THEN f 7.Set F ( i;R l;i Ã‚Â¡ 1 + q l;i )) F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ). 8.Update ActiveNodes i ActiveNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) 9. q Ã‚Â¤ l;i ( Q l ) q l;i g g g g Figure 52: Pseudocode for Algorithm Forward Recursion When the algorithm terminates, it returnsq l;i ( Q l )v ector that is an optimal solution for the giv enQ lv alue andF ( n l ;Q l )that is the objectiv e v alue of this optimal solution. As in an y DP model, the n um ber of nodes gro ws exponen tially with the n um ber of stages. In the nal (n thl) stage, w e migh t ha v e at mostQ n l i =1 a l;inodes. This is a straigh tforw ard result of the fact that eac h node in the( i 1) ststage is connected to at mosta l;inodes in thei thstage. Ho w ev er, w e also kno w that the maxim um index for a node in the nal lev el is (n l ;D l;n). Therefore, the n um ber of
PAGE 215
199 nodes in the nal lev el is at mostmin f Q n l i =1 a l;i ;D l;n n l +1 g. An upper bound for the total n um ber of nodes in the graph isP n l i =1 min f Q ih =1 a l;h ;D l;i i +1 g. In order to deriv e the computational complexit y of algorithm Forward Recursion w e need to kno w the n um ber of arcs and the time it tak es to calculate an arc cost, as w ell. The n um ber of arcs in to thei thstage is a function of the n um ber of nodes in the( i 1) ststage anda l;i. An upper bound on this n um ber isa l;i min f Q i 1 h =1 a l;h ;D l;i 1 i +2 g. Therefore, w e claim that the total n um ber of arcs in the net w ork is at mosta l; 1 + P n l i =2 a l;i min f Q i 1 h =1 a l;h ;D l;i 1 i +2 g. Calculating a single arc cost, on the other hand, requiresm l (1+ L P u = l +1 n u )computations. In the w orst case, steps sev en through nine are executed as man y times as the n um ber of arcs in the net w ork. Therefore, the w orst case time complexit y of the algorithm is obtained b y m ultiplying the n um ber of arcs with the time required to calculate an arc cost:O (( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g ) m l (1+ L P u = l +1 n u )). Abo v e algorithm solv es the problem for a giv enQ lv alue. Ho w ev er, the problem does not tak e aQ lv alue as an input parameter, but returnsQ las a result of the solution v ector. This and the fact that an arc cost can be calculated only ifQ lis kno wn, w e need to solv e a DP for eac h possible v alue ofQ l. W e propose algorithm Solve with DP for the solution of the problem (see Figure 53 ). The algorithm iden ties all possible v alues ofQ land emplo ys algorithm Forward Recursion successiv ely to solv e the emerging subproblems. The algorithm yieldsQ las the optimalQ lv alue, whic h leads to the optimal solution v ectorq l ( Q l )and also the optimal solution's objectiv e v alueF ( n l ;Q l ). Steps one through v e can be considered as a preprocessing phase where the reac hable nodes are iden tied. The w orst case complexit y of this preprocessing phase depends on the n um ber of arcs in the net w ork represen tation of the problem, in that it is equal to that of algorithm Forward Recursion Since algorithm Forward Recursion is repetitiv ely in v ok ed in step eigh t, the preprocessing
PAGE 216
200 Algorithm SolvewithDP ( l ) 1.Initialize Q Ã‚Â¤l =0, F ( n l ;Q Ã‚Â¤l )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) g g g6.Foreachreachablenode( n l ;R l;n l ) f 7.Set Q l R l;n l 8.Findtheoptimalsolutionforthegiven Q l valueusingAlgorithm ForwardRecursion 9.If F ( n l ;Q Ã‚Â¤l ) >F ( n l ;Q l )THEN f 10.Update Q Ã‚Â¤l Q l g g Figure 53: Pseudocode for Algorithm Solve with DP phase does not aect the o v erall time complexit y of the algorithm. Steps sev en through nine are repeated for eac h reac hable node at the nal stage of the DP form ulation. The n um ber of reac hable nodes is bounded abo v e b yD l;n l n l +1. Therefore, algorithm Forward Recursion ma y be in v ok ed at mostD l;n l n l +1times, yielding an o v erall w orst case time complexit y ofO (( D l;n l n l +1)( a 1 + P ni =2 a i min f Q i 1 l =1 a l ;D i 1 i +2 g ) m l (1+ L P u = l +1 n u ))This time complexit y sho ws that, the computational requiremen t of the DP procedure depends on external parameters suc h asd l;is anda l;is. Therefore, the procedure ma y be impractical for large size problems. In the next subsection, w e dev elop sev eral bounding strategies to reduce the computational burden of the DP procedure.
PAGE 217
201 5.3.2 Bounding Strategies An upper limit forQ l. Noting that the length of the timebuc k et cannot be smaller than the sum of processing and setup times of an y batc h leads to the follo wing upper bound for the possibleQ lv alues.T Q l s l;i;j + p l;i;j ; 8 i;j ) Q l Q Ul = T max i;j f s l;i;j + p l;i;j ;i 2 N l ;j 2 M l g Eliminate in termediate nodes whic h cannot yield a feasible solution A t an y stage,R l;ima y increase b y at mostd l;iand at least 1 units. Therefore, as w e proceed to w ards the nal state, w e eliminate the in termediate nodes (i;R l;i) withR l;i >Q l n l + iorR l;i
PAGE 218
202 A lo w er limit forQ l. Starting with a high v alue ofQ land decreasing it at ev ery step requires a stopping condition based on a lo w er limit forQ lv alues. The most basic lo w er limit isQ Ll = P i 2 N l 1= n l, since the smallest acceptable v alue is one, for eac hi 2 N l. F or a better lo w er limit, w e adaptG ( i;R l;i )to the complete solution and obtainG (0 ; 0)=( U l; 0 V l; 0 ) =Q l. UsingF ( n l ;Q l )as the upper bound on the objectiv e v alue of the optimal solution,Q l Q Ll = ( U l; 0 V l; 0 ) =F ( n l ;Q l )giv es a lo w er limit onQ lv alue. Note that when a better solution is found,Q Llv alue increases. Therefore, w e updateQ Llev ery timeQ lis updated, and dynamically narro w the searc h space onQ l. Incorporating all the bounding strategies dev eloped, w e propose algorithm Solve with BDP (Figure 54 ) for the solution of the problem, using algorithm Bounded Forward Recursion (Figure 55 ) for successiv ely solving emerging DPs. The proposed dynamic programming procedure and the bounding policies sho w sligh t dierences from the ones proposed for the SMML model. The w orst case complexities aremtimes greater in the FSML model. This dierence is due to the ev aluation of the candidate arcs connecting states in t w o adjacen t stages in the form ulation. In the singlemac hine case, feasibilit y of an arc is tested with respect to only one mac hine, whereas in the o wshop casemmac hines are in v olv ed in the calculations. 5.4 Problem Specic Heuristics for the1 stPhase Problem The complexit y of the dynamic programming approac h proposed for the problem implies that w e ma y not be able to solv e largesized instances with these exact methods. Therefore, w e dev elop heuristic algorithms whic h do not guaran tee to nd an optimal solution but lik ely to nd good solutions in a reasonable amoun t of time. In this section w e describe a parametric heuristic solution procedure that w e ha v e dev eloped for the1 stphase problem.
PAGE 219
203 Algorithm SolvewithBDP ( l ) 1.Initialize Q Ã‚Â¤l =0, F ( n l ;Q Ã‚Â¤l )= 1 ReachableNodes 0 = f (0 ; 0) g and ReachableNodes i = ? forall i 2 N l .Alsocompute U l; 0 and V l; 0 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ReachableNodes i Ã‚Â¡ 1 f 4.Foreach q l;i 2 A l;i value f 5.Update ReachableNodes i ReachableNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) g g6.Compute U l;i and V l;i g7.Set Q Ll =1and Q Ul = b T= max i;j f s l;i;j + p l;i;j ;i 2 N l ;j 2 M l gc 8.Foreachreachablenode( n l ;R l;n )satisfying Q Ll R l;n Q Ul ,indecreasingorder f 9.Set Q l R l;n 10.Findtheoptimalsolutionforthegiven Q l valueusing Algorithm BoundedForwardRecursion 11.If F ( n l ;Q Ã‚Â¤l ) >F ( n l ;Q l )THEN f 12.Update Q Ã‚Â¤l Q l 13.Update Q Ll b ( U l; 0 Ã‚Â¡ V l; 0 ) =F ( n l ;Q Ã‚Â¤l ) c g g Figure 54: Pseudocode for Algorithm Solve with BDP The basic principles whic h constitute the basis for our heuristic solution procedure are mostly similar to those discussed in the SMML model. Here, w e rebuild our algorithms in order to incorporate the c hanges required b y the FSML model. The only modication required in the feasible solution searc h method is the denition of the critic al c onstr aint Critical constrain t is the constrain t with themax i f max j f s l;i;j + p l;i;j d d l;i q l;i e ;j 2 M l g ;i 2 N l gv alue. If the solution on hand is feasible, then the critical constrain t is the tigh test constrain t. Similarly in an infeasible solution, the critical constrain t is the most violated constrain t. Also, critic al variable is dened as the product related to the critical constrain t. The discussions giv en in the SMML model hold for the FSML model, as w ell. Therefore, w e use the results obtained for the SMML model and build
PAGE 220
204 Algorithm BoundedForwardRecursion( l;Q l ) 1.Initialize F (0 ; 0)=0, F ( i;R l;i )= 1 forall i 2 N l and1 R l;i D l;i ActiveNodes 0 = f (0 ; 0) g and ActiveNodes i = ? forall i 2 N l 2.For i =1to n l ,increase i by1 f 3.Foreachnode( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) 2 ActiveNodes i Ã‚Â¡ 1 thatsatises (( Q l Ã‚Â¡ D l;n + D l;i Ã‚Â¡ 1 R l;i Ã‚Â¡ 1 Q l Ã‚Â¡ n l + i +1)AND ( F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ G ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 ) F ( n l ;Q Ã‚Â¤l ))) f 4.Foreach q l;i 2 A l;i valuethatsatises s l;i;j + l d l;i q l;i m p l;i;j T Q l ; 8 j 2 M l f 5.Calculate f ( l;i;q l;i )= w l d d l;i =q l;i e 2 ( Q 2l Ã‚Â¡ q 2 l;i ) 12 Q l + q l;i 4 L P u = l +1 n u P v =1 w u d d l;i =q l;i e r u;v;l;i Ã‚Â¡ d u;v Q l 2 6.IF( F ( i;R l;i Ã‚Â¡ 1 + q l;i ) >F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ))THEN f 7.Set F ( i;R l;i Ã‚Â¡ 1 + q l;i ) F ( i Ã‚Â¡ 1 ;R l;i Ã‚Â¡ 1 )+ f ( l;i;q l;i ). 8.Update ActiveNodes i ActiveNodes i [ ( i;R l;i Ã‚Â¡ 1 + q l;i ) 9. q Ã‚Â¤ l;i ( Q l ) q l;i g g g g Figure 55: Pseudocode for Algorithm Bounded Forward Recursion Algorithm NE Feasible Solution Search (Figure 56 ). The algorithm examines solution space starting from an y giv en solution, b y mo ving in the NorthEast (NE) direction, and reports the existence of a feasible solution. Mo ving in the NE direction means increasing at least oneq l;ito its next acceptable v alue. F or future use w e dene SW corner as the solution where the v ariables tak e their lo w est possible v alues, that isq l;i =1 ; 8 i, and NE corner as the solution whereq l;i = d l;i ; 8 i. The algorithm performs exactly one incremen t operation per iteration. Depending on the starting solution, the algorithm performs at mostP n l i =1 a l;iiterations. Eac h iteration requires nding the critical constrain t and c hec king if the solution at hand is feasible or not, both these tasks tak eO ( n l m l )time. Therefore, the time complexit y of the algorithm isO ( n l m l P n l i =1 a l;i ). Considering that the
PAGE 221
205 Algorithm NEFeasibleSolutionSearch( l ) 1.Startfromagivensolution, q l (= q l; 1 ;q l; 2 ;::;q l;n l ).Declare q l astheCurrentSolution. 2.CheckfeasibilityoftheCurrentSolution.Iffeasible,thenstopandreturntheCurrentSolution.Otherwisegotostep3.3.Findthecriticalconstraint(max i f max j f s l;i;j + p l;i;j d d l;i q l;i e ;j 2 M l g ;i 2 N l g )andthe criticalvariable q l;A (max j f s l;A;j + p l;A;j d d l;A q l;A e ;j 2 M l g =max i f max j f s l;i;j + p l;i;j d d l;i q l;i e ;j 2 M l g ;i 2 N l g ).Ifthecriticalvariableisnotincreasable( q l;A = d l;A ),thenstopandreturn nullsolution;nofeasiblesolutionexistsinthesolutionspace.Otherwise,incr easethe criticalvariabletoitsnextacceptablevalueandgotostep2. Figure 56: Pseudocode for Algorithm NE Feasible Solution Search NE direction has at mostQ n l i =1 a l;isolutions whic h ma y or ma y not be feasible, the algorithm scans this space signican tly fast. Space complexit y of the algorithm is also easily calculated. The algorithm stores the curren t solution whic h consists ofn ldecision v ariables only therefore the space complexit y isO ( n l ). The algorithm can be rev ersed so that it scans the solution space in the SW direction. Although the nature of the problem is quite dicult, this ease in nding the closest feasible solution in a specic direction giv es us an adv an tage to dev elop a po w erful heuristic algorithm. Before proceeding with details of the algorithm, w e explain the neigh borhood structure used. A solutionq 1 l =( q 1 l; 1 ;q 1 l; 2 ;:::;q 1 l;n )is a neighb or solution ofq 0 l = ( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n )if and only if exactly one v ariable (sa yq l;A) v alue diers in these solutions, suc h thatq 1 l;Ais the next acceptable v alue ofq 0 l;Ain increasing or decreasing direction. That is, it can be reac hed b y only one incremen t or decremen t operation. With this denition, an y acceptable solution has at most2 n lneigh bors,n lof them being in the increasing direction and the othern lin the decreasing direction. No w w e can proceed with dening our heuristic approac h. The algorithm tak es three parameters;SearchDepth,MoveDepthandEligibleNeighbors.
PAGE 222
206SearchDepthparameter denotes depth of the searc h process. IfSearchDepth =1, then only the onestep neigh bors are ev aluated. IfSearchDepth =2, then the neigh bors' neigh bors (the t w ostep neigh bors) are also ev aluated, and so on. WhenSearchDepth> 1, thenMoveDepthbecomes an importan t parameter. IfMoveDepth =1, then the searc h terminates at a onestep neigh bor. IfMoveDepth =2, then the termination is t w o steps a w a y from the Curren t Solution, etc. The last parameter,EligibleNeighbors, denotes the eligible neigh bors for ev aluation. IfEligibleNeighbors= feasible, then only feasible neigh bors are considered. IfEligibleNeighbors= both, then both feasible and infeasible neigh bors are considered for ev aluation. In the algorithm, ev aluating a solution means calculating its objectiv e function v alue and determining if it is feasible. When all the neigh bors are ev aluated, the follo wing solutions are iden tied. The Best Neigh bor is aSearchDepthstep neighbor with the lo w est objectiv e v alue of all the neigh bors. The Leading Neigh bor is aMoveDepthstep neigh bor whic h leads to the Best Neigh bor. Similarly the Best F easible Neigh bor is aSearchDepthstep feasible neigh bor with the lo w est objectiv e v alue of all the feasible neigh bors and the Leading F easible Neigh bor is aMoveDepthstep feasible neigh bor whic h leads to the Best F easible Neigh bor. Note that, ifEligibleNeighbors= both, then Best Neigh bor and Best F easible Neigh bor migh t dier. IfEligibleNeighbors= feasible, then these t w o solutions are the same. This also holds for the Leading Solution and the Leading F easible Solution. A mo v e consists of updating the Curren t Solution and comparing the objectiv e function v alue of this solution to the Best Solution. If the solution at hand has a lo w er objectiv e v alue and is feasible, then the Best Solution is updated. Figure 57 sho ws the pseudocode for our heuristic algorithm, namely Algorithm Parametric Heuristic Search
PAGE 223
207 Algorithm ParametricHeuristicSearch( SearchDepth;MoveDepth;Eligible;l ) 1.SetCurrentSolutionasSWcornerofthe l th levelandperformaNEfeasiblesolution search(usingAlgorithm NEFeasibleSolutionSearch ).Ifnofeasiblesolutionscanbe found,stop.OtherwisesetCurrentSolutionasthissolution.2.Evaluateall SearchDepth step EligibleNeighbors oftheCurrentSolution.Ifthe BestNeighborisnotnull,thenmovetotheLeadingNeighbor.Ifthisnewsoluti onis feasiblethenrepeatstep2.Otherwisegotostep3.3.CheckifanyfeasiblesolutionexistsintheNEdirection,byemployingAlg orithm NEFeasibleSolutionSearch .Ifyes,thenmovetothatfeasiblesolution,andgotostep 2.Otherwisegotostep4.4.Returntothelastvisitedfeasiblesolution.IftheBestFeasibleNeighb orisnot null,thenmovetotheLeadingFeasibleNeighbor,andgotostep2.Otherwisest opand returntheBestSolution. Figure 57: Pseudocode for Algorithm Parametric Heuristic Search The algorithm alw a ys mo v es in the NE direction. The total n um ber of iterations performed b y Algorithm Parametric Heuristic Search is at mostP n l i =1 a l;i, wherea l;iis the n um ber of acceptable v alues for the decision v ariableq l;i. A t eac h iteration, if Algorithm NE Feasible Solution Search is not in v ok ed, at mostn SearchDepthlneigh bors are ev aluated, inO ( m l n SearchDepthl )time. W e already kno w that an iteration with Algorithm NE Feasible Solution Search tak esO ( n l m l )time. SinceO ( n l m l ) O ( m l n SearchDepthl ), the n um ber of solution ev aluations the algorithm performs isO ( m l n SearchDepthl P n l i =1 a l;i ). An ev aluation tak esO (1+ L P u = l +1 n u )time, th us the total time complexit y of the heuristic procedure isO ( m l n SearchDepthl ( P n l i =1 a l;i )(1+ L P u = l +1 n u )). Space complexit y of the algorithm is rather easy to calculate. The algorithm stores a constan t n um ber of solutions (Curren t Solution, Best Solution, etc.) during the iterations. Eac h solution consists ofn lv ariable v alues. So, the space complexit y of the algorithm isO ( n l ).
PAGE 224
208 5.5 MetaHeuristics for the1 stPhase Problem In the FSML model, w e implemen tation path relinking method on the1 stphase problem. Before proceeding with the details of the method and the algorithm w e use, w e presen t the neigh borhood structure. 5.5.1 Neigh borhood Structure W e dene a solutionq l =( q l; 1 ;q l; 2 ;::;q l;n l )as a v ector of the decision v ariables suc h that all the decision v ariables tak e an acceptable v alueq l;i 2 A l;i ; 8 i. W e further distinguish bet w een feasible and infeasible solutions as follo ws. A solution is fe asible if it satises the rst constrain t set ( 5.2 ), otherwise it is infe asible A solutionq 1 l =( q 1 l; 1 ;q 1 l; 2 ;:::;q 1 l;n l )is a neighb or ofq 0 l =( q 0 l; 1 ;q 0 l; 2 ;:::;q 0 l;n l )if and only if exactly one v ariable v alue is dieren t in these v ectors, and the categorical distance bet w een the v alues of this decision v ariable is at most, whereis a user dened in teger that is greater than or equal to one. If w e denote the set of neigh bor solutions of a solutionq 0 lwithNS ( q 0 l ; )and considerq 0 1 =(5 ; 5)and =2for example, then the neigh bor solutions set ofq 0 1isNS ((5 ; 5) ; 2)= f (3 ; 5) ; (4 ; 5) ; (8 ; 5) ; (15 ; 5) ; (5 ; 3) ; (5 ; 4) ; (5 ; 7) ; (5 ; 10) g. With this denition, a solution ma y ha v e at most2 n lneigh bors. W e iden tify t w o particular solutions. The rst one is the origin where eac h decision v ariable tak es its lo w est possible v alue, that isq l;i =1 ; 8 i 2 N l. The second one is the farthest c orner of the solution sp ac e where ev ery decision v ariable tak es its largest v alue, that isq l;i = d l;i ; 8 i 2 N l. This farthest corner is found useful in obtaining the global optim um in the previous models, therefore w e k eep it in the population of the solution in the FSML model, as w ell. 5.5.2 P ath Relinking In the follo wing, w e giv e a description of our PR implemen tation in the FSML model. W e use the generic algorithm presen ted in Figure 58 F or initialization, emplo ymen t of the problem specic heuristic methods is represen ted
PAGE 225
209 b y a parameter,PSHMethods. W e consider the problemspecic heuristic methods in order of their time consumption, as reported in Y a vuz and T ufek ci (2004a). IfPSHMethods =1, then the rst and fourth methods are emplo y ed. IfPSHMethods =2, then method 2 is emplo y ed in addition to the other t w o. Finally ifPSHMethods =3, all four methods are emplo y ed. Algorithm PR Initialization 1.Initializethe ReferenceSet withseedsolutions,usingproblemspecicheuristics. 2.Foreachseedsolutionf 3.Createalldiversiedsolutionsoftheseedsolutiononhand.4.Foreachdiversiedsolutionf 5.Findalocaloptimumusingthe ImprovementMethod 6.Updatethe ReferenceSet g gImprovement 7.Generatesubsetsofthe ReferenceSet 8.Foreachsubsetf 9.Createcombinationsofthesolutionsinthesubset.10.Foreachcombinationf 11.Findalocaloptimumusingthe ImprovementMethod 12.Updatethe ReferenceSet g g13.Iterateuntil TerminationCriteria aresatised. Figure 58: Pseudocode for Algorithm PR Ha ving established a set of seed solutions, the div ersication generator processes these seed solutions and creates the initial reference set. W e use t w o alternativ e modes of the div ersication generator. The rst mode is similar to the m utation operator used in genetic algorithms (Goldberg, 1989; Holland, 1975;
PAGE 226
210 Reev es, 1997). That is, the seed solution v ector is tak en as the input and starting with the rst v ariable, a div ersied solution is created for eac h v ariable. This is ac hiev ed b y replacing the v ariable's v alue with its100 thnext acceptable v alue. Ifa l;i < 100, the mod operator is used in order to obtain an acceptable v alue with index v alue bet w een one anda i. Here 100 is arbitrarily selected, an y signican tly large in teger suc h as 50, 200 or 500 could be c hosen. The second mode, on the other hand, does not process seed solution v ectors. It performs a local searc h for eac h decision v ariable and iden ties solutions that maximize the v alue of that certain decision v ariable. This mode of div ersication yields a total ofnalternativ e solutions and enables us to explore extreme corners of the feasible region. The parameter represen ting the selection of the div ersication mode isDiversification, and it has four lev els. A t lev el 1 no div ersication is applied, at lev el 2 only corner searc h is applied, at lev el 3 only the div ersication generator is used and nally at lev el 4 both modes are used. Depending on the mode selection in the application of the algorithm, the n um ber of div ersied solutions ma y be less than the size of the reference set. In this case, the empt y slots in the reference set can be lled in the consecutiv e iterations. The size of the reference set is represen ted b y parameterb. In our implemen tation w e k eep one infeasible solution in the reference set, at all times. This infeasible solution is the farthest corner of the solution space discussed in Section 5.5.1 The subset generation mec hanism used for PR considers the subsets with t w o solutions only These solutions are used as origin and destination poin ts in the solution com bination mec hanism. Based on the acceptable v alues, w e measure the distance bet w een the origin and the destination with a categorical distance measure. Ifq 1 landq 2 lare the origin and destination v ectors, and w e dene the functionPosition ( q l;i )as an in teger function whic h returns the position of v ariablei's v alue inA l;i, then the distance bet w een these t w o v ectors is dened
PAGE 227
211 asP i 2 N l j Position ( q 1 l;i ) Position ( q 2 l;i ) j, wherej x jis the absolute v alue ofx. Starting from the origin, the neigh bor solutions whic h decrease the distance b y one are considered and the bestNTSsolutions are stored in a list, whereNTSis the parameter standing for the n um ber of temporary solutions. In the next step, eac h solution in this list is considered as the origin, and again the neigh bor solutions that decrease the distance b y one are ev aluated. This is repeated un til the destination solution is reac hed, while k eepingNTSbest solutions bet w een the steps.NTS =1represen ts a single path bet w een the origin and the destination. Ho w ev er,NTS> 1can be considered asNTSparallel paths that are built bet w een the origin and the destination solutions. Using the impro v emen t method on com bined solutions and updating the reference set are common in both the initial and iterativ e phases. Ho w ev er, performing a local searc h on ev ery solution obtained ma y be impractical.LSinPreProcessis the parameter that represen ts local searc h usage in the initial phase. IfLSinPreProcess =0, no local searc h is applied. IfLSinPreProcess =1, local searc h is only applied at the end of the initial phase on the solutions that are stored in the reference set. IfLSinPreProcess =2, a local searc h is applied for ev ery trial solution considered.LStoRefSetPPis the parameter represen ting the update frequency of the reference set and tak es the v alues oftrueorfalse. IfLStoRefSetPP = true, ev ery time a solution is ev aluated, it is compared to the solutions in the reference set and if necessary the reference set is updated. This requires that ev ery mo v e performed during the local searc h is considered for the reference set. IfLStoRefSetPP = false, only the nal result of the local searc h, a local optim um, is tried for the reference set. P arametersLSinIterationsandLStoRSItersha v e the same denition and lev els, applied to the iterativ e phase. F or the termination of the algorithm w e ha v e one criterion only If the reference set is not modied on a giv en iteration, it cannot be modied on the later
PAGE 228
212 iterations, either. Therefore, w e k eep trac k of the solutions in the reference set and immediately terminate if the reference set is the same before and after an iteration. This criterion does not require a parameter. 5.6 Comparativ e Study 5.6.1 Researc h Questions This dissertation proposes a gen uine planning tool for m ultilev el justintime man ufacturing systems. In the FSML model, w e address the batc hing and sequencing decisions and propose exact and heuristic methods to solv e the arising problems. W e rst focus on the nal (endproducts) lev el and solv e the batc hing and sequencing problems at this lev el. Then, w e proceed with the sublev els, except for the ra w materials lev el, whic h is not included in the study The sequence built for the nal (rst) lev el determines the demands at the second lev el, as the production at the rst lev el (end products) consume output of the second lev el (componen ts). Similarly the sequence built for the second lev el determines the demands at the third lev el. Therefore, the sequences of the subsequen t lev els dene the in v en tory lev els at the supermark et in v en tory buers bet w een the lev els. In this comparativ e study w e address t w o researc h questions. Our rst researc h question is similar to our w ork in the previous c hapters, as w e ev aluate the performance of our solution methods for the batc hing problem. The second researc h question, on the other hand, addresses the supermark et in v en tory lev els. The lo w er the in v en tory lev els are, the more suitable our planning approac h is, according to JIT principles. The researc h questions are:Ho w do the alternativ e solution methods perform on the test instances? Does an y of the methods perform signican tly better than the others in terms of solution qualit y and solution time measures?What are the appropriate supermark et in v en tory lev els, that should be k ept?
PAGE 229
213 5.6.2 Design of Experimen ts In our study w e consider four lev el problems (L =4) with v e products/parts at eac h lev el (n l =5 ;l =1 ;::; 4). The rst lev el is the endproducts lev el, the remaining lev els are the componen ts, parts and ra w materials lev els, respectiv ely The a v erage demand for an endproduct is 750 units. The demands for the parts and ra w materials at the sublev els depend on the endproduct demands and billsofmaterial. W e generate the bills of material (r u;v;l;iv alues) in suc h a w a y that exactly t w o units of a part is needed b y the parts/products at its immediate superlev el. Therefore, the a v erage demands for the parts increase exponen tially with the lev el n um ber; 1500 at the second lev el, 3000 at the third lev el, etc. W e use four experimen tal factors,s l;i;j =p l;i;jratio, T relaxation percen tage, relaxation c hange bet w een the lev els ('), and div ersication lev elr.r 2f 0 ; 1 gis used to create test cases in whic h dieren t products are div ersied in terms of demand, processing time and setup time.r =1reects the div ersied case, andr =0reects the undiv ersied case where the products are v ery similar to eac h other. Demand v alues are randomly and uniformly generated bet w een the minim um and maxim um v alues, where maxim um demand is t wice as large as the a v erage demand for div ersied instances and 20% o v er the a v erage demand for the instances with similar products. The ratio of maxim um demand to minim um demand is 50 and 1.5 for these t w o t ypes of instances, respectiv ely W e useto denote the ratio bet w een the expected v alues ofs l;i;jandp l;i;jfor the div ersied instances. W e rst createp l;i;jaccording to uniform distribution bet w een (0,5] min utes, and thens l;i;jaccording to uniform distribution bet w een [(1 0 : 1 r ) p l;i;j ; (1+0 : 1 r ) p l;i;j ]. W e let 2f 100 ; 10 ; 1 gfor our experimen ts. W e create the total a v ailable time v ariable righ t after creating the rst lev el data. T otal a v ailable time should allo w at least one setup per product, that is
PAGE 230
214T T LB = P i 2 N 1 max j f d 1 ;i p 1 ;i;j + s 1 ;i;j j j 2 M 1 g. On the other hand,Tshould be limited withT
PAGE 231
215 algorithm sho ws thatSearchDepthparameter is critical in time requiremen t. Our preliminary results sho w that settingSearchDepth> 2causes extensiv e time consumption but not yielding a signican t impro v emen t in solution qualit y Therefore w e narro wSearchDepth 2f 1 ; 2 g. If only onestep neigh bors are considered, then theMoveDepthparameter is xed to one. Ho w ev er, ifSearchDepth =2, then w e migh t speed up the algorithm b y mo ving directly to the best neigh bor found (MoveDepth =2). Therefore, w e test both lev els of this parameter. F or the combinations ev aluating the infeasible neigh bors as w ell, w e do not w an t to allo w the searc h to mo v e too far deep in to the infeasible region, but k eep the mo v es within onestep neigh borhood of the feasible region. Therefore, w e xSearchDepth =1for suc h com binations. The methods tested are: Method P arameter Com bination # (SearchDepth;MoveDepth;EligibleNeighbors) PSH1 (1, 1, feasible) PSH2 (2, 1, feasible) PSH3 (2, 2, feasible) PSH4 (1, 1, both) W e see the same parametric structure in our path relinking implemen tation, as w ell. The parametric structure of our computer code is v ery exible in terms of testing alternativ e strategies for a method. Ho w ev er, when the n um ber of parameters is large, an enormous n um ber of com binations of algorithm parameters exist. Finding the most eectiv e com bination is itself a com binatorial optimization problem. W e adopt the same heuristic approac h as of the previous c hapter to this problem; at eac h stage w e x some of the parameters to predetermined v alues and perform full factorial experimen ts on the rest of the parameters. F or the signicance of the dierence bet w een the tested lev els of a parameter, w e apply pairedttests. W e denote the mean v alues of computation time and
PAGE 232
216 relativ e deviation from the optimal solution measures with tland dl, respectiv ely for thel thlev el of the parameter. If there are only t w o lev els for a parameter then one h ypothesis per measure is built. If, ho w ev er, there are more than t w o lev els, then the n um ber of h ypotheses to be built depends on the relationship bet w een the lev els of the parameter. F or some parameters, b y its role in the algorithm, w e kno w that the solution qualit y impro v es and the computational time increases with the lev els. F or example, if w e tak e the size of the reference set as a parameter, w e expect larger reference set sizes to require longer computational times and yield better results. In suc h cases, w e build h ypotheses on the dierence of adjacen t lev els in pairs. If all the adjacen t lev els are signican tly dieren t and a monotone order of the lev els is found, w e do not construct h ypotheses for eac h possible pair of labels. Otherwise, depending on the results obtained, w e ma y w an t to distinguish bet w een nonadjacen t lev els of the parameter and built h ypotheses for them. F or some other parameters, on the other hand, the results are not expected to be in suc h an order. Th us, w e build h ypotheses and applyttests for ev ery possible pair of the lev els of the parameter. F or allttests, w e use a condence lev el of95%. The ne tuning process terminates when all the parameters are considered. The ne tuning process can be seen as a supervised learning process. W e use 20% of the test instances (v e problems for eac h problem setting presen ted in the previous section) for ne tuning. That is, the most promising methods according to their performance on the fraction of the test instance will be used on the en tire set of test instances. W e represen t the PR method with PR(PSHMethods,Diversification,b,NTS,LSinPreProcess,LStoRefSetPP,LSinIterations,LStoRSIters). Here, w e ha v e a total of 8 parameters. W e use the ne tuning results for the PR method, in the SMML model as a starting poin t. That is, w e start with an initial com bination of the parameters of PR(1, 3, 20, 1, 1,true, 1,false).
PAGE 233
217 In the rst stage w e test the parametersbandNTS, with three v alues eac h. This giv es us 9 com binations in total. As w e see from T ables H1 and H2 ,b =30andNTS =5are preferable o v er the other lev els of these parameters. A t this poin t, w e see that the results are satisfactory in terms of both the deviation and solution time measures. Therefore, w e stop ne tuning and select PR(1, 3, 30, 5, 1,true, 1,false) as the com bination to be tested with the comparativ e analysis. The process of ne tuning the PR method is summarized in T able 51 T able 51: Summary of the Fine T uning Process for the PR Method Stage P arameters # of Time Methods # P D b NTS L 1 L 2 L 3 L 4 T ested (hrs) 1 1 3 20 1 1 true 1 false 9 19.0 30 3 40 5 P : PSHMethods;D : Diversification T otal 9 19.0 L 1 : LSinPreProcess;L 2 : LStoRefSetPP L 3 : LSinIterations;L 4 : LStoRSItersThe methods included in this comparativ e study is our bounded dynamic programming (BDP) method. In total, w e ha v e v e methods in our comparativ e study In the follo wing subsection, these methods are denoted with BDP PSH1, PSH2, PSH3, PSH4 and PR. 5.6.4 Results and Discussion In ev aluating the computational performance of our solution methods, w e consider t w o performance measures, namely computational time and percen t deviation from the optimal solution. These t w o measures represen t the tradeo bet w een solution qualit y and time. Results from solving the test instances with all the methods considered, including three metaheuristic methods, four problemspecic heuristic methods and an exact method, for the computation time and percen t deviation from the optim um measures, are summarized in T able 52
PAGE 234
218 T able 52: Summary of Results Time (seconds) Deviation (%) Method A vg. Min. Max. A vg. Max. BDP 1673.21 51.84 18814.08 PSH1 0.27 0.04 0.74 4.135 91.842 PSH2 1.31 0.09 3.02 3.101 82.215 PSH3 0.73 0.11 1.93 3.754 86.486 PSH4 0.20 0.03 0.87 4.311 98.254 PR 4.63 0.49 14.64 0.057 32.051 W e analyze the dierence of the methods pair b y pair, for both computational time and percen t deviation from the optimal solution measures. A total of 30 n ull h ypotheses are built and tested at a 95% condence lev el b y 2tailed pairedttests. All of the h ypotheses are rejected. The ordering of the methods is tBDP > tPR > tPSH 2 > tPSH 3 > tPSH 1 > tPSH 4for the solution time and dBDP < dPR < dPSH 2 < dPSH 3 < dPSH 1 < dPSH 4for the deviation measure. The bounded dynamic programming procedure requires appro ximately v e hours in the w orst case. This time requiremen t is extensiv ely large, th us the solution qualit y of the heuristic methods becomes extremely importan t. F or the problemspecic heuristics, the results sho w that the four alternativ e methods are signican tly dieren t in terms of both the solution time and solution qualit y measures. Also, the solution qualit y of the problemspecic heuristic methods can not compete with that of the PR method. F urthermore, the time requiremen t of the PR method is negligibly small, although it is statistically larger than the problemspecic heuristics. This result sho ws that our PR implemen tation to the FSML model is v ery successful. With respect to its solution qualit y and time performance, w e argue that it can be used b y the practitioners in the eld, in almost real time. W e answ er the second researc h question through T able 53 The results are similar to those found for the SMML model. Our rst observ ation is that the
PAGE 235
219 a v erage percen t in v en tory lev els are appro ximately %1 at both lev els. This result sho ws that our solution approac h of solving the batc hing problems independen tly is acceptable, as it requires lo w in v en tory lev els bet w een successiv e lev els. T able 53: Summary of Supermark et In v en tory Lev els Design % In v en tory Lev el F actor V alue Lev el A v erage Max. Ov erall 2 1.14 24.13 3 0.78 12.61 1.0 2 1.02 6.41 3 0.48 11.45 0.8 2 1.12 23.63 3 0.68 3.41 0.6 2 1.28 20.73 3 1.18 11.45 r 0 2 1.44 23.63 3 0.98 11.45 1 2 0.84 20.73 3 0.58 11.45 10.0 2 0.64 20.73 3 0.39 1.51 1.0 2 0.79 3.45 3 0.50 2.42 0.1 2 1.99 23.63 3 1.45 11.45 0.4 2 1.44 20.73 3 0.94 11.78 0.6 2 1.24 23.63 3 0.78 6.45 0.8 2 0.74 4.48 3 0.62 11.45 In order to commen t on the eect of the design factors on the a v erage in v entory lev els, w e conductttests and compare alternativ e v alues of eac h parameter, in pairs. The results from thettests state that all the alternativ e lev els of all parameters are signican tly dieren t in terms of the resulting a v erage in v en tory lev els. As'decreases, the a v erage in v en tory lev el increases. This is due to the limitation created on the total a v ailable time. As'decreases, less time can be
PAGE 236
220 dev oted to the setups at the sublev els, the batc h sizes at the sublev els increase and the in v en tories at the supermark et are replenished less frequen tly As a result, the in v en tory that should be k ept at the supermark et increases. W e see the same relationship with, as w ell. The smallerv alues require higher in v en tory lev els. F orr =0the products/parts are more div ersied and forr =1the products/parts are more similar to eac h other. As the systems consists of similar items, larger n um ber of batc hing options arise and a smoother con trol of the system can be established. As a result, the in v en tory lev els at the supermark ets are lo w er. As thev alue decreases, the setup requiremen ts also decrease, the processing time become more importan t and few er n um ber of batc hing options exist. As a result, suc h smooth sc hedules can not be established and the supermark et in v en tories that should be k ept become higher.
PAGE 237
CHAPTER 6 SUMMAR Y AND CONCLUSIONS This dissertation dev elops a new planning methodology and tools to successfully operate mixedmodel man ufacturing systems under the JIT philosoph y Curren tly existing literature focuses on sync hronized assem bly lines where alternativ e products ha v e iden tical processing times and no setup requiremen ts. Our w ork con tributes to the literature b y allo wing dieren t products to ha v e arbitrary nonzero setup and processing times on the man ufacturing resources. The dissertation considers singlemac hine and o wshop t ype man ufacturing en vironmen ts both in singlelev el and m ultilev el structures. In order to mak e use of the existing literature and create an easytoimplemen t solution approac h, w e dev elop a t w ophase solution methodology The k ey to the t w ophase methodology is a xedlength timebuc k et that is determined as a part of the decision in the rst phase and is used as the discrete time unit in the second phase. The rst phase establishes the batc h sizes and the n um ber of batc hes to be man ufactured for dieren t products, as w ell as the the length of the timebuc k et. The planning horizon is divided in to equallength timebuc k ets, where the timebuc k et is the time unit that will be later assigned to the batc hes of products. Eac h product, no matter whether man ufactured in a batc h of one or a m ultiple n um ber of items, should t in to the timebuc k et. That is, the summation of the setup time and the processing times of eac h unit in the batc h should not exceed the length of the timebuc k et. This giv es the rst phase problem a dynamic and hard nature, due to the in terdependency bet w een the timebuc k et, batc h sizes and the n um bers of batc hes of dieren t products. The second phase problem is relativ ely easy in 221
PAGE 238
222 that the timebuc k ets are allocated to the products, whic h is a special t ype of an assignmen t problem. W e mak e use of the existing solution methods in the literature, where appropriate, and dev elop our o wn solution methods in the other cases. In all the problems, dynamic programming based methods are found ecien t, for small to moderate size problems. W e summarize the complexities of the arising rst and second phase problems, the most ecien t exact solution algorithms and their w orstcase time complexities in T able 61 The second phase problem is mostly related to the existing literature. Therefore, w e adapt existing solution methods to the second phase problem and focus our atten tion on the rst phase problem. F or the exact solution of the rst phase problem, w e propose algorithms based on dynamic programming and branc handbound methods. The w orstcase time complexities of the exact algorithms for the rst phase problem are prohibitiv e on large problems. Therefore, w e propose heuristic algorithms for the nearoptimal solution of the problem. The a v erage case performance of the heuristic methods, measured in solution time and percen t deviation from the optimal solution, are presen ted in T able 62 The results sho wn in the table are summarized from our computational studies in all four models. The SMSL model results are tak en fromn =10product instances. The m ultilev el results are based on a fourlev el (L =4) structure with v e parts/products at eac h lev el (n l =5,l =1 ; 2 ;::; 4). The o wshop v ersions assume the same n um ber of products, in a v emac hine o wshop setting at eac h lev el (m l =5). In the m ultilev el v ersions of the problem, w e assume supermark et in v en tories bet w een successiv e lev els. W e solv e the batc hing (the rst phase) and sequencing (the second phase) problems of the alternativ e lev els independen tly In this case, the optim um lev els of the supermark et in v en tories come in to question. The lev els should be high enough to prev en t starving of the do wnstream operations (super
PAGE 239
223 lev els) and k ept minimal in order to minimize the in v en toryrelated costs. The results of our computational study sho w that the a v erage supermark et in v en tory lev els are appro ximately 1% of the total demand in both the singlemac hine and o wshop models. This result pro v es that our solution approac h can be successfully applied with lo w supermark et in v en tories. The batc h production smoothing problem (BPSP) can be extended in sev eral directions, whic h w e consider in our future researc h plans, as w ell. An importan t v arian t of the BPSP can tak e setup and in v en tory costs. In our w ork, w e ha v e focused on the viabilit y of the JIT system with arbitrary nonzero setup and processing times. Incorporating setup and in v en tory holding costs can help the man ufacturers lo w er their operating expenses and sustain the JIT system at the same time. The second researc h direction comes from the needs of certain industries. As there exist n umerous examples in the sc heduling theory the setup times can be sequence dependen t. In this case, the utilization of the productiv e time critically depends on the sequence of products. Determining the length of the timebuc k et also becomes m uc h more importan t. Our nal future researc h direction is to dene the BPSP in a realtime sc heduling framew ork. The setup and processing times can be uncertain, as w ell as the demand. In this case, the sequencing phase becomes more importan t and should be solv ed dynamically Moreo v er, the drastic c hanges in demand can also be faced and batc hing decisions m ust be recalculated dynamically
PAGE 240
224T able 61: Problem Complexities and W orstCase Time Complexities of the Exact Solution Methods Model SMSL FSSL SMML FSML 1 stPhase Problem NPcomplete Exact method BDP Time complexit y O (( D n n +1) m (1) (1+ L P u = l +1 n u ) (1) m l (1+ L P u = l +1 n u ) (1) ( a 1 + n P i =2 a i min f i 1 Q l =1 a l ;D i 1 i +2 g ))(1) 2 ndPhase Problem P olynomially solv able NPcomplete Exact method T ransformation to Assignmen t Problem DP Time complexit y O ( Q 3 ) O ( n 2l ( L P u = l +1 n u ) n l Q i =1 ( q l;i +1)) T able 62: A v erage P erformance of our Heuristic Methods on the1 stPhase Problem Model Measure Method SMSL FSSL SMML FSML A vg. Time PSH1 0.35 0.48 0.19 0.27 (seconds) PSH2 3.07 4.64 0.91 1.31 PSH3 1.62 2.39 0.53 0.73 PSH4 0.30 0.44 0.13 0.20 PR 5.83 15.82 0.92 4.63 A vg. Deviation PSH1 0.556 0.752 3.697 4.135 (%) PSH2 0.377 0.552 2.577 3.101 PSH3 0.362 0.602 3.099 3.754 PSH4 0.666 0.702 3.913 4.311 PR 0.015 0.014 0.051 0.057
PAGE 241
REFERENCES Aiex, R. and Resende, M., In press. P arallel strategies for GRASP with pathrelinking, In: Ibaraki, T., Nonobe, K. and Y agiura, M. (eds.), Metaheuristics: Progress as Real Problem Solv ers, V ol. 32 of Op er ations R ese ar ch/Computer Scienc e Interfac es Series Klu w er A cademic Publishers. Aigbedo, H., 2000. Some structural properties for the justintime lev el sc hedule problem. Production Planning & Con trol 11(4), 357362. Aigbedo, H. and Monden, Y., 1997. A parametric procedure for m ulticriterion sequence sc heduling for justintime mixedmodel assem bly lines. In ternational Journal of Production Researc h 35(9), 25432564. Alfandari, L., Plateau, A. and T olla, P ., 2003. A pathrelinking algorithm for the generalized assignmen t problem, In: Resende, M. and de Souza, J. (eds.), Metaheuristics: Computer Decision Making. Klu w er A cademic Publishers, Boston, MA, c hapter 1, pp. 118. Amaral, A. and W righ t, M., 2001. Experimen ts with strategic oscillation algorithm for the pallet loading problem. In terational Journal of Production Researc h 39(11), 23412351. Balas, E., Miller, D., P ekn y J. and T oth, P ., 1991. A parallel shortest augmen ting path algorithm for the assignmen t problem. Journal of the A CM 38(4), 985 1004. Baran y I., v an Ro y T. and W olsey L., 1984. Strong form ulations for m ultiitem capacitated lotsizing. Managemen t Science 30(10), 12551261. Bautista, J., Compan ys, R. and Corominas, A., 1996. Heuristics and exact algorithms for solving the Monden problem. European Journal of Operational Researc h 88, 101113. Cakir, A. and Inman, R., 1993. Modied goal c hasing for products with nonzero/one bills of material. In ternational Journal of Production Researc h 31(1), 107115. Cheng, L. and Ding, F., 1996. Modifying mixedmodel assem bly line sequencing methods to consider w eigh ted v ariations for justintime production systems. IIE T ransactions 28, 919927. 225
PAGE 242
226 Cho, H., P aik, C., Y oon, H. and Kim, H., 2005. A robust design of sim ulated annealing approac h for mixedmodel sequencing. Computers & Industrial Engineering 48, 753764. Ding, F. and Cheng, L., 1993. A simple sequencing algorithm for mixedmodel assem bly lines in justintime production systems. Operations Researc h Letters 13, 2736. Do wsland, K., 1998. Nurse sc heduling with tabu searc h and strategic oscillation. European Journal of Operational Researc h 106, 393407. Drexl, A. and Haase, K., 1995. Proportional lotsizing and sc heduling. In ternational Journal of Production Economics 40(1), 7387. Drexl, A. and Kimms, A., 1997. Lot sizing and sc heduling surv ey and extensions. European Journal of Operational Researc h 99, 221235. Duplaga, E. and Bragg, D., 1998. Mixedmodel assem bly line sequencing heuristics for smoothing componen t parts usage: A comparativ e analysis. In ternational Journal of Production Researc h 36(8), 22092224. Duplaga, E., Hahn, C. and Hur, D., 1996. Mixedmodel assem bly line sequencing at Hyundai motor compan y Production and In v en tory Managemen t Journal (3), 2025. Edw ards, L., 1983. MRP and Kan ban, American st yle, APICS 26th Ann ual In ternational Conference Proceedings. American Production and In v en tory Con trol Societ y Press, Milw auk ee, WI, pp. 586603. ElF allahi, A. and Mart, R., 2003. T abu and scatter searc h for articial neural net w orks, In: Bharga v a, H. and Y e, N. (eds.), Computational Modeling and Problem Solving in the Net w ork ed W orld, In terfaces in Computer Science and Operations Researc h. Klu w er A cademic Publishers, Boston, MA, c hapter 4, pp. 7996. ElNajda wi, M. and Kleindorfer, P ., 1993. Common cycle lotsize sc heduling for m ultiproduct, m ultistage production. Managemen t Science 39(7), 872885. Elmaghrab y S., 1978. The economic lot sc heduling problem (ELSP): Review and extensions. Managemen t Science 24(6), 587598. Fleisc hman, B. and Meyr, H., 1997. The general lotsizing and sc heduling problem. OR Spektrum 19(1), 1121. Garey M. and Johnson, D., 1979. Computers and In tractabilit y: A Guide to the Theory of NPCompleteness. W. H. F reeman and Compan y New Y ork.
PAGE 243
227 Ghamlouc he, I., Crainic, T. and Gendreau, M., 2004. P ath relinking, cyclebased neigh bourhoods and capacitated m ulticommodit y net w ork design. Annals of Operations Researc h 131(14), 109134. Glo v er, F., 1977. Heuristics for in teger programming using surrogate constrain ts. Decision Sciences 8(1), 156166. Glo v er, F., 1998. A template for scatter searc h and path relinking, In: J.K., H., Lutton, E., Ronald, E., Sc hoenauer, M. and Sn y ers, D. (eds.), Selected P apers from the Third European Conference on Articial Ev olution, V ol. 1363 of L e ctur e Notes in Computer Scienc e Springer, London, UK, pp. 1354. Glo v er, F., 1999. Scatter searc h and path relinking, In: Corne, D., Dorigo, M. and Glo v er, F. (eds.), New Ideas in Optimization. McGra w Hill, Maidenhead, UK, c hapter 19, pp. 297316. Glo v er, F., 2000. Multistart and strategic oscillation methods principles to exploit adaptiv e memory: A tutorial on unexplored opportunities, In: Laguna, M. and V elarde, J. G. (eds.), Computing T ools for Modeling, Optimization and Sim ulation: In terfaces in Computer Science and Operations Researc h, V ol. 12. Klu w er A cademic Publishers, Boston, MA, pp. 124. Goldberg, D., 1989. Genetic Algorithms in Searc h, Optimization, and Mac hine Learning. A ddisonW esley Reading, MA. Goldratt, E., 1999. Theory of Constrain ts. North Riv er Press, Great Barrington, MA. Hall, R., 1983. Zero In v en tories. McGra wHill T rade, New Y ork. Hamac her, H. and T ufek ci, S., 1984. Algebraic o ws and timecost tradeo problems. Annals of Discrete Mathematics 19, 165182. Hamiez, J. and Hao, J., 2002. Scatter searc h for graph coloring, In: Collet, P ., F onlupt, C., Hao, J., Lutton, E. and Sc hoenauer, M. (eds.), Selected P apers from the Fifth In ternational Conference on Articial Ev olution, V ol. 2310 of L e ctur e notes in c omputer scienc e Springer V erlag, Le Creusot, FR, pp. 168 179. Harris, F., 1913. Ho w man y parts to mak e at once. F actory the Magazine of Managemen t 10, 135136,152. Holland, J., 1975. A daptation in Natural and Articial Systems. The Univ ersit y of Mic higan Press, Ann Arbor, MI. Hopp, W. and Spearman, M., 2000. F actory Ph ysics: F oundations of Man ufacturing Managemen t. McGra wHill Irwin, New Y ork.
PAGE 244
228 Inman, R. and Buln, R., 1991. Sequencing JIT mixedmodel assem bly lines. Managemen t Science 37(7), 901904. Inman, R. and Buln, R., 1992. Quic k and dirt y sequencing for mixedmodel m ultilev el JIT systems. In ternational Journal of Production Researc h 30(9), 2011 2018. Jordan, C., 1996. Batc hing and Sc heduling: Models and Methods for Sev eral Problem Classes. SpringerV erlag, London, UK. Karmark ar, U. and Sc hrage, L., 1985. The deterministic dynamic product cycling problem. Operations Researc h 33(2), 326345. Karp, R., 1972. Reducibilit y among com binatorial problems, In: Miller, R. and Thatc her, J. (eds.), Complexit y of Computer Computations. Plen um Press, New Y ork, pp. 85103. Kelly J., Golden, B. and Assad, A., 1993. Largescale con trolled rounding using tabu searc h with strategic oscillation. Annals of Operations Researc h 41, 69 84. Kubiak, W., 1993. Minimizing v ariation of production rates in justintime systems: A surv ey European Journal of Operational Researc h 66, 259271. Kubiak, W. and Sethi, S., 1991. A note on lev el sc hedules for mixedmodel assembly lines in justintime production systems. Managemen t Science 37(1), 121 122. Kubiak, W., Steiner, G. and Y eomans, J. S., 1997. Optimal lev el sc hedules for mixedmodel, m ultilev el justintime assem bly systems. Annals of Operations Researc h 69, 241259. Kuhn, H., 1955. The Hungarian method for the assignmen t problem. Na v al Researc h Logistics Quarterly 2, 8397. Leu, Y., Huang, P and Russell, R., 1997. Using beam searc h tec hniques for sequencing mixedmodel assem bly lines. Annals of Operations Researc h 70, 379397. Leu, Y., Matheson, L. and Rees, L., 1996. Sequencing mixedmodel assem bly lines with genetic algorithms. Computers in Engineering 30(4), 10271036. Lumm us, R., 1995. A sim ulation analysis of sequencing alternativ es for JIT lines using k an bans. Journal of Operations Managemen t 13, 183191. Mansouri, S., 2005. A m ultiobjectiv e genetic algorithm for mixedmodel sequencing on JIT lines. European Journal of Operational Researc h 167, 696716.
PAGE 245
229 McMullen, P ., 1998. JIT sequencing for mixedmodel assem bly lines with setups using tabu searc h. Production Planning & Con trol 9(5), 504510. McMullen, P ., 2001a. An an t colon y optimization approac h to addressing a JIT sequencing problem with m ultiple objectiv es. Articial In teligence in Engineering 15, 309317. McMullen, P ., 2001b. An ecien t fron tier approac h to addressing JIT sequencing problems with setups via searc h heuristics. Computers & Industrial Engineering 41, 335353. McMullen, P ., 2001c. A k ohonen selforganizing map approac h to addressing a m ultiple objectiv e, mixedmodel JIT sequencing problem. In ternational Journal of Production Economics 72, 5971. McMullen, P and F razier, G., 2000. A sim ulated annealing approac h to mixedmodel sequencing with m ultiple objectiv es on a justintime line. IIE T ransactions 32, 679686. McMullen, P and T arasewic h, P ., 2005. A beam searc h heuristic mixedmodel sc heduling with setups. In ternational Journal of Production Economics 96, 273283. McMullen, P ., T arasewic h, P and F razier, G., 2000. Using genetic algorithms to solv e the m ultiproduct JIT sequencing problem with setups. In ternational Journal of Production Researc h 38(12), 26532670. Milten burg, J., 1989. Lev el sc hedules for mixedmodel assem bly lines in justintime production systems. Managemen t Science 35(2), 192207. Milten burg, J. and Goldstein, T., 1991. Dev eloping production sc hedules whic h balance part usage and smooth production loads for justintime production systems. Na v al Researc h Logistics 38, 893910. Milten burg, J. and Sinnamon, G., 1989. Sequencing mixedmodel m ultilev el justintime production systems. In ternational Journal of Production Researc h 27(9), 14871509. Milten burg, J. and Sinnamon, G., 1992. Algorithms for sc heduling m ultilev el justintime production systems. IIE T ransactions 24(2), 121130. Milten burg, J. and Sinnamon, G., 1995. Revisiting the mixedmodel m ultilev el justintime sc heduling problem. In ternational Journal of Production Researc h 33(7), 20492052. Milten burg, J., Steiner, G. and Y eomans, S., 1990. A dynamic programming algorithm for sc heduling mixedmodel justintime production systems. Mathematical and Computer Modeling 13(3), 5766.
PAGE 246
230 Monden, Y., 1998. T o y ota Production System: An In tegrated Approac h to JustInTime, third edn. Engineering & Managemen t Press, Norcross, GA. Morabito, M. and Kraus, M., 1995. A note on sc heduling mixedmodel m ultilev el justintime production systems. In ternational Journal of Production Researc h 33(7), 20612063. Morin, T. and Marsten, R., 1976. Branc handbound strategies for dynamic programming. Operations Researc h 24(4), 611627. Nahmias, S., 2001. Production and Operations Analysis, fourth edn. McGra wHill Irwin, New Y ork. Nic holas, J., 1998. Competitiv e Man ufacturing Managemen t: Con tin uous Impro v emen t, Lean Production and CustomerF ocused Qualit y McGra wHill Irwin, New Y ork. Ohno, T., 1988. T o y ota Production System: Bey ond LargeScale Production. Productivit y Press, New Y ork. Oliv eira, C., P ardalos, P and Resende, M., 2003. GRASP with pathrelinking for the QAP. Proceedings of the Fifth Metaheuristics In ternational Conference, Klu w er A cademic Publishers, Ky oto, Japan, pp. 57.157.6. Osman, I. and Laporte, G., 1996. Metaheuristics: A bibliograph y Annals of Operations Researc h 63, 513623. Pinedo, M., 2001. Sc heduling: Theory Algorithms and Systems, second edn. Pren tice Hall, New Y ork. P otts, C. and v an W assenho v e, L., 1992. In tegrating sc heduling with batc hing and lotsizing: A review of algorithms and complexit y The Journal of the Operational Researc h Societ y 43(5), 395406. Reev es, C., 1993. Modern Heuristic T ec hniques for Com binatorial Problems. John Wiley & Sons, Inc., London, UK. Reev es, C., 1997. Genetic algorithms for the operations researc her. INF ORMS Journal on Computing 9(3), 231250. Salomon, M., Kroon, L., Kuik, R. and W assenho v e, L., 1991. Some extensions of the discrete lotsizing and sc heduling problem. Managemen t Science 37(7), 801 812. Souza, M., Duhamel, C. and Ribeiro, C., 2003. A GRASP with pathrelinking heuristic for the capacitated minim um spanning tree problem, In: Resende, M. and de Souza, J. (eds.), Metaheuristics: Computer Decision Making. Klu w er A cademic Publishers, Boston, pp. 627658.
PAGE 247
231 Steiner, G. and Y eomans, S., 1993. Lev el sc hedules for mixedmodel, justintime processes. Managemen t Science 39(6), 728735. Suerie, C. and Stadtler, H., 2003. The capacitated lotsizing problem with link ed lot sizes. Managemen t Science 49(8), 10391054. W agner, H. and Whitin, T., 1958. Dynamic v ersion of the economic lot size model. Managemen t Science 5(1), 8996. W alleigh, R., 1986. What's y our excuse for not using JIT?, In: Mortimer, J. (ed.), JustInTime: An Executiv e Brieng. IFS (Publications) Ltd., Bedford, UK, pp. 119. Y agiura, M., Ibaraki, T. and Glo v er, F., 2002. A path relinking approac h for the generalized assignmen t problem. Proceedings of the In ternational Symposium on Sc heduling, V ol. 1, Hamamatsu, Japan, pp. 105108. Y a vuz, M. and T ufek ci, S., 2004a. The singlelev el batc h production smoothing problem: An analysis and a heuristic solution. T ec hnical Report 05, Departmen t of Industrial and Systems Engineering, Univ ersit y of Florida, Gainesville, FL. Y a vuz, M. and T ufek ci, S., 2004b. Some lo w er bounds on the mixedmodel lev elsc heduling problems. Proceedings of the 10th In ternational Conference on Industry Engineering and Managemen t Systems, Southeastern Publishing Compan y Cocoa Beac h, FL, pp. 385395. Zhang, Y., Luh, P ., Y oneda, K., Kano, T. and Ky o y a, Y., 2000. Mixedmodel assem bly line sc heduling using the Lagrangian relaxation tec hnique. IIE T ransactions 32, 125134.
PAGE 248
APPENDIX A DERIVING OBJECTIVES FR OM GO ALS F OR THE SINGLELEVEL MODELS A.1 DerivingF 1F romG 1The objectiv e function for the2 ndphase problem of the SMSL and FSSL models isZ = Q X k =1 n X i =1 b 2i ( x i;k k q i Q ) 2 :(A.1) If only a single unit of a giv en product is sc heduled, w e will denote the con tribution of this product to the o v erall objectiv e function b yF i [ q i =1] ( s ), wheresis the stage where this product is assigned. Then, this con tribution can be calculated as follo ws.F i [ q i =1] ( s )= b 2i s 1 X k =1 (0 k 1 Q ) 2 + Q X k = s (1 k 1 Q ) 2 # = b 2i Q 2 s 1 X k =1 k 2 + Q X k = s ( Q k ) 2 # = b 2i Q 2 s 1 X k =1 k 2 + Q X k = s k 2 Q X k = s 2 Qk + Q X k = s Q 2 # = b 2i Q 2 Q ( Q +1)(2 Q +1) 6 2 Q Q ( Q +1) 2 ( s 1) s 2 +( Q s +1) Q 2 = b 2i Q ( Q 1)(2 Q 1) 6 +( s 1)( s Q ) (A.2) No w w e extend this result to the general case, whereq i 1. The con tribution is denoted b yF i [ q i ] ( s ), wheresis a v ector, indicating stages associated with eac h cop y of the product. 232
PAGE 249
233F i [ q i ] ( s )= b 2i 24 q i X r =1 s r 1 X k = s r 1 ( r 1 k q i Q ) 2 + Q X k = s q i ( q i k q i Q ) 2 35 = b 2i 24 q i X r =1 s r 1 X k = s r 1 ( r 1) 2 2( r 1) q i Q k +( q i Q ) 2 k 2 + Q X k = s q i q 2 i 2 q 2 i Q k +( q i Q ) 2 k 2 35 = b 2i q i X r =1 ( r 1) 2 ( s r s r 1 ) 2( r 1) q i Q ( s r 1)( s r ) 2 ( s r 1 1)( s r 1 ) 2 # + b 2i q i X r =1 q i Q 2 ( s r 1)( s r )(2 s r 1) 6 ( s r 1 1)( s r 1 )(2 s r 1 1) 6 ## + b 2i q 2 i ( Q s q i +1) 2 q 2 i Q Q ( Q +1) 2 ( s q i 1) s q i 2 + b 2i q i Q 2 Q ( Q +1)(2 Q +1) 6 ( s q i 1) s q i (2 s q i 1) 6 # = b 2i q 2 i ( Q 1)(2 Q 1) 6 Q + 1 Q q i X r =1 ( s r 1)( q i s r (2 r 1) Q ) #(A.3) No w, in order to deriv e a lo w er bound to the o v erall objectiv e function one migh t w an t to nd ideal positions for eac h cop y of the product. F rom equation ( A.3 ) w e can sa y:@ 2 F i [ q i ] @s u @s v = 8>><>>: 2 q i b 2i Q ;ifu = v 0 ;o/w This propert y is a direct result of the fact that equation ( A.3 ) con tains no crossproduct terms. And leads us to claim that the Hessian Matrix ofF i [ q i ] ( s )is positiv e denite, andF i [ q i ] ( s )is strictly con v ex. So, using the rst deriv ativ es, w e can nd ideal positions for eac h cop y of the product. If w e denote the ideal position forr thcop y withs r, then@F i [ q i ] ( s ) @s r = b 2i [2 q i s r ((2 r 1) Q + q i )]=0 ) s r = (2 r 1) Q + q i 2 q i :(A.4)
PAGE 250
234 If w e plug ( A.4 ) in to ( A.3 ) w e get the follo wing.F i [ q i ] = b 2i q 2 i ( Q 1)(2 Q 1) 6 Q + 1 Q q i X r =1 ( (2 r 1) Q + q i 2 q i 1)( q i ( (2 r 1) Q + q i 2 q i ) (2 r 1) Q ) # = b 2i Q 2 q 2 i 12 Q(A.5) Equation ( A.5 ) giv es us the least possible con tribution of productito the o v erall objectiv e function ( A.1 Summing these con tributions o v er the products will giv e us a lo w er bound for the objectiv e function. If w e denote this lo w er bound withF 1, then w e obtain the follo wing.F 1 = n X i =1 F i [ q i ] = n X i =1 b 2i ( Q 2 q 2 i ) 12 Q(A.6) A.2 ExploitingG 2Goal 2 suggests to maximize total n um ber of batc hes. This can be form ulated v ery easily .G 2 =max Q =max n X i =1 q i(A.7) A.3 ExploitingG 3Minimizing length of xedlength timebuc k et is iden tical to maximizing total n um ber of batc hes where total a v ailable time is a constan t.G 3 =min t =min T Q(A.8)) G 3 G 2
PAGE 251
235 A.4 ExploitingG 4What goal 4 suggests can be in terpreted in 2 w a ys. First one is a direct form ulation.G 4 =max f t i t j i =1 ; 2 ;:::;n g =max f t i Q T j i =1 ; 2 ;:::;n g(A.9) A dieren t approac h is using a state v ariable as the decision v ariable.G 4 =max (A.10) S.T. t i ; 8 i(A.11) A.5 ExploitingG 5What goal 5 suggests is using the a v erage utilization ratio instead of the w orst one. This goal can be formalized as follo ws.G 5 =max n P i =1 t i t n =max n P i =1 t i n P i =1 q i nT(A.12) A.6 ExploitingG 6F or a giv en product (i)q i b iunits will be produced within the planning horizon. Assuming the demand for the product is uniformly distributed o v er the horizon, demand for eac h stage isd i =Q. An ideal sequence assigns productito ev eryQ=q th istage. If (b i) is the decision v ariable, higher v alues ofb iresult in producingirarely On the con trary smaller v alues ofb iresult in more frequen t production ofi. Assuming con tin uous demand and replenishmen t only when in v en tory lev el reac hes
PAGE 252
236 0, giv es us the a v erage lev el in v en tory asb i = 2.G 6 =min n X i =1 b i(A.13) Ifb iis not treated as the decision v ariable, but a function ofq i, then an appro ximate form ulation of goal 6 migh t be the follo wing.G 6 =max n X i =1 q i(A.14)) G 6 G 2
PAGE 253
APPENDIX B LO WER BOUND F OR THE FUTURE P A TH IN THE DP F ORMULA TION OF THE SINGLELEVEL MODELS If w e relax the in tegralit y requiremen t ofq iand ignore the constrain ts, w e get: MinimizeF 0 = Q X i 2 N d i q i 2 P i 2 N d 2i QW e rst consider problems withn =2products. Since the DP is solv ed for a giv enQv alue, w e can express the objectiv e function in terms of only one v ariable,q 1, as follo ws. MinimizeF 0 = Q d 1 q 1 2 + d 2 Q q 1 2 P i 2 N d 2i QThe second part of this function is constan t. Therefore, w e can drop the second part, as w ell as theQconstan t in the rst part, and reexpress the objectiv e function. MinimizeF 0 = d 1 q 1 2 + d 2 Q q 1 2In order to nd the minim um v alue of this function, w e c hec k the rst and second deriv ativ e with respect toq 1.d F 0 d q 1 = 2 d 21 q 3 1 + 2 d 22 ( Q q 1 ) 3 d 2 F 0 d q 2 1 = 6 d 21 q 4 1 + 6 d 22 ( Q q 1 ) 4 > 0 ; 0
PAGE 254
238 deriv ativ e to zero.d F 0 d q 1 =0 ) 2 d 21 q 3 1 + 2 d 22 ( Q q 1 ) 3 =0 ) 2 d 21 q 3 1 + 2 d 22 ( q 2 ) 3 =0 ) q 1 q 2 = d 1 d 2 (2 = 3)No w, consider problems withn> 2. If w e decide on all but t w o of theq iv alues then the nal t w o v ariable v alues can be set to their optimal lev els using the abo v e relationship. Since w e can select the t w o v ariables that will be decided on last, arbitrarily then the relationship can be generalized ton> 2case, with relativ e ease. The optimal lev el of v ariableq iis found as follo ws.q j = q i d j d i (2 = 3) ; 8 j ) Q = X j 2 N q j = q i X j 2 N d j d i (2 = 3) ) q i = Q P j 2 N d j d i (2 = 3) ; 8 iThis result can be used in devising a lo w er bound to the solution of a problem with a kno wnQv alue. F or a giv en state( i;R i ), w e kno w the v alues of the v ariablesq 1 ;q 2 ;::;q i, and w e ha v e to allocateQ R ibatc hes to the remaining v ariablesq i +1 ;q i +2 ;::;q n. The abo v e result is generalized to this situation asq l =( Q R i ) = ( P nj = i +1 d j d i (2 = 3) ), for alll>i. W e deneq 0 i;l =( d l =d i ) (2 = 3) = ( P nj = i +1 ( d j =d i ) (2 = 3) ),8 i =0 ; 1 ;::;n 1 ;l>ias the optimal v alues ofq iratios in a partial solution, where the rstiv ariables are xed andq 0 i;lis the ratio ofq lin the remaining part of the solution. F or a giv en state
PAGE 255
239( i;R i ), the optimal solution to the relaxed problem for the rest of the v ariables isG ( i;R i )= Q n X l = i +1 d l ( Q R i ) q 0 i;l 2 n P l = i +1 d 2l Q :W e simplify this result as follo ws.G ( i;R i )= Q ( Q R i ) 2 U i V i Q ;whereU i = n X l = i +1 d l q 0 i;l 2andV i = n X l = i +1 d 2l :
PAGE 256
APPENDIX C FINE TUNING THE MET AHEURISTIC METHODS F OR THE SMSL MODEL C.1 Strategic Oscillation T able C1: Analysis of the P arametersRangeandIterativefor the SO Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 Range 1 4.279 7.446 0.123% 2.208% 2 5.085 11.550 0.096% 2.208% 3 5.683 12.539 0.100% 2.208% Iterative 1 4.146 7.435 0.125% 2.208% 2 4.980 9.071 0.092% 2.208% 3 5.820 12.539 0.101% 2.208% 15 Range 1 14.840 34.887 0.031% 0.655% 2 18.813 39.607 0.028% 0.655% 3 41.436 48.848 0.035% 0.655% Iterative 1 16.310 31.775 0.030% 0.655% 2 18.667 39.938 0.032% 0.655% 3 20.112 48.848 0.032% 0.655% 20 Range 1 36.025 65.507 0.225% 2.594% 2 44.177 77.432 0.223% 2.594% 3 50.605 96.066 0.182% 2.594% Iterative 1 38.620 62.653 0.228% 2.594% 2 45.271 87.279 0.209% 2.594% 3 46.960 96.066 0.193% 2.594% 240
PAGE 257
241 T able C2: t test Results of the P arametersRangeandIterativefor the SO Method P arameter H 0 H Alt t Sig. (2tailed) Range t1 = t2 t1 < t2 7.924 0.000 d1 = d2 d1 > d2 2.535 0.012 t1 = t3 t1 < t3 8.340 0.000 d1 = d3 d1 > d3 2.691 0.007 t2 = t3 t2 < t3 7.174 0.000 d2 = d3 d2 > d3 0.505 0.614 Iterative t1 = t2 t1 < t2 6.760 0.000 d1 = d2 d1 > d2 2.845 0.005 t1 = t3 t1 < t3 6.700 0.000 d1 = d3 d1 > d3 2.429 0.015 t2 = t3 t2 < t3 3.826 0.000 d2 = d3 d2 > d3 0.135 0.893 T able C3: Analysis of the P arametersMaxIters,NFMandNIMfor the SO Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 MaxIters 1 5.159 10.151 0.071% 1.832% 2 5.154 10.196 0.071% 1.832% NFM 1 5.275 10.196 0.071% 1.832% 2 5.037 9.367 0.072% 1.832% NIM 1 5.154 10.072 0.071% 1.832% 2 5.159 10.196 0.071% 1.832% 15 MaxIters 1 20.299 46.450 0.051% 1.169% 2 22.244 57.317 0.051% 1.169% NFM 1 23.601 57.317 0.051% 1.169% 2 18.942 40.674 0.051% 1.169% NIM 1 21.291 57.282 0.051% 1.169% 2 21.253 57.317 0.051% 1.169% 20 MaxIters 1 46.189 121.986 0.191% 2.594% 2 51.440 162.790 0.191% 2.594% NFM 1 60.367 162.790 0.188% 2.594% 2 37.263 87.561 0.194% 2.594% NIM 1 48.820 162.722 0.191% 2.594% 2 48.810 162.790 0.191% 2.594%
PAGE 258
242 T able C4: t test Results of the P arametersMaxIters,NFMandNIMfor the SO Method P arameter H 0 H Alt t Sig. (2tailed) MaxIters t1 = t2 t1 < t2 13.386 0.000 d1 = d2 n/a NFM t1 = t2 t1 > t2 13.596 0.000 d1 = d2 d1 < d2 2.246 0.025 NIM t1 = t2 t1 < t2 1.369 0.172 d1 = d2 n/a T able C5: Analysis of the P arametersNFMandRelativeImprovementfor the SO Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 NFM 1 9.199 14.871 0.067% 1.832% 2 9.394 15.241 0.071% 1.832% 3 9.564 16.093 0.080% 1.832% RelativeImprovement 1 9.391 16.093 0.073% 1.832% 2 9.381 16.092 0.073% 1.832% 15 NFM 1 37.394 58.253 0.047% 1.169% 2 39.953 68.479 0.051% 1.169% 3 42.172 86.895 0.051% 1.169% RelativeImprovement 1 39.843 86.875 0.050% 1.169% 2 39.836 86.895 0.050% 1.169% 20 NFM 1 91.158 199.247 0.180% 2.594% 2 100.780 199.207 0.188% 2.594% 3 109.534 199.197 0.187% 2.594% RelativeImprovement 1 100.487 199.247 0.185% 2.594% 2 100.494 199.207 0.185% 2.594% T able C6: t test Results of the P arametersNFMandRelativeImprovementfor the SO Method P arameter H 0 H Alt t Sig. (2tailed) NFM t1 = t2 t1 < t2 8.924 0.000 d1 = d2 d1 < d2 2.548 0.011 t1 = t3 t1 < t3 9.136 0.000 d1 = d3 d1 < d3 3.061 0.002 t2 = t3 t2 < t3 8.747 0.000 d2 = d3 d2 < d3 2.069 0.039 RelativeImprovement t1 = t2 t0 > t1 0.234 0.815 d1 = d2 n/a
PAGE 259
243 C.2 Scatter Searc h T able C7: Analysis of the P arametersPSHMethodsandDiversificationfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 PSHMethods 1 12.106 61.988 0.298% 8.234% 2 13.323 63.811 0.184% 3.098% 3 16.083 67.406 0.153% 3.098% Diversification 1 2.770 6.309 0.307% 8.234% 2 34.007 67.406 0.214% 3.098% 3 4.867 11.435 0.163% 2.687% 4 13.707 56.360 0.163% 2.687% 15 PSHMethods 1 75.654 375.939 0.351% 6.120% 2 78.450 383.231 0.268% 6.120% 3 89.209 397.692 0.172% 6.120% Diversification 1 9.802 24.025 0.379% 6.120% 2 228.123 397.692 0.321% 6.120% 3 18.980 41.159 0.177% 6.120% 4 67.512 329.013 0.177% 6.120% 20 PSHMethods 1 247.558 1,295.890 0.644% 5.044% 2 257.496 1,313.720 0.600% 5.044% 3 272.942 1,348.730 0.432% 5.044% Diversification 1 22.276 57.233 0.684% 5.044% 2 761.040 1,348.730 0.637% 5.044% 3 46.220 104.550 0.460% 5.044% 4 207.793 827.220 0.455% 5.044%
PAGE 260
244 T able C8: t test Results of the P arametersPSHMethodsandDiversificationfor the SS Method P arameter H 0 H Alt t Sig. (2tailed) Diversification t1 = t3 t1 < t3 19.019 0.000 d1 = d3 d1 > d3 7.810 0.000 PSHMethods t1 = t2 t1 < t2 3.961 0.000 d1 = d2 d1 > d2 4.574 0.000 t2 = t3 t2 < t3 7.500 0.000 d2 = d3 d2 > d3 9.177 0.000 T able C9: Analysis of the P arametersLSinPreProcessandLStoRefSetPPfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 LSinPreProcess 0 2.314 2.725 0.195% 3.098% 1 4.403 8.744 0.139% 2.076% 2 15.252 147.421 0.069% 2.076% LStoRefSetPP 1 7.513 147.421 0.134% 3.098% 2 7.134 146.489 0.134% 3.098% 15 LSinPreProcess 0 7.941 9.634 0.343% 6.120% 1 17.143 37.795 0.158% 6.120% 2 57.520 305.218 0.054% 1.382% LStoRefSetPP 1 28.396 305.218 0.185% 6.120% 2 26.674 304.497 0.185% 6.120% 20 LSinPreProcess 0 19.012 22.954 0.573% 5.044% 1 42.204 75.228 0.383% 5.044% 2 249.216 1,453.060 0.241% 5.044% LStoRefSetPP 1 105.391 1,453.060 0.399% 5.044% 2 101.564 1,432.880 0.399% 5.044% T able C10: t test Results of the P arametersLSinPreProcessandLStoRefSetPPfor the SS Method P arameter H 0 H Alt t Sig. (2tailed) LSinPreProcess t0 = t1 t0 < t1 16.867 0.000 d0 = d1 d0 > d1 7.417 0.000 LStoRefSetPP t1 = t2 t1 > t2 8.069 0.000 d1 = d2 d1 > d2 0.431 0.667
PAGE 261
245 T able C11: Analysis of the P arametersLSinIterationsandLStoRefSetItersfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 LSinIterations 0 2.942 13.070 0.138% 2.076% 1 4.072 17.519 0.098% 2.076% 2 7.500 23.930 0.054% 2.076% LStoRefSetIters 1 3.856 23.930 0.101% 2.076% 2 3.848 22.597 0.093% 2.076% 15 LSinIterations 0 12.658 43.230 0.130% 6.120% 1 17.275 49.588 0.035% 1.382% 2 36.855 162.010 0.009% 0.277% LStoRefSetIters 1 17.671 162.010 0.058% 6.120% 2 17.413 129.619 0.058% 6.120% 20 LSinIterations 0 32.446 92.863 0.277% 3.445% 1 44.687 230.579 0.218% 3.445% 2 80.902 434.240 0.073% 1.805% LStoRefSetIters 1 45.957 328.316 0.187% 3.445% 2 45.899 434.240 0.191% 3.445% T able C12: t test Results of the P arametersLSinIterationsandLStoRefSetItersfor the SS Method P arameter H 0 H Alt t Sig. (2tailed) LSinIterations t0 = t1 t0 < t1 14.714 0.000 d0 = d1 d0 > d1 5.174 0.000 t1 = t2 t1 < t2 14.605 0.000 d1 = d2 d1 > d2 6.751 0.000 LStoRefSetIters t1 = t2 t1 > t2 1.536 0.125 d1 = d2 d1 > d2 1.000 0.318
PAGE 262
246 T able C13: Analysis of the P arametersSubsetSizeandNICfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 NIC 1 7.727 48.260 0.101% 2.076% 3 15.629 107.324 0.087% 2.076% 5 25.325 289.800 0.085% 2.076% 10 48.944 512.059 0.080% 2.076% Subset Size 2 14.862 130.657 0.036% 2.076% 3 31.899 520.642 0.118% 2.076% 4 31.423 512.059 0.118% 2.076% 5+ 19.440 246.311 0.081% 2.076% 15 NIC 1 23.920 86.339 0.008% 0.277% 3 39.705 155.232 0.006% 0.277% 5 55.174 241.001 0.006% 0.277% 10 94.651 393.210 0.006% 0.277% 20 NIC 1 69.964 226.323 0.031% 0.781% 3 139.118 676.100 0.027% 0.781% 5 215.881 1061.360 0.020% 0.781% 10 377.286 2494.110 0.020% 0.781% T able C14: Analysis of the P arameterNECfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 NEC 0 6.390 17.205 0.066% 2.076% 1 11.262 31.255 0.028% 2.076% 3 19.830 61.361 0.026% 2.076% 5 29.316 108.061 0.003% 0.228% 15 NEC 0 32.408 130.268 0.009% 0.277% 1 61.512 394.856 0.006% 0.277% 3 108.865 676.045 0.003% 0.141% 5 145.207 918.805 0.003% 0.141% 20 NEC 0 65.571 184.670 0.032% 0.781% 1 107.767 373.697 0.020% 0.781% 3 180.657 727.925 0.020% 0.781% 5 254.272 1165.760 0.020% 0.781%
PAGE 263
247 T able C15: t test Results of the P arameterNECfor the SS Method P arameter H 0 H Alt t Sig. (2tailed) NEC t0 = t1 t0 < t1 5.892 0.000 d0 = d1 d0 > d1 1.998 0.047 t1 = t3 t1 < t3 6.278 0.000 d1 = d3 d1 > d3 1.399 0.163 t3 = t5 t3 < t5 5.615 0.000 d3 = d5 d3 > d5 1.000 0.318 d1 = d5 d1 > d5 1.214 0.226 T able C16: Analysis of the P arameterbfor the SS Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 b 1 9.500 22.750 0.021% 1.156% 2 11.419 29.982 0.018% 1.156% 3 16.193 37.895 0.008% 0.274% 15 b 1 36.519 178.378 0.009% 0.277% 2 41.686 214.246 0.006% 0.277% 3 66.465 298.973 0.004% 0.141% 20 b 1 99.137 387.514 0.027% 0.781% 2 111.838 517.114 0.022% 0.781% 3 158.321 719.810 0.021% 0.781% T able C17: t test Results of the P arameterbfor the SS Method P arameter H 0 H Alt t Sig. (2tailed) b t1 = t2 t1 < t2 10.266 0.000 d1 = d2 d1 > d2 3.316 0.001 t2 = t3 t2 < t3 9.824 0.000 d2 = d3 d2 > d3 2.438 0.015
PAGE 264
248 C.3 P ath Relinking T able C18: Analysis of the P arametersLSinIterationsandLStoRefSetItersfor the PR Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 LSinIterations 0 5.509 16.533 0.139% 2.076% 1 7.019 21.711 0.018% 1.156% LStoRefSetIters false 6.263 21.700 0.078% 2.076% true 6.265 21.711 0.078% 2.076% 15 LSinIterations 0 22.895 50.113 0.158% 6.120% 1 32.773 86.434 0.003% 0.142% LStoRefSetIters false 27.834 86.434 0.081% 6.120% true 27.834 86.434 0.081% 6.120% 20 LSinIterations 0 64.829 192.808 0.383% 5.044% 1 107.142 320.700 0.032% 0.646% LStoRefSetIters false 85.986 320.699 0.208% 5.044% true 85.985 320.700 0.208% 5.044% T able C19: t test Results of the P arametersLSinIterationsandLStoRefSetItersfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) LSinIterations t0 = t1 t0 > t1 9.746 0.000 d0 = d1 d0 > d1 5.430 0.000 LStoRefSetIters t1 = t2 t1 < t2 0.111 0.911 d1 = d2 n/a RelativeImprovement t1 = t2 t1 < t2 0.422 0.673 d1 = d2 n/a
PAGE 265
249 T able C20: Analysis of the P arametersbandNTSfor the PR Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 b 1 3.889 12.229 0.038% 1.156% 2 6.591 21.511 0.018% 1.156% 3 8.357 33.166 0.018% 1.156% NTS 1 5.870 23.172 0.025% 1.156% 2 6.283 28.040 0.025% 1.156% 3 6.685 33.166 0.025% 1.156% 15 b 1 18.994 62.990 0.017% 0.737% 2 29.331 86.464 0.003% 0.142% 3 38.699 149.404 0.002% 0.142% NTS 1 25.439 99.473 0.008% 0.737% 2 29.057 123.898 0.008% 0.737% 3 32.529 149.404 0.008% 0.737% 20 b 1 64.047 327.711 0.062% 0.838% 2 92.999 320.719 0.033% 0.646% 3 125.229 423.648 0.016% 0.589% NTS 1 77.517 252.993 0.039% 0.838% 2 96.115 353.808 0.036% 0.838% 3 108.643 423.648 0.036% 0.838% T able C21: t test Results of the P arameterNTSfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) NTS t1 = t2 t1 < t2 13.397 0.000 d1 = d2 d1 > d2 6.298 0.000 t2 = t3 t2 < t3 11.430 0.000 d2 = d3 d2 > d3 1.401 0.161
PAGE 266
250 T able C22: Extended Analysis of the P arameterbfor the PR Method Time Deviation n P arameter Lev el A vg. Max. A vg. Max. 10 b 1 3.787 6.540 0.038% 1.156% 2 5.371 11.277 0.018% 1.156% 3 5.832 15.857 0.018% 1.156% 4 6.393 18.000 0.018% 1.156% 5 7.586 25.981 0.018% 1.156% 15 b 1 13.345 30.017 0.017% 0.737% 2 20.127 40.401 0.003% 0.142% 3 24.309 56.303 0.002% 0.142% 4 29.519 61.891 0.002% 0.142% 5 34.398 72.626 0.002% 0.142% 20 b 1 41.222 146.525 0.073% 0.838% 2 66.442 163.477 0.036% 0.646% 3 77.908 222.773 0.023% 0.646% 4 86.128 253.496 0.018% 0.643% 5 111.024 324.960 0.013% 0.589% T able C23: t test Results of the P arameterbfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) b t1 = t2 t1 < t2 8.019 0.000 d1 = d2 d1 > d2 3.594 0.000 t1 = t3 t1 < t3 7.931 0.000 d1 = d3 d1 > d3 3.954 0.000 t1 = t4 t1 < t4 7.842 0.000 d1 = d4 d1 > d4 3.994 0.000 t1 = t5 t1 < t5 7.962 0.000 d1 = d5 d1 > d5 4.128 0.000 t2 = t3 t2 < t3 5.701 0.000 d2 = d3 d2 > d3 2.015 0.045 t2 = t4 t2 < t4 6.633 0.000 d2 = d4 d2 > d4 2.130 0.034 t2 = t5 t2 < t5 6.955 0.000 d2 = d5 d2 > d5 2.356 0.019 t3 = t4 t3 < t4 3.951 0.000 d3 = d4 d3 > d4 0.534 0.594 t3 = t5 t3 < t5 5.701 0.000 d3 = d5 d3 > d5 1.316 0.189 t4 = t5 t4 < t5 4.286 0.000 d4 = d5 d4 > d5 1.000 0.318
PAGE 267
APPENDIX D FINE TUNING THE P A TH RELINKING METHOD IN THE FSSL MODEL T able D1: Analysis of the P arametersbandNTSfor the PR Method Time Deviation m P arameter Lev el A vg. Max. A vg. Max. 2 b 20 7.667 31.233 0.0045% 0.296% 30 10.019 52.780 0.0045% 0.296% 40 12.053 65.936 0.0045% 0.296% NTS 4 8.194 33.855 0.0045% 0.296% 6 9.368 43.074 0.0045% 0.296% 8 10.508 56.406 0.0045% 0.296% 10 11.583 65.936 0.0045% 0.296% 5 b 20 9.660 39.175 0.0196% 1.619% 30 12.129 49.612 0.0106% 1.619% 40 14.781 65.459 0.0106% 1.619% NTS 4 9.966 36.334 0.0196% 1.619% 6 11.425 46.798 0.0196% 1.619% 8 12.938 56.592 0.0076% 1.619% 10 14.432 65.459 0.0076% 1.619% 10 b 20 14.139 56.248 0.0152% 1.981% 30 18.416 114.659 0.0152% 1.981% 40 24.117 211.652 0.0020% 0.149% NTS 4 15.163 92.918 0.0196% 1.981% 6 17.564 135.330 0.0196% 1.981% 8 20.221 174.277 0.0020% 0.149% 10 22.615 211.652 0.0020% 0.149% 251
PAGE 268
252 T able D2: t test Results of the P arametersbandNTSfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) b t20 = t30 t20 < t30 20.561 0.000 d20 = d30 d20 > d30 1.415 0.157 t30 = t40 t30 < t40 19.144 0.000 d30 = d40 d30 > d40 1.415 0.157 NTS t4 = t6 t4 < t6 14.977 0.000 d4 = d6 n/a t6 = t8 t6 < t8 16.659 0.000 d6 = d8 d6 > d8 1.944 0.047 t8 = t10 t8 < t10 16.380 0.000 d8 = d10 n/a
PAGE 269
APPENDIX E DERIVING OBJECTIVES FR OM GO ALS F OR THE MUL TILEVEL MODELS The objectiv e function for the2 ndphase problem of the SMML and FSML models is giv en belo w.Z = w l Q l X k =1 n l X i =1 b 2l;i ( x l;i;k k q l;i Q l ) 2 + L X u = l +1 w u Q u X k =1 n u X v =1 x u;v;k k d u;v Q l 2(E.1) The rst part of the objectiv e function is iden tical to that of the singlelev el model. Therefore, the lo w er bound dev eloped in Appendix A can be used for this part, if it is m ultiplied with the proper lev el w eigh tw l. W e focus on the second part and letZ ( u;v;k )denote the deviation measured for partvof lev eluat thek thposition.Z ( u;v;k )= x u;v;k k d u;v Q l 2Then, if productiis assigned to thek thposition, the total deviation of the last t w o positions is as follo ws.Z ( u;v;k 1)+ Z ( u;v;k )= x u;v;k 1 ( k 1) d u;v Q l 2 + x u;v;k k d u;v Q l 2 =2 x u;v;k 1 ( k 1) d u;v Q l 2 + b l;i r u;v;l;i d u;v Q l 2 +2 x u;v;k 1 ( k 1) d u;v Q l b l;i r u;v;l;i d u;v Q l =2 A 2 + B 2 +2 ABWhere,A = x u;v;k 1 ( k 1) d u;v Q l andB = b l;i r u;v;l;i d u;v Q l The resulting function is con v ex inAandB, and tak es its minim um v alue atA = B= 2.Z ( u;v;k 1)+ Z ( u;v;k ) 2 B 2 2253
PAGE 270
254 If w e sum this inequalit y o v er the positions of the sequence, then w e get the follo wing.Z ( u;v )= Q l X k =1 Z ( u;v;k 1)+ Z ( u;v;k ) 2 B 2 2 = n l X i =1 q l;i 4 b l;i r u;v;l;i d u;v Q l 2This is a lo w er bound for the deviation measured for a certain part at a certain lev el. Finally in tegrating this partial lo w er bound o v er the lev els and parts, and also considering the lo w er bound dev eloped for the rst part of the objectiv e function, w e get the follo wing complete lo w er bound.F = n l X i =1 w l b 2l;i ( Q 2l q 2 l;i ) 12 Q l + q l;i 4 L X u = l +1 n u X v =1 w u b l;i r u;v;l;i d u;v Q l 2 #(E.2)
PAGE 271
APPENDIX F LO WER BOUND F OR THE FUTURE P A TH IN THE DP F ORMULA TION OF THE MUL TILEVEL MODELS If w e relax the in tegralit y requiremen t ofq l;iand ignore the constrain ts, and set the w eigh ts associated with the sublev els to zero, w e get: MinimizeF 0 = w l 12 Q l X i 2 N l d l;i q l;i 2 P i 2 N l d 2l;i Q lW e rst consider problems withn l =2products. Since the DP is solv ed for a giv enQ lv alue, w e can express the objectiv e function in terms of only one v ariable,q l; 1, as follo ws. MinimizeF 0 = w l 12 Q l d l; 1 q l; 1 2 + d l; 2 Q l q l; 1 2 P i 2 N l d 2l;i Q lThe second part of this function is constan t. Therefore, w e can drop the second part, as w ell as theQ lconstan t in the rst part, and reexpress the objectiv e function as follo ws. MinimizeF 0 = w l 12 d l; 1 q l; 1 2 + d l; 2 Q l q l; 1 2In order to nd the minim um v alue of this function, w e c hec k the rst and second deriv ativ e with respect toq l; 1.d F 0 d q l; 1 = w l 12 2 d 2l; 1 q 3 l; 1 + 2 d 2l; 2 ( Q l q l; 1 ) 3 d 2 F 0 d q 2 l; 1 = w l 12 6 d 2l; 1 q 4 l; 1 + 6 d 2l; 2 ( Q l q l; 1 ) 4 > 0 ; 0
PAGE 272
256 deriv ativ e to zero.d F 0 d q l; 1 =0 ) 2 d 2l; 1 q 3 l; 1 + 2 d 2l; 2 ( Q l q l; 1 ) 3 =0 ) 2 d 21 q 3 1 + 2 d 22 ( q 2 ) 3 =0 ) q l; 1 q l; 2 = d l; 1 d l; 2 (2 = 3)No w, consider problems withn l > 2. If w e decide on all but t w o of theq l;iv alues then the nal t w o v ariable v alues can be set to their optimal lev els using the abo v e relationship. Since w e can select the t w o v ariables that will be decided on last, arbitrarily then the relationship can be generalized ton l > 2case, with relativ e ease. The optimal lev el of v ariableq l;iis found as follo ws.q l;j = q l;i d l;j d l;i (2 = 3) ; 8 j ) Q l = X j 2 N l q l;j = q l;i X j 2 N l d l;j d l;i (2 = 3) ) q l;i = Q l P j 2 N l d l;j d l;i (2 = 3) ; 8 iThis result can be used in devising a lo w er bound to the solution of a problem with a kno wnQ lv alue. F or a giv en state( i;R l;i ), w e kno w the v alues of the v ariablesq l; 1 ;q l; 2 ;::;q l;i, and w e ha v e to allocateQ R l;ibatc hes to the remaining v ariablesq l;i +1 ;q l;i +2 ;::;q l;n. The abo v e result is generalized to this situation asq h =( Q R l;i ) = ( P n l j = i +1 d l;j d l;i (2 = 3) ), for allh>i. W e deneq 0 l;i;h =( d l;h =d l;i ) (2 = 3) = ( P n l j = i +1 ( d l;j =d l;i ) (2 = 3) ),8 i =0 ; 1 ;::;n l 1 ;h>ias the optimal v alues ofq l;iratios in a partial solution, where the rstiv ariables are xed andq 0 l;i;his the ratio ofq l;hin the remaining part of the solution. F or a giv en state( i;R l;i ), the optimal solution to the relaxed problem for the rest of the
PAGE 273
257 v ariables isG ( i;R l;i )= w l 12 0BB@ Q l n l X h = i +1 d l;h ( Q l R l;i ) q 0 l;i;h 2 n l P h = i +1 d 2l;h Q l 1CCA :W e simplify this result as follo ws.G ( i;R l;i )= Q l ( Q l R l;i ) 2 U l;i V l;i Q l ;whereU l;i = w l 12 n l X h = i +1 d l;h q 0 l;i;h 2andV l;i = w l 12 n l X h = i +1 d 2l;h :
PAGE 274
APPENDIX G FINE TUNING THE P A TH RELINKING METHOD IN THE SMML MODEL T able G1: Analysis of the P arametersbandNTSfor the PR Method Time Deviation P arameter Lev el A vg. Max. A vg. Max. b 15 1.390 4.828 0.0617% 11.658% 25 1.696 7.661 0.0457% 11.658% 35 2.064 11.615 0.0071% 4.324% NTS 1 1.514 9.623 0.0400% 11.658% 3 1.740 10.314 0.0364% 11.658% 5 1.896 11.615 0.0380% 11.658% T able G2: t test Results of the P arametersbandNTSfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) b t15 = t25 t15 < t25 19.097 0.000 d15 = d25 d15 > d25 1.522 0.128 t25 = t35 t25 < t35 19.367 0.000 d25 = d35 d25 < d35 1.841 0.066 d15 = d35 d15 > d35 2.358 0.019 NTS t1 = t3 t1 < t3 16.683 0.000 d1 = d3 d1 > d3 0.396 0.692 t3 = t5 t3 < t5 15.344 0.000 d3 = d5 d3 < d5 0.762 0.446 d1 = d5 d1 > d5 0.214 0.831 258
PAGE 275
259 T able G3: Analysis of the P arametersb,NTSandPSHMethodsfor the PR Method Time Deviation P arameter Lev el A vg. Max. A vg. Max. b 5 0.500 1.774 0.4449% 40.056% 10 0.713 3.025 0.0769% 11.658% 15 0.886 4.386 0.0639% 11.658% 20 1.030 6.870 0.0481% 11.658% NTS 1 0.679 3.645 0.1599% 40.056% 2 0.740 4.987 0.1589% 40.056% 3 0.790 5.808 0.1578% 40.056% 4 0.833 6.440 0.1578% 40.056% 5 0.870 6.870 0.1578% 40.056% PSHMethods 1 0.782 6.869 0.1585% 40.056% 2 0.782 6.870 0.1585% 40.056% T able G4: t test Results of the P arametersbandNTSfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) b t5 = t10 t5 < t10 42.699 0.000 d5 = d10 d5 > d10 8.133 0.000 t10 = t15 t10 < t15 32.194 0.000 d10 = d15 d10 > d15 3.520 0.000 t15 = t20 t15 < t20 31.465 0.000 d15 = d20 d15 > d20 2.420 0.016 NTS t1 = t2 t1 < t2 25.056 0.000 d1 = d2 d1 > d2 0.246 0.806 t2 = t3 t2 < t3 28.072 0.000 d2 = d3 d2 < d3 1.414 0.157 t3 = t4 t3 < t4 24.874 0.000 d3 = d4 n/a t4 = t5 t4 < t5 24.406 0.000 d4 = d5 n/a d1 = d3 d1 > d3 0.479 0.632 PSHMethods t1 = t2 t1 < t2 0.809 0.418 d1 = d2 n/a
PAGE 276
APPENDIX H FINE TUNING THE P A TH RELINKING METHOD IN THE FSML MODEL T able H1: Analysis of the P arametersbandNTSfor the PR Method Time Deviation P arameter Lev el A vg. Max. A vg. Max. b 20 3.370 7.887 0.0674% 14.542% 30 4.691 10.145 0.0412% 6.321% 40 6.041 15.511 0.0401% 6.321% NTS 1 3.290 8.889 0.0702% 14.542% 3 4.740 10.413 0.0471% 11.825% 5 6.072 15.511 0.0314% 6.321% T able H2: t test Results of the P arametersbandNTSfor the PR Method P arameter H 0 H Alt t Sig. (2tailed) b t20 = t30 t20 < t30 18.975 0.000 d20 = d30 d20 > d30 8.278 0.000 t30 = t40 t30 < t40 19.047 0.000 d30 = d40 d30 > d40 1.193 0.233 d20 = d40 d20 > d40 9.878 0.000 NTS t1 = t3 t1 < t3 19.638 0.000 d1 = d3 d1 > d3 7.961 0.000 t3 = t5 t3 < t5 18.991 0.000 d3 = d5 d3 > d5 6.214 0.000 260
PAGE 277
BIOGRAPHICAL SKETCH Mesut Y a vuz w as born Ma y 19, 1978, in Istan bul, T urk ey He did his undergraduate w ork at the Istan bul T ec hnical Univ ersit y T urk ey and receiv ed a Bac helor of Science in industrial engineering in 1999. After college he enrolled in the graduate program at the same univ ersit y and serv ed as a researc h and teac hing assistan t during his graduate studies. He receiv ed a Master of Science in industrial engineering, with a concen tration in engineering managemen t, in 2001. Afterw ards, he mo v ed to Gainesville, Florida, and began his doctoral studies in the Departmen t of Industrial and Systems Engineering at the Univ ersit y of Florida. His doctoral w ork is in the area of design and optimization of mixedmodel justintime man ufacturing systems. He successfully met the requiremen ts of the Ph.D. program and w as admitted to candidacy in Decem ber, 2003. He is a mem ber of INF ORMS, IIE and ASQ professional societies. He speaks English, T urkish and Y ugosla vian. 261
xml version 1.0 encoding UTF8
REPORT xmlns http:www.fcla.edudlsmddaitss xmlns:xsi http:www.w3.org2001XMLSchemainstance xsi:schemaLocation http:www.fcla.edudlsmddaitssdaitssReport.xsd
INGEST IEID EPJWT8MWT_EM957C INGEST_TIME 20110616T17:35:29Z PACKAGE UFE0010066_00001
AGREEMENT_INFO ACCOUNT UF PROJECT UFDC
FILES
