A COMPARATIVE SIMULATION STUDY OF
KANBAN, CONWIP, AND MRP MANUFACTURING
CONTROL SYSTEMS IN A FLOWSHOP
By
THOMAS ALFONS HOCHREITER
A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY
OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR
THE DEGREE OF MASTER OF SCIENCE
UNIVERSITY OF FLORIDA
1999
Copyright 1999
by
Thomas A. Hochreiter
To Mum, Dad, and Sis
ACKNOWLEDGMENTS
I would like to express my sincere gratitude to Dr. Suleyman Tufekci for his
assistance and guidance. As my advisor and the chairman of my supervisory
committee, he provided the constructive critique that refined the content of this thesis.
Further, I would like to acknowledge the contributions of the other members of my
committee, Dr. Diane Schaub and Dr. Sherman Bai. During her course in Applied
Probability, Dr. Schaub improved my understanding of statistics, while Dr. Bai
provided me with the required background in digital simulation.
In addition, I would like to thank Tim Elftman for the professional administration
of the computer network and Tobias and Henrik Andersson for their initial support
with the material.
TABLE OF CONTENTS
page
ACKNOWLEDGMENTS ... iv
LIST OF TABLES ... ix
LIST OF FIGURES ... xiv
ABSTRACT ... xix
CHAPTER 1 INTRODUCTION ... 1
  1.1 Motivation ... 1
  1.2 Thesis Outline ... 3
CHAPTER 2 CONTROL SYSTEMS ... 4
  2.1 Push and Pull Systems ... 4
  2.2 Kanban ... 6
    2.2.1 The Mechanism ... 7
    2.2.2 Characteristics ... 9
  2.3 CONWIP ... 10
    2.3.1 The Mechanism ... 10
    2.3.2 Characteristics ... 12
  2.4 MRP ... 13
    2.4.1 The Mechanism ... 13
    2.4.2 Characteristics ... 16
  2.5 Comparison of CONWIP with MRP ... 17
  2.6 Comparison of CONWIP with Kanban ... 19
CHAPTER 3 SIMULATION ... 23
  3.1 The Software ... 24
    3.1.1 EFML ... 24
    3.1.2 Arena ... 26
  3.2 The Simulation Study ... 28
    3.2.1 State Objective ... 30
    3.2.2 Collect/Prepare Data ... 30
    3.2.3 Formulate Models ... 31
    3.2.4 Verification of the Models ... 32
    3.2.5 Validation ... 37
    3.2.6 Simulation Experiment Design ... 37
    3.2.7 Simulation Execution ... 41
    3.2.8 Output Analysis and Interpretation of the Results ... 42
    3.2.9 Conclusions and Implementation ... 42
CHAPTER 4 STATISTICS ... 44
  4.1 Transient and Steady-State Behavior ... 45
  4.2 Confidence ... 46
    4.2.1 Analysis for Terminating Simulations ... 46
    4.2.2 Analysis for Non-Terminating Simulations ... 48
    4.2.3 Paired-t Confidence Interval ... 49
  4.3 Multiple Regression ... 51
    4.3.1 Estimating and Testing Hypotheses about the β Parameters ... 54
    4.3.2 Usefulness of a Model: R² and the Analysis of Variance F-Test ... 55
    4.3.3 Multiple Coefficient of Determination, R² ... 56
    4.3.4 Variance F-Test ... 56
    4.3.5 Comparison of Two or More Regression Functions ... 58
    4.3.6 Transformation ... 59
    4.3.7 Residual Analysis ... 60
    4.3.8 Influential Observations ... 62
CHAPTER 5 BATCH SIZE ... 63
  5.1 Parameters ... 64
    5.1.1 Process Time ... 64
    5.1.2 Batch Size ... 65
    5.1.3 Number of Cards ... 65
      5.1.3.1 Card Allocation for Kanban ... 67
      5.1.3.2 Card Allocation Rules ... 68
      5.1.3.3 Deviation of Rules from Optimum ... 72
    5.1.4 Interarrival Time ... 77
  5.2 Average Cycle Time ... 78
    5.2.1 The Average Machine Utilization ... 79
    5.2.2 Kanban and CONWIP ... 83
    5.2.3 Findings and Conclusions for Kanban and CONWIP ... 93
    5.2.4 MRP ... 95
    5.2.5 Findings for MRP ... 99
  5.3 Kanban, CONWIP, and MRP ... 100
    5.3.1 Average Cycle Time Dependent on Work in Process ... 101
    5.3.2 Conclusions ... 109
CHAPTER 6 SETUP TIME ... 110
  6.1 Parameters ... 112
    6.1.1 Setup Ratio ... 113
    6.1.2 Utilization ... 113
  6.2 Average Cycle Time (High Utilization) ... 120
    6.2.1 Comparison ... 122
    6.2.2 Conclusions ... 124
  6.3 Average Cycle Time (Low Utilization) ... 124
    6.3.1 Comparison ... 127
    6.3.2 Conclusions ... 129
  6.4 Regression Models ... 129
CHAPTER 7 MACHINE FAILURE ... 132
  7.1 Parameters ... 135
  7.2 Dynamics of Failure ... 136
    7.2.1 Indicators ... 136
      7.2.1.1 Time Spent in System ... 137
      7.2.1.2 Recovery Time ... 138
    7.2.2 Configuration ... 139
    7.2.3 Time Spent in the System ... 140
    7.2.4 System Recovery ... 146
    7.2.5 Conclusions ... 150
  7.3 Failure in Steady-State ... 151
    7.3.1 Parameters ... 152
    7.3.2 Influence of Interfailure Time and Repair Duration ... 153
    7.3.3 Conclusions for Interfailure Time and Repair Duration ... 156
    7.3.4 Utilization ... 157
    7.3.5 Average Cycle Time ... 159
    7.3.6 Conclusions for Average Cycle Time ... 165
    7.3.7 The Maximum Cycle Time ... 166
    7.3.8 Conclusions for the Maximum Cycle Time ... 170
    7.3.9 Standard Deviation of Cycle Times ... 170
    7.3.10 Conclusions for the Standard Deviation of Cycle Times ... 173
    7.3.11 Regression ... 174
      7.3.11.1 Models ... 175
      7.3.11.2 Effects of the Regressors ... 177
      7.3.11.3 Model Validation ... 181
    7.3.12 Conclusions ... 182
CHAPTER 8 CONFIDENCE ... 183
  8.1 Transient Behavior ... 184
  8.2 Batch Size ... 188
    8.2.1 Prior to Simulations ... 188
    8.2.2 Succeeding Simulations ... 191
  8.3 Setup ... 194
    8.3.1 Prior to Simulations ... 194
    8.3.2 Succeeding Simulations ... 195
  8.4 Failure ... 197
    8.4.1 Dynamics of Failure ... 198
      8.4.1.1 Time Spent in the System ... 198
      8.4.1.2 Recovery Time ... 199
    8.4.2 Machine Failure in Steady-State ... 199
      8.4.2.1 Influence of Interfailure Time and Repair Duration ... 200
      8.4.2.2 Prior to Simulations ... 202
      8.4.2.3 Succeeding Simulations ... 204
CHAPTER 9 CONCLUSIONS ... 206
  9.1 Summary ... 206
  9.2 Future Work ... 212
GLOSSARY ... 213
REFERENCES ... 215
BIOGRAPHICAL SKETCH ... 222
LIST OF TABLES
Table page
3-1: Configuration for Kanban, CONWIP, and MRP to verify correctness of the models ... 33
3-2: Statistics on t-test to verify concurrence of output between EFML and Arena for Kanban, CONWIP, and MRP ... 33
4-1: For the paired-t test, comparing two systems is reduced to estimating a single parameter, the difference ... 50
5-1: Additional cards allocated to the system with ten machines ... 76
5-2: Multiple regression output for CONWIP with the average cycle time (Avgct) dependent on the number of cards (Ccards) ... 87
5-3: The residual standard error of different transformations for the multiple regression on Kanban ... 90
5-4: Function coefficients describing the dependency of the average cycle time and the batch size derived by multiple linear regression for Kanban and CONWIP ... 91
5-5: The derived functions for CONWIP and Kanban to estimate the average cycle time for given batch size and number of cards assigned to the system ... 92
5-6: Combinations of batch size and number of cards for maximum WIP level 60 and the resulting average cycle times for Kanban and CONWIP ... 102
5-7: The increase in cycle time for increasing batch size and constant WIP ... 104
5-8: The results for the regression analysis modeling the response of the average cycle time to the WIP for a comparison between Kanban, CONWIP, and MRP ... 107
6-1: The setup times and the corresponding setup ratios included in this study ... 113
6-2: Configuration chosen to establish high and low utilization levels ... 114
6-3: Output for the paired t-tests on difference of high utilization including setup for Kanban, CONWIP, and MRP ... 117
6-4: The output of the paired t-tests on the difference between the average cycle times for Kanban, CONWIP, and MRP ... 122
6-5: The mean of the average cycle time relative to the mean of the utilization for Kanban, CONWIP, and MRP ... 123
6-6: Output for the paired t-tests on difference of low utilization including setup for Kanban, CONWIP, and MRP ... 125
6-7: The regression models for the average cycle time as functions of the batch size, the number of cards assigned to the system or the interarrival time for MRP, and the setup ratio, and their corresponding multiple coefficients of determination, R² ... 130
7-1: The configurations chosen for the investigation on the dynamics of failure ... 139
7-2: The coefficients of variation, the minimal and the maximal times after the failure of machine 5 for Kanban, CONWIP, and MRP ... 143
7-3: The results of the paired t-test for the time after failure at passing point 11 or departure of the system for Kanban, CONWIP, and MRP ... 145
7-4: Output for the multiple linear regression models fitting the moving average cycle time dependent on the time after failure for Kanban, CONWIP, and MRP ... 148
7-5: Combinations of interfailure time and repair duration resulting in a constant availability ... 154
7-6: The interfailure times and repair durations in different time units representing the scenarios for the range of availability simulated ... 157
7-7: The minimum, mean, and maximum values for the low and high utilization levels as a summary for the simulations completed, including machine failure for Kanban, CONWIP, and MRP ... 158
7-8: The output for the paired t-test to establish the difference between the utilization including machine failure for Kanban, CONWIP, and MRP ... 158
7-9: A summary of statistics on the average cycle time for Kanban, CONWIP, and MRP including machine failure ... 163
7-10: The output for the paired t-test to establish the difference between the average cycle time including machine failure for Kanban, CONWIP, and MRP ... 164
7-11: Statistics on the maximum cycle time including machine failure for Kanban, CONWIP, and MRP ... 167
7-12: The output for the paired t-test on the maximum cycle time including machine failure for Kanban, CONWIP, and MRP ... 169
7-13: Statistics on the standard deviation of cycle time including machine failure for Kanban, CONWIP, and MRP ... 171
7-14: The output for the paired t-test for the standard deviation of cycle time including machine failure for Kanban, CONWIP, and MRP ... 172
7-15: The relative increase of the standard deviation in cycle time including machine failure for Kanban, CONWIP, and MRP ... 173
7-16: The regression models for the average cycle time including machine failure for Kanban, CONWIP, and MRP ... 175
7-17: The domain and the corresponding effects for the regressor terms including machine failure for Kanban's regression model ... 178
7-18: The domain and the corresponding effects for the regressor terms including machine failure for CONWIP's regression model ... 179
7-19: The domain and the corresponding effects for the regressor terms including machine failure for MRP's regression model ... 180
8-1: The configurations for the analysis of the transient behavior for Kanban, CONWIP, and MRP ... 184
8-2: Configuration for Kanban and CONWIP to determine confidence interval prior to simulation and the corresponding utilization and coefficient of variation as the output ... 189
8-3: Configuration for MRP to determine confidence interval prior to simulation and the corresponding utilization and coefficient of variation as the output ... 189
8-4: Output for confidence interval calculations for CONWIP, Kanban, and MRP ... 191
8-5: Configuration for CONWIP, Kanban, and MRP resulting in the highest coefficient of variation of all the simulations run ... 191
8-6: Configuration for CONWIP, Kanban, and MRP resulting in the lowest coefficient of variation of all the simulations run ... 192
8-7: Configuration for Kanban, CONWIP, and MRP including setup time to determine confidence interval prior to simulation and the corresponding throughput and coefficient of variation as the output ... 194
8-8: Output for confidence interval calculations including the setup time for CONWIP, Kanban, and MRP ... 195
8-9: Configuration for Kanban, CONWIP, and MRP including setup time to determine confidence interval succeeding the simulations and the corresponding throughput and coefficient of variation as the output ... 195
8-10: The coefficients of variation prior to the simulations and succeeding the simulations and their difference including setup for Kanban, CONWIP, and MRP ... 197
8-11: The output for t-tests done for the time after failure at passing point 11 for Kanban, CONWIP, and MRP ... 198
8-12: The results for the calculation of the confidence intervals for the time after failure and the moving average of the cycle times for Kanban, CONWIP, and MRP ... 199
8-13: The response of the average utilization to different combinations of interfailure time and repair duration and varying batch size for a small number of cards [see Table 6-2] assigned to a line controlled by CONWIP ... 200
8-14: The response of the average utilization to different combinations of interfailure time and repair duration and varying batch size for a large number of cards [see Table 6-2] assigned to a line controlled by CONWIP ... 201
8-15: The configurations for Kanban, CONWIP, and MRP including machine failure prior to simulations ... 203
8-16: The amount of entities processed to ensure good estimation of indicators including machine failure ... 204
8-17: Output for confidence interval calculations including machine failure for Kanban, CONWIP, and MRP ... 204
8-18: The configurations for Kanban, CONWIP, and MRP including machine failure succeeding the simulations ... 204
8-19: Output for confidence interval calculations including machine failure for Kanban, CONWIP, and MRP succeeding the simulations ... 205
9-1: The optimal configurations for the minimal average cycle time for Kanban, CONWIP, and MRP ... 210
9-2: The optimal configurations for the minimal average cycle time for Kanban, CONWIP, and MRP ... 210
9-3: The optimal configurations for the minimal average cycle time for Kanban, CONWIP, and MRP ... 211
LIST OF FIGURES
Figure page
2-1: A push manufacturing system ... 5
2-2: A pull manufacturing system ... 5
2-3: The one-card Kanban system ... 7
2-4: A CONWIP production line ... 11
2-5: Simplified schematic of MRP ... 14
2-6: An MRP production line ... 15
2-7: Relative robustness of CONWIP and MRP ... 18
3-1: The object architecture for the EFML ... 25
3-2: Arena's hierarchical structure ... 27
3-3: Flow chart of a simulation study ... 29
3-4: The cycle time per entity and the cumulative average cycle time dependent on the number of processed entities for MRP with Arena ... 34
3-5: The deviation of the average cycle time between EFML and Arena for different configurations for CONWIP ... 35
3-6: The deviation of the average cycle time between EFML and Arena for different configurations for Kanban ... 36
3-7: The deviation of the average cycle time between EFML and Arena for different configurations for MRP ... 36
4-1: Rejection region for a test of β2 ... 55
5-1: The ten machine tandem line ... 68
5-2: Free body diagram of the ten machine tandem line modeled as a beam ... 68
5-3: Number of cards per machine for 11 cards assigned to a ten machine line ... 70
5-4: Number of cards per machine for 12 cards assigned to a ten machine line ... 70
5-5: Increase in throughput by allocating cards optimally instead of simply applying the rules ... 73
5-6: The average utilization dependent on the number of cards for Kanban ... 74
5-7: The average cycle time dependent on the batch size and number of cards allocated to the line for the three control systems: Kanban (1), CONWIP (2), and MRP (3) ... 78
5-8: Utilization dependent on batch size and number of cards for Kanban (1), CONWIP (2), and MRP (3) ... 82
5-9: Average WIP dependent on the number of cards assigned to a Kanban system ... 84
5-10: The average cycle time dependent on the number of cards assigned to the system for CONWIP ... 85
5-11: The average cycle time dependent on the number of cards assigned to the system for Kanban ... 86
5-12: Unequal residual error variance for initial model fitted to Kanban ... 89
5-13: The distribution pattern for the residual error of the transformed multiple regression model for Kanban ... 90
5-14: Three dimensional illustration of the ln-transformed data points of the average cycle time, dependent on the batch size and number of cards, and the data points computed with the regression model for Kanban ... 93
5-15: Work in process dependent on the interarrival time for different batch sizes for MRP ... 95
5-16: The average utilization of the line dependent on the interarrival time of the batches for MRP ... 97
5-17: The average cycle time per entity dependent on the interarrival time for MRP ... 98
5-18: The average cycle time dependent on the batch size with a constant throughput for MRP ... 99
5-19: The minimal average cycle time dependent on the average work in process for the three control systems ... 101
5-20: The average cycle time dependent on different combinations of batch size and number of cards assigned, simulation and regression model ... 103
5-21: A closer look at the minimal average cycle time dependent on lower average work in process for the three control systems ... 106
5-22: A closer look at the minimal average cycle time dependent on higher average work in process for the three control systems ... 108
6-1: The higher utilization level dependent on the setup ratio and the batch size for Kanban, CONWIP, and MRP ... 116
6-2: Throughput dependent on the setup ratio and the batch size for Kanban, CONWIP, and MRP ... 118
6-3: The average cycle time dependent on the setup ratio and the batch size for Kanban, CONWIP, and MRP ... 120
6-4: The average cycle time dependent on the batch size and setup ratio for Kanban ... 121
6-5: The mean differences of the average cycle times between Kanban, CONWIP, and MRP for the high utilization level ... 123
6-6: The average cycle time dependent on the setup ratio and the batch size for the high (0.85) and the low utilization level (0.67) ... 126
6-7: The average cycle time for the low utilization level dependent on the setup ratio and the batch size for Kanban, CONWIP, and MRP ... 128
6-8: The mean differences of the average cycle times between Kanban, CONWIP, and MRP for the low utilization level ... 129
7-1: Resource states and their occurrence times ... 133
7-2: The effect of failure on the entity ... 133
7-3: The points of data collection for the investigation on the dynamics of failure ... 138
7-4: 20 replications showing the first entity passing through the downstream half of the line after the reactivation of machine 5 for Kanban ... 141
7-5: 20 replications showing the first entity passing through the downstream half of the line after reactivation of machine 5 for CONWIP ... 142
7-6: 20 replications showing the first entity passing through the downstream half of the line after reactivation of machine 5 for MRP ... 143
7-7: The average time after failure at the passing points for Kanban, CONWIP, and MRP ... 144
7-8: The moving average of the cycle time dependent on the time after failure for five replications per Kanban, CONWIP, and MRP ... 146
7-9: The time after failure for which the exponentially smoothed average of the cycle times exceeds the average cycle time by less than 10% for Kanban, CONWIP, and MRP ... 149
7-10: The average of the average utilizations per batch size and replication versus the configuration for increasing interfailure times and repair durations ... 155
7-11: The average cycle time versus the batch size and the setup ratio for the six availability levels for Kanban ... 160
7-12: The average cycle time versus the batch size and the setup ratio for the six availability levels for CONWIP ... 161
7-13: The average cycle time versus the batch size and the setup ratio
for the six availability levels for M RP...... .... ................................... 162
714: Box plots of the maximum cycle time including machine failure
for Kanban (1), CONWIP (2), and MRP (3). ................... ...... ........... 168
715: Boxplots of the standard deviation of cycle time including
machine failure for Kanban (1), CONWIP (2), and MRP (3). ................. 171
716: The Cook's Distance versus the index of the data points for the
regression model for Kanban, including machine failure........................ 176
81: Cycle time and average cycle time dependent on the number of
processed entities for CONWIP............... ........................ 185
82: Cycle time and average cycle time dependent on the number of
processed entities for Kanban. ...... ....... .................... 186
83: Cycle time and average cycle time dependent on the number of
processed entities for M RP....... ....... ........ .................... 187
84: Correlogram for MRP indicating the correlation dependent on the
lag num ber. ....................................................................................... . . 190
85: The coefficient of variation dependent on the interarrival time for
M R P ............................................................................................................ 1 9 3
86: The coefficient of variation and the utilization dependent on the
interarrival time for MRP with batch size one and setup time
2 0 0 ...................................................................................................... . . . 1 9 6
Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science
A COMPARATIVE SIMULATION STUDY OF
KANBAN, CONWIP, AND MRP MANUFACTURING
CONTROL SYSTEMS IN A FLOWSHOP
By
Thomas Alfons Hochreiter
May, 1999
Chairman: Dr. Suleyman Tufekci
Major Department: Industrial & Systems Engineering
The globalization of markets due to the improvement of communication and
transportation media has had a significant impact on manufacturing technology in
recent years. Strong international competition has forced companies to establish efficient production facilities that ensure profitability in the long run. The performance
of the most prevalent American manufacturing control mechanism, MRP, was
questioned after the success of the Japanese Kanban control system during the Just-In-Time era. CONWIP, a generalization of the Kanban control system, was introduced as
a result of extensive research done to understand manufacturing systems with the aim
of improving their efficiency.
During an extensive simulation study, the performances of Kanban, CONWIP, and MRP were evaluated for a tandem line of ten identical machines with respect to batch
size, setup time, and machine failure. The utilization (throughput) was kept constant
for all control systems. The parameters were introduced to the models one at a time,
thereby increasing the realism and the variability of the manufacturing line. Thus, the
performances of the three control mechanisms were explored on three levels of
complexity. Initially, only the influence of batch size on the performances of the
control systems was investigated. Then, the setup time was taken into consideration in
addition to the batch size. Last, machine failure was introduced to augment the
models' realism resulting in a higher practical applicability. On each level, the
performances were evaluated at steady state, assuming the manufacturing line would
run indefinitely. In addition, the response of the performance to machine failure was
observed dynamically while keeping batch size and setup time constant.
Although the performance differences were found to be minute, Kanban and
CONWIP were outperformed by the traditional control system, MRP, for experiments
with varying batch size and for experiments including both batch size and setup times.
On the highest level of variability, with machine failure introduced, Kanban was
ranked first, closely followed by CONWIP. The two pull systems easily outranked the
push system when evaluated according to average cycle time, maximum cycle time
and the standard deviation of cycle time. Kanban performed best for the dynamic
response to failure as well, where the system performance was measured by the time
taken to recover from failure.
CHAPTER 1
INTRODUCTION
1.1 Motivation
Primarily due to rapid development of technology in the past thirty years, the
market structure throughout the world has changed considerably. Local markets have
become accessible to foreign investors, who are not only able to perform well in their
newly established territory, but, who are even able to excel because of superior
technology. Successful companies embedded globalization in their expansion
strategies, consistently seeking new markets abroad. Consequently, manufacturing
companies are facing global competition, forcing them to keep up with new concepts
and even to proactively incorporate improvement into their daily production routine.
In 1972 the American Production and Inventory Control Society (APICS) strongly
promoted material requirements planning (MRP) in an effort to strengthen the
American manufacturing industry and its standing in the international arena. MRP
became the most prevalent production control system on a national level. After the successes of Just-In-Time (JIT), its dominance in industry was
questioned. The Japanese had introduced their superior products manufactured with
the support of the Kanban control system enhancing their global competitiveness. An
enormous amount of research was directed towards the new system giving rise to a
rich body of literature documenting various concepts.
In 1990 another system, striving to maintain a constant work in process (CONWIP), was presented and proved its usefulness both in theory and in industry. The extensive research has produced ample knowledge of system behavior and a good understanding of the factors involved. The newly evolved science, Factory Physics, attempts to describe and formalize the characteristics of these highly probabilistic systems.
However, the models analyzing and comparing the different control systems
analytically are based on too many simplifying and unrealistic assumptions. The
results can merely serve as approximations of real systems, a very limiting attribute
for their practical applicability. Simulation has established itself as a very powerful
alternative to the analytical modeling process. With the reduction in computer
hardware prices and the increase of processor speed, simulation has become a popular
tool in recent years. It enables modeling with great precision resulting in a very good
representation of real systems and trustworthy output data. The simulation software
available allows the study of manufacturing systems dynamically, giving the analyst a
feeling for the system in addition to generating realistic results.
In this research paper the three control systems Kanban, CONWIP, and MRP are
analyzed by means of a comparative simulation study. Ever since the introduction of
Kanban to the world of production, MRP has been discredited as an inferior control
system. However, despite its significant success, Kanban is not flawless. CONWIP is
investigated as a highly praised alternative. An evaluation of their performance with
respect to batch size, setup time and failure should unveil the superior control system
for the chosen manufacturing line.
1.2 Thesis Outline
Chapter 2 highlights the mechanisms and characteristics of the control systems,
Kanban, CONWIP, and MRP. A comparison regarding specific attributes reveals
basic differences that support the existence of all three control systems. Chapter 3
introduces simulation as the alternative to analytical modeling of manufacturing
systems. It describes the important aspects of a simulation study. Chapter 4 serves as a reference to both statistical analysis methods unique to simulation and methods
common to general data interpretation. In Chapter 5, the influence of batch size on the
performance of the control systems is demonstrated. In Chapter 6, setup time is
included in the investigations. Chapter 7 deals with the manufacturing system with the
highest degree of realism, including batch size, setup time and failure. The response
of the system to failure dependent on time is analyzed as well. Chapter 8 summarizes
calculations performed to ensure a high accuracy of the output data on a 95%
confidence level while Chapter 9 encompasses the conclusions and suggestions for
future work.
CHAPTER 2
CONTROL SYSTEMS
A brief theoretical background on the three manufacturing control systems is
given in this chapter. The primary purpose is to elaborate on the characteristics unique to the individual control systems and their differences; the secondary purpose is to explain their most important mechanisms.
2.1 Push And Pull Systems
Spearman and Hopp [HOP96, p.316] give a very descriptive quote from Taiichi Ohno, the father of Just-In-Time (JIT), to distinguish the meaning of the two terms,
push and pull:
Manufacturers and workplaces can no longer base production on desktop planning
alone and then distribute, or push, them onto the market. It has become a matter of
course for customers, or users, each with a different value system, to stand in the
frontline of the marketplace and, so to speak, pull the goods they need, in the amount
and at the time they need them [OHN88, xiv].
This global perspective can be applied to any individual manufacturing system.
The following definition gives a general and thus abstract explanation of the words:
A push system schedules the release of work based on demand, while a pull system authorizes the release of work based on system status [HOP96, p.317].
This means that a push system releases an entity to the line according to the
exogenous master production schedule (MPS). The release time is not modified for a
change in the manufacturing system [see Figure 2-1]. Information flows from the
MPS downstream towards the finished goods inventory.
Figure 2-1: A push manufacturing system. [Diagram: physical flow from raw material to finished goods; information flows downstream from the MPS.]
A pull system, however, only allows an entity to enter the system when a signal
generated by a change in the line status calls for it. In most cases, this change results from the departure of an entity from the line [see Figure 2-2]. Information flows
from the finished goods inventory, the customer, upstream towards the raw material
inventory.
Figure 2-2: A pull manufacturing system. [Diagram: physical flow from raw material to finished goods; information flows upstream from finished goods.]
The performance of the two systems is dependent on scheduling rules as well.
Here the most prevalent one, first come, first served (FCFS), will be assumed throughout. Extensive simulations done by Hum and Lee for JIT systems reveal no dominant rule. However, the results seem to indicate that FCFS is not necessarily justified; its weakness becomes most apparent under tight production conditions.
According to them, the user should not arbitrarily adopt a scheduling rule. Instead, the
nature of the scheduling rule and the production environment should be understood
[HUM98].
As the release of material to the line is initiated by the MPS in MRP, the
manufacturing system is controlled by the release rate of material resulting in a
specific throughput. The pull systems on the other hand only allow material into the
system when a card is liberated, a consequence of a reduction in work in process
(WIP). Thus, they control the system by managing the WIP and placing an upper bound on the material present in the line.
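The contrast in release logic can be sketched in a few lines of code. The fragment below is purely illustrative and not part of the simulation models used in this study; all names (schedule, wip_cap, and so on) are assumptions made for the sketch.

```python
# Purely illustrative (names assumed; not the thesis's simulation models):
# the two release rules side by side.

def push_release(now, schedule):
    """MRP-style push: release work whenever the master production schedule
    (MPS) says so, regardless of the state of the line."""
    return now in schedule

def pull_release(wip, wip_cap):
    """Kanban/CONWIP-style pull: release work only when a free card signals
    that the work in process has dropped below its cap."""
    return wip < wip_cap

assert push_release(now=10, schedule={0, 10, 20})   # released on schedule
assert pull_release(wip=3, wip_cap=8)               # a card is free
assert not pull_release(wip=8, wip_cap=8)           # WIP cap reached: wait
```

The push rule never inspects the line; the pull rule never inspects the schedule. That single difference is what the remainder of this chapter elaborates.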
Kanban and CONWIP are the pull systems discussed here. Their performance will
be compared with the performance of MRP, the most prevalent push system.
Before a comparison of their characteristics can be made, Kanban, CONWIP, and
MRP are discussed as the basis of a practical control system in the following sections.
2.2 Kanban
Most often, the Toyota-style Kanban system is discussed as a pull system, and it is hardly surprising that the term pull is commonly viewed as synonymous with Kanban
[SCH82]. There is an immense body of Kanban literature, often comparing its performance to that of a push system driven by unreliable demand forecasts [BER92].
In a Kanban system, production is triggered by demand. When a part is removed
from the final inventory point, the last workstation in the line is given authorization to
replace the part. This workstation in turn sends an authorization signal to the upstream
workstation to replace the part it just used. This process continues upstream,
replenishing the downstream void by requesting material from the antecedent
workstation. To control information transfer, the operator requires both parts and an
authorization signal, a card, to work.
2.2.1 The Mechanism
The Kanban system simulated here makes use of one inventory storage point and
requires only one card per station. The Kanban system developed at Toyota makes use of a two-card system, requiring a production card and a move card per station [see HOP96, p.163]. Figure 2-3 illustrates the one-card Kanban system.
Figure 2-3: The one-card Kanban system. [Diagram: workstations with outbound stockpoints, standard containers, and Kanban cards; steps (1) through (5) as described below.]
The operator finds a card in the hold box at workstation J (1). He/she gets material
from the outbound stockpoint of the upstream workstation I (2). The card attached to
the material is removed and placed into the hold box of the upstream workstation (3).
The material enters the manufacturing process and the card in the hold box is attached
to the product placed in the outbound stockpoint (4). The operator at the upstream
workstation I finds the card in his/her hold box and starts processing (5). The same
cycle is followed for the upstream machines until the raw material inventory is
reached [see Figure 2-2]. A Kanban system can be seen as a closed queuing network
with blocking. Jobs circulate around the network indefinitely. However, unlike the
CONWIP system [see 2.3.1], the Kanban system limits the number of entities per
workstation, since the number of production cards at a station establishes a maximum
WIP level for that station. Each production card acts exactly like a space in a finite
buffer in front of the workstation. The upstream workstation is blocked when the
buffer is full [HOP96, p.325].
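The one-card cycle described above can be reduced to simple buffer bookkeeping. The following fragment is an illustrative abstraction, not the simulation model of this thesis; all names are invented, and it assumes that cards and parts move instantly and in quantities of one.

```python
# Illustrative abstraction only (all names invented): the one-card cycle of
# Figure 2-3 reduced to buffer bookkeeping, assuming cards and parts move
# instantly and in quantities of one.

def demand_pull(stock, cards, raw=float("inf")):
    """One demand at the end of the line triggers a chain of replenishments
    moving upstream.  stock[i] is the number of full containers at station
    i's outbound stockpoint; cards[i] caps that stockpoint."""
    stock[-1] -= 1                        # a part leaves the final stockpoint
    for i in range(len(stock) - 1, -1, -1):
        upstream = stock[i - 1] if i > 0 else raw
        if upstream >= 1 and stock[i] < cards[i]:
            if i > 0:
                stock[i - 1] -= 1         # consume a container from upstream
            stock[i] += 1                 # freed card authorizes replenishment
    return stock

# One card per station, every stockpoint full; a single demand ripples
# upstream and every void is replenished from the raw material inventory.
assert demand_pull([1, 1, 1], [1, 1, 1]) == [1, 1, 1]
```

If an upstream stockpoint is empty, the station waits with a free card in its hold box, which is exactly the starvation behavior discussed in section 2.2.2.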
Berkley shows that a common model of a Kanban system is equivalent to a
traditional tandem production line with finite buffers. His model assumes that kanbans
travel instantly to their destinations when they are detached from a part, and that the
kanbans and parts travel in quantities of one [BER91]. Gstettner and Kuhn describe
and classify different Kanban systems. They analyze the system with respect to
production rate and average work in process [GST96].
2.2.2 Characteristics
As the amount of material in the system is limited to the number of cards
assigned, there is a natural upper bound on the material in process.
Due to the presence of the cards, the involvement of the operators in controlling the flow of material is enhanced. This involvement and active participation, paired with proactive thinking, enables continuous improvement that is not necessarily given in push systems.
A Kanban system suits a stable material flow best. The product mix should be
fairly stable and not too large as the cards are unique to certain products and
expensive in their introduction to a system.
Kanban is not useful in an environment with expensive items that are rarely
ordered, since it would require at least one of each kind of item to be in inventory at
all times.
The performance is very sensitive to the number of cards assigned to the system
and their specific allocation. Gstettner and Kuhn show that the distribution of cards
has a significant effect on the performance of Kanban systems. According to them,
the different types of Kanban control mechanisms show equivalent performance data,
if the distribution pattern is adapted accordingly [GST96].
In most Kanban systems the number of cards assigned to specific workstations is
fixed, resulting in blocking or starvation. Blocking occurs when all the cards are
attached to full containers in the outbound stockpoint, while starvation occurs when at
least one production Kanban is in the hold box waiting for a container from the
upstream workstation while the machine at that station is idle. Gupta and AlTurki
have developed an algorithm to implement a flexible Kanban system adjusting the
number of cards to stochastic processing times and a variable demand environment
[GUP97].
Mascolo et al. show that the performance of a multistage Kanban system can be
derived from evaluating a set of subsystems. The subsystems result from a
decomposition of the original line, where each subsystem is associated with a particular
stage. Numerical results show that the method is fairly accurate [MAS96].
2.3 CONWIP
The CONWIP (CONstant Work In Process) control system strives to maintain a
constant work in process. It was first introduced by Spearman et al. in 1990 and can
thus be classified as a very new control concept [SPE90].
2.3.1 The Mechanism
CONWIP can be considered a special case of Kanban, where the entire line
constitutes one workstation. Departing jobs send production cards back to the
beginning of the line to authorize release of new jobs.
Figure 2-4: A CONWIP production line. [Diagram: workstations A through L with parts buffers, standard containers, and cards; steps (1) through (3) as described below.]
The finished product is taken out of the inventory that is fed by workstation L (1).
The production card is sent back to workstation A to authorize the release of a new
job (2). The operator at the upstream workstation A finds the card, gets the raw
material from the inventory and starts processing the unit (3). In a Kanban system,
each card is used to signal production of a specific part. CONWIP production cards
are assigned to the production line and are not part number specific. Part numbers are
assigned to the cards at the beginning of the production line. The numbers are
matched with the cards by referencing a backlog list. When work is needed for the
first process center in the production line [see Figure 2-4, (3)], the card is removed from the queue and marked with the first part number on the backlog list for which raw materials are present [SPE90].
Here, the following simplifying assumptions are made for CONWIP:
1. The production line consists of a single routing, along which all parts flow, and
2. WIP can be measured in units (i.e., number of jobs or parts in the line).
Spearman and Hopp [HOP96, p.324] remark that a CONWIP system resembles a
closed queuing network, in which entities never leave the system, but instead circulate
around the network indefinitely. In reality, the entering jobs are different from the
departing jobs. Assuming that all jobs are identical, this difference does not matter for
modeling purposes. Gstettner and Kuhn mention that the model developed by
Spearman et al. [SPE90] can be refined and adapted to different production
environments as done by Duenyas and Hopp [DUE92] and Duenyas et al. [DUE93]
[GST96]. Huang and Wang show by means of simulation that the CONWIP
production control system is very efficient for the production and inventory control of
semicontinuous manufacturing, such as that found in a steel rolling plant [HUA97].
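The CONWIP card cycle and backlog matching can likewise be reduced to a short sketch. The code below is illustrative only (all function and variable names are assumptions); it shows the WIP cap enforced by a single card pool for the whole line and the matching of free cards against a backlog list at the release point.

```python
# Illustrative sketch (assumed names, not the thesis model): one card pool
# serves the entire line, and free cards are matched against a backlog list
# at the release point rather than against specific part numbers.

from collections import deque

def conwip_step(free_cards, backlog, line):
    """Release a job whenever a card is free and the backlog has work."""
    released = []
    while free_cards > 0 and backlog:
        part = backlog.popleft()   # next part number on the backlog list
        line.append(part)          # the card enters the line with the job
        released.append(part)
        free_cards -= 1
    return free_cards, released

def conwip_depart(free_cards, line):
    """A departing job sends its card back to the head of the line."""
    line.pop(0)
    return free_cards + 1

line = []
free_cards, released = conwip_step(3, deque(["A", "B", "C", "D"]), line)
assert released == ["A", "B", "C"]             # WIP capped at three cards
free_cards = conwip_depart(free_cards, line)   # "A" departs, card freed
free_cards, released = conwip_step(free_cards, deque(["D"]), line)
assert released == ["D"]                       # freed card pulls the next part
```

Note that the cards carry no part number of their own; the backlog supplies it at release, which is the key structural difference from Kanban elaborated in section 2.6.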
2.3.2 Characteristics
As does Kanban, CONWIP controls the total amount of work in process in the
system. The WIP is limited to the number of cards assigned to the entire line instead
of to the individual machines.
If a machine fails in a CONWIP line, the amount of material downstream of it will
eventually be flushed out of the system by the demand process. These demand events
will cause the release of new entities to the system. If the machine fails for a long
period of time, these entities and the entities already in the system upstream of the
failed machine will accumulate in the buffer immediately upstream of the failed
machine. The release of the new jobs to the system stops once no more cards are
released from entities departing the system [BON97].
There is no blocking in CONWIP lines since buffers are assumed big enough to
hold all parts that circulate in the line [GST96].
In CONWIP systems information about demand is sent directly from the last to
the first station. The entity goes through all the workstations in the line carrying the
information about necessary production.
2.4 MRP
The promotion of material requirements planning (MRP) by the American
Production and Inventory Control Society (APICS) in 1972 boosted this production
control paradigm to the most prevalent system today. Only after the successes of JIT and Kanban was its dominance in industry questioned.
2.4.1 The Mechanism
As can be derived from its name, MRP plans material requirements. It deals with
the two dimensions of production control: quantities and timing. The system must
determine appropriate production quantities of all types of items, from final products
that are sold, to components used to build final products, to inputs purchased as raw
materials. It must also determine production timing that facilitates meeting order due
dates.
Figure 2-5: Simplified schematic of MRP.
The data from the bill of material (BOM) and the master production schedule
(MPS), as the source of demand for MRP, is processed in several steps to produce the
planned order releases and notices such as change notices and exception notices [see
Figure 2-5]. The BOM describes the relationship between end items and lower level
items while the MPS gives the quantity and due dates for all parts to obtain the gross
requirements. The schematic is presented to illustrate that all the information needed
for the entire manufacturing system originates from the MPS.
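The core calculation described above, exploding the MPS demand through the BOM and offsetting due dates by fixed lead times, can be illustrated as follows. The BOM, lead times, and quantities below are invented for the example; netting against on-hand inventory and lot sizing, which real MRP systems also perform, are omitted.

```python
# A hedged sketch of the calculation described above: explode MPS demand
# through the BOM and offset due dates by fixed lead times.  The BOM, lead
# times, and quantities are invented; netting and lot sizing are omitted.

bom = {                       # item -> list of (component, quantity per parent)
    "end_item": [("subassembly", 2)],
    "subassembly": [("raw", 3)],
    "raw": [],
}
lead_time = {"end_item": 1, "subassembly": 2, "raw": 1}   # fixed, in periods

def plan_releases(item, qty, due, releases):
    """Backward-schedule: release date = due date minus the fixed lead time,
    then propagate the gross requirements down the BOM."""
    release = due - lead_time[item]
    releases.append((item, qty, release))
    for component, per_parent in bom[item]:
        plan_releases(component, qty * per_parent, release, releases)
    return releases

plan = plan_releases("end_item", qty=10, due=8, releases=[])
assert plan == [("end_item", 10, 7), ("subassembly", 20, 5), ("raw", 60, 4)]
```

The lead times here are constants of the part number alone, independent of plant loading; the critique of MRP in section 2.4.2 centers on exactly this assumption.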
Figure 2-6: An MRP production line. [Diagram: workstations with unlimited parts buffers and entities flowing from raw material to finished goods.]
The order is released at the raw material post (1) as planned with the help of the
MPS [see Figure 2-6]. As the entity is released independently of the amount of material in the buffer preceding workstation A, the buffer size may not be limited to a
specific number of entities. Usually, constraints are given by physical space on the
manufacturing floor. When workstation A is finished with processing the entity, it
pushes it on to the next workstation, B (2). This process continues downstream until
the entity departs the system at the finished goods post.
To be able to address the huge problem of coordinating thousands of orders with
hundreds of tools for thousands of end items made up of additional thousands of
components, manufacturing resources planning (MRP II) was developed [HOP96, p.143]. It provides a general control structure that breaks the production control
problem into a hierarchy based on time scale and product aggregation, thus, primarily
taking the capacity of the manufacturing system into account. MRP II brings together
many functions to generate a truly integrated manufacturing management system
including demand management, forecasting, capacity planning, roughcut capacity
planning, dispatching and input/output control.
2.4.2 Characteristics
MRP provides a simple method for ordering materials based on needs, as
established by a master production schedule and bills of material. As such, it is well
suited for use in controlling the purchasing of components. However, in the control of
production, MRP shows deficiencies [HOP96, p.143]. This is especially true for
manufacturing systems that require proper exploitation of capacity resources by
taking bottlenecks into consideration.
According to Spearman and Hopp the real reason for MRP's inability to perform
well is the faulty underlying model. The key calculation is performed by using fixed
lead times to derive releases from due dates. These lead times are functions of the part
number only. They are not affected by the status of the plant. More importantly, the
lead times do not consider the loading of the manufacturing system. An MRP system
assumes that the time for a part to travel through the plant is the same whether the
plant is empty or overflowing with work, which is only true for infinite capacity.
Furthermore, to ensure the coordination of parts at assembly, there is a strong
incentive to increase the lead times to provide a buffer against unforeseen
obstructions. However, as inflating lead times introduces more material into the
system, it increases congestion and consequently the cycle times. Instead of delivering
on time, the products are delayed even more [HOP96, p.175].
In the APICS literature, MRP's poor performance in industry was
blamed on inaccurate data, including bills of material and inventory records. MRP
requires a high standard of data integrity to function properly [LAT81].
2.5 Comparison of CONWIP with MRP
As mentioned previously [see 2.1], a push system controls throughput and
observes WIP, while a pull system controls WIP and observes throughput. WIP is
directly observable, while throughput can only be determined indirectly. The jobs on a
shop floor can be physically counted and maintained according to the WIP cap. In
contrast, the release rate for MRP must be set with respect to capacity. If the rate is
chosen too high, the system will be congested with material resulting in high cost due
to insufficient throughput and high WIP. As estimating capacity is very difficult,
optimizing a push system is much more intricate [HOP96, p.325].
Concerning the efficiency, Spearman and Hopp state the following law:
For a given level of throughput, a push system will have more WIP on average than
an equivalent CONWIP system [HOP96, p.327].
The law is supported by a calculation for a simple example of a five-machine tandem line with exponentially distributed process times of mean one hour.
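The flavor of that calculation can be reproduced with standard queueing formulas. The numbers below (five machines, mean process time one hour, target throughput 0.8 jobs per hour) are illustrative and not taken from the thesis; the open push line is modeled as independent M/M/1 queues, while the closed CONWIP line uses the mean value analysis result for a balanced exponential tandem line, TH(w) = w*mu/(N + w - 1).

```python
# Illustrative only: textbook queueing formulas, not the thesis's simulation.
N, MU, TH = 5, 1.0, 0.8   # stations, service rate (1/hour), target throughput

# Push (open line): each station behaves as an independent M/M/1 queue,
# so the expected WIP per station is rho / (1 - rho).
rho = TH / MU
wip_push = N * rho / (1 - rho)

# CONWIP (closed line): for a balanced tandem line of exponential machines,
# mean value analysis gives TH(w) = w * MU / (N + w - 1).  Inverting for the
# WIP level w that achieves the target throughput:
wip_conwip = TH * (N - 1) / (MU - TH)

assert abs(wip_push - 20.0) < 1e-9     # push carries 20 jobs on average
assert abs(wip_conwip - 16.0) < 1e-9   # CONWIP needs only 16 cards
```

Less WIP for the same throughput, in line with the law quoted above.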
According to Spearman and Hopp, MRP systems have more variable cycle times than equivalent CONWIP systems [HOP96, p.327]. As the total amount of WIP in a CONWIP line is fixed, the WIP levels at the individual stations are negatively correlated. As the
WIP level increases at one station, it decreases at all the other stations, which tends to
dampen the fluctuations in cycle time. In contrast, WIP levels at the individual
stations are independent of one another for MRP. The WIP level at one station reveals
no information about the WIP levels at the other stations. The overall WIP level may
become extremely high or extremely low, resulting in great variability of the cycle times, which are directly dependent on the WIP level.
Spearman and Hopp state another law to express the robustness of the two
systems:
A CONWIP system is more robust to errors in WIP level than MRP is to errors in
release rate.
The law is verified with the help of a simple profit function dependent on the
throughput and the WIP level expressed in terms of percent error. The coefficients are
calculated from empirical data, revealing the functions given in Figure 2-7 [HOP96,
p.329].
Figure 2-7: Relative robustness of CONWIP and MRP. [Plot: profit versus the control parameter as a percentage of its optimal value, for CONWIP and MRP.]
The profit function for CONWIP is very flat between WIP levels as low as 40%
and as high as 160% of the optimal level. The MRP function declines steadily when
the release rate is chosen at a level below the optimum and falls off sharply when the
release rate is set even slightly above the optimum level.
2.6 Comparison of CONWIP with Kanban
Both CONWIP and Kanban are pull systems since new order releases are
triggered by external demand. As both systems control the WIP and limit the level by
an upper bound, they show similar performance relative to the push system, MRP.
Gstettner and Kuhn reveal in their comparisons between Kanban and CONWIP
that Kanban is more flexible with respect to a certain objective than CONWIP. Not
only does the absolute number of cards matter, but, the card distribution is another
parameter that influences performance. Selecting a favorable card distribution showed
that in a Kanban system a given production rate is reached with less WIP than in a
CONWIP system [GST96]. However, Spearman et al. point out that by allowing WIP
to collect in front of the bottleneck, CONWIP can function with lower WIP than
Kanban [SPE90].
As there is no blocking in CONWIP lines it can easily be understood that a
CONWIP system with n cards will have a higher production rate than a Kanban
system with n cards [SPE92].
According to Spearman and Hopp the most obvious difference is that Kanban
requires setting more parameters than does CONWIP [HOP96, p.330]. In a one-card system, a card count must be established for every workstation; in a two-card system, twice as many. In a CONWIP system, the number of cards is set for the entire line,
which needs to be established only once. Coming up with the optimal card count
requires a combination of analysis and continual adjustment, making it a great deal
easier to find the right configuration for the CONWIP system.
Cards are part number specific in a Kanban system and only line specific in a
CONWIP system. Instead of being matched to a specific part at the upstream
workstation, the cards are matched against a backlog [see 2.3.1], which gives the
sequence of parts to be introduced into the line. Thus, in its pure form, a Kanban
system must include standard containers of WIP for every active part number in the
line to which the cards can be matched. For a large number of parts, even if only occasionally produced, this implies a very high overall WIP level, swamping the
manufacturing system [HOP96, p.330]. Gstettner and Kuhn elaborate on this
difference as well, neglecting special release mechanisms in the CONWIP system
which are based on a MPS [GST96]. In a paper Spearman et al. mention that although
the backlog affords the opportunity for control, it also provides a tremendous
challenge. The backlog sequence is the key to assuring adequate capacity when there
are significant setups and to optimizing synchronization of production of part
components [SPE90].
Hall points out that Kanban is applicable only in repetitive manufacturing
environments [HAL83]. Spearman and Hopp explain repetitive manufacturing by
systems where material flows in fixed paths at steady rates [HOP96, p.331]. They
mention that large variations in either volume or product mix destroy this flow, at
least when parts are viewed individually, and hence seriously undermine Kanban. In
another publication Spearman et al. mention that the JIT environment provided by
CONWIP can accommodate a changing product mix as it is suitable for short runs of
small lots. Furthermore, they find this environment to be more predictable than its
counterpart provided by Kanban [SPE89]. A CONWIP system is more robust due to the
planning capability introduced by the process of generating a work backlog.
Spearman and Hopp mention prevalent employee issues differentiating CONWIP
and Kanban. The pull mechanism at every workstation results in great operator stress
as described by Klein [KLN89]. When an operator receives a card but has to wait for
the material to start processing, the void has to be replenished as quickly as possible
once this material arrives. In a CONWIP system this is only true for the first
workstation. The other stations function according to a push system, where the
operators are subjected to less pacing stress [HOP96, pp. 332-333].
The previous comparisons illustrate the advantages of CONWIP over MRP and
Kanban. Most fundamentally, the differences between the pull and the push systems
can be utilized as an advantage to building a manufacturing system that encompasses
the positive attributes of the different mechanisms. The result is an integration of the
systems to compensate for the weaknesses on both sides. According to Titone
integration of various functions into a total comprehensive manufacturing strategy
leads to world-class manufacturing and profits. Using MRP II for planning and JIT for
the execution combines two powerful tools into an efficient manufacturing system
[TIT94]. Wang et al. introduce an experimental push/pull production planning and
control software system which is designed as an alternative to an MRP II system for
mass manufacturing enterprises in China [WAN96].
Bonvik et al. compare a two-boundary hybrid system to conventional systems.
The system is a hybrid of base stock and Kanban control. Base stock control limits the
amount of inventory between each production stage and the demand process. Each
machine tries to maintain a certain amount of material in its output buffer, subtracting
backlogged finished goods demand, if any [KIB88]. For the hybrid system, demand
information is propagated directly as in base stock control and inventory at the
individual workstations is limited as in Kanban control. The hybrid control policy
demonstrated superior performance in achieving a high service level target with
minimal inventories [BON97].
The three control mechanisms were evaluated by means of simulation, as the
available analytical methods are approximations limited to special cases and not
applicable to more complex systems.
CHAPTER 3
SIMULATION
Simulation refers to a broad collection of methods and applications to mimic the
behavior of real systems, usually on a computer with appropriate software. Since
computers and software have evolved tremendously in recent years, simulation has
become very powerful and popular [KEL98, p.3]. Simulation, like most analysis
methods, involves systems and their models. A system is a facility or process, either
actual or planned. It is a collection of elements that cooperate to accomplish some
stated objectives. A model is a collection of symbols and ideas that approximately
represent the functional relationship of the elements in a system [BA198, p.2]. The
system is studied to measure its performance, improve its operation, or determine an
optimal design. Sometimes the primary goal is simply to focus attention on
understanding how a system works, in which case the results obtained after the
modeling process may become irrelevant.
Often, simulation analysts find that the process of defining how a system works,
which must be done before developing a model, provides great insight into the
mechanisms of the system.
From a practical viewpoint, simulation is the process of designing and creating a
computerized model of a real or proposed system for the purpose of conducting
numerical experiments to improve the understanding of the behavior of that system
for a given set of conditions [KEL98, p.7].
Here, the purpose of the simulation was to evaluate the behavior of the system
under different sets of conditions by using the models to carry out groups of
experiments. The simulations primarily provided estimates of the statistics of system
performance. The systems, Kanban, CONWIP, and MRP, were modeled as a tandem
line of ten identical machines with exponentially distributed process times with a
mean of 20 seconds. Indeed, the modeling process gave great insight into the
mechanisms of the systems, creating a feeling for their behavior.
Yavuz and Satir reviewed selected published research on Kanban-based
operational planning and control in assembly and flow lines. Their article focuses on
simulation models and distinguishes between explorative and comparative type
research. Operational and experimental design features are summarized in tabular
format giving a good overview of work done in this area [YAV95].
3.1 The Software
Two simulation tools were used to conduct the experiments: EFML and Arena.
3.1.1 EFML
The Emulated Flexible Manufacturing Laboratory (EFML) was developed in the
Department of Industrial & Systems Engineering at the University of Florida. The
originating concept was to develop a hands-on environment where students and
companies could test and study manufacturing operations in a factory setting, giving
students and managers the ability to test the performance of a manufacturing facility,
which could be distributed over several computers.
The EFML is composed of a network of personal computers linked together
through the Virtual Manufacturing Software, which enables the communication of the
computers via the TCP/IP protocol and the internet. The software is written with
Borland's Delphi Developers Toolkit based on an object-oriented architecture. The
objects machine, dispatch/raw material inventory storage, repair and maintenance
facility, transportation, assembly line, and finished goods inventory storage can be
assigned to different computers to construct a complete factory. The object
architecture is illustrated in Figure 3-1.
Figure 3-1: The object architecture for the EFML.
As the dispatch object releases material to the shop floor, based on predetermined
release times, the behavior of each factory component can be observed in real time.
According to Mijon the advantage of the EFML over traditional simulation software
is the visual interface providing meaningful output. This output lets the viewer see
where the problem is arising and potentially the reason for its occurrence [MIJ97,
p.3].
The EFML is an evolving system that is continuously being improved; even at the
time of writing this thesis, features were being added to increase the realism of the
system and to enhance user friendliness.
3.1.2 Arena
Arena combines the ease of use found in high-level simulators with the flexibility
of simulation languages, down to general-purpose procedural languages like the
Microsoft Visual Basic programming system, FORTRAN, or C. It does this by
providing alternative and interchangeable templates of graphical simulation modeling
and analysis modules that one can combine to build a fairly wide variety of simulation
models. For ease of display and organization, modules are typically grouped into
panels to compose a template. By switching templates one can gain access to a whole
different set of simulation modeling constructs and capabilities. In many cases,
modules from different panels and templates can be mixed together in the same
model. The modules in Arena templates are composed of SIMAN components. Arena
maintains its modeling flexibility by being fully hierarchical, as depicted in Figure
3-2.
Figure 3-2: Arena's hierarchical structure. (The hierarchy ranges from user-created
templates and application solution templates such as Call$im and BP$im, through the
Common Panel of common, easily accessible modeling constructs and the Support
and Transfer panels for more detailed modeling, down to the Blocks and Elements
panels offering all the flexibility of the SIMAN simulation language, and finally
user-written Visual Basic, C/C++, or FORTRAN code, the ultimate in flexibility.)
Arena includes dynamic animation in the same work environment. It also provides
integrated support, including graphics, for some of the statistical design and analysis
issues that are part of a good simulation study [KEL98, p. 13].
The models for Kanban, CONWIP, and MRP were created with the Blocks and
Elements Panels to utilize all the flexibility of the SIMAN simulation language.
EFML and Arena served as the framework for the simulation study, which is
introduced next.
3.2 The Simulation Study
Issues related to design and analysis and representing the model in the software
certainly are essential to a successful simulation study. However, there are more
aspects that should be taken into consideration. Following the flowchart in Figure 3-3
should improve the chances of conducting a successful study.
Figure 3-3: Flowchart of a simulation study.
The simulation study does not necessarily have to follow the given flowchart
exactly; there is no general formula to guarantee success. It rather gives a rough
path to follow. Here, the identification of a problem can be omitted, proceeding
directly to the second step, stating the objective.
3.2.1 State Objective
The objective is to compare the performance of the three manufacturing control
systems: Kanban, CONWIP, and MRP. The comparison should involve three main
parameters influencing the performance of a manufacturing system:
* Batch size,
* Setup time, and
* Machine failure.
To observe the influence of the individual parameters without any blurring
interaction between one another, the central parameter of this study, batch size, is
introduced first. The complexity of the models is increased steadily by adding setup
time and failure in two further steps. This process allows new investigations to be
built on the knowledge gained during prior steps, improving the realism with the
increasing number of parameters.
After determining the objective of this study, the focus had to be directed on the
input data.
3.2.2 Collect/Prepare Data
The data is produced by the random number generator provided by the software
packages. The Arena random number generator was tested by applying the chi-square
test of uniformity to the numbers generated. The null hypothesis of uniformity was
not rejected at level α = 0.10, revealing that the numbers generated did not behave in
a way significantly different from the expectations for truly independent and
identically distributed random variables [BA198, p.60]. Similar behavior was expected from
EFML. As previously mentioned, the exponential distribution function was chosen as
the input distribution function. This distribution function is commonly used for
simulations of manufacturing systems as it has the remarkable memoryless property:
the past history of an exponentially distributed random variable plays no role in
predicting its future [KLE75, p. 66]. Unlike most other probability
distributions, the shape of the exponential distribution is governed by a single
quantity. Further, it is a distribution with the property that its mean equals its standard
deviation [MCC94, p.250].
3.2.3 Formulate Models
The models of the systems were built according to the descriptions previously
given. Figures 1-3, 1-4, and 1-6 depict the graphical models of Kanban, CONWIP,
and MRP respectively. For each control system four models were created to enable
simulations on the four levels including the following parameters:
* Batch size,
* Batch size and setup time,
* Batch size, setup time, and failure (dynamic response), and
* Batch size, setup time, and failure (in steady state).
A few assumptions were made to simplify the simulation process, unfortunately
resulting in a less realistic system. The most important assumptions were the
following:
* The 10 stages are in series, i.e., each stage has only one supplier and one
consumer,
* There is an infinite supply of raw parts at the input of the production system,
* The systems are saturated, i.e., there are always demands for finished parts,
* Information is transmitted instantly,
* Transportation within and between workstations is instantaneous,
* The system produces a single part type,
* Kanbans are associated with batches and not with individual entities, and
* Any kanban detached at the output of a stage is immediately available for the
upstream stage, there is no return delay.
More assumptions may result implicitly from those given above.
3.2.4 Verification of the Models
The three basic models were verified against the EFML output. EFML itself was
verified formally. However, its output data had not previously been compared with
the output of another simulation package, making this verification process an
especially interesting task.
For both Kanban and CONWIP 25 replications were run on Arena and EFML. For
MRP 30 replications were carried out. The configurations are given in Table 3-1. The
interarrival time corresponds to batch interarrival times.
Table 3-1: Configuration for Kanban, CONWIP, and MRP to verify correctness of the
models.
Control System   Process Time   Batch Size   Number of Cards   Interarrival Time
Kanban           20             4            20                -
CONWIP           20             4            20                -
MRP              20             5            -                 105
A paired t-test [see 4.2.3] was performed on the output data to test the following
hypothesis:
Ho: The true mean of the average cycle time differences is equal to 0, and
Ha: The true mean of the average cycle time differences is not equal to 0,
and to calculate the 95% confidence interval. The statistics are given in Table 3-2.
Table 3-2: Statistics on t-test to verify concurrence of output between EFML and
Arena for Kanban, CONWIP, and MRP.
System    t-value   df   p-value   Interval               Estimate of      Average
                                                          mean of diff.    Cycle Time
Kanban    0.4048    24   0.6892    (-3.6486; 5.4292)      0.8903           1608.517
CONWIP    0.2335    24   0.8174    (-2.2046; 2.7671)      0.2812           1848.7355
MRP       -0.164    29   0.8709    (-115.2965; 98.1806)   -8.5580          4166.046
All of the intervals include the value 0, so the null hypothesis cannot be rejected.
The 95% confidence intervals indicate a small deviation of the average cycle times for
Kanban and CONWIP.
For MRP the calculated interval is considerably larger, even when evaluated
relative to the average cycle time, whereas CONWIP presents a very small deviation.
The reason for the strong deviation of MRP is the varying average cycle time, even
after a large number of entities have passed through the system. The half-width for
the confidence interval indicated [see Table 8-4] that 10,000 entities would result in
an accurate estimation of the cycle time. Although Figure 3-4 reveals that the average
cycle time for 10,000 entities produced has approached a fairly stable value, it is still
varying for larger numbers.
Figure 3-4: The cycle time per entity and the cumulative average cycle time
dependent on the number of processed entities for MRP with Arena.
Even after 20,000 entities processed the average is still moving, indicating that the
random number generator has an influence on the output for Arena. The same
behavior is expected for EFML, as both simulation tools do not generate true random
numbers. This fact could explain large deviations even for a high number of
completed replications [see Figure 3-6].
To get an impression of how the systems would behave for different
configurations, more simulations were run for varying batch size (1, 2, 4, 8, and 10)
and number of cards (10, 15, 18, 20, and 22) or length of interarrival time (22-645).
The difference was measured as the percentage deviation in average cycle time,
Δt_cycle:

    Δt_cycle = 100 · (t_cycle^EFML − t_cycle^Arena) / t_cycle^EFML,

where t_cycle^EFML is the average cycle time for EFML and t_cycle^Arena is the
average cycle time for Arena.
As not enough replications were run to evaluate the output data statistically, scatter
diagrams were constructed to visualize the results.
Figure 3-5: The deviation of the average cycle time between EFML and Arena for
different configurations for CONWIP.
Figure 3-6: The deviation of the average cycle time between EFML and Arena for
different configurations for Kanban.
Figure 3-7: The deviation of the average cycle time between EFML and Arena for
different configurations for MRP.
Figures 3-5, 3-6, and 3-7 indicate a fairly random output. The shift of data points
for Kanban [see Figure 3-6] can probably not be attributed to a software error, as the
points lie above (batch size eight) and below (batch size four) the x-axis. The
calculations done earlier reveal no significant difference between the outputs for
batch size four and 20 cards assigned. Faulty input data would probably result in a
difference larger than 2%. The shift may again be attributed to the random number
generators.
3.2.5 Validation
The models under consideration represented systems existing in theory only. Too
many parameters were omitted to enable the simulation of a real system, making
validation impossible. Yavuz and Satir mention that the modeling of a real-life
manufacturing environment and the usage of empirical data would provide a practical
means of validation for the simulation models developed. Validation was missing in
most of the articles they reviewed; it would unravel intricacies of manufacturing that
are demystified through mostly gross assumptions [YAV95].
3.2.6 Simulation Experiment Design
Experiments are performed by investigators in virtually all fields of inquiry,
usually to discover something about a particular process or system. Literally, an
experiment is a test. A designed experiment is a test or series of tests in which
purposeful changes are made to the input variables of a process or system to observe
and identify the reasons for changes in the output response [MON91, p.1].
The progression of choice of factors and levels included in the experiments is
discussed at the beginning of the chapters covering the different stages of simulation:
* batch size,
* batch size and setup time, and
* batch size, setup time, and failure.
The discussions comprise the following factors, henceforth called system parameters:
* Total number of cards assigned to the entire line, c [see 5.1.3],
* Batch size, b [see 5.1.2],
* The ratio of setup time to process time, r [see 6.1.1],
* Time between failures (interfailure time), t_fail [see 7.1], and
* Repair duration, t_repair [see 7.1].
The levels were determined according to practical applicability, primarily
concerning average machine utilization. First, a high and a low level per factor were
established. Then, the interval [low, high] was divided into segments with a certain
number of intermediate levels. As the average run time of one replication was
approximately two minutes, the number of levels was kept high, mostly equal to ten.
The total cost could be selected as the primary response variable as the cost affects
the basic goal of a company: making profit. The optimization of manufacturing
resources can reduce costs considerably resulting in a higher profit margin or even a
higher revenue as other market segments are conquered, in turn increasing the overall
market share. This elevates cost to one of the most important indicators if not the most
important indicator for the efficiency of a manufacturing system.
One characteristic makes cost even more useful: it can serve as an overall
indicator that takes different aspects into consideration, consolidating all the other
indicators. However, when several indicators are accumulated to be represented by
one quantifier, the question of how to weight the individual components arises. The
weights diverge widely across industries. Even within one industry, they
may differ considerably, representing a company's unique environment.
A wide variety of functions is available, enabling a controller to construct a model
perfectly fitting the company's needs. Unfortunately, weights often contain error
terms and other parameters that are determined by subjective estimation, making a
cost analysis at this point questionable.
Constructing functions for different scenarios would certainly give more insight
into the problem [AFY98]. But, the gain in investigating other factors was classified
as more important. Furthermore, the regression models can be transformed into cost
functions without greater effort. The construction of more complex models would
certainly be an interesting topic for another thesis that would probably be most
rewarding when written in cooperation with industry.
Consequently, the performance measures were selected as the response variables.
The control systems were evaluated on several criteria utilizing the following
performance indicators:
* Work in process, WIP,
* Throughput, Th,
* Average utilization, u [see 5.2.1],
* Average cycle time, t_cycle,
* Time spent in the system (analysis of dynamic response), t_system [see 7.2.1
Indicators, Time Spent in System], and
* Recovery time (analysis of dynamic response), t_recover [see 7.2.1 Indicators,
Recovery Time].
The relationship,

    Th = WIP / t_cycle,
is known as Little's Law and is often referred to in manufacturing literature, being
originally derived for a basic queuing system. It was found to be independent of any
specific assumptions regarding the arrival distribution, the service time distribution,
the number of servers in the system, or the particular queuing discipline within the
system [KLE75, p. 17]. The formula existed as a "folk theorem" for many years
before Little established its validity in a formal way [LIT61]. The formula is a useful
tool as it can be used to calculate the third unknown indicator when two indicators are
known, independent of system configurations.
The three basic principles of experimental design,
1. randomization,
2. blocking, and
3. replication
were taken into consideration in the following manner:
1. As the system variables and statistics were reinitialized after every replication and
the random number generators were assumed to produce numbers behaving like a
true random sequence, the order of the runs was not randomized.
2. The simulation software and the computer hardware provided an identical
environment for every experiment performed, making experiment blocking
[MON91, p. 9], unnecessary.
3. As most of the simulations were performed for nonterminating systems [see
3.2.7] a large number of entities was produced rather than completing several
replications of the same configuration [see 4.2.2]. Only the analysis done on the
dynamic behavior to failure involved a terminating system [see 4.2.1]. Here, the
number of replications was established prior to the bulk of experiments [see
8.4.1].
3.2.7 Simulation Execution
Depending on the starting and stopping conditions, terminating or non-terminating
simulations can be executed as a natural reflection of how the target system actually
operates. A terminating simulation ends according to some model-specific rule or
condition; for instance, a manufacturing line operates as long as it takes to produce
500 completed assemblies specified by order. According to Kelton et al. the key
notion is that the time frame of the simulation has a well-defined and natural end, as
well as a clearly defined way to start up. A steady-state or non-terminating simulation,
on the other hand, is one in which the quantities to be estimated are defined in the
long run, i.e., over a theoretically infinite time frame [KEL98, p. 177]. For a
manufacturing line that never stops or restarts, a non-terminating simulation is
appropriate.
After initial reflections on parameter settings and several model modifications,
preliminary calculations of the confidence intervals [see 4.2 and CHAPTER 8] were
conducted. These computations were done to ensure high accuracy on the estimation
of the performance indicators. After the completion of the simulations on each of the
levels, the confidence in the indicators was reevaluated. All the calculations carried
out on the confidence were aggregated and documented in a separate chapter so as
not to disrupt the analysis of the data.
3.2.8 Output Analysis and Interpretation of the Results
The output analysis and interpretation forms the major part of this documentation.
Since simulation was the modeling tool in question, statistical output analyses were
considered in a comprehensive manner. Yavuz and Satir, and Chu and Shih found
these issues to be treated rather lightly in many studies reviewed [YAV95] [CHU92].
3.2.9 Conclusions and Implementation
At the end of the three chapters encompassing the discussions on the stepwise
introduction of batch size, setup time, and machine failure the conclusions drawn
from prior investigations are presented. Conclusions presented within the chapters are
clearly marked by a heading.
Unfortunately, a few additional factors have to be taken into consideration to
enable simulations of an authentic manufacturing line. However, some findings may
be translated into implementations able to improve productivity and efficiency of a
real production system.
Before proceeding to the actual discussions of the simulations, a fairly
comprehensive but short theoretical background on the statistical analysis methods
used is given in the next chapter. The summary of the statistical theory in one chapter
can serve as a review for some readers, but should primarily serve as a source of
reference, making repeated explanations within the chapters redundant. Thus, several
clarifications are reduced to a single one, and the obstruction of the narration is
eliminated.
CHAPTER 4
STATISTICS
A simulation is a computer-based statistical sampling experiment [BA198, p.97].
The results of a simulation have to be analyzed with the appropriate statistical
techniques to reveal their full potential. Statistics cannot prove that a factor has a
particular effect. They only provide guidelines as to the reliability and validity of
results. Properly applied, statistical methods do not allow anything to be proved
experimentally, but, they do allow us to measure the likely error in a conclusion or to
attach a level of confidence to a statement. Thus, the primary advantage of statistical
methods is that they add objectivity to the decision-making process. Unfortunately,
the output processes of virtually all simulations are nonstationary and autocorrelated.
Thus, classical statistical techniques based on independent and identically distributed
(IID) observations may not be directly applicable. Sometimes, special
techniques have to be applied to ensure the statistical independence of the output data.
Let x11, x12, ..., x1m be a realization of the random variables X1, X2, ..., Xm
resulting from a simulation run of length m using the random numbers u11, u12, .... If
the simulation is run with a different set of random numbers u21, u22, ..., a different
realization x21, x22, ..., x2m of the random variables X1, X2, ..., Xm will be obtained.
For different runs of a simulation, different random numbers are used for each
replication. The statistical counters are reset at the beginning of each replication,
which uses the same initial conditions. Suppose that we make n independent runs of
length m, resulting in the observations:

    x11 ... x1i ... x1m
    x21 ... x2i ... x2m
    ...
    xn1 ... xni ... xnm
The observations from a particular replication (row) are not IID due to the nature
of the random number generators. However, the observations in the ith column are
IID observations of the random variable Xi, i = 1, 2, ..., m. This independence across
runs allows the statistical methods discussed below to be used. The goal is to make
use of the observations to draw inferences about the random variables X1, X2, ..., Xm,
the parameters influencing the performance of the different control systems [BA198,
p.98].
4.1 Transient and SteadyState Behavior
For the output stochastic process X1, X2, ... let

    Fi(x | I) = P(Xi ≤ x | I), i = 1, 2, ...,

where x is a real number and I represents the initial conditions. Fi(x | I) is called the
transient distribution of the output process at time i for initial conditions I. For fixed x
and I, the probabilities F1(x | I), F2(x | I), ... are just a sequence of numbers. If
Fi(x | I) → F(x) as i → ∞ for all x and all initial conditions I, then F(x) is called the
steady-state distribution of the output process X1, X2, .... Here, if the distributions are
approximately the same after k steps in time, then steady-state is said to start at time
k. However, steady-state does not mean that the random variables Xk+1, Xk+2, ... will
take on the same value in a particular simulation run. It means that they will have
approximately the same distribution [BA198, p.98].
As mentioned earlier, statistics cannot prove the correctness of a certain
statement. Instead, they allow statements to be made with a certain confidence.
4.2 Confidence
The statistical analysis methods differ according to whether simulations are
terminating or nonterminating [see 3.2.7].
4.2.1 Analysis for Terminating Simulations
The data set is given by n independent replications of a terminating simulation.
Each replication is initiated with the same conditions and a different random number
generator seed and is terminated by a certain event. Thus, independence of the
observations is achieved by a different string of random numbers.
Let Xi be the observation of the ith replication, i = 1, 2, ..., n. It is assumed that the
Xi's are comparable for different replications. Consequently, the Xi's can be treated as
independent and identically distributed random variables.
For n data points X1, X2, ..., Xn, the sample mean is an unbiased point estimator
for the mean of X, represented by the following formula:

    X̄(n) = Σ_{i=1}^{n} X_i / n.
The 100(1−α)% confidence interval for the mean is given by

    X̄(n) ± t_{n−1,1−α/2} · sqrt(s²(n)/n),

where s²(n) is the sample variance given by

    s²(n) = Σ_{i=1}^{n} [X_i − X̄(n)]² / (n − 1)

with n−1 degrees of freedom.
Let h be the half-width of the confidence interval of the point estimate,

    h = t_{n−1,1−α/2} · sqrt(s²(n)/n).

To ensure the desired accuracy of the estimation,

    h ≤ γ · X̄(n),

where γ is a given parameter, 0 < γ < 1, here γ = 0.1 by default.
After an initial simulation with n1 replications this condition may not be satisfied.
Additional replications have to be run to reduce the initial half-width h1 to the desired
half-width h2 [BA198, p. 103]. For moderately large n1, the sample statistics will
remain relatively unchanged with respect to n, thus

    t_{n1−1,1−α/2} ≈ t_{n2−1,1−α/2},  s²(n1) ≈ s²(n2),  X̄(n1) ≈ X̄(n2).

Consequently,

    n2 ≈ n1 · (h1 / h2)².
4.2.2 Analysis for Non-Terminating Simulations
Let Y1, Y2, ... be an output string from a single replication of a non-terminating
simulation with

    P(Yi ≤ y) = Fi(y) → P(Y ≤ y) = F(y) as i → ∞,

where Y is the steady-state random variable with distribution F. Due to the initial
conditions, the observations near the beginning of the simulation usually are not
representative of the steady-state behavior. For given observations Y1, Y2, ..., Ym the
following formula gives a good point estimate of E(Y):

    Ȳ(m, l) = Σ_{i=l+1}^{m} Y_i / (m − l),

where l stands for the warm-up period and m for the number of observations. l and m
are determined such that

    Ȳ(m, l) ≈ E(Y).
The Method of Batch Means is applied to ensure the accurate calculation of a
point estimate for nonterminating systems.
A replication results in observations Y_1, Y_2, ..., Y_m after removing the warmup
period l. The m observations are divided into n batches of length k; thus, m = nk. Let
Ȳ_j(k) be the sample mean of the k observations in the jth batch. Let
Ȳ(n, k) = ( Σ_{j=1}^{n} Ȳ_j(k) ) / n
be the grand sample mean. Then Ȳ(n, k) can be used as the point estimate for E(Y).
The batch size k can be determined by a correlation analysis: k is set equal to the
lag length resulting in a minimal correlation of the data. Should
n = m / k
be noninteger, the excess amount of data,
e = m - k ⌊m/k⌋,
can be truncated.
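The batch-means procedure above can be sketched as follows; the function name and arguments are illustrative, not part of the thesis.

```python
def batch_means(ys, warmup, k):
    """Batch-means point estimate of E(Y) from one long run.

    ys:     observations Y_1, ..., Y_m of a nonterminating simulation
    warmup: number of initial observations l to discard
    k:      batch size (e.g., chosen from a correlation analysis)
    """
    data = ys[warmup:]
    n = len(data) // k            # number of whole batches
    data = data[:n * k]           # truncate the excess e = m - k*floor(m/k)
    batch_avgs = [sum(data[j * k:(j + 1) * k]) / k for j in range(n)]
    grand_mean = sum(batch_avgs) / n
    return grand_mean, batch_avgs
```

The returned grand mean serves as the point estimate for E(Y); the batch averages can additionally be used to form a confidence interval.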
4.2.3 Paired-t Confidence Interval
The following assumptions have to be made:
1. Each system provides an equal amount of data (n replications), and
2. Observations are independent within the systems.
The following descriptions will refer to the two systems as System A and System B.
Table 4-1: For the paired-t test, comparing two systems is reduced to estimating a
single parameter, the difference.
Replication   System A   System B   Difference
1             x_a1       x_b1       d_1
2             x_a2       x_b2       d_2
...           ...        ...        ...
n             x_an       x_bn       d_n
The confidence interval on the quantity δ, which is the expected value of d, will
enable a comparison between the two systems. Thus, the problem of comparing two
systems is reduced to estimating a single parameter, namely δ [see Table 4-1]. The
resulting confidence interval is referred to as a paired-t confidence interval.
This method is particularly appealing as the following assumptions can be
omitted:
1. Variance of x_a = variance of x_b (an assumption of the two-sample-t method), and
2. x_aj and x_bj are independent.
The confidence interval requires the observations within each column, x_a1, x_a2, ...,
x_an (and likewise for System B), to be independent, but correlation within a row, i.e.,
between x_aj and x_bj, is permitted. The procedure for computing the confidence
interval on δ is exactly the same as for the single-system case:
d̄ = ( Σ_{j=1}^{n} d_j ) / n,
s(d̄) = √( Σ_{j=1}^{n} (d_j - d̄)² / (n(n-1)) ).
The halfwidth for a (1-α) confidence interval on δ centered at d̄ is then given by
h = t_{n-1,1-α/2} s(d̄).
The statistic d̄ is an estimate of the difference in the measured performance of the
two systems: if the two systems perform identically, the expected value of d̄ is 0. If
the computed confidence interval contains 0, a difference between System A and
System B cannot be reliably stated. However, if the interval does not contain 0, the
two systems differ with the appropriate confidence level, and the appropriate system
can be selected based on the sign of d̄.
The authors elaborate on the fact that if the interval on the difference between the
systems contains 0, the two systems are not necessarily the same. Additional
replications may be required to discern any difference [PEG95, pp. 177].
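The paired-t computation above can be sketched as follows; the function name and the t critical value passed in (looked up in a t table for the chosen α) are illustrative assumptions.

```python
import math

def paired_t_halfwidth(xa, xb, t_crit):
    """Point estimate d-bar and halfwidth h of the paired-t confidence
    interval on delta = E(d), where d_j = xa_j - xb_j.

    t_crit is t_{n-1, 1-alpha/2}, taken from a t table.
    """
    n = len(xa)
    d = [a - b for a, b in zip(xa, xb)]
    dbar = sum(d) / n
    s2 = sum((dj - dbar) ** 2 for dj in d) / (n - 1)
    return dbar, t_crit * math.sqrt(s2 / n)
```

If the interval [d̄ - h, d̄ + h] excludes 0, the systems differ with the chosen confidence, and the sign of d̄ indicates which performs better.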
Another powerful tool to analyze data is regression. As regression describes
statistical relations between variables, it also enables estimation and prediction of data
points.
4.3 Multiple Regression
A regression model is a formal means of expressing the two essential ingredients
of a statistical relation:
1. A tendency of the dependent variable to vary with the independent variable in a
systematic fashion, and
2. A scattering of points around the curve of statistical relationship [NET90, p. 27].
Probabilistic models that include terms involving x², x³ (or higher-order terms), or
more than one independent variable are called multiple regression models. The
general form of these models is
y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_k x_k + ε.
The dependent variable y is written as a function of k independent
variables x_1, x_2, ..., x_k. The terms x_1, x_2, ..., x_k can be functions of variables as
long as the functions do not contain unknown parameters. The random error term, ε,
is added to make the model probabilistic rather than deterministic. The value of the
coefficient β_i determines the contribution of the independent variable x_i, and β_0 is
the y-intercept. The coefficients β_0, β_1, ..., β_k are usually unknown because they
represent population parameters. In the model
y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_k x_k + ε,
the term ε is the random error and the remaining terms form the deterministic part of
the model.
The Least Squares Approach is used to fit the multiple regression models. The
estimated model
ŷ = β̂_0 + β̂_1 x_1 + ... + β̂_k x_k
minimizes
SSE = Σ (y - ŷ)²,
where SSE stands for the sum of squared errors. The sample estimates β̂_0, ..., β̂_k
are obtained as the solution to a set of simultaneous linear equations.
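The simultaneous linear equations mentioned above are the normal equations (X'X)b = X'y. A minimal sketch of solving them with Gaussian elimination follows; the function name and the toy data are illustrative assumptions.

```python
def fit_least_squares(xs, ys):
    """Least squares estimates [b0, b1, ..., bk] minimizing SSE.

    xs: rows of predictor values [x1, ..., xk]; an intercept column is added.
    Solves the normal equations (X'X) b = X'y by Gaussian elimination.
    """
    X = [[1.0] + list(row) for row in xs]
    p = len(X[0])
    # Build X'X and X'y
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(p)]
         for r in range(p)]
    b = [sum(X[i][r] * ys[i] for i in range(len(X))) for r in range(p)]
    # Forward elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta
```

For example, fitting the exact data y = 1 + 2x recovers the coefficients 1 and 2.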
Model assumptions:
1. For any given set of values of x_1, x_2, ..., x_k, the random error ε has a normal
probability distribution with mean equal to 0 and variance equal to σ².
2. The random errors are independent in a probabilistic sense [MCC94, p. 744].
σ² represents the variance of the random error, ε. Thus it is an important measure
of the usefulness of the model for the estimation of the mean and the prediction of
actual values of y. If σ² = 0, all the random errors will equal 0 and the predicted
values, ŷ, will be identical to E(y); that is, E(y) will be estimated without error. On
the other hand, a large value of σ² implies large values of ε and larger deviations
between the predicted values, ŷ, and the mean value, E(y). Thus, σ² plays a major
role in making inferences about β_0, β_1, ..., β_k, in estimating E(y), and in predicting y
for specific values of x_1, x_2, ..., x_k.
Since the variance of the random error will rarely be known, the results of the
regression analysis are used to estimate its value with the following formula:
s² = SSE / (n - (k+1)),
with (k+1) indicating the number of β parameters. This will be referred to as the mean
square for error (MSE). To enable a meaningful interpretation, the standard deviation
s = √( SSE / (n - (k+1)) )
is introduced as a measure of variability.
4.3.1 Estimating and Testing Hypotheses about the β Parameters
Some of the β parameters have practical significance in the models formulated in
the following chapters. Thus, their values will be estimated and hypotheses will be
tested about them. Considering the model
y = β_0 + β_1 x + β_2 x² + ε,
the following test could be performed using a t-test:
null hypothesis H_0: β_2 = 0 (no curvature in the response curve)
against the
alternative hypothesis H_a: β_2 < 0 (concavity exists in the response curve).
The t-test utilizes a test statistic analogous to that used to make inferences about
the slope of the straight-line regression model. The t statistic is formed by dividing
the sample estimate, β̂_2, of the parameter, β_2, by the estimated standard deviation of
the sampling distribution of β̂_2, s_β̂_2:
Test statistic: t = β̂_2 / s_β̂_2.
For relevant estimated model coefficients β̂_i the estimated standard deviation
s_β̂_i and the calculated t values will be given. To find the rejection region for the test,
the upper-tail value t_α is retrieved from the t-table. This is the value such that
P(t > t_α) = α. It can then be used to construct rejection regions for either one-tailed
[see Figure 4-1] or two-tailed tests.
Figure 4-1: Rejection region for a test of β_2.
The numbers given in the following chapters list the two-tailed significance levels
for each t value. The null hypothesis, that the parameter equals zero, would be
rejected in favor of the alternative hypothesis, that the parameter does not equal
zero, at any α level larger than the given number. A 100(1-α)% confidence interval
for a β parameter is given by
β̂_i ± t_{α/2} s_β̂_i,
where t_{α/2} is based on n - (k+1) degrees of freedom, with n observations and (k+1) β
parameters in the model [MCC94, p. 746].
4.3.2 Usefulness of a Model: R² and the Analysis of Variance F-Test
Conducting t-tests on each β parameter in a model is not a good way to determine
whether a model is contributing information for the prediction of y. When conducting
a series of t-tests to determine whether the independent variables are contributing to
the predictive relationship, it is likely that an error would be made in deciding
which terms to retain in the model and which to exclude. This may result in including
a large number of insignificant variables and excluding some useful ones. Thus, a
global test that encompasses all the β parameters is needed. Furthermore, it would be
useful to find a statistical quantity that measures how well the model fits the given
data. As this statistical quantity, R², the multiple coefficient of determination, can be
used to calculate the F value, R² will be introduced first.
4.3.3 Multiple Coefficient of Determination, R²
As the name multiple coefficient of determination indicates, R² is the equivalent
of r², the coefficient of determination for the straight-line model [see MCC94, p. 697].
It is defined as
R² = 1 - Σ(y_i - ŷ_i)² / Σ(y_i - ȳ)² = Explained variability / Total variability,
where ŷ_i is the predicted value of y_i for the model. R² represents the fraction of the
sample variation of the y values that is explained by the least squares prediction
equation. R² = 0 implies a complete lack of fit of the model to the data, and R² = 1
implies a perfect fit with the model passing through every data point. Thus, the larger
the value of R², the better the model fits the data [MCC94, p. 759].
4.3.4 Analysis of Variance F-Test
The following test formally tests the global usefulness of the model:
H_0: β_1 = β_2 = ... = β_k = 0
(all model terms are unimportant for predicting y)
H_a: at least one of the coefficients β_i is nonzero
(at least one model term is useful for predicting y).
The test statistic used to test this hypothesis is an F statistic, which can be
calculated with the following formula:
F = (R²/k) / [(1 - R²)/(n - (k+1))],
where n is the sample size and k is the number of terms in the model. The formula
indicates that the F statistic is the ratio of the explained variability divided by the
model degrees of freedom to the unexplained variability divided by the error degrees
of freedom. The larger the proportion of the total variability accounted for by the
model, the larger the F statistic.
To determine when the ratio becomes large enough that the null hypothesis can be
rejected, i.e., that the model is more useful than no model at all for predicting y, the
calculated F value is compared to a tabled F value:
Rejection region: F > F_α, where F_α is based on k numerator and n - (k+1)
denominator degrees of freedom.
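The two quantities above can be computed directly from the fitted and observed responses; the function names are illustrative, not part of the thesis.

```python
def r_squared(ys, yhats):
    """Multiple coefficient of determination R^2 = 1 - SSE/SST."""
    ybar = sum(ys) / len(ys)
    sse = sum((y - yh) ** 2 for y, yh in zip(ys, yhats))
    sst = sum((y - ybar) ** 2 for y in ys)
    return 1 - sse / sst

def global_f(r2, n, k):
    """Global F statistic for H0: beta_1 = ... = beta_k = 0."""
    return (r2 / k) / ((1 - r2) / (n - (k + 1)))
```

For instance, a model with R² = 0.8 fitted to n = 10 observations with k = 2 terms yields F = 14, to be compared against the tabled F with 2 and 7 degrees of freedom.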
McClave et al. caution the reader that a rejection of the null hypothesis leads to
the conclusion, with 100(1-α)% confidence, that the model is useful. However, useful
does not necessarily mean best. Another model may prove even more useful in terms
of providing more reliable estimates and predictions. Thus, this global Ftest is
usually regarded as a test that the model must pass to merit further consideration
[MCC94, p.762]. It will only be used in this sense in the following chapters.
4.3.5 Comparison of Two or More Regression Functions
Instead of fitting separate regressions for separate data sets, only one regression is
fitted. This regression gives rise to the same response functions otherwise obtained.
This has the following advantages:
1. Inferences can be made more precisely by working with one regression model
containing indicator variables, since more degrees of freedom will then be
associated with the error mean square (MSE) [NET90, p. 355],
2. One regression run on the computer will yield both fitted regressions, and
3. Tests for comparing the regression functions for the different classes of the
qualitative variable can be clearly seen to be tests of regression coefficients in a
general linear model [NET90, p.358].
Here the data sets of the different control systems are accumulated to produce one
data set. Indicator variables (or binary variables) that take on the values 0 and 1 are
used to quantitatively identify the classes of the qualitative variables distinguishing
the control systems. To prevent computational difficulties, a qualitative variable with c
classes will be represented by (c-1) indicator variables [see NET90, p. 351].
Assuming that a first-order model is to be employed, it would give rise to the
following function:
y = β_0 + β_1 x_1 + β_2 I_1 + ε,
where
x_1 = independent variable, and
I_1 = 1 for control system 1,
I_1 = 0 for control system 2.
The response function of this regression model is
E(y) = β_0 + β_1 x_1 + β_2 I_1,
which can be interpreted as
E(y) = (β_0 + β_2) + β_1 x_1
for control system 1, and as
E(y) = β_0 + β_1 x_1
for control system 2. Thus, β_2 measures the differential effect of the type of
control system. It shows how much higher (lower) the mean response line is for the
class coded 1 than the line for the class coded 0, for any given level of x_1.
This approach is completely general. If three control systems are to be compared,
additional variables are simply added to the model. Furthermore, the differentiation is
not only limited to the yintercept, but can be introduced to distinguish gradients or
coefficients of variables with higher order.
However, the following assumption has to be made: the error term variances in
the regression models for the different populations are equal; otherwise,
transformations may be used to approximately equalize them.
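The (c-1) indicator-variable encoding described above can be sketched as follows; the function name and the control-system labels used in the example are illustrative assumptions.

```python
def indicator_columns(labels, classes):
    """Encode a qualitative variable with c classes as c-1 indicator columns.

    The last entry of `classes` is the baseline (all zeros), so a variable
    with c classes is represented by c-1 columns; using c columns plus an
    intercept would make the design matrix singular.
    """
    return [[1 if lab == cls else 0 for cls in classes[:-1]]
            for lab in labels]
```

The resulting columns are appended to the design matrix, and the coefficient of each indicator measures the differential effect of its class relative to the baseline.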
4.3.6 Transformation
Simple transformations of either the dependent variable y or the independent
variable x, or of both, are often sufficient to make the simple regression model
appropriate for the transformed data. Unequal error variances and nonnormality of
the error terms frequently appear together. To reduce the departure from a simple
linear regression model a transformation on y is needed, since the shapes and spreads
of the distributions of y need to be changed. Such a transformation on y may help to
linearize a curvilinear regression relation at the same time. At other times, a
simultaneous transformation on x may also be needed to obtain or maintain a linear
regression relation. However, it is very unlikely that such a transformation will be
needed in the following chapters.
Box and Cox [COX58] have developed a procedure for choosing a transformation
from the family of power transformations on y. This procedure is useful for correcting
unequal error variances. The family of power transformations is of the form
y' = y^γ,
where γ is a parameter to be determined from the data. The family encompasses the
widely used transformation
y' = log_e y.
The criterion for determining the appropriate parameter γ of the transformation of
y in the Box-Cox approach is to find the value of γ that minimizes the error sum of
squares SSE for a linear regression based on that transformation.
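A crude grid-search version of this selection can be sketched as follows. Note one detail beyond the text above: the transformed responses are scaled by the geometric mean of the data (the standard Box-Cox normalization), since raw SSE values are otherwise not comparable across γ. The function names and grid are illustrative assumptions.

```python
import math

def standardized(ys, gamma):
    """Box-Cox power transformation of ys, scaled by the geometric mean K
    so that error sums of squares are comparable across gamma values."""
    K = math.exp(sum(math.log(y) for y in ys) / len(ys))
    if gamma == 0:
        return [K * math.log(y) for y in ys]
    return [(y ** gamma - 1) / (gamma * K ** (gamma - 1)) for y in ys]

def best_gamma(xs, ys, grid):
    """Return the gamma in `grid` whose straight-line fit of the transformed
    response on x yields the smallest SSE."""
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    best_g, best_sse = None, None
    for g in grid:
        w = standardized(ys, g)
        wbar = sum(w) / n
        b1 = sum((x - xbar) * (wi - wbar) for x, wi in zip(xs, w)) / sxx
        b0 = wbar - b1 * xbar
        sse = sum((wi - (b0 + b1 * x)) ** 2 for x, wi in zip(xs, w))
        if best_sse is None or sse < best_sse:
            best_g, best_sse = g, sse
    return best_g
```

On data with an exponential trend, the search selects γ = 0, i.e., the logarithmic transformation that linearizes the relation.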
4.3.7 Residual Analysis
When regression analysis is applied, deviations from the initial assumptions may
result in incorrectly stated reliabilities. The departures have to be detected and taken
into account should they be big enough to alter the results. Fortunately, experience
has shown that least squares regression analysis produces reliable statistical tests,
confidence intervals, and prediction intervals as long as the departures from the
assumptions are not too great [MCC94, p.784].
As the assumptions [see 3.2.3] concern the random error component, e, of the
model, a first step is to estimate the random error. Since the actual random error
associated with a particular value of y is the difference between the actual y value and
its unknown mean, the error is estimated by the difference between the actual y value
and the estimated mean. This estimated error is called the regression residual, denoted
by ε̂:
ε = actual random error
  = (actual y value) - (mean of y)
  = y - E(y)
  = y - (β_0 + β_1 x_1 + β_2 x_2 + ... + β_k x_k),
ε̂ = estimated random error (residual)
  = (actual y value) - (estimated mean of y)
  = y - ŷ
  = y - (β̂_0 + β̂_1 x_1 + β̂_2 x_2 + ... + β̂_k x_k).
As the true mean of y (i.e., the true regression model) is not known, the actual
random error can not be calculated. However, because the residual is based on the
estimated mean (the least squares regression model), it can be calculated and used to
estimate the random error and to check the regression assumptions. These checks are
generally referred to as residual analyses [MCC94, p.784].
4.3.8 Influential Observations
When using regression, some subset of the observations may be found to be
unusually influential. Sometimes these influential observations lie relatively far away
from the rest of the data. Dennis R. Cook developed an excellent diagnostic, Cook's
distance. This is a measure of the squared distance between the usual least squares
estimate of β based on all n observations and the estimate obtained when the ith point
is removed, say, β̂_(i) [NET90, p. 403].
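For the straight-line case, this leave-one-out definition can be computed directly by re-fitting without each point in turn; the function name and the scaling by p·MSE (the usual normalization of Cook's distance) are assumptions of this sketch.

```python
def cooks_distances(xs, ys):
    """Cook's distance D_i for each observation of a straight-line fit,
    from its leave-one-out definition: the squared distance between the
    fitted line on all n points and the line fitted with point i removed,
    summed over the data and scaled by p * MSE."""
    def fit(pts):
        n = len(pts)
        xb = sum(x for x, _ in pts) / n
        yb = sum(y for _, y in pts) / n
        b1 = (sum((x - xb) * (y - yb) for x, y in pts)
              / sum((x - xb) ** 2 for x, _ in pts))
        return yb - b1 * xb, b1           # intercept, slope

    pts = list(zip(xs, ys))
    b0, b1 = fit(pts)
    n, p = len(pts), 2                    # p = number of beta parameters
    mse = sum((y - (b0 + b1 * x)) ** 2 for x, y in pts) / (n - p)
    ds = []
    for i in range(n):
        c0, c1 = fit(pts[:i] + pts[i + 1:])
        shift = sum(((b0 + b1 * x) - (c0 + c1 * x)) ** 2 for x, _ in pts)
        ds.append(shift / (p * mse))
    return ds
```

An observation far from the rest of the data, such as a high-leverage outlier, receives by far the largest distance.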
The next chapter comprises a discussion of the influence of the batch size on the
performance of the three manufacturing systems. A comparison between the systems
introduces the chapter to give the reader a brief overview of the material. Then, the
two pull systems are discussed in more detail to explain their behavior. The push
system, MRP, is introduced separately due to its different attributes. After dealing
with the material in more detail on a level where the interdependence of factors is
more evident the discussion continues on a higher level by returning to the
comparison of the systems.
CHAPTER 5
BATCH SIZE
Avoiding setups and facilitating material handling are the two primary reasons for
batching jobs together in a manufacturing system. If large lots of similar products are
run in batches, equipment setups are infrequently needed. If setups are long, large lots
result in substantially more effective capacity. Furthermore, for process batches equal
to move batches the material that is moved between workstations in large batches
requires less handling than if it is moved in small lots [HOP96, p. 288].
The entities arrive at a workstation in a batch. While the first entity of that batch
enters the machine, the remaining entities have to wait to be processed. The batch can
be transported to the next stage in the system, only when the last entity of a batch is
completed. Here, transportation is assumed infinitely fast resulting in zero
transportation time.
A variety of single stage models and analytical techniques have been reviewed by
Chaudhry and Templeton [CHA83]. The literature covers single-stage manufacturing
systems only and is thus not applicable to a ten-machine tandem line. Gold investigates
sophisticated batch service systems in push and pull manufacturing environments as
single stage systems by using embedded Markov chain techniques [GOL92]. Kim et
al. focus on production scheduling in semiconductor wafer fabrication taking batch
sizes into account. They use simulation to evaluate new scheduling rules [KIM98].
Schoening and Kahnt show how to extend the methodology of Mitra and Mitrani
[MIT90] to model a onecard Kanban system with batch servers [SCG95]. However,
in all three cases the batches could be processed simultaneously by batch servers, such
as plating baths, drying facilities, and heattreating ovens, not quite transferable to the
tandem line with sequentially processing machines.
The model parameters and their levels are introduced to elaborate on the input
data prior to the discussion of the simulation results.
5.1 Parameters
The process time was established at 20 seconds throughout all the simulations
while the following parameters were varied to evaluate the performance of the
manufacturing systems:
* Batch size,
* Total number of cards assigned to the line, and
* The interarrival time for MRP.
The levels of these parameters or factors are discussed briefly.
5.1.1 Process Time
A workstation which processes a batch size r can be modeled as an rstage
Erlangian server. In such a system a customer enters the server, proceeds one stage at
a time through the sequence of r stages and departs at the end. Only then, a new
customer enters. The total time that a customer spends in this service facility is the
sum of r independent identically distributed random variables, each chosen from an
exponential distribution. The probability distribution function of the service time is an
Erlangian distribution [KLE75, pp. 123124]. Consequently, the process time for a
batch of size r is distributed according to an rstage Erlangian distribution with a
mean of the individual process time, viz. 20 seconds. The batch size and the mean
process time per entity were given as an input.
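Sampling such a batch process time can be sketched in one function, assuming (as in the text) that each of the r stages is an independent exponential draw with the 20-second mean individual process time; the function name is illustrative.

```python
import random

def batch_process_time(r, mean_per_entity=20.0, rng=random):
    """One batch processing time: the sum of r independent exponential stage
    times, i.e., an r-stage Erlang variate. Each stage is assumed to have the
    20-second mean individual process time, so the batch mean is r * 20 s."""
    return sum(rng.expovariate(1.0 / mean_per_entity) for _ in range(r))
```

Averaging many draws for, say, r = 4 gives a value near the theoretical batch mean of 80 seconds.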
5.1.2 Batch Size
The following batch sizes were selected:
1, 2, ..., 10, and 20.
Initially, neutral experiments were conducted to establish differences of system
behavior for batch size 20. The results were found to be compliant with the results
obtained for batch size 1 to 10. Thus, batch size 20 was omitted for further
experiments.
5.1.3 Number of Cards
The second design parameter portrays the number of cards assigned to the entire
line. Naturally, this parameter applies to the pull systems only. Its counterpart for
MRP is the interarrival time. The parameter merely indicates the total amount of cards in the
system. It does not specify the number of cards assigned to individual machines.
Huang and Wang determine the number of cards in a CONWIP system, θ, by
applying Little's Law:
θ = μt,
where μ is the average throughput of the production line and t is the average time for
a card to pass through the production line. The formula is expanded to approximate
the number of cards in a production line in series containing a bottleneck [HUA98].
Optimizing the number of kanbans in a line has been a popular research topic.
According to Bonvik et al. most kanban implementations set the parameters by rules
of thumb or simple formulas [BON97]. Sugimori et al. state Toyota's formula as an
example:
c ≥ DL(1 + a) / p,
where c is the number of cards, D is the demand rate, L the replenishment lead time, a
a safety factor, and p the number of parts in a container [SUG77]. During factory
operation, the kanban numbers are steadily decreased by reducing the safety factor.
According to Bonvik et al. the fact that the formula is based on standard lead times is
less than satisfying, as it does not reflect the lead time consequences of shop floor
congestion and limited machine capacities [BON97].
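The two card-count formulas above translate directly into code; the function names are illustrative, and the example numbers are hypothetical rather than taken from the thesis.

```python
import math

def conwip_cards(mu, t):
    """Little's-law card count for CONWIP: theta = mu * t, with mu the
    average throughput and t the average time a card spends in the line."""
    return mu * t

def toyota_kanbans(D, L, a, p):
    """Toyota's rule of thumb c >= D*L*(1 + a)/p, rounded up to whole cards."""
    return math.ceil(D * L * (1 + a) / p)
```

For example, a throughput of 0.05 parts/s and a 200 s cycle time imply 10 CONWIP cards, while a demand of 100 parts per period, a 0.5-period lead time, a 10% safety factor, and 10 parts per container imply 6 kanbans.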
Liberopoulos and Dallery use an iterative heuristic to optimize the number of
cards assigned to a conventional singlestage Kanban control system (KCS). They
show that the computational complexity of optimizing a singlestage generalized
Kanban control system (GKCS) is the same as that of optimizing the KCS, which can
be considered a special case of the GKCS [LIB95]. However, the algorithm was
found to be rather complex, making use of an analytically tractable approximation
method or simulation for initialization. Dallery and Liberopoulos introduce the
extended Kanban control system (EKCS) as a KCS accommodating N stages in
another publication [DAL95], which was recently generalized to assembly structures
by Chaouiya et al. [CHY98]. However, these discussions have a pure comparative
nature, not incorporating the number of cards assigned to the system.
Unlike CONWIP, the Kanban control system not only varies with the number
of cards assigned to the entire system; its performance also depends on the
number of cards assigned to the individual machines. To ensure a comparison of an
optimal Kanban with CONWIP and MRP, some card allocation studies had to be
carried out prior to the actual simulations.
5.1.3.1 Card Allocation for Kanban
Card allocations can not be carried out according to a generally applicable
algorithm. Some rules have been documented, applicable to specific manufacturing
systems. Gstettner and Kuhn make use of a heuristic to determine the optimal
allocation for a given production rate in a Kanban line with m stations. The
production rate is calculated analytically, systematically underestimating the true
production rate. The procedure starts with assigning one card to every station.
number of cards is then increased at each station on a trial basis. The distribution
which shows the best ratio between change in production rate and WIP is finally
accepted (greedy procedure) [GST96].
The next subchapter constitutes an endeavor to specify general allocation rules
relevant to the ten machine tandem line.
5.1.3.2 Card Allocation Rules
To visualize the material and to avoid ambiguity, the rules are explained with the
assistance of statics, essential to any engineering education. The ten machine tandem
line [see Figure 5-1] can be modeled as a beam supporting ten weights of equal
distance to one another [see Figure 5-2].
Figure 5-1: The ten machine tandem line.
Figure 5-2: Free body diagram of the ten machine tandem line modeled as a beam.
The moment of a force is its tendency to produce rotation of the body on which it
acts, about some axis. The measure of a moment is the product of the force and the
perpendicular distance between the axis of rotation and the line of action of the force.
This distance is called the moment arm [see Figure 5-2]. The intersection of the axis
of rotation with the plane of the force and its moment arm is called the center of
moments [JEN83, p. 15]. As it is a point, it is referred to as the center point here. All the
forces of a system may be regarded as the component of their resultant force. Hence,
about any point, the moment of the resultant force (total weight) equals the algebraic
sum of the moments of the separate forces (weights). This principle is known as
Varignon's Theorem [JEN83, p. 17].
For the ten machine production line, the weight refers to the number of cards
assigned to a machine. The weight increases with increasing number of cards
allocated. Thus, the balance of the line can be expressed as the moment of the
resultant force, a consequence of a specific card allocation.
As the rules are not applicable to all manufacturing lines, the following
assumptions were made:
1. Identical machines,
2. All machines comprise the bottleneck,
3. Objective: maximum throughput, and
4. Center point: median of line (between machine 5 and 6).
Applying the statics analogy to the manufacturing line the following rules result:
1. Increase weight of last machine last,
2. Positive moment preferred to negative moment:
Increase weight on positive side of center point first,
Start increasing weight with smaller arm first,
3. Establish balance on line:
Symmetric structure relative to center point,
moment close to the absolute minimum (zero): same weight with certain
arm on either side (positive and negative) of center point,
Small difference (one card) in weight between the machines for the entire line,
and
4. Minimize number of consecutive machines with same weight.
All rules are to be applied simultaneously. However, the importance of the rules
decreases with increasing number. Thus, if the rules contradict one another, the rules
with lower number override the rules with higher number. Initially, all machines get
assigned the same amount of cards. Then, any additional cards are allocated according
to the rules. All the additional cards previously positioned may have to be reallocated
for one more card assigned to the line, thus satisfying an additional rule. For example:
the card remaining from allocating one card to each machine, the 11th card, is
assigned to the 6th machine [see Figure 5-3].
1 1 1 1 1 2 1 1 1 1
Figure 5-3: Number of cards per machine for 11 cards assigned to a ten machine line.
However, when a 12th card is assigned, the 11th card previously assigned has to be
reallocated to machine seven while the 12th card is assigned to machine four [see
Figure 5-4].
1 1 1 2 1 1 2 1 1 1
Figure 5-4: Number of cards per machine for 12 cards assigned to a ten machine line.
To test the correctness of the rules, some simulations were carried out. For these
simulations, batch size, setup time, and failures were not taken into consideration. It
was assumed that the card allocations were optimal independent of the above-mentioned
parameters.
The rules were found to result in a good approximation of the optimum. However,
an approximation was not good enough to compare Kanban with the other two
systems, both being able to perform at their optimal settings. Consequently, more
simulations were run to establish optimal card allocations for 10 to 70 cards assigned
to the line. Assuming that the performance of the line could not be improved
otherwise, one rule was kept: small difference (one card) in weight between the
machines for the entire line.
The following number of combinations, m, had to be run for 10 to 19 cards being
assigned:
m = Σ_{j=0}^{9} C(10, j) = 1023,
where j is the number of cards assigned beyond the one card initially allocated to
every machine.
As it was found that even the optimal allocations for the interval 10 to 19 cards
could not be applied to the lines with 20 to 70 cards, simulations had to be run for the
following intervals:
1. [10, 19],
2. [20, 29],
3. [30, 39],
4. [40, 49],
5. [50, 59], and
6. [60, 69],
plus one last replication for 70 cards assigned. This resulted in
n = 6m + 1 = 6(1023) + 1 = 6139
experiments.
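The experiment count can be verified with a few lines; the variable names are illustrative.

```python
from math import comb

# Under the one-card-difference constraint, each machine holds either b or
# b + 1 cards, so j extra cards can be placed on the 10 machines in
# C(10, j) ways; summing over j = 0, ..., 9 covers one full ten-card interval.
m = sum(comb(10, j) for j in range(10))   # allocations per interval
n = 6 * m + 1                             # six intervals plus the 70-card case
```

This reproduces the 1023 allocations per interval and the 6139 total experiments stated above.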
Thus, 6139 replications were completed resulting in the data to evaluate the rules
quantitatively.
5.1.3.3 Deviation of Rules from Optimum
The performance of the line for 10 to 70 cards assigned was measured by the
throughput. The percentage increase in throughput, I, for allocating optimally, Tho,
instead of allocating according to the rules, Thr, was calculated according to the
following formula:
I = (Th_o - Th_r) / Th_r · 100%.
Figure 5-5 indicates an increase in most of the cases. Only in a few cases did the rules
result in the optimal allocation. Naturally, there was no increase in throughput for
10, 20, ..., 70 cards, as with these numbers only one allocation was possible under the
given assumptions [see p. 68].
Figure 5-5: Increase in throughput by allocating cards optimally instead of simply
applying the rules.
To put these percentage increases in a relative context, the maximal increases, i.e.,
the increases from the worst possible allocation to the optimal allocation, are
indicated in Figure 5-5 (Max) as well. The graph shows all maximal increases for the
first interval, 10 to 20 cards assigned, and only the maximal increase for 25, 35, ...,
65 cards assigned per consecutive allocation interval. These numbers were expected
to show the greatest deviation in throughput as they give rise to the greatest amount of
different possible allocations, a:
a = C(10, 5) = 252,
where five additional cards had to be assigned after an equal amount of cards was
allocated to all the machines.
The graph illustrates how well the rules approximate the optimum. This is
especially true for larger numbers of cards assigned. It can clearly be seen that the
maximal increase decreases with an increasing amount of cards in the system. This
can be ascribed to the following:
* the machines are busy most of the time as enough cards have been allocated to
them,
* the increase of utilization per additional card assigned to the system decreases
with an increasing amount of cards allocated [see Figure 5-6], and
* the ratio
r = c_1 / c_2,
where c_1 is the smallest number of cards assigned to any machine on the line and
c_2 the largest number of cards assigned to a machine, decreases as the
difference, d = c_2 - c_1, is kept constant and equal to 1.
Figure 5-6: The average utilization dependent on the number of cards for Kanban.
The optimal card allocations for maximum throughput, minimal work in process
and minimal average cycle time were carefully studied.
The optimal allocations for minimizing WIP and average cycle time were found to
be very close to the rules applied. Note that these rules are different from those given
above, as the primary objective to achieve minimal WIP and minimal average cycle
time is to liberate the system of WIP. This is most efficiently done by placing more
cards towards the end of the line to pull material out of the system. Fewer cards at the
beginning of the line would result in raw material being pulled into the system only
for processing, not keeping any excess material in the buffers.
However, trying to achieve maximal throughput resulted in great variability in
where the additional cards should be placed. Table 5-1 shows an excerpt of the list
obtained to illustrate this interesting phenomenon. The systems with the same amount
of additional cards were grouped together. These additional cards were indicated as
ones in their respective rows. Looking at the table unveils no obvious pattern. Mmedan
and Mbeg7nn.ng are discussed below.
Table 51: Additional cards allocated to the system with ten machines.

# of cards assigned  M1  M2  M3  M4  M5  M6  M7  M8  M9  M10  M_median  M_beginning
to the system
        11            0   0   0   0   0   1   0   0   0   0       1          6
        21            0   0   0   1   0   0   0   0   0   0       2          4
        31            0   0   0   0   0   0   0   1   0   0       3          8
        41            1   0   0   0   0   0   0   0   0   0       5          1
        51            0   0   0   0   0   0   0   0   0   1       5         10
        61            0   0   0   0   0   0   0   0   0   1       5         10
        12            0   0   0   1   0   0   1   0   0   0       0         11
        22            0   0   1   0   0   0   0   0   1   0       1         12
        32            0   0   0   0   0   1   1   0   0   0       3         13
        42            0   1   0   0   0   0   0   0   1   0       0         11
        52            0   0   1   0   0   0   1   0   0   0       1         10
        62            0   0   1   0   0   0   0   0   1   0       1         12
        13            0   0   1   0   1   0   1   0   0   0       2         15
        23            0   0   1   0   1   0   1   0   0   0       2         15
        33            0   1   0   0   0   1   0   1   0   0       0         16
        43            0   0   1   0   1   0   0   0   1   0       0         17
        53            0   0   0   0   1   1   1   0   0   0       2         18
        63            0   0   1   0   1   0   0   0   1   0       0         17
As research on card allocation was not the main topic of this thesis, only a very
limited amount of time was spent trying to find patterns that could explain this
variation. Some calculations were done to express the findings mathematically. The
interest was focused on the balance of the system. M_median represents the moment of
the line with the center point at the median (between machine 5 and machine 6):

    M_median = Σ_{i=1}^{10} w_i · l_i,

where w_i stands for the weight of machine i [see 5.1.3, Card Allocation Rules] and l_i
stands for the arm of machine i. This was expected to be close to zero at all times,
assuming the correctness of the rules. As can be seen in Table 51, this number
varies greatly and sometimes equals the maximum arm, l = 5.
M_beginning quantifies the moment of the line for the additional cards with the center
point at the beginning of the line, such that l_i = i:

    M_beginning = Σ_{i=1}^{10} i · d_i^c,

where d_i^c is the difference between the numbers of cards on the different machines in
the system [see 5.1.3, Card Allocation Rules]. This formula indicates the position of
the weight on the line. For 33 and 43 cards assigned [see Table 51], M_median is the
same and indicates a balanced line. However, M_beginning shows that the weight is
distributed differently, viz. more towards the end of the line for 43 cards assigned.
Comparing M_median and M_beginning for the different allocations shows no evident
pattern. More research could be conducted to find explanations for this behavior.
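Both moments can be recomputed from a row of Table 51; a sketch in Python (the arm values, symmetric about the median with magnitudes 1 through 5, are inferred from the table entries and should be treated as an assumption; the table lists the magnitude of the median moment):

```python
def moments(extra_cards):
    """M_median and M_beginning for a ten-machine line; extra_cards[i]
    holds the additional cards on machine i+1 (a row of Table 51).
    Inferred arms: machines 5 and 6 carry arm 1, growing to arm 5 at
    both ends of the line; the table reports the magnitude."""
    arms = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]
    m_median = abs(sum(w * l for w, l in zip(extra_cards, arms)))
    # Arms measured from the beginning of the line: l_i = i.
    m_beginning = sum((i + 1) * d for i, d in enumerate(extra_cards))
    return m_median, m_beginning

# Row "11" of Table 51: one additional card on machine 6.
print(moments([0, 0, 0, 0, 0, 1, 0, 0, 0, 0]))  # (1, 6)
```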
5.1.4 Interarrival Time
This parameter stands for the time interval between two consecutive batch
arrivals. Its inverse is the arrival rate. The interarrival time was favored over the
arrival rate as it is understood more intuitively. Furthermore, it served as a direct
input value for the software applied.
The selected levels resulted from setting the utilization interval [u_min, u_max] for
MRP equal to the utilizations for the pull systems. The levels selected divided the
intervals into nine partitions.
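The partitioning of the utilization interval can be sketched as follows (the endpoint values 0.5 and 0.95 are hypothetical; only the nine-way division is taken from the text):

```python
def levels(u_min, u_max, partitions=9):
    """Evenly spaced boundary values dividing [u_min, u_max] into the
    given number of partitions; returns partitions + 1 values."""
    step = (u_max - u_min) / partitions
    return [u_min + k * step for k in range(partitions + 1)]

# Hypothetical utilization interval:
print(levels(0.5, 0.95))
```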
As the average cycle time represents one of the primary indicators of the
performance of a manufacturing line, its response to a change in batch size is
discussed first.
5.2 Average Cycle Time
The following graph shows the influence of the batch size and the number of cards
allocated to the system on the average cycle time for the three control systems:
Kanban (1), CONWIP (2), and MRP (3) [see Figure 57].
Figure 57: The average cycle time dependent on the batch size and number of cards
allocated to the line for the three control systems: Kanban (1), CONWIP (2), and
MRP (3).
The average cycle time increases with increasing batch size. For the vertically
aligned data points, the number of cards assigned increases from bottom to top. As the
material is pulled into the system in batches the last member of each batch has to wait
until all the other members are processed. As the batch size increases, this waiting
time increases.
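The waiting effect can be made concrete; under the simplifying assumption that batch members are processed one after another on the first machine, the last member of a batch of size b waits (b - 1) times the process time before its own processing starts:

```python
def last_member_wait(batch_size, process_time):
    """Waiting time of the last entity in a batch before its own
    processing starts, assuming batch members are processed one
    after another on the first machine (a simplifying assumption)."""
    return (batch_size - 1) * process_time

# Process time of 1/3 minute (20 seconds), as used for the machines here:
for b in (1, 5, 10):
    print(b, round(last_member_wait(b, 1 / 3), 2))
```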
The lowest values of the average cycle time per batch size were obtained for the
least number of cards assigned to the system, viz. ten. Ten cards theoretically enable
all the machines to be busy simultaneously. Furthermore, Kanban requires this
minimal amount to function. For the upper bound, at most 200 entities were chosen:

    WIP_max = b · c = (10)(20) = 200,

where b is the batch size and c is the number of cards assigned to the system.
However, another constraint enforces even stronger limitations on the systems: the
average utilization of the machines.
5.2.1 The Average Machine Utilization
Little's Law relates the three parameters: throughput, cycle time, and work in
process. In practice, this interdependence has proven to be one of the few stable
observations for turbulent, stochastic manufacturing systems. Thus, it can easily be
used to draw conclusions about one of the parameters when another is kept constant
and the third is known.
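Little's Law can be written as WIP = TH × CT; a minimal sketch of using it to recover one parameter from the other two (the numbers are hypothetical):

```python
def wip_little(throughput, cycle_time):
    """Little's Law: work in process = throughput x cycle time."""
    return throughput * cycle_time

# A throughput of 3 entities per minute and an average cycle time of
# 10 minutes imply 30 entities of work in process.
print(wip_little(3, 10))  # 30
```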
The three parameters constitute ideal indicators of performance for a production
system. Production engineers are most definitely interested in reducing work in
process to decrease cycle time and increase the throughput of the line. Thus, these
parameters serve as quantitative indicators enabling state-of-the-art process control.
From these indicators other indicators can be derived. One of these indicators
would be the machine utilization. The utilization, u, can be determined independently
of the throughput, but the two are directly related:

    u = Th_average / Th_theory,

where Th_average is the average throughput derived from the systems under study, and
Th_theory is the theoretical throughput, which can be determined by the following
formula:

    Th_theory = 1 / t_process,

where t_process is the process time of the bottleneck machine in minutes. Here, the
machines are identical and can all be considered bottleneck machines with a process
time of 20 seconds, or 1/3 minute, resulting in the following:

    Th_theory = 1 / (1/3) = 3

entities per minute.
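The two formulas above can be combined; a small sketch using the stated process time of 1/3 minute (the observed average throughput of 2.4 entities per minute is a hypothetical value):

```python
def utilization(th_average, t_process):
    """u = Th_average / Th_theory, where Th_theory = 1 / t_process."""
    th_theory = 1 / t_process
    return th_average / th_theory

# Theoretical throughput: 3 entities per minute for t_process = 1/3 minute.
# A hypothetical observed average throughput of 2.4 entities per minute
# then corresponds to a utilization of 0.8.
print(round(utilization(2.4, 1 / 3), 6))
```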
The utilization gives the relative performance of a machine and can be calculated
for the entire line. The average utilization, ū, of the line can be calculated by the
following formula:

    ū = (1/10) Σ_{i=1}^{10} u_i

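The averaging over the ten machines of the line can be sketched directly (the per-machine utilization values are hypothetical):

```python
def average_utilization(machine_utilizations):
    """Average utilization of the line: the mean of the individual
    machine utilizations."""
    return sum(machine_utilizations) / len(machine_utilizations)

# Ten identical machines with hypothetical per-machine utilizations:
u_machines = [0.8] * 10
print(round(average_utilization(u_machines), 6))
```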