<%BANNER%>

UFIR








QUEUES WITH BALKING AND THEIR APPLICATION

TO AN INVENTORY PROBLEM













By
EDWIN LUTHER BRADLEY, JR.
















A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY



















UNIVERSITY OF FLORIDA
1969




































TO T}E MEMORY OF

MY m''T'.P















A CKINOU L E C[GH EN TS


The author is particularly indebted to Professor J. G. Saw,

e supervisory co!.rrnittee chairman, who gave continued interest and

couragerent throughout the entire period involving the research

d writing of this dissertation. Special thanks to Professor R. L.

heaffer who proofread the entire dissertation and made many worth-

ile suggestions. Thanks ate also due to Mrs. Edna Lacrick who did

superb job of typing the dissertation.

It is a pleasure to acknowledge the Department of Statistics

r the support it has extended so that the author was able to pursue

s graduate w-rk.

Finally, the author acknowledges the patienc,:e and encourage-

nt given by his wife and children during his many years in school.

thout their understanding, this paper would never have been written.
















TABLE OF CONTENTS


ACKNOWLEDGMENTS .


CHAPTER


I. INTRODUCCTION . . . . . .


2. THE QUEUF GI/M/1 WITH BALKING AT QUEUES OF
LENGTH K- . . . . . .


2.1 The Basic System . . . . ... ..


2.2 An Imbedded Marko' C.ain . . . . .


2.3 Some Properties of the Time Between Balks


2.4 The Inverse cE a Special Triangular Matrix


3. THE QUEUE GI/M/" WITH BALKING AT QUEUES OF
LF.NGTH K-.- . . . . . . . . .


3.1 The basic System . . . . . . .


3.2 An Imbedded .-arkov Chain . . . . .


3.3 Some Properties of the Tine Between Balks


4. THE QUEUE GI/D/1 WITH BALKING AT QUEUES OF
LENGTH K- . . . . . . . . .


4.1 The Basic System . . . . . . .


4.2 The Waiting Time in the System . . .


4.3 Some Properties of the Tiue Between Balks


5. THE INVENTORY PROBLEM: DISCRETE CASE . .


5.1 Definition of the Inventory System .


5.2 Relation of the Inventory System tc Queues
with Balking . . . . . . . .


Page


iii


7


7


9


24


29



33


33


35






41


41
. . 7



. . 5


S . 64

















64



69
. . 29



. . 33


. . 33


. . 35


. . 35



. . 61


. . 41



. . 45


. . 50


. . 64


. . 64



. . 69









TABLE OF CONTENTS (Continued)


AP P.E


5.3 The Cost Function C('Q ) . . . . . .

5.4 Solution of C('v) Using the Queue GI/M/1
with Balking . . . . . . . . .

5.5 Solution of C(v) Using the Queue GII//"
with Balking . . . . . . . . .

5.6 Solution of C(') Using the Queue GI/D/1
with Balking . . . . . . . . .

6. THE INVETNTOP.Y PROBLEM: CONTINUOUS CASE . . .

6.1 First Passage Times of Non-Negative,
Continuous Stochastic Processes with
Infinitely Divisible Distributions ..

6.2 Definition of the Continuous Inventory System
and Its Relation to Previous Results .. ..

BLIGGRAPHY . . . . . . . . . . . .

OGRAPHICAL SKETCH . . . . . . . . . .


Page

. 72


. 75


. SO


. 85

. 39




. S9


. 96

. 100

. 101













CAPTFER 1


I NTRO DUCT ION



In this dissertation, we consider an alternative to the (s,S)

ordering policy associated with inventory systems.

The (s,S) ordering policy is specified as follows. There exists

a store of finite capacity S that holds material (discrete or continuous)

for future use in some process. In the most general context, demand fcr

the materLal in storage during an interval of time is assumed to be a

time-dependent stochastic process. Ordering of replacement stock to

maintain the level of inventory in the store is done in one of two ways.

Either orders for an amount S-s of replacement stock are made at the

times when the stock level reaches s, s S, or the level of stock in

the inventory is examined at regular points in time and orders for

replacement stock equal to the stock deficit are only made at those

regular times for which the stock has fallen below the level s. In

both cases, the time it takes the replacement stock to arrive (i.e.,

the delivery time) is assumed to be zero. A generalization of the

(s,S) ordering policy allows a time lag T for arrival of the replace-

ment stock.

For a certain class of cost functions associated with maintaining

the level of stock in an inventory, it can be' shown that the (s,S) order-

ing policy is the optimal policy to utilize. A summary of some results









for the (s,S) ordering policy and conditions under which (s,S) is (or

is not) the optimal ordering policy is ,iven in a paper by Gani (1957).

Another generalization of the (s,S) ordering policy is the fol-

lowing. The capacity of the inventory is S aLrd demand for the stored

material is once again a time-dependent stochastic process. However,

orders for an amount v (v -S) of replacement stock are now made at the

times when the stock level drops to the values S-'J,S-2v,S-3v,.... The

delivery time for any order is assumed to be a constant value T (T >0).

Under the assumptions that the store holds discrete items and the demand

for these items obeys a Poisson probability law, the long run probabil-

ity law representing the level of stock in the store is given in GanLi

(1957) and Prabhu (1965b).

In many cases, hcweler, a constant delivery time does not

adequately express reality. Furthermore, the negative stock level

that can arise when T (the delivery time) is greater than zero may

reflect the loss of considerable time and money in terms of idle m.n-

power and equipment. To circumvent these difficulties, an alternative

ordering policy is defined and its properties examined in this paper.

Envision a subwarehouse, maintaining an inventory of finite

capacity S, that holds material (discrete or continuous) for future use

in some process. In the most general conte'ct, we assume the demand for

the stored material is a time-dependent stochastic process. In order

to maintain a stock on hand, orders for an amount '.' (v S) of replacement

stock are placed with a warehouse at the times Vhen the stock level drops

to S-,',S-2V,...,S-'v[S/'u] ([x] the integral part of :c). The time it takes









the warehouse to process an order placed when the stock level falls to

S-v,S-2'J,.... or S-'[S/v] +v is called a regular service time. All

regular service times are assumed to be mutually independent random

variables with a common distribution and to be independent of the

demand process. An order placed when the stock level falls to S-v[S/v]

is called an emergency order. The time to process an emergency order

is assumed to be instantaneous, or at least effectively zero.

Hence, regular orders for an amount v oE replacement stock are

made if the stock level is at least at the instant an order is placed,

while an emergency order is made if the stock level is less than v at

the instant the order is placed. Utilizing this reordering technique,

the inventory maintains a positive stock level at all times.

Although a somewhat larger cost would quite naturally be incurred

with emergency orders than with regular ones, it is assumed we are will-

ing to pay the price of instantaneous delivery in order to avoid the

disaster of running completely out of stock in the inventory.

The cost ct maintaining the inventory level will clearly depend

on v, the size of a replacement order, and there should exist an optimal

value of v, defined to be that value of v for which this cost is a min-

imum. In Chapter 5, we define a '-dependent cost function for which we

seek the optimal v.

It will be shown later that the inventory problem is closely

related to a problem in queueing theory--queueing systems with balking

at queues ot a fixed length. We shall ;ow discuss the salient features

of such a queueing system.









Utilizing a notation proposed by Kendall (1953), by "the queue

A/B/s with balking at queues of length K-I" we mean a queueing system

specified as follows. The queue length at any instant will refer to the

number of people in the system who are being g served or waiting to be

served at that instant. Successive customers are assumed to arrive in

the system in such a way that their inter-arrival times are mutually

independent with distribution function A(*). A customer joins the queue

if, at the instant he arrives, there are less than K-1 persons already

in the queue. If there are K-1 persons in the queue when the customer

arrives (so that he is the K-th person in the system), one of three

equivalent things happens to him: (1) The customer balks, i.e., he

leaves without waiting to be served; (2) The system rejects the customer;

or (3) The cusTconer receives instantaneous service. There are s servers

available to wait on customers with the first free server attending the

customer at the top of the queue. The length of time from when a server

starts to serve a customer until the completion of such service is called

the service time. All service times are assumed to be mutually independ-

ent with distribution function B(-). Finally, the service times and

inter-arrival times are assumed to be mutually independent.

Because the statistician is more familiar with the terminology

of queues rather than inventories, the work has been carried out in terms

of queueing theory. The times between successive orders and the service

times for the inventory problem with emergency orders are shown to corre-

spond to the inter-arrival times and service times, respectively, in

queueing systems with balking at queues of a fixed length. The mechanics









of inventories have led us to give prime attention to the queues GI/M/1,

GI/M/N and GI/D/l, all with balking at queues of length K-1, where GI

(or G) refers to a general distribution function, M refers to a negative

exponential distribution function, and D refers to a distribution whose

mass is concentrated at a single point.

An inventory or storage area is normally established with the

assumption that it will be in operation for a long period of time.

In choosing a reordering policy, therefore, long run distributions

become important. Fortunately, this means that long run properties of

queues with balking are adequate for the solution of our inventory problem.

In Chapter 2, tne queue GI/M/1 with balking at queues of length

K-1 is discussed. In particular, we utilize the concept of an imbedded

Markov chain to derive properties of the queue length and the time between

successive balks. In Chapter 3, the same is done for the queue GIl/M/m

with balking at queues of length K-1.

In Chapter 4, the queue GI/D/1 with balking at queues of length

K-l is discussed. Here, the concept of the waiting time in the system

is introduced. We again utilize an imbedded Markov process to obtain

properties of the waiting time and the time between successive balks.

In Chapter 5, we consider the inventory problem when the stored

material is discrete. Here, we forge the link between queues with balking

and inventories subject to instantaneous emergency orders, and give solu-

tions for the cost function associated with the inventory problem based

on results from Chapters 2 through 4, along with some examples.




6




In Chapter 6, the continuous inventory problem and its relation

to results of previous chapters is discussed. Also included are some

properties of a continuous, non-negative stochastic process with an

infinitely divisible distribution.













CILHATER 2


TILE QUEUE GI/AIl/ WITH BALKING
AT QUEUES OF LENGTH K-1



2.1 The Basic System


Consider a queueing system in which customers arrive in the

system at ties, .. _2,a_''a 2 ,.., such that the inter-arrival

times

u. = 0. . j 1, (2.1.1)
3 3 3-1


are mutually independent. The distribution function of u. will be
3

denoted by

Pr [u. < u} = F(u), u 0, j 2 1. (2.1.2)


One server is available to handle the needs of the customers.

This server dispenses his service on a strict "first come, first served"

basis. The successive service times of customers who join the queue are

denoted by w,w2, 3, ..., and are assumed to be mutually independent random

variables that are independent of the arrival times. The distribution

function of w. is assumed to be
J

Pr [wj ] w] = 1 e w > 0, j 1. (2.1.3)


Let (Q(t);--
represents the number of customers in the system at time t and

(Q*(t); -m < t < -= the stochastic process such that Q*(t) represents









the queue length at time t. Recall that the queue length, Q*(t), is

the number of persons being served or waiting to be served at thne t.

A customer arriving in the system at time a enters the queue

if and only if Q* (-O) < K-2, that is, if and only if the number of

people in the queue immediately prior to his arrival is K-2 or less.

In this case we have

Q*() = Q() = Q*() +(O 1 Q(C-O) + 1.

If, on the other hand, our customer is faced with a queue length of K--1

(so that he becomes the K-th person in the system), he balks and imme-

diately leaves the system. We now have that Q(c) = K implies

Q(C-0) = Q(3+0) = K-1

and

Q*('-0) = Q*(O+o) = Q*(c) = K-I.

It is clear that Q(t) and Q*(t) are identical in value except at the

points on the time axis for which Q(o) = K.

We shall work with the stochastic process (Q(t);-- < t < +)}

and shall be concerned with its behavior beyond the time point 0,

which we assume is known. Hence, without loss of generality, a could

be taken as zero.

Define

N1 = inf (k > OCQ(Ok) = K),
(2.1.4)
Nn = inf [k > N n-_ Vk) = K), n > 2,

(so that Nk is the number of customers who arrive up to and including

the n-th customer to balk), and









Ml = Ni'
(2.1.5)
M. = N. N j 2,
j j j-I' '


(so that M. (j 2) is the nunbe of arrivals between the (j-l)-st and

j-th balks plus the j-th peso: to balk).

Define

Vl = ONI a0
(2.1.6)
Vj = CN CNj_ j > 2-


Then V is the time until the first balk and V. (j 2 2) is the time
1J
between the (j-l)-st and j-th balks.

Of primary importance, for us, is the value of 8(V ), j 2 2.

This quantity is established in Section 2.3. While the theorems of

Sections 2.2 and 2.4 are proved with the thought of building toward

a solution to 8(V.), these theorems have a theoretical and practical
J
importance that goes beyond our arrow objective.



2.2 An Imbedded Harkov Chain


Before we define the imbedded LMarkov chain, we prove two lemmas.

The first lemma simply restates a well-known result about negative

exponentially distributed random variables, while the second lemma

establishes the non-Markovian character of Q(t) when the inter-arrival

times have an unspecified distribution.


Lemma 2.2.1

Let X be a continuous, non-negative random variable. Then

Pr [X > x + yIX > xj = Pr [X > y}, x,y 2 0, (2.2.1)










if and only if


Pr (X > x = e


, > 0.


(2.2.2)


Proof of Lemma 2.2.1

If (2.2.1) holds, we have

Pr (X > x + y] = Pr r( > x) Pr [( > y}

and therefore (2.2.2) is true. See, for example, Parzen (1962, p. 121).

If (2.2.2) holds, we have

Pr [X > x + yjX > Pr (X > x + y,X > x}/Pr [X > x)

= Pr rX > x + y]/Pr (X > x.


e /G

= e = Pr (X > y),

thus completing the proof.


Lemma 2.2.2

The stochastic process [Q(t);-~ < t < - is not, in general,

a Markov process


Proof of Lemma 2.2.2

Without loss of generality,

Let Y(t) = max Q(T) and
s
Pr (Y(t) = OIQ(T) = 0,

= Pr

= Pr

= Pr


let 7 = 0 and Q(O) = 0.
o

u = t-s. We have


0 < T s s)

[Y(t) = Oul > sj

[ul > t Iu > sl

ru > u + S[u > s1.
1 1.


(2.2.3)









Let 0 < T < T < s, and define the events A and B by
o 1

A = (Q(T) = 0; 0 T < To S T s}

and
B = (Q(T) = 1; TO T T < T}

We have

Pr (Y(t) = OA,B)

= Pr(Y(t) = 01u = To, U1 +W1 ) = I, +u1 >s}

= Pr[ul +u >tul =To, u +w =T ul +u2 >s}

= Pr- U2 >t -Tou2 >s To

= Prful >t -To U >S -T }
. 1 0o 0
= Prlu >u + (s -To)u u >s T. (2.2.4)

But (2.2.3) and (2.2.4) are not necessarily equivalent.

Yet, if

Pr(Q(t) -'k Q(T), 0 -T Ss} = Pr[Q(t) =k Q(s)}

for all t >s, then (2.2.3) and (2.2.4) would have to be identical.

Therefore, the proof is complete.

It should be noted here, that, if the u. have a negative exponential

distribution, then Q(t) is a Markov process.

Although (Q(t); -C
there exists an imbedded Markov chain defined by

Q = Q(on), n = 0,1,2,..., (2.2.5)

regardless of the distribution of the u..

Figures 2.1 and 2.2 give the correspondence between Q(t) and Qn

Qn clearly represents the number of persons in the system at the time

the n-th customer enters the system. Valuable information about Q(t)









can be obtained from a knowledge of Qn as will be shown in the follow-

ing theorems.

Before proceeding further, note that if K =I, then

PrQn = 1) = 1, n = 0,1,2,.... For the. future we shall therefore hold

K > 2.


Theorem 2.2.1

The stochastic process [Qn; n = 0,1,2,...j defined by (2.2.5)

has the following properties:

(a) Q is a Markov chain;

(b) Qn is time-homogeneous;

(c) The class [1,2,...,K) of states on which Q is defined is

an periodic, positive persistent communicating class; and

(d) The one-step transition probability matrix P is given by


S K K-l K-2 ... 3 2 1


K ak
0 1 2 K-3 K-2 K-2

K-I 0 C ... oa a k
0 1 2 K-3 K-2 k-2

K-2 0 0 .. a k
0 1 K-4 K-3 K-3





3 0 0 0 ... a a k
1 2 2

2 0 0 0 ... 1 k


1 0 0 0 ... 0 0 k0












Q(t)


N1


N2 $J3


Figure 2.1.









Qn


5-
4-
3-
2
I -

0 2

Figure 2.2.


A Typical Path o[ Q(t) for GI/M/1 with Balking
at Queues of Length 4.


N2 N3
0 0


* *


Path of Qn Corresponding to Q(t) in Figure 2.1.


- r 1










where
(.. t)3 -'t
SJ ---- e dF(t), j =0,,,.., (2.2.6)
j 0o j

and

k = j =0,1,2,.... (2.2.7)



Proof of Theorem 2.2.1, Part (a)

Let UI*) be the unit step function at zero. Let X be the
n+l 1
number of customers who complete their service in the interval (on'n .

Then

Qnl = Qn + nl' if Qn < K,

Qn+ n 1 Xn 1, if Qn = K


so thar

Qn+l Qn X + U(Q -K) + 1, Q < K.
n n+1 n n

Since the distribution on service times is negative exponential,

the probability law on Xn+1 conditional on the history Q0,QI ',Qn is

a function only of Qn. Hence, we see easily that the probability law on

Qn+l conditional on the history Q0),Qi,' ,Qn can be a function only of

Qn, which establishes the Markov property of Qn


Proof of Theorem 2.2.1, Parts (b) and (d)

Since we are looking at Q(t) at successive arrivals, we have

1Q Qn+l Qn +1. Also, of course, by the balking aspect of the problem

Qn K. Hence.

Pr(Qn+ =klQn = j} =0, for k>j +1. (2.2.8)









FurLter,


PrQ n+l kIQ = K' =Pr[K -X n+l1 I = k
n+l n n+l
= Pr((k-1) -Xn +1 =k)

=Pr(Qn+1 =klQn K 1. (2.2.9)

For j = 1,2,...,K-1 and k =2,...,j+1, let N(t) be a Poisson

process with inter-arrival times wl,w2,t 3,..., then

Pr[Qn = kon = j} = Pr(j -Xnl + 1 =kQn j}

=Pr(Xn1 = j -k +lQ : j}

= P (Pr[w +...+ w _k+l un w l+ ...j- 2 u
un, I I j-k+1 n 1 j-k+2 n+1

= 8un+l(Pr(N(u ) = j k + 1})
JC -,t (?t)J-k+!
= e (k dF(t)
o (j-k l)l
= j-k+ (2.2.10)
j-k+ 1

Let j = 1,2,3,...,K-1 and k = 1, then

Pr(P{Q = 1Q j =j =Pr[j -: +l = ,11 = j
n+1 n n+1 T
=Pr(Xn,+l n j Qn =j

=Eun+ (Prw1 .*.+ w. u J)


t A.
= J (J x1 e /(j-1): dx) dF(t)
o 0
S j--1 t .
= (1 E e (.t) /i) dF(t)
0o =O

= 1 0 ".. j- = k.- (2.2.11)











Equations (2.2.8) through (2.2.11) are independent of n and

therefore Qn is time-homogeneous. Application of these equations for

j,k = 1,2,...K gives us the matrix P.


Proof of Iheorem 2.2.1, Part (c)

An examination of the one-step transition matrix in the state-

ment of the theorem shows that each state communicates with all others.

Since, for example,


PrQn1 = K = K) > 0,

state K is periodic. We therefore ha'.ve a finite irreducible communicat-

ing class of periodic states so that each state is necessarily positive

persistent. The proof of Theorem 2.2.1 is now complete.

We shall prove a lemma that applies to an arbitrary Markov. chain

with one-step transition matrix P given by


PK-I,1

P-1,1

0






0

0

0


PK-1,2

PK-1,2

PK-2,1





0

0

0


PK-1,3

PK-1,3

PK-2,2





0

0

0


PK-1,K-2

PK-1,K-2

PK-2,K-3






. . P3,2



... O
0


PK-I K-1

PK-IK-I

K-2,K-2






P3,3

P2,2

pl, I


PK-I,K

PK-I.K

PK-2,K-1






P3,4

P2,3

PI,2

(2.2.12)


in wnich p. > 0 for i = 1 2,...,K-1; j =1,2,...,i+ .
L,j


I









IE a Markov chair, has a onc-step transition matrix of the form

(2.2.12), then clearly the Harkov chain is periodic and positive

persistent. Hence, there exists a unique long run distribution equal

to the stationary distribution. With this in mind, :e now state and

prove the lemma.


Lemma 2.2.3

Let P of (2.2.12) be the one-step transition probability matrix

of a Markov chain. Then, if e' = ( ,. ) is the unique stationary

distribution for the Harkov chain,


K 'K -K-L'

where.


K" )B = ( ... .)

and



PK-1,

PK-1,2-1 K-2,1

PK-1,3 PK-2,2- PK-3,1
B =




PK-1,K-2 PK-2,K-3 PK-3,K-L4 P3,2-1 P2,1

PK-1,K-1 PK-2,K-2 PK-3,K-3 P3,3 P2,2-1 PI,1









Proof of Lemma 2.2.3

By the definition of a stationary distribution, 0 is the unique

solution to

'P* = 0' (2.2.13)

and

6 +...+ 6 = 1 (2.2.14)


Writing out the equations (2.2.13) and (2.'.14) with K on

the left hand side, we get the following system of K linearly independent

equations:

(1K-i,1 )K PK-l,e K-


K-1,2 K K-,2-)K-1 K+ -2,1 K-2


-PK-1,3 K K-1,3 K-1 K-2,2- K-2 PK-3,1 K-3






K-1,K-2 K K-1,K-2K + PK-2,K-3 K-2 + + (P3,2-1)93 +P2,1 2

-P p + D 9 +
K-1,K-1 K K-1,K-l K-1 K-2,K-2 K-2


+ P3,33 +(2,2-12 +1,1

=K- + ... + 91 -1-
K K-1 1
(2.2.15)


Let A be the (K x K-1) matrix of coefficients of 8 K-l...,1

in (2.2.15), then














PK-2,1


PK-2,2- PK-3,1


PK-1,K-2 PK-2,K-3


PK-1,K-1 PK-2,K-2

1 1


The K-I columns

of dimension K. Hence,

that 7 is orthogorial to

A, we must have 1 / 0.

take "' = -1. That is,


PK-3 K-3

1


S"'" P3,2-1


* P3,3

1


P2,1


P2,2-1

1


p, 1

1


of A form a set of linearly independent vectors

there exists a vector;' = (K'K-I, '' ) such

each column of A. By noting the last column of

Therefore, without Icss of generality, we may

we have


'A = 0'


(2.2.16)


with 0 the null vector.

After multiplying the n-th equation of (2.2.15) by n and

adding them up, we find


[(-PK-I,_ K-K-I2K-I K-1,3 K-2 ...

-1,-2 + 1] = 1.
-PK_1,K-2 3 P1_I,K-1I2


(2.2.17)


But by (2.2.16), if we take the scalar product of with the

first column of A, we have


PK-1,1 SK + (p-_ -1,)K-I


+ P-,3K E-2

+ PK- ,K-1'2-1 0 0.


PK-I,I


K-1 ,2-


PK-1,3


(2.2.18)










Substituting (2.2.18) into (2.2.17) yields


K K K-1) = .

Since


A B



and F = -1, (2.2.16) implies that


( K K-1 2)B (1,1,...,1),


thus completing the proof.


Corollary 2.2.1

Let iKK be the mean recurrence time of state K for a Markov

chain with one-step transition matrix P of (2.2.12) and state space

[( ,2,..,K}. Then 22, 33' 14 ,..., are finite and satisfy


I/p,1 K=2,

J(2.2.19)
PK K-i K-i
(1/p ) 1 + p j K 3.
K1k=2 K-1,K-k+2 j=k
k=2 j=k


Proof of Corollary 2.2.1

It is well known that iKK = 1/0K' K the stationary probability

associated with state K, and iKK is finite. See, for example.Parzen (1962).

By Lemma 2.2.3, if K = 2, than B = [p1,1] and hence


(2.2.20)


P22 = 1/92 = 1/p,1 = 2









If K23, take the scalar product of thr, first column of P, and

(5K' "'' '2) then the lemma yields

= F),
PK-1,1 K PK-1,2 K-1 -1,3 K-2 K-,4 K-3

S- 1,K-2 3 K-1,K-1 2 + 1. (2-2.21)

Subtracting K ,p i.K from both sides of equation (2.2.21),

we get

PK-l,(K- K K-1 = -PK-1,2 -K-,l .K-1

PK-1,3K-2 K- PK-I + 1. (2.2.22)


Adding and subtracting (1 -p K -,2 K )K-2 on the right
K-1 22 r-l i K-2
hand side of equation (2.2.22) we obtain


PK-1,1 ( K-KI ) K-1,2 -K-I,1( K-1 -K-2

4 (i-p)C
S(1 -K-,3-PK-1,2- PK-, K-2


-PK-1,4 K-3 K-.,K-12 +1. (2.2.23)


Continuing in this manner to add and subtract (coefficient of

.)'* as j goes from R-2 to 3, we get
j j-l

PK-1,I K K-1 0 -PK-1,2 -PK-1,1 K-1 K-2

+ (1 -K-1,3 -PK-,2 -PK-,1)(K-2 -3)


S...* (0 -PK-1,K-2 -PK-1,K-3 -....PK-!, )(3 -2)

+ (- p K-. 1 2 + 1. (2.2.24)
( -K-I,K-1 "-PK-l, 2


n K
Note that I pK-k = p K-,k
k=l k=n-l











Applying this result, equation (2.2.20), and .j = F- .j- to

(2.2.24), we have


K-1 K-1
PK-1,l LKK = 1 + E I -i. E PK- i,k+1 (2.2.25)
)=2 k1-K-j+l

Setting i = (K-1) -k+2 in equation (2.2.25) yields

K-l j
P= 1 *+ P
K-1,1 KI = 1 .3 K-1,K-i+2
j=2 i=2

K-1 K-1
= 1 + E p K E ... (2.2.26)
K-1,K-i 2 J3
i=2 j=i

Dividing both sides of (2.2.26) by p yields equation

(2.2.19), thus completing the proof.

The result of Corollary 2.2.i can be applied to the Markov chain

of Theorem 2.2.1 to cbtain the mean recurrence time of state K. However,

the special form of the matrix P in Theorem 2.2.1 lends itself to a more

elegant solution for the mean recurrence time than that given by the

corollary. Before stating this solution in Theorem 2.2.2, we define


p(9) = eu dF(u), 6>0, (2.2.27)
o
and


K(Z) = C c z-, Z (2.2.28)
j=0 J


Also, for ease in writing, if h(Z) is any function of Z, denote by

C(n)h(Z) the coefficient of Zn in the expression h(Z).
z









Theor-em 2.2.2

The mean recurrence time pKK for the state K of the Markov

process [Qn; n = 0,1,2,... of Theorem 2.2.1 is finite and satisfies


(2.2.29)


= c(K-2) i/[p(.\( -Z)) Z].
KK z


Proof of Theorem 2.2.2

By Theorem 2.2.1, K is finite and


tKK = 1/8K,

where 6K is the stationary probability associated with state K.

The P of Theorem 2.2.1(d) is P of (2.2.12) with


(2.2.30)


Pi, j k
i kj-1' ij = i 1.


Hence, Lemma 2.2.3 applies with


(Ci-1)


2





K-3


K- 2


(a- ) C0
' 1 0


Q
K-4


K-3


ce
K-5

aK-


S(o-l) C0


.. (a -1)
2 1










In Section 2.4, it iill be shown that for a matrix of the special form

B above, the solution of 1/9K in Ummna 2.2.3 is


(K-2)
/K = K K-1 = C 1/(K(Z)-Z). (2.2.31)

But


S e- t (.\zt) /j'. 1, if Zj < i.
j=0

Therefore,

co m
K(Z) = Zj e"' ('t) /j' dF(t)
j=0 o


= J e- t (.Zt) 'j! dF(t)
o j=0


0 t(I-Z)
= ] e dF(t)
o

= cp( (l -Z)). (2.2.32)

Hence, (2.2.30), (2.2.31), and (2.2.32) give the desired result.



2.3 Some Properties of the Time Between Balks


The theorems in this section refer to the random variables

defined by (2.1.5) and (2.1.6). These theorems are not only useful in

discussion of the queueing problem, but also provide powerful results

for the inventory problem. to be discussed in Chapters 5 and 6.








Theorem 2.3.1

(a) M I,2, 3,..., are mutually independent random variables,

(b) M2 3 ,M14, ..., are identically distributed,

(c) C(M.) = KK' j 2, with LKK given by (2.2.29).


Proof of Theorem 2.3.1

Let B(i,j) denote the event

([Q

Let kk,k,k3..., be a sequence of positive integers and define

n. = k + ... + +k.. Then for any m 2,
S 1 3

Pr[( = M12 = k ,..., M = km
1 1 2 2 m j

= Pr[B(l,kl), B(k1 + l,n2),..., B(n[- + I,*)Q0}

= Pr(B(nM +1,P ), =K} Pr[B(nm +1 ,n ) 1 Q0n 2 K}
m-1 m m-1 m-2 '-1 m-

... Pr(B(k1 +l,n2)IQkl =K} Pr[B(l,k1) Q0]

= Pr[B(l,k )|0Q =K} Pr[B(l,k_-1)IQ0 =K

... Pr[B(l,k2) Q = K Pr (B(1,k )JQ0} (2.3.1)

The second equality above follows by the Markov property of Qn'

and the last equality follows by time-homogeneity of Qn. Hence,

Pr([M =kI, ..., M =k = = Pr(M k k ..." Pr[(im k m .


Therefore, the M. are independent, and by examining the last expression
in (2.3.1), we see that M (j2) are identically distributed
in (2.3.1), we see that M. i (je2) are identi-cally distributed.
-3









Now, for j 2


Pr(. = k = Pr(Q < K,..., Q < K, Qk = K Q0 = Ki


= Pr(the first passage from states K to K

takes k stages}.

Hence,

(RM.) = mean recurrence time of state K

= KK"

Applying Theorem 2.2.2, we complete the proof.


Theorem 2.3.2

(a) V ,V2' 3.., are mutually independent random variables,

(b) V2,V3V4,..., are identically distributed,

(c) t(V.) = (M.) (u ), j 1

= KK e(u), j 2

where ,KK is given by (2.2.29) and u1 is given by (2.1.2).


Proof of Theorem 2.3.2

Since,


1j+l =Nj UNj

= UNj + ... + uNj+I,

parts (a) and (b) follow directly from Theorem 2.3.1 (a) and (b), the

assumed independence of the uk, and the independence of un+l and Qn"

Assume that Q0 = i, (1 < i < K). Let

a = a -
n n o

so that a = u + ... + u n > I.
n 1 n









Now (0n nS(u )) is a martingale and

E(o n(u )) = 0, fur all n.
n 1
The event [1 > k] E where B is the o-field
Seventh generated by andk+
of events gencratled by oOI, 0 and (u,' ... ,.w
S Bk ari rtiT


Clearly, Bk c Bk.
-k -
Hence, MI is an optional stopping rule

martingale property. See, for example, Feller

Therefore,


EV l1 (ul)) = 8(o: -

= 0.

Now let i = K in the above solution so

distribution as M. (j 2). Then


and has no effect on the

(1966, p. 214).




M1 8(uI))



that M has the same


( M .(u )) = 8(V1 M1(ul)) = 0, j > 2,


and part (c) follows immediately from T.heorem 2.3.1(c), thus completing

the proof.










2.4 The Inverse of Special Tri.ngular Matrix


Let 80, 1, 2,..., be a sequence of numbers such that 00 / 0.

If B is a (n+l) x (n+1) matrix of the form





L 0

2 1 0
B = (2.4.1)


n n-l n-2


. 1 0


then B-1 is obviously of the Corm


B-1 =
B


(1) (0)

(2) p(l) 0(0)


... (1) 8(0)
. . p


We show the Following:


Theorem 2.4.1


(k) I/
S=

1 k
E (-1) A(ji:k)/j+1,
j=1


(2 .4.2)


k = 0,


k 1,


(2.4.3)


a(n) 8(n-1) (n-2)












A(j:k) = E ... E
i + + i = k
1 i


1 3

Co
If, in addition, 2 P. converges,
j=0


(k) (k)
Z I/B(Z), k >



with B(Z) Z .Z .
j=O 3


Proof of Theorem 2.4.1


It is clear that


5(0)


= 1/"0.


To sho. (2.4.3) true for


k 2 1, we need simply verify that


(k -
B( =0, m
m-k






m-k m 0


m
= M / + 2
m =l
k=1


k
-k J
rm-k
j=1


= 6 /? Z (-1 /j +
m 0 0
j=1


= 1,2,...,n.


m-k


.3 0
(- )J A(j:k)/$j+


m
E m- A(j:k)-
kj -k
k-j


with


1i 5i2 -1'- ij"


(2.4.4)


k=O





k
k=O


(2.4.5)










The coefficient of 1/ in the last expression is 0 -A(1:m) =0,

and for j = 1,2,...,m-1 the coefficient of


1/01 = (-1)j m B-k A(j:k) A(j+1:m)
k=j

m-1
= (-1) i ... S iI 2

k=j i + ... + i. k 1 2 m-k
1 j

... ... i ij+L 0.
i + i. = m
1 p+1

Hence, (2.4.5) is zero and, therefore, (2.4.3) is true.

For n k 1,

(k) 2
A(j:k) = C (0 Z + 2 .. z)
Z 1 2 n

Hence, from (2.4.3)


(k) = ) [(-1)/60 ]( Z ... +
j=1


(k) 1( Z + ... + Zn)
= (-1/ )C o -






1 + n
l +3 i Z n j



The term in brackets in the numerator of the last expression

above contributes nothing to the coefficient of Zk and hence car, be

dropped.









Thcrcforc,


(k) 3 Z + .. .+ Zn
= (-1/BO) C nn"
/BB0 )Cz z-...- Z
0 1 n


But, by adding and subtracting B0 in the numerator, we have, for r. k 1,


(k) (k)
S = (-1/B,) )C


+ .

0 + ... + B Zn
0


(k)
=C
7


1/(0B +


The last equality follows since Z = 1

But (2.4.6) holds for all n k, hence


... + Zn)
n


and we have taken


(2 .6)


k 2 1.


-(k) (k)
z


1/B(Z), k Z 1.


That (2.4.4) holds for k = 0 can be seen by


(0)
C
Z


(0)
1/B(Z) = C
z


(0)
=
z


SZ (1 B0)j = I/B0 =


The proof is now complete.

To verify equation (2.2.31), it is noted that the matrix B above

equation (2.2.31) is of the form (2.4.1) with


Bk = k'-
I


k / 1,
k = 1,


and, therefore, B(Z) = K(Z) Z.


S(k)
8


E (I B(Z))
j=0


8(0)











Applying Theorem 2.4.1 with n = K-2 to obtain the solution

of our particular e hae from Lem 2.2.3
of our particular B we have from Lemma 2.2.3


K- = K- 1 B-


+1
-1
0




0


= (K-2)


(K-2)
= C. 1/(K(Z) Z).
Z













CHAPTER 3


TLE QUEUE GI/i./c WITH BALKING
AT QUEUES OF LENGTH K-i



3.1 The Basic System


Consider a queueing system in which customers arrive in the

system at times, .. .,a_2, _1' 0, ,' ..., such that the inter-arrival

t times

u. = "j- j 1, (3.1.1)
S j ij-1'


are mutually independent. The distribution function of u. will be

denoted by

Pr(u. < u = F(u), u 0, j > 1. (3.1.2)


We assume there is a suEici'ent number of servers so that, if

a person joins the queue, his service commences immediately. The queue

length at any time t is the number of persons being served at time t

(no one has to wait for service) or, equivalently, the number of busy

servers at time t. Since a customer balks at a queue cE length K-1,

there are never more than K-1 servers busy at any one time. Hence, the

queues GI/M/- and GI/MIs, Eor s : K-1, both with balking at queues of

length K-1, are identical.

We also have the apparent absurdity that a person would balk

from a system with an infinite number of servers. It would be better










in this case to assert that the customer, who arrives to f ind K-I

servers busy, is rejected by tle system.

The successive service times for customers who join the queue

are denoted by wl'1 2, w3,..., and are assumed to be mutually independent

random variables. Any w. is also assumed to be independent of the

arrival times. The distribution function of w. is assumed to be
J

PrCw. i w = 1 e w > 0, j 1. (3.1.3)


As before, we let (Q(t); -- < t < + m} be the stochastic

process such that Q(t) represents the number of people in the system

at time t. The number of people that can be in the system at any one

time is restricted to K by requiring Q(C +0) = K-1 wherever Q(o) = K.

See Section 2.1 for more thorough discussion of Q(t).

We are interested in the development of Q(L) beyond the time

point CO. Without loss of generality, C could be taken to be zero.

Once again we are interested in the random variables


N = inf (k > 0OQ(ck ,
1 k

N = inf (k > N1 9Q(ck) = K), n 2, (3.1.4)
n n-I

M = N
I1 N1'

M. = N. N j 2, (3.1.5)

and

VI = VNI '0

S ON Nj 2. (3.16)
3 j Nj 1










A complete. description of these random variables is giuen in

Section 2.1.

Since the service times are negative a:
we find that many of the results derived in Chapter 2 will apply to

queue GI/I/' with balking at queues of length K-I without any chang

the proofs.

As beEore, we follow a systematic approach to find the solut


c-d,

the

e in


ion


of C(V.).



3.2 An Imbedded Marl-ov Chain


Lemma 2.2.2 applies to the stochastic process (Q(t); -"
defined in Section 3.1 and Q(t) is, therefore, in general, a non-M.rrkovian

process. However, there exists an imbedded Markov chain defined by


Qn = Q( n),


n = 0,1,2,....


Figures 3.1 and 3.2 gLve the relation between Q(t) and Qn"

Q clearly represents the number of persons in the system at the instant

the n-th customer arrives. Once again we shall restrict our attention

to the cases K 2, [or when K =1, PrQn = l1 = 1, n = 0,1,2,....

Information obtained from the stochastic process

(Qn; n = 0,1,2,...} will provide suEEicient information about

(Q(t); --

Theorem 3.2.1

The stochastic process fQ ; n = 0,1.2,...) deEined by (3.2.1)

has the following properties:


(3.2.1)















5
4-
3-
2-
FI-
0 ,




Figure 3.1.










On
R-^


A Typical Path of Q(t) for GIl//x with Balking
at Queues of Length 4.


N2 N3
S *'0


Path of Qn Corresponding to Q(t) in Figure 3.1.


-- I -2 3 4 5 6 7 8iT-- n9
0 1 2 3 4 5 6 7 8 9 10


Figure 3.2.










(a) Qn is a Eirkov chain;

(b) Qn is time-homogeneous;

(c) The class [1,2,...,K) of states on which Qn is defined

is an periodic, positive persistent communicating class;

and

(d) The one-step transition probability matrix P is given by

SK K-I ... 3 2 1


K b(K-1 ,0) b(K-1,1) ... b(K-1,K-3) b(K- ,K-2) b(K-I,K-1)

K-1 b(K-1,0) b(K-1,l) ... b(K-I,K-3) b(K-1,K-2) b(K-1,K-1)

K-2 0 b(K-2,0) ... b(R-2,R -4) b(K-2,K-3) b(K-2,K-2)





3 0 0 ... b(3, ) b(3,2) b(3,3)

2 0 0 .. b(2,0) b(2,1) b(2,2)

1 0 0 ... 0 b(1,0) b(l,l)


where
n ( -\u k -k Nu n-k
b(n,k)= k (C) (1 e-\)k (e-u dF(u). (3.2.2)
o

Proof of Theorem 3.2.1

The proofs of parts (a) and (c) are identical to those given for

Theorem 2.2.1 (a) and (c). We need only show parts (b) and (d).

Let U(*) be the unit step function at zero. Let X n+be the

number of customers who complete their service in (C C +1]. Then
n n +1









nl n + 1 X nif Q < K,

n+ Qn 1 Xn+1 1, if Qn = K
so that

n+ = Qn + 1 Xn+l U(Qn-K), Qn K.

Since we are looking at Q(t) at successive arrival times, we have


S Qnl Qn + 1.

By the balking aspect of the problem, Qn < K.

Hence,

Pr[Qn+1 = kJQn = j} = 0, for k > j + 1. (3.2.3)

Further,

PrQ 1 klQ = K) = Pr:' ,1 -1 + 1 = klQ = K}
+l n n+1 n -
= Pr(K-1) -X + 1 = klQ = K-11

= PrQ,,+ = kJQn = K-1}. (3.2.4)

For I < j K K-I and k j+1, we have

PQn+1 = klQn = j) = Pr(j Xn+ + 1 k = k j
= Pr{Xn1 = j + 1 klQ = j}

= Pr(exactly j + 1 k persons out

of j complete their service in
(on ]}
(an')in+ljj

= (Pr[exactly j+l k independent events
u
un+l
{w I un+l occur out of j possibilities))

= b(j, j + 1 k). (3.2.5)









Equations (3.2.3) through (3.2.5) are independent of n and hence

part (b) follows. Application of these equations when j,k = 1,2,...,K

gives the matrix P, thus completing the proof.

Let KK be the mean recurrence time of the state K for the imbed-

ded Markov chain of Theorem 3.2.1. If we let p k = b(n,k 1), we

have that the matrix P of Corollary 2.2.1 and the matrix P of Theorem

3.2.1 are identical. Hence, P22,33' d ,..., satisfy


(/b(l,0), K = 2,

S = (3.2.6)
PKK K-1 K-1
1/b(K-1.0) 1 + 2 b(F-l,K-k+l) K 3.
k=2 j=k Jii


The matrix P of Theorem 3.2.1 arid equation (3.2.6) both contain

the quantity b(n,k) defined by equation (3.2.2). However, this integral

expression is not a form that lends itself to easy evaluation. Fortu-

nately, b(n,k) can be expressed as a function of the parameter .\ of

(3.1-3) and the Laplace transform of (3.1.2) in the following manner.

Let

c(9) = e- dF(u), 6 > 0, (3.2.7)
o
0

then b(n,k) = (k) (- eI (e u) kdF(u)
o

S(n) () (-e- (e- dF(u)
o j=0 j
k k (n-k+j ,un-k

() E (k)(-l)J f e dF(u)
k j=0 o

k
= ( ) ( ) (-1) (.\(n-k+j)). (3.2.8)
j=0









3.3 Some Properties of the Time Between Balks


The following theorems refer to the random variables defined

by (3.1.5) and (3.1.6). The proofs of these theorems are identical to

those given for Theorems 2.3.1 and 2.3.2. Hence, only the statements

of the theorems wLll be given.


Theorem 3.3.1

(a) M,2 ,M 2 ,..., are mutually independent random variables,

(b) M2, 3M 4, ..., are identically distributed,

(c) E(m.) p- j 2
j 3 KK 2
with pKK given by (3.2.6).


Theorem 3.3.2

(a) VIV2,V ,..., are mutually independent ranoj-. variables,

(b) V2 V31 V ,..., are identically distributed,

(c) t(V.) = E(M.) S(u ), j 1

= S(KK (l) j 2

with KK given by (3.2.6) and ul defined by (3.1.2).
c'r.













CHAPTER 4


THE QUEUE GI/D/1 WITH BALKING
AT QUEUES OF LENGTH K-i



4.1 The Basic System


Consider a queueing system in which cusLomers arrive in the

system at times, ..._,o _,0-l,0 0, ,..., such that the inLer-arrival

times

u. = o o j > 1, (4.1.1)
j j '

are mutually independent. The distribution function of u. will be

denoted by


Prfu.i u} = F(u),
1


u O 0, j > 1.


One server is available to handle the needs of the customers.

This server dispenses his service on a strict "first come, first

served" basis. The service time of any customer who joins the queue

is assumed to be a constant value b.

Let [Q(t); -C < t <+ '" be the stochastic process such that

Q(t) represents the number of people in the system at time t. 'he

number of people that can be in the system at any one time is

restricted to K by requiring Q(o *0) = K-1 whenever Q(O) = K.

A more thorough discussion of Q(t) and its relation to the queue

length is found in Section 2.1. As in the previous two chapters,


(4.1 .2)










we are interested in the devclopnent of Q(t) beyond the time point

o0. We could, therefore, take a0 = 0 without loss of generality.

Define

N1 inf (k > OIQ(ok) = K],

Nn = inf (k > N _n-Q(Ok) = K), n > 2, (4.1.3)


M1 = N1 '

M. N. N.j- j 2 2, (4.1.)

and

V1 = N1 00'

v = CN ONjl j 2. (4.1.5)
S-j 'j- ...


A complete description of these randcm variables is given in

Section 2.1.

Our ultimate objective is to find an expression for t(V.).

Unfortunately, we are only able to obtain an exact expression for

S(V.) in terms of quantities that are difficult (if not impossible)

to obtain. The results that we do establish are based on the concept

of the waiting time in the system. The definition of the waiting time

and our motivation for its use in the search for a solution to (V.)

now follow.

Let W(t) be the amount of time it would take our server to

finish serving all of the customers present in the queue at time t.

W(t) is then called the waiting time in the system at time t. If a

is the time of an arrival into the system, then









Q(o) < K implies W(J) W(c-O) b b

and

Q(c) = K implies W(o) = W(O-O).

The latter condition reflects the fact that a customer arriving in the

system to find K-I persons already in the queue, leaves without waiting

to be served.

By the balking aspect of the problem, at most K-1 persons may

be in the queue at any particular time t. Since the service time is

a constant, b, we have C 5 W(t) 5 (K-l)b.

Again let 0 represent the time of an arrival into the system.

Clearly,

Q(C) I il and cnly if W(.-0) = 0.

Further, if Q(0) = j (j = 2,3,...,K) we must have Q(C-0) = j-1.

Since the service time for any one customer is b, a constant,

(j-2)b < W(J-0) (j-l)b.

That is, for j = 2,3,...,K

Q(o) = j if and only if (j-2) < W(G-0) (j-l)b.

Hence, complete knowledge of the stochastic process defined by

Qn = Q(n) n = ,1,, ,

can be obtained from a knowledge of the probability law of the

stochastic process defined by

W = W(U -0), n = 0, ,2, .. (4.1.6)
n n
Figures L.1 and 4.2 give typical realizations of Q(t) and Wn
n"















Q(t)


b b b b b


Figure 4.1.


v


4b-
3b-
2b-
Ib-
0


A Typical
at Queues


Path of Q(t) for CI/D/1 with Balking
of Length 4.


N2
0'


- V


II


9


Path of W Corresponding to Q(t) in Figure 4.1.
n


Figure 4.2.


- -- - -








The random variables defined by (4.1.3) can now be expressed

in the equivalent form

N = inf (k > OWk > (K-2)b],
I k
N = inf (k > n- W > (K-2)b), n 2. (4.1.7)
n n-1 K

It may be further shown, by considering a slight modification

of the proof of Lemma 2.2.2, that both Q(t) and W(t) are non-Markovian

processes. Since we now have constant rather than negative exponential

service times, it is also true that Q is non-Markovian. However, it

will be shown in the next section that W is a t-Mrkov process. Because
n
of the Markov nature of W and the equivalence of (0.1.3) and (4.1.7),

we are led to consider the stochastic process (W ; n = 0,1,2,...) in

our search for an expression for E(V.).

As always, we shall ignore the trivial case when K = 1.



4.2 The Waiting Time in the System


Let Wn ; n = 0,1,2,...) be the stochastic process defined by

(4.1.6) so that W is the waiting time in the system immediately pre-
n
ceding the n-th arrival into the system. Then it is clear that

Wn 5 (K-2)b, i.e., Qn < K, implies


0, if W + b u ,
n n+1
Wn+l = L
nl L b u if W + b> u
n n*1, n n+l,
and Wn > (K-2)b, i.e., Qn K, implies


Sn n+1
n+l W u if W > u
n n*l' n n*l










If '(-) is the unit step function at zero, we can rewrite the

above express ions in the form


W = max [0, W u + b U((K-2)b )}, n 2 0. (4.2.1)
n+l n n'1 n

We now note the relation between (4.2.1) and the analogous

expression for the waiting time just prior to an arrival in the queue

GI/D/1 (no balking). If we let (W ; n = 0,1,2,...) be the stochastic
n

process that represents the latter waiting time, we have the well-known

result

W = max (0, W u4 + bJ, n 2 0. (4.2.2)
n+l n nt1

See, for e.xarple, Prabhu (1965b).

The difference between W and W is that, for the former,
n n
a person who enters the system and faces a queue of length K-1 balks

and adds no service time to the system.

We new formally state and prove some basic properties of the

stochastic process [W ; n = 0,1,2,...).


Theorem 4.2.1

The stochastic process Wn ; n = 0,1,2,...) is a time-homogeneous

Markov process concentrated on the continuous state space [0, (K-l)b]

with one-step transition distribution

= = f 1 F(y-:c-0), y > (K-2)b,
Pr[w = =W (4.2.3)
1 F(y+b-x-O), y < (K-2)b.



Note that the interval on which (4.2.3) is concentrated is actually a

sub-interval of [0, (K-l)b], this sub-interval being a function of y.









Proof of Theorem 4.2.1

Let m ,m.2,3, ..., be a sequence of integers such that

mi < m ... < mk < n. Then

Pr[Wnl xSWn y, W nk Yk,...'W = Yl}

= Prrmax [y un1 + bU((K-2)b-y), 01 < x)

= Pr[nl x Wn v).


Hence W is tMarkovian. Now,
n

Pr(W, K <4Wn = y Pr{(W = O'W = y}

+ Pr{o < Wn1 "I y


= Pr[y un+ + bU((K-2)b-y) S 0)

+ Pr(0 < y u n bU((K-2)b-y) < x)

= Prun+l y + bU((K-2)b-y) -

= 1 F(y + bU((K-2)b y) x 0), (4.2.4)

which is independent of n, and therefore W is time-homogeneous.
n

Equation (4.2.4) is the same as (4.2.3), thus completing the proof.

To simplify the notation, we write

P (y, x) = PrW xlWo = y} n 1, (4.2.5)

and


P(y, x) = P (y, x).


(4.2.6)









Theorem 4.2.2

The n-step transitioii distribution functions defined by (4.2.5)

are concentrated on thc continuous state space [0, (K-1)b] (or a sub-

interval of it) and satisfy


P (z,x)
n+1


J= P (z,y) d F(y-x-0)
max(x,(K-2)b)

(K-2)b
+ P (z,y) d F(y+b-x-0)
max(0-,Ox-b) n

- P (z,(K-2)b) [F((K-l)b-x-0)

F((K-2)b-x-0)J].


Proof of Theorem 4.2.2

By the Chapman-Kolmogorov equations.


(K-1)b
Pn+1(zx) = J
O-


P(y,x) P (z,dy).
n


Integrating by parts, (4.2.8) becomes


v = (K-1)b
P (zx) = [P(y,x) P (z,y)]
nl = 0-
y 0 -


(K-I)b

- Jo_


P (z,y) P(dy,x)
n


(K-l)b
= P((K-l)b,x) P (z,y) P(dy,x).
n


(4.2.9)


(4.2.7)


(4.2.8)









By Theorem 4.2.1, we have


S0,'
P(dy,x) = ,
-d F(y-x-O),
Y


y >< ,

y > x,


(K-2


0, y < x-b,
P(dy.x) = 0
-d F(y*b-x-0), y > x-b,


P(dy,x) = F((K-l)b-x-0) F((K-2)b-x-0),

so that (4.2.9) becomes


P n(z,x)
n+ I


)b < y (K-.l)b,




y < (K-2)b,



y = (K-2)b,


= 1 F((K-l)b-x-0)

(K-1)b
+ P (z,y)d F(y-x-O)
* max(x,(K-2)b) y


(K-2)b
+ 'r
max(0- ,x-b)


? (z,y) d F(y+b-x-0)
r, y


- P (z,(K-2)b)[F((K-1)b-x-0)

F((K-2)b-x-0)]


But P (z,y) = 1, for y > (K-l)b. Hence (4.2.10) becomes

(4.2.7), thus completing the proof.

Various attempts have been made to establish the stationary

distribution of W all without success. Since the stochastic kernal

P(y,x) does not satisfy the regularity conditions stated in Feller

(1966, Sec. VIII, 7), we are not even sure if W possesses a stationary
n

distribution.

As will be seen in the next section, the most important result

of this section is the Markov property of Wn established in Theorem 4.2.1.
n


(4.2.10)










4.3 Some Properties of the Time Between Balks


We are now ready to obtain solutions for the expected values of

the random variables N. and V. defined by (0.1.4) and (I.1.5), respec-

tively. In the previous chapters exact results were derived for these

expectations, but in this chapter we must be content to utilize unsolved

expressions for the expectations of interest. In order to reach our

objectives, we make use of the properties of the following quantities.

Let

S = rb u ... u n ],
n 1 n

and, for 0 s y 5 (K-1)o, y a real number, 1at


S y, y' (K-2)b,
Y = [
y b, y > (K-21b.


Define

N(y) = inf fn > OJW > (K-2)b; W = y (4.3.1)
,n L.

(so that N(y) represents the number of arrivals until a balk occurs,

conditional on an initial amount y of waiting time in the system), and


M(y) = inf {n > OIW = 0 or W > (K-2)b; W = y (4.3.2)
n n

(so that M(y) is the number of arrivals until a customer either enters

an empty queue or balks, conditional on an initial amount y of waiting

time in the system).

It is clear that H(y) has the equivalent representation


M(y) = inf [n > OIS -y or Sn > (K-2)b-y'}
n










and, hence, is the index at which the random walk S ; n = 1,2,3,...

first leaves the interval (-y ,(K-2)b-y ].

Finally, let J = J(y) be the random variable that represents

the number of customers who enter an empty queue prior to the first

person to balk, conditional on an initial amount y of waiting time in

the system. That is, J is the number of times the waiting time process

W (n 2 1) takes the value zero before it takes a value greater than (K-2)t
n
First we shall prove a few lemmas that lead to a theorem which

expresses e(N(y)) in terms of expectations and probabilities associated

with tie random variables M(y), S M(O), and S (O). Note that when
N:* ( y ) L '

K= 2, M(O) = M(y) = 1 and SM(O) S = b-u.


Lemma 4.3.1

Pr[J = 0) = PrHS > (K-2)b-y'},

Pr(J = j} = Pr[S( -y' Pr(iS( 0-1

PrfSM() > (K-2)b, j >



Proof of Lemma 4.3.1

Define

L = M(y),

L. = inf [n > L. j W = 0 or W >(K-2)b; W y}, j 2


(so that L. is the index of the j-th person to balk or enter an empty

queue). Keep in mind that L. is a function of y.
.3










Nov, if j = 0,


Pc(j = 0] = E Pr(i = 0, N(y) = n'
n=1

a,'


= PrO0 < W 5 (K-2)b





= PriO < S + y (
n=l




= Z Pr[N(y) = n, S
n=1


= PrS(y)


,...,0 < W < (K-2)b,
n-
W > (K-2)b|WO = }
n


K-2)b,...,0 < Sn-
n-I


+ y (K-2)b,


S + v > (K-2)b}
n -


+ > (K-2)bJ


> (K-2)b-:, }


For j 1 ,


Pr(J = j] = Pr(L1 = 0, WL2 = ,...,VLj = 0,

WLj+ > (K-2)blW0 =y}

= Pr[WLj+ > (K-2)bWL = 0 PriWLj = OIWLLj = 0

...Pr(WLL2 = 01WL = 0 Pr[W(L1 = 0 WO = y, (4.3.3)

the last equality following from the Markov nature of W But we have,

PrWL1 = 0100 = y} = PriW () = 0W0 = y


= PrW = 0, M(y)
n=1


SE PriS + y < 0,
n=1

= PLS M(y) y}'


= nlwo = yI


(L4.3.4)













Pr(WLiI = 0JWL


= 0} = E Pr(L
in=1
n=l


- L. = n, W -= 0W = 0]
1 L. L
i+1 i


Z PrfO < WL (K-2)b,...,0 < W
n=1 1.

S(K-2)b, W. = 01w 0]
L.+n L.
L i


n=


Pr(O < SI f (K-2)b,...,0 < S
n-I


5 (k-2)b, S <: 0}



= E Pr(M(O) = n, SM(0) 0O
n=1


= Pr(SM(0) 5 0),


1 i-1 j-1.


(4.3.5)


Following a proof analogous to that used to obtain (4.3.5), wz get


(4.3.6)


> (K-2)b WL 0) = PrS(S ) > (K-2)b)
J


Applying (4.3.4) through (4.3.6) to (4.3.3), we complete the proof.


Lemma 4.3.2

Q(N(y)lJ=0) = e(M(y)IS > (K-2)b-y'),


e(N(y)lJ=j) = c(M(y)IS y')


(j-1) e .i(O)ISM(0) 0)


+ (M(0)SWM(o) > (K-2)b), j > 1.


and


Pr(w









Proof of Lemrna 4.3.2

Pr4N(y) = nlJ = 0} = Pr[N(y) = n, J = 0)/Pr{J = 0)

= Pr[O < S + y (K-2)b,...,0 < Sn1 + y (K-2)b,

Sn + > (K-2)b}/Pr(J = 0)

= Pr M(y) = n, S( + > (K-2)b}/PrtS + > (K-2)b}
m y) L-m(y)

= Pr[M(y) = nISM( > (K-2)b-y'},

and therefore

t(N(y)IJ = 0) = S(M(yv)|S > (K-2)b-y').

For j 2 1, note that

S(N(y)IJ = j) = g(Lj1 = j)
j+1

= 8(I,j = j) (L2 -Li3 = j) ... +

(L.j L lJ = j). (4.3.7)

We now solve for the various terms in the last expression of equs-

tion (4.3.7).


S(L1J = j) = Z n Pr(J = j, L = nj/Pr(J = j]
n=1

n Pr[WL,0, ...,WL =O,w > (K-2)b,Ll nW = y

n-l Pr(J = j

= n Pr[WL j> (K-2)bIWLj =0) PriWLj =OIWLj_ =0
n l J

PrWLI =0, L1 = n =
...Pr[WL2 =OIWLP =G0)
S L Pr[J = j}


n= n Pr[WL1 =, L = nIW =yv)/PrWL =W =Y]
n=il








= n Pr[(,(y) = n s1( ) + y' 0)
n=1

= e(t(y)L S .y y'). (4.3.8)


The second equality from the last follows from (4.3.3) and the next to

the last equality follows from (4.3.4). Using similar techniques as

those employed in deriving (4.3.8), we have

P(Li Lil j) = e(.f(0)ls o) 0), l i -j 1, (4.3.9)

and

E(L. L. j = j) = (tI(0)jc >(K-2)b). (4.3.10)
>l j )(0)

Applying (4.3.8) through (4.3.10) to (4.3.7), we complete the proof.

We are now ready to find the expression for c(N(y)).


Theorem 4.3.1

If K 2 2, then

Pr(S y'}
(N((y)) = e(M.(y)) + 8(M(0)) ,
Pr[S > (K-2)b}

and, in particular, if K = 2, then

C;(N(y) = 1 + [1 F(b+y'-0)]/F(b-0).


Proof of Theorem 4.3.1

For K > 2, let A(y) be the evenL [S-(y) +y' >(K-2)b) arLd A (y)

the complement event [S,, + y' 0 Further, let p(y) =Pr(A(y)

and q(y) = PriA'(y)) = l-p(y). We have, by Lemmas 4.3.1 and 4.3.2, that









co
C(N(y)) = P C(N(v)IJ = j) Pr[J = j}
j=0

= CQ(y)IA(y)) p(y)

+ -(1(y) A'Cy)) q(y) E q(O)j-l p(0)
j=1

+ C(M(0) A'(0)) q(y) E (j-1) q(0)- p(0)
j=1

+ I(M(0)lA(0)) q(y) Z q(0)j-1 p(0)
j=l

= C(0;(y)) + e(M(0)!A'(0)) q(,,) [l/p(0) i]

+ 8(M(0)IA(C)) q(y)

= (M(y)) + q(y) [q(M(0) A'(0)) (q()./p(O))

+ F(M(0) lA(0)) (p(O)/p(0))]

= C(M(y)) + 8(M(0)) q(y)/p(O),

and the first half of the theorem is proved.

For K = 2, we have immediately that

Pr[N(y) = I = Pr1[w > o1

= Pr(S1 + y > o0

= Pr[u < b + y'}


= F(b+y'-0)

SPr[J = 0}


and

Pr[N(y) =n)


=Pr[W1 =0,...,W n- =O,Wn 0>0WO

= Pr(b-u +y 0, b-u, O,...,b-u < 0. b-u > 0)
1 n-I n
= [ F(b+V'-O)] [1 F(b-0)]n-2 F(b-0)

= PrJ = n-1}, n 2.


(4.3.11)


(4 .3.12)








From (4.3.11) and (4.3.12), P('l( y)) tor the case K =2 followF trivially,

thus completing the proof.

We now proceed to find an approximate expression for (Ul(y))

when K 23. Since M(y) represents tth inde. of first passage of the randcm

walk [S ; n = 1,2,3,...) out of the interval (-' ,(K-2)b-y ], uc take

y >0 (and hence y >0) so that we can use Wald's approximation to yield

the following results for the random variables N(y) and S (y). See,

for example, Ferguson (1967).

Let


e(8) = j e dF(u), (4.3.13)
o

and e0 be the non-zero solution (if it exists) of

exp (Bob) C (90) = 1. (4.3.14)

We then have, for y > 0 and K > 3,

y'/(K-2)b, '(u ) I b.
Pr[ ( > (K-2)b-y'} exp(- ) (4.3.15)

exp(e0 F(K-2)b-y' j) e:p(-0v' )

e(u ) / b,

and

8(M(y)) y'[(K-2)b-y']/Var(ul), (ul) = b,

((M(y)) (1/(b-P(ul)))(-y (exp(O0[(K-2)b-y']) -1)

+ [(K-2)b-y ]( exp(-80y ))]

[exPp(e0[(K-2)b-y ]) exp(-90y )-1

P(u ) / b. (4.3.16)










By the use of Theorem 4.3.1, and equations (4.3.15) and

(4.3.16), an approximate expression for (N(y)) may be obtained.

However, if y = 0, then the above approximations give

e(M(0)) = PriSW(0) > (K-2)b 0.

Hence, the substitution of these quantities into the expression of

Theorem 4.3.1 yields 0/0, an undefined quantity. To circumvent this

difficulty, we write


-(M(a)) Pr[S < y'
8(N(y)) = .(M(y)) + lin----- ---
a-0 PriSM(a) > (K-2)b-a }

and substitute the approximations beEore taking the li

simplification, for K > 3,

(N(y)) [(K-2)b-y' ][(K-2)b h/ ]/Var(a), t

(N(y)) [(K-2 )b-y']/(b e(ul


(exp(E (K-2)b) exp( 0'))

0 (b .(u ))
0 I


nit. We have after



u(u.) b,
L


(u ) / b.


(4 .3.17)


It will be shown in Theorem 4.3.3 that (4.3.17) can be used to put an

upper bound on C(V ). The next two lemmas and the theorem following

them help us to attain this goal, while giving insight into why we

have devoted much effort to obtain e(N(y)).

Let


Y. = WNj
J j'


j I1,


(4.3.18)


(so that Y. is the amount of waiting time in the system immediately

prior to the j-th balk).








Lcimma 4 .3 .3

The stochastic process Y.; j = 1,2,3,...} is a tirme-homogencous

Markov process on the continuous sLate space ((K-2)b, (K-l)b].


Proof of Lemma 4.3.3

By the defirrit ion of N. in Section 4.1, the state space is as

described. For 1 < i < i2 < ... < i < j, by the Markov and time-

homogeneity properties of W we have

Pr(Yj1 'lYj Y = y ,..., Yil =
i-, Yim '' Yl]

= Pr[Wj+ .N = Wm = ,... WNil =

= Pr[WN2 x ;= 1 }

= Pr(Y xlY -= y},

thus complctir.g the proof.


Lemmr 4.3 .

(a) The distribution of M. conditional on Y. = y

is the same as that of N(y) for j 1..

(b) The distribution of V. conditicnal on Y. = y

is the same as that of N() 0 Eor j 1.

(c) min E(N(,)) E(M.) max O(N(y)),
(K-2)b

j 2.









Proof of Lenima 4.3.4

By the Markov nature of Wn, we have

PrfM = rIY = y
jtl+1 3


= Pr(wN (K-2)b,..., W+n
[WN.
3I 3


' (K-2)b,


W > (K-2)bjW = y
j" J

= Prf(.W1 (K-2)b,..., Wn-l (K-2)b, Wn > (K-2)bW y}

= Pr[N(y) = n),

completing part (a) of the lemma.

Further, since u1,u2,u3,..., are mutually independent and

identically distributed,


PcVj+1 j = y Pr(u N.+ + u
J 3+1


S WN.
3


= A


= Z Prju. ++ uN.
l +1 N.+n
n=1 J J


< x, Mj+. = nIW" = y
3


= Pr[o T <- x, N(y) = n}
n 0 1
n=l

= PrCaN(y) a0 s x,
NCy) 0

and the (b) part is proved.

Finally, Erom the (a) part of the lemma, we have, for j > 1,

E(M. ) = P [e(N(Y.))}.
J+1 3
j

But, by Lemma 4.3.3, (K-2)b < Y. j (K-l)b, so that part (c) of the

lemma follows. The proof is now complete.

Note that Y,Y2 ,Y3,.., are not identically distributed unless

Y has the same distribution as the stationary distribution of Y..
1 3









Therefore, Lemma 4.3.4 implies that neither :12 ,M3M, . nor

V2 V 3,V ..., are identically distributed sequence. of random variables

in general.


Theorem 4.3.2

(a) C(a a) = ) (N(v)) E(u ),
N(y) 0 1

(b) S(V.) = S(M.) S(u ), j > 1,
3 3 1

with ul given by (4.1.2).


Proof of Theorem 4.3.2

Let σ*_n = σ_n - σ_0, so that

    σ*_n = u_1 + ... + u_n,   n ≥ 1.

The sequence σ*_n - n E(u_1) forms a martingale and E(σ*_n - n E(u_1)) = 0 for n ≥ 1. Let B_k be the σ-field of events generated by (σ*_1, ..., σ*_k); then the event {N(y) > k} ∈ B_k and B_k ⊂ B_{k+1}. Hence, N(y) is an optional stopping rule and therefore has no effect on the martingale property. See, for example, Feller (1966). We now have E(σ*_{N(y)} - N(y) E(u_1)) = 0, which establishes part (a) of the theorem.

By Lemma 4.3.4 and part (a) above, we can write

    E(V_{j+1}) = E_{Y_j}{E(V_{j+1} | Y_j)}

               = E_{Y_j}{E(σ_{N(Y_j)} - σ_0)}

               = E_{Y_j}{E(N(Y_j)) E(u_1)}

               = E(u_1) E_{Y_j}{E(N(Y_j))}

               = E(u_1) E(M_{j+1}),   j ≥ 1.

Similarly, we have

    E(V_1) = E(σ_{N_1} - σ_0) = E(M_1) E(u_1).

The proof is now complete.
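The optional stopping argument above is an instance of Wald's identity. A minimal Monte Carlo sketch of that identity is given below; it uses a simple first-passage stopping time as an illustrative stand-in for the N(y) of the text, and the summand distribution, level, and replication count are hypothetical choices.

```python
import numpy as np

# Wald's identity: E(sigma*_N) = E(N) E(u_1) for an optional stopping rule N.
# Here N is the first n with u_1 + ... + u_n > level (a stand-in, not the N(y) above).
rng = np.random.default_rng(0)
mean_u, level, n_rep = 1.5, 5.0, 50_000

stopped_sums = np.zeros(n_rep)   # sigma*_N for each replication
stop_counts = np.zeros(n_rep)    # N for each replication

for r in range(n_rep):
    s, n = 0.0, 0
    while s <= level:
        s += rng.exponential(mean_u)   # i.i.d. positive summands with mean mean_u
        n += 1
    stopped_sums[r], stop_counts[r] = s, n

# the two printed values agree up to Monte Carlo error
print(stopped_sums.mean(), stop_counts.mean() * mean_u)
```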

If one could obtain the exact value of E(M_j), then the above theorem implies E(V_j) could be easily found. However, the best we are able to do is obtain an approximate upper bound for E(V_j) when K ≥ 3. An exact upper (and lower) bound for E(V_j) when K = 2 can be obtained directly from Theorem 4.3.1. No approximate lower bound for E(V_j) in the case K ≥ 3 can be found since, when y = (K-1)b (or equivalently y' = (K-2)b), the approximation (4.3.17) yields a value of zero. We now state formally the results that can be obtained. Since they hold for all j ≥ 2, the steady state solution satisfies these bounds also.

Theorem 4.3.3

Let θ_0 satisfy (4.3.14).

If K = 2, then

    E(u_1)/F(b-0) ≤ E(V_j) ≤ E(u_1)[1 + 1/F(b-0)],   j ≥ 2.

If K ≥ 3, then, approximately,

    E(V_j) ≤ E(u_1)(2K-5)b²/Var(u_1),   E(u_1) = b,

and

    E(V_j) ≤ b E(u_1)/(b - E(u_1))
             - E(u_1)[exp(θ_0(K-2)b) - exp(θ_0(K-3)b)]/[θ_0(b - E(u_1))],   E(u_1) ≠ b,   j ≥ 2.








Proof of Theorem 4.3.3

By Lemma 4.3.4(c), we need the maximum and minimum of E(N(y)) from Theorem 4.3.1 for y in the range (K-2)b < y ≤ (K-1)b, or, equivalently, (K-3)b < y' ≤ (K-2)b, to obtain bounds for E(M_j). The exact bounds for E(N(y)) when K = 2 are taken directly from the expression in the theorem, while the approximate results for K ≥ 3 are obtained from (4.3.17). It is easily shown that the lower bounds are reached when y' = (K-2)b and the upper bounds when y' = (K-3)b. By Theorem 4.3.2(b) we need only multiply these bounds by E(u_1) to complete the proof.

Finally, let F_Y(·) denote the stationary distribution of Y_j. That is, F_Y(·) satisfies

    F_Y(x) = ∫_{(K-2)b}^{(K-1)b} Pr{Y_2 ≤ x | Y_1 = y} dF_Y(y),             (4.3.19)

when (K-2)b < x ≤ (K-1)b. If Y_1 has the distribution F_Y(·), then it is well known that Y_j has the distribution F_Y(·) for all j. See, for example, Feller (1966). We then have, by Lemma 4.3.4, the following theorem.


Theorem 4.3.4

If Y_1 has the distribution F_Y(·) that satisfies (4.3.19), then

(a) M_2, M_3, M_4, ..., are identically distributed,

(b) V_2, V_3, V_4, ..., are identically distributed, and

(c) E(M_j) = ∫_{(K-2)b}^{(K-1)b} E(N(y)) dF_Y(y),   j ≥ 2,

with E(N(y)) given by Theorem 4.3.1.













CHAPTER 5


THE INVENTORY PROBLEM: DISCRETE CASE



5.1 Definition of the Inventory System


We suppose there exists a subwarehouse, maintaining an inventory of finite capacity S, that holds material (discrete) for future demand. We assume the item-by-item demand for the stored objects occurs according to the stochastic process {D(t); t ≥ 0} defined by

    D(t) = Σ_{j=1}^∞ U(t - T_j)                                             (5.1.1)

with U(·) the unit step function at zero. It will be assumed that the inter-demand times, T_1, T_2 - T_1, T_3 - T_2, T_4 - T_3, ..., for the items in storage are mutually independent and that the distribution of T_j - T_{j-1} is given by

    Pr{T_j - T_{j-1} ≤ u} = G(u),   u ≥ 0,   j ≥ 2.                         (5.1.2)


In order to maintain a stock on hand, the subwarehouse places an order for replacement items to a warehouse. It will be held that items are so ordered in lots of integral size v (1 ≤ v ≤ S) and that orders are placed at the times σ_1, σ_2, σ_3, ..., with σ_j defined by

    σ_j = inf {t | D(t) = jv},   j ≥ 1.                                     (5.1.3)

From the definition of D(t), we have σ_j = T_{jv}, so that σ_1, σ_2 - σ_1, σ_3 - σ_2, σ_4 - σ_3, ..., are mutually independent and σ_j - σ_{j-1} has distribution

    Pr{σ_j - σ_{j-1} ≤ x} = G_v(x),   x ≥ 0,   j ≥ 2,                       (5.1.4)

where G_v(·) is the v-th convolution of G(·) with itself.

Let {S(t); t ≥ 0} be the stochastic process such that S(t) represents the inventory or stock level in the subwarehouse at time t. If we let {R(t); t ≥ 0} be the stochastic process such that R(t) is the number of orders filled by the warehouse in [0,t] for our subwarehouse of interest, then S(t) will be defined by

    S(t) = S - D(t) + vR(t).                                                (5.1.5)

The above definition assumes that the inventory is initially full, i.e., S(0) = S.

An order for replacement stock of lot size v, made at time σ, may be one of two types. We have a "regular" order provided that S(σ) > S - v[S/v], where [x] means the integral part of x. In this case, the time to fill an order (hereafter, the service time) is assumed to be a random variable. The successive regular service times, denoted by w_1, w_2, w_3, ..., are assumed to be mutually independent and independent of the demand process D(t). The distribution function of w_j is given by

    Pr{w_j ≤ w} = H(w),   w ≥ 0,   j ≥ 1.                                   (5.1.6)

We have an "emergency" order if S(σ) = S - v[S/v]. In this case, the emergency service time is supposed instantaneous, or at least effectively zero, so that S(σ + 0) = S - v[S/v] + v.

In other words, regular ordering procedures are used provided that at the time we place such an order, there are at least v items in the subwarehouse. If there are less than v items in the subwarehouse when an order is placed, we utilize emergency measures to obtain the lot of items. Utilizing this ordering scheme, we avoid the disaster of running completely out of stock in the subwarehouse. Figure 5.1 gives a typical realization of S(t).

The behavior of the warehouse in filling the regular orders is important to a discussion of the inventory problem. It will be assumed that the warehouse operates under one of two distinct systems. Under the first system, the warehouse can handle only one order at a time, so that successive orders, which arrive while an order is being filled, form a queue and must wait to begin being processed or "served." The orders are then processed by the warehouse according to a strict rotation basis of "first come, first served." The warehouse just described will be called the one-server warehouse. Under the second system, an order begins processing as soon as it arrives in the warehouse, so that no order must wait for "service." A warehouse operating under this procedure will be called an infinite-server warehouse. We shall consider both one-server and infinite-server warehouses.

We now state the following formal definition of the concepts discussed so far.


Definition 5.1.1

The ordering scheme (G,H,S,v,1) is a policy for maintaining the level of inventory in a subwarehouse where:

(a) The capacity of the inventory is S.

(b) Item-by-item demand for objects in storage satisfies (5.1.1) and the inter-demand times are mutually independent random variables with distribution function G(·).





Figure 5.1. A Typical Realization of S(t).










(c) Lots of v items (1 ≤ v ≤ S) are ordered at the times given by (5.1.3).

(d) The orders are made to a one-server warehouse.

(e) Regular service times are mutually independent random variables, are independent of the demand process, and possess a distribution function H(·).

(f) Instantaneous service occurs for orders placed when less than v items remain in storage at the time the order is placed.

If we change condition (d) of the definition to state that orders are made to an infinite-server warehouse, we have the ordering scheme (G,H,S,v,∞).

Clearly, the cost of maintaining the inventory level in the subwarehouse will be a function of v, the lot size ordered. The optimal value of the lot size is defined herein to be that value of v which minimizes the cost. It has to be remembered, however, that frequently not all values of v are available to us, since orders to the warehouse may have to be in multiples of ten, a dozen, a gross, or some other basic unit. Our problem is then that of finding the optimal attainable value of v. In Section 5.3, a cost function is defined that utilizes reasonable costs associated with maintaining the inventory level.

While searching for a minimum cost with respect to v, v may take all values from 1 to S. Therefore, the distribution on regular service times could quite possibly be a function of v, the lot size ordered.









5.2 Relation of the Inventory System to Queues
    with Balking

Recall from Section 5.1 that S(σ_k) is the stock level at the time the k-th order is placed. S(σ_k) tells us whether an order is regular or emergency. Since, realistically, emergency orders have large costs (more than the costs of regular orders), the value of S(σ_k) is of extreme importance in determining the cost of maintaining the inventory level in the subwarehouse. A study of the properties of S(σ_k) can be facilitated by making the following observations. From equations (5.1.5) and (5.1.3), we have

    S(σ_k) = S - D(σ_k) + vR(σ_k)

           = S - v{k - R(σ_k)}.                                             (5.2.1)

Define the stochastic process {Q(t); t ≥ 0} by

    Q(t) = [D(t)/v] - R(t),                                                 (5.2.2)

where [x] is the integral part of x, so that Q(t) represents the number of unfilled orders, for our subwarehouse of interest, at time t. From (5.2.1) and (5.2.2), we have

    S(σ_k) = S - vQ(σ_k)                                                    (5.2.3)

so that a knowledge of Q(σ_k) gives us the value of S(σ_k). Therefore, a study of the stochastic process {Q(t); t ≥ 0} is needed. It will be demonstrated in Theorem 5.2.1 that such a study has been carried out for some special cases in Chapters 2 through 4.










For the cost function to be defined in Section 5.3, we will make use of the following random variables. Define

    N_1 = inf {k > 0 | S(σ_k) = S - v[S/v]},

    N_n = inf {k > N_{n-1} | S(σ_k) = S - v[S/v]},   n ≥ 2,                 (5.2.4)

(so that N_n is the number of orders, regular and emergency, placed up to and including the n-th emergency order), and

    V_1 = σ_{N_1},

    V_j = σ_{N_j} - σ_{N_{j-1}},   j ≥ 2,                                   (5.2.5)

(so that V_1 is the time until the first emergency order is placed and V_j (j ≥ 2) is the time between the (j-1)-st and j-th emergency orders).


Theorem 5.2.1

For an ordering scheme (G,H,S,v,1) ((G,H,S,v,∞)) we have the following dualities.

(a) Q(t) is the number of people in the system at time t for the queue G_v/H/1 (G_v/H/∞) with balking at queues of length [S/v] - 1.

(b) N_k is the number of people who arrive in the system up to and including the k-th person to balk in the queue G_v/H/1 (G_v/H/∞) with balking at queues of length [S/v] - 1.

(c) V_1 is the time until the first balk and V_j (j ≥ 2) is the time between the (j-1)-st and j-th balks in the queue G_v/H/1 (G_v/H/∞) with balking at queues of length [S/v] - 1.










Proof of Theorem 5.2.1, Part (a)

By definition (5.2.2), we have

    Q(t) = [D(t)/v] - R(t).

Now [D(t)/v] has unit increases at the times σ_1, σ_2, σ_3, ..., so that Q(t) also has unit increases at these times. Hence, the order time σ_j can be considered as the arrival time of a "customer" into the warehouse.

R(t), by definition, is the number of orders filled by the warehouse in [0,t]. The time it takes to fill a regular order is w_j. Since R(t) increases by a unit amount at the time an order is filled, Q(t) decreases by a unit amount at that time. Therefore, w_j is the service time of a "customer" in the warehouse.

Finally, by the restriction of emergency orders and (5.1.5), σ_k is such that

    S(σ_k + 0) = S - v[S/v] + v when S(σ_k) = S - v[S/v],

if and only if

    R(σ_k + 0) = R(σ_k) + 1.

Hence, from (5.2.2) and (5.2.3),

    S(σ_k + 0) = S - v[S/v] + v when S(σ_k) = S - v[S/v]

if and only if

    Q(σ_k + 0) = [S/v] - 1 when Q(σ_k) = [S/v].

Therefore, a "customer" balks at the queue of length [S/v] - 1. By the assumptions placed on the ordering times and the service times for the ordering scheme, and the discussion of a queue with balking in Section 2.1, we complete part (a) of the proof.










Proof of Theorem 5.2.1, Parts (b) and (c)

Simply note by (5.2.3) that

    N_1 = inf {k > 0 | Q(σ_k) = [S/v]},

    N_n = inf {k > N_{n-1} | Q(σ_k) = [S/v]},   n ≥ 2.

By part (a) of the theorem and the definitions in Section 2.1, we complete the proof.
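The duality can also be probed by simulation. The sketch below assumes Poisson item-by-item demand and exponential regular service (the scheme (M,M,S,v,1) taken up in Section 5.4) with hypothetical parameter values; it tracks the number of unfilled orders at the order epochs and counts emergency orders exactly as a balking queue counts balks.

```python
import numpy as np

rng = np.random.default_rng(1)

S, v = 9, 3                  # inventory capacity and lot size (hypothetical values)
omega, lam = 1.0, 1.25       # demand intensity and service rate, so rho = omega/lam = 0.8
K = S // v                   # orders balk (become emergencies) at queue length K - 1

q, emergencies, n_orders = 0, 0, 200_000
for _ in range(n_orders):
    # an order arrives at the warehouse; it is an emergency if it finds K - 1
    # unfilled orders already present (stock level S - v[S/v] at that instant)
    if q == K - 1:
        emergencies += 1     # filled instantaneously; q is unchanged
    else:
        q += 1               # the order joins the warehouse queue
    # time to the next order is the sum of v exponential inter-demand times
    u = rng.gamma(shape=v, scale=1.0 / omega)
    # one exponential server: departures while busy occur at rate lam
    q -= min(q, rng.poisson(lam * u))

# long-run fraction of orders that are emergencies; compare with 1/mu(v) of Section 5.4
print(emergencies / n_orders)
```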



5.3 The Cost Function C(v)

For the inventory problem discussed in Section 5.1, consider the following costs associated with running the ordering scheme (G,H,S,v,1) or (G,H,S,v,∞):

    C_0: The cost of placing an order,

    C_1: The per unit cost of the commodity, and

    C_2: A penalty cost for instantaneous delivery of an emergency order that is possibly a function of v, the lot size ordered.

Define the stochastic processes {N(t); t ≥ 0} and {M(t); t ≥ 0} by

    N(t) = Σ_{j=1}^∞ U(t - σ_j)                                             (5.3.1)

and

    M(t) = Σ_{j=1}^∞ U(t - σ_{N_j})                                         (5.3.2)

with σ_j given by (5.1.3) and N_j by (5.2.4). Then N(t) is the total number of orders placed in the interval [0,t] (regular and emergency) and M(t) is the number of these that are emergency orders. Definitions (5.3.1) and (5.3.2) are not the same random variables as defined by (4.3.1) and (4.3.2), respectively.

Let

    C(v;t) = (C_0 + C_1 v) N(t) + C_2 M(t)                                  (5.3.3)

so that C(v;t) is the total cost of ordering lots of size v during the time interval [0,t]. Since N(t) and M(t) are random quantities, we shall concern ourselves with the expected total cost, E{C(v;t)}, during the interval [0,t]. Further, the v which minimizes E{C(v;t_0)} for a fixed t_0 will minimize E{C(v;t_0)}/t_0, so that we shall restrict ourselves to the latter quantity. Finally, since a subwarehouse that maintains an inventory is usually established with the thought of operating for a long period of time, we choose to minimize the expected total cost of ordering per unit time in the long run, a quantity that is mathematically tractable. That is, we want the value of v (v = 1, 2, ..., or S) that minimizes

    C(v) = lim_{t→∞} E{C(v;t)/t}

         = (C_0 + C_1 v) lim_{t→∞} E{N(t)/t}

           + C_2 lim_{t→∞} E{M(t)/t}.                                       (5.3.4)

But σ_2 - σ_1, σ_3 - σ_2, σ_4 - σ_3, ..., are mutually independent and identically distributed random variables. Therefore, N(t) is a (delayed) renewal process and, by the Elementary Renewal Theorem, Prabhu (1965a), we have

Lemma 5.3.1

    lim_{t→∞} E{N(t)/t} = 1/E(σ_2 - σ_1)

                        = 1/(vτ)                                            (5.3.5)

with

    τ = ∫_0^∞ u dG(u).                                                      (5.3.6)

A similar closed form for lim_{t→∞} E{M(t)/t} does not exist for an arbitrary ordering scheme (G,H,S,v,1) or (G,H,S,v,∞). The reason for this is that M(t) is a function of the random variables N_j, whose properties depend heavily on the distribution of service times and on whether we have a one-server or infinite-server warehouse.

In the following sections, we consider reasonable candidates for the distribution function, H(·), on the service times and both one-server and infinite-server warehouses. For the cases discussed in these sections, a "closed" form for lim_{t→∞} E{M(t)/t} will be obtained.

At this juncture, it should be pointed out that when [S/v] = 1, Theorem 5.2.1 gives N_k = k. Therefore, M(t) = N(t) and lim_{t→∞} E{M(t)/t} = 1/(vτ). For the future we shall therefore concern ourselves with the cases [S/v] ≥ 2.









5.4 Solution of C(v) Using the Queue GI/M/1
    with Balking

General Demand Function

In this section, we develop the solution of the cost function C(v) for the ordering scheme (G,M,S,v,1).

The subwarehouse places orders of lot size v with a one-server warehouse, so that orders arrive at the warehouse, form a queue, and are processed on a strict "first come, first served" basis. We are leaving the item-by-item demand function D(t) general, but we are requiring that regular orders have service times with a Markov, or negative exponential, distribution. Therefore, the distribution of w_j is

    Pr{w_j ≤ w} = H(w) = 1 - e^{-λw},   w ≥ 0,   j ≥ 1,                     (5.4.1)

where 1/λ is the mean service time to process an order.

It is reasonable that the time to fill an order, w_j, should depend in some manner on v, the lot size of the order placed. We may allow for this by permitting λ to be a function of v. Typically, we may have λ = α/v, α a constant. In order to find the value of v which minimizes C(v) of Section 5.3, we prove the following theorem.


Theorem 5.4.1

For the ordering scheme (G,M,S,v,1), the cost function C(v) of Section 5.3 has the form

    C(v) = (C_0 + vC_1)/(vτ) + C_2/(vτ μ(v))                                (5.4.2)

with

    μ(v) = C_z^{([S/v]-2)} {1/([ψ(λ(1-z))]^v - z)},   [S/v] ≥ 2,

         = 1,                                          [S/v] = 1,

(C_z^{(k)}{·} denoting the coefficient of z^k in the power series expansion of the indicated function), where

    ψ(θ) = ∫_0^∞ e^{-θu} dG(u),

and

    τ = ∫_0^∞ u dG(u).


Proof of Theorem 5.4.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the proof. Now

    M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

Let V_1 = σ_{N_1} and V_j = σ_{N_j} - σ_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c), σ_{N_j} is the time until the j-th balk occurs in the queue G_v/M/1 with balking at queues of length [S/v] - 1. Hence, we can apply the results of Chapter 2 with

    K = [S/v],

    F(x) = G_v(x), and

    φ(θ) = [ψ(θ)]^v.

From Theorem 2.3.2, we therefore have that M(t) is a delayed renewal process. By the Elementary Renewal Theorem, Prabhu (1965a), we have

    lim_{t→∞} E{M(t)/t} = 1/E(V_2)

                        = 1/{vτ μ(v)},

with μ(v) from Theorem 2.2.2. The proof is now complete.

Writing (5.4.2) out, we see that C(v) is minimized when

    h(v) = (1/v)[1 + (C_2/C_0)/μ(v)]                                        (5.4.3)

is minimized. Therefore, h(v) can be considered as the cost function of interest.


Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse occurs according to a Poisson process with intensity ω. Then

    Pr{D(t) = n} = e^{-ωt} (ωt)^n/n!,   n ≥ 0,

and the inter-demand times have the distribution

    Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-ωx},   x ≥ 0,   j ≥ 2.           (5.4.4)

That is, we have the ordering scheme (M,M,S,v,1).

To apply Theorem 5.4.1, we need a workable expression for μ(v). Define

    ρ = ω/λ

and

    p = ρ/(1 + ρ),   q = 1/(1 + ρ).

From (5.4.4), we have

    ψ(θ) = (1 + θ/ω)^{-1}.









Hence,

    [ψ(λ(1-z))]^v = (1 + λ(1-z)/ω)^{-v}

                  = (1 + (1-z)/ρ)^{-v}

                  = p^v/(1 - qz)^v.

Therefore,

    μ(v) = C_z^{([S/v]-2)} {1/[p^v/(1-qz)^v - z]}

         = C_z^{([S/v]-2)} Σ_{j=0}^∞ z^j [(1 - qz)/p]^{(j+1)v},   |z| < p,

         = Σ_{j=0}^{[S/v]-2} C_z^{([S/v]-j-2)} [(1 - qz)/p]^{(j+1)v}

         = Σ_{j=0}^{[S/v]-2} ((j+1)v choose [S/v]-j-2) (-q)^{[S/v]-2-j} p^{-(j+1)v}.


As an illustration, Table 5.1 gives the values of h(v) of equation (5.4.3) for C_2/C_0 = 10.0 and various values of ρ and S. It is to be noted that considerable savings can be effected by the proper choice of v, the lot size ordered to replenish the stock.
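A minimal numerical sketch of this calculation is given below. It evaluates μ(v) from the closed form obtained above (as reconstructed here) and then h(v) of (5.4.3); the printed column may be compared with the S = 8, ρ = 0.8 column of Table 5.1.

```python
from math import comb

# mu(v) for the (M,M,S,v,1) scheme: coefficient of z^(K-2) in 1/([psi(lambda(1-z))]^v - z),
# evaluated from the finite sum derived above (a sketch under that reconstruction).
def mu(v, S, rho):
    K = S // v                       # K = [S/v]
    if K < 2:
        return 1.0
    p, q = rho / (1.0 + rho), 1.0 / (1.0 + rho)
    return sum(comb((j + 1) * v, K - j - 2) * (-q) ** (K - 2 - j) / p ** ((j + 1) * v)
               for j in range(K - 1))

def h(v, S, rho, c2_over_c0=10.0):   # the cost function of interest, equation (5.4.3)
    return (1.0 / v) * (1.0 + c2_over_c0 / mu(v, S, rho))

for v in range(1, 9):                # compare with the S = 8, rho = 0.8 column of Table 5.1
    print(v, round(h(v, 8, 0.8), 4))
```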


















TABLE 5.1

VALUES OF h(v) FOR THE ORDERING SCHEME (M,M,S,v,1)
C_2/C_0 = 10.0

            S = 8              S = 9              S = 10
  v     ρ=0.8    ρ=1.2     ρ=0.8    ρ=1.2     ρ=0.8    ρ=1.2

  1    1.5040   3.1717    1.3876   3.0673    1.3007   2.9877
  2    0.5672   0.7759    0.5672   0.7759    0.5184   0.6313
  3    0.6260   0.8743    0.3634   0.4461*   0.3634   0.4461
  4    0.3475*  0.4713*   0.3475*  0.4713    0.3475   0.4713
  5    2.2000   2.2000    2.2000   2.2000    0.2347*  0.2966*
  6    1.8333   1.8333    1.8333   1.8333    1.8333   1.8333
  7    1.5714   1.5714    1.5714   1.5714    1.5714   1.5714
  8    1.3750   1.3750    1.3750   1.3750    1.3750   1.3750
  9                       1.2222   1.2222    1.2222   1.2222
 10                                          1.1000   1.1000

* Denotes optimal value.









5.5 Solution of C(v) Using the Queue GI/M/∞
    with Balking

General Demand Function

In this section, we develop the solution of the cost function C(v) for the ordering scheme (G,M,S,v,∞).

The subwarehouse places orders of lot size v with an infinite-server warehouse, so that when an order is received at the warehouse, processing begins immediately. Once again, we hold the item-by-item demand function D(t) general, but we require that regular orders have service times with the Markov, or negative exponential, distribution. Therefore, the distribution of w_j is

    Pr{w_j ≤ w} = 1 - e^{-λw},   w ≥ 0,   j ≥ 1,                            (5.5.1)

where 1/λ is the mean service time to process an order.

As before, it is reasonable that the time to fill an order should depend in some manner on v, the lot size ordered. We may allow for this by permitting λ to be a function of v. Typically, we may have λ = α/v, α a constant. In order to find the optimal value of v, we prove the following theorem.

Theorem 5.5.1

For the ordering scheme (G,M,S,v,∞), the cost function C(v) of Section 5.3 has the form

    C(v) = (C_0 + vC_1)/(vτ) + C_2/(vτ μ(v))                                (5.5.2)

with

    μ(v) = m([S/v],v),   [S/v] ≥ 2,

         = 1,            [S/v] = 1,









where m(2,v), ..., m([S/v],v) satisfy the relationships

    m(2,v) = [ψ(λ)]^{-v},

    m(n+1,v) = [ψ(nλ)]^{-v} [1 + Σ_{k=2}^{n} b_v(n,n-k+2) Σ_{j=k}^{n} m(j,v)],

                n = 2, 3, ..., [S/v] - 1,

with

    ψ(θ) = ∫_0^∞ e^{-θu} dG(u),

    τ = ∫_0^∞ u dG(u),

and

    b_v(n,n-k+2) = (n choose k-2) Σ_{j=0}^{n-k+2} (n-k+2 choose j) (-1)^j [ψ((j+k-2)λ)]^v.


Proof of Theorem 5.5.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the proof.

Recall that

    M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

Let V_1 = σ_{N_1} and V_j = σ_{N_j} - σ_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c), σ_{N_j} is the time until the j-th balk occurs in the queue G_v/M/∞ with balking at queues of length [S/v] - 1. The results of Chapter 3 apply here, with

    K = [S/v],

    F(x) = G_v(x), and

    φ(θ) = [ψ(θ)]^v.

From Theorem 3.3.2, we therefore have that M(t) is a delayed renewal process. By the Elementary Renewal Theorem, Prabhu (1965a),

    lim_{t→∞} E{M(t)/t} = 1/{vτ μ(v)}.

The quantities used in calculating μ(v) follow from equations (3.2.6) and (3.2.8). The proof is now complete.

Writing (5.5.2) out, we see that C(v) is minimized when

    h_1(v) = (1/v){1 + (C_2/C_0)/μ(v)}                                      (5.5.3)

is minimized. Therefore, h_1(v) can be considered as the cost function of interest.
of interest.


Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse occurs according to a Poisson process with intensity ω. Then

    Pr{D(t) = n} = e^{-ωt} (ωt)^n/n!,   n ≥ 0,

and the inter-demand times have the distribution

    Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-ωx},   x ≥ 0,   j ≥ 2.           (5.5.4)

That is, we have the ordering scheme (M,M,S,v,∞).

Define ρ = ω/λ. From (5.5.4), we have

    ψ(θ) = (1 + θ/ω)^{-1}.

Hence, μ(v) of Theorem 5.5.1 becomes a function of

    ψ(nλ) = (1 + nλ/ω)^{-1} = (1 + n/ρ)^{-1},

which is a function only of ρ and not ω and λ separately.
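A minimal numerical sketch of the recursion of Theorem 5.5.1 for this Poisson case is given below (the form of b_v(·,·) is as reconstructed above, and the parameter values are those used in the tables); the printed value may be compared with the S = 9, ρ = 0.8, v = 3 entry of Table 5.2.

```python
from math import comb

# mu(v) = m([S/v], v) for the (M,M,S,v,inf) scheme via the recursion of Theorem 5.5.1,
# with psi(n*lambda) = (1 + n/rho)^(-1) for Poisson demand (a sketch of that recursion).
def mu_inf(v, S, rho):
    K = S // v
    if K < 2:
        return 1.0
    psi = lambda n: 1.0 / (1.0 + n / rho)

    def b(n, k):                       # b_v(n, n-k+2) as reconstructed above
        return comb(n, k - 2) * sum(comb(n - k + 2, j) * (-1) ** j * psi(j + k - 2) ** v
                                    for j in range(n - k + 3))

    m = {2: psi(1) ** (-v)}            # m(2,v) = [psi(lambda)]^(-v)
    for n in range(2, K):
        m[n + 1] = psi(n) ** (-v) * (1.0 + sum(b(n, k) * sum(m[j] for j in range(k, n + 1))
                                               for k in range(2, n + 1)))
    return m[K]

def h1(v, S, rho, c2_over_c0=10.0):    # the cost function of interest, equation (5.5.3)
    return (1.0 / v) * (1.0 + c2_over_c0 / mu_inf(v, S, rho))

print(round(h1(3, 9, 0.8), 4))         # compare with the v = 3, S = 9, rho = 0.8 entry of Table 5.2
```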









As an illustration, Table 5.2 gives the values of h_1(v) of equation (5.5.3) for C_2/C_0 = 10.0 and various values of ρ and S. Once again, considerable savings can be effected by the proper choice of v, the lot size ordered.


Comparison of Sections 5.4 and 5.5

We note that Sections 5.4 and 5.5 both deal with inventories subject to general demand and negative exponential regular service times. Whereas the results of Section 5.4 are based on orders being placed to a one-server warehouse, Section 5.5 assumes we have an infinite-server warehouse, so that processing of an order begins as soon as it is received.

These two sections represent the extreme cases in terms of the number of processors available in a warehouse to process an order when we have general item-by-item demand and negative exponential service times. Therefore, we can calculate the optimal value of v, the lot size ordered, for the best possible situation (processing begins immediately when an order is placed, Section 5.5) and the worst possible situation (an order must wait in turn before processing begins, Section 5.4).

It is worthwhile to compare Tables 5.1 and 5.2 for the case of Poisson item-by-item demand.



















TABLE 5.2

VALUES OF h_1(v) FOR THE ORDERING SCHEME (M,M,S,v,∞)
C_2/C_0 = 10.0

            S = 8              S = 9              S = 10
  v     ρ=0.8    ρ=1.2     ρ=0.8    ρ=1.2     ρ=0.8    ρ=1.2

  1    1.0002   1.0021    1.0000   1.0003    1.0000   1.0000
  2    0.5044   0.5234    0.5044   0.5234    0.5001   0.5014
  3    0.6260   0.8743    0.3406*  0.3654*   0.3406   0.3650
  4    0.3475*  0.4713*   0.3475   0.4713    0.3475   0.4713
  5    2.2000   2.2000    2.2000   2.2000    0.2347*  0.2966*
  6    1.8333   1.8333    1.8333   1.8333    1.8333   1.8333
  7    1.5714   1.5714    1.5714   1.5714    1.5714   1.5714
  8    1.3750   1.3750    1.3750   1.3750    1.3750   1.3750
  9                       1.2222   1.2222    1.2222   1.2222
 10                                          1.1000   1.1000

* Denotes optimal value.









5.6 Solution of C(v) Using the Queue GI/D/1
    with Balking

In this section, we consider the solution of C(v) for the ordering scheme (G,D,S,v,1).

The subwarehouse places orders of lot size v with a one-server warehouse, so that orders arrive at the warehouse, form a queue, and are processed on a strict "first come, first served" basis. We allow the item-by-item demand function D(t) to be general, but we require that regular orders have a constant service time b. It may be that b is a function of v, the lot size ordered. Typically, we may allow for this by permitting b = vd, d a constant.

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the solution of the cost function C(v) defined in Section 5.3.

Recall that

    M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

From Theorem 5.2.1(c), σ_{N_j} is the time until the j-th balk occurs in the queue G_v/D/1 with balking at queues of length [S/v] - 1. Hence, the results of Chapter 4 apply here with K = [S/v] and the distribution on inter-arrival times F(x) = G_v(x).

Except for the trivial case [S/v] = 1, when M(t) = N(t), it was noted in Section 4.3 that the random variables

    V_j = σ_{N_j} - σ_{N_{j-1}},   j ≥ 2,

do not have the property of identical distribution, as was the case when the service times had a negative exponential distribution. Therefore,











M(t) is not a renewal process and E{M(t)/t} does not possess a simple limit. Since we are interested in the long run behavior of the inventory, we shall be content to utilize the steady state properties of the system and to redefine M(t) so that a suitable limit for E{M(t)/t} can be obtained.

Recall the definition of σ_n from Section 5.1. It is clear that σ_n is a function of v, the lot size ordered. Define

    S_n(v) = nb - σ_n,   n ≥ 1,

and

    M(y;v) = inf {n > 0 | S_n(v) ≤ -y or S_n(v) > ([S/v] - 2)b - y},   [S/v] ≥ 3.

Furthermore, let {Y_j(v); j = 1,2,3,...} be the Markov process defined by (4.3.18). Y_j is written as Y_j(v) to emphasize the dependence of the process on v, the lot size ordered, when the quantities K and F(x) of Chapter 4 are [S/v] and G_v(x), respectively.

Denote by F_Y(y,v) the stationary distribution of Y_j(v), so that F_Y(y,v) satisfies

    F_Y(y,v) = ∫_{([S/v]-2)b}^{([S/v]-1)b} Pr{Y_2(v) ≤ y | Y_1(v) = x} dF_Y(x,v),

                ([S/v] - 2)b < y ≤ ([S/v] - 1)b.










Finally, let

    μ(v) = 1,                                                               [S/v] = 1,

         = 1 + ∫ {[1 - G_v(y-0)]/G_v(b-0)} dF_Y(y,v),                       [S/v] = 2,

         = ∫ E(M(y-b; v)) dF_Y(y,v)

           + ∫ [E(M(0; v))/Pr{S_{M(0;v)}(v) > ([S/v]-2)b}]

               Pr{S_{M(y-b;v)}(v) ≤ -y + b} dF_Y(y,v),                      [S/v] ≥ 3.      (5.6.1)

If Y_1(v) possesses the stationary distribution F_Y(y,v), then, by Theorem 4.3.1, μ(v) is one plus the expected number of orders of lot size v that occur between any two emergency orders when the inventory system is in the steady state.

Let V*_1, V*_2, V*_3, ..., be a sequence of mutually independent random variables such that the distribution of V*_j is

    Pr{V*_j ≤ x} = ∫ Pr{V_2 ≤ x | Y_1(v) = y} dF_Y(y,v),   j ≥ 1.

By Theorem 4.3.4, V*_1, V*_2, V*_3, ..., are identically distributed and represent the time between successive emergency deliveries when the system is in the steady state. For the purposes of this section, we redefine M(t) as

    M(t) = Σ_{j=1}^∞ U(t - V*_1 - ... - V*_j),                              (5.6.2)

so that M(t) is the number of emergency deliveries during [0,t] when the inventory system is in the steady state.











Theorem 5.6.1

For the ordering scheme (G,D,S,v,1), the cost function C(v) of Section 5.3 (with M(t) redefined by (5.6.2)) has the form

    C(v) = (C_0 + vC_1)/(vτ) + C_2/(vτ μ(v))                                (5.6.3)

with

    τ = ∫_0^∞ u dG(u)

and μ(v) given by (5.6.1).

Proof of Theorem 5.6.1

By the Elementary Renewal Theorem, Prabhu (1965a),

    lim_{t→∞} E{M(t)/t} = 1/E(V*_1)

                        = 1/{vτ μ(v)},

the last equality following from Theorem 4.3.2 and the definition of V*_j. Applying Lemma 5.3.1, we complete the proof.

Writing (5.6.3) out, we see that C(v) is minimized when

    h_2(v) = (1/v){1 + (C_2/C_0)/μ(v)}                                      (5.6.4)

is minimized. h_2(v) may therefore be taken as the cost function of interest.

In many cases it may be difficult to obtain μ(v) when [S/v] ≥ 3. Theorem 4.3.3 then gives a bound that may be used to obtain an approximate lower bound for C(v).











CHAPTER 6

THE INVENTORY PROBLEM: CONTINUOUS CASE

6.1 First Passage Times of Non-Negative,
    Continuous Stochastic Processes with
    Infinitely Divisible Distributions

In this chapter, we wish to consider the ordering scheme for a subwarehouse that maintains an inventory of fluid material. We assume the demand for the fluid in storage occurs continuously. It is reasonable to further assume that the demand during any interval of time is independent of the demand during any other nonoverlapping interval of time and that the probability law for the demand during any interval [s, s+t] is functionally dependent only on the length, t, of the interval. Hence, if {D(t); t ≥ 0} is the stochastic process such that D(t) represents the demand for the fluid in storage during the time interval [0, t], then we are assuming that {D(t); t ≥ 0} is a non-negative, continuous stochastic process with stationary, independent, nonoverlapping increments. By Theorem 2 of Feller (1966, p. 294), this is equivalent to stating that {D(t); t ≥ 0} is a non-negative, continuous stochastic process whose distribution is infinitely divisible. The distribution function of D(t) will be denoted by

    Pr{D(t) ≤ x} = ∫_0^x g(y,t) dy,   x ≥ 0,   t ≥ 0,                       (6.1.1)

with g(·,t) a density on (0,∞) for each t ≥ 0.









In the next section, a complete description of the ordering scheme used to replenish the inventory will be given. To solve this inventory problem, we need the probability law for the stochastic process {T(u); u ≥ 0} defined by

    T(u) = inf {t | D(t) ≥ u}                                               (6.1.2)

(so that T(u) is the first passage time of D(t) into the interval [u,∞)). The rest of the current section will be devoted to properties of T(u).


Theorem 6.1.1

Let {T(u); u ≥ 0} be the stochastic process defined by (6.1.2); then

(a) T(u) has stationary, independent, nonoverlapping increments, and

(b) The distribution of T(u) is

    Pr{T(u) ≤ t} = ∫_u^∞ g(y,t) dy,   t ≥ 0,   u ≥ 0.

Proof of Theorem 6.1.1

Since T(u) ≤ t if and only if D(t) ≥ u,

    Pr{T(u) ≤ t} = Pr{D(t) ≥ u}

                 = ∫_u^∞ g(y,t) dy,

completing part (b) of the proof.

To prove part (a), it is sufficient to show

    Pr{T(y) - T(w) ≤ s | T(w) = r} = Pr{T(y-w) ≤ s}                         (6.1.3)

for all s > 0, r > 0, and 0 ≤ w < y.








First we shall calculate Pr{T(y) ≤ t | T(w) = r} for w < y and r < t. Now, Pr{T(w) = r} = 0, so that Pr{T(y) ≤ t | T(w) = r} involves conditioning on an event of probability zero. Hence, the quantity Pr{T(y) ≤ t, T(w) = r}/Pr{T(w) = r} is undefined and therefore can not be used to define Pr{T(y) ≤ t | T(w) = r}. Cramer and Leadbetter (1967, pp. 219-222) give two plausible definitions for Pr{T(y) ≤ t | T(w) = r}, which are known as the vertical-window (v.w.) and horizontal-window (h.w.) conditional probabilities. These conditional probabilities are defined by

    Pr{T(y) ≤ t | T(w) = r}_{v.w.}

        = lim_{δ→0} Pr{T(y) ≤ t | r ≤ T(w) ≤ r + δ}                         (6.1.4)

and

    Pr{T(y) ≤ t | T(w) = r}_{h.w.}

        = lim_{δ→0} Pr{T(y) ≤ t | T(τ) = r for some τ ∈ [w,w+δ]},           (6.1.5)

respectively. Both (6.1.4) and (6.1.5) define Pr{T(y) ≤ t | T(w) = r} in terms of a limit of conditional probabilities which involve conditioning on events of non-zero probability. Equation (6.1.4) is the usual definition of Pr{T(y) ≤ t | T(w) = r}. However, in our particular case (6.1.4) leads to an undefined quantity. Therefore, we choose to use the horizontal-window definition given by (6.1.5). We have, for 0 < δ < y-w,










    Pr{T(y) ≤ t | T(w) = r}_{h.w.}

        = lim_{δ→0} Pr{T(y) ≤ t, T(τ) = r for some τ ∈ [w,w+δ]}
                    / Pr{T(τ) = r for some τ ∈ [w,w+δ]}

        = lim_{δ→0} Pr{D(t) ≥ y, w ≤ D(r) ≤ w + δ}
                    / Pr{w ≤ D(r) ≤ w + δ}

        = lim_{δ→0} ∫_y^∞ ∫_w^{w+δ} g(z-x,t-r) g(x,r) dx dz
                    / ∫_w^{w+δ} g(x,r) dx

        = ∫_y^∞ g(z-w,t-r) dz [g(w,r)/g(w,r)]

        = ∫_{y-w}^∞ g(z,t-r) dz

        = Pr{D(t-r) ≥ y-w}

        = Pr{T(y-w) ≤ t-r}.                                                 (6.1.6)

Let t = r+s in equation (6.1.6); then

    Pr{T(y) - T(w) ≤ s | T(w) = r}_{h.w.}

        = Pr{T(y) ≤ s+r | T(w) = r}_{h.w.}

        = Pr{T(y-w) ≤ s},

thus completing the proof.

The above theorem implies that T(u) is also non-negative with an infinitely divisible distribution. Hence, the Laplace transforms of T(u) and D(t) have the forms

    E(e^{-θT(u)}) = e^{-w(θ)u},   θ > 0,                                    (6.1.7)

and

    E(e^{-θD(t)}) = e^{-v(θ)t},   θ > 0,                                    (6.1.8)

respectively, such that w(θ) and v(θ) are positive for θ > 0 and possess completely monotone derivatives. See, for example, Feller (1966).

An attempt was made to find the relation between the Laplace transforms of (6.1.7) and (6.1.8) by utilizing the following technique. We have

    ∫_0^∞ ∫_0^∞ e^{-θt} e^{-θy} Pr{T(y) ≤ t} dt dy

        = ∫_0^∞ e^{-θy} e^{-w(θ)y}/θ dy

        = 1/[θ(θ + w(θ))],                                                  (6.1.9)

and

    ∫_0^∞ ∫_0^∞ e^{-θt} e^{-θy} Pr{D(t) ≥ y} dy dt

        = ∫_0^∞ e^{-θt} (1 - e^{-v(θ)t})/θ dt

        = v(θ)/[θ²(θ + v(θ))].                                              (6.1.10)

Now Pr{D(t) ≥ y} = Pr{T(y) ≤ t}, so that if

    ∫_0^∞ ∫_0^∞ e^{-θt} e^{-θy} Pr{D(t) ≥ y} dy dt

        = ∫_0^∞ ∫_0^∞ e^{-θt} e^{-θy} Pr{D(t) ≥ y} dt dy                    (6.1.11)

(i.e., if we can change the order of integration), then (6.1.9) and (6.1.10) must be identical. Hence, we have

    1/[θ(θ + w(θ))] = v(θ)/[θ²(θ + v(θ))]

or

    v(θ) w(θ) = θ²,

which implies

    w(θ) = cθ

and

    v(θ) = θ/c

for some c > 0. We must have, by the uniqueness of Laplace transforms, that

    Pr{D(t) = t/c} = Pr{T(y) = cy} = 1.

It now seems that the only non-negative, continuous process D(t) with stationary, independent, nonoverlapping increments is the trivial deterministic model D(t) = t/c. However, we know that if D(t) has the gamma density

    g(y,t) = e^{-y/ρ} y^{t-1}/[Γ(t) ρ^t],   y ≥ 0,                          (6.1.12)

then D(t) is non-negative; continuous; has stationary, independent, nonoverlapping increments; and is clearly not deterministic. The point we make is that (6.1.11) is true only for the trivial case D(t) = t/c and, hence, the order of integration can not be changed for any other choice of D(t). So far we have been unable to find a suitable method for obtaining the Laplace transform of T(u).









Since we would like as much information about T(u) as possible, it seems reasonable to try to calculate E(T(u)). By (6.1.7) and (6.1.8), it is clear that

    E(T(u)) = u E(T(1))

and

    E(D(t)) = t E(D(1)).

Now, by Lemma 1 of Feller (1966, p. 148),

    E(T(u)) = ∫_0^∞ [1 - Pr{T(u) ≤ t}] dt

            = ∫_0^∞ Pr{D(t) < u} dt

            = ∫_0^∞ ∫_0^u g(y,t) dy dt,

so that

    E(T(1)) = ∫_0^∞ ∫_0^1 g(y,t) dy dt.                                     (6.1.13)

It is strongly suspected that

    E(T(1)) = 1/E(D(1)),                                                    (6.1.14)

although all attempts to prove (or disprove) this fact have failed.

It is obvious that many questions remain unanswered about the process T(u). Possible areas of future research include finding the specific form of (6.1.7), which would completely solve the problem, or at least establishing (6.1.14).










6.2 Definition of the Continuous Inventory System
    and Its Relation to Previous Results

The inventory problem for the case when we have continuous demand is analogous to the description for the discrete case given in Section 5.1. The biggest difference is that v, the size of an order of replacement fluid, is not constrained to be an integer. We now discuss the system in detail.

Consider a subwarehouse that maintains a reservoir (inventory) of fluid material. The capacity of the reservoir is S, a positive real number. The fluid in the reservoir is continuously being depleted by its use in some process.

Let {D(t); t ≥ 0} be the stochastic process which represents the demand for the fluid in storage. In keeping with the constraints placed on D(t) in Section 6.1, we assume D(t) is non-negative, continuous, and has an infinitely divisible distribution. The distribution function of D(t) will be given by (6.1.1).

In order to maintain a positive level of fluid on hand, the subwarehouse places an order for an amount v (0 < v ≤ S) of replacement fluid to a warehouse. It will be held that the orders are placed at the times σ_1, σ_2, σ_3, ..., where

    σ_j = inf {t | D(t) ≥ jv},   j ≥ 1.                                     (6.2.1)

By Theorem 6.1.1, the random variables σ_1, σ_2 - σ_1, σ_3 - σ_2, ..., are mutually independent and σ_j - σ_{j-1} has the distribution function

    Pr{σ_j - σ_{j-1} ≤ t} = Pr{D(t) ≥ v}

                          = ∫_v^∞ g(x,t) dx,   t ≥ 0,   j ≥ 2.              (6.2.2)

Let {S(t); t ≥ 0} be the stochastic process such that S(t) represents the level of fluid in the reservoir at time t. We then define

    S(t) = S - D(t) + vR(t)                                                 (6.2.3)

where R(t) is the number of orders that have been filled by the warehouse for our subwarehouse of interest during the interval [0,t].

As before, an order placed at time σ may be one of two types. We have a regular order if S(σ) > S - v[S/v]. In this case, the time to fill an order (hereafter, the service time) is assumed to be a random variable. The successive regular service times, denoted by w_1, w_2, w_3, ..., are assumed to be mutually independent and independent of the demand process D(t). The distribution of w_j is given by

    Pr{w_j ≤ w} = H(w),   w ≥ 0,   j ≥ 1.                                   (6.2.4)

We have an emergency order if S(σ) = S - v[S/v]. In this case, S(σ+0) = S - v[S/v] + v, so that the emergency service time is assumed instantaneous, or at least effectively zero. Utilizing this ordering technique, the reservoir maintained by the subwarehouse is never depleted.

Realistically, the time to process a regular order should depend in some manner on v, the quantity being ordered. Therefore, one may have H(w) a function of v.










We assume the subwarehouse places its orders with either a one-server or infinite-server warehouse as described in Section 5.1.

Define

    F(t,v) = Pr{D(t) ≥ v}

           = ∫_v^∞ g(x,t) dx,   t ≥ 0,                                      (6.2.5)

so that F(·,v) represents the distribution function of σ_j - σ_{j-1}. As before, we seek that value of v, the size of the replacement order, such that the cost function C(v) defined in Section 5.3 is a minimum. If we replace the distribution functions G(·) and G_v(·) of Chapter 5 by F(·,1) and F(·,v), respectively, then all results of Sections 5.2 through 5.6 hold for our continuous inventory problem. One can easily change the terminology of Definition 5.1.1 so that it expresses the ordering scheme for maintaining the level of fluid for an inventory problem of the continuous type. The main point to remember is that we are now minimizing C(v) over all choices of v in the interval (0,S], and we are not restricted to the integral values of v as was the case in the last chapter.

An important and realistically relevant example of D(t) is when

    E(e^{-θD(t)}) = (1 + ρθ)^{-t}

                  = exp(-t ln(1 + ρθ)).

That is, when D(t) has a gamma distribution with parameters ρ and t. In this case,

    F(t,v) = ∫_{v/ρ}^∞ e^{-x} x^{t-1}/Γ(t) dx.                              (6.2.6)

However, no expression for

    ψ(θ) = ∫_0^∞ e^{-θt} dF(t,1)

could be obtained. Therefore, we are unable to give an example of the calculations involved in utilizing Sections 5.4 through 5.6. The study of the distribution function given by (6.2.6) is an area of possible future research.
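Although a closed form for ψ(θ) is not available, the transform can be evaluated numerically from (6.2.5)-(6.2.6), which is enough, in principle, to feed the calculations of Sections 5.4 through 5.6. A minimal sketch is given below; it assumes the gamma parametrization reconstructed in (6.2.6), and the value of ρ is hypothetical.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc

rho = 0.5   # hypothetical scale parameter of the gamma demand process

def F(t, v):
    # F(t,v) = Pr{D(t) >= v} = regularized upper incomplete gamma Q(t, v/rho),
    # under the parametrization assumed in (6.2.6)
    return gammaincc(t, v / rho)

def psi(theta, v=1.0):
    # integration by parts: int e^(-theta t) dF(t,v) = theta * int e^(-theta t) F(t,v) dt
    value, _ = quad(lambda t: np.exp(-theta * t) * F(t, v), 0.0, np.inf)
    return theta * value

# numerical stand-ins for the transform needed by the machinery of Sections 5.4-5.6
print(psi(1.0), psi(2.0))
```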












BIBLIOGRAPHY

References Cited

CRAMER, HARALD and LEADBETTER, M. R. (1967). Stationary and Related
    Stochastic Processes. Wiley, New York.

FELLER, WILLIAM (1966). An Introduction to Probability Theory and Its
    Applications, Vol. II. Wiley, New York.

FERGUSON, T. S. (1967). Mathematical Statistics: A Decision Theoretic
    Approach. Academic Press, New York.

GANI, J. (1957). "Problems in the probability theory of storage systems."
    Journal of the Royal Statistical Society, Vol. B19, pp. 181-206.

KENDALL, D. G. (1953). "Stochastic processes occurring in the theory of
    queues and their analysis by means of the imbedded Markov chain."
    The Annals of Mathematical Statistics, Vol. 24, pp. 338-354.

PARZEN, EMANUEL (1962). Stochastic Processes. Holden-Day, San Francisco.

PRABHU, N. U. (1965a). Stochastic Processes. Macmillan, New York.

PRABHU, N. U. (1965b). Queues and Inventories: A Study of Their Basic
    Stochastic Processes. Wiley, New York.


Additional References

BARTLE, R. G. (1966). The Elements of Integration. Wiley, New York.

KREYSZIG, ERWIN (1962). Advanced Engineering Mathematics. Wiley, New York.

LOEVE, M. (1963). Probability Theory. Van Nostrand, Princeton.

PERLIS, SAM (1952). Theory of Matrices. Addison-Wesley, Reading.













BIOGRAPHICAL SKETCH


Edwin Luther Bradley, Jr., was born July 16, 1943, at Jacksonville, Florida. In June, 1961, he was graduated from Clay County High School in Green Cove Springs, Florida. In August, 1964, he received the degree of Bachelor of Science with Honors with a major in Mathematics from the University of Florida. He immediately enrolled in the Graduate School of the University of Florida, leaving in January, 1965, to work at Lockheed-Georgia Company at Marietta, Georgia. He returned to Graduate School at the University of Florida in September, 1965, where he worked as a graduate assistant at the University Computing Center as a programmer until November, 1966. He has worked as a graduate assistant in the Department of Statistics from November, 1966, to the present time. During this period he received the degree of Master of Statistics in April, 1967, and from that time he has pursued his work towards the degree of Doctor of Philosophy.

Edwin Luther Bradley, Jr., is married to the former Dorothy Louise Hill and is the father of two children. He is a member of Phi Eta Sigma, Pi Mu Epsilon, Phi Kappa Phi, and the Institute of Mathematical Statistics.







This dissertation was prepared under the direction of the chairman of the candidate's supervisory committee and has been approved by all members of that committee. It was submitted to the Dean of the College of Arts and Sciences and to the Graduate Council, and was approved as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

December, 1969


Dean, College of Arts and Sciences


Dean, Graduate School


Supervisory Committee:


Chairman





Since we are looking at Q(t) at successive arrivals, we have

1 Qn41 -Qn +1. Also, of course, by the balking aspect of the problem

Q K. Hence.
n
Pr(Q+ k|Q =j} =0, for k>j +l. (2.2.8)


where


(2.2.6)


(2.2.7)


k =1 -0 l i .. P"


Xn+1








Further,
PrQn1 =k Qn =K) =Pr(K-Xn+1 1 +1 =k)
= Pr((k-1) -Xn+1 +1 =k)
= Pr(Qn+1 = kQn =K-1)]. (2.2.9)

For j =1,2,...,K-1 and k =2,...,j+l, let N(t) be a Poisson

process with inter-arrival times w l,w2,w3,..., then

Pr(Qn+1 =kIQn = j] =Pr(j -Xn+1 +1 =kIQn
= Pr(X = j -k + lIQn = i}

= %. (Prfw +....w. w u Y+...+w. u i)
= un+1(PrI +...+ Wj -k+1 Un+l' W1 + +Wj-k+2 n+l)

= eun l(Pr[N(u n+) = j k + 1))

= j'-At (It)J-k+1
= j-k+l)' dF(t)

= wj-k+1. (2.2.10)

Let j = 1,2,3,...,K-1 and k = 1, then
Pr([Q+ I = io =Prfj -x +1 =1 Q = j)
n+1 Pn n+1 = jlQn = jn


= Un+(Pr(wI +...+ w u n])


= (J xj-1 j e-xj-'1) dx) dF(t)

a 0 j- 3-1 "
= Jo (1 E e" t (t) l/i") dF(t)
i=O

= I o .... Cj-1 = kj-" (2.2.11)










Equations (2.2.8) through (2.2.11) are independent of n and

therefore Qn is time-homogeneous. Application of these equations for

j,k = 1,2,...K gives us the matrix P.


Proof of Theorem 2.2.1, Part (c)

An examination of the one-step transition matrix in the state-

ment of the theorem shows that each state communicates with all others.

Since, for example,

Pr(Qn1+ = KJQn = K) > 0,

state K is periodic. We therefore have a finite irreducible communicat-

ing class of periodic states so that each state is necessarily positive

persistent. The proof of Theorem 2.2.1 is now complete.

We shall prove a lemma that applies to an arbitrary Markov chain
*
with one-step transition matrix P given by


aw K K-I K-2 ... 3 2 1

K PK-I,1 PK-1,2 K-1,3 PK-1,K-2 PK-1,K-1 PK-1,K

K-1 pK-1,1 PK-1,2 PK-1,3 PK-1,K-2 PK-1,K-1 PK-1,K

K-2 0 PK-2,1 PK-2,2 PK-2,K-3 PK-2,K-2 PK-2,K-1





3 0 0 0 ... p3,2 3,3 P3,4

2 0 0 0 ... P21 2,2 P2,3

1 0 0 0 ... 0 pl1 p1,2

(2.2.12)


> 0 for i =1,2,...,K-1; j =1,2,...,i+1.


in which p. ,
IsJ









If a Markov chain has a one-step transition matrix of the form

(2.2.12), then clearly the Markov chain is periodic and positive

persistent. Hence, there exists a unique long run distribution equal

to the stationary distribution. With this in mind, we now state and

prove the lemma.


Lemma 2.2.3
*
Let P of (2.2.12) be the one-step transition probability matrix

of a Markov chain. Then, if 6' = (9 ,...,0 ) is the unique stationary

distribution for the Markov chain,

1/K = K '

where


(Kt K-'"' 2)B = (l,l,...,l)


PK-3,1


PK-3,K-4 P3,2-1

PK-3,K-3 P3,3


P2,12-
P2,2-1 P1,1


PK-1,1


PK-1, 3
K-1,3





K-1, K-2

PK-1,K-L


PK-2,1

PR-2,2-1





PK-2,K-3

PK-2,K-2








Proof of Lemma 2.2.3

By the definition of a stationary distribution, 8 is the unique

solution to

8'P = 8' (2.2.13)

and

S+...+ 8K = 1 (2.2.14)

Writing out the equations (2.2.13) and (2.2.14) with K on

the left hand side, we get the following system of K linearly independent

equations:

(1-K-1,I K = PK-1,1 K-1

-PK-12 K = K-,2-1)K-1 + K-2, K-2

K-1,3 K = PK-1,3 K- (K-2,2-1)K-2 + PK-3,1 K-3





K-1,K-2 K K-1,K-2 K- + K-2,K-3 K-2 + (P3,2-1)3 +P2,1 2

-PK-1,K-1 K K-1,K-1 K-I + K-2,K-2 K-2 +

+ P3,3 3 2,2- )92 +1, 1

K =K-i + ... 1 -1"
(2.2.15)


Let A be the (K x K-1) matrix of coefficients of 8 IK-"'. 1

in (2.2.15), then










PK-I,1

PK-1,2-1

PK-1,3


PR-1l,K-2

PK-1,K-1

1


The K-i columns

of dimension K. Hence,

that j is orthogonal to

A, we must have tl 1 0.

take 1 = -1. That is,


PK-2,1

PK-2,2-1 pK-3,1


PK-2, K-3


PK-2,K-2

1


PK-3, K-4

K-3, K-3

1


... p3,2-1


"." P3,3

... 1


P2,1

P2,2-1


Pl 1


of A form a set of linearly independent vectors

there exists a vector V' = (K >-1",. I) such

each column of A. By noting the last column of

Therefore, without loss of generality, we may

we have


I'A = O'


(2.2.16)


with 0 the null vector.

After multiplying the n-th equation of (2.2.15) by tn and

adding them up, we find


K (1-PK-1,1 )- PK-, 2K-1- PK-1,3[ K-2- -

-PK-1,K-23 PK-I,K-I2 + 1] = 1.


(2.2.17)


But by (2.2.16), if we take the scalar product of j with the

first column of A, we have


PK-1,IK + K(P-1,2-1)K-1 + PK-1,3K-2 +

PK-1,K-1l2-1 = 0. (2.2.18)









Substituting (2.2.18) into (2.2.17) yields


K K- K-i) =

Since


A = ... ....
1,1, ...1I


and (1 = -1, (2.2.16) implies that

(K K-1'"'.,2)B = (1,1,...,1),


thus completing the proof.


Corollary 2.2.1

Let 1iKK be the mean recurrence time of state K for a Markov

chain with one-step transition matrix P of (2.2.12) and state space

(1,2,..,K). Then t22 P332,44,..., are finite and satisfy

I/Pl, 1' K=2,


RK M K-1 K- 1 (2.2.19)
I/PK-, 1 + K-1K-k+2 3.
k=2 -I-k2j=k


Proof of Corollary 2.2.1

It is well known that KK = 1/SK 9 K the stationary probability

associated with state K, and PKK is finite. See, for example, Parzen (1962).

By Lemma 2.2.3, if K = 2, then B = [pl,1] and hence


P22 = 1/92 = 1/Pll = 2


(2.2.20)








If Ka3, take the scalar product of the first column of B and

(K",..'Y2), then the lemma yields

PK-1,lK = (1-PK-1,2K-1 K-,3K-2 PK-1,4K-3

PK-1,K-23 PK-1,K-l 2 + 1. (2.2.21)

Subtracting pK-1,IK-1 from both sides of equation (2.2.21),

we get

PK-1,1 K- 1K-) = ( P-PK-I,2 K-1,IPK-I

PK-1,3 K-2 -PK-,K-2 + 1. (2.2.22)

Adding and subtracting (1 -pK-,2 -PK-1,1 )K-2 on the right

hand side of equation (2.2.22) we obtain

PK-1,I1 K-K-) = (1 -PK-1,2 -PK-1,1)( K-1 -K-2

+ (1 -PK-1,3 -PK-1,2 -PK-1,1)K-2


PK-1,4 K-3 -. PK-K 2 +1. (2.2.23)

Continuing in this manner to add and subtract (coefficient of

.)'._I as j goes from K-2 to 3, we get

PK-1,1 (K-K- ) = (1 -pK-1,2 -PK-1,1 )(K-1 -K-2)

+ (1 -PK-1,3 -PK-1,2 -PK-I,1)( K-2 -K-3)

S...(1 -pK-1,K-2 K-1,K-3 -... K-I,1) 3 -

+ (1-PK-1,K-1 -PK-1,1)2 + 1. (2.2.24)


n K'
Note that 1 E p K-k = p .
k=1 k=n*1










Applying this result, equation (2.2.20), and p.. = -g j- to

(2.2.24), we have

K-i K-I
PK-I,IPKK = 1 + S p E PK-l,k+l. (2.2.25)
j=2 jj k=K-j+l

Setting i = (K-1)-k+2 in equation (2.2.25) yields

K-1 j
PK-IIKK= 1 + E p E pK-l,K-i+2
j=2 i=2

K-i K-I
= 1 + E PK Ki+2 E . (2.2.26)
i=2 j='

Dividing both sides of (2.2.26) by pK-1,1 yields equation

(2.2.19), thus completing the proof.

The result of Corollary 2.2.1 can be applied to the Markov chain

of Theorem 2.2.1 to obtain the mean recurrence time of state K. However,

the special form of the matrix P in Theorem 2.2.1 lends itself to a more

elegant solution for the mean recurrence time than that given by the

corollary. Before stating this solution in Theorem 2.2.2, we define

(9) = I e-u dF(u), 8>0, (2.2.27)
o
and

K(z) = E a. Z I !l. (2.2.28)
j=0 3

Also, for ease in writing, if h(Z) is any function of Z, denote by

C(n)h(Z) the coefficient of Zn in the expression h(Z).
z








Theorem 2.2.2

The mean recurrence time "KK for the state K of the Markov

process (Qn; n = 0,1,2,...) of Theorem 2.2.1 is finite and satisfies


p =(K-2) i/[cp(x(l -Z)) Z]. (2.2.29)


Proof of Theorem 2.2.2

By Theorem 2.2.1, "KK is finite and

PKK = I/'K' (2.2.30)

where K is the stationary probability associated with state K.

The P of Theorem 2.2.1(d) is P of (2.2.12) with

Si-1 J ,
Pj = { k. ,
j kj-1' j =i -1.


Hence, Lemma 2.2.3 applies with


"0

(a-i1) 0

a (a-1) 0

B = .



K-3 K-4 K-5 ... (1-1) 0

K-2 K-3 c-4 "2 01-1 0O









In Section 2.4, it will be shown that for a matrix of the special form

B above, the solution of 1/9K in Lemma 2.2.3 is

(K-2)
1/OK = K-K-I = C 1/(K(Z)-Z). (2.2.31)

But


s et (Xzt) /j'. 1, if IZ <: 1.
j=0

Therefore,


K(Z) = E zJ e (Xt) /j' dF(t)
j=0 o

W 0
= J E e-t (XZt)j!: dF(t)
o j=0


= S e dF(t)

= Cp(XO -Z)). (2.2.32)

Hence, (2.2.30), (2.2.31), and (2.2.32) give the desired result.



2.3 Some Properties of the Time Between Balks


The theorems in this section refer to the random variables

defined by (2.1.5) and (2.1.6). These theorems are not only useful in

discussion of the queueing problem, but also provide powerful results

for the inventory problem to be discussed in Chapters 5 and 6.








Theorem 2.3.1

(a) M ,M2,M,..., are mutually independent random variables,

(b) M2,M3,M4,..., are identically distributed,

(c) U(M ) = PKK, j *2, with pKK given by (2.2.29).

Proof of Theorem 2.3.1

Let B(i,j) denote the event

(Qi
Let k1,k2,k3,..., be a sequence of positive integers and define

n. = k + ... + k.. Then for any ma 2,
3 11
Pr(MI = k M2 = k2"..., M = km

= Pr[B(l,kl), B(k1 + I,n2),..., B(nm-1 + 1,n )IQO

= Pr(B(nm-1 +1,n )Qnm_1 =K] Pr(B(nm-2 +1,nm-1)I Qn-2 = K

... Pr(B(k1 +1,n2)9Qk =K} Pr(B(l,k1)|Q}1

= Pr(B(l,km)Q0 =K1 Pr(B(l,km_) IQ0=K

... Pr(B(l,k2 )Q =K] Pr (B(I,k)1Q0} (2.3.1)

The second equality above follows by the Markov property of Qn'

and the last equality follows by time-homogeneity of Q Hence,

Pr(M k1,..., M =km) = Pr[M =kl) ... Pr(M =ki].

Therefore, the M. are independent, and by examining the last expression

in (2.3.1), we see that M. (ji 2) are identically distributed.
3









Now, for j 2 2

Pr[Mj = k) = Pr(Q1 < K,..., Qk-l < K, Qk = KjIQo = K

= Pr[the first passage from states K to K

takes k stages}.

Hence,

e(M.) = mean recurrence time of state K

= KK"

Applying Theorem 2.2.2, we complete the proof.


Theorem 2.3.2

(a) V ,V2V3,..., are mutually independent random variables,

(b) V2,V3,V4,..., are identically distributed,

Cc) e(V.) = 8(M.) S(ul), j 1

= KK 8(ul), j 2
where pKK is given by (2.2.29) and ul is given by (2.1.2).


Proof of Theorem 2.3.2


Since,

j+1 = 'Nj+1 UNj


= UNj+1 + *.. + UNj+I,

parts (a) and (b) follow directly from Theorem 2.3.1 (a) and (b), the

assumed independence of the uk, and the independence of un+1 and Qn"

Assume that Q0 = i, (1 2 i K). Let

a = a a
n n o

so that a = u + ... + u n l.
n 1 n









Now (na nS(u )) is a martingale and
n 1
(aCn nC(u )) = 0, for all n.
n 1
The event ([M > k} e B, where Bk is the c-field

of events generated by (a ,...,a) and (w .I
1CB'--w~


Clearly, B C Bk+

Hence, 'l is an optional stopping rule

martingale property. See, for example, Feller

Therefore,

e(v1 M1 e(ul)) = e(cMl -

= 0.

Now let i = K in the above solution so

distribution as M. (j 2). Then


and has no effect on the

(1966, p. 214).




M C(u ))



that M has the same


e(Vj M.i(u)) = (V1 M1e(ul)) = 0, j 2,

and part (c) follows immediately from Theorem 2.3.1(c), thus completing

the proof.










2.4 The Inverse of a Special Triangular Matrix


Let 802 0 2',..., be a sequence of numbers such that 80 / 0.

If B is a (n+1) x (n+l) matrix of the form


02 01 0


0n-l n-2_


... 1 0


then B-1 is obviously of


B-1
B =


(2.4.1)


the form


C(0)

(1) 0(0)


(2)





p(n)


0(1) 0(0)


S(n-1) -(n-2)
P- P


We show the following:


Theorem 2.4.1


p(k) 0
k
E (
j=1


-1) A(j:k)/~,


... (1) a(0)


(2.4.2)


k = 0,


k > 1,


(2.4.3)











A(j:k) = E


... Es


i + ... + i. = k
1 :3

(i1 2 1,...,i. > 1)
1
m


If, in addition,



-(k) = (k)
P = "Z


0il -i2 ". ..ij-


E 0. converges,
j=0 J


1/B(Z),


k a 0


with B(Z) = E O.Zj.
j=0 3


Proof of Theorem 2.4.1


It is clear that


0(0)


= 1/00.


To show (2.4.3) true for


k Z 1, we need simply verify that


m

k=O



mi
E
k=(


(k) m-kr = 0,
0) 0 m-k 0


= 0/po + E
k=l


m = 1,2,...,n.


m
M/ + E
k=1


k
nm-k E


m j+1
= pO/ + E (-1) -/
j=1


m

k=j


0m-k A(j:k)*


with


(2.4.4)


(k)


0m-k


(-1)j A(j:k)/~+1
0


(2.4.5)


U









The coefficient of 1/P0 in the last expression is m-A(l:m) =0,

and for j = 1,2,...,m-1 the coefficient of


/j+'


S() { m-k ACj:k)
k=j

m-1
= (-i1)3 .. +
k=j i + ... +
1


- A(j+l:m)}


i. = k
I


s ... E
i + ... + i.+


Hence, (2.4.5) is zero and, therefore, (2.4

For n k a 1.


(k)
A(j:k) = C
z


Oil 0i2 ... Oij -_k


- ... P ij+l} = 0.


.3) is true.


( 8 + 2 n
1 2 n


Hence, from (2.4.3)

(k) (k) k Z ++1
P~= [C 1 /(-l) ~]/ ,z ... + z
j=l 0 1 n


(k) (BIZ + + Zn


10 k


The term in brackets in the numerator of the last expression

above contributes nothing to the coefficient of Zk and hence can be

dropped.









Therefore,


(k) (k)
Bk) = (-1/B0) C
0 z


B1 + ... + Pn

0 + 1 n z


But, by adding and subtracting %0 in the numerator, we have, for n ?k 21,


()) (k)
= /0


-0 +. + Zn
0 +.. +* n2"


= (k) /(c + ... + BZn).
z 0 n

The last equality follows since Z = 1 and we have taken

But (2.4.6) holds for all n -' k, hence


(k) = (k)
z


1/B(Z),


(2.4.6)


k 1.


k 1.


That (2.4.4) holds for k = 0 can be seen by


(0)
Z


(0)
l/B(z) = C
z


1/[l-(+l-B(Z))]


(0)
-Z


j=0

j=0


= E (1 -
j=0


(1 B=2))1



O) = 1/0o


The proof is now complete.

To verify equation (2.2.31), it is noted that the matrix B above

equation (2.2.31) is of the form (2.4.1) with


k {oC1 -1,


k / 1,
k = 1,


and, therefore, B(Z) = K(Z) Z.


= 0(0)










Applying Theorem 2.4.1 with n = K-2 to obtain the solution

of our particular B-1, we have from Lemma 2.2.3

-1
-K K-1 = (1,l,...,1) B- -+
-1
0



0

S(K-2)


(K-2)
= C 1/(K(Z) Z).
z












CHAPTER 3


THE QUEUE GI/M/A WITH BALKING
AT QUEUES OF LENGTH K-I



3.1 The Basic System


Consider a queueing system in which customers arrive in the

system at times, ...,_2,al,O0,al 2, ..., such that the inter-arrival

times

u. = aj a- j > 1, (3.1.1)


are mutually independent. The distribution function of u. will be

denoted by

Pr(u. S u) = F(u), u ; 0, j a 1. (3.1.2)

We assume there is a sufficient number of servers so that, if

a person joins the queue, his service commences immediately. The queue

length at any time t is the number of persons being served at time t

(no one has to wait for service) or, equivalently, the number of busy

servers at time t. Since a customer balks at a queue of length K-l,

there are never more than K-1 servers busy at any one time. Hence, the

queues GI//0V and GI/M/s, for s > K-1, both with balking at queues of

length K-1, are identical.

We also have the apparent absurdity that a person would balk

from a system with an infinite number of servers. It would be better










in this case to assert that the customer, who arrives to find K-i

servers busy, is rejected by the system.

The successive service times for customers who join the queue

are denoted by w l,2,w3,..., and are assumed to be mutually independent

random variables. Any w. is also assumed to be independent of the

arrival times. The distribution function of w. is assumed to be


Pr[w. w] = 1 e w > 0, j > 1. (3.1.3)


As before, we let [Q(t); -_ < t < 4 -} be the stochastic

process such that Q(t) represents the number of people in the system

at time t. The number of people that can be in the system at any one

time is restricted to K by requiring Q(C +0) = K-1 whenever Q(C) = K.

See Section 2.1 for a more thorough discussion of Q(t).

We are interested in the development of Q(t) beyond the time

point o0. Without loss of generality, o0 could be taken to be zero.

Once again we are interested in 'the random variables

N1 = inf (k > 0 Q(0k) = K),

N = inf (k > Nn-1I Q(C) = K), n 2, (3.1.4)


M1 = N1'

M = N. N j 2, (3.1.5)
.3 3 j-1'

and

V1 = VN1 0

Vj = Nj j Nj-1 j 2. (3.1.6)









A complete description of these random variables is given in

Section 2.1.

Since the service times are negative exponentially distribute

we find that many of the results derived in Chapter 2 will apply to

queue GI/M/r with balking at queues of length K-1 without any chang

the proofs.

As before, we follow a systematic approach to find the solut


ed,

the

e in


ion


of e(v.).
2


3.2 An Imbedded Markov Chain


Lemma 2.2.2 applies to the stochastic process (Q(t); -m
defined in Section 3.1 and Q(t) is, therefore, in general, a non-Markovian

process. However, there exists an imbedded Markov chain defined by


Qn = Q(Crn)'


n = 0,1,2,....


(3.2.1)


Figures 3.1 and 3.2 give the relation between Q(t) and Qn"

Qn clearly represents the number of persons in the system at the instant

the n-th customer arrives. Once again we shall restrict our attention

to the cases K22, for when K= 1, Pr(Qn = 1} = 1, n = 0,1,2,....

Information obtained from the stochastic process

(Qn; n = 0,1,2,...} will provide sufficient information about

(Q(t); -

Theorem 3.2.1

The stochastic process (Qn; n = 0,1,2,...) defined by (3.2.1)

has the following properties:

























Figure 3.1.










Qn

5
4-
3-
2-I
I-
0-
0 1

Figure 3.2.


A Typical Path of Q(t) for GI/M/r with Balking
at Queues of Length 4.


N2 N3
* *


* *


Path of Qn Corresponding to Q(t) in Figure 3.1.


~~~~- m .o L M









(a) Qn is a Markov chain;

(b) Qn is time-homogeneous;

(c) The class f[,2,...,K] of states on which Qn is defined

is an periodic, positive persistent communicating class;

and

(d) The one-step transition probability matrix P is given by

y K K-I ... 3 2 1

K b(K-l,0) b(K-l,l) ... b(K-1,K-3) b(K-1,K-2) b(K-1,K-l)

K-1 b(K-l,0) b(K-l,1) ... b(K-1,K-3) b(K-1,K-2) b(K-1,K-1)

K-2 0 b(K-2,0) ... b(K-2,K-4) b(K-2,K-3) b(K-2,K-2)




3 0 0 ... b(3,1) b(3,2) b(3,3)

2 0 0 ... b(2,0) b(2,1) b(2,2)

1 0 0 ... 0 b(l,0) b(l,l)


where


b(n,k)= (k) (1 eu (e-)n-k dF(u). (3.2.2)


Proof of Theorem 3.2.1

The proofs of parts (a) and (c) are identical to those given for

Theorem 2.2.1 (a) and (c). We need only show parts (b) and (d).

Let U(') be the unit step function at zero. Let X n+ be the

number of customers who complete their service in (Cn, an1 ]. Then
nfl*









Qn+l = Qn + 1 Xn+

Qn9 = Qn + 1 Xn1 1,


if Q < K,

if Qn = K,


so that


Qn+1= Qn + 1 n+1 U(Qn-K), Qn K.

Since we are looking at Q(t) at successive arrival times, we have


1 s Qn+1 Qn + 1.

By the balking aspect of the problem, Qn K.
n


PrQn+1 = kQn = j] = 0, for k > j + 1.

Further,


(3.2.3)


Pr[Q = k Q = K) = PrFK X -1 + 1 = kQn = K)
n+1 n n+l n
= Pr[(K-1) Xn+ + I = kQn = K-l]

= Pr([Qn = kjQ = K-l}. (3.2.4)

For 1 j S K-1 and k j+1l, we have

Pr([Q = kQn = j} = Pr(j X+ + I = kIQ = j)

=Pr(Xn+1 = j + 1 klQn= j

= Pr(exactly j + 1 k persons out

of j complete their service in

(an, n+ln 1
= n (Pr[exactly j+1l k independent events
un+1
(wI S u n+} occur out of j possibilities})


= b(j, j + 1 k).


Hence,


(3.2.5)








Equations (3.2.3) through (3.2.5) are independent of n and hence

part (b) follows. Application of these equations when j,k = 1,2,...,K

gives the matrix P, thus completing the proof.

Let PKK be the mean recurrence time of the state K for the imbed-

ded Markov chain of Theorem 3.2.1. If we let pn,k = b(n,k 1), we

have that the matrix P* of Corollary 2.2.1 and the matrix P of Theorem

3.2.1 are identical. Hence, p22'P33',p4,..., satisfy


1l/b(l,0), K = 2,


KK i K-1 K-1
1/b(K-1,0) + E b(K-1,K-k+l) SE K > 3.
k=2 j=k


(3.2.6)


The matrix P of Theorem 3.2.1 and equation (3.2.6) both contain

the quantity b(n,k) defined by equation (3.2.2). However, this integral

expression is not a form that lends itself to easy evaluation. Fortu-

nately, b(n,k) can be expressed as a function of the parameter X of

(3.1.3) and the Laplace transform of (3.1.2) in the following manner.

Let


p() = J e-eu dF(u),


e0 0,


(3.2.7)


b(n,k) = (k) (1 e-T5k (e-u )n-k dF(u)
o
k

= () E (k) (-e-'U)j (e-k)un-k dF(u)
o j=0 3
fn k (n-k+j)Xu
k) E ( )(-1) 1 -a- dF(u)
j=0 Jo

= (n) () (-1) p(X(n-k+j)). (3.2.8)
Sj=0 3


then









3.3 Some Properties of the Time Between Balks


The following theorems refer to the random variables defined

by (3.1.5) and (3.1.6). The proofs of these theorems are identical to

those given for Theorems 2.3.1 and 2.3.2. Hence, only the statements

of the theorems will be given.


Theorem 3.3.1

(a) M ,M2 M3,..., are mutually independent random variables,

(b) M2,M3,M ,..., are identically distributed,

(c) p(M ) = 1KK, j > 2

with pKK given by (3.2.6).


Theorem 3.3.2

(a) V ,V2 ,V3..., are mutually independent random variables,

(b) V2 ,V3,V,..., are identically distributed,

(c) C(V.) = (M.) C(u), j a I

= PK S(ul), j 2


with PKK given by (3.2.6) and ul defined by (3.1.2).












CHAPTER 4


THE QUEUE GI/D/1 WITH BALKING
AT QUEUES OF LENGTH K-i



4.1 The Basic System


Consider a queueing system in which customers arrive in the

system at times, ...,C _21, ,0,al, 2,...,"' such that the inter-arrival

times

u. = a. a j a 1, (4.1.1)
3 3 j-l'

are mutually independent. The distribution function of u. will be

denoted by


Pr(u. S u) = F(u),
3


u 0, j l 1.


One server is available to handle the needs of the customers.

This server dispenses his service on a strict "first come, first

served" basis. The service time of any customer who joins the queue

is assumed to be a constant value b.

Let ([Q(t); -- < t <+ -) be the stochastic process such that

Q(t) represents the number of people in the system at time t. The

number of people that can be in the system at any one time is

restricted to K by requiring Q(a +0) = K-1 whenever Q(C) = K.

A more thorough discussion of Q(t) and its relation to the queue

length is found in Section 2.1. As in the previous two chapters,


(4.1.2)










we are interested in the development of Q(t) beyond the time point

C0. We could,therefore, take a0 = 0 without loss of generality.

Define

N1 = inf (k > 0 Q(ak) = K],

Nn = inf (k > n-1 I Q( = K), n : 2, (4.1.3)


1 = N1,

Mj = Nj Nj_I, j 2 2, (4.1.4)

and

V1 =N1 -

Vj = aNj Nj-1, j 2- (4.1.5)


A complete description of these random variables is given in

Section 2.1.

Our ultimate objective is to find an expression for e(V.).

Unfortunately, we are only able to obtain an exact expression for

e(V.) in terms of quantities that are difficult (if not impossible)

to obtain. The results that we do establish are based on the concept

of the waiting time in the system. The definition of the waiting time

and our motivation for its use in the search for a solution to 8(V.)

now follow.

Let W(t) be the amount of time it would take our server to

finish serving all of the customers present in the queue at time t.

W(t) is then called the waiting time in the system at time t. If a

is the time of an arrival into the system, then









Q(C) < K implies W(a) = W(o-O) + b

and

Q(a) = K implies W(o) = W(O-O).

The latter condition reflects the fact that a customer arriving in the

system to find K-1 persons already in the queue, leaves without waiting

to be served.

By the balking aspect of- the problem, at most K-1 persons may

be in the queue at any particular time t. Since the service time is

a constant, b, we have 0 : W(t) (K-l)b.

Again let a represent the time of an arrival into the system.

Clearly,

Q(c) = 1 if and only if W(c-0) = 0.

Further, if Q(Q) = j (j = 2,3,...,K) we must have Q(a-0) = j-1.

Since the service time for any one customer is b, a constant,

(j-2)b < W(a-0) (j-l)b.

That is, for j = 2,3,...,K

Q(a) = j if and only if (j-2) < W(C-0) 9 (j-l)b.

Hence, complete knowledge of the stochastic process defined by

Qn = Q(an) n = 0,1,2,...,

can be obtained from a knowledge of the probability law of the

stochastic process defined by

W = W(a -0), n = 0,1,2,.... (4.1.6)
n n
Figures 4.1 and 4.2 give typical realizations of Q(t) and W .
n













Q(t)


b b b b b


Figure 4.1.


A Typical
at Queues


Path of Q(t)
of Length 4.


Wn

4b-
3b-
2b-
Ib
0-
0


for GI/D/I with Balking


N2


Path of Wn Corresponding to Q(t) in Figure 4.1.
n


Figure 4.2.








The random variables defined by (4.1.3) can now be expressed

in the equivalent form

NI = inf [k > OIWk > (K-2)b],

Nn = inf (k > NnlW k > (K-2)b), n a 2. (4.1.7)

It may be further shown, by considering a slight modification

of the proof of Lemma 2.2.2, that both Q(t) and W(t) are non-Markovian

processes. Since we now have constant rather than negative exponential

service times, it is also true that Qn is non-Markovian. However, it

will be shown in the next section that W is a Markov process. Because

of the Markov nature of W and the equivalence of (4.1.3) and (4.1.7),
n
we are led to consider the stochastic process (Wn; n = 0,1,2,...) in

our search for an expression for 8(V.).

As always, we shall ignore the trivial case when K = 1.



4.2 The Waiting Time in the System


Let (W ; n = 0,1,2,...) be the stochastic process defined by

(4.1.6) so that W is the waiting time in the system immediately pre-
n
ceding the n-th arrival into the system. Then it is clear that

W s (K-2)b, i.e., Q < K, implies


{ 0, if W + b S u ,
W = U n 1
+1 W + b if W + b > u
n n+1, n n+1,
and Wn > (K-2)b, i.e., Q = K, implies

0, if Wn u n+l

n+l= n u n, if Wn > un+.
n n*1' n n+1









If U(') is the unit step function at zero, we can rewrite the

above expressions in the form

W n+1 = max ([0, Wn un+l + b U((K-2)b Wn )], n 2 0. (4.2.1)

We now note the relation between (4.2.1) and the analogous

expression for the waiting time just prior to an arrival in the queue
*
GI/D/I (no balking). If we let [W ; n = 0,1,2,...) be the stochastic
n
process that represents the latter waiting time, we have the well-known

result

W +1 = max t0, W u + b), n 2 0. (4.2.2)

See, for example, Prabhu (1965b).

The difference between W and W is that, for the former,
n n
a person who enters the system and faces a queue of length K-1 balks

and adds no service time to the system.

We now formally state and prove some basic properties of the

stochastic process [Wn; n = 0,1,2,...}.


Theorem 4.2.1

The stochastic process Wn ; n = 0,1,2,...) is a time-homogeneous

Markov process concentrated on the continuous state space (0, (K-l)b]

with one-step transition distribution

1 F(y-x-0), y > (K-2)b,
Pr{w S x W = yJ = (4.2.3)
1 F(y-b-x-0), y I (K-2)b.


Note that the interval on which (4.2.3) is concentrated is actually a

sub-interval of [0, (K-l)b], this sub-interval being a function of y.









Proof of Theorem 4.2.1

Let mn,m2,m3,..., be a sequence of integers such that

mI < m2 < ...< mk < n. Then

Pr(Wn.*1 < CiWn = y, Wnmk = Yk','**WMl =

= Pr(max (y un+1 + bU((K-2)b-y), 01 x)

= Pr(Wn+1 x = y .

Hence Wn is Markovian. Now,

Pr(Wn+l < xWn = y] = Pr(Wn+, = OW = yW

+ Pr(O < Wn+l < xw n

= Prfy Un+1 + bU((K-2)b-y) 0)

+ Pr(0 < y un+1 + bU((K-2)b-y) S x)

= Pr[u n1 2 y + bU((K-2)b-y) x)

= 1 F(y + bU((K-2)b y) x 0), (4.2.4)

which is independent of n,"and therefore W is time-homogeneous.

Equation (4.2.4) is the same as (4.2.3), thus completing the proof.

To simplify the notation, we write

Pn(y, x) = Pr(W n xjWO = y], n 1, (4.2.5)

and


P(y, x) = Pl(y, x).


(4.2.6)









Theorem 4.2.2

The n-step transition distribution functions defined by (4.2.5)

are concentrated on the continuous state space [0, (K-l)bj (or a sub-

interval of it) and satisfy


P n+ (zx)


= P (z,y) d F(y-x-0)
max(x,(K-2)b) n y

(K-2)b
+ K-2)bP (z,y) d F(y+b-x-0)
max(0-,x-b) n y

- P (z,(K-2)b) [F((K-l)b-x-0)

F((K-2)b-x-0)].


Proof of Theorem 4.2.2

By the Chapman-Kolmogorov equations,


(K-l)b
P (z,x) = P(y,x) P (z,dy).
o-

Integrating by parts, (4.2.8) becomes


P n+(Z,x) = [P(y,x) P (z,y)]


(K-1)b
-SO-


y = (K-l)b

y = 0-


Pn(z,y) P(dy,x)


(K-l)b
= P((K-l)b,x) K-)b n(z,y) P(dy,x).
0_


(4.2.9)


(4.2.8)


(4.2.7)









By Theorem 4.2.1, we have


0,
P(dy,x) = {
S-d F(y-x-O),
y


y S x,

y > ,
y>x ,


(K-2


0, y 9 x-b,
P(dy,x) = 0
-d F(y+b-x-0), y > x-b,

P(dy,x) = F((K-l)b-x-0) F((K-2)b-x-0),

so that (4.2.9) becomes


P n+(z,x)


)b < y : (K-l)b,




y < (K-2)b,


y = (K-2)b,


= 1 F((K-l)b-x-0)

(K-l)b
+ K-l)b P (z,y)d F(y-x-O)
- max(x,(K-2)b) n y


+ (K-2)b
S(0 -b
max(0-,x-b)


P (z,y) d FCy+b-x-0)
n y


P (z,(K-2)b)[F((K-l)b-x-0)
n
F((K-2)b-x-0)] (4.2.10)


But P (z,y) = 1, for y > (K-l)b. Hence (4.2.10) becomes

(4.2.7), thus completing the proof.

Various attempts have been made to establish the stationary

distribution of W all without success. Since the stochastic kernal

P(y,x) does not satisfy the regularity conditions stated in Feller

(1966, Sec. VIII, 7), we are not even sure if W possesses a stationary

distribution.

As will be seen in the next section, the most important result

of this section is the Markov property of W established in Theorem 4.2.1.
n









4.3 Some Properties of the Time Between Balks


We are now ready to obtain solutions for the expected values of

the random variables M. and V. defined by (4.1.4) and (4.1.5), respec-

tively. In the previous chapters exact results were derived for these

expectations, but in this chapter we must be content to utilize unsolved

expressions for the expectations of interest. In order to reach our

objectives, we make use of the properties of the following quantities.

Let

S = nb u ... u n 1.,
n 1 n

and, for 0 5 y S (K-l)b, y a real number, let


r y y (K-2)b,
y =
y b, y > (K-2)b.

Define

N(y) = inf [n > JWn > (K-2)b; WO = y} (4.3.1)

(so that N(y) represents the number of arrivals until a balk occurs,

conditional on an initial amount y of waiting time in the system), and

M(y) = inf (n > 0Wfn = 0 or Wn > (K-2)b; W0 = y) (4.3.2)

(so that M(y) is the number of arrivals until a customer either enters

an empty queue or balks, conditional on an initial amount y of waiting

time in the system).

It is clear that M(y) has the equivalent representation

M(y) = inf (n > 01Sn -y or Sn > (K-2)b-y'l









and, hence, is the index at which the random walk [Sn; n = 1,2,3,...}

first leaves the interval (-y',(K-2)b-y'].

Finally, let J = J(y) be the random variable that represents

the number of customers who enter an empty queue prior to the first

person to balk, conditional on an initial amount y of waiting time in

the system. That is, J is the number of times the waiting time process

W (n 2 1) takes the value zero before it takes a value greater than (K-2)b
n
First we shall prove a few lemmas that lead to a theorem which

expresses C(N(y)) in terms of expectations and probabilities associated

with the random variables M(y), SM(y), M(0), and SM(0). Note that when

K= 2, M(0) = M(y) = 1 and SM(0) = S = b-ul.


Lemma 4.3.1

Pr[J = 0) = PrfSM(y) > (K-2)b-y'),

Pr[J = j) = Pr[SM(y) -y') Pr(SM(o) 0)j-1

SPr(S M() > (K-2)b], j 1 1.



Proof of Lemma 4.3.1

Define

L1 = M(y),

L. = inf (n > L._ n = 0 or W >(K-2)b; W0 = y], j a 2

(so that L. is the index of the j-th person to balk or enter an empty

queue). Keep in mind that L. is a function of y.
:1









Now, if j = 0,


Pr(J = 0) =


n=l
n=l


Pr(J = 0, N(y) = n)


PrfO < W 1 (K-2)b,...,0 < W n- (K-2)b,

W > (K-2)b|Wo = y}


= E Pr[0 < S1 + y < (K-2)b,...,O < Sn-1
n=l


+ y S (K-2)b,


S + y > (K-2)b)


= E Pr[M(y) = n, SM(y) + y' > (K-2)b)
n=l

= PrSM(y) > (K-2)b-y').
1,M(y)


For j a 1,


Pr[J = j]


= Pr[WL = 0, WL2 = O,...,WLj = 0,

WLj+I > (K-2)bJW0 = y}


= Pr(WLj+I > (K-2)bIWL. = 0} Pr(WLj


= 0 WLj_1


...Pr{WL2 = O WL = 0} Pr(WLi = o0WO = y},


the last equality following from the Markov nature of W But we have,
n
Pr(WL1 = oWO = y} = Pr[( = Y 0 = y



= IE PrWM(y) = 0, M(y) = njWO = y}
n=1


= ; Pr(SM(y) + y' 0,1


M(y) = n}


= Pr[SM(y) y',3


= E
n=1


= 0)


(4.3.3)


(4.3.4)












Pr(wLi+l = 0oWL.


= 0) = S Pr[(L L = n, WL = 0 WL
n=l 1+1 Li


= 0o


= E Pr0 < W L. (K-2)b,...,0 < WL.
n=l 1 L


s (K-2)b, W. .
I-


= 01WL
Li


= 0}


= E Pr(O < S 1 (K-2)b,...,0 < S
n o n=1
5 (k-2)b, Sn 01

CO
= E Pr(M(O) = n, SM(O) < 0)
n=1


= Pr(SM(O) 0o,


1 i a j-l.


(4.3.5)


Following a proof analogous to that used to obtain (4.3.5), we get


(4.3.6)


> (K-2)bIWL. = 0) = PrISM(0) > (K-2)b} .
3


Applying (4.3.4) through (4.3.6) to (4.3.3), we complete the proof.


Lemma 4.3.2


8(N(y)|J=o) =


(M(y)ISM(y) > (K-2)b-y'),


e(N(y)IJ=j) = e(M(y) IS y')


+ (j-1) t(M(O)|SM(0) < 0)


+ t(M(0)ISM(0) > (K-2)b),


and


Pr(WL.
3+l1


J 2t 1.








Proof of Lemma 4.3.2

Pr[N(y) = nIJ = 0] = Pr(N(y) = n, J = 0}/Pr[J = 0}


= Pr0 < S + y' S (K-2)b,...,0 < Sn


+ y' S (K-2)b,


S + y' > (K-2)b}/Pr[J = 0)
11


= PrM(y) = n, SM(y) + y > (K-2)b)/Pr[SM(y) + y


> (K-2)b)


= Pr[M(y) = nSMy) > (K-2)b-y'},

and therefore

&(N(y)IJ = 0) = (MS(y)JS M > (K-2)b-y').

For j 1, note that

e(N(y)IJ = j) = C(Lj+i = j)

= (L1 IJ = j) + PCL2-L 1I = j) + +

e(L. Lj J = j).

We now solve for the various terms in the last express ion of equa-

tion (4.3.7).


(4.3.7)


F(L1Ij = j)


n Pr(J = j, L1 = n)/Pr(J = j]


= 1
n=l


Sn Pr[w L=0,...,WLj=0,WL+I > (K-2)b,L1 = n|W0 = Y]
= S 1


n=l


Pr[J = j}


= Sn Pr(WLj+I> (K-2)bIWL. =0} Pr(WLj =O0WLj-1 =0]
n=l


...Pr(WL2 =OIWL1 =0}


Pr(WL1 =0, L1 = n|W0 = y)

Pr([ = j}


= S n Pr(WLI =0, L1 = nWO0 =y}/Pr[WL1 =1W0 =y)
n=1








= E n Pr(M(y) = niSM(y) + y' 9 0)
n=l

= 8(MCy)lSM~y) I y'). (4.3.8)


The second equality from the last follows from (4.3.3) and the next to

the last equality follows from (4.3.4). Using similar techniques as

those employed in deriving (4.3.8), we have

(Li+1 LiJ = j) = 8(M(0)ISM(o) r 0), 1 i j -1, (4.3.9)

and

e(Lj+. L J = j) =8(M(0)SM(0) >(K-2)b). (4.3.10)
j+l j M(0)

Applying (4.3.8) through (4.3.10) to (4.3.7), we complete the proof.

We are now ready to find the expression for e(N(y)).


Theorem 4.3.1

If K ; 2, then

Pr(S M y'}
e(N(y)) = e(M(y)) + e(M(0)) M(y)
Pr(S >(K-2)b)
M(0)

and, in particular, if K = 2, then

C(N(y) = 1 + [1 F(b+y'-0)]/F(b-0).


Proof of Theorem 4.3.1

For K 2 2, let A(y) be the event [SM(y) +y >(K-2)b) and A'(y)

the complement event (S (y) + y 0]. Further, let p(y) =Pr{A(y)}

and q(y) = Pr[A'(y)) = l-p(y). We have, by Lemmas 4.3.1 and 4.3.2, that










e(N(y)) = Z e(N(y)IJ = j) Pr4J = j}
j=0

= e(M(y)|A(y)) p(y)

+ e(M(y)IA'(y)) q(y) E q(0)j-1 p(0)
j=1

+ e(M(0)IA'(0)) q(y) E (j-1) q(0)j-1 p(0)
j=l

+ 8(M(0)IA(0)) q(y) E q(0)j-1 p(0)
j=l

= e(M(y)) + e(M(O)jA'(0)) q(y) [l/p(0O) i]

+ e(M(0)IA(0)) q(y)

= C(M(y)) + q(y) [P(M(o0)A'(0)) (q(0)/p(0))

+ eC(M(0)IA(0)) (p(0)/p(0))]

= 8(M(y)) + P(M(O)) q(y)/p(O),

and the first half of the theorem is proved.

For K = 2, we have immediately that

Pr(N(y) = 1) = Pr(W > 0OW0 = y)

= Pr(S1 + y > 0o

= Pr(ul < b + y')

= F(b+y'-O)

= Pr(J = 0) (4.3.11)

and

Pr(N(y) =n] =Pr[W =0,...,wn =O,W >0Wl = y)

= Pr(b-u +y" 5O,b-u 20,...,b-u n' 0, b-u > 0)

= [1 F(b+y'-0)] [1 F(b-0)]n-2 F(b-O)

= Pr(J = n-1], n > 2. (4.3.12)








From (4.3.11) and (4.3.12), 8(N(y)) for the case K =2 follows trivially,

thus completing the proof.

We now proceed to find an approximate expression for C(N(y))

when K >3. Since M(y) represents the index of first passage of the random

walk [S ; n = 1,2,3,...) out of the interval (-y',(K-2)b-y'], we take

y>0 (and hence y' >0) so that we can use Wald's approximation to yield

the following results for the random variables M(y) and S My. See,

for example, Ferguson (1967).

Let


C(9) = J e dF(u), (4.3.13)
o

and 80 be the non-zero solution (if it exists) of

exp (00b) C (0 ) = 1. (4.3.14)

We then have, for y > 0 and K 3,

y'/(K-2)b, t(u1) = b,
Pr[SM() >(K-2)b-y'] 1 exp(-e0y') (4.3.15)

exp(80 [(K-2)b-y' ]) -exp(-oy')

t(ul) / b,

and

8(M(y)) y'[(K-2)b-y']/Var(ul), C(ul) = b,

t(M(y)) (1/(b-8(ul)))[-y'(exp(90[(K-2)b-y']) 1)

+ [(K-2)b-y'](l exp(-9oy')))

(exp(9 [(K-2)b-y']) exp(-90y' ) -

e(u ) y b. (4.3.16)










By the use of Theorem 4.3.1, and equations (4.3.15) and

(4.3.16), an approximate expression for C(N(y)) may be obtained.

However, if y = 0, then the above approximations give

8(M(0)) = Pr[SM(0) > (K-2)b] 4 0.

Hence, the substitution of these quantities into the expression of

Theorem 4.3.1 yields 0/0, an undefined quantity. To circumvent this

difficulty, we write

8(M(a)) Pr[SM y'J
e(N(y)) = E(M(y)) + lir (M(a)) Pr SM(y)
a-0 Pr(SM(a) > (K-2)b-a'}

and substitute the approximations before taking the limit. We have after

simplification, for K 3,

e(N(y)) = [(K-2)b-y'][(K-2)b+y']/Var(ul), C(u) = b,

8(N(y)) A [(K-2)b-y']/(b (u))


(exp(e0(K-2)b) exp(C0y'))

a0(b (u))


e(u ) / b.


It will be shown in Theorem 4.3.3 that (4.3.17) can be used to put an

upper bound on e(V ). The next two lemmas and the theorem following

them help us to attain this goal, while giving insight into why we

have devoted much effort to obtain C(N(y)).

Let


Y. = WNj,
3 "


j 2 1,


(4.3.18)


(so that Y. is the amount of waiting time in the system immediately

prior to the j-th balk).


(4.3.17)








Lemma 4.3.3

The stochastic process ([Y; j = 1,2,3,...] is a time-homogeneous

Markov process on the continuous state space ((K-2)b, (K-1)b].


Proof of Lemma 4.3.3

By the definition of N. in Section 4.1, the state space is as
.J
described. For 1 S i < i < ... < i < j, by the Markov and time-

homogeneity properties of W we have

Pr[Yj+1 xj = y, Yim = m' Y =

= PrWNj+1 WNj = Y, WNim = m... WNil =

= Pr(WN2 x|WN1 = y]

= Pr{Y2 x x = y},

thus completing the proof.


Lemma 4.3.4

(a) The distribution of M. conditional on Y. = y

is the same as that of N(y) for j 1.

(b) The distribution of V. conditional on Y. = y
J3l 3
is the same as that of aN(y) a0 for j 1.

(c) min S(N(y)) g 8(M.) < max e(N(y)),
(K-2)b

j 2.








Proof of Lemma 4.3.4

By the Markov nature of W we have

PrfM+ = nIYj = y)

= Prw .+1 (K-2)b,..., WN.+
3 J


' (K-2)b,


WN.+n > (K-2)bIWN. = y)
j j
= Pr(W1 5 (K-2)b,..., W n_1 (K-2)b, Wn > (K-2)biWo = y}

= Pr[N(y) = n},

completing part (a) of the lemma.

Further, since ulu2,u3,..., are mutually independent and

identically distributed,


Pr(Vj+1Yj = y) = Pr(u N+1 t...+ U.j+
.+ p+1


SxWN.


= y]


= E Pr N.+ +'+ uN.+n x, M1 = jWN1 = Y
n=1 3 J J


= Prf a n -0 C x, N(y) = n)
n=1

=PrN(y) -0 ,


and the (b) part is proved.

Finally, from the (a) part of the lemma, we have, for j > 1,

8(Mj ) = e ([(N(Y.))}.


But, by Lemma 4.3.3, (K-2)b < Y. S (K-l)b, so that part (c) of the

lemma follows. The proof is now complete.

Note that Y1,Y2,Y3,..., are not identically distributed unless

Y7 has the same distribution as the stationary distribution of Y..
1J









Therefore, Lemma 4.3.4 implies that neither M2,M3,M4,..., nor

V2 ,V3,V,..., are identically distributed sequences of random variables

in general.


Theorem 4.3.2

(a) 8(N(y) C0) = 8G(Ny)) e(u ),

(b) 8(V.) = 8(M.) S(u ), j a 1,

with u1 given by (4.1.2).


Proof of Theorem 4.3.2
*
Let a = o a0 so that
n n 0
*
n = u1 +...+ u n 1.
n 1 n

The sequence on n C(u ) forms a martingale and t(a -n E (u)) = 0
n n
for n a 1. Let Bk be the C-field of events generated by (Ca,..., ),
L-k 1 "q
then the event [N(y) > k} C Bk and Bk c Bk+". Hence, N(y) is an

optional stopping rule and therefore has no effect on the martingale

property. See, for example, Feller (1966). We now have

(CNy) N(y) 8(ul)) = 0, which establishes part (a) of the theorem.

By Lemma 4.3.4 and part (a) above, we can write

8(Vj+) = 8 e,(Vj+l)}
j

((N(Y.) 0



Y 3

= ecu ) 8 (e(N(Y.))]
S ( j










Similarly, we have


(1 W (N(Wo) 0 = Ml) (

The proof is now complete.

IE one could obtain the exact value of C(M.), then the above

theorem implies 8(V.) could be easily found. However, the best we are

able to do is obtain an approximate upper bound for 8(V.) when K 3.
J
An exact upper (and lower) bound for S(V.) when K = 2 can be obtained

directly from Theorem 4.3.1. No approximate lower bound for 8(V.) in the

case K e 3 can be found since, when y = (K-l)b (or equivalently

y = (K-2)b), the approximation (4.3.17) yields a value of zero. We now

state formally the results that can be obtained. Since they hold for

all j 2 2, the steady state solution satisfies these bounds also.


Theorem 4.3.3

Let 90 satisfy (4.3.14).

If K = 2, then

S(ul)/F(b-0) 8e(V.) P 8(u ) [1 + I/F(b-0)], j 2.

If K 3, then, approximately,

P(V ) 5 S(ul) (2k-5) b2/Var(ul), (u) = b,

and

-(V.) ) b e(u )/(b-S(u1))

C(u1) (exp(e0(K-2)b) exp(90(K-3)b)

00(b e(u ))


C(u1) b, j 2 2.








Proof of Theorem 4.3.3

By Lemma 4.3.4(c), we need the maximum and minimum of t(N(y))

from Theorem 4.3.1 for y in the range (K-2)b < y S (K-1l)b, or equiva-

lently, (K-3)b < y' s (K-2)b to obtain bounds for e(M.). The exact

bounds for 8(N(y)) when K = 2 are taken directly from the expression

in the theorem, while the approximate results for K 2 3 are obtained

from (4.3.17). It is easily shown that the lower bounds are reached

when y = (K-2)b and the upper bounds when y = (K-3)b. By Theorem

4.3.2(b) we need only multiply these bounds by C(u ) to complete the

proof.

Finally, let F (*) denote the stationary distribution of Y..

That is, F (*) satisfies

(K-l)b
Fy (x) = (K- PrY2 x = y} dFy(y), (4.3.19)
(K-2)b

when (K-2)b< x (K-l)b. If Y1 has the distribution F (*), then it is

well known that Y. has the distribution F (*) for all j. See, for example,

Feller (1966). We then have, by Lemma 4.3.4, the following theorem.


Theorem 4.3.4

If Y1 has the distribution F (*) that satisfies (4.3.19), then

(a) M2,M3,M4,..., are identically distributed,

(b) V2,V3,V4,..., are identically distributed, and

(K-l)b
(c) e(M.) = e o(N(y)) dF (y), j > 2,
w (K-2)b

with e(N(y)) given by Theorem 4.3.1.













CHAPTER 5


THE INVENTORY PROBLEM: DISCRETE CASE



5.1 Definition of the Inventory System


We suppose there exists a subwarehouse, maintaining an inventory

of finite capacity S, that holds material (discrete) for future demand.

We assume the item-by-item demand for the stored objects occurs accord-

ing to the stochastic process [D(t); t -> 0) defined by

co
D(t) = E U(t T.) (5.1.1)
j=l J

with U(*) the unit step function at zero. It will be assumed that the

inter-demand times, 2 T T3 T T4 T 3..., for the items in

storage are mutually independent and that the distribution of T. -T .
J J-1
is given by

Pr(T. -T jl u) = G(u), u 0, j 2. (5.1.2)

In order to maintain a stock on hand, the subwarehouse places

an order for replacement items to a warehouse. It will be held that

items are so ordered in lots of integral size V(1 V
orders are placed at the times 1,c 2 C3..., with C. defined by

C. = inf (tID(t) = j V], j > 1. (5.1.3)


From the definition of D(t), we have a. = T. so that 02 02, c -0 ,

4 -a3',..., are mutually independent and a -aj-1 has distribution








Pr(O. -j._1 5 x) = G (x), x ; 0, j 2 2, (5.1.4)

where C (*) is the v-th convolution of G(-) with itself.

Let (S(t); t ; 0) be the stochastic process such that S(t)

represents the inventory or stock level in the subwarehouse at time t.

If we let (R(t); t L 0] be the stochastic process such that R(t) is

the number of orders filled by the warehouse in (0,t] for our sub-

warehouse of interest, then S(t) will be defined by

S(t) = S D(t) + VR(t). (5.1.5)

The above definition assumes that the inventory is initially full,

i.e., S(0) = S.

An order for replacement stock of lot size v, made at time C,

may be one of two types. We have a "regular" order provided that

S(9) > S v[S/V], where [x] means the integral part of x. In this case,

the time to fill an order (hereafter, the service time) is assumed to be

a random variable. The successive regular service times, denoted by

wl, 2,w3,..., are assumed to be mutually independent and independent of

the demand process D(t). The distribution function of w. is given by

Pr[w. S w) = H(w), w a 0, j : 1. (5.1.6)

We have an "emergency" order if S(0) = S v[S/V]. In this case, the

emergency service time is supposed instantaneous, or at least effec-

tively zero, so that S(O + 0) = S v[S/V] + u.

In other words, regular ordering procedures are used provided

that at the time we place such an order, there are at least v items in

the subwarehouse. If there are less than V items in the subwarehouse










when an order is placed, we utilize emergency measures to obtain the

lot of V items. Utilizing this ordering scheme, we avoid the disaster

of running completely out of stock in the subwarehouse. Figure 5.1

gives a typical realization of S(t).

The behavior of the warehouse in filling the regular orders is

important to a discussion of the inventory problem. It will be assumed

that the warehouse operates under one of two distinct systems. Under

the first system, the warehouse can handle only one order at a time, so

that successive orders, which arrive while an order is being filled,

form a queue and must wait to begin being processed or "served."

The orders are then processed by the warehouse according to a strict

rotation basis of "first come, first served." The warehouse just

described will be called the one-server warehouse. Under the second

system, an order begins processing as soon as it arrives in the ware-

house so that no order must wait for "service." A warehouse operating

under this procedure will be called an infinite-server warehouse. We

shall consider both one-server and infinite-server warehouses.

We now state the following formal definition of the concepts

discussed so far.


Definition 5.1.1

The ordering scheme (G,H,S,v,l) is a policy for maintaining the

level of inventory in a subwarehouse where:

(a) The capacity of the inventory is S.

(b) Item-by-item demand for objects in storage satisfies (5.1.1)

and the inter-demand times are mutually independent random

variables with distribution function G(.).





67





4"






b







II

__I0

11



SI
C










.4
ci,
b- m



P-4













IUD -^r 1
in n










(c) Lots of v items (i C V 2 S) are ordered at the times

given by (5.1.3).

(d) The orders are made to a one-server warehouse.

(e) Regular service times are mutually independent random

variables, are independent of the demand process, and

possess a distribution function H(').

(f) Instantaneous service occurs for orders placed when less

than v items remain in storage at the time an order is

placed.

If we change condition (d) of the definition to state that orders

are made to an infinite-server warehouse, we have the ordering scheme

(G,H,S,V,W).

Clearly, the cost of maintaining the inventory level in the

subwarehouse will be a function of V, the lot size ordered. The optimal

value of the lot size is defined herein to be that value of v which min-

imizes the cost. It has to be remembered, however, that frequently not

all values of V are available to us since orders to the warehouse may

have to be in multiples of ten, a dozen, a gross, or some other basic

unit. Our best v is that of finding the optimal attainable value of

V. In Section 5.3, a cost function is defined that utilizes reasonable

costs associated with maintaining the inventory level.

While searching for a minimum cost with respect to V, V may

take all values from 1 to S. Therefore, the distribution on regular

service times could quite possibly be a function of V, the lot size

ordered.









5.2 Relation of the Inventory System to Queues
with Balking


Recall from Section 5.1 that S(k ) is the stock level at the

time the k-th order is placed. S( k) tells us whether an order is

regular or emergency. Since, realistically, emergency orders have

large costs (more than the costs of regular orders), the value of S(k )

is of extreme importance in determining the cost of maintaining the

inventory level in the subwarehouse. A study of the properties of S(a )

can be facilitated by making the following observations. From equa-

tions (5.1.5) and (5.1.3), we have

S(Ck) = S D(Ck) + VR(O )

= S vk -R(C k)). (5.2.1)


Define the stochastic process (Q(t); t > 0} by

Q(t) = [D(t)/V] R(t), (5.2.2)

where [x] is the integral part of x, so that Q(t) represents the

number of unfilled orders, for our subwarehouse of interest, at time t.

From (5.2.1) and (5.2.2), we have

S(Ck) = S vQ(ak) (5.2.3)

so that a knowledge of Q(ak) gives us the value of S(k ). Therefore,

a study of the stochastic process (Q(t); t a 0) is needed. It will

be demonstrated in Theorem 5.2.1 that such a study has been carried out

for some special cases in Chapters 2 through 4.









For the cost function to be defined in Section 5.3, we will

make use of the following random variables. Define

N1 = inf [k > OlS(Ck ) = S v[S/v]),

N = inf k > N (lS(k ) = S v [S/v]), n -2 (5.2.4)

(so that N is the number of orders, regular and emergency, placed up

to and including the n-th emergency order), and

V1 = N1'

V. = C.N aN. i, j 2 (5.2.5)
J j j-l

(so that V is the time until the first emergency order is placed and

V. (j Q 2) is the time between the (j-l)-st and j-th emergency orders).


Theorem 5.2.1

For an ordering scheme (G,H,S,v,l) ( (G,H,S,v,m) ) we have the

following dualities.

(a) Q(t) is the number of people in the system at time t for

the queue G /H/1 (G /H/-) with balking at queues of

length rS/v] 1.

(b) Nk is the number of people who arrive in the system up to

and including the k-th person to balk in the queue G /H/1

(G /H/-) with balking at queues of length [S/v] 1.

(c) V is the time until the first balk and V. (j 2) is
13
the time between the (j-l)-st and j-th balks in the

queue G /H/1 (G /H/-) with balking at queues of

length [S/v] 1.









Proof of Theorem 5.2.1, Part (a)

By definition (5.2.2), we have

Q(t) = [D(t)/v] R(t).

Now [D(t)/v] has unit increases at the times 1,C 2'3,..., so that

Q(t) also has unit increases at these times. Hence, the order time

0. can be considered as the arrival time of a "customer" into the

warehouse.

R(t), by definition, is the number of orders filled by the

warehouse in (0,t]. The time it takes to fill a regular order is

w.. Since R(t) increases by a unit amount at the time an order is

filled, Q(t) decreases by a unit amount at that time. Therefore, w.

is the service time of a "customer" in the warehouse.

Finally, by the restriction of emergency orders and (5.1.5),

ak is such that

S(Ck +0) = S -v[S/v] +v when S(ok) = S-v [S/v],

if and only if

R(Crk 0) = R(a k) + 1.

Hence, from (5.2.2) and (5.2.3)

S(ak +0) = S -v[S/v] +v when S(Ok) = S -v[(S/v]

if and only if

Q(k +0) = [S/v]-1 when Q(k) = [S/v].

Therefore, a "customer" balks at the queue of length [S/v] 1.

By the assumptions placed on the ordering times and the service

times for the ordering scheme, and the discussion of a queue with

balking in Section 2.1, we complete part (a) of the proof.









Proof of Theorem 5.2.1, Parts (b) and (c)

Simply note by (5.2.3) that

N1 = inf k > OIQ(ak) = [S/v),

N = inf [k > Nn-1 Q(ak) = [S/v]), n 2 2.

By part (a) of the theorem and definitions in Section 2.1, we complete

the proof.



5.3 The Cost Function C(v)


For the inventory problem discussed in Section 5.1, consider the

following costs associated with running the ordering scheme (G,H,S,v,1)

or (G,H.S,v,-):

CO: The cost of placing an order,

C : The par unit cost of the commodity, and

C2: A penalty cost for instantaneous delivery of an emergency

order that is possibly a function of v, the lot size

ordered.

Define the stochastic processes (N(t); t L 0) and [M(t); t L 0)

by

N(t) = E U(t o.) (5.3.1)
j=l

and

M(t) = E U(t aN.) (5.3.2)
j=l

with 0U. given by (5.1.3) and N. by (5.2.4). Then N(t) is the total

number of orders placed in the interval [0,t] (regular and emergency)








and M(t) is the number of these that are emergency orders. Definitions

(5.3.1) and (5.3.2) are not the same random variables as defined by

(4.3.1) and (4.3.2), respectively.

Let

C(v;t) = (C0 + C1V) N(t) + C2M(t) (5.3.3)

so that C(V;t) is the total cost of ordering lots of size v during the

time interval [0O,t]. Since N(t) and M(t) are random quantities, we

shall concern ourselves with the expected total cost, e[C(v;t)], during

the interval [0,t]. Further, the v which minimizes [(C(v;t0)} for

a fixed to will minimize 8[C(v;t0 )}/t0, so that we shall restrict our-

selves to the latter quantity. Finally, since a subwarehouse that

maintains an inventory is usually established with the thought of oper-

ating for a long period of time, we choose to minimize the expected

total cost of ordering per unit time in the long run, a quantity that is

mathematically tractable. That is, we want the value of v (u = 1,2,...,

or S) that minimizes

C(v) = lim t(c(v;t)/t)
t-c

= (c + C v) lim e(N(t)/t)
t-m

+ C lim [(M(t)/t]. (5.3.4)
t-* O

But 2 2- o3- 2, a 3' ..., are mutually independent

and identically distributed random variables. Therefore, N(t) is

a (delayed) renewal process and, by the Elementary Renewal Theorem,

Prabhu (1965a), we have









Lemma 5.3.1

lim e(N(t)/t} = 1/(C2 -a )
t-4m

= 1/A (5.3.5)

with

S= J u dG(u). (5.3.6)


A similar closed form for lim 8PM(t)/t) does not exist for
.t-4 0O

an arbitrary ordering scheme (G,H,S,v,l) or (G,H,S,v,w). The reason

for this is that M(t) is a function of the random variables N. whose

properties depend heavily on the distribution of service times and

whether we have a one-server or infinite-server warehouse.

In the following sections, we consider reasonable candidates

for the distribution function, H('), on the service times and both one-

server and infinite-server warehouses. For the cases discussed in these

sections, a "closed" form for lim u8[M(t)/t) will be obtained.
t-m

At this juncture, it should be pointed out that when [S/v] = 1,

Theorem 5.2.1 gives Nk = k. Therefore, M(t) = N(t) and

lim e(M(t)/t) = 1/v. For the future we shall therefore concern ourselves
t- o
with the cases [S/v] > 2.









5.4 Solution of C(v) Using the Queue GI/M/i
with Balking


General Demand Function

In this section, we develop the solution of the cost function

C(v) for the ordering scheme (G,M,S,v,I).

The subwarehouse places orders of lot size v with a one-server

warehouse, so that orders arrive at the warehouse, form a queue, and

are processed on a strict "first come, first served" basis. We are

leaving the item-by-item demand function D(t) general, but we are

requiring that regular orders have service times with a Markov, or

negative exponential, distribution. Therefore, the distribution of w_j is

Pr{w_j ≤ w} = H(w) = 1 - e^{-λw},   w ≥ 0, j ≥ 1,             (5.4.1)

where 1/λ is the mean service time to process an order.

It is reasonable that the time to fill an order, w_j, should

depend in some manner on ν, the lot size of the order placed. We may

allow for this by permitting λ to be a function of ν. Typically, we

may have λ = ω/ν, ω a constant. In order to find the value of ν which

minimizes C(ν) of Section 5.3, we prove the following theorem.


Theorem 5.4.1

For the ordering scheme (G,M,S,ν,1), the cost function C(ν) of

Section 5.3 has the form

C(ν) = (C0 + νC1)/(ντ) + C2/(ντ β(ν))                         (5.4.2)

with

β(ν) = C_Z^{([S/ν]-2)} { 1/([φ(λ(1-Z))]^ν - Z) },   [S/ν] ≥ 2,

     = 1,                                           [S/ν] = 1,

where C_Z^{(n)}{ · } denotes the coefficient of Z^n in the power series

expansion of the quantity in braces,

φ(θ) = ∫_0^∞ e^{-θu} dG(u),

and

τ = ∫_0^∞ u dG(u).


Proof of Theorem 5.4.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete

the proof. Now

M(t) = Σ_{j=1}^{∞} U(t - α_{N_j}).

Let V_1 = α_{N_1} and V_j = α_{N_j} - α_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c),

α_{N_j} is the time until the j-th balk occurs in the queue G_ν/M/1 with

balking at queues of length [S/ν] - 1. Hence, we can apply the results

of Chapter 2 with

K = [S/ν],

F(x) = G_ν(x), and

the Laplace-Stieltjes transform of F equal to [φ(θ)]^ν.


From Theorem 2.3.2, we therefore have that M(t) is a delayed renewal

process. By the Elementary Renewal Theorem, Prabhu (1965a), we have









lim_{t→∞} E{M(t)/t} = 1/E(V_2)

                    = 1/(ντ β(ν))

with β(ν) from Theorem 2.2.2. The proof is now complete.

Writing (5.4.2) out, we see that C(ν) = C1/τ + (C0/τ)(1/ν)[1 + (C2/C0)/β(ν)];

since C1/τ and C0/τ do not depend on ν, C(ν) is minimized when

h(ν) = (1/ν)[1 + (C2/C0)/β(ν)]                                (5.4.3)

is minimized. Therefore, h(ν) can be considered as the cost function

of interest.


Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse

occurs according to a Poisson process with intensity μ. Then

Pr{D(t) = n} = e^{-μt} (μt)^n/n!,   n ≥ 0,

and the inter-demand times have the distribution

Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-μx},   x ≥ 0, j ≥ 2.   (5.4.4)

That is, we have the ordering scheme (M,M,S,ν,1).

To apply Theorem 5.4.1, we need a workable expression for

β(ν).

Define

ρ = μ/λ

and

p = ρ/(1 + ρ),

q = 1/(1 + ρ).

From (5.4.4), we have

φ(θ) = (1 + θ/μ)^{-1}.









Hence,

[φ(λ(1-Z))]^ν = (1 + λ(1-Z)/μ)^{-ν}

              = (1 + (1-Z)/ρ)^{-ν}

              = p^ν/(1 - qZ)^ν.

Therefore,

β(ν) = C_Z^{([S/ν]-2)} { 1/[p^ν/(1-qZ)^ν - Z] }

     = C_Z^{([S/ν]-2)} { (1-qZ)^ν/[p^ν - Z(1-qZ)^ν] }

     = C_Z^{([S/ν]-2)} { Σ_{j=0}^{∞} Z^j [(1-qZ)/p]^{(j+1)ν} },   |Z| sufficiently small,

     = Σ_{j=0}^{[S/ν]-2} C_Z^{([S/ν]-j-2)} { [(1-qZ)/p]^{(j+1)ν} }

     = Σ_{j=0}^{[S/ν]-2} \binom{(j+1)ν}{[S/ν]-j-2} (-q)^{[S/ν]-2-j} p^{-(j+1)ν}.


As an illustration, Table 5.1 gives the values of h(ν) of

equation (5.4.3) for C2/C0 = 10.0 and various values of ρ and S.

It is to be noted that considerable savings can be effected by the

proper choice of ν, the lot size ordered to replenish the stock.
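
The closed form just obtained is simple to evaluate. The Python sketch below

is offered only as a numerical check; the helper names are ours, and the value

C2/C0 = 10.0 is the one used in Table 5.1. With S = 9, ν = 3, ρ = 0.8 it

returns h(3) ≈ 0.3634, and with S = 8, ν = 4, ρ = 1.2 it returns

h(4) ≈ 0.4713, in agreement with the corresponding table entries.

from math import comb

def beta_mm1(S, nu, rho):
    """beta(nu) for Poisson demand (rho = mu/lambda) and a one-server warehouse."""
    K = S // nu                      # K = [S/nu]
    if K == 1:
        return 1.0
    p, q = rho / (1.0 + rho), 1.0 / (1.0 + rho)
    m = K - 2
    return sum(comb((j + 1) * nu, m - j) * (-q) ** (m - j) * p ** (-(j + 1) * nu)
               for j in range(m + 1))

def h(S, nu, rho, c_ratio=10.0):
    """h(nu) of (5.4.3) with C2/C0 = c_ratio."""
    return (1.0 / nu) * (1.0 + c_ratio / beta_mm1(S, nu, rho))

print(round(h(9, 3, 0.8), 4))   # 0.3634, the S = 9, rho = 0.8, nu = 3 entry of Table 5.1
print(round(h(8, 4, 1.2), 4))   # 0.4713, the S = 8, rho = 1.2, nu = 4 entry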






















                              TABLE 5.1

          VALUES OF h(ν) FOR THE ORDERING SCHEME (M,M,S,ν,1)
                            C2/C0 = 10.0


      S        8                  9                  10

   ν    ρ   0.8      1.2      0.8      1.2      0.8      1.2

   1      1.5040   3.1717   1.3876   3.0673   1.3007   2.9877
   2      0.5672   0.7759   0.5672   0.7759   0.5184   0.6313
   3      0.6260   0.8743   0.3634   0.4461*  0.3634   0.4461
   4      0.3475*  0.4713*  0.3475*  0.4713   0.3475   0.4713
   5      2.2000   2.2000   2.2000   2.2000   0.2347*  0.2966*
   6      1.8333   1.8333   1.8333   1.8333   1.8333   1.8333
   7      1.5714   1.5714   1.5714   1.5714   1.5714   1.5714
   8      1.3750   1.3750   1.3750   1.3750   1.3750   1.3750
   9                        1.2222   1.2222   1.2222   1.2222
  10                                          1.1000   1.1000

  * Denotes optimal value.









5.5 Solution of C(ν) Using the Queue GI/M/∞
with Balking


General Demand Function

In this section, we develop the solution of the cost function

C(ν) for the ordering scheme (G,M,S,ν,∞).

The subwarehouse places orders of lot size ν with an infinite-

server warehouse, so that when an order is received at the warehouse,

processing begins immediately. Once again, we hold the item-by-item

demand function D(t) general, but we require that regular orders have

service times with the Markov, or negative exponential, distribution.

Therefore, the distribution of w_j is

Pr{w_j ≤ w} = H(w) = 1 - e^{-λw},   w ≥ 0, j ≥ 1,             (5.5.1)

where 1/λ is the mean service time to process an order.

As before, it is reasonable that the time to fill an order

should depend in some manner on ν, the lot size ordered. We may allow

for this by permitting λ to be a function of ν. Typically, we may have

λ = ω/ν, ω a constant. In order to find the optimal value of ν, we

prove the following theorem.

Theorem 5.5.1

For the ordering scheme (G,M,S,ν,∞), the cost function C(ν)

of Section 5.3 has the form

C(ν) = (C0 + νC1)/(ντ) + C2/(ντ β(ν))                         (5.5.2)

with

β(ν) = m([S/ν],ν),   [S/ν] ≥ 2,

     = 1,            [S/ν] = 1,

where m(2,ν),...,m([S/ν],ν) satisfy the relationships

m(2,ν) = [φ(λ)]^{-ν},

m(n+1,ν) = [φ(nλ)]^{-ν} [1 + Σ_{k=2}^{n} b_ν(n,n-k+2) Σ_{j=k}^{n} m(j,ν)],

                                   n = 2,3,..., [S/ν] - 1,

with

φ(θ) = ∫_0^∞ e^{-θu} dG(u),

τ = ∫_0^∞ u dG(u),

and

b_ν(n,n-k+2) = \binom{n}{k-2} Σ_{j=0}^{n-k+2} \binom{n-k+2}{j} (-1)^j [φ(λ(j+k-2))]^ν.


Proof of Theorem 5.5.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete

the proof.

Recall that

M(t) = Σ_{j=1}^{∞} U(t - α_{N_j}).

Let V_1 = α_{N_1} and V_j = α_{N_j} - α_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c),

α_{N_j} is the time until the j-th balk occurs in the queue G_ν/M/∞ with

balking at queues of length [S/ν] - 1. The results of Chapter 3 apply

here, with

K = [S/ν],

F(x) = G_ν(x), and

the Laplace-Stieltjes transform of F equal to [φ(θ)]^ν.









From Theorem 3.3.2, we therefore have that M(t) is a delayed renewal

process. By the Elementary Renewal Theorem, Prabhu (1965a),

lim_{t→∞} E{M(t)/t} = 1/(ντ β(ν)).

The quantities used in calculating β(ν) follow from equations (3.2.6)

and (3.2.8). The proof is now complete.

Writing (5.5.2) out, we see that C(ν) is minimized when

h_1(ν) = (1/ν)[1 + (C2/C0)/β(ν)]                              (5.5.3)

is minimized. Therefore, h_1(ν) can be considered as the cost function

of interest.


Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse

occurs according to a Poisson process with intensity μ. Then

Pr{D(t) = n} = e^{-μt} (μt)^n/n!,   n ≥ 0,

and the inter-demand times have the distribution

Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-μx},   x ≥ 0, j ≥ 2.   (5.5.4)

That is, we have the ordering scheme (M,M,S,ν,∞).

Define ρ = μ/λ. From (5.5.4), we have

φ(θ) = (1 + θ/μ)^{-1}.

Hence, β(ν) of Theorem 5.5.1 becomes a function of

[φ(nλ)]^ν = (1 + nλ/μ)^{-ν} = (1 + n/ρ)^{-ν},

which is a function only of ρ and not μ and λ separately.
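
The recursion of Theorem 5.5.1 is easy to evaluate once [φ(nλ)]^ν is known.

The Python sketch below is only a numerical illustration: the helper names

are ours, and the parameter values are those of Table 5.2. For S = 9, ν = 3,

ρ = 0.8 it gives h_1(3) ≈ 0.3406, and for S = 8, ν = 2, ρ = 0.8 it gives

h_1(2) ≈ 0.5044, matching the corresponding table entries.

from math import comb

def phi_pow(n, nu, rho):
    """[phi(n*lambda)]^nu = (1 + n/rho)^(-nu) for exponential inter-demand times."""
    return (1.0 + n / rho) ** (-nu)

def b_coef(n, k, nu, rho):
    """b_nu(n, n-k+2) of Theorem 5.5.1."""
    r = n - k + 2
    return comb(n, k - 2) * sum(comb(r, j) * (-1) ** j * phi_pow(j + k - 2, nu, rho)
                                for j in range(r + 1))

def beta_mminf(S, nu, rho):
    """beta(nu) = m([S/nu], nu) for the scheme (M,M,S,nu,infinity)."""
    K = S // nu                                      # K = [S/nu]
    if K == 1:
        return 1.0
    m = {2: 1.0 / phi_pow(1, nu, rho)}               # m(2,nu) = [phi(lambda)]^(-nu)
    for n in range(2, K):
        total = sum(b_coef(n, k, nu, rho) * sum(m[j] for j in range(k, n + 1))
                    for k in range(2, n + 1))
        m[n + 1] = (1.0 + total) / phi_pow(n, nu, rho)
    return m[K]

def h1(S, nu, rho, c_ratio=10.0):
    """h_1(nu) of (5.5.3) with C2/C0 = c_ratio."""
    return (1.0 / nu) * (1.0 + c_ratio / beta_mminf(S, nu, rho))

print(round(h1(9, 3, 0.8), 4))   # 0.3406, the S = 9, rho = 0.8, nu = 3 entry of Table 5.2
print(round(h1(8, 2, 0.8), 4))   # 0.5044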








As an illustration, Table 5.2 gives the values of h_1(ν) of

equation (5.5.3) for C2/C0 = 10.0 and various values of ρ and S.

Once again, considerable savings can be effected by the proper choice

of ν, the lot size ordered.


Comparison of Sections 5.4 and 5.5

We note that Sections 5.4 and 5.5 both deal with inventories

subject to general demand and negative exponential regular service

times. Whereas the results of Section 5.4 are based on orders being

placed with a one-server warehouse, Section 5.5 assumes we have an

infinite-server warehouse so that processing of an order begins as soon

as it is received.

These two sections represent the extreme cases in terms of the

number of processors available in a warehouse to process an order when

we have general item-by-item demand and negative exponential service

times. Therefore, we can calculate the optimal value of ν, the lot

size ordered, for the best possible situation (processing begins imme-

diately when an order is placed, Section 5.5) and the worst possible

situation (an order must wait in turn before processing begins,

Section 5.4).

It is worthwhile to compare Tables 5.1 and 5.2 for the case

of Poisson item-by-item demand.


















                              TABLE 5.2

         VALUES OF h_1(ν) FOR THE ORDERING SCHEME (M,M,S,ν,∞)
                            C2/C0 = 10.0


      S        8                  9                  10

   ν    ρ   0.8      1.2      0.8      1.2      0.8      1.2

   1      1.0002   1.0021   1.0000   1.0003   1.0000   1.0000
   2      0.5044   0.5234   0.5044   0.5234   0.5001   0.5014
   3      0.6260   0.8743   0.3406*  0.3654*  0.3406   0.3654
   4      0.3475*  0.4713*  0.3475   0.4713   0.3475   0.4713
   5      2.2000   2.2000   2.2000   2.2000   0.2347*  0.2966*
   6      1.8333   1.8333   1.8333   1.8333   1.8333   1.8333
   7      1.5714   1.5714   1.5714   1.5714   1.5714   1.5714
   8      1.3750   1.3750   1.3750   1.3750   1.3750   1.3750
   9                        1.2222   1.2222   1.2222   1.2222
  10                                          1.1000   1.1000

  * Denotes optimal value.









5.6 Solution of C(ν) Using the Queue GI/D/1
with Balking


In this section, we consider the solution of C(v) for the

ordering scheme (G,D,S,ν,1).

The subwarehouse places orders of lot size ν with a one-server

warehouse so that orders arrive at the warehouse, form a queue, and are

processed on a strict "first come, first served" basis. We allow the

item-by-item demand function D(t) to be general, but we require that

regular orders have a constant service time b. It may be that b is

a function of ν, the lot size ordered. Typically, we may allow for this

by permitting b = νd, d a constant.

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete

the solution of the cost function C(ν) defined in Section 5.3.

Recall that

M(t) = Σ_{j=1}^{∞} U(t - α_{N_j}).

From Theorem 5.2.1(c), α_{N_j} is the time until the j-th balk occurs in the

queue G_ν/D/1 with balking at queues of length [S/ν] - 1. Hence, the

results of Chapter 4 apply here with K = [S/ν] and the distribution on

inter-arrival times F(x) = G_ν(x).

Except for the trivial case [S/ν] = 1 when M(t) = N(t),

it was noted in Section 4.3 that the random variables

V_j = α_{N_j} - α_{N_{j-1}},   j ≥ 2,

do not have the property of identical distribution as was the case when

the service times had a negative exponential distribution. Therefore,










M(t) is not a renewal process and E{M(t)/t} does not possess a simple

limit. Since we are interested in the long run behavior of the inventory,

we shall be content to utilize the steady state properties of the system

and to redefine M(t) so that a suitable limit for E{M(t)/t} can be

obtained.

Recall the definition of α_n from Section 5.1. It is clear that

α_n is a function of ν, the lot size ordered. Define

S_n(ν) = nb - α_n,   n ≥ 1,

and

M*(y;ν) = inf {n > 0 | S_n(ν) ≤ -y or S_n(ν) > ([S/ν] - 2)b - y},

                                                    [S/ν] ≥ 3.

Furthermore, let {Y_j(ν); j = 1,2,3,...} be the Markov process defined

by (4.3.18). Y_j is written as Y_j(ν) to emphasize the dependence of the

process on ν, the lot size ordered, when the quantities K and F(x) of

Chapter 4 are [S/ν] and G_ν(x), respectively.

Denote by F_Y(y,ν) the stationary distribution of Y_j(ν), so

that F_Y(y,ν) satisfies

F_Y(y,ν) = Pr{Y_2(ν) ≤ y} = ∫_{([S/ν]-2)b}^{([S/ν]-1)b} Pr{Y_2(ν) ≤ y | Y_1(ν) = x} dF_Y(x,ν),

                              ([S/ν] - 2)b < y ≤ ([S/ν] - 1)b.









Finally, let

β(ν) = 1,   [S/ν] = 1,

     = 1 + ∫ {[1 - G_ν(y-0)]/G_ν(b-0)} dF_Y(y,ν),   [S/ν] = 2,

     = ∫ E(M*(y-b;ν)) dF_Y(y,ν)

          + [E(M*(0;ν))/Pr{S_{M*(0;ν)}(ν) > ([S/ν] - 2)b}]

               ∫ Pr{S_{M*(y-b;ν)}(ν) ≤ -y + b} dF_Y(y,ν),   [S/ν] ≥ 3.   (5.6.1)


If Y_1(ν) possesses the stationary distribution F_Y(y,ν), then

by Theorem 4.3.4, β(ν) is one plus the expected number of orders of lot

size ν that occur between any two emergency orders when the inventory

system is in the steady state.

Let V_1*, V_2*, V_3*,..., be a sequence of mutually independent

random variables such that the distribution of V_j* is

Pr{V_j* ≤ x} = ∫ Pr{V_{j+1} ≤ x | Y_1(ν) = y} dF_Y(y,ν),   j ≥ 1.

By Theorem 4.3.4, V_1*, V_2*, V_3*,..., are identically distributed and repre-

sent the time between successive emergency deliveries when the system is

in the steady state. For the purposes of this section, we redefine

M(t) as

M(t) = Σ_{j=1}^{∞} U(t - V_1* - ... - V_j*),                  (5.6.2)

so that M(t) is the number of emergency deliveries during [0,t] when

the inventory system is in the steady state.










Theorem 5.6.1

For the ordering scheme (G,D,S,ν,1), the cost function C(ν)

of Section 5.3 (with M(t) redefined by (5.6.2)) has the form

C(ν) = (C0 + C1ν)/(ντ) + C2/(ντ β(ν))                         (5.6.3)

with

τ = ∫_0^∞ u dG(u)

and β(ν) given by (5.6.1).


Proof of Theorem 5.6.1

By the Elementary Renewal Theorem, Prabhu (1965a),

lim_{t→∞} E{M(t)/t} = 1/E(V_1*)

                    = 1/(ντ β(ν)),

the last equality following from Theorem 4.3.2 and the definition of V_j*.

Applying Lemma 5.3.1, we complete the proof.

Writing (5.6.3) out, we see that C(ν) is minimized when

h_2(ν) = (1/ν)[1 + (C2/C0)/β(ν)]                              (5.6.4)

is minimized. h_2(ν) may therefore be taken as the cost function of

interest.

In many cases it may be difficult to obtain β(ν) when [S/ν] ≥ 3.

Theorem 4.3.3 then gives a bound that may be used to obtain an approx-

imate lower bound for C(ν).











CHAPTER 6


THE INVENTORY PROBLEM: CONTINUOUS CASE



6.1 First Passage Times of Non-Negative,
Continuous Stochastic Processes with
Infinitely Divisible Distributions


In this chapter, we wish to consider the ordering scheme for

a subwarehouse that maintains an inventory of fluid material. We assume

the demand for the fluid in storage occurs continuously. It is reason-

able to further assume that the demand during any interval of time is

independent of the demand during any other nonoverlapping interval of

time and that the probability law for the demand during any interval

[s, s+t] is functionally dependent only on the length, t, of the inter-

val. Hence, if {D(t); t ≥ 0} is the stochastic process such that

D(t) represents the demand for the fluid in storage during the time

interval [0, t], then we are assuming that {D(t); t ≥ 0} is a non-

negative, continuous stochastic process with stationary, independent,

nonoverlapping increments. By Theorem 2 of Feller (1966, p. 294), this

is equivalent to stating that {D(t); t ≥ 0} is a non-negative, contin-

uous stochastic process whose distribution is infinitely divisible.

The distribution function of D(t) will be denoted by

Pr{D(t) ≤ x} = ∫_0^x g(y,t) dy,   x ≥ 0, t ≥ 0,               (6.1.1)

with g(·,t) a density on [0,∞) for each t ≥ 0.









In the next section, a complete description of the ordering

scheme used to replenish the inventory will be given. To solve this

inventory problem, we need the probability law for the stochastic

process {T(u); u ≥ 0} defined by

T(u) = inf {t | D(t) ≥ u}                                     (6.1.2)

(so that T(u) is the first passage time of D(t) into the interval

[u,∞)). The rest of the current section will be devoted to properties

of T(u).


Theorem 6.1.1

Let {T(u); u ≥ 0} be the stochastic process defined by (6.1.2);

then

(a) T(u) has stationary, independent, nonoverlapping increments,

and

(b) the distribution of T(u) is

Pr{T(u) ≤ t} = ∫_u^∞ g(y,t) dy,   t ≥ 0, u > 0.

Proof of Theorem 6.1.1

Since T(u) ≤ t if and only if D(t) ≥ u,

Pr{T(u) ≤ t} = Pr{D(t) ≥ u}

             = ∫_u^∞ g(y,t) dy,

completing part (b) of the proof.

To prove part (a), it is sufficient to show

Pr{T(y) - T(w) ≤ s | T(w) = r} = Pr{T(y-w) ≤ s}               (6.1.3)

for all s > 0, r ≥ 0, and 0 ≤ w < y.








First we shall calculate Pr{T(y) ≤ t | T(w) = r} for w < y and

r < t. Now, Pr{T(w) = r} = 0 so that Pr{T(y) ≤ t | T(w) = r} involves

conditioning on an event of probability zero. Hence, the quantity

Pr{T(y) ≤ t, T(w) = r}/Pr{T(w) = r} is undefined and therefore can not

be used to define Pr{T(y) ≤ t | T(w) = r}. Cramer and Leadbetter (1967,

pp. 219-222) give two plausible definitions for Pr{T(y) ≤ t | T(w) = r}

which are known as the vertical-window (v.w.) and horizontal-window (h.w.)

conditional probabilities. These conditional probabilities are defined by

Pr{T(y) ≤ t | T(w) = r}_{v.w.}

     = lim_{δ→0} Pr{T(y) ≤ t | r ≤ T(w) ≤ r + δ}              (6.1.4)

and

Pr{T(y) ≤ t | T(w) = r}_{h.w.}

     = lim_{δ→0} Pr{T(y) ≤ t | T(τ) = r,

                    for some τ ∈ [w, w+δ]},                   (6.1.5)

respectively. Both (6.1.4) and (6.1.5) define Pr{T(y) ≤ t | T(w) = r}

in terms of a limit of conditional probabilities which involve condi-

tioning on events of non-zero probability. Equation (6.1.4) is the

usual definition of Pr{T(y) ≤ t | T(w) = r}. However, in our particular

case (6.1.4) leads to an undefined quantity. Therefore, we choose to

use the horizontal-window definition given by (6.1.5). We have, for

0 < δ < y - w,









Pr{T(y) ≤ t | T(w) = r}_{h.w.}

  = lim_{δ→0} Pr{T(y) ≤ t, T(τ) = r for some τ ∈ [w,w+δ]} / Pr{T(τ) = r for some τ ∈ [w,w+δ]}

  = lim_{δ→0} Pr{D(t) ≥ y, w ≤ D(r) < w+δ} / Pr{w ≤ D(r) < w+δ}

  = lim_{δ→0} [ ∫_y^∞ ∫_w^{w+δ} g(z-x, t-r) g(x,r) dx dz ] / [ ∫_w^{w+δ} g(x,r) dx ]

  = ∫_y^∞ g(z-w, t-r) dz [g(w,r)/g(w,r)]

  = ∫_{y-w}^∞ g(z, t-r) dz

  = Pr{D(t-r) ≥ y-w}

  = Pr{T(y-w) ≤ t-r}.                                         (6.1.6)

Let t = r+s in equation (6.1.6); then

Pr{T(y) - T(w) ≤ s | T(w) = r}_{h.w.}

     = Pr{T(y) ≤ s+r | T(w) = r}

     = Pr{T(y-w) ≤ s},

thus completing the proof.
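
The relation exploited in part (b), namely that T(u) ≤ t precisely when

D(t) ≥ u for a nondecreasing process, can be illustrated by simulation. The

Python sketch below uses the gamma process treated later in this section

(equation (6.1.12), shape t and scale β); the grid step, sample sizes,

parameter values, and function names are our own illustrative choices.

import random

def first_passage_prob(u, t, beta, dt=0.01, paths=4000, seed=2):
    """Estimate Pr{T(u) <= t} by simulating D on a grid with Gamma(dt, beta) increments."""
    rng = random.Random(seed)
    steps = int(round(t / dt))
    hits = 0
    for _ in range(paths):
        d = 0.0
        for _ in range(steps):
            d += rng.gammavariate(dt, beta)    # independent, stationary increment
            if d >= u:                         # first passage into [u, infinity)
                hits += 1
                break
    return hits / paths

def tail_prob(u, t, beta, draws=100000, seed=3):
    """Estimate Pr{D(t) >= u} by drawing D(t) ~ Gamma(shape=t, scale=beta) directly."""
    rng = random.Random(seed)
    return sum(rng.gammavariate(t, beta) >= u for _ in range(draws)) / draws

u, t, beta = 2.0, 3.0, 1.0
print(first_passage_prob(u, t, beta))   # ~ Pr{T(u) <= t}
print(tail_prob(u, t, beta))            # ~ Pr{D(t) >= u}; the two estimates agree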

The above theorem implies that T(u) is also non-negative with

an infinitely divisible distribution. Hence, the Laplace transforms of

T(u) and D(t) have the forms








E(e^{-θT(u)}) = e^{-w(θ)u},   θ > 0,                          (6.1.7)

and

E(e^{-θD(t)}) = e^{-v(θ)t},   θ > 0,                          (6.1.8)

respectively, such that w(θ) and v(θ) are positive for θ > 0 and

possess completely monotone derivatives. See, for example, Feller (1966).

An attempt was made to find the relation between the Laplace

transforms of (6.1.7) and (6.1.8) by utilizing the following technique.

We have

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{T(y) ≤ t} dt dy

     = ∫_0^∞ e^{-φy} e^{-w(θ)y}/θ dy

     = 1/[θ(φ + w(θ))],                                       (6.1.9)

and

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dy dt

     = ∫_0^∞ e^{-θt} (1 - e^{-v(φ)t})/φ dt

     = 1/(φθ) - 1/[φ(θ + v(φ))]

     = v(φ)/[φθ(θ + v(φ))].                                   (6.1.10)

Now Pr{D(t) ≥ y} = Pr{T(y) ≤ t}, so that if

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dy dt

     = ∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dt dy         (6.1.11)









(i.e., if we can change the order of integration), then (6.1.9) and

(6.1.10) must be identical. Hence, we have

1/[θ(φ + w(θ))] = v(φ)/[φθ(θ + v(φ))]

or

v(φ) w(θ) = θφ,

which implies

w(θ) = cθ

and

v(θ) = θ/c

for some c > 0. We must have, by the uniqueness of Laplace transforms,

that

Pr{D(t) = t/c} = Pr{T(y) = cy} = 1.

It now seems that the only non-negative, continuous process D(t) with

stationary, independent, nonoverlapping increments is the trivial deter-

ministic model D(t) = t/c. However, we know that if D(t) has the gamma

density

g(y,t) = e^{-y/β} y^{t-1}/[Γ(t) β^t],   y ≥ 0,                (6.1.12)

then D(t) is non-negative; continuous; has stationary, independent,

nonoverlapping increments; and is clearly not deterministic. The point

we make is that (6.1.11) is true only for the trivial case D(t) = t/c

and, hence, the order of integration can not be changed for any other

choice of D(t). So far we have been unable to find a suitable method

for obtaining the Laplace transform of T(u).
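
Although a workable form of w(θ) remains open, the exponent v(θ) of (6.1.8)

is available for the gamma example: for the density (6.1.12) one has

E(e^{-θD(t)}) = (1 + βθ)^{-t}, so that v(θ) = log(1 + βθ), which is positive

for θ > 0 and has a completely monotone derivative. The short Python sketch

below (parameter values and names are illustrative assumptions) checks this

by Monte Carlo.

import random
from math import exp, log

def lt_mc(theta, t, beta, draws=100000, seed=4):
    """Monte Carlo estimate of E[exp(-theta * D(t))] for D(t) ~ Gamma(shape=t, scale=beta)."""
    rng = random.Random(seed)
    return sum(exp(-theta * rng.gammavariate(t, beta)) for _ in range(draws)) / draws

theta, t, beta = 0.7, 2.5, 1.3
print(lt_mc(theta, t, beta))                  # simulated E[e^{-theta D(t)}]
print(exp(-t * log(1.0 + beta * theta)))      # e^{-v(theta) t} with v(theta) = log(1 + beta*theta)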