QUEUES WITH BALKING AND THEIR APPLICATION TO AN INVENTORY PROBLEM

By

EDWIN LUTHER BRADLEY, JR.

A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
1969

TO THE MEMORY OF MY MOTHER

ACKNOWLEDGMENTS

The author is particularly indebted to Professor J. G. Saw, the supervisory committee chairman, who gave continued interest and encouragement throughout the entire period involving the research and writing of this dissertation. Special thanks to Professor R. L. Scheaffer, who proofread the entire dissertation and made many worthwhile suggestions. Thanks are also due to Mrs. Edna Larrick, who did a superb job of typing the dissertation. It is a pleasure to acknowledge the Department of Statistics for the support it has extended so that the author was able to pursue his graduate work.

Finally, the author acknowledges the patience and encouragement given by his wife and children during his many years in school. Without their understanding, this paper would never have been written.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

CHAPTER
1. INTRODUCTION
2. THE QUEUE GI/M/1 WITH BALKING AT QUEUES OF LENGTH K-1
   2.1 The Basic System
   2.2 An Imbedded Markov Chain
   2.3 Some Properties of the Time Between Balks
   2.4 The Inverse of a Special Triangular Matrix
3. THE QUEUE GI/M/∞ WITH BALKING AT QUEUES OF LENGTH K-1
   3.1 The Basic System
   3.2 An Imbedded Markov Chain
   3.3 Some Properties of the Time Between Balks
4. THE QUEUE GI/D/1 WITH BALKING AT QUEUES OF LENGTH K-1
   4.1 The Basic System
   4.2 The Waiting Time in the System
   4.3 Some Properties of the Time Between Balks
5. THE INVENTORY PROBLEM: DISCRETE CASE
   5.1 Definition of the Inventory System
   5.2 Relation of the Inventory System to Queues with Balking
   5.3 The Cost Function C(v)
   5.4 Solution of C(v) Using the Queue GI/M/1 with Balking
   5.5 Solution of C(v) Using the Queue GI/M/∞ with Balking
   5.6 Solution of C(v) Using the Queue GI/D/1 with Balking
6. THE INVENTORY PROBLEM: CONTINUOUS CASE
   6.1 First Passage Times of Non-Negative, Continuous Stochastic Processes with Infinitely Divisible Distributions
   6.2 Definition of the Continuous Inventory System and Its Relation to Previous Results
BIBLIOGRAPHY
BIOGRAPHICAL SKETCH

CHAPTER 1
INTRODUCTION

In this dissertation, we consider an alternative to the (s,S) ordering policy associated with inventory systems. The (s,S) ordering policy is specified as follows. There exists a store of finite capacity S that holds material (discrete or continuous) for future use in some process. In the most general context, demand for the material in storage during an interval of time is assumed to be a time-dependent stochastic process. Ordering of replacement stock to maintain the level of inventory in the store is done in one of two ways. Either orders for an amount S-s of replacement stock are made at the times when the stock level reaches s, s ≤ S, or the level of stock in the inventory is examined at regular points in time and orders for replacement stock equal to the stock deficit are only made at those regular times for which the stock has fallen below the level s. In both cases, the time it takes the replacement stock to arrive (i.e., the delivery time) is assumed to be zero.
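The first variant of the (s,S) rule described above (continuous monitoring of the stock level, an order for S-s units placed the instant the level reaches s, and zero delivery time) can be sketched in a few lines of simulation code. This is only an illustrative sketch with unit demands; the capacity S = 20, reorder point s = 5, and demand count below are hypothetical values chosen for the example, not quantities taken from the dissertation.

```python
def run_sS_policy(S, s, demands):
    """Continuous-review (s,S) policy with zero delivery time:
    each demand removes one unit; the instant the level reaches s,
    an order of S - s units arrives and restores the level to S."""
    stock, orders, trace = S, 0, []
    for _ in range(demands):
        stock -= 1                # one unit of demand
        if stock == s:            # reorder point reached
            stock += S - s        # instantaneous replenishment
            orders += 1
        trace.append(stock)
    return orders, trace

# With unit demands the behavior is deterministic: starting from S,
# an order is placed after every S - s demands.
orders, trace = run_sS_policy(S=20, s=5, demands=100)
```

Because delivery is instantaneous, the level never leaves the set {s+1, ..., S}; a positive delivery lag T, as in the generalization discussed next, is exactly what breaks this guarantee.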
A generalization of the (s,S) ordering policy allows a time lag T for arrival of the replacement stock. For a certain class of cost functions associated with maintaining the level of stock in an inventory, it can be shown that the (s,S) ordering policy is the optimal policy to utilize. A summary of some results for the (s,S) ordering policy and conditions under which (s,S) is (or is not) the optimal ordering policy is given in a paper by Gani (1957).

Another generalization of the (s,S) ordering policy is the following. The capacity of the inventory is S and demand for the stored material is once again a time-dependent stochastic process. However, orders for an amount v (v ≤ S) of replacement stock are now made at the times when the stock level drops to the values S-v, S-2v, S-3v, .... The delivery time for any order is assumed to be a constant value T (T ≥ 0). Under the assumptions that the store holds discrete items and the demand for these items obeys a Poisson probability law, the long run probability law representing the level of stock in the store is given in Gani (1957) and Prabhu (1965b).

In many cases, however, a constant delivery time does not adequately express reality. Furthermore, the negative stock level that can arise when T (the delivery time) is greater than zero may reflect the loss of considerable time and money in terms of idle manpower and equipment. To circumvent these difficulties, an alternative ordering policy is defined and its properties examined in this paper.

Envision a subwarehouse, maintaining an inventory of finite capacity S, that holds material (discrete or continuous) for future use in some process. In the most general context, we assume the demand for the stored material is a time-dependent stochastic process. In order to maintain a stock on hand, orders for an amount v (v ≤ S) of replacement stock are placed with a warehouse at the times when the stock level drops to S-v, S-2v, ..., S-v[S/v] ([x] denoting the integral part of x).
The time it takes the warehouse to process an order placed when the stock level falls to S-v, S-2v, ..., or S-v[S/v]+v is called a regular service time. All regular service times are assumed to be mutually independent random variables with a common distribution and to be independent of the demand process. An order placed when the stock level falls to S-v[S/v] is called an emergency order. The time to process an emergency order is assumed to be instantaneous, or at least effectively zero. Hence, regular orders for an amount v of replacement stock are made if the stock level is at least v at the instant an order is placed, while an emergency order is made if the stock level is less than v at the instant the order is placed. Utilizing this reordering technique, the inventory maintains a positive stock level at all times. Although a somewhat larger cost would quite naturally be incurred with emergency orders than with regular ones, it is assumed we are willing to pay the price of instantaneous delivery in order to avoid the disaster of running completely out of stock in the inventory.

The cost of maintaining the inventory level will clearly depend on v, the size of a replacement order, and there should exist an optimal value of v, defined to be that value of v for which this cost is a minimum. In Chapter 5, we define a v-dependent cost function for which we seek the optimal v.

It will be shown later that the inventory problem is closely related to a problem in queueing theory: queueing systems with balking at queues of a fixed length. We shall now discuss the salient features of such a queueing system. Utilizing a notation proposed by Kendall (1953), by "the queue A/B/s with balking at queues of length K-1" we mean a queueing system specified as follows. The queue length at any instant will refer to the number of people in the system who are being served or waiting to be served at that instant.
Successive customers are assumed to arrive in the system in such a way that their interarrival times are mutually independent with distribution function A(·). A customer joins the queue if, at the instant he arrives, there are fewer than K-1 persons already in the queue. If there are K-1 persons in the queue when the customer arrives (so that he is the Kth person in the system), one of three equivalent things happens to him: (1) the customer balks, i.e., he leaves without waiting to be served; (2) the system rejects the customer; or (3) the customer receives instantaneous service. There are s servers available to wait on customers, with the first free server attending the customer at the top of the queue. The length of time from when a server starts to serve a customer until the completion of such service is called the service time. All service times are assumed to be mutually independent with distribution function B(·). Finally, the service times and interarrival times are assumed to be mutually independent.

Because the statistician is more familiar with the terminology of queues than with that of inventories, the work has been carried out in terms of queueing theory. The times between successive orders and the service times for the inventory problem with emergency orders are shown to correspond to the interarrival times and service times, respectively, in queueing systems with balking at queues of a fixed length. The mechanics of inventories have led us to give prime attention to the queues GI/M/1, GI/M/∞, and GI/D/1, all with balking at queues of length K-1, where GI (or G) refers to a general distribution function, M refers to a negative exponential distribution function, and D refers to a distribution whose mass is concentrated at a single point.

An inventory or storage area is normally established with the assumption that it will be in operation for a long period of time. In choosing a reordering policy, therefore, long run distributions become important.
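For concreteness, the balking rule just described can be simulated for a single-server queue. The sketch below is illustrative only: it takes exponential interarrival and service times (so it is the queue M/M/1 with balking, in the notation above), and the rates, the threshold K, and the customer count are hypothetical values chosen for the example.

```python
import random

def simulate_balking_queue(K, arr_rate, srv_rate, n_customers, seed=1):
    """Single-server FIFO queue in which an arrival who finds K-1
    persons present balks (leaves immediately)."""
    rng = random.Random(seed)
    t, departures = 0.0, []       # scheduled departure epochs, ascending
    seen, balks = [], 0
    for _ in range(n_customers):
        t += rng.expovariate(arr_rate)                 # next arrival epoch
        departures = [d for d in departures if d > t]  # purge finished customers
        q = len(departures)                            # persons found on arrival
        seen.append(q)
        if q == K - 1:
            balks += 1                                 # the customer balks
        else:
            begin = departures[-1] if departures else t  # FIFO: service starts
            departures.append(begin + rng.expovariate(srv_rate))
    return seen, balks

seen, balks = simulate_balking_queue(K=4, arr_rate=1.0, srv_rate=0.8,
                                     n_customers=2000)
```

By construction, no arrival ever finds more than K-1 persons present, so the number in the system never exceeds K-1 between arrivals; this is the "rejection" reading of the balking rule.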
Fortunately, this means that long run properties of queues with balking are adequate for the solution of our inventory problem.

In Chapter 2, the queue GI/M/1 with balking at queues of length K-1 is discussed. In particular, we utilize the concept of an imbedded Markov chain to derive properties of the queue length and the time between successive balks. In Chapter 3, the same is done for the queue GI/M/∞ with balking at queues of length K-1. In Chapter 4, the queue GI/D/1 with balking at queues of length K-1 is discussed. Here, the concept of the waiting time in the system is introduced. We again utilize an imbedded Markov process to obtain properties of the waiting time and the time between successive balks.

In Chapter 5, we consider the inventory problem when the stored material is discrete. Here, we forge the link between queues with balking and inventories subject to instantaneous emergency orders, and give solutions for the cost function associated with the inventory problem based on results from Chapters 2 through 4, along with some examples. In Chapter 6, the continuous inventory problem and its relation to results of previous chapters is discussed. Also included are some properties of a continuous, nonnegative stochastic process with an infinitely divisible distribution.

CHAPTER 2
THE QUEUE GI/M/1 WITH BALKING AT QUEUES OF LENGTH K-1

2.1 The Basic System

Consider a queueing system in which customers arrive in the system at times ..., α_{-2}, α_{-1}, α_0, α_1, α_2, ..., such that the interarrival times

u_j = α_j - α_{j-1}, j ≥ 1, (2.1.1)

are mutually independent. The distribution function of u_j will be denoted by

Pr{u_j ≤ u} = F(u), u ≥ 0, j ≥ 1. (2.1.2)

One server is available to handle the needs of the customers. This server dispenses his service on a strict "first come, first served" basis. The successive service times of customers who join the queue are denoted by w_1, w_2, w_3, ..., and are assumed to be mutually independent random variables that are independent of the arrival times.
The distribution function of w_j is assumed to be

Pr{w_j ≤ w} = 1 - e^{-λw}, w ≥ 0, j ≥ 1. (2.1.3)

Let {Q(t); -∞ < t < +∞} be the stochastic process such that Q(t) represents the number of customers in the system at time t, and {Q*(t); -∞ < t < +∞} the stochastic process such that Q*(t) represents the queue length at time t. Recall that the queue length, Q*(t), is the number of persons being served or waiting to be served at time t. A customer arriving in the system at time α enters the queue if and only if Q*(α-0) ≤ K-2, that is, if and only if the number of people in the queue immediately prior to his arrival is K-2 or less. In this case we have Q*(α) = Q(α) = Q*(α-0) + 1 = Q(α-0) + 1. If, on the other hand, our customer is faced with a queue length of K-1 (so that he becomes the Kth person in the system), he balks and immediately leaves the system. We now have that Q(α) = K implies Q(α-0) = Q(α+0) = K-1 and Q*(α-0) = Q*(α+0) = Q*(α) = K-1. It is clear that Q(t) and Q*(t) are identical in value except at the points on the time axis for which Q(α) = K.

We shall work with the stochastic process {Q(t); -∞ < t < +∞} and shall be concerned with its behavior beyond the time point α_0, which we assume is known. Hence, without loss of generality, α_0 could be taken as zero. Define

N_1 = inf{k > 0 | Q(α_k) = K}, (2.1.4)
N_n = inf{k > N_{n-1} | Q(α_k) = K}, n ≥ 2,

(so that N_n is the number of customers who arrive up to and including the nth customer to balk), and

M_1 = N_1, (2.1.5)
M_j = N_j - N_{j-1}, j ≥ 2,

(so that M_j (j ≥ 2) is the number of arrivals between the (j-1)st and jth balks, including the jth person to balk). Define

V_1 = α_{N_1} - α_0, (2.1.6)
V_j = α_{N_j} - α_{N_{j-1}}, j ≥ 2.

Then V_1 is the time until the first balk and V_j (j ≥ 2) is the time between the (j-1)st and jth balks. Of primary importance, for us, is the value of E(V_j), j ≥ 2. This quantity is established in Section 2.3. While the theorems of Sections 2.2 and 2.4 are proved with the thought of building toward a solution for E(V_j), these theorems have a theoretical and practical importance that goes beyond our narrow objective.
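The quantities N_j, M_j, and V_j can be read off a simulated path of the system. The sketch below generates the number in the system at successive arrival epochs for a GI/M/1 queue with balking, using the fact that, with negative exponential service, the number of potential service completions of a single server between consecutive arrivals is a Poisson count. The gamma interarrival distribution and all numerical parameters are hypothetical choices for illustration only; sample means of the M and V sequences then estimate E(M_j) and E(V_j) for j ≥ 2.

```python
import math, random

def poisson_variate(rng, mean):
    # inversion sampler for a Poisson random variate
    x, p, u = 0, math.exp(-mean), rng.random()
    s = p
    while u > s:
        x += 1
        p *= mean / x
        s += p
    return x

def balk_path(K, lam, draw_u, n_arrivals, seed=7):
    """qs[n] = number in system at the (n+1)st arrival epoch, arrival
    included; a value of K flags a balk.  draw_u(rng) samples one
    interarrival time from F."""
    rng = random.Random(seed)
    Q, t = 1, 0.0
    qs, balk_no, balk_time = [1], [], []
    for i in range(2, n_arrivals + 1):
        s = Q if Q < K else K - 1          # a balking customer leaves at once
        u = draw_u(rng)
        t += u
        m = poisson_variate(rng, lam * u)  # potential service completions
        Q = max(s - m, 0) + 1              # survivors plus the new arrival
        qs.append(Q)
        if Q == K:
            balk_no.append(i)
            balk_time.append(t)
    M = [b - a for a, b in zip(balk_no, balk_no[1:])]      # arrivals between balks
    V = [b - a for a, b in zip(balk_time, balk_time[1:])]  # times between balks
    return qs, M, V

qs, M, V = balk_path(K=4, lam=1.0,
                     draw_u=lambda r: r.gammavariate(2.0, 0.4), n_arrivals=5000)
```

The truncation max(s - m, 0) reflects the fact that the server idles once the system empties, so completions beyond the number present do not occur.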
2.2 An Imbedded Markov Chain

Before we define the imbedded Markov chain, we prove two lemmas. The first lemma simply restates a well-known result about negative exponentially distributed random variables, while the second lemma establishes the non-Markovian character of Q(t) when the interarrival times have an unspecified distribution.

Lemma 2.2.1

Let X be a continuous, nonnegative random variable. Then

Pr{X > x + y | X > x} = Pr{X > y}, x, y ≥ 0, (2.2.1)

if and only if

Pr{X > x} = e^{-βx}, β > 0. (2.2.2)

Proof of Lemma 2.2.1

If (2.2.1) holds, we have Pr{X > x + y} = Pr{X > x} Pr{X > y}, and therefore (2.2.2) is true. See, for example, Parzen (1962, p. 121). If (2.2.2) holds, we have

Pr{X > x + y | X > x} = Pr{X > x + y, X > x}/Pr{X > x} = Pr{X > x + y}/Pr{X > x} = e^{-β(x+y)}/e^{-βx} = e^{-βy} = Pr{X > y},

thus completing the proof.

Lemma 2.2.2

The stochastic process {Q(t); -∞ < t < +∞} is not, in general, a Markov process.

Proof of Lemma 2.2.2

Without loss of generality, let α_0 = 0 and Q(0) = 0. Fix 0 < s < t, let Y(t) = max{Q(τ); s < τ ≤ t}, and let u = t - s. We have

Pr{Y(t) = 0 | Q(τ) = 0, 0 ≤ τ ≤ s} = Pr{Y(t) = 0 | u_1 > s} = Pr{u_1 > t | u_1 > s} = Pr{u_1 > u + s | u_1 > s}. (2.2.3)

Let 0 < τ_0 < τ_1 < s, and define the events A and B by

A = {Q(τ) = 0; 0 ≤ τ < τ_0 and τ_1 ≤ τ ≤ s},
B = {Q(τ) = 1; τ_0 ≤ τ < τ_1}.

We have

Pr{Y(t) = 0 | A, B} = Pr{Y(t) = 0 | u_1 = τ_0, u_1 + w_1 = τ_1, u_1 + u_2 > s}
= Pr{u_1 + u_2 > t | u_1 = τ_0, u_1 + w_1 = τ_1, u_1 + u_2 > s}
= Pr{u_2 > t - τ_0 | u_2 > s - τ_0}
= Pr{u_1 > t - τ_0 | u_1 > s - τ_0}
= Pr{u_1 > u + (s - τ_0) | u_1 > s - τ_0}. (2.2.4)

But (2.2.3) and (2.2.4) are not necessarily equivalent. Yet, if Pr{Q(t) = k | Q(τ), 0 ≤ τ ≤ s} = Pr{Q(t) = k | Q(s)} for all t > s, then (2.2.3) and (2.2.4) would have to be identical. Therefore, the proof is complete.

It should be noted here that, if the u_j have a negative exponential distribution, then Q(t) is a Markov process. Although {Q(t); -∞ < t < +∞} is not, in general, a Markov process, there exists an imbedded Markov chain defined by

Q_n = Q(α_n), n = 0, 1, 2, ..., (2.2.5)

regardless of the distribution of the u_j. Figures 2.1 and 2.2 give the correspondence between Q(t) and Q_n.
Q_n clearly represents the number of persons in the system at the time the nth customer enters the system. Valuable information about Q(t) can be obtained from a knowledge of Q_n, as will be shown in the following theorems. Before proceeding further, note that if K = 1, then Pr{Q_n = 1} = 1, n = 0, 1, 2, .... For the future we shall therefore hold K ≥ 2.

Theorem 2.2.1

The stochastic process {Q_n; n = 0, 1, 2, ...} defined by (2.2.5) has the following properties:

(a) Q_n is a Markov chain;
(b) Q_n is time-homogeneous;
(c) The class {1, 2, ..., K} of states on which Q_n is defined is an aperiodic, positive persistent communicating class; and
(d) The one-step transition probability matrix P, with rows and columns labeled K, K-1, ..., 2, 1, is given by

        K      K-1    K-2    ...  3          2          1
K     [ α_0    α_1    α_2    ...  α_{K-3}    α_{K-2}    β_{K-1} ]
K-1   [ α_0    α_1    α_2    ...  α_{K-3}    α_{K-2}    β_{K-1} ]
K-2   [ 0      α_0    α_1    ...  α_{K-4}    α_{K-3}    β_{K-2} ]
 ...
2     [ 0      0      0      ...  α_0        α_1        β_2     ]
1     [ 0      0      0      ...  0          α_0        β_1     ]

where

α_j = ∫_0^∞ e^{-λt} (λt)^j/j! dF(t), j = 0, 1, 2, ..., (2.2.6)

and

β_j = 1 - α_0 - α_1 - ... - α_{j-1}, j = 1, 2, .... (2.2.7)

[Figure 2.1. A typical path of Q(t) for GI/M/1 with balking at queues of length 4.]

[Figure 2.2. Path of Q_n corresponding to Q(t) in Figure 2.1.]

Proof of Theorem 2.2.1, Part (a)

Let U(·) be the unit step function at zero, and let X_{n+1} be the number of customers who complete their service in the interval (α_n, α_{n+1}]. Then

Q_{n+1} = Q_n + 1 - X_{n+1}, if Q_n < K,
Q_{n+1} = Q_n + 1 - X_{n+1} - 1, if Q_n = K,

so that Q_{n+1} = Q_n + 1 - X_{n+1} - U(Q_n - K), Q_{n+1} ≤ K. Since the distribution of the service times is negative exponential, the probability law of X_{n+1} conditional on the history Q_0, Q_1, ..., Q_n is a function only of Q_n. Hence, we see easily that the probability law of Q_{n+1} conditional on the history Q_0, Q_1, ..., Q_n can be a function only of Q_n, which establishes the Markov property of Q_n.

Proof of Theorem 2.2.1, Parts (b) and (d)

Since we are looking at Q(t) at successive arrivals, we have 1 ≤ Q_{n+1} ≤ Q_n + 1. Also, of course, by the balking aspect of the problem, Q_n ≤ K. Hence,

Pr{Q_{n+1} = k | Q_n = j} = 0, for k > j + 1. (2.2.8)
Further,

Pr{Q_{n+1} = k | Q_n = K} = Pr{K - X_{n+1} - 1 + 1 = k} = Pr{(K-1) - X_{n+1} + 1 = k} = Pr{Q_{n+1} = k | Q_n = K-1}. (2.2.9)

For j = 1, 2, ..., K-1 and k = 2, ..., j+1, let N(t) be a Poisson process with interarrival times w_1, w_2, w_3, ...; then

Pr{Q_{n+1} = k | Q_n = j} = Pr{j - X_{n+1} + 1 = k | Q_n = j}
= Pr{X_{n+1} = j - k + 1 | Q_n = j}
= E_{u_{n+1}}(Pr{w_1 + ... + w_{j-k+1} ≤ u_{n+1}, w_1 + ... + w_{j-k+2} > u_{n+1}})
= E_{u_{n+1}}(Pr{N(u_{n+1}) = j - k + 1})
= ∫_0^∞ e^{-λt} (λt)^{j-k+1}/(j-k+1)! dF(t) = α_{j-k+1}. (2.2.10)

Let j = 1, 2, 3, ..., K-1 and k = 1; then

Pr{Q_{n+1} = 1 | Q_n = j} = Pr{j - X_{n+1} + 1 = 1 | Q_n = j}
= Pr{X_{n+1} = j | Q_n = j}
= E_{u_{n+1}}(Pr{w_1 + ... + w_j ≤ u_{n+1}})
= ∫_0^∞ (∫_0^t λ^j x^{j-1} e^{-λx}/(j-1)! dx) dF(t)
= ∫_0^∞ (1 - Σ_{i=0}^{j-1} e^{-λt}(λt)^i/i!) dF(t)
= 1 - α_0 - ... - α_{j-1} = β_j. (2.2.11)

Equations (2.2.8) through (2.2.11) are independent of n, and therefore Q_n is time-homogeneous. Application of these equations for j, k = 1, 2, ..., K gives us the matrix P.

Proof of Theorem 2.2.1, Part (c)

An examination of the one-step transition matrix in the statement of the theorem shows that each state communicates with all others. Since, for example, Pr{Q_{n+1} = K | Q_n = K} > 0, state K is aperiodic. We therefore have a finite irreducible communicating class of aperiodic states, so that each state is necessarily positive persistent. The proof of Theorem 2.2.1 is now complete.

We shall prove a lemma that applies to an arbitrary Markov chain with one-step transition matrix P* given by

        K            K-1          K-2          ...  3            2            1
K     [ p_{K-1,1}    p_{K-1,2}    p_{K-1,3}    ...  p_{K-1,K-2}  p_{K-1,K-1}  p_{K-1,K}   ]
K-1   [ p_{K-1,1}    p_{K-1,2}    p_{K-1,3}    ...  p_{K-1,K-2}  p_{K-1,K-1}  p_{K-1,K}   ]
K-2   [ 0            p_{K-2,1}    p_{K-2,2}    ...  p_{K-2,K-3}  p_{K-2,K-2}  p_{K-2,K-1} ]
 ...
3     [ 0            0            0            ...  p_{3,2}      p_{3,3}      p_{3,4}     ]
2     [ 0            0            0            ...  p_{2,1}      p_{2,2}      p_{2,3}     ]
1     [ 0            0            0            ...  0            p_{1,1}      p_{1,2}     ]
                                                                               (2.2.12)

in which p_{i,j} > 0 for i = 1, 2, ..., K-1; j = 1, 2, ..., i+1. If a Markov chain has a one-step transition matrix of the form (2.2.12), then clearly the Markov chain is aperiodic and positive persistent. Hence, there exists a unique long run distribution equal to the stationary distribution. With this in mind, we now state and prove the lemma.
Lemma 2.2.3

Let P* of (2.2.12) be the one-step transition probability matrix of a Markov chain. Then, if θ' = (θ_K, ..., θ_1) is the unique stationary distribution for the Markov chain,

1/θ_K = ξ_K - ξ_{K-1}, K ≥ 3 (and 1/θ_2 = ξ_2 when K = 2),

where (ξ_K, ξ_{K-1}, ..., ξ_2) is the solution of

(ξ_K, ξ_{K-1}, ..., ξ_2) B = (1, 1, ..., 1)

and B is the (K-1) × (K-1) lower triangular matrix

B = [ p_{K-1,1}      0            0            ...  0          0
      p_{K-1,2}-1    p_{K-2,1}    0            ...  0          0
      p_{K-1,3}      p_{K-2,2}-1  p_{K-3,1}    ...  0          0
       ...
      p_{K-1,K-2}    p_{K-2,K-3}  p_{K-3,K-4}  ...  p_{2,1}    0
      p_{K-1,K-1}    p_{K-2,K-2}  p_{K-3,K-3}  ...  p_{2,2}-1  p_{1,1} ].

Proof of Lemma 2.2.3

By the definition of a stationary distribution, θ is the unique solution to

θ'P* = θ' (2.2.13)

and

θ_1 + ... + θ_K = 1. (2.2.14)

Writing out the equations (2.2.13) and (2.2.14) with the θ_K terms on the left-hand side, we get the following system of K linearly independent equations:

(1 - p_{K-1,1}) θ_K = p_{K-1,1} θ_{K-1}
-p_{K-1,2} θ_K = (p_{K-1,2} - 1) θ_{K-1} + p_{K-2,1} θ_{K-2}
-p_{K-1,3} θ_K = p_{K-1,3} θ_{K-1} + (p_{K-2,2} - 1) θ_{K-2} + p_{K-3,1} θ_{K-3}
 ...
-p_{K-1,K-2} θ_K = p_{K-1,K-2} θ_{K-1} + p_{K-2,K-3} θ_{K-2} + ... + (p_{3,2} - 1) θ_3 + p_{2,1} θ_2
-p_{K-1,K-1} θ_K = p_{K-1,K-1} θ_{K-1} + p_{K-2,K-2} θ_{K-2} + ... + p_{3,3} θ_3 + (p_{2,2} - 1) θ_2 + p_{1,1} θ_1
θ_K = 1 - θ_{K-1} - ... - θ_1. (2.2.15)

Let A be the K × (K-1) matrix of coefficients of θ_{K-1}, ..., θ_1 as they appear on the right-hand sides of (2.2.15); thus the first K-1 rows of A are precisely the rows of B, and the last row of A is (-1, -1, ..., -1). The K-1 columns of A form a set of linearly independent vectors of dimension K. Hence, there exists a vector ξ' = (ξ_K, ξ_{K-1}, ..., ξ_1), unique up to a scalar multiple, such that ξ is orthogonal to each column of A. By noting the last column of A, we must have ξ_1 ≠ 0 (otherwise the triangular structure would force every component of ξ to vanish). Therefore, without loss of generality, we may take ξ_1 = 1. That is, we have

ξ'A = 0', (2.2.16)

with 0 the null vector; since the last row of A contributes -ξ_1 = -1 to each column, (2.2.16) written out is exactly (ξ_K, ξ_{K-1}, ..., ξ_2)B = (1, 1, ..., 1).

After multiplying the nth equation of (2.2.15) by the nth component of ξ and adding them up, we find

θ_K[(1 - p_{K-1,1})ξ_K - p_{K-1,2}ξ_{K-1} - p_{K-1,3}ξ_{K-2} - ... - p_{K-1,K-2}ξ_3 - p_{K-1,K-1}ξ_2 + 1] = 1. (2.2.17)

But by (2.2.16), if we take the scalar product of ξ with the first column of A, we have

p_{K-1,1}ξ_K + (p_{K-1,2} - 1)ξ_{K-1} + p_{K-1,3}ξ_{K-2} + ... + p_{K-1,K-1}ξ_2 - 1 = 0. (2.2.18)

Adding (2.2.18) to the bracketed expression in (2.2.17), every term cancels except ξ_K - ξ_{K-1}, and therefore

θ_K(ξ_K - ξ_{K-1}) = 1,

thus completing the proof.

Corollary 2.2.1

Let μ_KK be the mean recurrence time of state K for a Markov chain with one-step transition matrix P* of (2.2.12) and state space {1, 2, ..., K}.
Then μ_22, μ_33, μ_44, ..., the mean recurrence times of the top states of the correspondingly truncated chains on the state spaces {1,2}, {1,2,3}, ..., are finite and satisfy

μ_22 = 1/p_{1,1},
μ_KK = (1/p_{K-1,1})[1 + Σ_{i=2}^{K-1} p_{K-1,K-i+2} Σ_{j=i}^{K-1} μ_jj], K ≥ 3. (2.2.19)

Proof of Corollary 2.2.1

It is well known that μ_KK = 1/θ_K, θ_K the stationary probability associated with state K, and that μ_KK is finite. See, for example, Parzen (1962). By Lemma 2.2.3, if K = 2, then B = [p_{1,1}] and hence

μ_22 = 1/θ_2 = ξ_2 = 1/p_{1,1}. (2.2.20)

If K ≥ 3, take the scalar product of the first column of B and (ξ_K, ..., ξ_2); then the lemma yields

p_{K-1,1}ξ_K = (1 - p_{K-1,2})ξ_{K-1} - p_{K-1,3}ξ_{K-2} - p_{K-1,4}ξ_{K-3} - ... - p_{K-1,K-2}ξ_3 - p_{K-1,K-1}ξ_2 + 1. (2.2.21)

Subtracting p_{K-1,1}ξ_{K-1} from both sides of equation (2.2.21), we get

p_{K-1,1}(ξ_K - ξ_{K-1}) = (1 - p_{K-1,2} - p_{K-1,1})ξ_{K-1} - p_{K-1,3}ξ_{K-2} - ... - p_{K-1,K-1}ξ_2 + 1. (2.2.22)

Adding and subtracting (1 - p_{K-1,2} - p_{K-1,1})ξ_{K-2} on the right-hand side of equation (2.2.22), we obtain

p_{K-1,1}(ξ_K - ξ_{K-1}) = (1 - p_{K-1,2} - p_{K-1,1})(ξ_{K-1} - ξ_{K-2}) + (1 - p_{K-1,3} - p_{K-1,2} - p_{K-1,1})ξ_{K-2} - p_{K-1,4}ξ_{K-3} - ... - p_{K-1,K-1}ξ_2 + 1. (2.2.23)

Continuing in this manner to add and subtract (coefficient of ξ_j)·ξ_{j-1} as j goes from K-2 down to 3, we get

p_{K-1,1}(ξ_K - ξ_{K-1}) = (1 - p_{K-1,2} - p_{K-1,1})(ξ_{K-1} - ξ_{K-2}) + (1 - p_{K-1,3} - p_{K-1,2} - p_{K-1,1})(ξ_{K-2} - ξ_{K-3}) + ... + (1 - p_{K-1,K-2} - ... - p_{K-1,1})(ξ_3 - ξ_2) + (1 - p_{K-1,K-1} - ... - p_{K-1,1})ξ_2 + 1. (2.2.24)

Note that 1 - Σ_{k=1}^{n} p_{K-1,k} = Σ_{k=n+1}^{K} p_{K-1,k}. Since B is lower triangular, (ξ_j, ..., ξ_2) solves the corresponding system for the chain truncated to {1, ..., j}; hence, by the lemma, μ_jj = ξ_j - ξ_{j-1} (with ξ_1 taken to be zero, so that μ_22 = ξ_2). Applying this result, equation (2.2.20), and the identity above to (2.2.24), we have

p_{K-1,1}μ_KK = 1 + Σ_{j=2}^{K-1} μ_jj Σ_{k=K-j+1}^{K-1} p_{K-1,k+1}. (2.2.25)

Setting i = K - k + 1 in equation (2.2.25) yields

p_{K-1,1}μ_KK = 1 + Σ_{j=2}^{K-1} μ_jj Σ_{i=2}^{j} p_{K-1,K-i+2} = 1 + Σ_{i=2}^{K-1} p_{K-1,K-i+2} Σ_{j=i}^{K-1} μ_jj. (2.2.26)

Dividing both sides of (2.2.26) by p_{K-1,1} yields equation (2.2.19), thus completing the proof.

The result of Corollary 2.2.1 can be applied to the Markov chain of Theorem 2.2.1 to obtain the mean recurrence time of state K. However, the special form of the matrix P in Theorem 2.2.1 lends itself to a more elegant solution for the mean recurrence time than that given by the corollary. Before stating this solution in Theorem 2.2.2, we define

φ(θ) = ∫_0^∞ e^{-θu} dF(u), θ ≥ 0, (2.2.27)

and

K(Z) = Σ_{j=0}^{∞} α_j Z^j, |Z| ≤ 1. (2.2.28)

Also, for ease in writing, if h(Z) is any function of Z, denote by C_z^{(n)} h(Z) the coefficient of Z^n in the expression h(Z).

Theorem 2.2.2

The mean recurrence time μ_KK for the state K of the Markov process {Q_n; n = 0, 1, 2, ...} of Theorem 2.2.1 is finite and satisfies

μ_KK = C_z^{(K-2)} 1/[φ(λ(1 - Z)) - Z]. (2.2.29)

Proof of Theorem 2.2.2

By Theorem 2.2.1, μ_KK is finite and

μ_KK = 1/θ_K, (2.2.30)

where θ_K is the stationary probability associated with state K. The P of Theorem 2.2.1(d) is the P* of (2.2.12) with

p_{j,k} = α_{k-1}, k = 1, 2, ..., j,
p_{j,k} = β_j, k = j + 1.

Hence, Lemma 2.2.3 applies with

B = [ α_0      0        0        ...  0      0
      α_1-1    α_0      0        ...  0      0
      α_2      α_1-1    α_0      ...  0      0
       ...
      α_{K-2}  α_{K-3}  α_{K-4}  ...  α_1-1  α_0 ].

In Section 2.4, it will be shown that for a matrix B of this special form, the solution for 1/θ_K in Lemma 2.2.3 is

1/θ_K = μ_KK = C_z^{(K-2)} 1/(K(Z) - Z). (2.2.31)

But

Σ_{j=0}^{∞} Z^j e^{-λt}(λt)^j/j! = e^{-λt(1-Z)}, if |Z| ≤ 1.

Therefore,

K(Z) = Σ_{j=0}^{∞} Z^j ∫_0^∞ e^{-λt}(λt)^j/j! dF(t) = ∫_0^∞ Σ_{j=0}^{∞} e^{-λt}(λZt)^j/j! dF(t) = ∫_0^∞ e^{-λt(1-Z)} dF(t) = φ(λ(1 - Z)). (2.2.32)

Hence, (2.2.30), (2.2.31), and (2.2.32) give the desired result.

2.3 Some Properties of the Time Between Balks

The theorems in this section refer to the random variables defined by (2.1.5) and (2.1.6). These theorems are not only useful in the discussion of the queueing problem, but also provide powerful results for the inventory problem to be discussed in Chapters 5 and 6.

Theorem 2.3.1

(a) M_1, M_2, M_3, ..., are mutually independent random variables,
(b) M_2, M_3, M_4, ..., are identically distributed,
(c) E(M_j) = μ_KK, j ≥ 2, with μ_KK given by (2.2.29).

Proof of Theorem 2.3.1

Let B(i,j) denote the event {Q_k < K, i ≤ k ≤ j-1; Q_j = K}. Let k_1, k_2, k_3, ..., be a sequence of positive integers and define n_j = k_1 + ... + k_j. Then for any m ≥ 2,

Pr{M_1 = k_1, M_2 = k_2, ..., M_m = k_m}
= Pr{B(1,k_1), B(k_1+1, n_2), ..., B(n_{m-1}+1, n_m) | Q_0}
= Pr{B(n_{m-1}+1, n_m) | Q_{n_{m-1}} = K} Pr{B(n_{m-2}+1, n_{m-1}) | Q_{n_{m-2}} = K} ... Pr{B(k_1+1, n_2) | Q_{k_1} = K} Pr{B(1,k_1) | Q_0}
= Pr{B(1,k_m) | Q_0 = K} Pr{B(1,k_{m-1}) | Q_0 = K} ... Pr{B(1,k_2) | Q_0 = K} Pr{B(1,k_1) | Q_0}. (2.3.1)

The second equality above follows by the Markov property of Q_n, and the last equality follows by the time-homogeneity of Q_n. Hence,

Pr{M_1 = k_1, ..., M_m = k_m} = Pr{M_1 = k_1} ... Pr{M_m = k_m}.

Therefore, the M_j are independent, and by examining the last expression in (2.3.1), we see that the M_j (j ≥ 2) are identically distributed.

Now, for j ≥ 2,

Pr{M_j = k} = Pr{Q_1 < K, ..., Q_{k-1} < K, Q_k = K | Q_0 = K} = Pr{the first passage from state K to state K takes k stages}.

Hence, E(M_j) = mean recurrence time of state K = μ_KK. Applying Theorem 2.2.2, we complete the proof.

Theorem 2.3.2

(a) V_1, V_2, V_3, ..., are mutually independent random variables,
(b) V_2, V_3, V_4, ..., are identically distributed,
(c) E(V_j) = E(M_j) E(u_1), j ≥ 1,
          = μ_KK E(u_1), j ≥ 2,
where μ_KK is given by (2.2.29) and u_1 is given by (2.1.2).

Proof of Theorem 2.3.2

Since

V_{j+1} = α_{N_{j+1}} - α_{N_j} = u_{N_j+1} + ... + u_{N_{j+1}},

parts (a) and (b) follow directly from Theorem 2.3.1 (a) and (b), the assumed independence of the u_k, and the independence of u_{n+1} and Q_n.

Assume that Q_0 = i (1 ≤ i ≤ K). Let a_n = α_n - α_0, so that a_n = u_1 + ... + u_n. Now {a_n - nE(u_1)} is a martingale and

E(a_n - nE(u_1)) = 0, for all n.

The event {M_1 > k} ∈ B_k, where B_k is the σ-field of events generated by (a_1, ..., a_k) and (w_1, ..., w_k). Clearly, B_k ⊂ B_{k+1}. Hence, M_1 is an optional stopping rule and has no effect on the martingale property. See, for example, Feller (1966, p. 214). Therefore,

E(V_1 - M_1 E(u_1)) = E(a_{M_1} - M_1 E(u_1)) = 0.

Now let i = K in the above solution, so that M_1 has the same distribution as M_j (j ≥ 2). Then

E(V_j - M_j E(u_1)) = E(V_1 - M_1 E(u_1)) = 0, j ≥ 2,

and part (c) follows immediately from Theorem 2.3.1(c), thus completing the proof.

2.4 The Inverse of a Special Triangular Matrix

Let β_0, β_1, β_2, ..., be a sequence of numbers such that β_0 ≠ 0. If B is an (n+1) × (n+1) matrix of the form

B = [ β_0      0        0        ...  0    0
      β_1      β_0      0        ...  0    0
      β_2      β_1      β_0      ...  0    0
       ...
      β_n      β_{n-1}  β_{n-2}  ...  β_1  β_0 ], (2.4.1)

then B^{-1} is obviously of the form

B^{-1} = [ β^{(0)}  0          0          ...  0
           β^{(1)}  β^{(0)}    0          ...  0
           β^{(2)}  β^{(1)}    β^{(0)}    ...  0
            ...
           β^{(n)}  β^{(n-1)}  β^{(n-2)}  ...  β^{(0)} ]. (2.4.2)

We show the following:

Theorem 2.4.1

β^{(0)} = 1/β_0,
β^{(k)} = Σ_{j=1}^{k} (-1)^j A(j:k)/β_0^{j+1}, k ≥ 1, (2.4.3)

with

A(j:k) = Σ β_{i_1} β_{i_2} ... β_{i_j},

the sum extending over all i_1 + ... + i_j = k with i_1 ≥ 1, ..., i_j ≥ 1. If, in addition, B(Z) = Σ_{j=0}^{∞} β_j Z^j converges, then

β^{(k)} = C_z^{(k)} 1/B(Z), k ≥ 0. (2.4.4)

Proof of Theorem 2.4.1

It is clear that β^{(0)} = 1/β_0. To show (2.4.3) true for k ≥ 1, we need simply verify that

Σ_{k=0}^{m} β^{(k)} β_{m-k} = 0, m = 1, 2, ..., n. (2.4.5)

We have

Σ_{k=0}^{m} β^{(k)} β_{m-k} = β_m/β_0 + Σ_{k=1}^{m} β_{m-k} Σ_{j=1}^{k} (-1)^j A(j:k)/β_0^{j+1}.

The coefficient of 1/β_0 in the last expression is β_m - A(1:m) = 0, and for j = 1, 2, ..., m-1 the coefficient of 1/β_0^{j+1} is

(-1)^j { Σ_{k=j}^{m-1} β_{m-k} A(j:k) - A(j+1:m) } = 0,

since conditioning on the value of the last index in A(j+1:m) gives A(j+1:m) = Σ_{k=j}^{m-1} A(j:k) β_{m-k}. Hence, (2.4.5) is zero and, therefore, (2.4.3) is true.

For n ≥ k ≥ 1,

A(j:k) = C_z^{(k)} (β_1 Z + β_2 Z^2 + ... + β_n Z^n)^j.

Hence, from (2.4.3),

β^{(k)} = C_z^{(k)} Σ_{j=1}^{∞} (-1)^j (β_1 Z + ... + β_n Z^n)^j / β_0^{j+1}

(the terms with j > k contribute nothing to the coefficient of Z^k)

= C_z^{(k)} (1/β_0) [1/(1 + (β_1 Z + ... + β_n Z^n)/β_0) - 1],

and, since the constant term in brackets contributes nothing to the coefficient of Z^k for k ≥ 1,

β^{(k)} = C_z^{(k)} 1/(β_0 + β_1 Z + ... + β_n Z^n). (2.4.6)

But (2.4.6) holds for all n ≥ k; hence

β^{(k)} = C_z^{(k)} 1/B(Z), k ≥ 1.

That (2.4.4) holds for k = 0 can be seen by

C_z^{(0)} 1/B(Z) = C_z^{(0)} 1/[1 - (1 - B(Z))] = C_z^{(0)} Σ_{j=0}^{∞} (1 - B(Z))^j = Σ_{j=0}^{∞} (1 - β_0)^j = 1/β_0.

The proof is now complete.

To verify equation (2.2.31), it is noted that the matrix B above equation (2.2.31) is of the form (2.4.1) with

β_k = α_k, k ≠ 1,
β_1 = α_1 - 1,

and, therefore, B(Z) = K(Z) - Z.
Applying Theorem 2.4.1 with n = K-2 to obtain the solution for our particular B^{-1}, we have from Lemma 2.2.3

μ_KK = 1/θ_K = ξ_K - ξ_{K-1} = (1, 1, ..., 1) B^{-1} (1, -1, 0, ..., 0)' = β^{(K-2)} = C_z^{(K-2)} 1/(K(Z) - Z).

CHAPTER 3
THE QUEUE GI/M/∞ WITH BALKING AT QUEUES OF LENGTH K-1

3.1 The Basic System

Consider a queueing system in which customers arrive in the system at times ..., α_{-2}, α_{-1}, α_0, α_1, α_2, ..., such that the interarrival times

u_j = α_j - α_{j-1}, j ≥ 1, (3.1.1)

are mutually independent. The distribution function of u_j will be denoted by

Pr{u_j ≤ u} = F(u), u ≥ 0, j ≥ 1. (3.1.2)

We assume there is a sufficient number of servers so that, if a person joins the queue, his service commences immediately. The queue length at any time t is the number of persons being served at time t (no one has to wait for service) or, equivalently, the number of busy servers at time t. Since a customer balks at a queue of length K-1, there are never more than K-1 servers busy at any one time. Hence, the queues GI/M/∞ and GI/M/s, for s ≥ K-1, both with balking at queues of length K-1, are identical. We also have the apparent absurdity that a person would balk from a system with an infinite number of servers. It would be better in this case to assert that the customer, who arrives to find K-1 servers busy, is rejected by the system.

The successive service times for customers who join the queue are denoted by w_1, w_2, w_3, ..., and are assumed to be mutually independent random variables. Any w_j is also assumed to be independent of the arrival times. The distribution function of w_j is assumed to be

Pr{w_j ≤ w} = 1 - e^{-λw}, w ≥ 0, j ≥ 1. (3.1.3)

As before, we let {Q(t); -∞ < t < +∞} be the stochastic process such that Q(t) represents the number of people in the system at time t. The number of people that can be in the system at any one time is restricted to K by requiring Q(α+0) = K-1 whenever Q(α) = K. See Section 2.1 for a more thorough discussion of Q(t). We are interested in the development of Q(t) beyond the time point α_0.
Without loss of generality, α_0 could be taken to be zero. Once again we are interested in the random variables

N_1 = inf{k > 0 | Q(α_k) = K},
N_n = inf{k > N_{n-1} | Q(α_k) = K}, n ≥ 2, (3.1.4)

M_1 = N_1,
M_j = N_j - N_{j-1}, j ≥ 2, (3.1.5)

and

V_1 = α_{N_1} - α_0,
V_j = α_{N_j} - α_{N_{j-1}}, j ≥ 2. (3.1.6)

A complete description of these random variables is given in Section 2.1. Since the service times are negative exponentially distributed, we find that many of the results derived in Chapter 2 will apply to the queue GI/M/∞ with balking at queues of length K-1 without any change in the proofs. As before, we follow a systematic approach to find the solution for E(V_j).

3.2 An Imbedded Markov Chain

Lemma 2.2.2 applies to the stochastic process {Q(t); -∞ < t < +∞} defined in Section 3.1, and Q(t) is, therefore, in general, a non-Markovian process. However, there exists an imbedded Markov chain defined by

Q_n = Q(α_n), n = 0, 1, 2, .... (3.2.1)

Figures 3.1 and 3.2 give the relation between Q(t) and Q_n. Q_n clearly represents the number of persons in the system at the instant the nth customer arrives. Once again we shall restrict our attention to the cases K ≥ 2, for when K = 1, Pr{Q_n = 1} = 1, n = 0, 1, 2, .... Information obtained from the stochastic process {Q_n; n = 0, 1, 2, ...} will provide sufficient information about {Q(t); -∞ < t < +∞}.

[Figure 3.1. A typical path of Q(t) for GI/M/∞ with balking at queues of length 4.]

[Figure 3.2. Path of Q_n corresponding to Q(t) in Figure 3.1.]

Theorem 3.2.1

The stochastic process {Q_n; n = 0, 1, 2, ...} defined by (3.2.1) has the following properties:

(a) Q_n is a Markov chain;
(b) Q_n is time-homogeneous;
(c) The class {1, 2, ..., K} of states on which Q_n is defined is an aperiodic, positive persistent communicating class; and
(d) The one-step transition probability matrix P, with rows and columns labeled K, K-1, ..., 2, 1, is given by

        K           K-1         ...  3           2           1
K     [ b(K-1,0)    b(K-1,1)    ...  b(K-1,K-3)  b(K-1,K-2)  b(K-1,K-1) ]
K-1   [ b(K-1,0)    b(K-1,1)    ...  b(K-1,K-3)  b(K-1,K-2)  b(K-1,K-1) ]
K-2   [ 0           b(K-2,0)    ...  b(K-2,K-4)  b(K-2,K-3)  b(K-2,K-2) ]
 ...
3     [ 0           0           ...  b(3,1)      b(3,2)      b(3,3)     ]
2     [ 0           0           ...  b(2,0)      b(2,1)      b(2,2)     ]
1     [ 0           0           ...  0           b(1,0)      b(1,1)     ]

where

b(n,k) = (n choose k) ∫_0^∞ (1 - e^{-λu})^k (e^{-λu})^{n-k} dF(u). (3.2.2)

Proof of Theorem 3.2.1

The proofs of parts (a) and (c) are identical to those given for Theorem 2.2.1 (a) and (c). We need only show parts (b) and (d). Let U(·) be the unit step function at zero, and let X_{n+1} be the number of customers who complete their service in (α_n, α_{n+1}]. Then

Q_{n+1} = Q_n + 1 - X_{n+1}, if Q_n < K,
Q_{n+1} = Q_n + 1 - X_{n+1} - 1, if Q_n = K,

so that Q_{n+1} = Q_n + 1 - X_{n+1} - U(Q_n - K), Q_{n+1} ≤ K. Since we are looking at Q(t) at successive arrival times, we have 1 ≤ Q_{n+1} ≤ Q_n + 1. By the balking aspect of the problem, Q_n ≤ K. Hence,

Pr{Q_{n+1} = k | Q_n = j} = 0, for k > j + 1. (3.2.3)

Further,

Pr{Q_{n+1} = k | Q_n = K} = Pr{K - X_{n+1} - 1 + 1 = k} = Pr{(K-1) - X_{n+1} + 1 = k} = Pr{Q_{n+1} = k | Q_n = K-1}. (3.2.4)

For 1 ≤ j ≤ K-1 and k ≤ j+1, we have

Pr{Q_{n+1} = k | Q_n = j} = Pr{j - X_{n+1} + 1 = k | Q_n = j}
= Pr{X_{n+1} = j + 1 - k | Q_n = j}
= Pr{exactly j + 1 - k persons out of j complete their service in (α_n, α_{n+1}]}
= E_{u_{n+1}}(Pr{exactly j + 1 - k of the j independent events {w_i ≤ u_{n+1}} occur})
= b(j, j + 1 - k). (3.2.5)

Equations (3.2.3) through (3.2.5) are independent of n, and hence part (b) follows. Application of these equations for j, k = 1, 2, ..., K gives the matrix P, thus completing the proof.

Let μ_KK be the mean recurrence time of the state K for the imbedded Markov chain of Theorem 3.2.1. If we let p_{n,k} = b(n, k-1), the matrix P* of Corollary 2.2.1 and the matrix P of Theorem 3.2.1 are identical. Hence, μ_22, μ_33, μ_44, ..., satisfy

μ_22 = 1/b(1,0),
μ_KK = (1/b(K-1,0))[1 + Σ_{k=2}^{K-1} b(K-1, K-k+1) Σ_{j=k}^{K-1} μ_jj], K ≥ 3. (3.2.6)

The matrix P of Theorem 3.2.1 and equation (3.2.6) both contain the quantity b(n,k) defined by equation (3.2.2). However, this integral expression is not in a form that lends itself to easy evaluation. Fortunately, b(n,k) can be expressed as a function of the parameter λ of (3.1.3) and the Laplace transform of (3.1.2) in the following manner.
Let

    ψ(θ) = ∫_0^∞ e^{−θu} dF(u),   θ ≥ 0;                         (3.2.7)

then

    b(n,k) = ∫_0^∞ C(n,k) (1 − e^{−λu})^k (e^{−λu})^{n−k} dF(u)
           = ∫_0^∞ C(n,k) Σ_{j=0}^{k} C(k,j) (−e^{−λu})^j (e^{−λu})^{n−k} dF(u)
           = C(n,k) Σ_{j=0}^{k} C(k,j) (−1)^j ∫_0^∞ e^{−(n−k+j)λu} dF(u)
           = C(n,k) Σ_{j=0}^{k} C(k,j) (−1)^j ψ(λ(n−k+j)).       (3.2.8)

3.3 Some Properties of the Time Between Balks

The following theorems refer to the random variables defined by (3.1.5) and (3.1.6). The proofs of these theorems are identical to those given for Theorems 2.3.1 and 2.3.2. Hence, only the statements of the theorems will be given.

Theorem 3.3.1

(a) M_1, M_2, M_3, ..., are mutually independent random variables,
(b) M_2, M_3, M_4, ..., are identically distributed,
(c) E(M_j) = μ_KK, j ≥ 2, with μ_KK given by (3.2.6).

Theorem 3.3.2

(a) V_1, V_2, V_3, ..., are mutually independent random variables,
(b) V_2, V_3, V_4, ..., are identically distributed,
(c) E(V_j) = E(M_j) E(u_1), j ≥ 1,
           = μ_KK E(u_1), j ≥ 2,
with μ_KK given by (3.2.6) and u_1 defined by (3.1.2).

CHAPTER 4

THE QUEUE GI/D/1 WITH BALKING AT QUEUES OF LENGTH K−1

4.1 The Basic System

Consider a queueing system in which customers arrive in the system at the times ..., α_{−2}, α_{−1}, α_0, α_1, α_2, ..., such that the interarrival times

    u_j = α_j − α_{j−1},   j ≥ 1,                                (4.1.1)

are mutually independent. The distribution function of u_j will be denoted by

    Pr{u_j ≤ u} = F(u),   u ≥ 0,   j ≥ 1.                        (4.1.2)

One server is available to handle the needs of the customers. This server dispenses his service on a strict "first come, first served" basis. The service time of any customer who joins the queue is assumed to be a constant value b.

Let {Q(t); −∞ < t < ∞} be the stochastic process such that Q(t) represents the number of people in the system at time t. The number of people that can be in the system at any one time is restricted to K−1 by requiring Q(α+0) = K−1 whenever Q(α) = K. A more thorough discussion of Q(t) and its relation to the queue length is found in Section 2.1. As in the previous two chapters, we are interested in the development of Q(t) beyond the time point α_0. We could, therefore, take α_0 = 0 without loss of generality.
Define

    N_1 = inf{k > 0 | Q(α_k) = K},
    N_n = inf{k > N_{n−1} | Q(α_k) = K},   n ≥ 2,                (4.1.3)

    M_1 = N_1,   M_j = N_j − N_{j−1},   j ≥ 2,                   (4.1.4)

and

    V_1 = α_{N_1} − α_0,   V_j = α_{N_j} − α_{N_{j−1}},   j ≥ 2. (4.1.5)

A complete description of these random variables is given in Section 2.1. Our ultimate objective is to find an expression for E(V_j). Unfortunately, we are only able to obtain an exact expression for E(V_j) in terms of quantities that are difficult (if not impossible) to obtain. The results that we do establish are based on the concept of the waiting time in the system. The definition of the waiting time and our motivation for its use in the search for a solution to E(V_j) now follow.

Let W(t) be the amount of time it would take our server to finish serving all of the customers present in the queue at time t. W(t) is then called the waiting time in the system at time t. If α is the time of an arrival into the system, then Q(α) < K implies W(α+0) = W(α−0) + b and Q(α) = K implies W(α+0) = W(α−0). The latter condition reflects the fact that a customer arriving in the system to find K−1 persons already in the queue leaves without waiting to be served.

By the balking aspect of the problem, at most K−1 persons may be in the queue at any particular time t. Since the service time is a constant, b, we have 0 ≤ W(t) ≤ (K−1)b.

Again let α represent the time of an arrival into the system. Clearly, Q(α) = 1 if and only if W(α−0) = 0. Further, if Q(α) = j (j = 2,3,...,K) we must have Q(α−0) = j−1. Since the service time for any one customer is b, a constant, (j−2)b < W(α−0) ≤ (j−1)b. That is, for j = 2,3,...,K,

    Q(α) = j if and only if (j−2)b < W(α−0) ≤ (j−1)b.

Hence, complete knowledge of the stochastic process defined by Q_n = Q(α_n), n = 0,1,2,..., can be obtained from a knowledge of the probability law of the stochastic process defined by

    W_n = W(α_n − 0),   n = 0,1,2,....                           (4.1.6)

Figures 4.1 and 4.2 give typical realizations of Q(t) and W_n.

Figure 4.1. A Typical Path of Q(t) for GI/D/1 with Balking at Queues of Length 4.

Figure 4.2. Path of W_n Corresponding to Q(t) in Figure 4.1.

The random variables defined by (4.1.3) can now be expressed in the equivalent form

    N_1 = inf{k > 0 | W_k > (K−2)b},
    N_n = inf{k > N_{n−1} | W_k > (K−2)b},   n ≥ 2.              (4.1.7)

It may be further shown, by considering a slight modification of the proof of Lemma 2.2.2, that both Q(t) and W(t) are non-Markovian processes. Since we now have constant rather than negative exponential service times, it is also true that Q_n is non-Markovian. However, it will be shown in the next section that W_n is a Markov process. Because of the Markov nature of W_n and the equivalence of (4.1.3) and (4.1.7), we are led to consider the stochastic process {W_n; n = 0,1,2,...} in our search for an expression for E(V_j). As always, we shall ignore the trivial case K = 1.

4.2 The Waiting Time in the System

Let {W_n; n = 0,1,2,...} be the stochastic process defined by (4.1.6), so that W_n is the waiting time in the system immediately preceding the nth arrival into the system. Then it is clear that W_n ≤ (K−2)b, i.e., Q_n < K, implies

    W_{n+1} = 0,                   if W_n + b ≤ u_{n+1},
    W_{n+1} = W_n + b − u_{n+1},   if W_n + b > u_{n+1},

and W_n > (K−2)b, i.e., Q_n = K, implies

    W_{n+1} = 0,               if W_n ≤ u_{n+1},
    W_{n+1} = W_n − u_{n+1},   if W_n > u_{n+1}.

If U(·) is the unit step function at zero, we can rewrite the above expressions in the form

    W_{n+1} = max{0, W_n − u_{n+1} + b U((K−2)b − W_n)},   n ≥ 0.   (4.2.1)

We now note the relation between (4.2.1) and the analogous expression for the waiting time just prior to an arrival in the queue GI/D/1 (no balking). If we let {W*_n; n = 0,1,2,...} be the stochastic process that represents the latter waiting time, we have the well-known result

    W*_{n+1} = max{0, W*_n − u_{n+1} + b},   n ≥ 0.              (4.2.2)

See, for example, Prabhu (1965b). The difference between W_n and W*_n is that, for the former, a person who enters the system and faces a queue of length K−1 balks and adds no service time to the system.
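The recursion (4.2.1) is easy to exercise numerically. The sketch below iterates it for exponentially distributed interarrival times (an illustrative choice only; the chapter assumes a general F) and checks along the way that the path never leaves the interval [0, (K−1)b]. All parameter values and names are hypothetical.

```python
import random

def simulate_balking_waits(K, b, interarrival, n_arrivals, seed=1):
    """Iterate W_{n+1} = max(0, W_n - u_{n+1} + b*U((K-2)b - W_n)):
    an arrival finding W_n > (K-2)b balks and adds no service time b.
    Returns the number of balking arrivals observed."""
    random.seed(seed)
    W, balks = 0.0, 0
    for _ in range(n_arrivals):
        u = interarrival()
        if W > (K - 2) * b:              # this arrival balks
            balks += 1
            W = max(0.0, W - u)          # no b added to the workload
        else:                            # arrival joins and adds b
            W = max(0.0, W - u + b)
        assert 0.0 <= W <= (K - 1) * b   # state space claimed in Section 4.2
    return balks

# Overloaded example: K = 4, b = 1, exponential interarrivals with mean 0.8
balks = simulate_balking_waits(4, 1.0, lambda: random.expovariate(1.25), 100_000)
```

Because the mean interarrival time here is smaller than the service time b, the workload repeatedly reaches the balking level and a substantial fraction of arrivals is turned away.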
We now formally state and prove some basic properties of the stochastic process {W_n; n = 0,1,2,...}.

Theorem 4.2.1

The stochastic process {W_n; n = 0,1,2,...} is a time-homogeneous Markov process concentrated on the continuous state space [0, (K−1)b] with one-step transition distribution

    Pr{W_{n+1} ≤ x | W_n = y} = 1 − F(y − x − 0),       y > (K−2)b,
                              = 1 − F(y + b − x − 0),   y ≤ (K−2)b.   (4.2.3)

Note that the interval on which (4.2.3) is concentrated is actually a subinterval of [0, (K−1)b], this subinterval being a function of y.

Proof of Theorem 4.2.1

Let m_1, m_2, m_3, ..., be a sequence of integers such that m_1 < m_2 < ... < m_k < n. Then

    Pr{W_{n+1} ≤ x | W_n = y, W_{m_k} = y_k, ..., W_{m_1} = y_1}
      = Pr{max(y − u_{n+1} + b U((K−2)b − y), 0) ≤ x}
      = Pr{W_{n+1} ≤ x | W_n = y}.

Hence W_n is Markovian. Now,

    Pr{W_{n+1} ≤ x | W_n = y}
      = Pr{W_{n+1} = 0 | W_n = y} + Pr{0 < W_{n+1} ≤ x | W_n = y}
      = Pr{y − u_{n+1} + b U((K−2)b − y) ≤ 0} + Pr{0 < y − u_{n+1} + b U((K−2)b − y) ≤ x}
      = Pr{u_{n+1} ≥ y + b U((K−2)b − y) − x}
      = 1 − F(y + b U((K−2)b − y) − x − 0),                      (4.2.4)

which is independent of n, and therefore W_n is time-homogeneous. Equation (4.2.4) is the same as (4.2.3), thus completing the proof.

To simplify the notation, we write

    P_n(y,x) = Pr{W_n ≤ x | W_0 = y},   n ≥ 1,                   (4.2.5)

and

    P(y,x) = P_1(y,x).                                           (4.2.6)

Theorem 4.2.2

The n-step transition distribution functions defined by (4.2.5) are concentrated on the continuous state space [0, (K−1)b] (or a subinterval of it) and satisfy

    P_{n+1}(z,x) = ∫_{max(x,(K−2)b)}^{∞} P_n(z,y) d_y F(y − x − 0)
                 + ∫_{max(0,x−b)}^{(K−2)b} P_n(z,y) d_y F(y + b − x − 0)
                 − P_n(z,(K−2)b) [F((K−1)b − x − 0) − F((K−2)b − x − 0)].   (4.2.7)

Proof of Theorem 4.2.2

By the Chapman–Kolmogorov equations,

    P_{n+1}(z,x) = ∫_0^{(K−1)b} P(y,x) P_n(z,dy).                (4.2.8)

Integrating by parts, (4.2.8) becomes

    P_{n+1}(z,x) = [P(y,x) P_n(z,y)] from y = 0 to y = (K−1)b − ∫_0^{(K−1)b} P_n(z,y) P(dy,x)
                 = P((K−1)b, x) − ∫_0^{(K−1)b} P_n(z,y) P(dy,x),   (4.2.9)

the term at y = 0 vanishing. By Theorem 4.2.1, we have, for (K−2)b < y ≤ (K−1)b,

    P(dy,x) = 0,                 y ≤ x,
            = −d_y F(y − x − 0), y > x;

for y < (K−2)b,

    P(dy,x) = 0,                     y ≤ x − b,
            = −d_y F(y + b − x − 0), y > x − b;

and, at y = (K−2)b,

    P(dy,x) = F((K−1)b − x − 0) − F((K−2)b − x − 0),

so that (4.2.9) becomes

    P_{n+1}(z,x) = 1 − F((K−1)b − x − 0)
                 + ∫_{max(x,(K−2)b)}^{(K−1)b} P_n(z,y) d_y F(y − x − 0)
                 + ∫_{max(0,x−b)}^{(K−2)b} P_n(z,y) d_y F(y + b − x − 0)
                 − P_n(z,(K−2)b) [F((K−1)b − x − 0) − F((K−2)b − x − 0)].   (4.2.10)

But P_n(z,y) = 1 for y > (K−1)b. Hence (4.2.10) becomes (4.2.7), thus completing the proof.

Various attempts have been made to establish the stationary distribution of W_n, all without success. Since the stochastic kernel P(y,x) does not satisfy the regularity conditions stated in Feller (1966, Sec. VIII.7), we are not even sure that W_n possesses a stationary distribution. As will be seen in the next section, the most important result of this section is the Markov property of W_n established in Theorem 4.2.1.

4.3 Some Properties of the Time Between Balks

We are now ready to obtain solutions for the expected values of the random variables M_j and V_j defined by (4.1.4) and (4.1.5), respectively. In the previous chapters exact results were derived for these expectations, but in this chapter we must be content to utilize unsolved expressions for the expectations of interest. In order to reach our objectives, we make use of the properties of the following quantities. Let

    S_n = nb − u_1 − ... − u_n,   n ≥ 1,

and, for 0 ≤ y ≤ (K−1)b, y a real number, let

    y′ = y,       y ≤ (K−2)b,
    y′ = y − b,   y > (K−2)b.

Define

    N(y) = inf{n > 0 | W_n > (K−2)b; W_0 = y}                    (4.3.1)

(so that N(y) represents the number of arrivals until a balk occurs, conditional on an initial amount y of waiting time in the system), and

    M(y) = inf{n > 0 | W_n = 0 or W_n > (K−2)b; W_0 = y}         (4.3.2)

(so that M(y) is the number of arrivals until a customer either enters an empty queue or balks, conditional on an initial amount y of waiting time in the system).
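Before working with N(y) and M(y) analytically, their definitions can be checked by direct simulation of the recursion (4.2.1). The sketch below, with hypothetical parameter values and exponential interarrivals, repeatedly draws the pair (M(y), N(y)); since M(y) stops at the first return to zero or balk while N(y) waits for the balk itself, M(y) ≤ N(y) on every path.

```python
import random

def first_passage(K, b, y, interarrival):
    """Iterate W_{n+1} = max(0, W_n - u_{n+1} + b*U((K-2)b - W_n)) from
    W_0 = y and return the pair (M, N) of (4.3.2) and (4.3.1):
    M = first index n with W_n = 0 or W_n > (K-2)b,
    N = first index n with W_n > (K-2)b."""
    W, n, M = y, 0, None
    while True:
        n += 1
        u = interarrival()
        W = max(0.0, W - u + (b if W <= (K - 2) * b else 0.0))
        if M is None and (W == 0.0 or W > (K - 2) * b):
            M = n
        if W > (K - 2) * b:
            return M, n            # n is a realization of N(y)

random.seed(7)
K, b = 4, 1.0
draws = [first_passage(K, b, 0.5, lambda: random.expovariate(1.25))
         for _ in range(20_000)]
N_mean = sum(n for _, n in draws) / len(draws)   # Monte Carlo estimate of E(N(y))
```

The interarrival rate 1.25 makes the drift b − E(u_1) positive, so the balking level is reached with probability one and each replication terminates.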
It is clear that M(y) has the equivalent representation

    M(y) = inf{n > 0 | S_n ≤ −y′ or S_n > (K−2)b − y′}

and, hence, is the index at which the random walk {S_n; n = 1,2,3,...} first leaves the interval (−y′, (K−2)b − y′]. Finally, let J = J(y) be the random variable that represents the number of customers who enter an empty queue prior to the first person to balk, conditional on an initial amount y of waiting time in the system. That is, J is the number of times the waiting time process W_n (n ≥ 1) takes the value zero before it takes a value greater than (K−2)b.

First we shall prove a few lemmas that lead to a theorem which expresses E(N(y)) in terms of expectations and probabilities associated with the random variables M(y), S_{M(y)}, M(0), and S_{M(0)}. Note that when K = 2, M(0) = M(y) = 1 and S_{M(0)} = S_1 = b − u_1.

Lemma 4.3.1

    Pr{J = 0} = Pr{S_{M(y)} > (K−2)b − y′},
    Pr{J = j} = Pr{S_{M(y)} ≤ −y′} [Pr{S_{M(0)} ≤ 0}]^{j−1} Pr{S_{M(0)} > (K−2)b},   j ≥ 1.

Proof of Lemma 4.3.1

Define

    L_1 = M(y),
    L_j = inf{n > L_{j−1} | W_n = 0 or W_n > (K−2)b; W_0 = y},   j ≥ 2

(so that L_j is the index of the jth person to balk or enter an empty queue). Keep in mind that L_j is a function of y. Now, if j = 0,

    Pr{J = 0} = Σ_{n=1}^{∞} Pr{J = 0, N(y) = n}
      = Σ_{n=1}^{∞} Pr{0 < W_1 ≤ (K−2)b, ..., 0 < W_{n−1} ≤ (K−2)b, W_n > (K−2)b | W_0 = y}
      = Σ_{n=1}^{∞} Pr{0 < S_1 + y′ ≤ (K−2)b, ..., 0 < S_{n−1} + y′ ≤ (K−2)b, S_n + y′ > (K−2)b}
      = Σ_{n=1}^{∞} Pr{M(y) = n, S_{M(y)} + y′ > (K−2)b}
      = Pr{S_{M(y)} > (K−2)b − y′}.

For j ≥ 1,

    Pr{J = j} = Pr{W_{L_1} = 0, W_{L_2} = 0, ..., W_{L_j} = 0, W_{L_{j+1}} > (K−2)b | W_0 = y}
      = Pr{W_{L_{j+1}} > (K−2)b | W_{L_j} = 0} Pr{W_{L_j} = 0 | W_{L_{j−1}} = 0}
        ... Pr{W_{L_2} = 0 | W_{L_1} = 0} Pr{W_{L_1} = 0 | W_0 = y},   (4.3.3)

the last equality following from the Markov nature of W_n. But we have

    Pr{W_{L_1} = 0 | W_0 = y} = Σ_{n=1}^{∞} Pr{W_{M(y)} = 0, M(y) = n | W_0 = y}
      = Σ_{n=1}^{∞} Pr{S_{M(y)} + y′ ≤ 0, M(y) = n}
      = Pr{S_{M(y)} ≤ −y′},                                      (4.3.4)

    Pr{W_{L_{i+1}} = 0 | W_{L_i} = 0} = Σ_{n=1}^{∞} Pr{L_{i+1} − L_i = n, W_{L_{i+1}} = 0 | W_{L_i} = 0}
      = Σ_{n=1}^{∞} Pr{0 < S_1 ≤ (K−2)b, ..., 0 < S_{n−1} ≤ (K−2)b, S_n ≤ 0}
      = Σ_{n=1}^{∞} Pr{M(0) = n, S_{M(0)} ≤ 0}
      = Pr{S_{M(0)} ≤ 0},   1 ≤ i ≤ j−1.                         (4.3.5)

Following a proof analogous to that used to obtain (4.3.5), we get

    Pr{W_{L_{j+1}} > (K−2)b | W_{L_j} = 0} = Pr{S_{M(0)} > (K−2)b},   j ≥ 1.   (4.3.6)

Applying (4.3.4) through (4.3.6) to (4.3.3), we complete the proof.

Lemma 4.3.2

    E(N(y) | J = 0) = E(M(y) | S_{M(y)} > (K−2)b − y′),
    E(N(y) | J = j) = E(M(y) | S_{M(y)} ≤ −y′) + (j−1) E(M(0) | S_{M(0)} ≤ 0)
                      + E(M(0) | S_{M(0)} > (K−2)b),   j ≥ 1.

Proof of Lemma 4.3.2

    Pr{N(y) = n | J = 0} = Pr{N(y) = n, J = 0}/Pr{J = 0}
      = Pr{0 < S_1 + y′ ≤ (K−2)b, ..., 0 < S_{n−1} + y′ ≤ (K−2)b, S_n + y′ > (K−2)b}/Pr{J = 0}
      = Pr{M(y) = n, S_{M(y)} + y′ > (K−2)b}/Pr{S_{M(y)} + y′ > (K−2)b}
      = Pr{M(y) = n | S_{M(y)} > (K−2)b − y′},

and therefore E(N(y) | J = 0) = E(M(y) | S_{M(y)} > (K−2)b − y′). For j ≥ 1, note that

    E(N(y) | J = j) = E(L_{j+1} | J = j)
      = E(L_1 | J = j) + E(L_2 − L_1 | J = j) + ... + E(L_{j+1} − L_j | J = j).   (4.3.7)

We now solve for the various terms in the last expression of equation (4.3.7).

    E(L_1 | J = j) = Σ_{n=1}^{∞} n Pr{J = j, L_1 = n}/Pr{J = j}
      = Σ_{n=1}^{∞} n Pr{W_{L_1} = 0, ..., W_{L_j} = 0, W_{L_{j+1}} > (K−2)b, L_1 = n | W_0 = y}/Pr{J = j}
      = Σ_{n=1}^{∞} n Pr{W_{L_{j+1}} > (K−2)b | W_{L_j} = 0} Pr{W_{L_j} = 0 | W_{L_{j−1}} = 0}
        ... Pr{W_{L_2} = 0 | W_{L_1} = 0} Pr{W_{L_1} = 0, L_1 = n | W_0 = y}/Pr{J = j}
      = Σ_{n=1}^{∞} n Pr{W_{L_1} = 0, L_1 = n | W_0 = y}/Pr{W_{L_1} = 0 | W_0 = y}
      = Σ_{n=1}^{∞} n Pr{M(y) = n | S_{M(y)} + y′ ≤ 0}
      = E(M(y) | S_{M(y)} ≤ −y′).                                (4.3.8)

The second equality from the last follows from (4.3.3), and the next to the last equality follows from (4.3.4). Using techniques similar to those employed in deriving (4.3.8), we have

    E(L_{i+1} − L_i | J = j) = E(M(0) | S_{M(0)} ≤ 0),   1 ≤ i ≤ j−1,   (4.3.9)

and

    E(L_{j+1} − L_j | J = j) = E(M(0) | S_{M(0)} > (K−2)b).      (4.3.10)

Applying (4.3.8) through (4.3.10) to (4.3.7), we complete the proof.

We are now ready to find the expression for E(N(y)).

Theorem 4.3.1

If K ≥ 2, then

    E(N(y)) = E(M(y)) + E(M(0)) Pr{S_{M(y)} ≤ −y′}/Pr{S_{M(0)} > (K−2)b}

and, in particular, if K = 2, then

    E(N(y)) = 1 + [1 − F(b + y′ − 0)]/F(b − 0).

Proof of Theorem 4.3.1

For K ≥ 2, let A(y) be the event {S_{M(y)} + y′ > (K−2)b} and A′(y) the complement event {S_{M(y)} + y′ ≤ 0}.
Further, let p(y) = Pr{A(y)} and q(y) = Pr{A′(y)} = 1 − p(y). We have, by Lemmas 4.3.1 and 4.3.2, that

    E(N(y)) = Σ_{j=0}^{∞} E(N(y) | J = j) Pr{J = j}
      = E(M(y) | A(y)) p(y) + E(M(y) | A′(y)) q(y) Σ_{j=1}^{∞} q(0)^{j−1} p(0)
        + E(M(0) | A′(0)) q(y) Σ_{j=1}^{∞} (j−1) q(0)^{j−1} p(0)
        + E(M(0) | A(0)) q(y) Σ_{j=1}^{∞} q(0)^{j−1} p(0)
      = E(M(y)) + E(M(0) | A′(0)) q(y) [1/p(0) − 1] + E(M(0) | A(0)) q(y)
      = E(M(y)) + q(y) [E(M(0) | A′(0)) (q(0)/p(0)) + E(M(0) | A(0)) (p(0)/p(0))]
      = E(M(y)) + E(M(0)) q(y)/p(0),

and the first half of the theorem is proved. For K = 2, we have immediately that

    Pr{N(y) = 1} = Pr{W_1 > 0 | W_0 = y} = Pr{S_1 + y′ > 0}
                 = Pr{u_1 < b + y′} = F(b + y′ − 0) = Pr{J = 0}  (4.3.11)

and

    Pr{N(y) = n} = Pr{W_1 = 0, ..., W_{n−1} = 0, W_n > 0 | W_0 = y}
      = Pr{b − u_1 + y′ ≤ 0, b − u_2 ≤ 0, ..., b − u_{n−1} ≤ 0, b − u_n > 0}
      = [1 − F(b + y′ − 0)] [1 − F(b − 0)]^{n−2} F(b − 0)
      = Pr{J = n−1},   n ≥ 2.                                    (4.3.12)

From (4.3.11) and (4.3.12), E(N(y)) for the case K = 2 follows trivially, thus completing the proof.

We now proceed to find an approximate expression for E(N(y)) when K ≥ 3. Since M(y) represents the index of first passage of the random walk {S_n; n = 1,2,3,...} out of the interval (−y′, (K−2)b − y′], we take y > 0 (and hence y′ > 0) so that we can use Wald's approximation to yield the following results for the random variables M(y) and S_{M(y)}. See, for example, Ferguson (1967). Let

    ζ(θ) = ∫_0^∞ e^{−θu} dF(u),                                  (4.3.13)

and let θ_0 be the nonzero solution (if it exists) of

    exp(θ_0 b) ζ(θ_0) = 1.                                       (4.3.14)

We then have, for y > 0 and K ≥ 3,

    Pr{S_{M(y)} > (K−2)b − y′} ≈ y′/[(K−2)b],   E(u_1) = b,
    Pr{S_{M(y)} > (K−2)b − y′} ≈ [1 − exp(−θ_0 y′)]/{exp(θ_0[(K−2)b − y′]) − exp(−θ_0 y′)},
                                E(u_1) ≠ b,                      (4.3.15)

and

    E(M(y)) ≈ y′[(K−2)b − y′]/Var(u_1),   E(u_1) = b,
    E(M(y)) ≈ {[(K−2)b − y′][1 − exp(−θ_0 y′)] − y′[exp(θ_0[(K−2)b − y′]) − 1]}
              / {(b − E(u_1)) [exp(θ_0[(K−2)b − y′]) − exp(−θ_0 y′)]},
              E(u_1) ≠ b.                                        (4.3.16)

By the use of Theorem 4.3.1 and equations (4.3.15) and (4.3.16), an approximate expression for E(N(y)) may be obtained. However, if y = 0, then the above approximations give E(M(0)) = Pr{S_{M(0)} > (K−2)b} = 0. Hence, the substitution of these quantities into the expression of Theorem 4.3.1 yields 0/0, an undefined quantity.
To circumvent this difficulty, we write

    E(N(y)) = E(M(y)) + lim_{a→0} E(M(a)) Pr{S_{M(y)} ≤ −y′}/Pr{S_{M(a)} > (K−2)b − a′}

and substitute the approximations before taking the limit. We have, after simplification, for K ≥ 3,

    E(N(y)) ≈ [(K−2)b − y′][(K−2)b + y′]/Var(u_1),   E(u_1) = b,
    E(N(y)) ≈ [(K−2)b − y′]/(b − E(u_1)) − [exp(θ_0(K−2)b) − exp(θ_0 y′)]/[θ_0(b − E(u_1))],
              E(u_1) ≠ b.                                        (4.3.17)

It will be shown in Theorem 4.3.3 that (4.3.17) can be used to put an upper bound on E(V_j). The next two lemmas and the theorem following them help us to attain this goal, while giving insight into why we have devoted much effort to obtaining E(N(y)). Let

    Y_j = W_{N_j},   j ≥ 1,                                      (4.3.18)

(so that Y_j is the amount of waiting time in the system immediately prior to the jth balk).

Lemma 4.3.3

The stochastic process {Y_j; j = 1,2,3,...} is a time-homogeneous Markov process on the continuous state space ((K−2)b, (K−1)b].

Proof of Lemma 4.3.3

By the definition of N_j in Section 4.1, the state space is as described. For 1 ≤ i_1 < i_2 < ... < i_m < j, by the Markov and time-homogeneity properties of W_n we have

    Pr{Y_{j+1} ≤ x | Y_j = y, Y_{i_m} = y_m, ..., Y_{i_1} = y_1}
      = Pr{W_{N_{j+1}} ≤ x | W_{N_j} = y, W_{N_{i_m}} = y_m, ..., W_{N_{i_1}} = y_1}
      = Pr{W_{N_2} ≤ x | W_{N_1} = y}
      = Pr{Y_2 ≤ x | Y_1 = y},

thus completing the proof.

Lemma 4.3.4

(a) The distribution of M_{j+1} conditional on Y_j = y is the same as that of N(y), for j ≥ 1.
(b) The distribution of V_{j+1} conditional on Y_j = y is the same as that of α_{N(y)} − α_0, for j ≥ 1.
(c) min_{(K−2)b < y ≤ (K−1)b} E(N(y)) ≤ E(M_j) ≤ max_{(K−2)b < y ≤ (K−1)b} E(N(y)),   j ≥ 2.

Proof of Lemma 4.3.4

By the Markov nature of W_n we have

    Pr{M_{j+1} = n | Y_j = y}
      = Pr{W_{N_j+1} ≤ (K−2)b, ..., W_{N_j+n−1} ≤ (K−2)b, W_{N_j+n} > (K−2)b | W_{N_j} = y}
      = Pr{W_1 ≤ (K−2)b, ..., W_{n−1} ≤ (K−2)b, W_n > (K−2)b | W_0 = y}
      = Pr{N(y) = n},

completing part (a) of the lemma. Further, since u_1, u_2, u_3, ..., are mutually independent and identically distributed,

    Pr{V_{j+1} ≤ x | Y_j = y} = Pr{u_{N_j+1} + ... + u_{N_j+M_{j+1}} ≤ x | W_{N_j} = y}
      = Σ_{n=1}^{∞} Pr{u_{N_j+1} + ... + u_{N_j+n} ≤ x, M_{j+1} = n | W_{N_j} = y}
      = Σ_{n=1}^{∞} Pr{α_n − α_0 ≤ x, N(y) = n}
      = Pr{α_{N(y)} − α_0 ≤ x},

and the (b) part is proved. Finally, from the (a) part of the lemma, we have, for j ≥ 1, E(M_{j+1}) = E{E(N(Y_j))}. But, by Lemma 4.3.3, (K−2)b < Y_j ≤ (K−1)b, so that part (c) of the lemma follows. The proof is now complete.

Note that Y_1, Y_2, Y_3, ..., are not identically distributed unless Y_1 has the same distribution as the stationary distribution of Y_j. Therefore, Lemma 4.3.4 implies that neither M_2, M_3, M_4, ..., nor V_2, V_3, V_4, ..., are identically distributed sequences of random variables in general.

Theorem 4.3.2

(a) E(α_{N(y)} − α_0) = E(N(y)) E(u_1),
(b) E(V_j) = E(M_j) E(u_1),   j ≥ 1,
with u_1 given by (4.1.2).

Proof of Theorem 4.3.2

Let α*_n = α_n − α_0, so that

    α*_n = u_1 + ... + u_n,   n ≥ 1.

The sequence α*_n − n E(u_1) forms a martingale and E(α*_n − n E(u_1)) = 0 for n ≥ 1. Let B_k be the σ-field of events generated by (α*_1, ..., α*_k); then the event {N(y) > k} ∈ B_k and B_k ⊂ B_{k+1}. Hence, N(y) is an optional stopping rule and therefore has no effect on the martingale property. See, for example, Feller (1966). We now have

    E(α*_{N(y)} − N(y) E(u_1)) = 0,

which establishes part (a) of the theorem. By Lemma 4.3.4 and part (a) above, we can write

    E(V_{j+1}) = E{E(V_{j+1} | Y_j)} = E{E(α_{N(Y_j)} − α_0 | Y_j)}
               = E(u_1) E{E(N(Y_j) | Y_j)} = E(u_1) E(M_{j+1}),   j ≥ 1.

Similarly, we have E(V_1) = E(α_{N(W_0)} − α_0) = E(u_1) E(M_1). The proof is now complete.

If one could obtain the exact value of E(M_j), then the above theorem implies E(V_j) could be easily found. However, the best we are able to do is obtain an approximate upper bound for E(V_j) when K ≥ 3. An exact upper (and lower) bound for E(V_j) when K = 2 can be obtained directly from Theorem 4.3.1. No approximate lower bound for E(V_j) in the case K ≥ 3 can be found since, when y = (K−1)b (or, equivalently, y′ = (K−2)b), the approximation (4.3.17) yields a value of zero. We now state formally the results that can be obtained. Since they hold for all j ≥ 2, the steady-state solution satisfies these bounds also.

Theorem 4.3.3

Let θ_0 satisfy (4.3.14). If K = 2, then

    E(u_1)/F(b−0) ≤ E(V_j) ≤ E(u_1) [1 + 1/F(b−0)],   j ≥ 2.
If K ≥ 3, then, approximately,

    E(V_j) ≤ E(u_1) (2K−5) b²/Var(u_1),   E(u_1) = b,

and

    E(V_j) ≤ b E(u_1)/(b − E(u_1)) − E(u_1) [exp(θ_0(K−2)b) − exp(θ_0(K−3)b)]/[θ_0(b − E(u_1))],
             E(u_1) ≠ b,   j ≥ 2.

Proof of Theorem 4.3.3

By Lemma 4.3.4(c), we need the maximum and minimum of E(N(y)) from Theorem 4.3.1 for y in the range (K−2)b < y ≤ (K−1)b or, equivalently, (K−3)b < y′ ≤ (K−2)b, to obtain bounds for E(M_j). The exact bounds for E(N(y)) when K = 2 are taken directly from the expression in the theorem, while the approximate results for K ≥ 3 are obtained from (4.3.17). It is easily shown that the lower bounds are attained when y′ = (K−2)b and the upper bounds when y′ = (K−3)b. By Theorem 4.3.2(b) we need only multiply these bounds by E(u_1) to complete the proof.

Finally, let F_Y(·) denote the stationary distribution of Y_j. That is, F_Y(·) satisfies

    F_Y(x) = ∫_{(K−2)b}^{(K−1)b} Pr{Y_2 ≤ x | Y_1 = y} dF_Y(y)   (4.3.19)

when (K−2)b < x ≤ (K−1)b. If Y_1 has the distribution F_Y(·), then it is well known that Y_j has the distribution F_Y(·) for all j. See, for example, Feller (1966). We then have, by Lemma 4.3.4, the following theorem.

Theorem 4.3.4

If Y_1 has the distribution F_Y(·) that satisfies (4.3.19), then

(a) M_2, M_3, M_4, ..., are identically distributed,
(b) V_2, V_3, V_4, ..., are identically distributed, and
(c) E(M_j) = ∫_{(K−2)b}^{(K−1)b} E(N(y)) dF_Y(y),   j ≥ 2,

with E(N(y)) given by Theorem 4.3.1.

CHAPTER 5

THE INVENTORY PROBLEM: DISCRETE CASE

5.1 Definition of the Inventory System

We suppose there exists a subwarehouse, maintaining an inventory of finite capacity S, that holds material (discrete) for future demand. We assume the item-by-item demand for the stored objects occurs according to the stochastic process {D(t); t ≥ 0} defined by

    D(t) = Σ_{j=1}^{∞} U(t − T_j)                                (5.1.1)

with U(·) the unit step function at zero. It will be assumed that the interdemand times, T_2 − T_1, T_3 − T_2, T_4 − T_3, ..., for the items in storage are mutually independent and that the distribution of T_j − T_{j−1} is given by

    Pr{T_j − T_{j−1} ≤ u} = G(u),   u ≥ 0,   j ≥ 2.              (5.1.2)

In order to maintain a stock on hand, the subwarehouse places an order for replacement items to a warehouse. It will be held that items are so ordered in lots of integral size ν (1 ≤ ν ≤ S) and that orders are placed at the times σ_1, σ_2, σ_3, ..., with σ_j defined by

    σ_j = inf{t | D(t) = jν},   j ≥ 1.                           (5.1.3)

From the definition of D(t), we have σ_j = T_{jν}, so that σ_2 − σ_1, σ_3 − σ_2, σ_4 − σ_3, ..., are mutually independent and σ_j − σ_{j−1} has distribution

    Pr{σ_j − σ_{j−1} ≤ x} = G_ν(x),   x ≥ 0,   j ≥ 2,            (5.1.4)

where G_ν(·) is the νth convolution of G(·) with itself.

Let {S(t); t ≥ 0} be the stochastic process such that S(t) represents the inventory or stock level in the subwarehouse at time t. If we let {R(t); t ≥ 0} be the stochastic process such that R(t) is the number of orders filled by the warehouse in (0,t] for our subwarehouse of interest, then S(t) will be defined by

    S(t) = S − D(t) + νR(t).                                     (5.1.5)

The above definition assumes that the inventory is initially full, i.e., S(0) = S.

An order for replacement stock of lot size ν, made at time σ, may be one of two types. We have a "regular" order provided that S(σ) > S − ν[S/ν], where [x] means the integral part of x. In this case, the time to fill an order (hereafter, the service time) is assumed to be a random variable. The successive regular service times, denoted by w_1, w_2, w_3, ..., are assumed to be mutually independent and independent of the demand process D(t). The distribution function of w_j is given by

    Pr{w_j ≤ w} = H(w),   w ≥ 0,   j ≥ 1.                        (5.1.6)

We have an "emergency" order if S(σ) = S − ν[S/ν]. In this case, the emergency service time is supposed instantaneous, or at least effectively zero, so that S(σ+0) = S − ν[S/ν] + ν. In other words, regular ordering procedures are used provided that, at the time we place such an order, there are at least ν items in the subwarehouse. If there are fewer than ν items in the subwarehouse when an order is placed, we utilize emergency measures to obtain the lot of ν items.
Utilizing this ordering scheme, we avoid the disaster of running completely out of stock in the subwarehouse. Figure 5.1 gives a typical realization of S(t).

Figure 5.1. A Typical Realization of S(t).

The behavior of the warehouse in filling the regular orders is important to a discussion of the inventory problem. It will be assumed that the warehouse operates under one of two distinct systems. Under the first system, the warehouse can handle only one order at a time, so that successive orders, which arrive while an order is being filled, form a queue and must wait to begin being processed or "served." The orders are then processed by the warehouse on a strict rotation basis of "first come, first served." The warehouse just described will be called the one-server warehouse. Under the second system, an order begins processing as soon as it arrives in the warehouse, so that no order must wait for "service." A warehouse operating under this procedure will be called an infinite-server warehouse. We shall consider both one-server and infinite-server warehouses.

We now state the following formal definition of the concepts discussed so far.

Definition 5.1.1

The ordering scheme (G,H,S,ν,1) is a policy for maintaining the level of inventory in a subwarehouse where:

(a) The capacity of the inventory is S.
(b) Item-by-item demand for objects in storage satisfies (5.1.1), and the interdemand times are mutually independent random variables with distribution function G(·).
(c) Lots of ν items (1 ≤ ν ≤ S) are ordered at the times given by (5.1.3).
(d) The orders are made to a one-server warehouse.
(e) Regular service times are mutually independent random variables, are independent of the demand process, and possess a distribution function H(·).
(f) Instantaneous service occurs for orders placed when fewer than ν items remain in storage at the time the order is placed.
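Definition 5.1.1 can be exercised directly. The sketch below, with purely hypothetical distributions and parameter values, generates item-by-item demands, places an order at every νth demand with a one-server FIFO warehouse, fills an order instantly when fewer than ν items remain at the order epoch (condition (f)), and counts the emergency orders.

```python
import heapq
import random

def simulate_scheme(S, v, interdemand, service, n_orders, seed=2):
    """Sketch of the (G,H,S,v,1) scheme: demands arrive one item at a
    time; every v-th demand places an order of lot size v with a
    one-server FIFO warehouse.  An order placed when the stock equals
    S - v*(S//v) (i.e., fewer than v items remain) is an 'emergency'
    order, filled instantly; otherwise it is 'regular' and queues."""
    random.seed(seed)
    t, stock = 0.0, S
    free_at = 0.0          # time the single server next becomes free
    pending = []           # completion times of outstanding regular orders
    emergencies = 0
    for _ in range(n_orders):
        for _ in range(v):                       # v successive item demands
            t += interdemand()
            while pending and pending[0] <= t:   # lots delivered by now
                heapq.heappop(pending)
                stock += v
            stock -= 1
        if stock == S - v * (S // v):            # emergency: instant refill
            emergencies += 1
            stock += v
        else:                                    # regular: join warehouse queue
            start = max(t, free_at)
            free_at = start + service()
            heapq.heappush(pending, free_at)
    return emergencies

# Two hypothetical extremes: instantaneous vs. effectively never-arriving lots
emerg_fast = simulate_scheme(10, 3, lambda: random.expovariate(1.0), lambda: 0.0, 100)
emerg_slow = simulate_scheme(10, 3, lambda: random.expovariate(1.0), lambda: 1e9, 100)
```

With instantaneous regular service no emergency order ever occurs, while with service times far longer than the simulated horizon every order from the ([S/ν])th onward is an emergency, illustrating how the service distribution H drives the quantity M(t) of Section 5.3.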
If we change condition (d) of the definition to state that orders are made to an infinite-server warehouse, we have the ordering scheme (G,H,S,ν,∞).

Clearly, the cost of maintaining the inventory level in the subwarehouse will be a function of ν, the lot size ordered. The optimal value of the lot size is defined herein to be that value of ν which minimizes the cost. It has to be remembered, however, that frequently not all values of ν are available to us, since orders to the warehouse may have to be in multiples of ten, a dozen, a gross, or some other basic unit. Our best ν is then the optimal attainable value of ν. In Section 5.3, a cost function is defined that utilizes reasonable costs associated with maintaining the inventory level. While searching for a minimum cost with respect to ν, ν may take all values from 1 to S. Therefore, the distribution on regular service times could quite possibly be a function of ν, the lot size ordered.

5.2 Relation of the Inventory System to Queues with Balking

Recall from Section 5.1 that S(σ_k) is the stock level at the time the kth order is placed. S(σ_k) tells us whether an order is regular or emergency. Since, realistically, emergency orders have large costs (more than the costs of regular orders), the value of S(σ_k) is of extreme importance in determining the cost of maintaining the inventory level in the subwarehouse. A study of the properties of S(σ_k) can be facilitated by making the following observations. From equations (5.1.5) and (5.1.3), we have

    S(σ_k) = S − D(σ_k) + νR(σ_k) = S − ν[k − R(σ_k)].           (5.2.1)

Define the stochastic process {Q(t); t ≥ 0} by

    Q(t) = [D(t)/ν] − R(t),                                      (5.2.2)

where [x] is the integral part of x, so that Q(t) represents the number of unfilled orders, for our subwarehouse of interest, at time t. From (5.2.1) and (5.2.2), we have

    S(σ_k) = S − νQ(σ_k),                                        (5.2.3)

so that a knowledge of Q(σ_k) gives us the value of S(σ_k). Therefore, a study of the stochastic process {Q(t); t ≥ 0} is needed.

It will be demonstrated in Theorem 5.2.1 that such a study has been carried out for some special cases in Chapters 2 through 4. For the cost function to be defined in Section 5.3, we will make use of the following random variables. Define

    N_1 = inf{k > 0 | S(σ_k) = S − ν[S/ν]},
    N_n = inf{k > N_{n−1} | S(σ_k) = S − ν[S/ν]},   n ≥ 2        (5.2.4)

(so that N_n is the number of orders, regular and emergency, placed up to and including the nth emergency order), and

    V_1 = σ_{N_1},   V_j = σ_{N_j} − σ_{N_{j−1}},   j ≥ 2        (5.2.5)

(so that V_1 is the time until the first emergency order is placed and V_j (j ≥ 2) is the time between the (j−1)st and jth emergency orders).

Theorem 5.2.1

For an ordering scheme (G,H,S,ν,1) ((G,H,S,ν,∞)) we have the following dualities.

(a) Q(t) is the number of people in the system at time t for the queue G_ν/H/1 (G_ν/H/∞) with balking at queues of length [S/ν] − 1.
(b) N_k is the number of people who arrive in the system up to and including the kth person to balk in the queue G_ν/H/1 (G_ν/H/∞) with balking at queues of length [S/ν] − 1.
(c) V_1 is the time until the first balk and V_j (j ≥ 2) is the time between the (j−1)st and jth balks in the queue G_ν/H/1 (G_ν/H/∞) with balking at queues of length [S/ν] − 1.

Proof of Theorem 5.2.1, Part (a)

By definition (5.2.2), we have Q(t) = [D(t)/ν] − R(t). Now [D(t)/ν] has unit increases at the times σ_1, σ_2, σ_3, ..., so that Q(t) also has unit increases at these times. Hence, the order time σ_j can be considered as the arrival time of a "customer" into the warehouse. R(t), by definition, is the number of orders filled by the warehouse in (0,t]. The time it takes to fill a regular order is w_j. Since R(t) increases by a unit amount at the time an order is filled, Q(t) decreases by a unit amount at that time. Therefore, w_j is the service time of a "customer" in the warehouse. Finally, by the restriction on emergency orders and (5.1.5), σ_k is such that S(σ_k + 0) = S − ν[S/ν] + ν when S(σ_k) = S − ν[S/ν], if and only if R(σ_k + 0) = R(σ_k) + 1.
Hence, from (5.2.2) and (5.2.3), S(σ_k + 0) = S − ν[S/ν] + ν when S(σ_k) = S − ν[S/ν] if and only if Q(σ_k + 0) = [S/ν] − 1 when Q(σ_k) = [S/ν]. Therefore, a "customer" balks at a queue of length [S/ν] − 1. By the assumptions placed on the ordering times and the service times for the ordering scheme, and the discussion of a queue with balking in Section 2.1, we complete part (a) of the proof.

Proof of Theorem 5.2.1, Parts (b) and (c)

Simply note by (5.2.3) that

    N_1 = inf{k > 0 | Q(σ_k) = [S/ν]},
    N_n = inf{k > N_{n−1} | Q(σ_k) = [S/ν]},   n ≥ 2.

By part (a) of the theorem and the definitions in Section 2.1, we complete the proof.

5.3 The Cost Function C(ν)

For the inventory problem discussed in Section 5.1, consider the following costs associated with running the ordering scheme (G,H,S,ν,1) or (G,H,S,ν,∞):

    C_0: the cost of placing an order,
    C_1: the per unit cost of the commodity, and
    C_2: a penalty cost for instantaneous delivery of an emergency order that is possibly a function of ν, the lot size ordered.

Define the stochastic processes {N(t); t ≥ 0} and {M(t); t ≥ 0} by

    N(t) = Σ_{j=1}^{∞} U(t − σ_j)                                (5.3.1)

and

    M(t) = Σ_{j=1}^{∞} U(t − σ_{N_j})                            (5.3.2)

with σ_j given by (5.1.3) and N_j by (5.2.4). Then N(t) is the total number of orders placed in the interval [0,t] (regular and emergency) and M(t) is the number of these that are emergency orders. Definitions (5.3.1) and (5.3.2) are not the same random variables as those defined by (4.3.1) and (4.3.2), respectively. Let

    C(ν;t) = (C_0 + C_1 ν) N(t) + C_2 M(t)                       (5.3.3)

so that C(ν;t) is the total cost of ordering lots of size ν during the time interval [0,t]. Since N(t) and M(t) are random quantities, we shall concern ourselves with the expected total cost, E{C(ν;t)}, during the interval [0,t]. Further, the ν which minimizes E{C(ν;t_0)} for a fixed t_0 will minimize E{C(ν;t_0)}/t_0, so that we shall restrict ourselves to the latter quantity.
Finally, since a subwarehouse that maintains an inventory is usually established with the thought of operating for a long period of time, we choose to minimize the expected total cost of ordering per unit time in the long run, a quantity that is mathematically tractable. That is, we want the value of v (v = 1, 2, ..., or S) that minimizes

C(v) = lim_{t→∞} E{C(v;t)/t}
     = (C_0 + C_1 v) lim_{t→∞} E{N(t)/t} + C_2 lim_{t→∞} E{M(t)/t}. (5.3.4)

But σ_2 - σ_1, σ_3 - σ_2, ..., are mutually independent and identically distributed random variables. Therefore, N(t) is a (delayed) renewal process and, by the Elementary Renewal Theorem, Prabhu (1965a), we have

Lemma 5.3.1

lim_{t→∞} E{N(t)/t} = 1/E(σ_2 - σ_1) = 1/vμ (5.3.5)

with

μ = ∫_0^∞ u dG(u). (5.3.6)

A similar closed form for lim_{t→∞} E{M(t)/t} does not exist for an arbitrary ordering scheme (G,H,S,v,1) or (G,H,S,v,∞). The reason for this is that M(t) is a function of the random variables N_j, whose properties depend heavily on the distribution of service times and on whether we have a one-server or infinite-server warehouse. In the following sections, we consider reasonable candidates for the distribution function, H(·), on the service times and both one-server and infinite-server warehouses. For the cases discussed in these sections, a "closed" form for lim_{t→∞} E{M(t)/t} will be obtained.

At this juncture, it should be pointed out that when [S/v] = 1, Theorem 5.2.1 gives N_k = k. Therefore, M(t) = N(t) and lim_{t→∞} E{M(t)/t} = 1/vμ. For the future we shall therefore concern ourselves with the cases [S/v] ≥ 2.

5.4 Solution of C(v) Using the Queue GI/M/1 with Balking

General Demand Function

In this section, we develop the solution of the cost function C(v) for the ordering scheme (G,M,S,v,1). The subwarehouse places orders of lot size v with a one-server warehouse, so that orders arrive at the warehouse, form a queue, and are processed on a strict "first come, first served" basis.
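Before specializing the service-time distribution, the two long-run rates entering (5.3.4) can be checked by simulating the scheme just described: an order is placed after every vth unit demand, and an order that arrives to find [S/v] - 1 orders already present becomes an emergency order (a balk), as in Theorem 5.2.1. The following is only a numerical sketch; the Poisson demand law, the exponential service law, and all parameter values (α = 0.8, λ = 1.0, v = 3, S = 9) are illustrative assumptions, not part of the dissertation.

```python
import random

def simulate_scheme(v, S, lam, alpha, n_orders=200000, seed=1):
    """Simulate the ordering scheme (G, M, S, v, 1): interdemand times are
    exponential with intensity alpha, so order spacings are Erlang(v); the
    warehouse is a single exponential(lam) server; an order arriving to find
    [S/v] - 1 orders present is an emergency order (a balk) and is filled
    instantaneously.  Returns estimates of lim N(t)/t and lim M(t)/t."""
    random.seed(seed)
    K = S // v                          # [S/v]
    n_in_system = 0
    t = 0.0
    emergencies = 0
    for _ in range(n_orders):
        gap = sum(random.expovariate(alpha) for _ in range(v))
        # single server works through the gap until it runs out or empties;
        # exponential service is memoryless, so re-drawing is legitimate
        s = 0.0
        while n_in_system > 0:
            s += random.expovariate(lam)
            if s > gap:
                break
            n_in_system -= 1
        t += gap
        if n_in_system == K - 1:        # balk: emergency order
            emergencies += 1
        else:
            n_in_system += 1
    return n_orders / t, emergencies / t

order_rate, emergency_rate = simulate_scheme(v=3, S=9, lam=1.0, alpha=0.8)
mu = 1.0 / 0.8                          # mean interdemand time
print(order_rate, 1.0 / (3 * mu))       # Lemma 5.3.1: these should agree
print(emergency_rate)                   # the rate the next sections obtain in closed form
```

The order rate estimate should match the 1/vμ of Lemma 5.3.1, while the emergency-order rate is the quantity for which Sections 5.4 through 5.6 develop closed forms.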
We are leaving the item-by-item demand function D(t) general, but we are requiring that regular orders have service times with a Markov, or negative exponential, distribution. Therefore, the distribution of w_j is

Pr{w_j ≤ w} = H(w) = 1 - e^{-λw}, w ≥ 0, j ≥ 1, (5.4.1)

where 1/λ is the mean service time to process an order. It is reasonable that the time to fill an order, w_j, should depend in some manner on v, the lot size of the order placed. We may allow for this by permitting λ to be a function of v. Typically, we may have λ = ω/v, ω a constant.

In order to find the value of v which minimizes C(v) of Section 5.3, we prove the following theorem.

Theorem 5.4.1

For the ordering scheme (G,M,S,v,1), the cost function C(v) of Section 5.3 has the form

C(v) = (C_0 + vC_1)/μv + C_2/{μv ρ(v)} (5.4.2)

with

ρ(v) = C_{z^{[S/v]-2}} { 1/([ψ(λ(1-z))]^v - z) }, [S/v] ≥ 2,
     = 1, [S/v] = 1,

where C_{z^n}{f(z)} denotes the coefficient of z^n in the power series expansion of f(z),

ψ(θ) = ∫_0^∞ e^{-θu} dG(u),

and

μ = ∫_0^∞ u dG(u).

Proof of Theorem 5.4.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the proof. Now

M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

Let V_1 = σ_{N_1} and V_j = σ_{N_j} - σ_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c), σ_{N_j} is the time until the jth balk occurs in the queue G_v/M/1 with balking at queues of length [S/v] - 1. Hence, we can apply the results of Chapter 2 with K = [S/v], F(x) = G_v(x), and φ(θ) = [ψ(θ)]^v. From Theorem 2.3.2, we therefore have that M(t) is a delayed renewal process. By the Elementary Renewal Theorem, Prabhu (1965a), we have

lim_{t→∞} E{M(t)/t} = 1/E(V_2) = 1/{vμ ρ(v)}

with ρ(v) from Theorem 2.2.2. The proof is now complete.

Writing (5.4.2) out, we see that C(v) is minimized when

h(v) = (1/v){1 + (C_2/C_0)/ρ(v)} (5.4.3)

is minimized. Therefore, h(v) can be considered as the cost function of interest.

Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse occurs according to a Poisson process with intensity α. Then

Pr{D(t) = n} = e^{-αt}(αt)^n/n!, n ≥ 0,

and the interdemand times have the distribution

Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-αx}, x ≥ 0, j ≥ 2. (5.4.4)
That is, we have the ordering scheme (M,M,S,v,1). To apply Theorem 5.4.1, we need a workable expression for ρ(v). Define r = α/λ (α being the Poisson demand intensity) and p = r/(1+r), q = 1/(1+r). From (5.4.4), we have

ψ(θ) = (1 + θ/α)^{-1}.

Hence,

[ψ(λ(1-z))]^v = (1 + (1-z)/r)^{-v} = p^v/(1-qz)^v.

Therefore,

ρ(v) = C_{z^{[S/v]-2}} { 1/(p^v(1-qz)^{-v} - z) }
     = C_{z^{[S/v]-2}} { Σ_{j=0}^∞ z^j (1-qz)^{(j+1)v}/p^{(j+1)v} }
     = Σ_{j=0}^{[S/v]-2} C((j+1)v, [S/v]-2-j) (-q)^{[S/v]-2-j} p^{-(j+1)v},

where C(a,b) denotes the binomial coefficient and the series expansion is valid for |z| sufficiently small.

As an illustration, Table 5.1 gives the values of h(v) of equation (5.4.3) for C_2/C_0 = 10.0 and various values of r and S. It is to be noted that considerable savings can be effected by the proper choice of v, the lot size ordered to replenish the stock.

TABLE 5.1
VALUES OF h(v) FOR THE ORDERING SCHEME (M,M,S,v,1)
C_2/C_0 = 10.0

            S = 8               S = 9               S = 10
  v     r=0.8    r=1.2      r=0.8    r=1.2      r=0.8    r=1.2
  1     1.5040   3.1717     1.3876   3.0673     1.3007   2.9877
  2     0.5672   0.7759     0.5672   0.7759     0.5184   0.6313
  3     0.6260   0.8743     0.3634   0.4461*    0.3634   0.4461
  4     0.3475*  0.4713*    0.3475*  0.4713     0.3475   0.4713
  5     2.2000   2.2000     2.2000   2.2000     0.2347*  0.2966*
  6     1.8333   1.8333     1.8333   1.8333     1.8333   1.8333
  7     1.5714   1.5714     1.5714   1.5714     1.5714   1.5714
  8     1.3750   1.3750     1.3750   1.3750     1.3750   1.3750
  9                         1.2222   1.2222     1.2222   1.2222
 10                                             1.1000   1.1000

* Denotes optimal value.

5.5 Solution of C(v) Using the Queue GI/M/∞ with Balking

General Demand Function

In this section, we develop the solution of the cost function C(v) for the ordering scheme (G,M,S,v,∞). The subwarehouse places orders of lot size v with an infinite-server warehouse, so that when an order is received at the warehouse, processing begins immediately. Once again, we hold the item-by-item demand function D(t) general, but we require that regular orders have service times with the Markov, or negative exponential, distribution. Therefore, the distribution of w_j is

Pr{w_j ≤ w} = H(w) = 1 - e^{-λw}, w ≥ 0, j ≥ 1, (5.5.1)

where 1/λ is the mean service time to process an order.
As before, it is reasonable that the time to fill an order should depend in some manner on v, the lot size ordered. We may allow for this by permitting λ to be a function of v. Typically, we may have λ = ω_1/v, ω_1 a constant. In order to find the optimal value of v, we prove the following theorem.

Theorem 5.5.1

For the ordering scheme (G,M,S,v,∞), the cost function C(v) of Section 5.3 has the form

C(v) = (C_0 + vC_1)/μv + C_2/{μv ρ(v)} (5.5.2)

with

ρ(v) = m([S/v],v), [S/v] ≥ 2,
     = 1, [S/v] = 1,

where m(2,v), ..., m([S/v],v) satisfy the relationships

m(2,v) = [ψ(λ)]^{-v},
m(n+1,v) = [ψ(nλ)]^{-v} [1 + Σ_{k=2}^{n} b_v(n,n-k+2) Σ_{j=k}^{n} m(j,v)],
n = 2, 3, ..., [S/v] - 1,

with

ψ(θ) = ∫_0^∞ e^{-θu} dG(u),

μ = ∫_0^∞ u dG(u),

and

b_v(n,n-k+2) = C(n,k-2) Σ_{j=0}^{n-k+2} C(n-k+2,j) (-1)^j [ψ(λ(j+k-2))]^v,

where C(a,b) denotes the binomial coefficient.

Proof of Theorem 5.5.1

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the proof. Recall that

M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

Let V_1 = σ_{N_1} and V_j = σ_{N_j} - σ_{N_{j-1}} (j ≥ 2). By Theorem 5.2.1(c), σ_{N_j} is the time until the jth balk occurs in the queue G_v/M/∞ with balking at queues of length [S/v] - 1. The results of Chapter 3 apply here, with K = [S/v], F(x) = G_v(x), and φ(θ) = [ψ(θ)]^v. From Theorem 3.3.2, we therefore have that M(t) is a delayed renewal process. By the Elementary Renewal Theorem, Prabhu (1965a),

lim_{t→∞} E{M(t)/t} = 1/{vμ ρ(v)}.

The quantities used in calculating ρ(v) follow from equations (3.2.6) and (3.2.8). The proof is now complete.

Writing (5.5.2) out, we see that C(v) is minimized when

h_1(v) = (1/v){1 + (C_2/C_0)/ρ(v)} (5.5.3)

is minimized. Therefore, h_1(v) can be considered as the cost function of interest.

Example: Poisson Demand

Assume the item-by-item demand for objects in the subwarehouse occurs according to a Poisson process with intensity α. Then

Pr{D(t) = n} = e^{-αt}(αt)^n/n!, n ≥ 0,

and the interdemand times have the distribution

Pr{T_j - T_{j-1} ≤ x} = G(x) = 1 - e^{-αx}, x ≥ 0, j ≥ 2. (5.5.4)

That is, we have the ordering scheme (M,M,S,v,∞). Define r = α/λ.
From (5.5.4), we have

ψ(θ) = (1 + θ/α)^{-1},

where α is the demand intensity and r = α/λ. Hence, ρ(v) of Theorem 5.5.1 becomes a function of the quantities

[ψ(nλ)]^v = (1 + nλ/α)^{-v} = (1 + n/r)^{-v},

which depend only on r and not on α and λ separately. As an illustration, Table 5.2 gives the values of h_1(v) of equation (5.5.3) for C_2/C_0 = 10.0 and various values of r and S. Once again, considerable savings can be effected by the proper choice of v, the lot size ordered.

Comparison of Sections 5.4 and 5.5

We note that Sections 5.4 and 5.5 both deal with inventories subject to general demand and negative exponential regular service times. Whereas the results of Section 5.4 are based on orders being placed with a one-server warehouse, Section 5.5 assumes we have an infinite-server warehouse, so that processing of an order begins as soon as it is received. These two sections represent the extreme cases in terms of the number of processors available in a warehouse to process an order when we have general item-by-item demand and negative exponential service times. Therefore, we can calculate the optimal value of v, the lot size ordered, for the best possible situation (processing begins immediately when an order is placed, Section 5.5) and the worst possible situation (an order must wait its turn before processing begins, Section 5.4). It is worthwhile to compare Tables 5.1 and 5.2 for the case of Poisson item-by-item demand.

TABLE 5.2
VALUES OF h_1(v) FOR THE ORDERING SCHEME (M,M,S,v,∞)
C_2/C_0 = 10.0

            S = 8               S = 9               S = 10
  v     r=0.8    r=1.2      r=0.8    r=1.2      r=0.8    r=1.2
  1     1.0002   1.0021     1.0000   1.0003     1.0000   1.0000
  2     0.5044   0.5234     0.5044   0.5234     0.5001   0.5014
  3     0.6260   0.8743     0.3406*  0.3654*    0.3406   0.3654
  4     0.3475*  0.4713*    0.3475   0.4713     0.3475   0.4713
  5     2.2000   2.2000     2.2000   2.2000     0.2347*  0.2966*
  6     1.8333   1.8333     1.8333   1.8333     1.8333   1.8333
  7     1.5714   1.5714     1.5714   1.5714     1.5714   1.5714
  8     1.3750   1.3750     1.3750   1.3750     1.3750   1.3750
  9                         1.2222   1.2222     1.2222   1.2222
 10                                             1.1000   1.1000

* Denotes optimal value.
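For the Poisson-demand examples, entries of both tables can be regenerated directly from the closed forms of Theorems 5.4.1 and 5.5.1. The following sketch (with C_2/C_0 = 10.0; all function and variable names are our own) computes h(v) for the one-server scheme via the coefficient formula and h_1(v) for the infinite-server scheme via the m(n,v) recursion, and reproduces representative entries of Tables 5.1 and 5.2.

```python
from math import comb

def rho_one_server(v, S, r):
    """rho(v) of Theorem 5.4.1 for Poisson demand: the coefficient of
    z^([S/v]-2) in 1/([psi(lam(1-z))]^v - z), via the binomial expansion
    with p = r/(1+r) and q = 1/(1+r)."""
    K = S // v
    if K == 1:
        return 1.0
    p, q = r / (1 + r), 1 / (1 + r)
    m = K - 2
    return sum(comb((j + 1) * v, m - j) * (-q) ** (m - j) * p ** (-(j + 1) * v)
               for j in range(m + 1))

def rho_infinite_server(v, S, r):
    """rho(v) = m([S/v], v) of Theorem 5.5.1 for Poisson demand, using
    [psi(k*lam)]^v = (1 + k/r)^(-v)."""
    K = S // v
    if K == 1:
        return 1.0
    psi = lambda k: (1 + k / r) ** (-v)       # [psi(k*lam)]^v
    m = {2: 1 / psi(1)}                       # m(2,v) = [psi(lam)]^(-v)
    for n in range(2, K):
        def b(k):                             # b_v(n, n-k+2)
            return comb(n, k - 2) * sum(comb(n - k + 2, j) * (-1) ** j * psi(j + k - 2)
                                        for j in range(n - k + 3))
        m[n + 1] = (1 / psi(n)) * (1 + sum(b(k) * sum(m[j] for j in range(k, n + 1))
                                           for k in range(2, n + 1)))
    return m[K]

def h(rho_fn, v, S, r, ratio=10.0):
    """h(v) or h_1(v): (1/v)[1 + (C2/C0)/rho(v)]."""
    return (1.0 / v) * (1 + ratio / rho_fn(v, S, r))

# spot checks against the tables
print(round(h(rho_one_server, 3, 9, 0.8), 4))       # Table 5.1: 0.3634
print(round(h(rho_infinite_server, 3, 9, 0.8), 4))  # Table 5.2: 0.3406
```

Running the two functions over v = 1, ..., S reproduces the tabulated columns, including the comparison made above: the infinite-server values are never larger than the one-server values, and the two coincide whenever [S/v] ≤ 2.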
5.6 Solution of C(v) Using the Queue GI/D/1 with Balking

In this section, we consider the solution of C(v) for the ordering scheme (G,D,S,v,1). The subwarehouse places orders of lot size v with a one-server warehouse, so that orders arrive at the warehouse, form a queue, and are processed on a strict "first come, first served" basis. We allow the item-by-item demand function D(t) to be general, but we require that regular orders have a constant service time b. It may be that b is a function of v, the lot size ordered. Typically, we may allow for this by permitting b = vd, d a constant.

By Lemma 5.3.1, we need only find lim_{t→∞} E{M(t)/t} to complete the solution of the cost function C(v) defined in Section 5.3. Recall that

M(t) = Σ_{j=1}^∞ U(t - σ_{N_j}).

From Theorem 5.2.1(c), σ_{N_j} is the time until the jth balk occurs in the queue G_v/D/1 with balking at queues of length [S/v] - 1. Hence, the results of Chapter 4 apply here with K = [S/v] and the distribution on interarrival times F(x) = G_v(x). Except for the trivial case [S/v] = 1 when M(t) = N(t), it was noted in Section 4.3 that the random variables V_j = σ_{N_j} - σ_{N_{j-1}}, j ≥ 2, do not have the property of identical distribution as was the case when the service times had a negative exponential distribution. Therefore, M(t) is not a renewal process and E{M(t)/t} does not possess a simple limit. Since we are interested in the long run behavior of the inventory, we shall be content to utilize the steady state properties of the system and to redefine M(t) so that a suitable limit for E{M(t)/t} can be obtained.

Recall the definition of σ_n from Section 5.1. It is clear that σ_n is a function of v, the lot size ordered. Define

S_n(v) = nb - σ_n, n ≥ 1,

and

M*(y;v) = inf{n > 0 | S_n(v) ≤ -y or S_n(v) > ([S/v] - 2)b - y}, [S/v] ≥ 3.

Furthermore, let {Y_j(v); j = 1, 2, 3, ...} be the Markov process defined by (4.3.18). Y_j
is written as Y.(v) to emphasize the dependence of the process on v, the lot size ordered, when the quantities K and F(x) of Chapter 4 are [S/v] and G (x), respectively. Denote by F (y,V) the stationary distribution of Y (v) so that F (y,v) satisfies ([s/v]l)b Fy (y,v) = PrY2 () y v) = x dxFy(xv), ([S/v]2)b ([S/A] 2)b < y : ([S/V] l)b. Finally, let g(v) = 1, [S/v] = 1, = 1 + [I G (y0)]/G (b0) dFy(y,V), [S/V] = 2, = J 8(M (yb; V)) dFy(y,v) + [E(M*(0; v))/Pr(SM*(O ; )(v) > ([S/v] 2)b)] Pr(SM*(yb; V)(V) r y + b) dF (y,v), [S/v] 2 3. (5.6.1) If Y1(v) possesses the stationary distribution Fy(y,V), then by Theorem 4.3.4, p(v) is one plus the expected number of orders of lot size v that occur between any two emergency orders when the inventory system is in the steady state. Let V1, V2, V3,..., be a sequence of mutually independent random variables such that the distribution of V. is 3 Pr(V* : x) = .Pr(Vj+1 xY (v) = y) dFy(y,V), j 2 1. * By Theorem 4.3.4, VI, V2' V3,..., are identically distributed and repre sent the time between successive emergency deliveries when the system is in the steady state. For the purposes of this section, we redefine M(t) as M(t) = E U(t V ... V.), (5.6.2) j=1 1 so that H(t) is the number of emergency deliveries during [0,t] when the inventory system is in the steady state. Theorem 5.6.1 For the ordering scheme (G,D,S,v,1), the cost function C(v) of Section 5.3 (with M(t) redefined by (5.6.2)) has the form C(v) = (C0 + C1)/t v + C2/v t P(v) (5.6.3) with CO = u dG(u) o and .(u) given by (5.6.1). Proof of Theorem 5.6.1 By the Elementary Renewal Theorem, Prabhu (1965a), lim n[M(t)/t) = 1/8(VI) = i/v P(v), the last equality following from Theorem 4.3.2 and the definition of V.. 3 Applying Lemma 5.3.1, we complete the proof. Writing (5.6.3) out, we see that C(v) is minimized when h2(v) = (1/V)[1 + (C2/Co)/P(V)} (5.6.4) is minimized, h2(v) may therefore be taken as the cost function of interest. 
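Although ρ(v) has no simple closed form here, the steady-state emergency-order rate that Theorem 5.6.1 summarizes can be estimated by simulating the G_v/D/1 scheme directly. The following is a sketch only; the Poisson demand law and all parameter values (α = 0.8, b = 1.0, v = 3, S = 9) are illustrative assumptions, and deterministic FCFS service is modeled exactly by scheduling departure epochs.

```python
import random
from collections import deque

def emergency_rate_D(v, S, b, alpha, n_orders=200000, seed=4):
    """Simulate the scheme (G, D, S, v, 1): Erlang(v) order interarrivals
    (Poisson demand with intensity alpha), constant service time b, one
    server, and a balk (emergency order) whenever an arriving order finds
    [S/v] - 1 orders present.  Returns the observed emergency-order rate."""
    random.seed(seed)
    K = S // v
    t = 0.0
    departures = deque()       # scheduled departure epochs, in FCFS order
    emergencies = 0
    for _ in range(n_orders):
        t += sum(random.expovariate(alpha) for _ in range(v))
        while departures and departures[0] <= t:
            departures.popleft()           # orders already filled by time t
        if len(departures) == K - 1:
            emergencies += 1               # emergency order, filled at once
        else:
            start = departures[-1] if departures else t
            departures.append(start + b)   # deterministic service time b
    return emergencies / t

rate = emergency_rate_D(v=3, S=9, b=1.0, alpha=0.8)
print(rate)   # estimate of lim M(t)/t, the steady-state emergency rate
```

This estimate corresponds to the 1/{vμρ(v)} of Theorem 5.6.1 without requiring the stationary distribution F_Y(y,v), and so it can also serve as a check on the bound mentioned below.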
In many cases it may be difficult to obtain ρ(v) when [S/v] ≥ 3. Theorem 4.3.3 then gives a bound that may be used to obtain an approximate lower bound for C(v).

CHAPTER 6
THE INVENTORY PROBLEM: CONTINUOUS CASE

6.1 First Passage Times of Non-Negative, Continuous Stochastic Processes with Infinitely Divisible Distributions

In this chapter, we wish to consider the ordering scheme for a subwarehouse that maintains an inventory of fluid material. We assume the demand for the fluid in storage occurs continuously. It is reasonable to further assume that the demand during any interval of time is independent of the demand during any other nonoverlapping interval of time and that the probability law for the demand during any interval [s, s+t] is functionally dependent only on the length, t, of the interval. Hence, if {D(t); t ≥ 0} is the stochastic process such that D(t) represents the demand for the fluid in storage during the time interval [0,t], then we are assuming that {D(t); t ≥ 0} is a nonnegative, continuous stochastic process with stationary, independent, nonoverlapping increments. By Theorem 2 of Feller (1966, p. 294), this is equivalent to stating that {D(t); t ≥ 0} is a nonnegative, continuous stochastic process whose distribution is infinitely divisible. The distribution function of D(t) will be denoted by

Pr{D(t) ≤ x} = ∫_0^x g(y,t) dy, x ≥ 0, t ≥ 0, (6.1.1)

with g(·,t) a density on [0,∞) for each t ≥ 0.

In the next section, a complete description of the ordering scheme used to replenish the inventory will be given. To solve this inventory problem, we need the probability law for the stochastic process {T(u); u ≥ 0} defined by

T(u) = inf{t | D(t) ≥ u} (6.1.2)

(so that T(u) is the first passage time of D(t) into the interval [u,∞)). The rest of the current section will be devoted to properties of T(u).
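Since D(t) is nondecreasing, definition (6.1.2) makes {T(u) ≤ t} and {D(t) ≥ u} the same event, and this can be checked numerically before proving it formally. The sketch below simulates a gamma process (the density (6.1.12) considered later in this section, with β = 1); the grid spacing, level, horizon, and sample size are illustrative assumptions. For unit scale and integer t, Pr{D(t) ≥ u} is the Erlang tail, a finite Poisson sum.

```python
import math
import random

def first_passage_probability(u, t, n_paths=4000, dt=0.05, seed=5):
    """Estimate Pr{T(u) <= t} for a gamma process D(t) with stationary,
    independent Gamma(shape=dt, scale=1) increments, by simulating paths on
    a grid and recording whether level u is reached by time t.  Because D
    is nondecreasing, reaching u by time t is the same as D(t) >= u."""
    random.seed(seed)
    hits = 0
    steps = int(round(t / dt))
    for _ in range(n_paths):
        d = 0.0
        for _ in range(steps):
            d += random.gammavariate(dt, 1.0)
            if d >= u:          # first passage into [u, infinity)
                hits += 1
                break
    return hits / n_paths

# For integer t and unit scale, Pr{D(t) >= u} = sum_{k<t} e^{-u} u^k / k!.
u, t = 2.0, 3
exact = sum(math.exp(-u) * u ** k / math.factorial(k) for k in range(t))
est = first_passage_probability(u, t)
print(est, exact)
```

The agreement of the two printed values illustrates part (b) of the theorem that follows: the distribution of the first passage time T(u) is read off from the tail of the distribution of D(t).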
Theorem 6.1.1

Let {T(u); u ≥ 0} be the stochastic process defined by (6.1.2). Then

(a) T(u) has stationary, independent, nonoverlapping increments, and

(b) the distribution of T(u) is

Pr{T(u) ≤ t} = ∫_u^∞ g(y,t) dy, t ≥ 0, u > 0.

Proof of Theorem 6.1.1

Since T(u) ≤ t if and only if D(t) ≥ u,

Pr{T(u) ≤ t} = Pr{D(t) ≥ u} = ∫_u^∞ g(y,t) dy,

completing part (b) of the proof. To prove part (a), it is sufficient to show

Pr{T(y) - T(w) ≤ s | T(w) = r} = Pr{T(y-w) ≤ s} (6.1.3)

for all s > 0, r ≥ 0, and 0 ≤ w < y.

First we shall calculate Pr{T(y) ≤ t | T(w) = r} for w < y and r < t. Now, Pr{T(w) = r} = 0, so that Pr{T(y) ≤ t | T(w) = r} involves conditioning on an event of probability zero. Hence, the quantity Pr{T(y) ≤ t, T(w) = r}/Pr{T(w) = r} is undefined and therefore cannot be used to define Pr{T(y) ≤ t | T(w) = r}. Cramér and Leadbetter (1967, pp. 219-222) give two plausible definitions for Pr{T(y) ≤ t | T(w) = r}, which are known as the vertical-window (v.w.) and horizontal-window (h.w.) conditional probabilities. These conditional probabilities are defined by

Pr{T(y) ≤ t | T(w) = r}_{v.w.} = lim_{δ→0} Pr{T(y) ≤ t | r ≤ T(w) ≤ r + δ} (6.1.4)

and

Pr{T(y) ≤ t | T(w) = r}_{h.w.} = lim_{δ→0} Pr{T(y) ≤ t | T(τ) = r for some τ ∈ [w, w+δ]}, (6.1.5)

respectively. Both (6.1.4) and (6.1.5) define Pr{T(y) ≤ t | T(w) = r} in terms of a limit of conditional probabilities which involve conditioning on events of nonzero probability. Equation (6.1.4) is the usual definition of Pr{T(y) ≤ t | T(w) = r}. However, in our particular case (6.1.4) leads to an undefined quantity. Therefore, we choose to use the horizontal-window definition given by (6.1.5). We have, for 0 < δ < y - w,

Pr{T(y) ≤ t | T(w) = r}_{h.w.}
= lim_{δ→0} Pr{T(y) ≤ t, T(τ) = r for some τ ∈ [w, w+δ]} / Pr{T(τ) = r for some τ ∈ [w, w+δ]}
= lim_{δ→0} Pr{D(t) ≥ y, w ≤ D(r) < w + δ} / Pr{w ≤ D(r) < w + δ}
= lim_{δ→0} [∫_y^∞ ∫_w^{w+δ} g(z-x, t-r) g(x,r) dx dz] / [∫_w^{w+δ} g(x,r) dx]
= ∫_y^∞ g(z-w, t-r) dz [g(w,r)/g(w,r)]
= ∫_{y-w}^∞ g(z, t-r) dz
= Pr{D(t-r) ≥ y-w}
= Pr{T(y-w) ≤ t-r}.
(6.1.6)

Let t = r + s in equation (6.1.6); then

Pr{T(y) - T(w) ≤ s | T(w) = r}_{h.w.} = Pr{T(y) ≤ s + r | T(w) = r}_{h.w.} = Pr{T(y-w) ≤ s},

thus completing the proof.

The above theorem implies that T(u) is also nonnegative with an infinitely divisible distribution. Hence, the Laplace transforms of T(u) and D(t) have the forms

E(e^{-θT(u)}) = e^{-w(θ)u}, θ > 0, (6.1.7)

and

E(e^{-θD(t)}) = e^{-v(θ)t}, θ > 0, (6.1.8)

respectively, such that w(θ) and v(θ) are positive for θ > 0 and possess completely monotone derivatives. See, for example, Feller (1966).

An attempt was made to find the relation between the Laplace transforms of (6.1.7) and (6.1.8) by utilizing the following technique. We have

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{T(y) ≤ t} dt dy = ∫_0^∞ e^{-φy} e^{-w(θ)y}/θ dy
= 1/[θ(φ + w(θ))], (6.1.9)

and

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dy dt = ∫_0^∞ e^{-θt} (1 - e^{-v(φ)t})/φ dt
= 1/φθ - 1/[φ(θ + v(φ))]
= v(φ)/[φθ(θ + v(φ))]. (6.1.10)

Now Pr{D(t) ≥ y} = Pr{T(y) ≤ t}, so that if

∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dy dt = ∫_0^∞ ∫_0^∞ e^{-θt} e^{-φy} Pr{D(t) ≥ y} dt dy (6.1.11)

(i.e., if we can change the order of integration), then (6.1.9) and (6.1.10) must be identical. Hence, we have

1/[θ(φ + w(θ))] = v(φ)/[φθ(θ + v(φ))],

or

w(θ)v(φ) = θφ,

which implies w(θ) = cθ and v(θ) = θ/c for some c > 0. We must have, by the uniqueness of Laplace transforms, that Pr{D(t) = t/c} = Pr{T(y) = cy} = 1. It now seems that the only nonnegative, continuous process D(t) with stationary, independent, nonoverlapping increments is the trivial deterministic model D(t) = t/c. However, we know that if D(t) has the gamma density

g(y,t) = e^{-y/β} y^{t-1}/[Γ(t) β^t], y ≥ 0, (6.1.12)

then D(t) is nonnegative; continuous; has stationary, independent, nonoverlapping increments; and is clearly not deterministic. The point we make is that (6.1.11) is true only for the trivial case D(t) = t/c and, hence, the order of integration cannot be changed for any other choice of D(t). So far we have been unable to find a suitable method for obtaining the Laplace transform of T(u).
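The inner step of (6.1.10), ∫_0^∞ e^{-φy} Pr{D(t) ≥ y} dy = (1 - e^{-v(φ)t})/φ, can be checked numerically for the gamma density (6.1.12) with β = 1 and integer t, for which E(e^{-φD(t)}) = (1 + φ)^{-t} and hence v(φ) = log(1 + φ). This is a sketch only; the trapezoidal quadrature settings and the parameter values are assumptions made for illustration.

```python
import math

def tail(y, t):
    """Pr{D(t) >= y} for the gamma process (6.1.12) with beta = 1 and
    integer t: the Erlang tail, a finite Poisson sum."""
    return sum(math.exp(-y) * y ** k / math.factorial(k) for k in range(t))

def transformed_tail(phi, t, upper=60.0, n=60000):
    """Trapezoidal estimate of the integral of e^{-phi*y} Pr{D(t) >= y} dy
    over [0, upper]; the integrand is negligible beyond the cutoff."""
    h = upper / n
    s = 0.5 * (tail(0.0, t) + math.exp(-phi * upper) * tail(upper, t))
    for i in range(1, n):
        y = i * h
        s += math.exp(-phi * y) * tail(y, t)
    return s * h

phi, t = 0.7, 3
lhs = transformed_tail(phi, t)
rhs = (1 - math.exp(-t * math.log(1 + phi))) / phi   # (1 - e^{-v(phi)t})/phi
print(lhs, rhs)   # both should be approximately 1.1378
```

The agreement confirms the one-variable transform steps in (6.1.9) and (6.1.10); the difficulty described above lies not in these steps but in relating w(θ) and v(θ) through the double transform.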