ON THE RATE OF CONVERGENCE OF SERIES
OF RANDOM VARIABLES
By
EUNWOO NAM
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1992
To Sylvia, Petra, and Daniel
ACKNOWLEDGEMENTS
First of all, I would like to express my deep gratitude to Dr. Andrew Rosalsky,
my dissertation adviser, for his guidance, advice, understanding, encouragement
and friendship. I would like to thank Dr. Rocco Ballerini, Dr. Malay Ghosh, Dr.
Richard Scheaffer and Dr. Murali Rao for serving on my dissertation committee.
Also, I would like to thank Dr. Ronald Randles, Chairman of the Department of
Statistics, for his support and encouragement through my years at the University
of Florida.
In addition, let me express my appreciation to the Korean Air Force Academy
and Korean Air Force for their support of my studies at this university.
Most importantly, I wish to express my special thanks to my family, especially
my wife, Sooyeon, for her love, patience, support, and unceasing prayers, and my
children, Hwajung and Wontae, for being a joy to us. I am grateful to our parents
for their teaching me the principle of life.
Finally, I would like to thank my colleagues and friends for their assistance and
continuous prayers.
TABLE OF CONTENTS
page
ACKNOWLEDGEMENTS .................................................. iii
ABSTRACT............... .......................................... v
CHAPTER
1 INTRODUCTION ........................................... 1
2 TAIL SERIES STRONG LAWS OF LARGE NUMBERS I ...... 9
2.1 Introduction and Preliminaries ......................... 9
2.2 Tail series SLLNs for Arbitrary Random Variables ...... 14
2.3 Tail series SLLNs for Independent Random Variables ... 23
2.4 Examples ............................................. 31
3 TAIL SERIES WEAK LAWS OF LARGE NUMBERS ......... 42
3.1 Introductory Comments, Tail Series Inequality,
and a New Proof of Klesov's Tail Series SLLN ....... 42
3.2 Tail Series WLLNs .................................... 48
4 TAIL SERIES STRONG LAWS OF LARGE NUMBERS II .... 55
4.1 Introduction and Preliminaries ....................... 55
4.2 Tail series SLLNs ..................................... 57
4.3 The Weighted I.I.D. Case ................................ 73
5 SOME FUTURE RESEARCH PROBLEMS .................. 81
REFERENCES..................... ......................................
BIOGRAPHICAL SKETCH..............................................
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
ON THE RATE OF CONVERGENCE OF SERIES
OF RANDOM VARIABLES
By
Eunwoo Nam
December 1992
Chairman: A. Rosalsky
Major Department: Statistics
For an almost surely (a.s.) convergent series of random variables S, = = l Xj,
the tail series T, = Ej=, Xj is a welldefined sequence of random variables which
converges to 0 a.s. The rate of a.s. convergence of S, to a random variable S is
investigated through the current study of the rate in which T,, converges to 0 a.s.
Tail series strong laws of large numbers (SLLN) of the form blT,,  0 a.s. (where
{b,, n > 1} is a sequence of positive constants with b,, 0) are obtained under
various sets of conditions. Both the cases of (i) {X,, n > 1} having no conditions
on their joint distributions and (ii) {X,, n > 1} being independent are investigated.
Some earlier work by Klesov on the tail series SLLN problem, which had provided
tail series analogues of Petrov's SLLNs for partial sums, is generalized to a larger
class of random variables. In the case of independent summands, some tail series
analogues of Teicher's SLLNs for partial sums are obtained as well.
Moreover, by employing the von Bahr and Esseen inequality, tail series weak laws
of large numbers (WLLN) for independent random variables are obtained. The tail
series WLLNs provide a bound on the rate in which supj>, ITjl converges to 0 in
probability. These tail series WLLNs are compared with the tail series SLLNs and
with tail series laws of the iterated logarithm of Rosalsky.
Examples are provided throughout to illustrate the current results and to com
pare them with other results in the literature.
CHAPTER 1
INTRODUCTION
The theory of partial sums of random variables has been at the forefront of
research in statistical science for most of this century. The case of independent
summands has been of especial interest. One of the most interesting problems in
classical probability theory has been to determine, for a given series of random
variables, the probability that the series converges. (Here, and throughout the
entire sequel, the term "converges" means that the limit under consideration exists
and is finite. The term "diverges" means "does not converge.") According to the
famous Kolmogorov 01 law (see, e.g., Chow and Teicher [14], p. 64, or Chung [17],
p. 254), a series of independent random variables either converges almost surely
(a.s.) or diverges a.s. The primary objective of the current work is to determine
the almost sure rate of convergence for a convergent series. This objective will be
discussed in more detail below.
Let
Sn = EXj n> 1,
j=1
where {X,, n > 1} are random variables. This dissertation will concentrate on a
series of independent random variables, but some results are obtained without as
suming independence. To establish almost sure convergence of the series Sn assum
ing {X,, n > 1} are independent random variables, the KhintchineKolmogorov
1
convergence theorem (see, e.g., Chow and Teicher [14], p. 110) and the celebrated
Kolmogorov threeseries criterion (see, e.g., Chow and Teicher [14], p. 114, or Chung
[17], p. 118) are very useful devices. In fact, the Kolmogorov threeseries criterion
provides a triumvirate of conditions which are both necessary and sufficient for the
convergence of the series S, when the summands {X,, n > 1} are independent
random variables.
The KhintchineKolmogorov convergence theorem asserts that if {X,,, n > 1}
are independent random variables with
00
E(X,) = 0, n > 1 and f E(X)) < oo,
n=l
then the series S, converges a.s. and in quadratic mean to a random variable S
with
E(S) = 0 and E(S2) = f E(X ).
n=l
The Kolmogorov threeseries criterion asserts that if {X,, n > 1} are independent
random variables, then the series S, converges a.s. iff
(i) PIX, > 1} < oo,
n=l
(ii) E E(X(x)) converges,
n=l
(iii) Var(X() < oo,
n=1
where
Xn") = Xnl4lxl<,], n > 1.
3
If the series Sn converges a.s. to a random variable S, then (set So = Xo = 0)
the tail series
oo
Tn = S Sn = Xj, n n> 1
j=n
is a welldefined sequence of random variables and converges to 0 a.s. In the the
ory of partial sums, the fact that the sum Sn is well defined for every n is, of
course, automatic. On the other hand, in the theory of tail series, the problem as
to whether {Tn, n > 1} is well defined is a genuine one. The two classical the
orems (KhintchineKolmogorov convergence theorem and Kolmogorov threeseries
criterion) play a key role in guaranteeing that the tail series Tn is well defined in
the case of independent summands. In this dissertation, we will focus on the rate
of convergence of the series Sn to a random variable S or, equivalently, on the rate
of convergence of the tail series T, to 0.
We say that the sequence of random variables {X,, n > 1} (such that the series
Sn diverges a.s.) obeys the strong law of large numbers (SLLN) with norming
constants {an, n > 1} if
Sn
0 a.s.
a,
where {a,, n > 1} is a sequence of positive constants with an T oo.
In the same way, we will say that the sequence {X,, n > 1} obeys the tail series
SLLN with norming constants {b,, n > 1} if the tail series Tn is well defined and
+ 0 a.s.
bn
where {b,, n > 1} is a sequence of positive constants with bn 1 0.
Of course, a SLLN has a sharper result for the slower 0 < a, f oo. Similarly, a
tail series SLLN has a sharper result for the faster 0 < b,. 0.
SLLNs for partial sums lie at the very foundation of statistical science and
have been and still are the subject of vigorous research activity. In the case of
partial sums, the SLLN problem was investigated prior to the LIL problem. On the
other hand, the situation is reversed for tail series; the tail series LIL problem was
investigated prior to the tail series SLLN problem. As will be seen, many results
for partial sums S, can be paired with analogous results for tail series T,. (Of
course, the actual random variables in the series S, and tail series T, are necessarily
different.) This duality was first discovered and investigated by Chow and Teicher
[13].
Chow and Teicher [13] constructed a milestone for research about the rate of
almost sure convergence of the tail series of independent random variables. They
developed a tail series counterpart to the renowned Kolmogorov law of the iterated
logarithm (LIL) (see, e.g., Chow and Teicher [14], p. 343, or Petrov [40], p. 292) for
series of independent and bounded random variables. Studies to eliminate Chow
and Teicher's boundedness assumption were conducted by Barbour [9], Heyde [23],
Budianu [12], Rosalsky [41], and Klesov [29]. Barbour [9] suggested a methodology
which yields a tail series analogue of the LinderbergFeller version of the central limit
theorem (CLT) (see, e.g., Chow and Teicher [14], p. 291, or Lorve [34], p. 292).
Using this methodology, Heyde [23] obtained tail series analogues of the CLT and
LIL for a martingale difference sequence. Budianu [12] proved a tail series LIL for
series of independent unbounded random variables, which is a tail series analogue
of Petrov [40, Section 10.2, Theorems 2] LIL, and extended it for twodimensional
random variables. Rosalsky [41] developed a more general tail series LIL than that
of Budianu [12] for series of independent unbounded random variables, which is a
tail series counterpart to Teicher's [45] version of the LIL, and as special cases he
proved tail series LILs for weighted sums of independent and identically distributed
(i.i.d.) random variables (see below for the definition of the weighted i.i.d. case).
In the same year as Rosalsky's article appeared, Klesov [29] developed a version of
the tail series LIL for weighted i.i.d. unbounded random variables, but Klesov's [29,
Proposition 4] result is nothing but the special case / = 0 of Rosalsky [41, Theorem
2]. Klesov [29] also proved two tail series SLLNs for independent random variables,
which are tail series analogues of Petrov [40, Section 9.3, Theorems 12 and 13] and
Petrov [38, Theorem 5], respectively.
In his followup article, Klesov [30] extended his previous tail series SLLNs to
wider classes of independent random variables, and he also obtained tail series
SLLNs for several other dependence structures, viz. arbitrary sequences with no
assumptions on their joint distributions, orthogonal sequences, quasistationary se
quences, and martingale difference sequences.
Solntsev [43] proved a tail series SLLN, but his result is not satisfactory since
his condition involves blocks
E Xi, k >l
j=nl+l
of summands rather than individual summands. It is widely discussed in the
literature (see, e.g., Chung [16] or Lobve [34], p. 270) that conditions for the classical
SLLN for partial sums which involve blocks of random variables (as opposed to only
the individual summands) are unsatisfactory or at best undesirable. Such criticism
carries over directly to the tail series situation.
Throughout the entire sequel, all random variables are defined on a fixed but
otherwise arbitrary probability space (fl, F, P), and the logarithm and iterated
logarithm are conveniently defined for x > 0 and a positive integer r, by
Slog if x > e
logic x =
x if x < e
and
log, = log, log,_. z, r > 2
where log z (when x > e) denotes the natural logarithm.
Some of the results herein as well as some examples illustrating them con
cern the weighted i.i.d. case consisting of sequences {X,, n > 1} of the form
X, = anY, 2> 1, where {Yn, n > 1} are i.i.d. random variables with E(Yi) =
0, E (Y2) = 1, and {an, n > 1} are nonzero constants.
This dissertation will be divided into five chapters which will now be briefly
described. In Chapter 2, we will generalize some of Klesov's [30] tail series SLLNs.
(Throughout this chapter and the subsequent ones, our assumptions on the ran
dom variables {Xn, n > 1} involve the individual summands rather than blocks
of summands as were considered by Solntsev [43].) Furthermore, we will develop
truncated versions of our new tail series SLLNs. Both of the cases
(i) {X,, n > 1} are independent random variables,
and
(ii) {X,, n > 1} are random variables with no assumptions being imposed
on their joint distributions
are investigated. Also we will provide examples which demonstrate that the new
results are indeed better than previous ones.
Chapter 3 is quite independent of the others. In Chapter 3, we will study the
rate of convergence in probability of a series S, of independent random variables to
a random variable S or, more specifically, the rate in which supi>n ITjl converges
to 0 in probability by establishing tail series weak laws of large numbers (WLLN).
These tail series WLLNs take the form
sup ITj
j__ P
} 0
bn
where {b,, n > 1} is a sequence of norming constants with 0 < b, 1 0. As special
cases of our tail series WLLNs, we will obtain the tail series WLLNs for weighted
sums of i.i.d. random variables. Also, via the example of the harmonic series with
a random choice of signs, we will find a sequence of norming constants which yields
tail series WLLNs, but does not yield tail series SLLNs.
In Chapter 4, we will prove advanced tail series SLLNs for series of independent
random variables, which are counterparts to Teicher's [47] SLLNs for partial sums.
As special cases of these tail series SLLN'S, we will investigate the tail series SLLN
problem for weighted sums of i.i.d. random variables. Also we will provide an
example which illustrates the new results.
8
Finally, in the last chapter (Chapter 5), some problems for future research work
are presented.
CHAPTER 2
TAIL SERIES STRONG LAWS OF LARGE NUMBERS I
2.1 Introduction and Preliminaries
The tail series LIL has comparatively rich references; on the other hand, the
tail series SLLN has limited references. Two papers of Klesov [29 and 30] are
good references of the tail series SLLN. Although Klesov [29] proved two tail series
SLLNs for independent random variables by first establishing a tail series version
of the Kolmogorov's inequality (see, e.g., Chow and Teicher [14], p. 127 or Petrov
[40], p. 52), the proof of his tail series version of the Kolmogorov's inequality
is very complicated, and we could not quite follow his argument. His argument
rested on a tail series inequality which he did not substantiate. But, in his follow
up article, using a tail series analogue of the Kronecker lemma (rather than his tail
series version of the Kolmogorov's inequality), Klesov [30] extended his previous tail
series SLLNs to wider classes of independent random variables. He also developed
tail series SLLNs for arbitrary random variables (i.e., not necessarily independent)
as well.
Some of Klesov's [29, 30] work will now be described. Let be the class of
functions O(z) satisfying the following three conditions:
(i) Ok(z) is positive and nondecreasing.
1
(ii) x ( ) tends monotonically to 0 as x 1 0.
x
(iii) E 1 <
Examples of such functions O(x) are
(xa) = IZl, 0 < < 1
W() = (log1 jl)1', e > 0
O() = (log 1zl)(log, Il)'+e, e > 0
and so on.
For arbitrary random variables {Xn, n > 1}, without the assumption of inde
pendence, Klesov [30] developed two tail series SLLNs (Propositions 1 and 2 below).
But Proposition 2 included a technical error in its formulation, and so it needs to
be restated. As in Chapter 1, {T,, n > 1} denotes throughout the sequence of tail
series T, = Ejj Xj, n > 1, corresponding to random variables {X,, n > 1}. Note
that the hypotheses of Proposition 1 and 2 ensure that {T,, n > 1} is well defined.
Proposition 1 (Klesov [30]). Let 0 < p < 1 and let {X,, n > 1} be random
variables. Furthermore, let {bn, n > 1} be a sequence of positive constants with
bn 0. If the series
SE(IXn)
< oo,
n=1
then the tail series SLLN
Tn
 0 a.s.
b,
obtains.
Proposition 2 (Klesov [30]). Let 0 < p < 1 and let {X,, n > 1} be random
variables. If the series
E E(IXIP) < oo,
n=1
then setting
An = E(IXiP), n> 1,
j=n
and assuming that An > 0, n > 1, the tail series SLLN
T,
_ + 0 a.s.
(At(A;1))
obtains for each function O(x) E T.
Proposition 2 is a tail series analogue of a SLLN of Petrov [39].
Under the assumption that {X,, n > 1} are independent random variables,
Propositions 1 and 2 have been extended by Klesov [30] to Propositions 3 and 4
below, respectively, by employing a class of functions instead of a specific function
g(x) = IxzI, 0 < p < 1. But both of them included a technical error in their
formulation, and so they need to be reformulated as follows. In addition, Klesov
[30] did not verify that his conditions ensure that the tail series {T, n > 1} is
indeed well defined.
Let the function g(x) be positive for z > 0 with g(z) T oo as z T oo. Assume
that either of the following two conditions holds
(i)  is nondecreasing for x > 0.
g(x)
(ii) g() is nondecreasing for x > 0,  is nondecreasing for x > 0,
x g;(x)
and E(X) = 0, n > 1.
12
Proposition 3 (Klesov [30]). Let {X, n > 1} be independent random variables
and let {bn, n > 1} be a sequence of positive constants with bn i 0. If the series
0 E(g(X,))
o < oo,
n=1 g(bn)
then the tail series SLLN
Tn
0 a.s.
bn
obtains.
Proposition 4 (Klesov [30]). Let {Xn, n > 1} be independent random variables.
If the series
EE(g(IXn1))
n=l
then setting
0o
An = E(g(IXiI)), n>
j=n
and assuming that An > 0, n > 1, the tail series SLLN
Tn
7 0 a.s.
g1 (An (A1))
obtains for each function (zx) E T.
Not only do Propositions 3 and 4 reduce to two tail series SLLNs of Klesov
[29], respectively, by taking g(x) = xIP, 0 < p < 2, but they also are tail series
analogues of Petrov [40, Section 9. 3, Theorem 11 with gn = g, n > 1] and Petrov
[38, Theorem 5], respectively.
Most of our results in this chapter are based on the following two lemmas.
Lemma 1 (Heyde [23], Rosalsky [41], Klesov [30]). Let {z,, n > 1} be a sequence
of constants and let {b., n > 1} be a sequence of positive constants with bn 0. If
the series
converges,
n=1 bn
then
100
E j 0o.
 j=n
Lemma 1 is a tail series analogue of the Kronecker lemma. This lemma is initially
due to Heyde [23], but Rosalsky [41] reproved it in an alternative way because
Heyde's original proof was not clear. One year after Rosalsky's [41] paper appeared,
but independently from Rosalsky's paper, Klesov [30] proved the lemma in a manner
similar to that of Rosalsky. As we mentioned earlier, in his previous paper, Klesov
[29] proved his tail series SLLNs via a tail series version of the Kolmogorov inequality
instead of the above tail series analogue of the Kronecker lemma. The approach
using this analogue of the Kronecker lemma is simpler and indeed more natural.
Lemma 2 (Klesov [29]). Let {cn, n > 1} be a sequence of nonnegative constants
such that E, 1 c, < oo. If
C, = cj > 0, n>l,
j=n
then
00
C c < 00oo
n=1 C.n (C,;)
obtains for each function tk(z) E T.
This lemma is a tail series analogue of the AbelDini theorem (see, e.g., Knopp
[32], p. 290).
2.2 Tail series SLLNs for Arbitrary Random Variables
For arbitrary random variables {X,, n > 1}, without the assumption of inde
pendence, we obtain the following tail series SLLNs. To avoid trivial considerations,
assume that {X,, n > 1} are not eventually degenerate at 0. This assumption is
in effect throughout the entire chapter and will not be repeated. The main result
of this section, Theorem 1, may now be stated. It will be shown in the proof of
Theorem 1 that the hypotheses ensure that {T,, n > 1} is a welldefined sequence
of random variables. The proof of Theorem 1 will be deferred until after the proof
of the ensuing Lemma 4.
Theorem 1. Let {Xn, n > 1} be random variables and let {g,(x), n > 1} be
strictly increasing functions defined on [0, oo) such that
gn(0) = 0 and lim gn(z) = oo, n > 1. (2.2.1)
Assume that
x
g ) is nondecreasing as 0 < x T for each n > 1 (2.2.2)
9n(z)
and
gn(z) is nondecreasing in n for each fized z > 0. (2.2.3)
If the series
00
E E(gs(lXn)) < oo, (2.2.4)
n=l
then setting
A= E(gi,(IXj)), n>
j=n
and assuming that for some function O(z) E T
P {(Xn < g;1 (An k(An1)) eventually } = 1, (2.2.5)
the tail series SLLN
T,
,  0 a.s. (2.2.6)
9g(Ans (A1))
obtains, where g;1 denotes the inverse function of g, for each n > 1.
Remarks. (i) By the BorelCantelli lemma, a sufficient condition for (2.2.5) is
EP {Ix > g(An(A ))}<00oo.
n=l
(ii) From the definitions of A, and the class %, we note that An (An1) 1 and
so g;'(A,, (A1)) 1. Moreover, the condition (2.2.5) is necessary for (2.2.6) to
hold. This follows from the remark after the ensuing Lemma 4 by setting b, =
g(.An ( 1)), ~ > 1.
(iii) For each n > 1, note that (2.2.2) together with the fact that each g,,() is a
nondecreasing function, implies that each g,(x) is necessarily a continuous function.
In order to prove this, we will show that
g,(xo) = ,,(x+) for arbitrary xo E (0, oo) and for each n > 1. (2.2.7)
Let 0 < s < xo < t. Then (2.2.2) ensures that
s zo t
< < n 1.
gn(s) 9n(xzo) gn(t)'
Take s T xo and t I xo. Then
=l m s t no
X lim < lm = X n 1.
7n(xo) T0 9g(s) tIx 9gn(t) g.(Z+)
Therefore, g,(xo) > g9,(X) for arbitrary xo E (0, oo) and for each n > 1. Hence,
via the monotonicity of each gn, (2.2.7) follows. O
Assuming (2.2.5) (which is necessary for (2.2.6)), then not only does Theorem
1 reduce to Proposition 2 by setting
gn(x) =I, 0 1,
but, Theorem 1 also yields Theorem 2 under the condition (i), without the inde
pendence assumption.
The proof of Theorem 1 utilizes the following two lemmas.
Lemma 3. Let {X,, n > 1} be random variables and let {gn(z), n > 1} be non
decreasing functions defined on [0, oo) satisfying (2.2.1) and (2.2.2). Furthermore,
let {bn, n > 1} be a sequence of positive constants such that
P {IXn < bn eventually } = 1. (2.2.8)
If the series
oo (bgn( < oo, (2.2.9)
n=1 g,(bn)
then the series
y converges a.s. (2.2.10)
n=1 bn
Remark. Since (2.2.10) ensures that b.;X,  0 a.s., (2.2.8) follows. Thus the
condition (2.2.8) is necessary for (2.2.10) to hold.
Proof of Lemma 3. By (2.2.8), for almost all w E fl there exists an integer N(w)
17
such that IXn(w)l < b, for all n > N(w), and hence by (2.2.2)
IX,(w) < b.
Xn(XW)) < (bn), n > N(w). (2.2.11)
gn(jXn(W)j) g n(b,)
Next, via the Lebesgue monotone convergence theorem, (2.2.9) ensures that
E ( ng.(bXn.) < 0
and so
Sg(bX) < oo a.s. (2.2.12)
E= gn(bn)
Thus, for almost all w E
SIX(W)I N() IX(w) IX(w)I
n=1 b ()+ E x (b ) (
n= n=N(w)+l
N'")IX.(w )I = 9.(X(w)l)
S. + ) (by (2.2.11))
1 b, n=N(w)+l g"(b")
< oo (by (2.2.12))
and therefore (2.2.10) obtains. o
Using Lemmas 1 and 3, we obtain the following lemma.
Lemma 4. Let {X,, n > 1} be random variables and let {9g(z), n > 1} be non
decreasing functions defined on [0, oo) satisfying (2.2.1) and (2.2.2). Let {b,, n > 1}
be a sequence of positive constants satisfying (2.2.8) with b, I 0. If gn(bn) = 0(1)
and (2.2.9) holds, then the tail series SLLN
Tn
T , 0 a.s. (2.2.13)
b,
obtains.
Remark. The triangle inequality and the fact that b, I imply
IX < IT.I IT.+1 ,
< + n_1
b. n b,+l '
and so (2.2.13) ensures b'X,  0 a.s. Thus the condition (2.2.8) is necessary for
(2.2.13) to hold.
Proof of Lemma 4. Note that (2.2.9) and g,(b,) = 0(1) ensure (2.2.4) which, as
will be demonstrated in the proof of Theorem 1, ensures that {T,, n > 1} is well
defined. Employing Lemma 3 yields (2.2.10). Since b, 1 0, the lemma follows from
Lemma 1. 0
By assuming (2.2.8) which is a necessary condition for (2.2.10) and (2.2.13),
Lemmas 3 and 4 yield the ensuing Lemmas 5 and 6, respectively, but without
assuming independence in the case when the condition (i) of Theorem 2 is assumed.
Also Lemma 4 reduces to Proposition 1 by setting
g,.(x) = I, 0 < < 1, n > 1.
The proof of Theorem 1 may now be given.
Proof of Theorem 1. Firstly, we want to show that the tail series {T,, n > 1}
is well defined. Since, via the Lebesgue monotone convergence theorem, (2.2.4)
ensures
E g.( X1) < oo,
(n=1
(2.2.4) = g,(IX,) < oo a.s. (2.2.14)
n=l
Sgn(lXn) + 0 a.s.
IX> X + 0 a.s. (by (2.2.3)). (2.2.15)
Now let N(w) be the random integer defined by
N(w) = min {N > 1 : IX,(w) I< 1 for all n > N} (= oo, otherwise ).
Then (2.2.15) ensures that N(w) < oo a.s. whence by (2.2.2), for almost all w E 0
IX., 1
(X<  n > N(w). (2.2.16)
9.(nxn ) 9n(l)
Thus, for almost all w E ft
oo N(w) oo
EIXnl = El X + E Ixnl
=n=1 =1 n=N(w)+l
N(w) oo0 lXl)
< E X + E (1X.) (by (2.2.16))
n=1 n=N(w)+l gn(1)
N(w) 00 (1
E< IX.+M E g,(IX.) M=
n=1 n=N(w)+l 91(1)
< oo (by (2.2.14))
and so
00
SX, converges a.s.
n=1
Therefore {T,, n > 1} is a welldefined sequence of random variables.
Next, let
cn = E(gn(jXn,)), n > 1
and observe that
00
c, > 0, n > 1 and E c, < oo a.s. (by (2.2.4)).
n=1
Since An = ZJl ci > 0, n > 1, Lemma 2 ensures that for each function tk(x) 6 l,
E E(n(lXnlj)) 0 E(n(lXn9 X))
S E(g.( ))(< oo.) ( 2.2.17)
For each function I(x) E *, since
o < g;1'(A O(A'1)) ) o,
then setting
b. = g (An (A;l)), n > 1,
(2.2.8) and (2.2.9) follows directly from (2.2.5) and (2.2.17), respectively. Thus the
theorem follows from Lemma 4 since g.(b,) = An b(A1) = 0(1) (see Remark (ii)
after the statement of Theorem 1). 0
We obtain the following two truncated versions of Theorem 1 as corollaries.
Corollary 1. Let {Xn, n > 1} be random variables and let {g,(x), n > 1} be
strictly increasing functions defined on [0, oo) satisfying (2.2.1), (2.2.2) and (2.2.3).
If
SP {IIXnI > Cn} < oo (2.2.18)
n=1
and
E E(gn.(X. Illx.l
n=l
are satisfied for some sequence of positive constants {Cn, n > 1}, then setting
= EE(g (IX, I jxil<,])), n 1
j=n
and assuming that for some function k(x) E '
P {IX, Ilx. lacn < n (: n b(.~)) eventually } = 1, (2.2.20)
the tail series SLLN
T.
 T + 0 a.s. (2.2.21)
(^ nA. (;)) /
obtains.
Remarks. (i) A sufficient condition for (2.2.20) to hold is
EP {IIX. Illx.l g(n A:1 ))} < OO,
n=l
by the BorelCantelli lemma.
(ii) Since (2.2.18) asserts that {Xn, n > 1} and (X, I[xl 1} are equiv
alent in the sense of Khintchine, (2.2.21) is equivalent to
T*
S* 0 a.s. (2.2.22)
where T* Ejj Xj I[xj 1. Note that g,(I.( (Anj 1)) 1. By the ar
gument in the remark after the statement of Lemma 4, mutatis mutandis, (2.2.22)
ensures the condition (2.2.20). Thus the condition (2.2.20) is necessary for (2.2.21)
to hold.
(iii) The condition (2.2.18) ensures that (2.2.20) is equivalent to the apparently
stronger but structurally simpler condition
P {IX I < 9'1 (.An (a1)) eventually } = 1.
Proof of Corollary 1. Set
Zn = Xn I[lx.l 1.
Then, by applying Theorem 1 to the random variables {Z,, n > 1}, (2.2.19) ensures
that the tail series T.* = E Zj is well defined and then (2.2.22) obtains for each
function O(x) E C. Since {X,, n > 1} and {Xn I[ix.l 1} are equivalent in
the sense of Khintchine, {T., n > 1} is also well defined and the corollary follows.
0
Corollary 2. Let {X,, n > 1} be random variables and let {gn(x), n > 1} be
strictly increasing functions defined on [0, oo) satisfying (2.2.1), (2.2.2) and (2.2.3).
If (2.2.18) and
00
E E(g,(X, I[ix.
n=l
are satisfied for some sequence of positive constants {Cn, n > 1}, then setting
A= E(gj(lXj Il[x,l 1
j=n
and
00oo
Tn = E {X E(Xj I[Ixilc,])}, n > 1, (2.2.24)
j=n
and assuming for some function (b(x) E T that
P {IX I[x.l
the tail series SLLN
0 (2.2.26)
obtains.
Remarks. By the argument in Remarks (i), (ii), and (iii) after the statement of
Corollary 1, mutatis mutandis, we observe, respectively, that
(i) A sufficient condition for (2.2.25) is
oP {Xn I[x.l gn (,n )) < 00.
n=1
(ii) The condition (2.2.25) is necessary for (2.2.26) to hold.
(iii) The condition (2.2.18) ensures that (2.2.25) is equivalent to the condition
P {IX, E(Xn I[Ixl
Proof of Corollary 2. Set
Z, = X, Il[x.nl 1.
Then the result follows from (2.2.18) and (2.2.23) by employing the argument in
Corollary 1, mutatis mutandis. o
2.3 Tail Series SLLNs for Independent Random Variables
For independent random variables {X,, n > 1} we obtain the following tail
series SLLNs. In part (i) of the ensuing theorem, the condition (2.2.5) of Theorem
1 is dispensed with at the expense of assuming that {X,, n > 1} are independent.
The main result of this section, Theorem 2, may now be stated. As in Theorem
1, it will be shown in the proof of Theorem 2 that the hypotheses ensure that
{ T, n > 1} is a welldefined sequence of random variables. The proof of Theorem
2 will be deferred until after the proof of the ensuing Lemma 6.
Theorem 2. Let {Xn, n > 1} be independent random variables and let {gn(z), n >
1} be strictly increasing functions defined on [0, oo) such that
g.(0) = 0 and limg(z) = oo, n > 1 (2.3.1)
and assume that
g,(z) is nondecreasing in n for each fixed z > 0.
(2.3.2)
Suppose that one of the following two conditions prevails
(i) is nondecreasing as 0 < x T for each n > 1.
sn(xX2
gn (s))
(ii) gn(x) is nondecreasing as < is nondecreasing as 0 < x ,
x gn(x)
and E(X,) = 0, for each n > 1.
If the series
00
SE(g.(IX.I)) < oo, (2.3.3)
n=1
then setting
.= EE(gj(IXj)), n> 1,
j=n
the tail series SLLN
Tn
 0 a.s.
g;1 ( An (A; ))
obtains for each function Ob(x) E 9, where g'1 denotes the inverse function of gn for
each n > 1.
Remark. Note that for each n > 1, the hypotheses to (i ) or (ii ), together
with the fact that each gn(z) is a nondecreasing function, imply that each g,(x) is
necessarily a continuous function. Under the hypotheses to (i ) the continuity of
each gn(s) follows directly from Remark (iii) after the statement of Theorem 1. So
it is enough to show that each gn(x) is a continuous function under the hypotheses
to (ii ). To this end, we will prove that
g,(xo) = g,(x+) for arbitrary xo E (0, oo) and for each n > 1. (2.3.4)
Let 0 < s < xo < t. Then (ii) of the theorem ensures that
sa2 T2 t2
< o < n > 1.
9n(s) gn(xo) gn(t)
Take s f xo and t I xo. Then
T2 s2 t2 2
= lim lilim 0 n>l
g.(zo) oe g(s) t=o g,(t) g.f(zo+)
Therefore for each n > 1,
gn(zo) 2 Sg,(+) for arbitrary xo E (0, oo) and for each n > 1.
Hence, via the monotonicity of each gn, (2.3.4) follows. O
Theorem 2 reduces to Proposition 4 by setting g, = g, n > 1. And also Theorem
2, under the hypotheses to (i), follows directly from Theorem 1 by assuming the
condition (2.2.5) which is a necessary condition for the result to hold. Moreover, as
will become apparent, Theorem 2, under the hypotheses to (ii), owes much to the
work of Klesov [30].
The proof of Theorem 2 utilizes the following two lemmas.
Lemma 5 (Petrov [38]). Let {X,, n > 1} be independent random variables and
let {9g(x), n > 1} be nondecreasing functions defined on [0, oo) satisfying (2.3.1).
Assume that condition (i) or (ii) of Theorem 2 holds. Further, let {b., n > 1} be
a sequence of positive constants. If the series
SE(gn(,X < oo (2.3.5)
n=l gn(b.n)
then the series
0X
converges a.s. (2.3.6)
n=1 b"
Lemma 5, under the condition (ii) of Theorem 2, was proved for the case g,
g, n > 1, by Chung [17, p. 124].
Using Lemmas 1 and 5, we obtain the following lemma.
Lemma 6. Let {X,, n > 1} be independent random variables and let {g,n(), n >
1} be nondecreasing functions defined on [0,oo) satisfying (2.3.1). Assume that
condition (i) or (ii) of Theorem 2 holds. Let {b,, n > 1} be a sequence of positive
constants with b, 1 0. If g,(b) = 0(1) and (2.3.5) holds, then the tail series SLLN
Tn
 0 a.s.
obtains.
Proof. Note that (2.3.5) and g,(bn) = 0(1) ensure (2.3.3) which, as will be
demonstrated in the proof of Theorem 2, ensures that {Tn, n > 1} is well defined.
Employing Lemma 5 yields (2.3.6). Since b, 0, the lemma follows from Lemma 1.
Not only does Lemma 6 reduce to Proposition 3 by taking g, g, n > 1, but
it also is a tail series analogue of Petrov [40, Section 9. 3, Theorem 11]. Moreover,
if (2.2.8) holds, then under the hypotheses to (i) of Theorem 2, Lemmas 5 and 6
follow directly from Lemmas 3 and 4, respectively.
The proof of Theorem 2 may now be given.
Proof of Theorem 2. Note at the outset that in the proof of Theorem 1 the
condition (2.2.5) was not employed to establish that the tail series {Tn, n > 1} is
well defined. Consequently, under the hypotheses to (i), {T,,, n > 1} is a welldefined
sequence of random variables.
Next, it will be verified by employing the Kolmogorov threeseries criterion that
under the hypotheses to (ii), E'= X, converges a.s. and hence {T,, n > 1} is well
defined. For each n > 1
P{IX, > 1} = P{g,(lXn,) > g(1)}
E(g,(Xn))
< E(g(X ) (by the Markov inequality)
g9 (1)
< ME(gn(IXn)) (M=
9i1)
and so
oo
P {IXl
n=1
< oo (by (2.3.3)).
oo
> 1} < MEE(g( XI))
n=1
Now for each n > 1
E (X, I[x.l
= IE(X, Ilx,l>]) I (since E(Xn) = 0)
x
g.n() 
1
9,'(1)'
a>l)
E(IXI Ilxl>])
E(gn(lX,I) Illxnl>]) (since
gn(1)
SE9n(lXnl))
n(l)
< ME(gn(Xn)) (M
00
< M E E(g(IXI)) < oo
n=1
(by (2.3.3))
implying that
00
E(Xn I[Ix.<1]) converges.
n=l
Again, for each n > 1
< E(X. I[Ix.I<])
E(gn(IX,I) I[ix,.1
Sg,(1) (since
x2
;gn(a)
1
<  x<
9ng(1)
Thus,
91(1)
00
, EXn I[x.1
n=l
Var(X, I[ix,.1<])
< E(g(IXnj))
g (1)
< ME(g(IXI)) (M
gi(l)
implying
00oo(2.3.3)).
< M E E(g9(IX.)) < oo (by (2.3.3)).
n=1
Hence the conditions of the the Kolmogorov threeseries criterion are satisfied
thereby ensuring that
00
E X, converges a.s.
n=l
Therefore, in both cases, {T,, n > 1} is a welldefined sequence of random variables.
Next, let
Cn = E(g9(Xn)), n > 1
and observe that
00
c, > 0, n > 1, and E c, < oo (by (2.3.3)).
n=l
Since An = Ct=n c > 0, n 2 1, Lemma 2 ensures that for each function k(x) 6 T,
0 E(gn(lXnl))
oo E( ,.(X ))
E /k ( ,)
nt=1ln(.'
Now for each function 'b(z) E I9, since
0 < g;1((' O(A;')) 1 0,
setting
E Var(X, Igx.1])
n=l
bn = g (, 1(A )), n > 1,
the condition (2.3.5) holds. The theorem then follows directly from Lemma 6 since
g.(b.) = .A4 (A41) = 0(1). O
We obtain the following two truncated versions of Theorem 2 as corollaries by
employing an argument similar to that used to establish Corollaries 1 and 2 of
Section 2.1.
Corollary 3. Let {X., n > 1} be independent random variables and let {g,(z), n >
1} be strictly increasing functions defined on [0,oo) satisfying (2.3.1) and (2.3.2).
Assume that the condition (i) or (ii) of Theorem 2 holds. If
00
P {IXI > C,} < oo (2.3.7)
n=1
and
YE E(g(lXn Iix.l
n=1
are satisfied for some sequence of positive constants {C,, n > 1}, then setting
00oo
n = E E(g,(lXj I[Ix, 1,
j=n
the tail series SLLN
S+ 0 a.s. (2.3.9)
obtains for each function b(zx) E x1.
Remark. A necessary condition for (2.3.9) to hold is that (2.3.7) obtains with
C, = g (An 1)), n > 1. (2.3.10)
Proof of Remark. The triangle inequality and the fact that g' (A,n b(,1))
imply
Ix.I IT.nl IT.+l
< + n>1.
g1 (A b(A;1)) 1 (An 0(A;')) g+ 1 (4 n .A41)) 1
Thus (2.3.9) ensures
X,
X 0 a.s.
g;1 (An A ))
Using the BorelCantelli lemma and the independence of {Xn, n > 1}, we obtain
P {IX l > g<(An < 0))< .
n=l
Hence (2.3.7) holds with {Cn, n > 1} as in (2.3.10). 0
Proof of Corollary 3. Set
Z, = X, I[IXl 1.
Then, by applying Theorem 2 to the random variables {Z,, n > 1}, (2.3.8) implies
that the tail series T, = Ej', Z, is well defined and the tail series SLLN
T*
+ 0 a.s.
g; (An )
obtains for each function O(z)E % Since (2.3.7) implies that {X,, n > 1} and
{X, I[ix.i 1} are equivalent in the sense of Khintchine, {T,, n > 1} is also
well defined and Corollary 3 follows. O
Corollary 4. Let {X,, n > 1} be independent random variables and let {g.(z), n >
1} be strictly increasing functions defined on [0,oo) satisfying (2.3.1) and (2.3.2).
Assume that condition (i) or (ii) of Theorem 2 holds. If (2.3.7) and
SE(g,(IXX. I[llx.lc.] E(X, I(Ix._<:c.]))) < 00 (2.3.11)
n=1
are satisfied for some sequence of positive constants {C., n > 1}, then setting
oo
A= E E(gj(IXj Ilx, c,] E(Xj I[IX, C]))), n > 1
j=n
and
00oo
t= E {Xj E(X lI[Ix, I1,
j=n
the tail series SLLN
 0 a.s.
obtains for each function 0b(x) E T.
Proof. Set
Z, = X. I[lx.l .] E(Xn I[lx.jc.,]), n > 1.
Then the corollary follows from (2.3.7) and (2.3.11) by employing the argument in
Corollary 3, mutatis mutandis. o
2.4 Examples
Three examples are provided to illustrate some of the current results as well as
to compare them with related results in the literature.
Example 1. Let {X,, n > 1} be random variables (not necessarily independent)
such that
PX, 1 and P{,,e"= n>1.
n2 
Then for any pE (0, 1],
1 ( 1 ) e"p
E(Xnl) 1 + n >l
2p 2 n
and so
E E(IX./P) = oo for all p (0,1].
n=l
Therefore the hypotheses of Proposition 2 are not met.
Let < a < 1 and let
2
(2.4.1)
Then, recalling the definition of log1 x, the conditions (2.2.1), (2.2.2), and (2.2.3)
are satisfied. For each n > 1
E(gn,(Xn)) = E((log, IX)) = 1
1
n2a
E E(g (IX))
n=1
+ na2 < .
Hence (2.2.4) holds.
Now for n > 1,
00
A = ZE (log1 lXjl)")
j=n
Se 1 1ja
i=n le jl
(1 ) i 2}
M n2a M n1 (M = 1 1 and M2 =
M nha+M2l = ea 2a 1
ha)
. (2.4.2)
Suppose that 1 < a < 2. Then
A) i tM nt12o
If O(x) is taken to be the function
(x) = (log,1 )1+" where e > 0,
and so
n"
n2
gn(x) = (logz x0)", n > 1.
(2.4.3)
then for all large n
A b(A, ) ~ M3 n12 (log, n)+e (s= 3 (2a )')
= o(1).
Therefore, for all large n
implying
9g1 (A. (A1)) = e (A (A1)) M4 n2 (log1 n) 16
(M4 = e Ma).
Thus,
P{IXn > gl (An O(An))I = n for all large n
and so, via Remark (i) after the statement of Theorem 1, the condition (2.2.5) also
holds. Hence for each a E (, j, the tail series SLLN
T7
12 + 0 a.s.
na (log, n) +
obtains by Theorem 1, i.e.,
2
n a
2 Tn  0 a.s.
(log, n) 1
On the other hand, suppose that < a < 1. Then recalling (2.4.2),
An, M na1.
If O(z) be taken to be the function as in (2.4.3), then for all large n
An (A;') ~ Ms n'1 (log, n)1' (Ms = (1 a)')
= o(1)
(2.4.4)
(2.4.5)
A. (A.4x) < 1
implying
A ((A.;1) < 1
for all large n and so
(A,1 (A;')) = e (A. (A1)) I~ Men* (log1 n)'W (Me = e Ms) .
Thus, (2.4.4) holds and so, via Remark (i) after the statement of Theorem 1, the
condition (2.2.5) also holds. Hence for each a E (j, 1), the tail series SLLN
T.
 + 0 a.s.
n (logl n)
obtains by Theorem 1, i.e.,
1.1
l n T'  0 a.s. (2.4.6)
(log1 n)
Next, it will now be demonstrated that Corollary 1 can be also applied. Let
S< p < 1 and let
gn(x) Ix, n > 1. (2.4.7)
Then the conditions (2.2.1), (2.2.2), and (2.2.3) are satisfied. Set
C,, 2, n > 1. (2.4.8)
Then
1
P{IX,, > C} = , n > 1
n
implying (2.2.18). Also
E(gn(XI[nxl.
= (1 1), n2
n2p n2
and so
E g.(X. I[x.
n=l n=l
Thus the condition (2.2.19) holds. Now for n > 1
A = E(I XI[xllxcI)
j=n
= 1 
~ M7n2p M7 =2 1)
If (zx) be taken to be the function as in (2.4.3), then for all large n,
A 0,(A1) ~ Ms n12p (log n)l+" (Ms = (2p 1)')
= o(1).
Hence for all large n,
implying
= &(An i))p ~ MAn2 (log, n)' (M9 = M ).
Thus, (2.4.4) holds and so, via Remark (i) after the statement of Corollary 1, the
condition (2.2.20) also holds. Therefore, by Corollary 1 the tail series SLLN
T7
12 7 + 0 a.s.
nP (logI n) p
obtains, i.e.,
21
n P
i Tn + 0 a.s.
(log1n) P
Not only is this result sharper than (2.4.5) for p > a, a E (, ], but, also,
this result is sharper than (2.4.6) for p > aE (0,1). Hence, for pE (,1],
Corollary 1 gives us a better result than that which can be obtained by Theorem 1.
In conclusion, it may be noted that Theorem 2 and Corollary 3 can also be
applied to this example by setting {g,, n > 1} as in (2.4.1) or as in (2.4.7), respec
tively, if {X,, n > 1} are assumed to be independent.
Example 2. Let {X,, n > 1} be random variables (not necessarily independent)
such that
1 1
P{X = = 1= 1 and P{X, = en} = n2 1
Then for any p E (0, 1],
n 2 '
and so
E E(IX.') = oo, for all pE (0, 1].
n=l
Therefore the hypotheses of Proposition 2 are not met.
It will now be demonstrated that Corollary 2 can be applied. Let
Then by the argument in Example 1, setting {gn, n > 1} as in (2.4.7) and {C,, n >
1} as in (2.4.8), the conditions (2.2.1), (2.2.2), (2.2.3), and (2.2.18) are satisfied.
Moreover, for each n > 1
E(gn(IX. I[lx.c<.] E(X, Iix.
= E(X, IIxc] 1 ) )
1 (1 12 1
np 2 n2 n2
implying
EE(g(I X I[ixI
n=1 n=l n2p
+ }<00.
n2 I
Hence (2.2.23) holds.
Next for n > 1,
00
= E(IXj I[Ix,
j=n
,=n1 1 (
~ M1n 2p (Mi
(2p1 J
2p 1
If O(x) be taken to be the function as in (2.4.3), then for all large n,
A 0(1)
SM2 nl2' logici n)'+ (M2 = (2p 1)')
= o(I).
Hence, for all large n
and so
(An A )) = ( ( )) Ms n2 (log, n)P (Ms = M2) .
Thus, for all large n
P {IX, I.x 9g' (An ( '))}
= P {j[IIxn2] P{IX,I < 2}1 > g; (An0 (in'))
< P {[IXn < 2] n [I[Ilx.l2] P{X, <5 2} > g;' (j,. 0(A'))] } + P{IXn\ > 2}
= P [jX.j<2]n 11+2 >gi(A .)) +P{X>2}
= P{IXn1 > 2}
1
n2
and so by Remark (i) after the statement of Corollary 2 the condition (2.2.25) also
holds. Therefore, by Corollary 2, the tail series SLLN
12 + 0 a.s.
n (login ) P
obtains for the tail series {TI, n > 1} defined as in (2.2.24), i.e.,
22t
+ Tn + 0 a.s.
(log, n) P
In conclusion, it may be noted that Corollary 4 can also be applied to this
example by setting {g,, n > 1} as in (2.4.7) in the case when {X,, n > 1} are
assumed to be independent.
In the following example, we will consider the rate of almost sure convergence
of the harmonic series with a random choice of signs.
Example 3. Let {X,, n > 1} be independent random variables such that
P X, =P x,= = nn>l.
The series of partial sums S, = '=ji Xj, n > 1, can be interpreted as the harmonic
series with a random choice of signs. We will employ Theorem 2 to determine its
rate of convergence to a random variable.
Let 0 < a < 1 and let
g,(z) = n'a, n 2 1, and g(x) = gi(x) = zx, > 0.
Then the conditions (2.3.1), (2.3.2), and (ii) of Theorem 2 are satisfied. For each
n>1,
E(g,(IX.1)) = n(x+a) and E(9(XI)) = n2
implying (2.3.3) and (2.1.1), respectively. Therefore, all the hypotheses of Theorem
2 as well as all the hypotheses of Proposition 4 (with g(x) = g1(z) = x2) are
satisfied.
Now for n > 1,
A = EE(gj(lXjl)) = E j(+) M, n (M, = a1) (2.4.9)
j=n j=n
and
A. = Z E(g(IXj)) = j2 n1 (2.4.10)
j=n j=n
If k(xz) be taken to be the function
then
An (An 1) M2 n (M2 = a)
and
A,, (A;1)~ n2
implying, respectively,
gl(^.4(A; ,l)) ( M3n+ (M =)
and
(An i(A1)) n .
Thus, by applying Theorem 2 and Proposition 4, the tail series SLLNs
n2 Tn  0 a.s. (2.4.11)
and
nT Tn  0 a.s. (2.4.12)
obtain, respectively. Hence, recalling a < 1, (2.4.11) dominates (2.4.12). Therefore
Theorem 2 gives us a sharper result than that which can be obtained by Proposition
4.
Next, by taking t(x) to be the function as in (2.4.3) of Example 1, two relations
(2.4.9) and (2.4.10) yield the asymptotic relations
An (A1') ~ M4 n (log, n)'+' (M4 = a')
and
An b(A) ~ n'(log, n)'+,
respectively. Thus,
n (A ;')) Ms n" (log, n)+'P (M, = at)
and
g'(An,(A1)) ~ n (log, n).
Hence, by either Theorem 2 or Proposition 4, the tail series SLLN
(logn) Tn 0 a.s. (2.4.13)
(log, n) 2
obtains for arbitrary e > 0. Therefore, there is no advantage of Theorem 2 over
Proposition 4 in this case.
Furthermore, let {Y., n > 1} be a sequence of i.i.d. random variables such that
1
P{Y, = 1} = P{Y, = 1} = , n > 1.
Consider the weighted i.i.d. random variables
Xn = a, Yn, where a, = n1, n > 1.
Then
00 001 1
= E = n
j=n j=n
and so
na
2 = o(t+1) and = O(1).
Therefore, by a tail series LIL of Rosalsky [41, Theorem 2] where / therein is chosen
to be 0, the tail series LIL
m sup n aj Yj = 1 a.s.
n o(2 t log2 t)2)
obtains, i.e.,
lim sup 2 n T = v2 a.s. (2.4.14)
no (log, n)I
Hence, for arbitrary e > 0, the tail series SLLN
I
n2
g2 Tn  0 a.s.
(log, n)I+c
obtains. Thus this result of the tail series LIL of Rosalsky [41, Theorem 2] is sharper
than (2.4.13) as well as (2.4.11) and (2.4.12).
This example illustrates the gap between the conclusion of the tail series SLLN
(Theorem 2) and that of the tail series LIL of Rosalsky [41, Theorem 2]. Further
discussion about this will be given in Chapter 4.
CHAPTER 3
TAIL SERIES WEAK LAWS OF LARGE NUMBERS
3.1 Introductory Comments, Tail Series Inequality,
and a New Proof of Klesov's Tail Series SLLN
As was mentioned at the beginning of Chapter 2, in Klesov's [29, Lemma 1]
proof of a tail series version of Kolmogorov's inequality for independent random
variables, not only was his argument obscure, but, also, he employed a tail series in
equality without proving it. After formulating and proving this tail series inequality
(Proposition 5 below), we will provide an alternative proof of the tail series SLLN
of Klesov [29, Proposition 1] (which is not based on the tail series version of the
Kolmogorov's inequality as was used by Klesov to prove his tail series SLLN). As a
direct application of this tail series inequality, we will establish tail series WLLNs
for the case of independent summands. Furthermore, as special cases of these tail
series WLLNs, we will also obtain tail series WLLNs for weighted sums of i.i.d.
random variables. As in Chapters 1 and 2, {Tn, n > 1} denotes throughout the tail
series T, = iO=, Xj, n > 1, corresponding to random variables {X,, n > 1}. As
will be seen, the hypotheses to each of the tail series results presented below ensure
that {Tn, n > 1} is a welldefined sequence of random variables. Hence Tn + 0 a.s.
or, equivalently,
sup Tl P 0.
j>n
As was mentioned in Chapter 1, these tail series WLLNs are of the form
sup IT,
>_n P
_ 0 (3.1.1)
bn
where {b,,, n > 1} is a suitable sequence of norming constants with 0 < b, i 0.
Of course, if the tail series SLLN
T.
T + 0 a.s.
bn
holds, then
]Ti IP
sup 0
j>n bj
whence via 0 < b4 1 0 the tail series WLLN (3.1.1) also obtains and it involves the
same sequence of norming constants.
This tail series inequality under discussion may now be formulated.
Proposition 5 (Klesov [29]). Let {X,, n > 1} be independent random variables
with E(IX,IP) < oo, n > 1, for some p > 0. Assume that one of the following two
conditions holds
(i)0
(ii) 1 < p < 2 and E(X,) = 0.
If
E E(IX,,I) < oo, (3.1.2)
n=l
then for every e > 0, the inequalities
P{sup ITrI > 6 P E(XjlI), n 2 1
j>nobt e j=n
obtain where Cn (p) E (0,2] is a sequence of constants depending only on p.
The proof of Proposition 5, which will be given below, utilizes the following
Lemmas 7 and 8 and the proposition, under the assumption (ii ), is indeed a tail
series analogue of Lemma 7 which concerns partial sums of independent random
variables.
Lemma 7. Let S, = Z=X Xj, n > 1 where {X,, n > 1} are independent random
variables satisfying for some p E (1,2]
E(IXIP) < oo and E(Xn) = 0, n > 1.
Then for all e > 0, the inequalities
P( max ISl > e < E(IXI), n > 1
obtain.
Proof. Note at the outset that the hypotheses ensure that {Sn, .F,, n > 1} is
a martingale where 7, = e(Xi, X2, ..., X,), n > 1, and so {ISn 7n, n > 1}
is a submartingale (see, e.g., Chow and Teicher [14], p. 232) since the function
p(t) = ItlP is convex. Then
P{max Si > e = P max Sj' > e,
l
< E(ISnIP)
(by Doob's submartingale maximal inequality [18, p. 314])
2"
< E E(IX I)
j=1
by employing the von BahrEsseen [8] inequality. Thus the lemma follows. o
Lemma 8. Let {Xn, n > 1} be independent random variables satisfying for some
pE (1,2]
E(IXnI) < oo and E(Xn) = 0, n > 1.
For each n > 1 and 1 < k < n, let
S.j= EXi, k
i=j
Then for all choices of n and k with 1 < k < n and for all e > 0, the inequality
max
Pf Jax S.J > 6} EE(IXi P)
j=k
obtains.
Proof. Fix n > 1 and 1 < k < n. Set
Sj = .+i, 1
i=1
and note that
{Sn,:j=k, .., n}= {S :j=n+ l k, ..., 1}.
Then, applying Lemma 7 to the random variables {Xn, X,1, ..., Xk}, it follows
that for e > 0,
P{max ISni > e = Pf max ISI > e
1k
2 n+lk
< E E(IXn+j I)
j=1
2 "
 j (E(IXiIP)
thereby proving the lemma. 0
Proof of Proposition 5. Let g,(x) IxI, 0 < p < 2, n > 1. Then, by the
argument in the proof of Theorem 2 of Chapter 2, (3.1.2) ensures that {T,, n > 1}
is a welldefined sequence of random variables.
Firstly, suppose that the assumption (i) holds. Then
Pjsup ITi> < pf P Xyll > c
jn j=nCo
= P (E IXi ) > e'
< E (( Ixj l)' (by the Markov inequality)
lim E (EI jX)'P
SP N*oo
(by the Lebesgue monotone convergence theorem)
1 N
< lim E IXj' ( since a + bIP < laI+ IbI, 0
eP N*oo (j=n
1"00
= E E(IXjIP)
j=n
again by the Lebesgue monotone convergence theorem. Thus, the proposition fol
lows under the assumption (i) with C,(p) = 1.
Next, Lemma 8 will be employed to prove Proposition 5 under the assumption
(ii). Note that for N > n > 1,
P max Tj > e = P max lim > e
n
M
= P max lim i X > e
SaNf M m
= P lim max  Xi > e
Moo n
I lim max I '.x,>e)
S Mfooni
< E liminfr I
Moo max IE" XI>.
< liminf E Ir (by Fatou's lemma)
M0ooN  J
= liminfP max XlB xil> e
M*oo n
2M
liminf P max IZXI > e
M+oo n
liminf 2 E(lX, I) (by Lemma 8)
Moo CP. n
2 oo00
= E E(IXiIP).
Letting N  oo yields
P sup IT > e = lim P max T, > e
ujT> N*oo n
2"0
< C E E(IX IP)
j=n
thereby proving the proposition under the assumption (ii) with Cn(p) = 2. O
Now, using Proposition 5, we will reprove (in Proposition 6 below) the tail series
SLLN of Klesov [29, Proposition 1] which we had questioned earlier. Of course,
Proposition 6 is merely the special case g(x) = IxIP, 0 < p < 2, of Proposition 3 of
Chapter 2 as well as the special case gn(x) = IzxI, 0 < p < 2, n > 1, of Lemma 6
of Chapter 2 but an alternative proof may be of interest.
Proposition 6 (Klesov [29]). Let {X,, n > 1} be independent random variables
with E(IXnI) < oo, n > 1, for some p > 0. Assume that either of the condition (i)
or (ii) of Proposition 5 holds. Let {bn, n > 1} be a sequence of positive constants
with b, 1 0. If the series
SE(IX IP)
E < o, (3.1.3)
n=1 b
then the tail series SLLN
+ 0 a.s.
bn
obtains.
Proof. By the proof of Proposition 5 with X, replaced by bn'Xn, n > 1,
{(~ n b. lXj, n > 1) is a welldefined sequence of random variables. Since b, i 0,
the proposition follows from Lemma 1 of Chapter 2. 0
3.2 Tail Series WLLNs
Using Proposition 5, we will prove tail series WLLNs of the form
sup ITA
j2, P
+0
bn
where {b,, n > 1} is a sequence of norming constants with 0 < b, i 0. This, of
course, ensures that
T, p
~ 0.
bn
The following theorem is comparable with Proposition 6.
Theorem 3. Let {X,, n > 1} be independent random variables with E(IX,'I)) <
oo, n > 1, for some p > 0. Assume that either of the conditions (i) or (ii) of
Proposition 5 holds. Let {b,, n > 1} be a sequence of positive constants with b, 1 0.
If
E E(IXjI) 0, (3.2.1)
j=n
then the tail series WLLN
sup ITil
i> P
0 (3.2.2)
bn
obtains.
Remark. Note at the outset that Lemma 1 of Chapter 2 ensures that (3.1.3)
implies (3.2.1). Thus, while in Theorem 3 we obtain a weaker conclusion than that
of Proposition 6, we use a weaker assumption.
Proof of Theorem 3. Since (3.2.1) implies (3.1.2), taking gn(z) =_ Ix\z, 0 < p <
2, n > 1, we see that {Tn, n > 1} is a welldefined sequence of random variables
by the argument in the proof of Theorem 2 of Chapter 2. Alternatively, it may be
noted that (3.2.1) implies (3.1.2) whence {T,, n > 1} is well defined as was shown
in Proposition 5.
In Proposition 5, replace e by ebn for each n > 1. Then, for arbitrary e > 0
p Isup  I Tj( 1
bP ,, > p E(jXjl ) (0 < C.(p) < 2)
0 (by (3.2.1))
thereby proving (3.2.2). O
Corollary 5. Under the hypotheses to Theorem 3, the tail series WLLN
Tn P
0
b.
obtains.
Proof. The corollary follows immediately from (3.2.2). 0
As additional corollaries of this theorem, we obtain the following two tail series
WLLNs (Corollaries 6 and 7) for the weighted i.i.d. case.
Corollary 6. Let {Yn, n > 1} be i.i.d. random variables with E(Y1) = 0, E(Y) =
1, and let {a,, n > 1} be a sequence of nonzero constants. If the series
E a < 0, (3.2.3)
n=l
then setting
o00
t2= = >
j=n
for every a > 0 and positive integer r, the tail series WLLN
sup IEi= a, Y
j>n P
;a + 0
tn (log, t2)
obtains.
Remark. The condition (3.2.3) is necessary for {Tn, n > 1} to be a welldefined
sequence of random variables where T, = Ej=, aj Yj, n > 1 (for clarification see
the discussion in Section 4.3 of Chapter 4).
Proof of Corollary 6. Let a > 0 and let r be positive integer. Set
bn = t, (logr t2), n > 1.
Since (3.2.3) ensures that t 1 0,
1 t ty 1
EE(ayj2) = t2 = 
b. t2 (log, t2)0 =) (log, t2) 0
and so (3.2.1) holds with p = 2 where Xn = a, Y, n > 1. The corollary then
follows directly from Theorem 3. O
As a special case (r = 2 and a = 1) of Corollary 6, we obtain the following
corollary which will then be compared with a tail series LIL.
Corollary 7. Under the hypotheses to Corollary 6, the tail series WLLN
sup IEj ar I
t(o 2) p 0 (3.2.4)
tn (log2 t;2)2
obtains.
The hypotheses of Corollary 7 (or Corollary 6) are weaker than those of some
results of Rosalsky [41, Theorems 2 and 3] which provided conditions for the tail
series LIL
lim sup j=n aJ Y = a.s. (3.2.5)
o t (log2 tn2)
to obtain. Observe that the norming constants in (3.2.4) and (3.2.5) are the same.
The following two examples exhibit a sequence of norming constants {b,, n > 1}
for which a tail series WLLN holds, but a tail series SLLN does not. In the first
example, the harmonic series with a random choice of signs, which was considered
in Example 3 of Chapter 2, will be reconsidered.
Example 4. Let {X,, n > 1} be independent random variables such that
P IX= =P = 11 1 n>l.
Let 0 < a < 1. Then for arbitrary p E (0, 2]
E(IXIP) = nP, n> 1.
Let r be a positive integer, and set
b, = nI(log n)9, n > 1.
Then
E(IX.nl) nP __ _
Sn f (log,n) (log
implying
lE( JP = n (log, n) = oo.
n=1 n=l
Hence, since p E (0, 2] is arbitrary, the hypotheses of Proposition 6 are not met.
Indeed, it will be seen below that for p = 2, r = 2, a = 1, {X,, n > 1} obeys
the tail series WLLN with norming constants {b,, n 2 1} but does not obey the
tail series SLLN with those norming constants (since it obeys the tail series LIL
with those constants).
Next, choose p = 2. Then
00 0) I 1
j=n j=n n
Thus for r > 1 and a E (0, 1],
1 00 n1 1
E E(X?) o(l)
14 = nl(log, n)O (log, n)o
ensuring (3.2.1). By applying Theorem 3, the tail series WLLN
sup ITj,
.>n P
(log 0, (3.2.6)
n (log, n)2
obtains. Choosing r = 2 and a = 1, it follows in particular that
sup 1Tj
n_l I  0. (3.2.7)
n 2(log2 n)
(Note that {X,, n > 1} are weighted i.i.d. random variables, i.e., X, = an Y,, n >
1, where {Yn, n > 1} are i.i.d. random variables with
1
P{Y,=1}=P{Yn=1}=, n>1
2 
and an = n1, n > 1. Thus, by applying Corollary 6 and Corollary 7, we can also
arrive at the same conclusions (3.2.6) and (3.2.7), respectively.) But, recalling the
tail series LIL (2.4.14) of Example 3 of Chapter 2, it is clear that the tail series
SLLN
T.
S 0 a.s.
n2 (log2 n)
fails.
Next, an example constructed by Rosalsky [41, Example 1] of weighted i.i.d.
random variables will be discussed in the context of the tail series WLLN and
SLLN.
Example 5. Let {Yn, n > 1} be i.i.d. random variables with E(Yi) = 0, E(Y,2) =
1. For each n > 1, let
2 (log0 n)" (log n)"
nexp {(log, n) (logn) (log3n)}, < <00
and assume that if u > 1, then
E(Y12 (log2 IY1 )') < oo for some q > u 1.
Then the condition (3.2.3) obtains (see Rosalsky [41, Example 1]). Then, for r >
1 and a > 0, the tail series WLLN
sup IEcij ai Yi
t (log, t2) (3.2.8)
obtains by Corollary 6. By choosing r = 2 and a = 1 (or by Corollary 7), it follows
in particular that
sup IEj" a l II
ji> P
j2 j 0 .
t (log t2)2
But by Rosalsky [41, Example 1], the tail series LIL
limsup m= aJ = V a.s.
t (log, t2))
obtains. Thus the tail series SLLN
E' aj Y
1+ 0 a.s.
tn (log2 t2)n
fails.
Remark. It may be noted that the hypotheses to the tail series LILs of Rosalsky
[41, Theorem 2 and 3] ensure immediately via the Chebyshev inequality that
T, P
" 0.
t, (logz2 2)
Indeed, the hypotheses to these theorems of Rosalsky [41] always entail the stronger
conclusion
sup JTjl
j> P 1
+ 0.
t, (log, t;2)
This observation was already made after Corollary 7 concerning the tail series LIL
of Rosalsky [41, Theorems 2 and 3]. Apropos of Rosalsky [41, Theorem 1], the
observation follows immediately from our Theorem 3 by taking p = 2 and b, =
tn (log 0 2) n 1.
CHAPTER 4
TAIL SERIES STRONG LAWS OF LARGE NUMBERS II
4.1 Introduction and Preliminaries
As was discussed at the end of Chapter 2, there is a gap between the conclusion
of our tail series SLLN (Theorem 2) and that of the tail series LIL of Rosalsky [41,
Theorem 2]. So, it is natural to seek a tail series SLLN whose conclusion is more
akin to that of the tail series LIL of Rosalsky [41, Theorem 2]. To this end, we will
establish in Theorem 4 below a tail series counterpart to the following SLLN for
partial sums by Teicher [47].
Proposition 7 (Teicher [47]). Let 1 < p < 2 and let Sn = Ej=i Xj, n > 1, where
{X., n > 1} are independent random variables with
E(X,) = 0, E(IX.I ) < en < oo, Bn = je oo, n 2 1
j=1
where {e,, n > 1} are positive constants. Assume that
Bn+1 = O(B,).
If for some a E [0, ) and some positive constants 6 and e
SP{IX,j > B. (log2 B.)1'} < oo (4.1.1)
n=1
and
oo E(X2IB 1Io B1)
E ( [eB(lg, B)
n= (B(log, B)')2
55
then the SLLN
S.
' 0 a.s. (4.1.3)
B. (log2 B,)1a
obtains.
Remark. A standard BorelCantelli argument reveals that (4.1.1) is a necessary
condition for the conclusion (4.1.3) to obtain. Moreover, while the condition (4.1.2)
is technical in nature, it is not at all ad hoc in that it is of the spirit of conditions
employed by Chow, Teicher, Wei, and Yu [15], Egorov [19, 20], Heyde [22], Klesov
[31], Petrov [37] (see Heyde [22] and the inequality (I) of Loeve [34, p. 209] for clar
ification), Petrov [40, p. 303], Sakhanenko [42], Teicher [45, 46], Tomkins [48], and
Wittmann [50] to prove LILs (or SLLNs) for partial sums of independent random
variables. Moreover, the above authors also employed a condition in the same spirit
as (4.1.1).
It will be seen after the statement of Theorem 4 that this theorem will yield a
sharper result than that of the tail series SLLN of Theorem 2 of Chapter 2 when
the hypotheses of Theorem 4 are satisfied. Furthermore, as special cases of the tail
series SLLNs of this chapter, we will investigate the tail series SLLN problem for
weighted sums of i.i.d. random variables. In the weighted i.i.d. case, it will also be
seen after the statement of Theorem 5 that this tail series SLLN will narrow the
gap between the conclusion of the tail series SLLN and that of the tail series LIL
of Rosalsky [41, Theorem 2].
4.2 Tail Series SLLNs
For independent random variables {X,, n > 1} we obtain tail series SLLNs
below, which are counterparts to the SLLNs for partial sums of Teicher [47]. The
main result of this chapter, Theorem 4, may now be stated. As in previous chapters,
{T., n > 1} denotes throughout the tail series Tn = EC' Xj, n > 1, corresponding
to random variables {Xn, n > 1}. It will be shown in the proof of Theorem 4 that
the hypotheses guarantee that {T, n > 1} is a welldefined sequence of random
variables. But, the proof of Theorem 4 will be deferred until after the proof of the
ensuing Lemma 10.
Theorem 4. Let 1 < p < 2 and let {Xn, n > 1} be independent random variables
with
E(Xn) = 0, E(jXnj) < en, n > 1
where {en, n > 1} are positive constants with YE~ en < oo. Assume that
AP = O(AP+1) (4.2.1)
where At = E ej, n > 1. If for some a E(oo, 0 )
X P{IX. > 6 A. (log, AP)1} < oo for some 6 > 0 (4.2.2)
n=l
and for all e > 0
oo E (X eA(10g, A')"
E X < oo, (4.2.3)
=1 (A. (log2 An"P)1a)
then the tail series SLLN
0 a.s. (4.2.4)
A. (log2 AnP)1a
obtains.
Remarks. (i) Note that (4.2.1) ensures that A, An+1 > 7 for some y E (0,1).
(ii) Note that if {X,, n > 1} satisfies the hypotheses to Theorem 4, then the
hypotheses to Theorem 2 of Chapter 2 are also satisfied with
gn (x) =xl, 1 < p < 2, n > 1
and so for any O(x) 6
T.
n 1* 0 a.s.
where
00oo
A = E(IXjlP), n>1.
j=n
As for as notation is concerned, note that if {X,, n > 1} obeys the hypotheses to
Theorem 4 with en = E(IXnP), n > 1, then the sequence {An, n > 1} of Theorem
4 is in fact the sequence An, n _> 1 However, in this case,
(log2 AP)1a = ((Al)))
whence Theorem 4 yields a sharper conclusion than does Theorem 2. Of course,
(a) in general, the hypotheses of Theorem 2 may be satisfied, but not those of
Theorem 4,
and
(b) Theorem 2 involves a class of norming sequences which is structurally
different from that of Theorem 4.
As will become apparent, Theorem 4 owes much to the work of Teicher [47]. The
proof of Theorem 4 utilizes the following two lemmas. In Lemma 9, there are no as
59
sumptions concerning the integrability of the random variables exp{t S}, exp{t S,},
n > 1, in (4.2.5). Moreover, Lemma 9 cannot be proved by invoking the continuity
theorem for moment generating functions unless S,, n > 1 and S are all defined on
a common interval of the taxis containing 0 as an interior point.
Lemma 9. Let Sn, = E=Xj, n > 1, where {X,, n > 1} are independent ran
dom variables with
00
E(X,) = 0, n > 1, and f E(XJ) < oo.
n=l
Then there exists a random variable S with E(S) = 0, Var(S) = E= E(X2) and
S,  S a.s. and such that
im E(exp{t S.}) = E(exp{t S}), oo < < oo. (4.2.5)
Proof. The existence of a random variable S with E(S) = 0, Var(S) = E'=i E(X2)
and S,  S a.s. follows directly from the KhintchineKolmogorov convergence the
orem.
Next, for all n > 1, Jensen's inequality ensures that
E(exp{t T.+i}) > exp{t E(T,,+)} = e = 1
and so
E(exp{tS)) = E(exp{tTn+}exp{tSn})
= E(exp{t Tn+1}) E(exp{t s,}) (by independence)
> E(exp{tSn}).
Thus,
limsupE(exp{t S.}) < E(exp{t S}). (4.2.6)
Moreover,
E(exp{tS})) = E(limexp{tS})
< liminfE(exp{t S}) (by Fatou's lemma)
which when combined with (4.2.6) yields the conclusion (4.2.5). 0
Lemma 10. Let {Xn, n > 1} be independent random variables with jX.nI
M,, n > 1, where {M., n > 1} is a bounded sequence of positive constants and
suppose that
E(X,) = 0, n > 1.
(i) If the series
E < 00, (4.2.7)
n=l
where aon = E(Xn), n > 1, then setting
00oo
t2 = a oJ a 2 1,
j=n
the inequalities
(exp T < ex2 1 + tC)}, n> 1
obtain for all t E (0, C1] where
1
C, = sup M, n > 1.
in j>n
(ii) In addition to the assumptions in part (i), let {z,, n > 1} be a numerical
sequence satisfying
0 < Cn, 5 u, n> 1 (4.2.8)
for some constant u < oo. Then the inequalities
P supTj > A t < exp {zx' vA (1 + n> 1
I I 2 n 2
obtain for all A > 0 and all v E (0, u1].
Remark. Observe that our A in part (ii) may vary with n, i.e., the above A can
be replaced by A,.
This lemma is a tail series analogue of the exponential bounds lemma of Teicher
[47, Lemma 1]. The proof of Lemma 10 employs the function 21 (1 + 2'x) playing
a similar role as the function g (z) = x2 (e" 1 z) of Lemma 1 of Teicher [47].
Proof of Lemma 10. (i) The argument is contained in the proof of Theorem 2 of
Chow and Teicher [13].
(ii) In order to prove part (ii) of the lemma, we will employ the argument in
the proof of Proposition 5 of Chapter 3. As in the proof of Lemma 7 of Chapter 3,
note that the hypotheses ensure that, for a given n > 1, {Sn,M, .,,M, M > n} is
a martingale where
M
Sn,M = E Xj, F ,M = a(Xn, ..., Xu), M > n > 1
j=n
and so for t > 0, {exp{(i Sn,M}, Fn,M, M > n} is a submartingale (see, e.g., Chow
and Teicher [14], p. 232) since the function y(s) = exp{t s} is convex. Then for
N > > 1, v E (0, u'], and t = v ,, we have
P max Tj > Axz. t}
n:5j:N )
f M
= P max lim X, >AXzt.
n
( M
= P lim max Xi > A Xn tn}
Mfoo n
I lim max .xo>Antn
M+oon
= E liminflr
IM Ao max I '.Xi>X.xtn
Ln .
MoO n<Awnt,.
t Ib__
(by Fatou's lemma)
=liminf P
Moo
M
max Z X, >
n
S=i
< liminfP max Xi > A z, t,
S<
M*oo
= liminf P max exp{ ,M } > exp{
Moo njS
liminf E(exp{( S.,M})
< m.fo exp{t z}
Moo exp{tAaXn}
Xztn}} (t>0)
(by Doob's submartingale maximal inequality [18, p. 314])
E(exp({ Tn
=Eexp{It A1X}) (by Lemma 9)
exp {tLA z, )
< exp{tAXn+ (l+ )}
(by part (i): note t E (0, C1])
= exp vAx + 1v (+ 2 v(, )
5 exp{ x vA
A Xn tn }
T(1 2)
} (by(4.2.8)).
Letting N + oo yields
P sup Tj > A z tn} = lim P max Tj > A x, t,}
itj>I J NKoo f n
< exp{z( V2A (1 + ))}
thereby proving the lemma. O
Proof of Theorem 4. Observe at the outset that the tail series {T, n > 1} is well
defined by taking g, (z) = IzxI, n > 1, in Theorem 2 of Chapter 2. (Alternatively,
with the above choice of {g, (z), n > 1}, Loeve's [34, p. 252] generalization of the
KhintchineKolmogorov convergence theorem ensures that {T, n > 1} is a well
defined sequence of random variables.)
Let e > 0 be arbitrary and let 0 < a < 1. For each n > 1 set
p
Un = Xn I[Ix.<5eA.(log2 AnP).]
V, = Xn I[Ixl>6SAn(log, Ap)1a]
W, = X, I[ An(log2 AnP)~
Then X, = Un + V, + W,, n > 1. Now, for each j > n > 1,
E(I Vj) < E (ixi I[6Ai(log, AT P)I
+ E(jXj I[ix>A.(log2 A;P)a])
< A, (log2 A;P)a P{IXj > bAj(log, A))1}
+ A (log2 A;')(P1) E(IXj [ixl>An(1og A
and so
IE()I < A, (log2,AP)" P{IXj, > 6Aj(log, A,)1}
j=n j=n
+ AP (log2 AP)(P1) : E X I >An(lo A a
= o(A (log, AP)1a) (since ap < 1), (4.2.9)
using (4.2.2) and the fact that
E X(l' I[ixijl>A,(10g2AP) < A'.
j=n
Note that (4.2.2) ensures via the BorelCantelli lemma that a.s. Vn is eventually 0
and consequently so is t=, 1V. Thus
=n j 0 a.s.
An (log2 An,)1"
implying via (4.2.9) that
S{ I (  0 a.s. (4.2.10)
An (log2 AnP)1
In view of (4.2.3) and KhintchineKolmogorov convergence theorem
: Aj (logA E ) converges a.s.
j=1 AJ (log2 Aj )1
Then by applying Lemma 1 of Chapter 2 to this we obtain
E =~ { Wj E(Wj)}
S W + 0 a.s. (4.2.11)
An (log2 An')1'
Now, observe that E(X,) = E(Un) + E(V,) + E(Wn) = 0. Then, in view of
(4.2.10) and (4.2.11), in order to show that (4.2.4) holds, it suffices to show (since
e is arbitrary) that Rn = E J,, {Uj E(Uj)}, n > 1, is a welldefined sequence of
random variables satisfying
sIRn 6e
lim sup n A)< 6 a.s. (4.2.12)
noo A (log, Ann)10 72
where 0 < 7 < 1 is as in Remark (i) after the statement of the theorem.
To this end, firstly observe for each n > 1 that
E(IU. E(Un)lp)
5 E((IU.1 + IE(UD)I)p)
< 2'{E(IUp'+ IE(Un)IP)}
= 2P{E(IUnP) + IE(U.)IP}
< 2P{E(JUIP) + E(IU,')} (by Jensen's inequality)
= 2p+ E(IU.IP)
and so
f E(U, E(Un)I )< 2p+I E(IUn) < 2p+1 E en < oo.
n=l n=l n=l
Thus, by taking g, (x) xIzl, n > 1, via Theorem 2 of Chapter 2, {In, n > 1} is
a welldefined sequence of random variables.
Next, recalling that An+1 > 7 where 0 < 7 < 1, let
nk = inf {n >1 : A < 7k, k >1.
Then, for all k such that nk > 2, since An,,k > 7k,
Ank 7 Ank1 > 7 k+l AnA,.
Hence {nk, k > 1} is a strictly increasing sequence of integers. Moreover, for all
k > 2 such that nk > 2, since
An. > 7k+1 and A,_, < 7k1,
it follows that
Ank 7k+1
An_, __ > 2
(4.2.13)
For each n > 1
P R> 6 A, (log2 AP)' i.o.(n)
fI 7 n 1
SP n max R > An, (log2 A,)l i.o. (k)
I 6k21
SP sup R > A, (log, Ap) i.o. (k)
P A (l A k)
< P,{ sup R > 6e A,, (log, A:) l i.o. (k)
(by (4.2.13) and the fact that AP T as k T)
= Psup. > 6 e A, (log2 A~)1 i.o.(k)}. (4.2.14)
Now, for each n > 1, let r = E(Ri). Then
2r < E(XJ I[IXly ,A(1 Al )_
j=n
= ~ E(x'lplXjI2P I[Ixj<,Ae,(log, A'P)a]
< 2p A2p oo
(log A)a(p) =n
E2P A, 2
= (lo A (4.2.15)
(log, AP)a(2P)'
For each n > 1, note that, since Un 5 e An (log2 A;P) ,
IUn E(U,)I < 2e A, (log, AP)' = Mn, n 2 1.
Then, for each n > 1, setting
1= M 2 e A (log2 A;~)
Cn = upMj =  sup=M
rn j>n rn rn
66e A (log2 Ap)12P
An r n 
rn (log1 AP)'
z, A,
it follows that
CnhZn = 2e
Ank X n rnh = 6 e Ank (log, A')1
A n = 6 (log2 AP) +oo
2 rk (log, A;') 2
Xn = A 0 < 62p (log2 Ag )"P = o(log2 AP)
Wht
(4.2.16)
(4.2.17)
(4.2.18)
(4.2.19)
by (4.2.15) and the fact that ap < 1.
Now by (4.2.14) and (4.2.17) we obtain
P Rn > A. (log2 A;)1" i.o.(n) < P sup Rn > A,,r, i.o. (k).
S(4.2.20)
(4.2.20)
But, for all k > 2 such that
P{ sup Rn > Anskrn,k
n>n* )I
nk > 2, letting u = 2e and v E (0, u'], we have
< exp X v (vA. (1 + e v)
( by part (ii) of Lemma 10 recalling (4.2.16))
= exp3 (log2 A;:) + K x (K = )
( by taking v = and by (4.2.18))
< exp{2 (log2 A))) ( by (4.2.19))
= (log A;:)
< (p klogI 7)2 ( since AP > 7Pk)
and so for some constant C E (0, oo)
{ (00 lo1 00 1
P sup R, > A; Xnr < C + 2 E 1 < oo.
k=1l n"J p' (nlog 7)2 = k=
Hence, by BorelCantelli lemma,
P sup R. > An,,r,,r, i.o.(k)} =0
implying via (4.2.20) that
P Rn > A. (log2 Al P)1' i.o.(n) = 0.
Hence
R_ 6e
lim sup ,< a.s. (4.2.21)
noo A (log, A')1a 72
Since {(Un E(U.)), n > 1} have the same bounds and variances as those of
{(U. E(Un)), n > 1}, (4.2.21) likewise obtains with Rn replacing Rn thereby
proving (4.2.12) and the theorem. O
By taking p = 2 in Theorem 4, we obtain the following two corollaries which are
partial analogues of Teicher's [47] corollaries of Proposition 7 above.
Corollary 8. Let {Xn, n > 1} be independent random variables with
oo
E(X,) = 0, E(X ) = ao, n> 1, and t = j = o(l).
j=n
Assume that
tn = O(t+1). (4.2.22)
If for some a E (oo, )
00
SP{IX.n > 6tn(log2 tV2)1 < 0o for some 6 > 0 (4.2.23)
n=l
and for all e > 0
oo E X(XI[ t(log2,;2)_
E (log < oo, (4.2.24)
n= tn (log 2)1)2
then the tail series SLLN
T,
+ 0 a.s. (4.2.25)
tn (log, tf2) n
obtains.
Remark. Observe that this corollary precludes a = In fact, the conditions
(4.2.23) and (4.2.24) when a = comprise two of the three conditions for the tail
series LIL of Rosalsky [41, Theorem 2].
Corollary 9. Let {X,, n > 1} be independent random variables with
E(X,) = 0, E(X.) = ao2, n > 1, and t = 2 oJ = o (1)
j=n
If for some a* E (0, ] and M E (0, oo),
IX.I < M t,(log2 t;2)' a.s., n > 1, (4.2.26)
then the tail series SLLN (4.2.25) prevails for all a < a*.
Proof. For a* E (0, 1], observe at the outset that
t2 t2
M2 t2 (log2 t2)2a*
M2
= 1
(log, t2)2"*
1 (since 1t = o(1))
and hence (4.2.22) holds. Note that (4.2.26) ensures that the conditions (4.2.23)
and (4.2.24) hold for any a < a*. The corollary then follows from Corollary 8. O
The two conditions (4.2.2) and (4.2.3) of Theorem 4 will now be combined into
a single one in the next two Corollaries 10 and 11 which are comparable with the
tail series LILs of Rosalsky [41, Corollaries 1 and 3]. That is, a condition which
ensures that the conditions (4.2.2) and (4.2.3) are simultaneously satisfied will be
employed in each of the following two corollaries.
Corollary 10. Let 1 < p < 2 and let {Xn, n > 1} be independent random vari
ables with
E(X.) = 0, E(IXj.I) < en, n > 1
where {en, n > 1} are positive constants with EC=, e, < oo. Assume that (4.2.1)
holds where AP = Ej, ej, n > 1. Let oo < a < 1 and 0 < P < 1. If for all e > 0
oo E( Xg I[IX.I>A.(log, A4)]
S(A (lo2 AP)) 2 < 0o, (4.2.27)
n=l (An (log1 An)1_)
then the tail series SLLN (4.2.4) obtains.
Remarks. (i) Observe that a smaller a gives us a weaker assumption (4.2.27) as
well as a weaker conclusion (4.2.4).
(ii) Also observe that for 3 = 0, the condition (4.2.27) reduces to
oo
SP{IX, > e An (log, AP)*} < oo for all e > 0
n=l
and for / = 1, it becomes
Eoo E(XI[j.>Ana(0g;A)
2 < oo.
n=1 (A. (log2 AP)'10)2
Proof of Corollary 10. Note that for all large n
E(IX.12 I[IXl>A.(log2 A)]) E (jXn20 I[X.l>A.(log2 An)1])
(A, (log, A1')1') 2 (A (log, A(1')1.)2
> E(I[IxI>An(log, A')*]
SP{Xn, > An(log2 AP)l'"}.
Then for some constant C E (0, oo),
EP{X,I > A. (log, AP)''}
n=1
Co E (lX 12P II[Ix=>An(log A)'_).])
n=1 (A, (log2 An')1)2
< oo (by (4.2.27))
implying the condition (4.2.2) with 6 = 1.
Next, note that for arbitrary e > 0 and all n > 1
E (IX, 1" I[I,.>eA,.(log, A;p)])
(An (log2 A'P)1")26
(A, (log, A_)
E (iX, 22(1') [eAn(1log0 A;P)
(An (log2 An')I,)22(IP)
SE(X2 I[.A(log AP)a<IXn
(A, (log, Ap)1,)2
Then
SE(X 'A(10og Ap)O
n=1 (A. (log2 AnP)1)2
0 E (lXn,2f I[Xnj>.An(,log, A'P)])
n=1 (A (loga An")1)2
< oo (by (4.2.27))
and so the condition (4.2.3) also holds with 6 = 1. The corollary then follows
directly from Theorem 4. 0
Corollary 11. Let 1 < p < 2 and let {Xn, n > 1} be independent random vari
ables with
E(X,) = 0, E(IX, I) < en, n > 1
where {en, n > 1} are positive constants with E,=i en < oo. Assume that (4.2.1)
holds where A = ECjn ej, n > Let oo < a < 1, and 7 > 1. If for all e > 0
Y (log1 AP)1^P('"
E (IX I''[IX.I>eAn(log, A;P)] = ( g e. (4.2.28)
then the tail series SLLN (4.2.4) obtains.
Proof. Note that for arbitrary e > 0 and all large n, (4.2.28) ensures that for
some constant C1 E (0, oo)
E(IXnlP I[IXI>An(log2 A;P)]) ______
(An (log2 AP)1a)P AP logici AnP) (log2 A;P) (log3 A )7"
Then for some constant C2 E (0, oo)
E E(Xn lP I[X.>'An(og, A,)_)
n=1 (An (logz AnP) ')
< c+C e
n=1 An (log, AZ') (log, An") (log3 An')
< oo (by Rosalsky [41, Lemma 5])
and so the condition (4.2.27) holds with P = E. The corollary then follows from
Corollary 10. O
4.3 The Weighted I.I.D. Case
For i.i.d. random variables {Yn, n > 1} with E(Yi) = 0, E(Y,2) = 1, and for
nonzero constants {an, n > 1}, {an Y, n > 1} is a sequence of weighted i.i.d.
random variables. Then there exists a random variable S with EC= a Yj  S a.s.
iff E,= a' < oo. (Sufficiency follows directly from the KhintchineKolmogorov
convergence theorem whereas necessity results from the work of Kac and Steinhaus
[28] or Marcinkiewicz and Zygmund [35] or Abbott and Chow [1].) In such a case,
E(S) = 0, E(S') = E'= a .
Corollaries 8 and 10 reduce to Corollaries 12 and 13 below, respectively, in the
weighted i.i.d. case.
Corollary 12. Let {Y,, n > 1} be i.i.d. random variables with E(Yi) = 0, E(Y2) =
1, and let {an, n > 1} be nonzero constants satisfying t2 = EC a = o(1) and
(4.2.22). If for some a E (oo, ,)
SP{lYi > 6 lajl t (log, 2t2)1" < oo for some 6 > 0 (4.3.1)
n=l
and for all e > 0
oo anE aInl'tn(log, tn2 )@11<6nn(l10g, t;2)1])
E E  2 < oo, (4.3.2)
n=1 t. (log, t;2)1
then the tail series SLLN
E;=0 aj
S 0 a.s. (4.3.3)
t, (log, t 2)1a
obtains.
Proof. Since the conditions (4.3.1), (4.3.2), and (4.3.3) are simply transcriptions
of (4.2.23), (4.2.24), and (4.2.25), respectively the corollary follows immediately
from Corollary 8. O
Corollary 13. Let { Y, n > 1} be i.i.d. random variables with E(YI) = 0, E(Y 2) =
1, and let {a,, n > 1} be nonzero constants satisfying t2 = E a = o(1) and
(4.2.22). Let oo < a < 1 and 0 < P < 1. If for all e > 0
aoo la2 E (lYI1 I[y l .lait(log2 t;2])
2 < oo, (4.3.4)
"=' (t. (log t2)1")
then the tail series SLLN (4.3.3) obtains.
Proof. Since the condition (4.3.4) is a simple transcription of (4.2.27) with p = 2,
the corollary follows directly from Corollary 10. 0
The main result of this section, Theorem 5, which is an analogue of the tail
series LIL of Rosalsky [41, Theorem 2], may now be stated.
Theorem 5. Let {Y,, n > 1} be i.i.d. random variables with E(Yi) = 0, E(Y2) =
1, and let {an, n > 1} be nonzero constants satisfying t2 = E ,a = o(1) and
(4.2.22). If
2
nan = ((log2;2)) for some oo < < oo, (4.3.5)
then the tail series SLLN
+ 0 a.s.
t, (log t~2)1"
obtains for every a E (oo, 1) provided in the case 7 > 2(1 a) that
E(Y12 (log, IYl)2(1a)) < oo. (4.3.6)
Remark. Actually, under the assumption (4.3.5) where r < 1, the result follows
directly from the tail series LIL of Rosalsky [41, Theorem 2]. In the case 1 <
r < 2(1 a), the additional assumption is not needed in Theorem 5, although an
alternative additional assumption in the same spirit as (4.3.6) is required for the
tail series LIL of Rosalsky when (4.3.5) holds with r > 1. And for r > 2(1 a), we
assumed the moment condition (4.3.6) which is weaker than the additional moment
condition in the tail series LIL of Rosalsky since 7 2(1 a) < r 1.
Proof of Theorem 5. Without loss of generality, it may be assumed that r > 0
and a > 0. Note that (4.2.22) is tantamount to tn2 t2_1 < M1 for some constant
Mi E (0, oo) and all n > 2. Then for all n > 1,
t2 = t2 < tMn1 = O(Mn)
j=2 I
implying
log2 t2 < (1 + o(1)) log, n. (4.3.7)
Moreover, for all large j and for some constant M2 6 (0, oo), observe that
j1 + 1 j1
= 14
(log2 t '
S1+ Ms2 ( (by (4.3.5))
j1
< 1+2M2(log 1)), (by (4.3.7))
j1
and so for all large no and n > no
no = l til
t2 j=no+l J31
< II (1+2M(j 1)1 (log, (i 1)))
n1
= I exp 2M2j1 (log1 j)}
exp J n( 2M oj J)
n1
exp 2 M2 (logn)r j
5 exp({2M2 (log1n)T7+1.
Thus
log 2 = O(log2 n). (4.3.8)
In view of Corollary 13, it is suffices to show that the condition (4.3.4) is satisfied
for some / E [0, 1] and all e > 0. Recalling (4.3.5), let K E (0, oo) satisfy
2 K(log, t2 ) (4.3.9)
for all large n. For each n > 1, let
e2 n
q K(logt2) (4.3.10)
K(log1 09 )'+2
where e > 0 is fixed but arbitrary. Then, by (4.3.9)
e2 tP
S< (log t (4.3.11)
a2 (log, t2)2a
and also by (4.3.8) and (4.3.10)
log t0 2 = O(log2 qn). (4.3.12)
Next, it will be demonstrated that {q,, n > 1} is eventually nondecreasing with
q,  oo. To this end, note that
n ( ,2 t ) n a'1
tl (logl tl2) (log,2) 2 (log, t ,) (log, t2)
__ (1) (n 1) a___
o2 _) (by (4.2.22))
1 (log t n0) log2,n)
= o(l) (4.3.13)
77
by the assumption (4.3.5). Let O(x) be the extension of {t2, n > 1} defined by
linear interpolation between integers, i.e., for all n > 2
n(x) = t nln + (tt1) (x n + 1), x E[n 1, n).
Then, via the mean value theorem, for all large n there exists a number z, in
(n 1, n) such that
2 2 2n e2(n1)
q qn1 K(log, t2),+2a K (log2 t2i),+2
(e 1 (7 + 2a) (t2 t2x) }
K (log, 2 (X,))+2 (X ) logici ,(x,,))(log2 (X))++
= ~(1 + o(1)) (log ))+2 (by (4.3.13))
K (log, (o2 (X,))
2K
>2 (log2 (z,))( +2)
> 0.
Thus, since (4.3.10) and (4.3.8) guarantee q2 + oo, we have verified that {q,n, n 1}
is eventually nondecreasing with qn  oo.
Let 0 < # < 1 and r* > 0, the exact choices of which will be made later. Then
a. l1'2 E (I 2[ ,'1 Y>l>n1t.(log2, t
n=1 (t (log, 2) )2
oo E (IY12 I['>1,>I ,lt"(log2 2)])
n 0(1)(l 2og )2p()OP (by (4.3.9))
n=1 n (log) t)
5 O(i) n E(Pg2z[ )) (by (4.3.11))
n=l xn (log10 t2 )
< 0(1) E '"mR E(IY1X1 iE5
zn=1 n (log, t2)P(2(1a)r)
= O(1) EE( 2 Il,, +) n(log2 t2)fn(2(1a))
j=1 n=l
O(1) E j (log2 t'2) (r2(1a)) E(Yi12 I[q
j=1
(by Rosalsky [41, Lemma 6])
SO(1)logt 2t *(,2(1)) E(Y12(log2 1YID)* I[qi
j=1 qj (log2 qj)*
= O(1) (lot 2)`(r2(+1a))(lP)(r+2a) E(Y12(log12 1Y11) I[qi,
j=1 (log, qj)r*
(by (4.3.10))
< O(1) E (log2 t 2)'+2 E(Y12(log,2 1Y1,) I[j,
j=1
(by (4.3.12)).
If r < 2(1 a), let r* = 0 and f = '(r + 2a) and then via (4.3.14) the series
of (4.3.4) is dominated by 0(1) E(Y12) < oo.
Alternatively, if r > 2 (1 a), let r* = r 2 (1 a) and f = 1. Then again via
(4.3.14), the series of (4.3.4) is dominated by 0(1) E(Y12(logz2 IYl)') < oo recalling
(4.3.6). The theorem follows then directly from Corollary 13. 0
To illustrate Theorem 5, we will revoke previous examples from Chapters 2 and
3.
Example 6. As was observed in Example 3 of Chapter 2, the harmonic series
with a random choice of signs yields the tail series LIL (2.4.14) thereby ensuring
the tail series SLLN
(log, n)+T 0 a.s. (e > 0)
(log2
obtains. Alternatively, the same conclusion follows directly from our tail series
SLLN (Theorem 5 with r = 0 and a = 1 e).
Example 7. The same argument as in Example 6 can be applied to Example 5
of Chapter 3, i.e., since
~ (log, n)" (log3 n)" and log2t log2 n
n
(see Rosalsky [41, Example 1] for verification), the condition (4.3.5) holds with
7 > u (or 7 = u if v < 0) and so from Theorem 5 the tail series SLLN
 0 a.s.
t,, (log2 tn2)1a
obtains whenever oo < a < 1 provided (4.3.6) holds if 7 > 2(1 a).
In the following example, we will see an application of the tail series SLLN to
the field of time series analysis.
Example 8. Let {St, t = 0, 1, 2, ...} be the moving average process of
infinite order given by
00
St= E a Xtj (4.3.15)
i=0
where {Xt, t = 0, 1, 2, ...} are i.i.d. normal random variables with mean 0
and variance 1 and {ai, j > 0} is a square summable sequence of constants. As a
specific example, consider a long memory process, which is represented by (4.3.15)
with ao = 1 and
r(j +d) k 1 + d
a, = k. = ,j1 (4.3.16)
(j +1)l'(d) o
where
txletdt, if X > 0
Jo
1
d < 2 and r(x) = oo, if x = 0
z1 r(1 + x), if z < 0.
By Stirling's formula
r(z) ~ 2 e,+1 (x 1)'1 as z  oo
applied to (4.3.16), we obtain (see e.g., Brockwell and Davis [11], p. 466) for d # 0
jd1
aj ~ r(d) as 
Then
oo 01 j( 2d1
t2= _a2 T n
S ( j))2 (d) (1 2d) (r(d))2
j=n (r(d))a j=n
implying
n2 na2
log2 t~ log2 n, = O(t1+x), and = O(1).
n
Thus the conditions (4.2.22) and (4.3.5) (with r = 0) hold and so for every integer
t and all a E (oo, 1), the tail series SLLN
n2Id 00
(log2 n)_ aj Xtj + 0 a.s.
follows from Theorem 5. We have thus determined an order bound on the almost
sure rate in which E o aj Xtj converges to St for every t. Observe that this rate is
independent of the time t. Of course, E=o aj Xtj is structurally far simpler than
St.
CHAPTER 5
SOME FUTURE RESEARCH PROBLEMS
The current research work suggests a number of open problems or areas for
future research activity. These problems or areas are discussed in this chapter.
In Chapter 2, a function O(z) in a specific class of functions T defined by Klesov
[29 and 30] was employed for determining the norming constants for tail series
SLLNs for random variables. Actually, this class 9 is a tail series partial analogue
of the class ck defined by Petrov [38] as follows; a function f belongs to Ic if it
is a positive and nondecreasing function such that the series E = converges.
In the case of the SLLNs for partial sums, Egorov [21] defined a wider class of
functions F, as follows; a function f belongs to F, if (f(x))' E Tc for some e > 0. In
the same spirit as in Chapter 2, tail series SLLNs which might exist and correspond
to Egorov's [21] SLLNs for partial sums will possibly employ a function in a class
which is a tail series analogue of Fc. It would be particularly interesting to see
whether such tail series SLLNs subsumes the results of Chapter 2.
In the Theorem 4 of Chapter 4, we established the counterpart to the SLLN for
partial sums of Teicher [47]. But Theorem 4 is indeed an incomplete analogue of
the SLLN of Teicher [47] because we assumed
oo E(X I[An(og AP)a
2< oo, (5.0.1)
.=1 (A (log, AAP)I)
81
for all e > 0 rather than for merely some e > 0, as was the case in a partial sum
version of condition (5.0.3) which was used by Teicher [47] to prove a SLLN. The
reason for this is that our tail series exponential bound (part (ii) of Lemma 10 of
Chapter 4), which is employed to prove Theorem 4 of Chapter 4, was proved only
for all v E (0, u1] rather than for all v E (0, oo). Thus, by establishing an extension
of this exponential bound lemma without the restriction on v (as is the case in an
exponential bound for partial sums), the assumption (5.0.3) for all e > 0 might be
able to be weakened to (5.0.3) for some e > 0. Conceivably, under no additional
conditions or under mild conditions, the convergence of the series in (5.0.3) for some
e > 0 guarantees convergence for all e > 0 but this would require some investigation.
Next, it will be a very interesting problem to establish tail series analogues
of Adler and Rosalsky's [3, 4] general SLLNs for weighted sums of stochasticallyy
dominated or i.i.d) random variables. Adler and Rosalsky [3] established general
SLLNs of the form
Jj=1 aj (Yj 0 ')
0 a.s.
where {Y,, n > 1} are stochastically dominated by a random variable IYI and
{7, n > 1} are suitable conditional expectations or are all 0. In their follow
up paper, Adler and Rosalsky [4] provided sets of necessary and (or) sufficient
conditions for {a, Y., n > 1} to obey the general SLLN of the form
CF=I aj Y,
+ 0 a.s.
where {Y,, n > 1} is a sequence of i.i.d. mean 0 random variables and {a,, n > 1}
are nonzero constants.
Finally, tail series problems for almost surely convergent series of independent
random elements taking values in normed linear spaces is an area ripe for extensive
research activity. Beginning with the pioneering work of Mourier [36] (wherein an
analogue of the classical Kolmogorov SLLN was proved for sums of i.i.d. random
elements taking values in a real separable Banach space), an extensive literature
of investigation has appeared on the SLLN and WLLN problems for partial sums
of Banach space valued random elements. For some recent developments in this
general direction, see the articles Adler, Rosalsky, and Taylor [5, 6, 7] and some
of the references contained therein (specifically, see Beck [10], It6 and Nisio [26],
H0ffmannJorgensen and Pisier [25], Woyczyniski [51, 52], Kuelbs and Zinn [33], de
Acosta [2], and Wang and Bhaskara Rao [49]). The necessary background material
for reading the above papers on Banach space valued random elements may be found
in Taylor [44]. Tail series versions of some of the results in the literature cited above
would certainly be a worthwhile research accomplishment. Indeed, the very question
as to when a series of independent Banach space valued random elements converges
almost surely is one which requires more investigation. For some results in this
direction, see HoffmannJ0rgensen [24], Jain [27], and Woyczyiski [51, p. 386390
and p. 430431].
REFERENCES
1. J. H. Abbott and Y. S. Chow, Some necessary conditions for a.s. convergence of
sums of independent r.v.'s, Bull. Inst. Math. Academia Sinica 1 (1973), 17.
2. A. de Acosta, Inequalities for Bvalued random vectors with application to the
strong law of large numbers, Ann. Probability 2 (1981), 157161.
3. A. Adler and A. Rosalsky, Some general strong laws for weighted sums of stochas
tically dominated random variables, Stochastic Anal. Appl. 5 (1987), 116.
4. A. Adler and A. Rosalsky, Strong laws of large numbers for weighted sums of
i.i.d. random variables, Stochastic Anal. Appl. 5 (1987), 467483.
5. A. Adler, A. Rosalsky, and R. L. Taylor, On the strong law of large numbers for
weighted sums of random elements in normed linear spaces, Internat. J. Math.
& Math. Sci. 12 (1989), 507530.
6. A. Adler, A. Rosalsky, and R. L. Taylor, A weak law for normed weighted sums
of random elements in Rademacher type p Banach spaces, J. Multivariate Anal.
37 (1991), 259268.
7. A. Adler, A. Rosalsky, and R. L. Taylor, Some strong laws of large numbers for
sums of random elements, Bull. Inst. Math. Academia Sinica 20 (1992) (to
appear).
8. B. von Bahr and C.G. Esseen, Inequalities for the rth absolute moment of a
sum of random variables, 1 < r < 2, Ann. Math. Statist. 36 (1965), 299303.
9. A. D. Barbour, Tail sums of convergent series of independent random variables,
Proc. Cambridge Philos. Society 75 (1974), 361364.
10. A. Beck, On the strong law of large numbers, In: Ergodic Theory; Proceed
ings of an International Symposium Held at Tulane University, New Orleans,
Louisiana, October, 1961 (F. B. Wright, ed.), Academic Press, New York
(1961), 2153.
11. P. J. Brockwell and R. A. Davis, Time Series: Theory and Method, Springer
Verlag, New York, 1987.
12. G. Budianu, On the law of the iterated logarithm for tail sums of random vari
ables, Studii si Cercetari Mat. 33 (1981), 149158 (in Romanian).
13. Y. S. Chow and H. Teicher, Iterated logarithm laws for weighted averages, Z.
Wahrscheinlichkeitstheorie und Verw. Gebiete 26 (1973), 8794.
14. Y. S. Chow and H. Teicher, Probability Theory: Independence, Interchangeability,
Martingales, SpringerVerlag, New York, 1978.
15. Y. S. Chow, H. Teicher, C. Z. Wei, and K. F. Yu, Iterated logarithm laws
with random subsequences, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete
51 (1981), 235251.
16. K. L. Chung, The strong law of large numbers, Proceedings of the Second Berke
ley Symposium on Math. Statist. and Probability, July 31 Aug. 12, 1950 (J.
Neyman, ed.), University of California Press, Berkeley and Los Angeles (1951),
341352.
17. K. L. Chung, A Course in Probability Theory, 2nd ed. Academic Press, New
York, 1974.
18. J. L. Doob, Stochastic Processes, Wiley, New York, 1953.
19. V. A. Egorov, On the law of the iterated logarithm, Teor. Veroyatnost. i Prime
nen. 14 (1969), 722729 (English translation in Theor. Probability Appl. 14
(1969), 693699).
20. V. A. Egorov, On the strong law of large numbers and the law of the iterated
logarithm for sequences of independent random variables, Teor. Veroyatnost. i
Primenen. 15 (1970), 520527 (English translation in Theor. Probability Appl.
15 (1970), 509514).
21. V. A. Egorov, Some theorems on the strong law of large numbers and the law
of the iterated logarithm, Teor. Veroyatnost. i Primenen. 17 (1972), 8498
(English translation in Theor. Probability Appl. 17 (1972), 86100).
22. C. C. Heyde, On almost sure converegence for sums of independent random vari
ables, Sankhya, Ser. A 30 (1968), 353358.
23. C. C. Heyde, On central limit and iterated logarithm supplements to the martin
gale convergence theorem, J. Appl. Probability 14 (1977), 758775.
24. J. HoffmannJ0rgensen, Sums of independent Banach space valued random vari
ables, Studia Math. 52 (1974), 159186.
25. J. HoffmannJ0rgensen and G. Pisier, The law of large numbers and the central
limit theorem in Banach spaces, Ann. Probability 4 (1976), 587599.
26. K. It6 and M. Nisio, On the converegence of sums of independent Banach space
valued random variables, Osaka J. Math. 5 (1968), 3548.
27. N. C. Jain, Central limit theorem in a Banach space, In: Probability in Ba
nach spaces; Proceedings of the First International Conference on Probability
in Banach spaces, July 2026, 1975, Oberwolfach(A. Beck, ed.), Lecture Notes
in Math. 526 (A. Dold and B. Eckmann ed.), SpringerVerlag, Berlin (1976),
113130.
28. M. Kac and H. Steinhaus, Sur les functions independantes II (La loi exponentielle;
la divergence de series Studia Math. 6 (1936), 5966.
29. O. I. Klesov, Rate of convergence of series of random variables, Ukrain. Mat.
Zh. 35 (1983), 309314 (English translation in Ukrainian Math. J. 35 (1983),
264268).
30. O. I. Klesov, Rate of convergence of some random series, Teor. Veroyatnost.
Mat. Statist. 3 (1984), 8192 (English translation in Theor. Probability Math.
Statist. 30 (1985), 91101).
31. O. I. Klesov, The law of the iterated logarithm for weighted sums of independent
identically distributed random variables, Teor. Veroyatnost. i Primenen. 31
(1986), 389393 (English translation in Theor. Probability Appl. 31 (1986),
337342).
32. K. Knopp, Theory and Application of Infinite Series, 2nd English ed. Blackie
and Son Limited, London and Glasgow, 1951.
33. J. Kuelbs and J. Zinn, Some stability results for vector valued random variables,
Ann. Probability 1 (1979), 7584.
34. M. Love, Probability Theory I, 4th ed. SpringerVerlag, New York, 1977.
35. J. Marcinkiewicz and A. Zygmund, Sur les functions ind6pendantes, Fund. Math.
29 (1937), 6090.
36. E. Mourier, Elements aleatoires dans un space de Banach, Ann. Inst. Henri
Poincare, Section B 11 (1953), 159244.
37. V. V. Petrov, On the law of the iterated logarithm without assumptions about
the existence of moments. Proc. Nat. Acad. Sci. U.S.A. 59 (1968), 10681072.
38. V. V. Petrov, On the strong law of large numbers, Teor. Veroyatnost. i Prime
nen. 14 (1969), 193202 (English translation in Theor. Probability Appl. 14
(1969), 183192).
39. V. V. Petrov, On the order of growth and sums of dependent variables, Teor.
Veroyatnost. i Primenen. 18 (1973), 358361 (English translation in Theor.
Probability Appl. 18 (1974), 348350).
40. V. V. Petrov, Sums of Independent Random Variables, SpringerVerlag, Berlin,
1975.
41. A. Rosalsky, Almost certain limiting behavior of the tail series of independent
summands, Bull. Inst. Math. Academia Sinica 11 (1983), 185208.
42. A. I. Sakhanenko, Convergence of distributions of functionals of processes defined
on the whole axis, Sibirskii Mat. Zhurnal 15 (1974), 102119 (English transla
tion in Siberian Math. J. 15 (1974), 7385).
43. S. A. Solntsev, On the rate of convergence of series of independent random vari
ables, Teor. Veroyatnost. Mat. Statist. 3 (1986), 105110 (English translation
in Theor. Probability Math. Statist. 35 (1987), 121125).
44. R. L. Taylor, Stochastic Convergence of Weighted Sums of Random Elements in
Linear Spaces. Lecture Notes in Math. 672 (A. Dold and B. Eckmann, ed.),
SpringerVerlag, Berlin, 1978.
45. H. Teicher, On the law of the iterated logarithm, Ann. Probability 2 (1974),
714728.
46. H. Teicher, A necessary condition for the iterated logarithm law, Z. Wahrschein
lichkeitstheorie und Verw. Gebiete 31 (1975), 343349.
47. H. Teicher, Generalized exponential bounds, iterated logarithm and strong laws,
Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 48 (1979), 293307.
48. R. J. Tomkins, A generalized law of the iterated logarithm, Statist. & Probability
Letters 10 (1990), 915.
49. X. C. Wang and M. Bhaskara Rao, Some results on the convergence of weighted
sums of random elements in separable Banach spaces, Studia Math. 86 (1987),
131153.
88
50. R. Wittmann, Sufficient moment and truncated moment conditions for the law
of the iterated logarithm, Probability Theor. Rel. Fields 75 (1987), 509530.
51. W. A. Woyczyiski, Geometry and martingales in Banach spaces. Part II. In
dependent increments. In: Probability on Banach spaces (J. Kuelbs, ed.), Ad
vanced in Probability and Related Topics, Vol. 4 (P. Ney, ed.), Dekker, New
York (1978), 267517.
52. W. A. Woyczyniski, On MarcinkiewiczZygmund laws of large numbers in Banach
spaces and related rates of convergence, Probability and Math. Statist. 1 (1980),
117131.
BIOGRAPHICAL SKETCH
The author was born on June 24, 1956, in Kimcheon, Republic of Korea. In
1979, he graduated from the Air Force Academy, Seoul, Republic of Korea. He
was awarded a Bachelor of Science degree in mathematics in 1982 and a Master of
Statistics degree in 1985, both from Seoul National University, Seoul, Republic of
Korea. He then served as a fulltime instructor in the Department of Mathematics
of the Korean Air Force Academy until 1988. He has held the rank of Major in the
Korean Air Force since 1987. He has published a paper (joint with Jong Woo Jeon
and Suk Ki Han), "Some Distribution Free Tests for Exponential Distributions,"
Journal of the Korean Society for Quality Control 14 (1986), 3946.
Since 1988, Mr. Nam has been working towards the Ph.D. in statistics from the
University of Florida. He has been a member of the American Statistical Association
since 1989. He is married and has two children.
After graduation, Mr. Nam will rejoin to the Faculty Board of the Korean Air
Force Academy as an Associate Professor of Mathematics as well as a Lieutenant
Colonel of the Korean Air Force. His research interests lie in the field of limit
theorems for sums of random variables.
I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and
quality, as a dissertation for the degree of Doctor of Philosophy.
Andrew Rosalsky, Chaian
Professor of Statistics
I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and
quality, as a dissertation for the degree of Doctor of Philosophy.
Rocco Ballerini
Associate Professor of Statistics
I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and
quality, as a dissertation for the degree of Doctor of Philosophy.
Malay Ghosh
Professor of Statistics
I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and
quality, as a dissertation for the degree of Doctor of Philosophy.
Richard Scheaffer
Professor of Statistics
I certify that I have read this study and that in my opinion it conforms to
acceptable standards of scholarly presentation and is fully adequate, in scope and
quality, as a dissertation for the degree of Doctor of Philosophy.
Murali Rao
Professor of Mathematics
This dissertation was submitted to the Graduate Faculty of the Department of
Statistics in the College of Liberal Arts and Sciences and to the Graduate School
and was accepted as partial fulfillment of the requirements for the degree of Doctor
of Philosophy.
December 1992
Dean, Graduate School
UNIVERSITY OF FLORIDA
3 126 082 5 50 5
