Some limit theorems for weighted sums of random variables


Material Information

Title:
Some limit theorems for weighted sums of random variables
Physical Description:
v, 183 leaves : ; 28 cm.
Language:
English
Creator:
Adler, André Bruce, 1958-
Publication Date:

Subjects

Subjects / Keywords:
Limit theorems (Probability theory)   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1987.
Bibliography:
Includes bibliographical references (leaves 180-182).
Statement of Responsibility:
by André Bruce Adler.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 000947199
notis - AEQ9185
oclc - 16904823
System ID:
AA00002149:00001

Full Text



















SOME LIMIT THEOREMS FOR WEIGHTED SUMS
OF RANDOM VARIABLES

BY

ANDRE BRUCE ADLER

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
















ACKNOWLEDGMENTS

I would like to thank Andrew Rosalsky, my dissertation advisor, not only for his overwhelming assistance but also for his friendship. Also I would like to thank Dr. Malay Ghosh, Dr. Ronald Randles, and Dr. Murali Rao for serving on my committee. For serving on my qualifying examination and oral defense committees, I would like to thank Dr. Rocco Ballerini. In addition, I would like to thank Ms. Cindy Zimmerman for her expert typing of this dissertation. I would also like to thank my parents for their support which helped me in reaching my goals. Finally, I would like to thank Dawn Peters, who was always there when I needed her.

















TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTERS

ONE      INTRODUCTION

TWO      GENERALIZED CENTRAL LIMIT THEOREMS

         2.1 Introduction
         2.2 Preliminary Lemmas
         2.3 Mainstream
         2.4 A Properly Centered Central Limit Theorem
         2.5 Asymptotic Negligibility
         2.6 An Asymptotic Representation for {B_n, n ≥ 1}
         2.7 Examples

THREE    GENERALIZED STRONG LAWS OF LARGE NUMBERS

         3.1 Introduction
         3.2 Preliminary Lemmas
         3.3 Generalized Strong Laws of Large Numbers for Weighted Sums of Stochastically Dominated Random Variables
         3.4 Generalized Strong Laws of Large Numbers for Weighted Sums of Mean Zero Random Variables
         3.5 The Petersburg Game
         3.6 Examples

FOUR     A GENERALIZED WEAK LAW OF LARGE NUMBERS

         4.1 Introduction
         4.2 Mainstream
         4.3 Some Interesting Examples

REFERENCES

















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

SOME LIMIT THEOREMS FOR WEIGHTED SUMS
OF RANDOM VARIABLES

BY

ANDRE BRUCE ADLER


May 1987

Chairman: Andrew Rosalsky
Major Department: Statistics


Asymptotic behavior of normed weighted sums of the form Σ_{k=1}^n a_k(X_k − γ_k)/b_n is studied. Central limit theorems as well as strong and weak laws of large numbers are obtained.

Firstly, we establish a generalized central limit theorem Σ_{k=1}^n a_k X_k/b_n →_d N(0,1) assuming the {X_n, n ≥ 1} are independent, identically distributed with EX = 0, EX² = ∞. The truncated second moment is assumed to be slowly varying at infinity. Then, via a transformation, we obtain a similar result where the condition EX = 0 is removed. The norming sequence {b_n, n ≥ 1} is defined in terms of the common distribution, and conditions which elicit an explicit asymptotic representation for {b_n, n ≥ 1} are found.

In some of the strong law results the random variables {X_n, n ≥ 1} are stochastically dominated by a random variable X, while in others the random variables are independent, identically distributed. Two theorems are proved showing that the assumption of independence is, in general, not always needed to obtain a strong law. More specifically, their hypotheses involve both the behavior of the tail of the (marginal) distributions of the X_n and the growth behavior of the constants b_n. As special cases, both old and new results are obtained. Moreover, for independent, identically distributed random variables a strong law is established under the assumption of regular variation of the tail P{|X| > x} of the common distribution function.

The famous Petersburg paradox is also examined in more general terms. It is shown that P{lim_{n→∞} Σ_{k=1}^n a_k X_k/b_n = c} = 0 for any sequence {b_n, n ≥ 1} and finite nonzero constant c whenever the random variables are independent, identically distributed with E|X| = ∞, under mild restrictions on the weights {a_n, n ≥ 1}.

Finally, a generalized weak law of large numbers (WLLN) is obtained. Using this theorem we are able to show that the modified Petersburg game does indeed have a solution in the "classical" or "weak" sense, i.e., there exists a sequence {b_n, n ≥ 1} such that Σ_{k=1}^n a_k X_k/b_n → c in probability, where the requirements placed on {a_n, n ≥ 1} are as before.
















CHAPTER ONE
INTRODUCTION


This dissertation will explore three major modes of convergence studied in probability theory. We will investigate both the weak and strong limiting behavior of a normed sum of weighted random variables. Such sequences of normed sums are expressed in the form Σ_{k=1}^n a_k(X_k − γ_k)/b_n. While the sequences {a_n, n ≥ 1} and {b_n, n ≥ 1} will always be numerical, the sequence {X_n, n ≥ 1} will consist of random variables which may or may not be independent or identically distributed. The sequence {γ_n, n ≥ 1} will take on many different forms, ranging from conditional expectations to all zeros.


Chapter Two will examine when Σ_{k=1}^n a_k X_k/b_n converges in distribution to a standard normal random variable. This is commonly referred to as a central limit theorem (CLT). A sequence of random variables, say {Z_n, n ≥ 1}, converges in distribution to a standard normal random variable if lim_{n→∞} P{Z_n ≤ x} = Φ(x) for all x, where Φ is the standard normal distribution function.









The case when the random variables {X_n, n ≥ 1} are independent with finite second moment has been thoroughly investigated (see Theorem 2.1). The topic of interest in Chapter Two is whether this limiting behavior still prevails when the second moment is infinite. We will only consider the situation in which the random variables are independent, identically distributed (i.i.d.). The first major result will establish a CLT for mean zero random variables. In contrast to the case when the second moment is finite, where the norming constants are universal (up to a multiplicative constant), the norming constants {B_n, n ≥ 1} when the second moment is infinite are not universal but, rather, depend on the common distribution of the {X_n, n ≥ 1}. Then by studying the shifted sequence of random variables Y_n = X_n − EX, we will obtain a CLT for random variables with arbitrary first moment and infinite second moment.
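The finite-variance case can be illustrated by simulation. The following sketch is our own (the Uniform(0,1) choice of common distribution, the sample sizes, and the seed are arbitrary, not from the text): it standardizes sums of i.i.d. uniforms and checks that they behave like a standard normal sample.

```python
import random
import statistics

def standardized_sum(n, rng):
    # X_k i.i.d. Uniform(0,1): EX = 1/2, Var(X) = 1/12
    s = sum(rng.random() for _ in range(n))
    return (s - n * 0.5) / ((n / 12.0) ** 0.5)

rng = random.Random(1987)
z = [standardized_sum(400, rng) for _ in range(5000)]

# A standard normal sample has mean about 0 and standard deviation about 1
print(statistics.mean(z), statistics.pstdev(z))
```

With 5000 replications the sample mean of the standardized sums lies within a few hundredths of 0 and the sample standard deviation is close to 1, as Theorem 2.1 predicts.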


Strong laws of large numbers (SLLN) are investigated in Chapter Three. The normed sum is said to obey the strong law of large numbers if Σ_{k=1}^n a_k(X_k − γ_k)/b_n → 0 almost surely (a.s.), that is,

P{lim_{n→∞} Σ_{k=1}^n a_k(X_k − γ_k)/b_n = 0} = 1.

This chapter contains many generalizations of the classical Kolmogorov SLLN (Σ_{k=1}^n X_k/n → EX a.s. whenever {X_n, n ≥ 1} are i.i.d. with E|X| < ∞).
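The Kolmogorov SLLN is easy to watch numerically; a minimal sketch of ours (the Exponential(1) choice and the seed are arbitrary), with a_k ≡ 1, γ_k ≡ 0, and b_n = n:

```python
import random

# Sample mean of n i.i.d. Exponential(1) variables (EX = 1)
rng = random.Random(42)
n = 200_000
total = 0.0
for _ in range(n):
    total += rng.expovariate(1.0)  # one X_k
sample_mean = total / n
print(sample_mean)  # close to EX = 1 for large n
```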











See Stout (1974, Chapter 4) for an excellent survey of known results on the SLLN problem for weighted sums of random variables.

Mathematicians have, over the centuries, tried to understand the Petersburg paradox. Given a game with winnings X_k at the kth stage, it was asked if the game could be made "fair" in the sense that there is some entrance fee, b_k − b_{k−1}, such that Σ_{k=1}^n X_k/b_n → 1 a.s. This is relatively easy when the {X_n} are i.i.d. with E|X| < ∞, but is a source of confusion when E|X| = ∞. This problem, when the random variables are not integrable, is called the Petersburg paradox.


Placing mild restrictions on the sequence {a_n, n ≥ 1}, we will prove that

P{lim_{n→∞} Σ_{k=1}^n a_k X_k/b_n = c} = 0

for every sequence {b_n, n ≥ 1} and every finite nonzero constant c, provided that the random variables {X_n, n ≥ 1} are i.i.d. with E|X| = ∞. This result generalizes one of Chow and Robbins (1961), wherein a_k ≡ 1. For a detailed discussion of this paradox, see Feller (1968, pp. 251-253).
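In the classical formulation the winnings satisfy P{X = 2^k} = 2^{−k}, k ≥ 1, so EX = Σ_k 2^k · 2^{−k} = ∞, while the mean truncated at 2^m equals m exactly: each payoff level contributes 1. This logarithmic growth of the truncated mean is what lies behind entrance fees of order n log₂n. A small exact computation (our own illustration, not from the text):

```python
from fractions import Fraction

def truncated_mean(m):
    # E X I(X <= 2^m) for Petersburg winnings with P{X = 2^k} = 2^{-k}
    return sum(Fraction(2) ** k * Fraction(1, 2) ** k for k in range(1, m + 1))

# Each level contributes exactly 1, so the truncated mean is m
print([truncated_mean(m) for m in (1, 5, 20)])
```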


Feller, however, does produce a sequence {b_n, n ≥ 1} such that Σ_{k=1}^n X_k/b_n → 1 in probability for a specific sequence of i.i.d. random variables with E|X| = ∞. There is also the question of when a generalized law of the form

P{lim_{n→∞} Σ_{k=1}^n a_k X_k/b_n = 1} = 1

can hold. See Rosalsky for some results of this type.


In the final chapter we consider the weak law of large numbers (WLLN). The normed sum is said to obey the weak law of large numbers if

P{|Σ_{k=1}^n a_k(X_k − γ_k)/b_n| > ε} = o(1) for all ε > 0.

Clearly, if a SLLN holds, then a WLLN also holds for the same sequences. Hence Chapter Three also establishes many weak laws.


In view of a very general result known as the Degenerate Convergence Criterion (see, e.g., Chow and Teicher, 1978), we acknowledge the fact that most work on the WLLN for independent random variables has already been done.


Research in the central limit theorem and laws of large numbers dates back to the 18th century. The very first SLLN was proved by Émile Borel, while the WLLN dates back much earlier. A WLLN for i.i.d. random variables with a finite second moment was established by Jacob Bernoulli and published posthumously by his nephew Nicholas. By a simple application of Chebyshev's inequality it was shown that the normed sums converge in probability to the mean when the {X_n, n ≥ 1} have a finite second moment.










For a detailed history of the development of these and related concepts, see Feller (1945) and Le Cam (1986). Also, for an excellent survey of known results, see Petrov (1975).

Some remarks about notation are in order. Throughout, the symbol C will denote a generic constant which is not necessarily the same one in each appearance. Also, in order to avoid minor complications, it proves convenient to define log x = log_e max{e, x}, where log_e denotes the natural logarithm. Finally, for x > 0, log₂x will be used to denote log log x.

















CHAPTER TWO
GENERALIZED CENTRAL LIMIT THEOREMS

2.1 Introduction


The question as to whether a sequence of random variables obeys a CLT has a long and rich history. Firstly, in the 1700s, there were DeMoivre and Laplace discovering a primitive version of the CLT for a sequence of Bernoulli random variables. In the 1930s, some two hundred years later, Lindeberg, Lévy, and Feller established what we now generally refer to as the CLT. This famous result, which we will state as Theorem 2.1, is known as the Lindeberg-Feller theorem.


Theorem 2.1 (Chow and Teicher, 1978, p. 291). If {X_n, n ≥ 1} are independent random variables with EX_n = 0, Var(X_n) = σ_n² < ∞, and s_n² = Σ_{k=1}^n σ_k², satisfying the Lindeberg condition

Σ_{k=1}^n EX_k²I(|X_k| > εs_n) = o(s_n²) for all ε > 0,

then Σ_{k=1}^n X_k/s_n →_d N(0,1).

Proof. See Chow and Teicher (1978, pp. 290-293). □


Clearly, this theorem is not applicable when EX² = ∞. However, Lévy (1935), Khintchine (1935), and Feller (1935) have studied the CLT problem when the random variables {X_n, n ≥ 1} are i.i.d. with EX² = ∞. Their version of the CLT is stated as follows.


Theorem 2.2 (Chow and Teicher, 1978, p. 300). If {X_n, n ≥ 1} are i.i.d. random variables with EX² = ∞, then Σ_{k=1}^n X_k/B_n − A_n →_d N(0,1) for some constants B_n > 0 and A_n iff

lim_{c→∞} c²P{|X| > c}/EX²I(|X| ≤ c) = 0.

Moreover, B_n may be chosen as the supremum of {x > 0: nEX²I(|X| ≤ x) ≥ x²} for all n ≥ 1, while A_n may be taken as A_n = nEXI(|X| ≤ B_n)/B_n.

Proof. See Chow and Teicher (1978, pp. 300-302). □


Theorem 2.2 was first proved using the function H defined in (4) below, while our extension will use the function G defined in (5) instead.









We also recall that if {X_n, n ≥ 1} are i.i.d. random variables with EX = 0 and Var(X) = 1, and {a_n, n ≥ 1} are nonzero constants with s_n² = Σ_{k=1}^n a_k² → ∞ and a_n = o(s_n), then Σ_{k=1}^n a_k X_k/s_n →_d N(0,1) by an easy application of Theorem 2.1 (see, e.g., Chow and Teicher, 1978, p. 302).


2.2 Preliminary Lemmas


To study the CLT problem when EX² = ∞, one must examine a special class of functions. A function g is said to be slowly varying (at infinity) if for all s > 0,

g(sx) ~ g(x) as x → ∞.

Closely related to slowly varying functions are those that are regularly varying. A function h is said to be regularly varying with exponent ρ if h(x) = x^ρ g(x) for some slowly varying function g. A useful property of every positive slowly varying function g is that (see Rosalsky, 1981)

log g(x) = o(log x) (and hence g(x) = o(x^α)) as x → ∞ for all α > 0.
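These definitions are easy to probe numerically. A small sketch of ours: g(x) = log x is slowly varying, so g(sx)/g(x) → 1, while h(x) = x²log x is regularly varying with exponent 2, so h(sx)/h(x) → s² = 4 for s = 2.

```python
import math

def ratio(g, s, x):
    # g(s*x)/g(x); tends to 1 exactly when g is slowly varying
    return g(s * x) / g(x)

def g_slow(x):
    return math.log(x)           # slowly varying

def h_reg(x):
    return x ** 2 * math.log(x)  # regularly varying with exponent 2

for x in (1e3, 1e6, 1e12):
    print(ratio(g_slow, 2.0, x), ratio(h_reg, 2.0, x))
```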


The question at hand is what conditions should we place on the sequence of constants {a_n, n ≥ 1} and the i.i.d. random variables {X_n, n ≥ 1} to ensure that Σ_{k=1}^n a_k X_k/B_n − A_n →_d N(0,1) for suitable norming and centering sequences.










The relevant issue turns out to be the rate at which the variance of the truncated variables approaches infinity. This is where the notion of slow variation will be applied. We define H, the truncated second moment of X, by

H(x) = EX²I(|X| ≤ x), x > 0. (4)

Another function of interest is G defined by

G(x) = ∫_0^x 2tP{|X| > t}dt, x > 0. (5)

The functions G and H were used by Rosalsky (1981) to prove a generalized law of the iterated logarithm for weighted sums with infinite variance. The first lemma will be used to establish a relationship between H and G.
H and G.


Lemma 2.1. For every random variable X and positive constant p,

∫_0^s pt^{p−1}P{|X| > t}dt = s^p P{|X| > s} + E|X|^p I(|X| ≤ s) for all s > 0.

Proof. Let F_{|X|} denote the distribution function of the random variable |X|, so that F_{|X|}(t) = P{|X| ≤ t}. For s > 0, integration by parts yields

∫_0^s pt^{p−1}(1 − F_{|X|}(t))dt = [t^p(1 − F_{|X|}(t))]_0^s + ∫_0^s t^p dF_{|X|}(t) = s^p P{|X| > s} + E|X|^p I(|X| ≤ s). □
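The identity can be verified numerically. The following check is our own (not from the text): it takes |X| ~ Exponential(1), p = 2, s = 3, for which both sides can be evaluated — the left side by quadrature, the right side using EX²I(X ≤ 3) = 2 − 17e^{−3}.

```python
import math

def tail(t):
    # P{|X| > t} for |X| ~ Exponential(1)
    return math.exp(-t)

p, s = 2, 3.0

# Left side: integral_0^s p t^(p-1) P{|X| > t} dt, by the midpoint rule
N = 200_000
h = s / N
lhs = h * sum(p * ((i + 0.5) * h) ** (p - 1) * tail((i + 0.5) * h)
              for i in range(N))

# Right side: s^p P{|X| > s} + E|X|^p I(|X| <= s)
rhs = s ** p * tail(s) + (2 - 17 * math.exp(-3))

print(lhs, rhs)  # both equal 2 - 8 e^{-3}
```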


Using this lemma, with p = 2 and s = x, we obtain

G(x) = x²P{|X| > x} + H(x), x > 0.

It should be noted that G is nondecreasing, continuous, and G(x) → ∞ as x → ∞ (see Rosalsky, 1981). Other relationships between G and H which will be used throughout this chapter are given in the next lemma.


Lemma 2.2 (Rosalsky, 1981). The following are equivalent:

(i) G is slowly varying,
(ii) H is slowly varying,
(iii) G(x) ~ H(x) as x → ∞,
(iv) x²P{|X| > x}/G(x) → 0 as x → ∞,
(v) x²P{|X| > x}/H(x) → 0 as x → ∞.
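For a concrete infinite-variance illustration (ours, not from the text), take the Pareto-type tail P{|X| > t} = min(1, t^{−2}), for which EX² = ∞. Then H(x) = 2 log x for x ≥ 1 and, by Lemma 2.1 with p = 2, G(x) = x²P{|X| > x} + H(x) = 1 + 2 log x, so G(x) ~ H(x) and x²P{|X| > x}/G(x) → 0, as Lemma 2.2 asserts.

```python
import math

def tail(t):
    # P{|X| > t} = min(1, t^-2): a Pareto-type tail with infinite EX^2
    return min(1.0, t ** -2)

def H(x):
    # truncated second moment: integral_1^x t^2 * 2 t^-3 dt = 2 log x, x >= 1
    return 2 * math.log(x)

def G(x):
    # G(x) = x^2 P{|X| > x} + H(x) by Lemma 2.1 with p = 2
    return x ** 2 * tail(x) + H(x)

for x in (1e2, 1e6, 1e12):
    print(G(x) / H(x), x ** 2 * tail(x) / G(x))  # -> 1 and -> 0
```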










At this point we note an interesting fact. If the random variable X is not in L₂, then by the following lemma it is in L_p for all 0 < p < 2, provided H (or, equivalently, G) is slowly varying.


Lemma 2.3. If H is slowly varying, then E|X|^p < ∞ for all 0 < p < 2.
Proof. Let 0 < p < 2 and let 0 < α < 2 − p. Via Lemma 2.2 and the hypothesis that H is slowly varying (whence H(t) = o(t^α)), there exists a number t₀ > 0 such that if t ≥ t₀, then H(t) ≤ t^α and t²P{|X| > t}/H(t) ≤ 1. Thus P{|X| > t} ≤ t^{α−2} whenever t ≥ t₀. Hence, by Lemma 2.1 with s → ∞,

E|X|^p = ∫_0^∞ pt^{p−1}P{|X| > t}dt ≤ ∫_0^{t₀} pt^{p−1}dt + p∫_{t₀}^∞ t^{p+α−3}dt = t₀^p + (p/(p+α−2))[t^{p+α−2}]_{t₀}^∞ < ∞,

since p + α − 2 < 0. □


One thing to note is that slow variation of H ensures, in particular, that E|X| < ∞. This allows us to establish the following lemma.










Lemma 2.4. For any random variable X and constant c > 0,

E|X|I(|X| > c) = cP{|X| > c} + ∫_c^∞ P{|X| > t}dt.

Proof. We apply Lemma 2.1 twice, first with s = c and then with s → ∞; in both cases we let p = 1. Therefore,

∫_c^∞ P{|X| > t}dt = E|X| − ∫_0^c P{|X| > t}dt = E|X| − cP{|X| > c} − E|X|I(|X| ≤ c) = E|X|I(|X| > c) − cP{|X| > c}. □


Before we proceed with the main result of this chapter, we conclude this section with one last lemma. Note that the hypothesis of Lemma 2.5 ensures that E|X| < ∞.

Lemma 2.5. If P{|X| > x} is regularly varying with exponent −(p+1), where p > 0, then

E|X|I(|X| > c) ~ ((p+1)/p)cP{|X| > c} as c → ∞.

Proof. We apply the following result in Feller (1971, p. 281): if P{|X| > t} is regularly varying with exponent −(p+1), p > 0, then

∫_c^∞ P{|X| > t}dt ~ cP{|X| > c}/p as c → ∞.

Using this and Lemma 2.4 we obtain

E|X|I(|X| > c) = cP{|X| > c} + (1 + o(1))cP{|X| > c}/p ~ ((p+1)/p)cP{|X| > c}. □


2.3 Mainstream

With these preliminaries accounted for, we are ready to prove our first major result, but first we need to state a version of Theorem 2.1 which establishes a CLT for triangular arrays.

Theorem 2.3. Suppose that for each n the random variables {X_{nk}, 1 ≤ k ≤ k_n} are independent and satisfy EX_{nk} = 0, σ_{nk}² = EX_{nk}², s_n² = Σ_{k=1}^{k_n} σ_{nk}². If the Lindeberg condition

Σ_{k=1}^{k_n} EX_{nk}²I(|X_{nk}| > εs_n) = o(s_n²)

holds for all ε > 0, then Σ_{k=1}^{k_n} X_{nk}/s_n →_d N(0,1).

Proof. See, e.g., Billingsley (pp. 310-31). □











Let H and G be the functions defined in (4) and (5), respectively, and let Q(x) = x²/G(x), x > 0. Since G is slowly varying, it is clear that Q(x) → ∞ as x → ∞. It can also be shown that Q is continuous and increasing (see Rosalsky, 1981). The sequence {q_n, n ≥ 1} is defined by Q(q_n) = n, and since Q(x) → ∞ we note that q_n → ∞. The next sequence of interest is {s_n², n ≥ 1}, which is the sequence of partial sums of the squares of the weights, s_n² = Σ_{k=1}^n a_k². Finally, we define the positive constants

B_n = s_n q_n/√n, n ≥ 1, (7)

and note that B_n² = s_n²q_n²/n = s_n²G(q_n).


Our first main result follows.

Theorem 2.4. Let {X_n, n ≥ 1} be i.i.d. random variables with EX = 0, EX² = ∞, and

x²P{|X| > x}/H(x) = o(1) as x → ∞. (8)

Let {a_n, n ≥ 1} be constants such that

G(B_n/max_{1≤k≤n}|a_k|) ~ G(q_n) ~ G(B_n/min_{1≤k≤n}|a_k|) (9)

and

n max_{1≤k≤n} P{|X| > εB_n/|a_k|} = o(1) for all ε > 0. (10)

Then

Σ_{k=1}^n a_k X_k/B_n − A_n →_d N(0,1), (11)

where {B_n, n ≥ 1} is defined as in (7) and A_n = Σ_{k=1}^n a_k EXI(|X| ≤ B_n/|a_k|)/B_n.
Proof.


Via Lemma


2.2,


we see that


implies


that


is slowly


varying


and that


- H(x


+ ( (since


Note


that


since


and hence


G(qn


G(qn)


= Q(qn)



B2


n
2


2
= q


/G(q )
n


(7)).


Utilizing


G(x)


m and


+ = we obtain


via (12)


that


s2/B


Noting


that


Snqn
n n


snq
n n


max


1



1

max a,i
1

- s


= )


= o(










and G


is nondecreasing,


it follows


that


Snqn
n n


n min


Snq
n n


> G(q


in max






Using


the fact


that


is slowly


varying


together


with


and (15)


obtain


Snqn
n n


- G(q


/n min




sq
n n


/n max




Next


we will


show


that


a2H(B
k


Since


- 1.


is nondecreasing,


a2H(B
k 1


>1
SB2
B
n


aH( B
ak


max


snqn
nfn


/n max




sq
nfn


Lemma


2.2)


n
- "(
B2
n


, n >


- G(













n
B


Similarly,


a H(B
k


a H(B
k


1

= -nH
2
B
n


Snq
nrn




Sn q
nfl


1

n
n~


= 1+o(


Combining


these


two results


yields


(17).


Define


(n)
X
k


ak ,
k k'


= Var(S'
Ii


,n >


Now via Lemmas











n
1 2
-2 ak (EXI( X


2 k
B k=1
n

1 2 2
B k=1
n


SB/a, ))2
- n k


= -(E X)2
B
n


= o(1)


(by (14)).


Using this result and (17) we conclude that Y


- B2


as follows:


2 2 1
Y /B = -- Var(
n n B2


akXk I( akXk
k k k k


- n


1 2
= a Var(XI( X
B k=1


SB n/lak))


1 2 2 ,
--
= 12 I akEX I(XI
B k=1
n

1 u2
= 1- a H(B /l a )
B k=1
n


Bn / ak I


1 2
- -2 a (EXI( X|
B k=1


n
1 2
B- -I aa (EXI( XI
B k=1
n


< B / ak
a


SBn/lak))2



))2 1.


Next, noting that X E I and


2a (nEX2
a (EX )


2a (EXI(
a (EXI lXi


< B /lak))2


max
1

max
1

2
E a, (EXI(IXI
k=1


< Bn/ ak))2


S2(EX )2
n


xS










and hence


(n
EX
k


max


1

can now establish


that


the normed


sum of the truncated


(n)
a (Xk
k k


-Ex
k


variables,


, converges


in distribution


standard


normal


random


variable.


Let E > 0.


Clearly


and (


ensure


that


for all large


akE(n
a EX,
k k


E/2.



Hence


for all large


n and


(n)
EXk
k


< 2 or
- 2 ak


equivalently


) -EY


, EX n)


and EX(n
Ek


Thus


for all large


and all


-EXn
'k


S(Cn)
--( xk


-EXn )


-EYn / a
n k k


k(nX
U{ X k


-EXn )


EXn)
EXk
"k


-Y
n


U ixn)


EX~n)
k


cY
n
21a k


.. (n


max


S(n)
-k


= o(


2 k


/ ak
n k


/ a,
n k


l(n)
{Xk


Uxn)










Thus,


for all


, and all large


E(a Xn


-Eak


akn)
k k


-Eak X,
k k


E,)
n


< 2
-2
B
n


2E


(n
-EXk
k


2 (n)
)2I( Xk


(n
-EX
k


(since


-2
n


a2 (n
k k


(EXn
-EXk
k


C(n
Ik


) Cf
) I
) .


the previous


observation)


< 4
-B2
n


2 (n)
a E((Xr
k k


+ (EXn)
k


l (n)
IXk


(using


the elementary


inequality


+b2)


(a-b)


_4
- -
B2
n


2
a E(X
k kc


_4
+ --
82
n


(EXn
(EXk


k(n)
Xk


This


which


second


is o(


sum is o(1


since


, recalling


it is dominated


S


4 2 )2
_2 Sn(EIXl


The first


sum is equal


EY
n
I ,


1 and (














-B2
B


(for


all large


n since


Therefore


by Theorem


-ES'
n


Let 6


+ N(0


Then


akXk
k k


(n)
akXk


<_P{k=1
k=1


'jak = o(1)


= o(











Hence


-ES'


n n
B
n


Y
n
+ (
B
n


-ES'


- N(O


(19),


and Slutsky's


theorem).


Finally,


noting


that


akn)
k k


the conclusion


follows.


With

conditions


this


result


in hand


is A


it is natural


It would


seem


to ask,


natural


under

assume


what

e that


, but


that


in addition


to the other


hypotheses


of Theorem


does


seem


ensure


that


ak
k k


+ N(0












Lemma 2.6. If P{|X| > x} is regularly varying with exponent −2, then G is slowly varying.


Proof.


Without


loss


of generality


assume


that


Since


is regularly


varying with


exponent


+6 for


all x>


some


Thus


for all x >


G(sx)


2tP{


t}dt


t dt


2tP{ IX


t}dt


= G(sx


G(sx


- G(sx


implying


yjdy


+6)(G(x)-G(x


that


G(sx)


= <.


n -











Taking


the limit


superior


of both


sides


as x


-+ w yields


G(sx)
m sup
XE(x)
X40


Likewise,


for all x>


G(sx)


G(sx


= G(sx


ys}dy


-6)2


y}dy


= G(sx


implying


-G(x


that


G(sx)
G(x)


G(x)


-6)[1


Taking


limit


inferior s


as x


-* in this


last


inequality


yields


.r G(sx)
lim inf-
G(x)


Since


is arbitrary


, the desired result


follows


from


this


and (23)


The converse to Lemma 2.6 is false; for a counterexample see Feller (1971, p. 288).











Corollary 2.1. If {X_n, n ≥ 1} are i.i.d. random variables with EX = 0, EX² = ∞, P{|X| > x} is regularly varying with exponent −2, and the conditions of Theorem 2.4 on the weights are satisfied, then Σ_{k=1}^n a_k X_k/B_n →_d N(0,1).


Proof. By Lemmas 2.6 and 2.2, condition (8) holds. In view of Theorem 2.4, it suffices to show that A_n = o(1).


Since


is regularly


varying


with


exponent


we obtain


via Lemma


with


= -2 that


for all x >


some


Recalling


we note


that


whence


max


there


exists


an integer


so that


max


whenever




So if


we obtain


n/ ak


for all k


= 1,


Thus


implies


whenever


Therefore


when


n is sufficiently


large,


1 .
- -I n
B R
n k=


a EXI(
kc


a. EXI( IXI


^L. 1 I I


(since


= 0)


= -1


= m


+ m,,


S *I,n .













S 1B
-B


= o(


The following two corollaries will be shown to be immediate consequences of Theorem 2.4.


Corollary 2.2. Let {X_n, n ≥ 1} be i.i.d. random variables with EX = 0, EX² = ∞ and satisfying (8). Let {a_n, n ≥ 1} be constants so that

n max_{1≤k≤n} a_k² = O(s_n²) and n min_{1≤k≤n} a_k²/s_n² is bounded away from 0. (26)

Then (11) holds.


Proof. In view of Theorem 2.4, we only need to verify that (9) and (10) prevail.


max




and observe


aki'


that


n
a >-
n -s


n max a
1

-9 0


_











n max


- -


- -


2
2


and clearly


implying


that


/n max




For arbitrary


, note


that


nH(-q )
2


nH(q )
2


(since


implies


that


slowly varying)


(since
via (


H(x)


- G(x


3)).


Thus


nH(-qn)


Then,


for all arbitrary


and n sufficiently


large


max


EB /


) 2













= nP{ xl


E
> -
C n


E
n(c-q
C n


1
-q


E
H(-q)
Cn
E
H( -q )
Cn


(-q )
C n


Cn


E
H(-qn


(;q )2P
C n


(29))


H(.n)


thereby


proving


10).


prove


in view


of (27


we need


only


show


that


G(q )
n


n )
max l ak
1

(30)


Note


that


entails


n


= o(


- G(


___











Utilizing


the fact


that


G is nondecreasing


we obtain


1
CnG (


< G(qn).
- n


1

Thus


(30)


holds


since


G is slowly


varying.


Corollary 2.3. Let {X_n, n ≥ 1} be i.i.d. random variables with EX = 0, EX² = ∞ and satisfying (8). If {a_n, n ≥ 1} are constants so that

0 < lim inf_{n→∞} n min_{1≤k≤n} a_k²/s_n² ≤ lim sup_{n→∞} n max_{1≤k≤n} a_k²/s_n² < ∞, (31)

then Σ_{k=1}^n a_k X_k/B_n − A_n →_d N(0,1), where {B_n, n ≥ 1} is defined as in Theorem 2.4.


Proof. Again we will verify that (9) and (10) are satisfied.


Clearly,


B
n
la
k


max




1












max


1

mnn


< G(inf
- i -


Recalling


that


is equivalent


to G


slowly


varying


we obtain


G(--
sup


employing


31).


Hence


obtains.


Define


max


1

and 8B
n


and note


min


that


n
a >-
n -s


4 o.


In light


of (31


there


exists


constants


* such


that


min


max




< C2,


for all n >


Recall


that


= nB2/s
n


and thus


n
n -C1


Again


using


the fact


B


that


n
G(-)
C2
2


is nondecreasing we


B


) < G(qn)
n -n


n
< G(-).
- C
1


obtain


This


together


with


slowly


varying,


ensures


that


for all E >


G(--
sup
k>1


- tnf


, n >










From


(32)


it follows


that


C
- 2


Thus,


for all e >


)nG(q


C2
2
c?
1


nG(q )
n


whence


= 0(


Therefore


for arbitrary


EB /
n


"n


ECs


2


nH(ea )
2











Theorem 2.4 does have a partial converse. We must, however, impose a relatively strong condition on the weights {a_n, n ≥ 1}.

Remark. Let {X_n, n ≥ 1} be i.i.d. random variables with EX² = ∞. Let {a_n, n ≥ 1} be a sequence of constants with max_{1≤k≤n} a_k² = O(min_{1≤k≤n} a_k²). Then (11) implies (8) and (10).
Proof.


there


exists


a constant


, such


that


which


implies


that


n min


- -m


n min




for all n >


Note


that,


definition


of H


. H(x


as x


Then,


recalling


G(x)


whence


Q(x)


= x2/G(x)


as x


*+ O


This,


in turn,


shows


that


4 00


Therefore


2
= s G(q
n n


- and


B
n
min [a j
1

B
n
max a
1

n
- s
n


2
a)


4


- 0


, n >


= o(


= o(











then,


employing


Corollary


2.2.


of Chow


and Teicher


978)


we can


conclude


that


holds


a,[H(EB,


- (EXI( X


So to establish


note


B / lak
n k


that


for all e >


for arbitrary


max P


6B /
n


6B /


max
1

6B /


= o(


whence


obtains.


Then


using


we conclude


that


P{Ixl


a2[H(B
[11(


/ lak


- (EXI(


n /ak


))2


=- p


40),











min


akH( B


2p
nB P{ X
n


s2H(B
n


min


min


min a


n min




1

1

n .
mln la
1



- -


(37)).


min I


Thus


mmn


min i


(k

= 0(1).


ak )


Then


for all x such


that











we observe


that


PiJX


mmn
S

1<(k ml n


since


H(x)t)


n
min lak
1

mln


) =-o0




recalling


thereby


establishing


The next corollary is an immediate consequence of Theorem 2.4 and this last remark. It is the famous result of Lévy (1935), Khintchine (1935), and Feller (1935) cited previously as Theorem 2.2.

Corollary 2.4 (Chow and Teicher, 1978, p. 300). If {X_n, n ≥ 1} are i.i.d. random variables with EX² = ∞, then Σ_{k=1}^n X_k/B_n − A_n →_d N(0,1) for some B_n and A_n iff (8) holds; moreover, B_n may be chosen so that nH(B_n) ~ B_n², while A_n may be taken as nEXI(|X| ≤ B_n)/B_n.









Proof. Clearly, (31) holds since a_k ≡ 1. Also

Σ_{k=1}^n a_k EXI(|X| ≤ B_n/|a_k|) = nEXI(|X| ≤ B_n), n ≥ 1,

whence the sufficiency portion of the corollary follows from Corollary 2.3. Necessity follows from the last remark since max_{1≤k≤n} a_k² = min_{1≤k≤n} a_k² = 1. □

2.4 A Properly Centered Central Limit Theorem


We have seen that (8), via Lemma 2.2, implies that H is slowly varying. When we assume EX = 0, EX² = ∞, and P{|X| > x} regularly varying with exponent −2, we showed in Corollary 2.1 that a CLT holds. It is natural to ask what happens when the mean of X is finite but not zero. Can we just shift the variables by EX and achieve asymptotic normality? In other words, when does the CLT

Σ_{k=1}^n a_k(X_k − EX)/B_n →_d N(0,1)

hold?










are i.i.d.


with


arbitrary


finite mean,


while


sequence


are defined


= X-EX


n


-EX,


Again


we suppose


that


= w and thus


functions


and G


Likewise


will


, Q(x)


be defined

= x2/G(x)


as in (


and (5).


,n >


and B


= Snqn
n n


,n >


Similarly


we need


to define


analogous


quantities


terms


of the random


variable


= EY2I(


2tP{


t}dt


Also


let Q (y)
1


/G (y
1


= Q1


,n >


Snq
nfl


//n,


(44)


Before


we establish


the relationship


between


these


pairs


functions


sequences


we need


a few preliminary


lemmas.


Lemma 2.7 (Rosalsky, 1981). If ψ is the inverse of the continuous increasing function Q(t) = t²/G(t), where G is slowly varying, then t^{−1/2}ψ(t) is slowly varying.

Proof. See Rosalsky (1981). □

Lemma 2.8. Suppose that H is slowly varying and EX² = ∞. Then H₁(t) ~ H(t) as t → ∞ and H₁ is slowly varying.


Proof. Let μ = EX and recall that Y = X − μ. Then, recalling (42), for all t > 0,

H₁(t) = EY²I(|Y| ≤ t) = E(X−μ)²I(|X−μ| ≤ t) = EX²I(|X−μ| ≤ t) − 2μEXI(|X−μ| ≤ t) + μ²P{|X−μ| ≤ t}.

Now |2μEXI(|X−μ| ≤ t)| ≤ 2|μ|E|X| < ∞ for all t (since E|X| < ∞ via Lemma 2.3), and μ²P{|X−μ| ≤ t} ≤ μ². For t ≥ 2|μ|,

EX²I(|X−μ| ≤ t) ≤ EX²I(|X| ≤ 2t) = H(2t)

and

EX²I(|X−μ| ≤ t) ≥ EX²I(|X| ≤ t/2) = H(t/2).

These two results entail, for sufficiently large t,

H(t/2)/H(t) ≤ EX²I(|X−μ| ≤ t)/H(t) ≤ H(2t)/H(t),

and since H is slowly varying we obtain EX²I(|X−μ| ≤ t) ~ H(t). Therefore, since H(t) → ∞ as t → ∞ (from EX² = ∞) while the remaining terms are bounded, we obtain H₁(t) ~ H(t).

Finally, for s > 0,

H₁(st) ~ H(st) (since H₁(t) ~ H(t)) ~ H(t) (since H is slowly varying) ~ H₁(t),

and so H₁ is slowly varying. □


With this relationship between H₁ and H established, we note that a similar relationship clearly must then exist between G₁ and G when H is slowly varying, in view of Lemma 2.2. The natural question is whether B_n* ~ B_n. Prior to establishing this we need to state some facts about functions of slow variation. The following result (see, e.g., Feller, 1971, p. 282) characterizes the class of slowly varying functions and is known as the Karamata representation theorem.

Theorem. A function L(t) varies slowly iff it is of the form

L(t) = a(t)exp{∫_1^t (ε(s)/s)ds}, (46)

where ε(t) → 0 and a(t) → c, 0 < c < ∞, as t → ∞.

This next lemma will demonstrate the value of the Karamata representation theorem and is a quite useful result.

Lemma 2.9. If L is slowly varying and u(t) ~ v(t) → ∞ as t → ∞, then L(u(t)) ~ L(v(t)).


Proof. Since L(t) is slowly varying, by (46),

L(t) = a(t)exp{∫_1^t (ε(y)/y)dy},

where ε(t) → 0 and a(t) → c as t → ∞. Thus,

L(u(t))/L(v(t)) = (a(u(t))/a(v(t)))exp{∫_{v(t)}^{u(t)} (ε(y)/y)dy}
≤ (1+o(1))exp{∫_{min{u(t),v(t)}}^{max{u(t),v(t)}} (|ε(y)|/y)dy}, (47)

since a(x) → c as x → ∞ and u(t), v(t) → ∞ as t → ∞. Using u(t) ~ v(t) → ∞ and ε(y) → 0, we observe for all t sufficiently large that |ε(y)| ≤ 1 for all y ≥ min{u(t),v(t)}. Hence, via (47), for all large t,

L(u(t))/L(v(t)) ≤ (1+o(1))exp{∫_{min{u(t),v(t)}}^{max{u(t),v(t)}} dy/y} = (1+o(1))max{u(t)/v(t), v(t)/u(t)} = 1+o(1). (48)

Reversing the roles of u and v we obtain for all large t,

L(v(t))/L(u(t)) ≤ 1+o(1). (49)

Combining (48) and (49) we conclude that L(u(t)) ~ L(v(t)). □
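Lemma 2.9 is easy to see in action numerically. A small sketch of ours (not from the text), taking L = log, which is slowly varying, and u(t) = t, v(t) = t + √t, so that u(t) ~ v(t) → ∞:

```python
import math

def L(t):
    return math.log(t)       # a slowly varying function

def u(t):
    return t

def v(t):
    return t + math.sqrt(t)  # v(t) ~ u(t) as t -> infinity

for t in (1e4, 1e8, 1e16):
    print(L(u(t)) / L(v(t)))  # tends to 1
```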


Lemma 2.10. If EX² = ∞ and H is slowly varying, then B_n* ~ B_n.

Proof. Recall (42) and (43). By Lemmas 2.2 and 2.8,

G(t) ~ H(t) ~ H₁(t) ~ G₁(t), whence Q(t) = t²/G(t) ~ t²/G₁(t) = Q₁(t).

Let q(t) = Q^{−1}(t) and q₁(t) = Q₁^{−1}(t), t > 0. Then Q(q(t)) = Q₁(q₁(t)) = t, Q(q₁(t)) ~ Q₁(q₁(t)) (since Q(t) ~ Q₁(t)), and Q(q(t)) ~ Q₁(q(t)). (50)

By Lemma 2.7, L(t) = t^{−1/2}q(t) is slowly varying, since Q(t) = t²/G(t) is increasing (see Rosalsky, 1981) and G is slowly varying; likewise t^{−1/2}q₁(t) is slowly varying since G₁ is slowly varying. Next, note that if u(t) ~ v(t) → ∞ then, via Lemma 2.9, L(u(t)) ~ L(v(t)). Applying this to (50) we see that

L(Q₁(q₁(t))) ~ L(Q₁(q(t))),

and the analogous argument applied to q₁ yields q₁(t) ~ q(t) as t → ∞. Therefore, recalling (43) and (44), q_n* = q₁(n) ~ q(n) = q_n, whence

B_n* = s_n q_n*/√n ~ s_n q_n/√n = B_n. □


Lemma 2.11. If EX^2 = \infty and H is slowly varying, then

    G(B_n/\min_{1\le k\le n}|a_k|) ~ G(B_n/\max_{1\le k\le n}|a_k|)

iff

    G_1(B_n^*/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|).

Proof. From Lemma 2.10 we have B_n ~ B_n^*, and H slowly varying implies, via Lemma 2.2, that G is slowly varying. Hence

    G(B_n/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\min_{1\le k\le n}|a_k|)   (51)

(Lemmas 2.9 and 2.2). Likewise,

    G(B_n/\max_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|).   (52)

Thus, if G(B_n/\min_{1\le k\le n}|a_k|) ~ G(B_n/\max_{1\le k\le n}|a_k|), then

    G_1(B_n^*/\min_{1\le k\le n}|a_k|) ~ G(B_n/\min_{1\le k\le n}|a_k|) ~ G(B_n/\max_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|)

(by (51) and (52)). Conversely, if G_1(B_n^*/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|), then

    G(B_n/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|) ~ G(B_n/\max_{1\le k\le n}|a_k|). □

Lemma 2.12. If EX^2 = \infty, H is slowly varying, and

    nP\{|X| > \varepsilon B_n/\max_{1\le k\le n}|a_k|\} = o(1) for all \varepsilon > 0,

then

    nP\{|X - EX| > \varepsilon B_n^*/\max_{1\le k\le n}|a_k|\} = o(1) for all \varepsilon > 0.

Proof. Let \varepsilon > 0 and set \mu = EX. By Lemma 2.10, B_n^* ~ B_n, so there exists a constant M > 0 such that B_n^* \ge MB_n for all n \ge 1. Recall that B_n/\max_{1\le k\le n}|a_k| \to \infty, and so if n is sufficiently large, then

    \varepsilon B_n^*/\max_{1\le k\le n}|a_k| \ge \varepsilon MB_n/\max_{1\le k\le n}|a_k| \ge 2|\mu|.

Therefore, for all large n,

    nP\{|X - \mu| > \varepsilon B_n^*/\max_{1\le k\le n}|a_k|\} \le nP\{|X| > \varepsilon B_n^*/\max_{1\le k\le n}|a_k| - |\mu|\} \le nP\{|X| > \frac{\varepsilon M}{2}\cdot\frac{B_n}{\max_{1\le k\le n}|a_k|}\} = o(1)

by the hypothesis (applied with \varepsilon M/2 in place of \varepsilon). □


Lemma 2.13. If P\{|X| > t\} is regularly varying with exponent -2, then P\{|X - \mu| > t\} is likewise regularly varying with exponent -2, where \mu = EX.

Proof. Let L(t) = t^2P\{|X| > t\} and L_1(t) = t^2P\{|X - \mu| > t\} for t > 0. It need only be shown that L_1(t) ~ L(t), since then, for arbitrary s > 0,

    L_1(st) ~ L(st) ~ L(t) ~ L_1(t).

To this end, let 0 < \varepsilon < 1 be arbitrary and let t \ge (1+\varepsilon)|\mu|/\varepsilon, which implies that t - |\mu| \ge t/(1+\varepsilon). Then

    \frac{L_1(t)}{L(t)} = \frac{P\{|X-\mu| > t\}}{P\{|X| > t\}} \le \frac{P\{|X| > t - |\mu|\}}{P\{|X| > t\}} \le \frac{P\{|X| > \frac{t}{1+\varepsilon}\}}{P\{|X| > t\}} = \frac{(1+\varepsilon)^2 L(\frac{t}{1+\varepsilon})}{L(t)} = (1+\varepsilon)^2(1+o(1))

since, by hypothesis, L is slowly varying. Thus \limsup_{t\to\infty} L_1(t)/L(t) \le (1+\varepsilon)^2, and since \varepsilon is arbitrary we obtain

    \limsup_{t\to\infty} \frac{L_1(t)}{L(t)} \le 1.

Again, let 0 < \varepsilon < 1 be arbitrary, and now let t \ge (1-\varepsilon)|\mu|/\varepsilon, which implies that t + |\mu| \le t/(1-\varepsilon). Therefore

    \frac{L_1(t)}{L(t)} \ge \frac{P\{|X| > t + |\mu|\}}{P\{|X| > t\}} \ge \frac{P\{|X| > \frac{t}{1-\varepsilon}\}}{P\{|X| > t\}} = \frac{(1-\varepsilon)^2 L(\frac{t}{1-\varepsilon})}{L(t)} = (1-\varepsilon)^2(1+o(1))

since L is slowly varying. Hence

    \liminf_{t\to\infty} \frac{L_1(t)}{L(t)} \ge (1-\varepsilon)^2,

whence \liminf_{t\to\infty} L_1(t)/L(t) \ge 1. This, together with the limsup bound, shows that L_1(t) ~ L(t), which is tantamount to P\{|X - \mu| > t\} being regularly varying with exponent -2. □

We are now able to state and prove a CLT for random variables centered about their mean.

Theorem 2.5. Let \{X, X_n, n \ge 1\} be i.i.d. random variables with EX^2 = \infty and with P\{|X| > t\} regularly varying with exponent -2. Let \{a_n, n \ge 1\} be constants with

    G(B_n/\min_{1\le k\le n}|a_k|) ~ G(B_n/\max_{1\le k\le n}|a_k|)   (55)

and

    nP\{|X| > \varepsilon B_n/\max_{1\le k\le n}|a_k|\} = o(1) for all \varepsilon > 0.   (56)

Then

    \frac{\sum_{k=1}^n a_k(X_k - EX)}{B_n} \to_d N(0,1).


Proof. From the regular variation of P\{|X| > t\} we can conclude that G and H are slowly varying. This, in turn, implies that B_n/\max_{1\le k\le n}|a_k| \to \infty by Lemma 2.3. Let Y = X - EX and Y_n = X_n - EX, n \ge 1, and note that since EX^2 = \infty we also have EY^2 = \infty. By Lemma 2.13 and the hypothesis that P\{|X| > t\} is regularly varying with exponent -2, we see that P\{|Y| > t\} is also regularly varying with exponent -2. It then follows, as above, that G_1 and H_1 are also slowly varying. Next, via Lemma 2.10, we conclude that B_n ~ B_n^*. Now (55) is equivalent to

    G_1(B_n^*/\min_{1\le k\le n}|a_k|) ~ G_1(B_n^*/\max_{1\le k\le n}|a_k|)

by Lemma 2.11. Finally, we note, by Lemma 2.12, that (56) implies

    nP\{|Y| > \varepsilon B_n^*/\max_{1\le k\le n}|a_k|\} = o(1) for all \varepsilon > 0.

Therefore, all the hypotheses of Corollary 2.1 are satisfied for the sequence \{Y, Y_n, n \ge 1\}. Hence

    \frac{\sum_{k=1}^n a_kY_k}{B_n^*} \to_d N(0,1).

Then, remembering that Y_k = X_k - EX and B_n ~ B_n^*, we obtain

    \frac{\sum_{k=1}^n a_k(X_k - EX)}{B_n} \to_d N(0,1). □


2.5 Asymptotic Negligibility

In order to establish a CLT, "it is [generally] essential to impose a hypothesis where individual terms in the sum S_n are 'negligible' in comparison with the sum itself" (Chung, 1974, p. 97). There are several different measures of negligibility. The literature (Loève, 1977; Laha and Rohatgi, 1979, p. 295) calls the double array \{a_kX_k/B_n, 1 \le k \le n, n \ge 1\} uniformly asymptotically negligible (u.a.n.) if

    \max_{1\le k\le n} P\{|\frac{a_kX_k}{B_n}| > \varepsilon\} \to 0 for all \varepsilon > 0.

This condition is also known by various other names, such as infinitesimal (Chow and Teicher, 1978, p. 422) or holospoudic (Chung, 1974, p. 98). Another measure of negligibility is

    \max_{1\le k\le n} |\mathrm{median}(\frac{a_kX_k}{B_n})| \to 0.   (57)
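The u.a.n. condition is easy to exercise on a concrete array. In the sketch below, the tail P\{|X| > x\} = x^{-2}, the weights a_k = \sqrt{k}, and the norming B_n = n are hypothetical choices made only for illustration:

```python
def tail(x):
    # P{|X| > x} = x^{-2} for x >= 1 (a Pareto-type tail), and 1 otherwise
    return 1.0 if x < 1 else x ** -2

def max_uan_prob(n, eps):
    # max_{1<=k<=n} P{|a_k X_k| > eps * B_n} with a_k = sqrt(k), B_n = n
    B_n = float(n)
    return max(tail(eps * B_n / k ** 0.5) for k in range(1, n + 1))

vals = [max_uan_prob(n, 0.5) for n in (10, 100, 1000, 10000)]
print(vals)  # decreases toward 0, so this array is u.a.n.
```

The maximum is attained at the largest weight k = n, and it still tends to 0, which is exactly what uniform asymptotic negligibility demands.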


Fact. If \{a_kX_k/B_n\} is u.a.n., then (57) holds.

Proof. Let \varepsilon > 0. By hypothesis there exists an integer N such that

    \max_{1\le k\le n} P\{|\frac{a_kX_k}{B_n}| > \varepsilon\} < \frac{1}{2} for all n \ge N.

Thus, for n \ge N and 1 \le k \le n, every median of a_kX_k/B_n lies in [-\varepsilon, \varepsilon], whence \max_{1\le k\le n}|\mathrm{median}(a_kX_k/B_n)| \le \varepsilon for all n \ge N. Since \varepsilon is arbitrary, (57) follows. □











The measure of negligibility that we are using in our CLTs is the condition (10). Under our hypotheses, it is equivalent to

    \max_{1\le k\le n} \frac{|a_kX_k|}{B_n} \to_P 0.   (58)


Fact. If \{X, X_n, n \ge 1\} are i.i.d. random variables, then (58) and (10) are equivalent.

Proof. Let \varepsilon > 0 and define p_{nk} = P\{|a_kX_k| > \varepsilon B_n\}, 1 \le k \le n. By independence,

    P\{\max_{1\le k\le n} |a_kX_k| > \varepsilon B_n\} = 1 - \prod_{k=1}^n (1 - p_{nk}).

Now

    1 - \sum_{k=1}^n p_{nk} \le \prod_{k=1}^n (1 - p_{nk}) \le \prod_{k=1}^n \exp\{-p_{nk}\} = \exp\{-\sum_{k=1}^n p_{nk}\},

so that

    1 - \prod_{k=1}^n (1 - p_{nk}) = o(1) iff \sum_{k=1}^n p_{nk} = o(1).

Since the X_k are identically distributed, \sum_{k=1}^n p_{nk} \le nP\{|X| > \varepsilon B_n/\max_{1\le k\le n}|a_k|\}, and under our hypotheses \sum_{k=1}^n p_{nk} = o(1) iff (10) holds. Therefore (58) is equivalent to (10). □


Our measure of smallness implies the u.a.n. condition.

Fact. If (10) holds, then \{a_kX_k/B_n\} is u.a.n.

Proof. Clearly, for all 1 \le k \le n,

    P\{|a_kX_k| > \varepsilon B_n\} \le P\{|X_k| > \varepsilon B_n/\max_{1\le j\le n}|a_j|\}.

Thus

    \max_{1\le k\le n} P\{|a_kX_k| > \varepsilon B_n\} \le nP\{|X| > \varepsilon B_n/\max_{1\le j\le n}|a_j|\} = o(1). □


An Asymptotic Representation for \{B_n, n \ge 1\}

The CLTs in this chapter hinge upon hypotheses that involve functions of regular variation. At first we assumed that H is slowly varying; later we supposed that P\{|X| > t\} is a regularly varying function. In either case G and H are slowly varying. It is natural to ask which random variables have a distribution which satisfies these hypotheses. To understand this class of random variables one needs to study the Karamata representation theorem and the behavior of regularly varying functions. We know, from the Karamata representation theorem, that all slowly varying functions are of a specific form. Also, recall that a function w is regularly varying with exponent \rho if and only if w(t) = t^{\rho}L(t), where L(t) is a slowly varying function. With all this in mind we will now generate an interesting class of slowly varying functions which are applicable to our CLTs.


Suppose the distribution of the random variable X satisfies, for some 0 < \alpha < 1,

    P\{|X| > t\} ~ \frac{\alpha}{2t^2}(\log t)^{\alpha-1}\exp\{(\log t)^{\alpha}\} as t \to \infty.

Thus

    G(x) ~ \int_e^x \frac{\alpha}{t}(\log t)^{\alpha-1}\exp\{(\log t)^{\alpha}\}\,dt ~ \exp\{(\log x)^{\alpha}\} as x \to \infty.

One can check directly that G is a slowly varying function. Alternatively, one can easily verify that (46) holds. For let a(t) \to 1 and \varepsilon(t) = \alpha(\log t)^{\alpha-1}. Then

    \exp\{\int_1^t \frac{\alpha(\log y)^{\alpha-1}}{y}\,dy\} = \exp\{(\log t)^{\alpha}\} ~ G(t),

and \varepsilon(t) \to 0 since \alpha < 1, so G is of the form (46).
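A quick numerical sanity check that G_\alpha(x) = exp{(log x)^\alpha} is slowly varying, i.e., that G_\alpha(sx)/G_\alpha(x) \to 1 for fixed s. The values \alpha = 0.4 and s = 10 below are arbitrary illustrative choices:

```python
import math

def G(x, alpha=0.4):
    # G_alpha(x) = exp{(log x)^alpha}, slowly varying for 0 < alpha < 1
    return math.exp(math.log(x) ** alpha)

s = 10.0  # fixed scaling factor
ratios = [G(s * x) / G(x) for x in (1e3, 1e6, 1e12, 1e24)]
print(ratios)  # decreases toward 1, confirming slow variation
```

The convergence is visibly slow, which is typical for this family: the exponent difference (log sx)^\alpha - (log x)^\alpha decays only like (log x)^{\alpha-1}.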










We require that \alpha < 1 so that \varepsilon(t) \to 0, and we also need \alpha > 0 to ensure that G(t) \to \infty. From (7) we obtain

    q_n = \sqrt{nG(q_n)}.

Since G is slowly varying, a natural question is whether we can replace G(q_n) by G(\sqrt{n}) and thus conclude that

    q_n ~ \sqrt{nG(\sqrt{n})},

thereby yielding, via (7), an explicit asymptotic representation for the norming constants.


Define G_{\alpha}(x) = \exp\{(\log x)^{\alpha}\}, 0 < \alpha < 1. This class of slowly varying functions has some intriguing properties. It will be seen that G_{\alpha}(q_n) ~ G_{\alpha}(\sqrt{n}) holds for some, but not all, \alpha in (0,1).

Proposition 2.1. If G = G_{\alpha} = \exp\{(\log x)^{\alpha}\}, then G(q_n) ~ G(\sqrt{n}) iff 0 < \alpha < 1/2.


Proof. We need to observe the limiting behavior of (\log q_n)^{\alpha} - (\log\sqrt{n})^{\alpha}. From q_n = \sqrt{nG(q_n)} we see that

    \log q_n = \log\sqrt{n} + \frac{1}{2}\log G(q_n) = \log\sqrt{n} + \frac{1}{2}(\log q_n)^{\alpha},   (60)

and note that for all large n, q_n \ge \sqrt{n}. Write m = \log\sqrt{n}. By the mean value theorem and (60), if n is sufficiently large,

    (\log q_n)^{\alpha} - (\log\sqrt{n})^{\alpha} = (m + \frac{1}{2}(\log q_n)^{\alpha})^{\alpha} - m^{\alpha} = \alpha\xi^{\alpha-1}\cdot\frac{1}{2}(\log q_n)^{\alpha}

for some m < \xi < m + \frac{1}{2}(\log q_n)^{\alpha}. Since G is slowly varying we obtain, for all 0 < \varepsilon < 1, that G(x) \le x^{\varepsilon} for all large x. Then q_n^2 = nG(q_n) \le nq_n^{\varepsilon}, which shows that q_n \le n^{1/(2-\varepsilon)} for all large n; hence, for arbitrary \beta > 1 and all large n, \log q_n \le \beta m. Therefore, for all large n,

    (1+o(1))\frac{\alpha}{2}m^{2\alpha-1} \le (\log q_n)^{\alpha} - (\log\sqrt{n})^{\alpha} \le \frac{\alpha\beta^{\alpha}}{2}m^{2\alpha-1},

using \xi \ge m and \log q_n \ge m for the lower bound, and \xi \le (1+o(1))m together with \log q_n \le \beta m for the upper bound. Letting \beta \downarrow 1 we conclude

    \lim_{n\to\infty}\{(\log q_n)^{\alpha} - (\log\sqrt{n})^{\alpha}\} = 0 for 0 < \alpha < 1/2, = \frac{1}{4} for \alpha = 1/2, = \infty for 1/2 < \alpha < 1.

Therefore

    \lim_{n\to\infty}\frac{G_{\alpha}(q_n)}{G_{\alpha}(\sqrt{n})} = \lim_{n\to\infty}\exp\{(\log q_n)^{\alpha} - (\log\sqrt{n})^{\alpha}\} = 1 iff 0 < \alpha < 1/2. □











Remark. In this last example we let G = G_{\alpha} = \exp\{(\log x)^{\alpha}\}, but if we let G be any function with G ~ G_{\alpha}, the conclusion of Proposition 2.1 will now be shown to remain valid for G.

Proof. Let G_{\alpha}(x) = \exp\{(\log x)^{\alpha}\} and let G be a function with G ~ G_{\alpha}. Define Q_{\alpha}(x) = x^2/G_{\alpha}(x) and Q(x) = x^2/G(x), with inverses q_{\alpha} and q. Since G ~ G_{\alpha} we have Q ~ Q_{\alpha}. Utilizing Lemma 2.7, L(t) = t^{-1/2}q_{\alpha}(t) is slowly varying, and arguing exactly as in the proof of Lemma 2.10 (via Lemma 2.9) we obtain q(t) ~ q_{\alpha}(t). Reapplying Lemma 2.9 we can conclude that

    G(q_n) ~ G_{\alpha}(q_n) ~ G_{\alpha}(q_{\alpha}(n)) ~ G_{\alpha}(\sqrt{n}) ~ G(\sqrt{n})

for 0 < \alpha < 1/2 (where q_n = Q^{-1}(n), and the third equivalence is Proposition 2.1). □










In view of (7) and the example in Proposition 2.1, an interesting question to raise is whether G(q_n) ~ G(\sqrt{n}) prevails for a large class of distributions. The answer is affirmative in view of the ensuing Propositions 2.2 and 2.3.

Proposition 2.2. If G(x) \to \infty, then G(q_n) ~ G(\sqrt{n}) iff G(x) ~ G(x\sqrt{G(x)}). In such a case, G is necessarily slowly varying.

Proof. Recall that q_n = \sqrt{nG(q_n)}, whence q_n \ge \sqrt{n} for n sufficiently large. Assuming that G(q_n) ~ G(\sqrt{n}) and utilizing the fact that G is nondecreasing, we obtain for n sufficiently large that

    G(\sqrt{n}) \le G(\sqrt{n}\sqrt{G(\sqrt{n})}) \le G(\sqrt{n}\sqrt{G(q_n)}) = G(q_n) = (1+o(1))G(\sqrt{n}).

Thus G(\sqrt{n}) ~ G(\sqrt{n}\sqrt{G(\sqrt{n})}). Now, to show G(x) ~ G(x\sqrt{G(x)}) along the reals, apply Lemma 2.9 twice: first we obtain G(\sqrt{n}) ~ G(\sqrt{n+1}), and then G(\sqrt{n}\sqrt{G(\sqrt{n})}) ~ G(\sqrt{n+1}\sqrt{G(\sqrt{n+1})}). Hence, if \sqrt{n} \le x \le \sqrt{n+1}, we have, by the monotonicity of G,

    \frac{G(\sqrt{n})}{G(\sqrt{n+1}\sqrt{G(\sqrt{n+1})})} \le \frac{G(x)}{G(x\sqrt{G(x)})} \le \frac{G(\sqrt{n+1})}{G(\sqrt{n}\sqrt{G(\sqrt{n})})},

and both bounds converge to 1, so G(x) ~ G(x\sqrt{G(x)}).

Next, we prove the sufficiency half of this proposition. Again we use Lemma 2.9: the assumption G(x) ~ G(x\sqrt{G(x)}) implies that x\sqrt{G(x)} ~ x\sqrt{G(x\sqrt{G(x)})}, so that

    G(x\sqrt{G(x)}) ~ G(x\sqrt{G(x)}\sqrt{G(x\sqrt{G(x)})}),

whence, replacing x\sqrt{G(x)}-type points by q_n = \sqrt{n}\sqrt{G(q_n)} and using \sqrt{n} \le q_n together with the monotonicity of G, we conclude G(q_n) ~ G(\sqrt{n}).

Finally, we will show that G is slowly varying. Let s > 1. Since G(x) \to \infty, for all sufficiently large x we have s \le \sqrt{G(x)}, and so

    G(x) \le G(sx) \le G(x\sqrt{G(x)}),

which, together with G(x) ~ G(x\sqrt{G(x)}), shows that G(x) ~ G(sx). Thus G(sx) ~ G(x) as x \to \infty for all s > 1 (and hence for all s > 0), and so G is slowly varying. □


With Proposition 2.2 in mind, we would like to find other conditions that imply G(q_n) ~ G(\sqrt{n}) or, equivalently, G(x) ~ G(x\sqrt{G(x)}). Before we do that, we need to state a definition and prove a preliminary lemma.

We say that a nonnegative function g defined on [0, \infty) preserves asymptotic equivalence at infinity if g(x_n) ~ g(y_n) whenever \{x_n, n \ge 1\} and \{y_n, n \ge 1\} are nonnegative sequences with x_n ~ y_n \to \infty. We have already noted, in Lemma 2.9, that if L is slowly varying and a(t) ~ b(t) \to \infty, then L(a(t)) ~ L(b(t)). We now establish the following lemma.


Lemma 2.14. Let \tilde{g} be a nonnegative, nondecreasing function defined on the nonnegative integers which preserves asymptotic equivalence at infinity. Let g, defined on [0, \infty), agree with \tilde{g} on the integers and be defined by linear interpolation between the integers, i.e.,

    g(x) = (\tilde{g}(n+1) - \tilde{g}(n))(x - n) + \tilde{g}(n) for n \le x \le n+1, n \ge 0.   (63)

Then g(x) ~ \tilde{g}(\lfloor x\rfloor) as x \to \infty, and g preserves asymptotic equivalence at infinity.

Proof. For all large x, writing n \le x \le n+1, it follows from the monotonicity of g and the hypotheses on \tilde{g} that

    1 \le \frac{g(x)}{\tilde{g}(n)} \le \frac{\tilde{g}(n+1)}{\tilde{g}(n)} = 1 + o(1)

as x \to \infty (since n ~ n+1), whence g(x) ~ \tilde{g}(n) = \tilde{g}(\lfloor x\rfloor). Then, if x_n ~ y_n \to \infty, we have \lfloor x_n\rfloor ~ \lfloor y_n\rfloor \to \infty, so \tilde{g}(\lfloor x_n\rfloor) ~ \tilde{g}(\lfloor y_n\rfloor) and hence

    g(x_n) ~ \tilde{g}(\lfloor x_n\rfloor) ~ \tilde{g}(\lfloor y_n\rfloor) ~ g(y_n);

that is, g preserves asymptotic equivalence at infinity. □
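The interpolation construction of Lemma 2.14 can be exercised numerically. The particular integer function below, g̃(n) = n^2, is an arbitrary illustrative choice (it preserves asymptotic equivalence, since x_n ~ y_n implies x_n^2 ~ y_n^2):

```python
import math

def g_tilde(n):
    # nonnegative, nondecreasing on the integers
    return float(n * n)

def g(x):
    # linear interpolation of g_tilde between consecutive integers, as in (63)
    n = int(x)
    return (g_tilde(n + 1) - g_tilde(n)) * (x - n) + g_tilde(n)

# g should preserve asymptotic equivalence: for y ~ x, g(x)/g(y) -> 1.
xs = [10.5, 100.5, 1000.5]
ys = [x + math.sqrt(x) / 10 for x in xs]  # y ~ x since sqrt(x)/x -> 0
ratios = [g(x) / g(y) for x, y in zip(xs, ys)]
print(ratios)  # approaches 1
```

The ratios move toward 1 as the arguments grow, matching the conclusion of the lemma.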


The next proposition, when combined with Proposition 2.2 and (7), yields the explicit asymptotic representation q_n ~ \sqrt{nG(\sqrt{n})}.


Proposition 2.3. Let G be defined as above and suppose that G(x) \to \infty. Then (i) => (ii) => (iii) => G(x) ~ G(x\sqrt{G(x)}) (equivalently, G(q_n) ~ G(\sqrt{n})), where:

(i) there exist a function b(x) \to c \in (0, \infty) and a constant \tau such that G(x) = b(x)(\log x)^{\tau} for all large x;

(ii) G(u_n) ~ G(v_n) whenever \{u_n\} and \{v_n\} are real sequences with \log u_n ~ \log v_n \to \infty;

(iii) there exists a nondecreasing function r(x) \to \infty such that G(x) = \exp\{\sqrt{\log x}/r(x)\} for all large x.










Moreover, in such a case G is necessarily slowly varying.

Proof. Suppose that (i) holds and let \log u_n ~ \log v_n \to \infty. Set m_n = \min\{u_n, v_n\} and M_n = \max\{u_n, v_n\}. Then, since G is nondecreasing,

    1 \le \frac{G(M_n)}{G(m_n)} = \frac{b(M_n)}{b(m_n)}(\frac{\log M_n}{\log m_n})^{\tau} = (1+o(1))\exp\{\tau\log(\frac{\log M_n}{\log m_n})\} = (1+o(1))\exp\{O(1)\cdot o(1)\} = 1+o(1),

implying that G(m_n) ~ G(M_n). Therefore

    \frac{G(u_n)}{G(v_n)} \le \frac{G(M_n)}{G(m_n)} = 1+o(1), and likewise \frac{G(v_n)}{G(u_n)} \le 1+o(1),

thereby proving (i) => (ii).

Next, suppose that (ii) holds. It will be shown that (iii) obtains with r(x) = \sqrt{\log x}/\log G(x). Define \tilde{g}(x) = G(\exp\{x^2\}), x \ge 0. We first verify that \tilde{g} preserves asymptotic equivalence at infinity: let x_n ~ y_n \to \infty and set u_n = \exp\{x_n^2\}, v_n = \exp\{y_n^2\}. Then \log u_n = x_n^2 ~ y_n^2 = \log v_n \to \infty, and so, by (ii), \tilde{g}(x_n) = G(u_n) ~ G(v_n) = \tilde{g}(y_n). Define g on [0, \infty) from the integer restriction of \tilde{g} as in (63). By Lemma 2.14, g(x) ~ \tilde{g}(\lfloor x\rfloor) ~ \tilde{g}(x), and g preserves asymptotic equivalence at infinity. In particular \tilde{g}(n+1) ~ \tilde{g}(n), whence

    \log\tilde{g}(n) = \sum_{k<n}\log\frac{\tilde{g}(k+1)}{\tilde{g}(k)} + O(1) = o(n),

so that \log G(x) = \log\tilde{g}(\sqrt{\log x}) = o(\sqrt{\log x}). Consequently r(x) = \sqrt{\log x}/\log G(x) \to \infty and, trivially, G(x) = \exp\{\sqrt{\log x}/r(x)\}; a mean value theorem argument along the interpolated function g shows that r may be taken eventually nondecreasing, establishing (iii).

Now, suppose that (iii) holds. Then

    \frac{G(x\sqrt{G(x)})}{G(x)} = \exp\{\frac{\sqrt{\log(x\sqrt{G(x)})}}{r(x\sqrt{G(x)})} - \frac{\sqrt{\log x}}{r(x)}\}.

Since

    \log(x\sqrt{G(x)}) = \log x + \frac{1}{2}\log G(x) = (\log x)(1 + \frac{1}{2r(x)\sqrt{\log x}}),

we have, for all large x,

    \sqrt{\log(x\sqrt{G(x)})} = \sqrt{\log x} + \frac{1+o(1)}{4r(x)}   (since r(x) \to \infty).

Hence, using r(x\sqrt{G(x)}) \ge r(x) (r nondecreasing and \sqrt{G(x)} \ge 1 for large x),

    0 \le \frac{\sqrt{\log(x\sqrt{G(x)})}}{r(x\sqrt{G(x)})} - \frac{\sqrt{\log x}}{r(x)} \le \frac{1+o(1)}{4r^2(x)} \to 0,

thereby proving G(x) ~ G(x\sqrt{G(x)}). Finally, the last assertion was proved in Proposition 2.2. □


Examples

We conclude this chapter with two examples illustrating some of the results of the chapter.

Example 2.1. Let \{X, X_n, n \ge 1\} be i.i.d. random variables with common density function f(x) = |x|^{-3}I(|x| > 1). Then

    \frac{\sum_{k=1}^n \sqrt{\log k}\,(X_k - EX)}{\sqrt{n}\log n} \to_d N(0,1).

Proof. Note that E|X| = 2 < \infty, EX = 0, and EX^2 = \infty. Also, for x \ge 1,

    P\{|X| > x\} = 2\int_x^{\infty} t^{-3}\,dt = x^{-2},

so P\{|X| > x\} is regularly varying with exponent -2. Then, for x \ge 1,

    G(x) = \int_0^x 2tP\{|X| > t\}\,dt = \int_0^1 2t\,dt + \int_1^x \frac{2}{t}\,dt = 1 + 2\log x,

which is a slowly varying function. Set a_n = \sqrt{\log n}, n \ge 1. Then

    \sum_{k=1}^n a_k^2 = \sum_{k=1}^n \log k ~ \int_1^n \log x\,dx ~ n\log n.

Note that condition (ii) of Proposition 2.3 is satisfied, whence q_n ~ \sqrt{nG(\sqrt{n})} ~ \sqrt{n\log n}, and it follows that B_n ~ \sqrt{n}\log n. Next, observe that \max_{1\le k\le n}|a_k| = \sqrt{\log n} and \min_{2\le k\le n}|a_k| = \sqrt{\log 2} (the weight a_1 = 0 plays no role). Set, for n \ge 1,

    \beta_n = B_n/\max_{1\le k\le n}|a_k| ~ \sqrt{n\log n}.

Thus, recalling Lemma 2.9,

    G(\beta_n) = (1+o(1))G(\sqrt{n\log n}) = (1+o(1))\log n = (1+o(1))G(B_n/\sqrt{\log 2}),

and so (55) obtains. Finally, let \varepsilon > 0. Since B_n ~ \sqrt{n}\log n and P\{|X| > x\} = x^{-2}, for n sufficiently large

    \sum_{k=1}^n P\{|X| > \varepsilon B_n/|a_k|\} = \sum_{k=1}^n \frac{(1+o(1))\log k}{\varepsilon^2 n(\log n)^2} \le \frac{(1+o(1))n\log n}{\varepsilon^2 n(\log n)^2} = \frac{1+o(1)}{\varepsilon^2\log n} = o(1),

establishing (56). Then by Theorem 2.5,

    \frac{\sum_{k=1}^n \sqrt{\log k}\,(X_k - EX)}{B_n} \to_d N(0,1),

and the assertion follows since B_n ~ \sqrt{n}\log n. □
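The norming constants of Example 2.1 solve the implicit equation B_n^2 = \sum_k a_k^2 G(B_n/|a_k|), and the claimed asymptotic B_n ~ \sqrt{n}\log n can be probed numerically. The fixed-point iteration below is a sketch under the example's assumptions (G(x) = 1 + 2 log x, a_k = \sqrt{\log k}); convergence of the ratio to 1 is quite slow, so only a loose check is made:

```python
import math

def G(x):
    # truncated second moment for the density f(x) = |x|^{-3}, |x| > 1
    return 1.0 + 2.0 * math.log(x)

def solve_B(n, iters=25):
    # fixed-point iteration for B^2 = sum_k a_k^2 * G(B / |a_k|),
    # weights a_k = sqrt(log k) (the k = 1 term vanishes)
    a2 = [math.log(k) for k in range(2, n + 1)]   # a_k^2
    B = math.sqrt(n) * math.log(n)                # start at the claimed asymptote
    for _ in range(iters):
        B = math.sqrt(sum(s * G(B / math.sqrt(s)) for s in a2))
    return B

n = 50_000
ratio = solve_B(n) / (math.sqrt(n) * math.log(n))
print(ratio)  # tends to 1 (very slowly) as n grows
```

The iteration contracts because the map's logarithmic sensitivity to B is of order 1/log n, so a couple of dozen iterations suffice.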










This next example illustrates Corollary 2.1. It will also create a family of CLTs, since the weights are not explicitly defined.

Example 2.2. Let \{X, X_n, n \ge 1\} be i.i.d. random variables with common density function

    f(x) = \frac{e^2(2\log|x| - 1)}{2|x|^3} I(|x| \ge e).

Let L be any positive, slowly varying, and nondecreasing function. Then

    \frac{\sum_{k=1}^n L(k)X_k}{e\sqrt{n}L(n)\log\sqrt{n}} \to_d N(0,1).

Proof. Firstly, noting that \frac{d}{dx}(-\frac{\log x}{x^2}) = \frac{2\log x - 1}{x^3}, we verify that f is indeed a density:

    \int_{-\infty}^{\infty} f(x)\,dx = e^2\int_e^{\infty}\frac{2\log x - 1}{x^3}\,dx = e^2[-\frac{\log x}{x^2}]_e^{\infty} = e^2\cdot e^{-2} = 1.

Clearly EX = 0 and EX^2 = \infty. For x \ge e,

    P\{|X| > x\} = e^2\int_x^{\infty}\frac{2\log t - 1}{t^3}\,dt = (\frac{e}{x})^2\log x,

and so P\{|X| > x\} is regularly varying with exponent -2. Moreover, for x \ge e,

    G(x) = \int_0^e 2t\,dt + \int_e^x 2t\cdot\frac{e^2\log t}{t^2}\,dt = e^2 + 2e^2\int_e^x\frac{\log t}{t}\,dt = (e\log x)^2,

which is slowly varying. Set a_n = L(n), n \ge 1. Since L is slowly varying, it follows (see, e.g., Feller, 1971, p. 281) that

    \sum_{k=1}^n a_k^2 = \sum_{k=1}^n L^2(k) ~ nL^2(n).

Also,

    G(x\sqrt{G(x)}) = G(xe\log x) = e^2(\log(xe\log x))^2 ~ e^2(\log x)^2 = G(x).

Then, by Proposition 2.2 (or 2.3), G(q_n) ~ G(\sqrt{n}), and it follows that

    B_n ~ e\sqrt{n}L(n)\log\sqrt{n}.

Let \varepsilon > 0. Since \max_{1\le k\le n}|a_k| = L(n) (L being nondecreasing), for all large n,

    nP\{|X| > \varepsilon B_n/L(n)\} = nP\{|X| > (1+o(1))\varepsilon e\sqrt{n}\log\sqrt{n}\} = \frac{(1+o(1))n\log(\varepsilon e\sqrt{n}\log\sqrt{n})}{\varepsilon^2 n(\log\sqrt{n})^2} = \frac{2+o(1)}{\varepsilon^2\log n} = o(1),

whence (10) obtains. It remains to show that (9) holds. Note that \min_{1\le k\le n}|a_k| = L(1) and \max_{1\le k\le n}|a_k| = L(n). Since G is slowly varying and B_n ~ e\sqrt{n}L(n)\log\sqrt{n}, Lemma 2.9 yields

    G(B_n/\min_{1\le k\le n}|a_k|) = G(\frac{(1+o(1))e\sqrt{n}L(n)\log\sqrt{n}}{L(1)}) ~ G(\sqrt{n}L(n)\log n)

and

    G(B_n/\max_{1\le k\le n}|a_k|) ~ G(e\sqrt{n}\log\sqrt{n}) ~ G(\sqrt{n}\log n).

Let \alpha_n = \sqrt{n}L(n)\log n and \delta_n = \sqrt{n}\log n, n \ge 1. To establish (9) we need to show that G(\alpha_n) ~ G(\delta_n). Note that

    G(\alpha_n) = e^2(\log(\sqrt{n}L(n)\log n))^2 = e^2(\frac{1}{2}\log n + \log L(n) + \log_2 n)^2 ~ e^2(\frac{1}{2}\log n)^2

(by (3), since \log L(n) = o(\log n) for slowly varying L; here \log_2 n denotes \log\log n), and

    G(\delta_n) = e^2(\log(\sqrt{n}\log n))^2 = e^2(\frac{1}{2}\log n + \log_2 n)^2 ~ e^2(\frac{1}{2}\log n)^2,

and so (9) obtains. Then, via Corollary 2.1,

    \frac{\sum_{k=1}^n L(k)X_k}{B_n} \to_d N(0,1),

and the assertion follows since B_n ~ e\sqrt{n}L(n)\log\sqrt{n}. □
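The closed forms used in Example 2.2, in particular the tail P\{|X| > x\} = (e/x)^2 log x, can be confirmed numerically. The quadrature below is a crude log-spaced midpoint rule, used purely as a sanity check (the cutoff and step count are arbitrary choices; the mass beyond the cutoff is of order (e/10^7)^2 log 10^7 and hence negligible):

```python
import math

E = math.e

def f(x):
    # one tail of the symmetric density of Example 2.2, x >= e
    return E**2 * (2 * math.log(x) - 1) / (2 * x**3)

def tail(x, upper=1e7, steps=50_000):
    # numerically integrate 2*f over (x, upper) with a midpoint rule
    # in the variable log t (so dt = t d(log t))
    h = (math.log(upper) - math.log(x)) / steps
    total = 0.0
    for i in range(steps):
        t = math.exp(math.log(x) + (i + 0.5) * h)
        total += 2 * f(t) * t * h
    return total

x = 50.0
approx = tail(x)
exact = (E / x) ** 2 * math.log(x)   # the closed form derived in the example
print(approx, exact)
```

Agreement to a few parts in ten thousand confirms the integration-by-parts computation.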

















CHAPTER THREE
GENERALIZED STRONG LAWS OF LARGE NUMBERS

Introduction

In this chapter, we present generalized strong laws of large numbers (GSLLN) for weighted sums of random variables, that is, results of the form

    \frac{\sum_{k=1}^n a_k(X_k - \gamma_k)}{b_n} \to 0 a.s.

The hypotheses of these theorems vary greatly. In general, they control the behavior of the random variables by restricting the magnitude of the tail of the distribution of |X_n|. Most of the assumptions that involve the sequences \{a_n, n \ge 1\} and \{b_n, n \ge 1\} only depend on the absolute value of their ratio. It thus proves convenient to define the sequence

    c_n = b_n/|a_n|, n \ge 1.   (2)

This notation will be used throughout this chapter. The centering sequence \{\gamma_n, n \ge 1\} will, for the most part, be the null sequence or a sequence of conditional expectations.










Preliminary Lemmas

The theorems in this chapter assume that the random variables \{X_n, n \ge 1\} either form an i.i.d. sequence or are stochastically dominated by a random variable X. The sequence \{X_n, n \ge 1\} is stochastically dominated by X if there exists a constant D < \infty such that

    P\{|X_n| > t\} \le DP\{|X| > t/D\}, t \ge 0, n \ge 1.   (13)

It is important to note that condition (13) does not place restrictions on the joint distributions of the random variables \{X_n, n \ge 1\}. Also, it should be clear that if D satisfies (13), then any number larger than D also satisfies (13). Finally, note that if the random variables are i.i.d., then (13) holds with X = X_1 and D = 1.


Lemma 3.1. Let Y and X be random variables such that Y is stochastically dominated by X in the sense that there exists a constant D_0 < \infty with P\{|Y| > t\} \le D_0P\{|X| > t/D_0\} for all t \ge 0. Then for all q > 0 and s > 0,

    E|Y|^qI(|Y| \le s) \le D_0^{q+1}E|X|^qI(|X| \le s/D_0) + D_0s^qP\{|X| > s/D_0\}.

Proof. Note that

    E|Y|^qI(|Y| \le s) = \int_0^{\infty} P\{|Y|^qI(|Y| \le s) > t\}\,dt \le \int_0^{s^q} P\{|Y| > t^{1/q}\}\,dt
    \le D_0\int_0^{s^q} P\{|X| > t^{1/q}/D_0\}\,dt = D_0\int_0^{s^q} P\{D_0^q|X|^q > t\}\,dt
    = D_0[s^qP\{D_0^q|X|^q > s^q\} + ED_0^q|X|^qI(D_0^q|X|^q \le s^q)]
    = D_0^{q+1}E|X|^qI(|X| \le s/D_0) + D_0s^qP\{|X| > s/D_0\}. □


Lemma


will


be used


in establishing


Lemma


which,


as will


become


apparent


plays


a maj


role


in this


chapter.


Lemma


Let {X


,n >


and X


be random


variables


such


that


is stochastically


dominated


be constants


with


max
1

where


= 0(n)


some


+ Dq


tq-p


X qI


sqP{










where


as in (


Then


for all 0


1q k
^qEIk


< Mc
- k


Proof.


Let 0


let D


= max


D,/M}


max c


and set do


= 0.


Note


that


and (5


ensures


that


view


of (5)


there


exists


a constant


w such


that


q z
nj =n


for all n >


Note


that


so (


holds


with


and then


1EI Xk
^q k


< Mc )
- k


q1(Ixk


1 q+
--qDo
q o
ck
k


EIXlI( X


oCk)
ok


SD2q+1
+ D
0


oCk0]


Lemm a


with










series


in the second


term


of (9)


converges


since


it is


bounded


above


6)).


D2q+
o


The series


in the first


term


of (9)


is majorized


1
-El
A


o n-1


< Dd )
- on


k -q
k=n c
k


dn-1
o n-1


nP{D


= C 2:
n=
n=1


d }Ddn
o n o n


odn-1


o n-1


qI(D


o n-1


qI(D











Dodk-1


thereby


proving


the lemma.
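The truncated-moment bound of Lemma 3.1 can be checked in closed form for a concrete pair. The choice below (X Pareto with tail x^{-3}, Y = 2X, q = 2) is a hypothetical illustration; both sides of the inequality then reduce to elementary expressions:

```python
import math

# X is Pareto: P{X > x} = x^{-3} for x >= 1; Y = 2X, so
# P{|Y| > t} = P{X > t/2} <= D0 * P{X > t/D0} with D0 = 2.
D0, q = 2.0, 2.0

def trunc_moment_X(u):
    # E X^2 I(X <= u) = int_1^u x^2 * 3x^{-4} dx = 3 log u, for u >= 1
    return 3.0 * math.log(u) if u >= 1 else 0.0

def trunc_moment_Y(s):
    # E Y^2 I(Y <= s) = 4 * E X^2 I(X <= s/2)
    return 4.0 * trunc_moment_X(s / 2.0)

def lemma_bound(s):
    # D0^{q+1} E|X|^q I(|X| <= s/D0) + D0 s^q P{|X| > s/D0}
    u = s / D0
    tail = u ** -3 if u >= 1 else 1.0
    return D0 ** (q + 1) * trunc_moment_X(u) + D0 * s ** q * tail

checks = [(trunc_moment_Y(s), lemma_bound(s)) for s in (4.0, 40.0, 4000.0)]
print(checks)  # each left entry is below its right entry
```

Every truncation level s produces a left side dominated by the Lemma 3.1 bound, as asserted.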


As previously noted, if the sequence \{X_n, n \ge 1\} is identically distributed, then (13) is automatic, but this is hardly necessary. It is quite easy to show that if the sequence \{X_n, n \ge 1\} belongs to a scale family, then (13) holds (subject to an additional assumption), as follows. (The symbol =_d denotes that the two random variables have the same distribution.)

Remark. Let X_n =_d \sigma_nX, n \ge 1, where 0 < \sigma_n and \sup_{n\ge 1}\sigma_n < \infty. Then \{X_n, n \ge 1\} is stochastically dominated by X.

Proof. Let D = \max\{1, \sup_{n\ge 1}\sigma_n\}. Then for all n \ge 1 and t \ge 0,

    P\{|X_n| > t\} = P\{|X| > t/\sigma_n\} \le P\{|X| > t/D\} \le DP\{|X| > t/D\}. □
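The scale-family remark is easy to test numerically for the standard example of mean zero normal random variables (the particular \sigma_n values below are arbitrary illustrations), using the exact normal tail via math.erfc:

```python
import math

def normal_tail(t, sigma=1.0):
    # P{|N(0, sigma^2)| > t} = erfc(t / (sigma * sqrt(2)))
    return math.erfc(t / (sigma * math.sqrt(2.0)))

sigmas = [0.3, 1.2, 2.5, 0.9]   # bounded scale parameters
D = max(1.0, max(sigmas))       # D = max{1, sup sigma_n}

# Verify (13): P{|X_n| > t} <= D * P{|X| > t/D} for X_n = sigma_n * X.
ok = all(
    normal_tail(t, s) <= D * normal_tail(t / D)
    for s in sigmas
    for t in (0.1, 1.0, 3.0, 10.0)
)
print(ok)  # True
```

The inequality holds at every scale and threshold tested, exactly as the monotonicity argument in the remark predicts.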










A well-known example of a scale family is the class of mean zero normal random variables.

Condition (14) below compares the magnitude of the tail of the distribution of the random variables with the constants \{c_n\}. It is well known (see, e.g., Chow and Teicher, 1978, pp. 89-90) that whenever b_n \uparrow \infty (strictly), then for every random variable X

    \sum_{n=1}^{\infty} P\{|X| > b_n\} < \infty iff E\varphi(|X|) < \infty,

where \varphi is a strictly monotone extension of \{(b_n, n), n \ge 1\}. Hence, in some situations the question as to whether or not such a condition holds is immediate since, for example, from E|X|^p < \infty we obtain \sum_{n=1}^{\infty} P\{|X| > An^{1/p}\} < \infty for every A > 0.

Finally, we need to comment on condition (5). It is clear that if \{c_n\} is of the form c_n = n^{\gamma} for some \gamma > 1/q, then (5) obtains. The question at hand is when does (5) hold for an arbitrary sequence \{c_n\}. The next three lemmas address this question.
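Condition (5) for the power sequences just mentioned can be checked directly; the values \gamma = 3/4 and q = 2 below are arbitrary illustrations with q\gamma > 1, and a large finite cutoff stands in for the infinite tail sum:

```python
def partial_tail(n, gamma, q, N=1_000_000):
    # sum_{k=n}^{N} c_k^{-q} with c_k = k^gamma; since q*gamma > 1,
    # the remainder past N is negligible for the n used below
    return sum(k ** (-q * gamma) for k in range(n, N + 1))

gamma, q = 0.75, 2.0   # q * gamma = 1.5 > 1
ratios = [partial_tail(n, gamma, q) / (n * n ** (-q * gamma)) for n in (10, 100, 1000)]
print([round(r, 3) for r in ratios])  # bounded (close to 1/(q*gamma - 1) = 2)
```

The ratio stabilizing near 2 = 1/(q\gamma - 1) reflects the integral comparison \sum_{k\ge n} k^{-q\gamma} ~ n^{1-q\gamma}/(q\gamma - 1), which is exactly the O(nc_n^{-q}) behavior demanded by (5).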


Lemma 3.3. If 0 < p < q, c_n \uparrow, and \sum_{k=n}^{\infty} c_k^{-p} = O(nc_n^{-p}), then

    \sum_{k=n}^{\infty} c_k^{-q} = O(nc_n^{-q}).

Proof. Note that c_n \uparrow and \sum_{k=n}^{\infty} c_k^{-p} < \infty ensure that c_n \to \infty. Therefore

    \sum_{k=n}^{\infty}\frac{1}{c_k^q} = \sum_{k=n}^{\infty}\frac{1}{c_k^{q-p}}\cdot\frac{1}{c_k^p} \le \frac{1}{c_n^{q-p}}\sum_{k=n}^{\infty}\frac{1}{c_k^p} = \frac{1}{c_n^{q-p}}O(nc_n^{-p}) = O(nc_n^{-q}). □


Lemma 3.4. If 0 < p < q, \{c_n\} is nondecreasing with \sum_{k=n}^{\infty} c_k^{-p} = O(nc_n^{-p}), and c_n^p = O(n), then (5) holds with the same sequence \{c_n\} and exponent q, and c_n^q = O(n^{q/p}).

Proof. Since c_n^p = O(n), we obtain c_n^q = (c_n^p)^{q/p} = O(n^{q/p}). Clearly c_n \uparrow, and the first assertion follows from Lemma 3.3, thereby proving the lemma.


Lemma 3.5. Suppose c_n^q/n \uparrow for some q > 0. Then (5) holds iff

    \liminf_{n\to\infty}\frac{c_{rn}}{c_n} > r^{1/q} for some integer r \ge 2.

Proof. Let d_n = c_n^q/n, n \ge 1. Then d_n \uparrow, and

    \liminf_{n\to\infty}\frac{c_{rn}}{c_n} > r^{1/q} iff \liminf_{n\to\infty}\frac{c_{rn}^q}{c_n^q} > r iff \liminf_{n\to\infty}\frac{d_{rn}}{d_n} > 1.

On the other hand, (5) is equivalent to

    \sum_{k=n}^{\infty}\frac{1}{kd_k} = O(\frac{1}{d_n}).

Hence we need only show that, whenever d_n \uparrow, this last condition is equivalent to \liminf_{n} d_{rn}/d_n > 1 for some integer r \ge 2. This equivalence was proved by Martikainen (1985). □


The following lemma is quite useful in our work. It shows that, although a tail-sum condition such as (14) in general depends on the constant involved, if c_n/n^{\alpha} \uparrow for some \alpha > 0, then the requirement can be weakened.

Lemma 3.6. Let X be a random variable and let \{c_n\} be constants such that 0 < c_n/n^{\alpha} \uparrow for some \alpha > 0. Then either

    \sum_{n=1}^{\infty} P\{|X| > Ac_n\} < \infty for all A > 0

or else

    \sum_{n=1}^{\infty} P\{|X| > Ac_n\} = \infty for all A > 0.

Proof. See Stout (1974); a new, simple proof is given by Rosalsky (1985). □

Thus, if 0 < c_n/n^{\alpha} \uparrow for some \alpha > 0, then in order to verify that \sum_{n=1}^{\infty} P\{|X| > Ac_n\} < \infty for all A > 0, we need only check that it holds for some A > 0.











Generalized Strong Laws of Large Numbers for Weighted Sums of Stochastically Dominated Random Variables


With these preliminaries accounted for, the first major result of this chapter may now be established. It is unfortunate that indicators are present in the conclusion (18) of Theorem 3.1. However, under additional conditions, it is shown that a GSLLN holds where the centerings do not involve indicator functions.

Theorem 3.1. Let \{X_n, n \ge 1\} and X be random variables such that \{X_n\} is stochastically dominated by X in the sense that there exist constants D_1 and D_2 such that

    P\{|X_n| > t\} \le D_1P\{|X| > t/D_2\}, t \ge 0, n \ge 1.

Let \{a_n\} and \{b_n > 0\} be constants satisfying (15) and (5) with q = 2, where c_n = b_n/|a_n| is as defined in (2). If for some constant D_3

    \sum_{n=1}^{\infty} P\{|X| > D_3c_n\} < \infty,   (14)

then for every M \ge D_2D_3 the GSLLN

    \frac{\sum_{k=1}^n a_k(X_kI(|X_k| \le Mc_k) - \mu_k)}{b_n} \to 0 a.s.   (18)

obtains, where \mu_n = E\{X_nI(|X_n| \le Mc_n) | X_1, \ldots, X_{n-1}\}, n \ge 1. If, moreover,

    c_n = O(n)   (16)

and (17) hold, then

    \frac{\sum_{k=1}^n a_k(X_k - \nu_k)}{b_n} \to 0 a.s.,

where \nu_1 = EX_1 and \nu_n = E\{X_n | X_1, \ldots, X_{n-1}\}, n \ge 2.


Proof. Let M \ge D_2D_3. The hypotheses ensure that Lemma 3.2 applies with q = 2, whence

    \sum_{n=1}^{\infty}\frac{1}{c_n^2}EX_n^2I(|X_n| \le Mc_n) < \infty.

Let Z_n = X_nI(|X_n| \le Mc_n) and \mu_n = E\{Z_n | \mathcal{F}_{n-1}\}, where \mathcal{F}_n = \sigma(X_1, \ldots, X_n), n \ge 1, and \mathcal{F}_0 is the trivial \sigma-field. Observe that for k \ge 1, \mu_k is \mathcal{F}_{k-1}-measurable and hence

    E\{\frac{a_k}{b_k}(Z_k - \mu_k) | \mathcal{F}_{k-1}\} = 0 a.s.

This is equivalent to saying that \{(a_n/b_n)(Z_n - \mu_n), \mathcal{F}_n, n \ge 1\} is a martingale difference sequence, i.e., that \{\sum_{k=1}^n(a_k/b_k)(Z_k - \mu_k), \mathcal{F}_n, n \ge 1\} is a martingale. It will now be shown that for i < j

    E(Z_i - \mu_i)(Z_j - \mu_j) = 0.   (19)

Observe that

    E(Z_i - \mu_i)(Z_j - \mu_j) = E\{E\{(Z_i - \mu_i)(Z_j - \mu_j) | \mathcal{F}_{j-1}\}\}
    = E\{(Z_i - \mu_i)E\{Z_j - \mu_j | \mathcal{F}_{j-1}\}\}   (since Z_i - \mu_i is \mathcal{F}_{j-1}-measurable)
    = 0.

Next, it will be shown that

    \sum_{k=1}^{\infty}\frac{a_k}{b_k}(Z_k - \mu_k) converges a.s.   (20)

In view of the martingale convergence theorem (see, e.g., Breiman, 1968, p. 89) it suffices to show that \limsup_n E|\sum_{k=1}^n(a_k/b_k)(Z_k - \mu_k)| < \infty. Note that

    E\{(Z_k - \mu_k)^2 | \mathcal{F}_{k-1}\} = E\{Z_k^2 | \mathcal{F}_{k-1}\} - 2\mu_kE\{Z_k | \mathcal{F}_{k-1}\} + \mu_k^2
    = E\{Z_k^2 | \mathcal{F}_{k-1}\} - \mu_k^2   (since \mu_k is \mathcal{F}_{k-1}-measurable)
    \le E\{Z_k^2 | \mathcal{F}_{k-1}\}.

Thus

    \limsup_n(E|\sum_{k=1}^n\frac{a_k}{b_k}(Z_k - \mu_k)|)^2
    \le \sup_{n\ge 1} E(\sum_{k=1}^n\frac{a_k}{b_k}(Z_k - \mu_k))^2
    = \sup_{n\ge 1}\{\sum_{k=1}^n(\frac{a_k}{b_k})^2E(Z_k - \mu_k)^2 + 2\sum_{i<j\le n}\frac{a_i}{b_i}\frac{a_j}{b_j}E(Z_i - \mu_i)(Z_j - \mu_j)\}
    = \sum_{k=1}^{\infty}(\frac{a_k}{b_k})^2E(Z_k - \mu_k)^2   (by (19))
    \le \sum_{k=1}^{\infty}\frac{1}{c_k^2}EX_k^2I(|X_k| \le Mc_k) < \infty

as already noted, whence (20) obtains. Using (20) and the Kronecker lemma, we conclude

    \frac{\sum_{k=1}^n a_k(Z_k - \mu_k)}{b_n} \to 0 a.s.,

which is precisely (18). Moreover,

    \frac{\sum_{k=1}^n a_k(X_k - \mu_k)}{b_n} = \frac{\sum_{k=1}^n a_k(Z_k - \mu_k)}{b_n} + \frac{\sum_{k=1}^n a_k(X_k - Z_k)}{b_n},

and the second term \to 0 a.s. via b_n \uparrow \infty and the Borel-Cantelli lemma, since

    \sum_{n=1}^{\infty} P\{X_n \ne Z_n\} = \sum_{n=1}^{\infty} P\{|X_n| > Mc_n\} \le D_1\sum_{n=1}^{\infty} P\{|X| > Mc_n/D_2\} \le D_1\sum_{n=1}^{\infty} P\{|X| > D_3c_n\} < \infty

(since M \ge D_2D_3 and (14)).










Turning to the second assertion, note that (16) gives c_n \le Cn for all n \ge 1 and some constant C < \infty, whence

    P\{|X| > CD_3n\} \le P\{|X| > D_3c_n\}.

Therefore, via (14), we obtain E|X| < \infty, and by stochastic domination E|X_n| < \infty for all n \ge 1. This, in turn, guarantees that the conditional expectations \nu_n all exist. Next, a computation entirely analogous to the proof of Lemma 3.2, decomposing |X| over the blocks determined by (17) and using (14) together with (5), shows that for a suitable constant N

    \sum_{n=1}^{\infty}\frac{1}{c_n}E|X_n|I(|X_n| > Nc_n) < \infty.   (22)

Moreover, using (13) and the fact that M \ge D_2D_3, for all n \ge 1,

    \frac{1}{c_n}E|X_n|I(Mc_n < |X_n| \le Nc_n) \le NP\{|X_n| > Mc_n\} \le ND_1P\{|X| > D_3c_n\},

so that, by (14),

    \sum_{n=1}^{\infty}\frac{1}{c_n}E|X_n|I(Mc_n < |X_n| \le Nc_n) < \infty.   (23)

Combining (22) and (23) yields

    \sum_{n=1}^{\infty}\frac{1}{c_n}E|X_n|I(|X_n| > Mc_n) < \infty.

Hence, by the Beppo Levi theorem,

    \sum_{n=1}^{\infty}\frac{1}{c_n}E\{|X_n|I(|X_n| > Mc_n) | \mathcal{F}_{n-1}\} < \infty a.s.,

which implies that

    \sum_{n=1}^{\infty}\frac{|a_n|}{b_n}E\{|X_n|I(|X_n| > Mc_n) | \mathcal{F}_{n-1}\} < \infty a.s.

Then, by Jensen's inequality for conditional expectations and the Kronecker lemma, we conclude that

    \frac{1}{b_n}|\sum_{k=1}^n a_k(\mu_k - \nu_k)|
    = \frac{1}{b_n}|\sum_{k=1}^n a_k(E\{X_kI(|X_k| \le Mc_k) | \mathcal{F}_{k-1}\} - E\{X_k | \mathcal{F}_{k-1}\})|
    = \frac{1}{b_n}|\sum_{k=1}^n a_kE\{X_kI(|X_k| > Mc_k) | \mathcal{F}_{k-1}\}|
    \le \frac{1}{b_n}\sum_{k=1}^n |a_k|E\{|X_k|I(|X_k| > Mc_k) | \mathcal{F}_{k-1}\} \to 0 a.s.

Therefore, recalling (15) and (18),

    \frac{\sum_{k=1}^n a_k(X_k - \nu_k)}{b_n} = \frac{\sum_{k=1}^n a_k(X_k - \mu_k)}{b_n} + \frac{\sum_{k=1}^n a_k(\mu_k - \nu_k)}{b_n} \to 0 a.s.,

thereby proving the theorem. □


Remark. Note that condition (16) is automatic in many cases of interest. Furthermore, if c_n/n^{\alpha} \uparrow for some \alpha > 0 then, via Lemma 3.6, (14) entails

    \sum_{n=1}^{\infty} P\{|X| > Ac_n\} < \infty for all A > 0.

Consequently, D_3, and hence M, can be chosen arbitrarily small, and (18) then holds for all M > 0.


This first corollary is a well-known SLLN for sums of i.i.d. random variables and is essentially due to Feller (1946). It is an extension of the Marcinkiewicz-Zygmund SLLN to more general norming constants.

Corollary 3.1 (Feller, 1946). Let \{X, X_n, n \ge 1\} be i.i.d. random variables and let \{b_n\} be positive constants. Suppose that either

(i) b_n/n \uparrow, or

(ii) b_n/n^{\alpha} \uparrow for some \alpha > 1/2,

and that EX = 0 whenever E|X| < \infty. If

    \sum_{n=1}^{\infty} P\{|X| > b_n\} < \infty,   (24)

then

    \frac{\sum_{k=1}^n X_k}{b_n} \to 0 a.s.

Proof. Clearly (13) holds with D_1 = D_2 = 1 and X = X_1. Defining a_n = 1, n \ge 1, we obtain, via (2), that c_n = b_n. Thus (24) is equivalent to (14) with D_3 = 1. Suppose (ii) holds. Then b_n \uparrow \infty and b_n/n^{\beta} \uparrow for some \beta > 1/2. Therefore

    \frac{b_{rn}}{b_n} \ge \frac{(rn)^{\beta}}{n^{\beta}} = r^{\beta} for all integers r \ge 2.

Hence, in particular,

    \liminf_{n\to\infty}\frac{b_{2n}}{b_n} \ge 2^{\beta} > 2^{1/2}.

This, together with b_n^2/n \uparrow, shows, via Lemma 3.5, that

    \sum_{k=n}^{\infty}\frac{1}{b_k^2} = O(\frac{n}{b_n^2}),

that is, (5) holds with q = 2.
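A seeded simulation gives a feel for Corollary 3.1. The distribution and norming below are hypothetical illustrations: X is symmetric with P\{|X| > x\} = x^{-2} (so EX = 0 and EX^2 = \infty) and b_n = n^{3/4}, for which (24) holds since \sum n^{-3/2} < \infty:

```python
import random

random.seed(12345)

def sample_X():
    # symmetric, P{|X| > x} = x^{-2} for x >= 1, via inverse transform
    u = random.random()
    mag = (1.0 - u) ** -0.5
    return mag if random.random() < 0.5 else -mag

n_max = 500_000
S, ratios = 0.0, []
for n in range(1, n_max + 1):
    S += sample_X()
    if n in (5_000, 50_000, 500_000):
        ratios.append(abs(S) / n ** 0.75)
print(ratios)  # the normed partial sums drift toward 0, as the corollary asserts
```

Even though the variance is infinite, the partial sums grow only on the order of \sqrt{n\log n}, so division by b_n = n^{3/4} drives the ratio to 0 almost surely.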