
PREDICTABLE PROJECTIONS AND PREDICTABLE
DUAL PROJECTIONS OF A TWO PARAMETER STOCHASTIC PROCESS

By

PETER GRAY

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

ACKNOWLEDGMENTS

I am indebted to my supervisor, Dr. Nicolae Dinculeanu, for his infinite patience with me while I slowly learned the material. Also, I am grateful to the many excellent teachers that I have had during my journey at the University of Florida. Finally, for the friendly banter and hearty laughs that I hold in fond memory, I thank Julia, Connie, and Gretchen.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTER

1 THE CROSS SECTION THEOREM

Projections ᵖX
The Uniqueness of ᵖX
The Existence of ᵖX

Step Filtrations (𝒢_s)
Predictable Dual Projections Xᵖ
The Uniqueness of Xᵖ
Predictable Dual Projections of Measures
Processes Associated With Stochastic, ℝ-Valued Measures
The Existence of Xᵖ

Predictable Dual Projections Yᵖ
The Uniqueness of Yᵖ
Processes Associated With Stochastic, E-Valued Measures
The Existence of Yᵖ

5 AN EXTENSION OF THE RADON-NIKODYM THEOREM TO MEASURES WITH FINITE SEMIVARIATION

6 SUMMARY AND CONCLUSIONS

REFERENCE LIST

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

PREDICTABLE PROJECTIONS AND PREDICTABLE
DUAL PROJECTIONS OF A TWO PARAMETER STOCHASTIC PROCESS

By

PETER GRAY

August 2006

Chair: Nicolae Dinculeanu
Major Department: Mathematics

The framework of this dissertation consists of a probability space (Ω, ℱ, P); a filtration (ℱ_t)_{t∈ℝ₊} such that if (𝒢_s)_{s∈ℝ₊} is a filtration satisfying 𝒢_s = ℱ_{s−} for every s ∈ ℝ₊, then we have 𝒢_S = ℱ_{S−} for every predictable stopping time S for (𝒢_s)_{s∈ℝ₊}; a double filtration (ℱ_{s,t})_{s,t∈ℝ₊} such that ℱ_{s,t} = ℱ_s for every s, t ≥ 0; and a Banach space E. We study initially a real-valued, two parameter stochastic process X : Ω × ℝ₊² → ℝ, and then we extend some of our results to a vector-valued process Y : Ω × ℝ₊² → E.

In Chapter 1 we start by defining the predictable σ-algebra 𝒫 of subsets of Ω × ℝ₊² to be the σ-algebra generated by the left continuous processes X that are adapted to the double filtration (ℱ_{s,t})_{s,t∈ℝ₊}. Then we prove the main result of the chapter, the cross section theorem for sets in 𝒫.

In Chapter 2 we define the predictable projection of a measurable process X : Ω × ℝ₊² → ℝ to be a predictable process ᵖX : Ω × ℝ₊² → ℝ such that for every predictable stopping time Z = (S, T) we have E[X_Z 1_{Z<∞} | 𝒢_S] = (ᵖX)_Z 1_{Z<∞} almost surely. Then, using the cross section theorem, we show that the predictable projection is unique up to an evanescent set. In addition, we demonstrate that every bounded, measurable process X has a predictable projection.

In Chapter 3 we define the predictable dual projection of a right continuous, measurable process X : Ω × ℝ₊² → ℝ with integrable variation to be a right continuous, predictable process Xᵖ : Ω × ℝ₊² → ℝ with integrable variation such that for each bounded, measurable process φ : Ω × ℝ₊² → ℝ we have E[∫ ᵖφ dX] = E[∫ φ dXᵖ]. Then we show that the predictable dual projection is unique up to an evanescent set. We also establish that the predictable dual projection of the process X exists if the filtration (ℱ_t)_{t∈ℝ₊} is a step filtration.

In Chapters 4 and 5 we turn our attention from real-valued processes X to vector-valued processes Y. In this setting, our formulations are based not on finite variation and integrable variation, but on finite semivariation and integrable semivariation.

CHAPTER 1
THE CROSS SECTION THEOREM

Introduction

The theory surrounding one parameter stochastic processes has applications in many fields. In finance, for instance, a filtration (ℱ_t)_{t∈ℝ₊} contains the information that is known up to time t about a market; a martingale (X_t)_{t∈ℝ₊} for (ℱ_t)_{t∈ℝ₊} reflects the price of stock options; a predictable process (H_t)_{t∈ℝ₊} houses the number of shares to be held at time t; and a stopping time S for (ℱ_t)_{t∈ℝ₊} indicates when stocks should be sold for optimal profit. Note that a discrete (or step) filtration suffices for good results in many markets.

The predictable projection (ᵖX_t)_{t∈ℝ₊} and the predictable dual projection (Xᵖ_t)_{t∈ℝ₊} of the process (X_t)_{t∈ℝ₊} both play a role in one parameter stochastic theory. For an example of this we retread the finance stage that was set above. The random variable ᵖX_S, which is the predictable projection (ᵖX_t)_{t∈ℝ₊} evaluated at a (predictable) stopping time S, may be regarded as an updated version of the expected selling price E[X_S] of stock options, given the market information ℱ_{S−}.

The goal of this dissertation is to extend the definition and existence of the predictable projection and the predictable dual projection to a two parameter process (X_{s,t})_{s,t∈ℝ₊}. This extension is difficult because, while the set ℝ₊ of non-negative real numbers is totally ordered, the set ℝ₊² of ordered pairs of non-negative real numbers is not. In order to reduce slightly the complexity that we face, we will retain a one parameter


flavor: our framework will be built around the double filtration ℱ_{s,t} = ℱ_s, where (ℱ_t)_{t∈ℝ₊} is a right continuous, complete filtration.

Predictable σ-Algebras

The main result of this chapter is a cross section theorem for predictable subsets of Ω × ℝ₊² relative to the double filtration (ℱ_{s,t})_{s,t∈ℝ₊} satisfying ℱ_{s,t} = ℱ_s for s, t ≥ 0. The cross section theorem will be derived with invaluable help from the Monotone Class theorem, which may be found in the text Probabilities and Potential (Dellacherie & Meyer 1975, p. 13-1).

In this section we introduce the predictable σ-algebra 𝒫 of subsets of the space Ω × ℝ₊².

Notation and Terminology 1.1 The following will be used in the sequel.

1.1a (Ω, ℱ, P) is a probability space.

1.1b ℝ₊ is the set of non-negative real numbers. ℕ is the set of natural numbers. ℚ₊ is the set of positive rational numbers. ℝ₊² is the set ℝ₊ × ℝ₊, and ℚ₊² is the set ℚ₊ × ℚ₊.

1.1c ℬ(ℝ₊) is the Borel σ-algebra generated by the intervals (s, t] of ℝ₊. ℬ(ℝ₊²) is the Borel σ-algebra generated by the rectangles (s, t] × (u, v] of ℝ₊².

1.1d A function X : Ω × ℝ₊² → ℝ is called a two parameter process, and is denoted (X_{s,t}).

1.1e ℱ ⊗ ℬ(ℝ₊) is the σ-algebra generated by the semiring ℱ × ℬ(ℝ₊). ℱ ⊗ ℬ(ℝ₊²) is the σ-algebra generated by the semiring ℱ × ℬ(ℝ₊²).

1.1f (ℱ_t)_{t∈ℝ₊} is a filtration and therefore satisfies

• for each t ≥ 0, ℱ_t is a σ-algebra contained in ℱ, and
• ℱ_s ⊂ ℱ_t if s ≤ t.

We will write simply (ℱ_t), and we will assume that the filtration satisfies the usual conditions:

• ℱ_0 contains all the negligible sets (that is, (ℱ_t) is complete), and
• ℱ_t = ∩_{s>t} ℱ_s for every t ≥ 0 (that is, (ℱ_t) is right continuous).

See 1.1j below for another assumption about (ℱ_t).

1.1g A function S : Ω → [0, ∞] is a stopping time for the filtration (ℱ_t) if {S ≤ t} ∈ ℱ_t for every t ≥ 0.

Let S, T be two stopping times. The stochastic interval (S, T] is the set {(ω, t) ∈ Ω × ℝ₊ | S(ω) < t ≤ T(ω)}, while [S, T) is the set {(ω, t) ∈ Ω × ℝ₊ | S(ω) ≤ t < T(ω)}. The stochastic intervals (S, T) and [S, T] are defined in a similar fashion.

1.1h The predictable σ-algebra 𝒫 of subsets of Ω × ℝ₊ is the σ-algebra generated by the sets A × (s, t] and B × {0}, where A ∈ ℱ_s and B ∈ ℱ_0. A stopping time S is predictable if the stochastic interval [S, ∞) is a predictable set.
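On a finite sample space the stochastic intervals of 1.1g can be tabulated directly. The following Python sketch (a toy model with invented outcomes w1–w3, not part of the dissertation's framework) checks the defining membership conditions:

```python
from math import inf

# Hypothetical stopping-time values on a three-point sample space {w1, w2, w3}.
S = {"w1": 1.0, "w2": 2.0, "w3": inf}
T = {"w1": 3.0, "w2": 2.5, "w3": inf}

def in_open_closed(w, t):
    # (S, T] = {(w, t) : S(w) < t <= T(w)}
    return S[w] < t <= T[w]

def in_closed_open(w, t):
    # [S, T) = {(w, t) : S(w) <= t < T(w)}
    return S[w] <= t < T[w]

assert in_open_closed("w1", 2.0)        # 1 < 2 <= 3
assert not in_open_closed("w3", 2.0)    # S = inf: the section over w3 is empty
```

The case S = ∞ illustrates why an outcome can contribute an empty section to a stochastic interval.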

1.1i Let S be a stopping time.

ℱ_S denotes the σ-algebra {A ∈ ℱ | A ∩ {S ≤ t} ∈ ℱ_t for every t ≥ 0}, while ℱ_{S−} denotes the σ-algebra generated by the sets in ℱ_0 as well as sets of the form A ∩ {S > t}, where t ≥ 0 and A ∈ ℱ_t.

1.1j (𝒢_s)_{s∈ℝ₊} is the filtration defined by the following rules.

• 𝒢_0 = ℱ_0, and
• 𝒢_s = ℱ_{s−} for every s > 0.

We will write simply (𝒢_s), and we will assume that the filtration (𝒢_s) is such that 𝒢_S = ℱ_{S−} for every predictable stopping time S for the filtration (𝒢_s). For example, this is the case when ℱ_s = ℱ_{⌊s⌋} for every s ∈ ℝ₊, where ⌊s⌋ is the largest integer less than or equal to s.
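For the step filtration example just given, the relation 𝒢_s = ℱ_{s−} can be made concrete in a toy model where ℱ_s is identified with the set of integer times revealed by time s (an illustration under that identification, not the dissertation's measure-theoretic construction):

```python
from math import floor

def F(s):
    # Step filtration F_s = F_floor(s): information arrives only at integer times.
    return frozenset(range(floor(s) + 1))

def F_minus(s):
    # F_{s-}: the information available strictly before time s (union of F_u, u < s).
    if s <= 0:
        return F(0)
    return F(s) if s != int(s) else frozenset(range(int(s)))

# Away from the integers the step filtration is locally constant, so F_{s-} = F_s;
# at an integer time it lags by exactly one step.
assert F_minus(2.5) == F(2.5)
assert F_minus(3) == F(2)
assert F_minus(3) != F(3)
```

The local constancy between integers is what makes step filtrations tractable for the existence results quoted in the abstract.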

1.1k (ℱ_{s,t})_{s,t∈ℝ₊} is a double filtration and therefore satisfies

• for each s, t ≥ 0, ℱ_{s,t} is a σ-algebra contained in ℱ, and
• ℱ_{s,t} ⊂ ℱ_{s',t'} if s ≤ s' and t ≤ t'.

We will write simply (ℱ_{s,t}), and we will assume that ℱ_{s,t} = ℱ_s for every s, t ≥ 0.

1.1l Let B ⊂ Ω × ℝ₊².

B(ω) is the set {(s, t) ∈ ℝ₊² | (ω, s, t) ∈ B}. π[B], called the projection of B, is the set {ω ∈ Ω | (ω, s, t) ∈ B for some s, t ∈ ℝ₊}.

1.1m The point (∞, ∞) will be denoted by ∞. Therefore, the inequality (s, t) < ∞ means that s < ∞ and t < ∞.

1.1n Let g : Ω → [0, ∞] × [0, ∞] be a function. [g] denotes the set {(ω, s, t) ∈ Ω × ℝ₊² | g(ω) = (s, t)}, called the graph of g.

We now commence our study of the real-valued two parameter process (X_{s,t}). We begin by defining the predictable σ-algebra.

Definition 1.2 Let X : Ω × ℝ₊² → ℝ be a two parameter process.

1.2a (X_{s,t}) is left continuous if for every s₀, t₀ ∈ ℝ₊ and ω ∈ Ω, we have lim_{(s,t)→(s₀,t₀), s≤s₀, t≤t₀} X_{s,t}(ω) = X_{s₀,t₀}(ω). Note that this limit is a pointwise limit.

1.2b (X_{s,t}) is adapted to the filtration (ℱ_{s,t}) if for each s, t ∈ ℝ₊ the random variable X_{s,t} is ℱ_{s,t}-measurable.

1.2c The predictable σ-algebra of subsets of Ω × ℝ₊² is the σ-algebra generated by the left continuous two parameter processes (X_{s,t}) which are adapted to (ℱ_{s,t}). We denote this σ-algebra by 𝒫.

Proposition 1.3 𝒫 is generated by the sets (S, ∞) × [0, r₁] and A × {0} × [0, r₂], where r₁, r₂ ∈ ℝ₊, S : Ω → [0, ∞] is a stopping time for (𝒢_s), and A ∈ 𝒢_0.

Proof Let (X_{s,t}) be left continuous and adapted to (ℱ_{s,t}). Set

Yⁿ_{s,t} := X_{0,0} 1_{{0}×{0}}(s, t) + Σ_{k=0}^∞ X_{0,k/n} 1_{{0}×(k/n,(k+1)/n]}(s, t)
+ Σ_{m=0}^∞ X_{m/n,0} 1_{(m/n,(m+1)/n]×{0}}(s, t) + Σ_{k,m=0}^∞ X_{m/n,k/n} 1_{(m/n,(m+1)/n]×(k/n,(k+1)/n]}(s, t),

where 1 is the indicator function, and n ∈ ℕ. Since (X_{s,t}) is left continuous, Yⁿ → X pointwise as n → ∞. Fix m, n, and k, and consider the process R := X_{m/n,k/n} 1_{(m/n,(m+1)/n]×(k/n,(k+1)/n]}. Since X_{m/n,k/n} is 𝒢_{m/n}-measurable, R is the pointwise limit of processes of the form

Σ_{i=1}^{p_n} a_i 1_{A_i×(m/n,(m+1)/n]×(k/n,(k+1)/n]},

where for every index i we have a_i ∈ ℝ and A_i ∈ 𝒢_{m/n}.

But for every index i we have A_i × (m/n, (m+1)/n] = (S_i, T_i], where

S_i = m/n on A_i and ∞ on A_iᶜ, which is a stopping time for (𝒢_s); and

T_i = (m+1)/n on A_i and ∞ on A_iᶜ, which is also a stopping time for (𝒢_s).

Therefore, R is the pointwise limit of processes of the form Σ_{i=1}^{p_n} a_i 1_{(S_i,T_i]×(k/n,(k+1)/n]}, where for every index i we have a_i ∈ ℝ, and S_i, T_i are stopping times for (𝒢_s). By considering in a similar manner the processes X_{0,k/n} 1_{{0}×(k/n,(k+1)/n]} and X_{m/n,0} 1_{(m/n,(m+1)/n]×{0}}, we obtain

𝒫 ⊂ σ{ (S₁, T₁] × (r₁, s₁], A × {0} × (r₂, s₂], (S₂, T₂] × {0} }, where r₁, r₂, s₁, s₂ ∈ ℝ₊, A ∈ 𝒢_0, and S₁, S₂, T₁, T₂ are stopping times for (𝒢_s);

= σ{ (S, ∞) × [0, r₁], A × {0} × [0, r₂] }, where r₁, r₂ ∈ ℝ₊, A ∈ 𝒢_0, and S is a stopping time for (𝒢_s).

In fact we have 𝒫 = σ{ (S, ∞) × [0, r₁], A × {0} × [0, r₂] } because the sets (S, ∞) × [0, r₁] and A × {0} × [0, r₂] are elements of 𝒫.
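The approximation step in this proof can be exercised numerically. The sketch below evaluates the interior part of Yⁿ — the value X(m/n, k/n) on the cell (m/n, (m+1)/n] × (k/n, (k+1)/n] — for a hypothetical left continuous function and checks pointwise convergence; the example function is an illustration, not one from the text:

```python
from math import ceil

def Yn(X, s, t, n):
    # Interior cells of the proof of Proposition 1.3: for s, t > 0 the process
    # Y^n takes the value X(m/n, k/n) on the cell (m/n, (m+1)/n] x (k/n, (k+1)/n].
    m, k = ceil(s * n) - 1, ceil(t * n) - 1
    return X(m / n, k / n)

# A function that is left continuous in s (its jump at s = 1 is approached
# from the left, which is exactly the direction the grid values come from).
X = lambda s, t: (1.0 if s <= 1 else 0.0) + t

assert abs(Yn(X, 1.0, 0.5, 10**4) - X(1.0, 0.5)) < 1e-2
assert abs(Yn(X, 0.3, 0.7, 10**4) - X(0.3, 0.7)) < 1e-2
```

Because the grid point (m/n, k/n) lies to the lower left of (s, t), only left continuity is needed for Yⁿ → X pointwise.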

Stopping Times

Definition 1.4 Let Z : Ω → [0, ∞] × [0, ∞] be a function.

1.4a The function Z is a stopping time for the filtration (ℱ_{s,t}) if {Z ≤ z} ∈ ℱ_z for every z ∈ ℝ₊².

1.4b The stopping time Z = (S, T) is predictable if

• S is a predictable stopping time for (𝒢_s), and
• {S < ∞} ⊂ {T < ∞}.

Proposition 1.5 Let Z_n = (S_n, T_n) be a sequence of stopping times for any double filtration (ℱ_{s,t}) that is right continuous in t. Assume that (S_n) is increasing. Then Z := (sup_n S_n, limsup_n T_n) is a stopping time for (ℱ_{s,t}).

Proof Let (s, t) ∈ ℝ₊². We must show that {Z ≤ (s, t)} ∈ ℱ_{s,t}. For each index n and each (r, u) ∈ ℝ₊² we have {Z_n ≤ (r, u)} = {S_n ≤ r} ∩ {T_n ≤ u} ∈ ℱ_{r,u} by hypothesis. Set T'_n := sup_{i≥n} T_i.

Then limsup_n T_n = lim_n T'_n.

We have

{Z ≤ (s, t)} = {(sup_n S_n, lim_n T'_n) ≤ (s, t)}

= {sup_n S_n ≤ s} ∩ {lim_n T'_n ≤ t}

= ∩_n {S_n ≤ s} ∩ ∩_{p=N}^∞ ∪_{k=1}^∞ ∩_{m=k}^∞ {T_m ≤ t + ε_p},

where N is any natural number, and (ε_p) is any sequence of positive numbers decreasing to 0,

= ∩_{p=N}^∞ ∪_{k=1}^∞ [ ∩_n {S_n ≤ s} ∩ {T_k ≤ t + ε_p} ∩ {T_{k+1} ≤ t + ε_p} ∩ … ]

= ∩_{p=N}^∞ ∪_{k=1}^∞ [ ∩_{m=k}^∞ ( {S_m ≤ s} ∩ {T_m ≤ t + ε_p} ) ],

since (S_n) is increasing;

= ∩_{p=N}^∞ ∪_{k=1}^∞ [ {Z_k ≤ (s, t + ε_p)} ∩ {Z_{k+1} ≤ (s, t + ε_p)} ∩ … ]

∈ ℱ_{s,t+ε_N}, N ∈ ℕ being arbitrary. Since (ℱ_{s,t}) is right continuous in t, we conclude that {Z ≤ (s, t)} ∈ ℱ_{s,t}.
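A concrete sequence shows why the limsup (rather than a plain limit) is taken in the second coordinate of Proposition 1.5: the T_n may oscillate. The finite truncation below is only a numerical illustration with invented sequences:

```python
# S_n = 1 - 1/n is increasing with sup = 1; T_n = 2 + (-1)^n / n oscillates
# around 2, so only its limsup (= 2) is meaningful, not an ordinary limit.
N = 10_000
S = [1 - 1 / n for n in range(1, N)]
T = [2 + (-1) ** n / n for n in range(1, N)]

sup_S = max(S)                 # finite-stage approximation of sup_n S_n = 1
tail_sup_T = max(T[N // 2:])   # tail supremum approximating limsup_n T_n = 2

assert abs(sup_S - 1) < 1e-3
assert abs(tail_sup_T - 2) < 1e-3
```

The pair (sup_n S_n, limsup_n T_n) is exactly the candidate stopping time Z of the proposition.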


Proposition 1.6 Let n ∈ ℕ, and let Z_i = (S_i, T_i), i = 1 … n, be a finite set of stopping times for (ℱ_{s,t}) such that each S_i is a stopping time for (𝒢_s). Set

A_i := {inf_{k≤n} S_k = S_i} ∩ {S_i ≠ S_1} ∩ {S_i ≠ S_2} ∩ … ∩ {S_i ≠ S_{i−1}}, i = 1 … n,

and set S := Σ_{i=1}^n S_i 1_{A_i} and T := Σ_{i=1}^n T_i 1_{A_i}.

Denote (S, T) by minf_i (Z_i), and set Z := minf_i (Z_i). Then

• S is a stopping time for (𝒢_s),
• Z is a stopping time for (ℱ_{s,t}), and
• if T_i(ω) = ∞ ⇒ S_i(ω) = ∞ for each index i, then T(ω) = ∞ ⇒ S(ω) = ∞.

Proof We first prove that S is a stopping time for (𝒢_s). Note that the sets A_i, i = 1 … n, are pairwise disjoint. Further, we have A_i ∈ 𝒢_{S_i} for each i (Metivier 1982, p. 20), and ∪_{i=1}^n A_i = Ω. It follows that S is a stopping time for (𝒢_s).

Next we prove that Z is a stopping time for (ℱ_{s,t}). Let (s, t) ∈ ℝ₊². We must show that {Z ≤ (s, t)} ∈ ℱ_{s,t}. By hypothesis, {Z_i ≤ (s, t)} ∈ ℱ_{s,t} for each i.

So we have

{Z ≤ (s, t)} = ∪_{i=1}^n {Z_i ≤ (s, t)} ∩ A_i

= ∪_{i=1}^n ({S_i ≤ s} ∩ A_i) ∩ {Z_i ≤ (s, t)}

∈ ℱ_{s,t}.

Lastly we prove the third assertion of the proposition. We have

T(ω) = ∞ ⇒ T_i(ω) = ∞ for some index i such that ω ∈ A_i,
⇒ S_i(ω) = ∞ by hypothesis,
⇒ S(ω) = ∞ since ω ∈ A_i.
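The "minf" construction selects, for each outcome, the pair whose first coordinate attains the minimum, breaking ties in favor of the smallest index — which is exactly what the sets A_i encode. A sketch on a finite sample space (toy data, not from the text):

```python
from math import inf

def minf(Zs, w):
    # Zs is a list of pairs of dicts (S_i, T_i); return (S_i(w), T_i(w)) where i
    # is the first index attaining inf_k S_k(w) -- the sets A_i of Prop. 1.6.
    pairs = [(S[w], T[w]) for (S, T) in Zs]
    s_min = min(s for (s, _) in pairs)
    i = next(k for k, (s, _) in enumerate(pairs) if s == s_min)
    return pairs[i]

Z1 = ({"w": 2.0}, {"w": 5.0})
Z2 = ({"w": 2.0}, {"w": 3.0})
Z3 = ({"w": 1.0}, {"w": inf})

assert minf([Z1, Z2, Z3], "w") == (1.0, inf)
assert minf([Z1, Z2], "w") == (2.0, 5.0)  # tie on S: the smaller index wins
```

The second assertion shows why the tie-breaking sets {S_i ≠ S_j}, j < i, are needed: without them the A_i would overlap.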


Projections π[A]

In this section we establish a key result concerning the projection π[A] of an ℱ ⊗ ℬ(ℝ₊²)-measurable set A. It will emerge that π[A] is ℱ-measurable.

Proposition 1.7 Let A ∈ ℱ ⊗ ℬ(ℝ₊²) and let g be a function such that π[[g]] ∈ ℱ and [g] ⊂ A on π[A] ∩ π[[g]]. Then π[[g] ∩ A] ∈ ℱ.

Proof Let 𝒜 be the collection of sets B ∈ ℱ ⊗ ℬ(ℝ₊²) such that for any function h : Ω → [0, ∞] × [0, ∞] satisfying π[[h]] ∈ ℱ and [h] ⊂ B on π[B] ∩ π[[h]], we have π[[h] ∩ B] ∈ ℱ.

Let ℛ be the ring generated by the sets C × (s, t] × (u, v] and Ω × ℝ₊², where C ∈ ℱ and s, t, u, v ∈ ℝ₊. First we show that 𝒜 contains ℛ. Let B ∈ ℛ and let a function h be such that π[[h]] ∈ ℱ and [h] ⊂ B on π[B] ∩ π[[h]].

Without loss of generality we consider B = C × (s, t] × (u, v], where C ∈ ℱ and s < t, u < v.

We have π[[h] ∩ B] = (π[[h]]) ∩ C ∈ ℱ. Hence, B ∈ 𝒜.

Next we show that 𝒜 is a monotone class. Let (B_n) be a monotone sequence from 𝒜.

Assume first that (B_n) is increasing. Set B := ∪_n B_n. Let a function h¹ satisfy π[[h¹]] ∈ ℱ and [h¹] ⊂ B on π[B] ∩ π[[h¹]]. Let h' : Ω → [0, ∞] × [0, ∞] be a function that satisfies the conditions set out below. There are four conditions, and they involve the function h¹. We require that

π[[h']] = π[[h¹]],

h' = h¹ on (π[B])ᶜ,

[h'] ⊂ B₁ on π[B₁] ∩ π[[h¹]],

and [h'] ⊂ B_i on (π[B_i] \ π[B_{i−1}]) ∩ π[[h¹]] for i = 2, 3, 4, …

Such a function h' exists. Note that we have π[[h']] ∈ ℱ, and [h'] ⊂ B_i on π[B_i] ∩ π[[h']] for i ∈ ℕ. Therefore for each index i we have π[[h'] ∩ B_i] ∈ ℱ (since each B_i is in 𝒜). Hence we have

π[[h¹] ∩ B] = π[[h'] ∩ B] from the definition of h',

= π[[h'] ∩ ∪_i B_i]

= ∪_i π[[h'] ∩ B_i]

∈ ℱ.

Next, assume that (B_n) is decreasing. Set B' := ∩_n B_n. Let a function h² satisfy π[[h²]] ∈ ℱ and [h²] ⊂ B' on π[B'] ∩ π[[h²]]. Let h'' : Ω → [0, ∞] × [0, ∞] be a function that satisfies the conditions set out below. We require that

π[[h'']] = π[[h²]],

h'' = h² on (π[B₁])ᶜ ∪ π[B'],

and [h''] ⊂ B_i on (π[B_i] \ π[B_{i+1}]) ∩ π[[h²]] for i ∈ ℕ.

Such a function h'' exists. Note that we have π[[h'']] ∈ ℱ, and [h''] ⊂ B_i on π[B_i] ∩ π[[h'']] for i ∈ ℕ. Therefore for each index i we have π[[h''] ∩ B_i] ∈ ℱ (since each B_i is in 𝒜). Hence we have

π[[h²] ∩ B'] = π[[h''] ∩ B'] (from the definition of h''),

= π[[h''] ∩ ∩_i B_i]

= ∩_i π[[h''] ∩ B_i] (since [h''] is a graph),

∈ ℱ.

We have shown that 𝒜 is a monotone class which contains the ring of generators of ℱ ⊗ ℬ(ℝ₊²). Because this ring contains the whole space Ω × ℝ₊², we conclude from the Monotone Class theorem that 𝒜 is equal to ℱ ⊗ ℬ(ℝ₊²). The statement of the proposition is now seen to be true.

Corollary 1.8 Let A ∈ ℱ ⊗ ℬ(ℝ₊²). Then π[A] ∈ ℱ.

Proof We are able to find a function h : Ω → [0, ∞] × [0, ∞] such that π[[h]] = Ω and [h] ⊂ A on π[A] ∩ π[[h]]. From Proposition 1.7 we obtain π[[h] ∩ A] ∈ ℱ. For such a function h we have π[[h] ∩ A] = π[A]. Hence we have π[A] ∈ ℱ.

Sets 𝒦_δ

We continue to prepare for the Cross Section theorem by introducing a special collection 𝒦_δ of predictable sets with compact cross sections.

Definition 1.9 Let K ⊂ Ω × ℝ₊².

1.9a We say that K has compact cross sections if for each ω ∈ π[K], the cross section K(ω) ⊂ ℝ₊² is compact.

1.9b The cross sectional closure of K, denoted K*, is given by K*(ω) = (K(ω))* for every ω ∈ Ω, where (K(ω))* denotes the closure of the cross section K(ω) ⊂ ℝ₊². (The closure includes all adherent points.)

Proposition 1.10 Let N ∈ ℕ, and let (K_n) be a sequence of subsets of Ω × ℝ₊² with compact cross sections. Assume that each set K_n has the following properties: π[B* ∩ K_n] ∈ ℱ for every ℱ ⊗ ℬ(ℝ₊²)-measurable set B; and there is a stopping time Z_n = (S_n, T_n) such that

• S_n is a predictable stopping time for (𝒢_s),
• [Z_n] ⊂ K_n,
• {Z_n < ∞} = π[K_n],
• T_n(ω) = ∞ ⇒ S_n(ω) = ∞ for every ω ∈ Ω, and
• (s, t) ∈ K_n(ω) ⇒ s ≥ S_n(ω) for every ω ∈ Ω.

Set K := ∪_{n=1}^N K_n and K' := ∩_{n=1}^∞ K_n.

Then K and K' both have compact cross sections, and K has all the above properties of the sets K_n. Further, if the sequence (K_n) is decreasing, then the set K' also has all the above properties.

Proof The finite union and the countable intersection of compact subsets of ℝ₊² are compact. Therefore, K and K' have compact cross sections. Next, let B ∈ ℱ ⊗ ℬ(ℝ₊²).

We have

π[B* ∩ K] = π[B* ∩ ∪_{n=1}^N K_n]

= ∪_{n=1}^N π[B* ∩ K_n]

∈ ℱ,

and

π[B* ∩ K'] = π[B* ∩ ∩_{n=1}^∞ K_n]

= ∩_{n=1}^∞ π[B* ∩ K_n] if (K_n) is decreasing, since each set B* ∩ K_n has compact cross sections;

∈ ℱ.

We now show that K has the second property set out in the proposition. Set Z := minf_{n=1…N} (Z_n). (See Proposition 1.6 for the definition of "minf".)

From Proposition 1.6 we know that Z is a stopping time, and that if we write Z = (S, T) then S is a stopping time for (𝒢_s). Note that the stopping time S is a predictable stopping time because S is the minimum of finitely many predictable stopping times S_n, n = 1 … N. Next, let ω ∈ {Z < ∞}. There is an index n such that Z(ω) = Z_n(ω). From this equality we deduce that Z_n(ω) < ∞.

By hypothesis we have (ω, Z_n(ω)) ∈ K_n. Hence we have (ω, Z(ω)) = (ω, Z_n(ω)) ∈ K.

We now show that {Z < ∞} = π[K].

We have

π[K] = π[∪_{n=1}^N K_n]

= ∪_{n=1}^N π[K_n]

= ∪_{n=1}^N {Z_n < ∞} by a property of the set K_n,

= ∪_{n=1}^N ({S_n < ∞} ∩ {T_n < ∞})

= ∪_{n=1}^N {S_n < ∞} by a property of the function Z_n,

= {S < ∞}

= {Z < ∞}.

Penultimately, note that by Proposition 1.6 we have T(ω) = ∞ ⇒ S(ω) = ∞ for every ω ∈ Ω.

Lastly we show that (s, t) ∈ K(ω) implies that s ≥ S(ω) for every ω ∈ Ω. We have

(s, t) ∈ K(ω) ⇒ (ω, s, t) ∈ K

⇒ (ω, s, t) ∈ K_n for some index n,

⇒ s ≥ S_n(ω) by hypothesis,

⇒ s ≥ S(ω) since S = inf_{n=1…N} S_n.

To complete the proof, assume that the sequence (K_n) is decreasing. We verify that the set K' has all five properties that were listed. Let Z := (sup_n S_n, limsup_n T_n). Note that Z is a stopping time.

To see this, it is enough to show (by Proposition 1.5) that the sequence (S_n) is increasing.

Let ω ∈ Ω and n₀ ∈ ℕ, and assume first that Z_{n₀+1}(ω) < ∞. Then (ω, S_{n₀+1}(ω), T_{n₀+1}(ω)) ∈ K_{n₀+1} by a property of the set K_{n₀+1}.

But we have the containment K_{n₀+1} ⊂ K_{n₀}. So we have (S_{n₀+1}(ω), T_{n₀+1}(ω)) ∈ K_{n₀}(ω). By a property of the set K_{n₀}, we have S_{n₀+1}(ω) ≥ S_{n₀}(ω). Next, assume that Z_{n₀+1}(ω) is not finite. Then S_{n₀+1}(ω) = ∞ or T_{n₀+1}(ω) = ∞. We have S_{n₀+1}(ω) = ∞ by a property of the set K_{n₀+1}. Hence S_{n₀+1}(ω) ≥ S_{n₀}(ω), and we have shown that Z is a stopping time. Now note, the stopping time sup_n S_n is a predictable stopping time for (𝒢_s). This follows since each S_n is a predictable stopping time for (𝒢_s), and since (S_n) is increasing.

We now prove that [Z] ⊂ K'. Let Z(ω) < ∞. We must show that (ω, Z(ω)) ∈ K'. Since Z(ω) < ∞, we have sup_n S_n(ω) < ∞. Therefore S_n(ω) < ∞ for every index n. It follows from a property of K_n that Z_n(ω) < ∞ for every n. So (ω, Z_n(ω)) ∈ K_n for every n, since [Z_n] ⊂ K_n for all n. There is a subsequence (T_{n_k}(ω))_k of (T_n(ω))_n (that depends on ω) such that T_{n_k}(ω) → limsup_n T_n(ω) as k → ∞. Since (S_n(ω)) is increasing, we have S_{n_k}(ω) → sup_n S_n(ω) as k → ∞.

Thus, we have (ω, Z_{n_k}(ω)) = (ω, S_{n_k}(ω), T_{n_k}(ω)) → (ω, Z(ω)) as k → ∞. For each k we have (ω, Z_{n_k}(ω)) ∈ K_{n_k}. Since by assumption the sequence (K_n) is decreasing, the sequence (ω, Z_{n_k}(ω))_k eventually belongs to each of the sets K_{n_j}, j ∈ ℕ.

By the compactness of the cross sections involved, it follows that (ω, Z(ω)) = lim_k (ω, Z_{n_k}(ω)) belongs to each of the sets K_{n_j}, j ∈ ℕ.

So we have (ω, Z(ω)) ∈ ∩_j K_{n_j} = K'.

We now show that {Z < ∞} = π[K']. First we establish that {Z < ∞} ⊂ ∩_n {Z_n < ∞}.

In fact, let ω ∈ {Z < ∞}. Suppose there is a number n₀ ∈ ℕ such that Z_{n₀}(ω) is not finite. Then either S_{n₀}(ω) = ∞ or T_{n₀}(ω) = ∞. Since T_{n₀}(ω) = ∞ ⇒ S_{n₀}(ω) = ∞, we obtain S_{n₀}(ω) = ∞. Since (S_n) is increasing, we have S_m(ω) = ∞ for each m ≥ n₀. It follows that sup_n S_n(ω) = ∞.

Therefore Z(ω) = (sup_n S_n(ω), limsup_n T_n(ω)) is not finite, and we have reached a contradiction. Because of this contradiction, we conclude that Z_n(ω) < ∞ for every n ∈ ℕ.

Hence, we have {Z < ∞} ⊂ ∩_n {Z_n < ∞}.

Next let ω ∈ ∩_n {Z_n < ∞}. Then for each n we have Z_n(ω) < ∞.

Note for each n we have [Z_n] ⊂ K_n. Therefore for each n we have (ω, Z_n(ω)) ∈ K_n ⊂ K₁.

Since K₁ has compact cross sections, there is a point M ∈ ℝ₊² (that depends on ω) such that (s, t) ≤ M for all (s, t) ∈ K₁(ω). Thus, for each n we have Z_n(ω) ≤ M. Consequently we have Z(ω) = (sup_n S_n(ω), limsup_n T_n(ω)) ≤ M.

Accordingly, we have ω ∈ {Z < ∞}. Hence, we have ∩_n {Z_n < ∞} ⊂ {Z < ∞}.

Therefore we have

{Z < ∞} = ∩_n {Z_n < ∞} (by the preceding lines),

= ∩_n π[K_n] since {Z_n < ∞} = π[K_n] for all n,

= π[∩_n K_n] since all sets K_n have compact cross sections,

= π[K'].

We now show that limsup_n T_n(ω) = ∞ ⇒ sup_n S_n(ω) = ∞ for every ω ∈ Ω.

In fact, for each ω ∈ Ω we have

limsup_n T_n(ω) = ∞ ⇒ Z(ω) is not finite

⇒ Z_{n₀}(ω) is not finite for some index n₀,

⇒ either S_{n₀}(ω) = ∞ or T_{n₀}(ω) = ∞

⇒ S_{n₀}(ω) = ∞ (as T_{n₀}(ω) = ∞ ⇒ S_{n₀}(ω) = ∞),

⇒ sup_n S_n(ω) = ∞.

Lastly we prove that (s, t) ∈ K'(ω) ⇒ s ≥ sup_n S_n(ω) for every ω ∈ Ω.

In fact, for each ω ∈ Ω we have

(s, t) ∈ K'(ω) ⇒ (ω, s, t) ∈ K' = ∩_{n=1}^∞ K_n

⇒ (ω, s, t) ∈ K_n for every n,

⇒ (s, t) ∈ K_n(ω) for every n,

⇒ s ≥ S_n(ω) for every n by a property of K_n,

⇒ s ≥ sup_n S_n(ω).

Definition 1.11 We define the set 𝒦 to be the collection of finite unions

∪_{i=1}^N [S_i + ε_i, T_i ∧ n_i] × [r_i, r'_i],

where n_i, N ∈ ℕ, ε_i > 0, r_i, r'_i ∈ ℝ₊, and S_i, T_i are stopping times for (𝒢_s), for every index i. We define the set 𝒦_δ to be the collection of countable intersections of sets from 𝒦.

Proposition 1.12 The set 𝒦_δ is closed under finite union and countable intersection. Further, the elements of 𝒦_δ have all the properties that were presented in Proposition 1.10.

Proof It is evident that 𝒦_δ is closed under countable intersection. Next we show that 𝒦_δ is closed under finite union. Let M ∈ ℕ and let (K_i)_{i=1…M} be a finite family from 𝒦_δ. For each index i we may write K_i = ∩_{j=1}^∞ K_{ij}, where each set K_{ij} is an element of 𝒦.

Then we have

∪_{i=1}^M K_i = ∪_{i=1}^M ∩_{j=1}^∞ K_{ij}

= ∩_{j₁,j₂,…,j_M=1}^∞ (K_{1j₁} ∪ K_{2j₂} ∪ … ∪ K_{Mj_M}),

which is a countable intersection of elements of 𝒦, since each set K_{1j₁} ∪ K_{2j₂} ∪ … ∪ K_{Mj_M} is an element of 𝒦.

Accordingly, the set ∪_{i=1}^M K_i is an element of 𝒦_δ.
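The identity used here — a finite union of countable intersections rewritten as an intersection of unions indexed by all choices (j₁, …, j_M) — is plain set distributivity. A finite sanity check (arbitrary small sets, for illustration only):

```python
from itertools import product

def union_of_intersections(fams):
    # fams[i] is the list K_{i1}, K_{i2}, ... ; compute the union over i of
    # the intersection over j of K_{ij}.
    return set().union(*(set.intersection(*f) for f in fams))

def intersection_of_unions(fams):
    # Intersection over all index choices (j_1, ..., j_M) of the union of K_{i j_i}.
    result = None
    for js in product(*(range(len(f)) for f in fams)):
        u = set().union(*(fams[i][j] for i, j in enumerate(js)))
        result = u if result is None else result & u
    return result

fams = [[{1, 2, 3}, {2, 3, 4}], [{3, 5}, {3, 6}, {5, 6}]]
assert union_of_intersections(fams) == intersection_of_unions(fams) == {2, 3}
```

With countably many K_{ij} per family, the product index set is countable for finite M, which is exactly why the union stays in 𝒦_δ.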

We now prove that the elements of 𝒦_δ possess all the properties that were presented in Proposition 1.10.

Let S, T be stopping times for (𝒢_s), let ε > 0, let n ∈ ℕ, and let r, r' ∈ ℝ₊. In view of Proposition 1.10, it is enough to show that the set K₀ := [S + ε, T ∧ n] × [r, r'] possesses all the aforementioned properties. Without loss of generality, we assume that S + ε ≤ T ∧ n and r ≤ r'. Let B ∈ ℱ ⊗ ℬ(ℝ₊²). We will show that π[B* ∩ K₀] ∈ ℱ. Let (δ_m) be a sequence of positive numbers decreasing to 0. For each index m, denote by K_m the set [S + ε − δ_m, T ∧ n + δ_m] × [r − δ_m, r' + δ_m], and by K'_m the set (S + ε − δ_m, T ∧ n + δ_m) × (r − δ_m, r' + δ_m). We have π[B* ∩ K₀] = ∩_{m=1}^∞ π[B ∩ K_m].

In fact, we have

∩_{m=1}^∞ π[B ∩ K_m] ⊂ ∩_{m=1}^∞ π[B* ∩ K_m] since B ⊂ B*,

= π[B* ∩ ∩_{m=1}^∞ K_m] since (K_m) is decreasing, and since each set B* ∩ K_m has compact cross sections.

This last set is equal to the projection π[B* ∩ K₀]. On the other hand, for each index m we have

π[B* ∩ K₀] ⊂ π[B* ∩ K'_m] since K₀ ⊂ K'_m,

⊂ π[B ∩ K'_m] since every cross section of the set K'_m is an open rectangle when S + ε < ∞,

⊂ π[B ∩ K_m] since K'_m ⊂ K_m.

Therefore, π[B* ∩ K₀] ⊂ ∩_{m=1}^∞ π[B ∩ K_m].

So we have

π[B* ∩ K₀] = ∩_{m=1}^∞ π[B ∩ K_m]

∈ ℱ by Corollary 1.8, since K_m ∈ ℱ ⊗ ℬ(ℝ₊²) for every index m.

We now reveal the remaining properties of the set K₀. Set Z := (S + ε, r). Note that Z is a stopping time. In fact, let (s, t) ∈ ℝ₊². Then

{Z ≤ (s, t)} = {S + ε ≤ s} if r ≤ t, and {Z ≤ (s, t)} = ∅ otherwise.

In either case we have {Z ≤ (s, t)} ∈ 𝒢_s ⊂ ℱ_{s,t}. We complete the proof by making the following five observations. The stopping time S + ε is a predictable stopping time since ε > 0. The graph [Z] is a subset of K₀, since by assumption S + ε ≤ T ∧ n and r ≤ r'. We have

{Z < ∞} = {S + ε < ∞} = π[[S + ε, T ∧ n]] since S + ε ≤ T ∧ n,

= π[K₀].

For each ω ∈ Ω, the implication r = ∞ ⇒ S(ω) + ε = ∞ holds vacuously, since r < ∞. For each ω ∈ Ω we have

(s, t) ∈ K₀(ω) ⇒ s ∈ [S(ω) + ε, T(ω) ∧ n] and t ∈ [r, r']

⇒ s ≥ (S + ε)(ω).

The Cross Section Theorem

We begin with some important precursors to the cross section theorem for predictable subsets of Ω × ℝ₊².

Definition 1.13 Let ℛ₁ be the ring of subsets of ℝ₊² generated by the sets (z, z'] such that z, z' ∈ ℝ₊², with z ≤ z'. The measure μ is the set function from ℛ₁ into ℝ₊ that is given by μ((z, z']) = the area of the rectangle (z, z'] for every z ≤ z', and which is additively extended to ℛ₁.

Remark The measure μ is the Lebesgue measure on ℛ₁, and can be extended to a sigma-additive measure on ℬ(ℝ₊²), with values in [0, ∞]. We still denote by μ this sigma-additive extension.

Lemma 1.14 Let B ∈ ℱ ⊗ ℬ(ℝ₊²) and ω ∈ Ω. Then B(ω) ∈ ℬ(ℝ₊²).

Proof Let F ∈ ℱ, z, z' ∈ ℝ₊², and let (B_n) be a sequence of subsets of Ω × ℝ₊². Then (F × (z, z'])(ω) ∈ ℬ(ℝ₊²), while (B₁ − B₂)(ω) = B₁(ω) − B₂(ω) and (∪_n B_n)(ω) = ∪_n B_n(ω).

Notation 1.15 Let A, B ∈ ℱ ⊗ ℬ(ℝ₊²), λ ∈ ℝ₊, and z ∈ ℝ₊². We denote by AB_{λ,z} the set

{ ω ∈ Ω | μ((A ∩ Ω × [0, z])(ω)) ≥ λ μ((B ∩ Ω × [0, z])(ω)) }.

When A ⊂ B and 0 < λ < 1, the reader might think of the set AB_{λ,z} as being the projection of the portion of A ∩ Ω × [0, z] that fills some Borel cross section of B ∩ Ω × [0, z] by a factor of 100λ percent or more.

Proposition 1.16 Let A, B ∈ ℱ ⊗ ℬ(ℝ₊²), and let z ∈ ℝ₊². Then for every λ ∈ ℝ₊ we have AB_{λ,z} ∈ ℱ.

Proof Let ℛ be the ring generated by the sets ∪_{i=1}^n F_i × (z_i, z'_i],
n
Proof Let R be the ring generated by the sets U Fi x (zi , z'], where n E N, i=1

F, r JF, and zi, z' E R2 for every index i.

no
Fix A0 r R. Write A0 n n x [0 , z] = U Fo,, x B0,,, where the sets F0,1 E Y are mutually disjoint, and where each set B0, is an element of 1(R+). Let A be the collection of sets C in Yï¿½ B( 2) such that for every X. C R +, the set AoCA; is an element of F. We first show that A contains the ring R. Let C1 e R and let E R+.

nI
Write Cl n n x [0, z] = U F 1, x B 1j, where the sets F 1,, E 3F are mutually disjoint, and where each set B1,i is an element of +). Then AoC1 = U Fo, nF j, where the union is taken over all pairs (ij) such that
(41)
/t(Boj) > X/y(Bij).

Therefore we have AoCI ' E F. Accordingly we have C1 E A, since ;L E R+ was arbitrary.
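The base-case computation has a transparent discrete analogue in which μ is counting measure on a finite grid of cells; the set AB_{λ,z} then reduces to a per-outcome comparison of cell counts (a toy model, not the measure μ of Definition 1.13):

```python
def AB(A, B, lam):
    # Discrete analogue of AB_{lambda,z}: keep the outcomes w whose A-section
    # has measure (cell count) at least lambda times that of the B-section.
    omega = set(A) | set(B)
    return {w for w in omega
            if len(A.get(w, set())) >= lam * len(B.get(w, set()))}

B = {"w1": {(0, 0), (0, 1), (1, 0), (1, 1)}, "w2": {(0, 0), (0, 1)}}
A = {"w1": {(0, 0), (0, 1), (1, 0)}, "w2": {(0, 0)}}   # A(w) inside B(w)

assert AB(A, B, 0.75) == {"w1"}            # 3 >= 0.75 * 4, but 1 < 0.75 * 2
assert AB(A, B, 0.5) == {"w1", "w2"}
```

In this finite setting the measurability assertion of Proposition 1.16 is automatic; the proposition's content is that the same comparison survives the passage to the sigma-additive measure μ via the Monotone Class theorem.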

Next, let (C_n) be an increasing sequence from 𝒜. Set C := ∪_{n=1}^∞ C_n.

We will show that C ∈ 𝒜. Note for each ω ∈ Ω we have μ((C ∩ Ω × [0, z])(ω)) = lim_{n→∞} μ((C_n ∩ Ω × [0, z])(ω)), since μ is sigma-additive. Let λ ∈ ℝ₊, and let (δ_m) be a sequence of positive numbers decreasing to 0, with λ − δ₁ > 0.

We have

A₀C_{λ,z} = ∩_{m=1}^∞ ∪_{n=1}^∞ A₀(C_n)_{λ−δ_m,z} (since μ is finite on [0, z]),

∈ ℱ.

Since λ ∈ ℝ₊ was arbitrary, we conclude that C ∈ 𝒜. Now let (C'_n) be a decreasing sequence from 𝒜. Set C' := ∩_{n=1}^∞ C'_n.

We will show that C' ∈ 𝒜. Note for each ω ∈ Ω we have μ((C' ∩ Ω × [0, z])(ω)) = lim_{n→∞} μ((C'_n ∩ Ω × [0, z])(ω)), since μ is sigma-additive and since μ((C'₁ ∩ Ω × [0, z])(ω)) < ∞ for every ω ∈ Ω. Let λ ∈ ℝ₊.

We have A₀C'_{λ,z} = ∪_{n=1}^∞ A₀(C'_n)_{λ,z} ∈ ℱ. Since λ ∈ ℝ₊ was arbitrary, we conclude that C' ∈ 𝒜.

By applying the Monotone Class theorem we deduce that for every A₀ ∈ ℛ, C ∈ ℱ ⊗ ℬ(ℝ₊²), and λ ∈ ℝ₊, we have A₀C_{λ,z} ∈ ℱ. Now let 𝒜' be the collection of sets A' in ℱ ⊗ ℬ(ℝ₊²) such that for every λ ∈ ℝ₊, the set A'B_{λ,z} is an element of ℱ.

By the above we have ℛ ⊂ 𝒜'. Next, let (A'_n) be an increasing sequence from 𝒜'. Set A' := ∪_{n=1}^∞ A'_n.

We will show that A' ∈ 𝒜'. Let λ ∈ ℝ₊. Since μ is sigma-additive we have

A'B_{λ,z} = ∪_{n=1}^∞ A'_nB_{λ,z}

∈ ℱ.

Since λ ∈ ℝ₊ was arbitrary, we conclude that A' ∈ 𝒜'. Lastly, let (A''_n) be a decreasing sequence from 𝒜'. Set A'' := ∩_{n=1}^∞ A''_n.

We will show that A'' ∈ 𝒜'. Let λ ∈ ℝ₊. Since μ is sigma-additive, and since the sections μ((A''_n ∩ Ω × [0, z])(ω)) decrease to μ((A'' ∩ Ω × [0, z])(ω)), we have

A''B_{λ,z} = ∩_{n=1}^∞ A''_nB_{λ,z}

∈ ℱ.

Since λ ∈ ℝ₊ was arbitrary, we conclude that A'' ∈ 𝒜'. We have shown that 𝒜' is a monotone class containing ℛ. By applying the Monotone Class theorem we deduce that for every λ ∈ ℝ₊, the set AB_{λ,z} is an element of ℱ.

Proposition 1.17 Let z ∈ ℝ₊², let (A_n) be an increasing sequence from ℱ ⊗ ℬ(ℝ₊²), and let A = ∪_{n=1}^∞ A_n. Let B be a measurable subset of A, let ε > 0, and let 0 ≤ λ < 1. There is an index N ∈ ℕ such that

P(π[B ∩ Ω × [0, z]]) − P((B ∩ A_N)B_{λ,z}) ≤ ε.

Proof Since μ is sigma-additive, for each ω ∈ Ω we have μ((B ∩ A_n ∩ Ω × [0, z])(ω)) ↗ μ((B ∩ A ∩ Ω × [0, z])(ω)) = μ((B ∩ Ω × [0, z])(ω)).

Therefore, since λ < 1 and since z ∈ ℝ₊² is finite, for each ω ∈ Ω there is an index N(ω) such that μ((B ∩ A_{N(ω)} ∩ Ω × [0, z])(ω)) ≥ λ μ((B ∩ Ω × [0, z])(ω)). Hence we have (B ∩ A_n)B_{λ,z} ↗ (B ∩ A)B_{λ,z} = BB_{λ,z}. Since each set (B ∩ A_n)B_{λ,z} is an element of ℱ (see Proposition 1.16), we have

P((B ∩ A_n)B_{λ,z}) ↗ P((B ∩ A)B_{λ,z}) = P(BB_{λ,z})

= P(π[B ∩ Ω × [0, z]]).

The statement of the proposition now follows.

Proposition 1.18 Let z ∈ ℝ²₊, A₀ ∈ 𝒫, and ε₀ > 0. Let B₀ be a measurable subset of A₀. Then there is an element K₀ of K_δ such that K₀ ⊂ A₀ ∩ Ω × [0, z] and P(π[ B₀ ∩ Ω × [0, z] ]) − P(π[ B₀ ∩ K₀ ∩ Ω × [0, z] ]) ≤ ε₀.

Proof Let 𝒜 be the collection of sets A ∈ 𝒫 such that for every ε > 0, 0 ≤ λ < 1, and measurable B ⊂ A, there is an element K of K_δ such that K ⊂ A ∩ Ω × [0, z] and P(π[ B ∩ Ω × [0, z] ]) − P( (B ∩ K)B_{λ,z} ) < ε. Let S denote the ring generated by the sets (S, T] × (s, t] and D × {0} × (u, v], such that S, T are stopping times for (G_s), D ∈ G₀, and s, t, u, v ∈ ℝ₊. Note that we have 𝒫 = σ(S).


It is our goal to show that 𝒜 is a monotone class containing S. It will follow from the Monotone Class theorem that 𝒜 = 𝒫. Let A ∈ S.

We may write A = ⋃_{i=1}^n (S_i, T_i] × (r_i, s_i] ∪ ⋃_{j=1}^m [0_{A_j}, U_j] × (v_j, w_j], where S_i, T_i, and U_j are stopping times for (G_s), A_j ∈ G₀, and r_i, s_i, v_j, w_j are real numbers, for every index i and every index j.

Without loss of generality, we will consider A = ⋃_{i=1}^n (S_i, T_i] × (r_i, s_i], and we will assume that the sets A^i := (S_i, T_i] × (r_i, s_i] are pairwise disjoint. Let ε > 0, 0 ≤ λ < 1, and let B be a measurable subset of A. Let (ε_L) be a sequence of positive numbers decreasing to 0. For each L ∈ ℕ and each index i denote by K_{L,i} the set [S_i + ε_L, T_i ∧ L] × [r_i + ε_L, s_i] ∈ K_δ. For each index i we have K_{L,i} ↗ A^i as L → ∞. By Proposition 1.17, for each index i there is a number L_i ∈ ℕ such that P(π[ B ∩ A^i ∩ Ω × [0, z] ]) − P( (B ∩ K_{L_i,i})(B ∩ A^i)_{λ,z} ) < ε/n.

Set K := ⋃_{i=1}^n K_{L_i,i}.

We have K ∩ Ω × [0, z] ∈ K_δ, K ∩ Ω × [0, z] ⊂ A ∩ Ω × [0, z], and

P(π[ B ∩ Ω × [0, z] ]) − P( (B ∩ K ∩ Ω × [0, z])B_{λ,z} )

= P( π[ B ∩ Ω × [0, z] ] \ (B ∩ K)B_{λ,z} ),

since (B ∩ K ∩ Ω × [0, z])B_{λ,z} = (B ∩ K)B_{λ,z} ⊂ π[ B ∩ Ω × [0, z] ],

= P( π[ ⋃_{i=1}^n B ∩ A^i ∩ Ω × [0, z] ] \ ⋃_{i=1}^n (B ∩ K_{L_i,i})(B ∩ A^i)_{λ,z} )

≤ P( ⋃_{i=1}^n ( π[ B ∩ A^i ∩ Ω × [0, z] ] \ (B ∩ K_{L_i,i})(B ∩ A^i)_{λ,z} ) ),

since the sets A^i are disjoint,

≤ Σ_{i=1}^n ( P(π[ B ∩ A^i ∩ Ω × [0, z] ]) − P( (B ∩ K_{L_i,i})(B ∩ A^i)_{λ,z} ) )

< Σ_{i=1}^n ε/n

= ε.

Thus, A ∈ 𝒜. Since A ∈ S was arbitrary, we have S ⊂ 𝒜. Next, let (A_n) be an increasing sequence from 𝒜. Set A′ := ⋃_{n=1}^∞ A_n. We will show that A′ ∈ 𝒜.

Let B′ be a measurable subset of A′, and let ε > 0 and 0 ≤ λ < 1. There are numbers λ₁, λ₂ such that 0 ≤ λ₁, λ₂ < 1 and λ₁λ₂ = λ. By Proposition 1.17 there is an index N ∈ ℕ such that P(π[ B′ ∩ Ω × [0, z] ]) − P( (B′ ∩ A_N)B′_{λ₁,z} ) < ε/2. Since A_N ∈ 𝒜, then for the measurable subset C := [ ((B′ ∩ A_N)B′_{λ₁,z}) × ℝ²₊ ] ∩ B′ ∩ A_N ∩ Ω × [0, z] of A_N there is a set K ∈ K_δ such that K ⊂ A_N ∩ Ω × [0, z] ⊂ A′ ∩ Ω × [0, z], and P(π[ C ∩ Ω × [0, z] ]) − P( (C ∩ K)C_{λ₂,z} ) < ε/2. But π[ C ∩ Ω × [0, z] ] = (B′ ∩ A_N)B′_{λ₁,z}.


Hence we have P( (B′ ∩ A_N)B′_{λ₁,z} ) − P( (C ∩ K)C_{λ₂,z} ) < ε/2. Note that we have (B′ ∩ K)B′_{λ,z} ⊃ (C ∩ K)C_{λ₂,z}. Hence we have

P(π[ B′ ∩ Ω × [0, z] ]) − P( (B′ ∩ K)B′_{λ,z} )

≤ P(π[ B′ ∩ Ω × [0, z] ]) − P( (C ∩ K)C_{λ₂,z} )

= P(π[ B′ ∩ Ω × [0, z] ]) − P( (B′ ∩ A_N)B′_{λ₁,z} ) +

P( (B′ ∩ A_N)B′_{λ₁,z} ) − P( (C ∩ K)C_{λ₂,z} )

< ε/2 + ε/2

= ε.

We conclude that A′ ∈ 𝒜. Lastly, let (A_n) be a decreasing sequence from 𝒜. Set A″ := ⋂_{n=1}^∞ A_n. We will show that A″ ∈ 𝒜. Let B″ be a measurable subset of A″, and let ε > 0 and

0 ≤ λ < 1. Let λ′ ∈ ℝ₊ satisfy λ < λ′ < 1. There is a sequence (λ_n) from [0, 1) such that ∏_{n=1}^∞ λ_n = λ′.

Note that the set B″ is a measurable subset of A₁. So there is a set K₁ ∈ K_δ such that K₁ ⊂ A₁ ∩ Ω × [0, z] and P(π[ B″ ∩ Ω × [0, z] ]) − P( (B″ ∩ K₁)B″_{λ₁,z} ) ≤ ε/2.

The set C₁ := [ ((B″ ∩ K₁)B″_{λ₁,z}) × ℝ²₊ ] ∩ B″ ∩ K₁ ∩ Ω × [0, z] is a measurable subset of A₂. So there is a set K₂ ∈ K_δ such that K₂ ⊂ A₂ ∩ Ω × [0, z] and P(π[ C₁ ∩ Ω × [0, z] ]) − P( (C₁ ∩ K₂)C₁_{λ₂,z} ) ≤ ε/2². Note that π[ C₁ ∩ Ω × [0, z] ] = (B″ ∩ K₁)B″_{λ₁,z}.


The set C₂ := [ ((C₁ ∩ K₂)C₁_{λ₂,z}) × ℝ²₊ ] ∩ C₁ ∩ K₂ ∩ Ω × [0, z] is a measurable subset of A₃. So there is a set K₃ ∈ K_δ such that K₃ ⊂ A₃ ∩ Ω × [0, z] and P(π[ C₂ ∩ Ω × [0, z] ]) − P( (C₂ ∩ K₃)C₂_{λ₃,z} ) ≤ ε/2³. Note that π[ C₂ ∩ Ω × [0, z] ] = (C₁ ∩ K₂)C₁_{λ₂,z}.

Continuing inductively we obtain a sequence (K_n) from K_δ such that for each index n we have K_{n+1} ⊂ A_{n+1} ∩ Ω × [0, z] and P(π[ C_n ∩ Ω × [0, z] ]) − P( (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} ) ≤ ε/2^{n+1}, where C_n = [ ((C_{n−1} ∩ K_n)C_{n−1,λ_n,z}) × ℝ²₊ ] ∩ C_{n−1} ∩ K_n ∩ Ω × [0, z] and π[ C_n ∩ Ω × [0, z] ] = (C_{n−1} ∩ K_n)C_{n−1,λ_n,z}, for n = 2, 3, 4, ….

So for each index n we have

P(π[ B″ ∩ Ω × [0, z] ]) − P( (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} )

= P(π[ B″ ∩ Ω × [0, z] ]) − P( (B″ ∩ K₁)B″_{λ₁,z} )

+ P(π[ C₁ ∩ Ω × [0, z] ]) − P( (C₁ ∩ K₂)C₁_{λ₂,z} )

+ ⋯

+ P(π[ C_n ∩ Ω × [0, z] ]) − P( (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} )

≤ ε/2 + ε/2² + ⋯ + ε/2^{n+1}

< ε.

The sequence ( (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} )_n is decreasing, and so we may write

P(π[ B″ ∩ Ω × [0, z] ]) − P( lim_n (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} )

= P(π[ B″ ∩ Ω × [0, z] ]) − lim_n P( (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} )

< ε.
Now for each index n we have


(C_n ∩ K_{n+1})C_{n,λ_{n+1},z} ⊂ (B″ ∩ K₁ ∩ K₂ ∩ ⋯ ∩ K_{n+1})B″_{λ₁⋯λ_{n+1},z}. Hence, we have

lim_n (C_n ∩ K_{n+1})C_{n,λ_{n+1},z} ⊂ ⋂_{n=1}^∞ (B″ ∩ K₁ ∩ ⋯ ∩ K_n)B″_{λ₁⋯λ_n,z}.

But we have ⋂_{n=1}^∞ (B″ ∩ K₁ ∩ ⋯ ∩ K_n)B″_{λ₁⋯λ_n,z} ⊂ (B″ ∩ ⋂_{n=1}^∞ K_n)B″_{λ,z}.

In fact, let ω ∈ ⋂_{n=1}^∞ (B″ ∩ K₁ ∩ ⋯ ∩ K_n)B″_{λ₁⋯λ_n,z}.

Then μ( (B″ ∩ ⋂_{i=1}^{n+1} K_i ∩ Ω × [0, z])(ω) ) > ∏_{i=1}^{n+1} λ_i · μ( (B″ ∩ Ω × [0, z])(ω) ) for every index n. So we have μ( (B″ ∩ ⋂_{n=1}^∞ K_n ∩ Ω × [0, z])(ω) ) ≥ ∏_{n=1}^∞ λ_n · μ( (B″ ∩ Ω × [0, z])(ω) )

= λ′ μ( (B″ ∩ Ω × [0, z])(ω) ) > λ μ( (B″ ∩ Ω × [0, z])(ω) ). This means that ω ∈ (B″ ∩ ⋂_{n=1}^∞ K_n)B″_{λ,z}.

We remark here that if we had considered only λ = λ′, then all we would have been guaranteed at this point is ω ∈ π[ B″ ∩ ⋂_{n=1}^∞ K_n ∩ Ω × [0, z] ], which would have been unhelpful.

We now have P(π[ B″ ∩ Ω × [0, z] ]) − P( (B″ ∩ ⋂_{n=1}^∞ K_n)B″_{λ,z} ) < ε.

Set K := ⋂_{n=1}^∞ K_n.


We have K ∈ K_δ, K ⊂ A″ ∩ Ω × [0, z] (since K ⊂ A_n ∩ Ω × [0, z] for every n), and P(π[ B″ ∩ Ω × [0, z] ]) − P( (B″ ∩ K)B″_{λ,z} ) < ε. Accordingly, we have A″ ∈ 𝒜. We have shown that 𝒜 is a monotone class that contains the ring S of generators of 𝒫. Since S generates the whole space Ω × ℝ²₊, we have 𝒜 = 𝒫. Then for the sets A₀ ∈ 𝒫 and B₀ ⊂ A₀, the value ε₀ > 0, and any number λ ∈ [0, 1), we may find an element K₀ ∈ K_δ so that we have K₀ ⊂ A₀ ∩ Ω × [0, z] and

P(π[ B₀ ∩ Ω × [0, z] ]) − P(π[ B₀ ∩ K₀ ∩ Ω × [0, z] ])

≤ P(π[ B₀ ∩ Ω × [0, z] ]) − P( (B₀ ∩ K₀)B₀_{λ,z} ), since (B₀ ∩ K₀)B₀_{λ,z} ⊂ π[ B₀ ∩ K₀ ∩ Ω × [0, z] ],

≤ ε₀.

The statement of the proposition is now seen to be true.

Corollary 1.19 Let A ∈ 𝒫, and let ε > 0. There is an F-measurable function f : Ω → [0, ∞] × [0, ∞] such that

• [f] ⊂ A and
• P(π[A]) − P(π[ [f] ]) < ε.

Proof Let (z_n) be a sequence from ℝ²₊ such that z_n ↗ ∞. For each index n there is a set K_n ∈ K_δ such that K_n ⊂ A ∩ Ω × [0, z_n] and P(π[ A ∩ Ω × [0, z_n] ]) − P(π[K_n]) < ε/2ⁿ (take B₀ = A₀ = A in Proposition 1.18). Set D := ⋃_{n=1}^∞ K_n. We have D ⊂ A (since z_n ↗ ∞), and

P(π[A]) − P(π[D]) = P( π[ ⋃_{n=1}^∞ A ∩ Ω × [0, z_n] ] ) − P( π[ ⋃_{n=1}^∞ K_n ] )

= P( ⋃_{n=1}^∞ π[ A ∩ Ω × [0, z_n] ] ) − P( ⋃_{n=1}^∞ π[K_n] )

≤ P( ⋃_{n=1}^∞ ( π[ A ∩ Ω × [0, z_n] ] \ π[K_n] ) )

≤ Σ_{n=1}^∞ ( P(π[ A ∩ Ω × [0, z_n] ]) − P(π[K_n]) )

< Σ_{n=1}^∞ ε/2ⁿ

= ε.

For each index n there is an F-measurable function Z_n : Ω → [0, ∞] × [0, ∞] such that [Z_n] ⊂ K_n and π[ [Z_n] ] = π[K_n] (see Proposition 1.12). Set

f := Z₁ 1_{π[[Z₁]]} + Σ_{n=2}^∞ Z_n 1_{π[[Z_n]] \ ⋃_{i=1}^{n−1} π[[Z_i]]} + ∞ · 1_{( ⋃_{n=1}^∞ π[[Z_n]] )^c}.

Then f is an F-measurable function such that [f] ⊂ D and π[ [f] ] = π[D]. Accordingly, we have [f] ⊂ A and P(π[A]) − P(π[ [f] ]) < ε.

Theorem 1.20 Let A₀ ∈ 𝒫, ε₀ > 0, and let f : Ω → [0, ∞] × [0, ∞] be an F-measurable function such that [f] ⊂ A₀. There is an element K₀ ∈ K_δ such that

• K₀ ⊂ A₀ and
• P(π[ [f] ]) − P(π[ [f] ∩ K₀ ]) ≤ ε₀.

Proof Let 𝒜 be the collection of predictable sets A such that for any ε > 0 and any F-measurable function g : Ω → [0, ∞] × [0, ∞] with [g] ⊂ A there is an element K ∈ K_δ such that K ⊂ A and P(π[ [g] ]) − P(π[ K ∩ [g] ]) ≤ ε. Let S, T be stopping times for (G_s), and let s, t ∈ ℝ₊. First we show that the predictable set A := (S, T] × (s, t] is in 𝒜. Let ε > 0 and let g be an F-measurable function such that [g] ⊂ A. Let (ε_n) be


a sequence of positive numbers decreasing to 0. For each index n, let K_n := [S + ε_n, T ∧ n] × [s + ε_n, t] ∈ K_δ. We have K_n ↗ A. Hence we have

P(π[ [g] ]) = P(π[ [g] ∩ A ]) = P(π[ [g] ∩ ⋃_{n=1}^∞ K_n ])

= P( ⋃_{n=1}^∞ π[ [g] ∩ K_n ] )

= lim_n P(π[ [g] ∩ K_n ]),

since P is sigma-additive and since each set π[ [g] ∩ K_n ] is an element of F (see Corollary 1.8),

≤ P(π[ [g] ∩ K_N ]) + ε

for some index N. Note that we have K_N ∈ K_δ, K_N ⊂ A, and P(π[ [g] ]) − P(π[ [g] ∩ K_N ]) ≤ ε. Hence, we have A ∈ 𝒜. We may similarly show that the set B × {0} × (u, v] is an element of 𝒜, where B ∈ G₀ and u, v ∈ ℝ₊.

Next we show that 𝒜 is closed under countable unions. It will follow that a monotone increasing sequence from 𝒜 has its limit in 𝒜, and also that 𝒜 contains the ring S generated by the sets (S, T] × (s, t] and B × {0} × (u, v] such that S, T are stopping times for (G_s), B ∈ G₀, and s, t, u, v ∈ ℝ₊. Let (A_n) be a sequence from 𝒜. Set A′ := ⋃_{n=1}^∞ A_n.

Let ε > 0 and let g be an F-measurable function such that [g] ⊂ A′.


For each index n, the set [g] ∩ A_n ⊂ A_n is the graph of the F-measurable function g 1_{π[[g] ∩ A_n]} + ∞ · 1_{(π[[g] ∩ A_n])^c}. Hence for each index n there is an element K_n ∈ K_δ such that K_n ⊂ A_n and P(π[ [g] ∩ A_n ]) − P(π[ [g] ∩ K_n ]) ≤ ε/2^{n+1}. We have

P(π[ [g] ]) = P(π[ [g] ∩ A′ ])

= P(π[ [g] ∩ ⋃_{n=1}^∞ A_n ])

= P( ⋃_{n=1}^∞ π[ [g] ∩ A_n ] )

≤ P( ⋃_{n=1}^{N₁} π[ [g] ∩ A_n ] ) + ε/2

for some index N₁ ∈ ℕ.

Set K := ⋃_{n=1}^{N₁} K_n ∈ K_δ. Then K ⊂ A′, and we have

P(π[ [g] ]) − P(π[ [g] ∩ K ]) ≤ P( ⋃_{n=1}^{N₁} π[ [g] ∩ A_n ] ) − P( ⋃_{n=1}^{N₁} π[ [g] ∩ K_n ] ) + ε/2

≤ P( ⋃_{n=1}^{N₁} ( π[ [g] ∩ A_n ] \ π[ [g] ∩ K_n ] ) ) + ε/2

≤ Σ_{n=1}^{N₁} ( P(π[ [g] ∩ A_n ]) − P(π[ [g] ∩ K_n ]) ) + ε/2

≤ Σ_{n=1}^{N₁} ε/2^{n+1} + ε/2

< ε.

We conclude that A′ ∈ 𝒜.

Lastly, let (A_n) be a decreasing sequence from 𝒜. Set A″ := ⋂_{n=1}^∞ A_n. We will show

that A″ ∈ 𝒜. Since 𝒜 contains S, and since σ(S) = 𝒫, we will then conclude that 𝒜 = 𝒫. The statement of the theorem will follow as a consequence. Let ε > 0, and let g be an F-measurable function such that [g] ⊂ A″. Then [g] ⊂ A₁, and so there is an element K₁ ∈ K_δ such that K₁ ⊂ A₁ and P(π[ [g] ]) − P(π[ [g] ∩ K₁ ]) ≤ ε/2. The measurable graph [g] ∩ K₁ is contained in A₂, and so there is an element K₂ ∈ K_δ such that K₂ ⊂ A₂ and P(π[ [g] ∩ K₁ ]) − P(π[ [g] ∩ K₁ ∩ K₂ ]) ≤ ε/2². The measurable graph [g] ∩ K₁ ∩ K₂ is contained in A₃, and so there is an element K₃ ∈ K_δ such that K₃ ⊂ A₃ and P(π[ [g] ∩ K₁ ∩ K₂ ]) − P(π[ [g] ∩ K₁ ∩ K₂ ∩ K₃ ]) ≤ ε/2³.

Continuing inductively we obtain a sequence (K_n) from K_δ such that for each index n we have K_n ⊂ A_n and P(π[ [g] ∩ ⋂_{i=1}^{n−1} K_i ]) − P(π[ [g] ∩ ⋂_{i=1}^{n} K_i ]) ≤ ε/2ⁿ.

Set K := ⋂_{n=1}^∞ K_n. We have K ∈ K_δ, K ⊂ A″, and

P(π[ [g] ]) − P(π[ [g] ∩ K ]) = P(π[ [g] ]) − P(π[ [g] ∩ ⋂_{n=1}^∞ K_n ])

= P(π[ [g] ]) − P( ⋂_{n=1}^∞ π[ [g] ∩ K_n ] ), since [g] is a graph,

= P(π[ [g] ]) − lim_n P(π[ [g] ∩ ⋂_{i=1}^n K_i ]), since P is sigma-additive,


= P(π[ [g] ]) − P(π[ [g] ∩ K₁ ]) + Σ_{n=2}^∞ ( P(π[ [g] ∩ ⋂_{i=1}^{n−1} K_i ]) − P(π[ [g] ∩ ⋂_{i=1}^{n} K_i ]) )

≤ Σ_{n=1}^∞ ε/2ⁿ

= ε.

We conclude that A″ ∈ 𝒜. We close Chapter 1 with the main result of the chapter, the cross section theorem.

Theorem 1.21 Let A ∈ 𝒫, and let ε > 0. There is a stopping time Z such that

• [Z] ⊂ A and
• P(π[A]) − P(π[ [Z] ]) < ε.

Proof By Corollary 1.19 there is an F-measurable function f satisfying

1.21a [f] ⊂ A and

1.21b P(π[A]) − P(π[ [f] ]) < ε/2.

By Theorem 1.20 there is an element K ∈ K_δ so that

1.21c K ⊂ A,

1.21d P(π[ [f] ]) − P(π[ [f] ∩ K ]) ≤ ε/2, and

1.21e there is a stopping time Z such that [Z] ⊂ K and π[K] = π[ [Z] ].

We have [Z] ⊂ K by 1.21e,

⊂ A by 1.21c;

and P(π[A]) < P(π[ [f] ]) + ε/2 by 1.21b,

≤ P(π[ [f] ∩ K ]) + ε by 1.21d,

≤ P(π[K]) + ε

= P(π[ [Z] ]) + ε by 1.21e.
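In a finite analogue of the cross section theorem the selection can even be made exact (ε = 0): the sample space, time grid, and set A below are illustrative assumptions, and Z simply picks the lexicographically least point of each nonempty section of A.

```python
from itertools import product

# Finite analogue of the cross section theorem: Omega x a finite time grid.
omega = ['w1', 'w2', 'w3', 'w4']
grid = list(product(range(3), range(3)))          # "R_+^2" reduced to a 3x3 grid

# A measurable set A in Omega x grid.
A = {('w1', (0, 2)), ('w1', (2, 1)), ('w3', (1, 1)), ('w4', (2, 2))}

proj = {w for (w, z) in A}                        # pi[A], the projection onto Omega

# Z selects, for each w in pi[A], the lexicographically least z with (w, z) in A;
# its graph [Z] is then a cross section of A.
Z = {w: min(z for (w2, z) in A if w2 == w) for w in proj}
graph_Z = set(Z.items())

assert graph_Z <= A                               # [Z] ⊂ A
assert {w for (w, _) in graph_Z} == proj          # pi[[Z]] = pi[A], so epsilon = 0 here
```

In the continuous-time setting the section Z must in addition be a stopping time, which is exactly why the theorem only achieves P(π[A]) − P(π[[Z]]) < ε rather than equality.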

CHAPTER 2
PREDICTABLE PROJECTIONS

In this chapter we define the predictable projection ᵖX for a measurable two parameter process (X_{s,t}). In addition, we demonstrate that the projection ᵖX exists and is unique.

Projections ᵖX

In the paragraphs that immediately follow, we motivate the definition of the predictable projection.

Definition 2.1 Let Z be a stopping time for (F_{s,t}). The σ-algebra F_Z is defined by

F_Z = { A ∈ F | A ∩ { Z ≤ (s,t) } ∈ F_{s,t} for all (s,t) ∈ ℝ²₊ }.

Proposition 2.2 Let Z = (S,T) be a stopping time such that { S < ∞ } ⊂ { T < ∞ }. Then F_Z = G_S.

Proof First we note that for every s ∈ ℝ₊ we have

{ S ≤ s } = { S ≤ s } ∩ { T < ∞ }, since { S < ∞ } ⊂ { T < ∞ },

= ⋃_{n=1}^∞ { Z ≤ (s,n) }.

Next we show that G_S ⊂ F_Z.


Let A ∈ G_S. Then A ∈ F, and A ∩ { S ≤ s } ∈ G_s for every s ∈ ℝ₊. Let s, t ∈ ℝ₊.

We have A ∩ { Z ≤ (s,t) } = ( A ∩ { S ≤ s } ) ∩ { Z ≤ (s,t) } ∈ F_{s,t}, since A ∩ { S ≤ s } ∈ G_s = F_{s,t} and { Z ≤ (s,t) } ∈ F_{s,t}.

So A ∈ F_Z.

Lastly we show that F_Z ⊂ G_S. Let A ∈ F_Z, and let s ∈ ℝ₊ and n ∈ ℕ. Then A ∈ F, and A ∩ { Z ≤ (s,n) } ∈ F_{s,n} = G_s.

Hence we have

A ∩ { S ≤ s } = A ∩ { S ≤ s } ∩ { T < ∞ } (because { S < ∞ } ⊂ { T < ∞ } by hypothesis)

= ⋃_{n=1}^∞ A ∩ { Z ≤ (s,n) }

∈ G_s.

So A ∈ G_S. We conclude that F_Z = G_S.

Remark Let Z = (S,T) be a predictable stopping time. Since

F_Z = G_S (by Proposition 2.2)

= F_{S−} (by assumption),

we are motivated to make the following definition for the predictable projection.

Definition 2.3 Let X : Ω × ℝ²₊ → ℝ be an F ⊗ B(ℝ²₊)-measurable process. A predictable process Y : Ω × ℝ²₊ → ℝ which satisfies

E[ X_Z 1_{Z<∞} | F_Z ] = Y_Z 1_{Z<∞} a.s. for every predictable stopping time Z

is called a predictable projection of X, and is denoted ᵖX. Note that because the process X is measurable and Z is a stopping time, the function X_Z is F-measurable. Therefore, since F_Z ⊂ F, the conditional expectation E[ X_Z 1_{Z<∞} | F_Z ] is defined when the process X is, for example, positive or bounded.

The Uniqueness of ᵖX

We begin by providing ourselves with some tools that will enable us to prove that if a predictable projection ᵖX for X exists, then it is unique.

Definition 2.4 A subset B of Ω × ℝ²₊ is evanescent if there is a P-negligible set N ⊂ Ω such that B ⊂ N × ℝ²₊.

Theorem 2.5 Let φ : Ω × ℝ²₊ → ℝ be a real-valued predictable process. If φ_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z, then φ = 0 outside an evanescent set.

Proof The proof is divided into four parts. First we show that if A ∈ 𝒫 satisfies (1_A)_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z, then 1_A = 0 outside an evanescent set. Let A ∈ 𝒫 be such that (1_A)_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z. Suppose P(π[A]) ≠ 0.

Then P(π[A]) > 0, and so there is an ε > 0 such that P(π[A]) > ε. By Theorem 1.21, for this ε there is a predictable stopping time Z₁ such that

2.5a [Z₁] ⊂ A, and

2.5b P(π[A]) ≤ ε + P(π[ [Z₁] ]).

We have the following chain of equalities and inequalities.

0 = P({ (1_A)_{Z₁} 1_{Z₁<∞} ≠ 0 }) by hypothesis,

= P({ ω ∈ Ω | Z₁(ω) < ∞ and (ω, Z₁(ω)) ∈ A })

= P(π[ A ∩ [Z₁] ])

= P(π[ [Z₁] ]) by 2.5a,

≥ P(π[A]) − ε by 2.5b,

> ε − ε by assumption,

= 0,

which is a contradiction. So we cannot have P(π[A]) ≠ 0. Accordingly, 1_A = 0 outside an evanescent set.

Second, we show that for any disjoint sets A_i ∈ 𝒫 and any numbers a_i > 0, if (Σᵢ a_i 1_{A_i})_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z, then Σᵢ a_i 1_{A_i} = 0 outside an evanescent set. Let n ∈ ℕ, and for i = 1, …, n let A_i ∈ 𝒫 and a_i > 0. Assume that the sets A_i are disjoint. We have

(Σ_{i=1}^n a_i 1_{A_i})_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z

⟹ for each index i, (a_i 1_{A_i})_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z

(since the sets A_i are disjoint),

⟹ for each index i, (1_{A_i})_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z

(since each a_i > 0),

⟹ for each index i, 1_{A_i} = 0 outside an evanescent set (by the first part above),

⟹ Σ_{i=1}^n a_i 1_{A_i} = 0 outside an evanescent set.

This completes the second part. Third, we show that if φ is positive and satisfies the hypothesis of the theorem, then φ = 0 outside an evanescent set. Assume that the predictable process φ is positive and satisfies φ_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z. Since φ is positive and predictable, there is a sequence (φ_n) of positive predictable step functions such that φ_n ↗ φ pointwise. So {φ_n > 0} ↗ {φ > 0} and (φ_n)_Z 1_{Z<∞} ↗ φ_Z 1_{Z<∞} for every predictable stopping time Z.

Hence we have

2.5c P(π[ {φ_n > 0} ]) ↗ P(π[ {φ > 0} ]), since P is sigma-additive (note π[ {φ_n > 0} ] and π[ {φ > 0} ] are in F by Corollary 1.8), and

2.5d for each index n, (φ_n)_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z.

Now each step function φ_n can be expressed as Σ_{i=1}^{k_n} a_i 1_{A_i}, where each a_i > 0 and the sets A_i are disjoint.

In light of this, when we apply the result in the second part of the proof to 2.5d we obtain φ_n = 0 outside an evanescent set, for each n ∈ ℕ. Then, by 2.5c we have φ = 0 outside an evanescent set. Fourth, we show that if φ is real-valued and satisfies the hypothesis of the theorem, then φ = 0 outside an evanescent set. We have the following chain of implications.


φ_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z

⟹ |φ|_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z

⟹ |φ| = 0 outside an evanescent set (note, |φ| is a positive predictable process, so the third part of the proof applies)

⟹ φ = 0 outside an evanescent set.

We are now able to establish the uniqueness, up to an evanescent set, of ᵖX.

Corollary 2.6 Let φ, ψ : Ω × ℝ²₊ → ℝ be two real-valued, predictable processes. If φ_Z 1_{Z<∞} = ψ_Z 1_{Z<∞} a.s. for every predictable stopping time Z, then φ = ψ outside an evanescent set.

Proof From the hypothesis we deduce that (φ − ψ)_Z 1_{Z<∞} = 0 a.s. for every predictable stopping time Z. (Note that subtraction is valid since the processes φ and ψ take values in a Banach space, ℝ.)

The process φ − ψ satisfies the hypothesis of Theorem 2.5, and therefore we conclude that φ − ψ = 0 outside an evanescent set. Thus, φ = ψ outside an evanescent set.

Proposition 2.7 Let X : Ω × ℝ²₊ → ℝ be a real-valued, F ⊗ B(ℝ²₊)-measurable process. If X has a predictable projection, then the projection is unique up to an evanescent set.

Proof Suppose φ₁, φ₂ : Ω × ℝ²₊ → ℝ are two predictable projections for X. Then for every predictable stopping time Z we have


(φ₁)_Z 1_{Z<∞} = E[ X_Z 1_{Z<∞} | F_Z ]

= (φ₂)_Z 1_{Z<∞} a.s..

From Corollary 2.6 we conclude that φ₁ = φ₂ outside an evanescent set, and the proposition is hence proved.

The Existence of ᵖX

We will begin by presenting explicit forms of the predictable projection for two particular processes, X = H 1_{[0,u]×[0,v]} and X = 1_{(Z,∞)}. Both forms will be used later in the paper, and one form will intimate that, desirably, a predictable process is its own projection. We will close by proving that every bounded, measurable, real-valued process possesses a predictable projection.

Proposition 2.8 Let H ∈ L¹(F), let u, v ∈ ℝ₊, and let X : Ω × ℝ²₊ → ℝ be defined by X_{s,t}(ω) = H(ω) 1_{[0,u]×[0,v]}(s,t). Denote by (E[ H | F_s ])_s the function s ↦ E[ H | F_s ], which is chosen (Dinculeanu 2000, p. 181) to be right continuous with left limits (cadlag). Then X has a predictable projection, and we have

(ᵖX)_{s,t}(ω) = (E[ H | F_. ])_{s−}(ω) 1_{[0,u]×[0,v]}(s,t).

Proof Let Y_{s,t}(ω) := (E[ H | F_. ])_{s−}(ω) 1_{[0,u]×[0,v]}(s,t) for every ω ∈ Ω and every s, t ∈ ℝ₊. First we verify that the process (Y_{s,t}) is left continuous. Since (E[ H | F_. ])_s is cadlag, (E[ H | F_. ])_{s−} is left continuous. Also, 1_{[0,u]×[0,v]} is left continuous. Therefore, (Y_{s,t}) is left continuous. Next we check that for every s, t ∈ ℝ₊, the map Y_{s,t} is F_{s,t}-measurable. Let s, t ∈ ℝ₊.

If s > u or t > v, then Y_{s,t} = 0, which is F_{s,t}-measurable.


If s ≤ u and t ≤ v, then Y_{s,t} = (E[ H | F_. ])_{s−}

= E[ H | F_{s−} ]

(note, since E[ H | F_. ] is cadlag and s is a predictable stopping time, the Stopping Theorem (Metivier 1982, p. 87) applies),

which is F_{s−}-measurable, and hence F_{s,t}-measurable.

Lastly we confirm that E[ X_Z 1_{Z<∞} | F_Z ] = Y_Z 1_{Z<∞} a.s. for every predictable stopping time Z.

Let Z = (S,T) be a predictable stopping time. We have

Y_Z 1_{Z<∞} = Y_{S,T} 1_{Z<∞}

= (E[ H | F_. ])_{S−} 1_{Z≤(u,v)}

= E[ H | F_{S−} ] 1_{Z≤(u,v)}

(note, since E[ H | F_. ] is cadlag and S is a predictable stopping time, the Stopping Theorem (Metivier 1982, p. 87) applies).

Also, E[ X_Z 1_{Z<∞} | F_Z ] = E[ H 1_{Z≤(u,v)} | F_Z ]

= E[ H | F_Z ] 1_{Z≤(u,v)}, since { Z ≤ (u,v) } ∈ F_Z,

= E[ H | F_{S−} ] 1_{Z≤(u,v)}, since F_Z = G_S by Proposition 2.2, and since G_S = F_{S−} by assumption.

Therefore we have E[ X_Z 1_{Z<∞} | F_Z ] = Y_Z 1_{Z<∞} a.s., and Y is a predictable projection of X.
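The objects in Proposition 2.8 can be sketched on a finite filtration, where the conditional expectation E[ H | F_s ] is the average of H over the atoms generated by the first s coordinates. The coin-flip space and the choice H = number of heads below are illustrative assumptions, not part of the text:

```python
from itertools import product
from fractions import Fraction

# Finite sketch of s -> E[H | F_s]: Omega = 3 coin flips, uniform probability,
# F_s generated by the first s flips, H = total number of heads.
omega = list(product((0, 1), repeat=3))

def cond_exp_H(s):
    # E[H | F_s] as a function on Omega: average H over each atom of F_s,
    # where the atom of w is the set of paths sharing its first s coordinates.
    out = {}
    for w in omega:
        atom = [v for v in omega if v[:s] == w[:s]]
        out[w] = Fraction(sum(sum(v) for v in atom), len(atom))
    return out

E_H = [cond_exp_H(s) for s in range(4)]

# s = 0 gives the plain expectation, s = 3 gives H itself.
assert all(x == Fraction(3, 2) for x in E_H[0].values())
assert all(E_H[3][w] == sum(w) for w in omega)

# Tower property: averaging E[H | F_s] over Omega recovers E[H] for every s.
for s in range(4):
    assert sum(E_H[s].values()) / len(omega) == Fraction(3, 2)
```

In this discrete setting the map s ↦ E[H | F_s] is automatically a step function, so the cadlag choice made in the proposition and its left limit at s are both immediate.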


Proposition 2.9 Let Z be a stopping time, and write (Z, ∞) := { (ω, z) ∈ Ω × ℝ²₊ : Z(ω) < z }. Then { Z < Z′ < ∞ } ∈ F_{Z′} for every stopping time Z′. Moreover, the process 1_{(Z,∞)} is predictable, and the predictable projection of 1_{(Z,∞)} is itself. Therefore we have ᵖ(1_{(Z,∞)}) = 1_{(Z,∞)}.

Proof Let Z′ be a stopping time. To prove the first assertion of the proposition, we must show that { Z < Z′ < ∞ } ∩ { Z′ ≤ (s,t) } ∈ F_{s,t} = G_s for every s, t ∈ ℝ₊. Let s, t ∈ ℝ₊.

Note for every r, q ∈ ℝ₊ with r ≤ s we have { Z ≤ (r,q) } ∈ F_{r,q} = G_r ⊂ G_s.

Also, for every r, q ∈ ℝ₊ with r < s and q < t, writing Z′ = (S′, T′), we have

{ (r,q) < Z′ ≤ (s,t) } = { Z′ ≤ (s,t) } ∩ { Z′ ≤ (r,t) }ᶜ ∩ { Z′ ≤ (s,q) }ᶜ ∈ G_s,

since each of the three sets on the right lies in F_{s,t} = G_s. Hence

{ Z < Z′ < ∞ } ∩ { Z′ ≤ (s,t) } = ⋃_{r,q ∈ ℚ₊, r<s, q<t} { Z ≤ (r,q) } ∩ { (r,q) < Z′ ≤ (s,t) }

∈ G_s.

We now prove the second assertion of the proposition. From it we may then infer


that for any predictable process (X_{s,t}) that has a predictable projection ᵖX we have ᵖX = X.

Observe that the process 1_{(Z,∞)} is left continuous. Also, for each s, t ∈ ℝ₊ we have 1_{(Z,∞)}(s,t) = 1_{Z<(s,t)}. Since 1_{Z<(s,t)} is F_{s,t}-measurable, 1_{(Z,∞)}(s,t) is also F_{s,t}-measurable. Hence the process 1_{(Z,∞)} is predictable. Now let Z′ be a predictable stopping time. We have

E[ (1_{(Z,∞)})_{Z′} 1_{Z′<∞} | F_{Z′} ] = E[ 1_{Z<Z′<∞} | F_{Z′} ]

= 1_{Z<Z′<∞}, since { Z < Z′ < ∞ } ∈ F_{Z′} by the first assertion,

= (1_{(Z,∞)})_{Z′} 1_{Z′<∞}.
We have proved that ᵖ(1_{(Z,∞)}) = 1_{(Z,∞)}.

Theorem 2.10 Let X : Ω × ℝ²₊ → ℝ be a bounded, real-valued, F ⊗ B(ℝ²₊)-measurable process. Then X has a predictable projection.

Proof The proof is inspired by Dellacherie and Meyer's proof for the existence of the predictable projection of a one parameter process. The proof will unfold in five steps. In the first step we show that if X₁ and X₂ are measurable, real-valued processes having predictable projections ᵖX₁ and ᵖX₂ respectively, then X₁ ≤ X₂ implies that ᵖX₁ ≤ ᵖX₂ outside an evanescent set. Thus, in particular, if |X| ≤ K then |ᵖX| ≤ K outside an evanescent set. Let X₁, X₂ : Ω × ℝ²₊ → ℝ be two F ⊗ B(ℝ²₊)-measurable processes having predictable projections ᵖX₁ and ᵖX₂ respectively. Let Z be a predictable stopping time. We have


E[ (X₁)_Z 1_{Z<∞} | F_Z ] ≤ E[ (X₂)_Z 1_{Z<∞} | F_Z ] a.s., if X₁ ≤ X₂; that is,

(ᵖX₁)_Z 1_{Z<∞} ≤ (ᵖX₂)_Z 1_{Z<∞} a.s., so that

(ᵖX₂ − ᵖX₁)_Z 1_{Z<∞} ≥ 0 a.s.

Hence ( (ᵖX₂ − ᵖX₁) 1_{ᵖX₂ − ᵖX₁ < 0} )_Z 1_{Z<∞} = 0 a.s.

Since the predictable stopping time Z was arbitrary, we deduce from Theorem 2.5 that if X₁ ≤ X₂ then (ᵖX₂ − ᵖX₁) 1_{ᵖX₂ − ᵖX₁ < 0} = 0 outside an evanescent set; that is, ᵖX₁ ≤ ᵖX₂ outside an evanescent set. We show secondly that if X is a measurable, real-valued process and (X_n) is a sequence of uniformly bounded, measurable processes such that each X_n has a predictable projection and X_n ↗ X, then X has a predictable projection and we have ᵖX = liminf_n ᵖX_n.

Let (X_n) be a sequence of measurable processes X_n : Ω × ℝ²₊ → ℝ, n ∈ ℕ, and let X : Ω × ℝ²₊ → ℝ be a measurable process. Assume that the sequence (X_n) is uniformly bounded, that each X_n has a predictable projection ᵖX_n, and that X_n ↗ X pointwise. Let Z be a predictable stopping time. Then (X_n)_Z 1_{Z<∞} ↗ X_Z 1_{Z<∞}, and so E[ (X_n)_Z 1_{Z<∞} | F_Z ] ↗ E[ X_Z 1_{Z<∞} | F_Z ] a.s. and in L¹(P). Thus we have

E[ X_Z 1_{Z<∞} | F_Z ] = lim_n E[ (X_n)_Z 1_{Z<∞} | F_Z ] a.s.

= liminf_n (ᵖX_n)_Z 1_{Z<∞} a.s.

= (liminf_n ᵖX_n)_Z 1_{Z<∞}.

The process liminf_n ᵖX_n is predictable, and so it is a predictable projection of X.
In our third step we prove that if (X_n) is a sequence of bounded, measurable processes each having a predictable projection, and X is a measurable process such that X_n → X uniformly as n → ∞, then X has a predictable projection and we have ᵖX = liminf_n ᵖX_n. Let (X_n) be a sequence of bounded, measurable processes X_n : Ω × ℝ²₊ → ℝ, n ∈ ℕ, and let X : Ω × ℝ²₊ → ℝ be a measurable process. Assume that each X_n has a predictable projection ᵖX_n, and that X_n → X uniformly as n → ∞.

Since each X_n is bounded and X_n → X uniformly, X is bounded. Hence, E[ X_Z 1_{Z<∞} | F_Z ] exists for every predictable stopping time Z. We have E[ (X_n)_Z 1_{Z<∞} | F_Z ] → E[ X_Z 1_{Z<∞} | F_Z ] a.s. and uniformly as n → ∞, for every predictable stopping time Z. So for every predictable stopping time Z we have

E[ X_Z 1_{Z<∞} | F_Z ] = lim_n E[ (X_n)_Z 1_{Z<∞} | F_Z ] a.s.

= liminf_n (ᵖX_n)_Z 1_{Z<∞} a.s.

= (liminf_n ᵖX_n)_Z 1_{Z<∞}.

Hence liminf_n ᵖX_n, which is predictable, is a predictable projection of X.


In this fourth step we verify that the projection ᵖ is linear. Let X, Y : Ω × ℝ²₊ → ℝ be two processes that have projections ᵖX, ᵖY respectively, and let a, b ∈ ℝ.

We will prove that aX + bY has a predictable projection, and that ᵖ(aX + bY) = a ᵖX + b ᵖY. In fact, for every predictable stopping time Z we have

E[ (aX + bY)_Z 1_{Z<∞} | F_Z ] = a E[ X_Z 1_{Z<∞} | F_Z ] + b E[ Y_Z 1_{Z<∞} | F_Z ] a.s.

= a (ᵖX)_Z 1_{Z<∞} + b (ᵖY)_Z 1_{Z<∞} a.s.

= ( a ᵖX + b ᵖY )_Z 1_{Z<∞}.

Lastly, let B be the set of all real-valued, bounded, F ⊗ B(ℝ²₊)-measurable processes, and let ℋ be the set of all F ⊗ B(ℝ²₊)-measurable, bounded, real-valued processes which admit a predictable projection. We will show that ℋ = B. From the four steps above we deduce that ℋ is a vector space that is closed under bounded monotone convergence and uniform convergence. Further, ℋ contains the process 1, since ᵖ1 = 1. Let C be the class of bounded, measurable processes X : Ω × ℝ²₊ → ℝ of the form X_{s,t}(ω) = H(ω) 1_{[0,u]×[0,v]}(s,t), where H ∈ L∞(F) and u, v ∈ ℝ₊. Then C is closed under multiplication. Also, C ⊂ ℋ by Proposition 2.8. Thus, by the Monotone Class Theorem (Dellacherie & Meyer 1975, p. 14-I), ℋ contains all processes which are bounded and σ(C)-measurable. But σ(C) = F ⊗ B(ℝ²₊). Hence ℋ = B.

CHAPTER 3
PREDICTABLE DUAL PROJECTIONS

In this chapter we present the predictable dual projection Xᵖ for a two parameter process X. It will be shown that if Xᵖ exists then it is unique, and that in the presence of a step filtration, Xᵖ exists when X is right continuous, measurable, and has integrable variation.

We begin with the definition of a step filtration.

Step Filtrations (F_t)

Definition 3.1 Let ε > 0. A filtration (F_t) is a right continuous ε-step filtration if F_a = F_{⌊a/ε⌋ε} for every a > 0, where ⌊x⌋ denotes the largest integer that is less than or equal to x.

A filtration (F_t) is a left continuous ε-step filtration if F_a = F_{⌈a/ε⌉ε} for every a > 0, where ⌈x⌉ denotes the smallest integer that is greater than or equal to x.

Remark A step filtration (F_t) is of interest to us because

• it facilitates a property that is assumed of the filtration (G_s), namely that G_S = F_{S−} for every predictable stopping time S for (G_s), and
• it causes (G_s) to be a left continuous step filtration, a property that will help us later to demonstrate the existence of a predictable dual projection.
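The index maps ⌊a/ε⌋ε and ⌈a/ε⌉ε implicit in Definition 3.1 can be sketched directly; the value of ε below is an arbitrary illustrative choice:

```python
import math

EPS = 0.5  # the step size epsilon (an arbitrary choice for illustration)

def right_step_index(a, eps=EPS):
    # Right continuous eps-step filtration: F_a = F_{floor(a/eps)*eps}.
    return math.floor(a / eps) * eps

def left_step_index(a, eps=EPS):
    # Left continuous eps-step filtration: F_a = F_{ceil(a/eps)*eps}.
    return math.ceil(a / eps) * eps

# The index map is constant on [n*eps, (n+1)*eps) in the right continuous case ...
assert right_step_index(1.0) == right_step_index(1.49) == 1.0
# ... and constant on (n*eps, (n+1)*eps] in the left continuous case.
assert left_step_index(1.01) == left_step_index(1.5) == 1.5
# At a grid point the two maps agree.
assert right_step_index(1.5) == left_step_index(1.5) == 1.5
```

A step filtration therefore only "sees" the grid εℕ, which is what makes the left-limit σ-algebras of Proposition 3.2 computable one grid step back.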

Proposition 3.2 Let ε > 0, and assume that (F_t) is an ε-step filtration. Then

• the filtration (G_s) is a left continuous ε-step filtration, and
• for every predictable stopping time S for (G_s) we have G_S = F_{S−}.

Proof We prove the first assertion of the proposition. Let s₀ ∈ ℝ₊.

We have G_{s₀} = F_{s₀−} = σ( ⋃_{s<s₀} F_s )

= F_{s₀}, if s₀ ≠ nε for every n ∈ ℕ,

= F_{(n−1)ε}, if s₀ = nε for some n ∈ ℕ,

since (F_t) is a right continuous (see Notation 1.1 f)) ε-step filtration. Hence, the filtration (G_s) satisfies G_a = G_{⌈a/ε⌉ε} for every a > 0, and G₀ = F₀. As such, (G_s) is a left continuous ε-step filtration.

Next we prove the second assertion of the proposition. Let S be a predictable stopping time for (G_s). We must show that G_S = F_{S−}.

First we show that F_{S−} ⊂ G_S. Let B be a generator of F_{S−}. Then B = A ∩ { S > s } for some s ∈ ℝ₊ and A ∈ F_s. Hence, for every t ∈ ℝ₊ we have

B ∩ { S ≤ t } = A ∩ { S > s } ∩ { S ≤ t }

= ∅, if s ≥ t,

= A ∩ { s < S ≤ t }, if s < t,

∈ G_t.

So B ∈ G_S. Since G_S is a σ-algebra, it follows that F_{S−} ⊂ G_S. Next we show that G_S ⊂ F_{S−}. Let A ∈ G_S. We have

A = ( A ∩ { S ≤ 0 } ) ∪ ⋃_{n=1}^∞ [ A ∩ { S ≤ nε } ∩ { S > (n−1)ε } ]

= B₀ ∪ ⋃_{n=1}^∞ B_n ∩ { S > (n−1)ε },

where B₀ = A ∩ { S ≤ 0 }

∈ G₀, since A ∈ G_S and S is a stopping time for (G_s),

= F₀,

and where B_n = A ∩ { S ≤ nε }, n ∈ ℕ,

∈ G_{nε}, since A ∈ G_S and S is a stopping time for (G_s),

= F_{(n−1)ε}.

We deduce that A ∈ F_{S−}.
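The set decomposition at the heart of the second assertion — A = (A ∩ {S ≤ 0}) ∪ ⋃_n A ∩ {S ≤ nε} ∩ {S > (n−1)ε} — can be checked in a discrete sketch; the sample space and the ε-valued time S below are illustrative assumptions:

```python
# Sketch of the decomposition used in Proposition 3.2: for a positive random
# time S taking values on the grid eps*N, an event A splits into the piece
# A ∩ {S <= 0} together with the slices A ∩ {S <= n*eps} ∩ {S > (n-1)*eps}.
EPS = 1.0
omega = list(range(12))
S = {w: (w % 5) * EPS for w in omega}      # an illustrative eps-valued time
A = {w for w in omega if w % 2 == 0}       # any event

B0 = {w for w in A if S[w] <= 0}
pieces = [{w for w in A if (n - 1) * EPS < S[w] <= n * EPS} for n in range(1, 5)]

recombined = B0.union(*pieces)
assert recombined == A                     # the decomposition recovers A
assert all(p1.isdisjoint(p2)
           for i, p1 in enumerate(pieces) for p2 in pieces[i + 1:])
```

In the proof, each slice B_n ∩ {S > (n−1)ε} is of the generator form of F_{S−} because B_n lands one grid step back, in F_{(n−1)ε}.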

Predictable Dual Projections Xᵖ

We begin by introducing some terms (Dinculeanu 2000, p. 363-390) that we will use in the definition of the predictable dual projection, as well as beyond.

Definition 3.3 Let X : Ω × ℝ²₊ → ℝ be a two parameter process.

3.3a (X_{s,t}) is right continuous if for every s₀, t₀ ∈ ℝ₊ and ω ∈ Ω, we have

lim_{(s,t)→(s₀,t₀), s≥s₀, t≥t₀} X_{s,t}(ω) = X_{s₀,t₀}(ω).

3.3b Let ω ∈ Ω and let s < s′ and t < t′ be elements of ℝ₊. The increment of X(ω) on the rectangle R := (s, s′] × (t, t′], denoted Δ_R X(ω) or Δ_{(s,t),(s′,t′)} X(ω), is defined by

Δ_R X(ω) = X_{s′,t′}(ω) + X_{s,t}(ω) − X_{s,t′}(ω) − X_{s′,t}(ω).

One might think of X_z(ω) as measuring the "area" of the rectangle [0, z], for every z ∈ ℝ²₊. Then, one would see that Δ_R X(ω) delivers the area of the rectangle R.

3.3c (X_{s,t}) is increasing if for every ω ∈ Ω and every z ≤ z′ from ℝ²₊ we have X_z(ω) ≤ X_{z′}(ω).

(X_{s,t}) is incrementally increasing if for every ω ∈ Ω and every z ≤ z′ from ℝ²₊ we have Δ_{z,z′} X(ω) ≥ 0.

3.3d Let ω ∈ Ω and let I, J ⊂ ℝ₊ be intervals. The variation of the function X(ω) on the rectangle I × J, denoted var( X(ω), I × J ), is defined by

var( X(ω), I × J ) = sup Σ_{i,j} | Δ_{(s_i, s_{i+1}] × (t_j, t_{j+1}]} X(ω) |,

where the supremum is taken over all divisions s₀ < s₁ < ⋯ < s_n of points from I and all divisions t₀ < t₁ < ⋯ < t_m of points from J.

3.3e The variation process |X| : Ω × ℝ²₊ → ℝ₊ is given by |X|_z(ω) = var( X(ω), (−∞, z] ) for every ω ∈ Ω and z ∈ ℝ²₊, after extending X to Ω × ℝ² by setting X_z = 0 for every z ∈ ℝ² \ ℝ²₊.

(X_{s,t}) has finite (bounded) variation if for every ω ∈ Ω, the function |X|_.(ω) is finite (bounded).

(X_{s,t}) has integrable variation if

• (X_{s,t}) is F ⊗ B(ℝ²₊)-measurable, and
• the total variation |X|_∞ := sup_z |X|_z is P-integrable.

3.3f Let ω ∈ Ω. Denote by ℛ the ring of subsets of ℝ²₊ generated by the rectangles [0, z]. The measure associated with the function X(ω) is the additive set function m_{X(ω)} : ℛ → ℝ defined by m_{X(ω)}( [0, z] ) = X_z(ω) for every z ∈ ℝ²₊. Note that it follows that m_{X(ω)}( (s, s′] × (t, t′] ) = Δ_{(s,t),(s′,t′)} X(ω) for every 0 ≤ s < s′ and 0 ≤ t < t′.

Remarks If (X_{s,t}) has finite variation, then the variation process |X| is increasing and incrementally increasing. If (X_{s,t}) is incrementally increasing and right continuous, then


for each ω ∈ Ω, the measure m_{X(ω)} is sigma-additive on ℛ. Furthermore, if the process (X_{s,t}) is right continuous and has bounded variation, then for each ω ∈ Ω, the measure m_{X(ω)} can be extended uniquely to a sigma-additive measure on B(ℝ²₊) that has finite variation |m_{X(ω)}| on B(ℝ²₊), and we have |m_{X(ω)}| = m_{|X|(ω)} on ℛ. Therefore, if (X_{s,t}) is right continuous and has integrable variation, then for each ω ∈ Ω and each B(ℝ²₊)-measurable, |m_{X(ω)}|-integrable, real-valued function f : ℝ²₊ → ℝ, the Stieltjes integral ∫ f dm_{X(ω)} is defined and is often written ∫ f dX(ω).

If (X_{s,t}) is right continuous and F ⊗ B(ℝ²₊)-measurable, and has integrable variation, then for any bounded, F ⊗ B(ℝ²₊)-measurable, real-valued process φ, the expectation E[ ∫ φ dX ] is defined and is finite.

Definition 3.4 Let X : Ω × ℝ²₊ → ℝ be a right continuous, F ⊗ B(ℝ²₊)-measurable process with integrable variation |X|. A right continuous, predictable process Y : Ω × ℝ²₊ → ℝ with integrable variation |Y| is called a predictable dual projection for X if for every bounded, F ⊗ B(ℝ²₊)-measurable, real-valued process φ we have

E[ ∫ φ dY ] = E[ ∫ ᵖφ dX ].

The predictable dual projection Y of X is denoted Xᵖ.

The Uniqueness of Xᵖ

We now show that if a predictable dual projection Xᵖ for X exists, then it is unique up to an evanescent set.

Proposition 3.5 Let X : Ω × ℝ²₊ → ℝ be a right continuous, measurable process with integrable variation. Assume that X has a predictable dual projection Y. Then Y is unique up to an evanescent set.

Proof Suppose that X has two predictable dual projections, Y₁ and Y₂. Let Z be a stopping time.

Set A := { (Y₁)_Z 1_{Z<∞} > (Y₂)_Z 1_{Z<∞} } and φ(ω, s, t) := 1_{[0, Z(ω)]}(s, t) 1_A(ω) 1_{Z<∞}(ω). Note that A ∈ F and φ is real-valued, measurable, and bounded. We have E[ ∫ φ dY₁ ] = E[ ∫ ᵖφ dX ] = E[ ∫ φ dY₂ ]. So ∫_A (Y₁)_Z 1_{Z<∞} dP = ∫_A (Y₂)_Z 1_{Z<∞} dP (see Definition 3.3f).

Therefore ∫_A ( (Y₁)_Z 1_{Z<∞} − (Y₂)_Z 1_{Z<∞} ) dP = 0, and since the integrand is strictly positive on A, we conclude that P(A) = 0.

Similarly, setting B := { (Y₁)_Z 1_{Z<∞} < (Y₂)_Z 1_{Z<∞} } and φ′(ω, s, t) := 1_{[0, Z(ω)]}(s, t) 1_B(ω) 1_{Z<∞}(ω) leads to ∫_B ( (Y₂)_Z 1_{Z<∞} − (Y₁)_Z 1_{Z<∞} ) dP = 0, and hence P(B) = 0. So (Y₁)_Z 1_{Z<∞} = (Y₂)_Z 1_{Z<∞} a.s. for every stopping time Z, and in particular for every predictable stopping time Z. By Corollary 2.6, Y₁ = Y₂ outside an evanescent set.

TABLE OF CONTENTS

ACKNOWLEDGMENTS ...................................................... ii

ABSTRACT .............................................................. v

CHAPTER

1 THE CROSS SECTION THEOREM ........................................... 1
     Introduction ...................................................... 1
     Predictable σ-Algebras ............................................ 2
     Stopping Times .................................................... 6
     Projections ....................................................... 7
     π[A] .............................................................. 9
     Sets K_δ ......................................................... 11
     The Cross Section Theorem ........................................ 21

2 PREDICTABLE PROJECTIONS ............................................ 37
     Projections ᵖX ................................................... 37
     The Uniqueness of ᵖX ............................................. 39
     The Existence of ᵖX .............................................. 43

3 PREDICTABLE DUAL PROJECTIONS ....................................... 50
     Step Filtrations (F_t) ........................................... 50
     Predictable Dual Projections Xᵖ .................................. 52
     The Uniqueness of Xᵖ ............................................. 54
     Predictable Dual Projections of Measures ......................... 55
     Processes Associated With Stochastic, ℝ-Valued Measures .......... 58
     The Existence of Xᵖ .............................................. 64

4 VECTOR-VALUED PREDICTABLE DUAL PROJECTIONS ......................... 77
     Predictable Dual Projections Wᵖ .................................. 77
     The Uniqueness of Wᵖ ............................................. 81
     Processes Associated With Stochastic, E-Valued Measures .......... 82
     The Existence of Wᵖ .............................................. 89

5 AN EXTENSION OF THE RADON-NIKODYM THEOREM TO MEASURES
  WITH FINITE SEMIVARIATION .......................................... 94

6 SUMMARY AND CONCLUSIONS ........................................... 106

REFERENCE LIST ...................................................... 107

BIOGRAPHICAL SKETCH ................................................. 108

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

PREDICTABLE PROJECTIONS AND PREDICTABLE DUAL PROJECTIONS OF A TWO PARAMETER STOCHASTIC PROCESS

By

PETER GRAY

August 2006

Chair: Nicolae Dinculeanu
Major Department: Mathematics

The framework of this dissertation consists of a probability space (Ω, F, P); a filtration (F_t)_{t∈ℝ₊}; a filtration (G_s)_{s∈ℝ₊} satisfying G_s = F_{s−} for every s ∈ ℝ₊ and G_S = F_{S−} for every predictable stopping time S for (G_s); a double filtration (F_{s,t})_{s,t∈ℝ₊} such that F_{s,t} = G_s for every t ≥ 0; and a Banach space E. We study initially a real-valued, two parameter stochastic process X : Ω × ℝ²₊ → ℝ, and then we extend some of our results to a vector-valued process Y : Ω × ℝ²₊ → E.

In Chapter 1 we start by defining the predictable σ-algebra 𝒫 of subsets of Ω × ℝ²₊ to be the σ-algebra generated by the left continuous processes X that are adapted to the double filtration (F_{s,t})_{s,t∈ℝ₊}. Then we prove the main result of the chapter, the cross section theorem for sets in 𝒫.

In Chapter 2 we define the predictable projection of a measurable process X : Ω × ℝ²₊ → ℝ to be a predictable process ᵖX : Ω × ℝ²₊ → ℝ such that for every predictable

PAGE 6

stopping time Z we have E [ Xz 1 \3^z] = (^X)z 1 almost surely. Then, using the cross section theorem, we show that the predictable projection is unique up to an evanescent set. In addition, we demonstrate that every bounded, measurable process X has a predictable projection. in Chapter 3 we define the predictable dual projection of a right continuous, measurable process X : Q x integrable variation to be a right continuous, predictable process X^ : Q x k with integrable variation such that for each bounded, measurable process (p :ClxKl R we have E[j Pep dX ] = E[ | (p dX^ ]. Then we show that the predictable dual projection is unique up to an evanescent set. We also establish that the predictable dual projection of the process X exists if the filtration is a step filtration. In Chapters 4 and 5 we turn our attention from real-valued processes X to vectorvalued processes Y. In this setting, our formulations are based not on finite variation and integrable variation, but on finite semivariation and integrable semiva nation. VI


CHAPTER 1
THE CROSS SECTION THEOREM

Introduction

The theory surrounding one parameter stochastic processes has applications in many fields. In finance, for instance, a filtration (ℱ_t)_{t∈ℝ₊} contains the information that is known up to time t about a market; a martingale (X_t)_{t∈ℝ₊} for (ℱ_t) reflects the price of stock options; a predictable process (H_t)_{t∈ℝ₊} houses the number of shares to be held at time t; and a stopping time S for (ℱ_t) indicates when stocks should be sold for optimal profit. Note that a discrete (or step) filtration suffices for good results in many markets.

The predictable projection (ᵖX_t)_{t∈ℝ₊} and the predictable dual projection (Xᵖ_t)_{t∈ℝ₊} of the process (X_t)_{t∈ℝ₊} both play a role in one parameter stochastic theory. For an example of this we retread the finance stage that was set above. The random variable ᵖX_S, which is the predictable projection (ᵖX_t)_{t∈ℝ₊} evaluated at a (predictable) stopping time S, may be regarded as an updated version of the expected selling price E[X_S] of stock options, given the market information ℱ_{S−}.

The goal of this dissertation is to extend the definition and existence of the predictable projection and the predictable dual projection to a two parameter process (X_{s,t})_{s,t∈ℝ₊}. This extension is difficult because, while the set ℝ₊ of positive real numbers is totally ordered, the set ℝ²₊ of ordered pairs of positive real numbers is not. In order to reduce slightly the complexity that we face, we will retain a one parameter flavor: our framework will be built around the double filtration (ℱ_{s,t})_{s,t∈ℝ₊} with ℱ_{s,t} = ℱ_s, where (ℱ_s)_{s∈ℝ₊} is a right continuous, complete filtration.

Predictable σ-Algebras

The main result of this chapter is a cross section theorem for predictable subsets of Ω × ℝ²₊ relative to the double filtration (ℱ_{s,t})_{s,t∈ℝ₊} satisfying ℱ_{s,t} = ℱ_s for s,t ≥ 0. The cross section theorem will be derived with invaluable help from the Monotone Class theorem, which may be found in the text Probabilities and Potential (Dellacherie & Meyer 1975, p. 13-1). In this section we introduce the predictable σ-algebra 𝒫 of subsets of the space Ω × ℝ²₊.

Notation and Terminology 1.1 The following will be used in the sequel.

1.1a (Ω, ℱ, P) is a probability space.

1.1b ℝ₊ is the set of non-negative real numbers. ℕ is the set of natural numbers. ℚ₊ is the set of positive rational numbers. ℝ²₊ is the set ℝ₊ × ℝ₊, and ℚ²₊ is the set ℚ₊ × ℚ₊.

1.1c ℬ(ℝ₊) is the Borel σ-algebra generated by the intervals (s , t] of ℝ₊. ℬ(ℝ²₊) is the Borel σ-algebra generated by the rectangles (s , t] × (u , v] of ℝ²₊.

1.1d A function X : Ω × ℝ²₊ → ℝ is called a two parameter process, and is denoted (X_{s,t}).

1.1e ℱ ⊗ ℬ(ℝ²₊) is the product σ-algebra of ℱ and ℬ(ℝ²₊) on Ω × ℝ²₊.

1.1f (ℱ_t)_{t∈ℝ₊} is a filtration and therefore satisfies
• for each t ≥ 0, ℱ_t is a σ-algebra contained in ℱ,
• ℱ_s ⊂ ℱ_t whenever s ≤ t, and
• ℱ_t = ⋂_{s>t} ℱ_s for each t ≥ 0 (that is, (ℱ_t) is right continuous).
See 1.1j below for another assumption about (ℱ_t).

1.1g A function S : Ω → [0 , ∞] is a stopping time for the filtration (ℱ_t) if {S ≤ t} ∈ ℱ_t for every t ≥ 0. Let S, T be two stopping times. The stochastic interval (S , T] is the set {(ω,t) ∈ Ω × ℝ₊ | S(ω) < t ≤ T(ω)}, while [S , T) is the set {(ω,t) ∈ Ω × ℝ₊ | S(ω) ≤ t < T(ω)}. The stochastic intervals (S , T) and [S , T] are defined in a similar fashion.

1.1h The predictable σ-algebra 𝒱 of subsets of Ω × ℝ₊ is the σ-algebra generated by the sets A × (s , t] and B × {0}, where A ∈ ℱ_s and B ∈ ℱ_0. A stopping time S is predictable if the stochastic interval [S , ∞) is a predictable set.

1.1i Let S be a stopping time. ℱ_S denotes the σ-algebra {A ∈ ℱ | A ∩ {S ≤ t} ∈ ℱ_t for every t ≥ 0}, while ℱ_{S−} denotes the σ-algebra generated by the sets in ℱ_0 together with the sets of the form A ∩ {S > t}, where t ≥ 0 and A ∈ ℱ_t.

1.1j (𝒢_s)_{s∈ℝ₊} is the filtration defined by the following rules.


4 Â• ^0 = ^0 and Â• Qs = for every 5Â’>0. We will write simply {Q,), and we will assume that the filtration (^,) is such that Gs = J^sfor every predictable stopping time S for the filtration (Gs)For example, this is the case when for every 5 e 1 R+, where [sj is the largest integer less than or equal to 5. 1.1k i 3 ^s,t)s,eR+ is a double filtration and therefore satisfies Â• for each s,t> 0 , 3 ^s,t is a cr-algebra contained in 3 ^, and Â• 3 ^sj <= 3 ^u,v if s0. 1.1I Let B cOx B(c 7) is the set | (t(7,j:,r) e B }. ;r[B], called the projection of B, is the set { t(7 e n I {fD,s,i) e B for some s,t e K+ }. 1.1m The point (00, oo) will be denoted by 00. Therefore, the inequality {s,t) < 00 means that 5 < qo and t < 00. I.ln Let g : n ^ [0 , 00] X [0 , 00] be a function. [g] denotes the set { (c;,5,r) g x j g(gj) = (5^f) called the graph of g. We now commence our study of the real-valued two parameter process (Xsj). We begin by defining the predictable

lim_{(s,t)→(s₀,t₀), (s,t)≤(s₀,t₀)} X_{s,t}(ω) = X_{s₀,t₀}(ω). Note that this limit is a pointwise limit.

Set
    Yⁿ := X_{0,0} 1_{{0}×{0}} + Σ_{k=0}^∞ X_{k/n,0} 1_{(k/n,(k+1)/n]×{0}} + Σ_{m=0}^∞ X_{0,m/n} 1_{{0}×(m/n,(m+1)/n]} + Σ_{k,m=0}^∞ X_{k/n,m/n} 1_{(k/n,(k+1)/n]×(m/n,(m+1)/n]},
where 1 is the indicator function, and n ∈ ℕ. Since (X_{s,t}) is left continuous, Yⁿ → X pointwise as n → ∞.

Fix m, n, and k, and consider the process
    R := X_{k/n,m/n} 1_{(k/n,(k+1)/n]×(m/n,(m+1)/n]}.
Since X_{k/n,m/n} is ℱ_{k/n}-measurable, R is the pointwise limit of processes of the form
    Σ_{i=1}^p a_i 1_{A_i×(k/n,(k+1)/n]×(m/n,(m+1)/n]},
where for every index i we have a_i ∈ ℝ and A_i ∈ ℱ_{k/n}. But for every index i we have A_i × (k/n , (k+1)/n] = (S_i , T_i], where
    S_i = { k/n on A_i; ∞ on A_iᶜ },
which is a stopping time for (𝒢_s); and
    T_i = { (k+1)/n on A_i; ∞ on A_iᶜ },
which is also a stopping time for (𝒢_s).

Therefore, R is the pointwise limit of processes of the form Σ_{i=1}^p a_i 1_{(S_i,T_i]×(m/n,(m+1)/n]}, where for every index i we have a_i ∈ ℝ, and S_i, T_i are stopping times for (𝒢_s).

By considering in a similar manner the processes X_{0,m/n} 1_{{0}×(m/n,(m+1)/n]} and X_{k/n,0} 1_{(k/n,(k+1)/n]×{0}}, we obtain
    𝒫 ⊂ σ{ (S₁ , T₁] × (r₁ , s₁], A × {0} × (r₂ , s₂], (S₂ , T₂] × {0} },
where r₁, r₂, s₁, s₂ ∈ ℝ₊, A ∈ 𝒢_0, and S₁, S₂, T₁, T₂ are stopping times for (𝒢_s);
    = σ{ (S , ∞) × [0 , r₁], A × {0} × [0 , r₂] },
where r₁, r₂ ∈ ℝ₊, A ∈ 𝒢_0, and S is a stopping time for (𝒢_s). In fact we have 𝒫 = σ{ (S , ∞) × [0 , r₁], A × {0} × [0 , r₂] } because the sets (S , ∞) × [0 , r₁] and A × {0} × [0 , r₂] are elements of 𝒫.

Stopping Times

Definition 1.4 Let Z : Ω → [0 , ∞] × [0 , ∞] be a function.

1.4a The function Z is a stopping time for the filtration (ℱ_{s,t}) if {Z ≤ z} ∈ ℱ_z for every z ∈ ℝ²₊.

1.4b The stopping time Z = (S,T) is predictable if
• S is a predictable stopping time for (𝒢_s), and
• {S < ∞} ⊂ {T < ∞} (that is, T(ω) = ∞ implies S(ω) = ∞).

Proposition 1.5 Let Z_n = (S_n , T_n), n ∈ ℕ, be a sequence of stopping times for (ℱ_{s,t}) such that the sequence (S_n) is increasing. Then Z := (sup_n S_n , limsup_n T_n) is a stopping time for (ℱ_{s,t}).

Proof Let (s,t) ∈ ℝ²₊. We must show that {Z ≤ (s,t)} ∈ ℱ_{s,t}. For each index n and each (r,u) ∈ ℝ²₊ we have
    {Z_n ≤ (r,u)} = {S_n ≤ r} ∩ {T_n ≤ u} ∈ ℱ_{r,u}
by hypothesis.

Set T′_n := sup_{i≥n} T_i. Then limsup_n T_n = lim_n T′_n.

Let (ε_p) be a sequence of positive numbers decreasing to 0, and let N ∈ ℕ. We have
    {Z ≤ (s,t)} = {(sup_n S_n , lim_n T′_n) ≤ (s,t)}
    = {sup_n S_n ≤ s} ∩ {lim_n T′_n ≤ t}
    = ⋂_{p=N}^∞ ⋃_{k=1}^∞ [ ⋂_{n=1}^∞ ⋂_{m=k}^∞ {S_n ≤ s} ∩ {T_m ≤ t + ε_p} ]
    = ⋂_{p=N}^∞ ⋃_{k=1}^∞ [ ⋂_{m=k}^∞ {Z_m ≤ (s , t + ε_p)} ] since (S_n) is increasing,
    ∈ ℱ_{s,t+ε_N},
N ∈ ℕ being arbitrary. Since (ℱ_{s,t}) is right continuous in t, we conclude that {Z ≤ (s,t)} ∈ ⋂_N ℱ_{s,t+ε_N} = ℱ_{s,t}.
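As a small worked instance of Definition 1.4a (a sketch, not in the original text, using the framework assumption ℱ_{s,t} = ℱ_s), one can check directly that a pair with a constant second coordinate is a stopping time. Here S is assumed to be a stopping time for (𝒢_s) and r ∈ ℝ₊ a constant.

```latex
% Sketch: S a stopping time for (G_s), r in R_+ a constant.
% Claim: Z := (S, r) is a stopping time for (F_{s,t}).
\{Z \le (s,t)\}
  = \{S \le s\} \cap \{r \le t\}
  = \begin{cases}
      \{S \le s\} & \text{if } r \le t,\\
      \varnothing & \text{if } r > t.
    \end{cases}
```

In either case the set belongs to 𝒢_s ⊂ ℱ_s = ℱ_{s,t}, so {Z ≤ (s,t)} ∈ ℱ_{s,t} for every (s,t) ∈ ℝ²₊. This is exactly the computation carried out for Z := (S + ε , r) in the proof of Proposition 1.12.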


Proposition 1.6 Let n ∈ ℕ, and let Z_i = (S_i , T_i), i = 1 … n, be a finite set of stopping times for (ℱ_{s,t}) such that each S_i is a stopping time for (𝒢_s). Set
    A_i := {inf_k S_k = S_i} ∩ {S_i ≠ S_1} ∩ {S_i ≠ S_2} ∩ … ∩ {S_i ≠ S_{i−1}}, i = 1 … n,
and define Z := Z_i on A_i; we write Z =: min_i(Z_i) and Z = (S,T). Then S is a stopping time for (𝒢_s), Z is a stopping time for (ℱ_{s,t}), and if T_i(ω) = ∞ ⇒ S_i(ω) = ∞ for each index i, then T(ω) = ∞ ⇒ S(ω) = ∞.

Proof We first prove that S is a stopping time for (𝒢_s). Note that the sets A_i, i = 1 … n, are pairwise disjoint. Further, we have A_i ∈ 𝒢_{S_i} for each i (Metivier 1982, p. 20), and ⋃_{i=1}^n A_i = Ω. It follows that S is a stopping time for (𝒢_s).

Next we prove that Z is a stopping time for (ℱ_{s,t}). Let (s,t) ∈ ℝ²₊. We must show that {Z ≤ (s,t)} ∈ ℱ_{s,t}. By hypothesis, {Z_i ≤ (s,t)} ∈ ℱ_{s,t} for each i. So we have
    {Z ≤ (s,t)} = ⋃_{i=1}^n ({Z_i ≤ (s,t)} ∩ A_i)
    = ⋃_{i=1}^n ({S_i ≤ s} ∩ {T_i ≤ t} ∩ A_i)
    ∈ ℱ_{s,t}.

Lastly we prove the third assertion of the proposition. We have
    T(ω) = ∞ ⇒ T_i(ω) = ∞ for the index i such that ω ∈ A_i,
    ⇒ S_i(ω) = ∞ by hypothesis,
    ⇒ S(ω) = ∞ since ω ∈ A_i.
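As a worked special case (not in the original text), the construction of Proposition 1.6 for n = 2 reads:

```latex
% n = 2: two stopping times Z_1 = (S_1,T_1), Z_2 = (S_2,T_2).
A_1 = \{\textstyle\inf_k S_k = S_1\} = \{S_1 \le S_2\}, \qquad
A_2 = \{\textstyle\inf_k S_k = S_2\} \cap \{S_2 \ne S_1\} = \{S_2 < S_1\},
```

so that Ω = A₁ ∪ A₂ disjointly, and Z := min_i(Z₁ , Z₂) equals Z₁ on A₁ and Z₂ on A₂. Its first coordinate is S = S₁ ∧ S₂, while its second coordinate is T₁ on A₁ and T₂ on A₂, which in general differs from T₁ ∧ T₂.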


Projections π[A]

In this section we establish a key result concerning the projection π[A] of an ℱ ⊗ ℬ(ℝ²₊)-measurable set A. It will emerge that π[A] is ℱ-measurable.

Proposition 1.7 Let A ∈ ℱ ⊗ ℬ(ℝ²₊) and let g be a function such that π[[g]] ∈ ℱ and [g] ⊂ A on π[A] ∩ π[[g]]. Then π[[g] ∩ A] ∈ ℱ.

Proof Let Λ be the collection of sets B ∈ ℱ ⊗ ℬ(ℝ²₊) such that for any function h : Ω → [0 , ∞] × [0 , ∞] satisfying π[[h]] ∈ ℱ and [h] ⊂ B on π[B] ∩ π[[h]] we have π[[h] ∩ B] ∈ ℱ.

Let 𝓡 be the ring generated by the sets (C × (s,t] × (u,v]) ∩ (Ω × ℝ²₊) and Ω × ℝ²₊, where C ∈ ℱ and s,t,u,v ∈ ℝ.

First we show that Λ contains 𝓡. Let B ∈ 𝓡 and let a function h be such that π[[h]] ∈ ℱ and [h] ⊂ B on π[B] ∩ π[[h]]. Without loss of generality we consider B = C × (s,t] × (u,v], where C ∈ ℱ and s < t and u < v. We have π[[h] ∩ B] = π[[h]] ∩ C ∈ ℱ. Hence, B ∈ Λ.

Next we show that Λ is a monotone class. Let (B_n) be a monotone sequence from Λ. Assume first that (B_n) is increasing. Set B := ⋃_n B_n. Let a function h₁ satisfy π[[h₁]] ∈ ℱ and [h₁] ⊂ B on π[B] ∩ π[[h₁]]. Let h′ : Ω → [0 , ∞] × [0 , ∞] be a function that satisfies the conditions set out below. There are four conditions, and they involve the function h₁. We require that
• π[[h′]] = π[[h₁]],
• h′ = h₁ on (π[B])ᶜ,
• [h′] ⊂ B₁ on π[B₁] ∩ π[[h₁]], and
• [h′] ⊂ B_i on (π[B_i] \ π[B_{i−1}]) ∩ π[[h₁]] for i = 2, 3, 4, … .
Such a function h′ exists. Note that we have π[[h′]] ∈ ℱ and [h′] ⊂ B_i on π[B_i] ∩ π[[h′]] for i ∈ ℕ. Therefore for each index i we have π[[h′] ∩ B_i] ∈ ℱ (since each B_i is in Λ). Hence we have
    π[[h₁] ∩ B] = π[[h′] ∩ B] from the definition of h′,
    = π[[h′] ∩ ⋃_i B_i]
    = ⋃_i π[[h′] ∩ B_i]
    ∈ ℱ.

Next, assume that (B_n) is decreasing. Set B′ := ⋂_n B_n. Let a function h₂ satisfy π[[h₂]] ∈ ℱ and [h₂] ⊂ B′ on π[B′] ∩ π[[h₂]]. Let h″ : Ω → [0 , ∞] × [0 , ∞] be a function that satisfies the conditions set out below. We require that
• π[[h″]] = π[[h₂]],
• h″ = h₂ on (π[B₁])ᶜ ∪ π[B′], and
• [h″] ⊂ B_i on (π[B_i] \ π[B_{i+1}]) ∩ π[[h₂]] for i ∈ ℕ.
Such a function h″ exists. Note that we have π[[h″]] ∈ ℱ and [h″] ⊂ B_i on π[B_i] ∩ π[[h″]] for i ∈ ℕ.


Therefore for each index i we have π[[h″] ∩ B_i] ∈ ℱ (since each B_i is in Λ). Hence we have
    π[[h₂] ∩ B′] = π[[h″] ∩ B′] from the definition of h″,
    = π[[h″] ∩ ⋂_i B_i]
    = ⋂_i π[[h″] ∩ B_i] (since [h″] is a graph),
    ∈ ℱ.

We have shown that Λ is a monotone class which contains the ring of generators of ℱ ⊗ ℬ(ℝ²₊). Because this ring contains the whole space Ω × ℝ²₊, we conclude from the Monotone Class theorem that Λ is equal to ℱ ⊗ ℬ(ℝ²₊). The statement of the proposition is now seen to be true.

Corollary 1.8 Let A ∈ ℱ ⊗ ℬ(ℝ²₊). Then π[A] ∈ ℱ.

Proof We are able to find a function h : Ω → [0 , ∞] × [0 , ∞] such that π[[h]] = Ω and [h] ⊂ A on π[A] ∩ π[[h]]. From Proposition 1.7 we obtain π[[h] ∩ A] ∈ ℱ. For such a function h we have π[[h] ∩ A] = π[A]. Hence we have π[A] ∈ ℱ.

Sets 𝒦_δ

We continue to prepare for the Cross Section theorem by introducing a special collection 𝒦_δ of predictable sets with compact cross sections.

Definition 1.9 Let K ⊂ Ω × ℝ²₊.

1.9a We say that K has compact cross sections if for each ω ∈ π[K], the cross section K(ω) ⊂ ℝ²₊ is compact.

1.9b The cross sectional closure of K, denoted K*, is given by K*(ω) = (K(ω))* for every ω ∈ Ω, where (K(ω))* denotes the closure of the cross section K(ω) ∈ ℬ(ℝ²₊). (The closure includes all adherent points.)

Proposition 1.10 Let N ∈ ℕ, and let (K_n) be a sequence of subsets of Ω × ℝ²₊ with compact cross sections. Assume that each set K_n has the following properties: π[B* ∩ K_n] ∈ ℱ for every ℱ ⊗ ℬ(ℝ²₊)-measurable set B; and there is a stopping time Z_n = (S_n , T_n) such that
• S_n is a predictable stopping time for (𝒢_s),
• [Z_n] ⊂ K_n,
• {Z_n < ∞} = π[K_n],
• T_n(ω) = ∞ ⇒ S_n(ω) = ∞ for every ω ∈ Ω, and
• (s,t) ∈ K_n(ω) ⇒ s ≥ S_n(ω) for every ω ∈ Ω.

Set K := ⋃_{n=1}^N K_n and K′ := ⋂_{n=1}^∞ K_n.

Then K and K′ both have compact cross sections, and K has all the above properties of the sets K_n. Further, if the sequence (K_n) is decreasing, then the set K′ also has all the above properties.

Proof The finite union and the countable intersection of compact subsets of ℝ²₊ are compact. Therefore, K and K′ have compact cross sections.

Next, let B ∈ ℱ ⊗ ℬ(ℝ²₊). We have
    π[B* ∩ K] = π[B* ∩ ⋃_{n=1}^N K_n] = ⋃_{n=1}^N π[B* ∩ K_n] ∈ ℱ,
and
    π[B* ∩ K′] = π[B* ∩ ⋂_{n=1}^∞ K_n]
    = ⋂_{n=1}^∞ π[B* ∩ K_n] if (K_n) is decreasing, since each set B* ∩ K_n has compact cross sections;
    ∈ ℱ.

We now show that K has the second property set out in the proposition. Set Z := min_i(Z_1 , … , Z_N). (See Proposition 1.6 for the definition of "min_i".) From Proposition 1.6 we know that Z is a stopping time, and that if we write Z = (S,T) then S is a stopping time for (𝒢_s). Note that the stopping time S is a predictable stopping time because S is the minimum of finitely many predictable stopping times S_n, n = 1 … N.

Next we show that [Z] ⊂ K. Let (ω, Z(ω)) ∈ [Z]. Then Z(ω) < ∞. There is an index n such that Z(ω) = Z_n(ω). From this equality we deduce that Z_n(ω) < ∞. By hypothesis we have (ω, Z_n(ω)) ∈ K_n. Hence we have (ω, Z(ω)) = (ω, Z_n(ω)) ∈ K_n ⊂ K.

We now show that {Z < ∞} = π[K]. We have
    π[K] = π[⋃_{n=1}^N K_n]
    = ⋃_{n=1}^N π[K_n]
    = ⋃_{n=1}^N {Z_n < ∞} by a property of the sets K_n,
    = ⋃_{n=1}^N ({S_n < ∞} ∩ {T_n < ∞})
    = ⋃_{n=1}^N {S_n < ∞} by a property of the function Z_n,
    = {S < ∞}
    = {Z < ∞}, since T(ω) = ∞ ⇒ S(ω) = ∞ by Proposition 1.6, so that {S < ∞} ⊂ {T < ∞}.

From Proposition 1.6 we also have T(ω) = ∞ ⇒ S(ω) = ∞ for every ω ∈ Ω.

Lastly we show that (s,t) ∈ K(ω) implies that s ≥ S(ω) for every ω ∈ Ω. We have
    (s,t) ∈ K(ω) ⇒ (ω,s,t) ∈ K
    ⇒ (ω,s,t) ∈ K_n for some index n,
    ⇒ s ≥ S_n(ω) by hypothesis,
    ⇒ s ≥ S(ω) since S = inf_{n=1…N} S_n.

To complete the proof, assume that the sequence (K_n) is decreasing. We verify that the set K′ has all five properties that were listed. Let Z := (sup_n S_n , limsup_n T_n). Note that Z is a stopping time. To see this, it is enough to show (by Proposition 1.5) that the sequence (S_n) is increasing. Let ω ∈ Ω and n₀ ∈ ℕ, and assume first that Z_{n₀+1}(ω) < ∞. Then (ω, S_{n₀+1}(ω), T_{n₀+1}(ω)) ∈ K_{n₀+1} by a property of the set K_{n₀+1}.


But we have the containment K_{n₀+1} ⊂ K_{n₀}. So we have (ω, S_{n₀+1}(ω), T_{n₀+1}(ω)) ∈ K_{n₀}. By a property of the set K_{n₀} we have S_{n₀+1}(ω) ≥ S_{n₀}(ω).

Next, assume that Z_{n₀+1}(ω) = ∞. Then S_{n₀+1}(ω) = ∞ or T_{n₀+1}(ω) = ∞. We have S_{n₀+1}(ω) = ∞ by a property of the stopping time Z_{n₀+1}. Hence S_{n₀+1}(ω) ≥ S_{n₀}(ω), and we have shown that Z is a stopping time.

Now note that the stopping time sup_n S_n is a predictable stopping time for (𝒢_s). This follows since each S_n is a predictable stopping time for (𝒢_s), and since (S_n) is increasing.

We now prove that [Z] ⊂ K′. Let Z(ω) < ∞. We must show that (ω, Z(ω)) ∈ K′. Since Z(ω) < ∞, we have sup_n S_n(ω) < ∞. Therefore S_n(ω) < ∞ for every index n. It follows from a property of K_n that Z_n(ω) < ∞, and hence that (ω, Z_n(ω)) ∈ K_n, for every index n. Choose a subsequence (n_k) such that T_{n_k}(ω) → limsup_n T_n(ω); then Z_{n_k}(ω) → Z(ω). By the compactness of the cross sections involved, it follows that (ω, Z(ω)) = lim_k (ω, Z_{n_k}(ω)) belongs to each of the sets K_{n_k}, k ∈ ℕ. So we have
    (ω, Z(ω)) ∈ ⋂_k K_{n_k} = K′.

We now show that {Z < ∞} = π[K′]. First we establish that {Z < ∞} = ⋂_n {Z_n < ∞}.

In fact, let ω ∈ {Z < ∞}. Suppose there is a number n₀ ∈ ℕ such that Z_{n₀}(ω) = ∞. Then either S_{n₀}(ω) = ∞ or T_{n₀}(ω) = ∞. Since T_{n₀}(ω) = ∞ ⇒ S_{n₀}(ω) = ∞, we obtain S_{n₀}(ω) = ∞. Since (S_n) is increasing, we have S_n(ω) = ∞ for each n ≥ n₀. It follows that sup_n S_n(ω) = ∞. Therefore Z(ω) = (sup_n S_n(ω) , limsup_n T_n(ω)) is not finite, and we have reached a contradiction. Because of this contradiction, we conclude that Z_n(ω) < ∞ for every n ∈ ℕ. Hence, we have {Z < ∞} ⊂ ⋂_n {Z_n < ∞}.

Next let ω ∈ ⋂_n {Z_n < ∞}. Then for each n we have Z_n(ω) < ∞. Note for each n we have [Z_n] ⊂ K_n. Therefore for each n we have (ω, Z_n(ω)) ∈ K_n ⊂ K₁.


Since K₁ has compact cross sections, there is a number M ∈ ℝ²₊ (which depends on ω) such that (s,t) ≤ M for every (s,t) ∈ K₁(ω). Thus, for each n we have Z_n(ω) ≤ M. Consequently we have
    Z(ω) = (sup_n S_n(ω) , limsup_n T_n(ω)) ≤ M.
Accordingly, we have ω ∈ {Z < ∞}. Hence, we have ⋂_n {Z_n < ∞} ⊂ {Z < ∞}, and therefore {Z < ∞} = ⋂_n {Z_n < ∞} = ⋂_n π[K_n] = π[K′], since (K_n) is decreasing and each K_n has compact cross sections.

Next we show that limsup_n T_n(ω) = ∞ ⇒ sup_n S_n(ω) = ∞ for every ω ∈ Ω. In fact for each ω ∈ Ω we have
    limsup_n T_n(ω) = ∞ ⇒ Z(ω) is not finite
    ⇒ Z_{n₀}(ω) = ∞ for some index n₀, since {Z < ∞} = ⋂_n {Z_n < ∞},
    ⇒ either S_{n₀}(ω) = ∞ or T_{n₀}(ω) = ∞
    ⇒ S_{n₀}(ω) = ∞ (as T_{n₀}(ω) = ∞ ⇒ S_{n₀}(ω) = ∞),
    ⇒ sup_n S_n(ω) = ∞.


Lastly we prove that (s,t) ∈ K′(ω) ⇒ s ≥ sup_n S_n(ω) for every ω ∈ Ω. In fact, for each ω ∈ Ω we have
    (s,t) ∈ K′(ω) ⇒ (ω,s,t) ∈ K′ = ⋂_{n=1}^∞ K_n
    ⇒ (ω,s,t) ∈ K_n for every n,
    ⇒ (s,t) ∈ K_n(ω) for every n,
    ⇒ s ≥ S_n(ω) for every n by a property of K_n,
    ⇒ s ≥ sup_n S_n(ω).

Definition 1.11 We define the set 𝒦 to be the collection of finite unions ⋃_{i=1}^N [S_i + ε_i , T_i ∧ n_i] × [r_i , r′_i], where n_i, N ∈ ℕ, ε_i > 0, r_i, r′_i ∈ ℝ₊, and S_i, T_i are stopping times for (𝒢_s), for every index i. We define the set 𝒦_δ to be the collection of countable intersections of sets from 𝒦.

Proposition 1.12 The set 𝒦_δ is closed under finite union and countable intersection. Further, the elements of 𝒦_δ have all the properties that were presented in Proposition 1.10.

Proof It is evident that 𝒦_δ is closed under countable intersection. Next we show that 𝒦_δ is closed under finite union. Let M ∈ ℕ and let (K_i)_{i=1…M} be a finite family from 𝒦_δ. For each index i we may write K_i = ⋂_{j=1}^∞ K_{i,j}, where each set K_{i,j} is an element of 𝒦. Then we have
    ⋃_{i=1}^M K_i = ⋃_{i=1}^M ⋂_{j=1}^∞ K_{i,j}
    = ⋂_{j₁,j₂,…,j_M=1}^∞ (K_{1,j₁} ∪ K_{2,j₂} ∪ … ∪ K_{M,j_M}),
which is a countable intersection of elements of 𝒦, since each set K_{1,j₁} ∪ K_{2,j₂} ∪ … ∪ K_{M,j_M} is an element of 𝒦. Accordingly, the set ⋃_{i=1}^M K_i is an element of 𝒦_δ.

We now prove that the elements of 𝒦_δ possess all the properties that were presented in Proposition 1.10. Let S, T be stopping times for (𝒢_s), let ε > 0, let n ∈ ℕ, and let r, r′ ∈ ℝ₊. In view of Proposition 1.10, it is enough to show that the set K₀ := [S + ε , T ∧ n] × [r , r′] possesses all the aforementioned properties. Without loss of generality, we assume that S + ε ≤ T ∧ n and r ≤ r′.

Let B ∈ ℱ ⊗ ℬ(ℝ²₊). We will show that π[B* ∩ K₀] ∈ ℱ. Let (δ_m) be a sequence of positive numbers decreasing to 0. For each index m, denote by K_m the set [S + ε , T ∧ n + δ_m] × [r − δ_m , r′ + δ_m], and by K′_m the set (S + ε − δ_m , T ∧ n + δ_m) × (r − δ_m , r′ + δ_m).

We have π[B* ∩ K₀] = ⋂_{m=1}^∞ π[B ∩ K_m].

In fact, we have
    ⋂_{m=1}^∞ π[B ∩ K_m] ⊂ ⋂_{m=1}^∞ π[B* ∩ K_m] since B ⊂ B*,
    = π[B* ∩ ⋂_{m=1}^∞ K_m] since (K_m) is decreasing, and since each set B* ∩ K_m has compact cross sections.


This last set is equal to the projection π[B* ∩ K₀]. On the other hand, for each index m we have
    π[B* ∩ K₀] ⊂ π[B* ∩ K′_m] since K₀ ⊂ K′_m,
    = π[B ∩ K′_m] since every cross section of the set K′_m is an open rectangle when S + ε < ∞,
    ⊂ π[B ∩ K_m] since K′_m ⊂ K_m.

Therefore, π[B* ∩ K₀] ⊂ ⋂_{m=1}^∞ π[B ∩ K_m].

So we have π[B* ∩ K₀] = ⋂_{m=1}^∞ π[B ∩ K_m] ∈ ℱ by Corollary 1.8, since K_m ∈ ℱ ⊗ ℬ(ℝ²₊) for every index m.

We now reveal the remaining properties of the set K₀. Set Z := (S + ε , r). Note that Z is a stopping time. In fact, let (s,t) ∈ ℝ²₊. Then
    {Z ≤ (s,t)} = {S + ε ≤ s} if r ≤ t, and {Z ≤ (s,t)} = ∅ otherwise.
In either case we have {Z ≤ (s,t)} ∈ 𝒢_s ⊂ ℱ_{s,t}.

We complete the proof by making the following five observations. The stopping time S + ε is a predictable stopping time since ε > 0. The graph [Z] is a subset of K₀, since by assumption S + ε ≤ T ∧ n and r ≤ r′. We have
    {Z < ∞} = {S + ε < ∞}
    = π[[S + ε , T ∧ n]] since S + ε ≤ T ∧ n,
    = π[K₀].


For each ω ∈ Ω, the implication r = ∞ ⇒ S(ω) + ε = ∞ holds vacuously, since r < ∞. For each ω ∈ Ω we have
    (s,t) ∈ K₀(ω) ⇒ s ∈ [S(ω) + ε , T(ω) ∧ n] and t ∈ [r , r′]
    ⇒ s ≥ (S + ε)(ω).

The Cross Section Theorem

We begin with some important precursors to the cross section theorem for predictable subsets of Ω × ℝ²₊.

Definition 1.13 Let 𝓡 be the ring of subsets of ℝ²₊ generated by the sets (z , z′] such that z, z′ ∈ ℝ²₊ with z ≤ z′. The measure μ is the set function from 𝓡 into ℝ₊ that is given by μ((z , z′]) = the area of the rectangle (z , z′] for every z ≤ z′, and which is additively extended to 𝓡.

Remark The measure μ is the Lebesgue measure on ℝ²₊ and can be extended to a sigma-additive measure on ℬ(ℝ²₊) with values in [0 , ∞]. We still denote by μ this sigma-additive extension.

Lemma 1.14 Let B ∈ ℱ ⊗ ℬ(ℝ²₊) and ω ∈ Ω. Then B(ω) ∈ ℬ(ℝ²₊).

Proof Let F ∈ ℱ, z, z′ ∈ ℝ²₊, and let B₁, B₂, and (B_n) be subsets of Ω × ℝ²₊. Then (F × (z , z′])(ω) ∈ ℬ(ℝ²₊), while (B₁ − B₂)(ω) = B₁(ω) − B₂(ω) and (⋃_n B_n)(ω) = ⋃_n B_n(ω). Hence the collection of sets B ⊂ Ω × ℝ²₊ with B(ω) ∈ ℬ(ℝ²₊) is a σ-algebra containing the generators of ℱ ⊗ ℬ(ℝ²₊), and the lemma follows.

Notation 1.15 Let A, B ∈ ℱ ⊗ ℬ(ℝ²₊), λ ∈ ℝ₊, and z ∈ ℝ²₊. We denote by AB_{λ,z} the set
    {ω ∈ Ω | μ((A ∩ (Ω × [0 , z]))(ω)) ≥ λ μ((B ∩ (Ω × [0 , z]))(ω))}.
When A ⊂ B and 0