Title: Error bound for polynomial and spline interpolation
Permanent Link: http://ufdc.ufl.edu/UF00102771/00001
 Material Information
Title: Error bound for polynomial and spline interpolation
Physical Description: Book
Language: English
Creator: Howell, Gary Wilbur, 1951-
Copyright Date: 1986
 Record Information
Bibliographic ID: UF00102771
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: ltuf - AEK1879
oclc - 15293463

Full Text















ERROR BOUNDS FOR POLYNOMIAL AND SPLINE INTERPOLATION


By

GARY WILBUR HOWELL





















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


1986





Copyright 1986

by

Gary Wilbur Howell





To my wife, Nadia





ACKNOWLEDGEMENTS


I wish to express my sincerest appreciation to Dr.

Arun Varma for his research counseling and assistance

throughout my graduate school years. I wish also to thank

Drs. David Drake, Nicolae Dinculeanu, and Soo Bong Chae,

for their teaching and for encouraging me to pursue the

doctorate in mathematics, as well as Drs. Vasile Popov and

A. I. Khuri for their kindness in serving on my committee.

Finally of course, my parents and wife deserve rather more

thanks than can be easily expressed.



























































































TABLE OF CONTENTS


ACKNOWLEDGEMENTS

ABSTRACT

CHAPTER

ONE     INTRODUCTION
        Lagrange and Hermite-Fejér Interpolation
        Optimal Error Bounds for Two Point Hermite Interpolation
        Birkhoff Interpolation
        Polynomial Approximation
        Spline Approximation
        Parabolic Spline Interpolation
        Optimal Error Bounds for Cubic Spline Interpolation

TWO     BEST ERROR BOUNDS FOR DERIVATIVES OF TWO POINT
        LIDSTONE POLYNOMIALS
        Introduction and Statement of Main Theorem
        Preliminaries
        Proof of Theorem 2.1

THREE   Introduction and Statement of Theorems
        Proof of Theorem 3.1

FOUR    A QUARTIC SPLINE
        Introduction and Statement of Theorems
        Proof of Lemma 4.1
        Proof of Theorem 4.1
        Proof of Theorem 4.2

FIVE    IMPROVED ERROR BOUNDS FOR THE PARABOLIC SPLINE
        Introduction and Statement of Theorems
        Proof of Theorem 5.1
        Proof of Theorem 5.2
        Proof of Theorem 5.3

SIX     CONCLUDING REMARKS

REFERENCES

BIOGRAPHICAL SKETCH





Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy


ERROR BOUNDS FOR POLYNOMIAL AND SPLINE INTERPOLATION

By

Gary Wilbur Howell

August 1986

Chairman: Dr. Arun K. Varma
Major Department: Department of Mathematics

The present dissertation is motivated by a desire to

have a more precise knowledge of asymptotic approximation

error than that given by best order of approximation. It

owes its inspiration to a paper by G. Birkhoff and A. Priver

concerning error bounds for derivatives of Hermite

interpolation and a paper of C. A. Hall and W. W. Meyer

concerning error bounds for cubic splines.

In Chapter One we consider well known results

concerning interpolation, polynomial approximation and

error analysis of spline approximation. The results given

here are meant to provide a context for the theorems given

in later chapters. In Chapters Two and Three we consider

the problem of best error bounds for derivatives in two

point Birkhoff interpolation problems.







Chapter Four presents the problems of existence,

uniqueness, explicit representation, and the problem of

convergence for fourth degree splines. Moreover we also

consider the problem of optimal pointwise error bounds for

functions f ∈ C^(5)[0,1]. In Chapter Five our main object

is to sharpen the error bounds obtained earlier by Marsden

concerning quadratic spline interpolation. By doing so we

obtain in some special cases error bounds that are in fact

optimal.







CHAPTER ONE
INTRODUCTION


The purpose of this chapter is to provide a context

for the results derived in succeeding chapters. In order

to show some of the important achievements in

approximation by polynomials, we discuss briefly the

Lagrange and Hermite-Fejér interpolations, which match a

given function at any finite number of distinct points.

After exploring the question of computational stability of

a given interpolation, we discuss in some detail the

problem of best order of approximation by polynomials as

initiated by S. N. Bernstein [1912], D. Jackson [1930],

and A. Zygmund [1968].

In contrast to high order approximation by a single

polynomial, we next consider in great detail the problem

of approximating a given function f(x) defined on [a,b]

by the interpolatory piecewise polynomials known as

splines. Special attention is given to the problem of

approximating by piecewise cubic and piecewise parabolic

splines. The study of these splines motivates us to also

study two point Hermite and Birkhoff interpolations.







































































Lagrange and Hermite-Fejér Interpolation

Let X denote an infinite triangular matrix with all

entries in [-1, 1]:

(1.1.1)  X :  x_{0,0}
              x_{0,1}  x_{1,1}
              . . . . . . . .
              x_{0,n}  x_{1,n}  ...  x_{n,n}
              . . . . . . . . . . . . . . .

We denote by L_n[f,x;X] the Lagrange polynomial of interpolation of degree ≤ n which coincides with f(x) at the nodes x_{kn} (k = 0, 1, ..., n). Then

(1.1.2)  L_n[f,x;X] = sum_{k=0}^{n} f(x_{kn}) l_{kn}(x),

where

(1.1.3)  l_{kn}(x) = ω_n(x) / [(x − x_{kn}) ω_n'(x_{kn})],

         ω_n(x) = prod_{k=0}^{n} (x − x_{kn}).

It is known from the results of G. Faber and S. N.

Bernstein that no matrix X is effective for the whole

class C of functions continuous on [-1, 1]. Bernstein showed that for every X there exists a function f_0 ∈ C[-1, 1] and a point x_0 in [-1, 1] such that

(1.1.4)  limsup_{n → ∞} |L_n[f_0, x_0; X]| = ∞.
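The Lagrange formula and the fundamental polynomials (1.1.3) are easy to exercise directly. The short sketch below (the helper names are ours) evaluates l_{kn} via the equivalent product form and confirms that interpolation at three nodes reproduces any quadratic exactly, as uniqueness requires.

```python
def lagrange_basis(nodes, k, x):
    # l_k(x) = prod_{j != k} (x - x_j)/(x_k - x_j), the product form of (1.1.3)
    out = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            out *= (x - xj) / (nodes[k] - xj)
    return out

def lagrange_interp(f, nodes, x):
    # L_n[f, x] = sum_k f(x_k) l_k(x)
    return sum(f(xk) * lagrange_basis(nodes, k, x)
               for k, xk in enumerate(nodes))

# Three nodes determine L_2 uniquely, so any quadratic is reproduced.
nodes = [-1.0, 0.0, 1.0]
f = lambda x: 3 * x * x - 2 * x + 1
print(abs(lagrange_interp(f, nodes, 0.37) - f(0.37)))   # ~ 0 (rounding only)
```

The same helpers work for any finite set of distinct nodes; the divergence phenomenon above concerns the behavior as the number of nodes grows.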

L. Fejér [1916] showed that if instead of Lagrange interpolation we consider the Hermite-Fejér interpolation polynomials, the situation changes. The Hermite-Fejér polynomials H_{2n+1}[f,x;X] are of degree ≤ 2n + 1 and are uniquely determined by




































































(1.1.5)  H_{2n+1}[f,x_{kn};X] = f(x_{kn}),   H_{2n+1}'[f,x_{kn};X] = δ_{kn},

where the δ_{kn} are arbitrary real numbers, k = 0, 1, ..., n.

The explicit form of H_{2n+1}[f,x;X] is given by

(1.1.6)  H_{2n+1}[f,x;X] = sum_{k=0}^{n} f(x_{kn}) h_{kn}(x) + sum_{k=0}^{n} δ_{kn} ρ_{kn}(x),

where

(1.1.7)  h_{kn}(x) = {1 − [ω_n''(x_{kn})/ω_n'(x_{kn})](x − x_{kn})} l_{kn}²(x) =: v_{kn}(x) l_{kn}²(x)

and

(1.1.8)  ρ_{kn}(x) = (x − x_{kn}) l_{kn}²(x).

Fejér brought out the importance of Hermite interpolation by introducing the concept of "strongly normal" point systems. To each set of n + 1 distinct points x_0, x_1, ..., x_n, Fejér associates the set of n + 1 points X_0, X_1, ..., X_n which are the zeros of the linear functions v_{kn}(x). The points X_0, X_1, ..., X_n are said to be the conjugate point system of x_0, x_1, ..., x_n. A system of points x_0, x_1, ..., x_n is called strongly normal if the conjugate point system lies inside [-1, 1]. For example, the zeros of the Tchebycheff polynomial T_n(x) = cos nθ, cos θ = x, form a strongly normal point system. Fejér proved (using these ideas) that Hermite-Fejér interpolation polynomials based on strongly normal point systems (and under certain conditions on the δ_{kn}) converge uniformly to f(x) on [-1, 1].





Optimal Error Bounds for Two Point Hermite Interpolation

In order to motivate the present day work on error

bounds, we first consider the classic error bound of

Cauchy. Let us consider once more the interpolation

formula of Lagrange. Let f(x) ∈ C[a,b] and consider the Lagrange interpolation polynomial

    L_n[f,x] = sum_{k=0}^{n} f(x_{kn}) l_{kn}(x).

Next we set

(1.2.1)  e(x) = f(x) − L_n[f,x].

In the case f(x) is itself a polynomial of degree ≤ n, it is easy to see from the uniqueness of the Lagrange interpolation polynomial that e(x) ≡ 0. Thus it is of interest to study what can be said about e(x) if f(x) is a given smooth function other than a polynomial of degree ≤ n. The following theorem gives the most widely known error bound.

Theorem 1.1 (Cauchy). Let f(x) ∈ C^(n)[a,b] and suppose that f^(n+1)(x) exists at each point of [a,b]. Let L_n[f,x] be the element of the class of polynomials of degree ≤ n that satisfies the equations

(1.2.2)  L_n[f,x_{in}] = f(x_{in}),   i = 0, 1, ..., n.

Then for any x in [a,b], the error

    e(x) = f(x) − L_n[f,x]

has the value

(1.2.3)  e(x) = ω_n(x) f^(n+1)(ξ)/(n+1)!,

where ξ is a point of [a,b] that depends on x and

    ω_n(x) = prod_{i=0}^{n} (x − x_{in}).

An immediate consequence of (1.2.3) is the inequality

(1.2.4)  |e(x)| ≤ |ω_n(x)| ||f^(n+1)|| / (n+1)!,

where || || denotes the supremum norm on [a,b]. If we set f(x) = ω_n(x), we see that (1.2.4) becomes an equality.

Thus the right hand side cannot be made smaller. We

therefore say that (1.2.4) is an optimal bound.

The Equations (1.2.3) and (1.2.4) have been

extensively studied. For instance, the study of

minimizing ||ω_n|| led to Tchebycheff's system of

orthogonal polynomials. For a good discussion of some of

the elementary analysis associated with this error bound,

see Powell [1981].
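The inequality (1.2.4) is also easy to check numerically. The sketch below is our own example, not one from the text: f = sin interpolated at five equally spaced nodes on [0, 1], so n = 4 and max |f^(5)| = 1 there.

```python
import math

def lagrange_interp(f, nodes, x):
    # direct evaluation of L_n[f, x] from the fundamental polynomials
    total = 0.0
    for k, xk in enumerate(nodes):
        lk = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += f(xk) * lk
    return total

n = 4
nodes = [i / n for i in range(n + 1)]
x = 0.4321
err = abs(math.sin(x) - lagrange_interp(math.sin, nodes, x))

omega = 1.0                                  # omega_n(x) = prod (x - x_i)
for xi in nodes:
    omega *= (x - xi)
bound = abs(omega) / math.factorial(n + 1)   # (1.2.4) with max|f^(5)| = 1
print(err <= bound)    # True
```

Since f^(5) = cos here, the actual error is |ω_n(x)| cos ξ / 5! for some ξ, strictly below the bound.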

In contrast to the precise and beautiful pointwise

Cauchy bound, very little has been known about precise

polynomial derivative errors. Denoting e(x) as the Cauchy

remainder for Lagrange po lynomial interpolation, we

consider the role played by the term f(n+1)(S). If f e Pn

(the class of polynomials of degree < n), the remainder

vanishes identically. For a fixed x, we may consider the

remainder

e (x) = f(x) L [f,x]

as a process which annihilates all elements of Pn. We

may now formulate the following theorem of Peano [19131.





Theorem 1.2 (Peano). Let L be a continuous linear functional such that L(p) = 0 for all p ∈ P_n. Then for all f ∈ C^(n+1)[a,b],

(1.2.4)  L(f) = ∫_a^b f^(n+1)(t) K(t) dt,

where

    K(t) = (1/n!) L_x[(x − t)_+^n]

and

    (x − t)_+^n = (x − t)^n   for x ≥ t,
    (x − t)_+^n = 0           for x < t.

The notation L_x[(x − t)_+^n] means that the functional L is applied to (x − t)_+^n considered as a function of x. For a detailed study of the Peano theorem we refer to P. J. Davis [1975] and to A. Sard [1963]. We next turn to an application of the Peano theorem to derive pointwise optimal derivative error bounds.

Let u(x) ∈ C^(4)[0,h] be given; let v_3(x) be the unique Hermite interpolation polynomial of degree ≤ 3 satisfying

(1.2.5)  v_3(0) = u(0),    v_3(h) = u(h),
         v_3'(0) = u'(0),  v_3'(h) = u'(h).

Ciarlet, Schultz and Varga [1967] obtained a pointwise error bound for e(x) = v_3(x) − u(x) and its derivatives in terms of

    U = max_{0≤x≤h} |u^(4)(x)|.

Their bounds are

(1.2.6)  |e^(k)(x)| ≤ [k!/(4 − 2k)!] h^k [x(h − x)]^{2−k} U,   k = 0, 1, 2.

For k = 0, (1.2.6) is best possible, since equality holds for u(x) = x²(h − x)², whose Hermite interpolation polynomial is v_3 ≡ 0.
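The equality case just mentioned is easy to confirm numerically. The sketch below takes h = 1, so U = 24 and the k = 0 bound of (1.2.6) is [x(1 − x)]² U / 4!.

```python
# u(x) = x^2 (1 - x)^2 has u = u' = 0 at both endpoints, so its cubic
# Hermite interpolant is v_3 = 0 and e(x) = u(x); also u''''(x) = 24.
u = lambda x: x ** 2 * (1 - x) ** 2
U = 24.0
xs = [i / 1000 for i in range(1001)]

max_err = max(abs(u(x)) for x in xs)                        # max |e(x)|
max_bound = max((x * (1 - x)) ** 2 * U / 24 for x in xs)    # k = 0 bound
print(max_err, max_bound)   # both 0.0625 = U/384, attained at x = 1/2
```

The bound and the error coincide at every x for this particular u, which is exactly what "best possible" means here.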

G. Birkhoff and A. Priver [1967] obtained the following optimal error bounds on the derivatives |e^(k)(x)| in terms of U.

Theorem 1.3 (Birkhoff and Priver). Let u(x) ∈ C^(4)[0,1]. Then we have (h = 1)

(1.2.7)  |e'(x)|/U ≤ |x(x − 1)(2x − 1)|/12
                                       for 0 ≤ x ≤ 1/3,
         |e'(x)|/U ≤ [16x³ − 105x² + 197x − 162 + 66/x − 13/x² + 1/x³]/96
                                       for 1/3 ≤ x ≤ 1/2;

(1.2.8)  |e''(x)|/U ≤ [48x⁵ + 24x⁴ − 103x³ + 54x² − 12x + 1]/[2(1 − x)³]
                                       for 0 ≤ x ≤ 1/3,
         |e''(x)|/U ≤ [−6(x − 1/2)² + 1/2]/12
                                       for 1/3 ≤ x ≤ 2/3;

(1.2.9)  |e'''(x)|/U ≤ −(x − 1/2)⁴ + (3/2)(x − 1/2)² + 3/16
                                       for 0 ≤ x ≤ 1.

For 1/2 ≤ x ≤ 1 the bounds on e^(k)(x) are given by the symmetry

(1.2.10)  e^(k)(x) = e^(k)(1 − x),   k = 0, 1, 2, 3.

Further, from Birkhoff and Priver, the uniform error bounds are given by

    ||e^(r)|| ≤ a_r U,   r = 1, 2, 3,

(1.2.11)  a_1 = √3/216,   a_2 = 1/12,   a_3 = 1/2.

The proof of the above theorem is based on the Peano

kernel theorem. It gives a general and highly useful

method for expressing the errors of approximations in

terms of derivatives of the underlying functions of the

approximation. For a computer routine which gives

polynomial error bounds by numerical quadrature of the

Peano kernel, see Howell and Diaa [1986]. Stroud [1974]

gives a readable account of some other applications.
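The quadrature idea behind Theorem 1.2 can be illustrated on a toy functional of our own choosing (it is not one from the text): L(f) = f(1/2) − [f(0) + f(1)]/2 annihilates P_1, and numerical quadrature of |K| recovers the constant c in |L(f)| ≤ c max |f''|.

```python
import math

# L annihilates every polynomial of degree <= 1, so Theorem 1.2 applies
# with n = 1 and kernel K(t) = L_x[(x - t)_+].
L = lambda f: f(0.5) - (f(0.0) + f(1.0)) / 2

plus = lambda s: s if s > 0 else 0.0                  # truncated power (.)_+
K = lambda t: plus(0.5 - t) - (plus(0.0 - t) + plus(1.0 - t)) / 2

# Midpoint-rule quadrature of |K| over [0, 1]; |K| is piecewise linear,
# so this is exact up to rounding.
N = 100000
c = sum(abs(K((i + 0.5) / N)) for i in range(N)) / N
print(round(c, 6))    # 0.125, i.e. the exact constant 1/8
```

For example |L(exp)| ≈ 0.2104 while the bound is (1/8) e ≈ 0.3398, so the estimate holds with room to spare.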


Birkhoff Interpolation

We have just observed that in problems of Hermite

interpolation, function values and consecutive derivatives

are prescribed for given points. In 1906, G. D. Birkhoff

considered those interpolation problems in which the

consecutive derivative requirement can be dropped. This

more general kind of interpolation is now referred to as the Birkhoff (or lacunary) interpolation problem.

The Birkhoff interpolation problem differs from the

more familiar Lagrange and Hermite interpolation in both

its problems and its methods. For example, Lagrange and

Hermite interpolation problems are always uniquely





solvable for every choice of nodes, but a given Birkhoff

interpolation may not give a unique solution.

More formally, given n + 1 integer pairs (i,k) corresponding to n + 1 real numbers c_{i,k}, and m distinct real numbers x_i, i = 1, 2, ..., m ≤ n + 1, a given problem of polynomial interpolation is to satisfy the n + 1 equations

(1.3.1)  P_n^(k)(x_i) = c_{i,k}

with a polynomial P_n of degree at most n. (We are using the convention that P_n^(0)(x) = P_n(x).)

If for each i the orders k of the derivatives in (1.3.1) form an unbroken sequence k = 0, 1, ..., k_i, then the interpolation polynomial always exists, is unique, and can be given by an explicit formula. If some of the sequences are broken, we have Birkhoff interpolation. As remarked by Professor Lorentz [1983], the two cases are as different as, let us say, the theory of linear and nonlinear differential equations.

Pairs (i,k) which appear in (1.3.1) are most easily

described by means of the interpolation or incidence

matrix E. If P_n^(k)(x_i) is specified in (1.3.1), we put a "1" in the (i+1)st column and kth row of E. If P_n^(k)(x_i) is not specified in (1.3.1), then a "0" appears in the (i+1)st column and kth row. Each of the m rows of E has a nonzero entry. An incidence matrix E and a pointset X, which

lists the points xi, specify a Birkhoff interpolation

problem of the type of (1.3.1). For a given E and X, the





unique existence of an interpolation polynomial of degree at most n is equivalent to the invertibility of the system of

equations given by (1.3.1), or equivalently to the inver-

tibility of a matrix V which we will refer to as a

generalized Vandermonde matrix V. For Lagrange

interpolation of the points x_i, i = 1, 2, ..., n + 1, the Vandermonde V is given as

(1.3.2)  V = | 1       1       ...  1         |
             | x_1     x_2     ...  x_{n+1}   |
             | ...                            |
             | x_1^n   x_2^n   ...  x_{n+1}^n |

Inversion of the Vandermonde gives the coefficients of the

fundamental functions 1kn(x) of Lagrange interpolation.

As Lagrange interpolations are always unique, it follows

that Vandermonde matrices are invertible.

For a given system (1.3.1), it is not hard to construct a matrix analogous to (1.3.2), which we will

refer to as the generalized Vandermonde. Just as

inverting the Vandermonde matrix gives the fundamental

functions of Lagrange interpolation, inverting the genera-

lized Vandermonde gives a convenient form for representing

a Birkhoff interpolation. The Vandermonde and its

counterpart for Birkhoff interpolation are examples of

Gram matrices, of which a good account is to be found in

Davis [1975].

Though invertible, the Vandermonde matrices are known

to be extremely ill-conditioned for real-valued





interpolation. Many of the generalized Vandermonde

matrices associated with Birkhoff interpolation processes

are much better conditioned, illustrating an advantage of

Birkhoff interpolation over the more traditional Lagrange

interpolation. To make this point more explicit, we

define "condition" of a matrix.

For a given norm || || and invertible matrix M, we define the condition cond(M) of the matrix M by

(1.3.3)  cond(M) = ||M|| ||M⁻¹||.

If we rescale the Birkhoff interpolation problem

specified by E and X to the unit interval, we can define the condition of an interpolation as the condition of the associated generalized Vandermonde. In the L₂ norm for

eleven equally spaced points, the condition number of

Lagrangian interpolation is on the order of a million. On

the other hand, Lagrangian interpolation on eleven equally

spaced complex roots of unity has L2 condition number one,

as does the eleven term MacLaurin expansion.
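Both claims are easy to reproduce. The sketch below (assuming NumPy is available) builds the two Vandermonde matrices directly; the roots-of-unity matrix V satisfies V^H V = 11 I, so all its singular values coincide and its L₂ condition number is one.

```python
import numpy as np

# Vandermonde for Lagrange interpolation at 11 equally spaced points in [0, 1].
x = np.linspace(0, 1, 11)
c_equi = np.linalg.cond(np.vander(x, increasing=True))
print(c_equi)     # very large: on the order of a million or worse

# Vandermonde at the 11th roots of unity.
z = np.exp(2j * np.pi * np.arange(11) / 11)
c_roots = np.linalg.cond(np.vander(z, increasing=True))
print(c_roots)    # 1.0 up to rounding
```

The inverse of c_equi measures how close the equally spaced Vandermonde is to a singular matrix, which is the computational point made above.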

Computationally speaking, the inverse of the

condition number of a matrix M is the norm distance of M

from a singular matrix (See Golub and Van Loan [1983]).

For example, the Vandermonde for Lagrange interpolation of

eleven points on the unit interval is thus seen to be a

norm distance of only one-millionth from being singular.

Not only is the ill-conditionedness of the Vandermonde

troublesome in determining the coefficients of the

fundamental functions, but it also causes problems of





round-off error in evaluating a polynomial by use of the

fundamental functions. For these reasons, it is very much

preferable to use a well-conditioned interpolation.

The MacLaurin expansion, having diagonal generalized

Vandermonde, is as well-conditioned as is possible.

Another particularly well-conditioned interpolation is the

Lidstone interpolation.

A Lidstone polynomial is a truncation of a Lidstone

series. In turn, a Lidstone series is a generalization of

a Taylor series which approximates a given function in the

neighborhood of two points instead of one. Such series

have been studied by G. J. Lidstone [1930], by Widder

[1942], by Whittaker [1934] and by others. More

precisely, the series has the form

(1.3.3)  f(x) = f(1)Λ_0(x) + f(0)Λ_0(1 − x) + f''(1)Λ_1(x) + f''(0)Λ_1(1 − x) + ...,

where Λ_n(x) is a polynomial of degree 2n + 1 defined by the relations

(1.3.4)  Λ_0(x) = x,

         Λ_n''(x) = Λ_{n−1}(x),

         Λ_n(0) = Λ_n(1) = 0,   n = 1, 2, ...
Thus it is clear that the sum of an even number of

terms of the series (1.3.3) is a polynomial which coin-

cides with f(x) at x = 0 and at x = 1. Moreover, each

even derivative of the polynomial coincides with the

corresponding derivative of f(x) at those points.
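The relations (1.3.4) determine each Λ_n from Λ_{n−1} by two integrations plus a linear correction. A short exact-arithmetic sketch (the helper name lidstone is ours) makes this concrete:

```python
from fractions import Fraction

def lidstone(n):
    """Ascending coefficients of the Lidstone polynomial Λ_n, built from
    Λ_0(x) = x via Λ_n'' = Λ_{n-1}, Λ_n(0) = Λ_n(1) = 0."""
    coef = [Fraction(0), Fraction(1)]               # Λ_0(x) = x
    for _ in range(n):
        # integrate twice: x^k -> x^(k+2)/((k+1)(k+2)); the constants of
        # integration form the linear part a*x + b, with b = 0 from Λ_n(0) = 0
        coef = [Fraction(0), Fraction(0)] + [
            c / ((k + 1) * (k + 2)) for k, c in enumerate(coef)]
        coef[1] = -sum(coef)                        # choose a so that Λ_n(1) = 0
    return coef

print(lidstone(1))    # coefficients of Λ_1(x) = (x^3 - x)/6
```

lidstone(2) likewise returns the coefficients of Λ_2(x) = x⁵/120 − x³/36 + 7x/360, and since sum of the coefficients equals Λ_n(1), it vanishes for every n ≥ 1.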





Polynomial Approximation

Weierstrass first enunciated the theorem that an

arbitrary continuous function can be approximately

represented by a polynomial with any degree of accuracy.

We may express this theorem in the following form.

If f(x) is a given function, continuous for

a < x < b, and if E is a given positive quantity, it

is always possible to define a polynomial P(x) such that

(1.4.1)f(x) P(x)] < E

for all a < x < b.

It is readily seen that the number of terms required

to yield a specified degree of approximation, or under the

converse aspect, the degree of approximation attainable

with a specified number of terms, is related to the

properties of continuity of f(x). Naturally this has led

to many interesting developments in the theory of degree

of approximation of continuous functions by polynomials, which we now describe.

A first important step in building this theory was

made by D. Jackson [1930]. Let f ∈ C[-1,1]. Suppose that we define the best approximation of f by polynomials of degree n by

(1.4.2)  E_n(f) = inf ||f − P_n||,

where P_n ranges over all algebraic polynomials of degree n and ||f|| = max_{−1≤x≤1} |f(x)|. Jackson considered the problem of estimating E_n(f). To describe his results we need the following definition.











Definition 1.1. If f ∈ C[a,b], then the modulus of continuity of f is the function ω(f,h) given by

(1.4.3)  ω(f,h) = sup { |f(x) − f(y)| : |x − y| ≤ h; x, y ∈ [a,b] }.
Now Jackson's theorems may be easily stated.

Theorem 1.4 (Jackson). Let f be continuous on [-1,1]. There is a positive constant A such that

(1.4.4)  E_n(f) ≤ A ω(f, 1/n),   n = 1, 2, ...,

where A is independent of f.

An important corollary of Theorem 1.4 deserves to be mentioned. Let Lip_α[-1,1](M) (or simply Lip_α) be the class of functions f in C[-1,1] such that

    |f(x) − f(y)| ≤ M |x − y|^α

for all x and y in [-1,1]. It is easy to see that f ∈ Lip_α[-1,1](M) if and only if

    ω(f,h) ≤ M h^α,   h > 0.

We then have the following consequence of Jackson's theorem.

Corollary 1.5. Let 0 < α ≤ 1. If f ∈ Lip_α[-1,1](M) for some constant M, then

(1.4.5)  E_n(f) ≤ A n^{−α}   for n = 1, 2, ...,

for some positive constant A.

A. F. Timan [1951] noticed the following

strengthening of Jackson's theorem.

Theorem 1.6 (Timan). There is a positive constant A such that if f ∈ C[-1,1] and n is a natural number, then there is a polynomial P_n of degree n such that

(1.4.6)  |f(x) − P_n(x)| ≤ A [ ω(f, √(1 − x²)/n) + ω(f, 1/n²) ]

for all x in the interval [-1,1].

In this result, in contrast to the theorem of Jackson, the position of the point x in the interval [-1,1] is taken into consideration, and it is apparent that for the polynomial P_n(x) thus constructed, as |x| → 1, the deviation |f(x) − P_n(x)| is of magnitude ω(f, 1/n²).

Following the important theorem of Timan, V. K.

Dzjadyk [1956] proved the converse of Jackson's theorem.

Theorem 1.7 (V. K. Dzjadyk). Let f ∈ C[-1,1] and let 0 < α < 1. Then to each n there corresponds a polynomial P_n of degree n such that

(1.4.7)  |f(x) − P_n(x)| ≤ C [ √(1 − x²)/n + 1/n² ]^α

if and only if ω(f,h) ≤ C h^α for some constant C.

From Jackson's theorem we noticed that if f ∈ Lip_α, then

    E_n(f) ≤ A M n^{−α},   n = 1, 2, ...,

where A is an absolute constant. To achieve a more rapid

decrease to 0 of En(f), it is necessary to assume more

smoothness for f, for example, that f has several

continuous derivatives. Let C^(r)[-1,1], r = 0, 1, ..., denote the subset of C[-1,1] consisting of those functions





which possess r continuous derivatives on [-1,1]. For

this class of functions, Dunham Jackson proved also the

following direct theorem.

Theorem 1.8 (D. Jackson). If f ∈ C^(r)[-1,1], then

(1.4.8)  E_n(f) ≤ A_r (1/n)^r ω(f^(r), 1/n),   n = 1, 2, ...

For many important contributions we refer to the work

of G. G. Lorentz [1983].
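The rate predicted by Corollary 1.5 can be watched numerically. Best approximations are awkward to compute, so the sketch below (our own illustration, assuming SciPy is available) uses interpolation at Chebyshev nodes as a convenient near-best substitute for f(x) = |x|, which lies in Lip_1 on [-1, 1].

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: np.abs(x)          # Lip_1, so the error should decay like 1/n

def cheb_err(n):
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * n + 2))    # Chebyshev nodes
    p = BarycentricInterpolator(nodes, f(nodes))         # stable evaluation
    xs = np.linspace(-1, 1, 2001)
    return float(np.max(np.abs(f(xs) - p(xs))))

for n in (8, 16, 32, 64):
    print(n, cheb_err(n))        # errors shrink roughly like 1/n
```

Doubling the degree roughly halves the error, in line with the n^{−α} rate at α = 1; a smoother f would show the faster decay of Theorem 1.8.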


Spline Approximation

One uses polynomials for approximation because they

can be evaluated, differentiated and integrated easily and

in finitely many steps using just the basic arithmetic

operations of addition, subtraction and multiplication.

But there are limitations of polynomial approximations.

For example, the polynomial interpolant is very sensitive

to the choice of interpolation points. If the function to

be approximated is badly behaved anywhere in the interval

of approximation, then the approximation is poor every-

where.

This global dependence on local properties can be

avoided when using piecewise polynomial approximation.

Concerning piecewise polynomial approximation, Professor

I. J. Schoenberg remarked that "polynomials are wonderful

even after they are cut into pieces, but the cutting must

be done with care. One way of doing the cutting leads to

the so-called spline functions" (Schoenberg [1946],

p. 46).





Splines were introduced by Prof. Schoenberg in 1946

as a tool for the approximation of functions. They tend

to be smoother than polynomials and to provide better

approximation of low order derivatives. Though we will

later use the word spline in a somewhat broader context,

we first give the more traditional definition.

Let

(1.5.1)  x_1 < x_2 < ... < x_k

be a sequence of strictly increasing real numbers called the knots of the spline function. We say s_m(x) is a spline function of degree m having the knots x_1, x_2, ..., x_k if it satisfies

a) s_m(x) ∈ C^(m−1)(−∞, ∞);

b) in each interval (x_i, x_{i+1}), including (−∞, x_1) and (x_k, ∞), the restriction of s_m(x) is a polynomial of degree at most m.

Thus a step function s_0(x) may be regarded as a spline function of degree 0, while a spline function of degree 1 is a polygon (broken line function) with possible corners at some or all of the points (1.5.1).

Similarly, s2(x) has a graph composed of a sequence of

parabolas which join at the knots continuously together

with their slopes. Both for a smoother approximation and

for a more efficient approximation, one has to go to

piecewise polynomial approximation with higher order

pieces. The most popular choice continues to be a











piecewise cubic approximating function. Various kinds of

cubic splines are in use in numerical analysis. The ones

most commonly used are complete cubic splines, periodic

cubic splines and natural cubic splines.

A spline function of degree m with k knots is repre-

sented by a different polynomial in each of the k+1

intervals into which the k knots divide the real line. As

each polynomial involves m + 1 parameters, the spline

function involves a total of (m+1) (k+1) parameters.

However, the continuity conditions stated earlier impose

certain constraints on those parameters. At each knot,

the two adjoining polynomial arcs must have equal

ordinates and equal derivatives of order 1, 2, ..., m − 1. Thus m constraints are imposed at each knot. It is easy to

see that every spline function s(x) of degree m with the

knots xl, x2, ., xk has a unique representation in the
form


(1.5.2)  s(x) = P_m(x) + sum_{j=1}^{k} c_j (x − x_j)_+^m,

where P_m(x) denotes a polynomial of degree m and

    x_+^m = x^m   for x > 0,
    x_+^m = 0     for x ≤ 0.

Also

(1.5.3)  c_j = (1/m!) [ s^(m)(x_j+) − s^(m)(x_j−) ].
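The truncated-power representation above can be evaluated directly. The sketch below (the quadratic example and its coefficients are ours, chosen for illustration) checks that a degree-2 spline with one knot is C¹ there, with only the second derivative jumping.

```python
def tpower(s, m):
    # truncated power: s_+^m
    return s ** m if s > 0 else 0.0

def spline_val(x, p_coef, knots, jumps, m):
    # s(x) = P_m(x) + sum_j c_j (x - x_j)_+^m, p_coef ascending
    val = sum(a * x ** k for k, a in enumerate(p_coef))
    return val + sum(c * tpower(x - xj, m) for xj, c in zip(knots, jumps))

# m = 2, P_2(x) = x^2, one knot at x = 1 with c_1 = -2
v = lambda x: spline_val(x, [0.0, 0.0, 1.0], [1.0], [-2.0], 2)

h = 1e-6
left = (v(1.0) - v(1.0 - h)) / h      # one-sided slopes agree at the knot;
right = (v(1.0 + h) - v(1.0)) / h     # only s'' jumps, by m! * c_1 = -4
print(left, right)                    # both ~ 2: the spline is C^1 there
```

The jump m! c_j in the mth derivative is exactly what the coefficient formula for c_j records.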
The class of "natural" spline functions was intro-

duced by Prof. Schoenberg [19461. A spline function s(x)











of odd degree 2p − 1 with knots x_1, x_2, ..., x_k is called a natural spline function if the two polynomials by which it is represented in the two end intervals (−∞, x_1) and (x_k, +∞) are of degree p − 1 or less. It is easy to express the natural spline functions by

(1.5.4)  s(x) = P_{p−1}(x) + sum_{j=1}^{k} c_j (x − x_j)_+^{2p−1},

where

    sum_{j=1}^{k} c_j x_j^r = 0,   r = 0, 1, ..., p − 1.

The following theorem states an important interpola-

tion property of natural spline functions.

Theorem 1.9. Let (x_i, y_i), i = 1, 2, ..., k, be given data points, where the x_i form a strictly increasing sequence, and let p be a positive integer not exceeding k. Then there is a unique natural spline function s(x) of degree 2p − 1 with the knots x_i such that

(1.5.5)  s(x_i) = y_i,   i = 1, 2, ..., k.

Natural spline functions possess certain impressive

optimal properties and can be shown to be the "best"

approximating functions in a certain sense. This is the

content of the next theorem.

Theorem 1.10. Let s(x) be the unique natural spline function that interpolates the data points (x_i, y_i), i = 1, ..., k, in accordance with Theorem 1.9. Let f(x) be any function of the class C^(p) that satisfies the conditions

(1.5.6)  f(x_i) = y_i,   i = 1, 2, ..., k.

Let (a,b) be a finite interval containing all the knots x_i. Then

(1.5.7)  ∫_a^b [f^(p)(x)]² dx ≥ ∫_a^b [s^(p)(x)]² dx,

with equality only if f(x) = s(x).
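For p = 2 the theorem says the natural cubic spline has the smallest integrated squared second derivative among all smooth interpolants of the same data. The sketch below (assuming SciPy; the data come from f = sin, which interpolates itself) checks this numerically.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# bc_type='natural' imposes s'' = 0 at both ends, i.e. the p = 2 natural spline.
xi = np.linspace(0, np.pi, 6)
spl = CubicSpline(xi, np.sin(xi), bc_type='natural')

xs = np.linspace(0, np.pi, 20001)
dx = xs[1] - xs[0]
curv_s = np.sum(spl(xs, 2) ** 2) * dx     # ~ integral of s''(x)^2
curv_f = np.sum(np.sin(xs) ** 2) * dx     # ~ integral of f''(x)^2, f = sin
print(curv_s <= curv_f)                   # True, as (1.5.7) requires
```

The gap between the two integrals is exactly the integral of (f'' − s'')², which is why equality forces f = s.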

The effectiveness of the spline approximation can be

explained to a considerable extent by its striking convergence properties. Interesting contributions were made by J. N. Ahlberg and E. N. Nilson [1964], C. DeBoor and G. Birkhoff [1964], A. Sharma and A. Meir [1967], M. J. Marsden [1972], T. R. Lucas [1974], E. W. Cheney and F. Schurer [1968], C. A. Hall [1968], C. A. Hall and W. W. Meyer [1976], and A. K. E. Atkinson [1968]. As a good

reference on splines which offers a good comparison of the

approximating properties of polynomials and splines, we

recommend A Practical Guide to Splines by C. DeBoor

[1978].

First we discuss error analysis for the class of

functions f(x) ∈ C^(2) with period one. Let

(1.5.8)  {x_{n,i}}_{i=0}^{k_n} :  0 = x_{n,0} < x_{n,1} < ... < x_{n,k_n} = 1

be a division of [0,1] of mesh gauge

(1.5.9)  h_n = max_i h_{n,i},

where

    h_{n,i} = x_{n,i} − x_{n,i−1}.

A periodic cubic spline function y_n(x) is a function composed of a cubic polynomial in each of the intervals of {x_{n,i}} with the requirements that

    y_n(x) ∈ C^(2)[0,1]

and

    y_n^(i)(0) = y_n^(i)(1),   i = 0, 1, 2.

It was observed by Walsh, Ahlberg and Nilson [1962] that

there exists a unique periodic spline function yn(x) which

interpolates f(x) at the points x_{n,i}. It was shown that y_n(x) and y_n'(x) converge uniformly to f(x) and f'(x), respectively, as h_n → 0. Later Ahlberg and Nilson [1966] studied the more delicate question of the convergence of y_n''(x) to f''(x). Writing

(1.5.10)  λ_{n,i} = h_{n,i+1}/(h_{n,i} + h_{n,i+1}),   i = 1, 2, ..., k_n,

and

    Λ_n = max_i |λ_{n,i+1} − λ_{n,i}|,

where for i = k_n, λ_{n,k_n+1} is taken as λ_{n,1}, they show that

    y_n''(x) → f''(x)

uniformly provided that

    h_n → 0   and   Λ_n → 0.

After this result, I. J. Schoenberg [1964a] raised the question of how far the condition Λ_n → 0 is really necessary in the above theorem. The above theorem together





with the open problem of Schoenberg led to important

contributions by Birkhoff and DeBoor [1964], and Meir and

Sharma [1969] which we turn to describe.

In 1964, Garrett Birkhoff and Carl DeBoor made the following contribution. Let f(x) ∈ C^(1)[0,1] and let

(1.5.11)  {x_i}_{i=0}^{n} :  0 = x_0 < x_1 < ... < x_n = 1

be a partition. The function f(x) is now interpolated by

a cubic spline function s(x) (called a complete cubic

interpolation spline function), which means that s(x) is a cubic polynomial when restricted to each interval (x_i, x_{i+1}), and s(x) ∈ C^(2)[0,1]. Moreover s(x) is uniquely defined by the conditions

    s(x_i) = f(x_i),   i = 0, 1, ..., n,

    f'(0) = s'(0),

    f'(1) = s'(1).


This first important result concerning the error analysis

yielded the following theorem.

Theorem 1.11. Let f(x) ∈ C^(4)[0,1]. Denote

    e^(r) = f^(r) − s^(r).

There are constants c_r(m), r = 0, 1, 2, 3, depending only on m > 0, such that

(1.5.12)  |e^(r)(x)| ≤ c_r(m) h^{4−r} ||f^(4)||,   r = 0, 1, 2, 3,

provided that

    m_h ≤ m,

where

    h_i = x_{i+1} − x_i,
    h = max h_i,
    m_h = [max(h_i)]/[min(h_i)],

and || || denotes the supremum norm.
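The h^{4−r} rate of Theorem 1.11 at r = 0 is easy to observe. The sketch below (assuming SciPy) builds the complete cubic spline by passing the true endpoint slopes to CubicSpline, then halves the mesh and watches the error drop by about 2⁴.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A complete cubic spline matches f at the knots and f' at both endpoints;
# bc_type=((1, value), (1, value)) prescribes exactly those end slopes.
f = lambda x: np.sin(2 * np.pi * x)
fp = 2 * np.pi                               # f'(0) = f'(1) = 2*pi

def max_err(n):
    x = np.linspace(0, 1, n + 1)
    s = CubicSpline(x, f(x), bc_type=((1, fp), (1, fp)))
    xs = np.linspace(0, 1, 10001)
    return float(np.max(np.abs(f(xs) - s(xs))))

e1, e2 = max_err(10), max_err(20)
print(e1 / e2)    # close to 16, consistent with the h^4 rate of Theorem 1.11
```

This mesh is uniform, so m_h = 1 and the theorem's mesh-ratio hypothesis is trivially satisfied.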

The authors go a step further and prove a convergence theorem for functions with absolutely continuous third derivative.

Theorem 1.12. Let f'''(x) be absolutely continuous on [0,1]. Let {x_{i,n}}_{i=0}^{k_n} (where k_n depends on n) be a sequence of partitions of [0,1] such that h_n = max_i h_{i,n} → 0 as n → ∞. Let m_{h,n} ≤ m as n → ∞. Let e_n(x) be the error incurred when f(x) is interpolated by a spline function on {x_{i,n}}. Then

    ||e_n'''|| → 0

uniformly on [0,1] as n → ∞.

The next important development came with some

interesting results by Prof. A. Sharma and A. Meir [1967]

concerning degree of approximation of spline interpola-

tion. This paper does away with some annoying assumptions

under which uniform convergence of the interpolating cubic

spline and its derivatives was proven earlier (see above

for these restrictions).

Theorem 1.13. Let f(x) be continuous and periodic with period unity. Let

(1.5.15)  q_n = max_{i,j} (h_{n,i}/h_{n,j}),

where

    h_{n,i} = x_{n,i+1} − x_{n,i}.

Let s_n(x) be the cubic spline of period unity with joints (or knots) x_{n,i}, i = 0, 1, ..., n, in [0,1], such that s_n(x) interpolates f(x) at the joints. Let

    ||g|| = max_x |g(x)|   for g ∈ C[0,1]

and

    ω(g,h) = max { |g(u) − g(v)| : |u − v| ≤ h },   h > 0.
The authors prove

i)   ||f − s_n|| ≤ (1 + q_n²) ω(f, h_n);

ii)  if f ∈ C^(1), then

     ||f^(r) − s_n^(r)|| ≤ (7/6) h_n^{1−r} ω(f', h_n),   r = 0, 1;

iii) if f ∈ C^(2), then

     ||f^(r) − s_n^(r)|| ≤ C h_n^{2−r} ω(f'', h_n),   r = 0, 1, 2;

iv)  if f ∈ C^(3), then

     ||f^(r) − s_n^(r)|| ≤ C h_n^{3−r} ω(f''', h_n),   r = 0, 1, 2, 3;

where in iii)

     C = 1 + q_n (1 + q_n)²,

and in iv)

     C = 1 + (1 + P_n)²/(2 − P_n),

with

     P_n = max_i (h_{n,i}/h_{n,j})   for j = i ± 1,

satisfying

     P_n < 2.
From these results one can draw the obvious conclu-

sions regarding uniform convergence of the interpolating





splines and derivatives. The arguments are surprisingly

simple. The uniform convergence of sn" to f", which

follows from iii), had been proved earlier by Ahlberg and

Nilson (see above) under the additional assumptions that

the mesh become eventually uniform, i.e.,

(1.5.20)  lim_{n→∞} [h_{n,i}/(h_{n,i} + h_{n,i+1})] = 1/2.

Parabolic Spline Interpolation

Many interesting results were obtained by M. Marsden

[1974] concerning the approximation of functions by even

degree splines. Of particular interest are the simple

parabolic splines. If break points are the same as the

interpolated points, then the resulting spline is ill-

behaved, as can be seen by simple examples (DeBoor

[1978]). On the other hand, if we take the interpolated

points midway between break points, the parabolic splines

are very well-behaved. In fact in the first theorem given

below, a good approximation to a continuous function is

assured with no conditions on the partition other than the

length of the largest subinterval being small.

We first give some necessary notation. Let

(1.6.1)  {x_i}_{i=0}^n :  0 = x_0 < x_1 < ... < x_n = 1

be a fixed partition of [0,1]. Set

(1.6.2)  h_i = x_i - x_{i-1} ,  h = max_i h_i ,

         z_i = (x_i + x_{i-1})/2 ,

         h_0 = h_n ,  a_i = h_{i+1}/(h_i + h_{i+1}) ,

         c_i + a_i = 1  for i = 1, 2, ..., n.





Let

y ∈ C[0,1] ,  y(0) = y(1) ,

||y|| = sup { |y(x)| : 0 ≤ x ≤ 1 } ,

and let y be extended periodically with period 1.

A function s(x) is defined to be a periodic quadratic spline interpolant associated with y and {x_i}_{i=0}^n if

(1.6.3)  a) s(x) is a quadratic expression on each [x_{i-1}, x_i] ;

         b) s(x) ∈ C^(1)[0,1] ;

         c) s(0) = s(1) ,  s'(0) = s'(1) ;

         d) s(z_i) = y(z_i) ,  i = 1, 2, ..., n.
The following theorems were obtained by Marsden.

Theorem 1.14 (Marsden). Let {x_i}_{i=0}^n be a partition of [0,1], y(x) be a continuous 1-periodic function and s(x) be the periodic quadratic spline interpolant associated with y and {x_i}_{i=0}^n. Then

(1.6.4)  |s_i| ≤ 2 ||y|| ,  ||s|| ≤ 2 ||y|| ,

         |e_i| ≤ 2 ω(y, h/2) ,

         ||e|| ≤ 3 ω(y, h/2)

(where s_i = s(x_i) and e_i = y(x_i) - s(x_i)).

The constant 2 which appears in the first of the above inequalities can not, in general, be decreased.

Theorem 1.15 (Marsden). Let y and y' be continuous 1-periodic functions. Then

(1.6.5)  ||s'|| ≤ 2 ||y'|| ,

         |e_i| ≤ h ω(y', h/2) ,

         ||e|| ≤ (5/4) h ||y'|| ,

         |e'_i| ≤ 3 ω(y', h/2) ,

         ||e'|| ≤ (9/2) ω(y', h/2) ,

         ||e|| ≤ (13/8) h ω(y', h/2).
Theorem 1.16 (Marsden). Let y, y', and y'' be continuous 1-periodic functions. Then

(1.6.6)  |e_i| ≤ (1/8) h^2 ω(y'', h) ,

         |e'_i| ≤ (1/2) h ω(y'', h) ,

         ||e'|| ≤ 2 h ||y''|| ,

         ||e|| ≤ (5/8) h^2 ||y''|| ,

         |e''(x)| ≤ [1 + (h/h_i)] ω(y'', h) ,  x_i < x < x_{i+1}.

Theorem 1.17 (Marsden). Let y, y', y'', and y''' be continuous 1-periodic functions. Then

(1.6.7)  |e_i| ≤ (1/6) h^3 ||y'''|| ,

         |e'_i| ≤ (11/24) h^2 ||y'''|| ,

         |e''(x)| ≤ [h_i + (2h^2/(3h_i))] ||y'''|| ,  x_i < x < x_{i+1}.
Optimal Error Bounds for Cubic Spline Interpolation

An interesting application of the theorem of Birkhoff and Priver [1967] (discussed above) was given by Hall [1968] and subsequently by Hall and Meyer [1976], concerning optimal error bounds for cubic spline interpolation. In order to describe these results, let f ∈ C^(4)[0,1] and let s(x) be the complete cubic spline function satisfying the conditions (1.5.13). The main result of Hall and Meyer may now be stated.

Theorem 1.18 (Hall and Meyer). Let s(x) be the unique complete cubic spline interpolant satisfying (1.5.13). Suppose

f ∈ C^(4)[0,1].

Then for 0 ≤ x ≤ 1

(1.7.1)  |f^(r)(x) - s^(r)(x)| ≤ c_r h^(4-r) ||f^(4)|| ,  r = 0, 1, 2,

with

h = max(x_{i+1} - x_i) ,  c_0 = 5/384 ,  c_1 = 1/24 ,  c_2 = 3/8.

Further, the constants c_0 and c_1 are optimal in the sense that

(1.7.2)  c_r = sup [ ||f^(r) - s^(r)|| / ( h^(4-r) ||f^(4)|| ) ] ,

where the supremum is taken over all {x_i}_{i=0}^n partitioning [0,1] and over all f ∈ C^(4)[0,1] such that f^(4) is not identically equal to zero.
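The Hall-Meyer bound (1.7.1) is easy to exercise numerically. The following sketch (ours, not the dissertation's) builds the complete cubic spline of the sample function f(x) = sin(πx) on a uniform mesh, using the standard slope equations for clamped cubic splines, and confirms that the observed error stays below (5/384) h^4 ||f^(4)||:

```python
import bisect
import math

def complete_cubic_spline_slopes(x, y, d0, dn):
    """Slopes m_i of the complete (clamped) cubic spline with s(x_i) = y_i,
    s'(x_0) = d0, s'(x_n) = dn, via the usual tridiagonal system."""
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]
    sub = [1.0 / h[i - 1] for i in range(1, n)]        # multiplies m_{i-1}
    diag = [2.0 * (1.0 / h[i - 1] + 1.0 / h[i]) for i in range(1, n)]
    sup = [1.0 / h[i] for i in range(1, n)]            # multiplies m_{i+1}
    rhs = [3.0 * ((y[i] - y[i - 1]) / h[i - 1] ** 2
                  + (y[i + 1] - y[i]) / h[i] ** 2) for i in range(1, n)]
    rhs[0] -= d0 / h[0]
    rhs[-1] -= dn / h[n - 1]
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, n - 1):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    m = [0.0] * (n - 1)
    m[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        m[i] = (rhs[i] - sup[i] * m[i + 1]) / diag[i]
    return [d0] + m + [dn]

def eval_hermite(x, y, m, u):
    """Evaluate the piecewise cubic with knot values y_i and slopes m_i at u."""
    i = max(0, min(bisect.bisect_right(x, u) - 1, len(x) - 2))
    h = x[i + 1] - x[i]
    t = (u - x[i]) / h
    return (y[i] * (1 - 3 * t**2 + 2 * t**3) + y[i + 1] * (3 * t**2 - 2 * t**3)
            + h * m[i] * (t - 2 * t**2 + t**3) + h * m[i + 1] * (-t**2 + t**3))

n = 8
xs = [i / n for i in range(n + 1)]
ys = [math.sin(math.pi * xi) for xi in xs]
slopes = complete_cubic_spline_slopes(xs, ys, math.pi, -math.pi)
err = max(abs(math.sin(math.pi * u / 1000.0)
              - eval_hermite(xs, ys, slopes, u / 1000.0)) for u in range(1001))
bound = (5.0 / 384.0) * (1.0 / n) ** 4 * math.pi ** 4   # (1.7.1) with r = 0
print(err, bound, err <= bound)
```

The observed maximum error is well inside the bound; the factor 5/384 is sharp only over all partitions and all f, not for this particular choice.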

Varma and Katsifarakis (in press) were able to resolve the cases of f ∈ C^(3) and f ∈ C^(2) in the following theorems. Let s(x) be the unique complete cubic spline satisfying the relationship:

(1.7.3)  s(x_i) = f(x_i) ,  i = 0, 1, ..., k ;

         s'(x_i) = f'(x_i) ,  i = 0, k.





Theorem 1.19 If f, f', f'', and f''' are continuous on [0,1], then

(1.7.4)  |s^(r)(x) - f^(r)(x)| ≤ c_r h^(3-r) max_{0≤x≤1} |f'''(x)| ,  r = 0, 1, 2,

where

c_0 = 1/96 + 1/27 ,  c_1 = 4/27 ,  c_2 = 1/2 + 4/(3√3).

Theorem 1.20 If f, f', and f'' are continuous on [0,1], then

(1.7.5)  |s^(r)(x) - f^(r)(x)| ≤ a_r h^(2-r) ω(f'', h) ,  r = 0, 1, 2,

where

a_0 = 13/48 ,  a_1 = 5/6 ,  a_2 = 4.





CHAPTER TWO
BEST ERROR BOUNDS FOR DERIVATIVES OF
TWO POINT LIDSTONE POLYNOMIALS


Introduction and Statement of Main Theorem

Let u ∈ C^(2m)[0,h] be given and let v_{2m-1} be the unique Hermite interpolation polynomial of degree 2m - 1 matching u and its first m - 1 derivatives at 0 and h. Let e = v_{2m-1} - u be the error function. For the special cases m = 2 and m = 3, G. Birkhoff and A. Priver [1967] obtained pointwise optimal error bounds on the derivatives e^(k), 0 ≤ k ≤ 2m - 1, in terms of h and max_{0≤x≤h} |u^(2m)(x)|. These results are described in detail in Chapter One. Birkhoff and Priver note that for the cases m > 3, their method is not likely to give analytically exact bounds, though it can be adapted to give numerical approximations to pointwise exact error bounds. In the next chapter, we will directly apply the results of Birkhoff and Priver to the case of u in C^(2m)[0,h] and the interpolatory polynomial w_{2m-1} which matches u at 0 and h and which also matches the 2nd through mth derivatives of u at 0 and h.

Analogously to using Hermite interpolation polynomials, one may choose to approximate a given function u(x) in C^(2m)[0,h] by the so-called Lidstone interpolation polynomial L_{2m-1}[u,x] of degree ≤ 2m - 1 matching the even-order derivatives u^(2j), j = 0, 1, ..., m - 1, at 0 and h. Thus L_{2m-1}[u,x] satisfies the following conditions (where we assume h = 1):

(2.1.1)  L_{2m-1}^(2p)[u,0] = u^(2p)(0) ,

         L_{2m-1}^(2p)[u,1] = u^(2p)(1) ,

         p = 0, 1, ..., m - 1.

The explicit formula for L_{2m-1}[u,x] is

(2.1.2)  L_{2m-1}[u,x] = Σ_{i=0}^{m-1} u^(2i)(1) Λ_i(x) + Σ_{i=0}^{m-1} u^(2i)(0) Λ_i(1-x) ,

where

(2.1.3)  Λ_i(x) = [2^(2i+1)/(2i+1)!] B_{2i+1}((1+x)/2)  for i ≥ 1,

and

(2.1.4)  Λ_0(x) = x.

Here B_n(x) denotes the Bernoulli polynomial

(2.1.5)  B_n(x) = Σ_{k=0}^{n} C(n,k) B_{n-k} x^k ,

where the constants B_j are given by the recurrence

(2.1.6)  Σ_{k=0}^{j} C(j+1,k) B_k = 0  (j ≥ 1) ,  B_0 = 1.

That (2.1.2) in fact satisfies (2.1.1) follows from the facts

(2.1.7)  Λ_i^(2p)(0) = 0 ,  p = 0, 1, ..., i ;

         Λ_i^(2p)(1) = 0 ,  p = 0, 1, ..., i - 1 ;

         Λ_i^(2i)(1) = 1.

The main object of this chapter is to obtain pointwise optimal error bounds for

e^(j)(x) = u^(j)(x) - L_{2m-1}^(j)[u,x]

in terms of U = max_{0≤x≤1} |u^(2m)(x)|, where L_{2m-1}^(j) denotes the jth derivative of the Lidstone polynomial defined by (2.1.2). An important role in Theorem 2.1 (see below) is played by the Euler polynomial Q_{2m}(x) of degree 2m given by the formula

(2.1.8)  Q_{2m}(x) = -∫_0^1 G_1(x,t) Q_{2m-2}(t) dt ,  m = 1, 2, ...,

where

(2.1.9)  Q_0(x) = 1

and

(2.1.10)  G_1(x,t) = t (x - 1) ,  0 ≤ t ≤ x ≤ 1 ;

          = x (t - 1) ,  0 ≤ x ≤ t ≤ 1.
We may now state the main theorem as follows.

Theorem 2.1. Let u(x) ∈ C^(2m)[0,1] and let L_{2m-1}[u,x] = L_{2m-1}(x) be the unique polynomial of degree ≤ 2m - 1 satisfying the conditions (2.1.1). Then, for 0 ≤ x ≤ 1, with

U = max_{0≤x≤1} |u^(2m)(x)|,

we have, for j = 0, 1, ..., m - 1,

(2.1.11)  |u^(2j)(x) - L_{2m-1}^(2j)(x)| ≤ U Q_{2m-2j}(x) ≤ U Q_{2m-2j}(1/2) ,

and for j = 1, 2, ..., m,

(2.1.12)  |u^(2j-1)(x) - L_{2m-1}^(2j-1)(x)| ≤ U |(1 - 2x) Q'_{2m+2-2j}(x) + 2 Q_{2m+2-2j}(x)|

          ≤ U |Q'_{2m+2-2j}(0)| ,

where for a given integer k, Q_{2k}(x) is the well known Euler polynomial defined by (2.1.8). Moreover, (2.1.11) and (2.1.12) are both best possible in the sense that there exists a function u(x) ∈ C^(2m)[0,1] such that (2.1.11) and (2.1.12) become equalities for every x ∈ [0,1].

From (2.1.11) and (2.1.12) follow immediately the also exact bounds

(2.1.13)  ||u^(2j) - L_{2m-1}^(2j)|| ≤ Q_{2m-2j}(1/2) ||u^(2m)||

and

(2.1.14)  ||u^(2j-1) - L_{2m-1}^(2j-1)|| ≤ |Q'_{2m+2-2j}(0)| ||u^(2m)|| ,

where || || denotes the supremum norm on [0,1].
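As an informal numerical check of the theorem for m = 2 (the test function is our choice, not the dissertation's): for u(x) = sin(πx) both u and u'' vanish at 0 and 1, so L_3[u,x] is identically zero, the error is sin(πx) itself, and U = π^4. The bound (2.1.11) with j = 0 and Q_4(x) = [x^2(1-x)^2 + x(1-x)]/4! can then be tested on a grid:

```python
import math

def Q4(x):
    # Euler polynomial of (2.1.8): Q4(x) = [x^2(1-x)^2 + x(1-x)] / 4!
    return (x**2 * (1 - x)**2 + x * (1 - x)) / 24.0

# u(x) = sin(pi x): u, u'' vanish at 0 and 1, so L3[u,x] = 0 and e(x) = sin(pi x)
U = math.pi ** 4          # max |u^(4)| on [0,1]
worst_ratio = 0.0
for k in range(1, 1000):
    x = k / 1000.0
    err = abs(math.sin(math.pi * x))      # |u(x) - L3[u,x]|
    assert err <= U * Q4(x) + 1e-12       # pointwise bound (2.1.11), j = 0
    worst_ratio = max(worst_ratio, err / (U * Q4(x)))
print(worst_ratio)
```

The ratio stays below 1, as the theorem guarantees; equality for every x requires the extremal choice u = Q_4.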

Preliminaries

It is well known that the Bernoulli polynomials defined by (2.1.5) satisfy

(2.2.1)  B_n'(x) = n B_{n-1}(x)

and

(2.2.2)  B_n(1-x) = (-1)^n B_n(x).

In particular it follows that

(2.2.3)  B_{2n+1}(1/2) = 0.

From (2.2.1), (2.2.3) and (2.1.3)-(2.1.6), we obtain

(2.2.4)  Λ_i''(x) = Λ_{i-1}(x) ,  Λ_i(0) = 0 ,  Λ_i(1) = 0 ,  i ≥ 1.

The proof of Theorem 2.1 depends on repeated use of the kernel G_1(x,t) defined by (2.1.10). Let us consider

(2.2.5)  g(x) = ∫_0^1 G_1(x,t) r(t) dt

              = ∫_0^x (x-1) t r(t) dt + ∫_x^1 (t-1) x r(t) dt.

On differentiating, we have

g'(x) = ∫_0^x t r(t) dt + (x-1) x r(x) + ∫_x^1 (t-1) r(t) dt - x (x-1) r(x)

      = ∫_0^x t r(t) dt + ∫_x^1 (t-1) r(t) dt.

Differentiating once more with respect to x, we obtain

(2.2.6)  g''(x) = x r(x) - (x-1) r(x) = r(x).

Also

(2.2.7)  g(0) = g(1) = 0.
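A quick numerical illustration (not part of the text): for the sample choice r(t) = t^2, the unique solution of g'' = r, g(0) = g(1) = 0 is g(x) = (x^4 - x)/12, and quadrature of ∫ G_1(x,t) r(t) dt reproduces it:

```python
def G1(x, t):
    # kernel (2.1.10)
    return t * (x - 1.0) if t <= x else x * (t - 1.0)

def g(x, n=4000):
    # midpoint rule for g(x) = \int_0^1 G1(x,t) t^2 dt
    return sum(G1(x, (j + 0.5) / n) * ((j + 0.5) / n) ** 2 for j in range(n)) / n

for x in (0.0, 0.25, 0.5, 0.8, 1.0):
    # exact solution of g'' = t^2 with g(0) = g(1) = 0
    assert abs(g(x) - (x**4 - x) / 12.0) < 1e-6
```

The same kernel representation, iterated, is what produces the Λ_m and Q_{2m} below.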

Let r(t) = Λ_{m-1}(t) in (2.2.5). From the above discussion it follows that

g(x) = ∫_0^1 G_1(x,t) Λ_{m-1}(t) dt

satisfies

(2.2.8)  g''(x) = Λ_{m-1}(x) ,  g(0) = g(1) = 0.

From (2.2.4) we also know that for i ≥ 1

Λ_i''(x) = Λ_{i-1}(x) ,  Λ_i(0) = 0 ,  Λ_i(1) = 0.

Therefore

(2.2.9)  g(x) = Λ_m(x) = ∫_0^1 G_1(x,t) Λ_{m-1}(t) dt.

From (2.1.10) it follows that

(2.2.10)  G_1(x,t) ≤ 0.

Also Λ_0(t) = t ≥ 0, 0 ≤ t ≤ 1. Therefore we obtain from (2.2.9) that

(2.2.11)  Λ_1(x) ≤ 0 ,  0 ≤ x ≤ 1.

On using (2.2.9), (2.2.10), and (2.2.11), we can assert that

(2.2.12)  Λ_2(x) ≥ 0 ,  0 ≤ x ≤ 1.

Inductively, it follows that Λ_m(x) ≥ 0 for 0 ≤ x ≤ 1 provided m is an even positive integer, and Λ_m(x) ≤ 0, 0 ≤ x ≤ 1, if m is an odd positive integer. This property of Λ_m(x) will be needed many times in the proof of the theorem.

The following iteratively defined kernels comprise the essential machinery of the proof. Define

(2.2.13)  G_2(x,t) = ∫_0^1 G_1(x,y) G_1(y,t) dy

and inductively

(2.2.14)  G_n(x,t) = ∫_0^1 G_1(x,y) G_{n-1}(y,t) dy ,  n = 2, 3, ....

From (2.2.10) and (2.2.13) it follows that

(2.2.15)  G_2(x,t) ≥ 0 ,  G_1(x,t) ≤ 0 ;  0 ≤ x ≤ 1 ,  0 ≤ t ≤ 1.

In general

(2.2.16)  (-1)^n G_n(x,t) ≥ 0 ;  0 ≤ x ≤ 1 ,  0 ≤ t ≤ 1.

Finally, let us define

(2.2.17)  h(x) = ∫_0^1 G_n(x,t) q(t) dt.

We note again that h(x) uniquely satisfies

(2.2.18)  h^(2n)(x) = q(x) ,

          h^(2k)(0) = h^(2k)(1) = 0 ,  k = 0, 1, ..., n - 1.

We also need some of the known properties of the Euler polynomials introduced in (2.1.8)-(2.1.10). We can easily verify that

(2.2.19)  Q_{2n}''(x) = -Q_{2n-2}(x) ,

          Q_{2n}(0) = Q_{2n}(1) = 0.

Furthermore,

(2.2.20)  Q_{2n}^(2p)(0) = Q_{2n}^(2p)(1) = 0 ,  p = 0, 1, ..., n - 1,

          Q_{2n}^(2n)(x) = (-1)^n ,

          Q_{2n}^(2j)(x) = (-1)^j Q_{2n-2j}(x).

Using (2.2.13) we note that

Q_2(x) = -∫_0^1 G_1(x,t) dt ,

Q_4(x) = -∫_0^1 G_1(x,t) Q_2(t) dt = ∫_0^1 G_2(x,t) dt ,

and in general,

(2.2.21)  Q_{2m}(x) = (-1)^m ∫_0^1 G_m(x,t) dt.

Explicitly, some of the first Euler polynomials are given by

Q_2(x) = x(1-x)/2! ,  Q_4(x) = [x^2(1-x)^2 + x(1-x)]/4! ,

Q_6(x) = [x^3(1-x)^3 + 3x^2(1-x)^2 + 3x(1-x)]/6!.
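These explicit forms, together with the differentiation rule (2.2.19) they must satisfy, can be confirmed in exact rational arithmetic (a verification sketch, not from the text):

```python
from fractions import Fraction as F
from math import factorial

def second_derivative(p):
    """p = [c0, c1, ...] represents c0 + c1 x + c2 x^2 + ..."""
    return [k * (k - 1) * c for k, c in enumerate(p)][2:]

Q2 = [F(c, factorial(2)) for c in (0, 1, -1)]              # x(1-x)/2!
Q4 = [F(c, factorial(4)) for c in (0, 1, 0, -2, 1)]        # [x^2(1-x)^2 + x(1-x)]/4!
Q6 = [F(c, factorial(6)) for c in (0, 3, 0, -5, 0, 3, -1)] # expanded form of Q6

# Q_{2n}(0) = Q_{2n}(1) = 0, and Q_{2n}'' = -Q_{2n-2}  (cf. (2.2.19))
for Q in (Q2, Q4, Q6):
    assert Q[0] == 0 and sum(Q) == 0     # values at x = 0 and x = 1
assert second_derivative(Q2) == [F(-1)]              # -Q0
assert second_derivative(Q4) == [-c for c in Q2]
assert second_derivative(Q6) == [-c for c in Q4]
```

Exact coefficient lists make the sign pattern (2.2.20) visible as well: each second derivative flips the sign of the next lower Euler polynomial.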


Proof of Theorem 2.1

Let P_{2m-1} denote the class of polynomials of degree ≤ 2m - 1. Following the notation used by Birkhoff and Priver [1967] we shall denote

(2.3.1)  G_m^(j,0)(x,t) = ∂^j G_m(x,t)/∂x^j.

Since L_{2m-1}[u,x] = u(x) for u(x) ∈ P_{2m-1}, it follows from the Peano theorem that for u ∈ C^(2m)[0,1]

(2.3.2)  e(x) = u(x) - L_{2m-1}[u,x] = ∫_0^1 G_m(x,t) u^(2m)(t) dt ,

where G_m(x,t) is the Peano kernel defined by (2.1.10) and (2.2.14). Differentiating (2.3.2), we have

(2.3.3)  e^(2j)(x) = u^(2j)(x) - L_{2m-1}^(2j)[u,x] = ∫_0^1 G_m^(2j,0)(x,t) u^(2m)(t) dt.

Let us substitute u(x) = Q_{2m}(x) (as defined by (2.1.8)) in (2.3.3) and use various properties as given by (2.2.20) and (2.2.21). We then obtain

(2.3.4)  Q_{2m}^(2j)(x) - L_{2m-1}^(2j)[Q_{2m},x] = ∫_0^1 G_m^(2j,0)(x,t) Q_{2m}^(2m)(t) dt.

We know from (2.2.20)

(2.3.5)  Q_{2m}^(2m)(t) = (-1)^m.

Moreover,

(2.3.6)  Q_{2m}^(2p)(0) = Q_{2m}^(2p)(1) = 0 ,  p = 0, 1, ..., m - 1.

It follows that

(2.3.7)  L_{2m-1}[Q_{2m},x] = 0

identically. Thus (2.3.4) can be rewritten as

(2.3.8)  Q_{2m}^(2j)(x) = (-1)^m ∫_0^1 G_m^(2j,0)(x,t) dt.

Next we note from (2.2.14) that

G_m^(2,0)(x,t) = G_{m-1}(x,t).

Hence

G_m^(4,0)(x,t) = G_{m-1}^(2,0)(x,t) = G_{m-2}(x,t) ,

and in general,

(2.3.9)  G_m^(2j,0)(x,t) = G_{m-j}(x,t).

From (2.2.16) and (2.3.9) we have

(2.3.10)  (-1)^(m-j) G_m^(2j,0)(x,t) = (-1)^(m-j) G_{m-j}(x,t) ≥ 0

in the unit square 0 ≤ x ≤ 1, 0 ≤ t ≤ 1.

Combining (2.3.3), (2.3.9), (2.3.10), (2.2.19) and (2.3.8), it follows that

|e^(2j)(x)| ≤ U ∫_0^1 |G_m^(2j,0)(x,t)| dt

           = U | ∫_0^1 G_m^(2j,0)(x,t) dt |

           = U Q_{2m-2j}(x).

This proves (2.1.11).

We next turn to prove (2.1.12). Due to (2.3.9), it is enough to prove (2.1.12) for j = 1. From (2.2.14), it follows that

(2.3.11)  G_m^(1,0)(x,t) = ∫_0^x y G_{m-1}(y,t) dy + ∫_x^1 (y-1) G_{m-1}(y,t) dy.

Therefore

(2.3.12)  ∫_0^1 |G_m^(1,0)(x,t)| dt ≤ ∫_0^1 ∫_0^x y |G_{m-1}(y,t)| dy dt

                                     + ∫_0^1 ∫_x^1 (1-y) |G_{m-1}(y,t)| dy dt.

Recalling (2.2.21),

Q_{2m-2}(y) = (-1)^(m-1) ∫_0^1 G_{m-1}(y,t) dt ,  m = 2, 3, ...,

and the fact that in the unit square 0 ≤ x ≤ 1, 0 ≤ t ≤ 1,

(-1)^(m-1) G_{m-1}(y,t) ≥ 0 ,

we can assert that

(2.3.13)  Q_{2m-2}(y) = ∫_0^1 |G_{m-1}(y,t)| dt.

On changing the order of integration in (2.3.12) and making use of (2.3.13), we obtain

(2.3.14)  ∫_0^1 |G_m^(1,0)(x,t)| dt ≤ ∫_0^x y Q_{2m-2}(y) dy + ∫_x^1 (1-y) Q_{2m-2}(y) dy

                                    =: X_{2m-2}(x).


Using (2.2.19) we note that

(2.3.15)  X_{2m-2}(x) = -∫_0^x y Q_{2m}''(y) dy - ∫_x^1 (1-y) Q_{2m}''(y) dy.

On integrating by parts (and using Q_{2m}(0) = Q_{2m}(1) = 0), we have

(2.3.16)  X_{2m-2}(x) = [-y Q_{2m}'(y)]_0^x + ∫_0^x Q_{2m}'(y) dy

                        - [(1-y) Q_{2m}'(y)]_x^1 - ∫_x^1 Q_{2m}'(y) dy

                      = -x Q_{2m}'(x) + (1-x) Q_{2m}'(x) + 2 Q_{2m}(x)

                      = (1 - 2x) Q_{2m}'(x) + 2 Q_{2m}(x).

Also

(2.3.17)  X_{2m-2}'(x) = (1 - 2x) Q_{2m}''(x) = -(1 - 2x) Q_{2m-2}(x).

Since Q_{2m-2} vanishes only at x = 0 and x = 1, it follows that the only critical point of X_{2m-2}(x) inside (0,1) is at x = 1/2. Also we note that X_{2m-2}(1) = X_{2m-2}(0).





Further,

(2.3.18)  X_{2m-2}(1) - X_{2m-2}(1/2) = ∫_{1/2}^1 (2y - 1) Q_{2m-2}(y) dy > 0.

Thus we conclude that X_{2m-2}(x) has an absolute maximum at x = 0 and x = 1. Therefore, from (2.3.2), (2.3.14), and (2.3.11), it follows that

(2.3.19)  |e'(x)| ≤ U ∫_0^1 |G_m^(1,0)(x,t)| dt

                  ≤ U X_{2m-2}(x)

                  = U [(1 - 2x) Q_{2m}'(x) + 2 Q_{2m}(x)]

                  ≤ U X_{2m-2}(1).

On using (2.3.16) and the symmetry Q_{2m}(1-x) = Q_{2m}(x), it follows that

|e'(x)| ≤ U X_{2m-2}(1) = -U Q_{2m}'(1) = U Q_{2m}'(0) ,

which proves (2.1.12).

That (2.1.11) and (2.1.12) are best possible follows from the Peano theorem, or more simply, by choosing u(x) = Q_{2m}(x), the Euler polynomial defined by (2.1.8). In view of (2.2.20), we have U = max_{0≤x≤1} |Q_{2m}^(2m)(x)| = 1. Further use of (2.2.20) and the definition of L_{2m-1}[u,x] shows that L_{2m-1}[Q_{2m},x] is identically zero. Our choice of u(x) then gives pointwise equality in (2.1.11). Similarly it can be shown that (2.1.12) is also pointwise best possible. This proves the theorem.

It is perhaps worth remarking that any exact evaluation of the integral of the absolute value of a Peano kernel results in an exact error bound (see Sard [1963] or Stroud [1974]). Generally, error bounds resulting from integration of a Peano kernel under the assumption that u(x) ∈ C^k[a,b] also hold for u having piecewise continuous kth derivative on [a,b], and even for u having (k-1)st derivative absolutely continuous on [a,b]. In the case given here we can thus expand the class of functions for which the error bounds of Theorem 2.1 hold and hence are best possible.

As Theorem 2.1 is stated for functions u(x) 2m times continuously differentiable, it also holds when the 2mth derivative is merely piecewise continuous on [0,1]. Moreover, the theorem holds even in the case that u(x) has its (2m-1)st derivative absolutely continuous. In this last case U, instead of being the max of the 2mth derivative on [0,1], becomes the "L infinity" norm of the generalized 2mth derivative. In the following chapters the class of functions k times continuously differentiable, the class of functions having piecewise continuous kth derivative, and the class having (k-1)st derivative absolutely continuous may be treated as being interchangeable.





CHAPTER THREE
MORE POLYNOMIAL ERROR BOUNDS

Introduction and Statement of Theorems

Let u ∈ C^(2m+2)[0,h] be given. It follows from a result of Schoenberg [1966] that there exists a unique polynomial w_{2m+1}[u,x] of degree ≤ 2m + 1 satisfying

(3.1.1)  w_{2m+1}[u,0] = u(0) ,  w_{2m+1}[u,h] = u(h) ,

         w_{2m+1}^(p)[u,0] = u^(p)(0) ,

         w_{2m+1}^(p)[u,h] = u^(p)(h) ,

         p = 2, 3, ..., m + 1.

Theorems 3.1 and 3.2 will give bounds on u^(j)(x) - w_{2m+1}^(j)(x) for the cases m = 2 and m = 3 of polynomials w_{2m+1} satisfying (3.1.1).

The polynomial w_{2m+1}[u,x] can be expressed in relation to the Hermite polynomial v_{2m-1}[u'',x]. To illustrate the relation between w_{2m+1} and v_{2m-1}, let h = 1 and let v_{2m-1}[g,x] be the Hermite polynomial of degree at most 2m - 1 matching g =: u'' and its first m - 1 derivatives at 0 and 1. We can represent v_{2m-1}[g,x] as

(3.1.2)  v_{2m-1}[g,x] = A_0(x) g(0) + B_0(x) g(1)

                       + A_1(x) g'(0) + B_1(x) g'(1)

                       + A_2(x) g''(0) + B_2(x) g''(1)

                       + ... + A_{m-1}(x) g^(m-1)(0) + B_{m-1}(x) g^(m-1)(1) ,





where A_i(x) and B_i(x), i = 0, 1, ..., m - 1, are polynomials of degree 2m - 1 or less satisfying

(3.1.3)  A_i^(j)(0) = δ_ij ,  A_i^(j)(1) = 0 ,

         B_i^(j)(0) = 0 ,  B_i^(j)(1) = δ_ij ,

         i, j = 0, 1, ..., m - 1.

Define for i = 0, 1, ..., m - 1

(3.1.4)  C_i(x) = ∫_0^1 G_1(x,t) A_i(t) dt ,

         D_i(x) = ∫_0^1 G_1(x,t) B_i(t) dt.

From (3.1.4), (3.1.3) and (2.2.5)-(2.2.8), it follows that for i = 0, 1, ..., m - 1

(3.1.5)  C_i^(j)(0) = δ_{i(j-2)} ,  C_i^(j)(1) = 0 ,

         D_i^(j)(0) = 0 ,  D_i^(j)(1) = δ_{i(j-2)} ,

         j = 2, 3, ..., m + 1,

where

(3.1.6)  C_i(0) = D_i(0) = C_i(1) = D_i(1) = 0 ,

and C_i, D_i are polynomials of degree 2m + 1 or less.

For a given u ∈ C^(2m)[0,1] we can use (3.1.5) and (3.1.6) to give w_{2m+1}[u,x] in the form

(3.1.7)  w_{2m+1}[u,x] = u(0) (1-x) + u(1) x

                       + u''(0) C_0(x) + u''(1) D_0(x)

                       + u^(3)(0) C_1(x) + u^(3)(1) D_1(x)

                       + ... + u^(m+1)(0) C_{m-1}(x) + u^(m+1)(1) D_{m-1}(x).





For m = 2 and m = 3, we give (3.1.7) explicitly. For m = 2, if u ∈ C^(6)[0,1], then the unique quintic w_5[u,x] matching u and its second and third derivatives at 0 and 1 is given by

(3.1.8)  w_5[u,x] = (1-x) u(0) + x u(1)

         + u''(0) [-7x/20 + x^2/2 - x^4/4 + x^5/10]

         + u''(1) [-3x/20 + x^4/4 - x^5/10]

         + u'''(0) [-x/20 + x^3/6 - x^4/6 + x^5/20]

         + u'''(1) [x/30 - x^4/12 + x^5/20].

For u ∈ C^(8)[0,1], the unique polynomial w_7[u,x] of degree ≤ 7, matching u and its second, third and fourth derivatives at 0 and 1, is given by

(3.1.9)  w_7[u,x] = (1-x) u(0) + x u(1)

         + u''(0) [-5x/14 + x^2/2 - x^5/2 + x^6/2 - x^7/7]

         + u''(1) [-x/7 + x^5/2 - x^6/2 + x^7/7]

         + u^(3)(0) [-13x/210 + x^3/6 - 3x^5/10 + 4x^6/15 - x^7/14]

         + u^(3)(1) [4x/105 - x^5/5 + 7x^6/30 - x^7/14]

         + u^(4)(0) [-x/210 + x^4/24 - 3x^5/40 + x^6/20 - x^7/84]

         + u^(4)(1) [-x/280 + x^5/40 - x^6/30 + x^7/84].
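Formula (3.1.8) can be exercised in exact arithmetic. For the sample choice u(x) = x^6 (ours, not the dissertation's), only the u(1), u''(1) and u'''(1) terms survive, the quintic below reproduces u, u'' and u''' at both endpoints, and its midpoint error comes out to exactly -11/64:

```python
from fractions import Fraction as F

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pscale(a, p):
    return [a * c for c in p]

def pderiv(p):
    return [k * c for k, c in enumerate(p)][1:]

def peval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

# bracketed coefficient polynomials of (3.1.8) that multiply u''(1) and u'''(1);
# for u(x) = x^6 the data are u(0)=0, u(1)=1, u''(0)=0, u''(1)=30,
# u'''(0)=0, u'''(1)=120, so the u''(0), u'''(0) terms drop out
d0 = [0, F(-3, 20), 0, 0, F(1, 4), F(-1, 10)]
d1 = [0, F(1, 30), 0, 0, F(-1, 12), F(1, 20)]

w5 = padd([F(0), F(1)],                         # (1-x)u(0) + x u(1) = x
          padd(pscale(F(30), d0), pscale(F(120), d1)))

w5_2 = pderiv(pderiv(w5))
w5_3 = pderiv(w5_2)
assert peval(w5, 0) == 0 and peval(w5, 1) == 1        # matches u at 0 and 1
assert peval(w5_2, 0) == 0 and peval(w5_2, 1) == 30   # matches u'' at 0 and 1
assert peval(w5_3, 0) == 0 and peval(w5_3, 1) == 120  # matches u''' at 0 and 1
# midpoint error e(1/2) = u(1/2) - w5(1/2)
assert F(1, 64) - peval(w5, F(1, 2)) == F(-11, 64)
```

The magnitude 11/64 equals 6! times the p = 0 bound of Theorem 3.1 at x = 1/2, consistent with that bound being attained (here u^(6) ≡ 720).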

The following theorem concerns the quintic interpolant w_5.

Theorem 3.1 Let u ∈ C^(6)[0,1] and let w_5[u,x] satisfy

(3.1.10)  w_5^(p)[u,0] = u^(p)(0) ,

          w_5^(p)[u,1] = u^(p)(1) ,  p = 0, 2, 3.

Denote

(3.1.11)  e(x) = u(x) - w_5[u,x]

and

(3.1.12)  U = max_{0≤x≤1} |u^(6)(x)|.

Then for 0 ≤ x ≤ 1 and 0 ≤ p ≤ 5, the following pointwise bounds hold:

(3.1.13)  |e^(p)(x)| ≤ U f_{0,p}(x) ,

where

f_{0,0}(x) = [ x^3(1-x)^3 + x^2(1-x)^2/2 + x(1-x)/2 ] / 6! ,

f_{0,1}(x) = [ 1/60 - x^3(1-x)^3/3 ] / 4! ,

f_{0,2}(x) = [ x^2(1-x)^2 ] / 4! ,

f_{0,3}(x) = [ x(x-1)(2x-1) ] / 12 ,  0 ≤ x ≤ 1/3 ;

          = [ 16x^3 - 105x^2 + 197x - 162 + 66/x - 13/x^2 + 1/x^3 ] / 96 ,

            1/3 ≤ x ≤ 1/2 ;

f_{0,4}(x) = [ 48x^5 + 42x^4 - 100x^3 + 54x^2 - 12x + 1 ] / [ 12(1-2x)^3 ] ,  0 ≤ x ≤ 1/3 ;

          = [ -6(x-1/2)^2 + 1/2 ] / 12 ,  1/3 ≤ x ≤ 2/3 ;

f_{0,5}(x) = -(x-1/2)^4 + 3(x-1/2)^2/2 + 3/16 ,  0 ≤ x ≤ 1 ,

and where f_{0,3} and f_{0,4} are extended to the whole of [0,1] by even symmetry about x = 1/2.

Furthermore, the functions f_{0,p}, p = 0, 2, 3, 4, and 5, are pointwise best possible. The functions f_{0,2}, f_{0,3}, f_{0,4} and f_{0,5} are those of Birkhoff and Priver [1967] for two point cubic interpolation.
That these functions also serve as error bounds in the present case is a consequence of the fact that w_5''[u,x] is the unique cubic matching u'' and u''' at 0 and 1. In other words, w_5''[u,x] is the Hermite cubic interpolant v_3[g,x] where g = u''. The error bounds given by Birkhoff and Priver in terms of max_{0≤x≤1} |g^(4)(x)| are now expressed in terms of U = max_{0≤x≤1} |u^(6)(x)| (since g^(4) is in fact u^(6)).

Denoting

(3.1.14)  c_p = max_{0≤x≤1} f_{0,p}(x) ,  p = 0, 1, ..., 5,

we have

c_0 = 11/(64·6!) ,  c_1 = 1/(2·6!) ,  c_2 = 1/(2^4·4!) ,

c_3 = √3/(9·4!) ,  c_4 = 1/12 ,  c_5 = 1/2.

From (3.1.14) and (3.1.13) it follows that for every u ∈ C^(6)[0,1]

(3.1.15)  max_{0≤x≤1} |e^(p)(x)| ≤ c_p U ,  p = 0, 1, ..., 5.

Remark 3.1 Note that

c_p = max_{0≤x≤1} |f_{0,0}^(p)(x)|.

If we set u(x) = f_{0,0}(x), then we have

e(x) = f_{0,0}(x) - w_5[f_{0,0},x] = f_{0,0}(x)

and U = max_{0≤x≤1} |f_{0,0}^(6)(x)| = 1. By Remark 3.1 we see that for u(x) = f_{0,0}(x) equality is attained in (3.1.15) for p = 0, 1, 2, 3, 4, 5. The constants c_p are thus the smallest possible.
The next theorem gives error bounds for w_7, analogous to the error bounds for w_5 given in Theorem 3.1.
Theorem 3.2 Let u ∈ C^(8)[0,1], and let w_7[u,x] be a polynomial of degree 7 or less satisfying

(3.1.16)  w_7^(p)[u,0] = u^(p)(0) ,

          w_7^(p)[u,1] = u^(p)(1) ,  p = 0, 2, 3, 4.

Denote

(3.1.17)  e(x) = u(x) - w_7[u,x]

and denote

(3.1.18)  U = max_{0≤x≤1} |u^(8)(x)|.

Then, for 0 ≤ x ≤ 1 and 0 ≤ p ≤ 7, the following pointwise bounds hold:

(3.1.19)  |e^(p)(x)| ≤ U f_{1,p}(x) ,

where

f_{1,0}(x) = [ x^4(1-x)^4 + (2/5)x^3(1-x)^3 + x^2(1-x)^2/5 + x(1-x)/5 ] / 8! ;

f_{1,1}(x) = (1/5)(1/8!) - (1/4)(1/6!) x^4(1-x)^4 ;

f_{1,2}(x) = x^3(1-x)^3 / 6! ;

f_{1,3}(x) = x^2(1-x)^2(1-2x)/240 ,  0 ≤ x ≤ 2/5 ;

          = x^2(1-x)^2(1-2x)/240

            + T^4 [ 10T^2(x-1)^2 + 2T(-15x^2+2x+1) + 5x(5x-2) ] / 120 ,

            2/5 ≤ x ≤ 1/2 ,

where

T = [ (3x-1)(5x+1) + (x-1)(-15x^2+6x+1)^(1/2) ] / 12 ;

f_{1,4}(x) = x(1-x)(5x^2-5x+1)/120 ,  0 ≤ x ≤ (4-√6)/10 ;

          = x(1-x)(5x^2-5x+1)/120

            + T_1^4 [ 20T_1^2(2x^3-3x^2+x) + 12T_1(-5x^3+8x^2-x)

            + (10x^3-18x^2+9x-1) ] / 12 ,

            (4-√6)/10 ≤ x ≤ (3-√3)/6 ,

where

T_1 = [ 15x^2 - 9x - (x-1)(3x(4-5x))^(1/2) ] / (6x) ;

          = x(x-1)(5x^2-5x+1)/120

            + W^4 x [ 10W^2(2x^2-3x+1) + 4W(15x^2-21x+6)

            + 5(10x^2-12x+3) ] / 60 ,

            (3-√3)/6 ≤ x ≤ (6-√6)/10 ,

where

W = [ 3(1-x)(5x-2) + x(3(1-x)(5x-1))^(1/2) ] / [ 6(x-1)(2x-1) ] ;

          = x(x-1)(5x^2-5x+1)/120 ,  (6-√6)/10 ≤ x ≤ 1/2 ;

f_{1,5}(x) = (2x-1)(10x^2-10x+1)/120

            + W_1^4 [ 20W_1^2(6x^2-6x+1) + 24W_1(15x^2-14x+2)

            + 30(10x^2-8x+1) ] / 120 ,

            0 ≤ x ≤ (4-√6)/10 ,

where

W_1 = [ 15x^2 - 14x - 2 - x(3x(4-5x))^(1/2) ] / (12x^2 - 12x + 2) ;

          = (2x-1)(10x^2-10x+1)/120 ,  (4-√6)/10 ≤ x ≤ (6-√6)/10 ;

          = (2x-1)(10x^2-10x+1)/120

            + T_2^4 [ 20T_2^2(6x^2-6x+1) + 24T_2(-15x^2+16x-3)

            + 30(10x^2-12x+3) ] / 120 ,

            (6-√6)/10 ≤ x ≤ 1/2 ,

where

T_2 = [ 15x^2 - 16x + 3 - (x-1)(-15x^2+18x-3)^(1/2) ] / (12x^2 - 12x + 2) ;

f_{1,6}(x) = [ 15x^2 + 5x - 1 ] / 10

            + W_2^4 [ 2W_2^2(x-1/2) + W_2(15x-7)/5 + 5x/2 - 1 ] ,  0 ≤ x ≤ 2/5 ,

where

W_2 = [ -15x + 7 - (-15x^2+6x+1)^(1/2) ] / (12x - 6) ;

          = -(x-1/2)^2/2 + 1/40 ,  2/5 ≤ x ≤ 1/2 ;

f_{1,7}(x) = 2(x-1/2)^6 - 5(x-1/2)^4/2 + 15(x-1/2)^2/8 + 5/32 ,  0 ≤ x ≤ 1 ,

and where f_{1,3}, f_{1,4}, f_{1,5} and f_{1,6} are extended to (1/2,1] by symmetry about x = 1/2. Furthermore, each of the functions f_{1,p}, p = 0, 2, 3, ..., 7, is pointwise exact.

Setting d_p = max_{0≤x≤1} f_{1,p}(x), we have

(3.1.20)  d_0 = 93/(1280·8!) ,  d_1 = 1/(5·8!) ,  d_2 = 1/(2^6·6!) ,

          d_3 = √5/30,000 ,  d_7 = 1/2.

From (3.1.19) and (3.1.20) it follows that for every u ∈ C^(8)[0,1]

(3.1.21)  max_{0≤x≤1} |e^(p)(x)| ≤ d_p U ,  p = 0, 1, ..., 7.

Remark 3.2 Analogously to Remark 3.1, note that

(3.1.22)  max_{0≤x≤1} |f_{1,0}^(p)(x)| = d_p.

On setting u = f_{1,0}(x) it follows from (3.1.22) that (3.1.21) is exact for each p.
The following would seem to be a natural generalization of Theorems 3.1 and 3.2.

Conjecture 3.3 Let u ∈ C^(2m+2)[0,1] and let w_{2m+1}[u,x] be the polynomial of degree at most 2m + 1 matching u and its 2nd, 3rd, ..., (m+1)st derivatives at 0 and 1. Denote

(3.1.23)  e(x) = u(x) - w_{2m+1}[u,x]

and

(3.1.24)  U = max_{0≤x≤1} |u^(2m+2)(x)|.

Then for p = 0, 1, 2, we have

(3.1.25)  |e^(p)(x)| ≤ U f_{m-1,p}(x) ,

where

f_{m-1,0}(x) = { Σ_{i=0}^{m} (-1)^(i+1) C(m,i) [ x^(m+2+i) - x ] / [ (m+i+2)(m+i+1) ] } / (2m)! ,

f_{m-1,1}(x) = [ m!(m+1)!/(2m+2)! - x^(m+1)(1-x)^(m+1)/(m+1) ] / (2m)! ,

f_{m-1,2}(x) = [ x^m(1-x)^m ] / (2m)!.

Furthermore, (3.1.25) is pointwise exact for p = 0 and 2.

Analogously to Remarks 3.1 and 3.2, it may be that for every u ∈ C^(2m+2)[0,1] and p = 0, 1, ..., 2m+1,

(3.1.26)  max_{0≤x≤1} |e^(p)(x)| ≤ U max_{0≤x≤1} |f_{m-1,0}^(p)(x)|.

If Equation (3.1.26) holds then it is best possible, as can be verified by choosing u = f_{m-1,0} and noting that then e(x) is the same as f_{m-1,0}(x). For p = 0, 1, 2,

max_{0≤x≤1} |f_{m-1,0}^(p)(x)| = max_{0≤x≤1} f_{m-1,p}(x).

Hence if (3.1.25) holds then (3.1.26) is true for p = 0, 1, 2. As

f_{m-1,0}^(2)(x) = -[ x^m(1-x)^m ] / (2m)! ,

the conjecture of (3.1.26) is related to the following conjecture.

Conjecture 3.4 Let u ∈ C^(2m)[0,1] and let v_{2m-1} be the Hermite polynomial of degree at most 2m - 1 matching u and its first m - 1 derivatives at 0 and 1. Denote

U = max_{0≤x≤1} |u^(2m)(x)|

and

e(x) = v_{2m-1}[u,x] - u(x).

Then

max_{0≤x≤1} |e^(p)(x)| ≤ U max_{0≤x≤1} |(d^p/dx^p)[ x^m(1-x)^m/(2m)! ]| ,

p = 0, 1, 2, ..., 2m - 1.

The results of Birkhoff and Priver demonstrate Conjecture 3.4 for the cases m = 2 and m = 3. Recent work of Bojanov and Varma indicates that Conjecture 3.4 is in fact true.





The next theorem concerns an interpolatory polynomial which enjoys a property similar to that of the above conjectures. Let u ∈ C^(4)[0,1]. Define k_3[u,x] by

(3.1.27)  k_3[u,x] = u(0) (1-x)(1-2x)^2 + u(1/2) 4x(1-x)

                   + u(1) x(1-2x)^2 + u'(1/2) 2x(1-x)(2x-1).

Then k_3[u,x] is the unique polynomial of degree 3 or less satisfying

(3.1.28)  k_3[u,0] = u(0) ,  k_3[u,1] = u(1) ,

          k_3[u,1/2] = u(1/2) ,  k_3'[u,1/2] = u'(1/2).

Theorem 3.3. Let u ∈ C^(4)[0,1]. Denote

e(x) = k_3[u,x] - u(x) ,

U = max_{0≤x≤1} |u^(4)(x)|.

Then for p = 0, 1, 2, 3, we have

(3.1.29)  |e^(p)(x)| ≤ a_p U ,

where

a_0 = 1/(2^6·4!) ,  a_1 = 1/(2^2·4!) ,

a_2 = 5/(2·4!) ,  a_3 = 1/2.

That the a_p are the best possible can be verified by choosing

u(x) = [ x(1-x)(1-2x)^2 ] / (2^2·4!).

Due to the similarity between the proof of Theorem 3.3 and several other proofs in the following chapters, it would be redundant to prove it here.
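Both the cubic-reproduction property behind (3.1.27) and the sharpness of a_0 can be checked directly (a sketch with our own test functions, not part of the text):

```python
from fractions import Fraction as F

def peval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

# basis polynomials of (3.1.27), expanded in powers of x
b_u0    = [1, -5, 8, -4]   # (1-x)(1-2x)^2
b_uhalf = [0, 4, -4]       # 4x(1-x)
b_u1    = [0, 1, -4, 4]    # x(1-2x)^2
b_du    = [0, -2, 6, -4]   # 2x(1-x)(2x-1)

# k3 reproduces cubics: for u(x) = x^3 the data are u(0)=0, u(1)=1,
# u(1/2)=1/8, u'(1/2)=3/4, and k3[u,x] must come out equal to x^3
k3 = [F(0)] * 4
for coef, basis in ((F(0), b_u0), (F(1, 8), b_uhalf), (F(1), b_u1), (F(3, 4), b_du)):
    for k, c in enumerate(basis):
        k3[k] += coef * c
assert k3 == [0, 0, 0, 1]

# sharpness of a0: for u(x) = x(1-x)(1-2x)^2 all four interpolation data
# vanish, so k3[u,x] = 0 and e = -u; max|u| = 1/16 at x = (2 - sqrt 2)/4,
# while max|u''''| = 96, giving max|e|/U = 1/1536 = 1/(2^6 4!) = a0
g = lambda x: x * (1 - x) * (1 - 2 * x) ** 2
peak = max(g(j / 100000.0) for j in range(100001))
assert abs(peak / 96.0 - 1.0 / 1536.0) < 1e-9
```

The same extremal function, scaled, is the one named in the theorem; scaling does not affect the ratio max|e|/U.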





Proof of Theorem 3.1

Let u ∈ C^(6)[0,1]. Then

(3.2.1)  w_5[u,x] = (1-x) u(0) + x u(1)

         + u''(0) [-7x/20 + x^2/2 - x^4/4 + x^5/10]

         + u''(1) [-3x/20 + x^4/4 - x^5/10]

         + u'''(0) [-x/20 + x^3/6 - x^4/6 + x^5/20]

         + u'''(1) [x/30 - x^4/12 + x^5/20]

is the only polynomial of degree ≤ 5 satisfying

(3.2.2)  w_5^(p)[u,0] = u^(p)(0) ,

         w_5^(p)[u,1] = u^(p)(1) ,  p = 0, 2, 3.

Define

(3.2.3)  e(x) = u(x) - w_5[u,x].

Then

(3.2.4)  e^(p)(0) = 0 ,  e^(p)(1) = 0 ,  p = 0, 2, 3,

and

(3.2.5)  e^(6)(x) = Q(x) =: u^(6)(x).

In other words, e(x) is the unique solution of the differential equation (3.2.5) with boundary conditions (3.2.4). We can rephrase (3.2.4) and (3.2.5) as

(3.2.6)  d^2 e/dx^2 = y(x) ,  e(0) = 0 ,  e(1) = 0

and

(3.2.7)  d^4 y/dx^4 = Q(x) ,  y(0) = y(1) = y'(0) = y'(1) = 0.

From (3.2.6), it follows that

(3.2.8)  e(x) = ∫_0^1 G_1(x,z) y(z) dz ,

where

G_1(x,z) = z (x-1) ,  0 ≤ z ≤ x ≤ 1 ;

         = x (z-1) ,  0 ≤ x ≤ z ≤ 1

is the Peano kernel for linear interpolation used in the proof of Theorem 2.1.

Similarly, from Birkhoff and Priver (or by application of the Peano theorem), we have

(3.2.9)  y(z) = ∫_0^1 G_4(z,t) Q(t) dt ,

where

6 G_4(z,t) = (3t^2 - 2t^3) z^3 + 3(t-2) t^2 z^2 + 3t^2 z - t^3 ,  t ≤ z ;

           = (3t^2 - 2t^3 - 1) z^3 + 3(t-1)^2 t z^2 ,  t ≥ z ,

for 0 ≤ t ≤ 1, 0 ≤ z ≤ 1.

Combining (3.2.8) and (3.2.9), we have

(3.2.10)  e(x) = ∫_0^1 G_1(x,z) ∫_0^1 G_4(z,t) Q(t) dt dz

              = ∫_0^1 G_1(x,z) ∫_0^1 G_4(z,t) u^(6)(t) dt dz

              = ∫_0^1 [ ∫_0^1 G_1(x,z) G_4(z,t) dz ] u^(6)(t) dt

              = ∫_0^1 G(x,t) u^(6)(t) dt ,

where

(3.2.11)  G(x,t) = ∫_0^1 G_1(x,z) G_4(z,t) dz.

From (3.2.11) and (2.2.5)-(2.2.8), it follows that

(3.2.12)  G^(2,0)(x,t) = G_4(x,t)

and

(3.2.13)  G^(p+2,0)(x,t) = G_4^(p,0)(x,t) ,  p = 0, 1, 2, 3.

Also, as

G_4(z,t) ≥ 0 ,  0 ≤ z ≤ 1 ,  0 ≤ t ≤ 1 ,

G_1(x,z) ≤ 0 ,  0 ≤ x ≤ 1 ,  0 ≤ z ≤ 1 ,

it follows that

G(x,t) ≤ 0 ,  0 ≤ x ≤ 1 ,  0 ≤ t ≤ 1.

From (3.2.10) and the single sign of G(x,t) we have

(3.2.14)  |e(x)| ≤ ∫_0^1 |G(x,t)| dt · max_{0≤t≤1} |u^(6)(t)|.

In fact, by (3.2.12), (d^2/dx^2) ∫_0^1 G(x,t) dt = ∫_0^1 G_4(x,t) dt, so that

(3.2.15)  ∫_0^1 G(x,t) dt = ∫∫ [ ∫_0^1 G_4(x,t) dt ] dx dx + a x + b ,

where a and b are chosen to satisfy

(3.2.16)  ∫_0^1 G(0,t) dt = ∫_0^1 G(1,t) dt = 0.

We know from Birkhoff and Priver (or Hermite) that

∫_0^1 G_4(x,t) dt = x^2(1-x)^2 / 4!.

Then

∫∫ [ ∫_0^1 G_4(x,t) dt ] dx dx = [ 5x^4/2 - 3x^5 + x^6 ] / 6! ,

and to satisfy (3.2.16), we have a and b of (3.2.15) as

a = -1/(2·6!) ,  b = 0.

Rearranging, we have

(3.2.17)  ∫_0^1 |G(x,t)| dt = -∫_0^1 G(x,t) dt

          = [ x^3(1-x)^3 + (1/2) x^2(1-x)^2 + (1/2) x(1-x) ] / 6!

          = f_{0,0}(x).

Combining (3.2.14) and (3.2.17), we have the result of the theorem for p = 0.

From (3.2.10), we have

(3.2.18)  |e^(1)(x)| ≤ ∫_0^1 |G^(1,0)(x,t)| dt · max_{0≤t≤1} |u^(6)(t)|.

From (3.2.11), we have

(3.2.19)  G^(1,0)(x,t) = ∫_0^x y G_4(y,t) dy + ∫_x^1 (y-1) G_4(y,t) dy.

Therefore, as G_4(y,t) ≥ 0,

(3.2.20)  |G^(1,0)(x,t)| ≤ ∫_0^x y G_4(y,t) dy + ∫_x^1 (1-y) G_4(y,t) dy.

As before, we have

∫_0^1 G_4(y,t) dt = y^2(1-y)^2 / 4!.

Thus

(3.2.21)  ∫_0^1 |G^(1,0)(x,t)| dt ≤ ∫_0^1 ∫_0^x y G_4(y,t) dy dt

                                   + ∫_0^1 ∫_x^1 (1-y) G_4(y,t) dy dt

          = ∫_0^x y ∫_0^1 G_4(y,t) dt dy + ∫_x^1 (1-y) ∫_0^1 G_4(y,t) dt dy

          = ∫_0^x [ y^3(1-y)^2 / 4! ] dy + ∫_x^1 [ y^2(1-y)^3 / 4! ] dy

          = [ 1/60 - x^3(1-x)^3/3 ] / 4!

          = f_{0,1}(x) ,

which achieves its maximum value of 1/1440 at x = 0 and x = 1. We note also that

1/1440 = 1/(2·6!) = c_1 = max_{0≤x≤1} f_{0,1}(x).

Combining (3.2.18) and (3.2.21), we have the result of the theorem for p = 1.

For p = 2, 3, 4, 5, from (3.2.10) and (3.2.13) we have

(3.2.22)  |e^(p)(x)| ≤ ∫_0^1 |G_4^(p-2,0)(x,t)| dt · max_{0≤t≤1} |u^(6)(t)|.

As this inequality is precisely that used by Birkhoff and Priver to derive the functions f_{0,2}, f_{0,3}, f_{0,4} and f_{0,5}, the theorem follows for p = 2, 3, 4, 5. The proof of Theorem 3.2 is very similar and hence omitted.





CHAPTER FOUR
A QUARTIC SPLINE


Introduction and Statement of Theorems

Among the many beautiful properties of the complete cubic spline is the fact that for a given partition and function values, the cubic spline is obtained by solving a tridiagonal, diagonally dominant system of equations. Unfortunately, when one uses higher order complete splines the bandwidth grows. In fact, for a 2m times continuously differentiable spline of order 2m + 1, the bandwidth of the system of equations is 2m + 1. Furthermore, the diagonal becomes less dominant as m increases.

It is natural, then, to increase the order of the spline but preserve the bandwidth. Ideally we would hope to increase the diagonal dominance and the order of convergence. In this chapter we introduce a quartic C^(2) spline which gives O(h^5) rate approximation to a C^(5) function. The quartics are obtained by the solution of a tridiagonally dominant system. As desired, it is more diagonally dominant than the system associated with the complete cubic spline.

The main result of this chapter will be to give an exact error bound for the quartic spline discussed here. We first give the definition.


Let f be a real-valued function defined on [a,b]. Choose a partition {x_i}_{i=0}^k such that

a = x_0 < x_1 < ... < x_k = b.

Let z_i = (x_{i-1} + x_i)/2 be the midpoint of [x_{i-1}, x_i] for i = 1, 2, ..., k, and for these i set h_{i-1} = x_i - x_{i-1}.

Definition 4.1 Given the function f and the partition {x_i}_{i=0}^k, we define a quartic spline s(x) such that

(4.1.1)  s(x) ∈ C^(2)[a,b] ∩ P_4[x_{i-1}, x_i] ,  i = 1, 2, ..., k

(where P_4[x_{i-1}, x_i] denotes the functions which are quartics when restricted to [x_{i-1}, x_i]) ;

(4.1.2)  s(x_i) = f(x_i)  for i = 0, 1, ..., k ,

         s(z_i) = f(z_i)  for i = 1, 2, ..., k ;

and

(4.1.3)  s'(a) = f'(a)  and  s'(b) = f'(b).

Lemma 4.1 Let f be a real-valued function defined on [a,b] and let {x_i}_{i=0}^k be a partition of [a,b]. A quartic spline s satisfies Equations (4.1.1) and (4.1.2) if and only if s satisfies the tridiagonal system of equations, for i = 1, 2, ..., k - 1,

(4.1.4)  -h_i s'(x_{i-1}) + 4(h_i + h_{i-1}) s'(x_i) - h_{i-1} s'(x_{i+1})

         = 16 [ (h_{i-1}/h_i) f(z_{i+1}) - (h_i/h_{i-1}) f(z_i) ]

         - 5 [ (h_{i-1}/h_i) f(x_{i+1}) - (h_i/h_{i-1}) f(x_{i-1}) ]

         - 11 [ (h_{i-1}/h_i) - (h_i/h_{i-1}) ] f(x_i) ,

where h_i = x_{i+1} - x_i. (The last term vanishes on a uniform mesh.)

We will give the proof later.
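Although the proof is deferred, the system can be exercised immediately: for a quartic f, the spline of Definition 4.1 is f itself, so the slopes obtained from the tridiagonal solve must equal f'(x_i). The sketch below (ours, not the dissertation's) checks this for f(x) = x^4 on a deliberately nonuniform mesh; note that the assembled right-hand side includes an f(x_i) term proportional to (h_{i-1}/h_i - h_i/h_{i-1}), which drops out when the knots are equally spaced:

```python
# partition and data (our choices)
x = [0.0, 0.3, 1.0, 1.6, 2.5]
k = len(x) - 1
h = [x[i + 1] - x[i] for i in range(k)]
f = lambda u: u ** 4
df = lambda u: 4.0 * u ** 3
z = [0.0] + [(x[i - 1] + x[i]) / 2.0 for i in range(1, k + 1)]  # z[i], i = 1..k

# assemble the equations for i = 1..k-1; unknowns m_1..m_{k-1}; m_0, m_k given
sub, diag, sup, rhs = [], [], [], []
for i in range(1, k):
    r, rinv = h[i - 1] / h[i], h[i] / h[i - 1]
    b = (16.0 * (r * f(z[i + 1]) - rinv * f(z[i]))
         - 5.0 * (r * f(x[i + 1]) - rinv * f(x[i - 1]))
         - 11.0 * (r - rinv) * f(x[i]))
    sub.append(-h[i]); diag.append(4.0 * (h[i] + h[i - 1])); sup.append(-h[i - 1])
    rhs.append(b)
rhs[0] -= sub[0] * df(x[0])      # move the known endpoint slopes to the right
rhs[-1] -= sup[-1] * df(x[k])

# Thomas algorithm for the tridiagonal, diagonally dominant system
for i in range(1, k - 1):
    w = sub[i] / diag[i - 1]
    diag[i] -= w * sup[i - 1]
    rhs[i] -= w * rhs[i - 1]
m = [0.0] * (k - 1)
m[-1] = rhs[-1] / diag[-1]
for i in range(k - 3, -1, -1):
    m[i] = (rhs[i] - sup[i] * m[i + 1]) / diag[i]

for i in range(1, k):
    assert abs(m[i - 1] - df(x[i])) < 1e-9   # recovered slopes equal f'(x_i)
```

Diagonal dominance (|4(h_i + h_{i-1})| against h_i + h_{i-1}) guarantees the elimination never divides by a small pivot.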





Assuming from (4.1.2) that f(x_i), i = 0, 1, ..., k, and f(z_i), i = 1, 2, ..., k, are known, then (4.1.4) is a system of k - 1 equations in the unknown variables s'(x_i). If we impose the conditions of (4.1.3), that s'(x_0) = f'(a) and s'(x_k) = f'(b) are given, we have k - 1 unknowns and the k - 1 diagonally dominant equations (4.1.4). Lemma 4.1 thus assures us that s'(x_i) can be uniquely determined for given conditions (4.1.1)-(4.1.3). As will be shown in the proof of the lemma, there is, on any given subinterval [x_i, x_{i+1}], a unique quartic s_i(x) satisfying the five conditions

(4.1.5)  s_i(x_i) = f(x_i) ,  s_i(z_{i+1}) = f(z_{i+1}) ,

         s_i(x_{i+1}) = f(x_{i+1}) ,  s_i'(x_i) = s'(x_i) ,

         s_i'(x_{i+1}) = s'(x_{i+1}).

Equations (4.1.4) are derived by imposing the conditions that

s_i''(x_i) = s_{i-1}''(x_i)  for i = 1, 2, ..., k - 1.

For i = 0, 1, ..., k - 1, s_i(x) is thus the restriction of the spline s to [x_i, x_{i+1}].

Summarizing, unique solution of (4.1.4) implies that s(x) is uniquely defined on each partition subinterval [x_i, x_{i+1}], i = 0, 1, ..., k - 1, which is to say, on all of [a,b]. We have shown

Corollary 4.1 The quartic spline of Definition 4.1 is, for a given partition {x_i}_{i=0}^k and function f, unique.
We now make the comparison with the complete cubic spline more explicit. The system of equations corresponding to (4.1.4) for the complete cubic spline has left-hand side

h_i s'(x_{i-1}) + 2(h_i + h_{i-1}) s'(x_i) + h_{i-1} s'(x_{i+1}).

In comparison, (4.1.4) is twice as diagonally dominant.

To interpolate the 2k + 1 function values f(x_i) and f(z_i) using our C^(2) quartic requires solving the tridiagonal system of k - 1 equations (4.1.4). As the cubic spline must match derivative and second derivative values at each interior function value, interpolation of the same 2k + 1 function values by the C^(2) cubic spline would entail solution of a system of 2k - 1 equations. In other words, the matrix equation to be solved for the quartic is only half as large as that required for the cubic.
We can now state the main theorem of this chapter. Given a partition {x_i}_{i=0}^k of [a,b], denote

h = max_{0≤i≤k-1} h_i.

For each x in [a,b], there exists i such that 0 ≤ i ≤ k - 1 and x_i ≤ x ≤ x_{i+1}; for such i, set t = (x - x_i)/h_i.

Theorem 4.1 Let f ∈ C^(5)[a,b] and let {x_i}_{i=0}^k be a partition of [a,b]. Let s(x) be the twice continuously differentiable spline corresponding to f and {x_i}_{i=0}^k, where s satisfies (4.1.1)-(4.1.3). Then

(4.1.6)  |f(x) - s(x)| ≤ |c(t)| h^5 max_{a≤x≤b} |f^(5)(x)| / 5! ,

where

c(t) = [ 3t^2(1-2t)(1-t)^2 + t(1-2t)(1-t) ] / 6.

Define

c_0 = max_{0≤t≤1} |c(t)|.

It follows that

(4.1.7)  |f(x) - s(x)| ≤ c_0 h^5 max_{a≤x≤b} |f^(5)(x)| / 5!.

Furthermore, neither |c(t)| nor c_0 can be improved, as we can show by letting f = x^5/5! and letting k become arbitrarily large for an equally spaced partition. An approximate decimal expression for c_0 is .0244582, and c_0/5! is approximately .000203818.
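The constant c_0 is easily approximated by a grid search (an illustration of the stated decimals, not the dissertation's derivation):

```python
# |c(t)| from (4.1.6), maximized on a fine grid over [0,1]
c = lambda t: (3 * t**2 * (1 - 2*t) * (1 - t)**2 + t * (1 - 2*t) * (1 - t)) / 6.0
c0 = max(abs(c(j / 100000.0)) for j in range(100001))
print(c0, c0 / 120.0)   # roughly .02446 and .000204
```

The maximizer sits near t = 0.24; by the antisymmetry c(1 - t) = -c(t), the same magnitude recurs near t = 0.76.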
We will also show

(4.1.8) Jf'(xi) s'(xi)
& 4 maax f (5)(x)l / 6!

and that this estimate is exact.

Related to Theorem 4.1 is the following conjecture.

Conjecture 4.1 Let f ∈ C(5)[a,b] and let {xi}, i = 0, ..., k, be a
partition of [a,b]. Let s(x) be the twice continuously

differentiable spline corresponding to f and {xi},
where s satisfies (4.1.1)-(4.1.3). Then

(4.1.9) |f'(x) - s'(x)| <= h^4 max{a<=x<=b} |f(5)(x)| / 6!.

If Conjecture 4.1 holds, then the constant 1/6! can not be

improved. This conjecture has been verified numerically.

Remark 4.1 Given f ∈ C(5)[a,b] and a partition {xi}, i = 0, ..., k, of

[a,b], let s be the quartic C(2) spline satisfying

(4.1.1)-(4.1.3). Then the supremum norm ||f(i) - s(i)||
is of order h^(5-i) max{a<=x<=b} |f(5)(x)|, i = 0, 1, 2.





Theorem 4.1 demonstrates that the quartic C(2) spline

gives the best possible order of approximation to
functions from the smooth class C(5). We next discuss

interpolation to the much less smooth class of functions

which are merely continuous on [a,b]. As f'(a) and f'(b)

are not necessarily defined, we consider the quartic C(2)

spline satisfying (4.1.1) and (4.1.2) with boundary
conditions

(4.1.10) s'(a) = s'(b) = 0.

Denote w(f,h) =: sup{ |f(x) - f(y)| : |x - y| <= h }.
Theorem 4.2 Let f ∈ C[a,b]. If {xi}, i = 0, ..., k, is the partition

of equally spaced knots, then for xi <= x <= zi+1 = (xi +

xi+1)/2 and t = (x - xi)/hi, i = 0, 1, ..., k - 1, we
have

(4.1.11) |f(x) - s(x)| <= c(t) w(f,h) ; 0 <= t <= 1/2,

and for zi+1 <= x <= xi+1, or 1/2 <= t <= 1,

(4.1.12) |f(x) - s(x)| <= c(1-t) w(f,h),

where c(t) = 1 + (13/3)t - 3t^2 - (58/3)t^3 + 16t^4.

Note that max{0<=t<=1/2} c(t) is approximately 1.6572.
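The constant 1.6572 can likewise be checked by a direct grid search over c(t); the snippet below is an illustrative check, not part of the original text:

```python
import numpy as np

# c(t) from Theorem 4.2, for the equally spaced case.
c = lambda t: 1 + (13/3)*t - 3*t**2 - (58/3)*t**3 + 16*t**4
t = np.linspace(0.0, 0.5, 1_000_001)
cmax = c(t).max()
print(cmax)   # ≈ 1.6572, attained near t ≈ 0.259
```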

The bound of the preceding theorem is only valid for

equally spaced knots. For arbitrary partitions we can not

give a bound of this same form. However, if

m =: max{0<=i<=k-1} hi / min{0<=i<=k-1} hi,
we have the following theorem.





Theorem 4.3 Let f(x) ∈ C[a,b], and let s be the C(2)

quartic spline satisfying (4.1.1), (4.1.2), and (4.1.10).

Then for xi <= x <= zi+1, and i = 0, 1, 2, ..., k - 1

(i.e., for 0 <= t <= 1/2 with t = (x - xi)/hi),

(4.1.13) |f(x) - s(x)| <= c1(t) w(f,h),

and for zi+1 <= x <= xi+1, i.e. 1/2 <= t <= 1,

(4.1.14) |f(x) - s(x)| <= c1(1-t) w(f,h),
where

c1(t) = [1 + 10t^2 - 28t^3 + 16t^4]
+ (8/3)[m^2 + m] [t(1-2t)(1-t)].

Theorems 4.2 and 4.3 indicate that for suitable

partitions the quartic C2 spline can provide acceptable

approximations to functions which are merely continuous on

[a,b].

Proof of Lemma 4.1

We first give an expression for the unique quartic

matching function and derivative values at endpoints and

function values at the midpoint. Specifically, let f be a

real-valued function defined on [0,1], and differentiable

at 0 and 1. Let

(4.2.1)

P1(x) = 1 - 11x^2 + 18x^3 - 8x^4 = (1-2x)(1-x)^2(1+4x)

P2(x) = 16x^2 - 32x^3 + 16x^4 = 16x^2(1-x)^2

P3(x) = -5x^2 + 14x^3 - 8x^4 = -(1-2x)x^2[1+4(1-x)]

P4(x) = x - 4x^2 + 5x^3 - 2x^4 = x(1-2x)(1-x)^2

P5(x) = x^2 - 3x^3 + 2x^4 = x^2(1-2x)(1-x)





Then

(4.2.2) L[f,x] = P1(x)f(0) + P2(x)f(1/2) + P3(x)f(1)

+ P4(x)f'(0) + P5(x)f'(1)
is the unique quartic satisfying

(4.2.3) L[f,0] = f(0), L[f,1/2] = f(1/2),

L[f,1] = f(1), L'[f,0] = f'(0), L'[f,1] = f'(1).

L is a linear operator and a projection. If f is a

polynomial of degree four or less, then L[f,x] = f(x). In

the future calculations we will need the following facts

about the quartics Pi.
(4.2.4) P1''(0) = -22, P1''(1) = -10,

P2''(0) = 32, P2''(1) = 32,

P3''(0) = -10, P3''(1) = -22,

P4''(0) = -8, P4''(1) = -2,

P5''(0) = 2, P5''(1) = 8.
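Both the cardinal interpolation conditions behind (4.2.2)-(4.2.3) and the second-derivative table (4.2.4) are mechanical to verify; the following Python sketch (names are illustrative, not from the thesis) does so from the coefficient lists in (4.2.1):

```python
import numpy as np

# Coefficient lists [c0, ..., c4] for P1..P5 from (4.2.1).
P = np.array([
    [1.0, 0.0, -11.0,  18.0, -8.0],   # P1
    [0.0, 0.0,  16.0, -32.0, 16.0],   # P2
    [0.0, 0.0,  -5.0,  14.0, -8.0],   # P3
    [0.0, 1.0,  -4.0,   5.0, -2.0],   # P4
    [0.0, 0.0,   1.0,  -3.0,  2.0],   # P5
])

val  = lambda c, x: sum(ci * x**j for j, ci in enumerate(c))
der  = lambda c, x: sum(j * ci * x**(j - 1) for j, ci in enumerate(c) if j >= 1)
der2 = lambda c, x: sum(j*(j - 1) * ci * x**(j - 2) for j, ci in enumerate(c) if j >= 2)

# The data functionals f(0), f(1/2), f(1), f'(0), f'(1) applied to P1..P5
# give the identity matrix; this is the content of (4.2.2)-(4.2.3).
data = np.array([[val(c, 0.0), val(c, 0.5), val(c, 1.0), der(c, 0.0), der(c, 1.0)]
                 for c in P])
print(data)

# The second-derivative table (4.2.4).
print([(der2(c, 0.0), der2(c, 1.0)) for c in P])
```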
Let zi+1 = (xi + xi+1)/2. On the interval [xi, xi+1],

the unique quartic Li[f,x] interpolating f(xi), f'(xi),

f(zi+1), f(xi+1), and f'(xi+1) can be expressed in terms

of the Pi. In fact, let t = (x - xi)/hi, where hi = xi+1 - xi.
Then

(4.2.5) Li[f,x] = f(xi) P1(t) + f(zi+1) P2(t)

+ f(xi+1) P3(t) + hi f'(xi) P4(t)

+ hi f'(xi+1) P5(t).

Let s be the quartic spline of Definition 4.1 corres-

ponding to f and the given partition. Then the restric-

tion si(x) of s to [xi,xi+1] is a quartic. Hence





Li[s,x] = si(x). Using the facts that s(xi) = f(xi),

s(zi+1) = f(zi+1), and f(xi+1) = s(xi+1) in (4.2.5), we
have

(4.2.6) si(x) = f(xi) P1(t) + f(zi+1) P2(t)

+ f(xi+1) P3(t) + hi s'(xi) P4(t)

+ hi s'(xi+1) P5(t), t = (x - xi)/hi.
In order that s be twice continuously differentiable,

we must satisfy

(4.2.7) si''(xi+) = si-1''(xi-),

where si is the restriction of s to [xi, xi+1] and si-1 is

the restriction of s to [xi-1, xi]. Differentiating
(4.2.6) twice we have

(4.2.8) s''(xi+) = (1/hi^2) { f(xi) P1''(0) + f(zi+1) P2''(0)

+ f(xi+1) P3''(0) + hi s'(xi) P4''(0)

+ hi s'(xi+1) P5''(0) }.
Similarly, from rewriting (4.2.6) for the interval

[xi-1, xi], we have
(4.2.9) s''(xi-) = (1/hi-1^2) { f(xi-1) P1''(1) + f(zi) P2''(1)

+ f(xi) P3''(1)

+ hi-1 s'(xi-1) P4''(1)

+ hi-1 s'(xi) P5''(1) }.
Setting s''(xi+) = s''(xi-) by equating (4.2.8) and

(4.2.9) and using the Pi''(0) and Pi''(1) from (4.2.4), we
have





(4.2.10) { -22 f(xi) + 32 f(zi+1) - 10 f(xi+1)

- 8 hi s'(xi) + 2 hi s'(xi+1) } / hi^2

= { -10 f(xi-1) + 32 f(zi) - 22 f(xi)

- 2 hi-1 s'(xi-1) + 8 hi-1 s'(xi) } / hi-1^2.

Dividing by two, multiplying by hi hi-1, and putting the

known function values on the right hand side, we have

(4.2.11) -hi s'(xi-1) + 4(hi+hi-1) s'(xi) - hi-1 s'(xi+1)
= -11 [(hi-1/hi) - (hi/hi-1)] f(xi)

+ 16 [(hi-1/hi) f(zi+1) - (hi/hi-1) f(zi)]

- 5 [(hi-1/hi) f(xi+1) - (hi/hi-1) f(xi-1)],

which is the desired system of equations (4.1.4). Having

established Lemma 4.1, we next turn to a proof of Theorem

4.1.
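As an illustrative numerical sketch (not the author's code), the construction of Lemma 4.1 can be assembled directly: solve the tridiagonal system (4.1.4)/(4.2.11) for the interior slopes s'(xi), then evaluate s piecewise by (4.2.6). The end condition s'(x0) = f'(a), s'(xk) = f'(b) is assumed here, as in (4.1.3) and the discussion preceding (4.3.31); the function names are hypothetical.

```python
import numpy as np

def quartic_midpoint_spline(x, fx, fz, dfa, dfb):
    """Sketch of the C(2) quartic spline of Definition 4.1.

    x        -- knots x_0 < ... < x_k
    fx, fz   -- f at the knots and at the midpoints z_1, ..., z_k
    dfa, dfb -- f'(x_0) and f'(x_k), the end condition assumed here
    """
    x, fx, fz = (np.asarray(a, dtype=float) for a in (x, fx, fz))
    k = len(x) - 1
    h = np.diff(x)
    # Tridiagonal system (4.1.4)/(4.2.11) for the interior slopes s'(x_i).
    A = np.zeros((k - 1, k - 1))
    b = np.zeros(k - 1)
    for i in range(1, k):
        hi, hm = h[i], h[i - 1]                     # h_i, h_{i-1}
        r = (-11 * (hm / hi - hi / hm) * fx[i]
             + 16 * (hm / hi * fz[i] - hi / hm * fz[i - 1])
             - 5 * (hm / hi * fx[i + 1] - hi / hm * fx[i - 1]))
        A[i - 1, i - 1] = 4 * (hi + hm)
        if i > 1:
            A[i - 1, i - 2] = -hi
        else:
            r += hi * dfa                           # known boundary slope
        if i < k - 1:
            A[i - 1, i] = -hm
        else:
            r += hm * dfb
        b[i - 1] = r
    sp = np.concatenate(([dfa], np.linalg.solve(A, b), [dfb]))

    def s(xq):
        xq = np.asarray(xq, dtype=float)
        i = np.clip(np.searchsorted(x, xq, side="right") - 1, 0, k - 1)
        t = (xq - x[i]) / h[i]
        P1 = 1 - 11*t**2 + 18*t**3 - 8*t**4          # basis (4.2.1)
        P2 = 16*t**2 - 32*t**3 + 16*t**4
        P3 = -5*t**2 + 14*t**3 - 8*t**4
        P4 = t - 4*t**2 + 5*t**3 - 2*t**4
        P5 = t**2 - 3*t**3 + 2*t**4
        return (fx[i]*P1 + fz[i]*P2 + fx[i + 1]*P3   # representation (4.2.6)
                + h[i] * (sp[i]*P4 + sp[i + 1]*P5))
    return s

# Demo: the spline reproduces any quartic exactly.
xk = np.array([0.0, 0.4, 1.0, 1.5, 2.3, 3.0])
zk = (xk[:-1] + xk[1:]) / 2
s = quartic_midpoint_spline(xk, xk**4, zk**4, 0.0, 4 * 3.0**3)
xx = np.linspace(0.0, 3.0, 1001)
print(np.max(np.abs(s(xx) - xx**4)))                 # ~ machine precision
```

Reproduction of quartics is a convenient correctness check, since L reproduces quartics and the system then forces s' = f'.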


Proof of Theorem 4.1

Our method of proof is to establish a pointwise

bound. As in the proof of Lemma 4.1, let Li[f,x] be the

unique quartic agreeing with f(xi), f(xi+1), f(zi+1),

f'(xi), and f'(xi+1), and let s be the twice continuously

differentiable quartic spline corresponding to f and Equations (4.1.1) to

(4.1.3) on the partition {xi}, i = 0, ..., k. Then for xi <= x <= xi+1,
we have

(4.3.1) |f(x) - s(x)| <= |f(x) - Li[f,x]|

+ |Li[f,x] - s(x)|.
Assume that f ∈ C(5)[a,b]. By a proof attributed to

Cauchy, we know that





(4.3.2) |f(x) - Li[f,x]| <= (hi^5/5!) |t^2(1/2-t)(1-t)^2| U,
where t = (x - xi)/hi and U is the maximum of |f(5)(x)| on

[xi, xi+1]. Equation (4.3.2) gives a pointwise bound for

|f(x) - Li[f,x]|.
Let i be arbitrary and xi < x < xi+1. We next turn
our attention to deriving a similar bound for

|Li[f,x] - s(x)| = |Li[f,x] - si(x)|. Subtracting (4.2.6)
from (4.2.5) gives

(4.3.3) Li[f,x] - si(x) = hi [f'(xi) - s'(xi)] P4(t)

+ hi [f'(xi+1) - s'(xi+1)] P5(t).
Denoting

(4.3.4) e'(xi) = f'(xi) - s'(xi),
then we have from (4.3.3),

(4.3.5) |Li[f,x] - si(x)| <=

hi max{ |e'(xi)|, |e'(xi+1)| } { |P4(t)| + |P5(t)| }.
As P4(t) = t(1-2t)(1-t)^2 and P5(t) = t^2(1-2t)(1-t)

are both positive for 0 < t < 1/2 and both negative for

1/2 < t < 1, |P4(t)| + |P5(t)| = |P4(t) + P5(t)| for
0 <= t <= 1. Then for xi <= x <= xi+1, we have

(4.3.6) |Li[f,x] - s(x)| <=

hi max{ |e'(xi)|, |e'(xi+1)| } |t(1-2t)(1-t)|.
Redefine L so that its restriction to [xi, xi+1] is Li for
each i, i = 0, 1, ..., k-1. Choose i so that |e'(xi)| is
maximal. We then have, for all a <= x <= b,





(4.3.7) |L[f,x] - s(x)| <= h |e'(xi)| |t(1-2t)(1-t)|,

where h = max{0<=j<=k-1} hj,
and where on each subinterval [xj, xj+1], 0 <= j <= k - 1,

we define t = (x - xj)/hj.
The next task is to bound |e'(xi)|. From both sides

of (4.1.4) we subtract

-hi f'(xi-1) + 4(hi + hi-1) f'(xi) - hi-1 f'(xi+1),
thereby defining a functional B0(f):

(4.3.8) hi e'(xi-1) - 4(hi+hi-1) e'(xi) + hi-1 e'(xi+1)

= hi f'(xi-1) - 4(hi+hi-1) f'(xi) + hi-1 f'(xi+1)
- 11 [(hi-1/hi) - (hi/hi-1)] f(xi)
+ 16 [(hi-1/hi) f(zi+1) - (hi/hi-1) f(zi)]

- 5 [(hi-1/hi) f(xi+1) - (hi/hi-1) f(xi-1)]

=: B0(f).
The linear functional BO(f) is identically equal to
zero when f is a polynomial of degree four or less, as can

be directly verified. (The arithmetic of verification is

simplest if one takes xi-1 = -hi-1, xi = 0, xi+1 = hi and
checks the monomials 1, x, x^2, x^3, and x^4.)
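This verification can also be delegated to a few lines of code; the sketch below (illustrative only) evaluates B0 at the monomials with the suggested normalization xi-1 = -hi-1, xi = 0, xi+1 = hi:

```python
import numpy as np

# B0 of (4.3.8) with x_{i-1} = -h0, x_i = 0, x_{i+1} = h1 (h0, h1 standing
# for h_{i-1}, h_i), so that z_i = -h0/2 and z_{i+1} = h1/2.
def B0(f, df, h0, h1):
    return (h1 * df(-h0) - 4 * (h1 + h0) * df(0.0) + h0 * df(h1)
            - 11 * (h0/h1 - h1/h0) * f(0.0)
            + 16 * (h0/h1 * f(h1/2) - h1/h0 * f(-h0/2))
            - 5 * (h0/h1 * f(h1) - h1/h0 * f(-h0)))

for n in range(5):                                  # monomials 1, x, ..., x^4
    fn = lambda x, n=n: x**n
    dfn = lambda x, n=n: 0.0 if n == 0 else n * x**(n - 1)
    print(n, B0(fn, dfn, 0.7, 1.3))                 # each ≈ 0

# Equal spacing: B0(x^5/5!) = h^5/5!, as used later in (4.3.27).
h = 0.5
print(B0(lambda x: x**5/120, lambda x: x**4/24, h, h), h**5/120)
```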

We have chosen i so that |e'(xi)| attains its maximum
value. As

4(hi + hi-1) e'(xi)

= -B0(f) + hi e'(xi-1) + hi-1 e'(xi+1),
it follows that

|4(hi + hi-1) e'(xi)| <= |B0(f)| + |hi e'(xi-1)|
+ |hi-1 e'(xi+1)|

<= |B0(f)| + |(hi + hi-1) e'(xi)|.

Hence

|3(hi + hi-1) e'(xi)| <= |B0(f)|
and

(4.3.9) |e'(xi)| <= |B0(f)| / [3(hi + hi-1)].

As B0(f) is a linear functional which is zero for
polynomials of degree four or less, we can apply the Peano
theorem to get

(4.3.10) B0(f) = ∫_{xi-1}^{xi+1} B0[(x-y)+^4] f(5)(y) dy / 4!.

From (4.3.10) follows

(4.3.11) |B0(f)| <= ∫_{xi-1}^{xi+1} |B0[(x-y)+^4]| dy Ui / 4!,

where Ui is the maximum of |f(5)| on [xi-1, xi+1].

For xi-1 <= y <= xi+1, B0[(x-y)+^4] takes the form

(4.3.12) B0[(x-y)+^4]

= -16(hi + hi-1) (xi - y)+^3 + 4hi-1 (xi+1 - y)^3

- 11 [(hi-1/hi) - (hi/hi-1)] (xi - y)+^4
+ 16 [(hi-1/hi)(zi+1 - y)+^4 - (hi/hi-1)(zi - y)+^4]
- 5 (hi-1/hi)(xi+1 - y)^4.
In order to evaluate the integral of (4.3.11) we need

to know the sign behavior of BO[(x-y) 4]. We rewrite

(4.3.12) in a form which shows its symmetry about xi.

(4.3.13) B0[(x-y)+^4]

= (hi/hi-1) [-5(xi - y) + hi-1] [(xi - y) - hi-1]^3

for xi-1 <= y <= zi,

= (hi/hi-1) (xi - y)^2 [11(xi - y)^2 - 16hi-1(xi - y)
+ 6hi-1^2]

for zi <= y <= xi,

= (hi-1/hi) (xi - y)^2 [11(xi - y)^2 + 16hi(xi - y)
+ 6hi^2]
for xi <= y <= zi+1,

= (hi-1/hi) [-5(xi - y) - hi] [(xi - y) + hi]^3
for zi+1 <= y <= xi+1.

As the expression (4.3.13) has factors which are at

most quadratic, it is fairly easy to determine

the sign of B0[(x-y)+^4]. In fact, B0[(x-y)+^4]

is nonnegative for xi-1 <= y <= xi+1. Evaluation of (4.3.11)
is then straightforward. The term by term integration of

(4.3.13) gives

(4.3.14) ∫_{xi-1}^{xi+1} |B0[(x-y)+^4]| dy = hi hi-1 [hi-1^3 + hi^3] / 10.

From Equation (4.3.11) we conclude that

(4.3.15) |B0(f)| <= Ui hi hi-1 [hi-1^3 + hi^3] / 240.
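The sign claim and the integral (4.3.14) can be spot-checked by quadrature; the following sketch (illustrative, with hypothetical names) samples the kernel on a fine grid:

```python
import numpy as np

# B0 applied (in x) to (x - y)_+^4, knots at -h0, 0, h1; the derivative
# data contribute 4(x - y)_+^3 terms (cf. (4.3.12)).
def B0_kernel(y, h0, h1):
    p = lambda u, n: np.where(u > 0, u, 0.0) ** n
    return (4*h1*p(-h0 - y, 3) - 16*(h1 + h0)*p(0.0 - y, 3)
            + 4*h0*p(h1 - y, 3)
            - 11*(h0/h1 - h1/h0)*p(0.0 - y, 4)
            + 16*(h0/h1*p(h1/2 - y, 4) - h1/h0*p(-h0/2 - y, 4))
            - 5*(h0/h1*p(h1 - y, 4) - h1/h0*p(-h0 - y, 4)))

h0, h1 = 0.7, 1.3
y = np.linspace(-h0, h1, 2_000_001)
K = B0_kernel(y, h0, h1)
print(K.min())                                     # kernel is nonnegative
quad = ((K[:-1] + K[1:]) / 2 * np.diff(y)).sum()   # trapezoid rule
print(quad, h1 * h0 * (h0**3 + h1**3) / 10)        # both sides of (4.3.14)
```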
From (4.3.9) it is then evident that

(4.3.16) |e'(xi)| <=
Ui hi hi-1 [hi-1^3 + hi^3] / [(6!)(hi + hi-1)],

for i = 1, 2, ..., k - 1. As
hi hi-1 [hi-1^3 + hi^3] / (hi + hi-1) <= max{hi^4, hi-1^4}

and as

Ui <= U,
it follows that

(4.3.17) max{1<=j<=k-1} |e'(xj)| <= max{hi^4, hi-1^4} U / (6!) <= h^4 U / (6!).
This is the desired bound on |e'(xi)|.

Applying it in (4.3.7) we have

(4.3.18) |L[f,x] - s(x)| <= h^5 |t(1-2t)(1-t)| U / (6!).
From (4.3.2) follows

(4.3.19) |f(x) - L[f,x]| <= h^5 |t^2(t-1/2)(1-t)^2| U / 5!,

where L restricted to [xi, xi+1] is defined as Li[f,x] and

where h is the maximum of the hi.
We can now combine the bounds on |f(x) - L[f,x]| and

|L[f,x] - s(x)|. From (4.3.19) and (4.3.18), we have

(4.3.20) |f(x) - s(x)| <= h^5 |c(t)| U / 5!,
where

|c(t)| = |3t^2(1-2t)(1-t)^2|/6 + |t(1-2t)(1-t)|/6

= |3t^2(1-2t)(1-t)^2 + t(1-2t)(1-t)| / 6
and

c(t) = [3t(1-t) + 1] [t(1-2t)(1-t)] / 6.
Then

(4.3.21) c0 = max{0<=t<=1} |c(t)|.

To verify (4.3.21), note that

(4.3.22) 6c'(t) = -30t^2(t-1)^2 + 1

= -30 [(t - 1/2) + 1/2]^2 [(t - 1/2) - 1/2]^2 + 1

= -30 [(t - 1/2)^2 - 1/4]^2 + 1.

For 0 <= t <= 1, the roots of c'(t) are

(4.3.23) t = 1/2 ± √(1/4 - 1/√30).

Evaluating c(t) at the roots of c'(t), we get

(4.3.24) c0 = (√(1/4 - 1/√30)) (1/5 + 2/√30) / 6.





We have shown the so-called "direct" part of the

proof, that Equation (4.1.7) holds for c0. It remains to
be shown that the theorem holds for no smaller c0.

In fact, given c < c0, we can produce a function f
and a partition {xi}, i = 0, ..., k, of [-1, 1] such that

(4.3.25) max{-1<=x<=1} |f(x) - s(x)| >

c h^5 max{-1<=x<=1} |f(5)(x)| / 5!.

Often, when polynomial interpolation of degree n is

considered, the worst error is attained by a polynomial of

degree n + 1. As s is a quartic spline, it is natural to

try f(x) = x^5/5! as a possible worst function. A

particularly pleasant feature of the trial worst function f is
that it has fifth derivative identically equal to one.

For xi <= x <= xi+1, we have by the Cauchy formula

(4.3.26) x^5/5! - Li[x^5/5!, x]
= hi^5 [t^2(t-1/2)(t-1)^2] / 5!.

Furthermore, for equally spaced knots xi-1, xi, xi+1, we
can calculate

(4.3.27) B0(x^5/5!) = hi^5/5!.
If e'(xi-1) = e'(xi) = e'(xi+1), we have from (4.3.8)
(4.3.28) e'(xi) = -B0(x^5/5!) / (6hi) = -hi^4/6!.

Equation (4.3.3) then becomes, for f(x) = x^5/5!,

(4.3.29) Li[f,x] - s(x) = -hi hi^4 {P4(t) + P5(t)} / 6!
= hi^5 [t(2t-1)(1-t)] / 6!.
Combining Equations (4.3.26) and (4.3.29), we have,

for xi <= x <= xi+1,





(4.3.30) f(x) - s(x) = hi^5 { t(2t-1)(1-t)/6

+ t^2(t-1/2)(1-t)^2 } / 5!.

As (4.3.30) gives, after taking its absolute value,

precisely our pointwise bound |c(t)| of (4.3.20), we will

have attained c0 provided only that hi = h and, as men-
tioned above,

(4.3.31) e'(xi) = e'(xi+1) = e'(xi-1) = -h^4/6!.

In order that hi = h, we take the knots to be equally

spaced. Attaining (4.3.31) is not so easy. In fact it is

attained only in the limit. The difficulty is the boundary

conditions e' (x0) = e' (xk) = 0. We can show, however,

that as one moves many subintervals away from the

boundaries, e'(xi) goes to -h^4/6!.

Explicitly, let {xi}, i = 0, ..., k, be the partition dividing

[-1,1] into k equal subintervals; in this case, h = hi =

2/k. For i = 1, 2, ..., k - 1, and f = x^5/5!, we have

B0(f) defined on [xi-1, xi+1] and
(4.3.32) B0(f)/h = h^4/5! = e'(xi-1) - 8e'(xi) + e'(xi+1).
We wish to apply (4.3.32) inductively to move away

from the end conditions e'(-1) = e'(1) = 0. In order to

do so we must establish that e'(xi) <= 0 for 0 <= i <= k. We
reason by contradiction.

Let 1 <= i <= k - 1. Suppose e'(xi) > 0. Then

e'(xi-1) + e'(xi+1)

> e'(xi-1) - 8 e'(xi) + e'(xi+1)
= h^4/5!.


































































Hence

max{ |e'(xi-1)|, |e'(xi+1)| } > h^4/[2(5!)],
contradicting the fact (4.3.17) that

h^4/6! >= max{ |e'(xi-1)|, |e'(xi+1)| }.
We have shown by assuming the contrary that

e'(xi) <= 0 for i = 1, 2, ..., k - 1.

Condition (4.1.3) is that e'(x0) and e'(xk) are zero. Thus

(4.3.33) e'(xi) <= 0 for i = 0, 1, ..., k.

Applying (4.3.32) again we have, for i = 1, 2, ..., k - 1,

8e'(xi) = -h^4/5! + e'(xi-1) + e'(xi+1).

As e'(xi-1), e'(xi+1) <= 0, this implies that

8e'(xi) <= -h^4/5!
and

(4.3.34) e'(xi) <= -h^4/[8(5!)].
Similarly, for i = 2, 3, ..., k - 2, we have

8e'(xi) = -h^4/5! + e'(xi-1) + e'(xi+1),
and hence by (4.3.34),

e'(xi) <= { -h^4/5! - h^4/[8(5!)] - h^4/[8(5!)] } / 8

= -(1 + 1/4) h^4 / [8(5!)].

Inductively, for i = j to i = k - j, we will have

e'(xi) <= -{1 + 1/4 + 1/4^2 + ... + 1/4^(j-1)} h^4/[8(5!)].
The geometric series 1 + 1/4 + 1/4^2 + ... is

equal to 1/(1 - 1/4), or 4/3. Thus, in the limit as i, k,

and j go to infinity, we have

(4.3.35) e'(xi) <= -(1/8)(4/3) h^4/5! = -h^4/6!.

We already know from (4.3.17) that |e'(xi)| <= h^4/6!. Thus,

for k > 2j + 1 and k - j > i > j, as j goes to infinity,





we have

(4.3.36) e'(xi) goes to -h^4/6!.
In the sense of (4.3.36), (4.3.31) is satisfied.

Then, as e'(xi) goes to -h^4/6!, x^5/5! - s(x) goes

uniformly to the expression of (4.3.30) and (4.3.20) with

h = hi. It follows that the expression of (4.3.20)

cannot be improved further. In fact we have shown that

|c(t)| offers a pointwise exact bound, and its maximum c0
is the exact norm bound.


Proof of Theorem 4.2

We know from (4.2.6) that for xi <= x <= xi+1,

(4.4.1) s(x) - f(x) = P1(t) f(xi) + P2(t) f(zi+1)

+ P3(t) f(xi+1) + hi P4(t) s'(xi)

+ hi P5(t) s'(xi+1) - f(x).
It is easily verified that P1(t) + P2(t) + P3(t) = 1.
Thus

(4.4.2) s(x) - f(x) = P1(t) [f(xi) - f(x)]

+ P2(t) [f(zi+1) - f(x)]

+ P3(t) [f(xi+1) - f(x)]
+ hi s'(xi) P4(t)

+ hi s'(xi+1) P5(t).
Each of the first three terms on the right hand side

can be bounded in absolute value by w(f,h). We also must

bound the last two terms. For equally spaced knots h = hi

= hi-1; equation (4.1.4) reduces to

(4.4.3) -h s'(xi-1) + 8h s'(xi) - h s'(xi+1)

= 16 [f(zi+1) - f(zi)]

- 5 [f(xi+1) - f(xi-1)].

Assume s'(xi) is maximal in absolute value. Then

|6h s'(xi)| <= 16 |f(zi+1) - f(zi)|

+ 5 |f(xi+1) - f(xi)|

+ 5 |f(xi) - f(xi-1)|
<= 26 w(f,h),
and hence

(4.4.4) |h s'(xi)| <= (13/3) w(f,h).
Combining (4.4.2) and (4.4.4), we have

(4.4.5) |s(x) - f(x)| <= { |P1(t)| + |P2(t)| + |P3(t)|

+ (13/3)|P4(t)| + (13/3)|P5(t)| } w(f,h).
For 0 <= t <= 1/2,

(4.4.6) |P1(t)| + |P2(t)| + |P3(t)|

+ (13/3)|P4(t)| + (13/3)|P5(t)|

= P1(t) + P2(t) - P3(t) +
(13/3)P4(t) + (13/3)P5(t)
= 1 + (13/3)t - 3t^2 - (58/3)t^3 + 16t^4.

We have shown the theorem for 0 <= t <= 1/2. The

argument for 1/2 <= t <= 1 is symmetric.
Proof of Theorem 4.3 We are considering now the case in

which the knots are no longer assumed to be equally spaced. We assume

that the ratio of the longest subinterval to the shortest

is m. Equation (4.4.2) still applies. Again we

choose i so that Is'(xi)l is maximal. From (4.1.4) we now
have





(4.4.7) -hi s'(xi-1) + 4(hi+hi-1) s'(xi) - hi-1 s'(xi+1)

= -11 [(hi-1/hi) - (hi/hi-1)] f(xi)
+ 16 [(hi-1/hi) f(zi+1) - (hi/hi-1) f(zi)]

- 5 [(hi-1/hi) f(xi+1) - (hi/hi-1) f(xi-1)]
= 11 [hi/hi-1] [f(xi) - f(zi)]

+ 5 [hi/hi-1] [-f(zi) + f(xi-1)]

+ 11 [hi-1/hi] [f(zi+1) - f(xi)]
+ 5 [hi-1/hi] [f(zi+1) - f(xi+1)].
Then

3 (hi + hi-1) |s'(xi)| <= 16 (hi/hi-1) w(f, hi-1/2)
+ 16 (hi-1/hi) w(f, hi/2)
and

|s'(xi)| <= (16/3) [hi/hi-1 + hi-1/hi] w(f,h) / (hi + hi-1),

where h = max{0<=i<=k-1} hi. Then for any given j, 0 <= j <= k - 1, and

m = max{hi} / min{hi}, i = 0, 1, ..., k - 1, we have

(4.4.8) max{ |hj s'(xj)|, |hj s'(xj+1)| }
<= (16/3) h (m + 1/m) w(f,h) / (2 min{hi})
<= (8/3) (m^2 + m) w(f,h).
Substituting (4.4.8) into (4.4.2) yields the result
of the theorem.





CHAPTER FIVE
IMPROVED ERROR BOUNDS FOR THE PARABOLIC SPLINE


Introduction and Statement of Theorems

The quartic splines of Chapter Four share and improve

many of the properties of the complete cubic spline. To

insure a good approximation to a given continuous

function, we must make the largest subinterval of a

partition small. Unfortunately, we must also pose some

additional restrictions on the partition. For instance,

in Theorem 4.3, the norm of the error depends not only on

the length h of the largest subinterval but also on the

ratio m of the largest to smallest length subinterval.

Similar additional restrictions must be made for the cubic

spline.

In this chapter, we will discuss a spline operator

for which the norm of the approximation error goes to zero

with the length of the largest subinterval, for any par-

tition and any continuous periodic function. This spline

is the piecewise parabolic spline introduced by Marsden

and discussed in Chapter One. Its properties are

summarized in Equations (1.6.1) to (1.6.7).

As Marsden points out, many of the bounds he gives

can be sharpened. The main result of this chapter will be

to accomplish this sharpening. While many of the bounds





given here may still not be exact, at least one of them

is, and in fact is even pointwise exact. In other cases we

can reduce the known bounds by a factor of more than two.

The results given here thus enable one to compare the

error of the Marsden spline to the error of other spline

interpolation processes. Specifically, future work on the

cubic spline interpolant should shed light on the validity

of Marsden's conjecture that the parabolic spline offers

better approximation than the cubic spline when functions

of the classes C(1), C(2), and C(3) are considered.

We first recapitulate the properties of the parabolic

spline. Let

f ∈ C[a,b], f(a) = f(b),

||f|| = sup { |f(x)| : a <= x <= b },

such that f is extended periodically with period b - a.

A function s(x) is defined to be a periodic quadratic

spline interpolant associated with f and a partition

{xi}, i = 0, ..., k, if



a) s(x) is a quadratic expression on each (xi-1, xi);
b) s(x) ∈ C(1)[a,b];

c) s(a) = s(b), s'(a) = s'(b);

d) s(zi) = f(zi), i = 1, 2, ..., k,

where zi+1 = (xi+1 + xi)/2.

The following theorem is due to Marsden [1974] and

was given in Chapter One as Theorem 1.13.





Theorem Let {xi}, i = 0, ..., k, be a partition of [a,b], f(x) be a

continuous function of period b - a, and s(x) be the

periodic quadratic spline interpolant associated with f

and {xi}. Then

(5.1.2) ||s|| <= 2 ||f||;

||ei|| <= 2 w(f,h/2);

||e|| <= 3 w(f,h/2)

(where si = s(xi) and ei = f(xi) - s(xi)).

The constant 2 which appears in the first of the above

equations can not, in general, be decreased.

For continuous functions to be "well-approximated" by

the spline s, Equations (5.1.2) show that the only

requirement for the partition is that the length h of the

largest subinterval be small enough that the modulus of

continuity of f be small.

Concerning s, we can prove the following results.

These are analogous to the results of Marsden given above

as Theorems 1.14 to 1.16 and improve upon the bounds he

derived.

Theorem 5.1 Let f and f' be continuous functions of period

b - a. Then

(5.1.3) |e(x)| <= C0,1 h ||f'||,

where a0 = 2/3 - √13/6 and C0,1 = 1 + a0 - 8a0^2 + 4a0^3;

C0,1 is approximately 1.0323. The analogous constant from
Marsden was 5/4.
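The value of C0,1 can be checked against a grid maximum of 1 + t - 8t^2 + 4t^3 (the bound derived in (5.2.11) below); an illustrative sketch:

```python
import numpy as np

a0 = 2/3 - np.sqrt(13)/6          # critical point of 1 + t - 8t^2 + 4t^3
C01 = 1 + a0 - 8*a0**2 + 4*a0**3
t = np.linspace(0.0, 0.5, 1_000_001)
grid_max = (1 + t - 8*t**2 + 4*t**3).max()
print(a0, C01, grid_max)          # C01 ≈ 1.0323
```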





Theorem 5.2 Let f, f', and f'' be continuous functions of

period b - a. Then

(5.1.4) ||e|| <= (1/6) h^2 ||f''||,

(5.1.5) ||ei'|| <= (9/16) h ||f''||,

(5.1.6) ||e'|| <= (17/16) h ||f''||.

(Marsden's constant for (5.1.4) was 5/8, while in (5.1.6)

the value was 2.)

If we make the additional assumption that the

partition consists of equally spaced intervals, then we

can improve (5.1.6) to

(5.1.7) ||e'|| <= .7431 h ||f''||.
Theorem 5.3 Let f, f', f'', and f''' be continuous

functions of period b - a. Then

(5.1.9) ||ei|| <= (1/24) h^3 ||f'''||,

(5.1.10) ||ei'|| <= (1/6) h^2 ||f'''||,

(5.1.11) ||e|| <= (1/24) h^3 ||f'''||,

(5.1.12) ||e'|| <= (7/24) h^2 ||f'''||,

and, on [xi-1, xi], ||e''|| <= [h/2 + h^3/(3hi^2)] ||f'''||.
Marsden's analogous constants for (5.1.9) to (5.1.12)

are 1/8, 1/3, 17/96, and 11/24 respectively.

Furthermore, (5.1.9) and (5.1.11) are best possible.

In fact we also have the exact pointwise bound

(5.1.14) |e(x)| <= |E3(t)| h^3 ||f'''||, xi <= x <= xi+1,

where t = (x - xi)/(xi+1 - xi) and

E3(t) = 1/24 - t^2/4 + t^3/6
is the "Euler spline" of degree 3.
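That |E3(t)| peaks at 1/24, matching the constants in (5.1.9) and (5.1.11), is a one-line check (illustrative sketch):

```python
import numpy as np

E3 = lambda t: 1/24 - t**2/4 + t**3/6
t = np.linspace(0.0, 1.0, 1_000_001)
print(np.abs(E3(t)).max())        # 1/24, attained at t = 0 and t = 1
```

E3 decreases monotonically from 1/24 at t = 0 to -1/24 at t = 1, vanishing at t = 1/2.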





The technique used here is the same as that used in

the last chapter. For a given partition subinterval

[xi, xi+1], we write

(5.1.15) |f(j)(x) - s(j)(x)| <= |f(j)(x) - L(j)(x)|

+ |L(j)(x) - s(j)(x)|,

where L is a polynomial interpolant of f. We then

proceed by obtaining pointwise estimates of the quantities

on the right hand side of (5.1.15).


Proof of Theorem 5.1

Given that f and f' are continuous

of period b - a, we will establish the following pointwise

bound for the parabolic continuously differentiable spline

s interpolating function values at subinterval midpoints

zi. Let xi <= x <= xi+1 and let

t = (x - xi)/(xi+1 - xi) and h = max{hi}.

Then for xi <= x <= zi+1 we have

(5.2.1) |f(x) - s(x)| <= h [1 + t - 8t^2 + 4t^3] ||f'||.

For zi+1 <= x <= xi+1, replace t in (5.2.1) by 1 - t.

Equation (5.1.3) follows from (5.2.1).

In order to establish (5.2.1) we write, for

xi <= x <= xi+1,
f(x) - s(x) = f(x) - L(x) + L(x) - s(x),

where L(x) is the parabola matching f(xi), f(zi+1), and

f(xi+1). Then
(5.2.2) |f(x) - s(x)| <= |f(x) - L(x)| + |L(x) - s(x)|.





We can represent L(x) as

(5.2.3) L(x) = f(xi) A0(t) + f(zi+1) A1(t) + f(xi+1) A2(t),
where t = (x - xi)/hi and

A0(t) = 2(1/2 - t)(1 - t),

A1(t) = 4t(1 - t),

A2(t) = 2t(t - 1/2).
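The functions A0, A1, A2 are the cardinal parabolas for the nodes t = 0, 1/2, 1; a quick sketch (illustrative only) confirms the partition-of-unity identity used below:

```python
import numpy as np

A0 = lambda t: 2*(0.5 - t)*(1.0 - t)
A1 = lambda t: 4*t*(1.0 - t)
A2 = lambda t: 2*t*(t - 0.5)

t = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(A0(t) + A1(t) + A2(t) - 1)))   # partition of unity
print(A0(0.0), A1(0.5), A2(1.0))                   # cardinal values, each 1
```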
As L reproduces parabolas exactly and as the restriction

of s(x) to [xi, xi+1] is a parabola, for xi <= x <= xi+1 we
have

(5.2.4) s(x) = s(xi) A0(t) + s(zi+1) A1(t) + s(xi+1) A2(t).

As f(zi+1) = s(zi+1), we have
(5.2.5) |L(x) - s(x)| <= |f(xi) - s(xi)| |A0(t)|

+ |f(xi+1) - s(xi+1)| |A2(t)|

<= { |A0(t)| + |A2(t)| } ||ei||
<= |1 - 2t| ||ei||,

where ||ei|| = max{1<=i<=k} |f(xi) - s(xi)|. We have shown that

(5.2.6) |f(x) - s(x)| <=

|f(x) - L(x)| + |1 - 2t| ||ei||.
It remains to bound |f(x) - L(x)| and ||ei|| in terms of
||f'||.

Marsden showed that

(5.2.7) ||ei|| <= h ||f'||,
where h is the maximum length of a subinterval.

In order to bound |f(x) - L(x)| we resort to the

Peano theorem. Defining g(t) =: f(xi + hi t) = f(x), we have





(5.2.8) f(x) - L(x) = ∫_0^1 K1(t,z) g'(z) dz,
where

K1(t,z) = (t - z)+^0 - A0(t) [0 - z]+^0

- A1(t) [1/2 - z]+^0

- A2(t) [1 - z]+^0
and
(t - z)+^0 = 1 for t >= z,
= 0 for t < z.

In order to verify (5.2.8), one need only expand the right

hand side and integrate by parts. For 0 <= t <= 1/2,

K1(t,z) may be written in the more convenient form

K1(t,z) = -A1(t) - A2(t) + 1 for 0 < z < t,

= -A1(t) - A2(t) for t < z < 1/2,

= -A2(t) for 1/2 < z < 1.

From Equation (5.2.8) it follows that

(5.2.9) |f(x) - L(x)| <= ∫_0^1 |K1(t,z)| dz max{0<=z<=1} |g'(z)|

<= hi ∫_0^1 |K1(t,z)| dz max{xi<=x<=xi+1} |f'(x)|.

Evaluating the integral in (5.2.9), we have

(5.2.10) ∫_0^1 |K1(t,z)| dz = t [1 - A1(t) - A2(t)]

+ (1/2 - t) [A1(t) + A2(t)]

+ (1 - 1/2) [-A2(t)]
= 3t - 8t^2 + 4t^3.





Combining Equations (5.2.7)-(5.2.10) we have, for

0 <= t <= 1/2,

(5.2.11) |f(x) - s(x)| <= h (1 - 2t) ||f'|| +

hi (3t - 8t^2 + 4t^3) ||f'||

<= h [1 + t - 8t^2 + 4t^3] ||f'||,

which is precisely the desired result. The maximum of the

right hand side of (5.2.11) occurs for a0 = 2/3 - √13/6.

Evaluating gives the value C0,1.

Proof of Theorem 5.2

Let f be twice continuously differentiable of period

b a and let a partition

a = x0 < z1 < x1 < ... < xi < zi+1 < xi+1 < ... < xn = b

be given (where zi+1 = (xi + xi+1)/2 for every i). Let s be
continuously differentiable and a parabola on each

interval [xi, xi+1] such that

s(zi+1) = f(zi+1), s(a) = s(b), and s'(a) = s'(b).
Letting t = (x - xi)/hi, we show that, for xi <= x <= zi+1,

(5.3.1) |f(x) - s(x)| <= c0,2(t) ||f''||,

c0,2(t) = h^2 { (1 - 2t)/6 + [t/(3 - 2t) - t^2] },
and for zi+1 <= x <= xi+1,

c0,2(t) = c0,2(1-t).
Furthermore the maximum of c0,2(t) is h^2/6 and occurs
for t = 0 and 1.

As in the proof of Theorem 5.1 we fix i and let L(x)

be the parabola satisfying





L(xi) = f(xi), L(zi+1) = f(zi+1),

L(xi+1) = f(xi+1).
Then, proceeding in the same way as before,

(5.3.2) |f(x) - s(x)| <= |f(x) - L(x)| + ||ei|| |1 - 2t|.

We must bound |f(x) - L(x)| and ||ei||. We first

bound ||ei||. From Marsden [1974], we have

(5.3.3) hi si-1 + 3(hi + hi-1) si + hi-1 si+1

= 4 [hi f(zi) + hi-1 f(zi+1)].

Denoting fi = f(xi) and ei = fi - si, we obtain from

Equation (5.3.3)

(5.3.4) hi ei-1 + 3(hi + hi-1) ei + hi-1 ei+1

= hi fi-1 - 4 hi f(zi) + 3(hi + hi-1) fi

- 4 hi-1 f(zi+1) + hi-1 fi+1

=: B(f).

As B is identically zero for any linear function f,

we have by the Peano Theorem:


(5.3.5) B(f) = ∫_{xi-1}^{xi+1} K(y) f''(y) dy / 1!,
where

K(y) = B[(x - y)+]

= hi-1 (xi+1 - y)+ - 4hi-1 (zi+1 - y)+
+ 3(hi + hi-1) (xi - y)+ - 4hi (zi - y)+

+ hi (xi-1 - y)+
and
(x - y)+ = x - y for x >= y,
= 0 for x < y.





In order to illustrate the symmetry of the kernel

K(y) about xi, we expand in terms of y xi to obtain

K(y) = hi-1 [hi - (y - xi)]
for hi/2 <= y - xi <= hi,

= hi-1 [3(y - xi) - hi]
for 0 <= y - xi <= hi/2,

= hi [-3(y - xi) - hi-1]
for -hi-1/2 <= y - xi <= 0,

= hi [hi-1 + (y - xi)]
for -hi-1 <= y - xi <= -hi-1/2,

where hi = xi+1 - xi and hi-1 = xi - xi-1. As is easily

seen, the sign of K(y) changes at y = xi + hi/3 and

y = xi - hi-1/3.
From (5.3.5), it follows that

(5.3.6) |hi ei-1 + 3(hi + hi-1) ei + hi-1 ei+1|

<= ∫_{xi-1}^{xi+1} |K(y)| dy ||f''||

<= (hi + hi-1) hi hi-1 ||f''|| / 3.
Let i be such that |ei| = ||ei||. Then

(5.3.7) ||ei|| <= (1/6) h^2 ||f''||,

which is the desired bound on ||ei||.
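The kernel bound used in (5.3.6) can be spot-checked numerically; in this illustrative sketch the knots are placed at -h0, 0, h1 (h0, h1 standing for hi-1, hi):

```python
import numpy as np

h0, h1 = 0.6, 1.1          # stand-ins for h_{i-1}, h_i; knots at -h0, 0, h1
p = lambda u: np.maximum(u, 0.0)
K = lambda y: (h0*p(h1 - y) - 4*h0*p(h1/2 - y) + 3*(h1 + h0)*p(0.0 - y)
               - 4*h1*p(-h0/2 - y) + h1*p(-h0 - y))

y = np.linspace(-h0, h1, 2_000_001)
v = np.abs(K(y))
quad = ((v[:-1] + v[1:]) / 2 * np.diff(y)).sum()   # trapezoid rule
print(quad, (h1 + h0) * h1 * h0 / 3)               # the bound in (5.3.6)
```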
We next bound |f(x) - L(x)|, where L is the parabola

matching f at xi, zi+1, and xi+1. L can be uniquely
expressed as

(5.3.8) L(x) = f(xi) A0(t) + f(zi+1) A1(t)
+ f(xi+1) A2(t),


where


A0(t) = 2(1/2 - t)(1 - t),
A1(t) = 4t(1 - t),

A2(t) = 2t(t - 1/2).
Then, defining g(t) =: f(xi + hi t) = f(x), we have

(5.3.9) f(x) - L(x) = ∫_0^1 K2(t,z) g''(z) dz, t = (x - xi)/hi,

where

K2(t,z) = (t - z)+ - A0(t) [0 - z]+

- A1(t) [1/2 - z]+ - A2(t) [1 - z]+.

Equation (5.3.9) can be verified by integrating by parts

to obtain (5.2.8). For 0 <= t <= 1/2, K2 takes the form

(5.3.10) K2(t,z) = z(2t - 1)(1 - t) for z < t, t <= 1/2,
= -t [1 + z(2t - 3)] for t < z <= 1/2,

= -t (2t - 1)(1 - z) for z > 1/2, t <= 1/2.

From (5.3.9), it follows that for 0 <= t <= 1/2

(5.3.11) |f(x) - L(x)| / max{0<=z<=1} |g''(z)| <= ∫_0^1 |K2(t,z)| dz

= ∫_0^t z(1 - 2t)(1 - t) dz

+ ∫_t^{1/(3-2t)} t [1 + z(2t - 3)] dz

+ ∫_{1/(3-2t)}^{1/2} -t [1 + z(2t - 3)] dz

+ ∫_{1/2}^1 t(1 - 2t)(1 - z) dz

= -t^2 + [t/(3 - 2t)].
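The closed form of this integral can be verified by quadrature for a few values of t; an illustrative sketch:

```python
import numpy as np

def K2(t, z):
    # Kernel of (5.3.9); the A0 term vanishes for 0 < z < 1.
    p = lambda u: np.maximum(u, 0.0)
    return p(t - z) - 4*t*(1 - t)*p(0.5 - z) - 2*t*(t - 0.5)*p(1.0 - z)

z = np.linspace(0.0, 1.0, 1_000_001)
for t in (0.1, 0.25, 0.4):
    v = np.abs(K2(t, z))
    quad = ((v[:-1] + v[1:]) / 2 * np.diff(z)).sum()
    print(t, quad, t/(3 - 2*t) - t**2)
```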





Therefore if 0 <= t <= 1/2, we have

(5.3.12) |f(x) - L(x)| <= hi^2 { -t^2 + [t/(3 - 2t)] } max |f''(x)|.
We can now assemble the parts to get the pointwise bound

(5.3.1). Using the bound for |f(x) - L(x)| of (5.3.12)

and the bound of (5.3.7) for ||ei|| in the formula

|f(x) - s(x)| <= |f(x) - L(x)| + ||ei|| |1 - 2t|,
we then have for 0 <= t <= 1/2,

(5.3.13) |f(x) - s(x)| <= { hi^2 [-t^2 + t/(3 - 2t)]
+ (1 - 2t) h^2/6 } ||f''||,

which, as hi <= h, immediately implies (5.3.1). The result

for 1/2 <= t <= 1 follows by symmetry. It remains only to

be shown that the maximum of

c0,2(t) = h^2 { (1 - 2t)/6 + [t/(3 - 2t) - t^2] }
is h^2/6 and occurs for t = 0 and 1. To see this, expand c0,2

at 0 as

c0,2(t) = c0,2(0) + t c0,2'(0) + (t^2/2) c0,2''(y),
where 0 <= t <= 1/2 and 0 < y < t. It is not hard to

verify that the last two terms of the above expression are

nonpositive, and hence the maximum occurs at t = 0. This

completes the proof of Equation (5.1.4).

We next show Equation (5.1.5). From Marsden, we have

the tridiagonal system matching spline derivatives,

(5.3.14) hi-1 si-1' + 3(hi + hi-1) si' + hi si+1'

= 8 [f(zi+1) - f(zi)],

or equivalently,



