A subrecursive programming language for increased verifiability


Material Information

Title:
A subrecursive programming language for increased verifiability
Physical Description:
v, 131 leaves : ill. ; 29 cm.
Language:
English
Creator:
Schahczenski, Celia M
Publication Date:
1990
Subjects

Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1990.
Bibliography:
Includes bibliographical references (leaves 128-130).
Statement of Responsibility:
by Celia M. Schahczenski.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001677230
notis - AHY9133
oclc - 24880976
System ID:
AA00003758:00001

Full Text






A SUBRECURSIVE PROGRAMMING LANGUAGE
FOR INCREASED VERIFIABILITY







By

CELIA M. SCHAHCZENSKI


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY




UNIVERSITY OF FLORIDA


1990











ACKNOWLEDGEMENTS


I would like to take this opportunity to express my deep appreciation to Professor
Rick Smith, my major advisor, for the guidance, support and many hours of enlightening
discussion. I also want to thank my wonderful husband for his unbounded
support in whatever I choose to do and his perseverance in lengthy conversations
which are interesting to him only because they are to me.















TABLE OF CONTENTS


ACKNOWLEDGEMENTS ............................................ ii

ABSTRACT .................................................... iv

CHAPTERS

1 INTRODUCTION .............................................. 1

2 MINIMAL PR PROGRAMMING LANGUAGE L_PR ...................... 8

  2.1 Syntax of L_PR ........................................ 8
  2.2 Semantics of L_PR ..................................... 9
  2.3 L_PR Computes the Class of PR Functions ............... 16
  2.4 Verification of L_PR Programs ......................... 21
  2.5 Soundness of H_PR ..................................... 23
  2.6 Completeness of H_PR .................................. 28

3 BLOCK LANGUAGE ............................................ 41

  3.1 Syntax of L_B ......................................... 41
  3.2 Semantics of L_B ...................................... 41
  3.3 L_B Computes the Class of PR Functions ................ 44
  3.4 Verification of L_B Programs .......................... 45
  3.5 Soundness of H_B ...................................... 46
  3.6 Completeness of H_B ................................... 46

4 PARAMETERLESS PROCEDURES .................................. 49

  4.1 Recursion in a PR Programming Language ................ 49
  4.2 Syntax of L_C ......................................... 55
  4.3 Semantics of L_C ...................................... 59
  4.4 L_C Computes the Class of PR Functions ................ 66
  4.5 Verification of L_C Programs .......................... 73
  4.6 Soundness of H_C ...................................... 81
  4.7 Completeness of H_C ................................... 87

5 PARAMETERS

  5.1 Syntax of L_D
  5.2 Semantics of L_D
  5.3 L_D Computes the Class of PR Functions
  5.4 Verification of L_D Programs
  5.5 Soundness of H_D
  5.6 Completeness of H_D

6 CONCLUSION

APPENDICES

A PRIMITIVE RECURSIVE FUNCTIONS ............................. 119

B PRIMITIVE RECURSIVE ARITHMETIC ............................ 120

C THE Σ1-ITERATION RULE IS NOT SOUND ........................ 122

REFERENCES .................................................. 128

BIOGRAPHICAL SKETCH ......................................... 131











Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy


A SUBRECURSIVE PROGRAMMING LANGUAGE
FOR INCREASED VERIFIABILITY

By

CELIA M. SCHAHCZENSKI

December 1990

Chairman: Dr. Gerhard Ritter
Major Department: Computer and Information Sciences


Removing gotos from computer languages resulted in more understandable, maintainable
and verifiable code. Restricting recursion to primitive recursion in computer
languages such as PASCAL has similar results. This dissertation develops a highly
structured programming language where recursion is limited to primitive recursion.
Programs in the language compute exactly the class of primitive recursive functions.
A Hoare verification system is developed for this language, and it is proved that this
system is sound and complete.
















CHAPTER 1
INTRODUCTION

The term structured programming emerged in the seventies. It became neces-

sary for the advertisement of every software product to sport the word structured

several times, preferably in the product's title. This was not due simply to the

faddishness of a quickly growing field. Structured programming works. While at

first gotoless programming appeared to be taking a tool away from the developer, in

fact it provided a framework in which to reason clearly. Structured programming en-

hances programmability by organizing the programmer's thoughts. It also enhances

the verifiability and maintainability of software.

To be structured has been defined as the ability to understand the meaning of

the whole from the meaning of the parts and a few combining rules. This goes hand

in hand with modularity. Each module is a portion of the program. The meaning of

the whole comes from the meaning of each of the modules and the knowledge of how

to put these modules together.

Computer languages have become increasingly structured. For example object

oriented programming modularizes both the code and data. Looking at the con-

trol structures of sequential programming languages such as PASCAL, the author

will continue this trend towards more structured programming languages. Edmund

Clarke reported surprising results that suggest that PASCAL-like languages are too

flexible [7, 8]. He proved that there is no Hoare verification system for these languages.

A central feature of PASCAL is recursion. The author sees a similarity between gotos

and the unrestricted use of recursion. In this dissertation a powerful programming











language is developed which is more structured than PASCAL. Specifically, recursion

in this language is limited to primitive recursion.

Primitive recursion is powerful yet easy to understand. In essence, primitive
recursion is iteration on a single variable. A function defined by primitive recursion is

defined directly at n equal to zero. For n greater than zero the function is defined

using the result of applying the function at values less than n. The programs of a

structured computer language where recursion is restricted to primitive recursion are

easier to understand, maintain and verify than programs of a structured language

which allows the unrestricted use of recursion.
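This scheme of defining a function at zero and then at n from its value below n can be sketched directly in executable form. The Python snippet below is an illustration added here, not part of the dissertation's formalism; the names prim_rec, add and fact are ours.

```python
def prim_rec(base, step):
    """Build f by primitive recursion:
    f(0, ys) = base(ys);  f(n + 1, ys) = step(n, f(n, ys), ys)."""
    def f(n, *ys):
        acc = base(*ys)
        for i in range(n):              # iteration on the single variable n
            acc = step(i, acc, *ys)
        return acc
    return f

# addition and factorial, each given by a base case and a step function
add = prim_rec(lambda y: y, lambda i, acc, y: acc + 1)
fact = prim_rec(lambda: 1, lambda i, acc: acc * (i + 1))
```

Because the recursion always descends from n to n - 1, evaluation is a bounded loop and always terminates, which is exactly the property the dissertation exploits.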

The terms operational and denotational are used to contrast two methods of

specifying the meaning of programs [34]. Originally program semantics were said to

be operational if they were given in terms of operations on an abstract machine. The

idea was that, although the abstract machine was unrealistic from a practical point

of view, it was so simple that no misunderstanding could occur as to the meaning

of the program. Denotational semantics, sometimes referred to as mathematical

or functional semantics, describes the meaning of programs directly. Some type of

semantic valuation function is used which maps syntactic constructs in the program

to the abstract values which they denote.

The term operational is used more broadly today. Semantics are said to be defined

operationally if they involve describing computational sequences. A problem is that

operational semantics tend to give results of specific computations. Starting with

a particular program, and an input vector, the semantics tell us how to crank the

handle to obtain the result. Such descriptions may allow hidden ambiguities. For

some programs it may be obvious that the operational semantics are well defined.

However, when giving the semantics of a language, all programs that could possibly

be written in the language should be considered.











An operational definition can be made mathematically rigorous. However, there

is still another difference between operational and denotational semantics. In 1977

Joseph Stoy described a difference which still exists today.

The former defines the value of a program in terms of what an ab-
stract machine does with the complete program. Its structure, therefore,
need not correlate with the way the programmer thinks about his pro-
gram when he selects particular syntactic components and combines them
together in particular ways. In the denotational definition, on the other
hand, the value of a program is defined in terms of the values of its sub-
components; it is more easily possible for us to confine our treatment to
any particular part of the program we wish to examine. This may make
it a more satisfactory tool for the language designer and also for those
concerned with validating various techniques for proving the correctness
of particular programs. [34, page 20]


For many programming languages the operational semantics are defined easily

while the denotational semantics are not. In general, denotational semantics have

not been defined for programming languages including recursion. The author suggests

that this is a result of current programming languages being too flexible.

Denotational semantics exist for the programming language presented here. Every

program can be translated into a primitive recursive function. Furthermore, since

there is a term in Primitive Recursive Arithmetic, PRA, for every primitive recursive

function, there is a PRA term which describes each program in this programming

language. There is also a PRA axiom which shows how that term was built. The

class of primitive recursive functions is defined in Appendix A. The theory of PRA

is presented in Appendix B.

This research is motivated by theoretical issues as well. In his seminal paper

"An axiomatic approach to computer programming" C.A.R. Hoare [20] introduced a

method of capturing the meaning of program constructs. This method may be used

to define a programming language or to verify programs relative to given pre and

postconditions. Hoare statements are triples of the form {P}S{Q} where P and Q










are formulas in a first order assertion language and S is a program segment. The

statement {P}S{Q} is true if, whenever P holds for the initial values of S and S is

executed, either S diverges or Q holds for the final values of S.
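The definition can be illustrated by exhaustive testing over a small finite domain. The sketch below is our illustration, not a proof method; the two-variable state shape and the bound are arbitrary choices, and since the programs studied in this dissertation always halt, the divergence clause never applies here.

```python
def holds(P, S, Q, bound=8):
    """Check {P} S {Q} by running S from every small state satisfying P
    and testing Q on the final state (illustration only, not a proof)."""
    for x in range(bound):
        for y in range(bound):
            state = {"x": x, "y": y}
            if P(state):
                if not Q(S(dict(state))):
                    return False
    return True

def S(s):                               # the program segment  x := x + 1
    s["x"] = s["x"] + 1
    return s

# {x <= y} x := x + 1 {x <= y + 1} holds; {true} x := x + 1 {x <= y} fails
true_triple = holds(lambda s: s["x"] <= s["y"], S,
                    lambda s: s["x"] <= s["y"] + 1)
false_triple = holds(lambda s: True, S, lambda s: s["x"] <= s["y"])
```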

Verification systems should be sound and complete. A system is sound if all

statements which are provable in the system are true. Completeness implies that

all true statements are provable. When a verification system is sound and complete

the notions of provability and truth are equivalent. This allows investigators to

manipulate syntactic proofs knowing that the results will be true, and conversely, to

reason semantically knowing that a syntactic proof can be found.

A Hoare verification system consists of a set of axioms and rules, augmented

by a theory. Hoare verification systems have been proposed for many programming

languages. Apt [2] gives an excellent overview of these systems. These systems

have axioms and rules which capture the meaning of each construct in the language;

however, they are not sound and complete in the usual sense. Related to this is the

possibility that the assertion language is not able to express all necessary pre- and

postconditions. These problems will be discussed separately.

Current programming languages are universal in that they are capable of com-

puting the full class of partial recursive functions. Hoare verification systems for

these languages typically use the full theory of the model to augment their Hoare

axioms and rules. This full theory is not even a recursively axiomatizable theory. In

addition this usage obliterates the distinction between provability and truth. One

can no longer talk about soundness and completeness in the usual sense because syn-

tactic proofs depend on a particular model. Given a theory T, write =T {P}S{Q}

if for any model M of T, M 1= {P}S{Q}. A Hoare verification system, with a set

H of axioms and rules, should have a first order theory T to augment H so that a










soundness and completeness theorem would read


⊢_{H,T} {P}S{Q}  ⟺  ⊨_T {P}S{Q}.

Instead the soundness and completeness theorems for universal programming lan-
guages read


⊢_{H,Theory(M)} {P}S{Q}  ⟺  M ⊨ {P}S{Q}.

Completeness in this sense is referred to as relative completeness since the theory is
chosen relative to a particular model.

The assertion theory may not be strong enough to express all necessary assertions.

This gives rise to the second problem. Even the relative soundness and completeness
theorem cannot be proven for these systems. There are models for which {P}S{Q}

is true but not provable with any set of Hoare axioms and rules augmented by the
full theory of that model. A model of Presburger Arithmetic provides such an exam-
ple [36]. Even though assertions P and Q may be expressed in Presburger Arithmetic,

the intermediate assertions necessary to prove {P}S{Q} may not. This results from

the inability of Presburger Arithmetic to express multiplication. Given a programming
language PL and assertion language L, a model is expressive relative to PL and
L if a strongest postcondition Q can be expressed in L for each assertion P and program S [12].

Soundness and completeness theorems for universal programming languages read

For all expressive models M

⊢_{H,Theory(M)} {P}S{Q}  ⟺  M ⊨ {P}S{Q}.

The above problems do not interfere with the search for Hoare axioms and rules

that capture the meaning of various programming constructs. Bergstra and Tucker

show that models of PA are expressive for a weak WHILE language [4]. However,











the following question arises. Can a truly sound and complete verification system be

developed for a powerful language which is based on a particular theory?

Subrecursive programming languages are languages whose programs compute only

a subset of the class of partial recursive functions. Since there is no reasonable

theory which captures the class of partial recursive functions, a language based on a

particular theory would need to be subrecursive. There are various classes of functions

and corresponding theories. The class of primitive recursive, PR, functions and its

theory, Primitive Recursive Arithmetic, PRA, were chosen for this research. PRA

is an attractive theory because all practically computable functions are primitive

recursive, the axioms and rules of PRA are elegant, and primitive recursion itself is

easy to understand.

This dissertation shows that a truly sound and complete verification system can

be developed for a computer language based on PRA. In Chapter 2 a minimal pro-

gramming language and its verification system is presented. The remaining chapters

extend this language into a PASCAL-like language which computes exactly the class

of PR functions. The result is a powerful programming language where recursions

are cleanly nested. Additional advantages of the system presented are that proofs in

the verification system are recursively enumerable and all programs halt.

At this stage the above advantages may appear more theoretical than practical.

They do, however, give compelling evidence that the restrictions on programming lan-

guages presented in this dissertation are legitimate and may lead to a more verifiable

programming language.

The choice of what computer language constructs to add to the minimal PR

programming language is motivated by Clarke [7]. Clarke proved that there is no rel-

atively complete Hoare system for a language containing internal procedures, global

variables, static scope, procedures as parameters and recursion. In Chapter 3 the











minimal PR programming language is extended to include the declaration of tempo-

rary variables. In the original PR programming language, work variables had to be

treated as input variables. In Chapter 4 recursive parameterless procedures are added

to the language. In this language variables are global and static scope is assumed.

Fairly severe restrictions are made so that this language does not lead outside of the

class of PR functions. These restrictions are what make the language so highly struc-

tured. Variable and procedure parameters are added to the language in Chapter 5.

The reference chain of a procedure call is a list of those procedures which must be

understood in order to understand this call. Olderog [29] showed that Clarke's in-

completeness result hinges on the possibility of a program in the language containing

a call with an unbounded reference chain. For this reason the language in Chapter 5

is restricted so that reference chains are bounded.

Each language is reported in the same format. After the syntax and semantics of

the language are defined, it is shown that programs in the language compute exactly

the class of primitive recursive functions. A Hoare verification system is presented

and in the final sections it is proved that this system is sound and complete.















CHAPTER 2
MINIMAL PR PROGRAMMING LANGUAGE L_PR


2.1 Syntax of L_PR

The tokens of L_PR programs include an infinite set, VI, of variable identifiers,

or simply variables. The vector x̄_n refers to a list x_1, ..., x_n of variables. Additionally,

there is the constant 0, the successor operator, s, and the special tokens
":=", ";", "loop" and "end".

An expression language is used to specify the expressions forming the right side

of assignment statements and expressions controlling loops. The set of expressions

is defined as the closure of 0 and the variables under the successor operator. The more natural

notation, x + 1, is frequently used instead of s(x).

The set of program segments is defined in Backus-Naur form, for variable identifier

x and expression e, as follows:

S ::= x := e | S1; S2 | loop e; S1 end.

The variables which appear in S and e are denoted var(S) and var(e), respectively.

In subsequent languages discussed variables may be bound. Variables which are

not bound in S are said to be free with respect to S, and are denoted free(S). In

this section free(S) = var(S). Nevertheless the terms free(S) and var(S) are not

interchangeable because this section serves as a basis for future languages where

free(S) ≠ var(S). The free variables of program segment S will also be referred to as

the active variables of S. In the program segment loop e; S1 end it is required that

var(e) ∩ free(S1) = ∅.
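The grammar is small enough to transcribe directly. The following Python sketch is our illustration (the class and function names are ad hoc); it represents the three statement forms and checks the loop restriction that var(e) and free(S1) are disjoint. Since free(S) = var(S) in this section, free is computed as var.

```python
from dataclasses import dataclass
from typing import Union

# expressions: the closure of 0 and variables under the successor s(.)
@dataclass
class Zero: pass

@dataclass
class Var: name: str

@dataclass
class Succ: arg: "Expr"

Expr = Union[Zero, Var, Succ]

# program segments:  S ::= x := e | S1; S2 | loop e; S1 end
@dataclass
class Assign: var: str; expr: Expr

@dataclass
class Seq: first: "Stmt"; second: "Stmt"

@dataclass
class Loop: bound: Expr; body: "Stmt"

Stmt = Union[Assign, Seq, Loop]

def var_e(e):
    if isinstance(e, Var):
        return {e.name}
    return var_e(e.arg) if isinstance(e, Succ) else set()

def var_s(S):                       # in this section free(S) = var(S)
    if isinstance(S, Assign):
        return {S.var} | var_e(S.expr)
    if isinstance(S, Seq):
        return var_s(S.first) | var_s(S.second)
    return var_e(S.bound) | var_s(S.body)

def well_formed(S):                 # var(e) and free(S1) disjoint in loops
    if isinstance(S, Seq):
        return well_formed(S.first) and well_formed(S.second)
    if isinstance(S, Loop):
        return not (var_e(S.bound) & var_s(S.body)) and well_formed(S.body)
    return True
```

For example, loop y; x := s(x) end is well formed, while loop x; x := s(x) end violates the restriction because x controls the loop and is assigned in its body.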











A program π in this primitive recursive programming language is a program

segment S with the free variables of S serving as π's input variables and a special

variable from free(S) serving as π's output variable. Thus π is given by the pair

(S, x) where x ∈ free(S).


2.2 Semantics of L_PR

The semantics of L_PR programs are given in the style of Olderog [29]. First

some preliminary concepts are presented. An assertion language is used to specify

predicates describing a program's behavior. This language is an extension of the

expression language and is a first-order language in which PRA can be expressed.

The formulas in this language are defined in the usual way. The set of variables in

formula P is denoted var(P) and the set of free variables is denoted free(P).

The meaning of the expressions and formulas of the assertion language depend

on the interpretation and the values of the free variables. States assign values to

variables. A program's state is finite. It is assumed, however, that whenever reference

is made to a variable, the state will have a value for that variable. Therefore the state

can be seen as infinite. This is similar to seeing the Turing Machine tape as infinite to

the right. It is assumed that a move to the right never takes the read/write head off

the end of the tape. Thus the tape appears infinite to the right. For any terminating

program, however, the tape is finite. While it simplifies the semantics to view states

as infinite, they must be representable in each model of PRA. Therefore they must

be finite so they can be encoded.

Assume an interpretation I of the language of PRA with domain D. View the
state s as a totally defined mapping s : VI → D. Given an interpretation I and
a state s, the evaluation of expression e, denoted I(e)(s), and the truth value of a
formula P, denoted I(P)(s), are defined in a standard way. Write ⊨_{I,s} P if I(P)(s)
is true. Write ⊨_I P if ⊨_{I,s} P is true for every state s. For theory T write ⊨_T P if
⊨_I P holds for every interpretation I of T. The set of all states is denoted St. For a
state s let s{d/x} denote the state resulting from replacing the value associated with
variable identifier x by domain value d. That is, s{d/x} denotes the state s' where

s'(x) = d and s'(y) = s(y) for y ≠ x.

For a set of variables X ⊆ VI the restriction of s to X is denoted s⌈X. St_I(P) denotes
the set of all states satisfying P, i.e. St_I(P) = {s such that ⊨_{I,s} P}.
The concept of substitution is developed next. Substitutions can occur in a variety
of situations. The terms general substitution and substitution will be defined. For
expressions e_i and variable identifiers x_i, all x_i's distinct, let ρ = [e_1, ..., e_n/x_1, ..., x_n],
or equivalently ρ = [ē_n/x̄_n], denote the mapping

{(x_1, e_1), ..., (x_n, e_n)} ∪ {(y, y) : y ∈ VI and y ≠ x_i, 1 ≤ i ≤ n}.

A general substitution ρ is a mapping where the replacement terms are expressions.
General substitutions on formulas, Pρ, are defined as usual. Recall that bound
variables in P have to be renamed to avoid clashes with inserted variables.

In many contexts variables may only be replaced by other variables. Furthermore,
to avoid more than one variable identifier referring to the same variable, the variables
used as replacement variables must be distinct. Thus a general substitution ρ =
[ē_n/x̄_n] is called a substitution on Y if each e_i is a variable identifier and for all
u, u' ∈ Y, ρ(u) = ρ(u') → u = u'. That is, a substitution on Y must be injective on
Y.

Let ρ be a substitution on X. Define ρ on a state s by

sρ(x) = s(ρ(x)) for x ∈ X.










General substitutions are not defined on states. The following two lemmas link
general substitutions on formulas to replacements on states.

Lemma 1 (Substitution and replacement on terms.) For a term τ

I(τ[e/x])(s) = I(τ)(s{I(e)(s)/x}).

Proof: Prove this by induction on the definition of the term τ. □

Lemma 2 (Substitution and replacement on formulas.) For a formula P

I(P[e/x])(s) ⟺ I(P)(s{I(e)(s)/x}).

Proof: Prove this by induction on the definition of a formula P. □

Corollary 3 For a formula P

s ∈ St_I(P[e/x]) ⟺ s{I(e)(s)/x} ∈ St_I(P).

Corollary 4 For a formula P and y ∉ free(P)

s ∈ St_I(P) ⟺ s{I(x)(s)/y} ∈ St_I(P[y/x]).

Proof:

s ∈ St_I(P) ⟺ s ∈ St_I((P[y/x])[x/y])        y ∉ free(P)
            ⟺ s{I(x)(s)/y} ∈ St_I(P[y/x])    Corollary 3

□
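Lemma 2 can be illustrated concretely by checking the equivalence over a small finite domain. The Python sketch below is our illustration, not a proof: it takes P to be x ≤ y and e to be y + 1, and compares I(P[e/x])(s) with I(P)(s{I(e)(s)/x}) on all small states.

```python
def replace(s, x, d):                   # s{d/x}: replace x's value by d
    s2 = dict(s)
    s2[x] = d
    return s2

# take P to be x <= y and e to be y + 1; then P[e/x] is y + 1 <= y
P = lambda s: s["x"] <= s["y"]
e = lambda s: s["y"] + 1
P_sub = lambda s: e(s) <= s["y"]        # P with e substituted for x

# compare I(P[e/x])(s) with I(P)(s{I(e)(s)/x}) on all small states
ok = all(P_sub(s) == P(replace(s, "x", e(s)))
         for s in ({"x": a, "y": b} for a in range(6) for b in range(6)))
```

Here both sides are false at every state, since y + 1 ≤ y never holds; the point is only that the two evaluations agree state by state.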










Appendix A describes the class of PR functions on the natural numbers. For a
model of PRA with domain D call a function f : D → D PR if there is a term τ in
the language of PRA such that ∀x ∈ D (f(x) = τ(x)). To show the existence of such
a term, show that when D is the set of natural numbers, N, the function f : N → N is
PR.

For I an interpretation on N and I(e)(s) = n, let f^{I(e)(s)}(s) denote the nth
composition of f applied to s. That is, f^0 = g where ∀x(g(x) = x) and f^{n+1} = f ∘ f^n.
The semantics fs of program segment S are given as functions between states as
follows.
S X := e
fs(s) = assign(x, e)(s) = s {(e)(s)/x}
S =S; S2
fs(s) = comp(fs,, fs,)(s) = fs o fs, (s)
S loop e; S, end
= I(e)(s)
fs(s) = loopn(e, fs,)(s) = fe)(s)s

The functions compr (fs,, fs,) and loop1(e, fs,) will frequently be written compg(S1, S2)
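These three clauses translate directly into a small interpreter. The Python sketch below is our illustration; representing states as dictionaries that default missing variables to 0 is our rendering of the "seen as infinite" convention above. Note that the loop bound I(e)(s) is computed once, on entry, which is safe precisely because var(e) ∩ free(S1) = ∅.

```python
def eval_e(e, s):
    # expressions are 0, a variable, or s(e1); missing variables read as 0
    if e[0] == "zero":
        return 0
    if e[0] == "var":
        return s.get(e[1], 0)
    return eval_e(e[1], s) + 1          # ("succ", e1)

def f(S, s):
    if S[0] == "assign":                # s{I(e)(s)/x}
        s2 = dict(s)
        s2[S[1]] = eval_e(S[2], s)
        return s2
    if S[0] == "seq":                   # f_S2 composed with f_S1
        return f(S[2], f(S[1], s))
    n = eval_e(S[1], s)                 # loop: I(e)(s) computed on entry
    for _ in range(n):
        s = f(S[2], s)
    return s

# x := 0; loop y; x := s(x) end   -- computes x = y
prog = ("seq", ("assign", "x", ("zero",)),
               ("loop", ("var", "y"),
                        ("assign", "x", ("succ", ("var", "x")))))
```

Every program denotes a total function on states: there is no construct whose evaluation can fail to terminate.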
and loop1(e, Si), respectively.
Let X be a finite subset of VI and f be a totally defined state function. Then f
is called a program function on X if f is PR and the following properties hold:

1. If f(s) = s' then s⌈(VI \ X) = s'⌈(VI \ X). This is the stability property with
respect to the variables VI \ X.

2. If s1⌈X = s2⌈X then f(s1)⌈X = f(s2)⌈X. This is the aloofness property with
respect to the variables VI \ X.

It will be shown that the state function f_S is a program function on free(S).

Let ρ be a substitution on X and f be a state function on X. Define ρ on f as
follows.

fρ(s)(w) = f(sρ)(ρ⁻¹(w)) if w ∈ ρ(X), and fρ(s)(w) = s(w) otherwise.










It is shown that for a program segment S, syntactic and semantic substitutions
correspond to each other as expected.

Lemma 5 (Pre-Substitution Lemma) For an expression e and a substitution ρ

I(eρ)(s) = I(e)(sρ).

Proof: Prove the lemma by induction on the expression e. □

Lemma 6 (Substitution Lemma) For a program segment S and a substitution ρ which
is injective on free(S)

f_{Sρ}(s) = (f_S)ρ(s).

Proof: Prove the lemma by induction on the program segment S.

S ≡ x_i := e

f_{Sρ}(s)(x) = assign_I(ρ(x_i), eρ)(s)(x)
            = (s{I(eρ)(s)/ρ(x_i)})(x)
            = (s{I(e)(sρ)/ρ(x_i)})(x)
            = I(e)(sρ) if x = ρ(x_i);  s(x) if x ≠ ρ(x_i)
            = I(e)(sρ) if x = ρ(x_i);  sρ(ρ⁻¹(x)) if x ≠ ρ(x_i)
            = sρ{I(e)(sρ)/x_i}(ρ⁻¹(x))
            = assign_I(x_i, e)(sρ)(ρ⁻¹(x))
            = (assign_I(x_i, e))ρ(s)(x)

S ≡ S1; S2

The proof is straightforward.

S ≡ loop e; S1 end

First show (f_{S1})ρ^n(s)(x) = f_{S1}^n(sρ)(ρ⁻¹(x)) by induction on n. Use this to prove
the following.

f_{Sρ}(s)(x) = loop_I(eρ, f_{S1ρ})(s)(x)
            = f_{S1ρ}^{I(eρ)(s)}(s)(x)
            = f_{S1ρ}^{I(e)(sρ)}(s)(x)
            = (f_{S1})ρ^{I(e)(sρ)}(s)(x)
            = f_{S1}^{I(e)(sρ)}(sρ)(ρ⁻¹(x))
            = loop_I(e, f_{S1})(sρ)(ρ⁻¹(x))
            = (loop_I(e, f_{S1}))ρ(s)(x)

□
Lemma 7 For program segment S, f_S is a program function on free(S).

Proof: It is shown that f_S is PR, for I an interpretation on N, in the following section.
Let free(S) = X.

To show f_S is stable with respect to X, prove that for x ∈ VI \ X, s(x) = f_S(s)(x).
Since programs are finite and there are arbitrarily many variables, there is a variable
y ∈ VI \ X where for any state s, s(y) = f_S(s)(y). Let ρ = [x/y].

s(x) = sρ(y)
     = f_S(sρ)(y)          choice of y
     = (f_S)ρ(s)(x)
     = f_{Sρ}(s)(x)        Substitution Lemma
     = f_S(s)(x)           x and y are inactive

To show f_S is aloof with respect to X = {x_1, ..., x_k}, assume s1⌈X = s2⌈X and
x_i ∈ X. Let Y = {y_1, ..., y_k} where X ∩ Y = ∅ and ρ = [x̄_k/ȳ_k].

f_S(s1)(x_i) = f_S(s1ρ)(y_i)
            = (f_S)ρ(s1)(x_i)
            = f_{Sρ}(s1)(x_i)      Substitution Lemma
            = s1(x_i)              x_i inactive in Sρ
            = s2(x_i)              assumption
            = f_{Sρ}(s2)(x_i)      x_i inactive in Sρ
            = (f_S)ρ(s2)(x_i)      Substitution Lemma
            = f_S(s2ρ)(y_i)
            = f_S(s2)(x_i)

□


The following properties of program functions do not depend on the syntax of

L_PR and are important in subsequent proofs. Let f(P) ⊆ Q serve as an abbreviation
for f(St_I(P)) ⊆ St_I(Q).

Lemma 8 Let f and g be program functions on X. Then

1. assign_I(x, e)(P) ⊆ Q ⟺ ⊨_I P → Q[e/x].

2. comp_I(f, g)(P) ⊆ Q ⟺ ∃Y ⊆ St such that f(P) ⊆ Y and g(Y) ⊆ Q.

3. loop_I(e, f)(P[0/x]) ⊆ P[e/x] ⇐ f(P[y/x] ∧ 0 ≤ y < e) ⊆ P[s(y)/x], where
x ∉ var(e) ∪ X, y ∉ var(e, P) ∪ X, var(e) ∩ X = ∅ and P is a bounded formula.









Proof:

1.

assign_I(x, e)(P) ⊆ Q
⟺ assign_I(x, e)(St_I(P)) ⊆ St_I(Q)
⟺ ∀s (s ∈ St_I(P) → assign_I(x, e)(s) ∈ St_I(Q))
⟺ ∀s (s ∈ St_I(P) → s{I(e)(s)/x} ∈ St_I(Q))
⟺ ∀s (I(P)(s) → I(Q)(s{I(e)(s)/x}))
⟺ ∀s (I(P)(s) → I(Q[e/x])(s))
⟺ ∀s (I(P → Q[e/x])(s))
⟺ ⊨_I P → Q[e/x]

2.

comp_I(f, g)(P) ⊆ Q
⟺ g ∘ f(P) ⊆ Q
⟺ ∃Y ⊆ St such that f(P) ⊆ Y and g(Y) ⊆ Q
(Remember that f is a totally defined function.)

3. Let x, y ∉ X, y ∉ var(P) and t be an arbitrary state. Prove the following by
induction on d.

If ∀y < d, ⊨_{I,t} P[y/x] implies ⊨_{I,f(t)} P[s(y)/x],
then ⊨_{I,t} P[0/x] implies ⊨_{I,loop_I(d,f)(t)} P[d/x].

For d = 0, ⊨_{I,t} P[0/x] implies ⊨_{I,loop_I(0,f)(t)} P[0/x]. Suppose d > 0 and
∀y < d, ⊨_{I,t} P[y/x] implies ⊨_{I,f(t)} P[s(y)/x]. By the inductive hypothesis,
⊨_{I,t} P[0/x] implies ⊨_{I,loop_I(d-1,f)(t)} P[d-1/x]. This is equivalent
to ⊨_{I,f^{d-1}(t)} P[s^{d-1}(0)/x], which implies ⊨_{I,f^d(t)} P[s^d(0)/x].
This is equivalent to ⊨_{I,loop_I(d,f)(t)} P[d/x].

For I(e)(s) = d, x, y ∉ var(e), var(e) ∩ X = ∅ and P a bounded formula, the
above uses bounded induction to show:

If ⊨_{I,t} P[y/x] ∧ 0 ≤ y < e implies ⊨_{I,f(t)} P[s(y)/x],
then ⊨_{I,t} P[0/x] implies ⊨_{I,loop_I(e,f)(t)} P[e/x].

□


Given an L_PR program π = (S, x_j) where free(S) = {x_1, ..., x_k}, the meaning of

π is the program function f_S interpreted as follows: given f_S(s) = s', then for inputs

s(x_1), ..., s(x_k), π outputs s'(x_j).


2.3 L_PR Computes the Class of PR Functions

A series of classes of PR functions will be defined, each class in terms of the

previous class. The final set represents the functions computed by L_PR programs.

This class is equivalent to the PR functions.

Throughout the remainder of this chapter underlined variables will denote hard-coded
values. That is, given a pair (x_i x_i), the underlined x_i is to be replaced by a variable identifier

at definition time. The second x_i in the pair is a regular variable which stands for an

arbitrary domain value.

Assume an interpretation I of PRA with domain D. Let X = {x_1, ..., x_k} be

a finite set of variables, and for each variable x_i let d_i be the domain value where

s(x_i) = d_i. Write s⌈X = ((x_1 d_1) ... (x_k d_k)). If the variable identifiers also come

from D, this list can be coded and decoded within I such that for s⌈X as above, c a

coding function and (x)_{x_i} decoding functions,

c((x_1 d_1) ... (x_k d_k)) = x ⟺ (x)_{x_i} = d_i, 1 ≤ i ≤ k.

Notice that a set of coding and decoding functions is being defined. A different

coding function, and set of decoding functions, is being defined for each set of variable

identifiers X.

Let c(x̄_k d̄_k) serve as an abbreviation for c((x_1 d_1) ... (x_k d_k)). Call x = c(x̄_k d̄_k)

a state code on X = {x_1, ..., x_k} and write var(x) = X. There is a PR predicate










which takes a set of variables Y and a state code x and determines if Y ⊆ var(x).

Cutland [13, page 41] provides an example of such a predicate.

State codes are domain objects which encode a portion of a state. The theory
of PRA is typically defined with a single type of object. Therefore states must be
coded into this type of object. An alternative would be to define PRA to be multi-typed.
That is, define PRA so that its models contain not only base elements but
more complex elements such as sets or sequences as well. The details explode in the
formalization of either method. Encoding states into state codes is a straightforward
approach.

For expression e in an L_PR program define g_e : D → D, relative to I, by recursion
as follows.

e ≡ 0        g_e(x) = 0

e ≡ x_i      g_e(x) = (x)_{x_i}

e ≡ s(e_1)   g_e(x) = g_{e_1}(x) + 1

Notice that g_e is a PR function where g_e(c(s⌈X)) = I(e)(s).

For each variable identifier x_i define the function set_{x_i} such that

set_{x_i}(e, x) = c((x_1 (x)_{x_1}) ... (x_{i-1} (x)_{x_{i-1}}) (x_i g_e(x)) (x_{i+1} (x)_{x_{i+1}}) ... (x_k (x)_{x_k}))
if x_i ∈ var(x), and set_{x_i}(e, x) = x otherwise.

Lemma 9 For expression e, state s and coding function c where var(e) ⊆ X =
{x_1, ..., x_k},

set_{x_i}(e, c(s⌈X)) = c(s{I(e)(s)/x_i}⌈X).

Proof: The proof is a straightforward application of the definitions. □
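A concrete coding makes these definitions tangible. The Python sketch below is our illustration: it uses the Cantor pairing function, whereas the dissertation only requires some PR coding and points to Cutland [13] for one. It encodes the value list d_1, ..., d_k for a fixed variable order as a single number, and implements analogues of g_e and set_{x_i} on such codes (set_var takes the new value directly rather than an expression).

```python
def pair(a, b):                     # Cantor pairing function
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z):                      # inverse of pair
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    b = z - w * (w + 1) // 2
    return w - b, b

def code(values):                   # c: encode (d1, ..., dk), in the fixed
    x = 0                           # variable order x1, ..., xk, as one number
    for d in reversed(values):
        x = pair(d, x) + 1          # +1 distinguishes the empty suffix
    return x

def decode(x, i):                   # (x)_{x_i}: recover the i-th component
    for _ in range(i):
        d, x = unpair(x - 1)
    return d

def g_e(e, x, order):               # g_e relative to the variable order
    if e[0] == "zero":
        return 0
    if e[0] == "var":
        return decode(x, order.index(e[1]) + 1)
    return g_e(e[1], x, order) + 1  # successor case

def set_var(i, v, x, k):            # set_{x_i}: replace component i by v
    ds = [decode(x, j + 1) for j in range(k)]
    ds[i - 1] = v
    return code(ds)
```

Any injective PR coding with PR decoding functions would serve equally well; the pairing function is chosen only because its inverse is easy to write down.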










For program segment S in an L_PR program define g_S : D → D, relative to I, by
recursion as follows.

S ≡ x := e           g_S(x) = set_x(e, x)

S ≡ S1; S2           g_S(x) = g_{S2}(g_{S1}(x))

S ≡ loop e; S1 end   g_S(x) = g_{S1}^{g_e(x)}(x)

Lemma 10 The function g_S is PR and for free(S) = X,

g_S(c(s⌈X)) = c(f_S(s)⌈X).

Proof: Clearly g_S is PR. Using Lemma 9, prove the equality by induction on the
program segment S.

S ≡ x_i := e

g_S(c(s⌈X)) = set_{x_i}(e, c(s⌈X))
            = c(s{I(e)(s)/x_i}⌈X)
            = c(f_S(s)⌈X)

S ≡ S1; S2

The proof is straightforward.

S ≡ loop e; S1 end

First prove g_{S1}^d(c(s⌈X)) = c(f_{S1}^d(s)⌈X) by induction on d. Use this result to
prove the following.

g_S(c(s⌈X)) = g_{S1}^{g_e(c(s⌈X))}(c(s⌈X))
            = g_{S1}^{I(e)(s)}(c(s⌈X))
            = c(f_{S1}^{I(e)(s)}(s)⌈X)
            = c(f_S(s)⌈X)

□










Given program segment S with its meaning function f_S, g_S is a PR function
tightly tied to f_S. While f_S : St → St, g_S : D → D. Given f_S(s) = s' for f_S
defined on variables X, g_S(c(s⌈X)) = c(s'⌈X). Since g_S is PR, any assertion language
in which PRA can be expressed will have a term corresponding to g_S. This is the
central advantage of this system. For any program segment S there will be a term
τ_S in the assertion language which describes the behavior of S. Specifically, given a
domain element x which codes the variable values going into a program, τ_S(x) codes
the values upon exiting that program.

Next define functions which simulate PR program 7r. Given 7r = (S, x) with

free(S) = X define g, : D -* D as follows:


gr(c(s[X)) = (gs(c(s X)))X.
Lemma 11 For π = (S, x_z) with free(S) = X, g_π is a PR function where

    g_π(c(s↾X)) = f_S(s)(x_z).

Proof: Clearly g_π is PR. The equality follows from Lemma 10. □

Finally, given a program π = (S, x_z) with free(S) = X, and given a state s where
s↾X = ((x_1 d_1) ... (x_k d_k)), define h_π : D^k → D as follows:

    h_π(d_1, ..., d_k) = g_π(c(s↾X)).

Lemma 12 The function h_π is a PR function where

    h_π(d_1, ..., d_k) = f_S(s)(x_z).

Proof: Clearly h_π is PR. The equality follows from Lemma 11. □

Theorem 13 The class of functions computed by ℒ_PR programs is the class of PR
functions.











Proof: Lemma 12 guarantees that all functions computed by ℒ_PR programs are PR.
It will be shown that all PR functions are computed by some ℒ_PR program. The
class of PR functions is the closure of the functions λx.0, λx.x+1, and λx̄_k.x_i for
1 ≤ i ≤ k, k ≥ 1 under composition and primitive recursion. The functions λx.0 and
λx.x+1 are computed by the ℒ_PR programs π = (x := 0, x) and π = (x := x + 1, x).
For 1 ≤ i ≤ k, k ≥ 1 the function λx̄_k.x_i is computed by π = (S, x_z) for S as follows.

    S: x_1 := x_1
       ...
       x_k := x_k
       x_z := x_i


Before continuing it will be useful to define a condensed notation for the ℒ_PR
statements that copy the values of n variables to a distinct group of n variables. Let
the ℒ_PR statements

    x_s := x_p
    x_{s+1} := x_{p+1}
    ...
    x_{s+n} := x_{p+n}

be denoted

    x_s, ..., x_{s+n} ← x_p, ..., x_{p+n}

Suppose PR function f is defined by composition as f = h ∘ (g_1, ..., g_m) where
f, g_1, ..., g_m are n-place functions and h is an m-place function. Furthermore say h is
computed by the ℒ_PR program π_h = (H, y_z) where free(H) = Y = {y_1, ..., y_m}, and
each g_i is computed by the program π_{g_i} = (G_i, x_z) where free(G_i) = X = {x_1, ..., x_n}.
Without loss of generality say X, Y and {u_1, ..., u_n} are non-overlapping sets of
variables. Then the ℒ_PR program π_f = (F, y_z) where free(F) = X computes f for
F defined as follows.

    F: u_1, ..., u_n ← x_1, ..., x_n   /* save a copy of F's input values */
       G_1                            /* run G_1 on x̄_n */
       y_1 := x_z                     /* save G_1's output */
       /* Repeat the following 3 lines for i = 2, ..., m */
       x_1, ..., x_n ← u_1, ..., u_n   /* get a fresh copy of x̄_n */
       G_i                            /* run G_i on x̄_n */
       y_i := x_z                     /* save G_i's output */
       /* Finish up */
       H                              /* run H on output of G_1, ..., G_m */
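The shape of this construction can be checked at the meta level. The sketch below mimics F with ordinary Python functions standing in for the programs H and G_i (tuples replace coded states; all names are illustrative only).

```python
def compose_program(h, gs):
    """Build f = h o (g_1, ..., g_m), following the construction of F:
    save the inputs, run each g_i on a fresh copy, run h on the outputs."""
    def f(*xs):
        u = xs                     # u_1,...,u_n <- x_1,...,x_n (save inputs)
        ys = []
        for g in gs:               # for i = 1, ..., m
            xs = u                 # x_1,...,x_n <- u_1,...,u_n (fresh copy)
            ys.append(g(*xs))      # run G_i, save its output in y_i
        return h(*ys)              # run H on the outputs of G_1,...,G_m
    return f

# f(a, b) = (a + b) * (a * b), taking h = multiplication, g_1 = addition,
# g_2 = multiplication (an arbitrary illustrative choice)
f = compose_program(lambda y1, y2: y1 * y2,
                    [lambda a, b: a + b, lambda a, b: a * b])
```

The explicit re-copying from the u's is what lets each G_i see the untouched inputs even though all the programs share the variables x_1, ..., x_n.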

Suppose PR function f is defined by primitive recursion as

    f(0, n̄) = h(n̄)
    f(x+1, n̄) = g(x, n̄, f(x, n̄))

Furthermore say h is computed by the ℒ_PR program π_h = (H, y_z) where free(H) =
Y = {y_1, ..., y_n}, and g is computed by the ℒ_PR program π_g = (G, v) where
free(G) = Y ∪ {y_{n+1}, v}. Without loss of generality say {x}, Y ∪ {y_{n+1}}, {v} and
{u_1, ..., u_{n+1}} are non-overlapping sets of variables. Then the ℒ_PR program
π_f = (F, v) where free(F) = {x} ∪ Y computes f for F as follows.

    F: u_1, ..., u_n ← y_1, ..., y_n   /* save a copy of F's input values */
       H                              /* run H on ȳ_n */
       v := y_z                       /* save f(0, n̄) */
       u_{n+1} := 0                   /* counter, 0 to x */
       loop x;
          y_1, ..., y_{n+1} ← u_1, ..., u_{n+1}   /* get a fresh copy of inputs for G */
          G                           /* run G on (counter, ȳ_n, f(counter, n̄)) */
          u_{n+1} := s(u_{n+1})       /* update counter */
       end

Thus there is a program in ℒ_PR which computes every PR function. □
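The primitive recursion construction has the same flavor: initialize with h, then iterate g under a loop bounded by x. A hypothetical Python rendering of the schema (plain values in place of coded states):

```python
def primrec_program(h, g):
    """Build f with f(0, n) = h(n) and f(x+1, n) = g(x, n, f(x, n)),
    following the shape of the program F: initialize via h, then run
    g inside a loop bounded by x."""
    def f(x, *ns):
        v = h(*ns)                # v := y_z   (holds f(0, n))
        counter = 0               # u_{n+1} := 0  (counter, 0 to x)
        for _ in range(x):        # loop x; ... end
            v = g(counter, *ns, v)   # run G on (counter, n, f(counter, n))
            counter += 1             # u_{n+1} := s(u_{n+1})
        return v
    return f

# Addition by primitive recursion:
#   add(0, n) = n,  add(x+1, n) = s(add(x, n))
add = primrec_program(lambda n: n, lambda c, n, r: r + 1)
```

Because the loop is bounded by the first argument, the schema never leaves the primitive recursive functions; unbounded search is simply inexpressible in this form.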



2.4 Verification of ℒ_PR Programs

A Hoare-type verification system, ℋ_PR, is defined as a set of axioms and rules
augmented by PRA. Proof lines are formulas in the assertion language or Hoare
statements {P}S{Q}. Proof rules are of the form

    l_1, ..., l_n
    -------------
       l_{n+1}

This rule says that if statements l_1, ..., l_n are provable then l_{n+1} is provable. The
axioms and rules of ℋ_PR are as follows.

Assignment Axiom

    {P[e/x]} x := e {P}

Composition Rule

    {P}S_1{R}, {R}S_2{Q}
    --------------------
       {P}S_1; S_2{Q}

Iteration Rule

    {P[y/x] ∧ 0 ≤ y < e} S {P[y+1/x]}
    ---------------------------------
    {P[0/x]} loop e; S end {P[e/x]}

for x ∉ var(e) ∪ free(S) and y ∉ var(e) ∪ free(S) ∪ free(P).

Consequence Rule

    P_1 → P, {P}S{Q}, Q → Q_1
    -------------------------
          {P_1}S{Q_1}
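These rules can be sanity-checked semantically on small finite state spaces. The sketch below is not part of the system; it brute-forces a Hoare triple {P}S{Q} over a tiny domain with a throwaway interpreter (states as dicts, two fixed variables, all assumptions made for illustration only).

```python
def run(S, s):
    """Tiny interpreter: ('assign', var, e), ('seq', S1, S2), ('loop', e, body),
    where e is a Python function of the state."""
    kind = S[0]
    if kind == 'assign':
        _, x, e = S
        t = dict(s); t[x] = e(s); return t
    if kind == 'seq':
        return run(S[2], run(S[1], s))
    if kind == 'loop':                # body runs e(s) times, bound fixed at entry
        _, e, body = S
        for _ in range(e(s)):
            s = run(body, s)
        return s

def holds(P, S, Q, bound=5):
    """Check {P} S {Q} on all states over variables x, y with values < bound."""
    for x in range(bound):
        for y in range(bound):
            s = {'x': x, 'y': y}
            if P(s) and not Q(run(S, s)):
                return False
    return True

# An Iteration Rule instance: {y = 0} loop x; y := y + 1 end {y = x}
S = ('loop', lambda s: s['x'], ('assign', 'y', lambda s: s['y'] + 1))
```

Such a checker can only refute a triple on small states, never prove it, but it is a quick way to catch a mis-stated invariant before attempting a formal ℋ_PR derivation.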
Hoare proofs are frequently developed backwards. That is, one starts with what
is to be proved and works backwards to a list of axioms. Let the following indicate
that Hoare triple A is provable from Hoare triple B using the given rule.

    A
    ↑ Rule
    B

When the Consequence Rule is used additional information is needed to show the
implications. Let the following indicate that {P_1}S{Q_1} is provable from {P}S{Q}
using the Consequence Rule.

    {P_1}S{Q_1}
    a ↑ Consequence Rule ↑ b
    {P}S{Q}

The proofs that ⊢_PRA P_1 → P and ⊢_PRA Q → Q_1 can be given following the
backward proof. If given, these proofs will be labeled a and b, respectively.


2.5 Soundness of ℋ_PR

Verification system ℋ_PR is sound if, for every interpretation I of PRA, whenever
{P}S{Q} is provable in ℋ_PR then {P}S{Q} is true with respect to I. A problem
arises because PRA allows induction only on bounded formulas yet, as it stands, the
Iteration Rule allows induction on arbitrary formulas. This is an unusual case where
the syntax, specifically the Iteration Rule, is stronger than the semantics. Let Σ_n
denote the class of formulas which can be written in the form
∃ȳ_1∀ȳ_2∃ȳ_3 ⋯ ∃ȳ_n φ(ȳ_1, ..., ȳ_n) or ∃ȳ_1∀ȳ_2∃ȳ_3 ⋯ ∀ȳ_n φ(ȳ_1, ..., ȳ_n), where φ
is quantifier free. The Iteration Rule where the loop invariant is a formula from Σ_n
will be referred to as the Σ_n-Iteration Rule.

The relation between T and the Iteration Rule is given in the following lemma.

Lemma 14 For a complete theory T ⊇ PRA and a Hoare system ℋ which includes
the Assignment Axiom, Consequence and Composition Rules

1. Σ_n-Iteration Rule is sound ⟹ T ⊢ Σ_n-induction
2. T ⊢ Σ_{n+1}-induction ⟹ Σ_n-Iteration Rule is sound


Proof:

1. Assume the Σ_n-Iteration Rule is sound and for P ∈ Σ_n, P(0) and ∀x(P(x) →
P(x+1)) are in T. It will be shown that T ⊢ P(a) for a new constant a. Therefore
T ⊢ ∀xP(x), and by the deduction rule T ⊢ (P(0) ∧ ∀x(P(x) → P(x+1))) → ∀xP(x).
All that is left to prove is T ⊢ P(a).








First it will be proven that for i ∉ free(P)

    ⊢_ℋ {P[0/x]} i := 0; loop a; i := i + 1 end {P[a/x]}.

    {P[0/x]} i := 0; loop a; i := i + 1 end {P[a/x]}
    ↑ Consequence Rule
    {(P[0/x] ∧ i = 0)[0/i]} i := 0; loop a; i := i + 1 end {P[a/x] ∧ i = a}
    ↑ Composition Rule
    {(P[0/x] ∧ i = 0)[0/i]} i := 0 {P[0/x] ∧ i = 0}
       Assignment Axiom
    {P[0/x] ∧ i = 0} loop a; i := i + 1 end {P[a/x] ∧ i = a}
    ↑ Iteration Rule
       loop invariant: P[u/x] ∧ i = u
    {P[y/x] ∧ i = y ∧ 0 ≤ y < a} i := i + 1 {P[y+1/x] ∧ i = y + 1}
    a ↑ Consequence Rule
    {(P[y+1/x] ∧ i = y + 1)[i + 1/i]} i := i + 1 {P[y+1/x] ∧ i = y + 1}
       Assignment Axiom

The proof of implication a is as follows.

    P[y/x] ∧ i = y ∧ 0 ≤ y < a
    ⟹ P[y+1/x] ∧ i = y
    ⟹ P[y+1/x] ∧ i + 1 = y + 1
    ⟹ (P[y+1/x] ∧ i = y + 1)[i + 1/i]










It has been assumed that the rules utilized in this proof are sound. Therefore
for any interpretation I of T where i ∉ free(P)

    T ⊨_I {P[0/x]} i := 0; loop a; i := i + 1 end {P[a/x]}.

That is, the following holds in I:

    ∀s(s ∈ St_I(P[0/x]) → comp_I(assign_I(i, 0), loop_I(a, assign_I(i, i + 1)))(s) ∈ St_I(P[a/x])).

It is assumed that T ⊨_I P[0/x], so the implicand holds for any state s. Program
functions are stable and i ∉ free(P), so for any state s, s ∈ St_I(P[a/x]). Therefore
T ⊨_I P[a/x] and since T is complete, T ⊢ P(a).

2. Assume T ⊢ Σ_{n+1}-induction. The proof of the soundness of the Iteration Rule
results from the following for formula P, program function f on X and expression e
where x ∉ var(e) ∪ X, y ∉ var(e, P) ∪ X and var(e) ∩ X = ∅.

    f(P[y/x] ∧ 0 ≤ y < e) ⊆ P[s(y)/x] ⟹ loop_I(e, f)(P[0/x]) ⊆ P[e/x]

The proof of this is the same as the proof of Lemma 8 part 3 with the exception that
for P ∈ Σ_n, Σ_{n+1}-induction is used rather than bounded induction.

□

If one limits oneself to working in the natural numbers ℕ, standard induction
holds. That is, for any formula P

    P(0) ∧ ∀x(P(x) → P(x+1)) → ∀x ∈ ℕ (P(x)).

Theoretically, however, there are models of PRA for which standard induction does
not hold. Therefore it is possible to define a model where utilizing the unbounded
Iteration Rule allows a false conclusion to be proved. Such a model will be non-
standard.











Peano Arithmetic (PA) is a stronger theory than PRA. In addition to the axioms
of PRA, PA contains an induction axiom for every first order formula, whereas PRA
only allows induction on bounded formulas. Thus every model of PA is also a model
of PRA. The problem is that the models of PRA which are not also models of PA
behave in an unexpected way. One such model is developed in Appendix C. In that
appendix it is shown that the structure developed is a model of PRA. An element,
obtained via Ackermann's function, is given which is not in this model. Finally it is
shown that, using the Σ_n-Iteration Rule, it can be incorrectly proven that this
element is in the model.

There are two ways to handle the mismatch between PRA's induction axiom and
ℋ_PR's Iteration Rule. One of the motivations of this research was to develop a
system with a clean Soundness and Completeness Theorem. That is, to develop a
system where the Soundness and Completeness Theorem reads

    ⊨_PRA {P}S{Q} ⟺ ⊢_{ℋ_PR} {P}S{Q}.

To create a sound system which maintains this clean separation between semantics
and syntax, loop invariants must be bounded. Recall that within the natural numbers
full induction holds, so there this restriction can be ignored.

Another way to handle this mismatch is to prove a weaker Soundness Theorem.
This Soundness Theorem would disallow those models of PRA for which full induction
does not hold. That is, the Soundness Theorem would be restricted to models of PA.
This is the approach chosen in this research. Notice that this restriction is only
required for the Soundness Theorem. The Completeness Theorem does not require a
similar restriction.

Theorem 15 (Soundness) For a Hoare triple {P}S{Q}

    ⊢_{ℋ_PR} {P}S{Q} ⟹ ⊨_PA {P}S{Q}.









Proof: Prove this by induction on the proof system ℋ_PR. Let I be an interpretation
of PA.

Assignment Axiom:

⊨_I P[e/x] → P[e/x]. Therefore by Lemma 8, assign_I(x, e)(P[e/x]) ⊆ P. Thus
⊨_I {P[e/x]} x := e {P}.

Composition Rule:

Assume ⊢_{ℋ_PR} {P}S_1; S_2{Q}. In that proof ⊢_{ℋ_PR} {P}S_1{R} and
⊢_{ℋ_PR} {R}S_2{Q}. By the inductive hypothesis ⊨_I {P}S_1{R} and ⊨_I {R}S_2{Q}.
Thus there is a set of states Y where f_{S_1}(P) ⊆ Y and f_{S_2}(Y) ⊆ Q, namely
Y = St_I(R). Therefore by Lemma 8, comp_I(S_1, S_2)(P) ⊆ Q which gives
⊨_I {P}S_1; S_2{Q}.

Iteration Rule:

Assume ⊢_{ℋ_PR} {P[0/x]} loop e; S end {P[e/x]}. In that proof ⊢_{ℋ_PR} {P[y/x] ∧
0 ≤ y < e} S {P[s(y)/x]} for x ∉ var(e) ∪ free(S) and y ∉ var(e) ∪ free(S) ∪
free(P). By the inductive hypothesis ⊨_I {P[y/x] ∧ 0 ≤ y < e} S {P[s(y)/x]}.
Thus f_S(P[y/x] ∧ 0 ≤ y < e) ⊆ P[s(y)/x]. Therefore by Lemma 8,
loop_I(e, S)(P[0/x]) ⊆ P[e/x] which gives ⊨_I {P[0/x]} loop e; S end
{P[e/x]}.

Consequence Rule:

Assume ⊢_{ℋ_PR} {P_1}S{Q_1}. In that proof ⊢_{ℋ_PR} P_1 → P, ⊢_{ℋ_PR} {P}S{Q}
and ⊢_{ℋ_PR} Q → Q_1. Therefore for any interpretation I of PRA ⊨_I P_1 → P
and ⊨_I Q → Q_1. By the inductive hypothesis ⊨_I {P}S{Q}. Equivalently
St_I(P_1) ⊆ St_I(P), f_S(St_I(P)) ⊆ St_I(Q), and St_I(Q) ⊆ St_I(Q_1). Putting
these together yields f_S(St_I(P_1)) ⊆ St_I(Q_1). Thus ⊨_I {P_1}S{Q_1}.

□



2.6 Completeness of ℋ_PR

The verification system is shown to be complete as follows. Given program seg-
ment S and assertion P the existence of a strongest postcondition, SPC, of S and P
is shown. The SPC Theorem shows that there is a SPC Q such that ⊨_I {P}S{Q}
and for any valid Hoare triple ⊨_I {P}S{R}, ⊨_I Q → R. Given program segment S
and assertion P the provability of a SPC Hoare triple is also shown. For any valid
Hoare triple {P}S{R}, ⊢_{ℋ_PR} {P}S{R} is proven by applying the Consequence Rule
to the SPC Hoare triple.

Towards this end it must be possible to translate a Hoare triple into a PRA
formula and vice versa. It will be shown that for I ⊨ PRA and a Hoare triple
{P}S{Q}

    ⊨_I {P}S{Q} ⟺ ⊨_I ∀x(P⁺(x) → Q⁺(g_S(x)))

where P⁺ and Q⁺ are formulas on state codes obtained from P and Q, and g_S is the
PR function corresponding to program segment S, possibly extended to operate on
state codes of a larger set of variables than those in free(S). It is interesting to note
that a syntactic version of the above statement does not hold. It is true that

    ⊢_{ℋ_PR} {P}S{Q} ⟸ ⊢_PRA ∀x(P⁺(x) → Q⁺(g_S(x))).

However the proof that

    ⊢_{ℋ_PR} {P}S{Q} ⟹ ⊢_PRA ∀x(P⁺(x) → Q⁺(g_S(x)))

depends on the soundness of ℋ_PR. Since for unbounded formulas P and Q the It-
eration Rule may not be sound, ⊢_{ℋ_PR} {P}S{Q} may hold while ∀x(P⁺(x) →
Q⁺(g_S(x))) does not hold in all models of PRA. The statement ∀x(P⁺(x) → Q⁺(g_S(x)))
would hold in models of Peano Arithmetic. If it is required that pre- and postcondi-
tions be bounded whenever the Iteration Rule is utilized, a syntactic version of the
statement holds.

First a set of terms and predicates on state codes is defined. Recall the PR
coding and decoding functions for a state s restricted to a set of variables X. Also
recall that for any set of variables X there is a PR predicate which takes a state code
x and determines if X ⊆ var(x). Given a term τ with var(τ) = X = {x_1, ..., x_k}
define the term τ⁺ by

    τ⁺(x) = { τ((x)_{x_1}, ..., (x)_{x_k})   if X ⊆ var(x)
            { undefined                      otherwise.

Given a formula P with free(P) = X = {x_1, ..., x_k} define the predicate P⁺ by

    P⁺(x) holds ⟺ X ⊆ var(x) ∧ P((x)_{x_1}, ..., (x)_{x_k}).

Notice that for all x, P⁺(x↾X) = P⁺(x).

Functions on state codes can be extended to functions on state codes of a larger set
of variables. Let X and Y be sets of variables where X ⊆ Y and X = {x_1, ..., x_k}.
For state code z defined on Y, let z↾X = c((x_1 (z)_{x_1}) ⋯ (x_k (z)_{x_k})). Say g is a
function defined from the state codes of X to a domain value. Define ḡ from g by
ḡ(z) = g(z↾X). Notice that ḡ is defined from the state codes of Y to a domain value.
Say g is a function defined on the state codes of X. Define ḡ from g by

    (ḡ(z))_w = { (g(z↾X))_w   if w ∈ X
               { (z)_w        otherwise.

Notice that ḡ is defined on the state codes of Y. Throughout the remainder of this
work extended functions will not be distinguished from their original functions.

To simplify the notation let v represent the set of variables X = {x_1, ..., x_k}.
That is, let v be a function on {1, ..., k} where v(i) = x_i. For P((x)_{x_1}, ..., (x)_{x_k})
write P((x)_v).









Lemma 16 For formulas P, Q where free(P) = free(Q) = X = {x_1, ..., x_k} and a
total function f defined on the state codes of X,

    ⊢_PRA ∀x̄_k(P(x̄_k) → Q((f(c(x̄_k x̄_k)))_v))
    ⟺
    ⊢_PRA ∀x(P⁺(x) → Q⁺(f(x))).

Proof: The proof is a straightforward application of the definitions. □

Lemma 17 For I ⊨ PRA, a formula P, a state s and c the coding function

    I(P)(s) holds ⟺ P⁺(x) holds for x = c(s↾free(P)).

Proof: The proof is a straightforward application of the definitions. □

Lemma 18 For I ⊨ PRA and Hoare formula {P}S{Q} where free(S) ⊆ free(P) =
free(Q)

    ⊨_I {P}S{Q} ⟺ ⊨_I ∀x(P⁺(x) → Q⁺(g_S(x))).

Proof: Say free(S) ⊆ free(P) = free(Q) = X = {x_1, ..., x_k}.

(⟹) Assume ∀s(s ∈ St_I(P) → f_S(s) ∈ St_I(Q)).

    P⁺(x)
    ⟹ I(P)(s) for any s where x = c(s↾X)
    ⟹ s ∈ St_I(P)
    ⟹ f_S(s) ∈ St_I(Q)
    ⟹ Q(f_S(s)(x_1), ..., f_S(s)(x_k))
    ⟹ Q((c(f_S(s)↾X))_v)
    ⟹ Q((g_S(c(s↾X)))_v)
    ⟹ X ⊆ var(g_S(x)) ∧ Q((g_S(x))_v)
    ⟹ Q⁺(g_S(x))

(⟸) Assume ⊨_I ∀x(P⁺(x) → Q⁺(g_S(x))).

    s ∈ St_I(P)
    ⟹ I(P)(s)
    ⟹ P⁺(x) for x = c(s↾X)
    ⟹ Q⁺(g_S(x))
    ⟹ Q⁺(g_S(c(s↾X)))
    ⟹ Q⁺(c(f_S(s)↾X))
    ⟹ Q((c(f_S(s)↾X))_v)
    ⟹ Q(f_S(s)(x_1), ..., f_S(s)(x_k))
    ⟹ f_S(s) ∈ St_I(Q)













The concept of substitutions is extended to apply to state codes and PR functions
on those state codes. As expected, substitutions on state codes are defined similarly
to substitutions on states, and substitutions on the PR functions of state codes are
defined similarly to substitutions on program functions. Say p is a substitution on
X. For z a state code on the variables p(X) define p on z by

    (zp)_x = (z)_{p(x)} for x ∈ X.

For PR function g, defined from a state code of X to a domain value, define p on g
by

    gp(z) = g(zp).

For any PR function g, defined on the state codes of X, define p on g by

    (gp(z))_{p(x)} = (g(zp))_x for x ∈ X.

Notice that for p a substitution on X and g defined on state codes of X, gp is defined
on state codes of p(X).

Lemma 19 (Composition of substituted functions) For PR functions g_1 and g_2 defined
on state codes of X and p a substitution on X

    g_2 p(g_1 p(z)) = (g_2 ∘ g_1)p(z).

Proof: The proof is a straightforward application of the definitions. □

Lemma 20 (Substitution on state code formulas) For formula P and substitution p

    (Pp)⁺(x) ⟺ P⁺(xp).

Proof: First prove (τp)⁺(x) = τ⁺(xp) by induction on the term τ. Then prove the
lemma by induction on formula P. □










Lemma 21 For a function f defined on the state codes of {x_1, ..., x_k} and p = [ȳ_k/x̄_k]

    ⊢_PRA ∀i ≤ k(x_i = y_i) → ∀i ≤ k((f(c(x̄_k x̄_k)))_{x_i} = (fp(c(ȳ_k ȳ_k)))_{y_i}).

Proof: Assume x_i = y_i for 1 ≤ i ≤ k, then

    (fp(c(ȳ_k ȳ_k)))_{y_i}
    = (fp(c((y_1 y_1) ⋯ (y_k y_k)(x_1 z_1) ⋯ (x_k z_k))))_{y_i}    Extend fp
    = (f(c((y_1 y_1) ⋯ (y_k y_k)(x_1 z_1) ⋯ (x_k z_k))p))_{x_i}    Def. p on func.
    = (f(c((y_1 y_1) ⋯ (y_k y_k)(x_1 y_1) ⋯ (x_k y_k))))_{x_i}     Def. p on state code
    = (f(c(x̄_k ȳ_k)))_{x_i}                                        Reduce f
    = (f(c(x̄_k x̄_k)))_{x_i}                                        Assumption

□

Corollary 22 For a function g which takes the state codes of {x_1, ..., x_k} to a domain
element and p = [ȳ_k/x̄_k]

    ⊢_PRA ∀i ≤ k(x_i = y_i) → g(c(x̄_k x̄_k)) = gp(c(ȳ_k ȳ_k)).
Theorem 23 (Strongest Postcondition Theorem) Given program segment S and as-
sertion P with free(S) ⊆ free(P) = {x_1, ..., x_k}, the SPC of S and P is

    Q = ∃ȳ_k(∀i ≤ k(x_i = (g_S p(y))_{y_i}) ∧ Pp)

where p = [ȳ_k/x̄_k] and y = c((y_1 y_1) ⋯ (y_k y_k)).
That is, the following hold:

1. ⊨_PRA {P}S{Q}
2. ⊨_PRA {P}S{R} ⟹ ⊨_PRA Q → R

Proof: Let p = [ȳ_k/x̄_k] and X = {x_1, ..., x_k}. In the following assume i ranges from
1 to k.

1.

    s ∈ St_I(P)
    ⟹ P⁺(c(s↾X))                                                       Lemma 17
    ⟹ (∃ȳ_k(x_i = y_i ∧ Pp))⁺(c(s↾X))
    ⟹ (∃ȳ_k((g_S(x))_{x_i} = (g_S p(y))_{y_i} ∧ Pp))⁺(c(s↾X)) where
        x = c((x_1 x_1) ⋯ (x_k x_k)) and y = c((y_1 y_1) ⋯ (y_k y_k))   Lemma 21
    ⟹ ∃ȳ_k((g_S(c(s↾X)))_{x_i} = (g_S p(y))_{y_i} ∧ Pp)
    ⟹ ∃ȳ_k((c(f_S(s)↾X))_{x_i} = (g_S p(y))_{y_i} ∧ Pp)                Lemma 10
    ⟹ (∃ȳ_k(x_i = (g_S p(y))_{y_i} ∧ Pp))⁺(c(f_S(s)↾X))
    ⟹ f_S(s) ∈ St_I(Q).

2. Assume ∀s(s ∈ St_I(P) → f_S(s) ∈ St_I(R)). By Lemma 18 ∀x(P⁺(x) →
R⁺(g_S(x))). Say y = c((y_1 y_1) ⋯ (y_k y_k)).

    ∃ȳ_k(x_i = (g_S p(y))_{y_i} ∧ Pp)
    ⟹ ∃ȳ_k(x_i = (g_S p(y))_{y_i} ∧ (Pp)⁺(y))
    ⟹ ∃ȳ_k(x_i = (g_S(yp))_{x_i} ∧ P⁺(yp))
    ⟹ ∃ȳ_k(x_i = (g_S(yp))_{x_i} ∧ R⁺(g_S(yp)))
    ⟹ R⁺(x) for x = c((x_1 x_1) ⋯ (x_k x_k))
    ⟹ R

□



In the SPC Theorem it is assumed that free(S) ⊆ free(P). Notice that this situation
is easily created by adding useless equalities to formula P.

A note similar to the one concerning Lemma 18 applies to the SPC Theorem.
That is, a syntactic version of the SPC Theorem may not hold. For Q defined as in
the SPC Theorem it can be shown that ⊢_{ℋ_PR} {P}S{Q}. However the proof
that ⊢_{ℋ_PR} {P}S{R} ⟹ ⊢_PRA Q → R depends on the soundness of the verification
system. Since for unbounded formulas P and R the system may prove a Hoare triple
which is not true in some model of PRA, ⊢_{ℋ_PR} {P}S{R} may hold while
⊬_PRA Q → R. If it is required that pre- and postconditions be bounded whenever
the Iteration Rule is utilized, a syntactic version of the SPC Theorem holds.










The provability of the SPC Hoare triple for a program segment S and an assertion

P is the major step in proving the completeness of the verification system. As new

constructs are added to the language most of the effort in proving the completeness

of the new system lies in showing the provability of this SPC Hoare triple. Several

lemmas are required to lay the groundwork.

Lemmas 1 and 2 link general substitutions on formulas to replacements on states.

The following two lemmas link general substitutions on formulas to replacements on

state codes.

Lemma 24 (Substitution and replacement on terms.) For a term τ, an expression e,
a state code x with var(τ), var(e) ⊆ var(x) and any variable identifier x_j

    (τ[e/x_j])⁺(x) = τ⁺(set_{x_j}(e, x)).

Proof: Prove the lemma by induction on the term τ. □

Lemma 25 (Substitution and replacement on formulas.) For a formula P, an expres-
sion e and a state code x with free(P), var(e) ⊆ var(x) and any variable identifier
x_j

    (P[e/x_j])⁺(x) ⟺ P⁺(set_{x_j}(e, x)).

Proof: This follows from Lemma 24 and induction on formula P. □


Corollary 26 (Equality before application of a function.) For a function f defined on
the state codes of {x_1, ..., x_k} and 1 ≤ j ≤ k

    ∀ū_k v̄_k(∀i ≤ k(u_i = (f(c(x̄_k v̄_k)))_{x_i}) ∧ v_j = a
        → ∀i ≤ k(u_i = (f(set_{x_j}(a, c(x̄_k v̄_k))))_{x_i}))

Proof: Define a predicate R as follows

    R ≡ λv̄_k.∀i ≤ k(u_i = (f(c(x̄_k v̄_k)))_{x_i}).

Notice that the lambda notation allows the u_i's to be considered fixed in R. Let
p = [x̄_k/v̄_k]. For all ū_k

    u_i = (f(c(x̄_k v̄_k)))_{x_i} for 1 ≤ i ≤ k ∧ v_j = a
    ⟹ R ∧ v_j = a
    ⟹ R[a/v_j]
    ⟹ Rp[a/x_j]
    ⟹ (Rp[a/x_j])⁺(x) for x = c((x_1 x_1) ⋯ (x_k x_k)(v_1 v_1) ⋯ (v_k v_k))
    ⟹ (Rp)⁺(set_{x_j}(a, x))
    ⟹ R⁺((set_{x_j}(a, x))p)
    ⟹ R⁺(set_{v_j}(a, xp))
    ⟹ u_i = (f(c((x_1 (set_{v_j}(a, xp))_{x_1}) ⋯ (x_k (set_{v_j}(a, xp))_{x_k}))))_{x_i} for 1 ≤ i ≤ k
    ⟹ u_i = (f(c((x_1 (xp)_{x_1}) ⋯ (x_j a) ⋯ (x_k (xp)_{x_k}))))_{x_i} for 1 ≤ i ≤ k
    ⟹ u_i = (f(set_{x_j}(a, xp)))_{x_i} for 1 ≤ i ≤ k
    ⟹ u_i = (f(set_{x_j}(a, c(x̄_k v̄_k))))_{x_i} for 1 ≤ i ≤ k

□



Corollary 27 (Equality after application of a function.) For a function f defined on
the state codes of X = {x_1, ..., x_k} and x a state code on X

    ∀ū_k y(∀i ≤ k, i ≠ j(u_i = (f(x))_{x_i}) ∧ u_j = y
        → ∀i ≤ k(u_i = (set_{x_j}(y, f(x)))_{x_i})).

Proof: Define a new function f' on the state codes of X by

    (f'(x))_{x_i} = { (f(x))_{x_i}   if 1 ≤ i ≤ k, i ≠ j
                    { y              if i = j.

Let R ≡ λw̄_k.∀i ≤ k(u_i = w_i), w_i = (f(x))_{x_i} for 1 ≤ i ≤ k and p = [x̄_k/w̄_k]. Notice
that for z = c((x_1 x_1) ⋯ (x_k x_k)(w_1 w_1) ⋯ (w_k w_k)) and 1 ≤ i ≤ k, i ≠ j

    (zp)_{x_i} = (z)_{w_i} = w_i = (f(x))_{x_i} = (f'(x))_{x_i}.

For all ū_k, y

    u_i = (f(x))_{x_i} for 1 ≤ i ≤ k, i ≠ j ∧ u_j = y
    ⟹ u_i = w_i for 1 ≤ i ≤ k, i ≠ j ∧ u_j = y
    ⟹ R[y/w_j]
    ⟹ Rp[y/x_j]
    ⟹ (Rp[y/x_j])⁺(z) for z = c((x_1 x_1) ⋯ (x_k x_k)(w_1 w_1) ⋯ (w_k w_k))
    ⟹ (Rp)⁺(set_{x_j}(y, z))
    ⟹ R⁺((set_{x_j}(y, z))p)
    ⟹ R⁺(set_{x_j}(y, zp))
    ⟹ u_i = (set_{x_j}(y, zp))_{x_i} for 1 ≤ i ≤ k
    ⟹ u_i = (set_{x_j}(y, f(x)))_{x_i} for 1 ≤ i ≤ k

□



Lemma 28 shows the provability of the SPC Hoare triple for this system.

Lemma 28 (SPC Hoare triple) For a program segment S and an assertion P where
free(S) ⊆ free(P) = X = {x_1, ..., x_k}, Y = {y_1, ..., y_k}, X ∩ Y = ∅ and p = [ȳ_k/x̄_k]

    ⊢_{ℋ_PR} {∀i ≤ k(x_i = y_i) ∧ P} S {∀i ≤ k(x_i = (g_S p(c(ȳ_k ȳ_k)))_{y_i}) ∧ Pp}.


Proof: Prove the lemma by induction on S. Assume i ranges from 1 to k, x =
c(x̄_k x̄_k) and y = c(ȳ_k ȳ_k).

S ≡ x_j := e

    {x_i = y_i ∧ P} x_j := e {x_i = (g_S p(y))_{y_i} ∧ Pp}
    a ↑ Consequence Rule
    {(x_i = (g_S p(y))_{y_i} ∧ Pp)[e/x_j]} x_j := e {x_i = (g_S p(y))_{y_i} ∧ Pp}
       Assignment Axiom

The proof of implication a is as follows. Let R ≡ λx̄_k.x_i = (g_S p(y))_{y_i} ∧ Pp.

    x_i = y_i ∧ P
    ⟹ (g_S(x))_{x_i} = (g_S p(y))_{y_i} ∧ Pp                   Lemma 21
    ⟹ (set_{x_j}(e, x))_{x_i} = (g_S p(y))_{y_i} ∧ Pp          Def. g_S
    ⟹ R((set_{x_j}(e, x))_{x_1}, ..., (set_{x_j}(e, x))_{x_k}) ∧
       X ⊆ var(set_{x_j}(e, x))                                 Def. R
    ⟹ R⁺(set_{x_j}(e, x))                                      Def. ⁺-ext.
    ⟹ (R[e/x_j])⁺(x)                                           Lemma 25
    ⟹ R[e/x_j](x_1, ..., x_k)
    ⟹ (x_i = (g_S p(y))_{y_i} ∧ Pp)[e/x_j]


S ≡ S_1; S_2

    {x_i = y_i ∧ P} S_1; S_2 {x_i = (g_S p(y))_{y_i} ∧ Pp}
    a ↑ Consequence Rule
    {x_i = y_i ∧ P} S_1; S_2 {x_i = (g_{S_2} p(g_{S_1} p(y)))_{y_i} ∧ Pp}
    ↑ Composition Rule
    {x_i = y_i ∧ P} S_1 {x_i = (g_{S_1} p(y))_{y_i} ∧ Pp}
       Inductive Hypothesis
    {x_i = (g_{S_1} p(y))_{y_i} ∧ Pp} S_2 {x_i = (g_{S_2} p(g_{S_1} p(y)))_{y_i} ∧ Pp}
       Inductive Hypothesis

The proof of implication a is as follows.

    x_i = (g_{S_2} p(g_{S_1} p(y)))_{y_i} ∧ Pp
    ⟹ x_i = ((g_{S_2} ∘ g_{S_1})p(y))_{y_i} ∧ Pp   Lemma 19
    ⟹ x_i = (g_S p(y))_{y_i} ∧ Pp                  Def. g_S

S ≡ loop e; S_1 end











    {x_i = y_i ∧ P} loop e; S_1 end {x_i = (g_S p(y))_{y_i} ∧ Pp}
    a ↑ Consequence Rule ↑ b
    Say var(e) = {x_1, ..., x_m} and
    R ≡ λx̄_k u.x̄_m = ȳ_m ∧ x_i = (g_{S_1}^u p(y))_{y_i} ∧ Pp
    {R[0/u]} loop e; S_1 end {R[e/u]}
    ↑ Iteration Rule
    {R[v/u] ∧ 0 ≤ v < e} S_1 {R[v+1/u]}
    c ↑ Consequence Rule ↑ d
    Let z = c(z̄_k z̄_k), z_i = (g_{S_1}^v p(y))_{y_i} for 1 ≤ i ≤ k
    and p' = [z̄_k/x̄_k]
    {x_i = z_i ∧ Pp} S_1 {x_i = (g_{S_1} p'(z))_{z_i} ∧ (Pp)p'}
       Inductive Hypothesis

The proof of implication a is as follows.

    x_i = y_i ∧ P
    ⟹ x̄_m = ȳ_m ∧ x_i = (g_{S_1}^0 p(y))_{y_i} ∧ Pp
    ⟹ R[0/u]

The proof of implication b is as follows. Let z = c((x_1 x_1) ⋯ (x_k x_k)(u u)).

    R[e/u]
    ⟹ R[e/u](x_1, ..., x_k, u) ∧ free(R[e/u]) ⊆ var(z)
    ⟹ (R[e/u])⁺(z)
    ⟹ R⁺(set_u(e, z))
    ⟹ R⁺(c((x_1 x_1) ⋯ (x_k x_k)(u g_e(z))))
    ⟹ x̄_m = ȳ_m ∧ x_i = (g_{S_1}^{g_e(z)} p(y))_{y_i} ∧ Pp
    ⟹ x_i = (g_{S_1}^{g_e(yp)} p(y))_{y_i} ∧ Pp
    ⟹ x_i = (g_S p(y))_{y_i} ∧ Pp

The proof of implication c is as follows.

    R[v/u] ∧ 0 ≤ v < e
    ⟹ x̄_m = ȳ_m ∧ x_i = (g_{S_1}^v p(y))_{y_i} ∧ Pp ∧ 0 ≤ v < e
    ⟹ x_i = z_i ∧ Pp

The proof of implication d is as follows.

    x_i = (g_{S_1} p'(z))_{z_i} ∧ (Pp)p'
    ⟹ x_i = (g_{S_1} p(g_{S_1}^v p(y)))_{y_i} ∧ Pp
    ⟹ x_i = (g_{S_1}^{v+1} p(y))_{y_i} ∧ Pp
    ⟹ x̄_m = ȳ_m ∧ x_i = (g_{S_1}^{v+1} p(y))_{y_i} ∧ Pp
    ⟹ R[v+1/u]

□

Theorem 29 (Completeness) For a Hoare triple {P}S{Q}

    ⊨_PRA {P}S{Q} ⟹ ⊢_{ℋ_PR} {P}S{Q}.

Proof: Assume ⊨_PRA {P}S{Q}. Without loss of generality also assume free(S) ⊆
free(P) = X = {x_1, ..., x_k}, Y = {y_1, ..., y_k}, X ∩ Y = ∅, p = [ȳ_k/x̄_k] and y =
c(ȳ_k ȳ_k). The proof of ⊢_{ℋ_PR} {P}S{Q} follows.

    {P}S{Q}
    a ↑ Consequence Rule ↑ b
    {∀i ≤ k(x_i = y_i) ∧ P} S {∀i ≤ k(x_i = (g_S p(y))_{y_i}) ∧ Pp}
       SPC Hoare triple

Implication a holds since for P* defined from P by P* ≡ λx̄_k ȳ_k.∀i ≤ k(x_i = y_i) ∧ P,

    ⊢_PRA ∀ȳ_k(P(x̄_k) → P*(x̄_k x̄_k)).

Implication b holds because ∀i ≤ k(x_i = (g_S p(y))_{y_i}) ∧ Pp implies the SPC of
program segment S and assertion P. Therefore by part 2 of the SPC Theorem
∀i ≤ k(x_i = (g_S p(y))_{y_i}) ∧ Pp implies Q. □















CHAPTER 3
BLOCK LANGUAGE


3.1 Syntax of ℒ_B

The new token "begin" appears in ℒ_B. The set of program segments is defined
in Backus-Naur form for variable identifier x and expression e as follows:

    S ≡ x := e | S_1; S_2 | loop e; S_1 end | B
    B ≡ begin x; S_1 end.

For simplicity blocks with multiple variable declarations are not allowed. They can
be considered as abbreviations of nested blocks with single variable declarations.
The program segment begin x; S_1 end binds variable x in S_1. Identifiers which are
not bound in a program segment are free in that program segment. Let the oc-
currence of an identifier x at location i be denoted (i, x). Let the x immediately
following the token begin be at location j. The defining occurrence of all occur-
rences of x in 'begin x; S_1 end', which are not also within another program segment
'begin x; S_2 end' contained in S_1, is (j, x).

Call a program distinguished if each defining occurrence of an identifier is unique
and no identifier appears both free and bound in the program. Program segments
S and S' are congruent, denoted S ≈ S', if they differ only by a renaming of their
bound identifiers.


3.2 Semantics of LB

Recall that state s is a total mapping VI --* D where D is the domain of the

interpretation. Therefore variable x in begin x; SS end will have a domain value
41










associated with it. This introduces nondeterminacy into the language in the case
that S1 reads x's value before writing it. Hence, for simplicity, variables declared
in blocks will be initialized to a fixed domain value before the body of the block is
executed. Call this value a. After S1 is executed x's original value will be restored.
The following state function gives the meaning of the variable declaration statement.

S begin x; S1 end

fs(s) = blockZ(x, fs,)(s) = (f(s{Z(a)/x})){Z(x)(s)/x}


The state function blockZ(x, fs,) will frequently be written blockZ(x, Si).
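The save/initialize/restore behavior of block_I can be illustrated directly. In the sketch below, dict states stand in for the semantic states and a = 0 is an arbitrary choice of the fixed initial value (both assumptions).

```python
A = 0  # the fixed initial value a (an arbitrary illustrative choice)

def block(x, f):
    """block(x, f)(s) = (f(s{a/x})){s(x)/x}: initialize x to A, run the
    body's state function f, then restore x's original value."""
    def run(s):
        inner = dict(s); inner[x] = A            # s{I(a)/x}
        out = f(inner)                            # f_{S_1} on the inner state
        restored = dict(out); restored[x] = s[x]  # {I(x)(s)/x}
        return restored
    return run

# Body for 'begin x; x := x + 1; y := x end': y sees the local x,
# while the outer x survives the block.
body = lambda s: {**s, 'x': s['x'] + 1, 'y': s['x'] + 1}
f = block('x', body)
```

Running f on {'x': 9, 'y': 2} initializes the local x to 0, so the body sets y to 1, and the final state restores x to 9: the declared variable is invisible from outside the block.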

Lemma 30 (Substitution Lemma) For a program segment S and a substitution p
which is injective on free(S)

    f_{Sp}(s) = f_S p(s).

Proof: Prove this by induction on program segment S. The case of a variable
declaration statement is given.

S ≡ begin x_i; S_1 end

    f_{Sp}(s)(x) = block_I(p(x_i), f_{S_1 p})(s)(x)
    = block_I(p(x_i), f_{S_1}p)(s)(x)                        Ind. Hyp.
    = (f_{S_1}p(s{I(a)/p(x_i)})){I(p(x_i))(s)/p(x_i)}(x)
    = { I(p(x_i))(s)                       if x = p(x_i)
      { f_{S_1}p(s{I(a)/p(x_i)})(x)        if x ≠ p(x_i)
    = { I(x_i)(sp)                         if x = p(x_i)
      { f_{S_1}(s{I(a)/p(x_i)}p)(p⁻¹x)     if x ≠ p(x_i)
    = (f_{S_1}(sp{I(a)/x_i})){I(x_i)(sp)/x_i}(p⁻¹x)
    = block_I(x_i, f_{S_1})(sp)(p⁻¹x)
    = block_I(x_i, f_{S_1})p(s)(x)

□



For S ≡ begin x; S_1 end, it is shown that f_S is a PR function in the following
section. It can be shown that f_S is stable and aloof, with respect to its inactive
variables, as it was in Chapter 2. Therefore, f_S = block_I(x, S_1) is a program function
on free(S).

Lemma 31 (Extension to Lemma 8) Let f be a program function on X.

    block_I(x, f)(P) ⊆ Q ⟺ f(P[y/x] ∧ x = a) ⊆ Q[y/x]

where y ∉ X ∪ free(P ∨ Q).

Proof: This statement will be proved if the following can be proved for y ∉ X ∪
free(P ∨ Q).

    ∀s(s ∈ St_I(P) → f(s{I(a)/x}){I(x)(s)/x} ∈ St_I(Q))
    ⟺
    ∀s(s ∈ St_I(P[y/x] ∧ x = a) → f(s) ∈ St_I(Q[y/x])).

(⟹) Assume ∀s(s ∈ St_I(P) → f(s{I(a)/x}){I(x)(s)/x} ∈ St_I(Q)).

    s ∈ St_I(P[y/x] ∧ x = a)
    ⟹ s ∈ St_I(P[y/x])
    ⟹ s{I(y)(s)/x} ∈ St_I(P)                                       Corollary 3
    ⟹ (f(s{I(y)(s)/x}{I(a)/x})){I(x)(s{I(y)(s)/x})/x} ∈ St_I(Q)    Assumption
    ⟹ (f(s{I(a)/x})){I(y)(s)/x} ∈ St_I(Q)
    ⟹ (f(s)){I(y)(s)/x} ∈ St_I(Q)                                  s ∈ St_I(x = a)
    ⟹ (f(s)){I(y)(f(s))/x} ∈ St_I(Q)                               f is stable
    ⟹ f(s) ∈ St_I(Q[y/x])                                          Corollary 3

(⟸) Assume ∀s(s ∈ St_I(P[y/x] ∧ x = a) → f(s) ∈ St_I(Q[y/x])).

    s ∈ St_I(P)
    ⟹ s{I(x)(s)/y} ∈ St_I(P[y/x])                                  Corollary 4
    ⟹ s{I(x)(s)/y, I(a)/x} ∈ St_I(P[y/x] ∧ x = a)
    ⟹ f(s{I(x)(s)/y, I(a)/x}) ∈ St_I(Q[y/x])                       Assumption
    ⟹ f(s{I(a)/x}){I(x)(s)/y} ∈ St_I(Q[y/x])                       f stable & aloof
    ⟹ ((f(s{I(a)/x})){I(x)(s)/y}){I(y)((f(s{I(a)/x})){I(x)(s)/y})/x} ∈ St_I(Q)   Corollary 4
    ⟹ (f(s{I(a)/x})){I(x)(s)/x} ∈ St_I(Q)

□










3.3 ℒ_B Computes the Class of PR Functions

Two new PR functions are needed before extending g_S for variable declarations.
Define add_{x_i}, which takes a state code on X and returns a state code on X ∪ {x_i},
and drop_{x_i}, which takes a state code on X ∪ {x_i} and returns a state code on X, as
follows.

For x a state code on X = {x_1, ..., x_k} and var(e) ⊆ X

    add_{x_i}(e, x) = c((x_1 (x)_{x_1}) ⋯ (x_k (x)_{x_k})(x_i g_e(x))).

For x a state code on X = {x_1, ..., x_k}

    drop_{x_i}(x) = { c((x_1 (x)_{x_1}) ⋯ (x_{i-1} (x)_{x_{i-1}})(x_{i+1} (x)_{x_{i+1}}) ⋯ (x_k (x)_{x_k}))   if x_i ∈ X
                    { c((x_1 (x)_{x_1}) ⋯ (x_k (x)_{x_k}))                                                     if x_i ∉ X

The PR functions defined in this dissertation are for distinguished programs.
A PR function could be written to take an ℒ_B program as input and output a
distinguished version of that program. Throughout the remainder of this dissertation
it will be assumed that such a translation has already occurred.

Extend the function g_S with a clause for variable declarations. For program
segment S in an ℒ_B program define g_S : D → D as in Section 2.3 with the additional
clause:

S ≡ begin x_i; S_1 end

    g_S(x) = drop_{x_i}(g_{S_1}(add_{x_i}(a, x)))
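The new clause composes add, the body's function, and drop. A sketch with dict state codes standing in for the text's numeric codes (an assumption; a = 0 is likewise an arbitrary choice):

```python
A = 0  # the fixed initial value a (illustrative choice)

def add_var(xi, value, x):
    """add_{x_i}: extend state code x with a component for x_i."""
    y = dict(x); y[xi] = value; return y

def drop_var(xi, x):
    """drop_{x_i}: remove x_i's component from state code x, if present."""
    return {v: d for v, d in x.items() if v != xi}

def g_block(xi, g_body, x):
    # S = begin x_i; S_1 end:  g_S(x) = drop_{x_i}(g_{S_1}(add_{x_i}(a, x)))
    return drop_var(xi, g_body(add_var(xi, A, x)))
```

Because the program is distinguished, x_i is not already a component of x, so no value needs to be saved at the state-code level; drop simply discards the local variable on exit.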

Lemma 32 (Extension to Lemma 10) Function g_S is PR and for free(S) = X

    g_S(c(s↾X)) = c(f_S(s)↾X).

Proof: Function g_S is PR since add_{x_i} and drop_{x_i} are PR functions. Prove the
equality by induction on program segment S. The cases for S ≡ x_i := e, S ≡ S_1; S_2
and S ≡ loop e; S' end are proved as they were in Chapter 2.

S ≡ begin x_i; S_1 end  Since S occurs in a distinguished program x_i will not already
be in X.

    g_S(c(s↾X)) = drop_{x_i}(g_{S_1}(add_{x_i}(a, c(s↾X))))
    = drop_{x_i}(g_{S_1}(c(s{I(a)/x_i}↾X ∪ {x_i})))
    = drop_{x_i}(c(f_{S_1}(s{I(a)/x_i})↾X ∪ {x_i}))     Ind. Hyp.
    = c(f_{S_1}(s{I(a)/x_i})↾X)
    = c(f_{S_1}(s{I(a)/x_i}){I(x_i)(s)/x_i}↾X)          x_i ∉ X
    = c(f_S(s)↾X)

□

Theorem 33 The class of functions computed by ℒ_B programs is the class of PR
functions.

Proof: The proof is the same as that given for this theorem in Chapter 2 except that
the new meaning function g_S is used. □



3.4 Verification of ℒ_B Programs

The verification system ℋ_PR is extended to ℋ_B by adding the following Program
and Variable Declaration Rules.

Program Rule

    {P}S_d{Q}
    ---------
    {P}π{Q}

for π = (S, x_z), S ≈ S_d and S_d distinguished.

Variable Declaration Rule

    {P[y/x] ∧ x = a} S {Q[y/x]}
    ---------------------------
    {P} begin x; S end {Q}

for y ∉ free(P ∨ Q) ∪ free(S).










3.5 Soundness of ℋ_B

Theorem 34 (Soundness) For a Hoare triple {P}S{Q}

    ⊢_{ℋ_B} {P}S{Q} ⟹ ⊨_PA {P}S{Q}.

Proof: Prove this by induction on the proof system ℋ_B. The soundness of the rules
presented in Chapter 2 is proven as it was in that chapter. Let I be an interpretation
of PA.

Variable Declaration Rule:

Assume ⊢_{ℋ_B} {P} begin x; S end {Q}. In that proof ⊢_{ℋ_B} {P[y/x] ∧ x = a} S
{Q[y/x]} for y ∉ free(P ∨ Q) ∪ free(S). By the inductive hypothesis ⊨_I {P[y/x] ∧
x = a} S {Q[y/x]} or f_S(P[y/x] ∧ x = a) ⊆ Q[y/x]. Therefore by Lemma 31
block_I(x, f_S)(P) ⊆ Q. □

Corollary 35 For a Hoare triple {P}π{Q}

    ⊢_{ℋ_B} {P}π{Q} ⟹ ⊨_PA {P}π{Q}.

Proof: Let I be an interpretation of PA, π = (S, x_z) and S ≈ S_d where S_d is distin-
guished. Assume ⊢_{ℋ_B} {P}π{Q}. Then in that proof ⊢_{ℋ_B} {P}S_d{Q} and by the
Soundness Theorem ⊨_I f_{S_d}(P) ⊆ Q. By the Substitution Lemma this is equivalent
to ⊨_I f_S(P) ⊆ Q, or ⊨_I {P}π{Q}. □


3.6 Completeness of H_B

Once the provability of the SPC Hoare triple is established, the completeness
property of H_B is proven as it was in Chapter 2. The following results are needed to
show the provability of the SPC Hoare triple.

Lemma 36 For expression e with var(e) ⊆ {x_1, ..., x_k}

add_{x_{k+1}}(e, c(x̄_k v̄_k)) = set_{x_{k+1}}(e, c(x̄_{k+1} v̄_{k+1}))










Proof: The proof is a straightforward application of the definitions. □

Lemma 37 For a function f defined on the state codes of {x_1, ..., x_k}

∀ū_{k+1} ∀v̄_{k+1} (∀i ≤ k+1 (u_i = (f(c(x̄_{k+1} v̄_{k+1})))x_i) ∧ v_{k+1} = a
    → ∀i ≤ k+1 (u_i = (f(add_{x_{k+1}}(a, c(x̄_k v̄_k))))x_i))

Proof: This results from Corollary 26 and Lemma 36. □
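The state-code operations used in Lemmas 36 and 37 can be pictured concretely. The following Python sketch is an illustrative model only: the dissertation codes states as numbers, while here a state code is modeled as a dictionary from variable identifiers to values, and the function names are assumptions.

```python
# Illustrative model only: a state code is rendered as a dict from
# variable names to values rather than as a numeric coding.
def set_var(x, v, code):
    return {**code, x: v}          # update x, already in the code's domain

def add_var(x, v, code):
    return {**code, x: v}          # extend the code with a new variable x

def drop_var(x, code):
    return {k: v for k, v in code.items() if k != x}

# adding a fresh variable and then dropping it restores the code
c = {"x1": 3, "x2": 5}
assert drop_var("x3", add_var("x3", 7, c)) == c
```

In this model, the content of Lemma 36 is that extending a code with a fresh variable and setting a variable already in the code's domain coincide when the variable is the newly added one.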
Lemma 38 (SPC Hoare triple) For a program segment S and an assertion P where
free(S) ⊆ free(P) = X = {x_1, ..., x_k}, Y = {y_1, ..., y_k}, X ∩ Y = ∅ and ρ = [ȳ_k/x̄_k]

⊢_{H_B} {∀i ≤ k (x_i = y_i) ∧ P} S {∀i ≤ k (x_i = (g_{Sρ}(c(ȳ_k ȳ_k)))y_i) ∧ Pρ}.

Proof: Prove the lemma by induction on S. The proof for all cases except the
variable declaration will be the same as in Chapter 2. Let γ = c(ȳ_k ȳ_k).

S ≡ begin x_j; S_1 end

Assume free(begin x_j; S_1 end) ⊆ free(P) = X. Since S occurs in a
distinguished program, x_j will not already be in X. Without loss of generality let
S ≡ begin x_{k+1}; S_1 end.


{∀i ≤ k (x_i = y_i) ∧ P} begin x_{k+1}; S_1 end
{∀i ≤ k (x_i = (g_{Sρ}(γ))y_i) ∧ Pρ}
    ↑ Consequence Rule (implication a)

Let y_{k+1} be a fresh variable, γ' = c(ȳ_{k+1} ȳ_{k+1}),
ρ' = [ȳ_{k+1}/x̄_{k+1}] and g_{S_1ρ'} be extended to operate
on the state code γ' so that it leaves y_{k+1}
unchanged.

{∀i ≤ k+1 (x_i = y_i) ∧ P} begin x_{k+1}; S_1 end
{∀i ≤ k+1 (x_i = (g_{S_1ρ'}(γ'))y_i) ∧ (P ∧ x_{k+1} = a)ρ'}
    ↑ Variable Declaration Rule

{∀i ≤ k+1 (x_i = y_i) ∧ P ∧ x_{k+1} = a} S_1
{∀i ≤ k+1 (x_i = (g_{S_1ρ'}(γ'))y_i) ∧ (P ∧ x_{k+1} = a)ρ'}
    Inductive Hypothesis













The proof of implication a is as follows.

∀i ≤ k+1 (x_i = (g_{S_1ρ'}(γ'))y_i) ∧ (P ∧ x_{k+1} = a)ρ'
⟹ ∀i ≤ k+1 (x_i = (g_{S_1ρ'}(γ'))y_i) ∧ Pρ ∧ y_{k+1} = a
⟹ ∀i ≤ k+1 (x_i = (g_{S_1}(γ'ρ'))x_i) ∧ Pρ ∧ y_{k+1} = a
⟹ ∀i ≤ k+1 (x_i = (g_{S_1}(add_{x_{k+1}}(a, γρ)))x_i) ∧ Pρ
⟹ ∀i ≤ k (x_i = (drop_{x_{k+1}}(g_{S_1}(add_{x_{k+1}}(a, γρ))))x_i) ∧ Pρ
⟹ ∀i ≤ k (x_i = (g_S(γρ))x_i) ∧ Pρ
⟹ ∀i ≤ k (x_i = (g_{Sρ}(γ))y_i) ∧ Pρ.
















CHAPTER 4
PARAMETERLESS PROCEDURES



4.1 Recursion in a PR Programming Language

A minimal programming language which computes the class of PR functions has

been presented. A more useful language which computes this class of functions is de-

veloped next. In order to do this, constructs similar to those of universal programming

languages will be added. The concern here is not to construct an actual programming

language but to form the theoretical basis for such a language. Constructs such as

conditionals, case statements, bounded while loops and non-recursive procedures can

be straightforwardly added to the language. The unbounded while loop, unbounded

recursion and the goto construct cannot be added. Primitive recursion is one of the

constructs between these extremes.

How should primitive recursion be formulated as a programming construct? Hope-

fully there is a general method so recursion in the language does not need to be in

the exact form of primitive recursion. Consider functions with a single output. One

way to restrict recursive procedure calls is to associate a maximum value with the

recursive procedure. If the value computed by the procedure exceeds this value sub-

sequent recursive calls to this procedure are ignored. Péter [32] refers to the function

computed by such a procedure as bounded recursion and shows that this does not

lead out of the class of elementary functions, a subset of the class of PR functions.

This approach was not adopted in this research.











Alternatively, a bound can be associated with each recursive procedure which

gives the maximum nesting depth of that procedure. This bound would restrict the

number of copies of the procedure which can be active at one time. Once this depth

has been reached subsequent calls to the procedure would be ignored. Figure 4.1 il-

lustrates bounded recursion. The programmer should be able to write an expression

for such a bound. When writing a recursive procedure the programmer should men-

tally justify that the procedure will terminate by verifying that on each recursive call

the problem is broken into a finite number of smaller problems. The expression for

the maximum procedure nesting depth can be determined from this reduction. This

is the approach utilized in this research. Throughout the remainder of this paper

bounded recursion will refer to recursion bounded by a maximum nesting depth.

Unfortunately, placing a bound on the nesting depth of a procedure does not

guarantee that the programs will compute only the class of PR functions. Acker-

mann's function is a total function which is not PR. Ackermann's function can be

defined as follows.


Acker(k, n, m) = E_k(n, m) where
E_0(n, m) = m^n
E_k(0, m) = 1 for k > 0
E_k(n, m) = E_{k-1}(E_k(n-1, m), m) for k > 0 and n > 0

It can be shown that Ackermann's function is not PR because it grows faster

than any PR function [13]. Ackermann's function is built in stages. It is built

at one stage by iterating on the function at the previous stage. The definition of

Ackermann's function given above starts with exponentiation to avoid special initial

cases. Figure 4.2 shows how quickly Ackermann's function grows on k.
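The stages E_k can be transcribed directly into executable form. This Python sketch follows the recurrences as given; like the function itself, it is infeasibly slow except on tiny arguments.

```python
def E(k, n, m):
    if k == 0:
        return m ** n                      # stage 0 is exponentiation
    if n == 0:
        return 1                           # E_k(0, m) = 1 for k > 0
    return E(k - 1, E(k, n - 1, m), m)     # iterate the previous stage

def acker(k, n, m):
    return E(k, n, m)

assert acker(0, 3, 2) == 8     # 2^3
assert acker(1, 3, 2) == 16    # 2^(2^2), a tower of height 3
```

Stage 1 already produces towers of exponents; each further stage iterates the one before it, which is what carries the function past every PR rate of growth.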

















proc fib(n) n        (the trailing n is the procedure's bound)
if n=0 then z:=1
elseif n=1 then z:=1
else z:=fib(n-1)+fib(n-2)
end

When fib is executed the recursive procedure fib is nested 3 levels deep. This is
possible if n ≥ 3.

Figure 4.1. Bounded recursion


[Figure 4.1 detail: the body of fib unfolded three levels deep, each level a copy of the procedure body.]
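The nesting-depth bound can be modeled by threading the remaining depth through each call, with a call made at depth 0 simply ignored. In this Python sketch, letting an ignored call contribute 0 to the sum is an illustrative choice, not the language's exact semantics.

```python
def fib_bounded(n, depth):
    if depth == 0:
        return 0                      # call beyond the bound is ignored
    if n <= 1:
        return 1
    return fib_bounded(n - 1, depth - 1) + fib_bounded(n - 2, depth - 1)

# With depth bound n the computation of fib(n) runs to completion:
assert fib_bounded(5, 5) == 8
# With too small a bound, the deep calls are silently dropped:
assert fib_bounded(5, 2) < 8
```

Since every call strictly decreases the remaining depth, termination is guaranteed regardless of the body, which is the point of bounding the nesting depth.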














Acker(0, n, m) = m^n

Acker(1, n, m) is a tower of exponents m^(m^(...^m)) of height n

Acker(2, n, m) iterates that tower construction n times

Figure 4.2. Ackermann's function Acker(k, n, m) grows quickly on k


If the following procedure is executed when z=1, it terminates with the value of

Acker(k, n, m) in variable z.


proc exp(n,m) n
  L_B program to set
  z to m^n
end
proc acker(k,n,m) k
  if k=0 then exp(n,m)
  else
    loop n
      acker(k-1,z,m)
    end
  end
end


The construct which leads out of the class of PR functions is the recursive call

within the body of a loop. It is this construct which allows Ackermann's function to

be built at one stage by iterating on Ackermann's function at the previous stage.











It can be argued that a recursive call within the body of a loop is not a structured

construct. Structuredness has been defined as the ability to understand the meaning

of the whole from the meaning of the parts and a few combining rules. For example, a

loop language would be considered structured if the meaning of the program segment

'loop n S end' is completely determined by the meaning of S and the knowledge

that the 'loop n' construct means to repeat the body of the loop n times. In the

above construct the meaning of the body of the recursive procedure requires knowing

the meaning of the loop body. Yet knowing the meaning of the loop body requires

knowing the meaning of the procedure body. Thus the meaning of the procedure

body and the loop body are being defined simultaneously.

Notice the relation between a program segment and the function it computes. A

program segment S_1; S_2 translates into the nested function g_{S_2}(g_{S_1}(x)). Similarly the

function computed by the following program segment is a nested recursive function.

S ≡ proc p(x) n
      p(x)
      p(x)
    end
    p(x)

If p is defined by primitive recursion as

p(0, x) = x
p(i + 1, x) = p(i, p(i, x))

then g_S(x) = p(n, x). Péter refers to such nested functions as simple nested recursive

functions and shows that this type of nesting does not lead out of the class of PR

functions. Frequently, however, nested recursion does lead out of the class of PR

functions.

Consider the program segment











loop n
S
end

which translates into the function loop(n, x) defined by

loop(0, x) = x
loop(i + 1, x) = g_S(loop(i, x)).

This does not lead out of the class of PR functions as long as the function g_S is known.

However, if the loop body S contains a recursive call to the procedure containing the

loop, the function g_S will not be known. In this case the function loop is an example

of recursion of the first degree. That is, it is a function which depends on a function.

It can be written
loop(0, x, g_S) = x
loop(i + 1, x, g_S) = g_S(loop(i, x, g_S)).

Péter shows that such functions can be reduced to doubly nested recursive functions

and therefore lead out of the class of PR functions.
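The distinction can be made concrete in Python (an illustrative sketch): iterating a fixed, already-defined body g is ordinary primitive recursion, while taking g itself as an argument is exactly the recursion of the first degree described above.

```python
def loop(n, x, g):
    # repeat the body g of the loop n times, starting from x
    for _ in range(n):
        x = g(x)
    return x

# With a fixed PR body the result is again PR:
assert loop(3, 1, lambda x: 2 * x) == 8

# Passing the body as a parameter gives the functional
# loop(i + 1, x, g) = g(loop(i, x, g)) discussed in the text.
```

The danger arises only when the argument g is itself defined in terms of loop, which is precisely what a recursive call inside a loop body sets up.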

It has been established by the Ackermann example that procedures of the follow-

ing form must be disallowed in a PR programming language.

proc p(x) m
loop n
p(x)
end
end

It is possible to simulate this procedure without using the loop construct. The fol-

lowing procedure is equivalent to the above.

proc p(x) m
proc q(x) n
q(x)
p(x)
end
q(x)
end











The loop is simulated by recursive procedure q. The offensive call is no longer a call

to the procedure itself. In this program the offensive call is a call within procedure

q to procedure p, the parent of q. Thus calls to the parent of the current procedure

must be restricted in a PR language. Since calls to parents are restricted, calls to

all direct ancestors must also be restricted since a call to a parent could easily be

simulated in a language which allowed calls to a direct ancestor.

If procedure q in the previous example was moved outside of procedure p the

resulting program would still be equivalent to a procedure containing a recursive call

within a loop.

proc p(x) m
q(x)
end
proc q(x) n
q(x)
p(x)
end

The recursive call within a loop is simulated by procedure p calling procedure q and

procedure q calling procedure p. Thus call sequences where siblings mutually call

each other, directly or indirectly, must be restricted.
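The simulation works because a bounded loop is itself expressible as a recursive helper, as this illustrative Python sketch shows.

```python
def loop_via_recursion(n, body, x):
    # 'loop n; body end' unrolled as n nested recursive calls
    if n == 0:
        return x
    return loop_via_recursion(n - 1, body, body(x))

assert loop_via_recursion(4, lambda x: x + 2, 0) == 8
```

Because the loop can always be re-expressed this way, forbidding recursive calls inside loop bodies is pointless unless calls to ancestors and mutually recursive siblings are restricted as well.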

A recursive call is a call to a procedure which is active at the time the call is made.

Refer to a recursive call as direct if it is a call to the current procedure. All other

recursive calls will be referred to as indirect. The previous two examples contain both

direct and indirect recursive calls.


4.2 Syntax of CC

In addition to the other tokens of L_B, an infinite set of ordered procedure

identifiers which is disjoint from the set of variable identifiers, and the new token "proc"










are part of the language L_C. Environments are also introduced in this new language.

An environment is a sequence of procedure declarations. Procedure declarations are

of the form proc p e; S end for procedure identifier p and program segment S. Here e

is an expression for the maximum nesting depth of the procedure. This expression is

referred to as the procedure's bound. For a variable identifier x, a procedure identifier

p and an expression e, the set of program segments of L_C is defined in Backus-Naur

form as follows:
S ::= x := e | S_1;S_2 | loop e; S_1 end | B | p
B ::= begin x; S end | begin E; S end
E ::= proc p e; S end | E_1E_2

Recall that the program segment 'begin x; S end' binds variable x in S. Similarly

the program segment

T ≡ begin proc p e; S_1 end E_1; S_2 end

binds procedure identifier p in T. Let the p immediately following the token proc

be at location j. The defining occurrence of all occurrences of p in T, which are not

also within another program segment 'begin proc p e'; S'_1 end E'; S'_2 end' contained

in E_1 or S_2, is (j, p). Now both variable and procedure identifiers must be renamed

to distinguish a program.

It was shown that program segments of the languages L_PR and L_B can be

translated into PR functions. The situation is more complex for program segments in

L_C. These program segments may contain calls to procedures defined outside of S

and they may be contained within a recursive routine. Define a program unit E | S as

an 'environment'/'program segment' pair where all procedure identifiers are bound

in 'begin E; S end'. Define a recursive program unit, or recursive unit, E |p S as an

'environment'/'procedure identifier'/'program segment' triple where E | S is a program

unit and calls in S to p would be recursive.










Let min(E | S) denote E' | S where E' is the minimum environment such that
E' | S is a program unit. Given a program unit E | S where min(E | S) = E' | S,
E' is the set of procedures and idf(E | S) the set of identifiers visible from S. Make
similar definitions for a recursive unit E |p S. Olderog [29] has additional details.

A call di-graph, or call directed graph, for procedure call E | p in a distinguished
program is constructed as follows. Let E | p be the root node. For each node E | p
in the graph, where proc p e; T end ∈ E, consider E | T.

For each program unit E' | S' so considered, do the following.

E' | S' ≡ E' | x_i := e: Do nothing.

E' | S' ≡ E' | S_1; S_2: Consider E' | S_1 and E' | S_2 separately.

E' | S' ≡ E' | loop e; S_1 end: Consider E' | S_1.

E' | S' ≡ E' | begin x_i; S_1 end: Consider E' | S_1.

E' | S' ≡ E' | begin E_1; S_1 end: Consider Add(E_1, E') | S_1.

E' | S' ≡ E' | q: Three cases are possible. Either q = p, there is a node E'' | q
which is a direct ancestor of E | p, or there is no node with procedure identifier
q which is a direct ancestor of E | p. In the first case the call E' | q is a
direct recursive call. Draw a directed edge from node E | p to itself. Note that
min(E' | q) = min(E | p). In the second case E' | q is an indirect recursive call.
Draw a directed edge from node E | p to node E'' | q. Note that min(E' | q) =
min(E'' | q). In the last case E' | q is a non-recursive call. Create a new node
E' | q, draw a directed edge from node E | p to node E' | q and repeat the
process for this new node.

Figure 4.3 gives an example of a procedure call and its call di-graph. For simplicity

the environments have not been shown. Notice that each cycle is entered via a single













-- proc p
q A
q
r
end
proc qqq
q
end
procr s q
S
q
end
proc s
q

end
p


Figure 4.3. Example of a procedure call and its call di-graph

node referred to as the start node of the cycle. Call all other nodes on the cycle inner

nodes. Say a cycle is complex if one of its inner nodes is the start node of another

cycle. Otherwise the cycle is simple. All cycles in the call di-graph of figure 4.3 are

simple.

Consider the language obtained from L_B by removing the loop construct and allowing bounded recursive

calls. Then the call di-graph for a program in this language could contain a complex

cycle. Figure 4.4 shows such a procedure call and its call di-graph. Recall that a

procedure with the call structure given in figure 4.4 may compute a function which

is not PR. Thus the PR language must be restricted so that the call di-graph for any

program in the language contains only simple cycles.
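One way to mechanize this restriction (a sketch not taken from the dissertation) is to require that, inside every strongly connected component of the call di-graph with more than one node, each node has exactly one successor in that component; such a component is then a single simple cycle.

```python
def sccs(graph):
    """Tarjan's strongly connected components; graph: node -> successor list."""
    index, low, stack, on_stack, comps = {}, {}, [], set(), []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return comps

def only_simple_cycles(graph):
    for comp in sccs(graph):
        if len(comp) > 1:
            # a simple cycle: each node has exactly one successor inside it
            if any(sum(w in comp for w in graph.get(v, [])) != 1 for v in comp):
                return False
    return True

# q's self-loop on the p-q cycle makes that cycle complex:
assert not only_simple_cycles({"p": ["q"], "q": ["q", "p"]})
assert only_simple_cycles({"p": ["q"], "q": ["p"]})
```

A single node with a self-loop passes the check, matching the fact that direct recursion by itself forms a simple cycle.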

A simple cycle which is entered via the call E | p will be called E | p's cycle,

or simply p's cycle if no ambiguity will result. If each cycle on the call di-graph is












S ≡ proc p
      q
    end
    proc q
      q
      p
    end

[call di-graph omitted]

Figure 4.4. Procedure call and di-graph with complex cycles

collapsed into a single node the call di-graph becomes a tree. The height of a call

di-graph is the number of edges in the longest path of this tree, counted by passing

through each node only once.

Given the program segment loop e; S end, in the previous chapters var(e) ∩

free(S) = ∅. In this chapter we assume var(e) ∩ free(min(E | S)) = ∅ for program

unit

E | loop e; S end

and for program unit

E | begin proc q e; S end; T end.

A program in L_C is a block with no free procedure identifiers where the call

di-graph of each call contains only simple cycles, and calls to the next node on a cycle

do not occur within the body of a loop. Assignment, composition, iteration, variable

declaration and procedure declaration are referred to as non-call constructs.


4.3 Semantics of L_C

In L_B the work variables can be declared using the construct begin x; S end.

Recall that the meaning of this program segment is independent of the variable











identifier x. That is, begin x; S end and begin y; S[y/x] end, where S[y/x] is the

result of replacing every free occurrence of x in S by y, have the same meaning

provided y ∉ var(S). Care must be taken when adding procedures to this language

so that this independence is maintained.

Consider the following program block. For simplicity the bounds associated with

the procedures will be ignored.

begin x
  proc p
    z := x
  end
  proc q
    begin x
      x := 2
      p
    end
  end
  x := 1
  q
end

If this program is executed using dynamic scope of variables, z will be 2 upon com-

pletion of this program. However, if the variable identifier declared in procedure q is

changed to y so that procedure q is defined as in the following program, z will be 1

after executing the block.

proc q
  begin y
    y := 2
    p
  end
end



Thus the meaning of the block is dependent on the variable identifier used to declare

a temporary work variable in procedure q. Static scope of variables guarantees that










the meaning of S will be independent of the variable identifier chosen. The new

language L_C will use static scope of identifiers.
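The two outcomes can be reproduced with a small interpreter-style sketch in Python. The names are hypothetical, and it assumes, as the surrounding discussion implies, that the body of p reads x into z.

```python
def run(scope):
    """Execute the example block under 'dynamic' or 'static' scoping."""
    frames = [{"x": None, "z": None}]        # frame of the outer block
    outer = frames[0]

    def lookup(name, defining_frame):
        if scope == "dynamic":
            for f in reversed(frames):       # most recent frame wins
                if name in f:
                    return f[name]
        return defining_frame[name]          # static: frame of declaration

    def p():                                 # declared in the outer block
        outer["z"] = lookup("x", outer)

    def q():
        frames.append({"x": 2})              # local declaration, x := 2
        p()
        frames.pop()

    outer["x"] = 1
    q()
    return outer["z"]

assert run("dynamic") == 2
assert run("static") == 1
```

Under dynamic scope p sees q's local x; under static scope it sees the x of the block where p was declared, so the result no longer depends on the identifier q happens to choose.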

Identifiers, both variable and procedure, will be renamed to maintain static scope.

Given a program, Hoare triples will be proved for a distinguished copy of that pro-

gram. Since there are an infinite number of distinguished programs which are con-

gruent to a given program, the identifiers chosen to distinguish a program will be

such that the next available identifier is chosen whenever a new identifier is needed.

Whenever a recursive call is made the bound identifiers in the body of the recur-

sive procedure will be renamed. Renaming procedure identifiers has no effect in L_C.

The groundwork is being laid for when passing procedures as parameters is allowed

and renaming procedure identifiers is necessary to maintain static scope of procedure

identifiers.

Let C_I denote the function which performs the renaming necessary to maintain

static scope. The function C_I takes a program segment S and a finite set of

identifiers I and returns a program segment S' such that S and S' are congruent and no

identifier bound in S' occurs in I. The function C_I is the copy rule introduced by

Olderog [29]. It performs a non-deterministic renaming of identifiers. To make this

deterministic, let C_I choose the next available identifier whenever a new identifier is

needed. The set I of identifiers in C_I contains those identifiers which are visible when

the call is made. Thus frequently C_I will be written without the subscript I, as C.

The meaning of a recursive unit E |p S depends upon the current level of nesting

of that procedure. Therefore semantics must be given relative to bounded recursive

units. The bounded recursive unit E |p^b S denotes a recursive unit E |p S where S

is a program segment on p's cycle and active copies of this cycle may be nested b

additional times. If b = 0, calls to p will be ignored.









The semantics for L_C are defined by a mapping 𝓔 which, given an interpretation
I, assigns a state function 𝓔^I(π) to every program π of L_C as follows.

𝓔^I(π) = 𝓔^I(∅ | S_d) where π = (S, z̄), S ≅ S_d and S_d is distinguished.

𝓔^I(E | x_i := e) = assign^I(x_i, e)

𝓔^I(E | S_1; S_2) = comp^I(𝓔^I(E | S_1), 𝓔^I(E | S_2))

𝓔^I(E | loop e; S_1 end) = loop^I(e, 𝓔^I(E | S_1))

𝓔^I(E | begin x_i; S_1 end) = block^I(x_i, 𝓔^I(E | S_1))

𝓔^I(E | begin E_1; S_1 end) = 𝓔^I(E' | S_1) where E' = Add(E_1, E)

𝓔^I(E | p)(s) = 𝓔^I(E | T)(s)             if E | p is not a start node
              = 𝓔^I(E | p^{I(e)(s)})(s)   if E | p is a start node
    for proc p e; T end ∈ E

𝓔^I(E | p^b) = id                         if b = 0
             = 𝓔^I(E |p^{b-1} C(T))      if b > 0

𝓔^I(E |p^b x_i := e) = assign^I(x_i, e)

𝓔^I(E |p^b S_1; S_2) = comp^I(𝓔^I(E |p^b S_1), 𝓔^I(E |p^b S_2))

𝓔^I(E |p^b loop e; S_1 end) = loop^I(e, 𝓔^I(E |p^b S_1))

𝓔^I(E |p^b begin x_i; S_1 end) = block^I(x_i, 𝓔^I(E |p^b S_1))

𝓔^I(E |p^b begin E_1; S_1 end) = 𝓔^I(E' |p^b S_1) where E' = Add(E_1, E)

𝓔^I(E |p^b q) = 𝓔^I(E | p^b)             if q = p
              = 𝓔^I(E |p^b C(T))         if q is on p's cycle, q ≠ p
              = 𝓔^I(E | q)               if q is not on p's cycle
    for proc q e; T end ∈ E










Notice that for min(E | p^b) = min(E' | p^b), 𝓔^I(E | p^b)(s) = 𝓔^I(E' | p^b)(s). Also
notice that the above semantics are operational. The meaning of a recursive procedure
call is given in terms of the copy rule applied to the body of that procedure. When
the translation is given from an L_C program to the PR function which denotes that
program, denotational semantics are given. Thus, the PR functions corresponding to
program units in L_C must not utilize copy rules.
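The bounded unfolding can be pictured as a higher-order approximation in Python (illustrative names only): a procedure body is a function of the meaning of calls one level less deep, and at bound 0 a call denotes the identity on states.

```python
def call(body, b):
    # meaning of a call with remaining nesting depth b
    if b == 0:
        return lambda s: s                   # E | p^0 denotes id
    return lambda s: body(call(body, b - 1))(s)

# body maps the meaning of the shallower recursive calls to a state
# transformer; here the procedure increments z and recurses once.
body = lambda rec: (lambda s: rec({**s, "z": s["z"] + 1}))

assert call(body, 3)({"z": 0}) == {"z": 3}
```

Each extra unit of bound unfolds the body once more, mirroring the clause that sends E | p^b to the copy-rule expansion of the body at bound b-1.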
Proving a property about an L_C program can no longer be done as a simple
induction on program segment S. Following is an example proof showing how an
arbitrary property R is proven for a program unit E | S. That is, it will be proven
that for a program unit E | S, R(E | S) holds.

The proof involves a number of inductions on a program unit E | S. These
inductions occur within two contexts: induction on a distinguished program unit
E | S, and induction on a distinguished bounded program unit E |p^b S. First prove
that property R holds, in either context, for S a non-call construct. That is, first
prove R(E | S) and R(E |p^b S) for S a non-call construct.

Once the above has been proven it is known that R(E | S) holds for the non-call
constructs. All that is left to show is R(E | p). Prove this by induction on the height
h of the E | p di-graph. Let proc p e; T end ∈ E.

Say h = 0 and the call E | p is not the start node of a cycle. Show that, for E | T
a distinguished program unit, R(E | p) follows from R(E | T). Once this has been
shown, all that is left to show is that R(E | T) holds. It will have been shown that
the property R(E | T) holds for the non-call constructs. Since h = 0 there are no
calls in procedure body T.










Say h = 0 and the call E | p is the start node of one or more cycles. Show that
R(E | p) follows from R(E | p^b). Once this has been shown, all that is left to show
is that R(E | p^b) holds.

Prove R(E | p^b) by induction on bound b. Show R(E | p^0) holds. Assume
R(E | p^{b-1}) holds for any program unit E | p. Show that, for E | T a distinguished
program unit, R(E | p^b) follows from R(E |p^{b-1} T).

It will have been shown that, for any b, R(E |p^b T) holds for the non-call constructs.
Since h = 0, calls in procedure body T can only be to the first node on a cycle.
Consider the cycles on node E | p. Label the calls participating in these cycles
according to how many calls there are between the edge representing this call and
the edge entering node E | p. That is, the nth call directly to procedure p is labeled
E_{n,0} | q_{n,0}. Label the nth call to a procedure which is m calls away from a recursive
call to procedure E | p, E_{n,m} | q_{n,m}. Property R(E |p^{b-1} T) holds for each call
E_{n,m} | q_{n,m} if it can be shown that, for 0 ≤ i ≤ m, R(E_{n,i} |p^{b-1} q_{n,i}).

Prove R(E_{n,i} |p^{b-1} q_{n,i}), 0 ≤ i ≤ m, for any n, by induction on i. First show
R(E_{n,0} |p^{b-1} q_{n,0}), i.e. R(E_{n,0} |p^{b-1} p), using the inductive hypothesis on b. For 0 < i ≤
m, proc q_{n,i} e_{n,i}; U_{n,i} end ∈ E_{n,i} and E_{n,i}, U_{n,i} distinguished, show R(E_{n,i} |p^{b-1} q_{n,i})
follows from R(E_{n,i} |p^{b-1} U_{n,i}).

It will have been shown that R(E_{n,i} |p^{b-1} U_{n,i}) holds for the non-call constructs.
Since the cycles are simple and h = 0 the only calls in U_{n,i} are calls of the form
E_{n',i-1} | q_{n',i-1}. The property R(E_{n',i-1} |p^{b-1} q_{n',i-1}) holds for these calls by induction
on i.

Let h > 0. The proof that R(E | p) holds where h > 0 is similar to the proof that
R(E | p) holds where h = 0, except that an additional case is needed to inductively
prove some of the equations. Only these additional cases will be discussed.










Say E | p is not a start node. In proving R(E | T) the procedure body T may
contain a procedure call. The height of the di-graph for this call will be less than h.
Thus, R(E | T) holds for this call by induction on h.

Say E | p is the start node of one or more cycles. In proving R(E |p^{b-1} T) the
procedure body T may contain one or more calls to a procedure whose node is not
on the cycle. Show that for such a call q, R(E |p^{b-1} q) follows from R(E | q). The
height of the E | q di-graph will be less than h. Thus, R(E | q) holds by induction
on h.

In proving R(E |p^{b-1} T) for a call to the next node on a cycle, R(E_{n,i} |p^{b-1} U_{n,i})
must be proved. Here U_{n,i} may contain one or more calls to a procedure whose node
is not on the cycle. This case is handled as it was in the previous paragraph.

The above shows how to prove a property of L_C programs. Thus, we have a
programming language which includes a powerful form of recursion, yet for which
properties can be proved using simple, though tedious, nested inductions.
Lemma 39 (Substitution Lemma) For a program unit E | S and a substitution ρ
which is injective on free(min(E | S))

𝓔^I((E | S)ρ)(s) = (𝓔^I(E | S))ρ(s).

Proof: This lemma is straightforwardly proved using the technique given in the
example proof. □

Lemma 40 For a program unit E | S, 𝓔^I(E | S) is a program function on
free(min(E | S)).

Proof: It is shown that 𝓔^I(E | S) is PR for I an interpretation on N in the following
section. It can be shown that 𝓔^I(E | S) is stable and aloof, with respect to its
inactive variables, as it was in Chapter 2. □









The following properties of program functions are useful in proving the soundness
of the new verification system.

Lemma 41 (Extension to Lemma 8) Let f be a program function on X.

1. f(P[x̄/ȳ]) ⊆ Q[x̄/ȳ] ⟸ f(P) ⊆ Q where x̄ ∩ X = ∅ and ȳ ∩ X = ∅.

2. f(P[ē/ȳ]) ⊆ Q ⟸ f(P) ⊆ Q where ȳ ∩ (X ∪ free(Q)) = ∅.

Proof:

1. Assume f(P) ⊆ Q, x̄_k ∩ X = ∅ and ȳ_k ∩ X = ∅. Let {s(x̄_k)/ȳ_k} denote the
replacement {s(x_1), ..., s(x_k)/y_1, ..., y_k}.

s ∈ St^I(P[x̄_k/ȳ_k])
⟹ s{s(x̄_k)/ȳ_k} ∈ St^I(P)        Corollary 3
⟹ f(s{s(x̄_k)/ȳ_k}) ∈ St^I(Q)     Assumption
⟺ (f(s)){s(x̄_k)/ȳ_k} ∈ St^I(Q)    f is stable
⟹ f(s) ∈ St^I(Q[x̄_k/ȳ_k])        Corollary 3

2. Assume f(P) ⊆ Q and ȳ_k ∩ (X ∪ free(Q)) = ∅.

s ∈ St^I(P[ē_k/ȳ_k])
⟹ s{I(e_1)(s), ..., I(e_k)(s)/y_1, ..., y_k} ∈ St^I(P)       Corollary 3
⟹ f(s{I(e_1)(s), ..., I(e_k)(s)/y_1, ..., y_k}) ∈ St^I(Q)    Assumption
⟹ f(s) ∈ St^I(Q)                                             ȳ_k ∩ free(Q) = ∅

□

4.4 L_C Computes the Class of PR Functions

In the last two chapters the function g_S was defined which simulated program
segment S. In this chapter g_{E|S} will simulate program unit E | S. Define the
function g_{E|S}(x) as follows.

S ≡ x_i := e    g_{E|S}(x) = set_{x_i}(e, x)

S ≡ S_1; S_2    g_{E|S}(x) = g_{E|S_2}(g_{E|S_1}(x))

S ≡ loop e; S_1 end    g_{E|S}(x) = g_{E|S_1}^{g_e(x)}(x)

S ≡ begin x_i; S_1 end    g_{E|S}(x) = drop_{x_i}(g_{E|S_1}(add_{x_i}(a, x)))

S ≡ begin E_1; S_1 end    g_{E|S}(x) = g_{E'|S_1}(x) where E' = Add(E_1, E)

S ≡ p where proc p e; T end ∈ E

g_{E|p}(x) = g_{E|T}(x)              if E | p is not a start node
           = r_{E|p}(g_e(x), x)      if E | p is a start node

where r_{E|p}(b, x) = x                                      if b = 0
                    = g_{E|p,T}(b-1, x, r_{E|p}(b-1, x))     if b > 0

and g_{E|p,S} is defined as follows.

S ≡ x_i := e    g_{E|p,S}(b, x, r_{E'|p}(b, x)) = set_{x_i}(e, x)

S ≡ S_1; S_2

g_{E|p,S}(b, x, r_{E'|p}(b, x)) = g_{E|p,S_2}(b, g_{E|p,S_1}(b, x, r_{E'|p}(b, x)), r_{E'|p}(b, g_{E|p,S_1}(b, x, r_{E'|p}(b, x))))

S ≡ loop e; S_1 end    g_{E|p,S}(b, x, r_{E'|p}(b, x)) = g_{E|S_1}^{g_e(x)}(x)

S ≡ begin x_i; S_1 end

g_{E|p,S}(b, x, r_{E'|p}(b, x)) = drop_{x_i}(g_{E|p,S_1}(b, add_{x_i}(a, x), r_{E'|p}(b, add_{x_i}(a, x))))

S ≡ begin E_1; S_1 end

g_{E|p,S}(b, x, r_{E'|p}(b, x)) = g_{E''|p,S_1}(b, x, r_{E'|p}(b, x)) where E'' = Add(E_1, E)

S ≡ q where proc q e; T end ∈ E

g_{E|p,q}(b, x, r_{E'|p}(b, x)) = r_{E'|p}(b, x)                  if q = p
                                = g_{E|p,T}(b, x, r_{E'|p}(b, x)) if q is on p's cycle, q ≠ p
                                = g_{E|q}(x)                      if q is not on p's cycle


Notice that for min(E | p) = min(E' | p), r_{E|p}(b, x) = r_{E'|p}(b, x). With the above
definitions we can now prove the following.

Lemma 42 For program unit E | S where free(min(E | S)) = X

g_{E|S}(c(s↾X)) = c(𝓔^I(E | S)(s)↾X).

Proof: Let γ = c(s↾X). This proof uses the technique given in the example proof
on page 63. It involves a number of inductions on a program unit E | S. In each
context the treatment of the non-call constructs is the same. In these contexts the
following holds for the non-call constructs.

g_{E|S}(γ) = c(𝓔^I(E | S)(s)↾X)    (4.1)

g_{E|p,S}(b, γ, r_{E|p}(b, γ)) = c(𝓔^I(E |p^b S)(s)↾X)    (4.2)

The proofs of equations 4.1 and 4.2 are similar. The proof of equation 4.2 will be
given.
S ≡ x_i := e

g_{E|p,S}(b, γ, r_{E|p}(b, γ)) = set_{x_i}(e, γ)
= c(s{I(e)(s)/x_i}↾X)
= c(assign^I(x_i, e)(s)↾X)
= c(𝓔^I(E |p^b S)(s)↾X)

S ≡ S_1; S_2

g_{E|p,S}(b, γ, r_{E|p}(b, γ))
= g_{E|p,S_2}(b, g_{E|p,S_1}(b, γ, r_{E|p}(b, γ)), r_{E|p}(b, g_{E|p,S_1}(b, γ, r_{E|p}(b, γ))))
= g_{E|p,S_2}(b, c(𝓔^I(E |p^b S_1)(s)↾X), r_{E|p}(b, c(𝓔^I(E |p^b S_1)(s)↾X)))
= c(𝓔^I(E |p^b S_2) ∘ 𝓔^I(E |p^b S_1)(s)↾X)
= c(comp^I(𝓔^I(E |p^b S_1), 𝓔^I(E |p^b S_2))(s)↾X)
= c(𝓔^I(E |p^b S)(s)↾X)

S ≡ loop e; S_1 end  First prove g_{E|S_1}^d(γ) = c(𝓔^I(E | S_1)^d(s)↾X) by induction on d.
From that result the following holds.

g_{E|p,S}(b, γ, r_{E|p}(b, γ)) = g_{E|S_1}^{g_e(γ)}(γ)
= g_{E|S_1}^{I(e)(s)}(γ)
= c(𝓔^I(E | S_1)^{I(e)(s)}(s)↾X)
= c(loop^I(e, 𝓔^I(E | S_1))(s)↾X)
= c(𝓔^I(E |p^b S)(s)↾X)

S ≡ begin x_i; S_1 end

g_{E|p,S}(b, γ, r_{E|p}(b, γ))
= drop_{x_i}(g_{E|p,S_1}(b, add_{x_i}(a, γ), r_{E|p}(b, add_{x_i}(a, γ))))
= drop_{x_i}(g_{E|p,S_1}(b, c(s{I(a)/x_i}↾X ∪ {x_i}), r_{E|p}(b, c(s{I(a)/x_i}↾X ∪ {x_i}))))
= drop_{x_i}(c(𝓔^I(E |p^b S_1)(s{I(a)/x_i})↾X ∪ {x_i}))
= c((𝓔^I(E |p^b S_1)(s{I(a)/x_i})){I(x_i)(s)/x_i}↾X)
= c(block^I(x_i, 𝓔^I(E |p^b S_1))(s)↾X)
= c(𝓔^I(E |p^b S)(s)↾X)

S ≡ begin E_1; S_1 end

g_{E|p,S}(b, γ, r_{E|p}(b, γ)) = g_{E'|p,S_1}(b, γ, r_{E|p}(b, γ)) where E' = Add(E_1, E)
= c(𝓔^I(E' |p^b S_1)(s)↾X)
= c(𝓔^I(E |p^b S)(s)↾X)

The above proof sections will be referred to multiple times in the proof of this lemma.
The proof of equation 4.1 shows that the lemma holds for the non-call constructs.
All that is left to show is that the lemma holds for the call E | p. Prove this by
induction on the height h of the E | p di-graph. Let proc p e; T end ∈ E.

Say h = 0 and the call E | p is not the start node of a cycle. If it can be shown
that

g_{E|T}(γ) = c(𝓔^I(E | T)(s)↾X)    (4.3)









then the following holds.

g_{E|p}(γ) = g_{E|T}(γ)
= c(𝓔^I(E | T)(s)↾X)
= c(𝓔^I(E | p)(s)↾X)

Equation 4.3 has been proved for the non-call constructs. Since h = 0 there are no
calls in procedure body T.

Say h = 0 and the call E | p is the start node of one or more cycles. If it can be
proved that

r_{E|p}(b, γ) = c(𝓔^I(E | p^b)(s)↾X)    (4.4)

then the following holds.

g_{E|p}(γ) = r_{E|p}(g_e(γ), γ)
= r_{E|p}(I(e)(s), γ)
= c(𝓔^I(E | p^{I(e)(s)})(s)↾X)
= c(𝓔^I(E | p)(s)↾X)

Prove equation 4.4 by induction on b. For b = 0

r_{E|p}(b, γ) = γ
= c(id(s)↾X)
= c(𝓔^I(E | p^b)(s)↾X).

Suppose that b > 0. If it can be shown that for T distinguished

g_{E|p,T}(b-1, γ, r_{E|p}(b-1, γ)) = c(𝓔^I(E |p^{b-1} T)(s)↾X)    (4.5)

then the following holds.

r_{E|p}(b, γ) = g_{E|p,T}(b-1, γ, r_{E|p}(b-1, γ))
= c(𝓔^I(E |p^{b-1} T)(s)↾X)
= c(𝓔^I(E |p^{b-1} C(T))(s)↾X)
= c(𝓔^I(E | p^b)(s)↾X)

The proof of equation 4.2 shows that equation 4.5 holds for the non-call constructs.
Since h = 0, calls in procedure body T can only be to the first node on a cycle.









Consider the cycles on node E | p. Label the calls participating in these cycles
according to how many calls there are between the edge representing this call and
the edge entering node E | p. That is, the nth call directly to procedure p is labeled
E_{n,0} | q_{n,0}. Label the nth call to a procedure which is m calls away from a recursive
call to procedure E | p, E_{n,m} | q_{n,m}. Equation 4.5 holds for each call E_{n,m} | q_{n,m} if it
can be shown that, for 0 ≤ i ≤ m, the following holds.

g_{E_{n,i}|p,q_{n,i}}(b-1, γ, r_{E|p}(b-1, γ)) = c(𝓔^I(E_{n,i} |p^{b-1} q_{n,i})(s)↾X)    (4.6)

This will be proven for any n by induction on i.

For i = 0, if it can be shown that

r_{E|p}(b-1, γ) = c(𝓔^I(E | p^{b-1})(s)↾X)    (4.7)

then the following holds.

g_{E_{n,0}|p,q_{n,0}}(b-1, γ, r_{E|p}(b-1, γ)) = g_{E_{n,0}|p,p}(b-1, γ, r_{E|p}(b-1, γ))
= r_{E|p}(b-1, γ)
= c(𝓔^I(E | p^{b-1})(s)↾X)
= c(𝓔^I(E_{n,0} | q_{n,0}^{b-1})(s)↾X)
= c(𝓔^I(E_{n,0} |p^{b-1} q_{n,0})(s)↾X)

Equation 4.7 holds by induction on b.

Let 0 < i ≤ m and proc q_{n,i} e_{n,i}; R_{n,i} end ∈ E_{n,i}. If it can be shown that for R_{n,i}
distinguished

g_{E_{n,i}|p,R_{n,i}}(b-1, γ, r_{E|p}(b-1, γ)) = c(𝓔^I(E_{n,i} |p^{b-1} R_{n,i})(s)↾X)    (4.8)

then the following holds.

g_{E_{n,i}|p,q_{n,i}}(b-1, γ, r_{E|p}(b-1, γ)) = g_{E_{n,i}|p,R_{n,i}}(b-1, γ, r_{E|p}(b-1, γ))
= c(𝓔^I(E_{n,i} |p^{b-1} R_{n,i})(s)↾X)
= c(𝓔^I(E_{n,i} |p^{b-1} C(R_{n,i}))(s)↾X)
= c(𝓔^I(E_{n,i} |p^{b-1} q_{n,i})(s)↾X)










Equation 4.8 holds for the non-call constructs. Since the cycles are simple and h = 0
the only calls in R_{n,i} are calls of the form E_{n',i-1} | q_{n',i-1}. Equation 4.8 is proved for
these calls by induction on i.

Let h > 0. The proof that the lemma holds for the call E | p where h > 0 is
similar to the proof that the lemma holds for the call E | p where h = 0, except that
an additional case is needed to inductively prove some of the equations. Only these
additional cases will be discussed.

Say E | p is not a start node. In proving equation 4.3 the procedure body T may
contain a procedure call. The height of the di-graph for this call will be less than h.
Thus, equation 4.3 holds for this call by induction on h.

Say E | p is the start node of one or more cycles. In proving equation 4.5 the
procedure body T may contain one or more calls to a procedure whose node is not
on the cycle. If it can be shown that

g_{E|q}(γ) = c(𝓔^I(E | q)(s)↾X)    (4.9)

then the following holds.

g_{E|p,q}(b-1, γ, r_{E|p}(b-1, γ)) = g_{E|q}(γ)
= c(𝓔^I(E | q)(s)↾X)
= c(𝓔^I(E |p^{b-1} q)(s)↾X)

The height of the E | q di-graph will be less than h. Thus, equation 4.9 holds by
induction on h.

In proving equation 4.5 for a call to the next node on a cycle, equation 4.8 must
be proved. Here R_{n,i} may contain one or more calls to a procedure whose node is not
on the cycle. This case is handled as it was in the previous paragraph. □

The proof of Lemma 42 would be simpler if the function being recursively defined
corresponded to a procedure body. Instead the function corresponds to those procedures
which make up a cycle. In this chapter, the sequence of procedures making up
a cycle can be reduced to a single procedure by replacing each procedure call to an
inner node with its procedure body. Such a translation makes all recursive calls direct.
This translation is always possible for program segments in LC whose call di-graphs
contain only simple cycles. While this translation is possible for a PR language with
parameterless procedures, or a PR language with variable parameters, it is not possible
for a language with procedure parameters. Therefore such a translation will not
be utilized here.
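The inlining translation just described can be sketched concretely. The list encoding of procedure bodies and the helper `inline_cycle` below are illustrative inventions, not part of LC; the sketch assumes a simple cycle whose inner nodes call only the next node on the cycle, so the expansion terminates.

```python
# Sketch: reduce a simple cycle p -> q -> p of parameterless procedures
# to a single procedure with only direct recursion, by replacing each
# call to an inner node with that node's body.  The encoding
# (lists of ("call", name) / ("stmt", text)) is illustrative only.

def inline_cycle(procs, start, cycle):
    """Inline the bodies of the inner cycle nodes into `start`, so that
    every remaining call in the result is a direct call to `start`.
    Assumes the cycle is simple, as in the text, so expansion stops."""
    inner = [n for n in cycle if n != start]

    def expand(body):
        out = []
        for kind, val in body:
            if kind == "call" and val in inner:
                out.extend(expand(procs[val]))  # splice in the inner body
            else:
                out.append((kind, val))
        return out

    return expand(procs[start])

procs = {
    "p": [("stmt", "x := x - 1"), ("call", "q")],
    "q": [("call", "p"), ("stmt", "x := x + 1")],
}
body = inline_cycle(procs, "p", ["p", "q"])
print(body)  # every remaining call is a direct call to p
```

Running the sketch on the two-procedure cycle above yields a single body whose only call is the direct recursive call to p, mirroring the translation the text declines to use for procedure parameters.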

Theorem 43 The class of functions computed by LC programs is the class of PR
functions.

Proof: Notice that for an LC program π = (S, z), S ≡ S_d where S_d is distinguished
and free(S) = X,

    c(ℰ^I(π)(s) ↾ X) = g_{∅|S_d}(c(s ↾ X)).

It can be seen from the definition that the function g_{E|S} is PR. Therefore each LC
program π computes a PR function. LC is an extension of LB and there is a program
in LB which computes every PR function. Thus, the class of PR functions and the
class of functions computed by LC programs are equivalent. □



4.5 Verification of LC programs

The verification systems H_PR and H_B consisted of Hoare statements on program
segments. It is straightforward to modify these to Hoare statements on program
units and bounded recursive units. In addition, H_C contains rules to verify Hoare
statements about parameterless procedures. The verification system H_C for the new
language LC follows.









Program Rule

    {P} ∅ | S_d {Q}
    -----------------
    {P} π {Q}
for π = (S, z), S ≡ S_d and S_d distinguished.

Assignment Axiom

    {P[e/x]} E | x := e {P}

Composition Rule

    {P} E | S1 {R},  {R} E | S2 {Q}
    ---------------------------------
    {P} E | S1; S2 {Q}

Invariance Rule

    {P} E | S {Q}
    -------------------------
    {P ∧ R} E | S {Q ∧ R}
for free(R) ∩ free(min(E | S)) = ∅.

Iteration Rule

    {P[y/x] ∧ 0 ≤ y < e} E | S {P[s(y)/x]}
    ----------------------------------------
    {P[0/x]} E | loop e; S end {P[e/x]}
for x ∉ var(e) ∪ free(min(E | S)) and
y ∉ var(e) ∪ free(min(E | S)) ∪ free(P).

Consequence Rule

    P1 → P,  {P} E | S {Q},  Q → Q1
    ---------------------------------
    {P1} E | S {Q1}

Variable Declaration Rule

    {P[y/x] ∧ x = a} E | S {Q[y/x]}
    --------------------------------
    {P} E | begin x; S end {Q}
for y ∉ free(P ∨ Q) ∪ free(min(E | S)).









Procedure Declaration Rule

    {P} E' | S {Q}
    --------------------------------
    {P} E | begin E1; S end {Q}
for E' = Add(E1, E).

Environment Rule

    {P} E' | S {Q}
    ----------------
    {P} E | S {Q}
for min(E | S) = min(E' | S).

Non-Recursive Procedure Call Rule

    {P} E | S {Q}
    ----------------
    {P} E | p {Q}
for E | p not a start node and proc p e; S end ∈ E.

Recursive Procedure Call Rule

    P[0/w] → Q[0/w],
    ({P[v-1/w] ∧ 0 < v ≤ e} E |_{v-1} p {Q[v-1/w] ∧ 0 < v ≤ e}
        ⊢_{H_C} {P[v/w] ∧ 0 < v ≤ e} E |_{v-1} S {Q[v/w] ∧ 0 < v ≤ e})
    ------------------------------------------------------------------
    {P[e/w]} E | p {Q[e/w]}
for proc p e; S end ∈ E and v ∉ free(min(E | p)) ∪ free(P ∨ Q).

Rules to prove Hoare triples for bounded recursive program units E |^p S.

Assignment Axiom

    {P[e/x]} E |^p x := e {P}

Composition Rule

    {P} E |^p S1 {R},  {R} E |^p S2 {Q}
    ------------------------------------
    {P} E |^p S1; S2 {Q}









Invariance Rule

    {P} E |^p S {Q}
    ---------------------------
    {P ∧ R} E |^p S {Q ∧ R}
for free(R) ∩ free(min(E |^p S)) = ∅.

Iteration Rule

    {P[y/x] ∧ 0 ≤ y < e} E |^p S {P[s(y)/x]}
    ------------------------------------------
    {P[0/x]} E |^p loop e; S end {P[e/x]}
for x ∉ var(e) ∪ free(min(E |^p S)) and
y ∉ var(e) ∪ free(min(E |^p S)) ∪ free(P).

Consequence Rule

    P1 → P,  {P} E |^p S {Q},  Q → Q1
    -----------------------------------
    {P1} E |^p S {Q1}

Variable Declaration Rule

    {P[y/x] ∧ x = a} E |^p S {Q[y/x]}
    ----------------------------------
    {P} E |^p begin x; S end {Q}
for y ∉ free(P ∨ Q) ∪ free(min(E |^p S)).

Procedure Declaration Rule

    {P} E' |^p S {Q}
    ----------------------------------
    {P} E |^p begin E1; S end {Q}
for E' = Add(E1, E).

Environment Rule

    {P} E' |^p S {Q}
    ------------------
    {P} E |^p S {Q}
for min(E |^p S) = min(E' |^p S).

Inner Procedure Call Rule

    {P} E |^p S {Q}
    -----------------
    {P} E |^p q {Q}
for q on p's cycle, q ≠ p and proc q e; S end ∈ E.










Off Cycle Procedure Call Rule

    {P} E | q {Q}
    ----------------
    {P} E |^p q {Q}
for q not on p's cycle.

Substitution Rule #1

    {P} E |^p p {Q}
    -------------------------------
    {P[z̄/ȳ]} E |^p p {Q[z̄/ȳ]}
where z̄ ∩ free(min(E |^p p)) = ∅ and ȳ ∩ free(min(E |^p p)) = ∅.

Substitution Rule #2

    {P} E |^p p {Q}
    ------------------------
    {P[ē/ȳ]} E |^p p {Q}
where ȳ ∩ (free(min(E |^p p)) ∪ free(Q)) = ∅.


Notice that the Recursive Procedure Call Rule refers to provability. A calculus
of sequents would be a more formal presentation than the natural deduction system
given here. It can be shown, however, that the system presented can be translated
to a calculus of sequents.

The Recursive Procedure Call Rule requires that a Hoare triple be proved assuming
the provability of another Hoare triple. Frequently the Hoare triple which is
assumed needs to be modified for the proof. The Invariance Rule enables the adaptation
of the assumed Hoare triple for the proof. The Invariance Rule can be made
obsolete with slight modifications to H_C [2]. In fact, in H_C the following weaker
Invariance Rule would suffice.

    {P} E |^p p {Q}
    ---------------------------
    {P ∧ R} E |^p p {Q ∧ R}
for free(R) ∩ free(min(E |^p p)) = ∅.










The general Invariance Rules are convenient, however. Without these rules, information
needed about the variables not used in a program segment must be carried
throughout the proof. This is the role assertion Pρ played in the lemma showing
the provability of a SPC Hoare triple in Chapters 2 and 3. The general Invariance
Rules are included in H_C since extra information in a proof tends to obscure what
is happening in that proof.

Call the variables not used in a procedure auxiliary variables of that procedure.
Here is a description of how the assumption resulting from an application of the
Recursive Procedure Call Rule is typically modified. Substitution Rule #1 renames one
or more auxiliary variables of the assumption. The Invariance Rule states that these
renamed variables are not changed by the procedure call. The Consequence Rule is
used to remove the auxiliary variables from the postassertion. Finally, Substitution
Rule #2 is used to replace the auxiliary variables in the preassertion by useful ones.

For π = proc p n; x := x - 1; p; x := x + 1 end; p, the proof that

    ⊢_{H_C} {z ≥ n ∧ x = z} π {x = z}

demonstrates the use of these rules. Auxiliary variable z is used to show that for
x ≥ n program π does not change the value of x. Let

    E ≡ proc p n; x := x - 1; p; x := x + 1 end.











    {z ≥ n ∧ x = z} π {x = z}
        ↑
    Procedure Declaration Rule

    {z ≥ n ∧ x = z} E | p {x = z}
        ↑
    Consequence Rule

    {z ≥ n ∧ x = z} E | p {z ≥ n ∧ x = z}
        ↑
    Recursive Procedure Call Rule
    preassertion: z ≥ w ∧ x = z
    postassertion: z ≥ w ∧ x = z

    (z ≥ 0 ∧ x = z) → (z ≥ 0 ∧ x = z),
    Assume {z ≥ v-1 ∧ x = z ∧ 0 < v ≤ n} E |_{v-1} p {z ≥ v-1 ∧ x = z ∧ 0 < v ≤ n}
    Prove  {z ≥ v ∧ x = z ∧ 0 < v ≤ n} E |_{v-1} x := x - 1; p; x := x + 1
           {z ≥ v ∧ x = z ∧ 0 < v ≤ n}
        ↑
    Assignment, Composition and Consequence Rules

    {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n} E |_{v-1} p {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n}

This final Hoare triple is the result of replacing z everywhere in the assumption
by z-1. This is done as follows.











    {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n} E |_{v-1} p {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n}
        ↑
    Consequence Rule

    {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n ∧ z-1 = z-1} E |_{v-1} p
    {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n}
        ↑
    Substitution Rule #2 with [z-1/d]
    Remove the auxiliary variable d from the preassertion.

    {d ≥ v-1 ∧ x = d ∧ 0 < v ≤ n ∧ d = z-1} E |_{v-1} p
    {z-1 ≥ v-1 ∧ x = z-1 ∧ 0 < v ≤ n}
        ↑
    Consequence Rule
    Remove the d from the postassertion.

    {d ≥ v-1 ∧ x = d ∧ 0 < v ≤ n ∧ d = z-1} E |_{v-1} p
    {d ≥ v-1 ∧ x = d ∧ 0 < v ≤ n ∧ d = z-1}
        ↑
    Invariance Rule
    Form the connection between the old and new z.

    {d ≥ v-1 ∧ x = d ∧ 0 < v ≤ n} E |_{v-1} p
    {d ≥ v-1 ∧ x = d ∧ 0 < v ≤ n}
        ↑
    Substitution Rule #1 with [d/z]
    Replace z in the assumption by the inactive variable d.

    {z ≥ v-1 ∧ x = z ∧ 0 < v ≤ n} E |_{v-1} p {z ≥ v-1 ∧ x = z ∧ 0 < v ≤ n}
    Assumption


This completes the example. Notice that the substitution rules would be sound even
if procedure call p were replaced by program segment S in these rules. The weaker
substitution rules suffice to provide a complete verification system H_C.
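The behaviour this proof establishes can be checked concretely. The sketch below is Python, not LC; the bound is passed explicitly, a recursive call with a zero bound acts as the identity, and the helper `monus` (an assumption, modelling cut-off subtraction on the naturals in a PR language) makes visible why the precondition requires x ≥ n.

```python
def monus(a, b):
    """Cut-off subtraction on the naturals, as assumed for PR languages."""
    return max(a - b, 0)

def p(bound, x):
    # Sketch of: proc p n; x := x - 1; p; x := x + 1 end
    # A recursive call with a zero bound value is the identity.
    if bound == 0:
        return x
    x = monus(x, 1)
    x = p(bound - 1, x)   # recursive call with decremented bound
    x = x + 1
    return x

# The verified triple {z >= n ∧ x = z} π {x = z}: x is unchanged.
print(p(5, 7))   # x >= n: returns 7
print(p(5, 3))   # x < n: cut-off subtraction loses information, returns 5
```

With x below the bound, the chain of decrements bottoms out at 0 while the increments still run to completion, which is exactly why the auxiliary variable z only tracks x under the hypothesis x ≥ n.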










A restriction could be placed on the system that programs cannot make recursive
calls with a zero bound value. This would require conditional statements in the
program, rather than the bound, to control the depth of recursion. An example of a
recursive procedure controlled by conditional statements, and a recursive procedure
controlled by the bound, are given in figure 4.5. Recursive routines are ordinarily
controlled by conditional statements. The role of the bound should be an assertion of
the maximum nesting depth of the routine. Letting the bound determine the depth
of the recursion creates a procedure more in the flavor of iteration than recursion.
Utilizing the bound as a control mechanism could be considered a misuse of the
language. If a guarantee is made that recursive calls will not be made with a zero
bound, the Recursive Procedure Call Rule simplifies to the following.

    P → Q,  ({P} E |^p p {Q} ⊢_{H_C} {P} E |^p S {Q})
    ---------------------------------------------------
    {P} E | p {Q}
for proc p e; S end ∈ E.

This guarantee could be verified by a run time check on the value of the bound each
time a recursive call is executed. This restriction cannot be guaranteed syntactically,
however. Therefore, recursive calls with a zero bound will be allowed and the more
complicated Recursive Procedure Call Rule will be used.


4.6 Soundness of H_C

In the previous two systems a Hoare statement {P}S{Q} is valid, in an interpretation
I, if the result of applying a program function f_S to any state in P
yields a state in Q. That is, ⊨_I {P}S{Q} if f_S(St_I(P)) ⊆ St_I(Q), or in the
shorter form f_S(P) ⊆ Q. In the expanded system being presented, Hoare statements
are of the form {P} E | S {Q} and {P} E |^p S {Q}. Write ⊨_I {P} E | S {Q} if
ℰ^I(E | S)(St_I(P)) ⊆ St_I(Q), or in the shorter form, ℰ^I(E | S)(P) ⊆ Q. Similarly
















Recursive procedure whose depth is determined by a conditional statement.

    Π1 = proc add n'
             if n > 0 then
                 n := n - 1
                 add
                 n := n + 1
                 z := z + 1
             end
         end
         z := m
         add

    ⊨_I {n' ≥ n} Π1 {z = n + m}

Recursive procedure whose depth is determined by the bound.

    Π2 = proc add n'
             add
             z := z + 1
         end
         z := m
         add

    ⊨_I {n' = n} Π2 {z = n + m}

Figure 4.5. Example of types of recursive procedure control
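Assuming the semantics used throughout (a recursive call with a zero bound acts as the identity), the two control styles of Figure 4.5 can be sketched in Python; the function names and the tuple-passing of the state are illustrative only, not LC syntax.

```python
# Depth controlled by a conditional on n; the bound n_prime need only
# be an upper bound on the actual recursion depth.
def add_cond(n_prime, n, z):
    if n_prime == 0:              # zero-bound call: no effect
        return n, z
    if n > 0:
        n2, z2 = add_cond(n_prime - 1, n - 1, z)
        n, z = n2 + 1, z2 + 1
    return n, z

# Depth controlled by the bound itself; n_prime must equal n exactly
# for the postcondition z = n + m to hold.
def add_bound(n_prime, z):
    if n_prime == 0:
        return z
    return add_bound(n_prime - 1, z) + 1

m, n = 4, 3
_, z1 = add_cond(10, n, m)   # any n' >= n yields z = n + m
z2 = add_bound(n, m)         # here n' must equal n
print(z1, z2)                # both 7
```

The first version matches the ordinary use of recursion the text recommends, with the bound serving only as an asserted maximum nesting depth; the second uses the bound as the control mechanism, the style the text calls closer to iteration.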










write ⊨_I {P} E |^p S {Q} if ℰ^I(E |^p S)(P) ⊆ Q. For program π, write ⊨_I {P} π {Q}
if ℰ^I(π)(P) ⊆ Q.
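Over a finite set of states this validity notion can be checked by brute force: the program function must map every P-state to a Q-state. The dict encoding of states and the name `valid` below are illustrative assumptions, not the dissertation's formalism.

```python
# Brute-force sketch of Hoare-triple validity over a finite state space:
# |= {P} S {Q}  iff  f_S maps every state satisfying P to one satisfying Q.
def valid(P, f_S, Q, states):
    """P, Q are predicates on states; f_S is the program function."""
    return all(Q(f_S(s)) for s in states if P(s))

# Example: S is x := x + 1 over states with x in 0..9.
states = [{"x": x} for x in range(10)]
f_S = lambda s: {"x": s["x"] + 1}
P = lambda s: s["x"] >= 2
Q = lambda s: s["x"] >= 3
print(valid(P, f_S, Q, states))  # True
```

This is only a finite sanity check; the soundness and completeness results below concern validity over all interpretations of the theory, which no enumeration can establish.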

Recall that the soundness of the Iteration Rule implies that the theory supporting
the Hoare axioms and rules proves induction on arbitrary formulas. A similar
situation exists for the Recursive Procedure Call Rule. The Recursive Procedure Call
Rule where the pre- and postconditions are formulas from Σ_n will be referred to as
the Σ_n-Recursive Procedure Call Rule.

Lemma 44 For a complete theory T ⊇ PRA and a Hoare system H which includes the
Assignment Axiom, Consequence, Composition, Procedure Declaration and Procedure
Call Rules:

1. Σ_n-Recursive Procedure Call Rule is sound ⟹ T ⊢ Σ_n-induction
2. T ⊢ Σ_{n+1}-induction ⟹ Σ_n-Recursive Procedure Call Rule is sound



Proof:

1. The proof of this implication uses the same technique as the proof of this
implication in Lemma 14. Recall that P(0) and ∀x(P(x) → P(x+1)) are assumed,
and that P(a) is to be proven. The Hoare statement to be used to prove P(a) is that
for π = proc q a; q; i := i + 1 end; i := 0; q, and i ∉ free(P), ⊢_H {P[0/x]} π {P[a/x]}.
Let E ≡ proc q a; q; i := i + 1 end.










    {P[0/x]} π {P[a/x]}
        ↑
    Procedure Declaration Rule

    {P[0/x]} E | i := 0; q {P[a/x]}
        ↑
    Assignment Axiom, Composition and Consequence Rules

    {(P ∧ i = x)[0/x]} E | q {(P ∧ i = x)[a/x]}
        ↑
    Recursive Procedure Call Rule
    precondition: (P ∧ i = x)[0/x]
    postcondition: (P ∧ i = x)[w/x]

    ((P ∧ i = x)[0/x]) → ((P ∧ i = x)[0/x]),
    Assume {(P ∧ i = x)[0/x] ∧ 0 < v ≤ a} E |_{v-1} q {(P ∧ i = x)[v-1/x] ∧ 0 < v ≤ a}
    Prove  {(P ∧ i = x)[0/x] ∧ 0 < v ≤ a} E |_{v-1} q; i := i + 1
           {(P ∧ i = x)[v/x] ∧ 0 < v ≤ a}
        ↑
    Assignment Axiom, Composition and Consequence Rules

    {(P ∧ i = x)[0/x] ∧ 0 < v ≤ a} E |_{v-1} q {(P ∧ i+1 = x)[v/x] ∧ 0 < v ≤ a}
        ↑
    Consequence Rule (implication a)

    {(P ∧ i = x)[0/x] ∧ 0 < v ≤ a} E |_{v-1} q {(P ∧ i = x)[v-1/x] ∧ 0 < v ≤ a}
    Assumption

The proof of implication a is as follows.

    (P ∧ i = x)[v-1/x] ∧ 0 < v ≤ a
        = P[v-1/x] ∧ i = v-1 ∧ 0 < v ≤ a
        ⟹ P[v/x] ∧ i+1 = v ∧ 0 < v ≤ a
        = (P ∧ i+1 = x)[v/x] ∧ 0 < v ≤ a
2. Assume T ⊢ Σ_{n+1}-induction. The proof of the soundness of the Σ_n-Recursive
Procedure Call Rule, using Σ_{n+1}-induction, is given in the proof of the Soundness
Theorem.
□










A Hoare statement is proven correct as follows. First consider the call di-graph
of each call in the program unit. Reduce the cycles in these call di-graphs so that
the graphs become trees. Prove a Hoare statement for the leaf nodes of these trees.
Using these Hoare statements, prove Hoare statements for those nodes with calls
to the leaf nodes. Continue this way, working up the tree until the original Hoare
statement has been proven. Proving Hoare statements in this way guarantees that
proofs in the antecedent of the Recursive Procedure Call Rule do not require additional
applications of the Recursive Procedure Call Rule. This is a more restricted
definition of proof than the one given by Olderog. In Olderog's proofs the Hoare
axioms and rules may be applied in any order. Proofs in the PR system presented
here are layered according to the call structure of the program. This is a stronger,
and more structured, notion of proof.
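The layered proof order just described, leaves first and then nodes whose callees are already proven, is simply a post-order walk of the reduced call tree. A minimal sketch follows; the dict encoding of the tree and the name `proof_order` are illustrative assumptions.

```python
# Sketch: the layered proof order for a reduced (cycle-free) call tree.
# A node may be proven only after all nodes it calls have been proven,
# which is exactly a post-order traversal.
def proof_order(tree, root):
    """tree maps each node to the list of nodes it calls."""
    order = []
    def walk(node):
        for callee in tree.get(node, []):
            walk(callee)
        order.append(node)    # node comes after everything it calls
    walk(root)
    return order

tree = {"main": ["p", "q"], "p": ["r"]}
print(proof_order(tree, "main"))  # ['r', 'p', 'q', 'main']
```

In this order every application of the Recursive Procedure Call Rule can discharge its antecedent without a further application of that rule, which is the restriction the soundness proof below relies on.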

Theorem 45 (Soundness) For a Hoare triple {P} E | S {Q},

    ⊢_{H_C} {P} E | S {Q}  ⟹  ⊨_PA {P} E | S {Q}.

Proof: Prove this by induction on the proof system H_C. The proof of the soundness
of the rules translated from H_B can be straightforwardly modified for this section.
The proof of the soundness of the Procedure Declaration, Environment and the procedure
call rules are straightforward from the semantics of LC. The proof of the
soundness of the substitution rules follows from Lemma 41. The soundness of the
remaining rules will be shown. Let I be an interpretation of PA.

Recursive Procedure Call Rule:

Assume ⊢_{H_C} {P[e/w]} E | p {Q[e/w]} for proc p e; S end ∈ E and E | p the
start node of one or more cycles. Then in that proof, ⊢_PRA P[0/w] → Q[0/w]
and, for v ∉ free(min(E | p)) ∪ free(P ∨ Q),

    {P[v-1/w] ∧ 0 < v ≤ e} E |_{v-1} p {Q[v-1/w] ∧ 0 < v ≤ e}
        ⊢_{H_C} {P[v/w] ∧ 0 < v ≤ e} E |_{v-1} S {Q[v/w] ∧ 0 < v ≤ e}.

Due to our restricted notion of provability, the proof of the above did not require
an additional application of the Recursive Procedure Call Rule. Therefore, by
induction on the soundness of the proof system, ⊨_I {P[v-1/w] ∧ 0 < v ≤ e}
E |_{v-1} p {Q[v-1/w] ∧ 0 < v ≤ e} implies ⊨_I {P[v/w] ∧ 0 < v ≤ e} E |_{v-1}
S {Q[v/w] ∧ 0 < v ≤ e}.

Let P be a Σ_n formula. Prove, by Σ_{n+1}-induction on b, that for b > 0

    ℰ^I(E | p v)(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b).    (4.10)

Let b = 1. Since first order logic is sound, ⊨_I P[0/w] → Q[0/w].

    ⊨_I P[0/w] → Q[0/w]
    ⟹ id(P[0/w]) ⊆ Q[0/w]
    ⟹ ℰ^I(E | p 0)(P[0/w]) ⊆ Q[0/w]
    ⟹ ℰ^I(E |_0 p)(P[0/w]) ⊆ Q[0/w]
    ⟹ ℰ^I(E |_{v-1} p)(P[v-1/w] ∧ 0 < v ≤ b) ⊆ (Q[v-1/w] ∧ 0 < v ≤ b)
    ⟹ ℰ^I(E |_{v-1} S)(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b)
    ⟹ ℰ^I(E |_{v-1} C(S))(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b)
    ⟹ ℰ^I(E | p v)(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b)

Assume the statement is true for b.

    ℰ^I(E | p v)(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b)
    ⟹ ℰ^I(E |_v p)(P[v/w] ∧ 0 < v ≤ b) ⊆ (Q[v/w] ∧ 0 < v ≤ b)
    ⟹ ℰ^I(E |_{v-1} p)(P[v-1/w] ∧ 0 < v ≤ b+1) ⊆
        (Q[v-1/w] ∧ 0 < v ≤ b+1)
    ⟹ ℰ^I(E |_{v-1} S)(P[v/w] ∧ 0 < v ≤ b+1) ⊆
        (Q[v/w] ∧ 0 < v ≤ b+1)
    ⟹ ℰ^I(E |_{v-1} C(S))(P[v/w] ∧ 0 < v ≤ b+1) ⊆
        (Q[v/w] ∧ 0 < v ≤ b+1)
    ⟹ ℰ^I(E | p v)(P[v/w] ∧ 0 < v ≤ b+1) ⊆ (Q[v/w] ∧ 0 < v ≤ b+1)

Prove ℰ^I(E | p)(P[e/w]) ⊆ Q[e/w]. This holds if ℰ^I(E | p I(e)(s))(P[e/w]) ⊆
Q[e/w]. For I(e)(s) = 0 the statement holds because ⊨_I P[0/w] → Q[0/w].
Say I(e)(s) > 0. Statement 4.10 where b = I(e)(s) implies ℰ^I(E | p
I(e)(s))(P[e/w]) ⊆ Q[e/w].

Invariance Rule:

The soundness of the Invariance Rule for a bounded program unit is shown.
The proof of the soundness of the Invariance Rule for a program unit is similar.
Assume free(R) ∩ free(min(E |^p S)) = ∅ and ⊢_{H_C} {P ∧ R} E |^p S {Q ∧ R}. Then
in that proof, ⊢_{H_C} {P} E |^p S {Q}. By the inductive hypothesis ℰ^I(E |^p
S)(P) ⊆ Q. Let s ∈ St_I(P ∧ R). Since the program function ℰ^I(E |^p S)
is stable, ℰ^I(E |^p S)(s) ∈ St_I(Q ∧ R). Therefore ℰ^I(E |^p S)(P ∧ R) ⊆
(Q ∧ R). □




4.7 Completeness of H_C

The completeness of H_C is proven similarly to how it was proven in Chapter 2.

Lemma 46 For I ⊨ PRA and a Hoare formula {P} E | S {Q} where free(min(E | S)) ⊆
free(P) = free(Q),

    ⊨_I {P} E | S {Q}  ⟺  I ⊨ ∀x(P⁺(x) → Q⁺(g_{E|S}(x))).

Proof: This proof uses the same technique as was used in Lemma 18 of Chapter 2.
□










Theorem 47 (Strongest Postcondition Theorem) Given program unit E | S and assertion
P with free(min(E | S)) ⊆ free(P) = X = {x_1, ..., x_k}, the SPC of E | S, P
is

    Q ≡ ∃ȳ_k(∀i ≤ k(x_i = (g_{E|S}ρ(y))_{y_i}) ∧ Pρ)

where ρ = [ȳ_k/x̄_k], and y = c((y_1 y_1) ... (y_k y_k)).

That is, the following hold:

1. ⊨_PRA {P} E | S {Q}
2. ⊨_PRA {P} E | S {R} ⟹ ⊢_PRA Q → R

Proof: This proof uses the same technique as was used in the SPC Theorem of
Chapter 2. □

In Chapters 2 and 3, showing the completeness of the verification system required
showing the provability of a SPC Hoare triple for program segment S and assertion
P. That is, it was proven that for free(min(S)) ⊆ free(P) = X = {x_1, ..., x_k},
Y = {y_1, ..., y_k}, X ∩ Y = ∅, ρ = [ȳ_k/x̄_k] and y = c(y_1 ... y_k),

    ⊢ {∀i ≤ k(x_i = y_i) ∧ P} S {∀i ≤ k(x_i = (g_Sρ(y))_{y_i}) ∧ Pρ}.

Notice that ∀i ≤ k(x_i = y_i) ∧ P ⟹ ∀i ≤ k(x_i = y_i) ∧ Pρ and the free variables of
Pρ are disjoint from the free variables of S. Therefore, now that the Invariance Rule
is included in H_C, a simplified version of the SPC Hoare triple suffices to show the
verification system is complete. Rather than showing the provability of a SPC Hoare
triple, the provability of a most general formula, MGF, will be shown.

Lemma 48 (MGF) For a program unit E | S where free(min(E | S)) = X = {x_1, ..., x_k},
Y = {y_1, ..., y_k}, X ∩ Y = ∅, ρ = [ȳ_k/x̄_k] and y = c(y_1 ... y_k),

    ⊢_{H_C} {∀i ≤ k(x_i = y_i)} E | S {∀i ≤ k(x_i = (g_{E|S}ρ(y))_{y_i})}










Proof: Assume that i goes from 1 to k. The proof uses the technique given in the
example proof on page 63. Induction on a program unit E | S is used a number of
times in this proof. In each context the treatment of the non-call constructs is the
same. In these contexts the following holds for the non-call constructs.

    ⊢_{H_C} {x_i = y_i} E | S {x_i = (g_{E|S}ρ(y))_{y_i}}    (4.11)

    ⊢_{H_C} {x_i = y_i} E |^p S {x_i = (g_{E|^p S}ρ(b, y, r_{E'|p}ρ(b, y)))_{y_i}}    (4.12)
    for a cycle entered via the call E' | p

For S an assignment, composition, iteration or a variable declaration statement,
proofs of the above use the same technique as was used in proving Lemma 28 in
Chapter 2. Equation 4.11 will be proved for S a procedure declaration statement.
The proof of equation 4.12 for a procedure declaration statement is similar.

S = begin E1; S1 end and E' = Add(E1, E).

    {x_i = y_i} E | begin E1; S1 end {x_i = (g_{E|S}ρ(y))_{y_i}}
        ↑
    Procedure Declaration Rule

    {x_i = y_i} E' | S1 {x_i = (g_{E|S}ρ(y))_{y_i}}
        ↑
    Consequence Rule

    {x_i = y_i} E' | S1 {x_i = (g_{E'|S1}ρ(y))_{y_i}}
    Inductive Hypothesis

The proof of equation 4.11 shows that the lemma holds for the non-call constructs.
The provability of a MGF for a procedure call E | p is left to show. Prove this
by induction on the height h of the E | p di-graph. Notice that this induction
is not occurring within the proof system. A call di-graph has a fixed height. Let
proc p e; T end ∈ E.










Assume h = 0 and the call E | p is not the start node of a cycle.

    {x_i = y_i} E | p {x_i = (g_{E|p}ρ(y))_{y_i}}
        ↑
    Non-Recursive Procedure Call Rule

    {x_i = y_i} E | T {x_i = (g_{E|p}ρ(y))_{y_i}}
        ↑
    Consequence Rule

    {x_i = y_i} E | T {x_i = (g_{E|T}ρ(y))_{y_i}}

The provability of the above triple has been shown for the non-call constructs. Since
h = 0 there are no calls in procedure body T.
Suppose h = 0 and the call E | p is the start node of one or more cycles.

    {x_i = y_i} E | p {x_i = (g_{E|p}ρ(y))_{y_i}}
        ↑
    Consequence Rule
    Let d be a fresh variable.

    {x_i = y_i ∧ d = e} E | p {x_i = (r_{E|p}ρ(d, y))_{y_i} ∧ d = e ∧ 0 ≤ d ≤ e}
        ↑
    Recursive Procedure Call Rule
    preassertion: x_i = y_i ∧ d = w
    postassertion: x_i = (r_{E|p}ρ(d, y))_{y_i} ∧ d = w ∧ 0 ≤ d ≤ e

    (x_i = y_i ∧ d = 0) → (x_i = (r_{E|p}ρ(d, y))_{y_i} ∧ d = 0 ∧ 0 ≤ d ≤ e),
    Assume {x_i = y_i ∧ d = v-1 ∧ 0 < v ≤ e} E |_{v-1} p
           {x_i = (r_{E|p}ρ(d, y))_{y_i} ∧ d = v-1 ∧ 0 < v ≤ e}
    Prove  {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E |_{v-1} T
           {x_i = (r_{E|p}ρ(d, y))_{y_i} ∧ d = v ∧ 0 < v ≤ e}
        ↑
    Consequence Rule
    Let y' be the result of extending code y to include the element (d' v),
    ρ' = [ȳ_k/x̄_k, d'/d], and let g_{E|^p T} be extended to operate on the state
    code y' so that it leaves d' unchanged. Let g = g_{E|^p T} and r = r_{E|p}.

    {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E |_{v-1} T {x_i = (gρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (gρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e}

This Hoare triple holds for the non-call constructs as follows.

    {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E |_{v-1} T {x_i = (gρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (gρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e}
        ↑
    Invariance Rule

    {x_i = y_i ∧ d = v} E |_{v-1} T {x_i = (gρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (gρ'(v-1, y', rρ'(v-1, y')))_{d'}}
    MGF

Since h = 0, calls in procedure body T can only be to the first node on a cycle.
Consider the cycles on node E | p. Label the calls participating in these cycles as
they were labeled in the example proof. That is, label them according to how many
calls there are between the edge representing this call and the edge entering node
E | p. Let g_j = g_{E_{n,j}|^p q_{n,j}}. The triple is proven for each call E_{n,m} |^p q_{n,m} if it can be
shown that, for 0 ≤ j ≤ m,

    ⊢_{H_C} {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E_{n,j} |_{v-1} q_{n,j}
        {x_i = (g_jρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
         d = (g_jρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e}.    (4.13)

This is proven for any n by induction on j. Notice that this induction is not occurring
within the proof system. The length of each of a call's cycles is fixed.
Let j = 0.

    {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E_{n,0} |_{v-1} q_{n,0}
        {x_i = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
         d = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e}
        ↑
    Environment Rule

    {x_i = y_i ∧ d = v ∧ 0 < v ≤ e} E |_{v-1} p
        {x_i = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
         d = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e}
        ↑
    Consequence Rule (implication a)

    {x_i = y_i ∧ d-1 = v-1 ∧ 0 < v ≤ e} E |_{v-1} p
        {x_i = (rρ(d-1, y))_{y_i} ∧ d-1 = v-1 ∧ 0 ≤ d-1 ≤ e ∧ 0 < v ≤ e}

This Hoare triple is the assumption with d replaced by d-1. This translation is
proved using the Substitution, Invariance and Consequence Rules.

The proof of implication a is as follows.

    x_i = (rρ(d-1, y))_{y_i} ∧ d-1 = v-1 ∧ 0 ≤ d-1 ≤ e ∧ 0 < v ≤ e
    ⟹ x_i = (rρ(v-1, y))_{y_i} ∧ d = v ∧ 0 < v ≤ e
    ⟹ x_i = (g_0ρ(v-1, y, rρ(v-1, y)))_{y_i} ∧ d = v ∧ 0 < v ≤ e
    ⟹ x_i = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (g_0ρ'(v-1, y', rρ'(v-1, y')))_{d'} ∧ 0 < v ≤ e










Let 0 < j ≤ m, proc q_{n,j} e_{n,j}; R_{n,j} end ∈ E_{n,j} and g_{R_j} = g_{E_{n,j}|^p R_{n,j}}.

    {x_i = y_i ∧ d = v} E_{n,j} |_{v-1} q_{n,j}
        {x_i = (g_jρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧ d = (g_jρ'(v-1, y', rρ'(v-1, y')))_{d'}}
        ↑
    Inner Procedure Call Rule

    {x_i = y_i ∧ d = v} E_{n,j} |_{v-1} R_{n,j}
        {x_i = (g_jρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧ d = (g_jρ'(v-1, y', rρ'(v-1, y')))_{d'}}
        ↑
    Consequence Rule

    {x_i = y_i ∧ d = v} E_{n,j} |_{v-1} R_{n,j}
        {x_i = (g_{R_j}ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧ d = (g_{R_j}ρ'(v-1, y', rρ'(v-1, y')))_{d'}}

This Hoare triple is proven for the non-call constructs. The calls in R_{n,j} are of
the form E_{n,j-1} |_{v-1} q_{n,j-1}. The Hoare triple holds for these calls by the inductive
hypothesis on j.
Let h > 0. The proof that the lemma holds for the call E | p where h > 0 is
similar to the proof that the lemma holds for the call E | p where h = 0, except that
an additional case is needed to inductively prove some of the Hoare triples. Only
these additional cases will be discussed.

Suppose E | p is not a start node. The procedure body T may contain a procedure
call. The height of the di-graph for this call will be less than h. Thus, the lemma
holds for this call by induction on h.

Say E | p is the start node of one or more cycles. The procedure body T may
contain one or more calls to a procedure whose node is not on the cycle. The lemma
is proven for such a call as follows.











    {x_i = y_i} E |_{v-1} q {x_i = (g_{E|^p q}ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (g_{E|^p q}ρ'(v-1, y', rρ'(v-1, y')))_{d'}}
        ↑
    Off Cycle Procedure Call Rule

    {x_i = y_i} E | q {x_i = (g_{E|^p q}ρ'(v-1, y', rρ'(v-1, y')))_{y_i} ∧
        d = (g_{E|^p q}ρ'(v-1, y', rρ'(v-1, y')))_{d'}}
        ↑
    Consequence Rule

    {x_i = y_i} E | q {x_i = (g_{E|q}ρ'(y'))_{y_i} ∧ d = (g_{E|q}ρ'(y'))_{d'}}

The height of the E | q di-graph will be less than h. Thus, the above triple is proved
by induction on h.

In proving the lemma for a call to the next node on a cycle, the procedure body
R_{n,j} may contain one or more calls to a procedure whose node is not on the cycle.
This case is handled as it was in the previous paragraph. □

Theorem 49 (Completeness) For a Hoare triple {P} E | S {Q},

    ⊨_PRA {P} E | S {Q}  ⟹  ⊢_{H_C} {P} E | S {Q}.

Proof: Assume ⊨_PRA {P} E | S {Q}. Without loss of generality, also assume
free(min(E | S)) ⊆ free(P) = X = {x_1, ..., x_k}, Y = {y_1, ..., y_k}, X ∩ Y = ∅,
ρ = [ȳ_k/x̄_k] and y = c(y_1 ... y_k).

    {P} E | S {Q}
        ↑
    Consequence Rule (implications a and b)

    {∀i ≤ k(x_i = y_i) ∧ Pρ} E | S {∀i ≤ k(x_i = (g_{E|S}ρ(y))_{y_i}) ∧ Pρ}
        ↑
    Invariance Rule

    {∀i ≤ k(x_i = y_i)} E | S {∀i ≤ k(x_i = (g_{E|S}ρ(y))_{y_i})}
    MGF

Define P* from P by P*(x̄_k, ȳ_k) ≡ ∀i ≤ k(x_i = y_i) ∧ Pρ. Implication a holds since
⊢_PRA ∀x̄_k(P(x̄_k) → P*(x̄_k, x̄_k)).

The formula ∀i ≤ k(x_i = (g_{E|S}ρ(y))_{y_i}) ∧ Pρ implies the strongest postcondition
of E | S and P. Thus implication b holds by part 2 of the SPC Theorem. □