Structures subject to space complexity


Material Information

Title:
Structures subject to space complexity
Physical Description:
iv, 117 leaves : ; 29 cm.
Language:
English
Creator:
Uddin, Zia
Publication Date:

Subjects

Subjects / Keywords:
Mathematics thesis, Ph. D   ( lcsh )
Dissertations, Academic -- Mathematics -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 2004.
Bibliography:
Includes bibliographical references.
Statement of Responsibility:
by Zia Uddin.
General Note:
Printout.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 003101039
System ID:
AA00008996:00001

Full Text










STRUCTURES SUBJECT TO SPACE COMPLEXITY


By

ZIA UDDIN
















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2004














ACKNOWLEDGMENTS

I owe an immense debt of gratitude to my supervisory committee chair,

Professor Douglas Cenzer. Without his guidance, this work simply would not have

existed. He has been unfailingly generous to me, patient, encouraging, sympathetic,

and full of good humor. I am very lucky that he introduced me to the subject of

Complexity Theory, and I am sure I am lucky in more ways than I realize to have

him as an advisor.

I would also like to thank the other members of my supervisory committee:

Dr. William Mitchell, Dr. Beverly Sanders, Dr. Rick Smith, and Dr. Helmut Voelklein.

I have learned a lot from them either through taking classes taught by them, personal

conversations, or hearing them speak in seminars and conferences.

I am very grateful to the Department of Mathematics for supporting me with

a Teaching Assistantship during my years as a graduate student. The staff and many

of the professors have been very generous with their time and help over the years.

Finally, I must acknowledge the contribution of my parents, my brother, all

my good friends, and other sources that I may not be aware of, toward my progress

along this path.















TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

CHAPTER

1 INTRODUCTION

    1.1 Basic Themes in Complexity-Theoretic Mathematics
    1.2 Outline of Dissertation
    1.3 Definitions

2 SPACE COMPLEXITY OF OPERATIONS

    2.1 Arithmetical Operations
    2.2 Function Composition
    2.3 Applications of Function Composition

3 SPACE COMPLEXITY OF CERTAIN STRUCTURES

    3.1 Basic Structural Lemmas
    3.2 Relational, Functional, and Permutation Structures
    3.3 Abelian Groups

REFERENCES

BIOGRAPHICAL SKETCH
































Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

STRUCTURES SUBJECT TO SPACE COMPLEXITY

By

Zia Uddin

August 2004

Chair: Douglas A. Cenzer
Major Department: Mathematics

Complexity Theory has been applied to Model Theory and Algebra by Nerode,

Cenzer, Remmel, and others. In their studies, the primary focus has been on

polynomial-time complexity and other notions of bounded time. We now examine

the notions of bounded space complexity in Algebra and Model theory. Of particu-

lar interest are the classes of logarithmic-space, linear-space, and polynomial-space

computable sets and functions.














CHAPTER 1
INTRODUCTION

1.1 Basic Themes in Complexity-Theoretic Mathematics

This work falls under the subject of Complexity-Theoretic Model Theory and

Algebra. In this subject, one uses the rich theory of Recursive Model Theory and

Recursive Algebra as a reference but looks at resource (e.g., time or space) bounded

versions of the results in those areas. Recursive Model Theory and Algebra deals

with effective (in other words, recursive, i.e., computable) versions of results from

ordinary Model Theory and Algebra. As an example, let us consider the following

fact from Algebra: Every linearly independent subset of a vector space can be extended

to a basis. All the known proofs of this fact for infinite-dimensional vector spaces

use the Axiom of Choice and, therefore, are nonconstructive. As a result, one might

conjecture that there is a linearly independent subset of a vector space that cannot be

extended to a recursive basis. This conjecture was in fact proved by Metakides and

Nerode [14] in 1975. Hence, in this particular case, the analogous recursive-algebraic

statement of a theorem of Algebra turns out to be false. As an example of a positive

result, Dekker [9] proved that every recursively-presented infinite-dimensional vector

space over a recursive field with a dependence algorithm has a recursive basis. Here,

a dependence algorithm is one that can determine uniformly effectively whether any

set of n vectors v0, ..., vn-1 are dependent.

In the two decades before the work of Metakides and Nerode, there were

vast developments in many areas of theoretical computer science (Hopcroft and

Ullman [12]), particularly in complexity theory (Hartmanis and Stearns [11]). In due

course, Nerode and Remmel [15-20] began the investigation of complexity-theoretic









analogues of results from Recursive Algebra and other areas of mathematics. The

natural complexity-theoretic analogue of Dekker's result is that every polynomial-

time infinite-dimensional vector space over a polynomial-time field with a polynomial-

time dependence algorithm has a polynomial-time basis. Nerode and Remmel [17]

proved this statement only in the case where the underlying field is infinite and

has a polynomial-time representation with certain nice properties. If, however, the

underlying field is finite and we do not have a dependence algorithm, then this state-

ment turns out to be oracle dependent. Nerode and Remmel [17] also gave a general

construction which was used afterwards by Cenzer and Remmel [8; Theorem 8.14] to

prove that there exists an infinite-dimensional vector space without any polynomial-

time basis.

In general, the natural complexity-theoretic analogue of a result in Recursive

Model Theory and Algebra may be false because the proof of the recursive result

uses the unbounded resources allowed in recursive constructions in a crucial way.

In contrast, there are results in Recursive Model Theory and Algebra for which the

natural complexity-theoretic analogue is true but requires a more delicate proof that

incorporates the resource bounds. Furthermore, a number of new and interesting

phenomena arise in complexity-theoretic investigations because of two facts.

First, whereas all infinite isomorphic recursive sets are recursively isomorphic,

not all infinite polynomial-time sets are polynomial-time isomorphic. In particular,

Tal(w), the tally representation of the set of natural numbers w, is not polynomial-

time isomorphic to Bin(w), the binary representation of w. Hence, in Complexity-

Theoretic Model Theory and Algebra, it does make a difference if we choose Tal(w) as

our universe instead of Bin(w). Of course, in Recursive Model Theory and Algebra,

these two representations of w are interchangeable.

Second, complexity-theoretic results do not relativize as is the case for most

recursion-theoretic results. For example, Baker, Gill, and Solovay [1] proved that









there are recursive oracles X and Y such that P^X = NP^X and P^Y ≠ NP^Y. We

also recall the oracle-dependency of one part of the complexity-theoretic analogue

of Dekker's result above. In Recursive Model Theory and Algebra, it has generally

been the case that if two classes of recursive structures are equal, then they are equal

relative to all oracles.

In the remainder of this section we will briefly describe the themes of and

survey some results in Complexity-Theoretic Model Theory and Algebra. Section 1.2

discusses how our present work relates to previous research in this area, and then

provides an outline of this work. Precise definitions are given in Section 1.3.

There are basically two major themes in Complexity-Theoretic Model Theory

and Algebra. The first theme is Complexity-Theoretic Model Theory, which deals

with model existence questions. The second theme (Complexity-Theoretic Algebra)

fixes a given structure (e.g., a polynomial-time structure) and explores the proper-

ties of that structure. Our work is concerned with the first theme. This first theme

(Complexity-Theoretic Model Theory) itself has two major subthemes. In the first

subtheme, the focus so far has been on comparing various recursive structures with

feasible structures, and also comparing those recursive structures with feasible struc-

tures with a specified universe. Here, feasible is interpreted to be some relatively low

time-complexity class such as P (polynomial-time) or EX (exponential-time). The

second subtheme involves the problem of feasible categoricity and has been investi-

gated by Cenzer and Remmel [4, 6, 7, 21]. Our work, however, is concerned mainly

with the first subtheme.

Working within the first subtheme involves picking a complexity class A and

asking which structures in some other class C can be represented by models in A. By

far, the complexity class that has received the most attention is P. In view of that,

the four existence questions that have been posed for any class C of structures are as

follows:









* Is every recursive structure in C isomorphic to some polynomial-time structure?

* Is every recursive structure in C recursively isomorphic to some polynomial-time
structure?

* Is every recursive structure in C isomorphic to some polynomial-time structure
with a specified universe such as the binary or tally representation of the natural
numbers?

* Is every recursive structure in C recursively isomorphic to some polynomial-time
structure with a specified universe such as the binary or tally representation of the
natural numbers?

In contrast to these four existence questions pervading the first subtheme, the

second subtheme of feasible categoricity involves asking the corresponding uniqueness

questions. Returning to the existence questions, Grigorieff [10] studied the class C of

linear orderings. He showed that every recursive linear ordering is recursively isomor-

phic to a polynomial-time (in fact, a real linear-time) linear ordering, thus answering

the first two existence questions in the affirmative. To answer the third and fourth

existence questions for the class of linear orderings, one must specify the universe of

the alleged polynomial-time linear ordering in advance. We recall that the choice of

any particular universe as opposed to another may well make a difference. Grigorieff

showed that every recursive linear ordering is isomorphic to a polynomial-time linear

ordering (again, a real linear-time ordering, in fact) whose universe is Bin(w), thus

answering the third existence question in the affirmative for the particular universe

Bin(w). However, Cenzer and Remmel [2] answered the fourth existence question for

this particular universe in the negative by constructing a recursive linear ordering

that is not recursively isomorphic to any polynomial-time linear ordering with uni-

verse Bin(w).

More generally, Grigorieff [10] showed that every recursive structure with a

finite number of relation symbols and no function symbols is recursively isomorphic

to a polynomial-time structure with a standard universe (i.e., whose universe can be

taken to be the natural numbers in either their tally or their binary representation).

Cenzer and Remmel [2] strengthened this result by showing that any purely relational









recursive structure is recursively isomorphic to a polynomial-time structure with a

standard universe. In contrast, it was also shown [2] that a recursive structure with

a single unary function exists that is not recursively isomorphic to any polynomial-

time structure; and that a recursive structure with one unary relation and one unary

function exists that is not even isomorphic to any polynomial-time structure.

In addition to investigating whether recursive linear orderings, recursive rela-

tional structures, and recursive structures with unary functions have feasible models,

researchers have investigated whether recursive Abelian groups [13], recursive vector

spaces, recursive Boolean algebras, recursive graphs [6], and recursive permutation

structures have feasible models. For example, two important positive results [2] are

as follows:

* All recursive Boolean algebras are recursively isomorphic to polynomial-time
structures.

* The standard model of arithmetic, namely, the structure (w, S, +, <, 2x), is
recursively isomorphic to a polynomial-time structure.

As for examples of both positive and negative results, it was shown [3; pp. 343-348]

that any recursive torsion Abelian group G is recursively isomorphic to a polynomial-

time group A; and that if the orders of the elements of G are bounded, then A may

be taken to have a standard universe. It was also shown [3; p. 357] that a recursive

torsion Abelian group exists that is not even isomorphic to any polynomial-time (or

any primitive recursive) group with a standard universe.

1.2 Outline of Dissertation

The bounded resource in all of the investigations mentioned above is time. Our

work involves carrying out some of the same investigations except that our bounded

resource is space instead of time. For example, we are interested to know (following

Cenzer and Remmel [2]) whether any purely relational recursive structure is recur-

sively isomorphic to a logarithmic-space structure with a standard universe. Our

focus is mainly on logarithmic space; and to a lesser extent on linear, polynomial,









and exponential space. The importance of logarithmic space is a consequence of the

following observations:

* Any logarithmic-space algorithm is a polynomial-time algorithm, whereas a linear-
space algorithm is not necessarily a polynomial-time algorithm.

* Algorithms that can be carried out within polynomial time are generally thought
to be the practically feasible ones.

* Algorithms that can be carried out within logarithmic space correspond to those
that do not use up too much computer memory.

Chapter 2 begins by investigating the space complexity of the basic arith-

metical operations. In the case of addition and subtraction, the investigation simply

involves verifying that the standard algorithms for addition and subtraction belong

to a low space-complexity class (Lemmas 2.1.1-2.1.7). The detailed verifications of

the algorithms, although well-known, serve to elucidate later proofs. However, in the

case of binary multiplication, we are obliged to give a modification of the standard

algorithm (Lemma 2.1.8) since the standard algorithm requires linear space instead

of logarithmic space. And thus far, we have been unsuccessful in arriving at a division

algorithm that requires at most logarithmic space. (We note that both the standard

binary multiplication and division algorithms are polynomial-time.) Hence, to prove

the existence of a logarithmic-space bijection between the set of natural numbers

written in binary and the same set written in base 3 or a higher base, we need to

first prove the existence of such bijections between the natural numbers in binary

and various other sets, which in turn are proved to have logarithmic-space bijections

with the natural numbers in base 3 or higher. We then compose these bijections to

obtain the desired result.

Accordingly, Section 2.2 is a necessary detour where we investigate the space

complexity of function composition. Once again, we begin by reproving well-known

results (Lemmas 2.2.1-2.2.3) since we need to refer to their proofs, and not just the

statements, later on. We then go on to prove a series of space composition lemmas









(Lemmas 2.2.3-2.2.9) some of which deal with the situations where a space complex-

ity class is closed under function composition. We end the section by proving the

Generalized Space Composition Lemma (Lemma 2.2.12), which is then used through-

out our work.

Section 2.3 deals with various applications of the space complexity of function

composition. For example, we construct a logarithmic-space bijection from N x N to

N (Lemma 2.3.3), the standard bijection (Lemma 2.3.2) being a linear-space bijec-

tion, and not provably a logarithmic-space bijection. Lemmas 2.3.4-2.3.9 all serve

to eventually prove the existence of a logarithmic-space bijection between the set of

natural numbers written in binary and the same set written in a higher base (Lemma

2.3.10). Later, we prove Lemma 2.3.13, which characterizes those LOGSPACE sub-

sets of Tal(w) that are in fact LOGSPACE set-isomorphic to the whole of Tal(w); and

Lemma 2.3.14, which deals with Cartesian products and disjoint unions, and which

proves to be crucial in constructing structures (models) throughout Chapter 3.

Section 3.1 includes the basic model-building lemmas. Later, we rely particu-

larly on Lemma 3.1.1 and Lemma 3.1.3 (b). Lemma 3.1.1 deals with the most basic

situation where the LOGSPACE complexity of a structure is preserved under the

action of a LOGSPACE bijection. It so happens that LOGSPACE complexity is, in

general, not preserved when we move from the "binary version" of a structure to the

"tally version." The most important exception from our point of view is dealt with

in Lemma 3.1.3 (b).

Section 3.2 begins by answering in the affirmative (Theorem 3.2.1) the ques-

tion posed earlier, namely, whether any purely relational recursive structure is recur-

sively isomorphic to a logarithmic-space structure with a standard universe. Next, we

show (Theorem 3.2.2) that an affirmative answer is impossible in the case of purely

functional recursive structures. In the proof of Theorem 3.2.2, we in fact end up

constructing a permutation structure that has both finite and infinite orbits. Hence,









we reach a natural spot in our work where we can begin to investigate the space com-

plexity of permutation structures according to the number and size of their orbits.

Theorem 3.2.4 shows that we cannot specify a standard universe even for a recursive

permutation structure where all the orbits are finite. However, we can successfully

specify a standard universe by putting further restrictions on the orbits, as in the

cases where we have only a finite number of orbits (Theorem 3.2.5); at least one but

only a finite number of infinite orbits (Theorem 3.2.6); all orbits of the same finite size

(Theorem 3.2.7); and an upper bound on the size of the orbits (Corollary 3.2.9). In

contrast, for finitary permutation structures that do not have an infinite number of

orbits of the same fixed size, we put restrictions on their spectrum (defined in Section

1.3). In this way, we obtain both positive results [Theorems 3.2.12 and 3.2.14, where

the spectrum is a LOGSPACE subset of Tal(w); and Corollary 3.2.13] where we can

specify a standard universe, and a negative result (Theorem 3.2.15). Our final result

for permutation structures is a negative one (Theorem 3.2.16) where we quote two

examples with an infinite number of infinite orbits that are not recursively isomor-

phic to any LOGSPACE structure.

Section 3.3 deals with the space complexity of Abelian groups. Results here

parallel those for permutation structures. We define the direct product of a sequence

of groups, and consider the space complexity of this product in Lemma 3.3.2. This

lemma is a foundation for proving results about the basic groups: In Lemma 3.3.3,

we prove that the basic groups (like Z and Q) have standard LOGSPACE represen-

tations. In Lemma 3.3.4 we prove the same about finite products of the basic groups.

Then in Theorem 3.3.5, we prove the same once again for finitely-generated Abelian

groups. Next, we consider recursive torsion Abelian groups, which is where we have

similarities with the results for permutation structures. For example, Theorem 3.3.8

(where the "orbit" of every group element is finite) is the analogue of Theorem 3.2.4;

while Theorem 3.3.13 (dealing with torsion Abelian groups with an upper bound








on the orders of the elements) is the analogue of Corollary 3.2.9. We conclude our
work by considering torsion Abelian groups with no upper bound on the orders of

the elements. Once again, we define and put restrictions on the spectrum of torsion
Abelian groups. This time, the positive result of Theorem 3.3.17 is the analogue of
Theorem 3.2.14. And we have negative results in this case as well. The remainder of
Chapter 1 gives the necessary definitions and establishes notation.

1.3 Definitions

Let Σ be a finite alphabet. Then Σ* denotes the set of finite strings of letters
from Σ, and Σ^w denotes the set of infinite strings of letters from Σ, where w =
{0, 1, 2, ...} is the set of natural numbers. For a symbol s ∈ Σ and for each natural
number n ≠ 0, s^n denotes the string of n copies of the symbol s, while s^0 denotes the
empty string ∅.
For a string σ = (σ(0), σ(1), ..., σ(n-1)), where σ(i-1) is the ith symbol
of σ, 1 ≤ i ≤ n, the symbol |σ| denotes the length n of σ. The empty string ∅ has
length 0. For m ≤ |σ|, the string (σ(0), ..., σ(m-1)) is denoted by σ[m. A string σ
is an initial segment of a string τ, written σ ≺ τ, if σ = τ[m for some m ≤ |τ|. The
concatenation σ⌢τ (or sometimes just στ) is defined by

σ⌢τ = (σ(0), ..., σ(m-1), τ(0), ..., τ(n-1)),

where |σ| = m and |τ| = n. In particular, we write σ⌢a for σ⌢(a) and a⌢σ for (a)⌢σ,
where (a) is the string consisting of the single symbol a ∈ Σ.
For any natural number n ≠ 0, tal(n) = 1^n is the tally representation of
n, and bin(n) = i_0 i_1 ... i_j ∈ {0,1}* is the (reverse) binary representation of n if
n = i_0 + i_1·2 + ... + i_j·2^j and i_j ≠ 0. More generally, for k ≥ 2, the
(reverse) k-ary representation of n is bk(n) = i_0 i_1 ... i_j ∈ {0, 1, ..., k-1}*, if
n = i_0 + i_1·k + ... + i_j·k^j and i_j ≠ 0. We let tal(0) = bin(0) = bk(0) = 0. Then we let
Tal(w) = {tal(n) : n ∈ w}, Bin(w) = {bin(n) : n ∈ w}, and Bk(w) = {bk(n) : n ∈ w}
for each k ≥ 3. Occasionally, we will want to say that B2(w) = Bin(w) and
B1(w) = Tal(w).
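
To make these representations concrete, the following Python sketch (ours, for illustration only; the names tal and bk simply transliterate the notation above, and we assume k ≤ 10 so that digits are single characters) computes the tally and reverse k-ary strings of a natural number.

```python
def tal(n):
    """Tally representation: n ones; tal(0) is the string '0'."""
    return "1" * n if n > 0 else "0"

def bk(n, k):
    """Reverse k-ary representation (least significant digit first), assuming k <= 10."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % k))
        n //= k
    return "".join(digits)

assert tal(3) == "111"
assert bk(6, 2) == "011"      # bin(6): 6 = 0 + 1*2 + 1*4
assert bk(10, 3) == "101"     # 10 = 1 + 0*3 + 1*9
```
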
Given A, B ⊆ {0,1}*, we let A × B = {(a,b) : a ∈ A, b ∈ B}, the ordinary
Cartesian product. Let A ⊕ B = {(0,a) : a ∈ A} ∪ {(1,b) : b ∈ B}, which we call the
disjoint union of A and B. More generally, given A_1, ..., A_n ⊆ {0, 1, ..., k-1}*,
we define the n-fold disjoint union A_1 ⊕ A_2 ⊕ ... ⊕ A_n =
{(0,a_1) : a_1 ∈ A_1} ∪ {(1,a_2) : a_2 ∈ A_2} ∪ ... ∪ {(n-1,a_n) : a_n ∈ A_n}.
Our model of computation is the multitape Turing machine of Papadimitriou
[23]. We recall that in this model, the alphabet Σ of a machine always contains the
blank symbol ⊔ and the first symbol ▷. If x ∈ (Σ \ {⊔, ▷})* is the current nonempty
string on any tape, then we imagine the contents of that tape to be ▷x⊔⊔⊔...
The definition of a machine's program ensures that ▷ is never overwritten and that,
upon reading ▷ on any tape, the cursor of that tape moves right. Also, strings never
become shorter in this model, because no nonblank symbol is ever overwritten by the
blank symbol ⊔. Another important feature of this model is that the cursor of each
tape can move independently of the cursors of the other tapes.

Unless stated otherwise, our machines are both read-only (input strings are
never overwritten) and write-only (the output-string cursor never moves left, and
moves right immediately after each output symbol is written). Papadimitriou refers
to such machines as machines with input and output.
A function g(x) is a proper complexity function if g(x) is identically a con-
stant or one of k·log x, (log x)^k, kx, x^k, 2^((log x)^k), 2^(kx), 2^(x^k), and 2^(2^(kx)), where k is a nonzero
constant. For us, log x is an abbreviation for log_2 x.
Let g : w → w be a proper complexity function. A (deterministic) Turing
machine M is said to be g time bounded if each computation of M on inputs of length
l ≥ 2 requires at most g(l) steps. (If M has more than one input string, then l is
taken to be the sum of the lengths of the input strings.) A (deterministic) read-only,
write-only Turing machine M is said to be g space bounded if at the end of each
computation of M on inputs of length l ≥ 2, the maximum length of a string on the
work tapes (i.e., the non-input and non-output tapes) is at most g(l). However, if M is not
read-only or write-only, then we take into account the maximum lengths of strings
on the input tapes and the output tape if they get written on, as well as the lengths
of strings on the work tapes.
Once again, let g : w → w be a proper complexity function. A function
f : w^n → w of n ≥ 1 variables is said to be in TIME(g) if there is a g time bounded
Turing machine with n input tapes that computes f. The function f is said to be
in SPACE(g) if there is a g space bounded Turing machine with n input tapes that
computes f. A set or relation is in TIME(g) (resp. SPACE(g)) if its characteristic
function is in TIME(g) (resp. SPACE(g)).
We now define the important (deterministic) time complexity classes that we
consider:
R = ∪_c{TIME(x + c) : c > 0},
LIN = ∪_c{TIME(cx) : c > 0},
P = ∪_c{TIME(x^c) : c > 0},
EX = ∪_c{TIME(2^(cx)) : c > 0},
EXP = ∪_c{TIME(2^(x^c)) : c > 0},
DOUBEX = ∪_c{TIME(2^(2^(cx))) : c > 0}.
And if S is any of the above time complexity classes, then
DEX(S) = ∪_{g∈S}{TIME(2^g)}.
The smallest class Q containing P and closed under the DEX operator can
be defined by iterating DEX so that P_0 = P, P_{n+1} = DEX(P_n) for each n, and
Q = ∪_{n<w} P_n.
Let 0 (resp. k > 0) denote the identically zero (respectively, k) function. The
(deterministic) space complexity classes that we consider are as follows:
ZEROSPACE = SPACE(0),
CONSTANTSPACE = ∪_k{SPACE(k) : k > 0},
LOGSPACE = ∪_c{SPACE(c·log x) : c > 0},
PLOGSPACE = ∪_c{SPACE((log x)^c) : c > 0},
LINSPACE = ∪_c{SPACE(cx) : c > 0},
PSPACE = ∪_c{SPACE(x^c) : c > 0},
SUPERPSPACE = ∪_c{SPACE(2^((log x)^c)) : c > 0},
EXSPACE = ∪_c{SPACE(2^(cx)) : c > 0},
EXPSPACE = ∪_c{SPACE(2^(x^c)) : c > 0},
EXSUPERPSPACE = ∪_c{SPACE(2^(2^((log x)^c))) : c > 0},
DOUBEXSPACE = ∪_c{SPACE(2^(2^(cx))) : c > 0},
DOUBEXPSPACE = ∪_c{SPACE(2^(2^(x^c))) : c > 0}.
We say that a function f is quasi real-time if f ∈ R. This is slightly more
general than the usual notion of real time, since real-time functions are in TIME(x).
The function f is linear-time if f ∈ LIN; is polynomial-time or p-time if f ∈ P; is
exponential-time if f ∈ EX; is double exponential-time if f ∈ DOUBEX; and is
q-time if f ∈ Q.

Similarly, a function f is zero-space if f ∈ ZEROSPACE; is constant-
space if f ∈ CONSTANTSPACE; is logarithmic-space if f ∈ LOGSPACE; is
polylogarithmic-space if f ∈ PLOGSPACE; is linear-space if f ∈ LINSPACE;
is polynomial-space or p-space if f ∈ PSPACE; is exponential-space if f ∈ EXSPACE;
and is double exponential-space if f ∈ DOUBEXSPACE.
Odifreddi [22] includes all the basic definitions of recursion theory. Let φ_i^n be
the partial recursive function of n variables computed by the ith Turing machine M_i.
If n = 1, we write φ_i instead of φ_i^1. Given a string σ ∈ Σ*, we write φ_{i,s}(σ)↓ if M_i
gives an output in s or fewer steps when started on input string σ. Thus the function
φ_{i,s} is uniformly polynomial-time. We write φ_i(σ)↓ if (∃s)(φ_{i,s}(σ)↓), and φ_i(σ)↑ if it
is not the case that (∃s)(φ_{i,s}(σ)↓). Throughout our work, we use the terms recursive
and computable interchangeably.
The structures we consider are structures over an effective language L =
({R_i}_{i∈S}, {f_i}_{i∈T}, {c_i}_{i∈U}). Here S, T, and U are initial segments of w, while c_i
is a constant symbol for all i ∈ U. There are recursive functions s and t such that,
for all i ∈ S, R_i is an s(i)-ary relation symbol; and, for all i ∈ T, f_i is a t(i)-ary
function symbol. Also, there are recursive functions σ and τ such that, for all i ∈ S,
σ(i) is the index of a Turing machine that computes R_i; and, for all i ∈ T, τ(i) is the
index of a Turing machine that computes f_i.
If a structure A has a universe (i.e., its underlying set) that is a subset of
Bin(w), then by tal(A), we simply mean the structure isomorphic to A whose uni-
verse is a subset of Tal(w). We have a similar understanding of the notation bin(A) in
the case where A has a universe that is a subset of Tal(w). A relational structure is
a structure over a language that has no function symbols. A structure has standard
universe if its universe is all of Tal(w) or Bin(w).
Let Γ be some class of sets (relations or functions), such as primitive recursive
or partial recursive or some complexity class. We say that a set (relation or function)
is Γ-computable if it is in Γ.

A structure A = (A, {R_i^A}_{i∈S}, {f_i^A}_{i∈T}, {c_i^A}_{i∈U}) (where the universe A of A
is a subset of Σ*) is a Γ-structure if the following conditions hold:
* A is a Γ-computable subset of Σ*.
* For each i ∈ S, R_i^A is a Γ-computable s(i)-ary relation on A^s(i). Or, more formally,
R_i^A is the restriction to A^s(i) of a Γ-computable relation R_i on (Σ*)^s(i).
* For each i ∈ T, f_i^A is a Γ-computable t(i)-ary function from A^t(i) into A. Or, more
formally, f_i^A is the restriction to A^t(i) of a Γ-computable function f_i from (Σ*)^t(i)
into Σ*.

For any pair of structures A = (A, {R_i^A}_{i∈S}, {f_i^A}_{i∈T}, {c_i^A}_{i∈U}) and B = (B,
{R_i^B}_{i∈S}, {f_i^B}_{i∈T}, {c_i^B}_{i∈U}), we say that A and B are Γ-isomorphic if and only if
there is an isomorphism f from A onto B and Γ-computable functions F and G such
that f = F|A (the restriction of F to A) and f^(-1) = G|B. We end this section by
including some definitions related to permutations (i.e., bijections from a set onto
itself) and groups.
Let f be an injection of a set A into itself, and let a ∈ A. The orbit O_f(a) of
a under f is O_f(a) = {b ∈ A : (∃n ∈ w)(f^n(a) = b ∨ f^n(b) = a)}. The order |a|_f of
a under f is the cardinality of O_f(a). A permutation structure (A, f) is a structure
where A is a set and f is a permutation on A. Given an injection f : A → A,
we define the spectrum of f by Spec(A, f) = {n ∈ w : (∃a ∈ A)(|a|_f = n)},
and the finite and infinite parts of A by Fin_f(A) = {a ∈ A : |a|_f < w} and
Inf_f(A) = {a ∈ A : |a|_f = w}. An injection f : A → A is called finitary if all the
orbits of all elements of A under f are finite; and monic if there are no two disjoint
orbits of the same size.
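
As a concrete illustration (ours, not part of the original text), the following Python sketch computes O_f(a) and Spec(A, f) for a finite permutation presented as a dictionary; the backward pass is vacuous for a permutation but mirrors the definition given for injections.

```python
def orbit(f, a):
    """O_f(a): every b reachable from a by iterating f forward or backward."""
    seen = {a}
    x = a
    while f[x] not in seen:              # follow f forward until the cycle closes
        x = f[x]
        seen.add(x)
    inverse = {v: u for u, v in f.items()}
    x = a
    while inverse[x] not in seen:        # follow f backward (vacuous for a cycle)
        x = inverse[x]
        seen.add(x)
    return seen

def spectrum(f):
    """Spec(A, f): the set of orbit sizes |a|_f that occur."""
    return {len(orbit(f, a)) for a in f}

f = {0: 1, 1: 0, 2: 3, 3: 4, 4: 2}       # two orbits: {0, 1} and {2, 3, 4}
assert orbit(f, 0) == {0, 1}
assert spectrum(f) == {2, 3}
```
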
For a group, we will distinguish two types of computability. The structure of
a group G (with universe G) is determined by the binary operation, which we denote
by the addition sign +_G since we are interested in Abelian groups. We let e_G denote
the additive identity of G. The inverse operation, denoted by inv_G, may also be
included as an inherent part of the group. Thus we have the following distinction: a
group G is Γ-computable if (G, +_G, e_G) is Γ-computable and is fully Γ-computable if
(G, +_G, inv_G, e_G) is Γ-computable.














CHAPTER 2
SPACE COMPLEXITY OF OPERATIONS

2.1 Arithmetical Operations

In this section, we describe in detail how Turing machines can carry out the

basic arithmetical operations, and thus determine the space complexity classes to

which these operations belong.

Lemma 2.1.1. The standard algorithms for addition and subtraction of 1 in k-ary,
where k ≥ 2, can be carried out using zero space.

Proof: Suppose we are given a k-ary number a on the input tape of our machine.

We recall that a is written in reverse k-ary.

To add 1 to a, if the first symbol s of a is less than k - 1, the machine writes
s + 1 on the output tape and then copies the rest of a on to the output tape. But if
the first symbol of a is k - 1, then the machine writes a 0 on the output tape. Then,
each time the machine reads a k - 1 on the input tape, it writes a 0 on the output
tape, until, if at all, the machine reads the first symbol t of a that is less than k - 1.
At this point, the machine writes t + 1 on the output tape and then copies the rest of
the input on to the output tape. However, if all the symbols of a are k - 1, then the
machine writes a 1 on the output tape as soon as it reads the ⊔ on the input tape.

The input tape is read-only and the output tape is write-only, and hence no space is

used in this computation.

To subtract 1 from a, if the first symbol on the input tape is 0, the machine
writes k - 1 on the output tape. The machine continues to write k - 1 on the output
tape each time it reads a 0 on the input tape, until it reads the first nonzero symbol
s on the input tape. At this point, the machine checks whether s = 1 and whether
s is the last symbol of a. If so, the machine halts. Otherwise, the machine writes
s - 1 on the output tape, and then copies the rest of the input on to the output tape.

Once again, the input tape is read-only and the output tape is write-only, and so no

space is used.
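
The following Python sketch (ours) mirrors the addition half of this algorithm: the reverse k-ary input is scanned once from left to right, each output symbol is emitted as soon as it is decided, and no intermediate string is ever stored.

```python
def add_one(a, k=10):
    """Add 1 to the reverse k-ary string a, emitting symbols as they are decided."""
    out = []
    carry_done = False
    for s in a:
        d = int(s)
        if carry_done:
            out.append(s)            # just copy the rest of the input
        elif d < k - 1:
            out.append(str(d + 1))   # absorb the carry here
            carry_done = True
        else:
            out.append("0")          # a digit k-1 rolls over to 0; keep carrying
    if not carry_done:
        out.append("1")              # all digits were k-1: append a final 1
    return "".join(out)

assert add_one("9571") == "0671"     # 1759 + 1 = 1760 in reverse decimal
assert add_one("99") == "001"        # 99 + 1 = 100
```
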


We generalize Lemma 2.1.1 in Lemma 2.1.5, but the actual algorithms for the

special cases of addition and subtraction of 1 will frequently prove useful. Next, we

verify facts about the relative lengths of a natural number in the tally and in the

k-ary representation, k > 2, and the space complexity of converting between tally

and k-ary.

Lemma 2.1.2. Let σ = bk(n), where k ≥ 2 and n > 0. We have

(a) k^(|σ|-1) < n + 1 ≤ k^|σ|, or, equivalently, |σ| < 1 + log(n + 1) ≤ |σ| + 1.

(b) If |σ| = t and τ = bk(t), then |τ| = O(log |σ|).

(c) The computation pk(bk(n)) = 1^n can be carried out using linear space, while the
inverse computation pk^(-1)(1^n) = bk(n) can be carried out using logarithmic space.

Proof: (a) If all the symbols of σ are k - 1, that is, n is the largest k-ary number
possible for its length |σ|, then n = k^|σ| - 1, and so n + 1 = k^|σ|. On the other hand,
if n is the smallest k-ary number possible for its length |σ|, then the last symbol of
σ is 1 and all the previous symbols are 0. In that case, we have k^(|σ|-1) = n and so
k^(|σ|-1) < n + 1.

(b) By part (a), we have |τ| ≤ 1 + log(t + 1) = O(log t).

(c) To go from k-ary to tally, our machine first copies the k-ary number on

the input tape on to a work tape. Then it keeps employing the standard algorithm

to subtract 1 in k-ary on this worktape until the content of this tape becomes 0.
Moreover, each time a 1 is subtracted from this work tape, the machine writes a 1 on

the output tape. The content of the work tape will never be longer than the length

of the original input. Hence, going from k-ary to tally uses up linear space.









To go from tally to k-ary, our machine first writes a 0 on a worktape. Then,

each time the machine reads a 1 on the input tape, it adds 1 in k-ary on this work

tape using the standard algorithm. As soon as the ⊔ is encountered on the input

tape, the machine copies the k-ary number on the work tape on to the output tape.

By part (a), the length of the number on the work tape will be logarithmic in the

length of the input. Hence, going from tally to k-ary uses up logarithmic space.

O
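
A Python sketch (ours) of the two conversions of part (c); the comments indicate what the corresponding machine keeps on its work tape, which is exactly where the linear versus logarithmic space usage arises.

```python
def kary_to_tally(a, k):
    """bk(n) -> tal(n).  The machine copies a to a work tape (linear space) and
    repeatedly subtracts 1 in k-ary there, emitting one '1' per subtraction."""
    n = sum(int(d) * k**i for i, d in enumerate(a))   # value of the reverse k-ary string
    return "1" * n if n > 0 else "0"

def tally_to_kary(x, k):
    """tal(n) -> bk(n).  The machine keeps a k-ary counter on a work tape and adds 1
    per input symbol; the counter has length O(log |x|), hence logarithmic space."""
    count = 0 if x == "0" else len(x)
    digits = []
    while count > 0:
        digits.append(str(count % k))
        count //= k
    return "".join(digits) or "0"

assert kary_to_tally("011", 2) == "111111"    # reverse binary 6 -> six 1's
assert tally_to_kary("111111", 2) == "011"
```
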

We shall see later, as a consequence of Lemma 2.3.12, that for each k ≥ 2,

there is no algorithm to convert a k-ary number to its tally form which uses up only

logarithmic space.

In the following lemmas, we adopt the usual convention that m - n is set to 0

if m < n. We shall see in the next two lemmas that the usual arithmetic operations

can be carried out without using up any space if we use the tally representation of

the natural numbers.

Lemma 2.1.3. In Tal(w), the addition and subtraction of, and multiplication and

division (with remainder) by a constant requires zero space.

Proof: Let the constant be c ∈ Tal(w), and suppose that the input is x ∈ Tal(w). If

c = 0, then our machine simply copies the input to the output tape in the cases of

addition and subtraction, and writes a 0 on the output tape in the case of multipli-

cation. None of these operations use up any space. Now assume c ≠ 0. For each of

the four operations, we will use four different machines, each having c special states,

among others.

To add c to x, the machine first copies x on to the output tape if x ≠ 0. Then

the machine switches to the c states one after another, each time writing a 1 on the

output tape.

To subtract c from x, the machine outputs 0 if x = 0. Otherwise, the machine

switches to the first of the c states when it reads the first symbol of x. The machine









then switches to the remaining c - 1 states, one after another, each time it reads a

new symbol of x. If the machine encounters a ⊔ on the input tape before or immedi-

ately after switching to the last of the c states, then we have x < c, and the machine

outputs 0. But if the machine encounters a 1 on the input tape immediately after

switching to the last of the c states, it copies this 1 and the remaining 1's of x on to

the output tape.

To multiply x by c, the machine outputs 0 if x = 0. Otherwise, it switches to

the c states, one after another, each time copying x on to the output tape.

To divide x by c, we use a machine with a set Q1 of c - 1 states, and another
disjoint set Q2 of c states, among others. If x = 0, the machine outputs 0 ⊔ 0, sig-

nifying that the quotient and the remainder, respectively, are both 0. Otherwise, it

switches to the c - 1 states in Q1, one after another, each time it reads a symbol of x.
If the machine encounters the ⊔ on the input tape while in one of the states in Q1,
the quotient is 0, while x is the remainder. So the machine writes 0⊔ on the output

tape, then copies x on to the output tape, and then halts.

If the machine exhausts the states in Q1 without encountering the input tape's
⊔, the quotient is nonzero. In that case, the machine resets the input cursor to its
extreme left position (thus reading the input tape's ▷). Then it switches to the c

states in Q2, one after another, each time it reads a symbol of x. If the machine reads

a 1 of x after switching to the last of the c states in Q2, it writes a 1 on the output

tape (the first symbol of the nonzero quotient), and then switches back to the first

of the c states in Q2. The process is then repeated. Now if at any point the machine

encounters a ⊔ on the input tape before switching to the last of the c states in Q2,
then we have a nonzero remainder. The machine writes a ⊔ on the output tape (the
1's making up the quotient having been written before this ⊔) followed by a 1, then

switches back to the previous one of the c states, and then keeps writing a 1 on the

output tape each time it switches back to a previous state until it switches back to








the first of the c states in Q2. Note that since the machine is reading a ⊔ now, we
have no contradiction with the machine's activity while in any of the c states in Q2
earlier and reading a 1. The output tape now contains a string of 1's (the quotient),
followed by a ⊔, followed by a string of 1's (the remainder). However, if the machine
encounters a ⊔ on the input tape immediately after switching to the last of the c
states in Q2, then we have zero remainder. In that case, the machine simply writes
a ⊔ on the output tape followed by a 0, the 1's or 1 making up the quotient having

already been written.
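
A Python sketch (ours) of the division-by-a-constant case; the machine writes the quotient, a blank, and then the remainder on its output tape, which we simplify here to returning a pair.

```python
def divide_tally_by_constant(x, c):
    """Return (quotient, remainder) in tally after one left-to-right scan of x;
    the counter below never exceeds c, playing the role of the machine's c states."""
    if x == "0":
        return ("0", "0")
    quotient = []
    count = 0
    for _ in x:
        count += 1
        if count == c:
            quotient.append("1")      # one more multiple of c has been read
            count = 0
    remainder = "1" * count if count > 0 else "0"
    return ("".join(quotient) or "0", remainder)

assert divide_tally_by_constant("1111111", 3) == ("11", "1")   # 7 = 2*3 + 1
```
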



Lemma 2.1.4. The addition, subtraction, multiplication, and division (with remain-

der) functions from Tal(w) x Tal(w) to Tal(w), the length function from Tal(w) to

Tal(w), and the order relation on Tal(w) are all in ZEROSPACE.

Proof: Suppose we are given x ⊔ y in tally on the input tape. To compute x + y
in tally, our machine copies x on to the output tape (provided x ≠ 0), ignores the
⊔ after the x on the input tape, and then copies y on to the output tape (provided
y ≠ 0). But if x = y = 0, then the machine outputs 0.

Given x on the input tape, to compute the length |x| of x in tally, our machine
simply copies x on to the output tape unless x = 0, in which case it outputs 1.

For the rest of the proof, we shall use Turing machines with two input tapes,

with x on the first input tape and y on the second.

To compute x - y, the two input cursors advance simultaneously to the right.
If x = 0, the machine outputs 0 (since, by convention, we let x - y = 0 if x < y),
and if y = 0, the machine outputs x. If the ⊔ is encountered on the first input tape
either before or at the same time as the one on the second input tape, then we have
x ≤ y, and the machine outputs a 0. Otherwise, as soon as the machine encounters
the ⊔ on the second input tape, it copies the rest of x (including the currently-read
1 of x) on to the output tape.









To compute x · y, the machine outputs 0 if either x or y is 0. Otherwise, it
copies x on to the output tape each time it reads a 1 of y.

To compute x ÷ y, if x = 0, the output is 0 ⊔ 0, signifying that the quotient and
the remainder, respectively, are both 0. Otherwise, the two input cursors advance
simultaneously to the right. We have three possibilities: (i) If the ⊔ is encountered
on the first input tape before the one on the second, we have x < y, and the machine
outputs 0 ⊔ x. (ii) If ⊔'s are encountered on the input tapes at the same time, we have
x = y, and the machine outputs 1 ⊔ 0. (iii) If a ⊔ is encountered on the second input

tape first, the machine switches to a new state q, outputs a 1 (the first symbol of the

quotient), the second input cursor moves left all the way back to the beginning of y

(while the first input cursor stays fixed), and then the two input cursors again move

simultaneously to the right. Now in the case of possibility (i), the machine outputs
a ⊔, the 1's making up the quotient having been already written, and then outputs
a 1 each time the second input cursor moves left (thus writing the remainder) until
the second input cursor cannot move left anymore. In the case of possibility (ii), the
machine outputs a 1, followed by a ⊔, followed by a 0. And in the case of possibility
(iii), the machine switches to a new state q*, outputs a 1, the second input cursor
moves left all the way back to the beginning of y (while the first input cursor

stays fixed), the machine switches back to state q, and the procedure is then repeated.

Thus division in tally does not use up any space.

Finally, to check whether x ≤ y, the machine simply checks that of the two
simultaneously advancing input cursors, the first one encounters a ⊔ either before or
at the same time as the second input cursor.

O
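
A Python sketch (ours) of the two-argument tally operations just described, with the convention that x - y = 0 when x < y and with the string '0' standing for tal(0).

```python
def tal_add(x, y):
    """Concatenate the two tally inputs (copying each unless it is '0')."""
    if x == "0" and y == "0":
        return "0"
    return (x if x != "0" else "") + (y if y != "0" else "")

def tal_sub(x, y):
    """Truncated subtraction: advance both cursors together, then copy the rest of x."""
    nx = 0 if x == "0" else len(x)
    ny = 0 if y == "0" else len(y)
    return "1" * (nx - ny) if nx > ny else "0"

def tal_le(x, y):
    """x <= y iff the cursor on x hits the blank no later than the cursor on y."""
    nx = 0 if x == "0" else len(x)
    ny = 0 if y == "0" else len(y)
    return nx <= ny

assert tal_add("11", "111") == "11111"
assert tal_sub("11", "111") == "0"      # by convention, x - y = 0 when x < y
assert tal_le("11", "111")
```
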

In the next four lemmas, we shall see that, in contrast to the situation obtained

on using the tally representation of the natural numbers, not all arithmetic operations

can be carried out using zero space when we use a k-ary representation where k ≥ 2.









For example, the standard k-ary multiplication and division algorithms use up linear

space. We shall give a modification of the standard multiplication algorithm that

uses up only logarithmic space. However, we begin with those k-ary operations that

can in fact be carried out without using up any space.

Lemma 2.1.5. For each k ≥ 2, the addition and subtraction of a constant in Bk(w)

requires zero space.

Proof: Let the (nonzero) constant be c ∈ Bk(w), and suppose the input is x ∈ Bk(w).

To add c to x, we employ a machine with two special "carry" states, among

others, which inform the machine whether or not to carry a 1. For each of the k

possible symbols from {0, 1, ..., k - 1} currently read, the machine's program has

instructions as to which of the k symbols to write on the output tape, and whether

to carry a 1 or a 0. For example, suppose c = 24 in B10(w). (We recall that in reverse

10-ary, c is of course 42.) Then two of the many instructions will be as follows: (i)

If at the starting state you read a 9 on the input tape, then write a 3 on the output

tape, move right on the output tape, switch to the "carry a 1" state, and move right

on the input tape. (ii) If at the "carry a 1" state you read a 5 on the input tape, then

write an 8 on the output tape, move right on the output tape, switch to the "carry

a 0" state, and move right on the input tape. Hence if, say, x = 1759 (which on the

input tape will be written as 9571), then the machine will write the correct units and

tens digits of 1759 + 24 on the output tape and then will not carry a 1 to the next

symbol 7 on the input tape. Another necessary feature of the machine is that ⊔'s on
the input tape are treated as 0's for as long as necessary in case |c| ≥ |x|. Evidently

this machine uses up no space in its operation.

To subtract c from x, we employ a machine similar to the machine for addition.

Once again, the instructions of the machine's program exhaustively specify which

symbol to write on the output tape depending on the symbol currently read on the

input tape. However, instead of two "carry" states, the machine is equipped with









two "borrow" states that allow it to borrow a 1 or a 0 from the next symbol on the

input tape rather than carry a 1 or a 0 to the next symbol.



Lemma 2.1.6. For each k ≥ 2, the addition and subtraction functions from Bk(w) ×
Bk(w) to Bk(w) and the order relation on Bk(w) are all in ZEROSPACE.

Proof: Suppose we are given x and y in k-ary on the first and second input tapes,

respectively.

To compute x + y, we use a machine similar to the one in the proof of the

previous lemma. Once again, the instructions for the machine exhaustively specify

the symbol to write on the output tape and the "carry" state to switch to, depending

on the symbol of x and of y currently read. The two input cursors move right

simultaneously and by one place each time. One of the many machine instructions

in the case of k = 10 will be as follows: If at the starting state you read a 6 on the

first input tape and an 8 on the second input tape, then write a 4 on the output

tape, move right on the output tape, switch to the "carry a 1" state, and move right

on both input tapes. As a result, this machine will output the correct units digit of,

for example, 1066 + 1918 and carry a 1 to the tens digits of x = 1066 and y = 1918

(written as 6601 and 8191, respectively).

To compute x - y, the machine used is similar to the one for addition, except

that a 1 or a 0 is borrowed from the next symbol of x rather than carried to the next

symbol.

Finally, to check whether x ≤ y, our machine just needs to check whether
x - y = 0. Hence, this machine's program is the same as that of the machine for
subtraction in the previous paragraph with the modification that the output x - y is

never written. Instead, the machine simply uses states and the explicit instructions

to borrow 1 or 0 as appropriate until, if ever, one output symbol "would have been"








nonzero had the original machine for subtraction been used. In this case, the machine
outputs "no." Otherwise, the machine outputs "yes."
0
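
A Python sketch (ours) of the addition case: the two reverse k-ary inputs are scanned in lockstep, blanks past the end of the shorter input are read as 0's, and only a single carry value is remembered, corresponding to the machine's two carry states.

```python
def kary_add(x, y, k=10):
    """Add reverse k-ary strings x and y, remembering only a carry bit."""
    out = []
    carry = 0
    for i in range(max(len(x), len(y))):
        dx = int(x[i]) if i < len(x) else 0   # blanks past the end read as 0
        dy = int(y[i]) if i < len(y) else 0
        s = dx + dy + carry
        out.append(str(s % k))
        carry = s // k                        # 0 or 1: the two "carry" states
    if carry:
        out.append("1")
    return "".join(out)

assert kary_add("6601", "8191") == "4892"     # 1066 + 1918 = 2984, in reverse
```
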

Lemma 2.1.7. For each k ≥ 2, the length function from Bk(w) to Bk(w) is in
LOGSPACE.

Proof: Given x on the input tape, to compute |x| in k-ary, the machine first writes
a 0 on the output tape, and then adds 1 in k-ary on the output tape each time it
reads a symbol of x. We know that |x| in k-ary is O(log |x|) by Lemma 2.1.2 (b).
D

In the proof of the next lemma, we give the modification, mentioned earlier,
of the standard algorithm for k-ary multiplication, where k > 2.

Lemma 2.1.8. Let k ≥ 2. The multiplication function from Bk(w) × Bk(w) to Bk(w)
is in LOGSPACE.

Proof: Suppose we are given k-ary numbers x and y (written in reverse) on, respec-
tively, the first and second input tapes of a Turing machine. Our proof that x · y can
be computed using space no more than O(log(|x| + |y|)) is in four steps. In the first
three steps, we verify basic arithmetic properties of the length and the symbols of
x · y. We then go on to describe our machine's operation in the final step. We may
assume that x and y are both nonzero; otherwise, the machine simply outputs 0.

Step 1. Since x and y are k-ary numbers, we must have x ≤ k^|x| - 1 and
y ≤ k^|y| - 1. Hence x·y ≤ (k^|x| - 1)·(k^|y| - 1) = k^(|x|+|y|) - k^|x| - k^|y| + 1 ≤ k^(|x|+|y|) - 1,
whose length is at most |x| + |y|. In other words, we have |x·y| ≤ |x| + |y|.

Step 2. For each i ≥ 0, let xi and yi denote, respectively, the ith symbols of x
and y. Here if i ≥ |x| (resp. |y|), then we let xi = 0 (resp. yi = 0), a convention that
will prove useful in Step 3. We now have x = Σ_{i<|x|} k^i·xi and y = Σ_{i<|y|} k^i·yi.

In order to arrive at an equation relating the symbols of x·y with the symbols
of x and of y, consider the following recursive definitions:
* Let m0 = x0·y0, let q0 = ⌊m0/k⌋, and let r0 = m0 (mod k).
* For i ≥ 1, let mi = (xi·y0 + x_{i-1}·y1 + ... + x1·y_{i-1} + x0·yi) + q_{i-1}, let qi = ⌊mi/k⌋,
and let ri = mi (mod k).
It is easy to see now that for each i ≥ 0, the ith symbol of x·y is ri.

Step 3. The algorithm we describe in Step 4 involves our machine explicitly
writing down the mi and the qi on work tapes. Hence we must prove that the lengths
of the mi and the qi are logarithmic in |x| + |y|. Our proof involves induction on i.

Since xi, yj ≤ k - 1 for all i, j ≥ 0, we have m0 ≤ (k - 1)^2, and so q0 ≤
⌊(k - 1)^2/k⌋ = ⌊(k^2 - 2k + 1)/k⌋ = ⌊k - 2 + (1/k)⌋ = k - 2. We have m1 =
q0 + x1·y0 + x0·y1 ≤ (k - 2) + 2(k - 1)^2 = 2k^2 - 3k = k(2k - 3), and so q1 ≤ 2k - 3.
Similarly, we have m2 ≤ k(3k - 4) and q2 ≤ 3k - 4.

Suppose for i ≥ 1 we have mi ≤ k[(i + 1)k - (i + 2)] and hence qi ≤
(i + 1)k - (i + 2). Then m_{i+1} = qi + Σ_{j=0}^{i+1} x_{i+1-j}·yj ≤ (i + 1)k - (i + 2) + (i + 2)(k - 1)^2 =
(i + 1)k + (i + 2)k^2 - 2(i + 2)k = (i + 2)k^2 - (i + 3)k = k[(i + 2)k - (i + 3)], and so
q_{i+1} ≤ (i + 2)k - (i + 3). This completes the induction.

Now if i ≥ |x| + |y|, then mi = 0 = qi by our convention established in Step 2.
So suppose 0 ≤ i < |x| + |y|. Then we have mi ≤ k[(i + 1)k - (i + 2)] ≤ k[(|x| + |y| +
1)k - (|x| + |y| + 2)]. By Lemma 2.1.2 (a), it follows that |mi| = O(log(|x| + |y|)),
and hence we have |qi| = O(log(|x| + |y|)) also.
Step 4. In addition to the two input tapes and the output tape, our machine
uses a counter tape and three other work tapes W1, W2, and W3. The machine
begins by writing a 0 on the counter tape. Then it adds 1 in k-ary on the counter
tape (using the standard algorithm) each time the first input cursor reads a symbol
of x, and then each time the second input cursor reads a symbol of y. Step 1 assures
us that at the end of this procedure, the counter tape has the maximum number
of symbols possible for x y. Moreover, this procedure uses up space logarithmic in









|x| + |y| by Lemma 2.1.2 (a). The input cursors now return to their initial extreme-
left positions.

Now the machine is reading the first symbol x0 of x and the first symbol y0
of y. The machine's program contains explicit instructions as to what q0 and r0 (the
first symbol of x·y) are, and so it writes r0 on the output tape and q0 on tape W1.

For example, if k = 10 and we are multiplying 1774 by 29, then x0 = 4 and y0 = 9.
Based on its instructions, the machine writes 6 (which is r0 in this case) on the output
tape and 3 (i.e., q0) on W1. Then the machine subtracts 1 in k-ary from the counter

tape using the standard algorithm (to signify that we have already written the first

symbol of x·y). The first input cursor now moves right while the second input cursor
moves left, which it cannot do, and therefore it simply points to y0.

The machine is now reading x1 and y0, and based on its explicit instructions,
writes x1·y0 on tape W2. In the example of multiplying 1774 by 29 above, we have
x1 = 7 and so the machine is instructed in this case to write 63 (in reverse) on W2.

As soon as something is written on W2, the machine switches to a new state S, adds

the contents of W1 and W2 in k-ary (which requires no space by Lemma 2.1.6), and

writes the answer on W3. It then erases W1 and copies the contents of W3 on to

W1. At this point, W1 contains q0 + x1·y0 (which, in our example, is 3 + 63 = 66). To
obtain m1, the machine needs to add x0·y1 to the number on W1. So it switches to a

new state, and the first input cursor moves left while the second input cursor moves

right. Now the machine is reading x0 and y1 and, once again, based on its explicit
instructions, writes x0·y1 (which is 8 in our example since y1 = 2) on W2 after erasing

the previous contents of W2. Since W2 has been written on, the machine switches to

state S, adds the contents of W1 and W2, writing the answer (74 in our example)

on W3 (after erasing W3), and then copies this answer on to W1 (after erasing Wl).

Then the machine switches state and the second input cursor moves right while the

first input cursor moves left, which it cannot do.









As soon as the first input cursor cannot move left anymore, the machine
"knows" that the number currently on W3 is some mi. At this point in our example,
mi = m1. Now r1 (4 in our example) is simply the first symbol of m1 (74 in our
example), and q1 is the rest of m1. In fact, ri is the first symbol of mi and qi is
the rest of mi for every i. So the machine switches to a new state, copies the first

symbol on W3 on to the output tape, subtracts 1 on the counter tape, and copies

the remaining symbols on W3 on to W1.

Now the machine must obtain m2 and to do that, it must resume the "zigzag"

motion of its input cursors. So the machine changes state, the second input cursor

moves left while the first moves right. When the second cannot move left anymore,

the machine starts writing and adding x2·y0, x1·y1, and x0·y2 on the tapes W1, W2,
and W3 in the manner described above for x1·y0 and x0·y1. Then it writes r2 and q2
when the first input cursor cannot move left anymore, again in the manner described
for r1 and q1. As soon as r2 is written on the output tape, the machine subtracts 1

on the counter tape, and resumes the "zigzag" motion of its input cursors to obtain

m3.

The machine continues in this manner to obtain the remaining ri, treating
the ⊔'s on either input tape as 0's, and stops when the content of the counter tape
becomes zero. By Step 3, the lengths of the contents of W1, W2, and W3 will never
be more than logarithmic in |x| + |y|.
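
A Python sketch (ours) of the digit recursion that drives this proof: for each i the value mi is formed, ri = mi (mod k) is written out, and only qi = ⌊mi/k⌋ is carried to the next round; by Step 3, qi has length logarithmic in |x| + |y|.

```python
def kary_multiply(x, y, k=10):
    """Multiply reverse k-ary strings x and y using the digit recursion of Lemma 2.1.8."""
    if x == "0" or y == "0":
        return "0"

    def digit(s, i):
        return int(s[i]) if i < len(s) else 0   # symbols past the end are read as 0

    out = []
    q = 0                                       # the previous carry qi; stays small by Step 3
    for i in range(len(x) + len(y)):            # at most |x| + |y| output symbols (Step 1)
        m = q + sum(digit(x, j) * digit(y, i - j) for j in range(i + 1))
        out.append(str(m % k))                  # ri, written straight to the output
        q = m // k                              # qi, the only value carried to the next round
    return "".join(out).rstrip("0") or "0"      # the machine instead stops via its counter tape

assert kary_multiply("4771", "92") == "64415"   # 1774 * 29 = 51446, written in reverse
```
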


In contrast to k-ary multiplication, it is not entirely clear to us how to carry

out k-ary division for k > 2 in logarithmic space. So we cannot directly deal with

the standard conversion of a number n written in a base bi to n written in another

base b2 that employs division by b2. Instead we shall prove the result that Bin(w) is

LOGSPACE set-isomorphic to the set Bk(w) for every k ≥ 3 by producing isomor-

phisms between Bin(w) and certain other sets which will themselves be shown to be








isomorphic to Bk(w), k ≥ 3. Then these isomorphisms will be composed to produce
the required isomorphism between Bin(w) and Bk(w), k ≥ 3. In view of that, we
examine the space complexity of compositions of functions in the next section.

2.2 Function Composition

We begin with a basic fact [23; Proposition 8.1] regarding the length of the
output of a multi-variable function in a specified space complexity class.

Lemma 2.2.1. Let f(x1, ..., xn) be a function in SPACE(f*), where n ≥ 1 and f*
is a proper complexity function. Let |x| = |x1| + ... + |xn| denote the total length of
an input n-tuple (x1, ..., xn) to f. Then there are nonzero constants c and k such
that f is in TIME(c·|x|^n·2^(k·f*(|x|))), and hence |f(x1, ..., xn)| ≤ c·|x|^n·2^(k·f*(|x|)).

Proof: We may assume that there is a read-only and write-only Turing machine M
that computes f, and suppose that M has T work tapes. Also suppose M operates
using Q states and on S symbols. The maximum number of steps executed by M
before halting is the total number of configurations possible for M. This is because if
a configuration is repeated, M will enter a loop and never halt. A configuration of M
is of the form (q, u1w1, ..., unwn, u1*w1*, ..., uT*wT*, v), where: (i) q is the current state
of M, (ii) uiwi is the string on the ith input tape with the cursor pointing at the last
symbol of ui, (iii) ui*wi* is the current string on the ith work tape with the cursor
pointing at the last symbol of ui*, and (iv) v is the current string on the output tape.
There are Q choices for q and less than |x| choices for the cursor position on each
of the n input tapes. Since strings on the work tapes cannot be longer than f*(|x|),
there are S^(f*(|x|)) choices for each of the T work tape strings. Thus for the input and
work tapes only, we have a total of at most Q·|x|^n·S^(T·f*(|x|)) possible configurations,
each of which could result in M writing a symbol on the output tape.
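
In the notation of the proof, the estimate can be spelled out as follows (our restatement, with the work-tape contents bounded by f*(|x|) symbols drawn from S possible symbols):

```latex
\#\{\text{configurations}\}
  \;\le\; Q \cdot |x|^{n} \cdot S^{\,T f^{*}(|x|)}
  \;=\;    Q \cdot |x|^{n} \cdot 2^{\,T(\log S)\, f^{*}(|x|)}
  \;\le\;  c\,|x|^{n}\, 2^{\,k f^{*}(|x|)},
  \qquad c = Q,\quad k = T\,\lceil \log S \rceil .
```
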








Corollary 2.2.2. Let n ≥ 1, let f* be a proper complexity function such that f*(x) ≥ k log(x) for every constant k and x ∈ w, and let |x| = |x_1| + ... + |x_n|. We have
(a) If f(x_1, ..., x_n) ∈ CONSTANTSPACE, then |f(x_1, ..., x_n)| ≤ c|x|^n for some nonzero constant c.
(b) If f(x_1, ..., x_n) is a LOGSPACE function, then |f(x_1, ..., x_n)| ≤ c|x|^r for some nonzero constants c and r ≥ n.
(c) If f(x_1, ..., x_n) is in SPACE(f*), then |f(x_1, ..., x_n)| ≤ 2^{c f*(|x|)} for some nonzero constant c.

The next lemma (Proposition 8.2 in Papadimitriou [23]) shows that the space
complexity class is preserved when two LOGSPACE functions are composed. We
reproduce the argument, which, with appropriate changes, later becomes proofs in
the cases of compositions where the complexity class is not preserved. We also use
this argument in a modified form to prove a more general version of Lemma 2.2.3.

Lemma 2.2.3 (Space Composition Lemma I). The composition f ∘ g of two LOGSPACE functions g(x_1, ..., x_n) and f(x), where n ≥ 1, is in LOGSPACE.

Proof: Suppose we are given x_1, ..., x_n on the n input tapes. Let Mf and Mg be Turing machines that compute f and g, respectively, using up logarithmic space, and let |x| = |x_1| + ... + |x_n|.
We first note that in order to compute (f ∘ g)(x_1, ..., x_n), we cannot simply let Mg compute and write g(x_1, ..., x_n) on some work tape(s) and then feed g(x_1, ..., x_n) as input to Mf. This is because by Corollary 2.2.2 (b), the length of g(x_1, ..., x_n) need not be O(log |x|). In other words, the intermediate output g(x_1, ..., x_n) may be too long to write out completely on a work tape. We overcome this difficulty by never writing all of g(x_1, ..., x_n) on a tape. Instead, our machine M to compute f ∘ g simulates the operations of Mf and Mg in the special manner described in the next two paragraphs.









The program of M contains the programs of both Mf and Mg, and additional instructions. M begins by simulating Mf on certain work tapes. As soon as Mf "demands" to read the first symbol of g(x_1, ..., x_n), that is, right at the beginning, M changes state and simulates Mg on separate work tapes long enough to write down only the first symbol of g(x_1, ..., x_n) on the "input tape" for Mf. Once this is done, M changes state and writes a 1 in binary on a separate counter tape T1 to signify that the first symbol of g(x_1, ..., x_n) has been written so far. Then M simulates Mf again and "lets" Mf read the first symbol of g(x_1, ..., x_n). M continues to simulate Mf until Mf "demands" to read the second symbol of g(x_1, ..., x_n). At that point, M changes state, erases the first symbol of g(x_1, ..., x_n), and then simulates Mg long enough to output the second symbol of g(x_1, ..., x_n) on the place where the first symbol was written. After that, M adds 1 in binary on tape T1 and then simulates Mf to "let" Mf read the second symbol of g(x_1, ..., x_n). In general, when M masquerading as Mf "demands" to read the ith symbol of g(x_1, ..., x_n), M changes state and simulates Mg long enough to overwrite the (i−1)th symbol of g(x_1, ..., x_n) with the ith symbol, and then adds 1 on tape T1 so that T1 contains i in binary.

Since each symbol of g(x_1, ..., x_n) is overwritten with the next, a difficulty arises when M in the guise of Mf "demands" to read a previous symbol of g(x_1, ..., x_n) instead of the current ith symbol. In that situation, M subtracts 1 in binary on tape T1 and copies the resulting number i−1 on to another counter tape T2. Then M starts simulating Mg from the beginning to overwrite the ith symbol of g(x_1, ..., x_n) with the first symbol of g(x_1, ..., x_n), then subtracts 1 on tape T2, then simulates Mg again long enough to overwrite the first symbol with the second symbol of g(x_1, ..., x_n), then subtracts 1 on T2, and so on, until there is a 0 on T2, at which point M, under the guise of Mf, can read the (i−1)th symbol of g(x_1, ..., x_n). This procedure can be repeated as often as necessary to allow M in the guise of Mf to read any previous symbol of g(x_1, ..., x_n) that has long been deleted.








It remains to show that M operates within space O(log |x|). Since each symbol of g(x_1, ..., x_n) gets overwritten by the next one or the very first one, only constant space is used on that particular tape. On the tapes where M simulates Mg, only space O(log |x|) is used up since Mg operates in logarithmic space. Tapes T1 and T2 contain at most the total length of g(x_1, ..., x_n) in binary. Thus the contents of these two tapes cannot be longer than log(c|x|^r) for some nonzero constants c and r ≥ n, and log(c|x|^r) = O(log |x|). Finally, the space used up by M while simulating Mf is logarithmic in the length of g(x_1, ..., x_n), and therefore is at most log(c|x|^r) = O(log |x|) as well.
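The key idea of the proof, recomputing the inner function's output on demand rather than storing it, can be illustrated with the following Python sketch. It is offered only as an illustration: the functions are modelled as ordinary string functions, the "re-simulation of Mg" is a fresh call to g, and only the counter i (the analogue of tape T1) is kept.

def compose_streaming(f, g, x):
    """Compute f(g(x)) without ever materializing g(x) for f.

    g_symbol(i) recomputes g(x) from scratch and returns only its i-th
    symbol (or None past the end); f reads its input only through it.
    """
    def g_symbol(i):
        out = g(x)              # in the Turing-machine proof this is a
        return out[i] if i < len(out) else None   # re-simulation, not a lookup

    return f(g_symbol)

# toy example: g reverses a string, f counts the vowels of its input
def g(x):
    return x[::-1]

def f(read):
    i, count = 0, 0             # i plays the role of counter tape T1
    while (c := read(i)) is not None:
        if c in "aeiou":
            count += 1
        i += 1
    return str(count)

print(compose_streaming(f, g, "composition"))   # prints 5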


In the next six lemmas, the notation PSPACE o LINSPACE C EXSPACE,
for example, means: The composition of a linear-space function of n > 1 variables
followed by a polynomial-space function of one variable is an EXSPACE function.
Except for the first one, these six lemmas are grouped according to the space com-
plexity class of the function that is applied second in a composition.

Lemma 2.2.4 (Space Composition Lemma II). Let f* be a proper complexity function such that f*(x) ≥ k log x for every constant k and x ∈ w. We have
(a) CONSTANTSPACE ∘ CONSTANTSPACE ⊆ LOGSPACE.
(b) CONSTANTSPACE ∘ SPACE(f*) ⊆ SPACE(f*).
(c) SPACE(f*) ∘ CONSTANTSPACE ⊆ SPACE(f*).

Proof: (a) Given a CONSTANTSPACE function g of one variable and a CONSTANTSPACE function h of n ≥ 1 variables with corresponding Turing machines Mg and Mh, our machine M computes g ∘ h by composing Mg and Mh in the manner described in the proof of the Space Composition Lemma I. While simulating Mg and Mh, M uses up constant space. Also M uses up the one "slot" on the tape needed to output h one symbol at a time. By Corollary 2.2.2, we have |h(x)| ≤ c|x|^n, where c is some nonzero constant and |x| is the total length of the input to h. Hence, on the binary counter tapes keeping track of the output symbols of h, M uses up space at most log |h(x)| ≤ log(c|x|^n) = O(log |x|).
(b) Given a function h of n ≥ 1 variables in SPACE(f*) and a single-variable CONSTANTSPACE function g with corresponding Turing machines Mh and Mg, our machine M once again computes g ∘ h by composing Mg and Mh in the manner described in the proof of the Space Composition Lemma I. This time M uses up space bounded by f* while simulating Mh, the one "slot" on the tape needed to output h one symbol at a time, and constant space while simulating Mg. And by Corollary 2.2.2, this time we have |h(x)| ≤ 2^{c f*(|x|)}, where c and |x| have the same meaning as in part (a). Hence, on the binary counter tapes M uses up space at most log(2^{c f*(|x|)}) = O(f*(|x|)).
(c) The proof is similar to that of (b).



Lemma 2.2.5 (Space Composition Lemma III). Let f* be a proper complexity function such that f*(x) ≥ k log x for every constant k and x ∈ w. We have LOGSPACE ∘ SPACE(f*) ⊆ SPACE(f*).

Proof: Suppose we are given a LOGSPACE function g of one variable and a function h of n ≥ 1 variables in SPACE(f*) with corresponding Turing machines Mg and Mh. While computing g ∘ h, the simulation of Mh uses up space bounded by f* and also the one "slot" needed to output h one symbol at a time. Although the output of h is written one symbol at a time, the simulation of Mg utilizes the entire length of the output of h which, by Corollary 2.2.2, can be as long as 2^{c f*(|x|)}, where c is some nonzero constant and |x| is the total length of the input to h. Since g ∈ LOGSPACE, the simulation of Mg uses up space at most log(2^{c f*(|x|)}), which is O(f*). Finally, the space used up in the binary counter tapes is at most log |h(x)| ≤ log(2^{c f*(|x|)}) = O(f*(|x|)) as well.








In the next lemma, we omit the two results PLOGSPACE o LOGSPACE C
PLOGSPACE and PLOGSPACE o CONSTANTSPACE C PLOGSPACE, since they
are covered by part (a) of the lemma.

Lemma 2.2.6 (Space Composition Lemma IV). Let f* be a proper complexity function such that f*(x) ≥ kx for every constant k and x ∈ w. We have
(a) PLOGSPACE ∘ PLOGSPACE ⊆ PLOGSPACE.
(b) PLOGSPACE ∘ LINSPACE ⊆ PSPACE.
(c) PLOGSPACE ∘ SPACE(f*) ⊆ SPACE(f*).

Proof: (b) Suppose we are given a PLOGSPACE function g of one variable and a LINSPACE function h of n ≥ 1 variables with corresponding Turing machines Mg and Mh. While computing g ∘ h, the simulation of Mh uses up linear space and also the one "slot" needed to output h one symbol at a time. By Corollary 2.2.2, the output of h can be as long as 2^{c|x|}, where c is some nonzero constant and |x| is the total length of the input to h. Since g ∈ PLOGSPACE, there is a nonzero constant r such that the simulation of Mg uses up space at most (log(2^{c|x|}))^r, which is polynomial in |x|. And the space used up in the binary counter tapes is at most log |h(x)| ≤ log(2^{c|x|}) = O(|x|).
The proofs of (a) and (c) are similar.


We also omit results involving LINSPACE since they are entirely covered by
the next lemma.

Lemma 2.2.7 (Space Composition Lemma V). Let f* be a proper complexity function such that f*(x) ≥ k log x for every constant k and x ∈ w. We have PSPACE ∘ SPACE(f*) ⊆ ∪{SPACE(2^{k f*}) : k > 0}.

Proof: Suppose we are given a PSPACE function g of one variable and a function h of n ≥ 1 variables in SPACE(f*) with corresponding Turing machines Mg and Mh. While computing g ∘ h, the simulation of Mh uses up space bounded by f* and also the one "slot" needed to output h one symbol at a time. By Corollary 2.2.2, the output of h can be as long as 2^{c f*(|x|)}, where c is some nonzero constant and |x| is the total length of the input to h. Since g ∈ PSPACE, there is a nonzero constant r such that the simulation of Mg uses up space at most (2^{c f*(|x|)})^r = 2^{rc f*(|x|)}. And the space used up in the binary counter tapes is at most log |h(x)| ≤ log(2^{c f*(|x|)}) = O(f*(|x|)).


As with the previous space composition lemmas, we shall omit results like
SUPERPSPACE o LOGSPACE C SUPERPSPACE, which is covered by part (a) of
the next lemma.

Lemma 2.2.8 (Space Composition Lemma VI). Let f* be a proper complexity function such that f*(x) ≥ x^k for every constant k and x ∈ w. We have
(a) SUPERPSPACE ∘ PLOGSPACE ⊆ SUPERPSPACE.
(b) SUPERPSPACE ∘ LINSPACE ⊆ EXPSPACE.
(c) SUPERPSPACE ∘ SPACE(f*) ⊆ ∪{SPACE(2^{k f*}) : k > 0}.

Proof: (a) Suppose we are given a SUPERPSPACE function g of one variable and a PLOGSPACE function h of n ≥ 1 variables with corresponding Turing machines Mg and Mh. While computing g ∘ h, the simulation of Mh uses up poly-logarithmic space and also the one "slot" needed to output h one symbol at a time. By Corollary 2.2.2, the output of h can be as long as 2^{c(log |x|)^d}, where c and d are some nonzero constants and |x| is the total length of the input to h. Since g ∈ SUPERPSPACE, there is a nonzero constant r such that the simulation of Mg uses up space at most 2^{(log(2^{c(log |x|)^d}))^r}, which is super-polynomial in |x|. And the space used up in the binary counter tapes is at most log |h(x)| ≤ log(2^{c(log |x|)^d}) = c(log |x|)^d, which is poly-logarithmic in |x|.


The proofs of (b) and (c) are similar.








We omit any lemmas about EXSPACE and deal with EXPSPACE instead in

the next lemma. This is because Lemma 2.2.9 still holds if we replace EXPSPACE

with EXSPACE in each of the statements (a)-(d).

Lemma 2.2.9. (Space Composition Lemma VII)

(a) EXPSPACE o LOGSPACE C EXPSPACE.

(b) EXPSPACE o PLOGSPACE C EXSUPERPSPACE.

(c) EXPSPACE o LINSPACE C DOUBEXSPACE.

(d) EXPSPACE o PSPACE C DOUBEXPSPACE.

Proof: (a) Given an EXPSPACE function g of one variable and a LOGSPACE

function h of n > 1 variables with corresponding Turing machines Mg and Mh, the

length of the output of h is at most c|x|^r for some nonzero constants c and r ≥ n. And so there is a nonzero constant d such that the simulation of Mg uses up space at most 2^{(c|x|^r)^d}, which is exponentially polynomial in |x|. The simulation of Mh and

utilization of the binary counter uses up even less space.

The proofs of (b)-(d) are similar.

El

We now proceed to generalize the Space Composition Lemma I to include functions whose outputs are many-dimensional vectors rather than simply scalars. In order to facilitate this generalization, we identify the vector x⃗ = (x_1, ..., x_n) with the value p(x_1, ..., x_n) of the coding function p on the n-tuple (x_1, ..., x_n), as defined in the following lemma.

Lemma 2.2.10 (Coding Lemma). Let x_1, ..., x_n be finite strings of letters of some finite alphabet Σ, and suppose x_1 = e^1_1 e^1_2 ... e^1_{k_1}, x_2 = e^2_1 e^2_2 ... e^2_{k_2}, ..., x_n = e^n_1 e^n_2 ... e^n_{k_n}. Define p : (Σ*)^n → (Σ ∪ {0, 1})* as follows:
p(x_1, ..., x_n) = e^1_1 1 e^1_2 1 ... 1 e^1_{k_1} 0 e^2_1 1 e^2_2 1 ... 1 e^2_{k_2} 0 ... 0 e^n_1 1 e^n_2 1 ... 1 e^n_{k_n}.
Then both p and p^{-1} are in ZEROSPACE.








Proof: Given x_1, ..., x_n on n input tapes, our machine begins by copying each symbol e^1_j of x_1, followed by a 1, on to the output tape (omitting the 1 after the last symbol of x_1, in accordance with the definition of p). When the machine encounters the ⊔ on the first input tape, it writes a 0 on the output tape. It then repeats the procedure while reading x_2 on the second input tape, and continues this procedure for x_3, ..., x_n.
The machine for p^{-1} has n output tapes. Suppose we are given the string e^1_1 1 e^1_2 1 ... 1 e^1_{k_1} 0 e^2_1 1 e^2_2 1 ... 1 e^2_{k_2} 0 ... 0 e^n_1 1 e^n_2 1 ... 1 e^n_{k_n} on the machine's input tape. The machine begins by copying the first input symbol on to the first output tape, then reads the second input symbol, then copies the third input symbol on to the first output tape, and so on, until it encounters a 0 after having just copied an input symbol. At this point, the machine switches to a new state and repeats the procedure on the second output tape while reading the input starting at e^2_1. In this way, the machine ends up using n different states to write the n strings x_1, ..., x_n on the n output tapes.
O]
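For illustration only, the coding function p and its inverse can be written out directly as string functions. The Python sketch below assumes that the alphabet symbols are single characters distinct from the markers '0' and '1', as in the lemma.

def code(strings):
    """p(x1,...,xn): separate the symbols of each block by 1's, the blocks by 0's."""
    blocks = []
    for s in strings:
        blocks.append("1".join(s))          # e1 1 e2 1 ... 1 ek
    return "0".join(blocks)

def decode(coded, n):
    """Inverse of p: split on the 0 separators, then drop the 1 markers."""
    blocks = coded.split("0")
    assert len(blocks) == n
    return [block[::2] for block in blocks]  # every other symbol is a marker 1

# example over the alphabet {a, b, c}
assert code(["ab", "c", "ba"]) == "a1b0c0b1a"
assert decode("a1b0c0b1a", 3) == ["ab", "c", "ba"]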

In the next lemma, we show that if x⃗ = (x_1, ..., x_n), and h and H are functions such that h(x_1, ..., x_n) = H(x⃗), then h and H almost always belong to the same space complexity class.

Lemma 2.2.11. Let x_1, ..., x_n be strings in some Σ* and identify the vector x⃗ = (x_1, ..., x_n) with its code p(x_1, ..., x_n). Suppose h and H are functions such that h(x_1, ..., x_n) = H(x⃗). Let g* be a nonconstant proper complexity function. Then we have
(a) If H ∈ CONSTANTSPACE, then h ∈ LOGSPACE.
(b) If H ∈ SPACE(g*), then h ∈ SPACE(g*).
(c) If h ∈ CONSTANTSPACE, then H ∈ LOGSPACE.
(d) If h ∈ SPACE(g*), then H ∈ SPACE(g*).








Proof: (a) To compute h(x_1, ..., x_n), we compose the ZEROSPACE conversion of (x_1, ..., x_n) to p(x_1, ..., x_n), that is, x⃗, followed by the CONSTANTSPACE computation of H(x⃗). We know that this composition is in LOGSPACE by the Space Composition Lemma II (a).
(b) We argue exactly as in part (a) but invoke part (c) of the Space Composition Lemma II.
(c) The argument here is that in part (d), mutatis mutandis. Hence, we give the argument for part (d) in full detail.
(d) Let M be a machine with one input tape and suppose we are given p(x_1, ..., x_n) on this input tape. We describe how M can compute the output H((x_1, ..., x_n)) = H(p(x_1, ..., x_n)) while operating in SPACE(g*).
First, let M_{p^{-1}} be the machine in the proof of the Coding Lemma that outputs x_1, ..., x_n on its n output tapes (without using up any space) when given p(x_1, ..., x_n) on its input tape. For convenience, we assume further that M_{p^{-1}} has been modified so that instead of outputting the complete strings x_1, ..., x_n on its n output tapes, M_{p^{-1}} actually overwrites each symbol e^i_j of each x_i on its ith output tape by the next symbol e^i_{j+1} of x_i. With this modification, M_{p^{-1}} still operates in ZEROSPACE but each of its output tapes contains at most one symbol at any one time. Finally, let Mh be a machine that outputs h(x_1, ..., x_n) when given x_1, ..., x_n on its n input tapes, and operates in SPACE(g*).
The machine M has 3n work tapes T^1_1, T^1_2, W_1, ..., T^n_1, T^n_2, W_n, among others. The n work tapes W_1, ..., W_n serve as output tapes for M_{p^{-1}} (i.e., input tapes for Mh). As for the remaining labeled work tapes T^1_1, T^1_2, ..., T^n_1, T^n_2, each of the tapes T^i_1 and T^i_2 serves as a binary counter tape for the intermediate output tape W_i. The machine M computes the output H((x_1, ..., x_n)) = H(p(x_1, ..., x_n)) by composing M_{p^{-1}} followed by Mh in a manner similar to that of the machine described in the proof of the Space Composition Lemma I. We need to








be careful about the requirement that Mh can read n symbols, one on each of its n input tapes, all at the same time.
In the beginning, M writes 0s on each of the 2n counter tapes T^i_j. Then M simulates Mh. At this point, each input cursor of Mh either moves one place to the right or remains stationary. For each i ∈ {1, ..., n}, if the ith input cursor of Mh "wants" to move to the right, then M adds 1 in binary on the counter tape T^i_1. But if the ith input cursor of Mh "wants" to remain stationary, then M does nothing on tape T^i_1. Then M copies the contents of each tape T^i_1 on to tape T^i_2. Now if there is a 1 on tape T^1_2, then M simulates M_{p^{-1}} long enough to write the first symbol of x_1 on tape W_1. After that, M simulates M_{p^{-1}} in a modified form long enough to exhaust all the symbols of x_1 without actually overwriting the first symbol of x_1. However, if there is a 0 on tape T^1_2, then M simulates M_{p^{-1}} in a modified form long enough to exhaust all the symbols of x_1 without actually writing anything on W_1. Similarly, the first symbol of each of the remaining x_i is either written or not written on tape W_i depending on whether there is a 1 or a 0 on tape T^i_2. Now M can simulate Mh again to read the symbols on the tapes W_i all at the same time and do whatever Mh "wants" to do.

Now each input cursor of Mh may "want" to move one place to the right or to the left or remain stationary. If an input cursor wants to move right (resp. left), M adds (resp. subtracts) 1 in binary on the corresponding counter tape T^i_1. But if this input cursor "wants" to remain stationary, then nothing is done on the corresponding counter tape. Then M copies the contents of each tape T^i_1 on to tape T^i_2. Now M simulates M_{p^{-1}} from the beginning, and each time a symbol of x_1 gets written on W_1, a 1 is subtracted in binary on tape T^1_2 until the content of T^1_2 becomes 0. Now W_1 contains the symbol a of x_1 that Mh "wanted" to read. So now M simulates M_{p^{-1}} in a modified form long enough to exhaust all the remaining symbols of x_1 without actually overwriting a. Then M continues to simulate M_{p^{-1}}, and each time a symbol of x_2 gets written on W_2, a 1 is subtracted in binary on tape T^2_2 until the content of T^2_2 becomes 0. This process continues for the remaining x_i, at the end of which M can simulate Mh again to read the symbols on the tapes W_i all at the same time and do whatever Mh "wants" to do. Finally, M halts when Mh "wants" to halt.
It remains to show that M operates in SPACE(g*). Evidently the length of the contents of each of the binary counter tapes T^i_j is logarithmic in the length of the input to M. The length of the contents of each of the tapes W_i is at most 1. And Mh operates in SPACE(g*). Since g* is a nonconstant proper complexity function, we have g*(n) ≥ k log n for every constant k and n ∈ w. It now follows that M operates in SPACE(g*).


We have now finally reached the position where we can generalize the Space

Composition Lemma I.

Lemma 2.2.12 (Generalized Space Composition Lemma). Let x_1, ..., x_n be strings in some Σ*. Let h_1, ..., h_m be functions of n variables, and let g be a function of m variables. Suppose that each h_i is in SPACE(H*) and that g is in SPACE(G*), where G* and H* are nonconstant proper complexity functions. Let f be the composition defined by f(x_1, ..., x_n) = g(h_1(x_1, ..., x_n), ..., h_m(x_1, ..., x_n)). Then f is in SPACE(G*(2^{cH*})) for some nonzero constant c.

Proof: Let y_1, ..., y_m be arbitrary strings in Σ*. We recall our usual identifications x⃗ = (x_1, ..., x_n) = p(x_1, ..., x_n) and y⃗ = (y_1, ..., y_m) = p(y_1, ..., y_m). For each i ∈ {1, ..., m}, let H_i(x⃗) = h_i(x_1, ..., x_n), and let G(y⃗) = g(y_1, ..., y_m). By the previous lemma, each H_i is in SPACE(H*) and G is in SPACE(G*). Note that f(x_1, ..., x_n) = G((H_1(x⃗), ..., H_m(x⃗))). Since p is in ZEROSPACE and we have (H_1(x⃗), ..., H_m(x⃗)) = p(H_1(x⃗), ..., H_m(x⃗)), the Space Composition Lemma II and an argument similar to that in the proof of Lemma 2.2.11 (d) show that (H_1(x⃗), ..., H_m(x⃗)) is in SPACE(H*). Therefore by Lemma 2.2.1, we have the inequality |(H_1(x⃗), ..., H_m(x⃗))| ≤ 2^{cH*(|x|)}, where c is some nonzero constant and |x| = |x_1| + ... + |x_n|. Since G* is a nonconstant proper complexity function, an argument similar to that in the proof of the Space Composition Lemma I now shows that f = G ∘ H is in SPACE(G*(2^{cH*})).



2.3 Applications of Function Composition

As an immediate application of the Space Composition Lemmas, we have the
following:

Lemma 2.3.1. Let w be the set of natural numbers in either the tally or the binary representation. The exponential function f(x) = 2^x from w to w is in LINSPACE, while the doubly exponential function g(x) = 2^{2^x} from w to w is in EXSPACE.

Proof: Note that g = f ∘ f, and so if f ∈ LINSPACE, then g ∈ EXSPACE by the Space Composition Lemma V.
To prove that f ∈ LINSPACE, first let h : Tal(w) → Bin(w) be defined by h(x) = 0^{|x|}1. Thus h computes bin(2^x) given input x ∈ Tal(w). Moreover, we have h ∈ ZEROSPACE since given x ∈ Tal(w) on an input tape, a Turing machine can output h(x) by writing 0's on its output tape each time it reads a 1 of x and then writing a 1 on the output tape when it encounters the ⊔ on the input tape. Finally, let p_2 be as in Lemma 2.1.2 (c), that is, p_2 converts a binary number to tally. Then p_2 ∈ LINSPACE.
Now the exponential function f : Tal(w) → Tal(w) is p_2 ∘ h, which is in LINSPACE by the Space Composition Lemma II. And the exponential function f : Bin(w) → Bin(w) is h ∘ p_2, which is also in LINSPACE by the Space Composition Lemma II.
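As a plain illustration of these conversions, the following Python sketch models the tally and binary representations as strings, assuming (as the machine descriptions above suggest) that binary numbers are written with the least significant digit first; the space bounds themselves are of course not captured by ordinary Python code.

def h(tal_x):
    """bin(2^x) from tal(x): one 0 per 1 of x, then a final 1
    (binary digits written least significant first)."""
    return "0" * len(tal_x) + "1"

def bin_to_tal(b):
    """p_2: convert a (least-significant-first) binary string to tally."""
    n = sum(2 ** i for i, bit in enumerate(b) if bit == "1")
    return "1" * n

# f on tally: x = 3 (i.e. "111") maps to 2^3 = 8 ones, via p_2 after h
assert bin_to_tal(h("111")) == "1" * 8
# f on binary: bin(3) = "11" maps to bin(8) = "0001", via h after p_2
assert h(bin_to_tal("11")) == "0001"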








Next, we give two applications dealing with bijections from w x w to w. The
first bijection we consider happens to be the standard one. Note that we are unable
to replace Tal(w) with Bin(w) in part (b) of Lemma 2.3.2.

Lemma 2.3.2. Let w be the set of natural numbers in either the tally or the binary representation.
(a) The pairing function [., .] from w × w to w defined by [x, y] = [(x + y)^2 + 3x + y]/2 is in LOGSPACE.
(b) The inverse f : Tal(w) → w × w of the pairing function is in LOGSPACE. This inverse is given explicitly by the formula f(m) = (φ(m), ψ(m)) if n(n+1)/2 ≤ m < (n+1)(n+2)/2, where φ(m) = m − [n(n+1)/2] and ψ(m) = [(n+1)(n+2)/2] − m − 1.

Proof: (a) First, let w have the tally representation. Define the function g : Tal(w) × Tal(w) → Tal(w) by g(x, y) = (x + y)/2. Then g is in LOGSPACE, being the composition of tally addition and division by 2, either of which is in ZEROSPACE. Now define the functions h_1 and h_2 from Tal(w) × Tal(w) to Tal(w) by h_1(x, y) = (x + y)^2 and h_2(x, y) = 3x + y. Then h_2 is in ZEROSPACE since given x and y in tally on two input tapes, our machine uses three states to copy x on to the output tape three times, and then copies y on to the output tape. And h_1 is in LOGSPACE because given x and y in tally on two input tapes, our machine adds 1 in binary on a work tape T each time it reads a 1 of x and then of y, resulting in bin(x + y) on T. Then each time it subtracts a 1 in binary on T, the machine copies x and then y on to the output tape. We now have [x, y] = g(h_1(x, y), h_2(x, y)), and since g, h_1, and h_2 are all in LOGSPACE, it follows from the Generalized Space Composition Lemma that the pairing function is in LOGSPACE.
Now let w have the binary representation. Define g, h_1, and h_2 as above but this time define g only on the set of pairs of numbers whose sum is even. Then g is in ZEROSPACE. This is because given binary numbers x and y on two input tapes such that x + y is even, our machine simulates the addition of x to y, as described








in the proof of Lemma 2.1.6, but this time it simply does not write the first symbol (which will be a 0) of x + y on the output tape. To see that h_1 and h_2 are in LOGSPACE, we first define f_1, f_2, f_3, and f_4 from Bin(w) × Bin(w) to Bin(w) by f_1(x, y) = x + y, f_2(x, y) = xy, f_3(x, y) = 3x, and f_4(x, y) = y. Evidently f_4 is in ZEROSPACE. We have f_1 ∈ ZEROSPACE by Lemma 2.1.6, while f_2 and f_3 are in LOGSPACE by Lemma 2.1.8. (The machine for f_3(x, y) simply simulates the one for f_2(x, y) on inputs x and 3.) We now have h_1(x, y) = f_2(f_1(x, y), f_1(x, y)) and h_2(x, y) = f_1(f_3(x, y), f_4(x, y)). Since the f_i are all in LOGSPACE, it follows from the Generalized Space Composition Lemma that h_1 and h_2 are in LOGSPACE. The same lemma now shows that [x, y] = g(h_1(x, y), h_2(x, y)) is in LOGSPACE.
(b) Given m ∈ w in tally on the input tape, our machine has to output φ(m) ⊔ ψ(m) on its output tape. In order to do that, it must determine the n such that n(n + 1)/2 ≤ m < (n + 1)(n + 2)/2. In fact, our machine computes the number n(n + 1)/2 which is ≤ m using four work tapes T1–T4 and writes n(n + 1)/2 on T4. All the work done on tapes T1–T4 will be in binary. If m = 0 (resp. 1), then n = 0 (resp. 1), and so n(n + 1)/2 = 0 (resp. 1). Hence the machine immediately writes a 0 (resp. 1) on T4. Otherwise, the machine writes a 0 on T1 and a 1 on T2, and then performs the following procedure: It simulates the addition of the contents of T1 and T2, and writes the answer on T3. After that, the machine copies the contents of T3 on to T1 and also on to T4. Once this is completed, the machine advances the input cursor (starting in its extreme-left position) one place to the right each time a 1 is subtracted on T4. If the input cursor reads a 1 of m when the content of T4 becomes 0, then the correct number 0 + 1 + ... + n = n(n + 1)/2 ≤ m may still not have been written on T4, and so the input cursor goes back to its extreme-left position, the machine adds 1 on tape T2, and the above procedure is repeated. But if the input cursor reads a ⊔ when the content of T4 becomes 0, then T4 contained the correct n(n + 1)/2, that is, T1 now contains the correct n(n + 1)/2, and so the








machine copies the contents of T1 on to T4. On a separate worktape T5, the machine writes m in binary. After that, the machine simulates the binary subtraction of the contents of T4 from T5, writing the answer on another work tape T6. Note that T6 now contains bin(φ(m)). The machine now copies this bin(φ(m)) on to the output tape if the output is to be in Bin(w) × Bin(w). Otherwise, the machine writes a 1 on the output tape each time it subtracts a 1 in binary on T6. At the end of either of these maneuvers, the machine writes a ⊔ on the output tape.
It remains to output ψ(m). Recall that at this point, T1 and T4 contain the correct number 0 + 1 + ... + n = n(n + 1)/2 ≤ m. To obtain (n + 1)(n + 2)/2, the machine adds 1 on T2, and then adds the contents of T1 and T2, writing the answer on T3. Now T3 has 0 + 1 + ... + n + (n + 1) = (n + 1)(n + 2)/2. Meanwhile, T5 contains bin(m). The machine adds 1 on T5 and then simulates the binary subtraction of the contents of T5 from T3, writing the answer, that is, bin(ψ(m)), on T6. The machine now copies this bin(ψ(m)) on to the output tape if the output is to be in Bin(w) × Bin(w). Otherwise, the machine writes a 1 on the output tape each time it subtracts a 1 in binary on T6.
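In plain arithmetic the pairing function and its inverse look as follows. This Python sketch is only an illustration: it works with ordinary integers rather than tally or binary strings, and the linear search for n mirrors the search carried out on tapes T1–T4 above without reflecting its space usage.

def pair(x, y):
    """The pairing function [x, y] = ((x + y)^2 + 3x + y) / 2."""
    return ((x + y) ** 2 + 3 * x + y) // 2

def unpair(m):
    """Inverse: find n with n(n+1)/2 <= m < (n+1)(n+2)/2, then
    phi(m) = m - n(n+1)/2 and psi(m) = (n+1)(n+2)/2 - m - 1."""
    n = 0
    while (n + 1) * (n + 2) // 2 <= m:
        n += 1
    phi = m - n * (n + 1) // 2
    psi = (n + 1) * (n + 2) // 2 - m - 1
    return phi, psi

# round-trip check on a few small pairs
for x in range(6):
    for y in range(6):
        assert unpair(pair(x, y)) == (x, y)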


Although we cannot prove that the standard bijection from w × w to w in the previous lemma is in LOGSPACE, we are able to construct another such bijection in the next lemma that does happen to be in LOGSPACE. Using this new bijection, we avoid the problem of having to explicitly write down the number 0 + 1 + ... + n = n(n + 1)/2 ≤ m in binary when the input m itself is in binary, which is the reason why we cannot prove that the standard bijection from w × w to w is in LOGSPACE.

Lemma 2.3.3. Let w be the set of natural numbers in either the tally or the binary representation. For each n ≥ 1, define
A_n = {0, 1, ..., 2^n − 1} × {2^n, 2^n + 1, ..., 2^{n+1} − 1},
B_n = {2^n, 2^n + 1, ..., 2^{n+1} − 1} × {0, 1, ..., 2^n − 1},
C_n = {2^n, 2^n + 1, ..., 2^{n+1} − 1} × {2^n, 2^n + 1, ..., 2^{n+1} − 1}.








(a) Let f : w × w → w be defined by the following set of rules: (0, 0) ↦ 0, (0, 1) ↦ 1, (1, 0) ↦ 2, (1, 1) ↦ 3. And for each n ≥ 1, we define
(x, y) ↦ 2^n x + y + 2^{2n} − 2^n  if (x, y) ∈ A_n,
(x, y) ↦ 2^n x + y + 2^{2n}  if (x, y) ∈ B_n,
(x, y) ↦ 2^n x + y + 2^{2n+1} − 2^n  if (x, y) ∈ C_n.
Then f is both one-to-one and onto, and in LOGSPACE.
(b) The inverse g : w → w × w of f is also in LOGSPACE. This inverse is given explicitly as follows: 0 ↦ (0, 0), 1 ↦ (0, 1), 2 ↦ (1, 0), 3 ↦ (1, 1). And for each z ∈ w \ {0, 1, 2, 3} and the corresponding n ≥ 1, we have g(z) = (x, y), where
x = ⌊(z − 2^{2n})/2^n⌋, y = (z − 2^{2n})(mod 2^n) + 2^n  if 2^{2n} ≤ z ≤ 2·2^{2n} − 1,
x = ⌊(z − 2^{2n})/2^n⌋, y = (z − 2^{2n})(mod 2^n)  if 2·2^{2n} ≤ z ≤ 3·2^{2n} − 1,
x = ⌊(z − 2^{2n+1})/2^n⌋, y = (z − 2^{2n})(mod 2^n) + 2^n  if 3·2^{2n} ≤ z ≤ 4·2^{2n} − 1.


Proof: (a) First, we show that f is indeed defined on all of w × w and that f is a bijection onto w.
For each n ≥ 1, the smallest element of the set f(A_n) is 2^n(0) + (2^n) + 2^{2n} − 2^n = 2^{2n} (which is 4 if n = 1), while its largest element is 2^n(2^n − 1) + (2^{n+1} − 1) + 2^{2n} − 2^n = 2·2^{2n} − 1. Similarly, the smallest element of the set f(B_n) is 2·2^{2n}, while its largest element is 3·2^{2n} − 1. And the smallest element of f(C_n) is 3·2^{2n}, while its largest element is 4·2^{2n} − 1. Thus for each n ≥ 1, the sets f(A_n), f(B_n), and f(C_n) are mutually disjoint.
Now f↾A_n for n ≥ 1 is one-to-one because 2^n(x_1) + (y_1) + 2^{2n} − 2^n = 2^n(x_2) + (y_2) + 2^{2n} − 2^n ⟹ 2^n(x_1 − x_2) = y_2 − y_1 ≤ 2^{n+1} − 1 − 2^n ⟹ (x_1 − x_2) ≤ 1 − (1/2^n) < 1 ⟹ x_1 = x_2 ⟹ y_1 = y_2. Similarly, f↾B_n and f↾C_n are also one-to-one. Thus for








each n ≥ 1, we have |A_n| = |f(A_n)|, |B_n| = |f(B_n)|, and |C_n| = |f(C_n)|. It now follows that f is defined on all of w × w and that f is onto w.
We now show that f is in LOGSPACE.
Suppose we are given x and y on the two input tapes of our machine. If (x, y) ∈ {0, 1} × {0, 1}, then the output is immediate. So assume that (x, y) ∉ {0, 1} × {0, 1}. If x and y are in tally, then our machine first writes down bin(x) and bin(y) on two separate worktapes and treats these binary numbers as the inputs from now on. The machine computes the n ≥ 1 such that (x, y) ∈ A_n or B_n or C_n. We recall that 2^{|bin(x)|−1} ≤ x ≤ 2^{|bin(x)|} − 1 and 2^{|bin(y)|−1} ≤ y ≤ 2^{|bin(y)|} − 1. Hence, our machine only has three straightforward cases to consider: (i) If |bin(x)| = |bin(y)|, then n = |bin(x)| − 1 = |bin(y)| − 1, and (x, y) ∈ C_n. (ii) If |bin(x)| < |bin(y)|, then n = |bin(y)| − 1, and we claim that (x, y) ∈ A_n. Otherwise, x > 2^n − 1, i.e., x ≥ 2^n = 2^{|bin(y)|−1}, which implies |bin(x)| ≥ |bin(y)|, a contradiction. (iii) Analogously to the second case, if |bin(x)| > |bin(y)|, then n = |bin(x)| − 1 and (x, y) ∈ B_n. Our machine now writes bin(n) on a separate worktape. We note that the procedures described so far all use up at most logarithmic space.
Now that our machine "knows" whether (x, y) E An or Bn or Cn, it pro-
ceeds to compute f(x, y) in binary using the appropriate formula. We shall illustrate
how the machine can do this using up at most logarithmic space in the case where

(x, y) e An. The other two cases are similar.
If (x, y) E An, the machine must output the binary number f(x, y) = 2bin(n)
bin(x) + bin(y) + 22.bin(n) 2bin(n). We will define LOGSPACE functions fl, f2, f3,
and f4 such that f(x, y) = f4(fi(x, y), f2 (x, y), f3(x, y)). It will then follow from the
Generalized Space Composition Lemma that f(x, y) is in LOGSPACE. And to output

f(x, y) in tally, the machine simply converts bin(f (x, y)) to tally. This conversion is
linear in bin(x) and bin(y), and hence logarithmic if the inputs x and y are originally
given in tally.








First let f_4(bin(a), bin(b), bin(c)) = bin(a) + bin(b) + bin(c). Then f_4 ∈ LOGSPACE by the Space Composition Lemma I. Now define f_1(bin(x), bin(y)) = 2^{bin(n)}·bin(x). Then f_1 is in LOGSPACE because the output is n zeros followed by bin(x), and so the machine simply copies bin(n) on a separate worktape, writes a zero on the "output tape" of f_1 each time it subtracts a 1 in binary from bin(n), and then copies bin(x) to the right of the zeros. The function f_2(bin(x), bin(y)) = bin(y) is in ZEROSPACE. And finally, the function f_3(bin(x), bin(y)) = 2^{2·bin(n)} − 2^{bin(n)} is in LOGSPACE since f_3(bin(x), bin(y)) = f_7(f_5(bin(x), bin(y)), f_6(bin(x), bin(y))), where we have f_5, f_6, and f_7 as follows: f_7(bin(a), bin(b)) = bin(a) − bin(b) ∈ LOGSPACE, while f_5(bin(x), bin(y)) = 2^{2·bin(n)} ∈ LOGSPACE because the output is 2n zeros followed by a 1, and 2n zeros can be written given bin(n) using up only logarithmic space, and f_6(bin(x), bin(y)) = 2^{bin(n)} ∈ LOGSPACE, again because the output is n zeros followed by a 1. We now have f(x, y) = f_4(f_1(x, y), f_2(x, y), f_3(x, y)) in binary, and this function is in LOGSPACE by the Generalized Space Composition Lemma.
(b) First we must verify that f ∘ g = id. If z ∈ {0, 1, 2, 3}, then certainly (f ∘ g)(z) = z. Now let z ∈ w \ {0, 1, 2, 3} and suppose there exists an n ≥ 1 such that 2^{2n} ≤ z ≤ 2·2^{2n} − 1. Then g(z) = (x, y), where x = ⌊(z − 2^{2n})/2^n⌋ and y = (z − 2^{2n})(mod 2^n) + 2^n. Hence f(g(z)) = 2^n x + y + 2^{2n} − 2^n = {2^n⌊(z − 2^{2n})/2^n⌋ + (z − 2^{2n})(mod 2^n)} + 2^n + 2^{2n} − 2^n = {z − 2^{2n}} + 2^{2n} = z. The arguments for the cases where 2·2^{2n} ≤ z ≤ 3·2^{2n} − 1 and 3·2^{2n} ≤ z ≤ 4·2^{2n} − 1 are similar.
Now we prove that g ∈ LOGSPACE. If the input z to our machine is in {0, 1, 2, 3}, then the output is immediate. Otherwise, if z is in tally, then the machine first writes bin(z) on a separate work tape and treats bin(z) as the input from now on. Our machine then proceeds to compute the n ≥ 1 such that 2^{2n} ≤ z ≤ 4·2^{2n} − 1 = 2^{2(n+1)} − 1. Since 2^{|bin(z)|−1} ≤ z ≤ 2^{|bin(z)|} − 1, we have |bin(z)| = 2n + 1 or |bin(z)| = 2(n + 1). Hence n = (|bin(z)| − 1)/2 if |bin(z)| is








odd, and n = (|bin(z)|/2) − 1 if |bin(z)| is even. Since binary subtraction of 1 and binary division by 2 are both in ZEROSPACE, our machine can check the parity of the number of symbols of bin(z) and write the correct bin(n) on a separate worktape without using up space more than logarithmic in bin(z).
Now our machine must determine whether 2^{2n} ≤ z ≤ 2·2^{2n} − 1 or 2·2^{2n} ≤ z ≤ 3·2^{2n} − 1 or 3·2^{2n} ≤ z ≤ 4·2^{2n} − 1, and then output g(bin(z)) = (bin(x), bin(y)) accordingly. For this, the machine simply checks whether 2·2^{2n} ≤ z and whether 3·2^{2n} ≤ z. Since binary multiplication, the order relation, and raising 2 to a power of n are all in LOGSPACE, the above checking, being a composition of these three functions, can be carried out within space logarithmic in bin(z).
Finally, to output g(bin(z)) = (bin(x), bin(y)), consider the case where 2^{2n} ≤ z ≤ 2·2^{2n} − 1. Then bin(x) = ⌊(bin(z) − 2^{2·bin(n)})/2^{bin(n)}⌋ and bin(y) = (bin(z) − 2^{2·bin(n)})(mod 2^{bin(n)}) + 2^{bin(n)}. We observe that binary division by 2^t and then taking the floor of the result involves simply ignoring the first t symbols of the dividend, and binary b(mod 2^t) is simply the first t symbols of bin(b). Hence, arguments similar to those in part (a) show that both bin(x) and bin(y) can be computed and explicitly written down within space logarithmic in bin(z). All this also holds for the cases 2·2^{2n} ≤ z ≤ 3·2^{2n} − 1 and 3·2^{2n} ≤ z ≤ 4·2^{2n} − 1. And the conversion of bin(x) and bin(y) to tal(x) and tal(y) is linear in bin(z), and hence logarithmic in tal(z).
O
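The formulas of Lemma 2.3.3 can be checked directly in plain arithmetic. The following Python sketch is only an illustration (it uses ordinary integers and the built-in bit_length in place of |bin(·)|, so it says nothing about space usage), but the round-trip assertions exercise the same case analysis as the proof.

BASE = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def f(x, y):
    """The bijection of Lemma 2.3.3 from w x w onto w."""
    if (x, y) in BASE:
        return BASE[(x, y)]
    n = max(x, y).bit_length() - 1        # (x, y) lies in A_n, B_n or C_n
    if x < 2 ** n:                        # (x, y) in A_n
        return 2 ** n * x + y + 2 ** (2 * n) - 2 ** n
    if y < 2 ** n:                        # (x, y) in B_n
        return 2 ** n * x + y + 2 ** (2 * n)
    return 2 ** n * x + y + 2 ** (2 * n + 1) - 2 ** n   # (x, y) in C_n

def g(z):
    """The inverse of f."""
    if z < 4:
        return [(0, 0), (0, 1), (1, 0), (1, 1)][z]
    n = (z.bit_length() - 1) // 2         # 2^(2n) <= z <= 2^(2n+2) - 1
    if z < 2 * 2 ** (2 * n):
        return (z - 2 ** (2 * n)) // 2 ** n, (z - 2 ** (2 * n)) % 2 ** n + 2 ** n
    if z < 3 * 2 ** (2 * n):
        return (z - 2 ** (2 * n)) // 2 ** n, (z - 2 ** (2 * n)) % 2 ** n
    return (z - 2 ** (2 * n + 1)) // 2 ** n, (z - 2 ** (2 * n)) % 2 ** n + 2 ** n

# f and g are mutually inverse on an initial segment
assert all(f(*g(z)) == z for z in range(4096))
assert all(g(f(x, y)) == (x, y) for x in range(64) for y in range(64))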
We now proceed to prove that Bin(w) is LOGSPACE set-isomorphic to Bk(w), k ≥ 3, as a corollary to the next few lemmas, specifically, Lemmas 2.3.5–2.3.9. In Lemma 2.3.4 and in Lemma 2.3.5, we in fact prove the existence of an order-isomorphism as opposed to just a set-isomorphism. Although the very next Lemma 2.3.4 is not used to prove that Bin(w) is LOGSPACE set-isomorphic to Bk(w), k ≥ 3, it is interesting in its own right and will prove useful later in Theorem 3.2.12.

Lemma 2.3.4. The set Bin(w) \ {1}* is LOGSPACE order-isomorphic to Bin(w).








Proof: Let φ : Bin(w) \ {1}* → Bin(w) be defined by φ(x) = x + 1 − |x|. Then φ(0) = 0 and φ(2) = 1. Note that φ(1) is undefined since 1 ∉ Bin(w) \ {1}*.
To show that φ is one-to-one and order-preserving, let x ∈ Bin(w) \ {1}* be such that |x| = n ≥ 2. (The case |x| < 2 is already taken care of by the fact that φ(0) = 0 and φ(2) = 1.) Then 2^{n−1} ≤ x ≤ 2^n − 2. Any two distinct numbers between 2^{n−1} and 2^n − 2 have the same length |x| = n. Hence, all the numbers between 2^{n−1} and 2^n − 2 get mapped by φ to distinct numbers between 2^{n−1} + 1 − n and 2^n − 1 − n, and so φ is order-preserving between 2^{n−1} and 2^n − 2.
To show that φ is onto, let b ∈ Bin(w). Since, as observed above, φ(2^{n−1}) = 2^{n−1} + 1 − n and φ(2^n − 2) = 2^n − 1 − n, and φ is order-preserving between 2^{n−1} and 2^n − 2, it suffices to show that for every b ∈ Bin(w), there exists an n such that 2^{n−1} + 1 − n ≤ b ≤ 2^n − 1 − n. We shall show this by induction on b ≥ 2. (For b = 0, 1, we already have φ(0) = 0 and φ(2) = 1.) For b = 2 we have n = 3. Now suppose 2^{n−1} + 1 − n ≤ b ≤ 2^n − 1 − n for some n. If b < 2^n − 1 − n, then the same n works for b + 1. Otherwise, b = 2^n − 1 − n implies b + 1 = 2^n − n = 2^{(n+1)−1} + 1 − (n + 1), and so n + 1 works for b + 1.
It remains to show that φ and its inverse can be computed in logarithmic space.
Let g(x, y) = x − y, h_1(x) = x + 1, and h_2(x) = |x| be functions defined on binary numbers. Then h_2 ∈ LOGSPACE (Lemma 2.1.7) and g and h_1 are in ZEROSPACE (Lemma 2.1.6). Hence by the Generalized Space Composition Lemma, φ(x) = g(h_1(x), h_2(x)) is in LOGSPACE.
The argument for φ^{-1} is more complicated. Suppose we are given b ∈ Bin(w) on the input tape. The machine must output x ∈ Bin(w) \ {1}* such that φ(x) = b. Before we describe our machine's operation, consider for a moment the x ∈ Bin(w) \ {1}* such that φ(x) = b and suppose |x| = k. Since x is not all 1's, we have |x + 1| = k also. We claim that |b| = k or |b| = k − 1. We have φ(x) = x + 1 − k = b, and hence |b| ≤ |x + 1| = k. Now to show that |b| ≥ k − 1, it suffices to show that b = x + 1 − k ≥ 2^{k−2}. Since x ≥ 2^{k−1}, we have x + 1 − k ≥ 2^{k−1} + 1 − k. Now if 2^{k−1} + 1 − k < 2^{k−2}, then we have 2^{k−1} − 2^{k−2} < k − 1, that is, 2·2^{k−2} − 2^{k−2} < k − 1, and hence 2^{k−2} < k − 1, a contradiction. It follows that b = x + 1 − k ≥ 2^{k−1} + 1 − k ≥ 2^{k−2}, and therefore k − 1 ≤ |b| ≤ k, as we claimed. Now since φ(x) = b = x + 1 − |x|, and |b| = k = |x| or |b| = k − 1 = |x| − 1, we are left with only two possibilities for the output x: either x = b + |b| − 1 or x = b + |b|. Suppose it so happens that x = b + |b| − 1, which will be the case if |b| = k = |x|. Then φ(x) = b implies x + 1 − |x| = b, i.e., (b + |b| − 1) + 1 − |(b + |b| − 1)| = b, and hence |b| = |(b + |b| − 1)|.
Thus computing x = φ^{-1}(b) involves checking whether |b| = |(b + |b| − 1)|, in which case x = b + |b| − 1; otherwise x = b + |b|. To show that all this can be done without using space more than logarithmic in |b|, consider the following functions defined on binary numbers or binary number pairs: h_1(y) = |y|, h_2(y) = y − 1, h_3(y, z) = y + z, and h_4 is the order relation on Bin(w), that is, h_4(y, z) = 1 (resp. 0) if y ≥ z (resp. y < z). Now h_1 is in LOGSPACE, while the remaining h_i are in ZEROSPACE. It follows from the Generalized Space Composition Lemma that checking whether |b| = |(b + |b| − 1)|, which is the same as checking whether N = h_4[h_1(b), h_1(h_3(h_1(b), h_2(b)))] is 1 or 0, is in LOGSPACE. Since x = h_3(h_2(b), h_1(b)) if N = 1 and x = h_3(id(b), h_1(b)) if N = 0, the computation of N followed by the outputting of x is in LOGSPACE.
El
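In plain arithmetic the map φ and its inverse can be sketched as follows. This Python illustration works with integers and their binary lengths rather than strings; for the inverse it simply tests which of the two candidates b + |b| − 1 and b + |b| lies in the domain and maps to b, a slight variant of the length test used in the proof (in particular it skips a candidate that is a string of 1's, which lies outside Bin(w) \ {1}*).

def blen(n):
    """Length of the binary representation of n (with |0| = 1)."""
    return max(n.bit_length(), 1)

def is_all_ones(n):
    """True if bin(n) consists of 1's only, i.e. n = 2^k - 1 for some k >= 1."""
    return n >= 1 and (n & (n + 1)) == 0

def phi(x):
    """phi(x) = x + 1 - |x| on Bin(w) \\ {1}*."""
    assert not is_all_ones(x)
    return x + 1 - blen(x)

def phi_inv(b):
    """Exactly one of b + |b| - 1 and b + |b| is the preimage of b."""
    for x in (b + blen(b) - 1, b + blen(b)):
        if not is_all_ones(x) and x + 1 - blen(x) == b:
            return x
    raise ValueError("unreachable if phi is onto")

# order-isomorphism check on an initial segment of the domain
dom = [x for x in range(300) if not is_all_ones(x)]
assert [phi(x) for x in dom] == list(range(len(dom)))
assert all(phi_inv(phi(x)) == x for x in dom)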

Lemma 2.3.5. The set Bin(w) is ZEROSPACE order-isomorphic to the set {0, 1}*.

Proof: Let the mapping f from {0, 1}* to Bin(w) be defined by f(σ) = (σ⌢1) − 1 if σ ≠ ∅ and f(∅) = 0. We note that since σ⌢1 ∈ Bin(w), we have (σ⌢1) − 1 ∈ Bin(w), and so f is well-defined.
If σ_1 < σ_2 in the reverse lexicographic ordering of {0, 1}*, then either σ_2 is longer than σ_1 or the first 1 (counting from the right) of σ_2 occurs to the right of the first 1 of σ_1. In either case, we end up with (σ_1⌢1) < (σ_2⌢1) in Bin(w). Hence f is one-to-one and order-preserving. And f is onto because for every binary number b, we have b = (b + 1) − 1, and if b ≠ 0, then b + 1 = σ⌢1 for some string σ ∈ {0, 1}*.
It remains to show that both f and f^{-1} are in ZEROSPACE.
Given σ on the input tape, our machine outputs (σ⌢1) − 1 without using any space as follows: If σ is the empty string, then the machine outputs 0. Otherwise, the machine composes the ZEROSPACE appending of 1 at the end of the input string followed by the ZEROSPACE subtraction of 1 in binary.
For the other direction, our machine adds 1 in binary but simply does not write the very last symbol on the output tape. This is done as follows: Given a binary number b on the input tape, if b = 0, the machine does nothing, and if b is all 1's, the machine outputs a 0 for each 1 of b. Otherwise, the machine uses special states that allow the input cursor to advance two places to the right from each position i on the input tape, and then return to position i. Once the input cursor returns to position i, the machine outputs the ith symbol of b + 1, and stops after outputting a symbol of b + 1 only if the input cursor encountered a ⊔ the last time it advanced two places to the right.
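For illustration, the map σ ↦ (σ⌢1) − 1 and its inverse can be written out over strings directly. The Python sketch below assumes, as in the machine descriptions above, that binary strings are written least significant bit first, so that a canonical representation is "0" or a string ending in 1; it is a plain string computation, not a space-bounded machine.

def to_int(s):
    """Value of a bit string written least significant bit first."""
    return sum(2 ** i for i, bit in enumerate(s) if bit == "1")

def to_bin(n):
    """Canonical least-significant-first binary string of n."""
    return "0" if n == 0 else bin(n)[2:][::-1]

def f(sigma):
    """f(sigma) = (sigma followed by a 1, read as a binary number) minus 1."""
    return to_bin(to_int(sigma + "1") - 1)

def f_inv(b):
    """Add 1 and drop the final (most significant) 1."""
    return to_bin(to_int(b) + 1)[:-1]

from itertools import product
strings = [""] + ["".join(p) for L in range(1, 5) for p in product("01", repeat=L)]
values = [f(s) for s in strings]
assert sorted(to_int(v) for v in values) == list(range(len(strings)))  # onto an initial segment
assert all(f_inv(f(s)) == s for s in strings)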


It is not easy to generalize the above lemma completely by proving the existence of a ZEROSPACE order-isomorphism between Bk(w) and {0, 1, ..., k − 1}*. More specifically, we cannot change the map in the proof of that lemma to the map σ ↦ (σ⌢(k − 1)) − (k − 1), as this map is not onto Bk(w). However, the existence of a LOGSPACE set-isomorphism between Bk(w) and {0, 1, ..., k − 1}* is implied by the next three lemmas.

Lemma 2.3.6. For each k ≥ 3, the set Bk(w) is ZEROSPACE set-isomorphic to the (k−1)-fold disjoint union ({0, 1, ..., k − 1}* \ {∅}) ⊕ ... ⊕ ({0, 1, ..., k − 1}* \ {∅}).








Proof: For each n ≥ 1, let 0^n denote a string of n zeros. Let σ denote a string which has at least one nonzero symbol from the set {0, 1, ..., k − 1}. We define a mapping f from Bk(w) to the (k−1)-fold disjoint union using the following set of rules:

0 ↦ (0, 0)
1 ↦ (0, 00)          0^n⌢1 ↦ (0, 0^{n+2})          σ⌢1 ↦ (0, σ)
2 ↦ (1, 0)           0^n⌢2 ↦ (1, 0^{n+1})          σ⌢2 ↦ (1, σ)
3 ↦ (2, 0)           0^n⌢3 ↦ (2, 0^{n+1})          σ⌢3 ↦ (2, σ)
...
k − 2 ↦ (k − 3, 0)   0^n⌢(k − 2) ↦ (k − 3, 0^{n+1})   σ⌢(k − 2) ↦ (k − 3, σ)
k − 1 ↦ (k − 2, 0)   0^n⌢(k − 1) ↦ (k − 2, 0^{n+1})   σ⌢(k − 1) ↦ (k − 2, σ)

This mapping is defined on all of Bk(w) because every nonzero k-ary number ends in one of 1, ..., k − 1. Evidently f is one-to-one and onto the disjoint union. It is also evident that computing the f-value of a k-ary number or the f^{-1}-value of an element of the disjoint union does not use up any space, because k-ary addition and subtraction are in ZEROSPACE.



Lemma 2.3.7. For each k ≥ 3, the (k−1)-fold disjoint union ({0, 1, ..., k − 1}* \ {∅}) ⊕ ... ⊕ ({0, 1, ..., k − 1}* \ {∅}) is ZEROSPACE set-isomorphic to the set {0, 1, ..., k − 1}* \ {∅}.

Proof: As in the previous proof, we let 0^n denote a string of n zeros for n ≥ 1. Let σ denote a string that has at least one nonzero symbol from the set {0, 1, ..., k − 1}, and let τ denote a (possibly empty) string of {0, 1, ..., k − 1}*. We define a mapping from {0, 1, ..., k − 1}* \ {∅} to the (k−1)-fold disjoint union using the following set of rules:

2 ↦ (1, 0)           2⌢0^n ↦ (1, 0^{n+1})          2⌢σ ↦ (1, σ)
3 ↦ (2, 0)           3⌢0^n ↦ (2, 0^{n+1})          3⌢σ ↦ (2, σ)
...
k − 2 ↦ (k − 3, 0)   (k − 2)⌢0^n ↦ (k − 3, 0^{n+1})   (k − 2)⌢σ ↦ (k − 3, σ)
k − 1 ↦ (k − 2, 0)   (k − 1)⌢0^n ↦ (k − 2, 0^{n+1})   (k − 1)⌢σ ↦ (k − 2, σ)








In order to deal with strings that start with a 0 or a 1, we define the following additional set of rules:

1 ↦ (0, (k − 1)⌢0)      1⌢0^n ↦ (0, (k − 1)⌢0^{n+1})      1⌢σ ↦ (0, (k − 1)⌢σ)
0^n ↦ (0, 0^n)
0⌢(k − 1)⌢τ ↦ (0, (k − 2)⌢τ)      0^{n+1}⌢(k − 1)⌢τ ↦ (0, 0^n⌢(k − 2)⌢τ)
0⌢(k − 2)⌢τ ↦ (0, (k − 3)⌢τ)      0^{n+1}⌢(k − 2)⌢τ ↦ (0, 0^n⌢(k − 3)⌢τ)
...
0⌢2⌢τ ↦ (0, 1⌢τ)      0^{n+1}⌢2⌢τ ↦ (0, 0^n⌢1⌢τ)
0⌢1⌢τ ↦ (0, 0⌢(k − 1)⌢τ)      0^{n+1}⌢1⌢τ ↦ (0, 0^{n+1}⌢(k − 1)⌢τ)

The additional rules above now ensure that the mapping is defined on all of the set {0, 1, ..., k − 1}* \ {∅}. As in the proof of the previous lemma, it is evident that the mapping is both one-to-one and onto, and also in ZEROSPACE.



Lemma 2.3.8. For each k ≥ 3, the set {0, 1, ..., k − 1}* \ {∅} is ZEROSPACE set-isomorphic to the set {0, 1, ..., k − 1}*.

Proof: We can equip the set D = {0, 1, ..., k − 1}* with the lexicographic ordering (in which case ∅ will be the smallest element in D). Then we can map every element of D to its immediate successor (relative to the ordering) in D \ {∅}. The successor of ∅ is 0, and that of the string (k − 1)^n (i.e., the string of length n ≥ 1 all of whose symbols are k − 1) is 0^{n+1}. And the successor of the string (k − 1)^n⌢s⌢τ, where 0 ≤ s ≤ k − 2 and τ is a (possibly empty) string, is 0^n⌢(s + 1)⌢τ. Hence, it is now evident that the successor function from D to D \ {∅} is a ZEROSPACE bijection.
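The successor function just described can be sketched directly. In the following Python illustration the symbols 0, ..., k − 1 are written as single decimal characters, so the sketch assumes k ≤ 10; iterating it from the empty string simply enumerates D \ {∅} without repetition.

def successor(s, k):
    """Successor of a string s over {0, ..., k-1} in the ordering of Lemma 2.3.8.

    The empty string maps to "0"; a string of n symbols k-1 maps to n+1 zeros;
    and (k-1)^n followed by s <= k-2 followed by tau maps to 0^n (s+1) tau.
    """
    digits = [int(c) for c in s]
    n = 0
    while n < len(digits) and digits[n] == k - 1:
        n += 1
    if n == len(digits):                  # s is empty or consists of (k-1)'s only
        return "0" * (len(digits) + 1)
    return "0" * n + str(digits[n] + 1) + s[n + 1:]

# for k = 3, iterating from the empty string hits distinct nonempty strings
k = 3
seen, s = set(), ""
for _ in range(200):
    s = successor(s, k)
    assert s != "" and s not in seen
    seen.add(s)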



Lemma 2.3.9. For each k ≥ 3, the set {0, 1, ..., k − 1}* is LOGSPACE set-isomorphic to the set {0, 1}*.








Proof: First let g : {0, 1, ..., k − 1} → {0, 1}* be the function defined by g(0) = 0^{k−1} and g(i) = 0^{i−1}1 for 1 ≤ i ≤ k − 1. We follow the convention that 0^0⌢1 denotes 1, so that g(1) is defined.
Now define the function f : {0, 1, ..., k − 1}* → {0, 1}* using g as follows:
f(∅) = ∅, f(0^n) = 0^n, f(σ⌢0^n) = f(σ)⌢0^n, where σ is a string with at least one nonzero symbol, and f(σ_0σ_1 ... σ_{n−1}) = g(σ_0)⌢g(σ_1)⌢ ... ⌢g(σ_{n−1}), where the σ_i ∈ {0, 1, ..., k − 1} and at least one σ_i is nonzero.
To prove that f is one-to-one, let x = x_0x_1 ... x_{m−1} and y = y_0y_1 ... y_{n−1} be distinct strings of {0, 1, ..., k − 1}*. If one of them is ∅, then certainly their f-values are different. If both x and y have length 1, then their g-values, and hence, their f-values are different. If either x or y is only 0s, then again their f-values are different. Otherwise let 0 < i ≤ min{m, n}, and let the ith symbol be the first symbol where x and y differ. Then f(x_0 ... x_{i−2}) = g(x_0) ... g(x_{i−2}) = g(y_0) ... g(y_{i−2}) = f(y_0 ... y_{i−2}). But g(x_{i−1}) ≠ g(y_{i−1}), as one of these strings has a 1 where the other does not. Consequently, we have f(x) ≠ f(y) as well.
To prove that f is onto, we first note that ∅, 0, and 1 all have inverse images under f. Now suppose that every string of {0, 1}* of length ≤ n has an inverse image under f. Let x = x_0x_1 ... x_n ∈ {0, 1}*. Then f^{-1}(x_1 ... x_n) exists by the induction hypothesis. If x_0 = 1, then f^{-1}(x) = 1⌢f^{-1}(x_1 ... x_n). But if x_0 = 0, then either x is all 0's, in which case x is its own inverse image, or the first 1 of x occurs in position i, where 1 ≤ i ≤ n. If i < k − 1, then since f^{-1}(x_{i+1} ... x_n) exists by the induction hypothesis, we have f^{-1}(x) = i⌢f^{-1}(x_{i+1} ... x_n). If i = q(k − 1) for some q ≥ 1, then f^{-1}(x) = 0^{q−1}⌢(k − 1)⌢f^{-1}(x_{i+1} ... x_n). And if i = q(k − 1) + r for some q ≥ 1 and 0 < r < k − 1, then f^{-1}(x) = 0^q⌢r⌢f^{-1}(x_{i+1} ... x_n).
Now to prove that f is in LOGSPACE, suppose we are given a string x of
{0, 1,... ,k 1}* on the input tape. If x = 0, the machine does nothing. If x is
all zeros, the machine copies x on to the output tape. Otherwise, the machine must









determine if x ends in a sequence of O's and the position where this sequence starts.

To that end, the machine adds 1 in binary on a work tape TI each time it reads a

symbol of x. Then, if x ends with a 0, the machine adds 1 in binary on a separate

work tape T2 each time it reads a 0 while it reads x backwards, and stops incrementing

on T2 once it encounters a nonzero symbol of x. After that, the machine subtracts

the binary number on T2 from the number on T1, and stores the result on another

work tape T3. Now the machine begins reading x from the left. Each time it reads a

symbol of x it outputs the g-value of that symbol, subtracts 1 from T3, and proceeds

to the next symbol of x. When the number on T3 becomes 0, the machine copies the

rest of x (which will be zeros only provided the content of T2 is nonzero) on to the

output tape.

Finally, we prove that f-1 can be computed in LOGSPACE. The machine

employs the work tapes T1, T2, and T3 exactly as above. In addition, the machine

has k 1 special states to detect any occurrence of a string of k 1 zeros. Now

suppose we are given x E {0, 1}* on the input tape. If x = 0, then the machine does

nothing. If x is all zeros, then the machine copies x on to the output tape. If x is not

all zeros, the procedure is as follows: Each time the machine reads a 1, it subtracts

1 from T3, and then writes a 1 on the output tape until, if ever, the machine reads

the first 0 of x. As soon as it reads the first 0 of x, the machine switches to the

first of the k 1 special states. It switches to each of these k 1 special states, one

after another, each time it reads a 0, until it reads a 1 of x. (We emphasize that the

machine always subracts a 1 on T3 when it reads a new input symbol). If the last of

the k 1 special states is reached without encountering a 1, then the machine outputs

a 0 (since g(0) = Ok-1). Then if the machine still does not encounter a 1, it switches

back to the first of the k 1 special states and the procedure is repeated. If a 1 is

encountered immediately after switching to the last of the k 1 special states, then

the machine outputs a 1 and proceeds to the next symbol of x. If a 1 is encountered








while the machine is still in one of the k 1 special states, that is, in some special
state i, with 1 < i < k, then the machine outputs i (since g(i) = Oi-11.) Finally,
when the content of T3 becomes 0, the machine copies the rest of x (which will be
zeros only provided the content of T2 is nonzero) on to the output tape.
O
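As an illustration of the encoding direction only, the following Python sketch computes f by copying the maximal block of trailing zeros verbatim and encoding the remaining symbols through g, which is how the machine above handles the input. Digits are written as single characters (so the sketch assumes k ≤ 10), and the final assertion merely spot-checks injectivity on short strings; it says nothing about space usage.

def g(i, k):
    """g(0) = 0^(k-1); g(i) = 0^(i-1) 1 for 1 <= i <= k-1."""
    return "0" * (k - 1) if i == 0 else "0" * (i - 1) + "1"

def f(x, k):
    """Encode a string over {0,...,k-1} as a string over {0,1}: trailing zeros
    are copied verbatim, the rest is encoded symbol by symbol via g."""
    digits = [int(c) for c in x]
    j = len(digits)
    while j > 0 and digits[j - 1] == 0:
        j -= 1                             # j marks the start of the trailing zeros
    return "".join(g(d, k) for d in digits[:j]) + "0" * (len(digits) - j)

# a small sanity check for k = 4: f is injective on all strings of length <= 5
from itertools import product
k = 4
strings = [""] + ["".join(p) for L in range(1, 6) for p in product("0123", repeat=L)]
images = [f(s, k) for s in strings]
assert len(set(images)) == len(images)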

Lemma 2.3.10. For each k ≥ 3, there exists a LOGSPACE bijection f : Bin(w) → Bk(w). Furthermore, this f has the following property: There exist constants c_1, c_2 > 0 such that for every n ∈ w, we have |f(bin(n))| ≤ c_1|bin(n)| and |f^{-1}(b_k(n))| ≤ c_2|b_k(n)|.

Proof: The existence of f is an immediate consequence of Lemmas 2.3.5–2.3.9 and the Space Composition Lemmas I and II. As for the property of f in the statement of the current lemma, an examination of the bijections constructed in the proofs of each of the Lemmas 2.3.5–2.3.9 shows immediately that each of these bijections has this property. Since f is the composition of these bijections, it is now evident that f also has this property.


The remainder of this section consists of six lemmas, the first four of which deal
with certain subsets and combinations of Tal(w) and Bin(w). We first prove a certain
boundedness on the lengths of elements of subsets of Bin(w) that are LOGSPACE
set-isomorphic to Tal(w). Our proof is the same as the proof of Lemma 2.4 (a) in
Cenzer & Remmel [3]. But we reproduce that proof since we shall refer to it in the
proof of Lemma 2.3.12, where we show that Tal(w) and Bin(w) are not LOGSPACE
set-isomorphic, and also in the proof of Lemma 2.3.13.

Lemma 2.3.11. Let B be a subset of Bin(w) which is LOGSPACE set-isomorphic to Tal(w), and let b_0, b_1, b_2, ... list the elements of B in the standard ordering, first by length and then lexicographically. Then for some j, k and all n ≥ 2, we have n ≤ |b_n|^j and |b_n| ≤ n^k.







Proof: Let φ be a LOGSPACE set-isomorphism from Tal(w) onto B. By Corollary 2.2.2, there is a k such that |φ(1^n)| ≤ n^k for all n ≥ 2. Moreover, we may assume that k is large enough so that |φ(0)| and |φ(1)| are both ≤ 2^k. Then, since φ is a bijection, there are, for each n ≥ 2, at least n + 1 distinct elements φ(0), φ(1), φ(11), ..., φ(1^n) of B all having length ≤ n^k. And since the elements b_0, b_1, b_2, ... of B are listed in order, all of the elements b_0, b_1, ..., b_n have length ≤ n^k.
To prove the other inequality, we first note that 0 and 1 are the only elements of Bin(w) with length ≤ 1, and hence we have |b_n| ≥ 2 for all n ≥ 2. Since φ^{-1} is in LOGSPACE, again by Corollary 2.2.2 there is a j such that |φ^{-1}(b_n)| ≤ |b_n|^j for all n ≥ 2. And as before, we may assume that |φ^{-1}(0)| and |φ^{-1}(1)| are both ≤ 2^j. Then, since φ^{-1} is a bijection, there are, for each n ≥ 2, at least n + 1 distinct elements φ^{-1}(b_0), φ^{-1}(b_1), ..., φ^{-1}(b_n) of Tal(w) all having length ≤ |b_n|^j. It follows that all of 0, 1, ..., n are ≤ |b_n|^j.



Lemma 2.3.12. For any infinite set M of natural numbers, tal(M) and bin(M) are
not LOGSPACE set-isomorphic. In particular, the sets Tal(w) and Bin(w) are not
LOGSPACE set-isomorphic.

Proof: Let m_0, m_1, ... list the elements of M in order. Let tal(M) = {a_0, a_1, ...}, where a_i = tal(m_i), and let bin(M) = {b_0, b_1, ...}, where b_i = bin(m_i).
Suppose φ : tal(M) → bin(M) is a LOGSPACE bijection. Then by the argument in the second paragraph of the proof of the previous lemma, there is a j such that |φ^{-1}(b_n)| ≤ |b_n|^j for all n ≥ 2. The same argument then allows us to conclude that for each n ≥ 2, there are at least n + 1 distinct elements φ^{-1}(b_0), φ^{-1}(b_1), ..., φ^{-1}(b_n) of tal(M) all having length ≤ |b_n|^j. Hence one of these elements of tal(M) must be a_n. It follows that |a_n| ≤ |b_n|^j for all n ≥ 2. Since a_n = tal(m_n) and b_n = bin(m_n), we have m_n ≤ |bin(m_n)|^j for all n ≥ 2. By Lemma 2.1.2 (a), we have 2^{|bin(m_n)|−1} ≤ m_n + 1, and hence 2^{|bin(m_n)|−1} ≤ |bin(m_n)|^j + 1 for all n ≥ 2. This is evidently a contradiction since j is fixed and M is infinite.
El

We now provide a characterization of those LOGSPACE subsets of Tal(w)

that are LOGSPACE set-isomorphic to the whole of Tal(w).

Lemma 2.3.13. Let A be a LOGSPACE subset of Tal(w), and let a_0, a_1, ... list the elements of A in the standard ordering. Then the following are equivalent:
(a) A is LOGSPACE set-isomorphic to Tal(w).
(b) For some k and all n ≥ 2, we have |a_n| ≤ n^k.
(c) The canonical bijection between Tal(w) and A that associates 1^n with a_n, n ≥ 0, is in LOGSPACE.

Proof: (a) => (b). We simply use the argument in the first paragraph of the proof

of Lemma 2.3.11.

(c) => (a). This is immediate since bijections are set-isomorphisms.

(b) ⟹ (c). We first note that the map a_n ↦ 1^n is in LOGSPACE even without (b). To see this, suppose we are given a ∈ A on the input tape. Our machine writes

down bin(a) on a work tape W1 and bin(0) on another work tape W2. It then

composes the LINSPACE conversion of the binary number on W2 to tally and the

testing of whether this tally number is in A. This composition uses up space linear

in the length of the contents of W2, and hence logarithmic in lal. If the test for

membership in A is positive, then the machine writes a 1 on the output tape. Upon

completing the above composition, the machine adds 1 in binary on W2, subtracts
1 in binary on W1, and then repeats the procedure until the content of WI is 0.

At this point, if none of the tests for membership in A had been positive, that is,

the machine had not written a single 1 on the output tape, the machine outputs 0,

signifying a = a0o, the very first element of A. Otherwise, the machine writes one

more 1 on the output tape.








Now to see that the map 1^n ↦ a_n is in LOGSPACE, assume (b) and suppose we are given 1^n on the input tape. Since |a_n| ≤ n^k for n ≥ 2, the idea is to keep checking 0, 1, 11, 111, ..., 1^{n^k} for membership in A, and to output the nth element
of A found. Our machine begins by writing a 0 in binary on a work tape T. Then

on separate work tapes, the machine simulates the composition of the following two
procedures: (i) The LINSPACE conversion of the binary number i on tape T to

the tally number tal(t), and (ii) The LOGSPACE testing of whether tal(t) E A. If
tal(t) e A, then the cursor on the input tape moves right, a 1 is added in binary

on tape T, and the above simulation of the composition of the two procedures is
repeated. However if tal(t) A, then a 1 is added in binary on tape T, but the
cursor on the input tape does not move right. The simulation of the composition of
procedures (i) and (ii) is then carried out on the current content of T. The whole
process is repeated until the cursor on the input tape encounters a U. This means

that the very last element of A found during the simulation of the composition of

(i) and (ii) is the nth one. At this point, the machine keeps subtracting 1 in binary
from tape T and writes a 1 on the output tape with each such subtraction, thereby

outputing the nth element. Now by the Space Compositon Lemma III, the simulation
of the composition of procedures (i) and (ii) uses up space linear in the contents of T.

Since |a_n| ≤ n^k, the correct answer would have been written on the output tape no later than when the binary number t on tape T gets incremented to bin(n^k). We have |t| ≤ |bin(n^k)| = O(log(n^k)) = O(log n). This means that although the composition of procedures (i) and (ii) uses up space linear in the contents of T, the space used up is logarithmic in the input 1^n.
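The canonical bijection of part (c) amounts to counting through the members of A, which can be sketched as follows. This Python illustration represents tally numbers by ordinary integers and takes the membership test as an abstract predicate; it mirrors only the search order of the machine above, not its space bookkeeping, and the bound |a_n| ≤ n^k is what guarantees (in the proof, not in the sketch) that the search stays within logarithmic space.

def nth_element(in_A, n):
    """Return a_n, the n-th element (n = 0, 1, ...) of A = {t : in_A(t)},
    by testing 0, 1, 2, ... for membership, as in the proof of (b) => (c)."""
    count, t = -1, 0
    while True:
        if in_A(t):
            count += 1
            if count == n:
                return t
        t += 1

def index_of(in_A, a):
    """The other direction a_n -> n: count the members of A below a."""
    return sum(1 for t in range(a) if in_A(t))

# example: A = the even numbers, viewed as a subset of Tal(w)
in_A = lambda t: t % 2 == 0
assert [nth_element(in_A, n) for n in range(5)] == [0, 2, 4, 6, 8]
assert [index_of(in_A, a) for a in (0, 2, 4, 6, 8)] == [0, 1, 2, 3, 4]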


The next lemma deals with the space complexity classes of certain Cartesian
products and disjoint unions.








Lemma 2.3.14. Let A be a nonempty LOGSPACE subset of Tal(w). Then we have
(a) A ⊕ Tal(w) is LOGSPACE set-isomorphic to Tal(w) and A ⊕ Bin(w) is LOGSPACE set-isomorphic to Bin(w).
(b) A × Tal(w) is LOGSPACE set-isomorphic to Tal(w) and A × Bin(w) is LOGSPACE set-isomorphic to Bin(w).
(c) Both Bin(w) ⊕ Bin(w) and Bin(w) × Bin(w) are LOGSPACE set-isomorphic to Bin(w).
(d) If B is a nonempty finite subset of Bin(w), then both B ⊕ Bin(w) and B × Bin(w) are LOGSPACE set-isomorphic to Bin(w).

Proof: (a) First let C = {2a : a ∈ A} ∪ {2n + 1 : n ∈ Tal(w)} ⊆ Tal(w) and let φ : A ⊕ Tal(w) → C be defined by φ((0, a)) = 2a and φ((1, tal(n))) = 2n + 1. Since the parity of an input string can be checked by a Turing machine using two special states, it is evident that φ is a ZEROSPACE bijection. Now C is a LOGSPACE subset of Tal(w) because (i) C contains every odd number, and (ii) given an even number a (in tally) on an input tape, a Turing machine can simulate the composition of the ZEROSPACE halving of a and the LOGSPACE testing of whether a/2 ∈ A using up only logarithmic space. And since C is a LOGSPACE subset of Tal(w), we can enumerate C in increasing order, yielding C = {c_0, c_1, ...}. From the observation that C contains every odd number, we can conclude that c_n ≤ 2n + 1. Hence by the previous lemma, C is LOGSPACE set-isomorphic to Tal(w). It now follows that A ⊕ Tal(w) is LOGSPACE set-isomorphic to Tal(w) by the Space Composition Lemma III.
Next let ψ : A ⊕ Bin(w) → [A ⊕ Tal(w)] ⊕ [Bin(w) \ Tal(w)] be defined by ψ((0, a)) = (0, (0, a)), ψ((1, bin(n))) = (1, bin(n)) if bin(n) ∉ {0} ∪ {1}*, and ψ((1, bin(n))) = (0, (1, bin(n))) if bin(n) ∈ {0} ∪ {1}*. Evidently ψ is a ZEROSPACE bijection. By the previous paragraph, A ⊕ Tal(w) is LOGSPACE set-isomorphic to Tal(w). Hence A ⊕ Bin(w) is LOGSPACE set-isomorphic to Tal(w) ⊕ [Bin(w) \ Tal(w)]








by the Space Composition Lemma III. But Tal(w) )[Bin(w)\Tal(w)] is ZEROSPACE
set-isomorphic to Bin(w). Hence AEBin(w) is LOGSPACE set-isomorphic to Bin(w).
(b) If A has only one element, this is trivial. If A has at least two elements, let a_0 be one of them, and let φ : A × Tal(w) → [{a_0} × Tal(w)] ⊕ [(A \ {a_0}) × Tal(w)] be defined by φ((a_0, tal(n))) = (0, (a_0, tal(n))) and φ((a, tal(n))) = (1, (a, tal(n))) for a ≠ a_0, a ∈ A. Then φ is a ZEROSPACE bijection. Now {a_0} × Tal(w) is evidently ZEROSPACE set-isomorphic to Tal(w). And we claim that (A \ {a_0}) × Tal(w) is LOGSPACE set-isomorphic to some LOGSPACE subset D of Tal(w). To see this, let g : (A \ {a_0}) × Tal(w) → Tal(w) be defined by g((a, tal(n))) = [a, tal(n)], where [·,·] is the pairing function of Lemma 2.3.2. By that same lemma, g is in LOGSPACE. Now let D be the range of g. Then D is a LOGSPACE subset of Tal(w) because to check whether x ∈ D, we just check whether the "first component" of g^{-1}(x) is in A \ {a_0}, and both g^{-1} and A are in LOGSPACE, the former by Lemma 2.3.2 (b). Thus [{a_0} × Tal(w)] ⊕ [(A \ {a_0}) × Tal(w)] is LOGSPACE set-isomorphic to Tal(w) ⊕ D, which is itself LOGSPACE set-isomorphic to Tal(w) by part (a). It now follows from the Space Composition Lemma III that A × Tal(w) is LOGSPACE set-isomorphic to Tal(w).
Finally, consider the mapping φ from Tal(w) × Bin(w) to Bin(w) given by: (1^m, bin(n)) ↦ 0^m⌢1⌢bin(n) for n ≠ 0, (1^m, 0) ↦ 0^m⌢1, (0, 0) ↦ 0, (0, bin(n)) ↦ 1⌢bin(n), and (0, 1) ↦ 1. Evidently φ is a ZEROSPACE bijection. So A × Bin(w) is ZEROSPACE set-isomorphic to A × [Tal(w) × Bin(w)], which in turn is evidently ZEROSPACE set-isomorphic to [A × Tal(w)] × Bin(w), which by the previous paragraph is LOGSPACE set-isomorphic to Tal(w) × Bin(w), which again is ZEROSPACE set-isomorphic to Bin(w) via φ. The result now follows from the Space Composition Lemma III.
(c) Define φ : Bin(w) ⊕ Bin(w) → Bin(w) by (0, bin(n)) ↦ bin(2n) and (1, bin(n)) ↦ bin(2n + 1). Since binary multiplication and addition are in LOGSPACE, it follows that φ is in LOGSPACE. And φ^{-1} is in ZEROSPACE. This is because the computations bin(2n) ↦ bin(n) and bin(2n + 1) ↦ bin(n) both involve division by 2, which in turn involves the Turing machine simply ignoring the first symbol of the input and copying the rest of the input onto the output tape.

That Bin(w) × Bin(w) is LOGSPACE set-isomorphic to Bin(w) is simply Lemma 2.3.3.
(d) Evidently there is a ZEROSPACE set-isomorphism φ from B to C = {bin(0), bin(1), ..., bin(N − 1)} for some N ≥ 1. Therefore, the set B ⊕ Bin(w) is ZEROSPACE set-isomorphic to C ⊕ Bin(w). Moreover, C ⊕ Bin(w) is evidently ZEROSPACE set-isomorphic to Bin(w) via the isomorphism (0, bin(n)) ↦ bin(n) and (1, bin(n)) ↦ bin(n + N).
Using the set C of the previous paragraph, the set B × Bin(w) is ZEROSPACE set-isomorphic to C × Bin(w), which in turn is LOGSPACE set-isomorphic to Bin(w) via the isomorphism (bin(i), bin(x)) ↦ bin(i) + bin(N) · bin(x), where 0 ≤ i ≤ N − 1.
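The even/odd coding used in part (c) can be summarized by the following minimal Python sketch, written over ordinary integers rather than reverse-binary strings; it is included only as an illustration of the map and its inverse.

```python
def phi(tagged):
    """Map (0, n) -> 2n and (1, n) -> 2n + 1, coding Bin(w) + Bin(w) into Bin(w)."""
    tag, n = tagged
    return 2 * n + tag

def phi_inv(m):
    """Recover the tag from the least significant bit and halve, as in the proof."""
    return (m % 2, m // 2)

assert phi_inv(phi((1, 13))) == (1, 13)
```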


The final two lemmas in this section deal with embedding sets of finite strings
into sets of strings of zeros and ones only.

Lemma 2.3.15. Let Σ be a finite alphabet. There is an embedding η of Σ* into Bin(w) which is in ZEROSPACE in either direction.

Proof: We may identify Σ with {0, 1, ..., n} for some n. Let η(0) = 0 and let η(i_1 i_2 ⋯ i_k) = 0^{i_1}1 0^{i_2}1 ⋯ 0^{i_k}1.
To show that η is in ZEROSPACE, suppose we are given a string σ from Σ* on the input tape. If σ = 0, our machine outputs 0. Otherwise, each time the machine reads a symbol i_j of σ, it switches to a state q_{i_j} and writes i_j zeros on the output tape, followed by a 1.
As for η^{-1}, if our machine is given 0 on its input tape, it does nothing. If given a nonzero b ∈ Bin(w) on the input tape, the machine writes a 0 on the output tape if the first symbol of b is 1. But if the first symbol of b is 0, then the machine switches to the first of n special states q_1, ..., q_n. Then each time the machine reads a 0, it switches to the next special state. Then if it reads a 1 while at state q_i, 1 ≤ i ≤ n, the machine writes i on the output tape, switches to the starting state, and repeats the procedure starting with the next input symbol. But if the machine reads a 0 while in state q_n, then it outputs an error message because the input does not have the correct form.
O
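The embedding η and its inverse admit the following short Python sketch over lists of symbols drawn from {0, ..., n}. It is an illustration only: words are lists of integers, the empty word is the empty list, and the error behaviour mirrors the machine's error message for malformed inputs.

```python
def eta(word):
    """Encode a word over {0,...,n} as blocks 0^i 1, one block per symbol."""
    if not word:
        return "0"                      # distinguished code of the empty word
    return "".join("0" * i + "1" for i in word)

def eta_inv(code, n):
    """Decode by counting zeros in front of each 1, rejecting runs longer than n."""
    if code == "0":
        return []
    word, zeros = [], 0
    for bit in code:
        if bit == "0":
            zeros += 1
            if zeros > n:
                raise ValueError("input does not have the correct form")
        else:                           # bit == "1": one symbol is complete
            word.append(zeros)
            zeros = 0
    if zeros:                           # trailing zeros with no closing 1
        raise ValueError("input does not have the correct form")
    return word

assert eta_inv(eta([2, 0, 1]), n=2) == [2, 0, 1]
```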


Lemma 2.3.16. The coding function (σ_0, σ_1, ..., σ_k)_k for σ_0, ..., σ_k ∈ {0, 1}*, defined by (σ_0, σ_1, ..., σ_k)_k = η(σ_0⌢2⌢σ_1⌢2⌢⋯⌢σ_{k−1}⌢2⌢σ_k), where η is the embedding of the previous lemma, is in ZEROSPACE in either direction.

Proof: Coding is in ZEROSPACE since our machine simply punctuates the computation of each η(σ_i) by writing 001 = η(2) on the output tape. Uncoding is in ZEROSPACE because each occurrence of 001 induces the machine to output a 2, while for any other combination of 0's and 1's, the machine needs to use just two states to count the number of zeros.
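Continuing the sketch above (and assuming the same hypothetical eta and eta_inv helpers), the tuple coding of this lemma just inserts the separator symbol 2 between the binary strings before applying η:

```python
def code_tuple(strings):
    """Code (s_0, ..., s_k), each s_i over {0,1}, by separating with the symbol 2."""
    symbols = []
    for j, s in enumerate(strings):
        if j:
            symbols.append(2)           # written as 001 = eta(2) in the output
        symbols.extend(int(c) for c in s)
    return eta(symbols)

def decode_tuple(code):
    """Split the decoded word at each occurrence of the separator symbol 2."""
    parts, current = [], []
    for sym in eta_inv(code, n=2):
        if sym == 2:
            parts.append("".join(map(str, current)))
            current = []
        else:
            current.append(sym)
    parts.append("".join(map(str, current)))
    return parts

assert decode_tuple(code_tuple(["01", "", "110"])) == ["01", "", "110"]
```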














CHAPTER 3
SPACE COMPLEXITY OF CERTAIN STRUCTURES

3.1 Basic Structural Lemmas

Our first lemma deals with the basic case where there is a LOGSPACE set-

isomorphism between two sets, one of which is the universe of a structure.

Lemma 3.1.1. Suppose that A is a LOGSPACE structure and φ is a LOGSPACE set-isomorphism from A (the universe of A) onto a set B. Then B is a LOGSPACE structure, where the functions and relations on its universe B are defined so as to make φ an isomorphism of the structures.

Proof: To show that B is a LOGSPACE set, we observe that b ∈ B if and only if φ^{-1}(b) ∈ A. Hence to test whether b ∈ B, we compose the LOGSPACE computation of φ^{-1}(b) followed by the LOGSPACE testing of whether φ^{-1}(b) ∈ A. It follows from the Space Composition Lemma I that B ∈ LOGSPACE.
Now let R^A be an m-ary relation and let f^A be an n-ary function, both defined on A, and with m, n ≥ 1.
To prove that the relation R^B is in LOGSPACE, we make use of the fact that R^B(b_1, ..., b_m) = R^A(φ^{-1}(b_1), ..., φ^{-1}(b_m)). Let h_1(b_1, ..., b_m) = φ^{-1}(b_1), h_2(b_1, ..., b_m) = φ^{-1}(b_2), ..., h_m(b_1, ..., b_m) = φ^{-1}(b_m). The machine which computes h_i ignores all the strings on its m input tapes except the one on its ith input tape, and simulates the machine for φ^{-1} on the ith input string. Consequently, the h_i are all in LOGSPACE. We can regard R^A as a boolean function g such that we have R^A(φ^{-1}(b_1), ..., φ^{-1}(b_m)) = 1 (resp. 0) if and only if we have g(h_1(b_1, ..., b_m), ..., h_m(b_1, ..., b_m)) = 1 (resp. 0). Since g and each of the h_i are all in LOGSPACE, the result follows from the Generalized Space Composition Lemma.








Finally, the proof that the function f^B is in LOGSPACE depends upon the fact that f^B(b_1, ..., b_n) = φ(f^A(φ^{-1}(b_1), ..., φ^{-1}(b_n))). If we let h_i(b_1, ..., b_n) = φ^{-1}(b_i), 1 ≤ i ≤ n, then the argument of the previous paragraph shows that the function f^A(φ^{-1}(b_1), ..., φ^{-1}(b_n)) ∈ LOGSPACE. And since φ is in LOGSPACE, the Space Composition Lemma I implies that f^B, the composition of this function with φ, is in LOGSPACE.
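The way B inherits its relations and functions along φ can be pictured with the following schematic Python sketch; it is only an illustration of the lemma, and phi, phi_inv, R_A, and f_A are hypothetical placeholders for the LOGSPACE maps in the statement. The point is merely that every B-side evaluation is a composition of phi, phi_inv, and the A-side operations.

```python
def make_image_structure(phi, phi_inv, R_A, f_A):
    """Define R_B and f_B so that phi becomes an isomorphism of structures."""
    def R_B(*bs):
        # pull each argument back to A and evaluate the A-side relation
        return R_A(*(phi_inv(b) for b in bs))

    def f_B(*bs):
        # pull back, apply f_A, and push the result forward again with phi
        return phi(f_A(*(phi_inv(b) for b in bs)))

    return R_B, f_B

# Toy example: A = integers with order and successor, B = their string codes.
phi, phi_inv = str, int
R_B, f_B = make_image_structure(phi, phi_inv, lambda x, y: x < y, lambda x: x + 1)
assert R_B("3", "7") and f_B("3") == "4"
```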


In the next lemma, we examine the effect of the space complexity of the "tally
representation" of a structure on the space complexity of its "k-ary representation,"
k > 2. We do not include statements like "If A E LINSPACE, then B E EXSPACE,"
and "If A EXSPACE, then B e DOUBEXSPACE," since they are incorporated in
parts (c) and (e) respectively.

Lemma 3.1.2. Let M be a structure with universe M ⊆ w, and let A = tal(M) and B = b_k(M), where k ≥ 2. Then we have
(a) If A ∈ LOGSPACE, then B ∈ LINSPACE.
(b) If A ∈ PLOGSPACE, then B ∈ PSPACE.
(c) If A ∈ PSPACE, then B ∈ EXSPACE.
(d) If A ∈ SUPERPSPACE, then B ∈ EXPSPACE.
(e) If A ∈ EXPSPACE, then B ∈ DOUBEXSPACE.

Proof: Let A (resp. B) denote the universe of A (resp. B). Let R^A (resp. R^B) be an m-ary relation and let f^A (resp. f^B) be an n-ary function, both defined on A (resp. B), and with m, n ≥ 1. Suppose A ∈ LOGSPACE.
(a) To test if b ∈ B, it suffices to test if μ_k(b) ∈ A (Lemma 2.1.2). Hence given b on the input tape, we compose the LINSPACE conversion of b to μ_k(b), followed by the LOGSPACE testing of whether μ_k(b) ∈ A. By the Space Composition Lemma III, it follows that B is a LINSPACE set.
In order to determine the space complexity class of R^B, we imitate the argument for relations in the proof of the previous lemma. We first let h_i(b_1, ..., b_m) = μ_k(b_i), 1 ≤ i ≤ m, and regard R^A as a boolean valued function g. Now we have R^B(b_1, ..., b_m) = 1 (resp. 0) if and only if g(h_1(b_1, ..., b_m), ..., h_m(b_1, ..., b_m)) = 1 (resp. 0). Since the h_i are in LINSPACE and g is in LOGSPACE, the Generalized Space Composition Lemma implies R^B ∈ LINSPACE.
Finally, to determine the space complexity class of f^B(b_1, ..., b_n), we imitate the argument for functions in the proof of the previous lemma. Let h_i(b_1, ..., b_n) = μ_k(b_i), 1 ≤ i ≤ n. Then f^B(b_1, ..., b_n) = μ_k^{-1}(f^A(h_1(b_1, ..., b_n), ..., h_n(b_1, ..., b_n))). The function f^A(h_1(b_1, ..., b_n), ..., h_n(b_1, ..., b_n)) is in LINSPACE by the argument used in the previous paragraph. And since μ_k^{-1} is in LOGSPACE, the Space Composition Lemma III implies f^B ∈ LINSPACE.
The proofs of (b)-(e) are similar except that we use the Space Composition Lemmas IV-VII as well as III and the Generalized Space Composition Lemma.
E
In the next four lemmas, we examine the effect of the space complexity of
the "k-ary representation," k > 2 of a structure on the space complexity of its "tally
representation." Since the proofs of these lemmas are similar, we shall give the proofs
of Lemma 3.1.3 and Lemma 3.1.5 only.

Lemma 3.1.3. Let M be a structure with universe M ⊆ w, and let A = tal(M) and B = b_k(M), where k ≥ 2. Then we have
(a) If B ∈ LOGSPACE, then A ∈ PLOGSPACE.
(b) If B ∈ LOGSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ c(|m_1| + ⋯ + |m_n|) for some fixed constant c and all but finitely many n-tuples, then A ∈ LOGSPACE.

Proof: As in the proof of the previous lemma, let A (resp. B) denote the universe
of A (resp. B). Let RA (resp. RB) be an m-ary relation and let fA (resp. fB) be
an n-ary function, both defined on A (resp. B), and with m, n > 1. Suppose B E
LOGSPACE.








(a) This time we begin by considering how to compute f^A. Suppose we are given a_1, ..., a_n on n input tapes of a machine M. To compute f^A(a_1, ..., a_n), we will not do a formal composition as we did to compute f^B in the proof of the previous lemma. The reason for this is as follows: since μ_k ∈ LINSPACE, the Generalized Space Composition Lemma and the Space Composition Lemmas I and V imply only that the formal composition of μ_k^{-1}, followed by f^B, followed by μ_k, is in PSPACE, and not necessarily in PLOGSPACE.
Instead, we observe that the length of each b_i = μ_k^{-1}(a_i) is logarithmic in the length of a_i. Thus there is no harm in explicitly writing down the b_i on n distinct work tapes. The machine M then simulates the computation of b = f^B(b_1, ..., b_n), which uses up space logarithmic in r = |b_1| + ⋯ + |b_n|, and hence logarithmic still in the total length |a| = |a_1| + ⋯ + |a_n| of the input strings. And by Corollary 2.2.2, there exist nonzero constants c and k such that |b| ≤ c r^k. This means that |b| is polynomial in the length of the b_i, and hence polylogarithmic in the length of the a_i. Hence there is also no harm in explicitly writing down b on a separate work tape. Finally, M simulates the computation of μ_k(b) = f^A(a_1, ..., a_n), which uses up space linear in |b|, and hence polylogarithmic in |a|. We do not concern ourselves with the actual length of μ_k(b) because μ_k(b) is produced on the output tape of M in write-only fashion. Thus f^A is in PLOGSPACE.
Now we prove that both A and R^A are in LOGSPACE. Although it does not make a difference in this particular case whether we do formal compositions (as in the proof of the previous lemma) or we occasionally write down certain intermediate output strings completely (as in the previous paragraph), we follow the latter practice because it yields lower space complexity classes for sets and relations in the proofs of the next three lemmas.
To test if a ∈ A, it suffices to test if μ_k^{-1}(a) ∈ B. So suppose we are given a on the input tape of a machine M. Then M simulates the computation of b = μ_k^{-1}(a) and explicitly writes b on a work tape. This uses up space logarithmic in |a| and, of course, |b| ≤ k log(|a|) for some nonzero constant k. Now M simulates the testing of whether b ∈ B, which uses up space logarithmic in |b|. It follows that M operates in LOGSPACE.
To test if R^A(a_1, ..., a_m), suppose we are given a_1, ..., a_m on m input tapes of some machine M. In the beginning, M simulates the computation of b_i = μ_k^{-1}(a_i) for i = 1, ..., m, and explicitly writes down each b_i on m distinct work tapes. This procedure requires exactly these m work tapes and, of course, for each b_i there exists a nonzero constant k_i such that |b_i| ≤ k_i log(|a_i|). Now M simulates the testing of whether R^B(b_1, ..., b_m), which uses up space logarithmic in |b| = |b_1| + ⋯ + |b_m|. It follows that R^A is in LOGSPACE.
(b) The arguments for A and R^A are exactly the same as in part (a).
The argument for f^A(a_1, ..., a_n) is very similar to that in part (a). Recall that at a certain point in part (a) we arrive at the following situation: |b| is polynomial in the length of the b_i and hence polylogarithmic in the length of the a_i. But now, owing to the restriction on all functions f^M, |b| is in fact linear in the length of the b_i, and hence logarithmic in the length of the a_i. Of course, while simulating f^B, the machine still uses up only logarithmic space on the tape(s) not used in writing out b, and b is then used to simulate the computation of μ_k(b), just as in part (a). This last procedure is linear in |b| and hence logarithmic in the a_i. Thus in this case we have f^A ∈ LOGSPACE.
O

Next, we combine the statements with hypotheses about B being in LINSPACE
and in PSPACE into one lemma for convenience. Also we do not include the state-
ment "If B E PLOGSPACE, then A e PSPACE," since it is covered by part (a) of
the lemma.








Lemma 3.1.4. Let M be a structure with universe M ⊆ w, and let A = tal(M) and B = b_k(M), where k ≥ 2. Then we have
(a) If B ∈ LINSPACE, then A ∈ PSPACE.
(b) If B ∈ LINSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ (|m_1| + ⋯ + |m_n|)^c for some fixed constant c and all but finitely many n-tuples, then A ∈ PLOGSPACE.
(c) If B ∈ LINSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ c(|m_1| + ⋯ + |m_n|) for some fixed constant c and all but finitely many n-tuples, then A ∈ LOGSPACE.
(d) If B ∈ PSPACE, then A ∈ SUPERPSPACE.
(e) If B ∈ PSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ 2^{c(|m_1|+⋯+|m_n|)} for some fixed constant c and all but finitely many n-tuples, then A ∈ PSPACE.
(f) If B ∈ PSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ (|m_1| + ⋯ + |m_n|)^c for some fixed constant c and all but finitely many n-tuples, then A ∈ PLOGSPACE.

The statement "If B e SUPERPSPACE, then A E EXPSPACE" is covered
by part (a) of the next lemma.

Lemma 3.1.5. Let M be a structure with universe M ⊆ w, and let A = tal(M) and B = b_k(M), where k ≥ 2. Then we have
(a) If B ∈ EXSPACE, then A ∈ EXPSPACE.
(b) If B ∈ EXSPACE and for all functions f^M, we have the additional condition |f^M(m_1, ..., m_n)| ≤ 2^{(|m_1|+⋯+|m_n|)^c} for some fixed constant c and all but finitely many n-tuples, then A ∈ SUPERPSPACE.
(c) If B ∈ EXSPACE and for all functions f^M, we have the additional condition |f^M(m_1, ..., m_n)| ≤ 2^{c(|m_1|+⋯+|m_n|)} for some fixed constant c and all but finitely many n-tuples, then A ∈ PSPACE.








Proof: As before, let A (resp. B) denote the universe of A (resp. B). Let R^A (resp. R^B) be an m-ary relation and let f^A (resp. f^B) be an n-ary function, both defined on A (resp. B), and with m, n ≥ 1. Suppose B ∈ EXSPACE.
(a) To test if a ∈ A, our machine explicitly writes b = μ_k^{-1}(a), and then simulates the EXSPACE testing of whether b ∈ B. Since |b| ≤ k log(|a|) for some nonzero constant k, the testing of whether b ∈ B takes space ≤ 2^{r|b|} ≤ 2^{rk log(|a|)} = |a|^{rk}, where r is some nonzero constant. Thus A is in PSPACE.
To test if R^A(a_1, ..., a_m), our machine simulates the computation of b_i = μ_k^{-1}(a_i) for i = 1, ..., m, and explicitly writes down each b_i on m distinct work tapes. For each b_i there exists a nonzero constant k_i such that |b_i| ≤ k_i log(|a_i|). Let b = |b_1| + ⋯ + |b_m| and let a = |a_1| + ⋯ + |a_m|. Now our machine simulates the EXSPACE testing of whether R^B(b_1, ..., b_m), which uses up space ≤ 2^{rb} ≤ 2^{s log a} = a^s, where r and s are nonzero constants. It follows that R^A is in PSPACE.
We observe here that since A and R^A are already in PSPACE, no amount of restriction on the lengths of outputs of the functions in M can force A to be in LOGSPACE.
To compute f^A(a_1, ..., a_n), our machine first explicitly writes down the b_i = μ_k^{-1}(a_i), i = 1, ..., n, on n distinct work tapes. Let r = |b_1| + ⋯ + |b_n| and let a = |a_1| + ⋯ + |a_n|. Then there is a nonzero constant k such that r ≤ k log a. The machine now simulates the EXSPACE computation of b = f^B(b_1, ..., b_n). By Corollary 2.2.2, there exist nonzero constants p and q such that |b| ≤ 2^{p·2^{qr}} ≤ 2^{p·2^{qk log a}} = 2^{p a^{qk}}. It follows that f^A is in EXPSPACE.
(b) The arguments for A and R^A are exactly the same as those in part (a).
The argument for f^A(a_1, ..., a_n) is very similar to that in part (a). Recall that in part (a) we arrive at the following situation: by Corollary 2.2.2, there exist nonzero constants p and q such that |b| ≤ 2^{p·2^{qr}} ≤ 2^{p·2^{qk log a}} = 2^{p a^{qk}}. But now, owing to the restriction on all functions f^M, we have |b| ≤ 2^{r^c} ≤ 2^{(k log a)^c}. The computation of μ_k(b) = f^A(a_1, ..., a_n) uses up space linear in |b|, and hence linear in 2^{(k log a)^c}. It follows that f^A is in SUPERPSPACE.
(c) The proof here is similar to that for part (b).


In our final result dealing with the effect of the space complexity of the "k-ary
representation" of a structure on the space complexity of its "tally representation,"
we collect together statements with hypotheses about B being in EXPSPACE and in
DOUBEXSPACE into one lemma for convenience. Note that part (e) implies part
(b) in Lemma 3.1.6.

Lemma 3.1.6. Let M be a structure with universe M ⊆ w, and let A = tal(M) and B = b_k(M), where k ≥ 2. Then we have:
(a) If B ∈ EXPSPACE, then A ∈ EXSUPERPSPACE.
(b) If B ∈ EXPSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ 2^{2^{c(|m_1|+⋯+|m_n|)}} for some fixed constant c and all but finitely many n-tuples, then A ∈ EXPSPACE.
(c) If B ∈ EXPSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ 2^{(|m_1|+⋯+|m_n|)^c} for some fixed constant c and all but finitely many n-tuples, then A ∈ SUPERPSPACE.
(d) If B ∈ DOUBEXSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ 2^{2^{(|m_1|+⋯+|m_n|)^c}} for some fixed constant c and all but finitely many n-tuples, then A ∈ EXSUPERPSPACE.
(e) If B ∈ DOUBEXSPACE and for all functions f^M, we have |f^M(m_1, ..., m_n)| ≤ 2^{2^{c(|m_1|+⋯+|m_n|)}} for some fixed constant c and all but finitely many n-tuples, then A ∈ EXPSPACE.

The above lemma completes our basic examination of structures with both
relations and functions.








3.2 Relational, Functional, and Permutation Structures

We begin by considering relational structures and prove that every recursive

relational structure is recursively isomorphic to a LOGSPACE structure. However,
we are unable to specify a standard universe for this LOGSPACE structure.

Theorem 3.2.1. If A = (A, {R_i^A}_{i∈S}, {c_i^A}_{i∈U}) is a recursive relational structure, then A is recursively isomorphic to a LOGSPACE structure with universe a subset of Bin(w) and to a LOGSPACE structure with universe a subset of Tal(w).

Proof: We recall that by our definition of recursive structure over an effective language, there is a recursive function s such that for all i ∈ S, the symbol R_i is an s(i)-ary relation symbol. In addition, there is a recursive function a such that for all i ∈ S, a(i) is the index of a Turing machine which computes R_i^A.
If A is finite, the result is trivial. So suppose A is infinite. Then there exists a recursive bijection f : Bin(w) → A. We can define a recursive structure M with universe Bin(w) which is recursively isomorphic to A by defining the interpretations of the relation symbols and the constant symbols so as to make f an isomorphism of M onto A. Hence we may assume, without loss of generality, that A = Bin(w). We now proceed to define a LOGSPACE structure B with universe a subset of Bin(w) such that B is recursively isomorphic to A.
Let each a ∈ A = Bin(w) be represented in another binary form by ψ(a) = 1^{a+1}01^{2^t}, where t is the time, that is, the number of steps required to carry out the following procedure:
Given the number a in binary as input, test, for each i < a, whether the relation R_i(x_1, ..., x_{s(i)}) holds for every single s(i)-tuple (x_1, ..., x_{s(i)}) from the s(i)-fold product {0, 1, ..., a}^{s(i)} of the set {0, 1, ..., a} ⊆ Bin(w).
We call this procedure "Relations Checking" for convenience. We observe that Relations Checking is a recursive algorithm that is completely uniform in a because








of our definition of recursive structure over an effective language. It now follows that

there is a Turing machine that can carry out Relations Checking.
We can now define B = (B, {R_i^B}_{i∈S}, {c_i^B}_{i∈U}) as follows: Let B = {ψ(a) : a ∈ Bin(w)}. For each i ∈ S, let R_i^B(ψ(a_1), ..., ψ(a_{s(i)})) be true if and only if R_i^A(a_1, ..., a_{s(i)}) holds. And for each i ∈ U, let c_i^B = ψ(c_i^A). Evidently ψ is a recursive isomorphism from A onto B. To show that B is a LOGSPACE structure, we need to check that B is a LOGSPACE set and that each relation R_i^B is in LOGSPACE.
We first show that B is in LOGSPACE. Given b ∈ Bin(w) on the input tape, our machine uses two special states to check whether b has the form 11⋯1011⋯1. If b is not of this form, then certainly b ∉ B. This verification of form uses up no space. If b does have the correct form, the machine proceeds to check whether the terminal segment of 1's of b has length 2^t for some t ≥ 0. The machine does this by using two states to advance the input cursor until it points to the first 1 of the terminal segment of 1's, then adding 1 in binary on a work tape T1 each time it reads a 1, and stopping when the input cursor reads a blank. This uses up logarithmic space. Now the machine uses one state to check the form of the binary number on T1. If this number is not of the form 00⋯01, that is, if it is not a power of 2, then we have b ∉ B. However, if b ∈ B, then b = 1^{a+1}01^{2^t} for some numbers a and t. At this point, the machine writes a and t in binary on two separate work tapes T2 and T3, respectively, as follows: each time the machine reads a 1 in the initial segment of 1's of b, starting from the second 1, it adds 1 in binary on T2. This procedure stops when the machine reads the 0 of b. To compute t, the machine uses the fact that the length of the binary number now on T1 is t + 1. So it adds 1 in binary on T3 each time it reads a symbol on T1, starting from the second symbol on T1. These procedures use up logarithmic space. Now the machine simulates the machine for Relations Checking on separate tapes with a as input, and subtracts 1 in binary from t on T3 each time one step of Relations Checking is completed. We then have b ∈ B if and only if Relations Checking finishes in exactly t steps, that is, as soon as the contents of T3 become 0. Since Relations Checking is "allowed to run" for only t steps and t is logarithmic in |b|, we conclude that B is in LOGSPACE.
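The shape of this membership test can be indicated by the following Python sketch. It is an illustration only: relations_checking_steps is a hypothetical helper that runs Relations Checking on a and reports the number of steps used, and the sketch ignores the space bookkeeping that the actual machine performs.

```python
def parse_code(b):
    """Split a string 1^(a+1) 0 1^(2^t) into the pair (a, t), or return None."""
    left, sep, right = b.partition("0")
    if sep != "0" or "0" in right or not left or not right:
        return None
    a, length = len(left) - 1, len(right)
    t = length.bit_length() - 1
    if 2 ** t != length:                  # the tail must have length exactly 2^t
        return None
    return a, t

def in_B(b, relations_checking_steps):
    """b codes some a iff Relations Checking on a finishes in exactly t steps."""
    parsed = parse_code(b)
    if parsed is None:
        return False
    a, t = parsed
    return relations_checking_steps(a) == t
```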
Now we show that each relation R_i^B is in LOGSPACE. Let M be a Turing machine with s(i) input tapes, and suppose we are given b_1, ..., b_{s(i)} ∈ B on these input tapes. For k = 1, 2, ..., s(i), let a_k and t_k be such that b_k = 1^{a_k+1}01^{2^{t_k}}. Let T be the maximum number of steps required to test whether R_i^A(x_1, ..., x_{s(i)}) holds when {x_1, ..., x_{s(i)}} ⊆ {0, 1, ..., i} ⊆ Bin(w). In the beginning, M employs the LOGSPACE procedure described in the previous paragraph to write the binary numbers a_k on s(i) separate work tapes. The machine's next operation rests on the fact that R_i^B(b_1, ..., b_{s(i)}) if and only if R_i^A(a_1, ..., a_{s(i)}). The machine tests whether R_i^A(a_1, ..., a_{s(i)}) holds by simply simulating the machine for R_i^A with the a_k as its input. If {a_1, ..., a_{s(i)}} ⊆ {0, 1, ..., i}, then the number of steps, and therefore the space, required to test whether R_i^A(a_1, ..., a_{s(i)}) holds is at most the constant T. Otherwise, there is a j ∈ {1, 2, ..., s(i)} such that a_j > i. In that case, the testing of whether R_i^A(a_1, ..., a_{s(i)}) holds takes at most t_j steps, and t_j is logarithmic in |b_j| ≤ |b_1| + ⋯ + |b_{s(i)}|. It now follows that R_i^B is in LOGSPACE.
To finish the proof, we note that tal(B), which has universe {tal(n) : n ∈ B}, is in LOGSPACE by Lemma 3.1.3 (b), and is recursively isomorphic to A.
O

The above result does not hold for structures with functions, as we prove next.

Theorem 3.2.2. Let L_0 be the language which has no relation symbols, no constant symbols, and exactly one function symbol which is unary. There is a LINSPACE structure D = (D, f^D) over L_0 which is not recursively isomorphic to any LOGSPACE structure over L_0.

Proof: Let (E_0, f_0), (E_1, f_1), ... be an effective list of all LOGSPACE structures over L_0, and let φ_0, φ_1, ... be a list of all one-to-one partial recursive functions. We build our structure D = (D, f^D) so that D ⊆ {1}* ⊆ Tal(w). For the rest of this proof, all natural numbers are assumed to be given in tally. We need to ensure in our construction of D that for each i, j ∈ w, the following requirement R_{i,j} is met:

R_{i,j} : φ_j is not a recursive isomorphism from D onto (E_i, f_i).

We recall that the pairing function [·,·] from Tal(w) × Tal(w) to Tal(w) defined by [i, j] = ½[(i + j)^2 + 3i + j] is in LOGSPACE by Lemma 2.3.2 (a). Now define the function ψ : Tal(w) × Tal(w) × Tal(w) → Tal(w) by the recursion ψ(0, i, j) = 2[i, j] + 3 and ψ(n + 1, i, j) = 2^{ψ(n,i,j)}. For each i, j ∈ w, let T_{i,j} = {ψ(n, i, j) : n ∈ w}. And finally define D = ⋃_{i,j∈w} T_{i,j}. We observe that D is the disjoint union of the T_{i,j}, whose "first elements" are the odd numbers ≥ 3 and whose subsequent elements are obtained by repeated exponentiation.
We now prove that D ∈ LINSPACE. Given z ∈ Tal(w) on the input tape, our machine first checks if z = 0, 1, or 2, in which cases z ∉ D. But if z ≥ 3, then the machine uses two special states to check the parity of the 1's of z. If z is odd, then z ∈ D. All this uses up no space. Now suppose z is even. The idea is to write down each odd number ≤ z and ≥ 3, and to repeatedly exponentiate that odd number to see if we obtain z. If we obtain a number greater than z after a certain exponentiation, then we try exponentiating the next odd number to see if we obtain z. In this way, if we exhaust all odd numbers ≤ z without ever obtaining z by exponentiation, then z ∉ D. More precisely, the machine moves the input cursor to the extreme-left position and writes 3 (i.e., 111) on a work tape T1. Then it computes 2^3, as explained in the proof of Lemma 2.3.1. As it writes each 1 of 2^3 on a work tape T2, it advances the input cursor one position to the right. If 2^3 gets written completely on T2 and the input cursor reads z but not the final 1 of z, then the machine copies the contents of T2 onto another work tape T3, erases T2, moves the input cursor back to the extreme-left position, and then exponentiates the current number on T3, moving the input cursor one position to the right each time it writes a 1 on T2, which will contain the output for the current exponentiation. If the current power gets written completely on T2 and the input cursor is on the final 1 of z, then z ∈ D. But if the input cursor encounters a blank while the current power is not yet completely written, then the machine adds 2 in tally on tape T1 (thus obtaining the next odd number), checks that the odd number on T1 is ≤ z by advancing the input and T1 cursors simultaneously, and then repeats the exponentiation procedure on the contents of T1. The machine halts if it determines at some point that z ∈ D or if the number on T1 becomes bigger than z, in which case z ∉ D. By Lemma 2.3.1, the above exponentiation procedures are linear in the lengths of the contents of T1 and T3, and hence linear in |z|. Moreover, the content of T2 can never be longer than z. It follows that D ∈ LINSPACE.
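The membership test for D can be pictured with the following Python sketch over ordinary integers. It is only the arithmetic skeleton of the argument, added for illustration, and makes no attempt to respect the linear space bound of the tape construction.

```python
def in_D(z):
    """z is in D iff z arises from some odd base r >= 3 by repeated exponentiation."""
    if z < 3:
        return False
    if z % 2 == 1:
        return True                      # the odd numbers >= 3 are the base points
    r = 3
    while r < z.bit_length():            # a successful base satisfies 2**r <= z
        power = r
        while power < z:
            if power >= z.bit_length():  # 2**power would already overshoot z
                break
            power = 2 ** power           # repeated exponentiation, as on tapes T2/T3
        if power == z:
            return True
        r += 2                           # next odd base, as on tape T1
    return False

assert in_D(2 ** 5) and not in_D(2 ** 4) and in_D(2 ** (2 ** 7))
```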
We now fix i, j ∈ w and proceed to define the function f = f^D on T_{i,j} = {a_0, a_1, ...}. Recall from the definition of T_{i,j} that a_n = ψ(n, i, j) = 2^{a_{n-1}} for all n ≥ 1, and a_0 = 2[i, j] + 3. In order to define f we first need to prove that there is an m ∈ {0, 1, ...} such that the following computations can be successfully completed in at most a_{m+3} steps:
(1) Start to compute φ_j(a_0). If this computation converges, then certainly it converges in time ≤ a_m for some m ≥ 0. Let b_0 = φ_j(a_0).
(2) Check that b_0 ∈ E_i.
(3) Compute the sequence b_1 = f_i(b_0), b_2 = f_i(b_1), ..., b_{m+1} = f_i(b_m).
The above computations are completely uniform in a_0. We note that if φ_j(a_0) does not converge, then the requirement R_{i,j} is automatically satisfied. So assume that b_0 = φ_j(a_0) exists. Now it takes some constant amount c_0 of time to compute b_0 and to check that b_0 ∈ E_i. By Lemma 2.3.15, we may assume that E_i ⊆ Bin(w). Furthermore, by Corollary 2.2.2, there exists an integer k ≥ 1 such that for any y ∈ E_i ⊆ Bin(w) with |y| > 1, we can compute f_i(y) within |y|^k steps. Let c_1 be the time required to compute f_i(0) and f_i(1), and let c = c_0 + c_1. We may assume without loss of generality that |b_0| > 1. It now follows that carrying out the computations (1)








and (2), and then computing the sequence b_1 = f_i(b_0), ..., b_{m+1} = f_i(b_m), takes at most T steps, where T = c + |b_0|^k + (|b_0|^k)^k + ⋯ + |b_0|^{k^{m+1}}. We may assume in addition that m is large enough so that c, m + 2 ≤ |b_0|^{k^{m+1}}, |b_0|^2 ≤ 2^{2^m}, k ≤ 2^m, and m^2 + 2m ≤ 2^m. Then T = c + |b_0|^k + (|b_0|^k)^k + ⋯ + |b_0|^{k^{m+1}} ≤ (m + 2)|b_0|^{k^{m+1}} ≤ |b_0|^{2k^{m+1}} ≤ (2^{2^m})^{2^{m^2+m}} = 2^{2^{m^2+2m}} ≤ 2^{2^{2^m}} = exp_3(m). Since a_0 ≥ 3 by the definition of ψ, we have a_m ≥ a_0 + m ≥ m + 3, and hence a_{m+3} = 2^{2^{2^{a_m}}} ≥ 2^{2^{2^{m+3}}} = exp_3(m + 3). Thus T ≤ exp_3(m) ≤ exp_3(m + 3) ≤ a_{m+3}, and so there is an m such that the computations (1), (2), and (3) can be successfully completed within a_{m+3} steps.
It follows from the previous paragraph that there is a least s ≥ 0 such that the computations (1), (2), and (3) can be successfully completed within a_s steps. The definition of f on T_{i,j} = {a_0, a_1, ...} now involves considering two cases. For t ≠ s, we let f(a_t) = a_{t+1}. To compute f(a_s), we first let b_0 = φ_j(a_0) and compute f_i^{(s+1)}(b_0). Then if f_i^{(s+1)}(b_0) = b_0, we define f(a_s) = a_{s+1}. But if f_i^{(s+1)}(b_0) ≠ b_0, we define f(a_s) = a_0. This ensures that the requirement R_{i,j} is satisfied.
It remains to prove that f can be computed in linear space. Given x ∈ D on the input tape, our machine first computes the unique triple (n, i, j) such that x = ψ(n, i, j). This computation uses the fact that n, i, and j are all less than x (≥ 3). The machine begins by checking if x is odd. If so, then n = 0. Now the machine lists all i and j less than x, starts to compute 2[i, j] + 3, and moves the input cursor one place to the right each time it writes a 1 of 2[i, j] + 3, until it finds the correct i and j such that 2[i, j] + 3 = x. Since the i and j are explicitly written down and the pairing function [·,·], along with tally addition and multiplication, is in LOGSPACE, the above procedures use up linear space. If x is even, the machine carries out the procedure described in the proof that D is in LINSPACE to find the odd number r ≤ x which, after a certain number of exponentiations, gives x. The number n of times r has to be exponentiated to yield x is explicitly written down by adding a 1 with each exponentiation. And the i and j such that 2[i, j] + 3 = r are obtained in the manner described in the case where x is odd. Once again, these procedures use up linear space.

As soon as the machine finishes computing the triple (n, i, j) such that x = ψ(n, i, j), it "knows" that a_0 = r = 2[i, j] + 3 = ψ(0, i, j) and x = a_n = ψ(n, i, j). At this point, the machine simulates the recursive computations (1), (2), and (3), while keeping track of the number N of steps carried out. We recall that this is possible for our machine since the computations (1), (2), and (3) are uniform in a_0 = r. If these computations are not completed within a_n = x steps, then n is not the "least s" that was used to define f(a_s) above. Hence f(a_n) = a_{n+1} = 2^{a_n}, and so the machine outputs f(x) = 2^x, which uses up linear space by Lemma 2.3.1. But if these computations are completed within a_n = x steps, i.e., if N ≤ x, then the machine must determine if n is the "least s." The machine does this by computing a_0 (= r), a_1 (= 2^{a_0}), a_2 (= 2^{a_1}), ..., until it arrives at the first a_s such that a_s ≥ N. This uses up linear space. If a_s ≠ a_n, that is, if s < n, then once again f(a_n) = a_{n+1} and the machine outputs f(x) = 2^x. But if a_s = a_n, then the machine simulates the recursive checking of whether f_i^{(s+1)}(b_0) = b_0. Here b_0 = φ_j(a_0) = φ_j(ψ(0, i, j)). This checking certainly takes ≤ a_s steps. Now if f_i^{(s+1)}(b_0) = b_0, then f(a_n) = a_{n+1} as before, and the machine outputs f(x) = 2^x. Otherwise f(a_n) = a_0 and the machine outputs r. Thus f is in LINSPACE.


The function f^D constructed in the proof of the above theorem is a permutation which has infinite orbits and, possibly, finite orbits. Thus we have in fact proved the following:

Theorem 3.2.3. There is a recursive permutation structure which is not recursively

isomorphic to any LOGSPA CE permutation structure.

This result now leads us naturally to a consideration of permutation structures

according to the number and size of their orbits. We begin with the simplest case,








namely, permutation structures in which all orbits are finite. In this case, we are
not able to specify in advance the universe of the LOGSPACE structure that will be
proven to be recursively isomorphic to our given permutation structure.

Theorem 3.2.4. Let (A, f) be a finitary recursive permutation structure. Then
(A, f) is recursively isomorphic to a LOGSPACE structure (B, f B), where B is a
LOGSPACE subset of Bin(w).

Proof: As explained in the proof of Theorem 3.2.1, we may assume that A is infinite and we may further assume that A = Tal(w). For each a ∈ A, let t(a) be the number of steps required to compute f(a) and let T(a) = Σ_{b∈O_f(a)} t(b). Thus T(a) is the time required to compute the orbit of a. The procedure for computing T(a) is simply to compute and store the successive iterations f^n(a) until one of them comes back to a previous one, while keeping track of and adding the time required to compute each f^n(a). Since all the orbits of f are finite, the function T is recursive and therefore can be computed by a Turing machine. We can now define a recursive function φ with domain A by φ(a) = (a + 1)⌢0⌢1^{2^{T(a)}}, and let B = {φ(a) : a ∈ A}.
We now prove that B is a LOGSPACE set. Given a string σ ∈ Bin(w), our machine first checks if σ is of the form 1^{a+1}01^{2^t} for some a and t, using the procedures described in the proof of Theorem 3.2.1. If σ is not of this form, then certainly σ ∉ B. If σ has the correct form, then the machine writes down a and t, again in the manner described in the proof of Theorem 3.2.1. All these procedures use up logarithmic space. Our machine now simulates the machine for computing the orbit of a for exactly t steps, keeping track of the steps by subtracting 1 from t. If the computation is completed in exactly t steps, then t = T(a) and σ ∈ B. If the computation is completed sooner, or is not yet complete after t steps, then σ ∉ B. Since t is logarithmic in |σ|, we have shown that B is a LOGSPACE set.
Now we define the permutation f^B on B by f^B((a + 1)⌢0⌢1^{2^{T(a)}}) = (f(a) + 1)⌢0⌢1^{2^{T(f(a))}}. Since T(f(a)) = T(a), no time is involved in the computation of 0⌢1^{2^{T(f(a))}}. And the time required to compute f(a) is ≤ T(a) = O(log(|(a + 1)⌢0⌢1^{2^{T(a)}}|)). This shows that f^B is a LOGSPACE function.
Finally, to show that φ is an isomorphism from (A, f) to (B, f^B), we observe that φ(f(a)) = (f(a) + 1)⌢0⌢1^{2^{T(f(a))}} = f^B((a + 1)⌢0⌢1^{2^{T(a)}}) = f^B(φ(a)).
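The coding φ(a) = (a + 1)⌢0⌢1^{2^{T(a)}} can be illustrated by the following Python sketch, which is not part of the original argument: f is the given permutation on natural numbers, and step_cost is a hypothetical stand-in for the running time t(a) of the machine computing f(a), here taken to be 1 per application.

```python
def orbit_time(a, f, step_cost=lambda a: 1):
    """T(a): total cost of computing the whole (finite) orbit of a under f."""
    total, seen, x = 0, set(), a
    while x not in seen:                 # iterate f until the orbit closes up
        seen.add(x)
        total += step_cost(x)
        x = f(x)
    return total

def phi(a, f):
    """Code a as 1^(a+1) 0 1^(2^T(a)), as in the proof."""
    return "1" * (a + 1) + "0" + "1" * (2 ** orbit_time(a, f))

# Toy permutation with orbits {0,1,2} and {3,4}: T is constant on each orbit,
# so f_B only has to replace the prefix 1^(a+1) by 1^(f(a)+1).
f = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}.get
assert phi(1, f).split("0", 1)[1] == phi(2, f).split("0", 1)[1]
```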


We now consider situations when a recursive permutation structure is recur-
sively isomorphic to a LOGSPACE permutation structure with a specified universe
such as Tal(w) or Bin(w). We in fact obtain results for the more general setting of
one-to-one functions from a set into itself. We begin with the special case where there
are only finitely many orbits.

Theorem 3.2.5. Let f be a recursive injection of an infinite recursive set A into
itself, with only finitely many orbits. Then (A, f) is recursively isomorphic to a
LOGSPACE structure (B, f ), where B may be taken to be either Tal(w) or Bin(w).

Proof: Fix B to be either Tal(w) or Bin(w). We observe that since A is infinite, not all the orbits under f are finite. So we shall begin by considering a single infinite orbit O. There are two cases.
Case 1. There is an element a in O which is not in the range of f. Then O = {f^n(a) : n ∈ w}. So we define the function f^B by f^B(x) = x + 1 for all x ∈ B. (Here + is the operation appropriate to either Tal(w) or Bin(w), depending on our choice of B.) This leads to the recursive isomorphism φ from (O, f) onto (B, f^B) defined by φ(f^n(a)) = b_k(n), where n ≥ 0, and k = 1 if B = Tal(w), while k = 2 if B = Bin(w). Since f^B is a ZEROSPACE function, it follows that (O, f) is recursively isomorphic to the ZEROSPACE structure (B, f^B).
Case 2. Every element of O is in the range of f. Let a be an arbitrary element of O. Then O = {f^n(a) : n ∈ Z}. Now we define f^C on the ZEROSPACE set C = B ⊕ B by f^C((1, x)) = (1, x + 1), f^C((0, 0)) = (1, 0), and f^C((0, x + 1)) = (0, x), where x ≥ 0. The function f^C is evidently in ZEROSPACE and hence (C, f^C) is a








ZEROSPACE structure. Now define φ : C → O by φ((1, 0)) = a, φ((1, n + 1)) = f^{n+1}(a), and φ((0, n)) = f^{-(n+1)}(a), where n ≥ 0. It is easy to check that (C, f^C) is recursively isomorphic to (O, f) via φ. By Lemma 2.3.14, C is LOGSPACE set-isomorphic to B, and hence by Lemma 3.1.1, (C, f^C) is LOGSPACE isomorphic to a LOGSPACE structure (B, f^B). It follows that (O, f) is recursively isomorphic to (B, f^B).
To complete the proof, let the finite orbits under f be O_1, ..., O_m, and let the infinite orbits under f be O'_1, ..., O'_n. For each i = 1, ..., m, the finite structure (O_i, f) is evidently recursively isomorphic to some ZEROSPACE structure (B_i, f^{B_i}), where B_i is a finite subset of B. And owing to either Case 1 or Case 2, each infinite structure (O'_i, f), 1 ≤ i ≤ n, is recursively isomorphic to some LOGSPACE structure (B, f_i^B). The disjoint union B_1 ⊕ ⋯ ⊕ B_m is evidently ZEROSPACE set-isomorphic to some finite subset D of B, and so the structure (B_1 ⊕ ⋯ ⊕ B_m, g), where g = f^{B_i} on the ith summand B_i, is LOGSPACE isomorphic to a LOGSPACE structure (D, f^D) by Lemma 3.1.1. The n-fold disjoint union B ⊕ ⋯ ⊕ B is LOGSPACE set-isomorphic to B by Lemma 2.3.14. Hence the disjoint union of D and the n-fold disjoint union B ⊕ ⋯ ⊕ B is also LOGSPACE set-isomorphic to B by Lemma 2.3.14. It now follows from Lemma 3.1.1 that the structure (D ⊕ B ⊕ ⋯ ⊕ B, h), where h = f^D on the first summand D and h = f_i^B on the ith summand B among the following n summands, is LOGSPACE isomorphic to a LOGSPACE structure (B, f^B). Since A is the disjoint union of the orbits of f, the structure (A, f) is recursively isomorphic to the structure (O_1 ⊕ ⋯ ⊕ O_m ⊕ O'_1 ⊕ ⋯ ⊕ O'_n, f̃), where f̃ is simply f acting appropriately on the ith summand. It now follows that (A, f) is recursively isomorphic to (B, f^B).


The next theorem generalizes Theorem 3.2.5 by considering the case where an injection has infinitely many orbits.








Theorem 3.2.6. Let f be a recursive injection of an infinite recursive set A into
itself, with at least one but only finitely many infinite orbits. Then (A, f) is recursively
isomorphic to a LOGSPACE structure (B, f "), where B may be taken to be either
Tal(w) or Bin(w).

Proof: Fix B to be either Tal(w) or Bin(w). We recall that the finite and infinite parts of A are, respectively, Fin_f(A) = {a ∈ A : |a|_f < w} and Inf_f(A) = {a ∈ A : |a|_f = w}. Evidently Fin_f(A) is an r.e. subset of A. Each infinite orbit under f is r.e., and since only finitely many orbits are infinite, it follows that Inf_f(A) is a finite union of r.e. sets and is therefore itself r.e. We can now conclude that both Fin_f(A) and Inf_f(A) are recursive sets since they are both r.e. and they partition the recursive set A.
Now by Theorem 3.2.4, the structure (Fin_f(A), f) is recursively isomorphic to some LOGSPACE structure (C, f^C). Moreover, we see in the proof of Theorem 3.2.4 that the elements of C are of the form 1^{m+1}01^{2^n} for m, n ≥ 0. Consider the function φ : C → Tal(w) × Tal(w) defined by φ(1^{m+1}01^{2^n}) = (1^{m+1}, 1^{2^n}). Evidently φ is in ZEROSPACE. And since C is a LOGSPACE set, the set φ(C) is a LOGSPACE subset of Tal(w) × Tal(w). But Tal(w) × Tal(w) is LOGSPACE set-isomorphic to Tal(w) by Lemma 2.3.14. It follows that φ(C), and therefore C, is LOGSPACE set-isomorphic to a LOGSPACE subset D of Tal(w). Now Lemma 3.1.1 allows us to conclude that (C, f^C) is recursively isomorphic to some LOGSPACE structure (D, f^D). Hence (Fin_f(A), f) is also recursively isomorphic to (D, f^D).
As for the structure (Inf_f(A), f), the previous theorem allows us to conclude that (Inf_f(A), f) is recursively isomorphic to a LOGSPACE structure (B, g^B).
Now define the structure (E, f^E) by letting E = D ⊕ B, and by letting f^E((0, x)) = (0, f^D(x)) and f^E((1, x)) = (1, g^B(x)). Evidently (E, f^E) is recursively isomorphic to (A, f). Since E is LOGSPACE set-isomorphic to B by Lemma 2.3.14, it now follows from Lemma 3.1.1 that (E, f^E) is LOGSPACE isomorphic to some LOGSPACE structure (B, f^B). Hence (A, f) is recursively isomorphic to (B, f^B).


Next we consider what happens when f is a recursive injection from a recursive set A into itself and f has no infinite orbits. This means that (A, f) is in fact a finitary recursive permutation structure and we are in the situation of Theorem 3.2.4. But this time we also assume that all the orbits have the same fixed size, which will allow us to specify in advance the universe B.

Theorem 3.2.7. Let (A, f) be an infinite recursive permutation structure such that
all orbits have the same size q for some q E w. Then (A, f) is recursively isomorphic
to a LOGSPACE structure (B, fB), where B may be taken to be either Tal(w) or
Bin(w).

Proof: We may assume that A = Bin(w), as explained in the proof of Theorem 3.2.1. We may also assume that q > 1. Otherwise, f is the identity function id on A, in which case (A, f) = (Bin(w), id) is of course recursively isomorphic to (Tal(w), id) via the isomorphism that takes each binary number to its tally representation.
Now fix B = Bin(w).
Define the permutation f^{B_q} on B_q(w) by f^{B_q}(nq + r) = nq + r + 1 if r + 1 < q, and nq if r + 1 = q. Here q and n, r ≥ 0 are the appropriate representations of the natural numbers q, n, and r in B_q(w), and + denotes addition with respect to B_q(w). Thus f^{B_q} maps each q-ary number x to its successor unless x happens to be the immediate predecessor of a multiple of q, in which case x gets mapped to the largest multiple of q smaller than x. As a result, the orbits under f^{B_q} are of the form nq → (nq + 1) → ⋯ → (nq + q − 1) → nq, where n ≥ 0.
We claim that f^{B_q} is a ZEROSPACE function. Given x ∈ B_q(w) on the input tape, our machine first checks if x + 1 is divisible by q. This is immediate because x is written in reverse q-ary and the machine simply checks whether the first symbol of x is q − 1. If this first symbol is not q − 1, then x + 1 is not divisible by q, and the machine outputs x + 1 by using the ZEROSPACE algorithm of Lemma 2.1.1. Otherwise, it uses up zero space to subtract the constant q − 1 from x and output the result (Lemma 2.1.5). It now follows that (B_q(w), f^{B_q}) is a ZEROSPACE structure.
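The block permutation f^{B_q} acts on reverse q-ary strings (least significant digit first) as in the following Python sketch, which is added only as an illustration; only the first digit is inspected, which is what makes the original machine ZEROSPACE. Digits are written as decimal characters, so the toy assumes q ≤ 10.

```python
def f_Bq(x, q):
    """Successor within the block nq, nq+1, ..., nq+q-1, wrapping back to nq.

    x is a reverse q-ary string: x[0] is the least significant digit.
    """
    digits = [int(c) for c in x]
    if digits[0] != q - 1:                      # x + 1 is not a multiple of q
        digits[0] += 1                          # add 1: no carry can occur
    else:
        digits[0] -= q - 1                      # subtract q - 1: back to nq
    s = "".join(map(str, digits))
    return s if s.strip("0") else "0"           # normalize the all-zero case

# q = 3: the orbit of 0 is 0 -> 1 -> 2 -> 0, written in reverse ternary.
assert [f_Bq(x, 3) for x in ["0", "1", "2"]] == ["1", "2", "0"]
```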
Now we proceed to define a recursive isomorphism φ from (A, f) to (B_q(w), f^{B_q}). First, we recall that A = Bin(w) and define the set I by i ∈ I ⟺ (∀a < i)(O_f(a) ≠ O_f(i)). For each a ∈ A, let i(a) be the unique element of I ∩ O_f(a), let n(a) be the number of elements of I strictly less than i(a), and let r(a) be the unique r such that f^r(i(a)) = a. And now let φ : A → B_q(w) be defined by φ(a) = n(a)q + r(a). We claim that φ is a recursive isomorphism from (A, f) to (B_q(w), f^{B_q}). To show that φ is one-to-one, let a_1, a_2 ∈ A with a_1 ≠ a_2. If a_1 and a_2 are in the same orbit, then n(a_1) = n(a_2) but r(a_1) ≠ r(a_2), and hence φ(a_1) ≠ φ(a_2). If a_1 and a_2 are in different orbits, then n(a_1) ≠ n(a_2), and since r(a_1) and r(a_2) are both < q, we have φ(a_1) ≠ φ(a_2). To show that φ is onto, we observe that since every element of B_q(w) can be written as lq + m for some l ≥ 0 and 0 ≤ m < q, it follows that there is an element in the orbit of the (l + 1)th element of I whose φ value is lq + m. Now for each a ∈ A, we have n(f(a)) = n(a), and r(f(a)) = r(a) + 1 or r(f(a)) = 0. If r(f(a)) = r(a) + 1, then φ(f(a)) = n(f(a))q + r(f(a)) = n(a)q + r(a) + 1 = f^{B_q}(n(a)q + r(a)) = f^{B_q}(φ(a)). Similarly, φ(f(a)) = f^{B_q}(φ(a)) if r(f(a)) = 0. Thus φ is a recursive isomorphism from (A, f) to (B_q(w), f^{B_q}).
To complete the proof for the case B = Bin(w), we observe that by Lemma 2.3.10, there is a LOGSPACE set-isomorphism g from B_q(w) to B = Bin(w). Hence by Lemma 3.1.1, we can conclude that (B_q(w), f^{B_q}) is LOGSPACE isomorphic to some LOGSPACE structure B = (B, f^B). Consequently, (A, f) is recursively isomorphic to B = (B, f^B).
Finally, note that (A, f) is recursively isomorphic to the structure tal(B), whose universe is, of course, Tal(w). We now prove that tal(B) is a LOGSPACE structure. We first observe that Lemma 2.3.10 asserts, in addition to the existence of the LOGSPACE bijection g : B_q(w) → B = Bin(w) in the previous paragraph, the existence of nonzero constants c_1 and c_2 such that |g(x)| ≤ c_1|x| and |g^{-1}(x)| ≤ c_2|x| for every x in the domain of g or g^{-1}, as appropriate. Recall that f^{B_q} is in ZEROSPACE and hence |f^{B_q}(x)| ≤ c_3|x| for some constant c_3 > 0. We claim that f^B also has the property that |f^B(x)| ≤ c|x| for some constant c > 0. To see this, let x ∈ B. Then f^B(x) = g(f^{B_q}(g^{-1}(x))) and so |f^B(x)| ≤ c_1 c_3 c_2 |x|. It now follows from Lemma 3.1.3 (b) that tal(B) is a LOGSPACE structure.


We now generalize the previous result by weakening the assumption that every
orbit have the same fixed finite size.

Theorem 3.2.8. Let (A, f) be a finitary recursive permutation structure such that
for some q E w, there are infinitely many orbits of size q. Then (A, f) is recursively
isomorphic to a LOGSPACE structure (B, f ), where B may be taken to be either
Tal(w) or Bin(w).

Proof: Fix B to be either Tal(w) or Bin(w). Let C = {a ∈ A : |a|_f = q}. Now C is a recursive subset of A since to decide if x ∈ C, we need only compute x, f(x), ..., f^q(x), and then x ∈ C if and only if x, f(x), ..., f^{q-1}(x) are distinct and x = f^q(x). It follows from the previous theorem that (C, f↾C) is recursively isomorphic to some LOGSPACE structure (B, g^B). And it follows from Theorem 3.2.4 and our argument in the proof of Theorem 3.2.6 that (A \ C, f↾(A\C)) is recursively isomorphic to some LOGSPACE structure (E, f^E), where E ⊆ Tal(w). Now let K = B ⊕ E and let f^K be defined in the natural way by f^K((0, x)) = (0, g^B(x)) and f^K((1, x)) = (1, f^E(x)). Then there is a canonical recursive isomorphism between (A, f) and (K, f^K) which maps C to {0} × B and A \ C to {1} × E. It now follows from Lemma 2.3.14 that K is LOGSPACE set-isomorphic to B, and then from Lemma 3.1.1 that (K, f^K), and hence (A, f), is recursively isomorphic to a LOGSPACE structure (B, f^B).



Corollary 3.2.9. Every infinite finitary recursive permutation structure (A, f) with
a finite upper bound on the size of the orbits of (A, f) is recursively isomorphic to a
LOGSPACE structure (B,g), where B may be taken to be either Tal(w) or Bin(w).

Now in order to investigate finitary recursive permutations that do not have
infinitely many orbits of any fixed size, we first need to examine the possible spectra
of a recursive permutation. We primarily consider monic permutations, which is
sufficiently general because of Proposition 3.7 and Theorem 3.8 (b) in Cenzer and
Remmel [3]. We quote them next as a theorem.

Theorem 3.2.10. (a) For every finitary recursive permutation structure (A, f), there
is a recursive subset B of A such that f is monic on B and Spec(A, f) = Spec(B, f).
(b) Any two monic finitary recursive permutation structures with the same spectrum
are recursively isomorphic.

Theorem 3.2.11. For every nonempty r.e. subset P of w \ {0}, there is a monic
finitary LOGSPACE permutation f of a LOGSPACE subset A of Bin(w) such that
Spec(A, f) = P.

Proof: Since P is r.e., there is a machine M that eventually accepts all elements of P. Thus for every a ∈ P, there is a number s ≥ 0 such that M accepts a in s steps. This means that we can write P as the union of an effective increasing sequence P^s of sets so that it requires time s to check whether a ∈ P^s for any a and s. Define the set A by A = {(1^n, 1^{2^s}, 1^i) : n ∈ P^{s+1} \ P^s and i < n}. Then A is a LOGSPACE set because we can let the program for M run for exactly s + 1 steps to see if M accepts n at the end of those steps. In the proof of Theorem 3.2.1, we explained how s + 1 can be written on a worktape without using up space more than logarithmic in the input. Also it requires at most logarithmic space to check if i < n. Now define the permutation f : A → A by f((1^n, 1^{2^s}, 1^i)) = (1^n, 1^{2^s}, 1^{i+1}) if i + 1 < n, and f((1^n, 1^{2^s}, 1^i)) = (1^n, 1^{2^s}, 1) if i + 1 = n. Evidently f is in LOGSPACE. Moreover, f is evidently monic and finitary, and we have Spec(A, f) = P.
O
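A simplified variant of this construction can be sketched in Python as follows. This is an illustration rather than the construction above: stage is a hypothetical helper returning, for n in P, the stage s at which n enters P (so membership of a triple can be checked by running M for s + 1 steps), and the third coordinate is taken to range over 0, ..., n − 1 so that each orbit visibly has size n.

```python
def in_A(triple, stage):
    """(n, s, i) codes an element of A when n enters P exactly at stage s."""
    n, s, i = triple
    return n >= 1 and stage(n) == s and 0 <= i < n

def f(triple):
    """Cycle the third coordinate, giving one orbit of size n for each n in P."""
    n, s, i = triple
    return (n, s, i + 1) if i + 1 < n else (n, s, 0)

# Toy stage function for P = {2, 5}, with both elements entering at stage 4.
stage = {2: 4, 5: 4}.get
orbit, x = {(5, 4, 0)}, f((5, 4, 0))
while x != (5, 4, 0):
    assert in_A(x, stage)
    orbit.add(x)
    x = f(x)
assert len(orbit) == 5                    # the orbit of size 5 witnesses 5 in Spec(A, f)
```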

Next we strengthen the previous theorem to ensure that we can specify the
universe of A to be either Tal(w) or Bin(w) by assuming that tal(P) = {1" : n e P}
is a LOGSPACE set.

Theorem 3.2.12. For every subset P of w \ {0} such that tal(P) E LOGSPACE,
there is a monic finitary LOGSPACE permutation structure (B, f) with Spec(B, f)
= P, where B may be taken to be either Tal(w) or Bin(w).

Proof: First let B = Tal(w). Let P be enumerated in increasing order as n_0 < n_1 < ⋯. We define the permutation f : B → B in such a way that the orbit of size n_0 is an initial segment of B and, for each k > 0, the orbit of size n_k is an initial segment of B minus the orbits of size < n_k. Hence if 0 ≤ n < n_0, then we define f(tal(n)) = tal(n + 1) if n + 1 < n_0, and f(tal(n)) = tal(0) if n + 1 = n_0. And if n = n_0 + n_1 + ⋯ + n_{k-1} + i for some k > 0 and 0 ≤ i < n_k, then we define
f(tal(n)) = tal(n_0 + n_1 + ⋯ + n_{k-1} + i + 1) if i + 1 < n_k,
f(tal(n)) = tal(n_0 + n_1 + ⋯ + n_{k-1}) if i + 1 = n_k.
Evidently the permutation f is finitary and monic. Moreover, f is in LOGSPACE. This is because to compute f(1^n), our machine tests whether 1^m ∈ tal(P) for each m ≤ n + 1, writes each such 1^m in binary on a worktape, and keeps adding these binary numbers on a second worktape so as to eventually obtain bin(n_0 + n_1 + ⋯ + n_{k-1}); it then computes bin(i) = bin(n) − bin(n_0 + n_1 + ⋯ + n_{k-1}), and finally converts to tally the correct output f(1^n), depending on whether i + 1 < n_k or i + 1 = n_k, that is, whether n + 1 ∈ P or n + 1 ∉ P.
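For the case B = Tal(w), the permutation just described admits the following Python sketch over natural numbers. It is only an illustration: in_P is a hypothetical stand-in for the LOGSPACE test of membership in tal(P), P is assumed to be infinite as in the theorem, and no attempt is made to respect the logarithmic space bound.

```python
def f(n, in_P):
    """Partition w into consecutive blocks of sizes n_0 < n_1 < ... (the elements
    of P) and cycle each block, so the block of size n_k is one orbit of size n_k."""
    base, size, m = 0, None, 1
    while True:                         # enumerate P in increasing order
        if in_P(m):
            if base + m > n:            # n lies in the block of size m
                size = m
                break
            base += m                   # skip past the completed block of size m
        m += 1
    i = n - base                        # position of n inside its block
    return base + i + 1 if i + 1 < size else base

# Example: P = {2, 3} gives orbits {0, 1} and {2, 3, 4}.
in_P = lambda m: m in (2, 3)
assert [f(n, in_P) for n in range(5)] == [1, 0, 3, 4, 2]
```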








It is not evident how to compute the function f of the previous paragraph within logarithmic space in the case B = Bin(w). So we give a more elaborate argument where we partition Bin(w) into w copies B_n of Tal(w), and partition P into w LOGSPACE sets P_n. We then define LOGSPACE permutations f_n on B_n with spectrum P_n as in the proof of the previous theorem. Finally, the structures (B_n, f_n) are joined together to make a LOGSPACE structure (B, f^B). Instead of partitioning Bin(w) directly, we partition the more amenable set Bin(w) \ {1}*, and use the fact (Lemma 2.3.4) proven earlier that Bin(w) \ {1}* is LOGSPACE isomorphic to Bin(w).
We partition B = Bin(w) \ {1}* by letting B_0 = {0} ∪ {0^{n+1}1^{k+1} : n, k ∈ w} and, for i ≥ 1, by letting B_i = {bin(i)⌢0^{n+1}1^{k+1} : n, k ∈ w}. Evidently the sets B_i, i ≥ 0, are ZEROSPACE set-isomorphic to Tal(w) × Tal(w), and hence LOGSPACE set-isomorphic to Tal(w). Let ψ be a LOGSPACE isomorphism from Tal(w) onto B_0.
Now we proceed to partition P. Since tal(P) ∈ LOGSPACE, for each m ∈ w we can test 0, 1, ..., 1^m for membership in tal(P) within space O(log m) using binary counters. It follows that the function h such that
h(1^m) = 0 if 1^m ∉ tal(P), and
h(1^m) = 1^k, where k = card[tal(P) ∩ {1^r : r ≤ m}], if 1^m ∈ tal(P),
is in LOGSPACE. Now for each n ≥ 0, we define P_n = {m : h(1^m) = tal(2^r(2n + 1)) for some r ≥ 0}. Note that each P_n is infinite and that P = ⋃_n P_n. Since we can factor a number m in tally into an odd number times a power of two within logarithmic space by first converting m into binary, it follows that each P_n is a LOGSPACE set. It also follows that the function θ such that
θ(1^x) = 0 if 1^x ∉ tal(P),
θ(1^x) = (bin(n), 1^{r+1}) if h(1^x) = tal(2^r(2n + 1)) and n ≥ 1, and
θ(1^x) = (0, 1^{r+1}) if h(1^x) = tal(2^r),
is also a LOGSPACE function.
Now that we have partitioned B = Bin(w) \ {1}* and P, our idea is to uniformly construct a LOGSPACE monic permutation on B_n with spectrum P_n by making use of the permutation f constructed for the case B = Tal(w) above. First, given x ∈ Bin(w) \ {1}*, if x ∈ B_0, then we compute tal(n) = ψ^{-1}(x). Otherwise, we have x = bin(i)⌢0^{s+1}1^{t+1} for some i ≥ 1 and s, t ∈ w, and in this case we compute tal(n) = ψ^{-1}(0^{s+1}1^{t+1}). Then we compute θ on 0, 1, ..., 1^{n+1} and find n_0 < n_1 < ⋯ < n_{k-1} < n_k ≤ n + 1 (if there are any) such that the first component of θ(1^{n_j}) is bin(i). Now if 1^n = tal(n_0 + ⋯ + n_{j-1} + l) where l < n_j, we let F(x) = bin(i)⌢ψ(tal(n_0 + ⋯ + n_{j-1} + l + 1)). If l = n_j, then we let F(x) = bin(i)⌢ψ(tal(n_0 + ⋯ + n_{j-1})). Since θ, ψ, and ψ^{-1} are in LOGSPACE, it follows that F is a LOGSPACE function. Moreover, the structure (B_i, F↾B_i) is a finitary monic permutation structure with spectrum P_i. Consequently, (B, F) is a LOGSPACE finitary monic permutation structure with spectrum P. And since B = Bin(w) \ {1}* is LOGSPACE set-isomorphic to Bin(w), it follows from Lemma 3.1.1 that (B, F) is isomorphic to a LOGSPACE structure (Bin(w), f).


We can now state as a corollary our result about finitary recursive permutation
structures that do not have infinitely many orbits of any fixed size. Following that, we
state our final positive result about finitary monic recursive permutation structures.

Corollary 3.2.13. Let Q be an r.e. set with an infinite subset P such that tal(P) is
in LOGSPACE. Then any finitary monic recursive permutation structure (A, f) with
Spec(A, f) = Q is recursively isomorphic to a LOGSPACE permutation structure
(B, f ), where B may be taken to be either Tal(w) or Bin(w).

Proof: We may assume that 0 ∉ P. Fix B = Tal(w) or Bin(w). By the previous theorem, there is a monic finitary LOGSPACE permutation structure (B, g^B) with Spec(B, g^B) = P. Let C = {a ∈ A : |a|_f ∈ P}. Then C is an infinite recursive subset of A. The structures (C, f) and (B, g^B) are recursively isomorphic by Theorem 3.2.10 (b). It now follows from the proofs of Theorem 3.2.4 and Theorem 3.2.6 that (A \ C, f) is recursively isomorphic to some LOGSPACE structure (E, f^E), where E ⊆ Tal(w). Evidently (A, f) is recursively isomorphic to the structure (B ⊕ E, g), where g is g^B or f^E as appropriate. Since B ⊕ E is LOGSPACE set-isomorphic to B by Lemma 2.3.14, it follows from Lemma 3.1.1 that (A, f) is recursively isomorphic to a LOGSPACE structure (B, f^B).



Theorem 3.2.14. For any r.e. degree d, there is an infinite r.e. subset Q of w\ {0}
of degree d such that any monic finitary recursive permutation structure (A, f) with
Spec(A, f) = Q is recursively isomorphic to a LOGSPACE permutation structure
(B, fB), where B may be taken to be either Tal(w) or Bin(w).

Proof: Let D be an r.e. set of degree d and let Q = {2n + 2 : n E D} U {2n + 1 :
n E w}. Then Q has the same degree as D. Moreover, Q has a subset P, the set
of odd numbers, such that tal(P) is in ZEROSPACE. The result now follows from
Corollary 3.2.13.


As for negative results, we next state Theorem 3.14 of Cenzer and Remmel [3], which, together with Theorem 3.2.14 above, shows that the Turing degree of the set Spec(A, f) does not determine whether a finitary monic recursive permutation structure (A, f) is recursively isomorphic to a LOGSPACE permutation structure over Tal(w) or Bin(w).

Theorem 3.2.15. For any r.e. degree d, there is a set P of degree d such that no
finitary monic recursive permutation structure (A, f) with Spec(A, f) = P can be iso-
morphic to any primitive recursive permutation structure (Tal(w), g) or (Bin(w), h).

Finally, we conclude our investigation of permutation structures according to
the number and size of the orbits by stating a negative result about permutation
structures with infinitely many infinite orbits. This result is an immediate conse-
quence of Theorem 3.17 in Cenzer and Remmel [3].








Theorem 3.2.16. (a) There is a recursive permutation f of a recursive set A with
infinitely many orbits, all of type Z, such that (A, f) cannot be recursively embedded
in any LOGSPACE structure.
(b) There is a recursive injection f of a recursive set A with infinitely many orbits,
all of type w, such that (A, f) cannot be recursively embedded in any LOGSPACE
structure.

3.3 Abelian Groups

We now begin our investigation of LOGSPACE Abelian groups. The results here parallel
those for permutation structures. The next theorem is an immediate consequence of
Theorem 4.1 in Cenzer and Remmel [3].

Theorem 3.3.1. There is a recursive Abelian group that is not recursively isomorphic
to any LOGSPACE Abelian group.

Following standard notation in Algebra, we let Z denote the group of integers
with the usual addition. For each natural number n > 1, we let Z_n denote the cyclic
group of order n. And for each prime number p, we let Z(p^∞) denote the group of
rational numbers with denominator a power of p and addition modulo 1. Note that
Z(p^∞) is a subgroup of the group Q/Z, the group of rationals modulo 1. We explain
the structure of Z(p^∞) and its operation more fully in the proof of Lemma 3.3.3. And
finally, we denote the additive group of rational numbers by Q. We now define the
product of a sequence of groups, which is equivalent to the definition of the external
weak product of groups in Algebra.
Definition. For any sequence A_0, A_1, ... of groups, where each A_i = (A_i, +_i,
-_i, e_i), and each A_i ⊆ {0, 1}*, the direct product A = ⊕_n A_n is the group defined
to have domain A = {(a_0, a_1, ..., a_k)_k : k ∈ w, a_i ∈ A_i for 0 ≤ i ≤ k and a_k ≠ e_k},
identity e_A = 0, and group operations addition +_A and subtraction -_A defined as
follows:

For σ = (σ_0, σ_1, ..., σ_m)_m and τ = (τ_0, τ_1, ..., τ_n)_n, we define σ +_A/-_A τ = ρ =
(ρ_0, ρ_1, ..., ρ_k)_k, where k = max{i : [(i ≤ m) ∧ (i ≤ n) ∧ (σ_i +_i/-_i τ_i ≠ e_i)] ∨ [m <
i ≤ n] ∨ [n < i ≤ m]} and, for i ≤ k,

ρ_i = σ_i +_i/-_i τ_i for i ≤ min(m, n),
ρ_i = σ_i for n < i ≤ k,
ρ_i = τ_i for m < i ≤ k.
In particular, we let ⊕_w G denote the direct product of a countably infinite
number of copies of the group G.
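
To make the operation concrete: a minimal Python sketch on finitely supported tuples,
abstracting away the string encodings (the function and parameter names are illustrative
only; the empty tuple plays the role of the identity 0).

```python
def direct_product_op(sigma, tau, ops, identities):
    """Componentwise group operation on tuples; ops[i] is the operation of
    the i-th component group and identities[i] its identity.  Trailing
    identity components are dropped, matching the choice of k above."""
    length = max(len(sigma), len(tau))
    rho = []
    for i in range(length):
        a = sigma[i] if i < len(sigma) else identities[i]
        b = tau[i] if i < len(tau) else identities[i]
        rho.append(ops[i](a, b))
    while rho and rho[-1] == identities[len(rho) - 1]:
        rho.pop()
    return tuple(rho)

# Direct product of Z_2 and Z_3 with componentwise addition:
ops = [lambda a, b: (a + b) % 2, lambda a, b: (a + b) % 3]
ids = [0, 0]
assert direct_product_op((1, 2), (1, 1), ops, ids) == ()      # the identity
assert direct_product_op((1,), (0, 2), ops, ids) == (1, 2)
```
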
Definition. Let B be either Tal(w) or Bin(w). We say that the sequence A_0, A_1, ...
of groups, where A_n = (A_n, +_n, -_n, e_n), is fully uniformly LOGSPACE over B if
the following hold:
(i) The set {(b(n), a) : a ∈ A_n} is a LOGSPACE subset of B × B, where
b(n) = tal(n) if B = Tal(w) and b(n) = bin(n) if B = Bin(w).
(ii) The functions F(b(n), a, b) = a +_n b and G(b(n), a, b) = a -_n b are both
the restrictions of LOGSPACE functions from B^3 to B, where we set F(b(n), a, b) =
G(b(n), a, b) = 0 if either a or b is not in A_n.
(iii) The function from Tal(w) into B defined by e(tal(i)) = e_i is in LOGSPACE.
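
A toy instance of the interface in the definition above, with the constant sequence
A_n = Z_2 for every n and elements written as the strings "0" and "1". This is only a
sketch of the shape of clauses (i)-(iii); the space bounds, which are the real content of
the definition, are of course not captured by ordinary Python functions, and here the
index n is passed as an integer rather than as the string b(n).

```python
def member(n, a):
    """Clause (i): is a an element of A_n?  (Here A_n = Z_2 for all n.)"""
    return a in ("0", "1")

def F(n, a, b):
    """Clause (ii): a +_n b, returning "0" when a or b is not in A_n."""
    if not (member(n, a) and member(n, b)):
        return "0"
    return str((int(a) + int(b)) % 2)

G = F            # in Z_2 subtraction coincides with addition

def e(n):
    """Clause (iii): the identity e_n of A_n."""
    return "0"
```
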

Lemma 3.3.2. Let B be either Tal(w) or Bin(w). Suppose that the sequence A_i =
(A_i, +_i, -_i, e_i) of groups is fully uniformly LOGSPACE over B. Then we have
(a) The direct product A of the sequence A_i is recursively isomorphic to a LOGSPACE
group with universe contained in Bin(w).
(b) If A_i is a subgroup of A_{i+1} for all i, and if there is a LOGSPACE function f :
{0, 1}* → B such that for all a ∈ ⋃_i A_i, we have a ∈ A_{f(a)}, then the union ⋃_i A_i is
a LOGSPACE group with universe contained in B.
(c) If the sequence is finite, one of the components has universe B and the remaining
components have universes that are LOGSPACE subsets of Tal(w), then the direct
product is recursively isomorphic to a LOGSPACE group with universe B.
(d) If the sequence is infinite and if each component has universe B, then the direct
product is recursively isomorphic to a LOGSPACE group with universe Bin(w).
(e) If each component has universe Tal(w) and there is a uniform constant c such
that for each i and any a, b ∈ A_i, we have both |a +_i b| ≤ c(|a| + |b|) and |a -_i b| ≤
c(|a| + |b|), then the direct product is recursively isomorphic to a LOGSPACE group
with universe Tal(w).

Proof: (a) The domain of A is recursively isomorphic to A = ⋃_k Q_k, where
Q_k = {(a_0, a_1, ..., a_k)_k : a_i ∈ A_i and a_k ≠ e_k}. Recall that (a_0, a_1, ..., a_k)_k ↦
π(a_0 ⌢ 2 ⌢ a_1 ⌢ 2 ⌢ ... ⌢ a_{k-1} ⌢ 2 ⌢ a_k), where π(i_1 i_2 ... i_k) = 0^{i_1} 1 0^{i_2} 1 ... 0^{i_k} 1
(Lemma 2.3.16). Thus A ⊆ Bin(w). Since each a_i ∈ A_i ⊆ {0, 1}*, the string η(a_i) cannot
contain 001 = η(2). Thus testing that potential elements of A have the correct form requires
zero space.
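
Under the reading of the coding assumed here (each symbol i is sent to 0^i followed by a 1,
so that the separator symbol 2 becomes 001), the separator property used above can be
checked directly. This is a sketch under that assumption, not a transcription of Lemma
2.3.16 itself.

```python
def pi(symbols):
    """Encode a string over {0, 1, 2}, sending symbol i to '0' * i + '1'."""
    return "".join("0" * int(s) + "1" for s in symbols)

def encode_tuple(components):
    """Encode (a_0, ..., a_k): join the binary strings a_i with the separator
    symbol 2 and apply pi.  Because pi of a binary string never contains 001,
    the blocks pi(a_i) can be recovered by splitting at 001 = pi('2')."""
    return pi("2".join(components))

word = encode_tuple(["10", "01"])
assert word == pi("10") + "001" + pi("01") == "011" + "001" + "101"
assert "001" not in pi("10") and "001" not in pi("01")
```
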
Now suppose we have an element η(a_0) ⌢ 001 ⌢ η(a_1) ⌢ 001 ⌢ ... ⌢ 001 ⌢ η(a_k) of Q_k
on the input tape. For each 0 ≤ i ≤ k, our machine writes bin(|η(a_i)|) on a counter
tape. This is done using the occurrences of 001 on the input tape. The machine uses
bin(|η(a_i)|) to read the symbols of η(a_i) only and test whether η^{-1}(η(a_i)) ∈ A_i, and
also uses bin(|η(a_k)|) to check whether η^{-1}(η(a_k)) ≠ e_k. Since η is in ZEROSPACE
and the sequence A_i of groups is uniformly LOGSPACE over B, these procedures
can be carried out within logarithmic space.
Now we verify that the operations +_A and -_A are in LOGSPACE. Given
σ = (σ_0, σ_1, ..., σ_m)_m and τ = (τ_0, τ_1, ..., τ_n)_n on two input tapes, our machine
first checks the number of occurrences of η(2) = 001 to see if m ≥ n. This uses up
zero space. If m = n, then our machine checks whether σ_m +_m/-_m τ_m = e_m, whether
σ_{m-1} +_{m-1}/-_{m-1} τ_{m-1} = e_{m-1}, and so on, using a binary counter as in the previous
paragraph. If σ_i +_i/-_i τ_i = e_i for all 0 ≤ i ≤ m = n, then the machine outputs
0. Otherwise, suppose i ∈ {0, 1, ..., m} is the largest such that σ_i +_i/-_i τ_i ≠ e_i.
Again, using binary counters and by virtue of uniformity of the sequence A_i, the
machine outputs η[η^{-1}(η(σ_0)) +_0/-_0 η^{-1}(η(τ_0))] ⌢ 001 ⌢ ... ⌢ 001 ⌢ η[η^{-1}(η(σ_i)) +_i/-_i
η^{-1}(η(τ_i))], using up at most logarithmic space. If m > n, then the machine outputs
η[η^{-1}(η(σ_0)) +_0/-_0 η^{-1}(η(τ_0))] ⌢ 001 ⌢ ... ⌢ 001 ⌢ η[η^{-1}(η(σ_n)) +_n/-_n η^{-1}(η(τ_n))] ⌢
001 ⌢ η(σ_{n+1}) ⌢ 001 ⌢ ... ⌢ 001 ⌢ η(σ_m), and similarly if m < n.
(b) We certainly have ⋃_i A_i ⊆ B. Given a ∈ {0, 1}* on the input tape, to test
whether a ∈ A = ⋃_i A_i, it suffices to compose the LOGSPACE computation of f(a)
and the verification that (b(f(a)), a) ∈ {(b(n), c) : c ∈ A_n}. This composition is
in LOGSPACE because of uniformity, and the fact that the conversion of bin(f(a))
to tal(f(a)) is linear in |bin(f(a))| and hence logarithmic in |a|.
Our machine can carry out +_A/-_A in LOGSPACE as follows: Given inputs
a and b, it uses LOGSPACE compositions to compute and write bin(i), where i =
max{f(a), f(b)}. It then outputs a +_i/-_i b, which requires no more than logarithmic
space because of uniformity.
(c) We may assume without loss of generality that A_0 = B. Then for the finite
sequence A_0, A_1, ..., A_n, the universe of the direct product is evidently recursively
isomorphic to A_0 × A_1 × ... × A_n, which is LOGSPACE isomorphic to B by Lemma
2.3.14 (b). Since the direct product is ZEROSPACE isomorphic to a LOGSPACE
group by part (a), we can now apply Lemma 3.1.1 to reach our conclusion.
(d) First suppose B = Tal(w). Then the domain of the direct product is
recursively isomorphic to A = {(tal(n_0), tal(n_1), ..., tal(n_k))_k : k ∈ w, tal(n_i) ∈ A_i,
and tal(n_k) ≠ e_k}. The mapping from A to Bin(w) given by (tal(n_0), tal(n_1), ...,
tal(n_k))_k ↦ tal(n_0) ⌢ 0 ⌢ tal(n_1) ⌢ 0 ⌢ ... ⌢ 0 ⌢ tal(n_k) is a ZEROSPACE bijection. Hence
Lemma 3.1.1 now applies.
Now suppose B = Bin(w). Then the domain of the direct product is A =
{(σ_0, σ_1, ..., σ_k) : k ∈ w, σ_i ∈ Bin(w), and σ_k ≠ e_k}. For each σ ∈ Bin(w),
let σ⁻ be the result of deleting the 1 at the end of the string σ + 1. Then the
mapping between A and {0, 1, 2}* given by (σ_0, σ_1, ..., σ_k) ↦ σ_0⁻ ⌢ 2 ⌢ σ_1⁻ ⌢ 2 ⌢ ... ⌢ 2 ⌢ σ_k⁻
is a ZEROSPACE bijection. By Lemma 2.3.5, Lemma 2.3.9, and the Space Composition
Lemma I, the set {0, 1, 2}* is LOGSPACE set-isomorphic to Bin(w). It follows that
A is LOGSPACE set-isomorphic to Bin(w). Once again, Lemma 3.1.1 now applies.
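
One possible reading of the σ ↦ σ⁻ trick used in part (d), stated here as an assumption:
since Bin(w) is the reverse binary representation, the string for σ + 1 always ends in a 1,
and deleting it yields an arbitrary binary string; the symbol 2 then serves as a separator.
A sketch on natural numbers rather than on the strings themselves (names illustrative only):

```python
def rev_bin(n):
    """Reverse binary string of n (least significant bit first); 0 is the
    empty string, as in Bin(w)."""
    digits = ""
    while n > 0:
        digits += str(n % 2)
        n //= 2
    return digits

def minus(a):
    """a |-> a^-: the reverse binary string of a + 1 with its final 1
    removed; this hits every binary string exactly once."""
    return rev_bin(a + 1)[:-1]

def encode(components):
    """(sigma_0, ..., sigma_k) |-> sigma_0^- 2 sigma_1^- 2 ... 2 sigma_k^-."""
    return "2".join(minus(a) for a in components)

assert [minus(a) for a in range(5)] == ["", "0", "1", "00", "10"]
assert encode([3, 0, 2]) == "00221"     # "00" + "2" + "" + "2" + "1"
```
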
(e) If the sequence is finite, then this is part (c). Otherwise, by part (d),
the direct product is recursively isomorphic to a LOGSPACE group A with universe
Bin(w). Since |a +_i/-_i b| ≤ c(|a| + |b|), we have |a +_A/-_A b| ≤ c(|a| + |b|) also.
Lemma 3.1.3 now implies our result.



Lemma 3.3.3. Each of the groups Z, ⊕_w Z_k, Z(p^∞), and Q is recursively isomorphic
to a LOGSPACE group (a) with universe Bin(w), and (b) with universe Tal(w).

Proof: The mapping f : Z → w given by f(n) = 2n if n ≥ 0, and f(n) = 2(-n) - 1
if n < 0, is a recursive bijection. Let ⊕ and ⊖ denote, respectively, the corresponding
addition and subtraction operations on w, and let + and - denote the addition and
subtraction in Z. Suppose x, y ∈ w. If x and y are both even, that is, x = 2m and
y = 2n for m, n ∈ w, then we must have x ⊕ y = f(m + n) = 2(m + n) = x + y. And
x ⊖ y = f(m - n) = 2(m - n) = x - y if m - n ≥ 0, while f(m - n) = 2(n - m) - 1 =
y - x - 1 if m - n < 0. Hence x ⊖ y = x - y if m ≥ n, and x ⊖ y = y - x - 1 if
m < n. Similarly, if x is even and y is odd, that is, x = 2m and y = 2n - 1 for m,
n ∈ w, n ≥ 1, then x ⊖ y = x + y + 1. And we have x ⊕ y = x - y - 1 if m ≥ n,
while x ⊕ y = y - x if m < n. The case where x is odd and y is even is completely
symmetric. And finally, if x and y are both odd, that is, x = 2m - 1 and y = 2n - 1,
for m, n ≥ 1, then we have x ⊕ y = x + y + 1. And x ⊖ y = y - x if m < n, while
x ⊖ y = x - y - 1 if n < m, and x ⊖ y = 0 if m = n.

Whether we work with the binary or the tally representation, finding the m
and n corresponding to the inputs x and y requires only zero space. Then comparing
m and n uses up logarithmic space, and finally outputting x ⊕ y and x ⊖ y requires
logarithmic space. Hence by the Space Composition Lemma I, the group (w, ⊕, ⊖, 0),
with universe either Tal(w) or Bin(w), is a LOGSPACE group.
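
The case analysis can be checked mechanically. The following Python sketch writes out
⊕ and ⊖ on w exactly as above and verifies that they mirror + and - on Z through the
coding f; it checks only the algebra, not the space bound.

```python
def code(n):
    """The bijection f : Z -> w: n >= 0 goes to 2n, n < 0 goes to 2(-n) - 1."""
    return 2 * n if n >= 0 else 2 * (-n) - 1

def decode(x):
    return x // 2 if x % 2 == 0 else -(x + 1) // 2

def oplus(x, y):
    """Transported addition on w, following the case analysis."""
    if x % 2 == 0 and y % 2 == 0:                  # both even
        return x + y
    if x % 2 == 1 and y % 2 == 1:                  # both odd
        return x + y + 1
    even, odd = (x, y) if x % 2 == 0 else (y, x)   # mixed parity
    m, n = even // 2, (odd + 1) // 2
    return even - odd - 1 if m >= n else odd - even

def ominus(x, y):
    """Transported subtraction on w, following the case analysis."""
    if x % 2 == 0 and y % 2 == 0:                  # x = 2m, y = 2n
        return x - y if x >= y else y - x - 1
    if x % 2 == 1 and y % 2 == 1:                  # x = 2m - 1, y = 2n - 1
        if x == y:
            return 0
        return x - y - 1 if x > y else y - x
    if x % 2 == 0:                                 # x even, y odd
        return x + y + 1
    return x + y                                   # x odd, y even

for a in range(-8, 9):
    for b in range(-8, 9):
        assert oplus(code(a), code(b)) == code(a + b)
        assert ominus(code(a), code(b)) == code(a - b)
```
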








The mapping f : ⊕_w Z_k → B_k(w) defined by (a_0, a_1, ..., a_n) ↦ a_0 a_1 ... a_n,
where a_i ∈ Z_k and a_n ≠ 0, is a recursive bijection. The corresponding addition and
subtraction operations ⊕ and ⊖ in B_k(w) are coordinatewise and modulo k, but without
any carrying involved. Thus the group G = (B_k(w), ⊕, ⊖, 0) is a ZEROSPACE
group with the property that |a ⊕/⊖ b| ≤ max(|a|, |b|). By Lemma 2.3.10 and
Lemma 3.1.1, the group G is recursively isomorphic to a LOGSPACE group H with
universe Bin(w) and with the property that |a +_H/-_H b| ≤ c(|a| + |b|), for some
constant c. Hence by Lemma 3.1.3, the group H is also recursively isomorphic to a
group with universe Tal(w).
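
The carry-free, coordinatewise addition on digit strings can be written down directly;
a sketch, here with k = 3 and single-character digits (so k ≤ 10).

```python
def add_mod_k_digitwise(a, b, k):
    """Digitwise addition modulo k on strings over {0, ..., k-1}, with no
    carrying; trailing zeros are stripped so that the result is again a
    (reverse) k-ary representative.  Subtraction is the same with a
    digitwise difference."""
    length = max(len(a), len(b))
    digits = []
    for i in range(length):
        x = int(a[i]) if i < len(a) else 0
        y = int(b[i]) if i < len(b) else 0
        digits.append(str((x + y) % k))
    while digits and digits[-1] == "0":
        digits.pop()
    return "".join(digits)

# Over Z_3: componentwise 1+2, 2+1, 0+2 give 0, 0, 2 modulo 3, and the
# result is no longer than the longer input, as noted above.
assert add_mod_k_digitwise("12", "212", 3) == "002"
```
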
As for Z(p^∞), we will define a group G(p^∞) that represents the group Z(p^∞).
Then we will show that G(p^∞) is recursively isomorphic to a LOGSPACE group with
universe Bin(w) and to a LOGSPACE group with universe Tal(w). But first we need
to recall a few basic facts about the group Z(p^∞). This group's underlying set is the
set {[a/b] ∈ Q/Z : a, b ∈ Z and b = p^i for some i ≥ 0} of equivalence classes of the
congruence relation modulo 1, that is, the relation where a ∈ Q is related to b ∈ Q if
and only if a - b is an integer. So Z(p^∞) is generated by the set {[0]} ∪ {[1/p^n] : n ∈ N}.
We have [0] = [1] = [-1] = [2] = [-2] = ... = Z, and [1/p^n] = {(kp^n + 1)/p^n : k ∈ Z}.
Moreover, [1/p^n] generates the elements [2/p^n] = {(kp^n + 2)/p^n : k ∈ Z}, and [3/p^n] =
{(kp^n + 3)/p^n : k ∈ Z}, and so on, including [(p^n - 1)/p^n] = {(kp^n + p^n - 1)/p^n : k ∈ Z}.
The addition and subtraction of equivalence classes [a], [b] ∈ Z(p^∞) are defined as
follows: [a] +/- [b] = [a +/- b]. Note that for [x/p^n], where 1 ≤ x ≤ p^n - 1, we
have -[x/p^n] = [-x/p^n] = [(p^n - x)/p^n]. Also note that if x ≥ p^n, say, x = p^n + 3,
then [x/p^n] = [(p^n + 3)/p^n] = [1 + (3/p^n)] = [3/p^n].
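
These identities among classes can be checked concretely with rational arithmetic modulo 1;
a small illustration using Python's Fraction type.

```python
from fractions import Fraction

def add_mod_1(a, b):
    """Addition of classes in Q/Z on canonical representatives in [0, 1):
    add the representatives and reduce modulo 1."""
    return (a + b) % 1

p = 5
# -[x/p^n] = [(p^n - x)/p^n]:
assert -Fraction(3, p**2) % 1 == Fraction(p**2 - 3, p**2)
# wrapping past 1, in the spirit of [p^n + 3 / p^n] = [3/p^n]:
assert add_mod_1(Fraction(4, p), Fraction(4, p)) == Fraction(3, p)
```
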
We will now proceed to define G(p^∞). Let the string e_0 e_1 ... e_{n-1}, where
each e_i ∈ {0, 1, ..., p - 1} and e_{n-1} ≠ 0, represent the element (more precisely, equivalence
class) [e_0/p + e_1/p^2 + ... + e_{n-1}/p^n] of Z(p^∞). And let 0 ∈ B_p(w) represent the identity
element [0] of Z(p^∞). Define G(p^∞) to be the group with universe B_p(w) (whose
elements are written in reverse p-ary, as usual) and whose identity element is 0.
And given elements x_0 x_1 ... x_s and y_0 y_1 ... y_t of G(p^∞), the addition ⊕ in G(p^∞)
is defined as follows: Assuming without loss of generality that s ≥ t, we have
x_0 x_1 ... x_s ⊕ y_0 y_1 ... y_t = (x_0 x_1 ... x_{t-1} x_t + y_0 y_1 ... y_{t-1} y_t) x_{t+1} x_{t+2} ... x_s,
where + is the ordinary p-ary addition with carrying involved from the t-th symbol
to the (t - 1)-th symbol, and so on all the way back to the first symbol x_0 + y_0, which
is then written modulo p. Furthermore, no terminal segment of 0's is included. As
an example, suppose p = 5 and let the three strings 4020111, 3443, and 0002 be
elements of G(p^∞). Then 4020111 ⊕ 3443 = 3013111, and 3443 ⊕ 0002 = 4.
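
The addition rule transcribes directly into code; the two computations from the example
serve as checks. This is a sketch only, with digits limited to single characters (p ≤ 10).

```python
def g_add(x, y, p):
    """Addition in the string model of Z(p^infinity): d_0 d_1 ... d_{n-1}
    stands for the class of d_0/p + d_1/p^2 + ... + d_{n-1}/p^n.  The
    overlapping digits are added with carries running from the deeper
    position back toward position 0, a carry out of position 0 is dropped
    (reduction modulo 1), the longer tail is copied, and trailing zeros
    are removed."""
    if len(x) < len(y):
        x, y = y, x                        # make x the longer string
    t = len(y)                             # number of overlapping digits
    digits = [int(c) for c in x]
    carry = 0
    for i in range(t - 1, -1, -1):         # from position t-1 back to 0
        total = digits[i] + int(y[i]) + carry
        digits[i] = total % p
        carry = total // p
    return "".join(str(d) for d in digits).rstrip("0")

assert g_add("4020111", "3443", 5) == "3013111"
assert g_add("3443", "0002", 5) == "4"
```
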
We now claim that Z(p^∞) is represented by G(p^∞). To see this, we first note
that for each n ∈ N, the distinct elements generated by [1/p^n], namely, the p^n - 1
elements [x/p^n], where 1 ≤ x ≤ p^n - 1, correspond to elements of G(p^∞) according
to the following inductive rule: [1/p^n] corresponds to the string 0^{n-1}1, while [2/p^n]
is represented by the string 0^{n-1}2, and, in general, if [(x - 1)/p^n] is represented
by the string a, then [x/p^n] is represented by the string a ⊕ 1. So, for example,
[(p - 1)/p^n] is represented by 0^{n-1}(p - 1), and [(p + 1)/p^n] = [p/p^n] + [1/p^n]
is represented by 0^{n-2}1 ⊕ 0^{n-1}1 = 0^{n-2}11, while [(p^n - 1)/p^n] is represented by
(p - 1)(p - 1)...(p - 1), where p - 1 occurs n times. Now to check that Z(p^∞)
is completely represented by G(p^∞), it suffices to check that addition is preserved
among the generators of either group by the above correspondence. Consider two
generators [1/p^n] and [1/p^m] of Z(p^∞), and assume without loss of generality that
m > n. The corresponding "generators" in G(p^∞) are 0^{n-1}1 and 0^{m-1}1. In Z(p^∞),
we have [1/p^n] + [1/p^m] = [1/p^n + 1/p^m], while in G(p^∞), we have 0^{n-1}1 ⊕ 0^{m-1}1 =
0^{n-1}10^{m-n-1}1. Since the representation of [1/p^n + 1/p^m] in G(p^∞) is 0^{n-1}10^{m-n-1}1,
we see that addition is preserved among generators by the above correspondence.
To finish the proof for Z(p^∞), we observe that the group G(p^∞) = (B_p(w), ⊕, 0)
is a LOGSPACE group because its addition ⊕ is, more or less, p-ary addition, which,
moreover, has the property that |a ⊕ b| ≤ max(|a|, |b|). Now by Lemma 2.3.10 and
Lemma 3.1.1, the group G(p^∞) is recursively isomorphic to a LOGSPACE group H
with universe Bin(w) and whose addition is such that |a +_H b| ≤ c(|a| + |b|), for some
constant c > 0. Hence by Lemma 3.1.3, the group H is also recursively isomorphic
to a group with universe Tal(w).
Finally, we prove the result for the group Q. Each rational number r is the
sum of an integer ⌊r⌋ and a positive proper fraction of the form a/(p_1^{m_1} p_2^{m_2} ... p_n^{m_n}),
where the p_i are primes and the m_i ≥ 1, and where a is an integer strictly less than
p_1^{m_1} p_2^{m_2} ... p_n^{m_n}. Now let p and q be prime numbers and let m, n ≥ 1. By using
the Euclidean algorithm in reverse and back substitution, we can effectively write
the greatest common divisor (i.e., 1) of p^m and q^n as the linear combination 1 =
xp^m + yq^n, where x and y are (possibly negative) integers. Thus if |y| < p^m and
|x| < q^n, we can effectively represent 1/(p^m q^n) = (y/p^m) + (x/q^n) as the element
([y/p^m], [x/q^n]) of the group Z(p^∞) ⊕ Z(q^∞). If |y| ≥ p^m, that is, y = bp^m + c,
and |x| < q^n, then we can effectively represent 1/(p^m q^n) = (y/p^m) + (x/q^n) as the
element (b, [c/p^m], [x/q^n]) of the group Z ⊕ Z(p^∞) ⊕ Z(q^∞). Similarly, we can effectively
represent 1/(p^m q^n) as an element of the group Z ⊕ Z(p^∞) ⊕ Z(q^∞) if either
|x| ≥ q^n only, or both |x| ≥ q^n and |y| ≥ p^m. It follows that we can effectively
represent 1/(p_1^{m_1} p_2^{m_2} ... p_n^{m_n}) as an element of Z ⊕ Z(p_1^∞) ⊕ Z(p_2^∞) ⊕ ... ⊕ Z(p_n^∞).
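
The Bézout step is easy to make concrete; a sketch using the extended Euclidean
algorithm (the further normalization into an integer part plus classes in Z(p^∞) and
Z(q^∞), as described above, is omitted).

```python
from fractions import Fraction

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def split_prime_powers(p, m, q, n):
    """Write 1/(p^m q^n) as y/p^m + x/q^n, using 1 = x p^m + y q^n."""
    g, x, y = extended_gcd(p**m, q**n)
    assert g == 1
    return Fraction(y, p**m), Fraction(x, q**n)

a, b = split_prime_powers(2, 3, 5, 2)          # 1/(8 * 25)
assert a + b == Fraction(1, 2**3 * 5**2)
```
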
These considerations show that every rational can be uniquely and recursively represented
as an element of the group Z ⊕ (⊕_{primes p} Z(p^∞)). The addition operation
in Q can be represented in Z ⊕ (⊕_{primes p} Z(p^∞)) with coordinatewise addition and
with carrying involved, thus making it different from the usual addition operation of
Z ⊕ (⊕_{primes p} Z(p^∞)). Moreover, the group Z ⊕ (⊕_{primes p} Z(p^∞)) is recursively
isomorphic to a LOGSPACE group with universe Bin(w) and to a LOGSPACE group
with universe Tal(w). This is because the group Z and each of the groups Z(p^∞) are