Applications of infinitary proof theory

Andreas Weiermann

Institut für mathematische Logik und Grundlagenforschung
der Westfälischen Wilhelms-Universität Münster

Einsteinstr. 62, D-48149 Münster, Germany

Summary

This paper consists of four chapters which can be read independently. In Chapter 1 we introduce term rewriting theory for proving non-trivial closure properties of the primitive recursive functions. In Chapter 2 we show that the derivation lengths for a finite rewrite system for which termination has been shown by using the lexicographic path ordering are multiply recursive. Chapter 3 provides a termination proof for T using an interpretation of Gödel's system T into the natural numbers. In Chapter 4 we give a simple characterization of the provably recursive functions of PA.

1 Proving closure properties of the primitive recursive functions via term rewriting

Abstract

The termination of rewrite systems for parameter recursion, simple nested recursion and unnested multiple recursion is shown by using monotone interpretations on the natural numbers. We show that the resulting derivation lengths are primitive recursive. As a corollary we obtain transparent and illuminating proofs of the facts that the schemata of parameter recursion, simple nested recursion and unnested multiple recursion lead from primitive recursive functions to primitive recursive functions.

1.1 Introduction

A recursive function f is commonly defined by a defining equation (or, more generally, a system of equations) of the form

f(x) = t(λy.f(y), x)

where t involves some previously defined recursive functions. The term rewriting approach orients the defining equations into a rewrite system R_f containing the rule

f(x) → t(λy.f(y), x)

together with some rules for other functions which appear in t. If the rewrite system R_f is confluent and terminating then R_f yields a convenient model for nondeterministic computations of f. The derivation lengths function, D_{R_f}, which measures the maximal possible lengths of nondeterministic R_f-computations, then gives some intrinsic information about the computational complexity of the original recursive function f. For example, if D_{R_f} is bounded by a function in a complexity class C which is closed under "elementary in" (hence under coding) then f is in C-TIME and so f is in C. Furthermore, this will hold for any evaluation strategy for f. Obviously, the latter point gives essential information concerning the computational complexity of f. In particular, if C is the class of primitive recursive functions then f is primitive recursive.

The emphasis of this chapter is on investigations of interesting properties of primitive recursive functions. In our term rewriting approach, the defining equations for the primitive recursive functions are oriented naturally into rewrite rules according to the intended computation strategy. It is well known that such a rewrite system, when it computes only finitely many functions, can be proved terminating via the multiset path ordering. Hence, as shown by Hofbauer in [22], the resulting derivation lengths are bounded by a primitive recursive function. We concentrate on investigations of certain recursive definition schemata where it is not immediately obvious that they lead from primitive recursive functions to primitive recursive functions. In particular, we investigate the computational complexity of the rewrite systems for the schemata of parameter recursion, PRP, simple nested recursion, SNR, and unnested multiple recursion, UMR. The characteristic recursion equations for these schemata are

PRP : f(S(x), y) = h(x, y, f(x, p(x, y)))
SNR : f(S(x), y) = h(x, y, f(x, p(x, y, f(x, y))))
UMR : f(S(x), S(y)) = h(x, y, f(x, p(x, y)), f(S(x), y))

where in all three cases h and p denote previously defined primitive recursive functions.
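As an illustration, the three schemata can be transcribed directly into Python as higher-order functions. This is a sketch of ours; the names prp, snr, umr and the sample instantiation are not part of the formal development.

```python
def prp(g, h, p):
    """Parameter recursion: f(0,y)=g(y), f(S(x),y)=h(x,y,f(x,p(x,y)))."""
    def f(x, y):
        if x == 0:
            return g(y)
        return h(x - 1, y, f(x - 1, p(x - 1, y)))
    return f

def snr(g, h, p):
    """Simple nested recursion: f(S(x),y)=h(x,y,f(x,p(x,y,f(x,y))))."""
    def f(x, y):
        if x == 0:
            return g(y)
        return h(x - 1, y, f(x - 1, p(x - 1, y, f(x - 1, y))))
    return f

def umr(g1, g2, h, p):
    """Unnested multiple recursion:
    f(0,y)=g1(y), f(x,0)=g2(x), f(S(x),S(y))=h(x,y,f(x,p(x,y)),f(S(x),y))."""
    def f(x, y):
        if x == 0:
            return g1(y)
        if y == 0:
            return g2(x)
        return h(x - 1, y - 1, f(x - 1, p(x - 1, y - 1)), f(x, y - 1))
    return f

# Toy instance: h returns its recursive argument plus one, p is a projection,
# so the resulting f satisfies f(x, y) = y + x.
f = prp(lambda y: y, lambda x, y, r: r + 1, lambda x, y: y)
```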

These schemata figure prominently in the literature. That they do not lead outside the class of primitive recursive functions has been proved, for example, in Péter [31] or in Felscher [18] using coding techniques, and by Feferman [17] with elaborate proof-theoretic arguments. Simmons [40] has proved similar results for strong generalisations of PRP and SNR.


1.2 Basic definitions

(i) O^n denotes the number-theoretic function (m1, . . . , mn) ↦ 0.

(ii) For each n ∈ ℕ and for each 1 ≤ i ≤ n, P_i^n denotes the number-theoretic function (m1, . . . , mn) ↦ m_i.

(iii) S denotes the number-theoretic function m ↦ m + 1.

(iv) If f is an m-ary function and if g1, . . . , gm are n-ary functions then SUB_{m,n}(f, g1, . . . , gm) denotes the function

(m1, . . . , mn) ↦ f(g1(m1, . . . , mn), . . . , gm(m1, . . . , mn)).

(v) If g is any n-ary function and if h is an (n + 2)-ary function then REC_{n+1}(g, h) denotes the unique (n + 1)-ary function f which satisfies the following recursion equations:

f(0, m1, . . . , mn) = g(m1, . . . , mn)
f(m + 1, m1, . . . , mn) = h(m1, . . . , mn, m, f(m, m1, . . . , mn))
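Clauses (i)–(v) can be mirrored one-to-one in executable form. The following Python sketch is our own illustration (the names O, P, S, SUB, REC are hypothetical, chosen to match the text); it builds addition from the basis functions.

```python
def O(n):
    """n-ary constant-zero function O^n."""
    return lambda *ms: 0

def P(i, n):
    """Projection P_i^n onto the i-th of n arguments (1-based)."""
    return lambda *ms: ms[i - 1]

S = lambda m: m + 1          # successor

def SUB(f, *gs):
    """Composition SUB_{m,n}(f, g1, ..., gm)."""
    return lambda *ms: f(*(g(*ms) for g in gs))

def REC(g, h):
    """Primitive recursion REC_{n+1}(g, h) as in clause (v)."""
    def f(m, *ms):
        if m == 0:
            return g(*ms)
        return h(*ms, m - 1, f(m - 1, *ms))
    return f

# add(m, n) = m + n: add(0, n) = n, add(m+1, n) = S(add(m, n)).
add = REC(P(1, 1), SUB(S, P(3, 3)))
```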

Definition 1.1 The set PR of primitive recursive functions is given by the following inductive definition:

(i) S ∈ PR.

(ii) O^n ∈ PR for every natural number n.

(iii) P_i^n ∈ PR for every natural number n and every 1 ≤ i ≤ n.

(iv) If f ∈ PR is an m-ary function and if g1, . . . , gm ∈ PR are n-ary functions then SUB_{m,n}(f, g1, . . . , gm) ∈ PR.

(v) If g ∈ PR is an n-ary function and h ∈ PR is an (n + 2)-ary function then REC_{n+1}(g, h) ∈ PR.

Definition 1.2 For each natural number n we define a set N_n of primitive recursive function symbols as follows:

(i) S^1 ∈ N_1.

(ii) O^n ∈ N_n.

(iii) P_i^n ∈ N_n.

(iv) If f ∈ N_m and g1, . . . , gm ∈ N_n then SUB_{m,n}(f, g1, . . . , gm) ∈ N_n.

(v) If g ∈ N_n and h ∈ N_{n+2} then REC_{n+1}(g, h) ∈ N_{n+1}.

We write 0 for O^0 and S for S^1. Let N := ∪{N_n | n < ω}.

Definition 1.3 For each f ∈ N, we define the complexity number of f, comp(f), as follows:

(i) comp(S^1) := 1.

(ii) comp(O^n) := 1.

(iii) comp(P_i^n) := 1.

(iv) comp(SUB_{m,n}(f, g1, . . . , gm)) := max {comp(f), comp(g1), . . . , comp(gm), m, n} + 2.

(v) comp(REC_{n+1}(g, h)) := max {comp(g), comp(h), n} + 2.

Definition 1.4 Recursive definition of Φ(f) ∈ PR for f ∈ N.

(i) Φ(S^1) := S.

(ii) Φ(O^n) := O^n.

(iii) Φ(P_i^n) := P_i^n.

(iv) Φ(SUB_{m,n}(f, g1, . . . , gm)) := SUB_{m,n}(Φ(f), Φ(g1), . . . , Φ(gm)).

(v) Φ(REC_{n+1}(g, h)) := REC_{n+1}(Φ(g), Φ(h)).

Lemma 1.1 For every n-ary primitive recursive function f there exists a primitive recursive function symbol g such that Φ(g) = f.

Proof: f is defined according to the rules of PR. An unravelling of this definition yields the desired primitive recursive function symbol g in a canonical way.

We assume familiarity with the basic notions of rewriting theory. For a suitable

overview see Dershowitz and Jouannaud [16].

Definition 1.5 We define a set R_PR of rewrite rules (for calculating primitive recursive functions) over the signature N as follows:

(i) For n ≥ 1, O^n(x1, . . . , xn) →_{R_PR} 0.

(ii) P_i^n(x1, . . . , xn) →_{R_PR} x_i (1 ≤ i ≤ n).

(iii) SUB_{m,n}(f, g1, . . . , gm)(x1, . . . , xn) →_{R_PR} f(g1(x1, . . . , xn), . . . , gm(x1, . . . , xn)).

(iv) REC_{n+1}(g, h)(0, x1, . . . , xn) →_{R_PR} g(x1, . . . , xn).

(v) REC_{n+1}(g, h)(S(x), x1, . . . , xn) →_{R_PR} h(x1, . . . , xn, x, REC_{n+1}(g, h)(x, x1, . . . , xn)).

R_PR models the definition schemata of the primitive recursive functions in a natural way.

Definition 1.6 We define a set R_PRP of rewrite rules as follows.

(i) f(0, y) →_{R_PRP} g(y).

(ii) f(S(x), y) →_{R_PRP} h(x, y, f(x, p(x, y))).

R_PRP models the schema of primitive recursion with parameter substitution.

Definition 1.7 We define a set R_SNR of rewrite rules as follows.

(i) f(0, y) →_{R_SNR} g(y).

(ii) f(S(x), y) →_{R_SNR} h(x, y, f(x, p(x, y, f(x, y)))).

R_SNR models the schema of simple nested recursion.

Definition 1.8 We define a set R_UMR of rewrite rules as follows.

(i) f(0, y) →_{R_UMR} g1(y).

(ii) f(x, 0) →_{R_UMR} g2(x).

(iii) f(S(x), S(y)) →_{R_UMR} h(x, y, f(x, p(x, y)), f(S(x), y)).

R_UMR models the schema of unnested multiple recursion (with parameter substitution).

All of these schemata can be shown terminating using the lexicographic path ordering. This, however, does not give us sufficient information for proving that they do not go outside the primitive recursive functions, for the lexicographic path ordering also establishes the termination of the Ackermann function, which is well known not to be primitive recursive.


1.3 Monotone number-theoretic interpretations

Definition 1.9 For F ⊆ N let PR(F) be the least set T ⊇ F such that:

(i) If SUB_{m,n}(f, g1, . . . , gm) ∈ T then f, g1, . . . , gm ∈ T.

(ii) If REC_{n+1}(g, h) ∈ T then g, h ∈ T.

Let R_PR(F) be the restriction of R_PR to the set of terms over the signature PR(F).

Notation. If F is a signature then T(F) denotes the set of terms over F and G(F) denotes the set of ground terms over F. We assume that G(F) is always non-empty.

Definition 1.10 Recursive definition of the depth of t, dp(t), for t ∈ T(F) for some signature F.

(i) dp(t) := 0, if t is a variable or a constant function symbol.

(ii) dp(f(t1, . . . , tn)) := max {dp(t1), . . . , dp(tn)} + 1.

Definition 1.11 For a rewrite system R over a signature F we define the partial functions d_R and D_R, the derivation length functions, as follows.

(i) For t ∈ G(F), d_R(t) := max {n | (∃ t1, . . . , t_{n−1} ∈ G(F)) . t →_R t1 →_R · · · →_R t_{n−1}}.

(ii) For m ∈ ℕ, D_R(m) := max {d_R(t) | dp(t) ≤ m}.
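For a finite ground rewrite system the function d_R of Definition 1.11 can be computed by exhaustive search. The following Python sketch is a toy illustration of ours over a hypothetical two-rule system (it counts rewrite steps); it memoizes the longest derivation from each ground term.

```python
from functools import lru_cache

# Ground terms are nested tuples (symbol, (subterms...)).
# Toy rule set: a -> b, b -> c, applied at the root or inside any subterm.
RULES = {('a', ()): ('b', ()), ('b', ()): ('c', ())}

def successors(t):
    """All one-step rewrites of ground term t."""
    out = []
    if t in RULES:                       # rule applied at the root
        out.append(RULES[t])
    sym, args = t
    for i, s in enumerate(args):         # rule applied inside a subterm
        for s2 in successors(s):
            out.append((sym, args[:i] + (s2,) + args[i + 1:]))
    return out

@lru_cache(maxsize=None)
def d_R(t):
    """Maximal number of rewrite steps in a derivation starting from t."""
    succ = successors(t)
    return 0 if not succ else 1 + max(d_R(s) for s in succ)

t = ('f', (('a', ()), ('a', ())))        # each 'a' can do two steps: 2+2 = 4
```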

The ℕ-interpretations we shall give for proving termination of the schemata PR, PRP, SNR and UMR will be based on the family {F_n | n ∈ ℕ} which is defined as follows.

Definition 1.12 Given d ≥ 2, we define:

(i) F_0(a) := d^{a+1}. Here the index means exponentiation.

(ii) F_{n+1}(a) := F_n^{d(1+a)}(a). Here the index means iteration.

The following properties of the functions {F_n} are readily established:

Lemma 1.2 For every n ∈ ℕ, and for every a, b ∈ ℕ, we have

(i) a < F_n(a), and a < b ⟹ F_n(a) < F_n(b),

(ii) F_n(a) ≤ F_{n+1}(a),

(iii) a · d ≤ F_n(a), and hence Σ_{i=1}^{d} a_i ≤ F_n(max_{1≤i≤d} a_i),

(iv) F_n^d(F_{n+1}(a)) ≤ F_{n+1}(a + 1).
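For very small arguments the functions F_n of Definition 1.12 can be evaluated directly. The Python sketch below merely transcribes the two defining clauses; it is an illustration of ours, and the values explode almost immediately.

```python
def F(n, a, d=2):
    """F_0(a) = d^(a+1);  F_{n+1}(a) = F_n^{d(1+a)}(a), i.e. F_n iterated
    d(1+a) times starting from a.  Only feasible for tiny n, a."""
    if n == 0:
        return d ** (a + 1)
    x = a
    for _ in range(d * (1 + a)):
        x = F(n - 1, x, d)
    return x

# With d = 2: F_0(2) = 2^3 = 8, and F_1(0) = F_0(F_0(0)) = F_0(2) = 8.
```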

Theorem 1.1 R_PR(F) is terminating under primitive recursive interpretations in case that F ⊆ N is finite.

Proof: For f ∈ N_n define a number-theoretic interpretation H as follows:

H(f)(m1, . . . , mn) := F_{comp(f)}(m1 + · · · + mn).

Then H is a monotone primitive recursive interpretation under which the rules are reducing.

Definition 1.13 Recursive definition of a numeral m̄ for m ∈ ℕ: 0̄ := 0, and the numeral for m + 1 is S^1(m̄).

It follows easily that for each f ∈ N_n ∩ F the term f(m̄1, . . . , m̄n) reduces under R_PR(F) in a uniquely determined way to a numeral which corresponds to Φ(f)(m1, . . . , mn). In such a situation we say that R_PR(F) computes Φ(f).

Our characterisation results will depend on the following observation adapted from Hofbauer [22]:

Theorem 1.2 Suppose that R is a finite rewrite system over a finite signature F, for which termination can be proved using a number-theoretic interpretation I. Suppose further that there exists n ∈ ℕ such that for every f ∈ F and for all a1, . . . , ak ∈ ℕ, I(f)(a1, . . . , ak) ≤ F_n(a1 + · · · + ak), where the value of d in the definition of the F_n is taken to be at least 2 and greater than the arities of all function symbols of F. Then, for any closed term t,

d_R(t) ≤ F_{n+1}(dp(t)),

from which it follows immediately that D_R(m) ≤ F_{n+1}(m).

Proof: The proof is by induction on dp(t). It is clear that the ℕ-interpretation of a closed term t is a bound on the value of d_R(t). If t is a constant then, from the hypothesis, we obtain d_R(t) ≤ I(t) ≤ F_n(0) < F_{n+1}(dp(t)). If t = f(t1, . . . , tk) then

d_R(t) ≤ I(f)(I(t1), . . . , I(tk))
      ≤ F_n(I(t1) + · · · + I(tk))       by hypothesis
      ≤ F_n^2(max_i I(ti))               by Lemma 1.2 (iii), since d > k
      ≤ F_n^2(F_{n+1}(max_i dp(ti)))     by induction hypothesis
      ≤ F_{n+1}(dp(t))                   by Lemma 1.2 (iv), since d ≥ 2.

As a consequence of Theorem 1.2, it suffices to show in the sequel that termination proofs can be achieved using primitive recursive interpretations. As an immediate corollary, we have:

Theorem 1.3 D_{R_PR(F)} is primitive recursive in case that F ⊆ N is finite.

Remark. Theorem 1.1 is a special case of a more general result proved in [22].

Theorem 1.4 R_PRP is terminating and D_{R_PRP} is primitive recursive.

Proof: Define monotone number-theoretic interpretations as follows: I(0) := 0, I(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ℕ such that, for all a, b, c,

I(g)(b) ≤ F_n(b)
I(h)(a, b, c) ≤ F_n(a + b + c)
I(p)(a, b) ≤ F_n(a + b)

We then put

I(f)(a, b) := F_n^{d(1+a)}(a + b) with d ≥ 4.

Clearly, we have F_n(a + b) < I(f)(a, b) ≤ F_{n+1}(a + b), justifying inductively the assumption on g, h and p. To see that the rules are reducing, write a for I(x) and b for I(y):

I(f(0, y)) = F_n^{d(1+0)}(0 + b)
           = F_n^d(b)
           > F_n(b)                                  for d ≥ 2
           ≥ I(g(y))                                 by hypothesis.

I(h(x, y, f(x, p(x, y)))) ≤ F_n(a + b + F_n^{d(1+a)}(a + F_n(a + b)))
           ≤ F_n(a + b + F_n^{d(1+a)} F_n^2(a + b))  by Lemma 1.2 (iii)
           = F_n(a + b + F_n^{d(1+a)+2}(a + b))
           ≤ F_n^2 F_n^{d(1+a)+2}(a + b)             by Lemma 1.2 (iii)
           = F_n^{d(1+a)+4}(a + b)
           ≤ F_n^{d(1+a+1)}(a + b)                   for d ≥ 4
           < F_n^{d(1+a+1)}(a + 1 + b)
           = I(f(S(x), y)).

The next theorem subsumes the theorem above and is slightly more involved.


Theorem 1.5 R_SNR is terminating and D_{R_SNR} is primitive recursive.

Proof: Define monotone number-theoretic interpretations as follows: J(0) := 0, J(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ℕ such that, for all a, b, c,

J(g)(b) ≤ F_n(b)
J(h)(a, b, c) ≤ F_n(a + b + c)
J(p)(a, b, c) ≤ F_n(a + b + c)

We then put

J(f)(a, b) := F_n^{d^{1+a}}(a + b) with d ≥ 5.

We show first that the least z such that ∀a, b . J(f)(a, b) ≤ F_z(a + b) is n + 2. Now,

F_{n+2}(a + b) = F_{n+1}^{d(1+a+b)}(a + b)
             ≥ F_{n+1}^2(a + b)
             = F_n^{d(1+F_{n+1}(a+b))}(F_{n+1}(a + b))
             > F_n^{F_0(a)}(a + b)
             = F_n^{d^{1+a}}(a + b)
             = J(f)(a, b).

Note, however, that since d ≥ 2, we have F_{n+1}(d^d) < J(f)(d^d, 0). Therefore, ∀a, b . J(f)(a, b) ≤ F_{n+1}(a + b) does not hold.

To see that the rules are reducing under this interpretation, write a for J(x) and b for J(y):

J(f(0, y)) = F_n^{d^{1+0}}(0 + b)
           = F_n^d(b)
           > F_n(b)
           ≥ J(g(y)).

J(h(x, y, f(x, p(x, y, f(x, y)))))
  ≤ F_n(a + b + F_n^{d^{1+a}}(a + F_n(a + b + F_n^{d^{1+a}}(a + b))))
  ≤ F_n F_n F_n^{d^{1+a}} F_n F_n F_n F_n^{d^{1+a}}(a + b)   by repeated use of Lemma 1.2 (iii)
  = F_n^{d^{1+a}·2+5}(a + b)
  < F_n^{d^{1+a}·(d−1)+d^{1+a}}(a + b)                        for d ≥ 5
  = F_n^{d^{1+a+1}}(a + b)
  < F_n^{d^{1+a+1}}(a + 1 + b)
  = J(f(S(x), y)).

The result now follows.

Theorem 1.6 R_UMR is terminating and D_{R_UMR} is primitive recursive.

Proof: Define number-theoretic interpretations as follows: K(0) := 0, K(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ℕ such that, for all a, b, c, e,

K(g1)(a) ≤ F_n(a)
K(g2)(b) ≤ F_n(b)
K(h)(a, b, c, e) ≤ F_n(a + b + c + e)
K(p)(a, b) ≤ F_n(a + b)

We then put

K(f)(a, b) := F_n^{F_n^{d(1+a)}(a+b)+d(1+b)}(a + b) with d ≥ 4.

We show first that the least z such that ∀a, b . K(f)(a, b) ≤ F_z(a + b) is n + 2. As in the case of PRP, we have F_n^{d(1+a)}(a + b) ≤ F_{n+1}(a + b). By induction on b we have d(1 + b) ≤ F_0(b) for d ≥ 2. Thus d(1 + b) ≤ F_0(b) ≤ F_{n+1}(a + b) and

K(f)(a, b) ≤ F_n^{F_{n+1}(a+b)·2}(a + b)
           ≤ F_n^{F_{n+1}^2(a+b)}(a + b)                    by Lemma 1.2 (iii)
           < F_n^{d(1+F_{n+1}^2(a+b))}(F_{n+1}^2(a + b))
           = F_{n+1}^3(a + b)
           ≤ F_{n+1}^{d(1+a+b)}(a + b)                      for d ≥ 3
           = F_{n+2}(a + b).

Note, however, that F_{n+1}(a) < K(f)(a, 0), confirming minimality.

To see that the rules are reducing under this interpretation, write a for K(x) and b for K(y). It is clear that K(f(x, 0)) > K(g2(x)) and that K(f(0, y)) > K(g1(y)). Now,

K(h(x, y, f(x, p(x, y)), f(S(x), y)))
  ≤ F_n(a + b + K(f)(a, F_n(a + b)) + K(f)(a + 1, b))
  ≤ F_n^2(max {K(f)(a, F_n(a + b)), K(f)(a + 1, b)})        by Lemma 1.2 (iii), since d ≥ 4.

We show first that max {K(f)(a, F_n(a + b)), K(f)(a + 1, b)} = K(f)(a + 1, b):

K(f)(a, F_n(a + b)) = F_n^{F_n^{d(1+a)}(a+F_n(a+b))+d(1+F_n(a+b))}(a + F_n(a + b))
  ≤ F_n^{F_n^{d(1+a)} F_n^2(a+b)+F_n^2(a+b)}(F_n^2(a + b))  by Lemma 1.2 (iii)
  ≤ F_n^{F_n F_n^{d(1+a)} F_n^2(a+b)}(F_n^2(a + b))         by Lemma 1.2 (iii)
  = F_n^{F_n^{d(1+a)+3}(a+b)+2}(a + b)
  ≤ F_n^{F_n^{d(1+a)+4}(a+b)}(a + b)                        by Lemma 1.2 (iii)
  ≤ F_n^{F_n^{d(1+a+1)}(a+b)}(a + b)                        for d ≥ 4
  < K(f)(a + 1, b).

Now,

F_n^2(K(f)(a + 1, b)) = F_n^2 F_n^{F_n^{d(1+a+1)}(a+1+b)+d(1+b)}(a + 1 + b)
  = F_n^{F_n^{d(1+a+1)}(a+1+b)+d(1+b)+2}(a + 1 + b)
  ≤ F_n^{F_n^{d(1+a+1)}(a+1+b)+d(1+b+1)}(a + 1 + b)         since d > 2
  < F_n^{F_n^{d(1+a+1)}(a+1+b+1)+d(1+b+1)}(a + 1 + b + 1)
  = K(f)(a + 1, b + 1).

The result now follows.

From the above considerations we have a proof of the following classical result.

Theorem 1.7 The primitive recursive functions are closed under parameter recursion, simple nested recursion and unnested multiple recursion.

Proof: Assume that the involved functions Φ(g), Φ(h) and Φ(p) are primitive recursive, where g, h, p ∈ N are chosen according to Lemma 1.1. Then there exists a canonical rewrite system, namely R_PR({g, h, p}), which computes Φ(g), Φ(h) and Φ(p) in the natural way. The termination of this system can be shown according to Theorem 1.1 by a monotone interpretation H which involves only finitely many functions F_{l_1}, . . . , F_{l_m}. Let n := max {l_1, . . . , l_m}. Then the proofs of Theorems 1.4, 1.5 and 1.6 yield an obvious extension of H to I resp. J, K which shows that, for the rewrite system which computes the function introduced by the schema under consideration, the resulting derivation lengths function is primitive recursive. The number of steps for computing such a function on a Turing machine (or register machine) is primitive recursive (even elementary recursive) in its derivation lengths function. Hence an application of the (primitive recursive) bounded search operator yields the assertion.


The schemata of primitive recursion with parameter substitution and simple nested recursion are special cases of the following definition schema of general simple nested recursion:

(i) f(x, 0) = g(x);

(ii) f(x, y + 1) = t(λx.f(x, y), x, y);

where g is primitive recursive and t is built up using primitive recursive functions and the application functional App which is defined by App(λx.h(x), z) = h(z).

A slightly more general schema is given by the following definition schema of nested less-than recursion:

(i) f(x, 0) = g(x);

(ii) f(x, y) = t(λx.f(x, y), x, y);

where g and t are as before and where λx.f(x, y) denotes the course-of-values recursion of λx.f(x, ·) along y.

The schema of un-nested multiple recursion is a special case of the following schema of general un-nested multiple recursion:

(i) f(0, y) = g1(y);

(ii) f(x, 0) = g2(x);

(iii) f(x + 1, y + 1) = t(λx.f(x, y), f(x, y + 1), x, y);

where g and t are as before and f(x, y + 1) must not occur nestedly in λx.f(x, y).

The methods developed in this chapter easily yield that the derivation lengths of the rewrite systems for general simple nested recursion, nested less-than recursion and general un-nested multiple recursion are primitive recursive. Hence these schemata lead from primitive recursive functions to primitive recursive functions, and every evaluation strategy yields a primitive recursive algorithm for the defined function in question.

The definition schema of nested multiple recursion has the form:

f(x, y, z) = t(. . . , λz.f(x, y, z), f(x, y, z), x, y, z),

where t is as before, (. . .) denotes the corresponding course-of-values recursion along y resp. z, and the . . . indicate the possibility of more nested recursions. The termination of the corresponding rewrite system can be proved with the lexicographic path ordering; hence the derivation lengths function for this system is definable by nested multiple recursion, as shown in [50]. So any evaluation strategy for the latter rewrite system yields an algorithm which runs in nested multiple recursive time.

Similar closure results also hold for the classes of the ε0-recursive functions and of related classes of α-recursive functions.


2 Classifying the computational complexity of the lexicographic path orderings when applied to rewrite systems

Abstract

It is shown that a termination proof for a term rewriting system using a lexicographic path ordering yields a multiply recursive bound on the length of derivations, measured in the depth of the starting term. This result is essentially optimal, since for every multiply recursive function f a rewrite system (which reduces under the lexicographic path ordering) can be found such that its derivation length cannot be bounded by f.

Introduction

Let R be a (finite) rewrite system over a (finite) set F of function symbols. Let G(F) be the set of ground terms over F. (We assume that G(F) is not empty.) For m ∈ ℕ let

D(m) := max {n ∈ ℕ : (∃ t1, . . . , tn ∈ G(F))[dp(t1) ≤ m & t1 →_R · · · →_R tn]}

where dp(t) denotes the depth of a term t and →_R denotes the rewrite relation induced by R. D is called the derivation length function for R. If F is finite and if R is finite and terminating, then D is a (total) recursive function.

In order to investigate the power as well as the limitations of different termination proof methods, it is natural to ask how long derivation sequences can be if a certain termination ordering is used. Feasible bounds on D indicate good practical applicability of the rewrite system in question. On the other hand, infeasibly large bounds on D in the worst case may indicate a wide range of applicability of the termination proof method used. In the meantime several results on classifying derivation lengths have been established. For example it is known that a termination proof using a polynomial interpretation yields a doubly exponential bound on D (see, for example, [20]). Moreover it has been shown that a termination proof using a multiset path ordering (which is also known as recursive path ordering) yields a primitive recursive bound on D (cf. [22]).

In this chapter we show that a termination proof with a lexicographic path ordering yields a multiply recursive bound on D. In this case there is, informally speaking, an n > 1 such that D is eventually dominated by a function x ↦ Ack_n(x, 0, . . . , 0), where Ack_n denotes an n-ary Ackermann function. This result is optimal. Therefore we get a characterization of the power of termination proofs by lexicographic path orderings in terms of derivation lengths. It turns out that with respect to derivation lengths termination proofs using lexicographic path orderings are much more powerful than, for example, termination proofs with multiset path orderings. In particular the lexicographic path ordering can be used for proving termination of standard rewrite systems which compute n-ary Ackermann functions.
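For comparison, the (binary) Ackermann function can be sketched as follows. This is the standard textbook recursion, not a rewrite implementation; its defining rules are orientable by the lexicographic path ordering although the function is not primitive recursive.

```python
def ack(m, n):
    """Binary Ackermann function:
    ack(0, n)     = n + 1
    ack(m+1, 0)   = ack(m, 1)
    ack(m+1, n+1) = ack(m, ack(m+1, n))
    The while-loop unfolds the outer recursion iteratively to keep the
    Python call depth modest for small inputs."""
    while True:
        if m == 0:
            return n + 1
        if n == 0:
            m, n = m - 1, 1
        else:
            n = ack(m, n - 1)   # inner recursive call
            m -= 1
```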

2.1 Basic Definitions

Let F = {f1, . . . , fK} be a finite set of function symbols of fixed cardinality K. Let ≺ be a total precedence (i.e. a linear order) on F such that f_i ≺ f_j ⟺ i < j. Let T(F, X) be the set of terms over F and a countably infinite set X of variables. Let G(F) be the set of ground terms over F, which is assumed to be non-empty.

Recall that the depth function, dp, is defined on T(F, X) as follows:

(i) dp(t) := 0, if t is a constant symbol or a variable,

(ii) dp(f(t1, . . . , tn)) := max {dp(t1), . . . , dp(tn)} + 1, else.

The set of variables of a term t is denoted by Var(t). A mapping σ : X → T(F, X) is called a substitution. If t is a term we denote by tσ the result of substituting every occurrence of a variable x in t by σ(x). A term rewriting system (or rewrite system) R over T(F, X) is a finite set of ordered pairs ⟨s, t⟩ such that s, t ∈ T(F, X) and Var(t) ⊆ Var(s). The rewrite relation →_R on T(F, X) is the least binary relation on T(F, X) such that:

(i) if ⟨s, t⟩ ∈ R and σ : X → T(F, X), then sσ →_R tσ,

(ii) if s →_R t, then f(. . . , s, . . . ) →_R f(. . . , t, . . . ).

A rewrite system R is said to be terminating if there is no infinite sequence ⟨ti : i ∈ ℕ⟩ of terms such that

t1 →_R t2 →_R · · · →_R tm →_R · · · .

The lexicographic path ordering ≻_lpo on T(F, X) is defined as follows (cf. [16]):

s = f(s1, . . . , sm) ≻_lpo g(t1, . . . , tn) = t iff

(i) s_k ⪰_lpo t for some k ∈ {1, . . . , m}, or

(ii) f ≻ g & s ≻_lpo t_l for all l ∈ {1, . . . , n}, or

(iii) f = g & s1 = t1 & . . . & s_{i0−1} = t_{i0−1} & s_{i0} ≻_lpo t_{i0} & s ≻_lpo t_{i0+1} & . . . & s ≻_lpo t_n for some i0 ∈ {1, . . . , n}.
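The three clauses can be turned into a decision procedure for ground terms. The following Python sketch is our own illustration (terms are nested tuples (symbol, subterms); prec realizes the precedence f_i ≺ f_j ⟺ i < j; fixed arities and an injective precedence are assumed). It is exercised on a ground instance of an Ackermann-style rule.

```python
def lpo_gt(s, t, prec):
    """True iff s >_lpo t for ground terms s, t (clauses (i)-(iii))."""
    fs, ss = s
    ft, ts = t
    # (i): some immediate subterm of s is >= t
    if any(sk == t or lpo_gt(sk, t, prec) for sk in ss):
        return True
    # (ii): head of s has greater precedence and s majorizes all subterms of t
    if prec[fs] > prec[ft]:
        return all(lpo_gt(s, tl, prec) for tl in ts)
    # (iii): equal heads, compare argument tuples lexicographically
    if fs == ft:
        for i, (sk, tk) in enumerate(zip(ss, ts)):
            if sk != tk:
                return lpo_gt(sk, tk, prec) and all(
                    lpo_gt(s, tl, prec) for tl in ts[i + 1:])
    return False

prec = {'0': 0, 's': 1, 'a': 2}          # 0 < s < a
zero = ('0', ())
lhs = ('a', (('s', (zero,)), zero))      # a(s(0), 0)
rhs = ('a', (zero, ('s', (zero,))))      # a(0, s(0))
```

Here lhs ≻_lpo rhs holds by clause (iii), while the converse fails, as expected for an orientable rule.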

Lemma 2.1 (i) If s ≻_lpo t, then Var(t) ⊆ Var(s).

(ii) If R is a rewrite system such that →_R is contained in a lexicographic path ordering, then R is terminating.

Proof: The first assertion is clear. The second assertion is proved, for example, in [16].

For introducing the extended Grzegorczyk hierarchy we assume familiarity with the elementary theory of ordinals. A short introduction into this theory can be found, for example, in [33] or in [39]. Let α < ω^ω be an ordinal such that α ≠ 0. Then there are uniquely determined natural numbers m1 > . . . > mn and positive natural numbers a1, . . . , an (n > 0) such that

α = ω^{m1}·a1 + · · · + ω^{mn}·an.

In this situation we will write α =_NF ω^{m1}·a1 + · · · + ω^{mn}·an. Let

maxcoef(α) := max {a1, . . . , an}

be the maximal coefficient of α. For each natural number x and limit ordinal α =_NF ω^{m1}·a1 + · · · + ω^{mn}·an < ω^ω let

α[x] := ω^{m1}·a1 + · · · + ω^{mn}·(an − 1) + ω^{mn−1}·x.

For α, β < ω^ω, α = ω^n·a0 + · · · + ω^0·an and β = ω^n·b0 + · · · + ω^0·bn let

α # β := ω^n·(a0 + b0) + · · · + ω^0·(an + bn)

be the (commutative) natural sum of α and β.

For α < ω^ω we define recursively the extended Grzegorczyk hierarchy (cf. [32]) as follows:

(i) F_0(x) := x + 1 (x ∈ ℕ),

(ii) F_{α+1}(x) := F_α ∘ · · · ∘ F_α(x), (x + 1)-times (x ∈ ℕ),

(iii) F_α(x) := F_{α[x]}(x) (x ∈ ℕ), if α is a limit.

In the sequel we denote the x-th iterate of a function f by f^x.

Lemma 2.2 (i) F_α(x) < F_α(x + 1),

(iv) α < β & maxcoef(α) < x ⟹ F_α(x) < F_β(x).

Proof: A proof of this lemma can be found, for example, in [41].

A function f : ℕ → ℕ is called multiply recursive if there exists an α < ω^ω such that f is primitive recursive in F_α. (See, for example, [32] for a definition.)
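For toy inputs the hierarchy can be evaluated directly. In the Python sketch below (our own encoding) an ordinal below ω^ω is represented by the list of its coefficients, so that alpha = Σ_i ω^i · alpha[i]; the successor clause is the (x+1)-fold iteration and the limit clause uses α[x].

```python
def fgh(alpha, x):
    """F_alpha(x) for alpha = sum_i omega^i * alpha[i], alpha[i] >= 0.
    Feasible only for tiny ordinals and arguments."""
    if not any(alpha):                    # alpha = 0: F_0(x) = x + 1
        return x + 1
    if alpha[0] > 0:                      # successor: (x+1)-fold iteration
        pred = list(alpha)
        pred[0] -= 1
        for _ in range(x + 1):
            x = fgh(pred, x)
        return x
    # limit: replace omega^i by omega^(i-1) * x at the least nonzero exponent
    i = next(j for j, a in enumerate(alpha) if a > 0)
    fund = list(alpha)
    fund[i] -= 1
    fund[i - 1] = x                       # this is alpha[x]
    return fgh(fund, x)

# F_1(x) = 2x + 1, F_omega(2) = F_2(2) = 23.
```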


2.2 The interpretation theorem

Recall that K denotes the cardinality of F and that M denotes the maximum of the arities of function symbols from F. Let d be a fixed natural number such that K < d. We define a mapping π : G(F) → ℕ (which depends on d) recursively as follows. Let t = f_j(t1, . . . , tn) ∈ G(F) (n ≥ 0) be a ground term and assume that π(t1), . . . , π(tn) are defined. Then let

π(t) := F_{ω^{M+1}·j + ω^n·π(t1) + · · · + ω^1·π(tn) + 1}(d).

Lemma 2.3 (i) (Subterm) π(t) < π(f(. . . , t, . . . )),

(ii) (Monotonicity) π(s) < π(t) ⟹ π(f(. . . , s, . . . )) < π(f(. . . , t, . . . )).

Proof: This lemma follows from assertions (v) and (vi) of Lemma 2.2.

Lemma 2.4 If t ∈ G(F), then

F_{ω^{M+2}}^{1+d+dp(t)}(d) > π(t).

Proof by induction on dp(t).

Case 1: dp(t) = 0. Then t is a constant symbol f_j. Since j ≤ K < d we obtain

π(t) = F_{ω^{M+1}·j+1}(d) < F_{ω^{M+2}}(d) ≤ F_{ω^{M+2}}^{1+d}(d).

Case 2: dp(t) > 0. Let t = f_j(t1, . . . , tn). By induction hypothesis we see

F_{ω^{M+2}}^{1+d+dp(t_l)}(d) > π(t_l) for every l ∈ {1, . . . , n}.

Let

α := ω^{M+1}·j + ω^n·π(t1) + · · · + ω^1·π(tn) + 1.

Then π(t) = F_α(d). Since α < ω^{M+2} and

maxcoef(α) < F_{ω^{M+2}}^{1+d+dp(t)−1}(d),

we see by assertion (iv) of Lemma 2.2 that

F_α(d) < F_{ω^{M+2}}(F_{ω^{M+2}}^{1+d+dp(t)−1}(d)) = F_{ω^{M+2}}^{1+d+dp(t)}(d).


Lemma 2.5 (Main Lemma) Let s, t ∈ T(F, X) and dp(t) ≤ d. Let σ : X → G(F) be a ground substitution. If s ≻_lpo t, then

π(sσ) > π(tσ).

Proof by induction on dp(s). Since s ≻_lpo t, s is not a variable.

Case 1: s is a constant symbol f_i. Then t is closed, since s ≻_lpo t. Hence sσ = s and tσ = t. By subsidiary induction on dp(t) we show

F_{ω^{M+1}·i}^{dp(t)+1}(d) > π(t).

The claim follows since dp(t) ≤ d and F_{ω^{M+1}·i}^{d+1}(d) ≤ F_{ω^{M+1}·i+1}(d) = π(s).

Case 1.1: dp(t) = 0. Then t is a constant symbol f_j. From f_i ≻_lpo t we conclude d > i > j. Assertion (iv) of Lemma 2.2 yields

F_{ω^{M+1}·i}(d) > F_{ω^{M+1}·j+1}(d) = π(t).

Case 1.2: dp(t) > 0. Let t = f_j(t1, . . . , tn). From s ≻_lpo t we conclude d > i > j and s ≻_lpo t_l for all l ∈ {1, . . . , n}. The induction hypothesis yields

F_{ω^{M+1}·i}^{dp(t_l)+1}(d) > π(t_l)   (l ∈ {1, . . . , n}).

Assertion (iv) of Lemma 2.2 yields

F_{ω^{M+1}·i}^{dp(t)+1}(d) = F_{ω^{M+1}·i}(F_{ω^{M+1}·i}^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1)+···+ω^1·π(tn)+1}(F_{ω^{M+1}·i}^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1)+···+ω^1·π(tn)+1}(d)
  = π(t).

Case 2: s = f_i(s1, . . . , sm) and m > 0. The main induction hypothesis yields

dp(u) ≤ d & s_k ≻_lpo u ⟹ π(s_kσ) > π(uσ)

for every u ∈ T(F, X) and every k ∈ {1, . . . , m}. By subsidiary induction on dp(t) we show

F_α^{dp(t)+1}(d) > π(tσ), where α := ω^{M+1}·i + ω^m·π(s1σ) + · · · + ω^1·π(smσ).

From this the claim follows as in Case 1.

Case 2.1: dp(t) = 0. Then t is a constant symbol or a variable.

Case 2.1.1: t is a variable. Since s ≻_lpo t we see t ∈ Var(s). Thus t ∈ Var(s_k) for some k ∈ {1, . . . , m}. Assertion (i) of Lemma 2.3 yields

π(tσ) ≤ π(s_kσ) < F_α(d).

Case 2.1.2: t is a constant symbol f_j. Since s ≻_lpo t we conclude that i > j or s_k ⪰_lpo t for some k ∈ {1, . . . , m} must hold. If s_k ⪰_lpo t, then the main induction hypothesis yields

F_α(d) > π(s_kσ) ≥ π(tσ).

If i > j, then assertion (iv) of Lemma 2.2 yields

F_α(d) > F_{ω^{M+1}·j+1}(d) = π(tσ),

since ω^{M+1}·j + 1 < α and j < d.

Case 2.2: dp(t) > 0. Let

t = f_j(t1, . . . , tn).

Then dp(t_l) < dp(t) for l ∈ {1, . . . , n}. The subsidiary induction hypothesis yields

(∗) F_α^{dp(t_l)+1}(d) > π(t_lσ)

for l ∈ {1, . . . , n}. Since s ≻_lpo t we have to consider three subcases.

Case 2.2.1: s_k ⪰_lpo t for some k ∈ {1, . . . , m}. The main induction hypothesis yields

π(tσ) ≤ π(s_kσ) < F_α(d) ≤ F_α^{dp(t)+1}(d).

Case 2.2.2: i > j and s ≻_lpo t_l for all l ∈ {1, . . . , n}. The subsidiary induction hypothesis (∗) and assertion (iv) of Lemma 2.2 yield

F_α^{dp(t)+1}(d) = F_α(F_α^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1σ)+···+ω^1·π(tnσ)+1}(F_α^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1σ)+···+ω^1·π(tnσ)+1}(d)
  = π(tσ),

since α > ω^{M+1}·j + ω^n·π(t1σ) + · · · + ω^1·π(tnσ) + 1 and

maxcoef(ω^{M+1}·j + ω^n·π(t1σ) + · · · + ω^1·π(tnσ) + 1) < F_α^{dp(t)}(d).

Case 2.2.3: i = j and s1 = t1, . . . , s_{i0−1} = t_{i0−1} and s_{i0} ≻_lpo t_{i0} and s ≻_lpo t_l for l ∈ {i0 + 1, . . . , n}. The main induction hypothesis yields

π(s_{i0}σ) > π(t_{i0}σ).

The subsidiary induction hypothesis (∗) and assertion (iv) of Lemma 2.2 yield

F_α^{dp(t)+1}(d) = F_α(F_α^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1σ)+···+ω^1·π(tnσ)+1}(F_α^{dp(t)}(d))
  > F_{ω^{M+1}·j+ω^n·π(t1σ)+···+ω^1·π(tnσ)+1}(d)
  = π(tσ),

since α > ω^{M+1}·j + ω^n·π(t1σ) + · · · + ω^1·π(tnσ) + 1 and

maxcoef(ω^{M+1}·j + ω^n·π(t1σ) + · · · + ω^1·π(tnσ) + 1) < F_α^{dp(t)}(d).

Theorem 2.1 Let R be a rewrite system over T(F, X) such that →_R is contained in ≻_lpo. Put

d := 1 + max ({K} ∪ {dp(t) : (∃s)[⟨s, t⟩ ∈ R]}).

Let

t1 →_R · · · →_R tm,

where t1, . . . , tm ∈ G(F). Then

m ≤ F_{ω^{M+2}+1}(d + dp(t1)).

Proof: Assume that t1 →_R · · · →_R tm. Then by assumption t1 ≻_lpo · · · ≻_lpo tm. By the Main Lemma and the monotonicity of π we see

π(t1) > · · · > π(tm).

Thus

π(t1) ≥ m.

By Lemma 2.4,

F_{ω^{M+2}+1}(d + dp(t1)) > π(t1).

For m ∈ ℕ let

D(m) := max {n ∈ ℕ : (∃ t1, . . . , tn ∈ G(F))[dp(t1) ≤ m & t1 →_R · · · →_R tn]}.

This function D is by definition the derivation length function for R.

Corollary 2.1 Let R be a rewrite system over T(F, X) such that →_R is contained in a lexicographic path ordering. Then m ↦ D(m) is bounded by a multiply recursive function.


It can easily be seen from [22] and [32] that there is a hierarchy of rewrite systems such that for every rewrite system of this hierarchy the termination can be proved by using a lexicographic path ordering, and such that the resulting hierarchy of derivation lengths cannot be majorized by a single multiply recursive function.

3 A constructive termination proof for Gödel's system T

Abstract

Inspired by Pohlers' local predicativity approach to Pure Proof Theory and Howard's ordinal analysis of bar recursion of type zero we present a short, technically smooth and constructive strong normalization proof for Gödel's system T of primitive recursive functionals of finite types, by constructing an ε0-recursive function [·]0 on T so that a reduces to b implies [a]0 > [b]0.

The construction of [·]0 is based on a careful analysis of the Howard–Schütte treatment of Gödel's T and utilizes the collapsing function into ε0 which has been developed by the author for a local predicativity style proof-theoretic analysis of PA.

For given n let T_n be the subsystem of T in which the recursors have type level less than or equal to n + 2. (By definition, case distinction functionals for every type are also contained in T_n.) As a corollary of the main theorem of this chapter we obtain optimal bounds for the T_n-derivation lengths in terms of ω_{n+2}-descent recursive functions. The derivation lengths of type one functionals from T_n (hence also their computational complexities) are classified optimally in terms of <ω_{n+2}-descent recursive functions.

In particular we obtain that the derivation lengths function of a type one functional a ∈ T_0 is primitive recursive, thus any type one functional a in T_0 defines a primitive recursive function. Similarly we also obtain a full classification of T_1 in terms of multiple recursion.

3.1 How is it that infinitary proof theory can be applied

to finitary mathematics?

A fascinating feature of Pure Proof Theory is the intrinsic interplay of

different areas of logic like set theory, generalized recursion theory, classical

recursion theory and proof theory. The aims, methods and results of the Pure

Proof Theory program have been manifested and reported by Pohlers in [36].

In this chapter we are concerned with successful applications of versatile meth-

ods from rather advanced Pure Proof Theory to classical recursion theory, the

proof theory of first order Peano Arithmetic (and its fragments) and problems


of computational complexity. Having the right principles from advanced proof theory as guiding lines in mind it is indeed often possible to obtain deep and combinatorially involved results in a perspicuous way by miniaturizing or projecting down the appropriate abstract principles.

For example, by projecting Pohlers' local predicativity treatment of KPω, i.e. Kripke-Platek set theory with the infinity axiom, down to PA (which is of the same proof-theoretic strength as Kripke-Platek set theory without the infinity axiom) it was possible to reobtain the standard classification of the PA-provably recursive functions in a perspicuous way [47]. Later on this approach has been utilized for further investigations in proof and recursion theory by Arai [2]. Another perspicuous characterization of the provably recursive functions of PA could also be obtained by projecting Buchholz' treatment of KPω (via his simplified version of local predicativity [13]) down to PA [7]. (See also Chapter 4.) This treatment yields a perspicuous explanation of the intrinsic relationship between witness bounding and cut elimination. In fact, cut reduction corresponds to the composition of bounding functions whereas the elimination of cuts of highest possible complexity corresponds to an effective transfinite iteration of the witness bounding function in question.

In this chapter we are going to project Howard's ordinal analysis of bar recursion of type zero down to Gödel's T to obtain a constructive proof of strong normalization for T and to obtain as a byproduct a complete and optimal classification of the computational complexity of Gödel's T.

For this purpose we define by constructive means, which are somehow of lowest possible proof-theoretic complexity, a function [·]_0 : T → ω so that a reduces to b implies [a]_0 > [b]_0. Hence there will be no infinite rewrite sequences of terms, and the height of the reduction tree of a is bounded by [a]_0 for any a ∈ T.

The concrete definition of [·]_0 given in Section 2 looks involved at first sight. Therefore the reader may perhaps find some description of the corresponding phenomena occurring in impredicative proof theory informative.

First, we recall informally, and a little oversimplifying, some important features of the Pohlers-style analysis of ID_1, i.e. the theory of noniterated inductive definitions over the natural numbers, or KPω. His treatment is crucially based on a collapsing function D : ε_{Ω+1} → Ω and a corresponding deduction relation ≪, the essentially less than relation, on ε_{Ω+1}. Here Ω is the first uncountable ordinal and ε_{Ω+1} is the next ε-number above Ω. The relation ≪ is defined so that α ≪ β implies α < β and Dα < Dβ. The definition of ≪ can informally be understood and realized as follows:

Axiom_Ω(≪): α ≪ β iff α < β and the countable ordinal content of α is controlled by the countable ordinal information contained in β.

For the proof-theoretic analysis the ordinal assignment is organized so that for the final cut free derivation of a Σ^1_1-formula under consideration the ordinals of the premises of a rule are always essentially less than the ordinal assigned to the conclusion of a rule in that derivation. Finally we may apply the collapsing function D, which might be thought of as a proof-theoretic Mostowski collapse function, to the ordinal assignment of the cut free derivation and we obtain the desired countable ordinal bound on the Σ^1_1-complexity of the ID_1-provable (or KPω-provable) Σ^1_1-formulas.

Now recall the miniaturized collapsing function ψ : ε_0 → ω and the corresponding miniaturized essentially less than relation ≪ from [46]. Informally speaking we have (by miniaturizing Axiom_Ω(≪)):

Axiom_ω(≪): α ≪ β iff α < β and the finitary content of α, measured by the length of its term representation, is controlled by the finitary content of β.

The collapsing function ψ at argument α is then defined as the Mostowski collapse of the relation ≪ restricted to α.

By replacing in Pohlers' local predicativity treatment of ID_1 the ordinal Ω by ω, the collapsing function D : ε_{Ω+1} → Ω by ψ : ε_0 → ω and ID_1 by PA (this is the point where the projecting or miniaturization process comes into play) we more or less automatically obtain appropriate bounds on the Π^0_2-complexities of the PA-provable Π^0_2-sentences. In fact, the bounds obtained were even optimal for the fragments IΣ_{n+1} of PA.

In a next step we recognized that a local predicativity style treatment of Gödel's T is possible. In Schütte's exposition (cf. [39]) of Howard's weak normalization proof for Gödel's T we found an assignment [·]_0 so that if a reduces to b via a certain reduction strategy then [a]_0 > [b]_0. By matter of a routine inspection we found [b]_0 ≪ [a]_0, hence ψ[b]_0 < ψ[a]_0. As a corollary ψ yields <ε_0-recursive bounds on the T-definable number-theoretic functions. ψ also yields bounds for the derivation lengths of the restricted reduction relation.

For converting this argument into a strong normalization proof for general T-reductions some additional ideas had to be found. At a crucial point certain impredicativity phenomena were resolved in [46] by using an appropriately strong induction hypothesis. A detailed explanation of this idea is given in [46] and will not be recalled here.

We also have to deal with another problem which is related to the Howard-Schütte treatment of Gödel's T when one considers the fragments of T. (These fragments have not been considered in [39].) Let T_n be the subsystem of T in which all recursors have type level less than or equal to n + 2. Then the functions definable in T_n are <ω_{n+2}-recursive and therefore the realm of the ordinals less than ω_{n+2} should somehow be sufficient for analyzing T_n. On the other hand the Howard-Schütte assignment only allows a treatment of T_n within ω_{n+3} and it is by no means obvious how to lower the ordinals appropriately. The strong normalization proof for T given in [46] uses ordinals below ω_{n+3} in an essential way and therefore does not yield a solution of this problem.

To get an idea of how to overcome this difficulty (while staying on a firm basis) we go back to the corresponding phenomena in the ordinal analysis of the fragments of KPω treated in [35, 37]. The observation which is important for our purposes is that the cut-elimination result of [34]

(∗) ⊢^α_{Ω+1} ⟹ ⊢^{2^{Ω+α}}_{Ω}

for Σ(Ω)-formulas has been improved in [35] to

(∗∗) ⊢^α_{Ω+1} ⟹ ⊢^{Dα}_{Dα}.

Thus the simultaneous use of collapsing and eliminating cuts yields the desired optimal ordinal bounds. (A treatment via Buchholz' Ω-rule [11] or via Buchholz' simplified version of local predicativity [13] would also be appropriate here.)

Now the original Howard-Schütte assignment

[a^{στ} b^σ]_0 := 2^{[a^{στ} b^σ]_1} ⊗ ([a^{στ}]_0 ⊕ [b^σ]_0)

(here ⊕ denotes the (commutative) natural sum of ordinals and ⊗ denotes the (commutative) natural product of ordinals) has some similarity with (∗) since no collapsing is involved and cut elimination corresponds to exponentiation with respect to base 2. Motivated by (∗∗), the modification of [·]_0 proposed here, which is the most important point in our exposition, involves simultaneously collapsing and lowering the height of the exponential tower in question. It is

[a^{στ} b^σ]_0 := ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ)

where gσ denotes the level of the type σ. (The inclusion of gσ is needed for technical reasons.)

Another motivation for the definition of [a^{στ} b^σ]_0 can be taken from a fine structure analysis of Howard's ordinal analysis of bar recursion of type zero (cf. [25]). This analogy is far from being easy to explain. Nevertheless we will try to indicate some crucial points since they fit nicely into our general program of applying transfinite methods to finitary mathematics. Howard's approach is based on the notion of a measure of a term for a given height. Apparently there should be a connection between this notion and the definition of [·]_i, but exactly this has to be explained in more detail. Well, for heights larger than 3 the measure is defined via the function b, d ↦ 2^b·d. This is closely related to the definition of [ab]_i for i ≥ 2 which also involves exponentiation. For height 3 Howard uses the function b, d ↦ b + d whereas we again use the exponential function in the definition of [ab]_1. This difference can be explained by the difference in the nesting of recursion of the involved collapsing functions. Howard's function directly corresponds to the Aczel-Buchholz-Feferman function θ which involves nested ordinal recursion. Thus Howard's θ differs from the Buchholz function ψ_0 (cf. [9]) which only involves unnested ordinal recursion. Roughly speaking, b, d ↦ θ_b d corresponds in growth rate to b, d ↦ ψ_0(Ω^b + d). [This reminds one of the correspondence between the fast growing and the Hardy hierarchy.] For carrying through Howard's treatment in terms of the function ψ_0 one has to replace the function b, d ↦ b + d by b, d ↦ 2^b·d in the definition of measure for height 3. This is mirrored by the technical observation that in Howard's proof of Lemma 2.1 (cf. [25]) the argument on p. 112, l. 6-7 only works for θ but not for ψ_0. That argument only works for ψ_0 when 4 is replaced by b, d ↦ 2^b·d. With this change in mind one can define measure for height 2 in terms of b, d ↦ ψ_0(Ω·b + d).

Summing up, the analogy is then as follows. For height larger than 2 we may define measure in terms of the exponential function and the same is true for the definition of [·]_i for i ≥ 1. For height 2 we can define measure in terms of ψ_0, a function which involves only unnested recursion, and in this chapter we define [·]_0 in terms of a function ψ which is also defined by unnested recursion. In both cases optimal results are obtained.

The miniaturization process of replacing Ω by ω and D : ε_{Ω+1} → Ω by ψ : ε_0 → ω is reminiscent of the collapsing process relating the slow and the fast growing hierarchy in Girard's hierarchy comparison theorem [21]. In our setting this analogy can be found again by utilizing a finitary approximation approach to termination orderings used in rewriting theory due to Buchholz [8]. That approach has been extended and applied to proof and recursion theory in [2, 49].

By utilizing these results we can obtain again several applications of impredicative proof theory to questions of finitary proof theory in the slow growing spirit. Instead of replacing Ω by ω and D by ψ let us replace only the underlying ordering < on the ordinals in question by the first component <_0 of its Buchholz approximation system (<_k)_{k<ω}. Since the details are somehow related but involved we shall try to indicate a brief outline of the argument. In a first step we replace, as proposed above, in the definition of [·]_i the ordinal ω by Ω and ψ by the Buchholz function ψ_0 from [9]. Then it can be verified that the definition [ab]_0 := ψ_0(Ω ⊗ [ab]_1 ⊕ [a]_0 ⊕ [b]_0 + gσ) is well-defined since the ψ_0-term on the right side is in normal form. The proof given in this chapter yields that a →_1 b implies [a]_0 > [b]_0 and hence the ψ_0-based ordinal notation system T_{ψ_0} for the Howard-Bachmann ordinal yields a proof of strong normalization for T. The second step consists in verifying the pointwiseness of this proof. For this purpose one can adopt the definition of the orderings <_k; one can then verify that a →_1 b implies [a]_0 >_0 [b]_0. By a direct calculation, or by utilizing the translation (which is compatible with the concept of the <_k-orderings), one obtains via Mostowski collapses along the <_k appropriate complexity bounds for the derivation lengths in question. In fact, for given α let G_α(n) be the Mostowski collapse on the ordinals below α of norm at most n. Then:


(i) (λn.ψ(ω^k + n))_{k<ω} as well as (λn.G_{ψ_0(Ω^k)}(n))_{k<ω} classify the primitive recursive functions.

(ii) (λn.ψ(ω^{ω^k} + n))_{k<ω} as well as (λn.G_{ψ_0(Ω^{Ω^k})}(n))_{k<ω} classify the multiple recursive functions.

(iii) (λn.ψ(ω_l(k) + n))_{k<ω} as well as (λn.G_{ψ_0(Ω_l(k))}(n))_{k<ω} classify the <ω_l(ω)-recursive functions if l ≥ 1. Here ω_0(k) := k, Ω_0(k) := k, ω_{n+1}(k) := ω^{ω_n(k)} and Ω_{n+1}(k) := Ω^{Ω_n(k)}.

For more details on

The next step in the miniaturization program suggested in this chapter is to study the rôle of infinity in the analysis of T and PA. From the viewpoint of predicative and bounded arithmetic Gödel's T as well as PA have a high degree of impredicativity. So the question arises if the analysis of T or PA can be projected down to the realm of predicative arithmetic, bounded arithmetic and small classes of computational complexity.

Let us again look at the strong normalization proof for T given here. Transfinite ordinals come into play for dealing with unrestricted (impredicative) recursors and terms of the form (R_τ a b). But note that [·]_0 does not intrinsically assign transfinite ordinals to restricted recursor occurrences of the form (R_τ t^0).

What happens with T if we drop the impredicative unrestricted occurrences of recursors? Remember that the classical Hardy functions are defined as follows:

(i) H_0(x) := x.

(ii) H_{α+1}(x) := H_α(x + 1).

(iii) H_λ(x) := H_{λ[x]}(x) where λ is a limit and (λ[x])_{x<ω} is the canonical fundamental sequence converging to λ.

Recall that ω[x] = x and (α + λ)[x] = α + λ[x] where α + λ > λ. Then H_{ω·k}(x) = 2^k·x. From [14] we know that ψ(ω·k + n) directly corresponds to H_{ω·k}(n) = 2^k·n. Thus for finite ordinals the definition of [ab]_0 given here essentially agrees with the definition of Howard and Schütte, hence there is no genuine infinity involved in the definition of [ab]_0.
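The computation H_{ω·k}(x) = 2^k·x can be checked directly. The following sketch (illustrative, not from the paper) encodes ordinals below ω² as pairs (a, b) standing for ω·a + b and implements clauses (i)-(iii) together with the fundamental sequence rules just recalled:

```python
def H(a, b, x):
    """Hardy function H_{omega*a+b}(x) for ordinals below omega^2."""
    if a == 0 and b == 0:
        return x                    # (i)   H_0(x) = x
    if b > 0:
        return H(a, b - 1, x + 1)   # (ii)  H_{alpha+1}(x) = H_alpha(x + 1)
    # (iii) limit case, using (omega*(a-1) + omega)[x] = omega*(a-1) + x
    return H(a - 1, x, x)

# H_{omega*k}(x) = 2^k * x, as stated in the text
assert all(H(k, 0, x) == 2 ** k * x for k in range(5) for x in range(1, 8))
```

The design point is that each limit stage only consults the fundamental sequence at the current argument, which is exactly why no genuine infinity enters for finite stages.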

Let T^pred be the restriction of T in which only predicatively restricted occurrences of recursors are allowed. Then the restriction of [·]_0 to T^pred can be defined in terms of elementary functions. Indeed, in [6] it has been shown that T^pred exactly represents the elementary recursive functions. Thus T^pred is equivalent to Leivant's system of predicative recurrence in higher types [27]. By matter of inspection the derivation lengths for T^pred, hence also the derivation lengths for the simply typed λ-calculus, are bounded by an E_4-function.


3.2 Benefits from the achieved results

There is a lot of information on Gdel s T reported in the literature (cf., for

example, [4, 42, 45]). A recent overview can be found, for example, in [43]. In

this subsection we will explain without going too much into details how most

of these results can be drawn as corollaries from the main theorem and we will

indicate what has been achieved additionally here. By constructive means we

will define [ ]0 : T so that a reduces to b implies that [a]0 > [b]0. Obviously

an infinite sequence of rewritten terms would produce an infinite descending

chain of natural numbers, which of course is impossible. Thus [ ]0 witnesses

the strong normalization of T . In particular, given any term a there exists at

least one terminating sequence of rewritten terms starting with a, hence we have

achieved the weak normalization of T .

For a given natural number n let n be the numeral representing n. For a closed term a of type 0 the value, val(a), is the unique natural number k so that a eventually rewrites into k. In T a given term a of type 1 then represents the function mapping n to val(an). Assume that we have designed [·]_0 so that [n]_0 = n for any n < ω. Consider any rewrite sequence

an →_1 ... →_1 k

where k = val(an). Then

[an]_0 > ... > [k]_0 = k = val(an).

We will verify that there is a fixed ordinal α < ε_0 only depending on a and an α-recursive function f : ω → ω so that [an]_0 ≤ f(n) for all n < ω. Thus the function n ↦ val(an) is bounded by the α-recursive function f. Moreover, the number of steps for computing the value of an by some reasonable reduction strategy is bounded in terms of f(n). Hence n ↦ val(an) is elementary recursive in f (cf. [26]).

Let Der_a(n) be the height of the reduction tree for a at input n, i.e. the maximal length of rewrite sequences of terms starting with an. Then Der_a(n) ≤ f(n) so that every reasonable rewriting strategy for rewriting an will yield an algorithm terminating (essentially) in f(n) steps.

A similar complexity consideration can be carried out for general terms. Let Der_T(n), the derivation lengths function for T, be the maximal height of reduction trees for terms of depths bounded by n. The function Der_T is then classified as follows. By construction n ↦ max {[a]_0 : depth(a) ≤ n} is ε_0-recursive, thus so is Der_T.

Let T_n be the subsystem of Gödel's T in which all recursors have type level less than or equal to n + 2. Then the restriction [·]^n_0 of [·]_0 to T_n witnesses the strong normalization for T_n. Our proof shows that only unnested recursions along ordinals less than ω_{n+2} are needed for defining [·]^n_0. (The Howard-Schütte assignment [24, 39] only works for ordinals less than ω_{n+3} which is not appropriate for our purposes.) Thus the derivation lengths function for any type one functional a ∈ T_n is <ω_{n+2}-descent recursive, hence, as for example proved in [30], the T_n-definable functions are <ω_{n+2}-descent recursive. Moreover the derivation lengths function for T_n is ω_{n+2}-descent recursive. Since T_n can be employed for a functional interpretation of IΣ_{n+1} and related systems our results can be used for simplifying several proofs given by Parsons in his paper about n-quantifier induction [30].

The main theorem of this chapter also yields that T_0 defines only primitive recursive functions and that the derivation lengths function for any type one functional of T_0 is primitive recursive. In particular, we reobtain from our approach several non trivial closure properties of the primitive recursive functions proved by Parsons in [29]. This answers a question posed by Feferman in 1995 (private communication). We also obtain that the derivation lengths function for T_0 is elementary recursive in the Ackermann function. Similarly the computational complexity of T_1 can be classified in terms of multiple recursion.

By an adaptation of the methods of this chapter it is even possible to give a

full derivation lengths classification for the extensions of T based on recursions

along primitive recursive well-orderings.

We also can draw some nice (but well-known) corollaries about metamathematics from the results of this chapter. (See, for example, Takeuti's book on Proof Theory [44] for a detailed exposition. See also [12, 19].) Here we take advantage of our finitistic treatment of Gödel's T without unwinding finite terms into transfinite ones.

Given the standard representation for ε_0 and a code for some α < ε_0 we denote by PRWO(α) the statement: there does not exist a primitive recursive strictly descending chain of ordinals below α. A closer inspection of the definition of the function ψ when its domain is restricted to α shows that it can be carried out in PRA + PRWO(α). Thus the definition of [·]_0 can be carried out in PRA + PRWO(ε_0) and the definition of [·]_0 when restricted to T_n can be carried out in PRA + PRWO(ω_{n+2}). Hence the usual functional interpretation of PA can be carried out in PRA + PRWO(ε_0) and the functional interpretation of IΣ_{n+1} can be carried out in PRA + PRWO(ω_{n+2}). This shows PRA + PRWO(ε_0) ⊢ Π^0_2-Refl(PA) and PRA + PRWO(ω_{n+2}) ⊢ Π^0_2-Refl(IΣ_{n+1}), hence PRA + PRWO(ε_0) ⊢ Con(PA) and PRA + PRWO(ω_{n+2}) ⊢ Con(IΣ_{n+1}). These nice results are folklore. (See, for example, Takeuti's book on Proof Theory [44] for a proof. See also [12, 19].) As shown in Subsection 2.4 the proof-theoretic applications of the results of this chapter can be extended to arbitrary primitive recursive well-orderings ≺ for showing:

PRA + PRWO(≺) ⊢ Π^0_2-Refl(PA + TI(≺|)).

(Here TI(≺|) denotes the scheme of transfinite induction along the initial segments of ≺.) Hence, any number-theoretic function provably recursive in PA + TI(≺|) is definable by an unnested recursion along a strict initial segment of ≺.


3.3 Connections to term rewriting theory

This chapter is intended to provide powerful methods for proving termination of

term rewriting systems and higher order term rewriting systems. The definition

of [ ]0 is rather flexible and can easily be modified to show that T or one of

its subsystems Tn is closed under various strong recursion principles. Note

that proving termination via variants of [ ]0 automatically yields bounds on the

resulting computational complexities.

A general principle about an intrinsic relationship between reasonable ter-

mination orderings z" and the computational complexity of rewrite systems for

which z" proves termination has been proposed by Cichon. His principle states

that this relationship is established via the slow growing hierarchy along the

order type of z". This principle although the original exposition on it was

erraneous turned out to be a very fruitful idea and it has been verified for a

large number of examples [5, 6, 8, 15, 22, 23, 28, 50, 49, 48]. We have no doubt

that our results together with the outline given above yields that the slow grow-

ing hierarchy up to 0&!+1 classifies exactly the computational complexity of

T . The restrictions of T are classified with the corresponding fragments of the

slow growing hierarchy. Thus Cichon s principle also has been established

for T when one considers the ordering on T as a reasonable termination or-

0

dering for T . (At present time it does not seem possible to us to give a rigorous

definition of the notion reasonable termination ordering. But as reasonable

we consider at least orderings on ordinal notation systems which fit into the

concept of

now look for applications of term rewriting to recursion theory. Assume that we

are interested in an equationally defined complexity class C of number-theoretic

functions like the elementary recursive functions, the primitive recursive func-

tions, the multiple recursive functions. Then it is often possible to find canon-

ically a term rewriting system RC which exactly represents the members of C.

In nonpathological cases RC is confluent and strongly normalizing. A successful

derivation lengths classification for RC then yields intrinsic information on the

computational complexity of the members of C.

It is often possible to prove some non trivial closure properties for C by solving simple inequations via monotone interpretations for the rewrite rules. For example, in [15] the interested reader may find some few line proofs showing that the primitive recursive functions are closed under primitive recursion with parameter substitution, simple nested recursion and unnested multiple recursion. Implicitly this type of approach has been used also in this chapter. Let C_T be the class of number-theoretic functions definable in T. The recursion-theoretic classification of C_T falls out as a corollary from the derivation lengths classification of the corresponding rewrite system R_{C_T}, which of course is T itself. We think that our classification of C_T is as short, simple and direct as possible.

In addition, it seems to us that the trade-off theorem mentioned in [43] can also be obtained directly from our approach.

3.4 Technical summary

In the first subsection of Section 2 we recall following Schtte s exposition

[39] the definition of Gdel s T . Then we carry out in full detail the modified

definition of the Howard-Schtte assignment of numbers to terms of T . In

Subsection 2.2 we prove in detail by constructive means, i.e. the argument is

formalizable in PRA+ PRWO(0) by intuitionistic reasoning that a rewrites

to b implies that the natural number assigned to a is larger than the natural

number assigned to b. By the very definition of the assignment we can read

off the desired results immediately. In Subsections 2.3 and 2.4 we indicate how

our approach can be extended to T in the -formulation and to T enlarged

by transfinite recursors. Besides from knowledge of chapter VI of Schtte s

exposition [39] the chapter is entirely self contained at least from a technical

point of view.

3.5 Basic definitions

We follow essentially the notation of chapter VI of [39]. We suppose for simplic-

ity that all terms are closed. The successor functional is denoted by SUCC. The

n-th numeral is denoted by n. For dealing with the fragments of T which are

appropriate for functional interpretations of the fragments of PA we replace

for any type the iterator I by the recursor R of type 0(0). According to

Parsons exposition [30] we also add for any type a case distinction functional

D of type 0. For allowing general unrestricted reductions we modify the

1

definition of the one step reduction as follows.

1

Definition 3.1 →_1 is the least binary relation on terms so that

(i) K a b →_1 a.

(ii) S a b c →_1 a c (b c).

(iii) D_τ 0 a b →_1 a.

(iv) D_τ (SUCC t^0) a b →_1 b.

(v) R_τ 0 a b →_1 b.

(vi) R_τ (SUCC t^0) a b →_1 a t^0 (R_τ t^0 a b).

(vii) If b →_1 c then a b →_1 a c.

(viii) If a →_1 b then a c →_1 b c.
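Definition 3.1 can be animated by a small interpreter (an illustrative sketch, not part of the paper). Terms are nested pairs, the clauses (i)-(viii) are tried in order, and normalize counts one-step reductions, i.e. the length of one particular derivation. The encoding of addition as R m (K SUCC) n is an assumption made for the demonstration:

```python
def ap(f, *xs):
    """Left-associated application: ap(f, a, b) = ((f, a), b)."""
    for x in xs:
        f = (f, x)
    return f

def spine(t):
    """Decompose a term into its head constant/term and argument list."""
    args = []
    while isinstance(t, tuple):
        t, x = t
        args.append(x)
    return t, args[::-1]

def step(t):
    """One-step reduction following clauses (i)-(viii); None if t is normal."""
    if not isinstance(t, tuple):
        return None
    h, a = spine(t)
    if h == 'K' and len(a) >= 2:                           # (i)
        return ap(a[0], *a[2:])
    if h == 'S' and len(a) >= 3:                           # (ii)
        return ap(a[0], a[2], (a[1], a[2]), *a[3:])
    if h == 'D' and len(a) >= 3:
        if a[0] == '0':                                    # (iii)
            return ap(a[1], *a[3:])
        if isinstance(a[0], tuple) and a[0][0] == 'SUCC':  # (iv)
            return ap(a[2], *a[3:])
    if h == 'R' and len(a) >= 3:
        if a[0] == '0':                                    # (v)
            return ap(a[2], *a[3:])
        if isinstance(a[0], tuple) and a[0][0] == 'SUCC':  # (vi)
            t0 = a[0][1]
            return ap(a[1], t0, ap('R', t0, a[1], a[2]), *a[3:])
    f, x = t
    r = step(f)                                            # (viii)
    if r is not None:
        return (r, x)
    r = step(x)                                            # (vii)
    return None if r is None else (f, r)

def normalize(t):
    """Reduce to normal form, returning (normal form, number of steps)."""
    n = 0
    while True:
        r = step(t)
        if r is None:
            return t, n
        t, n = r, n + 1

def num(n):
    """The n-th numeral SUCC(...(SUCC 0)...)."""
    t = '0'
    for _ in range(n):
        t = ('SUCC', t)
    return t

def decode(t):
    k = 0
    while isinstance(t, tuple) and t[0] == 'SUCC':
        k, t = k + 1, t[1]
    return k
```

For example, normalize(ap('R', num(2), ('K', 'SUCC'), num(3))) reaches the numeral for 5, in accordance with clauses (v) and (vi).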

Definition 3.2 For α < ε_0 we define no(α) < ω as follows.

(i) no(0) := 0.

(ii) no(α) := n + no(α_1) + ... + no(α_n) if α = ω^{α_1} + ... + ω^{α_n} where α_1 ≥ ... ≥ α_n.

The natural sum is denoted by ⊕. The natural product is denoted by ⊗. For the reader's convenience we recall the definitions. Let α ⊕ 0 := 0 ⊕ α := α and α ⊗ 0 := 0 ⊗ α := 0. Now suppose α = ω^{β_0} + ... + ω^{β_n} > 0 with β_0 ≥ ... ≥ β_n, n ≥ 0, and β = ω^{β_{n+1}} + ... + ω^{β_m} > 0 with β_{n+1} ≥ ... ≥ β_m, m ≥ n + 1. Then

α ⊕ β := ω^{β_{π(0)}} + ... + ω^{β_{π(m)}},

where π is a permutation of {0, ..., m} such that β_{π(i)} ≥ β_{π(j)} for i ≤ j, and

α ⊗ β := (ω^{β_0 ⊕ β_{n+1}} ⊕ ... ⊕ ω^{β_0 ⊕ β_m}) ⊕ ... ⊕ (ω^{β_n ⊕ β_{n+1}} ⊕ ... ⊕ ω^{β_n ⊕ β_m}).

Proposition 3.1 (i) no(α ⊕ β) = no(α) + no(β).

(ii) no(α) + no(β) - 1 ≤ no(α ⊗ β) ≤ no(α)·no(β) if α ≠ 0 ≠ β.

(iii) no(α) ≤ 2·no(2^α).

(iv) no(2^α) ≤ 2^{no(α)}.
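These norm computations can be checked mechanically. In the following sketch (an illustration, not the paper's formalism) an ordinal in Cantor normal form is a descending tuple of its exponents, so 0 = (), 1 = ((),) and ω = (((),),); osum and oprod implement ⊕ and ⊗, and no implements Definition 3.2:

```python
from functools import cmp_to_key

def cmp_ord(a, b):
    """Compare CNF ordinals coded as descending tuples of exponents."""
    for x, y in zip(a, b):
        c = cmp_ord(x, y)
        if c:
            return c
    return (len(a) > len(b)) - (len(a) < len(b))

def osum(a, b):
    """Natural sum: merge the exponent lists in descending order."""
    return tuple(sorted(a + b, key=cmp_to_key(cmp_ord), reverse=True))

def oprod(a, b):
    """Natural product: distribute omega^e * omega^f = omega^{e (+) f}."""
    if not a or not b:
        return ()
    return tuple(sorted((osum(e, f) for e in a for f in b),
                        key=cmp_to_key(cmp_ord), reverse=True))

def no(a):
    """Norm of Definition 3.2: number of summands plus norms of exponents."""
    return len(a) + sum(no(e) for e in a)

one, two = ((),), ((), ())
omega = (one,)
assert no(omega) == 2
assert no(osum(omega, two)) == no(omega) + no(two)    # Proposition 3.1(i)
assert no(omega) + no(two) - 1 <= no(oprod(omega, two)) <= no(omega) * no(two)
```

Here omega ⊗ two evaluates to the code of ω·2, matching the distribution clause of the definition above.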

The following definition is taken from Weiermann (1996) [47]. Let Ś(x) := F_5(x + 100) be a reasonably fast growing primitive recursive function. Here F_0(x) := 2^x and F_{n+1}(x) := F_n^{x+1}(x). As expected the functions F_n satisfy some standard majorization properties like F_n(x) < F_n(x + 1) and F_n(x) < F_{n+1}(x). We state at the appropriate places those properties of the functions F_n which are needed during the proof.
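For very small inputs these functions can be run directly; the following sketch shows the definition and how quickly the hierarchy grows (already F_2 is infeasible to evaluate, and Ś(x) = F_5(x + 100) is astronomically large):

```python
def F(n, x):
    """F_0(x) = 2^x and F_{n+1}(x) = F_n^{x+1}(x), the (x+1)-fold iterate."""
    if n == 0:
        return 2 ** x
    for _ in range(x + 1):   # iteration count fixed at the original x + 1
        x = F(n - 1, x)
    return x

assert F(0, 3) == 8
assert F(1, 2) == 65536      # F_0(F_0(F_0(2))) = 2^(2^(2^2))
assert F(0, 2) < F(1, 2)     # a majorization property F_n(x) < F_{n+1}(x)
```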

Definition 3.3 Recursive definition of ψα < ω for α < ε_0:

ψα := max ({0} ∪ {ψβ + 1 : β < α & no(β) ≤ Ś(no(α))}).

It follows from [14] or [19] that this definition can be carried out in PRA + PRWO(ε_0).

Proposition 3.2 (i) ψk = k.

(ii) ψk ≤ ψ(α + k).

(iii) no(α) ≤ ψα.

(iv) ψ(β + 1) ≥ 1 + ψβ.

(v) ψ(α ⊕ ψβ) ≤ ψ(α ⊕ β).

(vi) α < β & no(α) ≤ Ś(no(β)) ⟹ ψα < ψβ.

(vii) α ≥ ω ⟹ ψα ≥ Ś(no(α)).


Proof. The assertions 1, 2, 3, 4 and 6 follow from the definition of ψ.

Assertion 5 is proved by induction on β. If α = 0 then assertion 1 yields ψ(α ⊕ ψβ) = ψ(ψβ) = ψβ = ψ(α ⊕ β). Assume that α > 0. Then no(α) > 0. If β = 0 then ψ(α ⊕ ψβ) = ψα = ψ(α ⊕ β). Assume that β > 0. Then ψβ = ψγ + 1 for some γ < β such that no(γ) ≤ Ś(no(β)). The induction hypothesis yields ψ(α ⊕ ψβ) = ψ(α ⊕ ψγ + 1) = ψ(α ⊕ 1 ⊕ ψγ) ≤ ψ(α ⊕ 1 ⊕ γ). If γ + 1 = β then ψ(α ⊕ 1 ⊕ γ) = ψ(α ⊕ β). If γ + 1 < β then no(α ⊕ 1 ⊕ γ) ≤ no(α) + 1 + no(γ) ≤ no(α) + 1 + Ś(no(β)) ≤ Ś(no(α ⊕ β)). Hence, by definition, ψ(α ⊕ 1 ⊕ γ) < ψ(α ⊕ β) since α ⊕ 1 ⊕ γ < α ⊕ β.

For the proof of assertion 7 note that for α ≥ ω we have Ś(no(α)) < α and no(Ś(no(α))) ≤ Ś(no(α)).

We now refine Schütte's exposition of Howard's treatment of Gödel's T.

Definition 3.4 Recursive definition of gτ:

(i) g0 := 0.

(ii) g(στ) := max {gσ + 1, gτ}.

Definition 3.5 (i) [0]_i := 0.

(ii) [SUCC]_0 := 1.

(iii) [SUCC]_i := 0 if i > 0.

(iv) [a^τ]_i := 1 if i ≤ gτ and a^τ is a combinator.

(v) [a^τ]_i := 0 if i > gτ and a^τ is a combinator.

(vi) [R_τ]_i := 1 if i ≤ gτ + 1.

(vii) [R_τ]_i := ω if i = gτ + 2.

(viii) [R_τ]_i := 0 if i > gτ + 2.

(ix) [R_τ t^0]_0 := [t^0]_0 + 1.

(x) [R_τ t^0]_i := 1 if 1 ≤ i ≤ gτ + 1.

(xi) [R_τ t^0]_i := [t^0]_0 + 1 if i = gτ + 2.

(xii) [R_τ t^0]_i := 0 if i > gτ + 2.

(xiii) [a^{στ} b^σ]_i := 2^{[a^{στ} b^σ]_{i+1}} ⊗ ([a^{στ}]_i ⊕ [b^σ]_i) if 1 ≤ i ≤ gσ and a^{στ} is not a recursor.

(xiv) [a^{στ} b^σ]_i := [a^{στ}]_i if i > gσ and a^{στ} is not a recursor.

(xv) [a^{στ} b^σ]_0 := ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ) if a^{στ} is not a recursor.


The definition of [a^{στ} b^σ]_0 has been extracted by a fine structure analysis of Howard's treatment of bar recursion of type zero [25]. Our final goal is to prove

a →_1 b ⟹ [a]_0 > [b]_0.

Corollary 3.1 (i) (∀a^τ)(∃k ∈ ω)(∀i > k) [a^τ]_i = 0.

(ii) [a^τ]_0 < ω.

(iii) [a^τ]_i < ε_0.

(iv) i ≤ gτ ⟹ [a^τ]_i > 0.

Proof. Assertions 1, 2 and 3 are easy to see. Assertion 4 follows by a straightforward induction on the length of a^τ. Note that if a^τ is a constant then [a^τ]_i > 0 holds for i ≤ gτ.

Lemma 3.1 (i) [n]_0 = n.

(ii) i ≥ 1 ⟹ [n]_i = 0.

Lemma 3.2 (i) a^{στ} b^σ ≠ R_τ t^0 ⟹ [a^{στ}]_i ≤ [a^{στ} b^σ]_i.

(ii) i ≤ gσ ⟹ [b^σ]_i ≤ [a^{στ} b^σ]_i.

Proof. 1. This follows immediately from the definition (cf. [39]).

2. If a^{στ} b^σ ≠ R_τ t^0 then [a^{στ} b^σ]_0 = ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ) ≥ [b^σ]_0 by Proposition 3.2. If i ≥ 1 then the assertion holds by definition. If a^{στ} b^σ = R_τ t^0 then gσ = 0 and [R_τ t^0]_0 > [t^0]_0 holds by definition.

Lemma 3.3 (i) [a^{στ} b^σ]_i ≤ 2^{[a^{στ} b^σ]_{i+1}} ⊗ ([a^{στ}]_i ⊕ [b^σ]_i) if 1 ≤ i ≤ gσ.

(ii) [a^{στ} b^σ]_i ≤ [a^{στ}]_i if i > gσ.

Proof. If a^{στ} b^σ ≠ R_τ t^0 then equalities hold by definition. Assume that a^{στ} b^σ = R_τ t^0. Then σ = 0 and gσ = 0. Suppose that 1 ≤ i ≤ gτ + 1. Then [R_τ t^0]_i = 1 = [R_τ]_i. Suppose i = gτ + 2. Then [R_τ t^0]_i = [t^0]_0 + 1 < ω = [R_τ]_i. Suppose i > gτ + 2. Then [R_τ t^0]_i = 0 = [R_τ]_i.

The following technical lemma is of crucial importance for the proofs in the next subsection. Thus, following a suggestion of the referee, we include its proof in full detail. Intuitively the lemma can be explained as follows. Since [a^{στ} b^σ]_i occurs explicitly in the construction of [a^{στ} b^σ]_0, the number-theoretic content of [a^{στ} b^σ]_i, when measured by its norm, is bounded in simple terms of the number-theoretic content of [a^{στ} b^σ]_0. Furthermore, since [a^{στ} b^σ]_i is defined by an explicit construction involving [a^{στ}]_i, [a^{στ}]_0, [b^σ]_i, [b^σ]_0 and gσ, the number-theoretic content of [a^{στ} b^σ]_i, when measured by its norm, is bounded by the number-theoretic content of [a^{στ}]_0, [b^σ]_0 and gσ. The term gσ is used here since it bounds the height of the exponential construction in the definition of [a^{στ} b^σ]_i. In a similar vein one would like to bound the number-theoretic content of [a b (c d)]_i in simple terms of [a]_0, [b]_0, [c]_0, [d]_0 and the type levels involved, but note that this only works if a is not a recursor.

Proposition 3.3 (i) If a is not a recursor then no([a]_i) ≤ [a]_0, and if a is a recursor then no([a]_i) ≤ 2 ≤ no([a]_0) + 1.

(ii) If i ≥ 1 then no([a^{στ} b^σ]_i) ≤ F_2([a^{στ}]_0 + [b^σ]_0 + gσ).

(iii) If i ≥ 1 then no([a b (c d)]_i) ≤ F_3([a]_0 + [b]_0 + [c]_0 + [d]_0 + gσ + 2·gφ + gρ + 2), where b is of type σ, d is of type ρ and c d is of type φ.

Proof. Ad 1. By induction on the length of a. Let a_i := [a]_i. If a is a constant different from R_τ then [a]_i ≤ [a]_0. For R_τ one has by definition no([R_τ]_i) ≤ no(ω) = 2 ≤ no([R_τ]_0) + 1.

Now assume that a = b^{στ} c^σ. Let b_i := [b]_i and c_i := [c]_i. If b is a recursor then a_i ≤ c_0 + 1 = a_0. Assume that b is not a recursor. We may assume i > 0.

Case 1. i > gσ.

Then the induction hypothesis and assertion 2 of Proposition 3.2 yield

a_i = b_i, hence no(a_i) = no(b_i) ≤ b_0 ≤ ψ(ω ⊗ a_1 ⊕ b_0 ⊕ c_0 + gσ) = a_0.

Case 2. i ≤ gσ.

Assertion 4 of Corollary 3.1 yields a_1 = 2^{a_2} ⊗ (b_1 ⊕ c_1) ≥ b_1 > 0. Assertion 7 of Proposition 3.2 then yields a_0 = ψ(ω ⊗ a_1 ⊕ b_0 ⊕ c_0 + gσ) ≥ Ś(no(ω ⊗ a_1 ⊕ b_0 ⊕ c_0 + gσ)) > 2^{gσ}·no(a_1). It hence suffices to prove no(a_{j+1}) ≤ 2^j·no(a_1) for 1 ≤ j < gσ.

Ad 2. Let a_i := [a^{στ}]_i and b_i := [b^σ]_i. Assume first that a^{στ} is a recursor. Then gσ = 0 and no([a^{στ} b^σ]_i) ≤ b_0 + 1 ≤ F_2(no(a_0) + no(b_0)). Now assume that a^{στ} is not a recursor. We claim

(∗) no([a^{στ} b^σ]_i) ≤ F_0^{2(gσ+1-i)}(a_0 + b_0 + 2)

for 1 ≤ i ≤ gσ. We prove (∗) by induction on gσ - i.

Case 1. i = gσ.

Then Proposition 3.1 and the just proved assertion 1 yield no([a^{στ} b^σ]_i) = no(2^{a_{i+1}} ⊗ (a_i ⊕ b_i)) ≤ 2^{no(a_{i+1})}·(no(a_i) + no(b_i)) ≤ 2^{a_0+1}·(a_0 + b_0 + 2) ≤ F_0^2(a_0 + b_0 + 2).

Case 2. 1 ≤ i < gσ.

Then the induction hypothesis and Proposition 3.1 yield no([a^{στ} b^σ]_i) = no(2^{[a^{στ} b^σ]_{i+1}} ⊗ (a_i ⊕ b_i)) ≤ 2^{F_0^{2(gσ+1-i)-2}(a_0 + b_0 + 2)}·(a_0 + b_0 + 2) = F_0^{2(gσ+1-i)-1}(a_0 + b_0 + 2)·(a_0 + b_0 + 2) ≤ F_0^{2(gσ+1-i)}(a_0 + b_0 + 2).

Hence (∗) is proved and we obtain no([a^{στ} b^σ]_i) ≤ F_0^{2(gσ+1-i)}(a_0 + b_0 + 2) ≤ F_2(a_0 + b_0 + gσ) for 1 ≤ i ≤ gσ. The proof of assertion 2 is now completed by observing that no([a^{στ} b^σ]_i) = no(a_i) ≤ a_0 + 1 ≤ F_2(a_0 + b_0 + gσ) for i > gσ. In the proof we have used implicitly the following properties of the functions F_n which are easily proved: F_0(F_0(x))·(x + x + 2) ≤ F_0(F_0(x + 1)), F_0^{y+2}(x)·x ≤ F_0^{y+3}(x) and F_0^{2y}(x + 2) ≤ F_2(x + y).

Ad 3. Let a_i = [a]_i, b_i = [b]_i, c_i = [c′]_i and d_i = [d′]_i. We claim
(∗∗) no([ab(c′d′)]_i) ≤ F_2^{2(g+1−i)}(a_0 + b_0 + c_0 + d_0 + g + g′ + 2)
for 1 ≤ i ≤ g. We prove (∗∗) by induction on g − i.
Case 1. i = g. Then Proposition 1 and the just proved assertion 2 yield
no([ab(c′d′)]_i) = no(2^{[ab]_{i+1}} ∗ ([ab]_i ⊕ [c′d′]_i)) ≤ 2^{no([ab]_{i+1})}·(no([ab]_i) + no([c′d′]_i)) ≤ 2^{F_2(a_0+b_0+g)}·(F_2(a_0 + b_0 + g) + F_2(c_0 + d_0 + g′)) ≤ F_2^2(a_0 + b_0 + c_0 + d_0 + g + g′ + 2).
Case 2. 1 ≤ i < g. Then Proposition 1 and the induction hypothesis yield
no([ab(c′d′)]_i) = no(2^{[ab(c′d′)]_{i+1}} ∗ ([ab]_i ⊕ [c′d′]_i)) ≤ 2^{no([ab(c′d′)]_{i+1})}·(no([ab]_i) + no([c′d′]_i)) ≤ 2^{F_2^{2(g+1−i)−2}(a_0+b_0+c_0+d_0+g+g′+2)}·(F_2(a_0 + b_0 + g) + F_2(c_0 + d_0 + g′)) ≤ F_2^{2(g+1−i)}(a_0 + b_0 + c_0 + d_0 + g + g′ + 2).
Assertion (∗∗) then yields no([ab(c′d′)]_i) ≤ F_2^{2(g+1−i)}(a_0 + b_0 + c_0 + d_0 + g + g′ + 2) ≤ F_3(a_0 + b_0 + c_0 + d_0 + g + 2g + g′ + 2) for 1 ≤ i ≤ g. For i > g we obtain no([ab(c′d′)]_i) ≤ no([ab]_i) ≤ F_2(a_0 + b_0 + g) ≤ F_3(a_0 + b_0 + c_0 + d_0 + g + 2g + g′ + 2).
In the proof of this assertion we have implicitly used the following properties of the functions F_n, which are easily proved: 2·F_2(x) ≤ F_2(x + 1), F_2(x) + 1 ≤ F_2(x + 1), F_2^{2y}(x + 2) ≤ F_3(x + 2y + 2).

Definition 3.6 [a] < [b] :⇔ [a]_0 < [b]_0 & (∀i ≤ g)[a]_i ≤ [b]_i.

Lemma 3.4 [a] < [K ab].
Proof. As in [39] we see [a]_i ≤ [K ab]_i for all i ≥ 1 by using Lemma 2. For i = 0 we obtain
[K ab]_0 = ϑ(ω·[K ab]_1 ⊕ [K a]_0 ⊕ [b]_0 + g) > [K a]_0 = ϑ(ω·[K a]_1 ⊕ [K]_0 ⊕ [a]_0 + g) > [a]_0.

Lemma 3.5 [ac(bc)] < [S abc].
Proof. The corresponding proof given in [39] yields
[ac(bc)]_1 ⊕ [ac]_1 ⊕ [bc]_1 + 2 ≤ [S abc]_1
and [ac(bc)]_i ≤ [S abc]_i for 1 ≤ i ≤ g. We prove additionally
[ac(bc)]_0 < [S abc]_0.
Assertion 5 of Proposition 2 yields
[ac(bc)]_0 = ϑ(ω·[ac(bc)]_1 ⊕ [ac]_0 ⊕ [bc]_0 + g) ≤ ϑ(ω·([ac(bc)]_1 ⊕ [ac]_1 ⊕ [bc]_1) ⊕ [a]_0 ⊕ [c]_0 ⊕ [b]_0 ⊕ [c]_0 + 2g + g) =: α.
Assertion 3 of Proposition 3 and assertion 6 of Proposition 2 yield
α < ϑ(ω·[S abc]_1 ⊕ [a]_0 ⊕ [b]_0 ⊕ [c]_0 + g + g) ≤ [S abc]_0.
In this proof we have implicitly used that 6·F_3(2x) + 2x ≤ F_5(x + 100) = Φ(x) holds for all x < ω. This majorization property of the functions F_n can be verified easily.

Lemma 3.6 [a] < [D 0ab].
Lemma 3.7 [b] < [D (SUCC t⁰)ab].
Lemma 3.8 [b] < [R 0a⁰b].
Proof. Lemma 2 yields [b]_i ≤ [R 0a⁰b]_i for i ≥ 1. Furthermore we have
[b]_0 < ϑ(ω·[R 0a⁰b]_1 ⊕ [R 0a⁰]_0 ⊕ [b]_0 + g) = [R 0a⁰b]_0.

Lemma 3.9 [a⁰t⁰(R t⁰a⁰b)] < [R (SUCC t⁰)a⁰b].
Proof. Similarly as in [39] we get
[a⁰t⁰(R t⁰a⁰b)]_i ⊕ [R t⁰a⁰b]_i ⊕ [a⁰]_i + 1 ≤ [R (SUCC t⁰)a⁰b]_i
and [R t⁰a⁰]_i < [R (SUCC t⁰)a⁰]_i for all i with 0 < i. For a better understanding of the rôle of the exponentiation function in the proof we would like to mention explicitly that for i = g + 1 we have
[a⁰t⁰(R t⁰a⁰b)]_i ⊕ [R t⁰a⁰b]_i ⊕ [a⁰]_i + 1
= 2^{[t⁰]_0+1} ∗ (1 ⊕ [a⁰]_i) ⊕ [a⁰]_i ⊕ [a⁰]_i + 1
≤ 2^{[t⁰]_0+1+1} ∗ (1 ⊕ [a⁰]_i) = [R (SUCC t⁰)a⁰b]_i.
We now prove
[a⁰t⁰(R t⁰a⁰b)]_0 < [R (SUCC t⁰)a⁰b]_0.
Proposition 2 and Proposition 3 yield
[R t⁰a⁰]_0 ⊕ [a⁰]_0 ⊕ [t⁰]_0 = ϑ(ω·[R t⁰a⁰]_1 ⊕ [R t⁰]_0 ⊕ [a⁰]_0 + g + 1) ⊕ [a⁰]_0 ⊕ [t⁰]_0 ≤ ϑ(ω·[R t⁰a⁰]_1 ⊕ [R t⁰]_0 ⊕ [a⁰]_0 ⊕ [a⁰]_0 ⊕ [t⁰]_0 + g + 1) < ϑ(ω·[R (SUCC t⁰)a⁰]_1 ⊕ [R (SUCC t⁰)]_0 ⊕ [a⁰]_0 + g + 1) = [R (SUCC t⁰)a⁰]_0.
Hence, again by Proposition 2 and Proposition 3, we obtain
[a⁰t⁰(R t⁰a⁰b)]_0 = ϑ(ω·[a⁰t⁰(R t⁰a⁰b)]_1 ⊕ [a⁰t⁰]_0 ⊕ [R t⁰a⁰b]_0 + g) = ϑ(ω·[a⁰t⁰(R t⁰a⁰b)]_1 ⊕ ϑ(ω·[a⁰t⁰]_1 ⊕ [a⁰]_0 ⊕ [t⁰]_0 + g_0) ⊕ ϑ(ω·[R t⁰a⁰b]_1 ⊕ [R t⁰a⁰]_0 ⊕ [b]_0 + g) + g) ≤ ϑ(ω·([a⁰t⁰(R t⁰a⁰b)]_1 ⊕ [R t⁰a⁰b]_1 ⊕ [a⁰t⁰]_1) ⊕ [a⁰]_0 ⊕ [t⁰]_0 ⊕ [R t⁰a⁰]_0 ⊕ [b]_0 + g·2) < ϑ(ω·[R (SUCC t⁰)a⁰b]_1 ⊕ [R (SUCC t⁰)a⁰]_0 ⊕ [b]_0 + g) = [R (SUCC t⁰)a⁰b]_0.
The crucial point in the proof just given consists in resolving the nested recursion of the ϑ-terms involved into an unnested one of low ordinal complexity. For this purpose assertion 5 of Proposition 2 is of fundamental importance.

Lemma 3.10 [b] < [c] ⇒ [ab] < [ac].
Proof. Assume first that a is not a recursor. The proof given in [39] yields [ab]_i ≤ [ac]_i for all i such that 1 ≤ i ≤ g. We prove
[ab]_0 < [ac]_0.
Indeed, by assumption, Proposition 2 and Proposition 3 we have
[ab]_0 = ϑ(ω·[ab]_1 ⊕ [a]_0 ⊕ [b]_0 + g) < ϑ(ω·[ac]_1 ⊕ [a]_0 ⊕ [c]_0 + g) = [ac]_0.
Assume now that a = R. [b]_0 < [c]_0 yields [Rb]_0 < [Rc]_0. Let i > 0. If i ≠ g + 2 then [Rb]_i = [Rc]_i. Finally, for the critical case, suppose i = g + 2. Then [b]_0 < [c]_0 yields [Rb]_i = [b]_0 + 1 < [c]_0 + 1 = [Rc]_i.

Lemma 3.11 Assume that b is not a recursor. Then [a] < [b] ⇒ [ac] < [bc].
Proof. Assume first that a is not a recursor. The proof given in [39] yields [ac]_i ≤ [bc]_i for all i with 0 < i ≤ g. We prove [ac]_0 < [bc]_0. Indeed, by assumption, Proposition 2 and Proposition 3 we have [ac]_0 = ϑ(ω·[ac]_1 ⊕ [a]_0 ⊕ [c]_0 + g) < ϑ(ω·[bc]_1 ⊕ [b]_0 ⊕ [c]_0 + g) = [bc]_0. Suppose that a = R. Then [ac]_0 = [c]_0 + 1 < ϑ(ω·[bc]_1 ⊕ [b]_0 ⊕ [c]_0 + g) = [bc]_0 since b is not a recursor. By assumption, Lemma 2 and Lemma 3 we furthermore have [ac]_i ≤ [a]_i ≤ [b]_i ≤ [bc]_i for 1 ≤ i ≤ g.

Theorem 2 If a →_1 b then [b] < [a], hence [b]_0 < [a]_0.
Proof. By induction on the definition of →_1. The basic steps are dealt with by Lemmata 4, 5, 6, 7, 8 and 9. Lemma 10 and 11 yield closure under application.

For convenience we restate in some more detail several (well known) corollaries of Theorem 1 which are already mentioned in the introduction. Let ω_1 := ω and ω_{n+1} := ω^{ω_n}. Similarly let 2_n(α) be defined by 2_0(α) := α and 2_{n+1}(α) := 2^{2_n(α)}. Let dp(a) denote the number of occurrences of constants in a. Thus dp(0) = dp(SUCC) = dp(K) = dp(S) = dp(D) = dp(R) = 1 and dp(ab) = dp(a) + dp(b). Let G(a) := g if a is a constant of type level g and set G(ab) := max{G(a), G(b)} otherwise. Then G(a) is the maximal type level in a. Let RG(a) := g + 2 if a is a recursor constant R whose type has level g, let RG(a) := 0 if a is a constant which is not a recursor, and set RG(ab) := max{RG(a), RG(b)} otherwise. Then RG(a) is the maximal type level of a recursor occurring in a. The derivation lengths function for T is defined by
D_T(m) := max{k : (∃a_0, ..., a_k)[a_0 →_1 ... →_1 a_k & G(a_0) ≤ m & dp(a_0) ≤ m]}.
Let T_n be the set of terms a such that RG(a) ≤ n + 2. The derivation lengths function for T_n is defined by
D_{T_n}(m) := max{k : (∃a_0, ..., a_k)[a_0 →_1 ... →_1 a_k & RG(a_0) ≤ n + 2, G(a_0) ≤ m, dp(a_0) ≤ m]}.
For a functional a of type one the derivation lengths function for a is defined by
D_a(m) := max{k : (∃a_0, ..., a_k)[am = a_0 →_1 ... →_1 a_k]}.
For a functional a of type zero the value val(a) of a is the unique natural number k such that there exist a_1, ..., a_n with a = a_1 →_1 ... →_1 a_n = k.
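The measures dp, G and RG just defined are easy to compute. The following sketch uses our own toy encoding of combinator terms (not the paper's notation): a constant is a tuple carrying its name, its type level and a recursor flag, and an application is a pair of subterms.

```python
# Illustration only: the measures dp, G and RG from the text, computed on a toy
# representation of combinator terms. Constants are ("const", name, type_level,
# is_recursor); applications are ("app", s, t). The encoding is ours.

def dp(a):
    """Number of occurrences of constants in a."""
    if a[0] == "const":
        return 1
    return dp(a[1]) + dp(a[2])

def G(a):
    """Maximal type level of a constant occurring in a."""
    if a[0] == "const":
        return a[2]
    return max(G(a[1]), G(a[2]))

def RG(a):
    """Maximal type level of a recursor occurring in a, shifted by 2 (0 if none)."""
    if a[0] == "const":
        return a[2] + 2 if a[3] else 0
    return max(RG(a[1]), RG(a[2]))

# Example: the term (R 0) SUCC with a recursor of type level 1.
R1   = ("const", "R", 1, True)
ZERO = ("const", "0", 0, False)
SUCC = ("const", "SUCC", 1, False)
term = ("app", ("app", R1, ZERO), SUCC)

assert dp(term) == 3   # three constant occurrences
assert G(term) == 1    # maximal type level
assert RG(term) == 3   # recursor of level 1, so RG = 1 + 2
```

All three measures are structural recursions, so they are primitive recursive in the size of the term, which is what the bounds in terms of dp(a), G(a) and RG(a) below rely on.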

Corollary 3.2 (i) T is strongly normalizing.
(ii) If a is of type one then val(am) ≤ ϑ(ω·[a]_1 ⊕ [a]_0 + m). If a ∈ T_n then ω·[a]_1 ⊕ [a]_0 < ω_{n+2}, and thus m ↦ val(am) is bounded by a <ω_{n+2}-descent recursive function. Furthermore D_a(m) ≤ ϑ(ω·[a]_1 ⊕ [a]_0 + m). In the case a ∈ T_n the function D_a is bounded by a <ω_{n+2}-descent recursive function, and hence m ↦ val(am) is a <ω_{n+2}-descent recursive function. In the case a ∈ T_0 the functions m ↦ val(am) and D_a are primitive recursive. In the case a ∈ T_1 the functions m ↦ val(am) and D_a are multiple recursive.
(iii) D_{T_n}(m) ≤ ϑ(ω_{n+2} + m). Hence D_{T_n} is ω_{n+2}-recursive.
(iv) D_{T_0} is primitive recursive in the Ackermann function.
(v) D_T is an ε_0-recursive function.

Proof. The corollary follows easily from Theorem 1 and, for example, [14]. The reader may verify first the following auxiliary statements by induction on the depth of a.
(i) i > G(a) ⇒ [a]_i = 0.
(ii) i > RG(a) ⇒ [a]_i ≤ 2_{G(a)+1−i}(G(a) + 1 − i + 2·dp(a)).
(iii) RG(a) ≥ i ≥ 1 ⇒ [a]_i ≤ 2_{RG(a)−i}(ω·2_{G(a)+1−i}(G(a) + 1 − i + 2·dp(a))).
(iv) [a]_0 ≤ ϑ(ω·2_{RG(a)−1}(ω·2_{G(a)+1}(G(a) + 1 + 4·dp(a)))).
Furthermore 2_{n+1}(ω) = ω_{n+2}.

It has been shown, for example, by Parsons [30] that T_n suffices for a functional interpretation of IΣ_{n+1}. Theorem 1 shows that the weak normalization proof for T_n can be carried out in PRA + PRWO(ω_{n+2}). This yields an alternative proof of the following folklore result.
Corollary 3.3 (i) If f : IN → IN is provably recursive in IΣ_{n+1} then f is <ω_{n+2}-descent recursive.
(ii) PRA + PRWO(ω_{n+2}) ⊢ Π^0_2-Refl(IΣ_{n+1}), hence PRA + PRWO(ω_{n+2}) ⊢ Con(IΣ_{n+1}).
(iii) PRA + PRWO(ε_0) ⊢ Π^0_2-Refl(PA), hence PRA + PRWO(ε_0) ⊢ Con(PA).

3.7 Treatment of T in the λ-formulation
As far as we can see Theorem 1 extends easily to variants of T formulated with λ-terms. Indeed, let λ-abstraction be defined as, for example, in [39]. It can easily be verified that Gödel's T based on λ-terms can be embedded by this folklore translation into Gödel's T based on combinators in such a way that reductions except applications of the ξ-rule are respected. For dealing with the ξ-rule, i.e. a →_1 b ⇒ λx.a →_1 λx.b, it is convenient to employ Howard's 1970 non-unique assignment [24] of ordinal numbers to terms, which is a slight variant of [·]_0, for showing that under the assumption a →_1 b one can effectively assign a natural number to b which is smaller than the natural number assigned to a. Thus all results for our formulation of T carry over to T in the λ-formulation. This also extends to variants of T where η-reductions (cf., for example, [4]) are allowed, since [λx.fx]_0 > [f]_0 holds for subterm reasons.

4 A simple characterization of the provably recursive functions of PA
Abstract
Inspired by Buchholz' technique of operator controlled derivations (which was introduced for simplifying Pohlers' method of local predicativity), a straightforward, perspicuous and conceptually simple method for characterizing the provably recursive functions of Peano arithmetic in terms of Kreisel's ordinal recursive functions is given. Since only amazingly little proof and hierarchy theory is used, the chapter is intended to make the field of ordinally informative proof theory accessible even to non-proof-theorists whose knowledge of mathematical logic does not exceed a first graduate-level course.


4.1 Preliminaries
The set of non-logical constants of PA includes the set of function symbols for primitive recursive functions and the relation symbol =. (In the sequel 0 denotes the constant symbol for zero and S denotes the successor function symbol.) The logical operations include ∧, ∨, ∀, ∃. We have an infinite list of variables x_0, x_1, .... The set of PA-terms (which are denoted in the sequel by r, s, t, ...) is the smallest set which contains the variables and constants and is closed under function application, i.e. if f is a k-ary function symbol and t_1, ..., t_k are terms, so is ft_1...t_k. If t(x) is a PA-term with FV(t) ⊆ {x} then t^IN denotes the represented function in the standard structure IN. The set of PA-formulas (which are denoted in the sequel by A, B, F) is the smallest set which includes s = t, ¬s = t (prime formulas) and is closed under conjunction, disjunction and quantification. The notation ¬A, for A arbitrary, is an abbreviation for the formula obtained via the de Morgan laws; A → B abbreviates ¬A ∨ B. We denote finite sets of PA-formulas by Γ, Δ, .... As usual Γ, A stands for Γ ∪ {A} and Γ, Δ stands for Γ ∪ Δ.
The formal system PA is presented in a Tait calculus. PA includes the logical axioms Γ, ¬A, A, the equality axioms Γ, t = t and Γ, ¬t = s, ¬A(t), A(s), the successor function axioms Γ, ∀x(¬0 = Sx) and Γ, ∀x∀y(Sx = Sy → x = y), the defining equations for primitive recursive function symbols (cf. [Pohlers 1989]) and the induction scheme Γ, A(0) ∧ ∀x(A(x) → A(Sx)) → ∀xA(x).

The derivation relation ⊢ Γ for PA is defined as follows:
(Ax) ⊢ Γ if Γ is an axiom of PA.
(∧) ⊢ Γ, A_i for all i ∈ {0, 1} implies ⊢ Γ, A_0 ∧ A_1.
(∨) ⊢ Γ, A_i for some i ∈ {0, 1} implies ⊢ Γ, A_0 ∨ A_1.
(∀) ⊢ Γ, A(y) implies ⊢ Γ, ∀xA(x) if y does not occur in Γ, ∀xA(x).
(∃) ⊢ Γ, A(t) implies ⊢ Γ, ∃xA(x).
(cut) ⊢ Γ, A and ⊢ Γ, ¬A imply ⊢ Γ.

Definition 4.1 (Rank of a formula)
• rk(A) := 0, if A is a prime formula
• rk(A ∨ B) := rk(A ∧ B) := max{rk(A), rk(B)} + 1
• rk(∀xA(x)) := rk(∃xA(x)) := rk(A) + 1

In this chapter we consider only ordinals less than ε_0. These ordinals are denoted by α, β, γ, δ, .... Finite ordinals are denoted by k, m, n, .... ⊕ denotes the natural sum of ordinals (cf. [Schütte 1977] or [Pohlers 1989]). For each α < ε_0 let N(α) be the number of occurrences of ω in the Cantor normal form representation of α. Thus N(0) = 0, and N(α) = n + N(α_1) + ... + N(α_n) if α = ω^{α_1} + ... + ω^{α_n} with α_1 ≥ ... ≥ α_n. N satisfies the following conditions:
(N1) N(0) = 0
(N2) N(α ⊕ β) = N(α) + N(β)
(N3) N(ω^α) = N(α) + 1
(N4) card{α < ε_0 : N(α) ≤ n} < ω for every n < ω
Abbreviation: Nα := N(α).
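The norm N is easy to compute on a concrete coding of Cantor normal forms. In the following sketch (our own list encoding, assuming exponents are kept in non-increasing order) an ordinal below ε_0 is simply the list of exponents of its normal form:

```python
# Our own encoding of ordinals below ε₀: an ordinal is the list of the
# exponents in its Cantor normal form, e.g. [] = 0, [[]] = ω⁰ = 1,
# [[[]]] = ω, [[[]], []] = ω + 1. Exponents are themselves such lists.

def N(alpha):
    # N(0) = 0 and N(ω^{a₁} + ... + ω^{aₙ}) = n + N(a₁) + ... + N(aₙ)
    return len(alpha) + sum(N(e) for e in alpha)

zero  = []
one   = [zero]       # ω⁰
omega = [one]        # ω¹
assert N(zero) == 0
assert N(one) == 1
assert N(omega) == 2
assert N([omega]) == N(omega) + 1          # (N3): N(ω^α) = N(α) + 1
assert N(omega + one) == N(omega) + N(one) # (N2); list concatenation is ⊕ here
```

In this encoding list concatenation (followed by re-sorting of the exponents, which is a no-op in the example) realizes the natural sum, so properties (N2) and (N3) can be checked mechanically; (N4) holds because only finitely many normal forms use a bounded number of ω's.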

Definition 4.2 Let Φ(x) := 3x + 1 and let f^{(2)}(x) := f(f(x)) denote iteration.
F_α(x) := max({2^x} ∪ {F_γ^{(2)}(x) | γ < α & Nγ ≤ Φ(Nα + x)})
This expression is well defined due to (N4).
Remark: It follows immediately from [Buchholz, Cichon and Weiermann (1994)] that this hierarchy is equivalent to the ordinal recursive functions.
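For finite indices the hierarchy of Definition 4.2 can be evaluated literally, since N(m) = m for m < ω. The sketch below is our own toy code (reading the base clause as 2^x, as reconstructed above); it is only feasible for very small arguments because the values explode:

```python
# Toy evaluation of F_n(x) for *finite* indices n, following Definition 4.2
# literally: F_n(x) = max({2^x} ∪ {F_m(F_m(x)) : m < n, Nm ≤ Φ(Nn + x)}).
# For m, n < ω the norm N is the identity and Φ(z) = 3z + 1, so the side
# condition can be tested directly. Beware: double exponential growth in n.

def F(n, x):
    candidates = [2 ** x]
    for m in range(n):
        if m <= 3 * (n + x) + 1:     # Nm ≤ Φ(Nn + x)
            candidates.append(F(m, F(m, x)))
    return max(candidates)

assert F(0, 3) == 8     # F_0(x) = 2^x
assert F(1, 3) == 256   # F_1(3) = F_0(F_0(3)) = 2^(2^3)
```

Already F_2 iterates F_1 twice and thus towers exponentials, which matches the intent of the hierarchy: the finite levels exhaust the Grzegorczyk-style growth rates, and transfinite indices go far beyond.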

Definition 4.3 Throughout the whole chapter we assume F : IN → IN to be a monotone function. Given ℓ ∈ IN we denote by F[ℓ] the function defined by
F[ℓ] : IN → IN; x ↦ F(max{x, ℓ}).
We write F[k_1, ..., k_n] instead of F[max{k_1, ..., k_n}] and use the abbreviation F ≤ F′ :⇔ ∀x ∈ IN F(x) ≤ F′(x).
Remark: F[n, m] = F[n][m] and F[n][n] = F[n].
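The parameter-shift operator of Definition 4.3 is a one-liner; the sketch below (the name `shift` and the letter ℓ for the shifted parameter are our choices) also checks the remark F[n, m] = F[n][m] and F[n][n] = F[n]:

```python
# Sketch of the parameter-shift operator F[ℓ](x) = F(max(x, ℓ)) for a
# monotone F : IN -> IN.

def shift(F, l):
    return lambda x: F(max(x, l))

F = lambda x: 2 ** x            # any monotone function will do
assert shift(F, 3)(1) == F(3)   # small arguments are lifted to ℓ
assert shift(F, 3)(5) == F(5)   # large arguments pass through
# The remark: shifting twice equals shifting by the maximum once,
# and shifting twice by the same ℓ is idempotent.
assert shift(shift(F, 2), 4)(1) == shift(F, max(2, 4))(1)
assert shift(shift(F, 3), 3)(1) == shift(F, 3)(1)
```

Both identities in the remark reduce to max being associative, commutative and idempotent, which is why they hold pointwise for every monotone F.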

Lemma 4.1 (i) x < F_α(x).
(ii) F_α^{(2)}(x) ≤ F_{α+1}(x).
(iii) β < α & Nβ ≤ Φ(Nα + x) ⇒ F_β^{(2)}(x) ≤ F_α(x).
(iv) For every n-ary primitive recursive function f there exists a p ∈ IN such that ∀k⃗ ∈ IN^n f(k⃗) < F_p(max k⃗).
(v) F_γ[ℓ] ≤ F_{γ+1}[ℓ].
(vi) k < F_γ(ℓ) ⇒ F_γ[ℓ][k] ≤ F_{γ+1}[ℓ].
(vii) Nα < F_γ(ℓ) & Nβ < F_γ(ℓ) ⇒ N(α + β) < F_{γ+1}(ℓ).
(viii) α_0 < α & Nα_0 < F_γ(ℓ) ⇒ F_{γ⊕α_0+2}[ℓ] ≤ F_{γ⊕α+1}[ℓ].
Proof. (i)-(vii) are simple. In (vi) for example k < F_γ(ℓ) yields F_γ[ℓ][k](x) ≤ F_γ^{(2)}(max{ℓ, x}) ≤ F_{γ+1}[ℓ](x) by (ii).
(viii) If α = α_0 + n for some n ≥ 1 the claim follows from (v). So we can assume
(1) γ ⊕ α_0 + 2 < γ ⊕ α
and the premise Nα_0 < F_γ(ℓ) implies
(2) N(γ ⊕ α_0 + 2) ≤ Nγ + F_γ(ℓ) + 1 ≤ Φ(N(γ ⊕ α) + F_{γ⊕α}(ℓ)).
For all x ∈ IN we obtain from (1) and (2) by part (iii) and (ii)
F_{γ⊕α_0+2}[ℓ](x) = F_{γ⊕α_0+2}(max{ℓ, x}) <_{(iii)} F_{γ⊕α}^{(2)}(max{ℓ, x}) ≤_{(ii)} F_{γ⊕α+1}[ℓ](x).

Definition 4.4 (F-controlled derivations)
F ⊢^α_r Γ holds iff Nα < F(0) and one of the following cases holds:
(Ax) Γ ∩ Δ(IN) ≠ ∅
(∨) A_0 ∨ A_1 ∈ Γ & ∃i ∈ {0, 1} ∃α_0 < α F ⊢^{α_0}_r Γ, A_i
(∧) A_0 ∧ A_1 ∈ Γ & ∀i ∈ {0, 1} ∃α_0 < α F ⊢^{α_0}_r Γ, A_i
(∃) ∃xA(x) ∈ Γ & ∃n < F(0) ∃α_0 < α F ⊢^{α_0}_r Γ, A(n)
(∀) ∀xA(x) ∈ Γ & ∀n ∈ IN ∃α_n < α F[n] ⊢^{α_n}_r Γ, A(n)
(cut) rk(A) < r & ∃α_0 < α F ⊢^{α_0}_r Γ, A & F ⊢^{α_0}_r Γ, ¬A
• Δ(IN) := {A | A is a prime formula and IN |= A}
• F ⊢^α_r Γ, (¬)A :⇔ F ⊢^α_r Γ, A & F ⊢^α_r Γ, ¬A
So the F-controlled derivations are just like usual PA-derivations, but with the ω-rule and with some information about the ∃-witnesses and the derivation length. The first is the essential aid for collapsing (Lemma 4.6), while the latter is used to apply Lemma 4.1 (viii) in the cut-elimination procedure 4.5.

Lemma 4.2 (Monotonicity)
For r ≤ s & α ≤ β & Nβ < F(0) & F ≤ F′:
F ⊢^α_r Γ ⇒ F′ ⊢^β_s Γ, Δ.

Lemma 4.3 (Inversion)
(i) F ⊢^α_r Γ, A & ¬A ∈ Δ(IN) ⇒ F ⊢^α_r Γ
(ii) F ⊢^α_r Γ, A_0 ∧ A_1 ⇒ ∀i ∈ {0, 1} F ⊢^α_r Γ, A_i
(iii) F ⊢^α_r Γ, ∀xA(x) ⇒ ∀n ∈ IN F[n] ⊢^α_r Γ, A(n)

Lemma 4.4 (Reduction) Let A be a formula A_0 ∨ A_1 or ∃xB(x) of rank ≤ r.
F_γ[ℓ] ⊢^α_r Γ, ¬A & F_γ[ℓ] ⊢^β_r Δ, A ⇒ F_{γ+1}[ℓ] ⊢^{α+β}_r Γ, Δ
Proof by induction on β. The interesting case is that A ≡ ∃xB(x) is the main formula of the last deduction step. For some k < F_γ(ℓ) and β_0 < β we have the premise
F_γ[ℓ] ⊢^{β_0}_r Δ, A, B(k).
The induction hypothesis yields
F_{γ+1}[ℓ] ⊢^{α+β_0}_r Γ, Δ, B(k).
By the inversion lemma 4.3 (iii) and the fact F_γ[ℓ][k] ≤ F_{γ+1}[ℓ] (Lemma 4.1 (vi)) we can transform the first derivation into
F_{γ+1}[ℓ] ⊢^α_r Γ, ¬B(k).
N(α + β) < F_{γ+1}(ℓ) is true by Lemma 4.1 (vii). Furthermore rk(B(k)) < rk(∃xB(x)), so the claim can be obtained by a cut (using monotonicity before).
The underlying idea of the reduction lemma is as follows: if F ⊢^α_r Γ, ¬A and G ⊢^β_r Δ, A then F∘G ⊢^{α+β}_r Γ, Δ for suitably nice F and G. Thus the composition of bounding functions yields the witness bounding during the cut reduction.

Lemma 4.5 (Cut-elimination) F_γ[ℓ] ⊢^α_{r+1} Γ ⇒ F_{γ⊕α+1}[ℓ] ⊢^{2^α}_r Γ
Proof by induction on α. In the interesting case of a cut we have for some α_0 < α
F_γ[ℓ] ⊢^{α_0}_{r+1} Γ, (¬)A
for some A of rank ≤ r. The induction hypothesis yields
F_{γ⊕α_0+1}[ℓ] ⊢^{2^{α_0}}_r Γ, (¬)A.
By reduction 4.4 (or Lemma 4.3 (i) for prime formulas (¬)A) we obtain
F_{γ⊕α_0+2}[ℓ] ⊢^{2^{α_0}·2}_r Γ
and the claim follows from Lemma 4.1 (vii), (viii), 2^{α_0}·2 ≤ 2^α and monotonicity.
The motivation behind the cut elimination theorem is as follows. For a given suitably nice monotone function F : IN → IN let
F^α(x) := max({F(x) + 1} ∪ {F^β(F(x)) : β < α & Nβ ≤ Nα + x}).
If F ⊢^α_{r+1} Γ then F^α ⊢^{2^α}_r Γ. Thus the α-th iterate of F (stemming from α applications of the reduction lemma) yields witness bounding for the cut elimination. This is mirrored by the following observation: (F_γ)^α(x) ≤ F_{α⊕γ+1}(x).

Lemma 4.6 (Collapsing)
F ⊢^α_0 ∃yA(y) & rk(A) = 0 ⇒ ∃n < F(0) IN |= A(n)
Proof by induction on α. The last step was (∃), so there is an n < F(0) and an α_0 < α with
F ⊢^{α_0}_0 A(n), ∃yA(y).
If IN |= A(n) we are done, otherwise we can use Lemma 4.3 (i) and the induction hypothesis to prove the claim.

Definition 4.5 A ∼ A′ :⇔ there are a PA-formula B, pairwise distinct variables x_1, ..., x_n and closed PA-terms t_1, s_1, ..., t_n, s_n such that t_i^IN = s_i^IN (i = 1, ..., n) and A ≡ B_{x_1,...,x_n}(t_1, ..., t_n), A′ ≡ B_{x_1,...,x_n}(s_1, ..., s_n).

Lemma 4.7 (Tautology and embedding of mathematical axioms)
(i) A ∼ A′ ⇒ F_k ⊢^k_0 A, ¬A′ where k := 2·rk(A)
(ii) For every formula A with FV(A) ⊆ {x} there is a k ∈ IN such that
F_k ⊢^{ω+3}_0 A(0) ∧ ∀x(A(x) → A(Sx)) → ∀xA(x)
(iii) For every other mathematical axiom A of PA there is a k < ω such that F_k ⊢^k_0 A.
Proof. (i) as usual, using Nk < F_k(0). (iii) If Γ contains only true Π_1-formulas and ⊢^k_0 Γ denotes a usual cut-free PA-derivation (without F-controlling), we can easily conclude F_k ⊢^k_0 Γ by induction on k. Since the result ⊢^k_0 A for mathematical axioms A except induction is folklore, the claim follows.
(ii) Let k := rk(A(0)). We show by induction on n
(∗) F_{2k}[n] ⊢^{2k+2n}_0 ¬A(0), ¬∀x(A(x) → A(Sx)), A(n).
n = 0: The tautology lemma 4.7 (i) yields F_{2k} ⊢^{2k}_0 ¬A(0), A(0). So the claim follows by monotonicity.
n → n + 1: By the tautology lemma we have F_{2k} ⊢^{2k}_0 ¬A(Sn), A(n + 1). Connecting this with the derivation given by the induction hypothesis by (∧) yields
F_{2k}[n + 1] ⊢^{2k+2n+1}_0 ¬A(0), ¬∀x(A(x) → A(Sx)), A(n) ∧ ¬A(Sn), A(n + 1).
So (∗) follows by an application of (∃). The claim follows from (∗) by an application of (∀) and three applications of (∨).


Lemma 4.8 (Embedding) For every Γ satisfying FV(Γ) ⊆ {x_1, ..., x_m} and PA ⊢ Γ there exist γ < ω·2, α < ω·2 and r < ω such that
∀n⃗ ∈ IN^m F_γ[n⃗] ⊢^α_r Γ(n⃗).
Proof by induction on the derivation of Γ.
1. A, ¬A ∈ Γ and Γ is an axiom. The tautology lemma yields the claim.
2. ∀xA(x, x⃗) ∈ Γ(x⃗) and Γ results from Γ(x⃗), A(y, x⃗), where y ∉ {x_1, ..., x_m} holds true. By induction hypothesis there are γ < ω·2, α < ω·2, r < ω such that
∀n ∈ IN ∀n⃗ ∈ IN^m F_γ[n, n⃗] ⊢^α_r Γ(n⃗), A(n, n⃗).
We obtain the claim by an application of (∀) since F_γ[n, n⃗] = F_γ[n⃗][n].
3. ∃xA(x, x⃗) ∈ Γ(x⃗) and Γ results from Γ(x⃗), A(t(x⃗), x⃗). We can assume that x ∉ {x_1, ..., x_m} and FV(t) ⊆ {x_1, ..., x_m}. By induction hypothesis there are γ < ω·2, α_0 < ω·2 and r_0 < ω such that
(3) ∀n⃗ ∈ IN^m F_γ[n⃗] ⊢^{α_0}_{r_0} Γ(n⃗), A(t(n⃗), n⃗).
The tautology lemma 4.7 (i) yields a k < ω such that
(4) ∀n⃗ ∈ IN^m F_k ⊢^k_0 A(t^IN(n⃗), n⃗), ¬A(t(n⃗), n⃗).
Since λx⃗.t^IN(x⃗) is a primitive recursive function, due to Lemma 4.1 (iv) there is a p < ω satisfying ∀x⃗ ∈ IN^m t^IN(x⃗) < F_p(max x⃗). Choosing p > k implies F_k ≤ F_k[n⃗] ≤ F_p[n⃗] with the aid of Lemma 4.1 (iii). Letting α := max{α_0, k} and r > max{r_0, rk(A)} we obtain
∀n⃗ ∈ IN^m F_{γ⊕p}[n⃗] ⊢^α_r Γ(n⃗), A(t^IN(n⃗), n⃗), (¬)A(t(n⃗), n⃗)
from (3) and (4) by monotonicity 4.2. Applying a cut we get
∀n⃗ ∈ IN^m F_{γ⊕p}[n⃗] ⊢^{α+1}_r Γ(n⃗), A(t^IN(n⃗), n⃗)
and (∃) proves the claim since t^IN(n⃗) < F_{γ⊕p}(max n⃗).
4. Γ results from Γ, A and Γ, ¬A by a cut. By induction hypothesis there are γ < ω·2, α < ω·2 and r < ω such that
∀n⃗ ∈ IN^m F_γ[n⃗] ⊢^α_r Γ(n⃗), (¬)A(n⃗, 0).
By choosing r > rk(A) we obtain the claim by an application of (cut).
5. The missing cases are covered by Lemma 4.7 or are easy (rules for ∨ and ∧).

Theorem 4.1 Let A be a prime formula such that PA ⊢ ∀x∃yA(x, y) and FV(A) ⊆ {x, y}. Then there is a γ < ε_0 such that
∀x ∈ IN ∃y < F_γ(x) IN |= A(x, y).
Proof. The assertion follows from embedding 4.8, cut-elimination 4.5 and collapsing 4.6.


References

[1] P. Aczel, H. Simmons, S.S. Wainer (eds.): Proof Theory: A Selection of

Papers from the Leeds Proof Theory Meeting 1990, Cambridge University

Press (1992).

[2] T. Arai: Variations on a theme by Weiermann. The Journal of Symbolic

Logic (to appear).

[3] T. Arai: Consistency proofs via pointwise induction. Preprint, Hiroshima

1996 (submitted).

[4] H. Barendregt: The Lambda Calculus: Its Syntax and Semantics. North-

Holland.

[5] A. Beckmann and A. Weiermann: A term rewriting characterization of the

polytime functions and related complexity classes. Archive for Mathematical

Logic 36 (1996), 11-30.

[6] A. Beckmann and A. Weiermann: How to characterize the elementary recursive functions by Howard-Schütte style methods. Preprint, Münster 1996 (submitted).

[7] B. Blankertz and A. Weiermann: How to characterize provably total functions by the Buchholz operator method. Proceedings of Gödel '96. Springer Lecture Notes in Logic 6 (1996), 205-213.

[8] W. Buchholz: Proof-theoretic analysis of termination proofs. Annals of

Pure and Applied Logic 75 (1995), 57-65.

[9] W. Buchholz: A new system of proof-theoretic ordinal functions. Annals of

Pure and Applied Logic 32 (1986), 195-207.

[10] W. Buchholz, S. Feferman, W. Pohlers, W. Sieg (eds.): Iterated Inductive

Definitions and Subsystems of Analysis: Recent Proof-Theoretical Studies,

Lecture Notes in Mathematics 897. Springer, Berlin, 1981.

[11] W. Buchholz: The Ω_{μ+1}-rule. In [10], 188-233.

[12] W. Buchholz: Notation systems for infinitary derivations. Archive for

Mathematical Logic 30 5/6 (1991), 277-296.

[13] W. Buchholz: A simplified version of local predicativity. In [1], 115-148.

[14] W. Buchholz, E. A. Cichon and A. Weiermann: A uniform approach to

fundamental sequences and hierarchies. Mathematical Logic Quarterly 40

(1994), 273-286.


[15] E. A. Cichon and A. Weiermann: Term rewriting theory for the primitive

recursive functions. Annals of Pure and Applied Logic 83 (3) (1997), 199-

223.

[16] N. Dershowitz and J.-P. Jouannaud: Rewrite systems. Handbook of Theoretical Computer Science B: Formal Methods and Semantics (J. van Leeuwen, ed.), North-Holland (1990), 243-320.

[17] S. Feferman. Logics for Termination and Correctness of Functional pro-

grams, II. Logics of strength PA. In: Proof Theory. P. Aczel, H. Simmons,

S. Wainer (eds.). Cambridge University Press (1992), 197-225.

[18] W. Felscher. Berechenbarkeit: Rekursive und Programmierbare Funktionen.

Springer, 1993.

[19] H. Friedman and M. Sheard: Elementary descent recursion and proof the-

ory. Annals of Pure and Applied Logic 71 (1995), 1-45.

[20] Geupel: Terminationsbeweise bei Termersetzungssystemen. Diplomarbeit, Technische Universität Dresden, Sektion Mathematik, 1988.

[21] J.-Y. Girard: Π¹₂-logic. Part 1: Dilators. Annals of Mathematical Logic 21 (1981), 75-219.

[22] D. Hofbauer: Termination proofs by multiset path orderings imply primi-

tive recursive derivation lengths. Theoretical Computer Science 105 (1992)

129-140.

[23] D. Hofbauer: Termination Proofs and Derivation Lengths in Term Rewriting Systems. Forschungsberichte des Fachbereichs Informatik, Bericht 92-46. Technische Universität Berlin 1991.

[24] W. A. Howard: Assignment of ordinals to terms for primitive recursive

functionals of finite type. Intuitionism and Proof Theory. North-Holland,

Amsterdam 1970, 443-458.

[25] W. A. Howard: Ordinal analysis of bar recursion of type zero. Compositio

Mathematica 42 (1980), 105-119.

[26] G. Kreisel: On the interpretation of non-finitist proofs II. The Journal of

Symbolic Logic 17 (1952), 43-58.

[27] D. Leivant: Predicative recurrence in finite type. In A. Nerode and Y.V.

Matiyasevich (eds.), Logical Foundations of Computer Science. Springer

Lecture Notes in Computer Science 813 (1994), 227-239.

[28] M. Möllerfeld and A. Weiermann: A uniform approach to z"-recursion. Preprint, Münster (1996).


[29] C. Parsons: On a number-theoretic choice schema and its relation to in-

duction. Intuitionism and Proof Theory. A. Kino, J. Myhill, R. E. Vesley

(eds.). North-Holland (1970), 459-473.

[30] C. Parsons: On n-quantifier induction. The Journal of Symbolic Logic 37/3

(1972), 466-482.

[31] R. Péter: Recursive Functions. Academic Press, 1967.

[32] H. E. Rose: Subrecursion: Functions and Hierarchies. Oxford University Press (1984).

[33] W. Pohlers: Proof Theory. An Introduction. Springer Lecture Notes in

Mathematics 1407 (1989).

[34] W. Pohlers: Proof-theoretic analysis of ID_ν by the method of local predicativity. In [10], 261-357.

[35] W. Pohlers: A short course in ordinal analysis. In [1], 27-78.

[36] W. Pohlers: Pure Proof Theory: Aims, methods and results. The Bulletin

of Symbolic Logic 2/2 (1996), 159-188.

[37] M. Rathjen: Fragments of Kripke-Platek set theory with infinity. In [1],

251-273.

[38] M. Rathjen and A. Weiermann: Proof-theoretic investigations on Kruskal s

theorem. Annals of Pure and Applied Logic 60 (1993), 49-88.

[39] K. Schütte: Proof Theory. Springer, 1977.

[40] H. Simmons. The Realm of Primitive Recursion. Archive for Mathematical

Logic 27, (1988).

[41] H. Schwichtenberg: Eine Klassifikation der ε_0-rekursiven Funktionen. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 17 (1971), 61-74.

[42] H. Schwichtenberg: Elimination of higher type levels in definitions of primitive recursive functionals by means of transfinite recursion. In: Logic Colloquium '73. Rose, Shepherdson (editors). North-Holland (1975), 279-303.

[43] H. Schwichtenberg: Classifying recursive functions. Handbook of Recursion

Theory (to appear).

[44] G. Takeuti: Proof Theory. Second edition. North-Holland, Amsterdam

1987.


[45] A.S. Troelstra: Metamathematical Investigations of Intuitionistic Arith-

metic and Analysis. Second, corrected edition of LNM 344 (1973). ILLC

Prepublication Series X-93-05.

[46] A. Weiermann: A strongly uniform termination proof for Gödel's T by methods from local predicativity. Archive for Mathematical Logic 36 (3) (1997), 445-460.

[47] A. Weiermann: How to characterize provably total functions by local pred-

icativity. The Journal of Symbolic Logic 61 (1996), 52-69.

[48] A. Weiermann: Rewriting theory for the Hydra battle and the extended

Grzegorczyk hierarchy. The Journal of Symbolic Logic (to appear).

[49] A. Weiermann. Bounding derivation lengths with functions from the slow

growing hierarchy. Archive for Mathematical Logic 37 (1998), 427-441.

[50] A. Weiermann. Termination proofs for term rewriting systems by lexico-

graphic path orderings yield multiply recursive derivation lengths. Theoret-

ical Computer Science 139 (1995), 355-362.

