## Weiermann Applications of Infinitary Proof Theory (1999)

Applications of infinitary proof theory
Andreas Weiermann
Institut für mathematische Logik und Grundlagenforschung
der Westfälischen Wilhelms-Universität Münster
Einsteinstr. 62, D-48149 Münster, Germany
Summary
This paper consists of four chapters which can be read independently. In Chapter 1 we introduce term rewriting theory for proving nontrivial closure properties of the primitive recursive functions. In Chapter 2 we show that the derivation lengths of a finite rewrite system whose termination has been proved using the lexicographic path ordering are multiply recursive. Chapter 3 provides a termination proof for T using an interpretation of Gödel's system T into the natural numbers. In Chapter 4 we give a simple characterization of the provably recursive functions of PA.
1 Proving closure properties of the primitive recursive functions via term rewriting
Abstract
The termination of rewrite systems for parameter recursion, simple nested recursion and unnested multiple recursion is shown by using monotone interpretations on the natural numbers. We show that the resulting derivation lengths are primitive recursive. As a corollary we obtain transparent and illuminating proofs of the facts that the schemata of parameter recursion, simple nested recursion and unnested multiple recursion lead from primitive recursive functions to primitive recursive functions.
1.1 Introduction
A recursive function f is commonly defined by a defining equation (or, more generally, a system of equations) of the form

f(x) = t(λy. f(y), x)

where t involves some previously defined recursive functions. The term rewriting approach orients the defining equations into a rewrite system R_f containing the rule

f(x) → t(λy. f(y), x)
together with some rules for the other functions which appear in t. If the rewrite system R_f is confluent and terminating, then R_f yields a convenient model for nondeterministic computations of f. The derivation lengths function D_{R_f}, which measures the maximal possible lengths of nondeterministic R_f-computations, then gives some intrinsic information about the computational complexity of the original recursive function f. For example, if D_{R_f} is bounded by a function in a complexity class C which is closed under "elementary in" (hence under coding), then f is in C-TIME and so f is in C. Furthermore, this holds for any evaluation strategy for f. Obviously, the latter point gives essential information concerning the computational complexity of f. In particular, if C is the class of primitive recursive functions, then f is primitive recursive.
The emphasis of this chapter is on investigating interesting properties of the primitive recursive functions. In our term rewriting approach, the defining equations for the primitive recursive functions are oriented naturally into rewrite rules according to the intended computation strategy. It is well known that such a rewrite system, when it computes only finitely many functions, can be proved terminating via the multiset path ordering. Hence, as shown by Hofbauer in [ ], the resulting derivation lengths are bounded by a primitive recursive function.
We concentrate on certain recursive definition schemata for which it is not immediately obvious that they lead from primitive recursive functions to primitive recursive functions. In particular, we investigate the computational complexity of the rewrite systems for the schemata of parameter recursion, PRP, simple nested recursion, SNR, and unnested multiple recursion, UMR. The characteristic recursion equations for these schemata are
PRP : f(S(x), y) = h(x, y, f(x, p(x, y)))
SNR : f(S(x), y) = h(x, y, f(x, p(x, y, f(x, y))))
UMR : f(S(x), S(y)) = h(x, y, f(x, p(x, y)), f(S(x), y))

where in all three cases h and p denote previously defined primitive recursive functions.
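To see that each schema determines a well-defined total function, the following Python sketch (our own illustration, not part of the paper; the base functions g, h and p are arbitrary sample choices) transcribes the three recursion equations directly:

```python
# The three schemata as executable recursions (our illustration; the
# base functions below are arbitrary sample choices, not the paper's).

def h(x, y, z):              # a sample 3-ary "previously defined" h
    return x + y + z

def h4(x, y, z, w):          # UMR needs a 4-ary h; again a sample choice
    return x + y + z + w

def p(x, y, z=0):            # a sample parameter function p
    return x + y + z

def f_prp(x, y):             # PRP: f(S(x), y) = h(x, y, f(x, p(x, y)))
    if x == 0:
        return y             # base case g(y) := y
    return h(x - 1, y, f_prp(x - 1, p(x - 1, y)))

def f_snr(x, y):             # SNR: f(S(x), y) = h(x, y, f(x, p(x, y, f(x, y))))
    if x == 0:
        return y
    return h(x - 1, y, f_snr(x - 1, p(x - 1, y, f_snr(x - 1, y))))

def f_umr(x, y):             # UMR: f(S(x), S(y)) = h(x, y, f(x, p(x, y)), f(S(x), y))
    if x == 0:
        return y             # g_1(y) := y
    if y == 0:
        return x             # g_2(x) := x
    return h4(x - 1, y - 1, f_umr(x - 1, p(x - 1, y - 1)), f_umr(x, y - 1))
```

Termination is visible directly: in PRP and SNR the first argument strictly decreases at every recursive call, and in UMR the pair (x, y) decreases lexicographically.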
These schemata figure prominently in the literature. That they do not lead outside the class of primitive recursive functions has been proved, for example, in Péter [ ] or in Felscher [ ] using coding techniques, and by Feferman [ ] with elaborate proof-theoretic arguments. Simmons [ ] has proved similar results for strong generalisations of PRP and SNR.
1.2 Basic definitions

(i) O^n denotes the number-theoretic function (m_1, ..., m_n) ↦ 0.

(ii) For each n ∈ ω and for each 1 ≤ i ≤ n, P^n_i denotes the number-theoretic function (m_1, ..., m_n) ↦ m_i.

(iii) S denotes the number-theoretic function m ↦ m + 1.

(iv) If f is an m-ary function and if g_1, ..., g_m are n-ary functions then

SUB_{m,n}(f, g_1, ..., g_m)

denotes the function

(m_1, ..., m_n) ↦ f(g_1(m_1, ..., m_n), ..., g_m(m_1, ..., m_n)).

(v) If g is any n-ary function and if h is an (n + 2)-ary function then

REC_{n+1}(g, h)

denotes the unique (n + 1)-ary function f which satisfies the following recursion equations:

f(0, m_1, ..., m_n) = g(m_1, ..., m_n)
f(m + 1, m_1, ..., m_n) = h(m_1, ..., m_n, m, f(m, m_1, ..., m_n))
Definition 1.1 The set PR of primitive recursive functions is given by the following inductive definition:

(i) S ∈ PR.

(ii) O^n ∈ PR for every natural number n.

(iii) P^n_i ∈ PR for every natural number n (1 ≤ i ≤ n).

(iv) If f ∈ PR is an m-ary function and if g_1, ..., g_m ∈ PR are n-ary functions then SUB_{m,n}(f, g_1, ..., g_m) ∈ PR.

(v) If g ∈ PR is an n-ary function and h ∈ PR is an (n + 2)-ary function then REC_{n+1}(g, h) ∈ PR.
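The clauses of Definition 1.1 can be transcribed into executable combinators. The following Python sketch is our own illustration (the names O, P, SUB, REC mirror the definition); it builds addition as REC_1(P^1_1, SUB_{1,3}(S, P^3_3)):

```python
# Combinator-style transcription of Definition 1.1 (our sketch).

def O(n):                      # O^n: the n-ary constant-zero function
    return lambda *args: 0

def P(n, i):                   # P^n_i: the i-th n-ary projection (1 <= i <= n)
    return lambda *args: args[i - 1]

def S(m):                      # the successor function
    return m + 1

def SUB(f, *gs):               # SUB_{m,n}(f, g_1, ..., g_m)
    return lambda *args: f(*(g(*args) for g in gs))

def REC(g, h):                 # REC_{n+1}(g, h): primitive recursion
    def f(m, *args):
        if m == 0:
            return g(*args)
        return h(*args, m - 1, f(m - 1, *args))
    return f

# Addition, defined exactly by the recursion equations of (v):
add = REC(P(1, 1), SUB(S, P(3, 3)))
```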
Definition 1.2 For each natural number n we define a set N_n of primitive recursive function symbols as follows:

(i) S^1 ∈ N_1.

(ii) O^n ∈ N_n.

(iii) P^n_i ∈ N_n (1 ≤ i ≤ n).

(iv) If f ∈ N_m and g_1, ..., g_m ∈ N_n then SUB_{m,n}(f, g_1, ..., g_m) ∈ N_n.

(v) If g ∈ N_n and h ∈ N_{n+2} then REC_{n+1}(g, h) ∈ N_{n+1}.

We write 0 for O^0 and S for S^1. Let N := ∪{N_n | n < ω}.
Definition 1.3 For each f ∈ N, we define the complexity number of f, comp(f), as follows:

(i) comp(S^1) := 1.

(ii) comp(O^n) := 1.

(iii) comp(P^n_i) := 1.

(iv) comp(SUB_{m,n}(f, g_1, ..., g_m)) := max{comp(f), comp(g_1), ..., comp(g_m), m, n} + 2.

(v) comp(REC_{n+1}(g, h)) := max{comp(g), comp(h), n} + 2.
Definition 1.4 Recursive definition of Φ(f) ∈ PR for f ∈ N:

(i) Φ(S^1) := S.

(ii) Φ(O^n) := O^n.

(iii) Φ(P^n_i) := P^n_i.

(iv) Φ(SUB_{m,n}(f, g_1, ..., g_m)) := SUB_{m,n}(Φ(f), Φ(g_1), ..., Φ(g_m)).

(v) Φ(REC_{n+1}(g, h)) := REC_{n+1}(Φ(g), Φ(h)).
Lemma 1.1 For every n-ary primitive recursive function f there exists a primitive recursive function symbol g such that Φ(g) = f.

Proof f is defined according to the rules of PR. An unravelling of this definition yields the desired primitive recursive function symbol g in a canonical way.

We assume familiarity with the basic notions of rewriting theory. For a suitable overview see Dershowitz and Jouannaud [ ].
Definition 1.5 We define a set R_PR of rewrite rules (for calculating primitive recursive functions) over the signature N as follows:

(i) O^n(x_1, ..., x_n) → 0 (n ≥ 1).

(ii) P^n_i(x_1, ..., x_n) → x_i (1 ≤ i ≤ n).

(iii) SUB_{m,n}(f, g_1, ..., g_m)(x_1, ..., x_n) → f(g_1(x_1, ..., x_n), ..., g_m(x_1, ..., x_n)).

(iv) REC_{n+1}(g, h)(0, x_1, ..., x_n) → g(x_1, ..., x_n).

(v) REC_{n+1}(g, h)(S(x), x_1, ..., x_n) → h(x_1, ..., x_n, x, REC_{n+1}(g, h)(x, x_1, ..., x_n)).

R_PR models the definition schemata of the primitive recursive functions in a natural way.
Definition 1.6 We define a set R_PRP of rewrite rules as follows:

(i) f(0, y) → g(y).

(ii) f(S(x), y) → h(x, y, f(x, p(x, y))).

R_PRP models the schema of primitive recursion with parameter substitution.

Definition 1.7 We define a set R_SNR of rewrite rules as follows:

(i) f(0, y) → g(y).

(ii) f(S(x), y) → h(x, y, f(x, p(x, y, f(x, y)))).

R_SNR models the schema of simple nested recursion.
Definition 1.8 We define a set R_UMR of rewrite rules as follows:

(i) f(0, y) → g_1(y).

(ii) f(x, 0) → g_2(x).

(iii) f(S(x), S(y)) → h(x, y, f(x, p(x, y)), f(S(x), y)).

R_UMR models the schema of unnested multiple recursion (with parameter substitution).
All of these schemata can be shown terminating using the lexicographic path ordering. This, however, does not give us sufficient information for proving that they do not lead outside the primitive recursive functions, for the lexicographic path ordering also establishes the termination of the Ackermann function, which is well known not to be primitive recursive.
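For contrast, here is a short Python sketch (our own illustration) of the binary Ackermann function mentioned above; every recursive call strictly decreases the argument pair (x, y) lexicographically, which is precisely the kind of descent the lexicographic path ordering certifies:

```python
# The binary Ackermann function: terminating by lexicographic descent
# of (x, y), yet not primitive recursive (our illustration).

def ack(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return ack(x - 1, 1)           # (x, y) -> (x - 1, 1)
    return ack(x - 1, ack(x, y - 1))   # both calls are lexicographically smaller
```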
1.3 Monotone number-theoretic interpretations

Definition 1.9 For F ⊆ N let PR(F) be the least set T ⊇ F such that:

(i) If SUB_{m,n}(f, g_1, ..., g_m) ∈ T then f, g_1, ..., g_m ∈ T.

(ii) If REC_{n+1}(g, h) ∈ T then g, h ∈ T.

Let R_{PR(F)} be the restriction of R_PR to the set of terms over the signature PR(F).

Notation. If F is a signature then T(F) denotes the set of terms over F and G(F) denotes the set of ground terms over F. We assume that G(F) is always non-empty.
Definition 1.10 Recursive definition of the depth of t, dp(t), for t ∈ T(F) for some signature F:

(i) dp(t) := 0, if t is a variable or a constant function symbol.

(ii) dp(f(t_1, ..., t_n)) := max{dp(t_1), ..., dp(t_n)} + 1.
Definition 1.11 For a rewrite system R over a signature F we define the partial functions d_R and D_R, the derivation length functions, as follows:

(i) For t ∈ G(F), d_R(t) := max{n | (∃ t_1, ..., t_{n-1} ∈ G(F)) . t →_R t_1 →_R ... →_R t_{n-1}}.

(ii) For m ∈ ω, D_R(m) := max{d_R(t) | dp(t) ≤ m}.
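As a toy illustration of Definition 1.11 (ours, not from the paper; the term encoding and the choice of rules are assumptions), the following Python sketch computes the maximal number of rewrite steps from a ground term by exhaustive search, for the two REC-style rules for addition, add(0, y) → y and add(S(x), y) → S(add(x, y)):

```python
# Terms are nested tuples; '0' is the zero constant, ('S', t) is the
# successor, ('add', t1, t2) is addition (our encoding, not the paper's).

def step(t):
    """All terms reachable from t by one rewrite step at any position."""
    out = []
    if isinstance(t, tuple):
        if t[0] == 'add':
            a, b = t[1], t[2]
            if a == '0':
                out.append(b)                        # add(0, y) -> y
            if isinstance(a, tuple) and a[0] == 'S':
                out.append(('S', ('add', a[1], b)))  # add(S(x), y) -> S(add(x, y))
        for i in range(1, len(t)):                   # rewrite inside subterms
            for u in step(t[i]):
                out.append(t[:i] + (u,) + t[i + 1:])
    return out

def d_R(t):
    """Maximal number of rewrite steps in a derivation starting at t."""
    succs = step(t)
    return 0 if not succs else 1 + max(d_R(u) for u in succs)

def num(n):
    """The numeral S^n(0)."""
    return '0' if n == 0 else ('S', num(n - 1))
```

D_R(m) could then be obtained by maximizing d_R over the finitely many ground terms of depth at most m; this brute-force search is of course only feasible for tiny examples.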
The ╔-interpretations we shall give for proving termination of the schemas PR,
PRP, SNR and UMR will be based on the family {Fn | n z" ╔} which is
defined as follows.
Definition 1.12 Given d e" 2, we define :
(i) F0(a) :=da+1. Here the index means exponentiation.
d(1+a)
(ii) Fn+1(a) :=Fn (a). Here the index means iteration.
The following properties of the functions F_n are readily established:

Lemma 1.2 For every n ∈ ω and for all a, b ∈ ω, we have

(i) a < F_n(a),

(ii) a < b ⟹ F_n(a) < F_n(b),

(iii) a · d ≤ F_n(a), and hence a_1 + ... + a_d ≤ F_n(max_{1≤i≤d} a_i),

(iv) F_n^2(F_{n+1}(a)) ≤ F_{n+1}(a + 1) for d ≥ 2,

(v) F_n is primitive recursive.
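Definition 1.12 can be written down directly as code. The following Python sketch (ours) is only usable for very small arguments, since already F_1 is an iterated exponential and F_2 is a tower:

```python
# The functions F_n of Definition 1.12 with base d >= 2 (our sketch).
# Values explode extremely quickly; evaluate only tiny arguments.

def F(n, a, d=2):
    if n == 0:
        return d ** (a + 1)            # F_0(a) = d^(a+1)
    x = a
    for _ in range(d * (1 + a)):       # F_{n+1}(a) = F_n^{d(1+a)}(a)
        x = F(n - 1, x, d)
    return x
```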
Theorem 1.1 R_{PR(F)} is terminating under primitive recursive interpretations in case that F ⊆ N is finite.

Proof For f ∈ N_n define a number-theoretic interpretation H as follows:

H(f)(m_1, ..., m_n) := F_{comp(f)}(m_1 + ... + m_n).

Then H is a monotone primitive recursive interpretation under which the rules are reducing.
Definition 1.13 The numeral m̄ of m ∈ ω is defined recursively by 0̄ := 0 and by letting the numeral of m + 1 be S^1(m̄).

It follows easily that for each f ∈ N_n ∩ F the term f(m̄_1, ..., m̄_n) reduces under R_{PR(F)} in a uniquely determined way to a numeral which corresponds to Φ(f)(m_1, ..., m_n). In such a situation we say that R_{PR(F)} computes Φ(f).
Our characterisation results will depend on the following observation adapted from Hofbauer [ ]:

Theorem 1.2 Suppose that R is a finite rewrite system over a finite signature F for which termination can be proved using a number-theoretic interpretation I. Suppose further that there exists n ∈ ω such that for every f ∈ F and for all a_1, ..., a_k ∈ ω, I(f)(a_1, ..., a_k) ≤ F_n(a_1 + ... + a_k), where the value of d in the definition of the F_n is taken to be at least 2 and greater than the arities of all function symbols of F. Then, for any closed term t,

d_R(t) ≤ F_{n+1}(dp(t)),

from which it follows immediately that D_R(m) ≤ F_{n+1}(m).
Proof The proof is by induction on dp(t). It is clear that the ω-interpretation of a closed term t is a bound on the value of d_R(t). If t is a constant then, from the hypothesis, we obtain d_R(t) ≤ I(t) ≤ F_n(0) < F_{n+1}(0). Otherwise let t = f(t_1, ..., t_k). We then have

d_R(t) ≤ I(f)(I(t_1), ..., I(t_k))
 ≤ F_n(I(t_1) + ... + I(t_k))   (by hypothesis)
 ≤ F_n^2(max_i I(t_i))   (by Lemma 1.2(iii), since d > k)
 ≤ F_n^2(F_{n+1}(max_i dp(t_i)))   (by induction hypothesis)
 ≤ F_{n+1}(dp(t))   (by Lemma 1.2(iv), since d ≥ 2).
As a consequence of Theorem 1.2, it suffices to show in the sequel that termination proofs can be achieved using primitive recursive interpretations. As an immediate corollary, we have:

Theorem 1.3 D_{R_{PR(F)}} is primitive recursive in case that F ⊆ N is finite.

Remark. Theorem 1.1 is a special case of a more general result proved in [ ].
Theorem 1.4 R_PRP is terminating and D_{R_PRP} is primitive recursive.

Proof Define monotone number-theoretic interpretations as follows: I(0) := 0, I(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ω such that, for all a, b, c,

I(g)(b) ≤ F_n(b)
I(h)(a, b, c) ≤ F_n(a + b + c)
I(p)(a, b) ≤ F_n(a + b).

We then put

I(f)(a, b) := F_n^{d(1+a)}(a + b), with d ≥ 4.

Clearly, we have F_n(a + b) < I(f)(a, b) ≤ F_{n+1}(a + b), justifying inductively the assumption on g, h and p. To see that the rules are reducing, write a for I(x) and b for I(y):

I(f(0, y)) = F_n^{d(1+0)}(0 + b)
 = F_n^d(b)
 > F_n(b)   (for d ≥ 2)
 ≥ I(g(y))   (by hypothesis).

I(h(x, y, f(x, p(x, y)))) ≤ F_n(a + b + F_n^{d(1+a)}(a + F_n(a + b)))
 ≤ F_n(a + b + F_n^{d(1+a)}(F_n^2(a + b)))   (by Lemma 1.2(iii))
 = F_n(a + b + F_n^{d(1+a)+2}(a + b))
 ≤ F_n^2(F_n^{d(1+a)+2}(a + b))   (by Lemma 1.2(iii))
 = F_n^{d(1+a)+4}(a + b)
 ≤ F_n^{d(1+a+1)}(a + b)   (for d ≥ 4)
 < F_n^{d(1+a+1)}(a + 1 + b)
 = I(f(S(x), y)).

The next theorem subsumes the theorem above and is slightly more involved.
Theorem 1.5 R_SNR is terminating and D_{R_SNR} is primitive recursive.

Proof Define monotone number-theoretic interpretations as follows: J(0) := 0, J(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ω such that, for all a, b, c,

J(g)(b) ≤ F_n(b)
J(h)(a, b, c) ≤ F_n(a + b + c)
J(p)(a, b, c) ≤ F_n(a + b + c).

We then put

J(f)(a, b) := F_n^{d^{1+a}}(a + b), with d ≥ 5.

We show first that the least z such that ∀a, b . J(f)(a, b) ≤ F_z(a + b) is n + 2. Now,

F_{n+2}(a + b) = F_{n+1}^{d(1+a+b)}(a + b)
 ≥ F_{n+1}^2(a + b)
 = F_n^{d(1+F_{n+1}(a+b))}(F_{n+1}(a + b))
 > F_n^{F_0(a)}(a + b)
 = F_n^{d^{1+a}}(a + b)
 = J(f)(a, b).

Note, however, that since d ≥ 2, we have F_{n+1}(d^d) < J(f)(d^d, 0). Therefore, ∀a, b . J(f)(a, b) ≤ F_{n+1}(a + b) does not hold.

To see that the rules are reducing under this interpretation, write a for J(x) and b for J(y):

J(f(0, y)) = F_n^{d^{1+0}}(0 + b)
 = F_n^d(b)
 > F_n(b)
 ≥ J(g(y)).

J(h(x, y, f(x, p(x, y, f(x, y)))))
 ≤ F_n(a + b + F_n^{d^{1+a}}(a + F_n(a + b + F_n^{d^{1+a}}(a + b))))
 ≤ F_n F_n F_n^{d^{1+a}} F_n F_n F_n F_n^{d^{1+a}}(a + b)   (repeated use of Lemma 1.2(iii))
 = F_n^{d^{1+a}·2+5}(a + b)
 < F_n^{d^{1+a}·(d-1)+d^{1+a}}(a + b)   (for d ≥ 5)
 = F_n^{d^{1+a+1}}(a + b)
 < F_n^{d^{1+a+1}}(a + 1 + b)
 = J(f(S(x), y)).

The result now follows.
Theorem 1.6 R_UMR is terminating and D_{R_UMR} is primitive recursive.

Proof Define number-theoretic interpretations as follows: K(0) := 0, K(S)(a) := a + 1, and assume, inductively, that there exists a least n ∈ ω such that, for all a, b, c, e,

K(g_1)(a) ≤ F_n(a)
K(g_2)(b) ≤ F_n(b)
K(h)(a, b, c, e) ≤ F_n(a + b + c + e)
K(p)(a, b) ≤ F_n(a + b).

We then put

K(f)(a, b) := F_n^{F_n^{d(1+a)}(a+b) + d(1+b)}(a + b), with d ≥ 4.

We show first that the least z such that ∀a, b . K(f)(a, b) ≤ F_z(a + b) is n + 2. As in the case of PRP, we have F_n^{d(1+a)}(a + b) ≤ F_{n+1}(a + b). By induction on b we have d(1 + b) ≤ F_0(b) for d ≥ 2, and thus d(1 + b) ≤ F_{n+1}(a + b). Hence

K(f)(a, b) ≤ F_n^{F_{n+1}(a+b)·2}(a + b)
 ≤ F_n^{F_{n+1}^2(a+b)}(a + b)   (by Lemma 1.2(iii))
 < F_n^{d(1+F_{n+1}^2(a+b))}(F_{n+1}^2(a + b))
 = F_{n+1}^3(a + b)
 ≤ F_{n+1}^{d(1+a+b)}(a + b)   (for d ≥ 3)
 = F_{n+2}(a + b).

Note, however, that F_{n+1}(a) < K(f)(a, 0), confirming minimality.

To see that the rules are reducing under this interpretation, write a for K(x) and b for K(y). It is clear that K(f(x, 0)) > K(g_2(x)) and that K(f(0, y)) > K(g_1(y)). Now,

K(h(x, y, f(x, p(x, y)), f(S(x), y)))
 ≤ F_n(a + b + K(f)(a, F_n(a + b)) + K(f)(a + 1, b))
 ≤ F_n^2(max{K(f)(a, F_n(a + b)), K(f)(a + 1, b)})   (by Lemma 1.2(iii), since d ≥ 4).

We show first that max{K(f)(a, F_n(a + b)), K(f)(a + 1, b)} = K(f)(a + 1, b):

K(f)(a, F_n(a + b)) = F_n^{F_n^{d(1+a)}(a + F_n(a+b)) + d(1+F_n(a+b))}(a + F_n(a + b))
 ≤ F_n^{F_n^{d(1+a)}(F_n^2(a+b)) + F_n^2(a+b)}(F_n^2(a + b))   (by Lemma 1.2(iii))
 ≤ F_n^{F_n F_n^{d(1+a)} F_n^2(a+b)}(F_n^2(a + b))   (by Lemma 1.2(iii))
 = F_n^{F_n^{d(1+a)+3}(a+b) + 2}(a + b)
 ≤ F_n^{F_n^{d(1+a)+4}(a+b)}(a + b)   (by Lemma 1.2(iii))
 ≤ F_n^{F_n^{d(1+a+1)}(a+b)}(a + b)   (for d ≥ 4)
 < K(f)(a + 1, b).

Now,

F_n^2(K(f)(a + 1, b)) = F_n^2 F_n^{F_n^{d(1+a+1)}(a+1+b) + d(1+b)}(a + 1 + b)
 = F_n^{F_n^{d(1+a+1)}(a+1+b) + d(1+b) + 2}(a + 1 + b)
 ≤ F_n^{F_n^{d(1+a+1)}(a+1+b) + d(1+b+1)}(a + 1 + b)   (since d > 2)
 < F_n^{F_n^{d(1+a+1)}(a+1+b+1) + d(1+b+1)}(a + 1 + b + 1)
 = K(f)(a + 1, b + 1).

The result now follows.
From the above considerations we have a proof of the following classical result.

Theorem 1.7 The primitive recursive functions are closed under parameter recursion, simple nested recursion and unnested multiple recursion.

Proof Assume that the functions Φ(g), Φ(h) and Φ(p) involved are primitive recursive, where g, h, p ∈ N are chosen according to Lemma 1.1. Then there exists a canonical rewrite system, namely R_{PR({g,h,p})}, which computes Φ(g), Φ(h) and Φ(p) in the natural way. The termination of this system can be shown according to Theorem 1.1 by a monotone interpretation H which involves only finitely many functions F_{l_1}, ..., F_{l_m}. Let n := max{l_1, ..., l_m}. Then the proofs of Theorems 1.4, 1.5 and 1.6 yield an obvious extension of H to I, J or K, respectively, which shows that the derivation lengths function of the rewrite system computing the function introduced by the schema under consideration is primitive recursive. The number of steps needed for computing such a function on a Turing machine (or register machine) is primitive recursive (even elementary recursive) in its derivation lengths function. Hence an application of the (primitive recursive) bounded search operator yields the assertion.
The schemata of primitive recursion with parameter substitution and simple nested recursion are special cases of the following definition schema of general simple nested recursion:

(i) f(x, 0) = g(x);
(ii) f(x, y + 1) = t(λx . f(x, y), x, y);

where g is primitive recursive and t is built up using primitive recursive functions and the application functional App, which is defined by App(λx . h(x), z) = h(z).

A slightly more general schema is given by the following definition schema of nested less-than recursion:

(i) f(x, 0) = g(x);
(ii) f(x, y) = t(λx . f(x, y), x, y);

where g and t are as before and where λx . f(x, y) denotes the course-of-values recursion of λx . f(x, ·) along y.

The schema of un-nested multiple recursion is a special case of the following schema of general un-nested multiple recursion:

(i) f(0, y) = g_1(y);
(ii) f(x, 0) = g_2(x);
(iii) f(x + 1, y + 1) = t(λx . f(x, y), f(x, y + 1), x, y);

where the g_i and t are as before and f(x, y + 1) must not occur nestedly in λx . f(x, y).
The methods developed in this chapter easily yield that the derivation lengths of the rewrite systems for general simple nested recursion, nested less-than recursion and general un-nested multiple recursion are primitive recursive. Hence, these schemata lead from primitive recursive functions to primitive recursive functions, and every evaluation strategy yields a primitive recursive algorithm for the defined function in question.
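The schema of general simple nested recursion can be mimicked with first-class functions. In the following Python sketch (ours; the name general_snr and the sample instance are hypothetical), the abstraction λx . f(x, y) becomes a closure and App is ordinary application:

```python
# General simple nested recursion via closures (our sketch).

def App(func, z):            # the application functional App
    return func(z)

def general_snr(g, t):
    """Return f with f(x, 0) = g(x) and f(x, y + 1) = t(lambda x: f(x, y), x, y)."""
    def f(x, y):
        if y == 0:
            return g(x)
        return t(lambda z: f(z, y - 1), x, y - 1)
    return f

# Sample instance: t(prev, x, y) = App(prev, x) + 1 yields f(x, y) = g(x) + y.
f = general_snr(lambda x: 2 * x, lambda prev, x, y: App(prev, x) + 1)
```

The recursion terminates because the second argument strictly decreases, regardless of how t uses the closure it is handed.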
The definition schema of nested multiple recursion has the form:

f(x, y, z) = t(..., λz . f(x, y, z), f(x, y, z), x, y, z),

where t is as before, the marked arguments denote the corresponding course-of-values recursions along y and z respectively, and the dots indicate the possibility of more nested recursions. The termination of the corresponding rewrite system can be proved with the lexicographic path ordering; hence the derivation lengths function for this system is definable by nested multiple recursion, as shown in [ ]. So any evaluation strategy for the latter rewrite system yields an algorithm which runs in nested multiple recursive time.
Similar closure results also hold for the classes of ε_0-recursive functions and of ω^{ω^{...}}-recursive functions.
2 Classifying the computational complexity of the Lexicographic Path Orderings when applied to rewrite systems
Abstract

It is shown that a termination proof for a term rewriting system using a lexicographic path ordering yields a multiply recursive bound on the length of derivations, measured in the depth of the starting term. This result is essentially optimal, since for every multiply recursive function f a rewrite system (which reduces under the lexicographic path ordering) can be found such that its derivation length cannot be bounded by f.
Introduction

Let R be a (finite) rewrite system over a (finite) set F of function symbols. Let G(F) be the set of ground terms over F. (We assume that G(F) is not empty.) For m ∈ ℕ let

D(m) := max{n ∈ ℕ : (∃ t_1, ..., t_n ∈ G(F))[dp(t_1) ≤ m & t_1 →_R ... →_R t_n]},

where dp(t) denotes the depth of a term t and →_R denotes the rewrite relation induced by R. D is called the derivation length function for R. If F is finite and if R is finite and terminating, then D is a (total) recursive function.
In order to investigate the power as well as the limitations of different termination proof methods, it is natural to ask how long derivation sequences can be if a certain termination ordering is used. Feasible bounds on D indicate good practical applicability of the rewrite system in question. On the other hand, infeasibly large bounds on D in the worst case may indicate a wide range of applicability of the termination proof method used. Meanwhile, several results on classifying derivation lengths have been established. For example, it is known that a termination proof using a polynomial interpretation yields a doubly exponential bound on D (see, for example, [ ]). Moreover, it has been shown that a termination proof using a multiset path ordering (which is also known as the recursive path ordering) yields a primitive recursive bound on D (cf. [ ]).
In this chapter we show that a termination proof with a lexicographic path ordering yields a multiply recursive bound on D. In this case there is, informally speaking, an n > 1 such that D is eventually dominated by a function x ↦ Ack_n(x, 0, ..., 0), where Ack_n denotes an n-ary Ackermann function. This result is optimal. Therefore we get a characterization of the power of termination proofs by lexicographic path orderings in terms of derivation lengths. It turns out that, with respect to derivation lengths, termination proofs using lexicographic path orderings are much more powerful than, for example, termination proofs with multiset path orderings. In particular, the lexicographic path ordering can be used for proving termination of standard rewrite systems which compute n-ary Ackermann functions.
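Several inequivalent n-ary Ackermann functions appear in the literature; the following Python sketch is our own variant, chosen so that on two arguments it coincides with the standard binary Ackermann function. Every recursive call either shortens the argument tuple or decreases it lexicographically, which is the descent behaviour the lexicographic path ordering captures:

```python
# A hedged n-ary Ackermann variant (ours; other variants exist).
# For len(v) == 2 this is the standard binary Ackermann function.

def ack(*v):
    if len(v) == 1:
        return v[0] + 1
    x, rest = v[0], v[1:]
    if x == 0:
        return ack(*rest)                               # shorten the tuple
    if rest[0] == 0:
        return ack(x - 1, 1, *rest[1:])                 # lexicographic descent
    return ack(x - 1, ack(x, rest[0] - 1, *rest[1:]), *rest[1:])
```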
2.1 Basic Definitions

Let F = {f_1, ..., f_K} be a finite set of function symbols of fixed cardinality K. Let ≻ be a total precedence (i.e. a linear order) on F such that f_i ≻ f_j ⟺ i > j. Let a(f) denote the arity of f and let M := max{a(f) : f ∈ F}. Let T(F, X) be the set of terms over F and a countably infinite set X of variables. Let G(F) be the set of ground terms over F, which is assumed to be not empty.
Recall that the depth function dp is defined on T(F, X) as follows:

(i) dp(t) := 0, if t is a constant symbol or a variable,

(ii) dp(f(t_1, ..., t_n)) := max{dp(t_1), ..., dp(t_n)} + 1, else.

The set of variables of a term t is denoted by Var(t). A mapping σ : X → T(F, X) is called a substitution. If t is a term we denote by tσ the result of substituting every occurrence of a variable x in t by σ(x). A term rewriting system (or rewrite system) R over T(F, X) is a finite set of ordered pairs ⟨s, t⟩ such that s, t ∈ T(F, X) and Var(t) ⊆ Var(s). The rewrite relation →_R on T(F, X) is the least binary relation on T(F, X) such that:

(i) if ⟨s, t⟩ ∈ R and σ : X → T(F, X), then sσ →_R tσ,

(ii) if s →_R t, then f(..., s, ...) →_R f(..., t, ...).

A rewrite system R is said to be terminating if there is no infinite sequence ⟨t_i : i ∈ ℕ⟩ of terms such that

t_1 →_R t_2 →_R ... →_R t_m →_R ... .
The lexicographic path ordering ≻_lpo on T(F, X) is defined as follows (cf. [ ]):

s = f(s_1, ..., s_m) ≻_lpo g(t_1, ..., t_n) = t iff

(i) s_k = t or s_k ≻_lpo t for some k ∈ {1, ..., m}, or

(ii) f ≻ g and s ≻_lpo t_l for all l ∈ {1, ..., n}, or

(iii) f = g and there is an i_0 ∈ {1, ..., n} such that s_1 = t_1, ..., s_{i_0−1} = t_{i_0−1}, s_{i_0} ≻_lpo t_{i_0}, and s ≻_lpo t_l for all l ∈ {i_0 + 1, ..., n}.
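The three clauses can be transcribed for ground terms. In the following Python sketch (ours; variables are omitted for brevity) a term is a nested tuple whose head is an integer symbol index, and the precedence f_i ≻ f_j iff i > j is plain integer comparison:

```python
# Ground-term lexicographic path ordering (our sketch).
# A term is a tuple (j, t_1, ..., t_n): head symbol f_j with subterms t_i.

def lpo_gt(s, t):
    f, ss = s[0], s[1:]
    g, ts = t[0], t[1:]
    # (i) some direct subterm of s equals t or is lpo-greater than t
    if any(sk == t or lpo_gt(sk, t) for sk in ss):
        return True
    # (ii) bigger head symbol, and s is lpo-greater than every subterm of t
    if f > g and all(lpo_gt(s, tl) for tl in ts):
        return True
    # (iii) equal head symbols: the first strict difference decides, and s
    #       must still be lpo-greater than the remaining subterms of t
    if f == g:
        for i in range(min(len(ss), len(ts))):
            if ss[i] == ts[i]:
                continue
            return (lpo_gt(ss[i], ts[i])
                    and all(lpo_gt(s, tl) for tl in ts[i + 1:]))
    return False

# Example: with 0 = zero, 1 = S, 2 = ack, the ground rule instance
# ack(S(0), S(0)) -> ack(0, ack(S(0), 0)) decreases under lpo_gt.
lhs = (2, (1, (0,)), (1, (0,)))
rhs = (2, (0,), (2, (1, (0,)), (0,)))
```

Since lpo_gt orients this Ackermann-style rule while the converse comparison fails, the sketch illustrates the remark that the lexicographic path ordering proves termination of rewrite systems computing Ackermann functions.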
Lemma 2.1 (i) If s ≻_lpo t, then Var(t) ⊆ Var(s).

(ii) If R is a rewrite system such that R is contained in a lexicographic path ordering, then R is terminating.

Proof: The first assertion is clear. The second assertion is proved, for example, in [ ].
For introducing the extended Grzegorczyk hierarchy we assume familiarity with the elementary theory of ordinals. A short introduction to this theory can be found, for example, in [ ] or in [ ]. Let α < ω^ω be an ordinal such that α ≠ 0. Then there are uniquely determined natural numbers m_1 > ... > m_n and positive natural numbers a_1, ..., a_n (n > 0) such that

α = ω^{m_1}·a_1 + ... + ω^{m_n}·a_n.

In this situation we will write α =_NF ω^{m_1}·a_1 + ... + ω^{m_n}·a_n. Let

maxcoef(α) := max{a_1, ..., a_n}

be the maximal coefficient of α. For each natural number x and limit ordinal α =_NF ω^{m_1}·a_1 + ... + ω^{m_n}·a_n < ω^ω let

α[x] := ω^{m_1}·a_1 + ... + ω^{m_n}·(a_n − 1) + ω^{m_n − 1}·x.

For α, β < ω^ω, α = ω^n·a_0 + ... + ω^0·a_n and β = ω^n·b_0 + ... + ω^0·b_n, let

α ⊕ β := ω^n·(a_0 + b_0) + ... + ω^0·(a_n + b_n)

be the (commutative) natural sum of α and β.

For α < ω^ω we define recursively the extended Grzegorczyk hierarchy (cf. [ ]) as follows:

(i) F_0(x) := x + 1 (x ∈ ℕ),

(ii) F_{α+1}(x) := F_α ∘ ... ∘ F_α(x) ((x + 1)-times, x ∈ ℕ),

(iii) F_α(x) := F_{α[x]}(x) (x ∈ ℕ), if α is a limit.

In the sequel we denote the x-th iterate of a function f by f^x.
Lemma 2.2 For all α, β < ω^ω and all x ∈ ℕ:

(i) F_α(x) < F_α(x + 1),

(ii) x < F_α(x),

(iii) x ≠ 0 ⟹ F_α(x) < F_{α+1}(x),

(iv) α < β & maxcoef(α) ≤ x ⟹ F_α(x) < F_β(x),

(v) maxcoef(α) < F_α(x) for x ≠ 0,

(vi) x ≠ 0 & β ≠ 0 ⟹ F_α(x) < F_{α⊕β}(x).

Proof: A proof of this lemma can be found, for example, in [ ].

A function f : ℕ → ℕ is called multiply recursive if there exists an α < ω^ω such that f is primitive recursive in F_α. (See, for example, [ ] for a definition.)
2.2 The interpretation theorem

Recall that K denotes the cardinality of F and that M denotes the maximum of the arities of function symbols from F. Let d be a fixed natural number such that K < d. We define a mapping π : G(F) → ℕ (which depends on d) recursively as follows. Let t = f_j(t_1, ..., t_n) ∈ G(F) (n ≥ 0) be a ground term and assume that π(t_1), ..., π(t_n) are defined. Then let

π(t) := F_{ω^{M+1}·j + ω^n·π(t_1) + ... + ω^1·π(t_n) + 1}(d).

Lemma 2.3 (i) (Subterm) π(t) < π(f(..., t, ...)),

(ii) (Monotonicity) π(s) < π(t) ⟹ π(f(..., s, ...)) < π(f(..., t, ...)).

Proof: This lemma follows from assertions (v) and (vi) of Lemma 2.2.

Lemma 2.4 If t ∈ G(F), then

F_{ω^{M+2}}^{1+d+dp(t)}(d) > π(t).
Proof by induction on dp(t).

Case 1: dp(t) = 0. Then t is a constant symbol f_j. Since j ≤ K < d, assertion (iv) of Lemma 2.2 yields

π(t) = F_{ω^{M+1}·j+1}(d) < F_{ω^{M+2}}(d) ≤ F_{ω^{M+2}}^{1+d}(d).

Case 2: dp(t) > 0. Let t = f_j(t_1, ..., t_n). By induction hypothesis we see

F_{ω^{M+2}}^{1+d+dp(t_l)}(d) > π(t_l) for every l ∈ {1, ..., n}.

Let

α := ω^{M+1}·j + ω^n·π(t_1) + ... + ω^1·π(t_n) + 1.

Then π(t) = F_α(d). Since α < ω^{M+2} and

maxcoef(α) ≤ F_{ω^{M+2}}^{d+dp(t)}(d),

we see by assertion (iv) of Lemma 2.2

F_α(d) ≤ F_α(F_{ω^{M+2}}^{d+dp(t)}(d)) < F_{ω^{M+2}}^{1+d+dp(t)}(d).
Lemma 2.5 (Main Lemma) Let s, t ∈ T(F, X) with dp(t) ≤ d, and let σ : X → G(F) be a ground substitution. If s ≻_lpo t, then

π(sσ) > π(tσ).

Proof by induction on dp(s). Since s ≻_lpo t, s is not a variable.

Case 1: s is a constant symbol f_i. Then t is closed, since s ≻_lpo t. Hence sσ = s and tσ = t. By subsidiary induction on dp(t) we show

F_{ω^{M+1}·i}^{dp(t)+1}(d) > π(t).

The claim follows since dp(t) ≤ d and F_{ω^{M+1}·i}^{d+1}(d) ≤ F_{ω^{M+1}·i+1}(d) = π(s).

Case 1.1: dp(t) = 0. Then t is a constant symbol f_j. From f_i ≻_lpo t we conclude d > i > j. Assertion (iv) of Lemma 2.2 yields

F_{ω^{M+1}·i}(d) > F_{ω^{M+1}·j+1}(d) = π(t).

Case 1.2: dp(t) > 0. Let t = f_j(t_1, ..., t_n). From s ≻_lpo t we conclude d > i > j and s ≻_lpo t_l for all l ∈ {1, ..., n}. The induction hypothesis yields

F_{ω^{M+1}·i}^{dp(t_l)+1}(d) > π(t_l)   (l ∈ {1, ..., n}).

Assertion (iv) of Lemma 2.2 yields

F_{ω^{M+1}·i}^{dp(t)+1}(d) = F_{ω^{M+1}·i}(F_{ω^{M+1}·i}^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1) + ... + ω^1·π(t_n) + 1}(F_{ω^{M+1}·i}^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1) + ... + ω^1·π(t_n) + 1}(d)
 = π(t).
Case 2: s = f_i(s_1, ..., s_m) with m > 0. The main induction hypothesis yields

dp(u) ≤ d & s_k ≻_lpo u ⟹ π(s_kσ) > π(uσ)

for every u ∈ T(F, X) and every k ∈ {1, ..., m}. Let

α := ω^{M+1}·i + ω^m·π(s_1σ) + ... + ω^1·π(s_mσ).

By subsidiary induction on dp(t) we show

F_α^{dp(t)+1}(d) > π(tσ).

From this the claim follows as in Case 1.

Case 2.1: dp(t) = 0. Then t is a constant symbol or a variable.

Case 2.1.1: t is a variable. Since s ≻_lpo t we see t ∈ Var(s). Thus

t ∈ Var(s_k)

for some k ∈ {1, ..., m}. Assertion (i) of Lemma 2.3 and assertion (v) of Lemma 2.2 yield

π(tσ) ≤ π(s_kσ) ≤ maxcoef(α) < F_α(d).

Case 2.1.2: t is a constant symbol f_j. Then tσ = t. Since s ≻_lpo t we conclude that i > j, or s_k = t or s_k ≻_lpo t for some k ∈ {1, ..., m}, must hold. If s_k = t or s_k ≻_lpo t, then the main induction hypothesis yields

F_α(d) > π(s_kσ) ≥ π(t).

If i > j, then assertion (iv) of Lemma 2.2 yields

F_α(d) > F_{ω^{M+1}·j+1}(d) = π(t),

since ω^{M+1}·j + 1 < α and j < d.

Case 2.2: dp(t) > 0. Let

t = f_j(t_1, ..., t_n).

Then dp(t_l) < dp(t) for l ∈ {1, ..., n}. The subsidiary induction hypothesis yields

(∗) F_α^{dp(t_l)+1}(d) > π(t_lσ)

for l ∈ {1, ..., n}. Since s ≻_lpo t we have to consider three subcases.

Case 2.2.1: s_k = t or s_k ≻_lpo t for some k ∈ {1, ..., m}. The main induction hypothesis yields

π(tσ) ≤ π(s_kσ) < F_α(d) ≤ F_α^{dp(t)+1}(d).

Case 2.2.2: i > j and s ≻_lpo t_l for every l ∈ {1, ..., n}. The subsidiary induction hypothesis and assertion (iv) of Lemma 2.2 yield

F_α^{dp(t)+1}(d) = F_α(F_α^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1}(F_α^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1}(d)
 = π(tσ),

since α > ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1 and

maxcoef(ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1) ≤ F_α^{dp(t)}(d).

Case 2.2.3: i = j and s_1 = t_1, ..., s_{i_0−1} = t_{i_0−1} and s_{i_0} ≻_lpo t_{i_0} and s ≻_lpo t_l for l ∈ {i_0 + 1, ..., n}. The main induction hypothesis yields

π(s_{i_0}σ) > π(t_{i_0}σ).

The subsidiary induction hypothesis and assertion (iv) of Lemma 2.2 yield

F_α^{dp(t)+1}(d) = F_α(F_α^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1}(F_α^{dp(t)}(d))
 > F_{ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1}(d)
 = π(tσ),

since α > ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1 and

maxcoef(ω^{M+1}·j + ω^n·π(t_1σ) + ... + ω^1·π(t_nσ) + 1) ≤ F_α^{dp(t)}(d).

Theorem 2.1 Let R be a finite set of rewrite rules over T(F, X). Assume that R is contained in ≻_lpo. Put

d := 1 + max({K} ∪ {dp(t) : (∃s)[⟨s, t⟩ ∈ R]}).

Let

t_1 →_R ... →_R t_m,

where t_1, ..., t_m ∈ G(F). Then

m ≤ F_{ω^{M+2}+1}(d + dp(t_1)).

Proof: Assume that t_1 →_R ... →_R t_m. Then by assumption t_1 ≻_lpo ... ≻_lpo t_m. By the Main Lemma (Lemma 2.5) and the monotonicity of π (Lemma 2.3) we see

π(t_1) > ... > π(t_m).

Thus

π(t_1) ≥ m.

By Lemma 2.4,

F_{ω^{M+2}}^{1+d+dp(t_1)}(d) > π(t_1),

and F_{ω^{M+2}}^{1+d+dp(t_1)}(d) ≤ F_{ω^{M+2}+1}(d + dp(t_1)).
For m ∈ ℕ let

D(m) := max{n ∈ ℕ : (∃ t_1, ..., t_n ∈ G(F))[dp(t_1) ≤ m & t_1 →_R ... →_R t_n]}.

This function D is by definition the derivation length function for R.

Corollary 2.1 Let R be a rewrite system over T(F, X) such that R is contained in a lexicographic path ordering. Then m ↦ D(m) is bounded by a multiply recursive function.
It can easily be seen from [ ] and [ ] that there is a hierarchy of rewrite systems such that for every rewrite system of this hierarchy termination can be proved by using a lexicographic path ordering, and such that the resulting hierarchy of derivation lengths cannot be majorized by a single multiply recursive function.
3 A constructive termination proof for Gödel's system T
Abstract

Inspired by Pohlers' local predicativity approach to Pure Proof Theory and Howard's ordinal analysis of bar recursion of type zero, we present a short, technically smooth and constructive strong normalization proof for Gödel's system T of primitive recursive functionals of finite types, by constructing an ε_0-recursive function [·]_0 : T → ω such that a reduces to b implies [a]_0 > [b]_0.

The construction of [·]_0 is based on a careful analysis of the Howard-Schütte treatment of Gödel's T and utilizes the collapsing function π : ε_0 → ω which has been developed by the author for a local predicativity style proof-theoretic analysis of PA.

For given n let T_n be the subsystem of T in which the recursors have type level less than or equal to n + 2. (By definition, case distinction functionals for every type are also contained in T_n.) As a corollary of the main theorem of this chapter we obtain optimal bounds for the T_n-derivation lengths in terms of ω_{n+2}-descent recursive functions. The derivation lengths of type one functionals from T_n (hence also their computational complexities) are classified optimally in terms of <ω_{n+2}-descent recursive functions.

In particular we obtain that the derivation lengths function of a type one functional a ∈ T_0 is primitive recursive; thus any type one functional a in T_0 defines a primitive recursive function. Similarly we also obtain a full classification of T_1 in terms of multiple recursion.
3.1 How is it that infinitary proof theory can be applied to finitary mathematics?

A fascinating feature of Pure Proof Theory is the intrinsic interplay of different areas of logic like set theory, generalized recursion theory, classical recursion theory and proof theory. The aims, methods and results of the Pure Proof Theory program have been manifested and reported by Pohlers in .
In this chapter we are concerned with successful applications of versatile methods from rather advanced Pure Proof Theory to classical recursion theory, the proof theory of first order Peano Arithmetic (and its fragments) and problems of computational complexity. With the right principles from advanced proof theory in mind as guidelines, it is indeed often possible to obtain deep and combinatorially involved results in a perspicuous way by miniaturizing or projecting down the appropriate abstract principles.
For example, by projecting Pohlers' local predicativity treatment of KPω (i.e. Kripke-Platek set theory with the infinity axiom) down to PA (which is of the same proof-theoretic strength as Kripke-Platek set theory without the infinity axiom) it was possible to reobtain the standard classification of the PA-provably recursive functions in a perspicuous way . Later on this approach has been utilized for further investigations in proof and recursion theory by Arai . Another perspicuous characterization of the provably recursive functions of PA could also be obtained by projecting Buchholz' treatment of KPω (via his simplified version of local predicativity ) down to PA . (See also Chapter 4.) This treatment yields a perspicuous explanation of the intrinsic relationship between witness bounding and cut elimination. In fact, cut reduction corresponds to the composition of bounding functions, whereas the elimination of cuts of highest possible complexity corresponds to an effective transfinite iteration of the witness bounding function in question.
In this chapter we are going to project Howard's ordinal analysis of bar recursion of type zero  down to Gödel's T to obtain a constructive proof of strong normalization for T and to obtain, as a byproduct, a complete and optimal classification of the computational complexity of Gödel's T.
For this purpose we define by constructive means (which are somehow of lowest possible proof-theoretic complexity) a function [·]_0 : T → ω so that a reduces to b implies [a]_0 > [b]_0. Hence there will be no infinite sequences of rewritten terms, and the height of the reduction tree of a is bounded by [a]_0 for any a ∈ T.
The concrete definition of [·]_0 given in Section 2 looks involved at first sight. Therefore the reader may perhaps find some description of the corresponding phenomena occurring in impredicative proof theory informative.
First, we recall informally, and a little oversimplified, some important features of the Pohlers-style analysis of ID_1 (i.e. the theory of noniterated inductive definitions over the natural numbers) or KPω. His treatment is crucially based on a collapsing function D : ε_{Ω+1} → Ω and a corresponding relation ≪, the "essentially less than" relation, on ε_{Ω+1}. Here Ω is the first uncountable ordinal and ε_{Ω+1} is the next ε-number above Ω. The relation ≪ is defined so that α ≪ β implies α < β and Dα < Dβ. The definition of ≪ can informally be understood and realized as follows:

Axiom_Ω(≪): α ≪ β iff α < β and the countable ordinal content of α is controlled by the countable ordinal information contained in β.

For the proof-theoretic analysis the ordinal assignment is organized so that for the final cut free derivation of a Σ¹₁-formula under consideration the ordinals of the premises of a rule are always essentially less than the ordinal assigned to the conclusion of a rule in that derivation. After all we may apply the collapsing function D (which might be thought of as a proof-theoretic Mostowski collapse function) to the ordinal assignment of the cut free derivation, and we obtain the desired countable ordinal bound on the Σ¹₁-complexity of the ID_1-provable (or KPω-provable) Σ¹₁-formulas.
Now recall the miniaturized collapsing function ψ : ε_0 → ω and the corresponding miniaturized essentially less than relation ≪ from . Informally speaking we have (by miniaturizing Axiom_Ω(≪)):

Axiom_ω(≪): α ≪ β iff α < β and the finitary content of α (measured by the length of its term representation) is controlled by the finitary content of β.

The collapsing function ψ at argument α is then defined as the Mostowski collapse of the relation ≪ restricted to α.
By replacing in Pohlers' local predicativity treatment of ID_1 the ordinal Ω by ω, the collapsing function D : ε_{Ω+1} → Ω by ψ : ε_0 → ω, and ID_1 by PA (this is the point where the projecting or miniaturization process comes into play), we more or less automatically obtain appropriate bounds on the Π^0_2-complexities of the PA-provable Π^0_2-sentences. In fact, the bounds obtained were even optimal for the fragments IΣ_{n+1} of PA.
In a next step we recognized that a local predicativity style treatment of Gödel's T was possible. In Schütte's exposition (cf. ) of Howard's weak normalization proof for Gödel's T we found an assignment [·]_0 so that if a reduces to b via a certain reduction strategy then [a]_0 > [b]_0. By a routine inspection we found [b]_0 ≪ [a]_0, hence ψ[b]_0 < ψ[a]_0. As a corollary ψ yields <ε_0-recursive bounds on the T-definable number-theoretic functions. ψ also yields bounds for the derivation lengths of the restricted reduction relation.
For converting this argument into a strong normalization proof for general T-reductions some additional ideas had to be found. At a crucial point certain impredicativity phenomena were resolved in  by using an appropriate strong induction hypothesis. A detailed explanation of this idea is given in  and will not be recalled here.
We also have to deal with another problem which is related to the Howard-Schütte treatment of Gödel's T when one considers the fragments of T. (These fragments have not been considered in .) Let T_n be the subsystem of T in which all recursors have type level less than or equal to n + 2. Then the functions definable in T_n are <ω_{n+2}-recursive, and therefore the realm of the ordinals less than ω_{n+2} should somehow be sufficient for analyzing T_n. On the other side the Howard-Schütte assignment only allows a treatment of T_n within ω_{n+3}, and it is by no means obvious how to lower the ordinals appropriately. The strong normalization proof for T given in  uses ordinals up to ω_{n+3} in an essential way and therefore does not yield a solution of this problem.
To get an idea of how to overcome this difficulty (while staying on a firm basis) we go back to the corresponding phenomena in the ordinal analysis of the fragments of KPω treated in [35, 37]. The observation which is important for our purposes is that the cut-elimination result of 

(∗)  ⊢^α_{Ω+1} Γ ⟹ ⊢^{2^{Ω+α}}_{Ω} Γ

for Σ(Ω)-formulas Γ has been improved in  to

(∗∗)  ⊢^α_{Ω+1} Γ ⟹ ⊢^{Dα}_{Dα} Γ.
Thus the simultaneous use of collapsing and eliminating cuts yields the desired optimal ordinal bounds. (A treatment via Buchholz' Ω-rule  or via Buchholz' simplified version of local predicativity  would also be appropriate here.)
Now the original Howard-Schütte assignment

[a^{στ} b^σ]_0 := 2^{[a^{στ} b^σ]_1} ⊗ ([a^{στ}]_0 ⊕ [b^σ]_0)

(here ⊕ denotes the (commutative) natural sum of ordinals and ⊗ denotes the (commutative) natural product of ordinals) has some similarity with (∗) since no collapsing is involved and cut elimination corresponds to exponentiation with respect to base 2. Motivated by (∗∗), the modification of [·]_0 proposed here (which is the most important point in our exposition) involves simultaneously collapsing and lowering the height of the exponential tower in question. It is

[a^{στ} b^σ]_0 := ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ)

where gσ denotes the level of the type σ. (The inclusion of gσ is needed for technical reasons.)
Another motivation for the definition of [a^{στ} b^σ]_0 can be taken from a fine structure analysis of Howard's ordinal analysis of bar recursion of type zero (cf. ). This analogy is far from being easy to explain. Nevertheless we will try to indicate some crucial points since they fit nicely into our general program of applying transfinite methods to finitary mathematics. Howard's approach is based on the notion of a measure of a term for a given height. Apparently there should be a connection between this notion and the definition of [·]_i, but exactly this has to be explained in more detail. Well, for heights larger than 3 the measure is defined via the function λb, d ↦ 2^b · d. This is closely related to the definition of [ab]_i for i ≥ 2 which also involves exponentiation. For height 3 Howard uses the function λb, d ↦ b + d, whereas we again use the exponential function in the definition of [ab]_1. This difference can be explained by the difference in the nesting of recursion of the involved collapsing functions. Howard's function ψ directly corresponds to the Aczel-Buchholz-Feferman function θ, which involves nested ordinal recursion. Thus Howard's ψ differs from the Buchholz function ψ_0 (cf. ), which only involves unnested ordinal recursion. Roughly speaking, λb, d ↦ θ_b d corresponds in growth rate to λb, d ↦ ψ_0(ω^{Ω·b} + d). [This is reminiscent of the correspondence between the fast growing and the Hardy hierarchy.] For carrying through Howard's treatment in terms of the function ψ_0 one has to replace the function λb, d ↦ b + d by λb, d ↦ 2^b · d in the definition of measure for height 3. This is mirrored by the technical observation that in Howard's proof of Lemma 2.1 (cf. ) the argument on p. 112, l. 6-7 only works for θ but not for ψ_0. That argument only works for ψ_0 when λb, d ↦ b + d is replaced by λb, d ↦ 2^b · d. With this change in mind one can define measure for height 2 in terms of λb, d ↦ ψ_0(Ω · b + d).
Summing up, the analogy is then as follows. For height larger than 2 we may define measure in terms of the exponential function, and the same is true for the definition of [·]_i for i ≥ 1. For height 2 we can define measure in terms of ψ_0 (a function which involves only unnested recursion), and in this chapter we define [·]_0 in terms of a function ψ which is also defined by unnested recursion. In both cases optimal results are obtained.
The miniaturization process of replacing Ω by ω and D : ε_{Ω+1} → Ω by ψ : ε_0 → ω is reminiscent of the collapsing process relating the slow and the fast growing hierarchy in Girard's hierarchy comparison theorem . In our setting this analogy can be found again by utilizing a finitary approximation approach to termination orderings (used in rewriting theory) due to Buchholz . That approach has been extended and applied to proof and recursion theory in [2, 49].
By utilizing these results we can obtain again several applications of impredicative proof theory to questions of finitary proof theory in the slow growing spirit. Instead of replacing Ω by ω and D by ψ, let us replace only the underlying ordering < on the ordinals in question by the first component <_0 of its Buchholz approximation system. In this way we obtain a complete slow growing analysis of T by way of miniaturization. (A somewhat related analysis has been given recently by Arai in .) Although this analysis is technically involved, we shall try to indicate a brief outline of the argument. In a first step we replace, as proposed above, in the definition of [·]_i the ordinal ω by Ω and ψ by the Buchholz function ψ_0 from . Then it can be verified that the definition [a^{στ} b^σ]_0 := ψ_0(Ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ) is well-defined since
the ╚0-term on the right side is in normal form. The proof given in this chap-
1
ter yields that a├ b├ implies [a├]0 > [b├]0 and hence the ╚0-based ordinal
notation system T╚ for the Howard-Bachmann ordinal yields a proof of strong
0
normalization for T . The second step consists in verifying the pointwiseness
of this proof. For this purpose one can adopt the definition of or  to the ordering on T╚ . Here one demands in the definition of 0
─ģ 1
of orderings one can verify that a├ b├ implies [a├]0 >0 [b├]0. By a direct
calculation or by utilizing the translation (which is compatible with the concept
of 0
Mostowski collapses along the 0
appropriate complexity bounds for the derivation lengths in question. In fact,
for given ─ģ let G─ģ(n) be the Mostowski collapse on we have the following relationships:
(i) (λn.ψ(ω^k + n))_{k<ω} as well as (λn.G_{ψ_0(Ω^k)}(n))_{k<ω} classify the primitive recursive functions.

(ii) (λn.ψ(ω^{ω^k} + n))_{k<ω} as well as (λn.G_{ψ_0(Ω^{Ω^k})}(n))_{k<ω} classify the multiple recursive functions.

(iii) (λn.ψ(ω_l(k) + n))_{k<ω} as well as (λn.G_{ψ_0(Ω_l(k))}(n))_{k<ω} classify the <ω_l(ω)-recursive functions if l ≥ 1. Here ω_0(k) := k, Ω_0(k) := k, ω_{n+1}(k) := ω^{ω_n(k)} and Ω_{n+1}(k) := Ω^{Ω_n(k)}.

For more details on pointwise transfinite induction we refer the reader to recent results of Arai (cf. [2, 3]).
The next step in the miniaturization program suggested in this chapter is to study the rôle of infinity in the analysis of T and PA. From the viewpoint of predicative and bounded arithmetic, Gödel's T as well as PA have a high degree of impredicativity. So the question arises whether the analysis of T or PA can be projected down to the realm of predicative arithmetic, bounded arithmetic and small classes of computational complexity.
Let us again look at the strong normalization proof for T given here. Transfinite ordinals come into play for dealing with unrestricted (impredicative) recursors and terms of the form (ab). But note that [·]_0 does not intrinsically assign transfinite ordinals to restricted recursor occurrences of the form (R^ρ t^0).
What happens with T if we drop the impredicative unrestricted occurrences of recursors? Remember that the classical Hardy functions are defined as follows:

(i) H_0(x) := x.
(ii) H_{α+1}(x) := H_α(x + 1).
(iii) H_λ(x) := H_{λ[x]}(x) where λ is a limit and (λ[x])_{x<ω} is the canonical fundamental sequence converging to λ.

Recall that ω[x] = x and (ω^α + λ)[x] = ω^α + λ[x] where ω^α + λ > λ. Then H_{ω·k}(x) = 2^k · x. From  we know that ψ(ω · k + n) directly corresponds to H_{ω·k}(n) = 2^k · n. Thus for finite ordinals the definition of [ab]_0 given here essentially agrees with the definition of Howard and Schütte; hence there is no genuine infinity involved in the definition of [ab]_0.
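The Hardy functions below ω² can be run directly; the following sketch (with our coding of ω·a + b as the pair (a, b)) checks the identity H_{ω·k}(x) = 2^k · x stated above.

```python
def hardy(a, b, x):
    """H_{w*a+b}(x) with the canonical fundamental sequence (w*(a+1))[x] = w*a + x."""
    while True:
        if b > 0:            # successor step: H_{alpha+1}(x) = H_alpha(x + 1)
            b, x = b - 1, x + 1
        elif a > 0:          # limit step: H_{w*a}(x) = H_{(w*a)[x]}(x)
            a, b = a - 1, x
        else:                # H_0(x) = x
            return x

for k in range(6):
    for x in range(6):
        assert hardy(k, 0, x) == 2 ** k * x
```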
Let T^pred be the restriction of T in which only predicatively restricted occurrences of recursors are allowed. Then the restriction of [·]_0 to T^pred can be defined in terms of elementary functions. Indeed, in  it has been shown that T^pred exactly represents the elementary recursive functions. Thus T^pred is equivalent to Leivant's system of predicative recurrence in higher types . By inspection, the derivation lengths for T^pred (hence also the derivation lengths for the simply typed λ-calculus) are bounded by an E_4-function.
3.2 Benefits from the achieved results

There is a lot of information on Gödel's T reported in the literature (cf., for example, [4, 42, 45]). A recent overview can be found, for example, in . In this subsection we will explain, without going too much into details, how most of these results can be drawn as corollaries from the main theorem, and we will indicate what has been achieved additionally here. By constructive means we will define [·]_0 : T → ω so that a reduces to b implies that [a]_0 > [b]_0. Obviously an infinite sequence of rewritten terms would produce an infinite descending chain of natural numbers, which of course is impossible. Thus [·]_0 witnesses the strong normalization of T. In particular, given any term a there exists at least one terminating sequence of rewritten terms starting with a; hence we have also achieved the weak normalization of T.
For a given natural number n let n̄ be the numeral representing n. For a closed term a of type 0 the value, val(a), is the unique natural number k so that a eventually rewrites into k̄. In T a given term a of type 1 then represents the function mapping n to val(a n̄). Assume that we have designed [·]_0 so that [n̄]_0 = n for any n < ω. Consider any reduction sequence

a n̄ →¹ · · · →¹ k̄

where k = val(a n̄). Then

[a n̄]_0 > · · · > [k̄]_0 = k = val(a n̄).

We will verify that there is a fixed ordinal α < ε_0, only depending on a, and an α-recursive function f : ω → ω so that [a n̄]_0 ≤ f(n) for all n < ω. Thus the function n ↦ val(a n̄) is bounded by the α-recursive function f. Moreover, the number of steps for computing the value of a n̄ by some reasonable reduction strategy is bounded in terms of f(n). Hence n ↦ val(a n̄) is elementary recursive in f (cf. ).
Let Der_a(n) be the height of the reduction tree for a at input n, i.e. the maximal length of sequences of rewritten terms starting with a n̄. Then Der_a(n) ≤ f(n), so that every reasonable rewriting strategy for rewriting a n̄ will yield an algorithm terminating (essentially) in f(n) steps.
A similar complexity consideration can be carried out for general terms. Let Der_T(n), the derivation lengths function for T, be the maximal height of reduction trees for terms of depths bounded by n. The function Der_T is then classified as follows. By construction n ↦ max{[a]_0 : depth(a) ≤ n} is ε_0-recursive, thus so is Der_T.
Let T_n be the subsystem of Gödel's T in which all recursors have type level less than or equal to n + 2. Then the restriction [·]^n_0 of [·]_0 to T_n witnesses the strong normalization for T_n. Our proof shows that only unnested recursions along ordinals less than ω_{n+2} are needed for defining [·]^n_0. (The Howard-Schütte assignment [24, 39] only works for ordinals less than ω_{n+3}, which is not appropriate for our purposes.) Thus the derivation lengths function for any type one functional a ∈ T_n is <ω_{n+2}-descent recursive; hence (as for example proved in ) the T_n-definable functions are <ω_{n+2}-descent recursive. Moreover the derivation lengths function for T_n is ω_{n+2}-descent recursive. Since T_n can be employed for a functional interpretation of IΣ_{n+1} and related systems, our results can be used for simplifying several proofs given by Parsons in his paper .
The main theorem of this chapter also yields that T_0 defines only primitive recursive functions and that the derivation lengths function for any type one functional of T_0 is primitive recursive. In particular, we reobtain from our approach several nontrivial closure properties of the primitive recursive functions proved by Parsons in . This answers a question posed by Feferman in 1995 (private communication). We also obtain that the derivation lengths function for T_0 is elementary recursive in the Ackermann function. Similarly the computational complexity of T_1 can be classified in terms of multiple recursion.
By an adaptation of the methods of this chapter it is even possible to give a full derivation lengths classification for the extensions of T based on recursions along primitive recursive well-orderings.
We also can draw some nice (but well-known) corollaries about metamathematics from the results of this chapter. (See, for example, Takeuti's book on Proof Theory .) They follow from our finitistic treatment of Gödel's T without unwinding finite terms into transfinite ones.
Given the standard representation for ε_0 and a code for some α < ε_0, we denote by PRWO(α) the statement: "There does not exist a primitive recursive strictly descending chain of ordinals below α." A closer inspection of the definition of the function ψ when its domain is restricted to α shows that it can be carried out in PRA + PRWO(α). Thus the definition of [·]_0 can be carried out in PRA + PRWO(ε_0), and the definition of [·]_0 when restricted to T_n can be carried out in PRA + PRWO(ω_{n+2}). Hence the usual functional interpretation of PA can be carried out in PRA + PRWO(ε_0), and the functional interpretation of IΣ_{n+1} can be carried out in PRA + PRWO(ω_{n+2}). This shows PRA + PRWO(ε_0) ⊢ Π^0_2-Refl(PA) and PRA + PRWO(ω_{n+2}) ⊢ Π^0_2-Refl(IΣ_{n+1}), hence PRA + PRWO(ε_0) ⊢ Con(PA) and PRA + PRWO(ω_{n+2}) ⊢ Con(IΣ_{n+1}). These nice results are folklore. (See, for example, Takeuti's book on Proof Theory  for a proof. See also [12, 19].) As shown in Subsection 2.4, the proof-theoretic applications of the results of this chapter can be extended to arbitrary primitive recursive well-orderings ≺ of ε-order type for showing:

PRA + PRWO(≺) ⊢ Π^0_2-Refl(PA + TI(≺↾)).

(Here TI(≺↾) denotes the scheme of transfinite induction along the initial segments of ≺.) Hence, any number-theoretic function provably recursive in PA + TI(≺↾) is definable by an unnested recursion along a strict initial segment of ≺.
3.3 Connections to term rewriting theory

This chapter is intended to provide powerful methods for proving termination of term rewriting systems and higher order term rewriting systems. The definition of [·]_0 is rather flexible and can easily be modified to show that T or one of its subsystems T_n is closed under various strong recursion principles. Note that proving termination via variants of [·]_0 automatically yields bounds on the resulting computational complexities.
A general principle about an intrinsic relationship between "reasonable" termination orderings ≺ and the computational complexity of rewrite systems for which ≺ proves termination has been proposed by Cichon. His principle states that this relationship is established via the slow growing hierarchy along the order type of ≺. This principle, although the original exposition of it was erroneous, turned out to be a very fruitful idea, and it has been verified for a large number of examples [5, 6, 8, 15, 22, 23, 28, 50, 49, 48]. We have no doubt that our results together with the outline given above yield that the slow growing hierarchy up to ψ_0(ε_{Ω+1}) classifies exactly the computational complexity of T. The restrictions of T are classified with the corresponding fragments of the slow growing hierarchy. Thus Cichon's principle has also been established for T when one considers the ordering on T_{ψ_0} as a "reasonable" termination ordering for T. (At present it does not seem possible to us to give a rigorous definition of the notion "reasonable termination ordering". But as reasonable we consider at least orderings on ordinal notation systems which fit into the concept of Buchholz approximations.)
After having seen some applications to advanced topics of term rewriting we
now look for applications of term rewriting to recursion theory. Assume that we are interested in an equationally defined complexity class C of number-theoretic functions, like the elementary recursive functions, the primitive recursive functions, or the multiple recursive functions. Then it is often possible to find canonically a term rewriting system R_C which exactly represents the members of C. In nonpathological cases R_C is confluent and strongly normalizing. A successful derivation lengths classification for R_C then yields intrinsic information on the computational complexity of the members of C.
It is often possible to prove some nontrivial closure properties for C by solving simple inequations via monotone interpretations for the rewrite rules. For example, in  the interested reader may find some few-line proofs showing that the primitive recursive functions are closed under primitive recursion with parameter substitution, simple nested recursion and unnested multiple recursion.
Implicitly this type of approach has been used also in this chapter. Let C_T be the class of number-theoretic functions definable in T. The recursion-theoretic classification of C_T falls out as a corollary from the derivation lengths classification of the corresponding rewrite system R_{C_T}, which of course is T itself. We think that our classification of C_T is as short, simple and direct as possible.
In addition, it seems to us that the trade-off theorem mentioned in  can also be obtained directly from our approach.
3.4 Technical summary

In the first subsection of Section 2 we recall, following Schütte's exposition , the definition of Gödel's T. Then we carry out in full detail the modified definition of the Howard-Schütte assignment of numbers to terms of T. In Subsection 2.2 we prove in detail (by constructive means, i.e. the argument is formalizable in PRA + PRWO(ε_0) by intuitionistic reasoning) that a rewrites to b implies that the natural number assigned to a is larger than the natural number assigned to b. By the very definition of the assignment we can read off the desired results immediately. In Subsections 2.3 and 2.4 we indicate how our approach can be extended to T in the λ-formulation and to T enlarged by transfinite recursors. Apart from knowledge of chapter VI of Schütte's exposition , the chapter is entirely self-contained, at least from a technical point of view.
3.5 Basic definitions

We follow essentially the notation of chapter VI of . We suppose for simplicity that all terms are closed. The successor functional is denoted by SUCC. The n-th numeral is denoted by n̄. For dealing with the fragments of T (which are appropriate for functional interpretations of the fragments of PA) we replace for any type τ the iterator I^τ by the recursor R^τ of type 0(0ττ)ττ. According to Parsons' exposition  we also add for any type τ a case distinction functional D^τ of type 0τττ. For allowing general unrestricted reductions we modify the definition of the one step reduction →¹ as follows.

Definition 3.1 →¹ is the least binary relation on terms so that

(i) K^{στ} a^τ b^σ →¹ a^τ.
(ii) S^{ρστ} a^{ρστ} b^{ρσ} c^ρ →¹ a^{ρστ} c^ρ (b^{ρσ} c^ρ).
(iii) D^τ 0 a^τ b^τ →¹ a^τ.
(iv) D^τ (SUCC t^0) a^τ b^τ →¹ b^τ.
(v) R^τ 0 a^{0ττ} b^τ →¹ b^τ.
(vi) R^τ (SUCC t^0) a^{0ττ} b^τ →¹ a^{0ττ} t^0 (R^τ t^0 a^{0ττ} b^τ).
(vii) If b^σ →¹ c^σ then a^{στ} b^σ →¹ a^{στ} c^σ.
(viii) If a^{στ} →¹ b^{στ} then a^{στ} c^σ →¹ b^{στ} c^σ.
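The clauses above can be animated by a small untyped evaluator; types are dropped and the tuple encoding is ours, so this is only an illustrative sketch of the one step reduction, not the typed calculus itself.

```python
def app(*ts):
    """Left-associated application: app(f, a, b) codes (f a) b."""
    t = ts[0]
    for u in ts[1:]:
        t = ('app', t, u)
    return t

def step(t):
    """One reduction step according to Definition 3.1, or None if t is normal."""
    if isinstance(t, str):
        return None
    f, x = t[1], t[2]                                  # t = f x
    if isinstance(f, tuple):
        g, y = f[1], f[2]                              # t = g y x
        if g == 'K':
            return y                                   # (i)   K a b -> a
        if isinstance(g, tuple):
            h, z = g[1], g[2]                          # t = h z y x
            if h == 'S':
                return app(z, x, app(y, x))            # (ii)  S a b c -> a c (b c)
            if h == 'D' and z == '0':
                return y                               # (iii) D 0 a b -> a
            if h == 'D' and isinstance(z, tuple) and z[1] == 'SUCC':
                return x                               # (iv)  D (SUCC t) a b -> b
            if h == 'R' and z == '0':
                return x                               # (v)   R 0 a b -> b
            if h == 'R' and isinstance(z, tuple) and z[1] == 'SUCC':
                m = z[2]                               # (vi)  R (SUCC t) a b -> a t (R t a b)
                return app(y, m, app('R', m, y, x))
    for i in (1, 2):                                   # (vii)/(viii): reduce inside a subterm
        s = step(t[i])
        if s is not None:
            return ('app', s, t[2]) if i == 1 else ('app', t[1], s)
    return None

def normalize(t):
    while (s := step(t)) is not None:
        t = s
    return t

def num(n):
    t = '0'
    for _ in range(n):
        t = app('SUCC', t)
    return t

# R n (K SUCC) m unfolds by clause (vi) to SUCC^n applied to m, computing m + n:
assert normalize(app('R', num(3), app('K', 'SUCC'), num(4))) == num(7)
```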
Definition 3.2 For α < ε_0 we define no(α) < ω as follows.

(i) no(0) := 0.
(ii) no(α) := n + no(α_1) + · · · + no(α_n) if α = ω^{α_1} + · · · + ω^{α_n} > α_1 ≥ . . . ≥ α_n.
The natural sum is denoted by ⊕. The natural product is denoted by ⊗. For the reader's convenience we recall the definitions. Let α ⊕ 0 := 0 ⊕ α := α and α ⊗ 0 := 0 ⊗ α := 0. Now suppose α = ω^{δ_0} + · · · + ω^{δ_n} > δ_0 ≥ . . . ≥ δ_n, n ≥ 0, and β = ω^{δ_{n+1}} + · · · + ω^{δ_m} > δ_{n+1} ≥ . . . ≥ δ_m, m ≥ n + 1. Then

α ⊕ β := ω^{δ_{π(0)}} + · · · + ω^{δ_{π(m)}},

where π is a permutation of {0, . . . , m} such that δ_{π(i)} ≥ δ_{π(j)} if i ≤ j. Further define α ⊗ β to be

(ω^{δ_0 ⊕ δ_{n+1}} ⊕ · · · ⊕ ω^{δ_0 ⊕ δ_m}) ⊕ · · · ⊕ (ω^{δ_n ⊕ δ_{n+1}} ⊕ · · · ⊕ ω^{δ_n ⊕ δ_m}).
Proposition 3.1 (i) no(α ⊕ β) = no(α) + no(β).
(ii) no(α) + no(β) − 1 ≤ no(α ⊗ β) ≤ no(α) · no(β) if α ≠ 0 ≠ β.
(iii) no(α) ≤ 2 · no(2^α).
(iv) no(2^α) ≤ 2^{no(α)}.
The following definition is taken from Weiermann (1996) . Let σ(x) := F_5(x + 100) be a reasonably fast growing primitive recursive function. Here F_0(x) := 2x and F_{n+1}(x) := F_n^{x+1}(x). As expected, the functions F_n satisfy some standard majorization properties like F_n(x) < F_n(x + 1). In the proof of Proposition 3.3 we mention explicitly some further majorization properties of the functions F_n which are needed during that proof.
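The hierarchy F_n can be transcribed directly; only very small inputs are feasible, since the actual σ(x) = F_5(x + 100) is astronomically large.

```python
def F(n, x):
    if n == 0:
        return 2 * x
    for _ in range(x + 1):      # F_{n+1}(x) = F_n^{x+1}(x): iterate x + 1 times
        x = F(n - 1, x)
    return x

assert F(0, 5) == 10
assert F(1, 3) == 2 ** 4 * 3    # F_1(x) = 2^{x+1} * x
assert F(2, 1) == 128
```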
Definition 3.3 Recursive definition of ψα < ω for α < ε_0:

ψα := max({0} ∪ {ψβ + 1 : β < α & no(β) ≤ σ(no(α))}).

It follows from  or  that this definition can be carried out in PRA + PRWO(ε_0).
Proposition 3.2 (i) k = ψk.
(ii) k ≤ ψ(α + k).
(iii) no(α) ≤ ψα.
(iv) ψ(β + 1) ≥ 1 + ψβ.
(v) ψ(α ⊕ ψ(β)) ≤ ψ(α ⊕ β).
(vi) α < β & no(α) ≤ σ(no(β)) ⟹ ψ(α) < ψ(β).
(vii) α ≥ ω ⟹ ψα ≥ σ(no(α)).
Proof. Assertions 1, 2, 3, 4 and 6 follow from the definition of ψ.
Assertion 5 is proved by induction on β. If α = 0 then assertion 1 yields ψ(α ⊕ ψβ) = ψ(ψβ) = ψβ = ψ(α ⊕ β). Assume that α > 0. Then no(α) > 0. If β = 0 then ψ(α ⊕ ψβ) = ψα = ψ(α ⊕ β). Assume that β > 0. Then ψβ = ψγ + 1 for some γ < β such that no(γ) ≤ σ(no(β)). The induction hypothesis yields ψ(α ⊕ ψβ) = ψ(α ⊕ ψγ + 1) = ψ(α ⊕ 1 ⊕ ψγ) ≤ ψ(α ⊕ 1 ⊕ γ). If γ + 1 = β then ψ(α ⊕ 1 ⊕ γ) = ψ(α ⊕ β). If γ + 1 < β then no(α ⊕ 1 ⊕ γ) ≤ no(α) + 1 + no(γ) ≤ no(α) + 1 + σ(no(β)) ≤ σ(no(α ⊕ β)). Hence, by definition, ψ(α ⊕ 1 ⊕ γ) < ψ(α ⊕ β) since α ⊕ 1 ⊕ γ < α ⊕ β.
For the proof of assertion 7 note that for α ≥ ω we have σ(no(α)) < α and no(σ(no(α))) ≤ σ(no(α)).
We now refine Schütte's exposition  of Howard's treatment of Gödel's T.

Definition 3.4 Recursive definition of gτ:

(i) g0 := 0.
(ii) g(στ) := max{gσ + 1, gτ}.
Definition 3.5 (i) [0]_i := 0.
(ii) [SUCC]_0 := 1.
(iii) [SUCC]_i := 0 if i > 0.
(iv) [a^τ]_i := 1 if i ≤ gτ and a^τ is a combinator.
(v) [a^τ]_i := 0 if i > gτ and a^τ is a combinator.
(vi) [R^ρ]_i := 1 if i ≤ gρ + 1.
(vii) [R^ρ]_i := ω if i = gρ + 2.
(viii) [R^ρ]_i := 0 if i > gρ + 2.
(ix) [R^ρ t^0]_0 := [t^0]_0 + 1.
(x) [R^ρ t^0]_i := 1 if 1 ≤ i ≤ gρ + 1.
(xi) [R^ρ t^0]_i := [t^0]_0 + 1 if i = gρ + 2.
(xii) [R^ρ t^0]_i := 0 if i > gρ + 2.
(xiii) [a^{στ} b^σ]_i := 2^{[a^{στ} b^σ]_{i+1}} ⊗ ([a^{στ}]_i ⊕ [b^σ]_i) if 1 ≤ i ≤ gσ and a^{στ} is not a recursor.
(xiv) [a^{στ} b^σ]_i := [a^{στ}]_i if i > gσ and a^{στ} is not a recursor.
(xv) [a^{στ} b^σ]_0 := ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ) if a^{στ} is not a recursor.
The definition of [a^{στ} b^σ]_0 has been extracted by a fine structure analysis of Howard's treatment of bar recursion of type zero . Our final goal is to prove

a →¹ b ⟹ [a]_0 > [b]_0.

Corollary 3.1 (i) (∀a^τ)(∃k ∈ ω)(∀i > k) [a^τ]_i = 0.
(ii) [a^τ]_0 < ω.
(iii) [a^τ]_i < ε_0.
(iv) i < gτ ⟹ [a^τ]_i > 0.

Proof. Assertions 1, 2 and 3 are easy to see. Assertion 4 follows by a straightforward induction on the length of a^τ. Note that if a^τ is a constant then [a^τ]_i > 0 holds for i < gτ.

Lemma 3.1 (i) [n̄]_0 = n.
(ii) i ≥ 1 ⟹ [n̄]_i = 0.
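Lemma 3.1 can be checked mechanically: for numerals only clauses (i), (xiv) and (xv) of Definition 3.5 apply, all ordinals involved are finite, ψ is the identity on them by Proposition 3.2 (i), and ω ⊗ 0 = 0. The encoding below is ours.

```python
def val(t, i):
    """[t]_i for a numeral t, coded as '0' or ('SUCC', u)."""
    if t == '0':
        return 0                  # clause (i): [0]_i = 0
    u = t[1]
    if i >= 1:
        return 0                  # clause (xiv): [SUCC u]_i = [SUCC]_i = 0 for i >= 1
    # clause (xv) with g0 = 0: since [SUCC u]_1 = 0 the term w (x) [SUCC u]_1
    # vanishes, and psi(1 (+) [u]_0) = 1 + [u]_0 by Proposition 3.2 (i):
    return 1 + val(u, 0)

def numeral(n):
    t = '0'
    for _ in range(n):
        t = ('SUCC', t)
    return t

for n in range(8):
    assert val(numeral(n), 0) == n    # Lemma 3.1 (i)
    assert val(numeral(n), 1) == 0    # Lemma 3.1 (ii)
```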
Lemma 3.2 (i) a^{στ} b^σ ≠ R^ρ t^0 ⟹ [a^{στ}]_i ≤ [a^{στ} b^σ]_i.
(ii) i ≤ gσ ⟹ [b^σ]_i ≤ [a^{στ} b^σ]_i.

Proof. 1. This follows immediately from the definition (cf. ).
2. If a^{στ} b^σ ≠ R^ρ t^0 then [a^{στ} b^σ]_0 = ψ(ω ⊗ [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + gσ) ≥ [b^σ]_0 by Proposition 3.2. If i ≥ 1 then the assertion holds by definition. If a^{στ} b^σ = R^ρ t^0 then gσ = 0 and [R^ρ t^0]_0 > [t^0]_0 holds by definition.
├─
Lemma 3.3 (i) [a├─ b├]i d" 2[a b├]i+1 " ([a├─ ]i Ģ" [b├]i) if 1 d" i d" g├.
(ii) [a├─ b├]i d" [a├─ ]i if i >g├.
Proof. If a├─ b├ = R┴t0 then equalities hold by definition. Assume that a├─ b├ =

R┴t0. Then ├ = 0 and g├ = 0. Suppose that 1 d" i d" g┴ +1. Then
[R┴t0]i =1 =[R┴]i. Suppose i = g┴ +2. Then [R┴t0]i =[t0]0 +1<╔=[R┴]i.
Suppose i >g┴ +2. Then [R┴t0]i =0 =[R┴]i.
The following technical lemma is of crucial importance for the proofs in the next subsection. Thus, following a suggestion of the referee, we include its proof in full detail. Intuitively the lemma can be explained as follows. Since [a^{στ} b^σ]_i occurs explicitly in the construction of [a^{στ} b^σ]_0, the number-theoretic content of [a^{στ} b^σ]_i, when measured by its norm, is bounded in simple terms of the number-theoretic content of [a^{στ} b^σ]_0. Furthermore, since [a^{στ} b^σ]_i is defined by an explicit construction involving [a^{στ}]_i, [a^{στ}]_0, [b^σ]_i, [b^σ]_0 and g_σ, the number-theoretic content of [a^{στ} b^σ]_i, when measured by its norm, is bounded by the number-theoretic content of [a^{στ}]_0, [b^σ]_0 and g_σ. The term g_σ is used here since it bounds the height of the exponential construction in the definition of [a^{στ} b^σ]_i. In a similar vein one would like to bound the number-theoretic content of [a^{ρστ}(b^{ρσ} c^ρ)]_i in simple terms of [a^{ρστ}]_0, [b^{ρσ}]_0, [c^ρ]_0, g_ρ and g_σ, but note that this only works if a^{ρστ} is not a recursor.
Proposition 3.3 (i) If a^σ is not a recursor then no([a^σ]_i) ≤ [a^σ]_0, and if a^σ is a recursor then no([a^σ]_i) ≤ 2 ≤ no([a^σ]_0) + 1.
(ii) If i ≥ 1 then no([a^{στ} b^σ]_i) ≤ F_2([a^{στ}]_0 + [b^σ]_0 + g_σ).
(iii) If i ≥ 1 then no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) ≤ F_3([a^{ρστ}]_0 + [b^ρ]_0 + [c^{δσ}]_0 + [d^δ]_0 + g_ρ + 2·g_σ + g_δ + 2).
Proof. Ad 1. By induction on the length of a^σ. Let a_i := [a^σ]_i. If a^σ is a constant different from R_τ then [a^σ]_i ≤ [a^σ]_0. For R_τ one has by definition no([R_τ]_i) ≤ no(ω) = 2 ≤ no([R_τ]_0) + 1.
Now assume that a^σ = b^{ρσ} c^ρ. Let b_i := [b^{ρσ}]_i and c_i := [c^ρ]_i. If b^{ρσ} is a recursor then a_i ≤ c_0 + 1 = a_0. Assume that b^{ρσ} is not a recursor. We may assume i > 0.
Case 1. i > g_ρ.
Then the induction hypothesis and assertion 2 of Proposition 2 yield
no(a_i) = no(b_i) ≤ b_0 ≤ θ(ω · a_1 ⊕ b_0 ⊕ c_0 + g_ρ) = a_0.
Case 2. i ≤ g_ρ.
Assertion 4 of Corollary 1 yields a_1 = 2^{a_2} · (b_1 ⊕ c_1) ≥ b_1 > 0. Assertion 7 of Proposition 2 then yields a_0 = θ(ω · a_1 ⊕ b_0 ⊕ c_0 + g_ρ) ≥ ϑ(no(ω · a_1 ⊕ b_0 ⊕ c_0 + g_ρ)) > 2^{g_ρ} · no(a_1). It hence suffices to prove no(a_{j+1}) ≤ 2^j · no(a_1) for 1 ≤ j < i.
Ad 2. Let a_i = [a^{στ}]_i and b_i = [b^σ]_i. Assume first that a^{στ} is a recursor. Then g_σ = 0 and no([a^{στ} b^σ]_i) ≤ b_0 + 1 ≤ F_2(no(a_0) + no(b_0)). Now assume that a^{στ} is not a recursor. We claim
(∗) no([a^{στ} b^σ]_i) ≤ F_0^{2·(g_σ+1−i)}(a_0 + b_0 + 2)
for 1 ≤ i ≤ g_σ. We prove (∗) by induction on g_σ − i.
Case 1. i = g_σ.
Then Proposition 1 and the just proved assertion 1 yield no([a^{στ} b^σ]_i) = no(2^{a_{i+1}} · (a_i ⊕ b_i)) ≤ 2^{no(a_{i+1})} · (no(a_i) + no(b_i)) ≤ 2^{a_0+1} · (a_0 + b_0 + 2) ≤ F_0^2(a_0 + b_0 + 2).
Case 2. 1 ≤ i < g_σ.
Then the induction hypothesis, Proposition 1 and the just proved assertion 1 yield no([a^{στ} b^σ]_i) = no(2^{[a^{στ} b^σ]_{i+1}} · (a_i ⊕ b_i)) ≤ 2^{F_0^{2·(g_σ+1−i)−2}(a_0+b_0+2)} · (a_0 + b_0 + 2) = F_0^{2·(g_σ+1−i)−1}(a_0 + b_0 + 2) · (a_0 + b_0 + 2) ≤ F_0^{2·(g_σ+1−i)}(a_0 + b_0 + 2).
Hence (∗) is proved and we obtain no([a^{στ} b^σ]_i) ≤ F_0^{2·(g_σ+1−i)}(a_0 + b_0 + 2) ≤ F_2(a_0 + b_0 + g_σ) for 1 ≤ i ≤ g_σ. The proof of assertion 2 is now completed by observing that no([a^{στ} b^σ]_i) = no(a_i) ≤ a_0 + 1 for i > g_σ.
In the proof of this assertion we have used implicitly the following properties of the functions F_n which are easily proved: F_0(F_0(x)) · (x + x + 2) ≤ F_0(F_0(x + 1)), F_0^{y+2}(x) · x ≤ F_0^{y+3}(x), and F_0^{2·y}(x) ≤ F_2(z + x + 1) for y ≤ z.
Ad 3. Let a_i = [a^{ρστ}]_i, b_i = [b^ρ]_i, c_i = [c^{δσ}]_i and d_i = [d^δ]_i. We claim
(∗∗) no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) ≤ F_2^{2·(g_σ+1−i)}(a_0 + b_0 + c_0 + d_0 + g_ρ + g_δ + 2)
for 1 ≤ i ≤ g_σ. We prove (∗∗) by induction on g_σ − i.
Case 1. i = g_σ. Then Proposition 1 and the just proved assertion 2 yield
no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) = no(2^{[a^{ρστ} b^ρ]_{i+1}} · ([a^{ρστ} b^ρ]_i ⊕ [c^{δσ} d^δ]_i)) ≤
2^{no([a^{ρστ} b^ρ]_{i+1})} · (no([a^{ρστ} b^ρ]_i) + no([c^{δσ} d^δ]_i)) ≤
2^{F_2(a_0 + b_0 + g_ρ)} · (F_2(a_0 + b_0 + g_ρ) + F_2(c_0 + d_0 + g_δ)) ≤
F_2^2(a_0 + b_0 + c_0 + d_0 + g_ρ + g_δ + 2).
Case 2. 1 ≤ i < g_σ. Then the induction hypothesis, Proposition 1 and the just proved assertion 2 yield no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) =
no(2^{[a^{ρστ} b^ρ (c^{δσ} d^δ)]_{i+1}} · ([a^{ρστ} b^ρ]_i ⊕ [c^{δσ} d^δ]_i)) ≤
2^{no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_{i+1})} · (no([a^{ρστ} b^ρ]_i) + no([c^{δσ} d^δ]_i)) ≤
2^{F_2^{2·(g_σ+1−i)−2}(a_0 + b_0 + c_0 + d_0 + g_ρ + g_δ + 2)} · (F_2(a_0 + b_0 + g_ρ) + F_2(c_0 + d_0 + g_δ)) ≤
F_2^{2·(g_σ+1−i)}(a_0 + b_0 + c_0 + d_0 + g_ρ + g_δ + 2).
Assertion (∗∗) then yields
no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) ≤ F_2^{2·(g_σ+1−i)}(a_0 + b_0 + c_0 + d_0 + g_ρ + g_δ + 2) ≤ F_3(a_0 + b_0 + c_0 + d_0 + g_ρ + 2·g_σ + g_δ + 2) for 1 ≤ i ≤ g_σ. For i > g_σ we obtain no([a^{ρστ} b^ρ (c^{δσ} d^δ)]_i) ≤ no([a^{ρστ} b^ρ]_i) ≤ F_2(a_0 + b_0 + g_ρ) ≤ F_3(a_0 + b_0 + c_0 + d_0 + g_ρ + 2·g_σ + g_δ + 2).
In the proof of this assertion we have used implicitly the following properties of the functions F_n which are easily proved: 2 · F_2(x) ≤ F_2(x + 1), F_2^y(x) ≤ F_2^{y+1}(x) and F_2(x) + 1 ≤ F_2(x + 1).

3.6 The constructive strong normalization proof for T
Definition 3.6 [a^τ] < [b^τ] :⇔ [a^τ]_0 < [b^τ]_0 & (∀i ≤ g_τ) [a^τ]_i ≤ [b^τ]_i.
Lemma 3.4 [a^τ] < [K_{στ} a^τ b^σ].
Proof. As in  we see [a^τ]_i ≤ [K_{στ} a^τ b^σ]_i for all i ≥ 1 by using Lemma 2. For i = 0 we obtain
[K_{στ} a^τ b^σ]_0 = θ(ω · [K_{στ} a^τ b^σ]_1 ⊕ [K_{στ} a^τ]_0 ⊕ [b^σ]_0 + g_σ) >
[K_{στ} a^τ]_0 = θ(ω · [K_{στ} a^τ]_1 ⊕ [K_{στ}]_0 ⊕ [a^τ]_0 + g_τ) > [a^τ]_0.
Lemma 3.5 [a^{ρστ} c^ρ (b^{ρσ} c^ρ)] < [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ].
Proof. The corresponding proof given in  yields
[a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_1 ⊕ [a^{ρστ} c^ρ]_1 ⊕ [b^{ρσ} c^ρ]_1 + 2 ≤ [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ]_1 and
[a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_i ≤ [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ]_i for 1 ≤ i ≤ g_τ. We prove additionally
[a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_0 < [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ]_0.
Assertion 5 of Proposition 2 yields
[a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_0 = θ(ω · [a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_1 ⊕ [a^{ρστ} c^ρ]_0 ⊕ [b^{ρσ} c^ρ]_0 + g_σ) ≤
θ(ω · ([a^{ρστ} c^ρ (b^{ρσ} c^ρ)]_1 ⊕ [a^{ρστ} c^ρ]_1 ⊕ [b^{ρσ} c^ρ]_1) ⊕ [a^{ρστ}]_0 ⊕ [c^ρ]_0 ⊕ [b^{ρσ}]_0 ⊕ [c^ρ]_0 + 2·g_ρ + g_σ) =: α.
Assertion 3 of Proposition 3 and assertion 6 of Proposition 2 yield
α < θ(ω · [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ]_1 ⊕ [a^{ρστ}]_0 ⊕ [b^{ρσ}]_0 ⊕ [c^ρ]_0 + g_ρ + g_σ) ≤ [S_{ρστ} a^{ρστ} b^{ρσ} c^ρ]_0.
In this proof we have implicitly used that 6·F_3(2·x) + 2·x ≤ F_5(x + 100) = ϑ(x) holds for all x < ω. This majorization property of the functions F_n can be verified easily.
Lemma 3.6 [a^τ] < [D_τ 0 a^τ b^τ].
Lemma 3.7 [b^τ] < [D_τ (SUCC t⁰) a^τ b^τ].
Lemma 3.8 [b^τ] < [R_τ 0 a^{0ττ} b^τ].
Proof. Lemma 2 yields [b^τ]_i ≤ [R_τ 0 a^{0ττ} b^τ]_i for i ≥ 1. Furthermore we have
[b^τ]_0 < θ(ω · [R_τ 0 a^{0ττ} b^τ]_1 ⊕ [R_τ 0 a^{0ττ}]_0 ⊕ [b^τ]_0 + g_τ) = [R_τ 0 a^{0ττ} b^τ]_0.
Lemma 3.9 [a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)] < [R_τ (SUCC t⁰) a^{0ττ} b^τ].
Proof. Similarly as in  we get
[a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_i ⊕ [R_τ t⁰ a^{0ττ} b^τ]_i ⊕ [a^{0ττ}]_i + 1 ≤ [R_τ (SUCC t⁰) a^{0ττ} b^τ]_i
and [R_τ t⁰ a^{0ττ}]_i < [R_τ (SUCC t⁰) a^{0ττ}]_i for all i ≥ 1. Note that for i ≥ 1 we have [a^{0ττ} t⁰]_i = [a^{0ττ}]_i. Also, for providing a better understanding of the rôle of the exponentiation function in the proof we would like to mention explicitly that for i = g_τ + 1 we have
[a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_i ⊕ [R_τ t⁰ a^{0ττ} b^τ]_i ⊕ [a^{0ττ}]_i + 1
= 2^{[t⁰]_0 + 1} · (1 ⊕ [a^{0ττ}]_i) ⊕ [a^{0ττ}]_i ⊕ [a^{0ττ}]_i + 1
≤ 2^{[t⁰]_0 + 1 + 1} · (1 ⊕ [a^{0ττ}]_i) = [R_τ (SUCC t⁰) a^{0ττ} b^τ]_i.
We now prove
[a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_0 < [R_τ (SUCC t⁰) a^{0ττ} b^τ]_0.
Proposition 2 and Proposition 3 yield [R_τ t⁰ a^{0ττ}]_0 ⊕ [a^{0ττ}]_0 ⊕ [t⁰]_0 =
θ(ω · [R_τ t⁰ a^{0ττ}]_1 ⊕ [R_τ t⁰]_0 ⊕ [a^{0ττ}]_0 + g_τ + 1) ⊕ [a^{0ττ}]_0 ⊕ [t⁰]_0 ≤
θ(ω · [R_τ t⁰ a^{0ττ}]_1 ⊕ [R_τ t⁰]_0 ⊕ [a^{0ττ}]_0 ⊕ [a^{0ττ}]_0 ⊕ [t⁰]_0 + g_τ + 1)
< θ(ω · [R_τ (SUCC t⁰) a^{0ττ}]_1 ⊕ [R_τ (SUCC t⁰)]_0 ⊕ [a^{0ττ}]_0 + g_τ + 1) =
[R_τ (SUCC t⁰) a^{0ττ}]_0.
Hence, again by Proposition 2 and Proposition 3, we obtain
[a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_0 =
θ(ω · [a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_1 ⊕ [a^{0ττ} t⁰]_0 ⊕ [R_τ t⁰ a^{0ττ} b^τ]_0 + g_τ) =
θ(ω · [a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_1 ⊕ θ(ω · [a^{0ττ} t⁰]_1 ⊕ [a^{0ττ}]_0 ⊕ [t⁰]_0 + g_0) ⊕ θ(ω · [R_τ t⁰ a^{0ττ} b^τ]_1 ⊕ [R_τ t⁰ a^{0ττ}]_0 ⊕ [b^τ]_0 + g_τ) + g_τ) ≤
θ(ω · ([a^{0ττ} t⁰ (R_τ t⁰ a^{0ττ} b^τ)]_1 ⊕ [R_τ t⁰ a^{0ττ} b^τ]_1 ⊕ [a^{0ττ} t⁰]_1) ⊕ [a^{0ττ}]_0 ⊕ [t⁰]_0 ⊕ [R_τ t⁰ a^{0ττ}]_0 ⊕ [b^τ]_0 + g_τ · 2) <
θ(ω · [R_τ (SUCC t⁰) a^{0ττ} b^τ]_1 ⊕ [R_τ (SUCC t⁰) a^{0ττ}]_0 ⊕ [b^τ]_0 + g_τ) =
[R_τ (SUCC t⁰) a^{0ττ} b^τ]_0.
The crucial point in the proof just given consists in resolving the nested recursion of the θ-terms involved into an unnested one of low ordinal complexity. For this purpose assertion 5 of Proposition 2 is of fundamental importance.
Lemma 3.10 [b^σ] < [c^σ] ⇒ [a^{στ} b^σ] < [a^{στ} c^σ].
Proof. Assume first that a^{στ} is not a recursor. The proof given in  yields [a^{στ} b^σ]_i ≤ [a^{στ} c^σ]_i for all i such that 1 ≤ i ≤ g_τ. We prove
[a^{στ} b^σ]_0 < [a^{στ} c^σ]_0.
Indeed, by assumption, Proposition 2 and Proposition 3 we have
[a^{στ} b^σ]_0 = θ(ω · [a^{στ} b^σ]_1 ⊕ [a^{στ}]_0 ⊕ [b^σ]_0 + g_σ) < θ(ω · [a^{στ} c^σ]_1 ⊕ [a^{στ}]_0 ⊕ [c^σ]_0 + g_σ) = [a^{στ} c^σ]_0.
Assume now that a^{στ} = R_ρ. [b^σ]_0 < [c^σ]_0 yields [R_ρ b^σ]_0 < [R_ρ c^σ]_0. Let i > 0. If i ≠ g_ρ + 2 then [R_ρ b^σ]_i = [R_ρ c^σ]_i. Finally, for the critical case, suppose i = g_ρ + 2. Then [b^σ]_0 < [c^σ]_0 yields [R_ρ b^σ]_i = [b^σ]_0 + 1 < [c^σ]_0 + 1 = [R_ρ c^σ]_i.
Lemma 3.11 Assume that b^{στ} is not a recursor. Then
[a^{στ}] < [b^{στ}] ⇒ [a^{στ} c^σ] < [b^{στ} c^σ].
Proof. Assume first that a^{στ} is not a recursor. The proof given in  yields [a^{στ} c^σ]_i ≤ [b^{στ} c^σ]_i for all i > 0. We prove
[a^{στ} c^σ]_0 < [b^{στ} c^σ]_0.
Indeed, by assumption, Proposition 2 and Proposition 3 we have [a^{στ} c^σ]_0 = θ(ω · [a^{στ} c^σ]_1 ⊕ [a^{στ}]_0 ⊕ [c^σ]_0 + g_σ) < θ(ω · [b^{στ} c^σ]_1 ⊕ [b^{στ}]_0 ⊕ [c^σ]_0 + g_σ) = [b^{στ} c^σ]_0. Suppose that a^{στ} = R_ρ. Then [a^{στ} c^σ]_0 = [c^σ]_0 + 1 < θ(ω · [b^{στ} c^σ]_1 ⊕ [b^{στ}]_0 ⊕ [c^σ]_0 + g_σ) = [b^{στ} c^σ]_0 since b^{στ} is not a recursor. By assumption, Lemma 2 and Lemma 3 we furthermore have [a^{στ} c^σ]_i ≤ [a^{στ}]_i ≤ [b^{στ}]_i ≤ [b^{στ} c^σ]_i for 1 ≤ i ≤ g_τ.
Theorem 2 If a →₁ b then [b] < [a], hence [b]_0 < [a]_0.
Proof. By induction on the definition of →₁. The basic steps are dealt with by Lemmata 4, 5, 6, 7, 8 and 9. Lemma 10 and 11 yield closure under application.
For convenience we restate in some more detail several (well known) corollaries of Theorem 1 which are already mentioned in the introduction. Let ω_1 := ω and ω_{n+1} := ω^{ω_n}. Similarly let 2_n(β) be defined by 2_0(β) := β and 2_{n+1}(β) := 2^{2_n(β)}. Let dp(a) denote the number of occurrences of constants in a. Thus dp(0) = dp(SUCC) = dp(K_{στ}) = dp(S_{ρστ}) = dp(D_τ) = dp(R_ρ) = 1 and dp(ab) = dp(a) + dp(b). Let G(a) := g_τ if a is a constant of type τ and set G(ab) := max{G(a), G(b)} otherwise. Then G(a) is the maximal type level in a. Let RG(a) := g_τ + 2 if a is a constant R_τ, let RG(a) := 0 if a is a constant which is not a recursor, and set RG(ab) := max{RG(a), RG(b)} otherwise. Then RG(a) is the maximal type level of a recursor occurring in a. The derivation lengths function for T is defined by
D_T(m) := max{n : (∃a_0, ..., a_n)[a_0 →₁ ... →₁ a_n & G(a_0) ≤ m & dp(a_0) ≤ m]}.
Let T_n be the set of terms a such that RG(a) ≤ n + 2. The derivation lengths function for T_n is defined by
D_{T_n}(m) := max{k : (∃a_0, ..., a_k)[a_0 →₁ ... →₁ a_k & RG(a_0) ≤ n + 2, G(a_0) ≤ m, dp(a_0) ≤ m]}.
For a functional a of type one the derivation lengths function for a is defined by
D_a(m) := max{n : (∃a_0, ..., a_n)[am = a_0 →₁ ... →₁ a_n]}.
For a functional a of type zero the value val(a) of a is the unique natural number k such that there exist a_1, ..., a_n with a = a_1 →₁ ... →₁ a_n = k.
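As a concrete illustration of this bookkeeping, dp, G and RG can be realized by a direct recursion on terms. The pair encoding of application and the sample g-values below are illustrative assumptions, not notation from the text:

```python
# Sketch of dp, G and RG on combinator terms. Terms are either constants
# or pairs (f, x) denoting the application f x; each constant carries its
# type level g and a recursor flag (an illustrative encoding).
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    name: str
    g: int              # type level of the constant's type
    recursor: bool = False

def dp(a):
    """dp(a): number of occurrences of constants in a."""
    if isinstance(a, Const):
        return 1
    f, x = a
    return dp(f) + dp(x)

def G(a):
    """G(a): maximal type level occurring in a."""
    if isinstance(a, Const):
        return a.g
    f, x = a
    return max(G(f), G(x))

def RG(a):
    """RG(a): maximal type level of a recursor in a (g + 2 for R_tau, 0 else)."""
    if isinstance(a, Const):
        return a.g + 2 if a.recursor else 0
    f, x = a
    return max(RG(f), RG(x))

ZERO = Const("0", 0)
SUCC = Const("SUCC", 1)
R0 = Const("R_0", 0, recursor=True)       # a type-0 recursor, so RG = 2

term = ((R0, (SUCC, ZERO)), ZERO)         # schematically: R_0 (SUCC 0) 0
print(dp(term), G(term), RG(term))        # 4 1 2
```

With this reading, the term above lies in T_0, the fragment whose derivation lengths are primitive recursive by the corollary that follows.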
Corollary 3.2 (i) T is strongly normalizing.
(ii) If a is of type one then val(am) ≤ θ(ω · [a]_1 ⊕ [a]_0 ⊕ m). If a ∈ T_n then ω · [a]_1 ⊕ [a]_0 < ω_{n+2} and thus m ↦ val(am) is bounded by a <ω_{n+2} descent recursive function. Furthermore D_a(m) ≤ θ(ω · [a]_1 ⊕ [a]_0 ⊕ m). In the case a ∈ T_n the function D_a is bounded by a <ω_{n+2} descent recursive function, and hence m ↦ val(am) is a <ω_{n+2} descent recursive function. In the case a ∈ T_0 the functions m ↦ val(am) and D_a are primitive recursive. In the case a ∈ T_1 the functions m ↦ val(am) and D_a are multiple recursive.
(iii) D_{T_n}(m) ≤ θ(ω_{n+2} + m). Hence D_{T_n} is ω_{n+2}-recursive.
(iv) D_{T_0} is primitive recursive in the Ackermann function.
(v) D_T is an ε₀-recursive function.
Proof. The corollary follows easily from Theorem 1 and, for example, . The reader may verify first the following auxiliary statements by induction on the depth of a.
(i) i > G(a) ⇒ [a]_i = 0.
(ii) i > RG(a) ⇒ [a]_i ≤ 2_{G(a)+1−i}(G(a) + 1 − i + 2·dp(a)).
(iii) RG(a) ≥ i ≥ 1 ⇒ [a]_i ≤ 2_{RG(a)−i}(ω · 2_{G(a)+1−i}(G(a) + 1 − i + 2·dp(a))).
(iv) [a]_0 ≤ θ(ω · 2_{RG(a)−1}(ω · 2_{G(a)+1}(G(a) + 1 + 4·dp(a)))).
Furthermore 2_{n+1}(ω · ω) = ω_{n+2}.
It has been shown, for example, by Parsons  that T_n suffices for a functional interpretation of IΣ_{n+1}. Theorem 1 shows that the weak normalization proof for T_n can be carried out in PRA + PRWO(ω_{n+2}). This yields an alternative proof of the following folklore result.
Corollary 3.3 (i) If f : ω → ω is provably recursive in IΣ_{n+1} then f is <ω_{n+2} descent recursive.
(ii) PRA + PRWO(ω_{n+2}) ⊢ Π⁰₂-Refl(IΣ_{n+1}), hence PRA + PRWO(ω_{n+2}) ⊢ Con(IΣ_{n+1}).
(iii) PRA + PRWO(ε₀) ⊢ Π⁰₂-Refl(PA), hence PRA + PRWO(ε₀) ⊢ Con(PA).
3.7 Treatment of T in the λ-formulation
As far as we can see Theorem 1 extends easily to variants of T formulated with λ-terms. Indeed, let λ-abstraction be defined as, for example, in . It can easily be verified that Gödel's T based on λ-terms can be embedded by this folklore translation into Gödel's T based on combinators in such a way that reductions, except applications of the ξ-rule, are respected. For dealing with the ξ-rule, i.e. a →₁ b ⇒ λx.a →₁ λx.b, it is convenient to employ Howard's 1970 non-unique assignment of ordinal numbers to terms, which is a slight variant of [·]_0, for showing that under the assumption a →₁ b one can assign effectively a natural number to b which is smaller than the natural number assigned to a. Thus all results for our formulation of T carry over to T in the λ-formulation. This also extends to variants of T where η-reductions (cf., for example, ) are allowed, since [λx.fx]_0 > [f]_0 holds for subterm reasons.
4 A simple characterization of the provably recursive functions of PA
Abstract
Inspired by Buchholz' technique of operator controlled derivations (which were introduced for simplifying Pohlers' method of local predicativity), a straightforward, perspicuous and conceptually simple method for characterizing the provably recursive functions of Peano arithmetic in terms of Kreisel's ordinal recursive functions is given. Since only amazingly little proof and hierarchy theory is used, the chapter is intended to make the field of ordinally informative proof theory accessible even to non-prooftheorists whose knowledge in mathematical logic does not exceed a first course in logic.
4.1 Preliminaries
The set of non-logical constants of PA includes the set of function symbols for primitive recursive functions and the relation symbol =. (In the sequel 0 denotes the constant symbol for zero and S denotes the successor function symbol.) The logical operations include ∧, ∨, ∀, ∃. We have an infinite list of variables x_0, x_1, .... The set of PA-terms (which are denoted in the sequel by r, s, t, ...) is the smallest set which contains the variables and constants and is closed under function application, i.e. if f is a k-ary function symbol and t_1, ..., t_k are terms, so is ft_1...t_k. If t(x⃗) is a PA-term with FV(t) ⊆ {x⃗} then t^ℕ denotes the represented function in the standard structure ℕ. The set of PA-formulas (which are in the sequel denoted by A, B, F) is the smallest set which includes s = t, ¬s = t (prime formulas) and is closed under conjunction, disjunction and quantification. The notation ¬A, for A arbitrary, is an abbreviation for the formula obtained via the de Morgan laws; A → B abbreviates ¬A ∨ B. We denote finite sets of PA-formulas by Γ, Δ, .... As usual Γ, A stands for Γ ∪ {A} and Γ, Δ stands for Γ ∪ Δ.
The formal system PA is presented in a Tait calculus. PA includes the logical axioms Γ, ¬A, A, the equality axioms Γ, t = t and Γ, t ≠ s, ¬A(t), A(s), the successor function axioms Γ, ∀x(¬ 0 = Sx) and Γ, ∀x∀y(Sx = Sy → x = y), the defining equations for primitive recursive function symbols (cf. [Pohlers (1989)]) and the induction scheme Γ, A(0) ∧ ∀x(A(x) → A(Sx)) → ∀xA(x).
The derivation relation ⊢ for PA is defined as follows:
(Ax) ⊢ Γ if Γ is an axiom of PA.
(∧) ⊢ Γ, A_i for all i ∈ {0, 1} implies ⊢ Γ, A_0 ∧ A_1.
(∨) ⊢ Γ, A_i for some i ∈ {0, 1} implies ⊢ Γ, A_0 ∨ A_1.
(∀) ⊢ Γ, A(y) implies ⊢ Γ, ∀xA(x) if y does not occur in Γ, ∀xA(x).
(∃) ⊢ Γ, A(t) implies ⊢ Γ, ∃xA(x).
(cut) ⊢ Γ, A and ⊢ Γ, ¬A imply ⊢ Γ.
Definition 4.1 (Rank of a formula)
• rk(A) := 0, if A is a prime formula
• rk(A ∨ B) := rk(A ∧ B) := max{rk(A), rk(B)} + 1
• rk(∀xA(x)) := rk(∃xA(x)) := rk(A) + 1
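The clauses of Definition 4.1 amount to a simple recursion on formula trees. A minimal sketch, where the tuple encoding of formulas is an illustrative assumption:

```python
# A minimal sketch of rk from Definition 4.1. Formulas are tuples:
# ('prime', s), ('and', A, B), ('or', A, B), ('all', A), ('ex', A)
# -- an illustrative encoding, not fixed by the text.

def rk(f):
    tag = f[0]
    if tag == 'prime':
        return 0
    if tag in ('and', 'or'):
        return max(rk(f[1]), rk(f[2])) + 1
    if tag in ('all', 'ex'):
        return rk(f[1]) + 1
    raise ValueError(f'unknown connective: {tag}')

P = ('prime', 's = t')
print(rk(P))                                 # 0
print(rk(('or', P, ('and', P, P))))          # 2
print(rk(('all', ('ex', ('and', P, P)))))    # 3
```

The rank is exactly the quantity on which the cut rule is measured below: (cut) requires rk(A) < r.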
In this chapter we consider only ordinals less than ε₀. These ordinals are denoted by α, β, γ, δ, η. Finite ordinals are denoted by k, m, n, .... # denotes the natural sum of ordinals (cf. [Schütte 1977] or [Pohlers 1989]). For each α < ε₀ let N(α) be the number of occurrences of ω in the Cantor normal form representation of α. Thus N(0) = 0, and N(α) = n + N(α_1) + ... + N(α_n) if α = ω^{α_1} + ... + ω^{α_n} with α_1 ≥ ... ≥ α_n. N satisfies the following conditions:
(N1) N(0) = 0
(N2) N(α # β) = N(α) + N(β)
(N3) N(ω^α) = N(α) + 1
(N4) card{α < ε₀ : N(α) ≤ n} < ω for all n < ω
Furthermore we see that N(α + n) = N(α) + n for n < ω.
Abbreviation: Nα := N(α).
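The norm N and the natural sum # can be made concrete by coding ordinals below ε₀ as nested tuples of Cantor normal form exponents. The coding is an illustrative assumption (for it, Python's lexicographic tuple order happens to coincide with the ordinal order):

```python
# Ordinals < epsilon_0 in Cantor normal form, coded as tuples of exponents
# in weakly decreasing order; each exponent is again such a tuple.
# Illustrative coding: () is 0, ((),) is 1 = omega^0, (((),),) is omega.

ZERO = ()
ONE = (ZERO,)
OMEGA = (ONE,)

def N(a):
    """Norm: number of occurrences of omega in the CNF of a."""
    return len(a) + sum(N(e) for e in a)

def nat(n):
    """The finite ordinal n = omega^0 + ... + omega^0 (n summands)."""
    return (ZERO,) * n

def nsum(a, b):
    """Natural sum #: merge the exponent lists in decreasing order.
    (Python's tuple order coincides with the ordinal order here.)"""
    return tuple(sorted(a + b, reverse=True))

# Spot checks of (N1)-(N3):
print(N(ZERO))                    # 0
print(N(nsum(OMEGA, nat(2))))     # N(omega) + N(2) = 2 + 2 = 4
print(N((OMEGA,)))                # N(omega^omega) = N(omega) + 1 = 3
```

Note that N(ω) = 2, since ω = ω^{ω^0} contains two occurrences of ω; this matches (N3) applied twice.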
Definition 4.2 Let ϑ(x) := 3x + 1 and let f^{(2)}(x) := f(f(x)) denote iteration.
F_α(x) := max({2^x} ∪ {F_β^{(2)}(x) | β < α & Nβ ≤ ϑ(Nα + x)})
This expression is well defined due to (N4).
Remark: It follows immediately from [Buchholz, Cichon and Weiermann (1994)] that this hierarchy is equivalent to the ordinal recursive functions.
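Since by (N4) only finitely many β have bounded norm, F_α(x) is computable in principle by brute-force enumeration. The sketch below does exactly that for a tuple coding of Cantor normal forms; the coding and the enumeration strategy are illustrative assumptions, and the values explode after the very first stages:

```python
# Sketch of F_alpha from Definition 4.2. Ordinals < epsilon_0 are coded
# in Cantor normal form as nested tuples of exponents (weakly decreasing);
# for this coding Python's tuple order coincides with the ordinal order.
from functools import lru_cache

def N(a):                          # norm: omega-occurrences in the CNF
    return len(a) + sum(N(e) for e in a)

@lru_cache(maxsize=None)
def ords_of_norm_le(m):
    """All CNF ordinals with norm <= m -- a finite set by (N4)."""
    if m <= 0:
        return frozenset({()})
    out = {()}
    for e in ords_of_norm_le(m - 1):               # head exponent
        for t in ords_of_norm_le(m - 1 - N(e)):    # rest of the CNF
            if t == () or t[0] <= e:               # keep exponents decreasing
                out.add((e,) + t)
    return frozenset(out)

def theta(x):                      # the control function 3x + 1
    return 3 * x + 1

@lru_cache(maxsize=None)
def F(a, x):
    best = 2 ** x
    bound = theta(N(a) + x)
    for b in ords_of_norm_le(bound):
        if b < a:                  # candidates with N(b) <= theta(N(a) + x)
            best = max(best, F(b, F(b, x)))
    return best

ZERO, ONE, TWO = (), ((),), ((), ())
print(F(ZERO, 3))                  # 2^3 = 8
print(F(ONE, 1))                   # F_0(F_0(1)) = 4
print(F(TWO, 0))                   # F_1(F_1(0)) = F_1(2) = F_0(F_0(2)) = 16
```

Already F_2(1) = F_1(F_1(1)) = 2^16, and F_3 is far out of practical reach, in line with the remark that the hierarchy matches the ordinal recursive functions.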
Definition 4.3 Throughout the whole chapter we assume F : ℕ → ℕ to be a monotone function. Given ℓ ∈ ℕ we denote by F[ℓ] the function defined by
F[ℓ] : ℕ → ℕ; x ↦ F(max{x, ℓ})
We write F[k_1, ..., k_n] instead of F[max{k_1, ..., k_n}] and use the abbreviation F ≤ F′ :⇔ ∀x ∈ ℕ F(x) ≤ F′(x).
Remark: F[n, m] = F[n][m] and F[n][n] = F[n].
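The bracket operation is just precomposition with a maximum; a one-line sketch, where the concrete F is a stand-in:

```python
# Sketch of Definition 4.3: F[l](x) = F(max(x, l)), and
# F[k1, ..., kn] = F[max(k1, ..., kn)].

def bracket(F, *ks):
    ell = max(ks)
    return lambda x: F(max(x, ell))

F = lambda x: 2 ** x              # a stand-in monotone F : N -> N

print(bracket(F, 3)(1))           # F(max(1, 3)) = 8
print(bracket(F, 3)(5))           # F(5) = 32
# The remark F[n, m] = F[n][m], checked at one point:
print(bracket(bracket(F, 2), 4)(1) == bracket(F, 2, 4)(1))   # True
```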
Lemma 4.1 (i) x < F_α(x)
(ii) F_α^{(2)}(x) ≤ F_{α+1}(x)
(iii) α < γ & Nα ≤ ϑ(Nγ + ℓ) ⇒ F_α[ℓ] ≤ F_γ[ℓ]
(iv) For every n-ary primitive recursive function f there exists a p ∈ ℕ such that ∀k⃗ ∈ ℕⁿ f(k⃗) < F_{ω·p}(max k⃗).
(v) F_γ[ℓ] ≤ F_{γ#β}[ℓ]
(vi) k < F_γ(ℓ) ⇒ F_γ[ℓ][k] ≤ F_{γ+1}[ℓ]
(vii) Nα, Nβ < F_γ(ℓ) ⇒ N(α + β) < F_{γ+1}(ℓ)
(viii) α_0 < α & Nα_0 < F_γ(ℓ) ⇒ F_{γ#α_0+2}[ℓ] ≤ F_{γ#α+1}[ℓ]
Proof. (i)-(vii) are simple. In (vi) for example k < F_γ(ℓ) implies
F_γ[ℓ][k](x) = F_γ(max{ℓ, k, x}) ≤ F_γ^{(2)}(max{ℓ, x}) ≤ F_{γ+1}(max{ℓ, x}).
(viii) If α = α_0 + n for some n ≥ 1 the claim follows from (v). So we can assume
(1) γ # α_0 + 2 < γ # α
and the premise Nα_0 < F_γ(ℓ) implies
(2) N(γ # α_0 + 2) ≤ Nγ + F_γ(ℓ) + 1 ≤ ϑ(N(γ # α) + F_{γ#α}(ℓ)).
For all x ∈ ℕ we obtain from (1) and (2) by part (iii) and (ii)
F_{γ#α_0+2}[ℓ](x) = F_{γ#α_0+2}(max{ℓ, x}) < F_{γ#α}^{(2)}(max{ℓ, x}) ≤ F_{γ#α+1}[ℓ](x).
Definition 4.4 (F-controlled derivations)
F ⊢^α_r Γ holds iff Nα < F(0) and one of the following cases obtains:
(Ax) Γ ∩ Δ(ℕ) ≠ ∅
(∨) A_0 ∨ A_1 ∈ Γ & ∃i ∈ {0, 1} ∃α_0 < α  F ⊢^{α_0}_r Γ, A_i
(∧) A_0 ∧ A_1 ∈ Γ & ∀i ∈ {0, 1} ∃α_0 < α  F ⊢^{α_0}_r Γ, A_i
(∃) ∃xA(x) ∈ Γ & ∃n < F(0) ∃α_n < α  F ⊢^{α_n}_r Γ, A(n)
(∀) ∀xA(x) ∈ Γ & ∀n ∈ ℕ ∃α_n < α  F[n] ⊢^{α_n}_r Γ, A(n)
(cut) rk(A) < r & ∃α_0 < α  F ⊢^{α_0}_r Γ, (¬)A
The abbreviations used in (Ax) and (cut) are the following:
• Δ(ℕ) := {A | A is a prime formula and ℕ ⊨ A}
• F ⊢^{α_0}_r Γ, (¬)A :⇔ F ⊢^{α_0}_r Γ, A & F ⊢^{α_0}_r Γ, ¬A
So the F-controlled derivations are just like usual PA-derivations but with ω-rule and some information about the ∃-witnesses and the derivation length. The first one is the essential aid for collapsing (lemma 4.6) while the latter is used to apply lemma 4.1 (viii) in the cut-elimination procedure 4.5.
Lemma 4.2 (Monotonicity)
For r ≤ s & α ≤ β & Nβ < F(0):
F ⊢^α_r Γ ⇒ F ⊢^β_s Γ, Γ′
Lemma 4.3 (Inversion)
(i) F ⊢^α_r Γ, A & ¬A ∈ Δ(ℕ) ⇒ F ⊢^α_r Γ
(ii) F ⊢^α_r Γ, A_0 ∧ A_1 ⇒ ∀i ∈ {0, 1} F ⊢^α_r Γ, A_i
(iii) F ⊢^α_r Γ, ∀xA(x) ⇒ ∀n ∈ ℕ F[n] ⊢^α_r Γ, A(n)
Lemma 4.4 (Reduction) Let A be a formula A_0 ∨ A_1 or ∃xB(x) of rank ≤ r.
F_γ[ℓ] ⊢^α_r Γ, ¬A & F_γ[ℓ] ⊢^β_r Δ, A ⇒ F_{γ+1}[ℓ] ⊢^{α+β}_r Γ, Δ
Proof by induction on β. The interesting case is that A ≡ ∃xB(x) is the main formula of the last deduction step. For some k < F_γ(ℓ) and β_0 < β we have the premise
F_γ[ℓ] ⊢^{β_0}_r Δ, A, B(k)
The induction hypothesis yields
F_{γ+1}[ℓ] ⊢^{α+β_0}_r Γ, Δ, B(k)
By the inversion lemma 4.3 (iii) and the fact F_γ[ℓ][k] ≤ F_{γ+1}[ℓ] (lemma 4.1 (vi)) we can transform the first derivation into
F_{γ+1}[ℓ] ⊢^α_r Γ, ¬B(k)
N(α + β) < F_{γ+1}(ℓ) is true by lemma 4.1 (vii). Furthermore rk(B(k)) < rk(∃xB(x)), so the claim can be obtained by a cut (using monotonicity before).
The underlying idea of the reduction lemma is as follows: if F ⊢^α_r Γ, ¬A and G ⊢^β_r Δ, A then F ∘ G ⊢^{α+β}_r Γ, Δ for suitably nice F and G. Thus the composition of bounding functions yields the witness bounding during the cut reduction.
─ģ ╔─ģ
Lemma 4.5 (Cut-elimination) F┼é[┼Ü] r +1 ō ę! F┼é ─ģ+1[┼Ü] r ō
#
Proof by induction on ─ģ. In the interesting case of a cut we have for some
─ģ0 <─ģ
─ģ0
F┼é[┼Ü] r +1 ō, (┼╣)A
for some A of rank d" r. The induction hypothesis yields
0
╔─ģ
F┼é ─ģ0+1[┼Ü] r ō, (┼╣)A
#
By reduction 4.4 (or lemma 4.3 (i) for prime formulas (┼╣)A) we obtain
0
╔─ģ Ę 2
F┼é ─ģ0+2[┼Ü] r ō
#
0
and the claim follows from lemma 4.1 (vii), (viii), ╔─ģ Ę2 <╔─ģ and monotonicity.
The motivation behind the Cut elimination theorem is as follows. For a given
suitably nice monotone function F : IN IN let
─ģ ▓ ▓
F (x) :=max ({F (x) +1}*"{F (F (x)) : ▓ <─ģ & N▓ d" N─ģ + x}.
42
─ģ ╔─ģ
If F ō then F─ģ r ō. Thus the ─ģ-th iterate of F (stemming from ─ģ
r +1
applications of the reduction lemma) yields witness bounding for the cut elimi-
nation. This is mirrored by the following observation: (F┼é)─ģ(x) d" F─ģ#┼é+1(x).
Lemma 4.6 (Collapsing)
F ⊢^α_0 ∃yA(y) & rk(A) = 0 ⇒ ∃n < F(0)  ℕ ⊨ A(n)
Proof by induction on α. Since we have a cut-free derivation the last deduction step was (∃), so there are an n < F(0) and an α_0 < α with
F ⊢^{α_0}_0 A(n), ∃yA(y)
If ℕ ⊨ A(n) we are done; otherwise we can use lemma 4.3 (i) and the induction hypothesis to prove the claim.
Definition 4.5 A ∼ A′ :⇔ there are a PA-formula B, pairwise distinct variables x_1, ..., x_n and closed PA-terms t_1, s_1, ..., t_n, s_n such that t_i^ℕ = s_i^ℕ (i = 1, ..., n) and A ≡ B_{x_1,...,x_n}(t_1, ..., t_n), A′ ≡ B_{x_1,...,x_n}(s_1, ..., s_n).
Lemma 4.7 (Tautology and embedding of mathematical axioms)
(i) A ∼ A′ ⇒ F_k ⊢^k_0 A, ¬A′ where k := 2 · rk(A)
(ii) For every formula A with FV(A) ⊆ {x} there is a k ∈ ℕ such that
F_k ⊢^{ω+3}_0 A(0) ∧ ∀x(A(x) → A(Sx)) → ∀xA(x)
(iii) For every other mathematical axiom A of PA there is a k < ω such that F_k ⊢^k_0 A.
Proof. (i) as usual, using Nk < F_k(0). (iii) If Γ contains only Π_1-formulas and ⊢^k_0 Γ denotes a usual cut-free PA-derivation (without F-controlling), we can easily conclude F_k ⊢^k_0 Γ by induction on k. Since the result ⊢^k_0 A for mathematical axioms A except induction is folklore, the claim follows.
(ii) Let k := rk(A(0)). We show by induction on n
(∗) F_{2k}[n] ⊢^{2k+2n}_0 ¬A(0), ¬∀x(A(x) → A(Sx)), A(n)
n = 0: The tautology lemma 4.7 (i) yields F_{2k} ⊢^{2k}_0 ¬A(0), A(0). So the claim follows by monotonicity.
n → n + 1: By the tautology lemma we have F_{2k} ⊢^{2k}_0 ¬A(Sn), A(n + 1). Connecting this with the derivation given by the induction hypothesis by (∧) yields
F_{2k}[n + 1] ⊢^{2k+2n+1}_0 ¬A(0), ¬∀x(A(x) → A(Sx)), A(n) ∧ ¬A(Sn), A(n + 1)
So (∗) follows by an application of (∃). The claim follows from (∗) by an application of (∀) and three applications of (∨).
Lemma 4.8 (Embedding) For every Γ satisfying FV(Γ) ⊆ {x_1, ..., x_m} and PA ⊢ Γ there exist γ < ω², α < ω·2 and r < ω such that
∀n⃗ ∈ ℕ^m  F_γ[n⃗] ⊢^α_r Γ(n⃗)
Proof by induction on the derivation of Γ.
1. A, ¬A ∈ Γ is an axiom. The tautology lemma yields the claim.
2. ∀xA(x, x⃗) ∈ Γ(x⃗) and Γ results from Γ(x⃗), A(y, x⃗) where y ∉ {x_1, ..., x_m}. By the induction hypothesis there are γ < ω², α < ω·2, r < ω such that
∀k ∈ ℕ ∀n⃗ ∈ ℕ^m  F_γ[k, n⃗] ⊢^α_r Γ(n⃗), A(k, n⃗)
We obtain the claim by an application of (∀) since F_γ[k, n⃗] = F_γ[n⃗][k].
3. ∃xA(x, x⃗) ∈ Γ(x⃗) and Γ results from Γ(x⃗), A(t(x⃗), x⃗). We can assume that x ∉ {x_1, ..., x_m} and FV(t) ⊆ {x_1, ..., x_m}. By the induction hypothesis there are γ < ω², α_0 < ω·2 and r_0 < ω such that
(3) ∀n⃗ ∈ ℕ^m  F_γ[n⃗] ⊢^{α_0}_{r_0} Γ(n⃗), A(t(n⃗), n⃗)
The tautology lemma 4.7 (i) yields a k < ω such that
(4) ∀n⃗ ∈ ℕ^m  F_k ⊢^k_0 A(t^ℕ(n⃗), n⃗), ¬A(t(n⃗), n⃗)
Since x⃗ ↦ t^ℕ(x⃗) is a primitive recursive function, due to lemma 4.1 (iv) there is a p < ω satisfying ∀x⃗ ∈ ℕ^m  t^ℕ(x⃗) < F_{ω·p}(max x⃗). Choosing p > k implies F_k ≤ F_k[n⃗] ≤ F_{ω·p}[n⃗] with the aid of lemma 4.1 (iii). Letting α := max{α_0, k} and r > max{r_0, rk(A)} we obtain
∀n⃗ ∈ ℕ^m  F_{γ#ω·p}[n⃗] ⊢^α_r Γ(n⃗), A(t^ℕ(n⃗), n⃗), (¬)A(t(n⃗), n⃗)
from (3) and (4) by monotonicity 4.2. Applying a cut we get
∀n⃗ ∈ ℕ^m  F_{γ#ω·p}[n⃗] ⊢^{α+1}_r Γ(n⃗), A(t^ℕ(n⃗), n⃗)
and (∃) proves the claim since t^ℕ(n⃗) < F_{γ#ω·p}(max n⃗).
4. Γ results from Γ, A and Γ, ¬A by a cut. By the induction hypothesis there are γ < ω², α < ω·2 and r < ω such that
∀n⃗ ∈ ℕ^m  F_γ[n⃗] ⊢^α_r Γ(n⃗), (¬)A(n⃗)
By choosing r > rk(A) we obtain the claim by an application of (cut).
5. The missing cases are covered by lemma 4.7 or easy (rules for ∨ and ∧).
Theorem 4.1 Let A be a prime formula such that PA ⊢ ∀x∃yA(x, y) and FV(A) ⊆ {x, y}. Then there is a γ < ε₀ such that
∀x ∈ ℕ ∃y < F_γ(x)  ℕ ⊨ A(x, y).
Proof by embedding 4.8, iterated cut-elimination 4.5, inversion 4.3 (iii) and collapsing 4.6.
References
 P. Aczel, H. Simmons, S.S. Wainer (eds.): Proof Theory: A Selection of
Papers from the Leeds Proof Theory Meeting 1990, Cambridge University
Press (1992).
 T. Arai: Variations on a theme by Weiermann. The Journal of Symbolic
Logic (to appear).
 T. Arai: Consistency proofs via pointwise induction. Preprint, Hiroshima
1996 (submitted).
 H. Barendregt: The Lambda Calculus: Its Syntax and Semantics. North-
Holland.
 A. Beckmann and A. Weiermann: A term rewriting characterization of the
polytime functions and related complexity classes. Archive for Mathematical
Logic 36 (1996), 11-30.
 A. Beckmann and A. Weiermann: How to characterize the elementary recursive functions by Howard-Schütte style methods. Preprint, Münster 1996 (submitted).
 B. Blankertz and A. Weiermann: How to characterize provably total functions by the Buchholz operator method. Proceedings of Gödel '96. Springer Lecture Notes in Logic 6 (1996), 205-213.
 W. Buchholz: Proof-theoretic analysis of termination proofs. Annals of
Pure and Applied Logic 75 (1995), 57-65.
 W. Buchholz: A new system of proof-theoretic ordinal functions. Annals of
Pure and Applied Logic 32 (1986), 195-207.
 W. Buchholz, S. Feferman, W. Pohlers, W. Sieg (eds.): Iterated Inductive
Definitions and Subsystems of Analysis: Recent Proof-Theoretical Studies,
Lecture Notes in Mathematics 897. Springer, Berlin, 1981.
 W. Buchholz: The Ω_{μ+1}-rule. In , 188-233.
 W. Buchholz: Notation systems for infinitary derivations. Archive for
Mathematical Logic 30 5/6 (1991), 277-296.
 W. Buchholz: A simplified version of local predicativity. In , 115-148.
 W. Buchholz, E. A. Cichon and A. Weiermann: A uniform approach to
fundamental sequences and hierarchies. Mathematical Logic Quarterly 40
(1994), 273-286.
 E. A. Cichon and A. Weiermann: Term rewriting theory for the primitive
recursive functions. Annals of Pure and Applied Logic 83 (3) (1997), 199-
223.
 N. Dershowitz & J. Jouannaud: Rewrite systems. Handbook of Theoretical
Computer Science B: Formal Methods and Semantics, (J. van Leeuwen ed.),
North-Holland, (1990) 243-320.
 S. Feferman. Logics for Termination and Correctness of Functional pro-
grams, II. Logics of strength PA. In: Proof Theory. P. Aczel, H. Simmons,
S. Wainer (eds.). Cambridge University Press (1992), 197-225.
 W. Felscher. Berechenbarkeit: Rekursive und Programmierbare Funktionen.
Springer, 1993.
 H. Friedman and M. Sheard: Elementary descent recursion and proof the-
ory. Annals of Pure and Applied Logic 71 (1995), 1-45.
 Geupel: Terminationsbeweise bei Termersetzungssystemen. Diplomarbeit, Technische Universität Dresden, Sektion Mathematik, 1988.
 J. Y. Girard: Π¹₂-logic. Part 1: Dilators. Annals of Mathematical Logic 21 (1981), 75-219.
 D. Hofbauer: Termination proofs by multiset path orderings imply primi-
tive recursive derivation lengths. Theoretical Computer Science 105 (1992)
129-140.
 D. Hofbauer: Termination Proofs and Derivation Lengths in Term Rewriting Systems. Forschungsberichte des Fachbereichs Informatik, Bericht 92-46. Technische Universität Berlin 1991.
 W. A. Howard: Assignment of ordinals to terms for primitive recursive
functionals of finite type. Intuitionism and Proof Theory. North-Holland,
Amsterdam 1970, 443-458.
 W. A. Howard: Ordinal analysis of bar recursion of type zero. Compositio
Mathematica 42 (1980), 105-119.
 G. Kreisel: On the interpretation of non-finitist proofs II. The Journal of
Symbolic Logic 17 (1952), 43-58.
 D. Leivant: Predicative recurrence in finite type. In A. Nerode and Y.V.
Matiyasevich (eds.), Logical Foundations of Computer Science. Springer
Lecture Notes in Computer Science 813 (1994), 227-239.
 M. Möllerfeld and A. Weiermann: A uniform approach to z"-recursion. Preprint, Münster (1996).
 C. Parsons: On a number-theoretic choice schema and its relation to in-
duction. Intuitionism and Proof Theory. A. Kino, J. Myhill, R. E. Vesley
(eds.). North-Holland (1970), 459-473.
 C. Parsons: On n-quantifier induction. The Journal of Symbolic Logic 37/3
(1972), 466-482.
 R. Péter. Recursive Functions. Academic Press, 1967.
 H. E. Rose Subrecursion: Functions and Hierarchies. Oxford University
Press (1984).
 W. Pohlers: Proof Theory. An Introduction. Springer Lecture Notes in
Mathematics 1407 (1989).
 W. Pohlers: Proof-theoretic analysis of ID_ν by the method of local predicativity. In , 261-357.
 W. Pohlers: A short course in ordinal analysis. In , 27-78.
 W. Pohlers: Pure Proof Theory: Aims, methods and results. The Bulletin
of Symbolic Logic 2/2 (1996), 159-188.
 M. Rathjen: Fragments of Kripke-Platek set theory with infinity. In ,
251-273.
 M. Rathjen and A. Weiermann: Proof-theoretic investigations on Kruskal's theorem. Annals of Pure and Applied Logic 60 (1993), 49-88.
 K. Schütte. Proof Theory. Springer, 1977.
 H. Simmons. The Realm of Primitive Recursion. Archive for Mathematical
Logic 27, (1988).
 H. Schwichtenberg: Eine Klassifikation der ε₀-rekursiven Funktionen. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 17 (1971), 61-74.
 H. Schwichtenberg: Elimination of higher type levels in definitions of primitive recursive functionals by means of transfinite recursion. In: Logic Colloquium '73. Rose, Shepherdson (editors). North-Holland (1975), 279-303.
 H. Schwichtenberg: Classifying recursive functions. Handbook of Recursion
Theory (to appear).
 G. Takeuti: Proof Theory. Second edition. North-Holland, Amsterdam
1987.
 A.S. Troelstra: Metamathematical Investigations of Intuitionistic Arith-
metic and Analysis. Second, corrected edition of LNM 344 (1973). ILLC
Prepublication Series X-93-05.
 A. Weiermann: A strongly uniform termination proof for Gödel's T by methods from local predicativity. Archive for Mathematical Logic 36 (3) (1997), 445-460.
 A. Weiermann: How to characterize provably total functions by local pred-
icativity. The Journal of Symbolic Logic 61 (1996), 52-69.
 A. Weiermann: Rewriting theory for the Hydra battle and the extended
Grzegorczyk hierarchy. The Journal of Symbolic Logic (to appear).
 A. Weiermann. Bounding derivation lengths with functions from the slow
growing hierarchy. Archive for Mathematical Logic 37 (1998), 427-441.
 A. Weiermann. Termination proofs for term rewriting systems by lexico-
graphic path orderings yield multiply recursive derivation lengths. Theoret-
ical Computer Science 139 (1995), 355-362.