Type Theory

Type theory, in one sense, is the view that some category of abstract entities—sets, in the simplest example, but there are analogous views of properties, relations, concepts, and functions—come in a hierarchy of levels, with an entity of one level applying to (having as members, or having as instances, or ...) entities only of a lower level.

Such a view gives an intuitively comprehensible picture of the universe of abstracta and provides a principled way of avoiding **Bertrand Arthur William Russell**’s Paradox and its analogues. In a second sense, the term refers to any of a wide range of formal axiomatic systems embodying some form of the view. The present entry gives a short history of the view and a brief survey of the systems.

The systems are generally formulated in many-sorted quantificational logic, with a separate alphabet of quantified variables ranging over each type of entity. Axiomatically, they incorporate the rules of propositional logic (usually though not always classical) and of quantifier logic, the latter reduplicated for each alphabet of variables.

Beyond this the most important axioms postulate the existence of entities of the various types. For versions of Simple Type Theory, these are typically unrestricted comprehension principles: For any type, t, there is a set (or property, or ...) of a higher type containing as members (or having as instances, or ...) all the entities of type t satisfying an arbitrarily chosen formula of the language.
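Schematically (using set-theoretic notation for definiteness, with superscripts indicating types and with φ any formula of the language in which Y does not occur free), the comprehension schema for type t reads:

```latex
\exists Y^{t+1}\,\forall x^{t}\,\bigl(\, x^{t} \in Y^{t+1} \;\leftrightarrow\; \varphi(x^{t}) \,\bigr)
```

Because Y is of type t+1, the instance that generates Russell’s Paradox in naive set theory—taking φ(x) to be x ∉ x—cannot even be written: a formula of the shape x ∈ x is ungrammatical in the typed language.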

For versions of Ramified Type Theory, this is restricted: Only such entities are postulated as can have their membership (or ...) specified by formulas in which certain sorts of variables do not occur. References are given below to works in which precise formulations can be found; Alonzo Church is particularly helpful in this matter.

Type theory as a way of avoiding the set-theoretic paradoxes is one of Russell’s great contributions to the study of the foundations of mathematics, but the idea of a hierarchy with sets (or set-like entities ...) coming in levels is a bit older.

Schroeder had presented a version of it, and **Gottlob Frege** had based his foundational system on it. For Frege, the hierarchy of entities reflected the hierarchy of grammatical categories in an (idealized) language.

At the bottom there were objects, the referents of singular terms. Predicates (either simple or complex) then stood for concepts, which he conceived of as so different from objects that it was an abuse of language to try to say anything of the two together (hence his avowedly nonsensical dictum about the concept horse not being a concept).

Linguistic constructions with blanks that can be filled by simple or complex predicates—his prime examples were the first-order quantifiers “it holds of every object that it ___” and “it holds of at least one object that it ___”—he construed as a sort of second-level predicate and took to denote second-level concepts: entities as different from (first-level) concepts as they are from objects. And so (in principle; in practice he made little use of higher levels) on up.

The grammar of his **formal system** reflected this: No term or variable for an item of one level was allowed to stand in the positions filled by terms or variables for items of other levels. Had he rested content with this machinery, his formal system would have been a version of (what later came to be called) the Simple Theory of Types and demonstrably consistent.

To carry out his project of giving a logicist foundation for arithmetic, however, he had to prove that there were infinitely many objects, and to do this, he postulated that every concept had an object—its Werthverlauf—as a kind of shadow.

These objects functioned essentially as sets: Frege was able to define membership by saying that one object was a member of a second iff the second was the Werthverlauf of a concept holding of the first. Since a Werthverlauf could have any objects whatever—including Werthverlaufs!—as members, the derived set theory was untyped.

Frege was able to prove in it the existence of an infinite set, and to interpret (a variant of) the **Peano axioms** for Arithmetic. Russell was able to derive in it his contradiction. (There is a readable account of the derivation of Arithmetic in the untyped set theory in Hatcher 1982.)
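Schematically, writing ἐF for the Werthverlauf of the concept F, Frege’s definition of membership and Russell’s derivation can be sketched as follows:

```latex
% Frege's definition: x is a member of y iff y is the Werthverlauf
% of some concept holding of x.
x \in y \;\leftrightarrow\; \exists F \bigl(\, y = \grave{\varepsilon}F \,\wedge\, F(x) \,\bigr)

% Russell's object: the Werthverlauf of the concept "x is not a
% member of x". Since the derived set theory is untyped, R is itself
% a candidate member.
R \;=\; \grave{\varepsilon}\bigl(\, x \notin x \,\bigr)

% Unpacking the definition of membership yields the contradiction:
R \in R \;\leftrightarrow\; R \notin R
```

(The last step uses Frege’s Basic Law V, which guarantees that Werthverlaufs of coextensive concepts coincide.)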

Russell was not initially attracted to Frege’s linguistic hierarchy. He wanted to formulate a general metaphysical theory and to describe the differences between horses and concepts by denying of the one the very same predicates he affirmed of the other. During his period of experimentation after the discovery of the paradoxes, he toyed with and rejected versions of type theory, finally coming to it by an indirect route.

Sets (Russell said classes) themselves—entities satisfying an axiom of extensionality—he was willing to give up as excess ontological baggage. A set is typically defined by giving an open formula that specifies its membership, and Russell preferred to think in terms of nonextensional entities designated by the formulas.

He thought of sentences as standing for propositions, which he took to be complex entities built up out of the items designated by the words in a sentence in a way that paralleled the **syntactic construction** of the sentence.

The items expressed by open formulas he called propositional functions: things which, when given some entity as argument, would yield the proposition that would be expressed by inserting a name of the argument in place of the free variable. Russell’s Paradox, however, does not depend on the assumption of extensionality: A naïve theory of propositional functions is inconsistent in the same way as naïve set theory.

If a consistent theory of propositional functions could be found, however, a theory of sets could be interpreted in it by contextual definition: Statements about sets would be interpreted as statements about propositional functions to which the differences between extensionally equivalent functions were irrelevant.

Apparently almost immediately after seeing how to dispense with the strange entities he called denoting concepts—“On Denoting” eliminates them by giving a new analysis of the propositions he had thought of as containing them—he thought of what might be called a theory of **virtual propositional** functions, a theory in which, though neither classes nor propositional functions were postulated as entities, statements apparently referring to them could be formulated.

On this theory reference to a propositional function (say, X is a horse) would be replaced by reference to a pair of entities: one of the propositions that might have been taken as a value of the function (for example, Bucephalus is a horse) along with one of the component entities of that proposition (Bucephalus, in this case).

The key notion was one of the substitution of an entity for one of the constituents of a proposition: Rather than saying that Traveler, for example, satisfies the propositional function X is a horse, on the new theory, we will say that the proposition obtained from Bucephalus is a horse by substituting Traveler for Bucephalus in it is a true one. (Note that this notion of substitution is not a syntactic one: We are substituting one flesh-and-blood horse for another in a proposition, construed as a complex but nonlinguistic entity. The developed formalization of the theory, of course, has provisions for substitution of names in the sentences expressing propositions!)

Since the place of a variable for propositional functions is taken by two variables for entities (one for a proposition, one for a designated argument), and a variable for a higher-level propositional function taking **first-level** propositional functions will similarly be replaced by three variables and so on, this theory gives the effect of a typed theory of propositional functions: When references to and quantifications over propositional functions are replaced with terms and variables for propositions and other entities recognized by the theory in the way sketched, it will be impossible to say that a higher-level item serves as an argument for a lower! (Russell described the theory in his 1906 essay, which he withdrew before publication.

Remarkably, essentially the same system was developed again, apparently independently, several decades later. For discussions of Russell’s theory and his reasons for abandoning it, see Peter Hylton [1980] and Gregory Landini [1998].)

Russell took propositions and propositional functions to be the objects of cognitive attitudes and the meanings of linguistic expressions as well as the fundamental objects of mathematics.

In trying to formulate a **general theory** of entities that could serve all these functions, he confronted not only the set-theoretic paradoxes but also those now classed as semantic or intentional, and they drove him to an even more restrictive form of type theory.

This was the Ramified Theory of Types, first presented in Russell (1908) (a paper largely recycled in the introduction to Principia Mathematica two years later). On this theory there is a kind of double hierarchy.

Propositional functions are classified not only by the arguments they take but also by the conceptual resources that go into their definitions: A propositional function can only have arguments of certain lower levels, but two propositional functions taking exactly the same arguments may be of different types if in formulating them (it is best, here, to think of the functions as the meanings of open formulas) one quantifies over entities of a different level.

Start with a domain of nonabstract or **nonconceptual entities** as a bottom level. The level of a propositional function will be at least one higher than that of its argument (or, in the case of a relational function, the highest level of its arguments).

It will only have this minimum level, however, if no quantified variables are used in its formulation that range over entities of a higher level than its arguments, and the general rule is that the level of a propositional function is one greater than the highest level of its arguments or of the entities quantified over in its formulation, whichever is higher. (Propositions, since they do not take arguments, formed a single type on his earlier approaches, but the Ramified Theory divides them into a hierarchy based on the quantifications involved in them. This makes possible a quick dissolution of many semantic paradoxes: When Epimenides says that every proposition asserted by a Cretan is false, he asserts a proposition of a higher level than those he quantifies over, and so his assertion does not cover the proposition he himself has asserted.)
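A standard illustration, in the spirit of Russell’s “great general” example (the predicate letters B and G here are hypothetical): both of the following propositional functions take individuals (level 0) as their only arguments, yet they receive different levels.

```latex
% "x is brave": no quantification over higher entities, so level 1.
B(x)

% "x has every level-1 quality of a great general": quantifies over
% level-1 functions, so it is itself of level 2.
\forall F^{1} \bigl(\, G(F^{1}) \rightarrow F^{1}(x) \,\bigr)
```

On the Ramified Theory the second function, despite applying to individuals, cannot be among the values of the variable F¹ occurring in its own formulation; this is how the theory blocks the vicious circles.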

The Ramified Theory, though notationally complicated, has a perspicuous semantic interpretation which makes its ontological commitments seem fairly innocent: Kurt Gödel, in a note added to reprintings of his 1944 essay, speaks of it as embodying a strictly nominalistic (or strictly antirealistic) kind of constructivism about abstract entities. From a mathematical point of view, it is a very weak theory:

When it is supplemented by an **Axiom of Infinity** (stating that there are infinitely many objects of the lowest level), it suffices to derive a certain portion of elementary number theory, but only a restricted portion. In order to provide a foundation for classical mathematics, Russell added the Axioms of Reducibility.

These maintained the type distinctions of the Ramified Theory (allowing Russell to appeal to them in dealing with the semantic and intentional paradoxes) but postulated the existence of enough predicative propositional functions (functions, that is, of the lowest possible level for their arguments) to provide a model of the mathematically stronger Simple Theory of Types, and the mathematical work of Principia Mathematica is then essentially conducted in the Simple Theory. (The clearest account of Ramified Type Theory and its use in analyzing the paradoxes is in Church [1976], to which sections 58 and 59 of Church [1956] can serve as an introduction.)

In the early 1920s two alternatives to the **Ramified Theory** were proposed. One was described in the introduction and appendices Russell wrote for the second edition of Principia Mathematica (1925). It was noted by Gödel in 1944, but otherwise seems to have been ignored until the 1990s. On this theory the two factors in the Ramified Theory’s classification are separated.

Each function has a simple type depending only on the arguments it takes, and also a ramification level determined by what entities are quantified over in its formulation. A function of higher simple type (one, that is, that can take functions as arguments) can be affirmed of any argument of the appropriate simple type, even an argument whose ramification level is higher than its own.

Each quantified variable for propositional functions, however, is restricted to range only over functions of a certain ramification level. Gödel (1944) noted that this system was acceptable to the same nominalistic constructivism as Ramified Type Theory.

One way of making this precise is that, as shown in A. P. Hazen and **J. M. Davoren** (2001), the 1925 system, like the Ramified system, can be given a semantics on which quantification over objects other than the basic, bottom-level, ones is interpreted substitutionally.

In Appendix B to the second edition of Principia Mathematica, Russell gave what he claimed was a derivation of the principle of mathematical induction in his new system, but the proof contains an essential error. Landini (1996) gives a correct proof of induction but uses an additional extensionality axiom that is not valid on the nominalistic semantics.

The exact mathematical strength of the 1925 system, supplemented by an Axiom of Infinity, is not clear: It will not suffice for the full strength of (first-order) Peano Arithmetic, but it may yield a richer fragment of it than the Ramified system.

At about the same time, F. P. Ramsey (1925) proposed abandoning ramification altogether, giving a formulation of the Simple Theory of Types. On this view, the basic objects, or individuals, form one type and the types of other entities are defined exclusively by the types of arguments they can take:

Properties of individuals will form one type, properties of properties of individuals another, relations between individuals and properties of individuals a third, and so on.

The theory need not be extensional: It can allow distinctions between properties holding of exactly the same objects, and both **Aldo Bressan** (1972) and Montague (cf. Daniel Gallin 1976) developed versions based on modal logic, the first seeking applications in the formalization of physical theory and the second a variety of semantic and conceptual analyses.

For mathematical purposes Ramsey assumed extensionality; on this assumption, propositional functions of a single argument amount to sets, those of more than one to relations-in-extension. The resulting system is described (and compared with the Ramified Theory) in sections 34–36 of W. V. Quine (1969).

Obviously, Ramsey and those who have followed him have abandoned Russell’s attempt to deal with the semantic and intentional paradoxes through type distinctions. Their view was that the set-theoretic paradoxes were adequately handled by the **Simple Theory** of Types, while the others essentially involved other concepts—semantic or cognitive/epistemological—and so were properly dealt with by separate theories.

Alfred Tarski showed that the semantic paradoxes could be avoided by invoking a doctrine of levels of language (clearly foreshadowed by Russell at the end of his introduction to Ludwig Wittgenstein’s essay [1921]).

Russell seems to have thought the intentional paradoxes were best handled by assimilating them to semantic paradoxes through a kind of language of thought idea, which he discussed in Appendix C to the second edition of Principia Mathematica.

The extensional Simple Theory of Types, without an axiom of infinity, was proven consistent by **Gerhard Gentzen** (1936) (one of the successes of Hilbert’s Program!). With an axiom asserting the existence of infinitely many individuals, it becomes a usable system of set theory, strong enough to derive most of the mathematics actually used in the natural sciences.

As such it was taken as standard by many researchers between the publication of Principia Mathematica and the 1930s: Gödel (1931) and Tarski (1935) both assume it as their background system.

Subsequent set theorists have preferred other axiomatizations, such as Zermelo-Fraenkel set theory, but (as described in sections 37–38 of Quine [1969]), they can be seen as natural generalizations of Simple Type Theory. To get from a system like Ramsey’s to one like Zermelo’s, one makes five changes:

- abandoning relational types (reducing relations to sets by using the Wiener-Kuratowski analysis of ordered pairs),
- abandoning the many-sorted formal language, with its separate alphabets of variables ranging over different types of entity, in favor of a description of the whole hierarchy in a first-order language with a single sort of variable,
- making the hierarchy cumulative, so a set can have members of any lower level rather than being restricted to members of the immediately preceding level,
- allowing sets of infinitely high level, which can have members of all finite levels,
- reformulating the axioms to give an elegant systematization of the new framework.

The fifth is not trivial: The resulting systems are stronger than the Type Theory we started with, and would be even if we left out the infinite types. Conceptually, however, the generalized type-theoretic way of thinking about set theory is very satisfying. The stages of George Boolos (1971) are very reminiscent of Russellian types.
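The Wiener-Kuratowski analysis invoked in the first change identifies ordered pairs with pure sets; in Kuratowski’s (simpler) version:

```latex
\langle a, b \rangle \;=_{df}\; \{\{a\},\{a,b\}\}
```

This definition has the one property required of ordered pairs—⟨a,b⟩ = ⟨c,d⟩ iff a = c and b = d—so a two-place relation can be taken to be simply a set of such pairs, and separate relational types become redundant.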

Church (1940) makes a different generalization. As Russell’s term propositional function suggests, a property can be thought of as a function mapping arguments (of appropriate type) to propositions (or, assuming extensionality, to truth values).

Church assumes two basic types, of individuals and truth values, and represents properties as functions from entities of some type to truth values, and then adds types for other kinds of function: Thus, there is a type of functions from individuals to individuals, a type of functions from individuals to (functions from individuals to individuals), and so on.

The formal language embodying this conception is based on a typed version of Church’s lambda calculus, an elegant notation for the representation of **recursive functions**. (Montague’s intensional logic, mentioned above, is essentially a modal version of Church’s system.)
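A minimal sketch of Church’s functional hierarchy in TypeScript, with `boolean` standing in for the type o of truth values and `number` for the type of individuals (the type and function names here are illustrative, not Church’s notation):

```typescript
// Church's two basic types, sketched with stand-ins:
type O = boolean; // o: truth values
type I = number;  // individuals

// A property of individuals is a function from individuals to truth
// values; a second-level property takes such functions as arguments.
type Property = (x: I) => O;
type Property2 = (p: Property) => O;

// Other function types exist alongside these, e.g. functions from
// individuals to individuals.
type Successor = (x: I) => I;

const isEven: Property = (x) => x % 2 === 0;
const holdsOfZero: Property2 = (p) => p(0);
const successor: Successor = (x) => x + 1;
```

Here `holdsOfZero(isEven)` is well-typed, while `isEven(isEven)`—the analogue of the self-application behind Russell’s Paradox—is rejected by the type checker.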

All the type theories mentioned so far have been based on **classical logic** (or modal extensions thereof). The considerations that motivate intuitionistic logic are independent of those leading to type theories (recall that Russell’s Paradox works in essentially the same way in intuitionistic as in classical logic!), and variants of all of these systems based on intuitionistic logic are possible.

They have received some study under the name theory of species. Joachim Lambek and P. J. Scott (1986) present what is essentially an intuitionistic variant of the system of Church (1940), showing that it has natural connections with mathematical category theory.

Per Martin-Löf, with greater philosophical attention to intuitionistic concerns about the meaningfulness of mathematical assertions, has developed fragments of intuitionistic type theory in a series of publications, with Martin-Löf (1984) serving as a summary of his work to that point.

Since intuitionistic proofs often provide information that can be used to define algorithms, there has been considerable interest in Martin-Löf ’s and similar systems among **computer scientists**; Thompson (1991) provides an introduction to the systems and their applications.
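The connection can be sketched as follows (a hypothetical illustration in TypeScript, not Martin-Löf’s own notation): under the propositions-as-types reading used in these systems, a constructive proof of an existential statement is a program that computes a witness.

```typescript
// Propositions-as-types, minimally: a proof of an existential
// statement carries the witness it asserts to exist.
type Exists<T> = { witness: T };

// A constructive proof of "for every n there is an even m with m > n"
// is literally a function computing such an m from n.
function evenAbove(n: number): Exists<number> {
  const m = n % 2 === 0 ? n + 2 : n + 1; // the proof's computational content
  return { witness: m };
}
```

Algorithm extraction in systems like Martin-Löf’s generalizes this pattern to the full language of dependent types.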

The area is one of active research by logicians, and efforts to develop more powerful and general theories have encountered difficulties, of a kind that would have been familiar to early twentieth-century researchers in the foundations of mathematics, as witnessed by Thierry Coquand (1994).