# Simply typed lambda calculus


The simply typed lambda calculus ($\lambda^\to$) is a typed interpretation of the lambda calculus with only one type constructor: $\to$ (the function type). It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical uses of the untyped lambda calculus, and it exhibits many desirable and interesting properties.

The term "simply typed" is also used to refer to extensions of the simply typed lambda calculus with products, coproducts, or natural numbers (System T), or even full recursion (as in PCF). In contrast, systems which introduce polymorphic types (like System F) or dependent types (like the Logical Framework) are not considered "simply typed". The former are still considered "simple" because the Church encodings of such structures can be done using only $\to$ and suitable type variables, while polymorphism and dependency cannot.

## Syntax

$e ::= x \mid \lambda x{:}\tau.~e \mid e~e$

That is, variable references, abstractions, and applications. A variable reference $x$ is "bound" if it occurs inside an abstraction binding $x$, and "free" otherwise. A term is "closed" if it has no free variables.

It is possible to define the syntax as in the untyped lambda calculus, with unannotated $\lambda$ abstractions. Since type inference (or type reconstruction) is tractable for the simply typed lambda calculus, the choice does not matter from a "typeability" perspective. The type system for the annotated version, however, has a direct algorithmic interpretation, while type-checking unannotated or partially annotated terms requires a more complex algorithm. For more information, see the article on Algorithm W.

The syntax of types is:

$\tau ::= \alpha \mid \tau \to \tau$

We take $\alpha$ to range over "type variables" drawn from a predetermined set, while we reserve $\sigma$ and $\tau$ as metavariables, meaning "some type". The "function type" $\sigma \to \tau$ refers to the set of functions that, given an input of type $\sigma$, produce an output of type $\tau$.

By convention, $\to$ associates to the right: we read $\sigma \to \tau \to \rho$ as $\sigma \to (\tau \to \rho)$.

Each type $\sigma$ is assigned an order, a number $o(\sigma)$. For base types, $o(\alpha) = 0$; for function types, $o(\sigma \to \tau) = \max(o(\sigma) + 1, o(\tau))$. That is, the order of a type measures the depth of the most deeply left-nested arrow.

## Semantics

There are two distinct presentations of simply typed lambda calculus. In "Curry-style" semantics, the operational semantics comes first, and the static semantics (typing rules) comes second. In "Church-style" semantics, the static semantics is given first; the operational semantics apply "only" to terms that are accepted by the static semantics.

Note that this distinction is not precisely the same as that between unannotated and annotated abstractions. It is possible to give a Curry-style semantics using annotated lambdas: the operational semantics simply ignore types altogether.

Beyond these distinctions, there are then further variations based on calling conventions and evaluation strategy (call by name, call by value, etc.) and presentation (big-step, small-step, etc.).

## Typing rules

To define the set of well-typed lambda terms of a given type, we will define a typing relation between terms and types. First, we introduce "typing contexts" $\Gamma, \Delta, \dots$, which are sets of typing assumptions. A "typing assumption" has the form $x : \sigma$, meaning $x$ has type $\sigma$.

The "typing relation" $\Gamma \vdash e : \sigma$ indicates that $e$ is a term of type $\sigma$ in context $\Gamma$. It is therefore said that "$e$ is "well-typed" (at $\sigma$)". A "typing judgment" is derived via the following rules, wherein the premises above the line allow us to derive the conclusion below the line:

$\dfrac{x : \sigma \in \Gamma}{\Gamma \vdash x : \sigma}\ (1) \qquad \dfrac{\Gamma, x : \sigma \vdash e : \tau}{\Gamma \vdash \lambda x{:}\sigma.~e : \sigma \to \tau}\ (2) \qquad \dfrac{\Gamma \vdash e_1 : \sigma \to \tau \quad \Gamma \vdash e_2 : \sigma}{\Gamma \vdash e_1~e_2 : \tau}\ (3)$

In other words,
1. If $x$ has type $\sigma$ in the context, we know that $x$ has type $\sigma$.
2. If, in a certain context with $x$ having type $\sigma$, $e$ has type $\tau$, then, in the same context without $x$, $\lambda x{:}\sigma.~e$ has type $\sigma \to \tau$.
3. If, in a certain context, $e_1$ has type $\sigma \to \tau$ and $e_2$ has type $\sigma$, then $e_1~e_2$ has type $\tau$.

One can construe these judgments not merely as relations between terms and types, but as the construction of the terms themselves. This coincides with Church-style semantics, as it is then only possible to construct well-typed terms.

Examples of closed terms, i.e. terms typable in the empty context, are:
* $\lambda x{:}\alpha.~x : \alpha \to \alpha$ (the I-combinator/identity function),
* $\lambda x{:}\alpha.~\lambda y{:}\beta.~x : \alpha \to \beta \to \alpha$ (the K-combinator), and
* $\lambda x{:}\alpha \to \beta \to \gamma.~\lambda y{:}\alpha \to \beta.~\lambda z{:}\alpha.~x~z~(y~z) : (\alpha \to \beta \to \gamma) \to (\alpha \to \beta) \to \alpha \to \gamma$ (the S-combinator).

These are the typed lambda calculus representations of the basic combinators of combinatory logic.

In the original presentation, Church used only two base types: $o$ for the type of propositions and $iota$ for the type of individuals. Frequently the calculus with only one base type, usually $o$, is considered.

Terms of the same type are identified up to $\beta\eta$-equivalence, which is generated by the equations $(\lambda x{:}\sigma.~t)~u =_\beta t[x := u]$, where $t[x := u]$ stands for $t$ with all "free" occurrences of $x$ replaced by $u$, and $\lambda x{:}\sigma.~t~x =_\eta t$, provided $x$ does not appear free in $t$. The simply typed lambda calculus (with $\beta\eta$-equivalence) is the internal language of Cartesian closed categories (CCCs); this was first observed by Lambek.

## Typing without annotations

In the unannotated syntax, lambda abstractions carry no type information. As a consequence, terms can have more than one type. If we consider the type variables to be implicitly universally quantified, then it is possible to find a type more general than the others, called the "principal type".

For example, the identity function $I = \lambda x.~x$ can have many types: $\alpha \to \alpha$, say, or $(\alpha \to \alpha) \to (\alpha \to \alpha)$, and so on. Without quantifiers in our type language (as System F has), it is impossible to say what type $I$ "really" has. Extensions of the simply typed lambda calculus can add support for "polymorphism": terms that can have multiple types. System F, for example, adds general abstraction over types: $I_F = \Lambda\alpha.~\lambda x{:}\alpha.~x$ takes a "type" $\alpha$ and returns the identity function "at that type". The term $I_F$ has type $\forall\alpha.~\alpha \to \alpha$.

If one considers the unannotated lambda calculus extended with constants, such as integers, booleans, and tuples, the problem of typing becomes easier: $\lambda x.~x + 1$ might easily be determined to have type $\texttt{int} \to \texttt{int}$. The problem above does not go away entirely, though: what type does $\lambda x.~1$ have? There are infinitely many ($\texttt{bool} \to \texttt{int}$, $\texttt{int} \to \texttt{int}$, etc.). The principal type of the term is $\forall\alpha.~\alpha \to \texttt{int}$.

## General observations

The simply typed lambda calculus is closely related to propositional intuitionistic logic using only implication ($\to$) as a connective (minimal logic) via the Curry-Howard isomorphism: the types inhabited by closed terms are precisely the tautologies of minimal logic.

Given the standard semantics, the simply typed lambda calculus is strongly normalizing: that is, well-typed terms always reduce to a value, i.e., a $\lambda$ abstraction. This is because recursion is not allowed by the typing rules: it is impossible to find types for fixed-point combinators or for the looping term $\Omega = (\lambda x.~x~x)~(\lambda x.~x~x)$. Recursion can be added to the language either by introducing a special operator $\texttt{fix}_\alpha$ of type $(\alpha \to \alpha) \to \alpha$ or by adding general recursive types, though both eliminate strong normalization.

Since it is strongly normalizing, it is decidable whether or not a simply typed lambda calculus program halts: it does! We can therefore conclude that the language is "not" Turing complete, since any Turing-complete language has an undecidable halting problem.

## Important results

* Tait showed in 1967 that $\beta$-reduction is strongly normalizing. As a corollary, $\beta\eta$-equivalence is decidable. Statman showed in 1977 that the normalisation problem is not elementary recursive. A purely semantic normalisation proof (see normalisation by evaluation) was given by Berger and Schwichtenberg in 1991.
* The unification problem for $\beta\eta$-equivalence is undecidable. Huet showed in 1973 that third-order unification is undecidable; this was improved upon by Baxter in 1978 and then by Goldfarb in 1981, who showed that second-order unification is already undecidable. Whether higher-order matching (unification where only one term contains existential variables) is decidable is still open. [2006: Colin Stirling, Edinburgh, has published a proof-sketch in which he claims that the problem is decidable; however, the complete version of the proof is still unpublished.]
* We can encode natural numbers by terms of the type $(o \to o) \to (o \to o)$ (Church numerals). Schwichtenberg showed in 1976 that in $\lambda^\to$ exactly the extended polynomials are representable as functions over Church numerals; these are roughly the polynomials closed under a conditional operator.
* A "full model" of $\lambda^\to$ is given by interpreting base types as sets and function types by the set-theoretic function space. Friedman showed in 1975 that this interpretation is complete for $\beta\eta$-equivalence if the base types are interpreted by infinite sets. Statman showed in 1983 that $\beta\eta$-equivalence is the maximal equivalence which is "typically ambiguous", i.e. closed under type substitutions ("Statman's Typical Ambiguity Theorem"). A corollary of this is that the "finite model property" holds, i.e. finite sets are sufficient to distinguish terms which are not identified by $\beta\eta$-equivalence.
* Plotkin introduced logical relations in 1973 to characterize the elements of a model which are definable by lambda terms. In 1993, Jung and Tiuryn showed that a general form of logical relation (Kripke logical relations with varying arity) exactly characterizes lambda definability. Plotkin and Statman conjectured that it is decidable whether a given element of a model generated from finite sets is definable by a lambda term (the "Plotkin-Statman conjecture"). The conjecture was shown to be false by Loader in 1993.
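Church numerals and their arithmetic can be illustrated directly in Python, where a numeral $n$ is the function taking $f$ to its $n$-fold composition. Addition and multiplication below are examples of the extended polynomials mentioned above; the helper names are illustrative.

```python
# Church numeral n: takes f and returns the n-fold composition of f.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def to_int(c):
    """Read a numeral back by applying it to the successor on machine ints."""
    return c(lambda k: k + 1)(0)

# Addition m + n: apply f m more times on top of n's result.
def add(m, n):
    return lambda f: lambda x: m(f)(n(f)(x))

# Multiplication m * n: compose "apply f n times" m times.
def mul(m, n):
    return lambda f: m(n(f))
```

In the simply typed setting these numerals live at the single type $(o \to o) \to (o \to o)$; Schwichtenberg's result says that the functions definable on them are exactly the extended polynomials, of which `add` and `mul` are the basic cases.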

## References

* A. Church: A Formulation of the Simple Theory of Types, JSL 5, 1940
* W.W.Tait: Intensional Interpretations of Functionals of Finite Type I, JSL 32(2), 1967
* G.D. Plotkin: Lambda-definability and logical relations, Technical report, 1973
* G.P. Huet: The Undecidability of Unification in Third Order Logic, Information and Control 22(3): 257-267 (1973)
* H. Friedman: Equality between functionals. Logic Colloquium '73, pages 22-37, LNM 453, 1975.
* H. Schwichtenberg: Functions definable in the simply-typed lambda calculus, Arch. Math Logik 17 (1976) 113-114.
* R. Statman: The Typed lambda-Calculus Is not Elementary Recursive FOCS 1977: 90-94
* W. D. Goldfarb: The undecidability of the 2nd order unification problem, TCS (1981), no. 13, 225- 230.
* R. Statman: $\lambda$-definable functionals and conversion. Arch. Math. Logik, 23:21-26, 1983.
* J. Lambek: Cartesian Closed Categories and Typed Lambda-calculi. Combinators and Functional Programming Languages 1985: 136-175
* U. Berger, H. Schwichtenberg: An Inverse of the Evaluation Functional for Typed lambda-calculus LICS 1991: 203-211
* A. Jung, J. Tiuryn: A New Characterization of Lambda Definability, TLCA 1993
* R. Loader: [http://homepages.ihug.co.nz/~suckfish/papers/Church.pdf The Undecidability of $\lambda$-definability], appeared in the Church Festschrift, 2001
* H. Barendregt, [ftp://ftp.cs.ru.nl/pub/CompMath.Found/HBK.ps Lambda Calculi with Types] , Handbook of Logic in Computer Science, Volume II, Oxford University Press, 1993. ISBN 0198537611.
* L. Baxter: The undecidability of the third order dyadic unification problem, Information and Control 38(2), 170-178 (1978)

## See also

* Hindley-Milner type inference algorithm

