Programming in Dependent Type Theory
All code in this article was written in the Lean theorem prover, which means you can copy any of it and paste it into Lean's online editor to try it out.
Arithmetic and Interaction
While Lean is nominally an interactive theorem prover, much of the power of type theory comes from the fact that you can treat it like a programming language.
There are a few different commands for interacting with Lean.
The one I make the most use of is the `check` command, which prints out the type of the expression following it. So, for example, `check 3` outputs `num`, and `check (3 : nat)` outputs `nat`.
We can also make definitions:

```lean
definition five : num := 3 + 2
```

This declares a new constant `five` of type `num`, to which we give the value `3 + 2`.
We can define functions with similar syntax:

```lean
definition num.const₀ : num → num → num := λ x y, x -- this is a comment
-- Lean can infer types whenever they're unique
definition num.const₁ := λ (x y : num), x
-- we can also name the arguments in the definition rather than the function body
definition num.const₂ (x y : num) := x
```
The definition of polymorphic functions becomes the first point where we get a hint about what makes programming in dependent type theory different from, say, Haskell. In dependent type theory, the term and type languages are unified, so in order to write a polymorphic function we must take the type as an argument.
```lean
definition poly_id (A : Type) := λ (a : A), a
-- or, equivalently
definition poly_id₁ := λ A (a : A), a
-- applied to arguments
check poly_id num 1 -- num
check poly_id (num → num → num) num.const₀ -- num → num → num
```
Exercise: write a polymorphic version of `num.const₀`.
Having to explicitly indicate types everywhere is a pain. In order to get around that, most proof assistants provide support for implicit arguments, which let you leave out arguments that only have one valid value. In Lean, the syntax for implicits looks like this:
```lean
definition id {A : Type} := λ (a : A), a
```
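With the argument in braces, Lean tries to infer `A` from context at each use site. A quick sketch of how this plays out (assuming this `id` shadows any built-in one; the `@` prefix, which turns implicit arguments back into explicit ones, is standard Lean but not introduced above):

```lean
check id 3      -- num, with A inferred to be num
check @id num   -- num → num, with A supplied explicitly
```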
Inductive types
Of course, none of this would be that useful if we couldn’t define new types. There are lots of ways to craft new types in dependent type theory, but among the most fundamental is the creation of inductive types.
To define a new inductive type, you give a list of constructor tags, each associated with a type representing the arguments it takes. The simplest ones are just enumerations. For example, the days of the week:
```lean
inductive weekday : Type :=
| mon : weekday
| tue : weekday
| wed : weekday
| thu : weekday
| fri : weekday
| sat : weekday
| sun : weekday
```
This creates a new type `weekday`, and seven new constants (`weekday.mon`, `weekday.tue`, `weekday.wed`, …) of type `weekday`.
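We can confirm this with the `check` command from earlier:

```lean
check weekday.mon -- weekday
check weekday     -- Type
```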
If you’re familiar with Haskell, you’ll notice that this looks an awful lot like a GADT declaration.
Just like in Haskell, we can parametrize our types over other types, making new types like `either`:
```lean
inductive either (A B : Type) : Type :=
| inl {} : A → either A B
| inr {} : B → either A B
```
From this declaration, we get a new constant `either : Type → Type → Type`. This represents the union of the types `A` and `B`: the type of values that belong either to `A` or to `B`.
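To build a value of this type we apply one of the constructors. Since `inl` only mentions `A`, a type ascription helps Lean infer the implicit `B` (this sketch reuses the `num.const₀` definition from earlier):

```lean
check (either.inl 3 : either num (num → num → num))
-- either num (num → num → num)
check (either.inr num.const₀ : either num (num → num → num))
-- either num (num → num → num)
```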
We can also define recursive types, such as the natural numbers:

```lean
inductive nat : Type :=
| zero : nat
| succ : nat → nat
```
The easiest way to define functions over `nat`s is recursively. For example, we can define addition as:

```lean
definition add (n : nat) : nat → nat
| nat.zero := n
| (nat.succ m) := nat.succ (add m) -- n is constant at every recursive call
```
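To actually compute with `add`, we can use Lean's `eval` command (a sibling of `check`, not introduced above, which normalizes an expression instead of reporting its type). For instance, 1 + 2 written out with our constructors:

```lean
eval add (nat.succ nat.zero) (nat.succ (nat.succ nat.zero))
-- reduces to nat.succ (nat.succ (nat.succ nat.zero)), i.e. 3
```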
Bringing both of these together, we can define the type of linked lists:

```lean
inductive list (A : Type) : Type :=
| nil {} : list A
| cons : A → list A → list A
```
We can also define functions over lists by pattern matching:

```lean
definition map {A B : Type} (f : A → B) : list A → list B
| list.nil := list.nil
| (list.cons x xs) := list.cons (f x) (map xs) -- f is constant at every recursive call
```
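As a small sketch of `map` in use (the name `small` is just for illustration), we can bump every element of a list of `nat`s with the successor constructor:

```lean
definition small : list nat :=
  list.cons nat.zero (list.cons (nat.succ nat.zero) list.nil)

check map nat.succ small -- list nat
```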
Exercise: write `foldr` and `foldl` by pattern matching. For comparison, here is one way to write `foldl`:

```lean
definition foldl {A B : Type} (r : B → A → B) : B → list A → B
| b list.nil := b
| b (list.cons x xs) := foldl (r b x) xs
```
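Assuming a `foldl` like the one above and the `add` function from earlier, folding `add` over a list sums it:

```lean
definition sum : list nat → nat := foldl add nat.zero
```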