Δ ℚuantitative √ourney

Science, Math, Statistics, Machine Learning ...

Jul 04, 2022

Learning Lean (Part 1)

I'm going to be starting a series of blog posts on Lean 4, a dependently typed functional programming language and theorem prover. If none of that sounds familiar, no problem, we'll get to that. First, Lean 4 has not been officially released as a 1.0, so breaking changes to the code in these posts may occur, but I'll try to update from time to time. The current stable release of Lean is Lean 3.

Let me also preface that these blog posts, although meant to be instructive, mirror my own learning of Lean. I've been playing with Lean off and on since April 2020, but I am not an expert at Lean or functional programming in general, so there will likely be instances of me writing non-idiomatic Lean or doing things in less than the most efficient way. Also, my goal in learning Lean was mostly to learn to formalize mathematics rather than to do functional programming, so these posts will focus on the mathematical side of Lean. As for prerequisites, anyone who can program in some programming language and has at least a high school mathematics background should be able to follow along.

Go ahead and get Lean installed by following the guide here: Lean Setup Guide.

There's also a lively online community for Lean where friendly individuals from around the world will answer your questions surprisingly quickly: https://leanprover.zulipchat.com/

What is Lean?

Lean is a programming language. More specifically, Lean is a functional programming language. Probably the most well-known (at least to me) functional programming language is Haskell. "Functional programming language" usually means a purely functional programming language. Many programming languages support functions as first-class citizens, allowing functions to be passed as arguments and returned as values, but a purely functional language requires that every function have a well-specified, static type signature and produce no side effects within its body (e.g. writing to disk, playing a sound file, etc.). These are called pure functions. Moreover, the only data a pure function has access to is what gets explicitly passed to it as an argument. And every expression in Lean has a type, including types themselves.

The mandate that all functions be pure is quite onerous coming from typical imperative programming languages such as C or Python. It is no longer trivial in a purely functional language to do something like read an input string, do some computation, write to a log, and return a value. The benefit is that pure functions are completely predictable at compile time, preventing a whole class of bugs.
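
As a small sketch of my own (the names double and logAndGreet are just illustrative), here is roughly what this looks like in Lean: pure functions have plain types, while anything that performs side effects must advertise it with IO in its type:

-- A pure function: a fixed type signature, and the output depends only on the input.
def double (n : Nat) : Nat := n * 2

-- A side-effecting action is marked by IO in its type; printing is an effect.
def logAndGreet (name : String) : IO Unit :=
  IO.println s!"Hello, {name}!"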

Lean is not only a functional programming language, but a dependently-typed functional programming language. This means that types can depend on other types and values. For example, in Lean, you can define a type of list of integers of length 3, and thus if you try to construct an instance of that type by giving it 4 integers, it will fail. In essence, dependent types give you extreme expressiveness and granularity when defining new types.
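
For example, here is a minimal sketch of my own of such a type (a hypothetical Vec, using Lean's built-in Nat): a list whose length is part of its type.

inductive Vec (α : Type) : Nat → Type where
| nil : Vec α 0
| cons : α → Vec α n → Vec α (n + 1)

-- This term checks with type Vec Int 3; adding a fourth element would
-- instead produce a term of the different type Vec Int 4.
#check Vec.cons (1 : Int) (Vec.cons 2 (Vec.cons 3 Vec.nil))
--Output: Vec.cons 1 (Vec.cons 2 (Vec.cons 3 Vec.nil)) : Vec Int 3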

Why should you learn Lean?

I think Lean is going to be the next big thing in the functional programming world and hopefully the formal software verification and mathematics world. So if any of those things interest you, it may be worth learning Lean.

One goal for me in learning Lean is to learn proofs in mathematics. I do data analysis, statistics and machine learning so my understanding of mathematics is very applied and calculational. I want to get a flavor of pure mathematics and how mathematical structures are made and theorems are proved. Because Lean is a dependently typed pure functional programming language, it can be used to encode all of modern mathematics using its type theory in place of traditional set theory.

In any case, doing math in Lean is actually fun a lot of the time. It's like a programming and logic puzzle. So stop with the Sudoku and just prove theorems in Lean instead.

Natural Numbers

Before we can do much of anything in mathematics, we're going to need some numbers. Now Lean already has numbers defined, of course, but we will pretend it doesn't.

Let's define the natural numbers, that is, the set of numbers 0, 1, 2, 3 ..., or the counting numbers.

namespace Tutorial

inductive Nat : Type where 
| zero : Nat
| succ : Nat → Nat

end Tutorial

The namespace section is what it sounds like and works similarly to namespaces in other programming languages. Outside the namespace, you have to refer to identifiers inside it by prefixing them as [namespace name].[identifier] (without the brackets). We enclose our code in this namespace to avoid name clashes, because we are defining types and functions that already exist in Lean under the same names.
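
As a tiny sketch of my own (using a hypothetical Demo namespace), the prefixing works like this:

namespace Demo
def greeting : String := "hello"
end Demo

#check Demo.greeting
--Output: Demo.greeting : String
-- #check greeting would be an unknown identifier error here, unless we first open Demo.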

First, the keyword inductive means we are defining a new (inductive) type. In Lean's type theory, there are basically just two kinds of types, inductive types and function types, so we will use inductive a lot.

After the keyword inductive comes the name of the type we are defining, in this case, we are calling it Nat, for natural numbers. After the name of the type comes a colon and the keyword Type.

Recall that every expression in Lean must have a type, so even the new type we are defining must itself be assigned a type. In general, new inductive types will by default be assigned the type Type, which is an alias for Type 0. And since Lean must have an answer if you ask it what the type of Type is, it creates an infinite hierarchy of types: the type of Type is Type 1, the type of Type 1 is Type 2, and so on (written Type : Type 1, Type 1 : Type 2, etc.).

You can ask Lean what the type of an expression is by using the #check command.

#check Nat
--Output: Nat : Type
#check Type
--Output: Type : Type 1
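
The hierarchy keeps going up from there (a couple of quick checks of my own):

#check Type 1
--Output: Type 1 : Type 2
#check Type 2
--Output: Type 2 : Type 3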

Now Lean tries to be smart and save you keystrokes, so whenever possible it will infer types when it can do so unambiguously. So it is also fine to declare our Nat type without explicitly assigning its type:

-- This also works
inductive Nat where
| zero : Nat
| succ : Nat → Nat

Two dashes start a comment line, and a multi-line comment can be delimited using

/-
Multi
line 
comment
-/

Back to our new type.

inductive Nat : Type where 
| zero : Nat
| succ : Nat → Nat

After assigning Nat the type Type comes the keyword where, which is also optional, as this is also valid:

inductive Nat 
| zero : Nat
| succ : Nat → Nat

I guess where is mostly of descriptive use, as you can translate the type declaration into English as "We are making a new inductive type named Nat of type Type, where zero is declared to be of type Nat and succ is a function that maps values of type Nat to values of type Nat."

Back to the more verbose type declaration:

inductive Nat : Type where 
| zero : Nat
| succ : Nat → Nat

So after the where keyword, we start a new line beginning with the pipe | character. Each line beginning with | is called a constructor since these lines specify how to construct terms (aka elements, values or members) of the new inductive type.

First, we invent a name for a value of type Nat, in this case I am declaring that the string of characters zero is hereby defined to be a term (or value) of type Nat.

We could stop here and we'd have a new (inductive) type with a single value, but that wouldn't give us the numbers, of which we expect there to be infinitely many.

Next, we start a new line beginning with a pipe, and this time, instead of declaring a new term of type Nat, we declare the first function that operates on terms of type Nat. We call this function succ (short for successor). We know that succ is a function and not a plain term because we assign it the type Nat → Nat after the colon. Whenever you see the pattern some type → some type, you're looking at a function type.
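
For example (a quick check of my own), asking Lean for the type of succ itself shows exactly such a function type:

#check Nat.succ
--Output: Nat.succ : Nat → Nat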

Functions are programs that map terms of one type to terms of another (or the same) type. In this case, succ is a function that does not compute anything; in fact, it doesn't do anything at all. All we can do with it is apply it.

#check Nat.succ (Nat.succ Nat.zero)
--- Output: Nat.succ (Nat.succ Nat.zero) : Nat

As you can see, we apply a function by writing the function name, a space, and then the argument term (value). To avoid ambiguity, we must use parentheses when applying a function multiple times. Declaring the type Nat also creates a namespace Nat, so we must prefix references to the value zero or the function succ with Nat., e.g. Nat.zero.

We can save keystrokes by opening the namespace.

open Nat
--- Now we can do this
#check succ (succ zero)
--- Output: succ (succ zero) : Nat

Now we have defined the natural numbers 0, 1, 2, ... and so on, written in our new type as zero, succ (zero), succ (succ (zero)), and so on. Obviously, writing numbers as repeated function applications is not as convenient as using our normal numerals 1, 2, etc. There is a way to map numerals to our more verbose natural numbers, but we will wait a while before doing that.
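
As a stopgap of my own (the proper numeral mapping comes later), we could at least name a few small numbers to make examples easier to read:

def one : Nat := succ zero
def two : Nat := succ one
def three : Nat := succ two

#reduce three
--Output: succ (succ (succ zero))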

When I first understood what was going on in this very simple inductive type, it was quite profound. By declaring this empty function succ and applying it to the only "real" term of type Nat, we get an infinite set of new terms of type Nat. Any string of characters that fits the pattern succ x, where x is either zero or itself of the pattern succ (succ ... zero), is also of type Nat.

You can think of a type as specifying a pattern of characters, a type checker as checking whether some expression matches a particular pattern, and a value or term as just an expression that matches a particular type's pattern. So succ (succ zero) is of type Nat because that pattern of characters matches the pattern we defined as the type Nat.

Let's define our first function on our new Nat type. In Lean, all functions are pure, as we discussed earlier, and they are also total. A total function is one where every possible input term gets mapped to an output term. That means that for a function of type Nat → Nat, every natural number must get mapped to some output term that is also a natural number; you cannot leave some input terms undefined. In mathematics, if we treat division as a function, we say that \(\frac{x}{0}\) is undefined. In Lean, that is not acceptable: even \(\frac{x}{0}\) must be defined.
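
For instance (a quick check of my own, using Lean's built-in natural numbers rather than our Tutorial version), built-in natural-number division simply defines division by zero to return 0, which keeps it total:

#eval 5 / 0
--Output: 0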

Total functions can sometimes be an onerous constraint when using Lean for non-mathematical purposes, especially as you have to prove to Lean that your function is total, so Lean does provide a way to define partial functions, but we will not address that yet.

Here's our first function:

def natId : Nat → Nat := fun x : Nat => x

First, we define a new function using the def keyword, followed by the name of our function, in this case natId. Then comes a colon, which indicates that we're going to assign a type, and after the colon we have the type Nat → Nat. Following that is the symbol :=, which is an assignment operator, and then the body of the function, which is fun x : Nat => x

This last part is called an anonymous function (or lambda function). An anonymous function is a function expression that is never given a name. As you can see, an anonymous function is declared using the fun keyword, followed by an identifier (there can be more than one) representing the input, then its type annotation, then the => symbol, followed by the function body, which in this case just returns the input x. Hence this defines the identity function, which does nothing but return its input unadulterated.
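
A quick aside of my own: since an anonymous function needs no name, we can also apply it on the spot:

#reduce (fun x : Nat => x) zero
--Output: zero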

One challenge when getting started with Lean is that Lean has a lot of syntactic sugar, so there are often multiple ways to express the same thing. Here are 3 other ways we could have defined the same identity function:

def natId2 : Nat → Nat :=
    fun a =>
        match a with
            | a => a

And:

def natId3 : Nat → Nat
| a => a

And:

def natId4 (a : Nat) : Nat := a

The first of these alternatives also uses an anonymous function but then pattern matches with match ... with. As we discussed, an inductive type is essentially defined by a set of base terms plus one or more functions that give patterns for building further terms of that type from the base terms. So, just as we construct a type by giving base terms and patterns over them, we define functions on a type by deconstructing its terms back into those patterns and mapping each deconstructed pattern to a term of the output type.

Notice that the variable a after fun represents the input term, and we can name it whatever we want. For functions with multiple inputs we introduce multiple input variables after fun.
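
For example, here is a sketch of my own of a two-input function (the name firstOfTwo is just illustrative), where both variables are listed after fun:

def firstOfTwo : Nat → Nat → Nat := fun a b => a

#reduce firstOfTwo zero (succ zero)
--Output: zero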

We can also check how Lean actually stores our definitions by using the #print command. Let's check the last of these alternatives, natId4:

#print natId4

/-    
Output: 
def Tutorial.natId4 : Nat → Nat := fun a => a
-/

As you can see, even though Lean lets us omit the explicit anonymous function, behind the scenes it is filling in the anonymous function for us.

Okay, moving on. Let's write a simple function that just subtracts one from any natural number.

def subOne : Nat → Nat :=
  fun a =>
    match a with
    | zero => zero
    | succ b => b

We define a function called subOne with the type Nat → Nat. We implement it by assigning it an anonymous function that takes an input variable a (which must be of type Nat, according to the function's type signature) and matches it against the patterns a term of type Nat can have: it is either the base term zero or of the form succ b, where b is just a placeholder for whatever is inside the succ application. We could also have used a in place of b, and Lean would be smart enough to figure out what we mean based on the context.

If the input a happens to be zero then we just return zero since natural numbers don't get any lower than zero. If the input a happens to be succ b with b being a term of type Nat then we return b, which effectively removes one application of succ and thus decrements the number by one.

There are no other possible patterns that a term of type Nat could match, and since we covered them both in our pattern match, Lean is satisfied that our function is total.

We can ask Lean to evaluate our function on the input zero.succ.succ (the number 2) by using the #reduce command.

#reduce subOne zero.succ.succ
--Output: succ zero

It works: if we subtract one from 2 we get 1. Notice that the expression zero.succ.succ is equivalent to succ (succ zero) but is easier to read since it avoids parentheses. Again, this is one challenge in learning Lean: there are many ways to do the same thing. But ultimately these are ways to save keystrokes and improve readability, at the expense of taking longer to learn.

We can also write a function where we explicitly name the inputs and then pattern match on them:

def subOne (a : Nat) : Nat :=
  match a with
  | zero => zero
  | succ b => b

In this style we name the inputs and annotate their types, and then we give the output type after the last free colon. (If you're following along in a single file, note that Lean won't let you declare subOne a second time; comment out the earlier version or give this one a different name.)
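
As a quick sanity check of my own, this version behaves exactly like the earlier one:

#reduce subOne zero.succ.succ.succ
--Output: succ (succ zero)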

Now let's define the addition function on natural numbers.

def add (a b : Nat) : Nat :=
  match b with
  | zero => a
  | succ c => succ (add a c)

We define our function add to take two inputs named a and b; both are of type Nat, so we can declare them together, separated by a space. The output type of our function is also Nat. We then pattern match on the second input b to define the function's computation.

If the second input b is zero, then that means we are dealing with a + 0 and that obviously just equals a, so we return a.

If b is greater than 0, i.e. of the form succ c where c is another natural number, then we add together a and c (and c = b - 1), then we apply succ to the result, which is the same as adding one.

In other words, we are recursively doing 1 + (a + (b - 1)). Because we are in a purely functional programming language, we do not have access to things like for loops or while loops. Any iterative computations must be done using recursive (self-referential) function calls.

When we compute 1 + (a + (b - 1)), Lean calls the add function again, with the first input a unchanged and the second input equal to b - 1. It keeps recursively calling itself, decrementing the second input each time, until that input reaches 0, at which point we hit the base case and add just returns the first input a.

#reduce add zero.succ zero.succ
--Output: succ (succ zero)

As you can see, our function successfully computes 1 + 1 = 2. Let's do it by hand to make sure we really understand what is going on.

First, add zero.succ zero.succ (again, this represents 1 + 1) will pattern match on the second argument b = zero.succ, so it will return succ (add zero.succ zero) since the pattern match will "pull off" a succ from the input b.

So now we're calling add within itself, namely add zero.succ zero (1 + 0). Now again, we pattern match on the second input b = zero and that matches the base case where we just return a. So add zero.succ zero = zero.succ.

Now we substitute that into the expression above, succ (add zero.succ zero), so we get succ (succ zero), which is the final answer. So the add function works by recursively decrementing b by 1 while adding 1 to a until b = 0.

In order for functions to be total (as described above), they need to terminate. Lean has a component called a termination checker that makes sure every function you define will terminate in a finite number of steps. It does this by making sure that whenever you recursively call a function, the input arguments are structurally decreasing. In the case of the add function, the second input b structurally decreases on each recursive call of add, because a succ is "pulled off" (i.e. b becomes b - 1 each call). Once b = zero in the recursive calls, the function terminates.
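
For example, a definition like the following sketch of mine would be rejected, because the argument of the recursive call never gets structurally smaller (I've left it as a comment so it won't break a file you're following along in):

-- def loop : Nat → Nat
--   | a => loop a
-- Lean refuses this definition because it cannot show that the recursion terminates.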

We'll end this post here, but we have a lot more to learn. In the next post we'll prove our first theorems about the natural numbers and learn a lot more Lean along the way.

PS: My goal with these Learning Lean posts is to assume as few prerequisites as possible, so please leave a comment or email me if anything needs additional explanation and you meet the prerequisites of knowing how to program in some language and having a high school math background.

posted at 14:35 by Brandon Brown · Lean