Lean Tutorial
Introduction
Lean is an automatic and interactive theorem prover. It can be used to create specifications, build mathematical libraries, and solve constraints. In this tutorial, we introduce basic concepts, the logic used in Lean, and the main commands.
Getting started
We can use Lean in interactive or batch mode. The following example just displays the message `hello world`.
print "hello world"
All we have to do to run this first example is to call the lean
executable
with the name of the text file that contains the command above.
If you saved the above command in the file hello.lean
, then you just have
to execute
lean hello.lean
As a more complex example, the following defines a function that doubles the input value.
import data.nat
open nat
-- defines the double function
definition double (x : nat) := x + x
Basics
We can also view Lean as a suite of tools for evaluating and processing expressions representing terms, definitions, and theorems.
Every expression has a unique type in Lean. The command check
returns the
type of a given expression.
import logic
check true
check and
The last command returns Prop → Prop → Prop. That is, the type of and is a function that takes two propositions and returns a proposition; Prop is the type of propositions.
The command import
loads existing libraries and extensions.
import data.nat
check nat.ge
We say nat.ge
is a hierarchical name comprised of two parts: nat
and ge
.
The command open
creates aliases based on a given prefix. The
command also imports notation, hints, and other features. We will
discuss its other applications later. Regarding aliases,
the following command creates aliases for all objects starting with
nat
, and imports all notations defined in this namespace.
import data.nat
open nat
check ge -- display the type of nat.ge
The command variable assigns a type to an identifier. The following commands postulate/assume that n, m and o have type nat.
import data.nat
open nat
variable n : nat
variable m : nat
variable o : nat
-- The command 'open nat' also imported the notation defined at the namespace 'nat'
check n + m
check n ≤ m
The command variables n m o : nat
can be used as a shorthand for the three commands above.
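For example, the three declarations above can be written as a single command. Here is a small sketch using the same names as before.
import data.nat
open nat
-- declare n, m and o with a single command
variables n m o : nat
check n + m + o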
In Lean, proofs are also expressions, and all functionality provided for manipulating
expressions is also available for manipulating proofs. For example, eq.refl n
is a proof
for n = n
. In Lean, eq.refl
is the reflexivity theorem.
import data.nat
open nat
variable n : nat
check eq.refl n
The command axiom
postulates that a given proposition holds.
The following commands postulate two axioms Ax1
and Ax2
that state that n = m
and
m = o
. Ax1
and Ax2
are not just names. For example, eq.trans Ax1 Ax2
is a proof that
n = o
, where eq.trans
is the transitivity theorem.
import data.nat
open nat
variables m n o : nat
axiom Ax1 : n = m
axiom Ax2 : m = o
check eq.trans Ax1 Ax2
The expression eq.trans Ax1 Ax2
is just a function application like any other.
Moreover, in Lean, propositions are types. Any proposition P
can be used
as a type. The elements of type P
can be viewed as the proofs of P
.
In Lean, proof checking is type checking. For example, the Lean type checker will reject the type-incorrect term eq.trans Ax2 Ax1.
Because we use propositions as types, we must support empty types. For example,
the type false
must be empty, since we don't have a proof for false
.
Most systems based on the propositions-as-types paradigm use constructive logic. In Lean, we support both classical and constructive logic. We can load the classical axioms using import logic.axioms.classical. When the classical extensions are loaded, the excluded middle is a theorem, and em p is a proof for p ∨ ¬ p.
import logic.axioms.classical
variable p : Prop
check em p
The commands axiom
and variable
are essentially the same command. We provide both
just to make Lean files more readable. We encourage users to use axiom
only for
propositions, and variable
for everything else.
Similarly, a theorem is just a definition. The following commands define a new theorem called nat_trans3, and then use it to prove something else. In this example, eq.symm is the symmetry theorem.
import data.nat
open nat
theorem nat_trans3 (a b c d : nat) (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
eq.trans (eq.trans H1 (eq.symm H2)) H3
-- Example using nat_trans3
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
check nat_trans3 x y z w Hxy Hzy Hzw
The theorem nat_trans3 has 7 parameters: it takes four natural numbers a, b, c and d, and three proofs showing that a = b, c = b and c = d, and returns a proof that a = d.
The theorem nat_trans3 is somewhat inconvenient to use because it has 7 parameters. However, the first four parameters can be inferred from the last three. We can use _ as a placeholder that instructs Lean to synthesize the missing expression. The synthesis process is based on type inference, and it is the most basic form of automation provided by Lean. In the example above, we can write check nat_trans3 _ _ _ _ Hxy Hzy Hzw.
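Here is the complete example again, this time using placeholders. This is just the previous nat_trans3 example with the four natural number arguments replaced by _.
import data.nat
open nat
theorem nat_trans3 (a b c d : nat) (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
eq.trans (eq.trans H1 (eq.symm H2)) H3
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
-- Lean synthesizes the four placeholders from the types of Hxy, Hzy and Hzw
check nat_trans3 _ _ _ _ Hxy Hzy Hzw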
Lean also supports implicit arguments.
We mark implicit arguments using curly braces instead of parenthesis.
In the following example, we define the theorem nat_trans3i
using
implicit arguments.
import data.nat
open nat
theorem nat_trans3i {a b c d : nat} (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
eq.trans (eq.trans H1 (eq.symm H2)) H3
-- Example using nat_trans3i
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
check nat_trans3i Hxy Hzy Hzw
It is identical to nat_trans3; the only difference is the use of curly braces. Lean will (try to) infer the implicit arguments. The idea behind implicit arguments is quite simple: we are just instructing Lean to automatically insert the placeholders _ for us.
Sometimes, Lean will not be able to infer the parameters automatically.
The annotation @f
instructs Lean that we want to provide the
implicit arguments for f
explicitly.
The theorems eq.refl
, eq.trans
and eq.symm
all have implicit arguments.
import logic
check @eq.refl
check @eq.symm
check @eq.trans
We can also instruct Lean to display all implicit arguments when it prints expressions. This is useful when debugging non-trivial problems.
import data.nat
open nat
variables a b c : nat
axiom H1 : a = b
axiom H2 : b = c
check eq.trans H1 H2
set_option pp.implicit true
-- Now, Lean will display all implicit arguments
check eq.trans H1 H2
In the previous example, the check
command stated that eq.trans H1 H2
has type @eq ℕ a c
. The expression a = c
is just notational convenience.
We have seen many occurrences of Type
.
In Lean, the type of nat
and Prop
is Type
.
What is the type of Type
?
check Type
Lean reports Type : Type. Is Lean inconsistent? No, it is not.
Internally, Lean maintains a hierarchy of types. We say each one of
them lives in a universe. Lean is universe polymorphic, and by
default all universes are hidden from the user. Like implicit
arguments, we can instruct Lean to display the universe levels
explicitly.
set_option pp.universes true
check Type
In the command above, Lean reports that Type.{l_1}, which lives in universe l_1, has type Type.{succ l_1}. That is, its type lives in universe l_1 + 1.
Definitions such as eq.refl
, eq.symm
and eq.trans
are all universe
polymorphic.
import logic
set_option pp.universes true
check @eq.refl
check @eq.symm
check @eq.trans
Whenever we declare a new constant, Lean automatically infers the universe parameters. We can also provide the universe levels explicitly.
import logic
definition id.{l} {A : Type.{l}} (a : A) : A := a
check id true
The universes can be explicitly provided for each constant and Type
by using the notation .{ ... }
. Unlike other systems, Lean does not
have universe cumulativity. That is, the type Type.{i}
is not an
element of Type.{j}
for j > i
.
Propositional logic
To manipulate formulas with a richer logical structure, it is important to master the notation Lean uses for building
composite logical expressions out of basic formulas using logical connectives. The logical connectives (and
, or
, not
, etc)
are defined in the file logic.lean. This file also defines notational conventions for writing formulas
in a natural way. Here is a table showing the notation for the so-called propositional (or Boolean) connectives.
| Ascii | Unicode | Definition |
|-------|---------|------------|
| true  | true    |            |
| false | false   |            |
| not   | ¬       | not        |
| /\    | ∧       | and        |
| \/    | ∨       | or         |
| ->    | →       |            |
| <->   | ↔       | iff        |
true and false are logical constants that denote the true and false propositions. Logical negation is a unary operator, just like
arithmetical negation on numbers. The other connectives are all binary operators. The meaning of the operators is the usual one.
The table above makes clear that Lean supports unicode characters; we can use the Ascii and/or unicode versions.
Here is a simple example using the connectives above.
import logic
variables p q : Prop
check p → q → p ∧ q
check ¬p → p ↔ false
check p ∨ q → q ∨ p
-- Ascii version
check p -> q -> p /\ q
check not p -> p <-> false
check p \/ q -> q \/ p
Depending on the platform, Lean uses unicode characters by default when printing expressions. The following commands can be used to change this behavior.
import logic
set_option pp.unicode false
variables p q : Prop
check p → q → p ∧ q
set_option pp.unicode true
check p → q → p ∧ q
It may seem that the symbols -> and → are overloaded, since Lean uses them to represent both implication and the type
of functions. Actually, they are not overloaded; they are the same symbol. In Lean, the proposition p → q is also the type
of the functions that, given a proof for p, return a proof for q. This is very convenient for writing proofs.
import logic
variables p q : Prop
-- Hpq is a function that takes a proof for p and returns a proof for q
axiom Hpq : p → q
-- Hq is a proof/certificate for p
axiom Hp : p
-- The expression Hpq Hp is a proof/certificate for q
check Hpq Hp
In composite expressions, the precedences of the various binary
connectives are in order of the above table, with and
being the
strongest and iff
the weakest. For example, a ∧ b → c ∨ d ∧ e
means (a ∧ b) → (c ∨ (d ∧ e))
. All of them are right-associative.
So, p ∧ q ∧ r
means p ∧ (q ∧ r)
. The actual precedence and fixity of all
logical connectives is defined in the Lean
logic definition file.
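For example, the following commands illustrate the grouping described above. This is a small sketch that assumes propositional variables a, b, c, d and e; both check commands display the same expression, since the parentheses in the second one are redundant.
import logic
variables a b c d e : Prop
check a ∧ b → c ∨ d ∧ e
check (a ∧ b) → (c ∨ (d ∧ e))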
Finally, not, and, or and iff are the actual names used when
defining the Boolean connectives. They can be used as any other function.
Lean supports currying, so the partial application and true is a function from Prop to Prop.
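For instance, here is a small sketch illustrating currying with the connective and: applying it to a single proposition produces a function from Prop to Prop.
import logic
check and              -- Prop → Prop → Prop
check and true         -- Prop → Prop, a partial application obtained by currying
check and true false   -- Prop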
Functions
There are many variable-binding constructs in mathematics. Lean expresses
all of them using just one abstraction, which is a converse operation to
function application. Given a variable x
, a type A
, and a term t
that
may or may not contain x
, one can construct the so-called lambda abstraction
fun x : A, t
, or using unicode notation λ x : A, t
. Here are some simple examples.
import data.nat
open nat
check fun x : nat, x + 1
check fun x y : nat, x + 2 * y
check fun x y : Prop, not (x ∧ y)
check λ x : nat, x + 1
check λ (x : nat) (p : Prop), x = 0 ∨ p
In many cases, Lean can automatically infer the type of the variable. In fact, in all the examples above, the type can be inferred automatically.
import data.nat
open nat
check fun x, x + 1
check fun x y, x + 2 * y
check fun x y, not (x ∧ y)
check λ x, x + 1
check λ x p, x = 0 ∨ p
However, Lean will complain that it cannot infer the type of the
variable x
in fun x, x
because any type would work in this example.
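When this happens, we can simply annotate the variable with the intended type. The following sketch shows an annotated version that Lean accepts; the rejected version is left as a comment.
import data.nat
open nat
-- check fun x, x       -- rejected: Lean cannot infer the type of x
check fun x : nat, x    -- accepted: the type of x is given explicitly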
The following example shows how to use lambda abstractions in function applications.
import data.nat
open nat
check (fun x y, x + 2 * y) 1
check (fun x y, x + 2 * y) 1 2
check (fun x y, not (x ∧ y)) true false
Lambda abstractions are also used to create proofs for propositions of the form A → B
.
This should be natural since we can "view" A → B as the type of functions that, given a proof for A, return a proof for B.
For example, a proof for p → p
is just fun H : p, H
(the identity function).
import logic
variable p : Prop
check fun H : p, H
Definitional equality
The command eval t
computes a normal form for the term t
.
In Lean, we say two terms are definitionally equal if they have the same
normal form. For example, the terms (λ x : nat, x + 1) a
and a + 1
are definitionally equal. The Lean type/proof checker uses the normalizer when
checking types/proofs. So, we can prove that two definitionally equal terms
are equal using just eq.refl
. Here is a simple example.
import data.nat
open nat
theorem def_eq_th (a : nat) : ((λ x : nat, x + 1) a) = a + 1 := eq.refl (a+1)
Provable equality
In the previous examples, we have used nat_trans3 x y z w Hxy Hzy Hzw
to show that x = w
. In this case, x
and w
are not definitionally equal,
but they are provably equal in the environment that contains nat_trans3
and
axioms Hxy
, Hzy
and Hzw
.
Proving
The Lean standard library contains basic theorems for creating proof terms. The basic theorems are useful for creating manual proofs. They are also the basic building blocks used by all automated proof engines available in Lean. The theorems can be broken into three different categories: introduction, elimination, and rewriting. First, we cover the introduction and elimination theorems for the basic Boolean connectives.
And (conjunction)
The expression and.intro H1 H2
creates a proof for a ∧ b
using proofs
H1 : a
and H2 : b
. We say and.intro
is the and-introduction operation.
In the following example we use and.intro
for creating a proof for
p → q → p ∧ q
.
import logic
variables p q : Prop
check fun (Hp : p) (Hq : q), and.intro Hp Hq
The expression and.elim_left H
creates a proof for a
from a proof H : a ∧ b
.
Similarly and.elim_right H
is a proof for b
. We say they are the left/right and-eliminators.
import logic
variables p q : Prop
-- Proof for p ∧ q → p
check fun H : p ∧ q, and.elim_left H
-- Proof for p ∧ q → q
check fun H : p ∧ q, and.elim_right H
Now, we prove p ∧ q → q ∧ p
with the following simple proof term.
import logic
variables p q : Prop
check fun H : p ∧ q, and.intro (and.elim_right H) (and.elim_left H)
Note that the proof term is very similar to a function that just swaps the elements of a pair.
Or (disjunction)
The expression or.intro_left b H1
creates a proof for a ∨ b
using a proof H1 : a
.
Similarly, or.intro_right a H2
creates a proof for a ∨ b
using a proof H2 : b
.
We say they are the left/right or-introduction.
import logic
variables p q : Prop
-- Proof for p → p ∨ q
check fun H : p, or.intro_left q H
-- Proof for q → p ∨ q
check fun H : q, or.intro_right p H
The or-elimination rule is slightly more complicated. The basic idea is the
following: we can prove c from a ∨ b by showing that we can prove c
assuming a, and that we can prove c assuming b. It is essentially a proof by cases.
The expression or.elim Hab Hac Hbc takes three arguments, Hab : a ∨ b, Hac : a → c
and Hbc : b → c, and produces a proof for c.
In the following example, we use or.elim to prove that p ∨ q → q ∨ p.
import logic
variables p q : Prop
check fun H : p ∨ q,
or.elim H
(fun Hp : p, or.intro_right q Hp)
(fun Hq : q, or.intro_left p Hq)
In most cases, the first argument of or.intro_right
and
or.intro_left
can be inferred automatically by Lean. Moreover, Lean
provides or.inr
and or.inl
as shorthands for or.intro_right _
and or.intro_left _
. These two shorthands are extensively used in
the Lean standard library.
import logic
variables p q : Prop
check fun H : p ∨ q,
or.elim H
(fun Hp : p, or.inr Hp)
(fun Hq : q, or.inl Hq)
Not (negation)
not_intro H
produces a proof for ¬ a
from H : a → false
. That is,
we obtain ¬ a
if we can derive false
from a
. The expression
absurd Ha Hna
produces a proof for some b
from Ha : a
and Hna : ¬ a
.
That is, we can deduce anything if we have a
and ¬ a
.
We now use not_intro
and absurd
to produce a proof term for
(a → b) → ¬b → ¬a
.
import logic
variables a b : Prop
check fun (Hab : a → b) (Hnb : ¬ b),
not_intro (fun Ha : a, absurd (Hab Ha) Hnb)
In the standard library, not a
is actually just an abbreviation
for a → false
. Thus, we don't really need to use not_intro
explicitly.
import logic
variables a b : Prop
check fun (Hab : a → b) (Hnb : ¬ b),
(fun Ha : a, Hnb (Hab Ha))
Now, here is the proof term for ¬a → b → (b → a) → c
import logic
variables a b c : Prop
check fun (Hna : ¬ a) (Hb : b) (Hba : b → a),
absurd (Hba Hb) Hna
Iff (if-and-only-if)
The expression iff.intro H1 H2
produces a proof for a ↔ b
from H1 : a → b
and H2 : b → a
.
iff.elim_left H
produces a proof for a → b
from H : a ↔ b
. Similarly,
iff.elim_right H
produces a proof for b → a
from H : a ↔ b
.
Here is the proof term for a ∧ b ↔ b ∧ a
import logic
variables a b : Prop
check iff.intro
(fun H : a ∧ b, and.intro (and.elim_right H) (and.elim_left H))
(fun H : b ∧ a, and.intro (and.elim_right H) (and.elim_left H))
In Lean, we can use assume
instead of fun
to make proof terms look
more like proofs found in textbooks.
import logic
variables a b : Prop
check iff.intro
(assume H : a ∧ b, and.intro (and.elim_right H) (and.elim_left H))
(assume H : b ∧ a, and.intro (and.elim_right H) (and.elim_left H))
True and False
The expression trivial
is a proof term for true
, and false_elim a H
produces a proof for a
from H : false
.
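Here is a small example following the descriptions above; the name and argument order for false_elim are taken from the text in this section.
import logic
variable a : Prop
check trivial                          -- proof term for true
check fun H : false, false_elim a H    -- proof term for false → a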
Rewrite rules
WARNING: We did not port this section to Lean 0.2 yet
The Lean kernel also contains many theorems that are meant to be used as rewriting/simplification rules.
The conclusion of these theorems is of the form t = s
or t ↔ s
. For example, and_id a
is a proof term for
a ∧ a ↔ a
. The Lean simplifier can use these theorems to automatically create proof terms for us.
The expression (by simp [rule-set])
is similar to _
, but it tells Lean to synthesize the proof term using the simplifier with the rewrite rule set named [rule-set]
. In the following example, we create a simple rewrite rule set
and use it to prove a theorem that would be quite tedious to prove by hand.
-- import module that defines several tactics/strategies including "simp"
import tactic
-- create a rewrite rule set with name 'simple'
rewrite_set simple
-- add some theorems to the rewrite rule set 'simple'
add_rewrite and_id and_truer and_truel and_comm and.assoc and_left_comm iff_id : simple
theorem th1 (a b : Bool) : a ∧ b ∧ true ∧ b ∧ true ∧ b ↔ a ∧ b
:= (by simp simple)
In Lean, we can combine manual and automated proofs in a natural way. We can manually write the proof
skeleton and use the by
construct to invoke automated proof engines like the simplifier to fill in the
tedious steps. Here is a very simple example.
theorem th2 (a b : Prop) : a ∧ b ↔ b ∧ a
:= iff.intro
(fun H : a ∧ b, (by simp simple))
(fun H : b ∧ a, (by simp simple))
Dependent functions and quantifiers
Lean supports dependent functions. In type theory, they are also called dependent product types or Pi-types.
The idea is quite simple: suppose we have a type A : Type, and a family of types B : A → Type which assigns to each a : A a type B a. So a dependent function is a function whose range varies depending on its argument.
In Lean, a dependent function type is written as forall a : A, B a,
Pi a : A, B a, ∀ a : A, B a, or Π a : A, B a. We usually use
forall and ∀ for propositions, and Pi and Π for everything
else. In the previous examples, we have seen many examples of
dependent functions. The theorems eq.refl, eq.trans and eq.symm, and
equality itself are all dependent functions.
The universal quantifier is just a dependent function.
In Lean, if we have a family of types B : A → Prop,
then ∀ x : A, B x has type Prop.
This feature complicates the Lean set-theoretic model, but it
improves usability.
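The following sketch illustrates this: given an arbitrary type A and a family of propositions B over A, the quantified expression itself has type Prop.
import logic
variable A : Type
variable B : A → Prop
check ∀ x : A, B x    -- the universally quantified formula is a proposition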
Several theorem provers have a forall-elimination (aka instantiation) proof rule.
In Lean (and other systems based on propositions as types), this rule
is just function application.
In the following example, we add an axiom stating that f x is 0 for all x.
Then we instantiate the axiom using function application.
import data.nat
open nat
variable f : nat → nat
axiom fzero : ∀ x, f x = 0
check fzero 1
variable a : nat
check fzero a
Since we instantiate quantifiers using function application, it is
natural to create proof terms for universal quantifiers using lambda
abstraction. In the following example, we create a proof term showing that for all
x
and y
, f x = f y
.
import data.nat
open nat
variable f : nat → nat
axiom fzero : ∀ x, f x = 0
check λ x y, eq.trans (fzero x) (eq.symm (fzero y))
We can view the proof term above as a simple function or "recipe" for showing that
f x = f y
for any x
and y
. The function "invokes" fzero
for creating
proof terms for f x = 0
and f y = 0
. Then, it uses symmetry eq.symm
to create
a proof term for 0 = f y
. Finally, transitivity is used to combine the proofs
for f x = 0
and 0 = f y
.
In Lean, the existential quantifier can be written as exists x : A, B
x
or ∃ x : A, B x
. Actually both versions are just
notational convenience for Exists (fun x : A, B x)
. That is, the existential quantifier
is actually a constant defined in the file logic.lean
.
This file also defines the theorems exists_intro and exists_elim.
To build a proof for ∃ x : A, B x
, we should provide a term w : A
and a proof term Hw : B w
to exists_intro
.
We say w
is the witness for the existential introduction. In previous examples,
nat_trans3i Hxy Hzy Hzw
was a proof term for x = w
. Then, we can create a proof term
for ∃ a : nat, a = w
using
import data.nat
open nat
theorem nat_trans3i {a b c d : nat} (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
eq.trans (eq.trans H1 (eq.symm H2)) H3
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
theorem ex_a_eq_w : exists a, a = w := exists_intro x (nat_trans3i Hxy Hzy Hzw)
check ex_a_eq_w
Note that exists_intro
also has implicit arguments. For example, Lean has to infer the implicit argument
P : A → Prop, a predicate. This creates complications. For example, suppose
we have Hg : g 0 0 = 0
and we invoke exists_intro 0 Hg
. There are different possible values for P
.
Each possible value corresponds to a different theorem: ∃ x, g x x = x
, ∃ x, g x x = 0
,
∃ x, g x 0 = x
, etc. Lean uses the context where exists_intro
occurs to infer the user's intent.
In the example above, we were trying to prove the theorem ∃ a, a = w
. So, we are implicitly telling
Lean how to choose P
. In the following example, we demonstrate this issue. We ask Lean to display
the implicit arguments using the option pp.implicit
. We see that each instance of exists_intro 0 Hg
has different values for the implicit argument P
.
import data.nat
open nat
check @exists_intro
variable g : nat → nat → nat
axiom Hg : g 0 0 = 0
theorem gex1 : ∃ x, g x x = x := exists_intro 0 Hg
theorem gex2 : ∃ x, g x 0 = x := exists_intro 0 Hg
theorem gex3 : ∃ x, g 0 0 = x := exists_intro 0 Hg
theorem gex4 : ∃ x, g x x = 0 := exists_intro 0 Hg
set_option pp.implicit true -- display implicit arguments
check gex1
check gex2
check gex3
check gex4
We can view exists_intro
(aka existential introduction) as an information hiding procedure.
We are "hiding" what is the witness for some fact. The existential elimination performs the opposite
operation. The exists_elim
theorem allows us to prove some proposition B
from ∃ x : A, B x
if we can derive B
using an "abstract" witness w
and a proof term Hw : B w
.
import logic
check @exists_elim
In the following example, we define even a
as ∃ b, a = 2*b
, and then we show that the sum
of two even numbers is an even number.
import data.nat
open nat
definition even (a : nat) := ∃ b, a = 2*b
theorem EvenPlusEven {a b : nat} (H1 : even a) (H2 : even b) : even (a + b) :=
exists_elim H1 (fun (w1 : nat) (Hw1 : a = 2*w1),
exists_elim H2 (fun (w2 : nat) (Hw2 : b = 2*w2),
exists_intro (w1 + w2)
(calc a + b = 2*w1 + b : {Hw1}
... = 2*w1 + 2*w2 : {Hw2}
... = 2*(w1 + w2) : eq.symm mul_distr_left)))
The example above also uses calculational proofs to show that a + b = 2*(w1 + w2)
.
The calc
construct is just syntax sugar for creating proofs using transitivity and substitution.
In Lean, we can use obtain _, from _, _
as syntax sugar for exists_elim
.
With this macro, we can write the example above in a more natural way.
import data.nat
open nat
definition even (a : nat) := ∃ b, a = 2*b
theorem EvenPlusEven {a b : nat} (H1 : even a) (H2 : even b) : even (a + b) :=
obtain (w1 : nat) (Hw1 : a = 2*w1), from H1,
obtain (w2 : nat) (Hw2 : b = 2*w2), from H2,
exists_intro (w1 + w2)
(calc a + b = 2*w1 + b : {Hw1}
... = 2*w1 + 2*w2 : {Hw2}
... = 2*(w1 + w2) : eq.symm mul_distr_left)