* Lean Tutorial

** Introduction

Lean is an automatic and interactive theorem prover. It can be used to
create specifications, build mathematical libraries, and solve
constraints. In this tutorial, we introduce basic concepts, the logic
used in Lean, and the main commands.

** Getting started

We can use Lean in interactive or batch mode. The following example
just displays the message =hello world=.

#+BEGIN_SRC lean
print "hello world"
#+END_SRC

To run our first example, all we have to do is call the =lean=
executable with the name of the text file that contains the command
above. If we save the command above in the file =hello.lean=, we just
have to execute

#+BEGIN_SRC shell
lean hello.lean
#+END_SRC

As a more complex example, the following program defines a function
that doubles the input value.

#+BEGIN_SRC lean
import nat
using nat

-- defines the double function
definition double (x : nat) := x + x
#+END_SRC

** Basics

We can also view Lean as a suite of tools for evaluating and processing
expressions representing terms, definitions, and theorems. Every
expression has a unique type in Lean. The command =check= returns the
type of a given expression.

#+BEGIN_SRC lean
import logic
check true
check and
#+END_SRC

The last command returns =Prop → Prop → Prop=. That is, =and= is a
function that takes two _propositions_ and returns a proposition;
=Prop= is the type of propositions.

The command =import= loads existing libraries and extensions.

#+BEGIN_SRC lean
import nat
check nat.ge
#+END_SRC

We say =nat.ge= is a hierarchical name composed of two parts: =nat= and
=ge=. The command =using= creates aliases based on a given prefix. The
command also imports notation, hints, and other features. We will
discuss its other applications later. Regarding aliases, the following
command creates aliases for all objects starting with =nat=, and
imports all notation defined in this namespace.

#+BEGIN_SRC lean
import nat
using nat
check ge    -- display the type of nat.ge
#+END_SRC

The command =variable= assigns a type to an identifier. The following
commands postulate/assume that =n=, =m= and =o= have type =nat=.

#+BEGIN_SRC lean
import nat
using nat
variable n : nat
variable m : nat
variable o : nat
-- The command 'using nat' also imported the notation defined in the namespace 'nat'
check n + m
check n ≤ m
#+END_SRC

The command =variables n m o : nat= can be used as a shorthand for the
three commands above.

In Lean, proofs are also expressions, and all functionality provided
for manipulating expressions is also available for manipulating proofs.
For example, =refl n= is a proof for =n = n=. In Lean, =refl= is the
reflexivity theorem.

#+BEGIN_SRC lean
import nat
using nat
variable n : nat
check refl n
#+END_SRC

The command =axiom= postulates that a given proposition holds. The
following commands postulate two axioms =Ax1= and =Ax2= stating that
=n = m= and =m = o=. =Ax1= and =Ax2= are not just names. For example,
=trans Ax1 Ax2= is a proof that =n = o=, where =trans= is the
transitivity theorem.

#+BEGIN_SRC lean
import nat
using nat
variables m n o : nat
axiom Ax1 : n = m
axiom Ax2 : m = o
check trans Ax1 Ax2
#+END_SRC

The expression =trans Ax1 Ax2= is just a function application like any
other. Moreover, in Lean, _propositions are types_: any proposition =P=
can be used as a type, and the elements of type =P= can be viewed as
the proofs of =P=. Furthermore, in Lean, _proof checking is type
checking_. For example, the Lean type checker will reject the
type-incorrect term =trans Ax2 Ax1=.
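To see _proof checking is type checking_ in action, here is a minimal
sketch reusing the variables and axioms from the example above: the
chained application is accepted, while the swapped one, if uncommented,
is reported as a type error.

#+BEGIN_SRC lean
import nat
using nat
variables m n o : nat
axiom Ax1 : n = m
axiom Ax2 : m = o
-- accepted: trans chains n = m with m = o, yielding a proof of n = o
check trans Ax1 Ax2
-- rejected if uncommented: m = o and n = m do not chain in this order
-- check trans Ax2 Ax1
#+END_SRC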
Because we use _propositions as types_, we must support _empty types_.
For example, the type =false= must be empty, since we don't have a
proof for =false=. Most systems based on the _propositions as types_
paradigm are based on constructive logic. In Lean, we support classical
and constructive logic. We can load the _classical axioms_ using
=import classical=. When the classical extensions are loaded, the
_excluded middle_ is a theorem, and =em p= is a proof for =p ∨ ¬ p=.

#+BEGIN_SRC lean
import classical
variable p : Prop
check em p
#+END_SRC

The commands =axiom= and =variable= are essentially the same command.
We provide both just to make Lean files more readable. We encourage
users to use =axiom= only for propositions, and =variable= for
everything else.

Similarly, a theorem is just a definition. The following command
defines a new theorem called =nat_trans3=, and then uses it to prove
something else. In this example, =symm= is the symmetry theorem.

#+BEGIN_SRC lean
import nat
using nat

theorem nat_trans3 (a b c d : nat) (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
trans (trans H1 (symm H2)) H3

-- Example using nat_trans3
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
check nat_trans3 x y z w Hxy Hzy Hzw
#+END_SRC

The theorem =nat_trans3= has 7 parameters: it takes four natural
numbers =a=, =b=, =c= and =d=, and three proofs showing that =a = b=,
=c = b= and =c = d=, and returns a proof that =a = d=.

The theorem =nat_trans3= is somewhat inconvenient to use because it has
7 parameters. However, the first four parameters can be inferred from
the last three. We can use =_= as a placeholder that instructs Lean to
synthesize this expression. The synthesis process is based on type
inference, and it is the most basic form of automation provided by
Lean. In the example above, we can use
=check nat_trans3 _ _ _ _ Hxy Hzy Hzw=.

Lean also supports _implicit arguments_. We mark implicit arguments
using curly braces instead of parentheses. In the following example, we
define the theorem =nat_trans3i= using implicit arguments.

#+BEGIN_SRC lean
import nat
using nat

theorem nat_trans3i {a b c d : nat} (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
trans (trans H1 (symm H2)) H3

-- Example using nat_trans3i
variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w
check nat_trans3i Hxy Hzy Hzw
#+END_SRC

It is identical to =nat_trans3=; the only difference is the use of
curly braces. Lean will (try to) infer the implicit arguments. The idea
behind implicit arguments is quite simple: we are just instructing Lean
to automatically insert the placeholder =_= for us.

Sometimes, Lean will not be able to infer the parameters automatically.
The annotation =@f= instructs Lean that we want to provide the implicit
arguments for =f= explicitly. The theorems =refl=, =trans= and =symm=
all have implicit arguments.

#+BEGIN_SRC lean
import logic
check @refl
check @symm
check @trans
#+END_SRC

We can also instruct Lean to display all implicit arguments when it
prints expressions. This is useful when debugging non-trivial problems.

#+BEGIN_SRC lean
import nat
using nat
variables a b c : nat
axiom H1 : a = b
axiom H2 : b = c
check trans H1 H2

set_option pp.implicit true   -- Now, Lean will display all implicit arguments
check trans H1 H2
#+END_SRC

In the previous example, the =check= command stated that =trans H1 H2=
has type =@eq ℕ a c=. The expression =a = c= is just notational
convenience.
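As a complementary sketch, assuming the same variables and axioms as in
the previous example, and assuming the implicit arguments appear in the
order printed by =check @refl= and =check @trans= above, we can also
supply the implicit arguments ourselves using the =@= annotation.

#+BEGIN_SRC lean
import nat
using nat
variables a b c : nat
axiom H1 : a = b
axiom H2 : b = c
-- pass the implicit type argument of refl explicitly
check @refl nat a
-- pass all implicit arguments of trans explicitly
-- (argument order assumed to match the output of 'check @trans')
check @trans nat a b c H1 H2
#+END_SRC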
We have seen many occurrences of =Type=. In Lean, the type of =nat= and
=Prop= is =Type=. What is the type of =Type=?

#+BEGIN_SRC lean
check Type
#+END_SRC

Lean reports =Type : Type=. Is Lean inconsistent? No, it is not.
Internally, Lean maintains a hierarchy of types; we say each one of
them _lives_ in a universe. Lean is universe polymorphic, and by
default all universes are hidden from the user. Like implicit
arguments, we can instruct Lean to display the universe levels
explicitly.

#+BEGIN_SRC lean
set_option pp.universes true
check Type
#+END_SRC

In the command above, Lean reports that =Type.{l_1}=, which lives in
universe =l_1=, has type =Type.{succ l_1}=. That is, its type lives in
the universe =l_1 + 1=. Definitions such as =refl=, =symm= and =trans=
are all universe polymorphic.

#+BEGIN_SRC lean
import logic
set_option pp.universes true
check @refl
check @symm
check @trans
#+END_SRC

Whenever we declare a new constant, Lean automatically infers the
universe parameters. We can also provide the universe levels
explicitly.

#+BEGIN_SRC lean
import logic

definition id.{l} {A : Type.{l}} (a : A) : A := a

check id true
#+END_SRC

The universes can be explicitly provided for each constant and =Type=
by using the notation =.{ ... }=. Unlike other systems, Lean does not
have _universe cumulativity_. That is, the type =Type.{i}= is *not* an
element of =Type.{j}= for =j > i=.

** Propositional logic

To manipulate formulas with a richer logical structure, it is important
to master the notation Lean uses for building composite logical
expressions out of basic formulas using _logical connectives_. The
logical connectives (=and=, =or=, =not=, etc.) are defined in the file
[[../../library/standard/logic.lean][logic.lean]]. This file also
defines notational conventions for writing formulas in a natural way.
Here is a table showing the notation for the so-called propositional
(or Boolean) connectives.

| Ascii | Unicode | Definition |
|-------+---------+------------|
| true  |         | true       |
| false |         | false      |
| not   | ¬       | not        |
| /\    | ∧       | and        |
| \/    | ∨       | or         |
| ->    | →       |            |
| <->   | ↔       | iff        |

=true= and =false= are logical constants denoting the true and false
propositions. Logical negation is a unary operator, just like
arithmetical negation on numbers; the other connectives are all binary
operators. The meaning of the operators is the usual one. The table
above also makes it clear that Lean supports unicode characters; we can
use the Ascii and/or unicode versions. Here is a simple example using
the connectives above.

#+BEGIN_SRC lean
import logic
variables p q : Prop
check p → q → p ∧ q
check ¬p → p ↔ false
check p ∨ q → q ∨ p

-- Ascii version
check p -> q -> p /\ q
check not p -> p <-> false
check p \/ q -> q \/ p
#+END_SRC

Depending on the platform, Lean uses unicode characters by default when
printing expressions. The following commands can be used to change this
behavior.

#+BEGIN_SRC lean
import logic
set_option pp.unicode false
variables p q : Prop
check p → q → p ∧ q
set_option pp.unicode true
check p → q → p ∧ q
#+END_SRC

It may seem that the symbols =->= and =→= are overloaded, and that Lean
uses them to represent both implication and the type of functions.
Actually, they are not overloaded: they are the same symbol. In Lean,
the proposition =p → q= is also the type of the functions that, given a
proof for =p=, return a proof for =q=. This is very convenient for
writing proofs.
#+BEGIN_SRC lean
import logic
variables p q : Prop
-- Hpq is a function that takes a proof for p and returns a proof for q
axiom Hpq : p → q
-- Hp is a proof/certificate for p
axiom Hp : p
-- The expression Hpq Hp is a proof/certificate for q
check Hpq Hp
#+END_SRC

In composite expressions, the precedences of the various binary
connectives follow the order of the table above, with =and= binding the
strongest and =iff= the weakest. For example, =a ∧ b → c ∨ d ∧ e= means
=(a ∧ b) → (c ∨ (d ∧ e))=. All of them are right-associative. So,
=p ∧ q ∧ r= means =p ∧ (q ∧ r)=. The actual precedence and fixity of
all logical connectives is defined in the Lean
[[../../library/standard/logic.lean][logic definition file]].

Finally, =not=, =and=, =or= and =iff= are the actual names used when
defining the Boolean connectives. They can be used like any other
function. Lean supports _currying_: =and true= is a function from
=Prop= to =Prop=.

** Functions

There are many variable-binding constructs in mathematics. Lean
expresses all of them using just one _abstraction_, which is a converse
operation to function application. Given a variable =x=, a type =A=,
and a term =t= that may or may not contain =x=, one can construct the
so-called _lambda abstraction_ =fun x : A, t=, or, using unicode
notation, =λ x : A, t=. Here are some simple examples.

#+BEGIN_SRC lean
import nat
using nat
check fun x : nat, x + 1
check fun x y : nat, x + 2 * y
check fun x y : Prop, not (x ∧ y)
check λ x : nat, x + 1
check λ (x : nat) (p : Prop), x = 0 ∨ p
#+END_SRC

In many cases, Lean can automatically infer the type of the variable.
In fact, in all the examples above, the types can be inferred
automatically.

#+BEGIN_SRC lean
import nat
using nat
check fun x, x + 1
check fun x y, x + 2 * y
check fun x y, not (x ∧ y)
check λ x, x + 1
check λ x p, x = 0 ∨ p
#+END_SRC

However, Lean will complain that it cannot infer the type of the
variable =x= in =fun x, x=, because any type would work in this
example. The following example shows how to use lambda abstractions in
function applications.

#+BEGIN_SRC lean
import nat
using nat
check (fun x y, x + 2 * y) 1
check (fun x y, x + 2 * y) 1 2
check (fun x y, not (x ∧ y)) true false
#+END_SRC

Lambda abstractions are also used to create proofs for propositions of
the form =A → B=. This should be natural, since we can "view" =A → B=
as the type of functions that, given a proof for =A=, return a proof
for =B=. For example, a proof for =p → p= is just =fun H : p, H= (the
identity function).

#+BEGIN_SRC lean
import logic
variable p : Prop
check fun H : p, H
#+END_SRC

** Definitional equality

The command =eval t= computes a normal form for the term =t=. In Lean,
we say two terms are _definitionally equal_ if they have the same
normal form. For example, the terms =(λ x : nat, x + 1) a= and =a + 1=
are definitionally equal. The Lean type/proof checker uses the
normalizer when checking types/proofs. So, we can prove that two
definitionally equal terms are equal using just =refl=. Here is a
simple example.

#+BEGIN_SRC lean
import nat
using nat

theorem def_eq_th (a : nat) : ((λ x : nat, x + 1) a) = a + 1 :=
refl (a+1)
#+END_SRC

** Provable equality

In the previous examples, we have used =nat_trans3 x y z w Hxy Hzy Hzw=
to show that =x = w=. In this case, =x= and =w= are not definitionally
equal, but they are provably equal in the environment that contains
=nat_trans3= and the axioms =Hxy=, =Hzy= and =Hzw=. The sketch below
contrasts the two notions.
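Here is that small sketch, restating =nat_trans3= and the axioms from
the earlier example (the theorem names =def_eq_ex= and =prov_eq_ex= are
ours, for illustration): the first theorem is closed by =refl= because
both sides have the same normal form, while the second needs an actual
proof term built from the axioms.

#+BEGIN_SRC lean
import nat
using nat

theorem nat_trans3 (a b c d : nat) (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
trans (trans H1 (symm H2)) H3

variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w

-- definitionally equal: both sides normalize to x + 1, so refl is enough
theorem def_eq_ex : ((λ a : nat, a + 1) x) = x + 1 := refl (x + 1)

-- only provably equal: refl would be rejected here, we need a real proof term
theorem prov_eq_ex : x = w := nat_trans3 x y z w Hxy Hzy Hzw
#+END_SRC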
** Proving

The Lean standard library contains basic theorems for creating proof
terms. These theorems are useful for creating manual proofs, and they
are also the basic building blocks used by all automated proof engines
available in Lean. The theorems can be broken into three different
categories: introduction, elimination, and rewriting. First, we cover
the introduction and elimination theorems for the basic Boolean
connectives.

*** And (conjunction)

The expression =and_intro H1 H2= creates a proof for =a ∧ b= using the
proofs =H1 : a= and =H2 : b=. We say =and_intro= is the
_and-introduction_ operation. In the following example, we use
=and_intro= to create a proof for =p → q → p ∧ q=.

#+BEGIN_SRC lean
import logic
variables p q : Prop
check fun (Hp : p) (Hq : q), and_intro Hp Hq
#+END_SRC

The expression =and_elim_left H= creates a proof for =a= from a proof
=H : a ∧ b=. Similarly, =and_elim_right H= is a proof for =b=. We say
they are the _left/right and-eliminators_.

#+BEGIN_SRC lean
import logic
variables p q : Prop
-- Proof for p ∧ q → p
check fun H : p ∧ q, and_elim_left H
-- Proof for p ∧ q → q
check fun H : p ∧ q, and_elim_right H
#+END_SRC

Now, we prove =p ∧ q → q ∧ p= with the following simple proof term.

#+BEGIN_SRC lean
import logic
variables p q : Prop
check fun H : p ∧ q, and_intro (and_elim_right H) (and_elim_left H)
#+END_SRC

Note that the proof term is very similar to a function that just swaps
the elements of a pair.

*** Or (disjunction)

The expression =or_intro_left b H1= creates a proof for =a ∨ b= using a
proof =H1 : a=. Similarly, =or_intro_right a H2= creates a proof for
=a ∨ b= using a proof =H2 : b=. We say they are the _left/right
or-introduction_.

#+BEGIN_SRC lean
import logic
variables p q : Prop
-- Proof for p → p ∨ q
check fun H : p, or_intro_left q H
-- Proof for q → p ∨ q
check fun H : q, or_intro_right p H
#+END_SRC

The or-elimination rule is slightly more complicated. The basic idea is
the following: we can prove =c= from =a ∨ b= by showing that we can
prove =c= assuming =a=, and that we can prove =c= assuming =b=. It is
essentially a proof by cases. The expression =or_elim Hab Hac Hbc=
takes three arguments, =Hab : a ∨ b=, =Hac : a → c= and =Hbc : b → c=,
and produces a proof for =c=. In the following example, we use
=or_elim= to prove that =p ∨ q → q ∨ p=.

#+BEGIN_SRC lean
import logic
variables p q : Prop
check fun H : p ∨ q,
        or_elim H
          (fun Hp : p, or_intro_right q Hp)
          (fun Hq : q, or_intro_left p Hq)
#+END_SRC

In most cases, the first argument of =or_intro_right= and
=or_intro_left= can be inferred automatically by Lean. Moreover, Lean
provides =or_inr= and =or_inl= as shorthands for =or_intro_right _= and
=or_intro_left _=. These two shorthands are extensively used in the
Lean standard library.

#+BEGIN_SRC lean
import logic
variables p q : Prop
check fun H : p ∨ q,
        or_elim H
          (fun Hp : p, or_inr Hp)
          (fun Hq : q, or_inl Hq)
#+END_SRC

*** Not (negation)

The expression =not_intro H= produces a proof for =¬ a= from
=H : a → false=. That is, we obtain =¬ a= if we can derive =false= from
=a=. The expression =absurd_elim b Ha Hna= produces a proof for =b=
from =Ha : a= and =Hna : ¬ a=. That is, we can deduce anything if we
have =a= and =¬ a=. We now use =not_intro= and =absurd= to produce a
proof term for =(a → b) → ¬b → ¬a=; =absurd Ha Hna= is just a shorthand
for =absurd_elim false Ha Hna=.

#+BEGIN_SRC lean
import logic
variables a b : Prop
check fun (Hab : a → b) (Hnb : ¬ b),
        not_intro (fun Ha : a, absurd (Hab Ha) Hnb)
#+END_SRC

In the standard library, =not a= is actually just an _abbreviation_ for
=a → false=. Thus, we don't really need to use =not_intro= explicitly.
Moreover, =absurd Ha Hna= is just =Hna Ha=. We can omit both of them in
the previous example.

#+BEGIN_SRC lean
import logic
variables a b : Prop
check fun (Hab : a → b) (Hnb : ¬ b),
        (fun Ha : a, Hnb (Hab Ha))
#+END_SRC
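Since =¬ a= unfolds to =a → false=, simple negation proofs can be
written directly as functions. Here is one more small sketch in the
same spirit (nothing beyond =import logic= is assumed): a proof term
for =a → ¬¬a=.

#+BEGIN_SRC lean
import logic
variable a : Prop
-- ¬¬a abbreviates (a → false) → false, so a proof of a → ¬¬a is a function
-- that applies the assumed proof of ¬a to the given proof of a
check fun (Ha : a) (Hna : ¬ a), Hna Ha
#+END_SRC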
Now, here is the proof term for =¬a → b → (b → a) → c=.

#+BEGIN_SRC lean
import logic
variables a b c : Prop
check fun (Hna : ¬ a) (Hb : b) (Hba : b → a),
        absurd_elim c (Hba Hb) Hna
#+END_SRC

*** Iff (if-and-only-if)

The expression =iff_intro H1 H2= produces a proof for =a ↔ b= from
=H1 : a → b= and =H2 : b → a=. =iff_elim_left H= produces a proof for
=a → b= from =H : a ↔ b=. Similarly, =iff_elim_right H= produces a
proof for =b → a= from =H : a ↔ b=. Here is the proof term for
=a ∧ b ↔ b ∧ a=.

#+BEGIN_SRC lean
import logic
variables a b : Prop
check iff_intro
        (fun H : a ∧ b, and_intro (and_elim_right H) (and_elim_left H))
        (fun H : b ∧ a, and_intro (and_elim_right H) (and_elim_left H))
#+END_SRC

In Lean, we can use =assume= instead of =fun= to make proof terms look
more like the proofs found in textbooks.

#+BEGIN_SRC lean
import logic
variables a b : Prop
check iff_intro
        (assume H : a ∧ b, and_intro (and_elim_right H) (and_elim_left H))
        (assume H : b ∧ a, and_intro (and_elim_right H) (and_elim_left H))
#+END_SRC

*** True and False

The expression =trivial= is a proof term for =true=, and
=false_elim a H= produces a proof for =a= from =H : false=.

*** Rewrite rules

*WARNING: We did not port this section to Lean 0.2 yet*

The Lean kernel also contains many theorems that are meant to be used
as rewriting/simplification rules. The conclusion of these theorems is
of the form =t = s= or =t ↔ s=. For example, =and_id a= is a proof term
for =a ∧ a ↔ a=. The Lean simplifier can use these theorems to
automatically create proof terms for us. The expression
=(by simp [rule-set])= is similar to =_=, but it tells Lean to
synthesize the proof term using the simplifier with the rewrite rule
set named =[rule-set]=. In the following example, we create a simple
rewrite rule set and use it to prove a theorem that would be quite
tedious to prove by hand.

#+BEGIN_SRC lean
-- import the module that defines several tactics/strategies, including "simp"
import tactic
-- create a rewrite rule set with name 'simple'
rewrite_set simple
-- add some theorems to the rewrite rule set 'simple'
add_rewrite and_id and_truer and_truel and_comm and_assoc and_left_comm iff_id : simple

theorem th1 (a b : Bool) : a ∧ b ∧ true ∧ b ∧ true ∧ b ↔ a ∧ b :=
(by simp simple)
#+END_SRC

In Lean, we can combine manual and automated proofs in a natural way.
We can manually write the proof skeleton and use the =by= construct to
invoke automated proof engines like the simplifier to fill in the
tedious steps. Here is a very simple example.

#+BEGIN_SRC lean
theorem th2 (a b : Prop) : a ∧ b ↔ b ∧ a :=
iff_intro
  (fun H : a ∧ b, (by simp simple))
  (fun H : b ∧ a, (by simp simple))
#+END_SRC

** Dependent functions and quantifiers

Lean supports _dependent functions_. In type theory, they are also
called dependent product types or Pi-types. The idea is quite simple:
suppose we have a type =A : Type= and a family of types =B : A → Type=
which assigns to each =a : A= a type =B a=. A dependent function is a
function whose range varies depending on its argument. In Lean, the
dependent function type is written as =forall a : A, B a=,
=Pi a : A, B a=, =∀ a : A, B a=, or =Π a : A, B a=. We usually use
=forall= and =∀= for propositions, and =Pi= and =Π= for everything
else. In the previous examples, we have already seen many dependent
functions.
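As a small sketch of the notation (using only types already introduced
above), here are two dependent types written with the =∀= and =Π=
binders.

#+BEGIN_SRC lean
import nat
using nat
-- a proposition: the body x = x depends on the bound variable x
check ∀ x : nat, x = x
-- the type of a polymorphic identity function: the range A → A depends on A
check Π A : Type, A → A
#+END_SRC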
The theorems =refl=, =trans= and =symm=, and equality itself, are all
dependent functions. The universal quantifier is just a dependent
function: in Lean, if we have a family of types =B : A → Prop=, then
=∀ a : A, B a= has type =Prop=. This feature complicates the Lean
set-theoretic model, but it improves usability.

Several theorem provers have a =forall elimination= (aka instantiation)
proof rule. In Lean (and other systems based on propositions as types),
this rule is just function application. In the following example, we
add an axiom stating that =f x= is =0= for all =x=. Then we instantiate
the axiom using function application.

#+BEGIN_SRC lean
import nat
using nat
variable f : nat → nat
axiom fzero : ∀ x, f x = 0
check fzero 1
variable a : nat
check fzero a
#+END_SRC

Since we instantiate quantifiers using function application, it is
natural to create proof terms for universal quantifiers using lambda
abstraction. In the following example, we create a proof term showing
that, for all =x= and =y=, =f x = f y=.

#+BEGIN_SRC lean
import nat
using nat
variable f : nat → nat
axiom fzero : ∀ x, f x = 0
check λ x y, trans (fzero x) (symm (fzero y))
#+END_SRC

We can view the proof term above as a simple function or "recipe" for
showing that =f x = f y= for any =x= and =y=. The function "invokes"
=fzero= to create proof terms for =f x = 0= and =f y = 0=. Then, it
uses symmetry (=symm=) to create a proof term for =0 = f y=. Finally,
transitivity is used to combine the proofs for =f x = 0= and =0 = f y=.

In Lean, the existential quantifier can be written as
=exists x : A, B x= or =∃ x : A, B x=. Both versions are just
notational convenience for =Exists (fun x : A, B x)=. That is, the
existential quantifier is actually a constant defined in the file
=logic.lean=. This file also defines the theorems =exists_intro= and
=exists_elim=. To build a proof for =∃ x : A, B x=, we should provide a
term =w : A= and a proof term =Hw : B w= to =exists_intro=. We say =w=
is the witness for the existential introduction. In previous examples,
=nat_trans3i Hxy Hzy Hzw= was a proof term for =x = w=. Thus, we can
create a proof term for =∃ a : nat, a = w= using

#+BEGIN_SRC lean
import nat
using nat

theorem nat_trans3i {a b c d : nat} (H1 : a = b) (H2 : c = b) (H3 : c = d) : a = d :=
trans (trans H1 (symm H2)) H3

variables x y z w : nat
axiom Hxy : x = y
axiom Hzy : z = y
axiom Hzw : z = w

theorem ex_a_eq_w : exists a, a = w := exists_intro x (nat_trans3i Hxy Hzy Hzw)

check ex_a_eq_w
#+END_SRC

Note that =exists_intro= also has implicit arguments. For example, Lean
has to infer the implicit argument =P : A → Prop=, a predicate (i.e., a
function to =Prop=). This creates complications. For example, suppose
we have =Hg : g 0 0 = 0= and we invoke =exists_intro 0 Hg=. There are
different possible values for =P=, and each possible value corresponds
to a different theorem: =∃ x, g x x = x=, =∃ x, g x x = 0=,
=∃ x, g x 0 = x=, etc. Lean uses the context where =exists_intro=
occurs to infer the user's intent. In the example above, we were trying
to prove the theorem =∃ a, a = w=, so we are implicitly telling Lean
how to choose =P=. In the following example, we demonstrate this issue.
We ask Lean to display the implicit arguments using the option
=pp.implicit=. We see that each instance of =exists_intro 0 Hg= has a
different value for the implicit argument =P=.
#+BEGIN_SRC lean
import nat
using nat
check @exists_intro
variable g : nat → nat → nat
axiom Hg : g 0 0 = 0
theorem gex1 : ∃ x, g x x = x := exists_intro 0 Hg
theorem gex2 : ∃ x, g x 0 = x := exists_intro 0 Hg
theorem gex3 : ∃ x, g 0 0 = x := exists_intro 0 Hg
theorem gex4 : ∃ x, g x x = 0 := exists_intro 0 Hg

set_option pp.implicit true   -- display implicit arguments
check gex1
check gex2
check gex3
check gex4
#+END_SRC

We can view =exists_intro= (aka existential introduction) as an
information hiding procedure: we are "hiding" what the witness for some
fact is. The existential elimination performs the opposite operation.
The =exists_elim= theorem allows us to prove some proposition =c= from
=∃ x : A, B x= if we can derive =c= using an "abstract" witness =w= and
a proof term =Hw : B w=.

#+BEGIN_SRC lean
import logic
check @exists_elim
#+END_SRC

In the following example, we define =even a= as =∃ b, a = 2*b=, and
then we show that the sum of two even numbers is an even number.

#+BEGIN_SRC lean
import nat
using nat

definition even (a : nat) := ∃ b, a = 2*b

theorem EvenPlusEven {a b : nat} (H1 : even a) (H2 : even b) : even (a + b) :=
exists_elim H1
  (fun (w1 : nat) (Hw1 : a = 2*w1),
    exists_elim H2
      (fun (w2 : nat) (Hw2 : b = 2*w2),
        exists_intro (w1 + w2)
          (calc a + b = 2*w1 + b    : {Hw1}
                  ... = 2*w1 + 2*w2 : {Hw2}
                  ... = 2*(w1 + w2) : symm (mul_add_distr_right 2 w1 w2))))
#+END_SRC

The example above also uses [[./calc.org][calculational proofs]] to show
that =a + b = 2*(w1 + w2)=. The =calc= construct is just syntactic
sugar for creating proofs using transitivity and substitution.

In Lean, we can use =obtain _, from _, _= as syntactic sugar for
=exists_elim=. With this macro, we can write the example above in a
more natural way.

#+BEGIN_SRC lean
import nat
using nat

definition even (a : nat) := ∃ b, a = 2*b

theorem EvenPlusEven {a b : nat} (H1 : even a) (H2 : even b) : even (a + b) :=
obtain (w1 : nat) (Hw1 : a = 2*w1), from H1,
obtain (w2 : nat) (Hw2 : b = 2*w2), from H2,
exists_intro (w1 + w2)
  (calc a + b = 2*w1 + b    : {Hw1}
          ... = 2*w1 + 2*w2 : {Hw2}
          ... = 2*(w1 + w2) : symm (mul_add_distr_right 2 w1 w2))
#+END_SRC
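As one more small exercise in the same style (a sketch reusing only the
=even= definition above; the theorem name =even_two_mul= is ours, for
illustration), any number of the form =2*n= is even, with =n= itself as
the witness.

#+BEGIN_SRC lean
import nat
using nat

definition even (a : nat) := ∃ b, a = 2*b

-- n is the witness, and the remaining goal 2*n = 2*n is closed by refl
theorem even_two_mul (n : nat) : even (2*n) :=
exists_intro n (refl (2*n))
#+END_SRC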