diff --git a/Basics.lagda b/Basics.lagda
new file mode 100644
index 00000000..193ba6d1
--- /dev/null
+++ b/Basics.lagda
@@ -0,0 +1,145 @@
+---
+title : "Basics: Functional Programming in Agda"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+---
+
+
+\begin{code}
+module Basics where
+
+ open import Relation.Binary.PropositionalEquality using (_≡_; refl)
+\end{code}
+
+
+# Basics: Functional Programming in Agda
+
+The functional programming style brings programming closer to
+simple, everyday mathematics: If a procedure or method has no side
+effects, then (ignoring efficiency) all we need to understand
+about it is how it maps inputs to outputs -- that is, we can think
+of it as just a concrete method for computing a mathematical
+function. This is one sense of the word "functional" in
+"functional programming." The direct connection between programs
+and simple mathematical objects supports both formal correctness
+proofs and sound informal reasoning about program behavior.
+
+The other sense in which functional programming is "functional" is
+that it emphasizes the use of functions (or methods) as
+*first-class* values -- i.e., values that can be passed as
+arguments to other functions, returned as results, included in
+data structures, etc. The recognition that functions can be
+treated as data in this way enables a host of useful and powerful
+idioms.
+
+Other common features of functional languages include *algebraic
+data types* and *pattern matching*, which make it easy to
+construct and manipulate rich data structures, and sophisticated
+*polymorphic type systems* supporting abstraction and code reuse.
+Agda shares all of these features.
+
+This chapter introduces the most essential elements of Agda.
+
+## Enumerated Types
+
+One unusual aspect of Agda is that its set of built-in
+features is *extremely* small. For example, instead of providing
+the usual palette of atomic data types (booleans, integers,
+strings, etc.), Agda offers a powerful mechanism for defining new
+data types from scratch, from which all these familiar types arise
+as instances.
+
+Naturally, the Agda distribution comes with an extensive standard
+library providing definitions of booleans, numbers, and many
+common data structures like lists and hash tables. But there is
+nothing magic or primitive about these library definitions. To
+illustrate this, we will explicitly recapitulate all the
+definitions we need in this course, rather than just getting them
+implicitly from the library.
+
+To see how this definition mechanism works, let's start with a
+very simple example.
+
+### Days of the Week
+
+The following declaration tells Agda that we are defining
+a new set of data values -- a *type*.
+
+\begin{code}
+ data Day : Set where
+ monday : Day
+ tuesday : Day
+ wednesday : Day
+ thursday : Day
+ friday : Day
+ saturday : Day
+ sunday : Day
+\end{code}
+
+The type is called `Day`, and its members are `monday`,
+`tuesday`, etc. The second and following lines of the definition
+can be read "`monday` is a `Day`, `tuesday` is a `Day`, etc."
+
+Having defined `Day`, we can write functions that operate on
+days.
+
+\begin{code}
+ nextWeekday : Day -> Day
+ nextWeekday monday = tuesday
+ nextWeekday tuesday = wednesday
+ nextWeekday wednesday = thursday
+ nextWeekday thursday = friday
+ nextWeekday friday = monday
+ nextWeekday saturday = monday
+ nextWeekday sunday = monday
+\end{code}
+
+One thing to note is that the argument and return types of
+this function are explicitly declared. Like most functional
+programming languages, Agda can often figure out these types for
+itself when they are not given explicitly -- i.e., it performs
+*type inference* -- but we'll include them to make reading
+easier.
+
+Having defined a function, we should check that it works on
+some examples. There are actually three different ways to do this
+in Agda.
+
+First, we can use the Emacs command `C-c C-n` to evaluate a
+compound expression involving `nextWeekday`. For instance, `nextWeekday
+friday` should evaluate to `monday`. If you have a computer handy, this
+would be an excellent moment to fire up Agda and try this for yourself.
+Load this file, `Basics.lagda`, using `C-c C-l`, submit the
+above example to Agda, and observe the result.
+
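+Alternatively, we can give the expression a name and then normalize the
+name with `C-c C-n`; the definition below is a hypothetical example of
+this, not part of the original text, and should evaluate to `tuesday`.
+
+\begin{code}
+ exampleExpression : Day
+ exampleExpression = nextWeekday (nextWeekday friday)
+\end{code}
+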
+Second, we can record what we *expect* the result to be in the
+form of an Agda type:
+
+\begin{code}
+ test_nextWeekday : nextWeekday (nextWeekday saturday) ≡ tuesday
+\end{code}
+
+This declaration does two things: it makes an assertion (that the second
+weekday after `saturday` is `tuesday`), and it gives the assertion a name
+that can be used to refer to it later.
+
+Having made the assertion, we must also verify it. We do this by giving
+a term of the above type:
+
+\begin{code}
+ test_nextWeekday = refl
+\end{code}
+
+There is no essential difference between the definition for
+`test_nextWeekday` here and the definition for `nextWeekday` above,
+except for the new symbol for equality `≡` and the constructor `refl`.
+The details of these are not important for now (we'll come back to them in
+a bit), but essentially `refl` can be read as "The assertion we've made
+can be proved by observing that both sides of the equality evaluate to the
+same thing, after some simplification."
+
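+As another small check in the same style (a hypothetical extra example,
+not part of the original text), the weekday after `sunday` is `monday`:
+
+\begin{code}
+ test_nextWeekday2 : nextWeekday sunday ≡ monday
+ test_nextWeekday2 = refl
+\end{code}
+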
+Third, we can ask Agda to *compile* some program involving our definition.
+This facility is very interesting, since it gives us a way to construct
+*fully certified* programs. We'll come back to this topic in later chapters.
diff --git a/Basics.md b/Basics.md
new file mode 100644
index 00000000..340a22da
--- /dev/null
+++ b/Basics.md
@@ -0,0 +1,457 @@
+---
+title : "Basics: Functional Programming in Agda"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+---
+
+
+
+# Basics: Functional Programming in Agda
+
+The functional programming style brings programming closer to
+simple, everyday mathematics: If a procedure or method has no side
+effects, then (ignoring efficiency) all we need to understand
+about it is how it maps inputs to outputs -- that is, we can think
+of it as just a concrete method for computing a mathematical
+function. This is one sense of the word "functional" in
+"functional programming." The direct connection between programs
+and simple mathematical objects supports both formal correctness
+proofs and sound informal reasoning about program behavior.
+
+The other sense in which functional programming is "functional" is
+that it emphasizes the use of functions (or methods) as
+*first-class* values -- i.e., values that can be passed as
+arguments to other functions, returned as results, included in
+data structures, etc. The recognition that functions can be
+treated as data in this way enables a host of useful and powerful
+idioms.
+
+Other common features of functional languages include *algebraic
+data types* and *pattern matching*, which make it easy to
+construct and manipulate rich data structures, and sophisticated
+*polymorphic type systems* supporting abstraction and code reuse.
+Agda shares all of these features.
+
+This chapter introduces the most essential elements of Agda.
+
+## Enumerated Types
+
+One unusual aspect of Agda is that its set of built-in
+features is *extremely* small. For example, instead of providing
+the usual palette of atomic data types (booleans, integers,
+strings, etc.), Agda offers a powerful mechanism for defining new
+data types from scratch, from which all these familiar types arise
+as instances.
+
+Naturally, the Agda distribution comes with an extensive standard
+library providing definitions of booleans, numbers, and many
+common data structures like lists and hash tables. But there is
+nothing magic or primitive about these library definitions. To
+illustrate this, we will explicitly recapitulate all the
+definitions we need in this course, rather than just getting them
+implicitly from the library.
+
+To see how this definition mechanism works, let's start with a
+very simple example.
+
+### Days of the Week
+
+The following declaration tells Agda that we are defining
+a new set of data values -- a *type*.
+
+
+ data Day : Set where
+ monday : Day
+ tuesday : Day
+ wednesday : Day
+ thursday : Day
+ friday : Day
+ saturday : Day
+ sunday : Day
+
+
+The type is called `Day`, and its members are `monday`,
+`tuesday`, etc. The second and following lines of the definition
+can be read "`monday` is a `Day`, `tuesday` is a `Day`, etc."
+
+Having defined `Day`, we can write functions that operate on
+days.
+
+
+ nextWeekday : Day -> Day
+ nextWeekday monday = tuesday
+ nextWeekday tuesday = wednesday
+ nextWeekday wednesday = thursday
+ nextWeekday thursday = friday
+ nextWeekday friday = monday
+ nextWeekday saturday = monday
+ nextWeekday sunday = monday
+
+
+One thing to note is that the argument and return types of
+this function are explicitly declared. Like most functional
+programming languages, Agda can often figure out these types for
+itself when they are not given explicitly -- i.e., it performs
+*type inference* -- but we'll include them to make reading
+easier.
+
+Having defined a function, we should check that it works on
+some examples. There are actually three different ways to do this
+in Agda.
+
+First, we can use the Emacs command `C-c C-n` to evaluate a
+compound expression involving `nextWeekday`. For instance, `nextWeekday
+friday` should evaluate to `monday`. If you have a computer handy, this
+would be an excellent moment to fire up Agda and try this for yourself.
+Load this file, `Basics.lagda`, using `C-c C-l`, submit the
+above example to Agda, and observe the result.
+
+Second, we can record what we *expect* the result to be in the
+form of an Agda type:
+
+
+ test_nextWeekday : nextWeekday (nextWeekday saturday) ≡ tuesday
+
+
+This declaration does two things: it makes an assertion (that the second
+weekday after `saturday` is `tuesday`), and it gives the assertion a name
+that can be used to refer to it later.
+
+Having made the assertion, we must also verify it. We do this by giving
+a term of the above type:
+
+
+ test_nextWeekday = refl
+
+
+There is no essential difference between the definition for
+`test_nextWeekday` here and the definition for `nextWeekday` above,
+except for the new symbol for equality `≡` and the constructor `refl`.
+The details of these are not important for now (we'll come back to them in
+a bit), but essentially `refl` can be read as "The assertion we've made
+can be proved by observing that both sides of the equality evaluate to the
+same thing, after some simplification."
+
+Third, we can ask Agda to *compile* some program involving our definition.
+This facility is very interesting, since it gives us a way to construct
+*fully certified* programs. We'll come back to this topic in later chapters.
diff --git a/Maps.lagda b/Maps.lagda
new file mode 100644
index 00000000..36d306ce
--- /dev/null
+++ b/Maps.lagda
@@ -0,0 +1,345 @@
+---
+title : "Maps: Total and Partial Maps"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/Maps.html"
+---
+
+# Maps: Total and Partial Maps
+
+Maps (or dictionaries) are ubiquitous data structures, both in
+software construction generally and in the theory of programming
+languages in particular; we're going to need them in many places in
+the coming chapters. They also make a nice case study using ideas
+we've seen in previous chapters, including building data structures
+out of higher-order functions (from [Basics](sf/Basics.html)
+and [Poly](sf/Poly.html) and the use of reflection to streamline
+proofs (from [IndProp](sf/IndProp.html).
+
+We'll define two flavors of maps: _total_ maps, which include a
+"default" element to be returned when a key being looked up
+doesn't exist, and _partial_ maps, which return a `Maybe` to
+indicate success or failure. The latter is defined in terms of
+the former, using `nothing` as the default element.
+
+## The Agda Standard Library
+
+One small digression before we start.
+
+Unlike the chapters we have seen so far, this one does not
+import the chapter before it (and, transitively, all the
+earlier chapters). Instead, in this chapter and from now on,
+we're going to import the definitions and theorems we need
+directly from Agda's standard library. You should not notice
+much difference, though, because we've been careful to name our
+own definitions and theorems the same as their counterparts in the
+standard library, wherever they overlap.
+
+\begin{code}
+open import Function using (_∘_)
+open import Data.Bool using (Bool; true; false)
+open import Data.Empty using (⊥; ⊥-elim)
+open import Data.Maybe using (Maybe; just; nothing)
+open import Data.Nat using (ℕ)
+open import Relation.Nullary using (Dec; yes; no)
+open import Relation.Binary.PropositionalEquality
+\end{code}
+
+Documentation for the standard library can be found at
+.
+
+## Identifiers
+
+First, we need a type for the keys that we use to index into our
+maps. For this purpose, we again use the type `Id` from the
+[Lists](sf/Lists.html) chapter. To make this chapter self-contained,
+we repeat its definition here, together with the equality comparison
+function for ids and its fundamental property.
+
+\begin{code}
+data Id : Set where
+ id : ℕ → Id
+\end{code}
+
+\begin{code}
+_≟_ : (x y : Id) → Dec (x ≡ y)
+id x ≟ id y with x Data.Nat.≟ y
+id x ≟ id y | yes x=y rewrite x=y = yes refl
+id x ≟ id y | no x≠y = no (x≠y ∘ id-inj)
+ where
+ id-inj : ∀ {x y} → id x ≡ id y → x ≡ y
+ id-inj refl = refl
+\end{code}
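+
+The result of `_≟_` is a `Dec`ision, which callers typically inspect with
+`with`. As a small, hypothetical illustration (the names `beq-id` and
+`test-beq-id` are not part of the original development), here is a
+boolean-valued equality test defined in terms of `_≟_`:
+
+\begin{code}
+beq-id : Id → Id → Bool
+beq-id x y with x ≟ y
+... | yes _ = true
+... | no _  = false
+
+test-beq-id : beq-id (id 1) (id 1) ≡ true
+test-beq-id = refl
+\end{code}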
+
+## Total Maps
+
+Our main job in this chapter will be to build a definition of
+partial maps that is similar in behavior to the one we saw in the
+[Lists](sf/Lists.html) chapter, plus accompanying lemmas about their
+behavior.
+
+This time around, though, we're going to use _functions_, rather
+than lists of key-value pairs, to build maps. The advantage of
+this representation is that it offers a more _extensional_ view of
+maps, where two maps that respond to queries in the same way will
+be represented as literally the same thing (the same function),
+rather than just "equivalent" data structures. This, in turn,
+simplifies proofs that use maps.
+
+We build partial maps in two steps. First, we define a type of
+_total maps_ that return a default value when we look up a key
+that is not present in the map.
+
+\begin{code}
+TotalMap : Set → Set
+TotalMap A = Id → A
+\end{code}
+
+Intuitively, a total map over an element type $$A$$ _is_ just a
+function that can be used to look up ids, yielding $$A$$s.
+
+\begin{code}
+module TotalMap where
+\end{code}
+
+The function `empty` yields an empty total map, given a
+default element; this map always returns the default element when
+applied to any id.
+
+\begin{code}
+ empty : ∀ {A} → A → TotalMap A
+ empty x = λ _ → x
+\end{code}
+
+More interesting is the `update` function, which (as before) takes
+a map $$m$$, a key $$x$$, and a value $$v$$ and returns a new map that
+takes $$x$$ to $$v$$ and takes every other key to whatever $$m$$ does.
+
+\begin{code}
+ update : ∀ {A} → TotalMap A -> Id -> A -> TotalMap A
+ update m x v y with x ≟ y
+ ... | yes x=y = v
+ ... | no x≠y = m y
+\end{code}
+
+This definition is a nice example of higher-order programming.
+The `update` function takes a _function_ $$m$$ and yields a new
+function $$\lambda x'.\dots$$ that behaves like the desired map.
+
+For example, we can build a map taking ids to bools, where `id
+3` is mapped to `true` and every other key is mapped to `false`,
+like this:
+
+\begin{code}
+ examplemap : TotalMap Bool
+ examplemap = update (update (empty false) (id 1) false) (id 3) true
+\end{code}
+
+This completes the definition of total maps. Note that we don't
+need to define a `find` operation because it is just function
+application!
+
+\begin{code}
+ test_examplemap0 : examplemap (id 0) ≡ false
+ test_examplemap0 = refl
+
+ test_examplemap1 : examplemap (id 1) ≡ false
+ test_examplemap1 = refl
+
+ test_examplemap2 : examplemap (id 2) ≡ false
+ test_examplemap2 = refl
+
+ test_examplemap3 : examplemap (id 3) ≡ true
+ test_examplemap3 = refl
+\end{code}
+
+To use maps in later chapters, we'll need several fundamental
+facts about how they behave. Even if you don't work the following
+exercises, make sure you thoroughly understand the statements of
+the lemmas! (Some of the proofs require the functional
+extensionality axiom, which is discussed in the [Logic]
+chapter and included in the Agda standard library.)
+
+#### Exercise: 1 star, optional (apply-empty)
+First, the empty map returns its default element for all keys:
+
+\begin{code}
+ postulate
+ apply-empty : ∀ {A x v} → empty {A} v x ≡ v
+\end{code}
+
+
+\begin{code}
+ apply-empty′ : ∀ {A x v} → empty {A} v x ≡ v
+ apply-empty′ = refl
+\end{code}
+
+
+#### Exercise: 2 stars, optional (update-eq)
+Next, if we update a map $$m$$ at a key $$x$$ with a new value $$v$$
+and then look up $$x$$ in the map resulting from the `update`, we get
+back $$v$$:
+
+\begin{code}
+ postulate
+ update-eq : ∀ {A v} (m : TotalMap A) (x : Id)
+ → (update m x v) x ≡ v
+\end{code}
+
+
+\begin{code}
+ update-eq′ : ∀ {A v} (m : TotalMap A) (x : Id)
+ → (update m x v) x ≡ v
+ update-eq′ m x with x ≟ x
+ ... | yes x=x = refl
+ ... | no x≠x = ⊥-elim (x≠x refl)
+\end{code}
+
+
+#### Exercise: 2 stars, optional (update-neq)
+On the other hand, if we update a map $$m$$ at a key $$x$$ and
+then look up a _different_ key $$y$$ in the resulting map, we get
+the same result that $$m$$ would have given:
+
+\begin{code}
+ update-neq : ∀ {A v} (m : TotalMap A) (x y : Id)
+ → x ≢ y → (update m x v) y ≡ m y
+ update-neq m x y x≠y with x ≟ y
+ ... | yes x=y = ⊥-elim (x≠y x=y)
+ ... | no _ = refl
+\end{code}
+
+#### Exercise: 2 stars, optional (update-shadow)
+If we update a map $$m$$ at a key $$x$$ with a value $$v_1$$ and then
+update again with the same key $$x$$ and another value $$v_2$$, the
+resulting map behaves the same (gives the same result when applied
+to any key) as the simpler map obtained by performing just
+the second `update` on $$m$$:
+
+\begin{code}
+ postulate
+ update-shadow : ∀ {A v1 v2} (m : TotalMap A) (x y : Id)
+ → (update (update m x v1) x v2) y ≡ (update m x v2) y
+\end{code}
+
+
+\begin{code}
+ update-shadow′ : ∀ {A v1 v2} (m : TotalMap A) (x y : Id)
+ → (update (update m x v1) x v2) y ≡ (update m x v2) y
+ update-shadow′ m x y
+ with x ≟ y
+ ... | yes x=y = refl
+ ... | no x≠y = update-neq m x y x≠y
+\end{code}
+
+
+#### Exercise: 2 stars (update-same)
+Prove the following theorem, which states that if we update a map to
+assign key $$x$$ the same value as it already has in $$m$$, then the
+result is equal to $$m$$:
+
+\begin{code}
+ postulate
+ update-same : ∀ {A} (m : TotalMap A) (x y : Id)
+ → (update m x (m x)) y ≡ m y
+\end{code}
+
+
+\begin{code}
+ update-same′ : ∀ {A} (m : TotalMap A) (x y : Id)
+ → (update m x (m x)) y ≡ m y
+ update-same′ m x y
+ with x ≟ y
+ ... | yes x=y rewrite x=y = refl
+ ... | no x≠y = refl
+\end{code}
+
+
+#### Exercise: 3 stars, recommended (update-permute)
+Prove one final property of the `update` function: If we update a map
+$$m$$ at two distinct keys, it doesn't matter in which order we do the
+updates.
+
+\begin{code}
+ postulate
+ update-permute : ∀ {A v1 v2} (m : TotalMap A) (x1 x2 y : Id)
+ → x1 ≢ x2 → (update (update m x2 v2) x1 v1) y
+ ≡ (update (update m x1 v1) x2 v2) y
+\end{code}
+
+
+\begin{code}
+ update-permute′ : ∀ {A v1 v2} (m : TotalMap A) (x1 x2 y : Id)
+ → x1 ≢ x2 → (update (update m x2 v2) x1 v1) y
+ ≡ (update (update m x1 v1) x2 v2) y
+ update-permute′ {A} {v1} {v2} m x1 x2 y x1≠x2
+ with x1 ≟ y | x2 ≟ y
+ ... | yes x1=y | yes x2=y = ⊥-elim (x1≠x2 (trans x1=y (sym x2=y)))
+ ... | no x1≠y | yes x2=y rewrite x2=y = update-eq′ m y
+ ... | yes x1=y | no x2≠y rewrite x1=y = sym (update-eq′ m y)
+ ... | no x1≠y | no x2≠y =
+ trans (update-neq m x2 y x2≠y) (sym (update-neq m x1 y x1≠y))
+\end{code}
+
+
+## Partial maps
+
+Finally, we define _partial maps_ on top of total maps. A partial
+map with elements of type `A` is simply a total map with elements
+of type `Maybe A` and default element `nothing`.
+
+\begin{code}
+PartialMap : Set → Set
+PartialMap A = TotalMap (Maybe A)
+\end{code}
+
+\begin{code}
+module PartialMap where
+\end{code}
+
+\begin{code}
+ empty : ∀ {A} → PartialMap A
+ empty = TotalMap.empty nothing
+\end{code}
+
+\begin{code}
+ update : ∀ {A} (m : PartialMap A) (x : Id) (v : A) → PartialMap A
+ update m x v = TotalMap.update m x (just v)
+\end{code}
+
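+By analogy with `examplemap` above, here is a small partial map together
+with two sanity checks; the names are hypothetical and not part of the
+original development.
+
+\begin{code}
+ examplepmap : PartialMap Bool
+ examplepmap = update (update empty (id 1) false) (id 3) true
+
+ test_examplepmap1 : examplepmap (id 1) ≡ just false
+ test_examplepmap1 = refl
+
+ test_examplepmap2 : examplepmap (id 2) ≡ nothing
+ test_examplepmap2 = refl
+\end{code}
+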
+We can now lift all of the basic lemmas about total maps to
+partial maps.
+
+\begin{code}
+ update-eq : ∀ {A v} (m : PartialMap A) (x : Id)
+ → (update m x v) x ≡ just v
+ update-eq m x = TotalMap.update-eq m x
+\end{code}
+
+\begin{code}
+ update-neq : ∀ {A v} (m : PartialMap A) (x y : Id)
+ → x ≢ y → (update m x v) y ≡ m y
+ update-neq m x y x≠y = TotalMap.update-neq m x y x≠y
+\end{code}
+
+\begin{code}
+ update-shadow : ∀ {A v1 v2} (m : PartialMap A) (x y : Id)
+ → (update (update m x v1) x v2) y ≡ (update m x v2) y
+ update-shadow m x y = TotalMap.update-shadow m x y
+\end{code}
+
+\begin{code}
+ update-same : ∀ {A v} (m : PartialMap A) (x y : Id)
+ → m x ≡ just v
+ → (update m x v) y ≡ m y
+ update-same m x y mx=v rewrite sym mx=v = TotalMap.update-same m x y
+\end{code}
+
+\begin{code}
+ update-permute : ∀ {A v1 v2} (m : PartialMap A) (x1 x2 y : Id)
+ → x1 ≢ x2 → (update (update m x2 v2) x1 v1) y
+ ≡ (update (update m x1 v1) x2 v2) y
+ update-permute m x1 x2 y x1≠x2 = TotalMap.update-permute m x1 x2 y x1≠x2
+\end{code}
diff --git a/Maps.md b/Maps.md
new file mode 100644
index 00000000..8902d3aa
--- /dev/null
+++ b/Maps.md
@@ -0,0 +1,4467 @@
+---
+title : "Maps: Total and Partial Maps"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/Maps.html"
+---
+
+# Maps: Total and Partial Maps
+
+Maps (or dictionaries) are ubiquitous data structures, both in
+software construction generally and in the theory of programming
+languages in particular; we're going to need them in many places in
+the coming chapters. They also make a nice case study using ideas
+we've seen in previous chapters, including building data structures
+out of higher-order functions (from [Basics](sf/Basics.html)
+and [Poly](sf/Poly.html)) and the use of reflection to streamline
+proofs (from [IndProp](sf/IndProp.html)).
+
+We'll define two flavors of maps: _total_ maps, which include a
+"default" element to be returned when a key being looked up
+doesn't exist, and _partial_ maps, which return a `Maybe` to
+indicate success or failure. The latter is defined in terms of
+the former, using `nothing` as the default element.
+
+## The Agda Standard Library
+
+One small digression before we start.
+
+Unlike the chapters we have seen so far, this one does not
+import the chapter before it (and, transitively, all the
+earlier chapters). Instead, in this chapter and from now on,
+we're going to import the definitions and theorems we need
+directly from Agda's standard library. You should not notice
+much difference, though, because we've been careful to name our
+own definitions and theorems the same as their counterparts in the
+standard library, wherever they overlap.
+
+
+open import Function using (_∘_)
+open import Data.Bool using (Bool; true; false)
+open import Data.Empty using (⊥; ⊥-elim)
+open import Data.Maybe using (Maybe; just; nothing)
+open import Data.Nat using (ℕ)
+open import Relation.Nullary using (Dec; yes; no)
+open import Relation.Binary.PropositionalEquality
+
+
+Documentation for the standard library can be found at
+.
+
+## Identifiers
+
+First, we need a type for the keys that we use to index into our
+maps. For this purpose, we again use the type `Id` from the
+[Lists](sf/Lists.html) chapter. To make this chapter self-contained,
+we repeat its definition here, together with the equality comparison
+function for ids and its fundamental property.
+
+
+data Id : Set where
+ id : ℕ → Id
+
+
+
+_≟_ : (x y : Id) → Dec (x ≡ y)
+id x ≟ id y with x Data.Nat.≟ y
+id x ≟ id y | yes x=y rewrite x=y = yes refl
+id x ≟ id y | no x≠y = no (x≠y ∘ id-inj)
+ where
+ id-inj : ∀ {x y} → id x ≡ id y → x ≡ y
+ id-inj refl = refl
+
+
+## Total Maps
+
+Our main job in this chapter will be to build a definition of
+partial maps that is similar in behavior to the one we saw in the
+[Lists](sf/Lists.html) chapter, plus accompanying lemmas about their
+behavior.
+
+This time around, though, we're going to use _functions_, rather
+than lists of key-value pairs, to build maps. The advantage of
+this representation is that it offers a more _extensional_ view of
+maps, where two maps that respond to queries in the same way will
+be represented as literally the same thing (the same function),
+rather than just "equivalent" data structures. This, in turn,
+simplifies proofs that use maps.
+
+We build partial maps in two steps. First, we define a type of
+_total maps_ that return a default value when we look up a key
+that is not present in the map.
+
+
+TotalMap : Set → Set
+TotalMap A = Id → A
+
+
+Intuitively, a total map over an element type $$A$$ _is_ just a
+function that can be used to look up ids, yielding $$A$$s.
+
+
+module TotalMap where
+
+
+The function `empty` yields an empty total map, given a
+default element; this map always returns the default element when
+applied to any id.
+
+
+ empty : ∀ {A} → A → TotalMap A
+ empty x = λ _ → x
+
+
+More interesting is the `update` function, which (as before) takes
+a map $$m$$, a key $$x$$, and a value $$v$$ and returns a new map that
+takes $$x$$ to $$v$$ and takes every other key to whatever $$m$$ does.
+
+
+ update : ∀ {A} → TotalMap A -> Id -> A -> TotalMap A
+ update m x v y with x ≟ y
+ ... | yes x=y = v
+ ... | no x≠y = m y
+
+
+This definition is a nice example of higher-order programming.
+The `update` function takes a _function_ $$m$$ and yields a new
+function $$\lambda x'.\dots$$ that behaves like the desired map.
+
+For example, we can build a map taking ids to bools, where `id
+3` is mapped to `true` and every other key is mapped to `false`,
+like this:
+
+
+ examplemap : TotalMap Bool
+ examplemap = update (update (empty false) (id 1) false) (id 3) true
+
+
+This completes the definition of total maps. Note that we don't
+need to define a `find` operation because it is just function
+application!
+
+
+ test_examplemap0 : examplemap (id 0) ≡ false
+ test_examplemap0 = refl
+
+ test_examplemap1 : examplemap (id 1) ≡ false
+ test_examplemap1 = refl
+
+ test_examplemap2 : examplemap (id 2) ≡ false
+ test_examplemap2 = refl
+
+ test_examplemap3 : examplemap (id 3) ≡ true
+ test_examplemap3 = refl
+
+
+To use maps in later chapters, we'll need several fundamental
+facts about how they behave. Even if you don't work the following
+exercises, make sure you thoroughly understand the statements of
+the lemmas! (Some of the proofs require the functional
+extensionality axiom, which is discussed in the [Logic]
+chapter and included in the Agda standard library.)
+
+#### Exercise: 1 star, optional (apply-empty)
+First, the empty map returns its default element for all keys:
+
+
+ postulate
+ apply-empty : ∀ {A x v} → empty {A} v x ≡ v
+
+
+
+
+#### Exercise: 2 stars, optional (update-eq)
+Next, if we update a map $$m$$ at a key $$x$$ with a new value $$v$$
+and then look up $$x$$ in the map resulting from the `update`, we get
+back $$v$$:
+
+
+ postulate
+ update-eq : ∀ {A v} (m : TotalMap A) (x : Id)
+ → (update m x v) x ≡ v
+
+
+
+
+#### Exercise: 2 stars, optional (update-neq)
+On the other hand, if we update a map $$m$$ at a key $$x$$ and
+then look up a _different_ key $$y$$ in the resulting map, we get
+the same result that $$m$$ would have given:
+
+
+ update-neq : ∀ {A v} (m : TotalMap A) (x y : Id)
+ → x ≢ y → (update m x v) y ≡ m y
+ update-neq m x y x≠y with x ≟ y
+ ... | yes x=y = ⊥-elim (x≠y x=y)
+ ... | no _ = refl
+
+
+#### Exercise: 2 stars, optional (update-shadow)
+If we update a map $$m$$ at a key $$x$$ with a value $$v_1$$ and then
+update again with the same key $$x$$ and another value $$v_2$$, the
+resulting map behaves the same (gives the same result when applied
+to any key) as the simpler map obtained by performing just
+the second `update` on $$m$$:
+
+
+ postulate
+ update-shadow : ∀ {A v1 v2} (m : TotalMap A) (x y : Id)
+ → (update (update m x v1) x v2) y ≡ (update m x v2) y
+
+
+
+
+#### Exercise: 2 stars (update-same)
+Prove the following theorem, which states that if we update a map to
+assign key $$x$$ the same value as it already has in $$m$$, then the
+result is equal to $$m$$:
+
+
+ postulate
+ update-same : ∀ {A} (m : TotalMap A) (x y : Id)
+ → (update m x (m x)) y ≡ m y
+
+
+
+
+#### Exercise: 3 stars, recommended (update-permute)
+Prove one final property of the `update` function: If we update a map
+$$m$$ at two distinct keys, it doesn't matter in which order we do the
+updates.
+
+
+ postulate
+ update-permute : ∀ {A v1 v2} (m : TotalMap A) (x1 x2 y : Id)
+ → x1 ≢ x2 → (update (update m x2 v2) x1 v1) y
+ ≡ (update (update m x1 v1) x2 v2) y
+
+
+
+
+## Partial maps
+
+Finally, we define _partial maps_ on top of total maps. A partial
+map with elements of type `A` is simply a total map with elements
+of type `Maybe A` and default element `nothing`.
+
+
+PartialMap : Set → Set
+PartialMap A = TotalMap (Maybe A)
+
+
+
+module PartialMap where
+
+
+
+ empty : ∀ {A} → PartialMap A
+ empty = TotalMap.empty nothing
+
+
+
+ update : ∀ {A} (m : PartialMap A) (x : Id) (v : A) → PartialMap A
+ update m x v = TotalMap.update m x (just v)
+
+
+We can now lift all of the basic lemmas about total maps to
+partial maps.
+
+
+ update-eq : ∀ {A v} (m : PartialMap A) (x : Id)
+ → (update m x v) x ≡ just v
+ update-eq m x = TotalMap.update-eq m x
+
+
+
+ update-neq : ∀ {A v} (m : PartialMap A) (x y : Id)
+ → x ≢ y → (update m x v) y ≡ m y
+ update-neq m x y x≠y = TotalMap.update-neq m x y x≠y
+
+
+
+ update-shadow : ∀ {A v1 v2} (m : PartialMap A) (x y : Id)
+ → (update (update m x v1) x v2) y ≡ (update m x v2) y
+ update-shadow m x y = TotalMap.update-shadow m x y
+
+
+
+ update-same : ∀ {A v} (m : PartialMap A) (x y : Id)
+ → m x ≡ just v
+ → (update m x v) y ≡ m y
+ update-same m x y mx=v rewrite sym mx=v = TotalMap.update-same m x y
+
+
+
+ update-permute : ∀ {A v1 v2} (m : PartialMap A) (x1 x2 y : Id)
+ → x1 ≢ x2 → (update (update m x2 v2) x1 v1) y
+ ≡ (update (update m x1 v1) x2 v2) y
+ update-permute m x1 x2 y x1≠x2 = TotalMap.update-permute m x1 x2 y x1≠x2
+
diff --git a/Stlc.lagda b/Stlc.lagda
new file mode 100644
index 00000000..10416197
--- /dev/null
+++ b/Stlc.lagda
@@ -0,0 +1,735 @@
+---
+title : "Stlc: The Simply Typed Lambda-Calculus"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/Stlc.html"
+---
+
+\begin{code}
+module Stlc where
+\end{code}
+
+
+\begin{code}
+open import Maps using (Id; id; _≟_; PartialMap; module PartialMap)
+open import Data.Empty using (⊥; ⊥-elim)
+open import Data.Maybe using (Maybe; just; nothing)
+open import Data.Nat using (ℕ; suc; zero; _+_)
+open import Data.Product using (∃; ∄; _,_)
+open import Function using (_∘_; _$_)
+open import Relation.Nullary using (Dec; yes; no)
+open import Relation.Binary.PropositionalEquality using (_≡_; _≢_; refl)
+\end{code}
+
+
+# The Simply Typed Lambda-Calculus
+
+The simply typed lambda-calculus (STLC) is a tiny core
+calculus embodying the key concept of _functional abstraction_,
+which shows up in pretty much every real-world programming
+language in some form (functions, procedures, methods, etc.).
+
+We will follow exactly the same pattern as in the previous chapter
+when formalizing this calculus (syntax, small-step semantics,
+typing rules) and its main properties (progress and preservation).
+The new technical challenges arise from the mechanisms of
+_variable binding_ and _substitution_. It will take some
+work to deal with these.
+
+
+## Overview
+
+The STLC is built on some collection of _base types_:
+booleans, numbers, strings, etc. The exact choice of base types
+doesn't matter much---the construction of the language and its
+theoretical properties work out the same no matter what we
+choose---so for the sake of brevity let's take just $$bool$$ for
+the moment. At the end of the chapter we'll see how to add more
+base types, and in later chapters we'll enrich the pure STLC with
+other useful constructs like pairs, records, subtyping, and
+mutable state.
+
+Starting from boolean constants and conditionals, we add three
+things:
+
+ - variables
+ - function abstractions
+ - application
+
+This gives us the following collection of abstract syntax
+constructors (written out first in informal BNF notation---we'll
+formalize it below).
+
+$$
+ \begin{array}{rll}
+ \text{Terms}\;s,t,u
+ ::= & x & \text{variable} \\
+ \mid & \lambda x : A . t & \text{abstraction} \\
+ \mid & s\;t & \text{application} \\
+ \mid & true & \text{constant true} \\
+ \mid & false & \text{constant false} \\
+ \mid & \text{if }s\text{ then }t\text{ else }u & \text{conditional}
+ \end{array}
+$$
+
+In a lambda abstraction $$\lambda x : A . t$$, the variable $$x$$ is called the
+_parameter_ to the function; the term $$t$$ is its _body_. The annotation $$:A$$
+specifies the type of arguments that the function can be applied to.
+
+Some examples:
+
+ - The identity function for booleans:
+
+ $$\lambda x:bool. x$$.
+ - The identity function for booleans, applied to the boolean $$true$$:
+
+ $$(\lambda x:bool. x)\;true$$.
+ - The boolean "not" function:
+
+ $$\lambda x:bool. \text{if }x\text{ then }false\text{ else }true$$.
+ - The constant function that takes every (boolean) argument to $$true$$:
+
+ $$\lambda x:bool. true$$.
+ - A two-argument function that takes two booleans and returns the
+ first one:
+
+ $$\lambda x:bool. \lambda y:bool. x$$.
+
+ As in Agda, a two-argument function is really a
+ one-argument function whose body is also a one-argument function.
+ - A two-argument function that takes two booleans and returns the
+ first one, applied to the booleans $$false$$ and $$true$$:
+
+ $$(\lambda x:bool. \lambda y:bool. x)\;false\;true$$.
+
+ As in Agda, application associates to the left---i.e., this
+ expression is parsed as
+
+ $$((\lambda x:bool. \lambda y:bool. x)\;false)\;true$$.
+
+ - A higher-order function that takes a _function_ $$f$$ (from booleans
+ to booleans) as an argument, applies $$f$$ to $$true$$, and applies
+ $$f$$ again to the result:
+
+ $$\lambda f:bool\rightarrow bool. f\;(f\;true)$$.
+
+ - The same higher-order function, applied to the constantly $$false$$
+ function:
+
+ $$(\lambda f:bool\rightarrow bool. f\;(f\;true))\;(\lambda x:bool. false)$$.
+
+As the last several examples show, the STLC is a language of
+_higher-order_ functions: we can write down functions that take
+other functions as arguments and/or return other functions as
+results.
+
+The STLC doesn't provide any primitive syntax for defining _named_
+functions---all functions are "anonymous." We'll see in chapter
+`MoreStlc` that it is easy to add named functions to what we've
+got---indeed, the fundamental naming and binding mechanisms are
+exactly the same.
+
+The _types_ of the STLC include $$bool$$, which classifies the
+boolean constants $$true$$ and $$false$$ as well as more complex
+computations that yield booleans, plus _arrow types_ that classify
+functions.
+
+$$
+ \text{Types}\;A,B ::= bool \mid A \rightarrow B
+$$
+
+For example:
+
+ - $$\lambda x:bool. false$$ has type $$bool\rightarrow bool$$;
+ - $$\lambda x:bool. x$$ has type $$bool\rightarrow bool$$;
+ - $$(\lambda x:bool. x)\;true$$ has type $$bool$$;
+ - $$\lambda x:bool. \lambda y:bool. x$$ has type
+ $$bool\rightarrow bool\rightarrow bool$$
+ (i.e., $$bool\rightarrow (bool\rightarrow bool)$$)
+ - $$(\lambda x:bool. \lambda y:bool. x)\;false$$ has type $$bool\rightarrow bool$$
+ - $$(\lambda x:bool. \lambda y:bool. x)\;false\;true$$ has type $$bool$$
+
+## Syntax
+
+We begin by formalizing the syntax of the STLC.
+Unfortunately, $$\rightarrow$$ is already used for Agda's function type,
+so we will write STLC's function type as `_⇒_`.
+
+
+### Types
+
+\begin{code}
+data Type : Set where
+ bool : Type
+ _⇒_ : Type → Type → Type
+\end{code}
+
+
+\begin{code}
+infixr 5 _⇒_
+\end{code}
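+
+Since `_⇒_` is declared right-associative, an arrow type written without
+parentheses nests to the right. A small sanity check (the names below are
+hypothetical, not part of the original development):
+
+\begin{code}
+bool-fun : Type
+bool-fun = bool ⇒ bool ⇒ bool
+
+-- With infixr 5, the above is parsed as bool ⇒ (bool ⇒ bool).
+assoc-check : bool-fun ≡ bool ⇒ (bool ⇒ bool)
+assoc-check = refl
+\end{code}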
+
+
+
+### Terms
+
+\begin{code}
+data Term : Set where
+ var : Id → Term
+ app : Term → Term → Term
+ abs : Id → Type → Term → Term
+ true : Term
+ false : Term
+ if_then_else_ : Term → Term → Term → Term
+\end{code}
+
+
+\begin{code}
+infixr 8 if_then_else_
+\end{code}
+
+
+Note that an abstraction $$\lambda x:A.t$$ (formally, `abs x A t`) is
+always annotated with the type $$A$$ of its parameter, in contrast
+to Agda (and other functional languages like ML, Haskell, etc.),
+which use _type inference_ to fill in missing annotations. We're
+not considering type inference here.
+
+Some examples...
+
+\begin{code}
+x = id 0
+y = id 1
+z = id 2
+\end{code}
+
+
+\begin{code}
+{-# DISPLAY zero = x #-}
+{-# DISPLAY suc zero = y #-}
+{-# DISPLAY suc (suc zero) = z #-}
+\end{code}
+
+
+$$\text{idB} = \lambda x:bool. x$$.
+
+\begin{code}
+idB = (abs x bool (var x))
+\end{code}
+
+$$\text{idBB} = \lambda x:bool\rightarrow bool. x$$.
+
+\begin{code}
+idBB = (abs x (bool ⇒ bool) (var x))
+\end{code}
+
+$$\text{idBBBB} = \lambda x:(bool\rightarrow bool)\rightarrow (bool\rightarrow bool). x$$.
+
+\begin{code}
+idBBBB = (abs x ((bool ⇒ bool) ⇒ (bool ⇒ bool)) (var x))
+\end{code}
+
+$$\text{k} = \lambda x:bool. \lambda y:bool. x$$.
+
+\begin{code}
+k = (abs x bool (abs y bool (var x)))
+\end{code}
+
+$$\text{notB} = \lambda x:bool. \text{if }x\text{ then }false\text{ else }true$$.
+
+\begin{code}
+notB = (abs x bool (if (var x) then false else true))
+\end{code}
+
+
+\begin{code}
+{-# DISPLAY abs 0 bool (var 0) = idB #-}
+{-# DISPLAY abs 0 (bool ⇒ bool) (var 0) = idBB #-}
+{-# DISPLAY abs 0 ((bool ⇒ bool) ⇒ (bool ⇒ bool)) (var 0) = idBBBB #-}
+{-# DISPLAY abs 0 bool (abs y bool (var 0)) = k #-}
+{-# DISPLAY abs 0 bool (if (var 0) then false else true) = notB #-}
+\end{code}
+
+
+## Operational Semantics
+
+To define the small-step semantics of STLC terms, we begin,
+as always, by defining the set of values. Next, we define the
+critical notions of _free variables_ and _substitution_, which are
+used in the reduction rule for application expressions. And
+finally we give the small-step relation itself.
+
+### Values
+
+To define the values of the STLC, we have a few cases to consider.
+
+First, for the boolean part of the language, the situation is
+clear: $$true$$ and $$false$$ are the only values. An $$\text{if}$$
+expression is never a value.
+
+Second, an application is clearly not a value: It represents a
+function being invoked on some argument, which clearly still has
+work left to do.
+
+Third, for abstractions, we have a choice:
+
+ - We can say that $$\lambda x:A. t$$ is a value only when $$t$$ is a
+ value---i.e., only if the function's body has been
+ reduced (as much as it can be without knowing what argument it
+ is going to be applied to).
+
+ - Or we can say that $$\lambda x:A. t$$ is always a value, no matter
+ whether $$t$$ is one or not---in other words, we can say that
+ reduction stops at abstractions.
+
+Agda makes the first choice---for example,
+
+\begin{code}
+test_normalizeUnderLambda : (λ (x : ℕ) → 3 + 4) ≡ (λ (x : ℕ) → 7)
+test_normalizeUnderLambda = refl
+\end{code}
+
+Most real-world functional programming languages make the second
+choice---reduction of a function's body only begins when the
+function is actually applied to an argument. We also make the
+second choice here.
+
+\begin{code}
+data Value : Term → Set where
+ abs : ∀ {x A t}
+ → Value (abs x A t)
+ true : Value true
+ false : Value false
+\end{code}
+
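+For instance, the abstraction `idB` defined above is a value under this
+definition; here is a hypothetical sanity check (not part of the original
+development):
+
+\begin{code}
+value-example1 : Value idB
+value-example1 = abs
+\end{code}
+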
+Finally, we must consider what constitutes a _complete_ program.
+
+Intuitively, a "complete program" must not refer to any undefined
+variables. We'll see shortly how to define the _free_ variables
+in a STLC term. A complete program is _closed_---that is, it
+contains no free variables.
+
+Having made the choice not to reduce under abstractions, we don't
+need to worry about whether variables are values, since we'll
+always be reducing programs "from the outside in," and that means
+the small-step relation will always be working with closed terms.
+
+
+### Substitution
+
+Now we come to the heart of the STLC: the operation of
+substituting one term for a variable in another term. This
+operation is used below to define the operational semantics of
+function application, where we will need to substitute the
+argument term for the function parameter in the function's body.
+For example, we reduce
+
+$$(\lambda x:bool. \text{if }x\text{ then }true\text{ else }x)\;false$$
+
+to
+
+$$\text{if }false\text{ then }true\text{ else }false$$
+
+by substituting $$false$$ for the parameter $$x$$ in the body of the
+function.
+
+In general, we need to be able to substitute some given term $$s$$
+for occurrences of some variable $$x$$ in another term $$t$$. In
+informal discussions, this is usually written $$[x:=s]t$$ and
+pronounced "substitute $$x$$ with $$s$$ in $$t$$."
+
+Here are some examples:
+
+ - $$[x:=true](\text{if }x\text{ then }x\text{ else }false)$$
+ yields $$\text{if }true\text{ then }true\text{ else }false$$
+
+ - $$[x:=true]x$$
+ yields $$true$$
+
+ - $$[x:=true](\text{if }x\text{ then }x\text{ else }y)$$
+ yields $$\text{if }true\text{ then }true\text{ else }y$$
+
+ - $$[x:=true]y$$
+ yields $$y$$
+
+ - $$[x:=true]false$$
+ yields $$false$$ (vacuous substitution)
+
+ - $$[x:=true](\lambda y:bool. \text{if }y\text{ then }x\text{ else }false)$$
+ yields $$\lambda y:bool. \text{if }y\text{ then }true\text{ else }false$$
+
+ - $$[x:=true](\lambda y:bool. x)$$
+ yields $$\lambda y:bool. true$$
+
+ - $$[x:=true](\lambda y:bool. y)$$
+ yields $$\lambda y:bool. y$$
+
+ - $$[x:=true](\lambda x:bool. x)$$
+ yields $$\lambda x:bool. x$$
+
+The last example is very important: substituting $$x$$ with $$true$$ in
+$$\lambda x:bool. x$$ does _not_ yield $$\lambda x:bool. true$$! The reason for
+this is that the $$x$$ in the body of $$\lambda x:bool. x$$ is _bound_ by the
+abstraction: it is a new, local name that just happens to be
+spelled the same as some global name $$x$$.
+
+Here is the definition, informally...
+
+$$
+ \begin{aligned}
+ &[x:=s]x &&= s \\
+ &[x:=s]y &&= y \;\{\text{if }x\neq y\} \\
+ &[x:=s](\lambda x:A. t) &&= \lambda x:A. t \\
+ &[x:=s](\lambda y:A. t) &&= \lambda y:A. [x:=s]t \;\{\text{if }x\neq y\} \\
+ &[x:=s](t1\;t2) &&= ([x:=s]t1) ([x:=s]t2) \\
+ &[x:=s]true &&= true \\
+ &[x:=s]false &&= false \\
+ &[x:=s](\text{if }t1\text{ then }t2\text{ else }t3) &&=
+ \text{if }[x:=s]t1\text{ then }[x:=s]t2\text{ else }[x:=s]t3
+ \end{aligned}
+$$
+
+... and formally:
+
+\begin{code}
+[_:=_]_ : Id -> Term -> Term -> Term
+[ x := v ] (var y) with x ≟ y
+... | yes x=y = v
+... | no x≠y = var y
+[ x := v ] (app s t) = app ([ x := v ] s) ([ x := v ] t)
+[ x := v ] (abs y A t) with x ≟ y
+... | yes x=y = abs y A t
+... | no x≠y = abs y A ([ x := v ] t)
+[ x := v ] true = true
+[ x := v ] false = false
+[ x := v ] (if s then t else u) =
+ if [ x := v ] s then [ x := v ] t else [ x := v ] u
+\end{code}
+
+
+\begin{code}
+infix 9 [_:=_]_
+\end{code}
+
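+As a quick sanity check, we can verify the first informal example above
+directly: both sides compute to the same term, so `refl` suffices. (The
+name `subst-example1` is hypothetical, not part of the original
+development.)
+
+\begin{code}
+subst-example1 : [ x := true ] (if (var x) then (var x) else false)
+                   ≡ (if true then true else false)
+subst-example1 = refl
+\end{code}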
+
+_Technical note_: Substitution becomes trickier to define if we
+consider the case where $$s$$, the term being substituted for a
+variable in some other term, may itself contain free variables.
+Since we are only interested here in defining the small-step relation
+on closed terms (i.e., terms like $$\lambda x:bool. x$$ that include
+binders for all of the variables they mention), we can avoid this
+extra complexity here, but it must be dealt with when formalizing
+richer languages.
+
+
+#### Exercise: 3 stars (subst-correct)
+The definition that we gave above defines substitution as a _function_.
+Suppose, instead, we wanted to define substitution as an inductive _relation_.
+We've begun the definition by providing the `data` header and
+one of the constructors; your job is to fill in the rest of the constructors
+and prove that the relation you've defined coincides with the function given
+above.
+\begin{code}
+data [_:=_]_==>_ (x : Id) (s : Term) : Term -> Term -> Set where
+ var1 : [ x := s ] (var x) ==> s
+ {- FILL IN HERE -}
+\end{code}
+
+\begin{code}
+postulate
+ subst-correct : ∀ s x t t'
+ → [ x := s ] t ≡ t'
+ → [ x := s ] t ==> t'
+\end{code}
+
+
+### Reduction
+
+The small-step reduction relation for STLC now follows the
+same pattern as the ones we have seen before. Intuitively, to
+reduce a function application, we first reduce its left-hand
+side (the function) until it becomes an abstraction; then we
+reduce its right-hand side (the argument) until it is also a
+value; and finally we substitute the argument for the bound
+variable in the body of the abstraction. This last rule, written
+informally as
+
+$$
+(\lambda x : A . t) v \Longrightarrow [x:=v]t
+$$
+
+is traditionally called "beta-reduction".
+
+$$
+\begin{array}{cl}
+ \frac{value(v)}{(\lambda x : A . t) v \Longrightarrow [x:=v]t}&(red)\\\\
+ \frac{s \Longrightarrow s'}{s\;t \Longrightarrow s'\;t}&(app1)\\\\
+ \frac{value(v)\quad t \Longrightarrow t'}{v\;t \Longrightarrow v\;t'}&(app2)
+\end{array}
+$$
+
+... plus the usual rules for booleans:
+
+$$
+\begin{array}{cl}
+ \frac{}{(\text{if }true\text{ then }t_1\text{ else }t_2) \Longrightarrow t_1}&(iftrue)\\\\
+ \frac{}{(\text{if }false\text{ then }t_1\text{ else }t_2) \Longrightarrow t_2}&(iffalse)\\\\
+ \frac{s \Longrightarrow s'}{\text{if }s\text{ then }t_1\text{ else }t_2
+    \Longrightarrow \text{if }s'\text{ then }t_1\text{ else }t_2}&(if)
+\end{array}
+$$
+
+Formally:
+
+\begin{code}
+data _==>_ : Term → Term → Set where
+ red : ∀ {x A s t}
+ → Value t
+ → app (abs x A s) t ==> [ x := t ] s
+ app1 : ∀ {s s' t}
+ → s ==> s'
+ → app s t ==> app s' t
+ app2 : ∀ {s t t'}
+ → Value s
+ → t ==> t'
+ → app s t ==> app s t'
+ if : ∀ {s s' t u}
+ → s ==> s'
+ → if s then t else u ==> if s' then t else u
+ iftrue : ∀ {s t}
+ → if true then s else t ==> s
+ iffalse : ∀ {s t}
+ → if false then s else t ==> t
+\end{code}
+
+
+\begin{code}
+infix 1 _==>_
+\end{code}
+
+
+\begin{code}
+data Multi (R : Term → Term → Set) : Term → Term → Set where
+ refl : ∀ {x} -> Multi R x x
+ step : ∀ {x y z} -> R x y -> Multi R y z -> Multi R x z
+\end{code}
+
+\begin{code}
+_==>*_ : Term → Term → Set
+_==>*_ = Multi _==>_
+\end{code}
+
+
+\begin{code}
+{-# DISPLAY Multi _==>_ = _==>*_ #-}
+\end{code}
+
+
+### Examples
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x) (\lambda x:bool. x)) \Longrightarrow^* (\lambda x:bool. x)$$.
+
+\begin{code}
+step-example1 : (app idBB idB) ==>* idB
+step-example1 = step (red abs)
+ $ refl
+\end{code}
+
+Example:
+
+$$(\lambda x:bool\rightarrow bool. x)\;((\lambda x:bool\rightarrow bool. x)\;(\lambda x:bool. x)) \Longrightarrow^* (\lambda x:bool. x)$$.
+
+\begin{code}
+step-example2 : (app idBB (app idBB idB)) ==>* idB
+step-example2 = step (app2 abs (red abs))
+ $ step (red abs)
+ $ refl
+\end{code}
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x)\;(\lambda x:bool. \text{if }x\text{ then }false\text{ else }true))\;true\Longrightarrow^* false$$.
+
+\begin{code}
+step-example3 : (app (app idBB notB) true) ==>* false
+step-example3 = step (app1 (red abs))
+ $ step (red true)
+ $ step iftrue
+ $ refl
+\end{code}
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x)\;((\lambda x:bool. \text{if }x\text{ then }false\text{ else }true)\;true))\Longrightarrow^* false$$.
+
+\begin{code}
+step-example4 : (app idBB (app notB true)) ==>* false
+step-example4 = step (app2 abs (red true))
+ $ step (app2 abs iftrue)
+ $ step (red false)
+ $ refl
+\end{code}
+
+#### Exercise: 2 stars (step-example5)
+
+\begin{code}
+postulate
+ step-example5 : (app (app idBBBB idBB) idB) ==>* idB
+\end{code}
+
+## Typing
+
+Next we consider the typing relation of the STLC.
+
+### Contexts
+
+_Question_: What is the type of the term "$$x\;y$$"?
+
+_Answer_: It depends on the types of $$x$$ and $$y$$!
+
+I.e., in order to assign a type to a term, we need to know
+what assumptions we should make about the types of its free
+variables.
+
+This leads us to a three-place _typing judgment_, informally
+written $$\Gamma\vdash t : A$$, where $$\Gamma$$ is a
+"typing context"---a mapping from variables to their types.
+
+Informally, we'll write $$\Gamma , x:A$$ for "extend the partial function
+$$\Gamma$$ to also map $$x$$ to $$A$$." Formally, we use the function `_,_∶_`
+(or "update") to add a binding to a context.
+
+\begin{code}
+Ctxt : Set
+Ctxt = PartialMap Type
+
+∅ : Ctxt
+∅ = PartialMap.empty
+
+_,_∶_ : Ctxt -> Id -> Type -> Ctxt
+_,_∶_ = PartialMap.update
+\end{code}
+
+
+\begin{code}
+infixl 3 _,_∶_
+\end{code}
+
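+Looking up a variable in an extended context then computes as expected;
+the following two checks are hypothetical examples, not part of the
+original development.
+
+\begin{code}
+ctxt-example1 : (∅ , x ∶ bool) x ≡ just bool
+ctxt-example1 = refl
+
+ctxt-example2 : (∅ , x ∶ bool) y ≡ nothing
+ctxt-example2 = refl
+\end{code}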
+
+
+### Typing Relation
+
+$$
+ \begin{array}{cl}
+ \frac{\Gamma\;x = A}{\Gamma\vdash{x:A}}&(var)\\\\
+ \frac{\Gamma,x:A\vdash t:B}{\Gamma\vdash (\lambda x:A.t) : A\rightarrow B}&(abs)\\\\
+ \frac{\Gamma\vdash s:A\rightarrow B\quad\Gamma\vdash t:A}{\Gamma\vdash (s\;t) : B}&(app)\\\\
+ \frac{}{\Gamma\vdash true : bool}&(true)\\\\
+    \frac{}{\Gamma\vdash false : bool}&(false)\\\\
+ \frac{\Gamma\vdash s:bool \quad \Gamma\vdash t1:A \quad \Gamma\vdash t2:A}{\Gamma\vdash\text{if }s\text{ then }t1\text{ else }t2 : A}&(if)
+ \end{array}
+$$
+
+We can read the three-place relation $$\Gamma\vdash (t : A)$$ as:
+"to the term $$t$$ we can assign the type $$A$$ using as types for
+the free variables of $$t$$ the ones specified in the context
+$$\Gamma$$."
+
+\begin{code}
+data _⊢_∶_ : Ctxt -> Term -> Type -> Set where
+ var : ∀ {Γ} x {A}
+ → Γ x ≡ just A
+ → Γ ⊢ var x ∶ A
+ abs : ∀ {Γ} {x} {A} {B} {s}
+ → Γ , x ∶ A ⊢ s ∶ B
+ → Γ ⊢ abs x A s ∶ A ⇒ B
+ app : ∀ {Γ} {A} {B} {s} {t}
+ → Γ ⊢ s ∶ A ⇒ B
+ → Γ ⊢ t ∶ A
+ → Γ ⊢ app s t ∶ B
+ true : ∀ {Γ}
+ → Γ ⊢ true ∶ bool
+ false : ∀ {Γ}
+ → Γ ⊢ false ∶ bool
+ if_then_else_ : ∀ {Γ} {s} {t} {u} {A}
+ → Γ ⊢ s ∶ bool
+ → Γ ⊢ t ∶ A
+ → Γ ⊢ u ∶ A
+ → Γ ⊢ if s then t else u ∶ A
+\end{code}
+
+
+\begin{code}
+infix 1 _⊢_∶_
+\end{code}
+
+
+
+### Examples
+
+\begin{code}
+typing-example1 : ∅ ⊢ idB ∶ bool ⇒ bool
+typing-example1 = abs (var x refl)
+\end{code}
+
+Another example:
+
+$$\varnothing\vdash \lambda x:bool. \lambda y:bool\rightarrow bool. y\;(y\;x) : bool\rightarrow (bool\rightarrow bool)\rightarrow bool$$.
+
+\begin{code}
+typing-example2 : ∅ ⊢
+ (abs x bool
+ (abs y (bool ⇒ bool)
+ (app (var y)
+ (app (var y) (var x)))))
+ ∶ (bool ⇒ (bool ⇒ bool) ⇒ bool)
+typing-example2 =
+ (abs
+ (abs
+ (app (var y refl)
+ (app (var y refl) (var x refl) ))))
+\end{code}
+
+#### Exercise: 2 stars (typing-example3)
+Formally prove the following typing derivation holds:
+
+$$\exists A, \varnothing\vdash \lambda x:bool\rightarrow bool. \lambda y:bool\rightarrow bool. \lambda z:bool. y\;(x\;z) : A$$.
+
+\begin{code}
+postulate
+ typing-example3 : ∃ λ A → ∅ ⊢
+ (abs x (bool ⇒ bool)
+ (abs y (bool ⇒ bool)
+ (abs z bool
+ (app (var y) (app (var x) (var z)))))) ∶ A
+\end{code}
+
+We can also show that terms are _not_ typable. For example, let's
+formally check that there is no typing derivation assigning a type
+to the term $$\lambda x:bool. \lambda y:bool. x\;y$$---i.e.,
+
+
+$$\nexists A, \varnothing\vdash \lambda x:bool. \lambda y:bool. x\;y : A$$.
+
+\begin{code}
+postulate
+ typing-nonexample1 : ∄ λ A → ∅ ⊢
+ (abs x bool
+ (abs y bool
+ (app (var x) (var y)))) ∶ A
+\end{code}
+
+#### Exercise: 3 stars, optional (typing-nonexample2)
+Another nonexample:
+
+$$\nexists A, \exists B, \varnothing\vdash \lambda x:A. x\;x : B$$.
+
+\begin{code}
+postulate
+ typing-nonexample2 : ∄ λ A → ∃ λ B → ∅ ⊢
+ (abs x B (app (var x) (var x))) ∶ A
+\end{code}
diff --git a/Stlc.md b/Stlc.md
new file mode 100644
index 00000000..3610cb48
--- /dev/null
+++ b/Stlc.md
@@ -0,0 +1,5497 @@
+---
+title : "Stlc: The Simply Typed Lambda-Calculus"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/Stlc.html"
+---
+
+
+module Stlc where
+
+
+
+
+# The Simply Typed Lambda-Calculus
+
+The simply typed lambda-calculus (STLC) is a tiny core
+calculus embodying the key concept of _functional abstraction_,
+which shows up in pretty much every real-world programming
+language in some form (functions, procedures, methods, etc.).
+
+We will follow exactly the same pattern as in the previous chapter
+when formalizing this calculus (syntax, small-step semantics,
+typing rules) and its main properties (progress and preservation).
+The new technical challenges arise from the mechanisms of
+_variable binding_ and _substitution_. It will take some
+work to deal with these.
+
+
+## Overview
+
+The STLC is built on some collection of _base types_:
+booleans, numbers, strings, etc. The exact choice of base types
+doesn't matter much---the construction of the language and its
+theoretical properties work out the same no matter what we
+choose---so for the sake of brevity let's take just $$bool$$ for
+the moment. At the end of the chapter we'll see how to add more
+base types, and in later chapters we'll enrich the pure STLC with
+other useful constructs like pairs, records, subtyping, and
+mutable state.
+
+Starting from boolean constants and conditionals, we add three
+things:
+
+ - variables
+ - function abstractions
+ - application
+
+This gives us the following collection of abstract syntax
+constructors (written out first in informal BNF notation---we'll
+formalize it below).
+
+$$
+ \begin{array}{rll}
+ \text{Terms}\;s,t,u
+ ::= & x & \text{variable} \\
+ \mid & \lambda x : A . t & \text{abstraction} \\
+ \mid & s\;t & \text{application} \\
+ \mid & true & \text{constant true} \\
+ \mid & false & \text{constant false} \\
+ \mid & \text{if }s\text{ then }t\text{ else }u & \text{conditional}
+ \end{array}
+$$
+
+In a lambda abstraction $$\lambda x : A . t$$, the variable $$x$$ is called the
+_parameter_ to the function; the term $$t$$ is its _body_. The annotation $$:A$$
+specifies the type of arguments that the function can be applied to.
+
+Some examples:
+
+ - The identity function for booleans:
+
+ $$\lambda x:bool. x$$.
+ - The identity function for booleans, applied to the boolean $$true$$:
+
+ $$(\lambda x:bool. x)\;true$$.
+ - The boolean "not" function:
+
+ $$\lambda x:bool. \text{if }x\text{ then }false\text{ else }true$$.
+ - The constant function that takes every (boolean) argument to $$true$$:
+
+ $$\lambda x:bool. true$$.
+ - A two-argument function that takes two booleans and returns the
+ first one:
+
+ $$\lambda x:bool. \lambda y:bool. x$$.
+
+ As in Agda, a two-argument function is really a
+ one-argument function whose body is also a one-argument function.
+ - A two-argument function that takes two booleans and returns the
+ first one, applied to the booleans $$false$$ and $$true$$:
+
+ $$(\lambda x:bool. \lambda y:bool. x)\;false\;true$$.
+
+ As in Agda, application associates to the left---i.e., this
+ expression is parsed as
+
+ $$((\lambda x:bool. \lambda y:bool. x)\;false)\;true$$.
+
+ - A higher-order function that takes a _function_ $$f$$ (from booleans
+ to booleans) as an argument, applies $$f$$ to $$true$$, and applies
+ $$f$$ again to the result:
+
+ $$\lambda f:bool\rightarrow bool. f\;(f\;true)$$.
+
+ - The same higher-order function, applied to the constantly $$false$$
+ function:
+
+ $$(\lambda f:bool\rightarrow bool. f\;(f\;true))\;(\lambda x:bool. false)$$.
+
+As the last several examples show, the STLC is a language of
+_higher-order_ functions: we can write down functions that take
+other functions as arguments and/or return other functions as
+results.
+
+The STLC doesn't provide any primitive syntax for defining _named_
+functions---all functions are "anonymous." We'll see in chapter
+`MoreStlc` that it is easy to add named functions to what we've
+got---indeed, the fundamental naming and binding mechanisms are
+exactly the same.
+
+The _types_ of the STLC include $$bool$$, which classifies the
+boolean constants $$true$$ and $$false$$ as well as more complex
+computations that yield booleans, plus _arrow types_ that classify
+functions.
+
+$$
+ \text{Types}\;A,B ::= bool \mid A \rightarrow B
+$$
+
+For example:
+
+ - $$\lambda x:bool. false$$ has type $$bool\rightarrow bool$$;
+ - $$\lambda x:bool. x$$ has type $$bool\rightarrow bool$$;
+ - $$(\lambda x:bool. x)\;true$$ has type $$bool$$;
+ - $$\lambda x:bool. \lambda y:bool. x$$ has type
+ $$bool\rightarrow bool\rightarrow bool$$
+ (i.e., $$bool\rightarrow (bool\rightarrow bool)$$)
+ - $$(\lambda x:bool. \lambda y:bool. x)\;false$$ has type $$bool\rightarrow bool$$
+ - $$(\lambda x:bool. \lambda y:bool. x)\;false\;true$$ has type $$bool$$
+
+## Syntax
+
+We begin by formalizing the syntax of the STLC.
+Unfortunately, $$\rightarrow$$ is already used for Agda's function type,
+so we will write the STLC's function type as `_⇒_`.
+
+
+### Types
+
+
+data Type : Set where
+ bool : Type
+ _⇒_ : Type → Type → Type
+
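+For example, the type of functions from booleans to booleans,
+$$bool\rightarrow bool$$, is written with these constructors as follows
+(the name `boolToBool` is ours, purely for illustration):
+
+boolToBool : Type
+boolToBool = bool ⇒ bool
+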
+
+
+
+
+### Terms
+
+
+data Term : Set where
+ var : Id → Term
+ app : Term → Term → Term
+ abs : Id → Type → Term → Term
+ true : Term
+ false : Term
+ if_then_else_ : Term → Term → Term → Term
+
+
+
+
+Note that an abstraction $$\lambda x:A.t$$ (formally, `abs x A t`) is
+always annotated with the type $$A$$ of its parameter, in contrast
+to Agda (and other functional languages like ML, Haskell, etc.),
+which use _type inference_ to fill in missing annotations. We're
+not considering type inference here.
+
+Some examples...
+
+
+x = id 0
+y = id 1
+z = id 2
+
+
+
+
+$$\text{idB} = \lambda x:bool. x$$.
+
+
+idB = (abs x bool (var x))
+
+
+$$\text{idBB} = \lambda x:bool\rightarrow bool. x$$.
+
+
+idBB = (abs x (bool ⇒ bool) (var x))
+
+
+$$\text{idBBBB} = \lambda x:(bool\rightarrow bool)\rightarrow (bool\rightarrow bool). x$$.
+
+
+idBBBB = (abs x ((bool ⇒ bool) ⇒ (bool ⇒ bool)) (var x))
+
+
+$$\text{k} = \lambda x:bool. \lambda y:bool. x$$.
+
+
+k = (abs x bool (abs y bool (var x)))
+
+
+$$\text{notB} = \lambda x:bool. \text{if }x\text{ then }false\text{ else }true$$.
+
+
+notB = (abs x bool (if (var x) then false else true))
+
+
+
+
+## Operational Semantics
+
+To define the small-step semantics of STLC terms, we begin,
+as always, by defining the set of values. Next, we define the
+critical notions of _free variables_ and _substitution_, which are
+used in the reduction rule for application expressions. And
+finally we give the small-step relation itself.
+
+### Values
+
+To define the values of the STLC, we have a few cases to consider.
+
+First, for the boolean part of the language, the situation is
+clear: $$true$$ and $$false$$ are the only values. An $$\text{if}$$
+expression is never a value.
+
+Second, an application is clearly not a value: It represents a
+function being invoked on some argument, which clearly still has
+work left to do.
+
+Third, for abstractions, we have a choice:
+
+ - We can say that $$\lambda x:A. t$$ is a value only when $$t$$ is a
+ value---i.e., only if the function's body has been
+ reduced (as much as it can be without knowing what argument it
+ is going to be applied to).
+
+ - Or we can say that $$\lambda x:A. t$$ is always a value, no matter
+ whether $$t$$ is one or not---in other words, we can say that
+ reduction stops at abstractions.
+
+Agda makes the first choice---for example,
+
+
+test_normalizeUnderLambda : (λ (x : ℕ) → 3 + 4) ≡ (λ (x : ℕ) → 7)
+test_normalizeUnderLambda = refl
+
+
+Most real-world functional programming languages make the second
+choice---reduction of a function's body only begins when the
+function is actually applied to an argument. We also make the
+second choice here.
+
+
+data Value : Term → Set where
+ abs : ∀ {x A t}
+ → Value (abs x A t)
+ true : Value true
+ false : Value false
+
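+For instance (these two checks are ours, added for illustration), the
+two-argument function `k` defined above and the constant `true` are
+both values:
+
+value-k : Value k
+value-k = abs
+
+value-true : Value true
+value-true = true
+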
+
+Finally, we must consider what constitutes a _complete_ program.
+
+Intuitively, a "complete program" must not refer to any undefined
+variables. We'll see shortly how to define the _free_ variables
+in an STLC term. A complete program is _closed_---that is, it
+contains no free variables.
+
+Having made the choice not to reduce under abstractions, we don't
+need to worry about whether variables are values, since we'll
+always be reducing programs "from the outside in," and that means
+the small-step relation will always be working with closed terms.
+
+
+### Substitution
+
+Now we come to the heart of the STLC: the operation of
+substituting one term for a variable in another term. This
+operation is used below to define the operational semantics of
+function application, where we will need to substitute the
+argument term for the function parameter in the function's body.
+For example, we reduce
+
+$$(\lambda x:bool. \text{if }x\text{ then }true\text{ else }x)\;false$$
+
+to
+
+$$\text{if }false\text{ then }true\text{ else }false$$
+
+by substituting $$false$$ for the parameter $$x$$ in the body of the
+function.
+
+In general, we need to be able to substitute some given term $$s$$
+for occurrences of some variable $$x$$ in another term $$t$$. In
+informal discussions, this is usually written $$[x:=s]t$$ and
+pronounced "substitute $$x$$ with $$s$$ in $$t$$."
+
+Here are some examples:
+
+ - $$[x:=true](\text{if }x\text{ then }x\text{ else }false)$$
+ yields $$\text{if }true\text{ then }true\text{ else }false$$
+
+ - $$[x:=true]x$$
+ yields $$true$$
+
+ - $$[x:=true](\text{if }x\text{ then }x\text{ else }y)$$
+ yields $$\text{if }true\text{ then }true\text{ else }y$$
+
+ - $$[x:=true]y$$
+ yields $$y$$
+
+ - $$[x:=true]false$$
+ yields $$false$$ (vacuous substitution)
+
+ - $$[x:=true](\lambda y:bool. \text{if }y\text{ then }x\text{ else }false)$$
+ yields $$\lambda y:bool. \text{if }y\text{ then }true\text{ else }false$$
+
+ - $$[x:=true](\lambda y:bool. x)$$
+ yields $$\lambda y:bool. true$$
+
+ - $$[x:=true](\lambda y:bool. y)$$
+ yields $$\lambda y:bool. y$$
+
+ - $$[x:=true](\lambda x:bool. x)$$
+ yields $$\lambda x:bool. x$$
+
+The last example is very important: substituting $$x$$ with $$true$$ in
+$$\lambda x:bool. x$$ does _not_ yield $$\lambda x:bool. true$$! The reason for
+this is that the $$x$$ in the body of $$\lambda x:bool. x$$ is _bound_ by the
+abstraction: it is a new, local name that just happens to be
+spelled the same as some global name $$x$$.
+
+Here is the definition, informally...
+
+$$
+  \begin{aligned}
+    &[x:=s]x &&= s \\
+    &[x:=s]y &&= y \;\{\text{if }x\neq y\} \\
+    &[x:=s](\lambda x:A. t) &&= \lambda x:A. t \\
+    &[x:=s](\lambda y:A. t) &&= \lambda y:A. [x:=s]t \;\{\text{if }x\neq y\} \\
+    &[x:=s](t_1\;t_2) &&= ([x:=s]t_1)\;([x:=s]t_2) \\
+    &[x:=s]true &&= true \\
+    &[x:=s]false &&= false \\
+    &[x:=s](\text{if }t_1\text{ then }t_2\text{ else }t_3) &&=
+      \text{if }[x:=s]t_1\text{ then }[x:=s]t_2\text{ else }[x:=s]t_3
+  \end{aligned}
+$$
+
+... and formally:
+
+
+[_:=_]_ : Id -> Term -> Term -> Term
+[ x := v ] (var y) with x ≟ y
+... | yes x=y = v
+... | no x≠y = var y
+[ x := v ] (app s t) = app ([ x := v ] s) ([ x := v ] t)
+[ x := v ] (abs y A t) with x ≟ y
+... | yes x=y = abs y A t
+... | no x≠y = abs y A ([ x := v ] t)
+[ x := v ] true = true
+[ x := v ] false = false
+[ x := v ] (if s then t else u) =
+ if [ x := v ] s then [ x := v ] t else [ x := v ] u
+
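+As a quick sanity check (this test is ours, not part of the original
+text), the formal definition computes the first informal example above:
+
+subst-example : [ x := true ] (if var x then var x else false)
+                ≡ if true then true else false
+subst-example = refl
+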
+
+
+
+_Technical note_: Substitution becomes trickier to define if we
+consider the case where $$s$$, the term being substituted for a
+variable in some other term, may itself contain free variables.
+Since we are only interested here in defining the small-step relation
+on closed terms (i.e., terms like $$\lambda x:bool. x$$ that include
+binders for all of the variables they mention), we can avoid this
+extra complexity here, but it must be dealt with when formalizing
+richer languages.
+
+
+#### Exercise: 3 stars (subst-correct)
+The definition that we gave above defines substitution as a _function_.
+Suppose, instead, we wanted to define substitution as an inductive _relation_.
+We've begun the definition by providing the `data` header and
+one of the constructors; your job is to fill in the rest of the constructors
+and prove that the relation you've defined coincides with the function given
+above.
+
+data [_:=_]_==>_ (x : Id) (s : Term) : Term -> Term -> Set where
+ var1 : [ x := s ] (var x) ==> s
+
+
+
+
+postulate
+ subst-correct : ∀ s x t t'
+ → [ x := s ] t ≡ t'
+ → [ x := s ] t ==> t'
+
+
+
+### Reduction
+
+The small-step reduction relation for STLC now follows the
+same pattern as the ones we have seen before. Intuitively, to
+reduce a function application, we first reduce its left-hand
+side (the function) until it becomes an abstraction; then we
+reduce its right-hand side (the argument) until it is also a
+value; and finally we substitute the argument for the bound
+variable in the body of the abstraction. This last rule, written
+informally as
+
+$$
+(\lambda x : A . t) v \Longrightarrow [x:=v]t
+$$
+
+is traditionally called "beta-reduction".
+
+$$
+\begin{array}{cl}
+ \frac{value(v)}{(\lambda x : A . t) v \Longrightarrow [x:=v]t}&(red)\\\\
+ \frac{s \Longrightarrow s'}{s\;t \Longrightarrow s'\;t}&(app1)\\\\
+ \frac{value(v)\quad t \Longrightarrow t'}{v\;t \Longrightarrow v\;t'}&(app2)
+\end{array}
+$$
+
+... plus the usual rules for booleans:
+
+$$
+\begin{array}{cl}
+  \frac{}{(\text{if }true\text{ then }t_1\text{ else }t_2) \Longrightarrow t_1}&(iftrue)\\\\
+  \frac{}{(\text{if }false\text{ then }t_1\text{ else }t_2) \Longrightarrow t_2}&(iffalse)\\\\
+  \frac{s \Longrightarrow s'}{\text{if }s\text{ then }t_1\text{ else }t_2
+    \Longrightarrow \text{if }s'\text{ then }t_1\text{ else }t_2}&(if)
+\end{array}
+$$
+
+Formally:
+
+
+data _==>_ : Term → Term → Set where
+ red : ∀ {x A s t}
+ → Value t
+ → app (abs x A s) t ==> [ x := t ] s
+ app1 : ∀ {s s' t}
+ → s ==> s'
+ → app s t ==> app s' t
+ app2 : ∀ {s t t'}
+ → Value s
+ → t ==> t'
+ → app s t ==> app s t'
+ if : ∀ {s s' t u}
+ → s ==> s'
+ → if s then t else u ==> if s' then t else u
+ iftrue : ∀ {s t}
+ → if true then s else t ==> s
+ iffalse : ∀ {s t}
+ → if false then s else t ==> t
+
+
+
+
+
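+-- Multi R is the reflexive and transitive closure of a relation R:
+-- a proof of Multi R s t chains together zero or more R-steps from s to t.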
+data Multi (R : Term → Term → Set) : Term → Term → Set where
+ refl : ∀ {x} -> Multi R x x
+ step : ∀ {x y z} -> R x y -> Multi R y z -> Multi R x z
+
+
+
+_==>*_ : Term → Term → Set
+_==>*_ = Multi _==>_
+
+
+
+
+### Examples
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x) (\lambda x:bool. x)) \Longrightarrow^* (\lambda x:bool. x)$$.
+
+
+step-example1 : (app idBB idB) ==>* idB
+step-example1 = step (red abs)
+ $ refl
+
+
+Example:
+
+$$(\lambda x:bool\rightarrow bool. x) \;((\lambda x:bool\rightarrow bool. x)\;(\lambda x:bool. x)) \Longrightarrow^* (\lambda x:bool. x)$$.
+
+
+step-example2 : (app idBB (app idBB idB)) ==>* idB
+step-example2 = step (app2 abs (red abs))
+ $ step (red abs)
+ $ refl
+
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x)\;(\lambda x:bool. \text{if }x\text{ then }false\text{ else }true))\;true\Longrightarrow^* false$$.
+
+
+step-example3 : (app (app idBB notB) true) ==>* false
+step-example3 = step (app1 (red abs))
+ $ step (red true)
+ $ step iftrue
+ $ refl
+
+
+Example:
+
+$$((\lambda x:bool\rightarrow bool. x)\;((\lambda x:bool. \text{if }x\text{ then }false\text{ else }true)\;true))\Longrightarrow^* false$$.
+
+
+step-example4 : (app idBB (app notB true)) ==>* false
+step-example4 = step (app2 abs (red true))
+ $ step (app2 abs iftrue)
+ $ step (red false)
+ $ refl
+
+
+#### Exercise: 2 stars (step-example5)
+
+
+postulate
+ step-example5 : (app (app idBBBB idBB) idB) ==>* idB
+
+
+## Typing
+
+Next we consider the typing relation of the STLC.
+
+### Contexts
+
+_Question_: What is the type of the term "$$x\;y$$"?
+
+_Answer_: It depends on the types of $$x$$ and $$y$$!
+
+I.e., in order to assign a type to a term, we need to know
+what assumptions we should make about the types of its free
+variables.
+
+This leads us to a three-place _typing judgment_, informally
+written $$\Gamma\vdash t : A$$, where $$\Gamma$$ is a
+"typing context"---a mapping from variables to their types.
+
+Informally, we'll write $$\Gamma , x:A$$ for "extend the partial function
+$$\Gamma$$ to also map $$x$$ to $$A$$." Formally, we use the function `_,_∶_`
+(or "update") to add a binding to a context.
+
+
+Ctxt : Set
+Ctxt = PartialMap Type
+
+∅ : Ctxt
+∅ = PartialMap.empty
+
+_,_∶_ : Ctxt -> Id -> Type -> Ctxt
+_,_∶_ = PartialMap.update
+
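+For example (the names below are ours, for illustration), here is a
+context that assigns `bool` to `x` and `bool ⇒ bool` to `y`, together
+with a lookup that Agda verifies by computation:
+
+Γ₁ : Ctxt
+Γ₁ = (∅ , x ∶ bool) , y ∶ (bool ⇒ bool)
+
+lookup-example : Γ₁ y ≡ just (bool ⇒ bool)
+lookup-example = refl
+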
+
+
+
+
+### Typing Relation
+
+$$
+  \begin{array}{cl}
+  \frac{\Gamma\;x = A}{\Gamma\vdash x : A}&(var)\\\\
+  \frac{\Gamma,x:A\vdash t:B}{\Gamma\vdash (\lambda x:A.t) : A\rightarrow B}&(abs)\\\\
+  \frac{\Gamma\vdash s:A\rightarrow B\quad\Gamma\vdash t:A}{\Gamma\vdash (s\;t) : B}&(app)\\\\
+  \frac{}{\Gamma\vdash true : bool}&(true)\\\\
+  \frac{}{\Gamma\vdash false : bool}&(false)\\\\
+  \frac{\Gamma\vdash s:bool \quad \Gamma\vdash t_1:A \quad \Gamma\vdash t_2:A}{\Gamma\vdash\text{if }s\text{ then }t_1\text{ else }t_2 : A}&(if)
+  \end{array}
+$$
+
+We can read the three-place relation $$\Gamma\vdash (t : A)$$ as:
+"to the term $$t$$ we can assign the type $$A$$ using as types for
+the free variables of $$t$$ the ones specified in the context
+$$\Gamma$$."
+
+
+data _⊢_∶_ : Ctxt -> Term -> Type -> Set where
+ var : ∀ {Γ} x {A}
+ → Γ x ≡ just A
+ → Γ ⊢ var x ∶ A
+ abs : ∀ {Γ} {x} {A} {B} {s}
+ → Γ , x ∶ A ⊢ s ∶ B
+ → Γ ⊢ abs x A s ∶ A ⇒ B
+ app : ∀ {Γ} {A} {B} {s} {t}
+ → Γ ⊢ s ∶ A ⇒ B
+ → Γ ⊢ t ∶ A
+ → Γ ⊢ app s t ∶ B
+ true : ∀ {Γ}
+ → Γ ⊢ true ∶ bool
+ false : ∀ {Γ}
+ → Γ ⊢ false ∶ bool
+ if_then_else_ : ∀ {Γ} {s} {t} {u} {A}
+ → Γ ⊢ s ∶ bool
+ → Γ ⊢ t ∶ A
+ → Γ ⊢ u ∶ A
+ → Γ ⊢ if s then t else u ∶ A
+
+
+
+
+
+### Examples
+
+
+typing-example1 : ∅ ⊢ idB ∶ bool ⇒ bool
+typing-example1 = abs (var x refl)
+
+
+Another example:
+
+$$\varnothing\vdash \lambda x:bool. \lambda y:bool\rightarrow bool. y\;(y\;x) : bool\rightarrow (bool\rightarrow bool)\rightarrow bool$$.
+
+
+typing-example2 : ∅ ⊢
+ (abs x bool
+ (abs y (bool ⇒ bool)
+ (app (var y)
+ (app (var y) (var x)))))
+ ∶ (bool ⇒ (bool ⇒ bool) ⇒ bool)
+typing-example2 =
+ (abs
+ (abs
+ (app (var y refl)
+ (app (var y refl) (var x refl) ))))
+
+
+#### Exercise: 2 stars (typing-example3)
+Formally prove the following typing derivation holds:
+
+$$\exists A, \varnothing\vdash \lambda x:bool\rightarrow bool. \lambda y:bool\rightarrow bool. \lambda z:bool. y\;(x\;z) : A$$.
+
+
+postulate
+ typing-example3 : ∃ λ A → ∅ ⊢
+ (abs x (bool ⇒ bool)
+ (abs y (bool ⇒ bool)
+ (abs z bool
+ (app (var y) (app (var x) (var z)))))) ∶ A
+
+
+We can also show that terms are _not_ typable. For example, let's
+formally check that there is no typing derivation assigning a type
+to the term $$\lambda x:bool. \lambda y:bool. x\;y$$---i.e.,
+
+
+$$\nexists A, \varnothing\vdash \lambda x:bool. \lambda y:bool. x\;y : A$$.
+
+
+postulate
+ typing-nonexample1 : ∄ λ A → ∅ ⊢
+ (abs x bool
+ (abs y bool
+ (app (var x) (var y)))) ∶ A
+
+
+#### Exercise: 3 stars, optional (typing-nonexample2)
+Another nonexample:
+
+$$\nexists A, \exists B, \varnothing\vdash \lambda x:B. x\;x : A$$.
+
+
+postulate
+ typing-nonexample2 : ∄ λ A → ∃ λ B → ∅ ⊢
+ (abs x B (app (var x) (var x))) ∶ A
+
diff --git a/StlcProp.lagda b/StlcProp.lagda
new file mode 100644
index 00000000..74357a6a
--- /dev/null
+++ b/StlcProp.lagda
@@ -0,0 +1,777 @@
+---
+title : "StlcProp: Properties of STLC"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/StlcProp.html"
+---
+
+\begin{code}
+module StlcProp where
+\end{code}
+
+
+\begin{code}
+open import Function using (_∘_)
+open import Data.Empty using (⊥; ⊥-elim)
+open import Data.Maybe using (Maybe; just; nothing)
+open import Data.Product using (∃; ∃₂; _,_; ,_)
+open import Data.Sum using (_⊎_; inj₁; inj₂)
+open import Relation.Nullary using (¬_; Dec; yes; no)
+open import Relation.Binary.PropositionalEquality using (_≡_; _≢_; refl)
+open import Maps
+open import Stlc
+\end{code}
+
+
+# Properties of STLC
+
+In this chapter, we develop the fundamental theory of the Simply
+Typed Lambda Calculus -- in particular, the type safety
+theorem.
+
+
+## Canonical Forms
+
+As we saw for the simple calculus in the [Stlc](Stlc.html) chapter, the
+first step in establishing basic properties of reduction and types
+is to identify the possible _canonical forms_ (i.e., well-typed
+closed values) belonging to each type. For $$bool$$, these are the boolean
+values $$true$$ and $$false$$. For arrow types, the canonical forms
+are lambda-abstractions.
+
+\begin{code}
+CanonicalForms : Term → Type → Set
+CanonicalForms t bool = t ≡ true ⊎ t ≡ false
+CanonicalForms t (A ⇒ B) = ∃₂ λ x t′ → t ≡ abs x A t′
+
+canonicalForms : ∀ {t A} → ∅ ⊢ t ∶ A → Value t → CanonicalForms t A
+canonicalForms (abs t′) abs = _ , _ , refl
+canonicalForms true true = inj₁ refl
+canonicalForms false false = inj₂ refl
+\end{code}
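+
+For instance (this example is ours, not part of the original text),
+applying `canonicalForms` to the typing derivation `typing-example1`
+from the Stlc chapter recovers the fact that the boolean identity
+function is an abstraction:
+
+\begin{code}
+-- A small usage check; `idB` and `typing-example1` come from Stlc.
+canonical-idB : CanonicalForms idB (bool ⇒ bool)
+canonical-idB = canonicalForms typing-example1 abs
+\end{code}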
+
+## Progress
+
+As before, the _progress_ theorem tells us that closed, well-typed
+terms are not stuck: either a well-typed term is a value, or it
+can take a reduction step. The proof is a relatively
+straightforward extension of the progress proof we saw in the
+[Stlc](Stlc.html) chapter. We'll give the proof in English first,
+then the formal version.
+
+\begin{code}
+progress : ∀ {t A} → ∅ ⊢ t ∶ A → Value t ⊎ ∃ λ t′ → t ==> t′
+\end{code}
+
+_Proof_: By induction on the derivation of $$\vdash t : A$$.
+
+ - The last rule of the derivation cannot be `var`,
+ since a variable is never well typed in an empty context.
+
+ - The `true`, `false`, and `abs` cases are trivial, since in
+ each of these cases we can see by inspecting the rule that $$t$$
+ is a value.
+
+ - If the last rule of the derivation is `app`, then $$t$$ has the
+   form $$t_1\;t_2$$ for some $$t_1$$ and $$t_2$$, where we know that
+   $$t_1$$ and $$t_2$$ are also well typed in the empty context; in particular,
+   there exists a type $$B$$ such that $$\vdash t_1 : B\to A$$ and
+   $$\vdash t_2 : B$$. By the induction hypothesis, either $$t_1$$ is a
+ value or it can take a reduction step.
+
+ - If $$t_1$$ is a value, then consider $$t_2$$, which by the other
+ induction hypothesis must also either be a value or take a step.
+
+ - Suppose $$t_2$$ is a value. Since $$t_1$$ is a value with an
+ arrow type, it must be a lambda abstraction; hence $$t_1\;t_2$$
+ can take a step by `red`.
+
+ - Otherwise, $$t_2$$ can take a step, and hence so can $$t_1\;t_2$$
+ by `app2`.
+
+ - If $$t_1$$ can take a step, then so can $$t_1 t_2$$ by `app1`.
+
+ - If the last rule of the derivation is `if`, then $$t = \text{if }t_1
+ \text{ then }t_2\text{ else }t_3$$, where $$t_1$$ has type $$bool$$. By
+ the IH, $$t_1$$ either is a value or takes a step.
+
+ - If $$t_1$$ is a value, then since it has type $$bool$$ it must be
+ either $$true$$ or $$false$$. If it is $$true$$, then $$t$$ steps
+ to $$t_2$$; otherwise it steps to $$t_3$$.
+
+ - Otherwise, $$t_1$$ takes a step, and therefore so does $$t$$ (by `if`).
+
+\begin{code}
+progress (var x ())
+progress true = inj₁ true
+progress false = inj₁ false
+progress (abs t∶A) = inj₁ abs
+progress (app t₁∶A⇒B t₂∶B)
+ with progress t₁∶A⇒B
+... | inj₂ (_ , t₁⇒t₁′) = inj₂ (_ , app1 t₁⇒t₁′)
+... | inj₁ v₁
+ with progress t₂∶B
+... | inj₂ (_ , t₂⇒t₂′) = inj₂ (_ , app2 v₁ t₂⇒t₂′)
+... | inj₁ v₂
+ with canonicalForms t₁∶A⇒B v₁
+... | (_ , _ , refl) = inj₂ (_ , red v₂)
+progress (if t₁∶bool then t₂∶A else t₃∶A)
+ with progress t₁∶bool
+... | inj₂ (_ , t₁⇒t₁′) = inj₂ (_ , if t₁⇒t₁′)
+... | inj₁ v₁
+ with canonicalForms t₁∶bool v₁
+... | inj₁ refl = inj₂ (_ , iftrue)
+... | inj₂ refl = inj₂ (_ , iffalse)
+\end{code}
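+
+As a small illustration (ours, not from the original text), applying
+`progress` to a typing derivation for `app idBB idB` shows that this
+term is either a value or can take a step (here, it steps):
+
+\begin{code}
+-- The two sub-derivations type idBB and idB; both are abstractions
+-- over a variable, as in typing-example1 from the Stlc chapter.
+progress-example : Value (app idBB idB) ⊎ ∃ λ t′ → app idBB idB ==> t′
+progress-example = progress (app (abs (var x refl)) (abs (var x refl)))
+\end{code}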
+
+#### Exercise: 3 stars, optional (progress_from_term_ind)
+Show that progress can also be proved by induction on terms
+instead of induction on typing derivations.
+
+\begin{code}
+postulate
+ progress′ : ∀ {t A} → ∅ ⊢ t ∶ A → Value t ⊎ ∃ λ t′ → t ==> t′
+\end{code}
+
+## Preservation
+
+The other half of the type soundness property is the preservation
+of types during reduction. For this, we need to develop some
+technical machinery for reasoning about variables and
+substitution. Working from top to bottom (from the high-level
+property we are actually interested in to the lowest-level
+technical lemmas that are needed by various cases of the more
+interesting proofs), the story goes like this:
+
+ - The _preservation theorem_ is proved by induction on a typing
+ derivation, pretty much as we did in the [Stlc](Stlc.html)
+ chapter. The one case that is significantly different is the
+   one for the `red` rule, whose definition uses the substitution
+ operation. To see that this step preserves typing, we need to
+ know that the substitution itself does. So we prove a...
+
+ - _substitution lemma_, stating that substituting a (closed)
+ term $$s$$ for a variable $$x$$ in a term $$t$$ preserves the type
+ of $$t$$. The proof goes by induction on the form of $$t$$ and
+ requires looking at all the different cases in the definition
+   of substitution. This time, the tricky cases are the ones for
+ variables and for function abstractions. In both cases, we
+ discover that we need to take a term $$s$$ that has been shown
+ to be well-typed in some context $$\Gamma$$ and consider the same
+ term $$s$$ in a slightly different context $$\Gamma'$$. For this
+ we prove a...
+
+ - _context invariance_ lemma, showing that typing is preserved
+ under "inessential changes" to the context $$\Gamma$$ -- in
+ particular, changes that do not affect any of the free
+ variables of the term. And finally, for this, we need a
+ careful definition of...
+
+ - the _free variables_ of a term -- i.e., those variables
+ mentioned in a term and not in the scope of an enclosing
+ function abstraction binding a variable of the same name.
+
+To make Agda happy, we need to formalize the story in the opposite
+order...
+
+
+### Free Occurrences
+
+A variable $$x$$ _appears free in_ a term _t_ if $$t$$ contains some
+occurrence of $$x$$ that is not under an abstraction labeled $$x$$.
+For example:
+
+ - $$y$$ appears free, but $$x$$ does not, in $$\lambda x : A\to B. x\;y$$
+ - both $$x$$ and $$y$$ appear free in $$(\lambda x:A\to B. x\;y) x$$
+ - no variables appear free in $$\lambda x:A\to B. \lambda y:A. x\;y$$
+
+Formally:
+
+\begin{code}
+data _FreeIn_ (x : Id) : Term → Set where
+ var : x FreeIn var x
+ abs : ∀ {y A t} → y ≢ x → x FreeIn t → x FreeIn abs y A t
+ app1 : ∀ {t₁ t₂} → x FreeIn t₁ → x FreeIn app t₁ t₂
+ app2 : ∀ {t₁ t₂} → x FreeIn t₂ → x FreeIn app t₁ t₂
+ if1 : ∀ {t₁ t₂ t₃} → x FreeIn t₁ → x FreeIn (if t₁ then t₂ else t₃)
+ if2 : ∀ {t₁ t₂ t₃} → x FreeIn t₂ → x FreeIn (if t₁ then t₂ else t₃)
+ if3 : ∀ {t₁ t₂ t₃} → x FreeIn t₃ → x FreeIn (if t₁ then t₂ else t₃)
+\end{code}
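+
+As a quick check (ours, not in the original text), here is the first
+informal example above stated formally, with the type annotation
+instantiated to $$bool\rightarrow bool$$: the variable `y` appears free
+in $$\lambda x:bool\rightarrow bool.\;x\;y$$.
+
+\begin{code}
+-- The binder is x, so the occurrence of y is free; λ() discharges
+-- the impossible equation x ≡ y.
+free-example : y FreeIn abs x (bool ⇒ bool) (app (var x) (var y))
+free-example = abs (λ()) (app2 var)
+\end{code}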
+
+A term in which no variables appear free is said to be _closed_.
+
+\begin{code}
+Closed : Term → Set
+Closed t = ∀ {x} → ¬ (x FreeIn t)
+\end{code}
+
+#### Exercise: 1 star (free-in)
+If the definition of `_FreeIn_` is not crystal clear to
+you, it is a good idea to take a piece of paper and write out the
+rules in informal inference-rule notation. (Although it is a
+rather low-level, technical definition, understanding it is
+crucial to understanding substitution and its properties, which
+are really the crux of the lambda-calculus.)
+
+### Substitution
+To prove that substitution preserves typing, we first need a
+technical lemma connecting free variables and typing contexts: If
+a variable $$x$$ appears free in a term $$t$$, and if we know $$t$$ is
+well typed in context $$\Gamma$$, then it must be the case that
+$$\Gamma$$ assigns a type to $$x$$.
+
+\begin{code}
+freeInCtxt : ∀ {x t A Γ} → x FreeIn t → Γ ⊢ t ∶ A → ∃ λ B → Γ x ≡ just B
+\end{code}
+
+_Proof_: We show, by induction on the proof that $$x$$ appears
+ free in $$t$$, that, for all contexts $$\Gamma$$, if $$t$$ is well
+ typed under $$\Gamma$$, then $$\Gamma$$ assigns some type to $$x$$.
+
+ - If the last rule used was `var`, then $$t = x$$, and from
+ the assumption that $$t$$ is well typed under $$\Gamma$$ we have
+ immediately that $$\Gamma$$ assigns a type to $$x$$.
+
+ - If the last rule used was `app1`, then $$t = t_1\;t_2$$ and $$x$$
+ appears free in $$t_1$$. Since $$t$$ is well typed under $$\Gamma$$,
+ we can see from the typing rules that $$t_1$$ must also be, and
+ the IH then tells us that $$\Gamma$$ assigns $$x$$ a type.
+
+ - Almost all the other cases are similar: $$x$$ appears free in a
+ subterm of $$t$$, and since $$t$$ is well typed under $$\Gamma$$, we
+ know the subterm of $$t$$ in which $$x$$ appears is well typed
+ under $$\Gamma$$ as well, and the IH gives us exactly the
+ conclusion we want.
+
+ - The only remaining case is `abs`. In this case $$t =
+ \lambda y:A.t'$$, and $$x$$ appears free in $$t'$$; we also know that
+ $$x$$ is different from $$y$$. The difference from the previous
+ cases is that whereas $$t$$ is well typed under $$\Gamma$$, its
+ body $$t'$$ is well typed under $$(\Gamma, y:A)$$, so the IH
+ allows us to conclude that $$x$$ is assigned some type by the
+ extended context $$(\Gamma, y:A)$$. To conclude that $$\Gamma$$
+   assigns a type to $$x$$, we appeal to the decidable equality for names
+ `_≟_`, noting that $$x$$ and $$y$$ are different variables.
+
+\begin{code}
+freeInCtxt var (var _ x∶A) = (_ , x∶A)
+freeInCtxt (app1 x∈t₁) (app t₁∶A _ ) = freeInCtxt x∈t₁ t₁∶A
+freeInCtxt (app2 x∈t₂) (app _ t₂∶A) = freeInCtxt x∈t₂ t₂∶A
+freeInCtxt (if1 x∈t₁) (if t₁∶A then _ else _ ) = freeInCtxt x∈t₁ t₁∶A
+freeInCtxt (if2 x∈t₂) (if _ then t₂∶A else _ ) = freeInCtxt x∈t₂ t₂∶A
+freeInCtxt (if3 x∈t₃) (if _ then _ else t₃∶A) = freeInCtxt x∈t₃ t₃∶A
+freeInCtxt {x} (abs {y} y≠x x∈t) (abs t∶B)
+ with freeInCtxt x∈t t∶B
+... | x∶A
+ with y ≟ x
+... | yes y=x = ⊥-elim (y≠x y=x)
+... | no _ = x∶A
+\end{code}
+
+Next, we'll need the fact that any term $$t$$ which is well typed in
+the empty context is closed (it has no free variables).
+
+#### Exercise: 2 stars, optional (∅⊢-closed)
+
+\begin{code}
+postulate
+ ∅⊢-closed : ∀ {t A} → ∅ ⊢ t ∶ A → Closed t
+\end{code}
+
+
+\begin{code}
+∅⊢-closed′ : ∀ {t A} → ∅ ⊢ t ∶ A → Closed t
+∅⊢-closed′ (var x ())
+∅⊢-closed′ (app t₁∶A⇒B t₂∶A) (app1 x∈t₁) = ∅⊢-closed′ t₁∶A⇒B x∈t₁
+∅⊢-closed′ (app t₁∶A⇒B t₂∶A) (app2 x∈t₂) = ∅⊢-closed′ t₂∶A x∈t₂
+∅⊢-closed′ true ()
+∅⊢-closed′ false ()
+∅⊢-closed′ (if t₁∶bool then t₂∶A else t₃∶A) (if1 x∈t₁) = ∅⊢-closed′ t₁∶bool x∈t₁
+∅⊢-closed′ (if t₁∶bool then t₂∶A else t₃∶A) (if2 x∈t₂) = ∅⊢-closed′ t₂∶A x∈t₂
+∅⊢-closed′ (if t₁∶bool then t₂∶A else t₃∶A) (if3 x∈t₃) = ∅⊢-closed′ t₃∶A x∈t₃
+∅⊢-closed′ (abs {x = x} t′∶A) {y} (abs x≠y y∈t′) with freeInCtxt y∈t′ t′∶A
+∅⊢-closed′ (abs {x = x} t′∶A) {y} (abs x≠y y∈t′) | A , y∶A with x ≟ y
+∅⊢-closed′ (abs {x = x} t′∶A) {y} (abs x≠y y∈t′) | A , y∶A | yes x=y = x≠y x=y
+∅⊢-closed′ (abs {x = x} t′∶A) {y} (abs x≠y y∈t′) | A , () | no _
+\end{code}
+
+
+Sometimes, when we have a proof $$\Gamma\vdash t : A$$, we will need to
+replace $$\Gamma$$ by a different context $$\Gamma'$$. When is it safe
+to do this? Intuitively, it must at least be the case that
+$$\Gamma'$$ assigns the same types as $$\Gamma$$ to all the variables
+that appear free in $$t$$. In fact, this is the only condition that
+is needed.
+
+\begin{code}
+replaceCtxt : ∀ {Γ Γ′ t A}
+ → (∀ {x} → x FreeIn t → Γ x ≡ Γ′ x)
+ → Γ ⊢ t ∶ A
+ → Γ′ ⊢ t ∶ A
+\end{code}
+
+_Proof_: By induction on the derivation of
+$$\Gamma \vdash t : T$$.
+
+ - If the last rule in the derivation was `var`, then $$t = x$$
+ and $$\Gamma x = T$$. By assumption, $$\Gamma' x = T$$ as well, and
+ hence $$\Gamma' \vdash t : T$$ by `var`.
+
+ - If the last rule was `abs`, then $$t = \lambda y:A. t'$$, with
+ $$T = A\to B$$ and $$\Gamma, y : A \vdash t' : B$$. The
+ induction hypothesis is that, for any context $$\Gamma''$$, if
+ $$\Gamma, y:A$$ and $$\Gamma''$$ assign the same types to all the
+ free variables in $$t'$$, then $$t'$$ has type $$B$$ under
+ $$\Gamma''$$. Let $$\Gamma'$$ be a context which agrees with
+ $$\Gamma$$ on the free variables in $$t$$; we must show
+ $$\Gamma' \vdash \lambda y:A. t' : A\to B$$.
+
+    By `abs`, it suffices to show that $$\Gamma', y:A \vdash t' : B$$.
+ By the IH (setting $$\Gamma'' = \Gamma', y:A$$), it suffices to show
+ that $$\Gamma, y:A$$ and $$\Gamma', y:A$$ agree on all the variables
+ that appear free in $$t'$$.
+
+ Any variable occurring free in $$t'$$ must be either $$y$$ or
+ some other variable. $$\Gamma, y:A$$ and $$\Gamma', y:A$$
+ clearly agree on $$y$$. Otherwise, note that any variable other
+ than $$y$$ that occurs free in $$t'$$ also occurs free in
+ $$t = \lambda y:A. t'$$, and by assumption $$\Gamma$$ and
+ $$\Gamma'$$ agree on all such variables; hence so do $$\Gamma, y:A$$ and
+ $$\Gamma', y:A$$.
+
+ - If the last rule was `app`, then $$t = t_1\;t_2$$, with
+ $$\Gamma \vdash t_1 : A\to T$$ and $$\Gamma \vdash t_2 : A$$.
+ One induction hypothesis states that for all contexts $$\Gamma'$$,
+ if $$\Gamma'$$ agrees with $$\Gamma$$ on the free variables in $$t_1$$,
+ then $$t_1$$ has type $$A\to T$$ under $$\Gamma'$$; there is a similar IH
+ for $$t_2$$. We must show that $$t_1\;t_2$$ also has type $$T$$ under
+ $$\Gamma'$$, given the assumption that $$\Gamma'$$ agrees with
+ $$\Gamma$$ on all the free variables in $$t_1\;t_2$$. By `app`, it
+ suffices to show that $$t_1$$ and $$t_2$$ each have the same type
+ under $$\Gamma'$$ as under $$\Gamma$$. But all free variables in
+ $$t_1$$ are also free in $$t_1\;t_2$$, and similarly for $$t_2$$;
+ hence the desired result follows from the induction hypotheses.
+
+\begin{code}
+replaceCtxt f (var x x∶A) rewrite f var = var x x∶A
+replaceCtxt f (app t₁∶A⇒B t₂∶A)
+ = app (replaceCtxt (f ∘ app1) t₁∶A⇒B) (replaceCtxt (f ∘ app2) t₂∶A)
+replaceCtxt {Γ} {Γ′} f (abs {.Γ} {x} {A} {B} {t′} t′∶B)
+ = abs (replaceCtxt f′ t′∶B)
+ where
+ f′ : ∀ {y} → y FreeIn t′ → (Γ , x ∶ A) y ≡ (Γ′ , x ∶ A) y
+ f′ {y} y∈t′ with x ≟ y
+ ... | yes _ = refl
+ ... | no x≠y = f (abs x≠y y∈t′)
+replaceCtxt _ true = true
+replaceCtxt _ false = false
+replaceCtxt f (if t₁∶bool then t₂∶A else t₃∶A)
+ = if replaceCtxt (f ∘ if1) t₁∶bool
+ then replaceCtxt (f ∘ if2) t₂∶A
+ else replaceCtxt (f ∘ if3) t₃∶A
+\end{code}
+
+Now we come to the conceptual heart of the proof that reduction
+preserves types -- namely, the observation that _substitution_
+preserves types.
+
+Formally, the so-called _Substitution Lemma_ says this: Suppose we
+have a term $$t$$ with a free variable $$x$$, and suppose we've been
+able to assign a type $$T$$ to $$t$$ under the assumption that $$x$$ has
+some type $$U$$. Also, suppose that we have some other term $$v$$ and
+that we've shown that $$v$$ has type $$U$$. Then, since $$v$$ satisfies
+the assumption we made about $$x$$ when typing $$t$$, we should be
+able to substitute $$v$$ for each of the occurrences of $$x$$ in $$t$$
+and obtain a new term that still has type $$T$$.
+
+_Lemma_: If $$\Gamma,x:U \vdash t : T$$ and $$\vdash v : U$$, then
+$$\Gamma \vdash [x:=v]t : T$$.
+
+\begin{code}
+[:=]-preserves-⊢ : ∀ {Γ x A t v B}
+                 → ∅ ⊢ v ∶ A
+                 → Γ , x ∶ A ⊢ t ∶ B
+                 → Γ ⊢ [ x := v ] t ∶ B
+\end{code}
+
+One technical subtlety in the statement of the lemma is that
+we assign $$v$$ the type $$U$$ in the _empty_ context -- in other
+words, we assume $$v$$ is closed. This assumption considerably
+simplifies the `abs` case of the proof (compared to assuming
+$$\Gamma \vdash v : U$$, which would be the other reasonable assumption
+at this point) because the context invariance lemma then tells us
+that $$v$$ has type $$U$$ in any context at all -- we don't have to
+worry about free variables in $$v$$ clashing with the variable being
+introduced into the context by `abs`.
+
+The substitution lemma can be viewed as a kind of "commutation"
+property. Intuitively, it says that substitution and typing can
+be done in either order: we can either assign types to the terms
+$$t$$ and $$v$$ separately (under suitable contexts) and then combine
+them using substitution, or we can substitute first and then
+assign a type to $$[x:=v]t$$ -- the result is the same either
+way.
+
+_Proof_: We show, by induction on $$t$$, that for all $$T$$ and
+$$\Gamma$$, if $$\Gamma,x:U \vdash t : T$$ and $$\vdash v : U$$, then
+$$\Gamma \vdash [x:=v]t : T$$.
+
+ - If $$t$$ is a variable there are two cases to consider,
+ depending on whether $$t$$ is $$x$$ or some other variable.
+
+ - If $$t = x$$, then from the fact that $$\Gamma, x:U \vdash x :
+      T$$ we conclude that $$U = T$$. We must show that $$[x:=v]x = v$$
+      has type $$T$$ under $$\Gamma$$, given the assumption that
+ $$v$$ has type $$U = T$$ under the empty context. This
+ follows from context invariance: if a closed term has type
+ $$T$$ in the empty context, it has that type in any context.
+
+ - If $$t$$ is some variable $$y$$ that is not equal to $$x$$, then
+ we need only note that $$y$$ has the same type under $$\Gamma,
+ x:U$$ as under $$\Gamma$$.
+
+ - If $$t$$ is an abstraction $$\lambda y:T_{11}. t_{12}$$, then the IH tells us,
+    for all $$\Gamma'$$ and $$T'$$, that if $$\Gamma',x:U \vdash t_{12} : T'$$
+    and $$\vdash v : U$$, then $$\Gamma' \vdash [x:=v]t_{12} : T'$$.
+
+ The substitution in the conclusion behaves differently
+ depending on whether $$x$$ and $$y$$ are the same variable.
+
+    First, suppose $$x = y$$. Then, by the definition of
+    substitution, $$[x:=v]t = t$$, so we just need to show $$\Gamma \vdash
+    t : T$$. But we know $$\Gamma,x:U \vdash t : T$$, and, since $$y$$
+    does not appear free in $$\lambda y:T_{11}. t_{12}$$, the context invariance
+    lemma yields $$\Gamma \vdash t : T$$.
+
+    Second, suppose $$x \neq y$$. We know $$\Gamma,x:U,y:T_{11} \vdash t_{12} :
+    T_{12}$$ by inversion of the typing relation, from which
+    $$\Gamma,y:T_{11},x:U \vdash t_{12} : T_{12}$$ follows by the context
+    invariance lemma, so the IH applies, giving us $$\Gamma,y:T_{11} \vdash
+    [x:=v]t_{12} : T_{12}$$. By `abs`, $$\Gamma \vdash \lambda y:T_{11}. [x:=v]t_{12}
+    : T_{11}\to T_{12}$$, and by the definition of substitution (noting that
+    $$x \neq y$$), $$\Gamma \vdash [x:=v](\lambda y:T_{11}. t_{12}) : T_{11}\to T_{12}$$
+    as required.
+
+ - If $$t$$ is an application $$t_1 t_2$$, the result follows
+ straightforwardly from the definition of substitution and the
+ induction hypotheses.
+
+ - The remaining cases are similar to the application case.
+
+One more technical note: This proof is a rare case where an
+induction on terms, rather than typing derivations, yields a
+simpler argument. The reason for this is that the assumption
+$$\Gamma, x:U \vdash t : T$$ is not completely generic, in the
+sense that one of the "slots" in the typing relation -- namely the
+context -- is not just a variable, and this means that the induction
+hypothesis we would get by inducting on the typing derivation is not
+the one we want. It is possible to work around this, but the needed
+generalization is a little tricky. The term $$t$$, on the other
+hand, _is_ completely generic.
+
+\begin{code}
+[:=]-preserves-⊢ {Γ} {x} v∶A (var y y∈Γ) with x ≟ y
+... | yes x=y = {!!}
+... | no x≠y = {!!}
+[:=]-preserves-⊢ v∶A (abs t′∶B) = {!!}
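+-- Note: the variable and abstraction cases above are still open
+-- ({!!}); completing them requires the context invariance lemma
+-- `replaceCtxt`, as the informal proof explains.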
+[:=]-preserves-⊢ v∶A (app t₁∶A⇒B t₂∶A) =
+ app ([:=]-preserves-⊢ v∶A t₁∶A⇒B) ([:=]-preserves-⊢ v∶A t₂∶A)
+[:=]-preserves-⊢ v∶A true = true
+[:=]-preserves-⊢ v∶A false = false
+[:=]-preserves-⊢ v∶A (if t₁∶bool then t₂∶B else t₃∶B) =
+ if [:=]-preserves-⊢ v∶A t₁∶bool
+ then [:=]-preserves-⊢ v∶A t₂∶B
+ else [:=]-preserves-⊢ v∶A t₃∶B
+\end{code}
+
+
+### Main Theorem
+
+We now have the tools we need to prove preservation: if a closed
+term $$t$$ has type $$T$$ and takes a step to $$t'$$, then $$t'$$
+is also a closed term with type $$T$$. In other words, the small-step
+reduction relation preserves types.
+
+    Theorem preservation : forall t t' T,
+      empty \vdash t : T →
+      t ==> t' →
+      empty \vdash t' : T.
+
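+In Agda, this statement could be rendered as the following postulate
+(the name and formulation here are our own sketch; the proof is not
+yet formalized in this development):
+
+\begin{code}
+-- Hypothetical Agda statement of preservation, stated with the
+-- typing and reduction relations from the Stlc chapter.
+postulate
+  preservation : ∀ {t t′ A} → ∅ ⊢ t ∶ A → t ==> t′ → ∅ ⊢ t′ ∶ A
+\end{code}
+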
+_Proof_: By induction on the derivation of $$\vdash t : T$$.
+
+ - We can immediately rule out `var`, `abs`, `true`, and
+   `false` as the final rules in the derivation, since in each of
+   these cases $$t$$ cannot take a step.
+
+ - If the last rule in the derivation was `app`, then $$t = t_1\;t_2$$.
+   There are three cases to consider, one for each rule that
+   could have been used to show that $$t_1\;t_2$$ takes a step to $$t'$$.
+
+   - If $$t_1\;t_2$$ takes a step by `app1`, with $$t_1$$ stepping to
+     $$t_1'$$, then by the IH $$t_1'$$ has the same type as $$t_1$$, and
+     hence $$t_1'\;t_2$$ has the same type as $$t_1\;t_2$$.
+
+   - The `app2` case is similar.
+
+   - If $$t_1\;t_2$$ takes a step by `red`, then $$t_1 =
+     \lambda x:T_{11}. t_{12}$$ and $$t_1\;t_2$$ steps to $$[x:=t_2]t_{12}$$; the
+     desired result now follows from the fact that substitution
+     preserves types.
+
+ - If the last rule in the derivation was `if`, then $$t = \text{if }t_1
+   \text{ then }t_2\text{ else }t_3$$, and there are again three cases depending
+   on how $$t$$ steps.
+
+   - If $$t$$ steps to $$t_2$$ or $$t_3$$, the result is immediate, since
+     $$t_2$$ and $$t_3$$ have the same type as $$t$$.
+
+   - Otherwise, $$t$$ steps by `if`, and the desired conclusion
+     follows directly from the induction hypothesis.
+
+The original Coq proof script, which has not yet been translated into
+Agda, reads as follows:
+
+    Proof with eauto.
+      remember (@empty ty) as Gamma.
+      intros t t' T HT. generalize dependent t'.
+      induction HT;
+        intros t' HE; subst Gamma; subst;
+        try solve [inversion HE; subst; auto].
+      - (* app *)
+        inversion HE; subst...
+        (* Most of the cases are immediate by induction,
+           and eauto takes care of them *)
+        + (* Sred *)
+          apply substitution_preserves_typing with T_11...
+          inversion HT_1...
+    Qed.
+
+#### Exercise: 2 stars, recommended (subject_expansion_stlc)
+An exercise in the [Stlc](Stlc.html) chapter asked about the subject
+expansion property for the simple language of arithmetic and
+boolean expressions. Does this property hold for STLC? That is,
+is it always the case that, if $$t \Longrightarrow t'$$ and
+$$\varnothing\vdash t' : A$$, then $$\varnothing\vdash t : A$$? If so,
+prove it. If not, give a counter-example not involving conditionals.
+
+## Type Soundness
+
+#### Exercise: 2 stars, optional (type_soundness)
+Put progress and preservation together and show that a well-typed
+term can _never_ reach a stuck state.
+
+The original Coq version of the exercise, which has not yet been
+translated into Agda, is:
+
+    Definition stuck (t:tm) : Prop :=
+      (normal_form step) t /\ ~ value t.
+
+    Corollary soundness : forall t t' T,
+      empty \vdash t : T →
+      t ==>* t' →
+      ~(stuck t').
+    Proof.
+      intros t t' T Hhas_type Hmulti. unfold stuck.
+      intros [Hnf Hnot_val]. unfold normal_form in Hnf.
+      induction Hmulti.
+      (* FILL IN HERE *) Admitted.
+
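+For reference, one possible Agda phrasing of the statement is sketched
+below (the name `soundness` and the exact formulation are ours): every
+reduct of a well-typed closed term is either a value or can take
+another step. Proving it is the point of the exercise, so it is only
+postulated here.
+
+\begin{code}
+-- A hypothetical Agda rendering of type soundness (our sketch).
+postulate
+  soundness : ∀ {t t′ A} → ∅ ⊢ t ∶ A → t ==>* t′
+            → Value t′ ⊎ ∃ λ u → t′ ==> u
+\end{code}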
+
+## Uniqueness of Types
+
+#### Exercise: 3 stars (types_unique)
+Another nice property of the STLC is that types are unique: a
+given term (in a given context) has at most one type.
+Formalize this statement and prove it.
+
+
+
+## Additional Exercises
+
+#### Exercise: 1 star (progress_preservation_statement)
+Without peeking at their statements above, write down the progress
+and preservation theorems for the simply typed lambda-calculus.
+
+#### Exercise: 2 stars (stlc_variation1)
+Suppose we add a new term `zap` with the following reduction rule
+
+ --------- (ST_Zap)
+ t ==> zap
+
+and the following typing rule:
+
+ ---------------- (T_Zap)
+ Gamma \vdash zap : T
+
+Which of the following properties of the STLC remain true in
+the presence of these rules? For each property, write either
+"remains true" or "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+
+#### Exercise: 2 stars (stlc_variation2)
+Suppose instead that we add a new term `foo` with the following
+reduction rules:
+
+ ----------------- (ST_Foo1)
+ (\lambda x:A. x) ==> foo
+
+ ------------ (ST_Foo2)
+ foo ==> true
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+#### Exercise: 2 stars (stlc_variation3)
+Suppose instead that we remove the rule `app1` from the `_==>_`
+relation. Which of the following properties of the STLC remain
+true in the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+#### Exercise: 2 stars, optional (stlc_variation4)
+Suppose instead that we add the following new rule to the
+reduction relation:
+
+ ---------------------------------- (ST_FunnyIfTrue)
+ (if true then t_1 else t_2) ==> true
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation5)
+Suppose instead that we add the following new rule to the typing
+relation:
+
+ Gamma \vdash t_1 : bool→bool→bool
+ Gamma \vdash t_2 : bool
+ ------------------------------ (T_FunnyApp)
+ Gamma \vdash t_1 t_2 : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation6)
+Suppose instead that we add the following new rule to the typing
+relation:
+
+ Gamma \vdash t_1 : bool
+ Gamma \vdash t_2 : bool
+ --------------------- (T_FunnyApp')
+ Gamma \vdash t_1 t_2 : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation7)
+Suppose we add the following new rule to the typing relation
+of the STLC:
+
+ ------------------- (T_FunnyAbs)
+ \vdash \lambda x:bool.t : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of the step relation `_==>_`
+
+ - Progress
+
+ - Preservation
+
+
+### Exercise: STLC with Arithmetic
+
+To see how the STLC might function as the core of a real
+programming language, let's extend it with a concrete base
+type of numbers and some constants and primitive
+operators.
+
+To types, we add a base type of natural numbers (and remove
+booleans, for brevity).
+
+    Inductive ty : Type :=
+      | TArrow : ty → ty → ty
+      | TNat : ty.
+
+To terms, we add natural number constants, along with
+successor, predecessor, multiplication, and zero-testing.
+
+    Inductive tm : Type :=
+      | tvar : id → tm
+      | tapp : tm → tm → tm
+      | tabs : id → ty → tm → tm
+      | tnat : nat → tm
+      | tsucc : tm → tm
+      | tpred : tm → tm
+      | tmult : tm → tm → tm
+      | tif0 : tm → tm → tm → tm.
+
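+One possible Agda rendering of these definitions is sketched below;
+the type and constructor names (`NatType`, `NatTerm`, and so on) are
+our own choice, not part of the original development:
+
+\begin{code}
+open import Data.Nat using (ℕ)
+
+-- Sketch of the extended syntax: arrow types and naturals for types;
+-- constants, successor, predecessor, multiplication, and zero-test
+-- for terms.
+data NatType : Set where
+  _⟶_ : NatType → NatType → NatType
+  nat : NatType
+
+data NatTerm : Set where
+  var  : Id → NatTerm
+  app  : NatTerm → NatTerm → NatTerm
+  abs  : Id → NatType → NatTerm → NatTerm
+  nat  : ℕ → NatTerm
+  succ : NatTerm → NatTerm
+  pred : NatTerm → NatTerm
+  mult : NatTerm → NatTerm → NatTerm
+  if0  : NatTerm → NatTerm → NatTerm → NatTerm
+\end{code}
+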
+#### Exercise: 4 stars (stlc_arith)
+Finish formalizing the definition and properties of the STLC extended
+with arithmetic. Specifically:
+
+ - Copy the whole development of STLC that we went through above (from
+ the definition of values through the Type Soundness theorem), and
+ paste it into the file at this point.
+
+ - Extend the definitions of the substitution operation `[_:=_]_` and
+   the reduction relation `_==>_` to include appropriate clauses for
+   the arithmetic operators.
+
+ - Extend the proofs of all the properties (up to type soundness) of
+ the original STLC to deal with the new syntactic forms. Make
+ sure Agda accepts the whole file.
diff --git a/StlcProp.md b/StlcProp.md
new file mode 100644
index 00000000..9386c884
--- /dev/null
+++ b/StlcProp.md
@@ -0,0 +1,5279 @@
+---
+title : "StlcProp: Properties of STLC"
+layout : default
+hide-implicit : false
+extra-script : [agda-extra-script.html]
+extra-style : [agda-extra-style.html]
+permalink : "sf/StlcProp.html"
+---
+
+
+module StlcProp where
+
+
+
+
+# Properties of STLC
+
+In this chapter, we develop the fundamental theory of the Simply
+Typed Lambda Calculus -- in particular, the type safety
+theorem.
+
+
+## Canonical Forms
+
+As we saw for the simple calculus in the [Stlc](Stlc.html) chapter, the
+first step in establishing basic properties of reduction and types
+is to identify the possible _canonical forms_ (i.e., well-typed
+closed values) belonging to each type. For $$bool$$, these are the boolean
+values $$true$$ and $$false$$. For arrow types, the canonical forms
+are lambda-abstractions.
+
+
+CanonicalForms : Term → Type → Set
+CanonicalForms t bool = t ≡ true ⊎ t ≡ false
+CanonicalForms t (A ⇒ B) = ∃₂ λ x t′ → t ≡ abs x A t′
+
+canonicalForms : ∀ {t A} → ∅ ⊢ t ∶ A → Value t → CanonicalForms t A
+canonicalForms (abs t′) abs = _ , _ , refl
+canonicalForms true true = inj₁ refl
+canonicalForms false false = inj₂ refl
+
+
+## Progress
+
+As before, the _progress_ theorem tells us that closed, well-typed
+terms are not stuck: either a well-typed term is a value, or it
+can take a reduction step. The proof is a relatively
+straightforward extension of the progress proof we saw in the
+[Stlc](Stlc.html) chapter. We'll give the proof in English first,
+then the formal version.
+
+
+progress : ∀ {t A} → ∅ ⊢ t ∶ A → Value t ⊎ ∃ λ t′ → t ==> t′
+
+
+_Proof_: By induction on the derivation of $$\vdash t : A$$.
+
+ - The last rule of the derivation cannot be `var`,
+ since a variable is never well typed in an empty context.
+
+ - The `true`, `false`, and `abs` cases are trivial, since in
+ each of these cases we can see by inspecting the rule that $$t$$
+ is a value.
+
+ - If the last rule of the derivation is `app`, then $$t$$ has the
+ form $$t_1\;t_2$$ for som e$$t_1$$ and $$t_2$$, where we know that
+ $$t_1$$ and $$t_2$$ are also well typed in the empty context; in particular,
+ there exists a type $$B$$ such that $$\vdash t_1 : A\to T$$ and
+ $$\vdash t_2 : B$$. By the induction hypothesis, either $$t_1$$ is a
+ value or it can take a reduction step.
+
+ - If $$t_1$$ is a value, then consider $$t_2$$, which by the other
+ induction hypothesis must also either be a value or take a step.
+
+ - Suppose $$t_2$$ is a value. Since $$t_1$$ is a value with an
+ arrow type, it must be a lambda abstraction; hence $$t_1\;t_2$$
+ can take a step by `red`.
+
+ - Otherwise, $$t_2$$ can take a step, and hence so can $$t_1\;t_2$$
+ by `app2`.
+
+ - If $$t_1$$ can take a step, then so can $$t_1 t_2$$ by `app1`.
+
+ - If the last rule of the derivation is `if`, then $$t = \text{if }t_1
+ \text{ then }t_2\text{ else }t_3$$, where $$t_1$$ has type $$bool$$. By
+ the IH, $$t_1$$ either is a value or takes a step.
+
+ - If $$t_1$$ is a value, then since it has type $$bool$$ it must be
+ either $$true$$ or $$false$$. If it is $$true$$, then $$t$$ steps
+ to $$t_2$$; otherwise it steps to $$t_3$$.
+
+ - Otherwise, $$t_1$$ takes a step, and therefore so does $$t$$ (by `if`).
+
+
+progress (var x ())
+progress true = inj₁ true
+progress false = inj₁ false
+progress (abs t∶A) = inj₁ abs
+progress (app t₁∶A⇒B t₂∶B)
+ with progress t₁∶A⇒B
+... | inj₂ (_ , t₁⇒t₁′) = inj₂ (_ , app1 t₁⇒t₁′)
+... | inj₁ v₁
+ with progress t₂∶B
+... | inj₂ (_ , t₂⇒t₂′) = inj₂ (_ , app2 v₁ t₂⇒t₂′)
+... | inj₁ v₂
+ with canonicalForms t₁∶A⇒B v₁
+... | (_ , _ , refl) = inj₂ (_ , red v₂)
+progress (if t₁∶bool then t₂∶A else t₃∶A)
+ with progress t₁∶bool
+... | inj₂ (_ , t₁⇒t₁′) = inj₂ (_ , if t₁⇒t₁′)
+... | inj₁ v₁
+ with canonicalForms t₁∶bool v₁
+... | inj₁ refl = inj₂ (_ , iftrue)
+... | inj₂ refl = inj₂ (_ , iffalse)
+
+
+#### Exercise: 3 stars, optional (progress_from_term_ind)
+Show that progress can also be proved by induction on terms
+instead of induction on typing derivations.
+
+
+postulate
+ progress′ : ∀ {t A} → ∅ ⊢ t ∶ A → Value t ⊎ ∃ λ t′ → t ==> t′
+
+
+## Preservation
+
+The other half of the type soundness property is the preservation
+of types during reduction. For this, we need to develop some
+technical machinery for reasoning about variables and
+substitution. Working from top to bottom (from the high-level
+property we are actually interested in to the lowest-level
+technical lemmas that are needed by various cases of the more
+interesting proofs), the story goes like this:
+
+ - The _preservation theorem_ is proved by induction on a typing
+ derivation, pretty much as we did in the [Stlc](Stlc.html)
+ chapter. The one case that is significantly different is the
+ one for the $$red$$ rule, whose definition uses the substitution
+ operation. To see that this step preserves typing, we need to
+ know that the substitution itself does. So we prove a...
+
+ - _substitution lemma_, stating that substituting a (closed)
+ term $$s$$ for a variable $$x$$ in a term $$t$$ preserves the type
+ of $$t$$. The proof goes by induction on the form of $$t$$ and
+ requires looking at all the different cases in the definition
+ of substitition. This time, the tricky cases are the ones for
+ variables and for function abstractions. In both cases, we
+ discover that we need to take a term $$s$$ that has been shown
+ to be well-typed in some context $$\Gamma$$ and consider the same
+ term $$s$$ in a slightly different context $$\Gamma'$$. For this
+ we prove a...
+
+ - _context invariance_ lemma, showing that typing is preserved
+ under "inessential changes" to the context $$\Gamma$$ -- in
+ particular, changes that do not affect any of the free
+ variables of the term. And finally, for this, we need a
+ careful definition of...
+
+ - the _free variables_ of a term -- i.e., those variables
+ mentioned in a term and not in the scope of an enclosing
+ function abstraction binding a variable of the same name.
+
+To make Agda happy, we need to formalize the story in the opposite
+order...
+
+
+### Free Occurrences
+
+A variable $$x$$ _appears free in_ a term _t_ if $$t$$ contains some
+occurrence of $$x$$ that is not under an abstraction labeled $$x$$.
+For example:
+
+ - $$y$$ appears free, but $$x$$ does not, in $$\lambda x : A\to B. x\;y$$
+ - both $$x$$ and $$y$$ appear free in $$(\lambda x:A\to B. x\;y) x$$
+ - no variables appear free in $$\lambda x:A\to B. \lambda y:A. x\;y$$
+
+Formally:
+
+
+data _FreeIn_ (x : Id) : Term → Set where
+ var : x FreeIn var x
+ abs : ∀ {y A t} → y ≢ x → x FreeIn t → x FreeIn abs y A t
+ app1 : ∀ {t₁ t₂} → x FreeIn t₁ → x FreeIn app t₁ t₂
+ app2 : ∀ {t₁ t₂} → x FreeIn t₂ → x FreeIn app t₁ t₂
+ if1 : ∀ {t₁ t₂ t₃} → x FreeIn t₁ → x FreeIn (if t₁ then t₂ else t₃)
+ if2 : ∀ {t₁ t₂ t₃} → x FreeIn t₂ → x FreeIn (if t₁ then t₂ else t₃)
+ if3 : ∀ {t₁ t₂ t₃} → x FreeIn t₃ → x FreeIn (if t₁ then t₂ else t₃)
+
+
+A term in which no variables appear free is said to be _closed_.
+
+
+Closed : Term → Set
+Closed t = ∀ {x} → ¬ (x FreeIn t)
+
+
+#### Exercise: 1 star (free-in)
+If the definition of `_FreeIn_` is not crystal clear to
+you, it is a good idea to take a piece of paper and write out the
+rules in informal inference-rule notation. (Although it is a
+rather low-level, technical definition, understanding it is
+crucial to understanding substitution and its properties, which
+are really the crux of the lambda-calculus.)
+
+### Substitution
+To prove that substitution preserves typing, we first need a
+technical lemma connecting free variables and typing contexts: If
+a variable $$x$$ appears free in a term $$t$$, and if we know $$t$$ is
+well typed in context $$\Gamma$$, then it must be the case that
+$$\Gamma$$ assigns a type to $$x$$.
+
+
+freeInCtxt : ∀ {x t A Γ} → x FreeIn t → Γ ⊢ t ∶ A → ∃ λ B → Γ x ≡ just B
+
+
+_Proof_: We show, by induction on the proof that $$x$$ appears
+ free in $$t$$, that, for all contexts $$\Gamma$$, if $$t$$ is well
+ typed under $$\Gamma$$, then $$\Gamma$$ assigns some type to $$x$$.
+
+ - If the last rule used was `var`, then $$t = x$$, and from
+ the assumption that $$t$$ is well typed under $$\Gamma$$ we have
+ immediately that $$\Gamma$$ assigns a type to $$x$$.
+
+ - If the last rule used was `app1`, then $$t = t_1\;t_2$$ and $$x$$
+ appears free in $$t_1$$. Since $$t$$ is well typed under $$\Gamma$$,
+ we can see from the typing rules that $$t_1$$ must also be, and
+ the IH then tells us that $$\Gamma$$ assigns $$x$$ a type.
+
+ - Almost all the other cases are similar: $$x$$ appears free in a
+ subterm of $$t$$, and since $$t$$ is well typed under $$\Gamma$$, we
+ know the subterm of $$t$$ in which $$x$$ appears is well typed
+ under $$\Gamma$$ as well, and the IH gives us exactly the
+ conclusion we want.
+
+ - The only remaining case is `abs`. In this case $$t =
+ \lambda y:A.t'$$, and $$x$$ appears free in $$t'$$; we also know that
+ $$x$$ is different from $$y$$. The difference from the previous
+ cases is that whereas $$t$$ is well typed under $$\Gamma$$, its
+ body $$t'$$ is well typed under $$(\Gamma, y:A)$$, so the IH
+ allows us to conclude that $$x$$ is assigned some type by the
+ extended context $$(\Gamma, y:A)$$. To conclude that $$\Gamma$$
+ assigns a type to $$x$$, we appeal the decidable equality for names
+ `_≟_`, noting that $$x$$ and $$y$$ are different variables.
+
+
+freeInCtxt var (var _ x∶A) = (_ , x∶A)
+freeInCtxt (app1 x∈t₁) (app t₁∶A _ ) = freeInCtxt x∈t₁ t₁∶A
+freeInCtxt (app2 x∈t₂) (app _ t₂∶A) = freeInCtxt x∈t₂ t₂∶A
+freeInCtxt (if1 x∈t₁) (if t₁∶A then _ else _ ) = freeInCtxt x∈t₁ t₁∶A
+freeInCtxt (if2 x∈t₂) (if _ then t₂∶A else _ ) = freeInCtxt x∈t₂ t₂∶A
+freeInCtxt (if3 x∈t₃) (if _ then _ else t₃∶A) = freeInCtxt x∈t₃ t₃∶A
+freeInCtxt {x} (abs {y} y≠x x∈t) (abs t∶B)
+ with freeInCtxt x∈t t∶B
+... | x∶A
+ with y ≟ x
+... | yes y=x = ⊥-elim (y≠x y=x)
+... | no _ = x∶A
+
+
+Next, we'll need the fact that any term $$t$$ which is well typed in
+the empty context is closed (it has no free variables).
+
+#### Exercise: 2 stars, optional (∅⊢-closed)
+
+
+postulate
+ ∅⊢-closed : ∀ {t A} → ∅ ⊢ t ∶ A → Closed t
+
+
+
+
+Sometimes, when we have a proof $$\Gamma\vdash t : A$$, we will need to
+replace $$\Gamma$$ by a different context $$\Gamma'$$. When is it safe
+to do this? Intuitively, it must at least be the case that
+$$\Gamma'$$ assigns the same types as $$\Gamma$$ to all the variables
+that appear free in $$t$$. In fact, this is the only condition that
+is needed.
+
+
+replaceCtxt : ∀ {Γ Γ′ t A}
+ → (∀ {x} → x FreeIn t → Γ x ≡ Γ′ x)
+ → Γ ⊢ t ∶ A
+ → Γ′ ⊢ t ∶ A
+
+
+_Proof_: By induction on the derivation of
+$$\Gamma \vdash t \in T$$.
+
+ - If the last rule in the derivation was `var`, then $$t = x$$
+ and $$\Gamma x = T$$. By assumption, $$\Gamma' x = T$$ as well, and
+ hence $$\Gamma' \vdash t : T$$ by `var`.
+
+ - If the last rule was `abs`, then $$t = \lambda y:A. t'$$, with
+ $$T = A\to B$$ and $$\Gamma, y : A \vdash t' : B$$. The
+ induction hypothesis is that, for any context $$\Gamma''$$, if
+ $$\Gamma, y:A$$ and $$\Gamma''$$ assign the same types to all the
+ free variables in $$t'$$, then $$t'$$ has type $$B$$ under
+ $$\Gamma''$$. Let $$\Gamma'$$ be a context which agrees with
+ $$\Gamma$$ on the free variables in $$t$$; we must show
+ $$\Gamma' \vdash \lambda y:A. t' : A\to B$$.
+
+ By $$abs$$, it suffices to show that $$\Gamma', y:A \vdash t' : t'$$.
+ By the IH (setting $$\Gamma'' = \Gamma', y:A$$), it suffices to show
+ that $$\Gamma, y:A$$ and $$\Gamma', y:A$$ agree on all the variables
+ that appear free in $$t'$$.
+
+ Any variable occurring free in $$t'$$ must be either $$y$$ or
+ some other variable. $$\Gamma, y:A$$ and $$\Gamma', y:A$$
+ clearly agree on $$y$$. Otherwise, note that any variable other
+ than $$y$$ that occurs free in $$t'$$ also occurs free in
+ $$t = \lambda y:A. t'$$, and by assumption $$\Gamma$$ and
+ $$\Gamma'$$ agree on all such variables; hence so do $$\Gamma, y:A$$ and
+ $$\Gamma', y:A$$.
+
+ - If the last rule was `app`, then $$t = t_1\;t_2$$, with
+ $$\Gamma \vdash t_1 : A\to T$$ and $$\Gamma \vdash t_2 : A$$.
+ One induction hypothesis states that for all contexts $$\Gamma'$$,
+ if $$\Gamma'$$ agrees with $$\Gamma$$ on the free variables in $$t_1$$,
+ then $$t_1$$ has type $$A\to T$$ under $$\Gamma'$$; there is a similar IH
+ for $$t_2$$. We must show that $$t_1\;t_2$$ also has type $$T$$ under
+ $$\Gamma'$$, given the assumption that $$\Gamma'$$ agrees with
+ $$\Gamma$$ on all the free variables in $$t_1\;t_2$$. By `app`, it
+ suffices to show that $$t_1$$ and $$t_2$$ each have the same type
+ under $$\Gamma'$$ as under $$\Gamma$$. But all free variables in
+ $$t_1$$ are also free in $$t_1\;t_2$$, and similarly for $$t_2$$;
+ hence the desired result follows from the induction hypotheses.
+
+
+replaceCtxt f (var x x∶A) rewrite f var = var x x∶A
+replaceCtxt f (app t₁∶A⇒B t₂∶A)
+ = app (replaceCtxt (f ∘ app1) t₁∶A⇒B) (replaceCtxt (f ∘ app2) t₂∶A)
+replaceCtxt {Γ} {Γ′} f (abs {.Γ} {x} {A} {B} {t′} t′∶B)
+ = abs (replaceCtxt f′ t′∶B)
+ where
+ f′ : ∀ {y} → y FreeIn t′ → (Γ , x ∶ A) y ≡ (Γ′ , x ∶ A) y
+ f′ {y} y∈t′ with x ≟ y
+ ... | yes _ = refl
+ ... | no x≠y = f (abs x≠y y∈t′)
+replaceCtxt _ true = true
+replaceCtxt _ false = false
+replaceCtxt f (if t₁∶bool then t₂∶A else t₃∶A)
+ = if replaceCtxt (f ∘ if1) t₁∶bool
+ then replaceCtxt (f ∘ if2) t₂∶A
+ else replaceCtxt (f ∘ if3) t₃∶A
+
+
+Now we come to the conceptual heart of the proof that reduction
+preserves types -- namely, the observation that _substitution_
+preserves types.
+
+Formally, the so-called _Substitution Lemma_ says this: Suppose we
+have a term $$t$$ with a free variable $$x$$, and suppose we've been
+able to assign a type $$T$$ to $$t$$ under the assumption that $$x$$ has
+some type $$U$$. Also, suppose that we have some other term $$v$$ and
+that we've shown that $$v$$ has type $$U$$. Then, since $$v$$ satisfies
+the assumption we made about $$x$$ when typing $$t$$, we should be
+able to substitute $$v$$ for each of the occurrences of $$x$$ in $$t$$
+and obtain a new term that still has type $$T$$.
+
+_Lemma_: If $$\Gamma,x:U \vdash t : T$$ and $$\vdash v : U$$, then
+$$\Gamma \vdash [x:=v]t : T$$.
+
+\begin{code}
+[:=]-preserves-⊢ : ∀ {Γ x A t v B}
+ → ∅ ⊢ v ∶ A
+ → Γ , x ∶ A ⊢ t ∶ B
+ → Γ ⊢ [ x := v ] t ∶ B
+\end{code}
+
+One technical subtlety in the statement of the lemma is that
+we assign $$v$$ the type $$U$$ in the _empty_ context -- in other
+words, we assume $$v$$ is closed. This assumption considerably
+simplifies the $$abs$$ case of the proof (compared to assuming
+$$\Gamma \vdash v : U$$, which would be the other reasonable assumption
+at this point) because the context invariance lemma then tells us
+that $$v$$ has type $$U$$ in any context at all -- we don't have to
+worry about free variables in $$v$$ clashing with the variable being
+introduced into the context by $$abs$$.
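+
+To see the kind of reasoning this enables, here is a sketch of the
+weakening fact we are appealing to: a term typed in the empty context
+can be typed in any context. The sketch assumes that `Closed t` was
+defined earlier as `∀ {x} → ¬ (x FreeIn t)` and that `⊥-elim` (from
+`Data.Empty`) is in scope; adjust the names if the definitions given
+earlier differ. It is not part of the checked code.
+
+    weaken-closed : ∀ {Γ t A} → ∅ ⊢ t ∶ A → Γ ⊢ t ∶ A
+    weaken-closed {Γ} {t} ⊢t = replaceCtxt agree ⊢t
+      where
+      -- no variable occurs free in t, so the agreement condition is vacuous
+      agree : ∀ {x} → x FreeIn t → ∅ x ≡ Γ x
+      agree x∈t = ⊥-elim (∅⊢-closed ⊢t x∈t)
+
+This is exactly what the variable case of the substitution proof below
+will need.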
+
+The substitution lemma can be viewed as a kind of "commutation"
+property. Intuitively, it says that substitution and typing can
+be done in either order: we can either assign types to the terms
+$$t$$ and $$v$$ separately (under suitable contexts) and then combine
+them using substitution, or we can substitute first and then
+assign a type to $$[x:=v]t$$ -- the result is the same either
+way.
+
+_Proof_: We show, by induction on $$t$$, that for all $$T$$ and
+$$\Gamma$$, if $$\Gamma,x:U \vdash t : T$$ and $$\vdash v : U$$, then $$\Gamma
+\vdash [x:=v]t : T$$.
+
+ - If $$t$$ is a variable there are two cases to consider,
+ depending on whether $$t$$ is $$x$$ or some other variable.
+
+ - If $$t = x$$, then from the fact that $$\Gamma, x:U \vdash x :
+ T$$ we conclude that $$U = T$$. We must show that $$[x:=v]x =
+ v$$ has type $$T$$ under $$\Gamma$$, given the assumption that
+ $$v$$ has type $$U = T$$ under the empty context. This
+ follows from context invariance: if a closed term has type
+ $$T$$ in the empty context, it has that type in any context.
+
+ - If $$t$$ is some variable $$y$$ that is not equal to $$x$$, then
+ we need only note that $$y$$ has the same type under $$\Gamma,
+ x:U$$ as under $$\Gamma$$.
+
+ - If $$t$$ is an abstraction $$\lambda y:T_{11}. t_{12}$$, then the IH tells us,
+ for all $$\Gamma'$$ and $$T'$$, that if $$\Gamma',x:U \vdash t_{12} : T'$$
+ and $$\vdash v : U$$, then $$\Gamma' \vdash [x:=v]t_{12} : T'$$.
+
+ The substitution in the conclusion behaves differently
+ depending on whether $$x$$ and $$y$$ are the same variable.
+
+ First, suppose $$x = y$$. Then, by the definition of
+ substitution, $$[x:=v]t = t$$, so we just need to show $$\Gamma \vdash
+ t : T$$. But we know $$\Gamma,x:U \vdash t : T$$, and, since $$x$$ (which
+ equals $$y$$) does not appear free in $$\lambda y:T_{11}. t_{12}$$, the
+ context invariance lemma yields $$\Gamma \vdash t : T$$.
+
+ Second, suppose $$x \neq y$$. We know $$\Gamma,x:U,y:T_{11} \vdash t_{12} :
+ T_{12}$$ by inversion of the typing relation, from which
+ $$\Gamma,y:T_{11},x:U \vdash t_{12} : T_{12}$$ follows by the context
+ invariance lemma, so the IH applies, giving us $$\Gamma,y:T_{11} \vdash
+ [x:=v]t_{12} : T_{12}$$. By $$abs$$,
+ $$\Gamma \vdash \lambda y:T_{11}. [x:=v]t_{12} : T_{11}\to T_{12}$$, and by
+ the definition of substitution (noting that $$x \neq y$$),
+ $$\Gamma \vdash [x:=v](\lambda y:T_{11}. t_{12}) : T_{11}\to T_{12}$$ as required.
+
+ - If $$t$$ is an application $$t_1 t_2$$, the result follows
+ straightforwardly from the definition of substitution and the
+ induction hypotheses.
+
+ - The remaining cases are similar to the application case.
+
+One more technical note: This proof is a rare case where an
+induction on terms, rather than typing derivations, yields a
+simpler argument. The reason for this is that the assumption
+$$\Gamma, x:U \vdash t : T$$ is not completely generic, in the
+sense that one of the "slots" in the typing relation -- namely the
+context -- is not just a variable, and this means that inducting
+directly on the typing derivation does not give us the induction
+hypothesis that we want. It is possible to work around this, but the needed
+generalization is a little tricky. The term $$t$$, on the other
+hand, _is_ completely generic.
+
+\begin{code}
+-- var: the substitution either replaces the variable by v or leaves it alone
+[:=]-preserves-⊢ {Γ} {x} v∶A (var y y∈Γ) with x ≟ y
+... | yes x=y = {!!}   -- the substitution yields v itself; use that the closed v is typeable in any context
+... | no  x≠y = {!!}   -- the variable is left unchanged by the substitution
+-- abs: the interesting case, where the bound variable may shadow x
+[:=]-preserves-⊢ v∶A (abs t′∶B) = {!!}
+[:=]-preserves-⊢ v∶A (app t₁∶A⇒B t₂∶A) =
+  app ([:=]-preserves-⊢ v∶A t₁∶A⇒B) ([:=]-preserves-⊢ v∶A t₂∶A)
+[:=]-preserves-⊢ v∶A true  = true
+[:=]-preserves-⊢ v∶A false = false
+[:=]-preserves-⊢ v∶A (if t₁∶bool then t₂∶B else t₃∶B) =
+  if   [:=]-preserves-⊢ v∶A t₁∶bool
+  then [:=]-preserves-⊢ v∶A t₂∶B
+  else [:=]-preserves-⊢ v∶A t₃∶B
+\end{code}
+
+
+### Main Theorem
+
+We now have the tools we need to prove preservation: if a closed
+term $$t$$ has type $$T$$ and takes a step to $$t'$$, then $$t'$$
+is also a closed term with type $$T$$. In other words, the small-step
+reduction relation preserves types.
+
+    Theorem preservation : forall t t' T,
+      empty \vdash t : T →
+      t ==> t' →
+      empty \vdash t' : T.
+
+_Proof_: By induction on the derivation of $$\vdash t : T$$.
+
+- We can immediately rule out $$var$$, $$abs$$, $$true$$, and
+ $$false$$ as the final rules in the derivation, since in each of
+ these cases $$t$$ cannot take a step.
+
+- If the last rule in the derivation was $$app$$, then $$t = t_1
+ t_2$$. There are three cases to consider, one for each rule that
+ could have been used to show that $$t_1 t_2$$ takes a step to $$t'$$.
+
+ - If $$t_1 t_2$$ takes a step by $$Sapp1$$, with $$t_1$$ stepping to
+ $$t_1'$$, then by the IH $$t_1'$$ has the same type as $$t_1$$, and
+ hence $$t_1' t_2$$ has the same type as $$t_1 t_2$$.
+
+ - The $$Sapp2$$ case is similar.
+
+ - If $$t_1 t_2$$ takes a step by $$Sred$$, then $$t_1 =
+ \lambda x:T_{11}.t_{12}$$ and $$t_1 t_2$$ steps to $$[x:=t_2]t_{12}$$; the
+ desired result now follows from the fact that substitution
+ preserves types.
+
+- If the last rule in the derivation was $$if$$, then $$t = if t_1
+ then t_2 else t_3$$, and there are again three cases depending on
+ how $$t$$ steps.
+
+ - If $$t$$ steps to $$t_2$$ or $$t_3$$, the result is immediate, since
+ $$t_2$$ and $$t_3$$ have the same type as $$t$$.
+
+ - Otherwise, $$t$$ steps by $$Sif$$, and the desired conclusion
+ follows directly from the induction hypothesis.
+
+    Proof with eauto.
+      remember (@empty ty) as Gamma.
+      intros t t' T HT. generalize dependent t'.
+      induction HT;
+        intros t' HE; subst Gamma; subst;
+        try solve [inversion HE; subst; auto].
+      - (* app *)
+        inversion HE; subst...
+        (* Most of the cases are immediate by induction,
+           and eauto takes care of them *)
+        + (* Sred *)
+          apply substitution_preserves_typing with T_11...
+          inversion HT_1...
+    Qed.
+
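+In the Agda development, the theorem takes roughly the following shape,
+where `_⟹_` stands in for whatever name was given to the small-step
+relation earlier; since that name is an assumption here, treat this as a
+sketch rather than part of the checked code.
+
+    preservation : ∀ {t t′ A} → ∅ ⊢ t ∶ A → t ⟹ t′ → ∅ ⊢ t′ ∶ A
+    preservation ⊢t step = {!!}
+      -- by induction on ⊢t, following the informal argument above:
+      -- the app case splits on Sapp1, Sapp2, and Sred, using
+      -- [:=]-preserves-⊢ in the Sred case; the if case is analogous.
+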
+#### Exercise: 2 stars, recommended (subject_expansion_stlc)
+An exercise in the [Stlc](sf/Stlc.html) chapter asked about the subject
+expansion property for the simple language of arithmetic and
+boolean expressions. Does this property hold for STLC? That is,
+is it always the case that, if $$t ==> t'$$ and $$\emptyset \vdash t' : T$$,
+then $$\emptyset \vdash t : T$$? If so, prove it. If not, give a
+counter-example not involving conditionals.
+
+
+## Type Soundness
+
+#### Exercise: 2 stars, optional (type_soundness)
+Put progress and preservation together and show that a well-typed
+term can _never_ reach a stuck state.
+
+    Definition stuck (t:tm) : Prop :=
+      (normal_form step) t /\ ~ value t.
+
+    Corollary soundness : forall t t' T,
+      empty \vdash t : T →
+      t ==>* t' →
+      ~(stuck t').
+    Proof.
+      intros t t' T Hhas_type Hmulti. unfold stuck.
+      intros [Hnf Hnot_val]. unfold normal_form in Hnf.
+      induction Hmulti.
+      (* FILL IN HERE *) Admitted.
+
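+A possible Agda rendering of the statement, assuming names `Term`,
+`Value`, `NormalForm`, and a multi-step relation `_⟹*_` were introduced
+earlier (all of these names are assumptions, as are the imports of `¬`
+and `_×_`, so this is a sketch only):
+
+    Stuck : Term → Set
+    Stuck t = NormalForm t × ¬ Value t
+
+    soundness : ∀ {t t′ A} → ∅ ⊢ t ∶ A → t ⟹* t′ → ¬ Stuck t′
+    soundness ⊢t steps = {!!}   -- combine progress and preservation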
+
+## Uniqueness of Types
+
+#### Exercise: 3 stars (types_unique)
+Another nice property of the STLC is that types are unique: a
+given term (in a given context) has at most one type.
+Formalize this statement and prove it.
+
+
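+One way to formalize the statement, using only the typing judgement
+defined above (the name `uniqueness` is arbitrary):
+
+    uniqueness : ∀ {Γ t A B} → Γ ⊢ t ∶ A → Γ ⊢ t ∶ B → A ≡ B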
+
+## Additional Exercises
+
+#### Exercise: 1 star (progress_preservation_statement)
+Without peeking at their statements above, write down the progress
+and preservation theorems for the simply typed lambda-calculus.
+
+#### Exercise: 2 stars (stlc_variation1)
+Suppose we add a new term $$zap$$ with the following reduction rule
+
+ --------- (ST_Zap)
+ t ==> zap
+
+and the following typing rule:
+
+ ---------------- (T_Zap)
+ Gamma \vdash zap : T
+
+Which of the following properties of the STLC remain true in
+the presence of these rules? For each property, write either
+"remains true" or "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+
+#### Exercise: 2 stars (stlc_variation2)
+Suppose instead that we add a new term $$foo$$ with the following
+reduction rules:
+
+ ----------------- (ST_Foo1)
+ (\lambda x:A. x) ==> foo
+
+ ------------ (ST_Foo2)
+ foo ==> true
+
+Which of the following properties of the STLC remain true in
+the presence of these rules? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+#### Exercise: 2 stars (stlc_variation3)
+Suppose instead that we remove the rule $$Sapp1$$ from the $$step$$
+relation. Which of the following properties of the STLC remain
+true in the absence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+#### Exercise: 2 stars, optional (stlc_variation4)
+Suppose instead that we add the following new rule to the
+reduction relation:
+
+ ---------------------------------- (ST_FunnyIfTrue)
+ (if true then t_1 else t_2) ==> true
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation5)
+Suppose instead that we add the following new rule to the typing
+relation:
+
+ Gamma \vdash t_1 : bool→bool→bool
+ Gamma \vdash t_2 : bool
+ ------------------------------ (T_FunnyApp)
+ Gamma \vdash t_1 t_2 : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation6)
+Suppose instead that we add the following new rule to the typing
+relation:
+
+ Gamma \vdash t_1 : bool
+ Gamma \vdash t_2 : bool
+ --------------------- (T_FunnyApp')
+ Gamma \vdash t_1 t_2 : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+
+
+#### Exercise: 2 stars, optional (stlc_variation7)
+Suppose we add the following new rule to the typing relation
+of the STLC:
+
+ ------------------- (T_FunnyAbs)
+ \vdash \lambda x:bool.t : bool
+
+Which of the following properties of the STLC remain true in
+the presence of this rule? For each one, write either
+"remains true" or else "becomes false." If a property becomes
+false, give a counterexample.
+
+ - Determinism of $$step$$
+
+ - Progress
+
+ - Preservation
+
+
+### Exercise: STLC with Arithmetic
+
+To see how the STLC might function as the core of a real
+programming language, let's extend it with a concrete base
+type of numbers and some constants and primitive
+operators.
+
+To types, we add a base type of natural numbers (and remove
+booleans, for brevity).
+
+    Inductive ty : Type :=
+      | TArrow : ty → ty → ty
+      | TNat : ty.
+
+To terms, we add natural number constants, along with
+successor, predecessor, multiplication, and zero-testing.
+
+    Inductive tm : Type :=
+      | tvar : id → tm
+      | tapp : tm → tm → tm
+      | tabs : id → ty → tm → tm
+      | tnat : nat → tm
+      | tsucc : tm → tm
+      | tpred : tm → tm
+      | tmult : tm → tm → tm
+      | tif0 : tm → tm → tm → tm.
+
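+In Agda, the corresponding declarations might look as follows. The
+names used here (`Type`, `Term`, `nat`, `lit`, and so on) are
+placeholders: adapt them to the names used in the rest of the
+development. `Id` is assumed to be the identifier type from the
+original STLC, and `ℕ` comes from `Data.Nat`.
+
+    data Type : Set where
+      _⇒_ : Type → Type → Type
+      nat : Type
+
+    data Term : Set where
+      var  : Id → Term
+      app  : Term → Term → Term
+      abs  : Id → Type → Term → Term
+      lit  : ℕ → Term             -- natural number constants
+      suc  : Term → Term
+      pred : Term → Term
+      mul  : Term → Term → Term
+      if0  : Term → Term → Term → Term
+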
+#### Exercise: 4 stars (stlc_arith)
+Finish formalizing the definition and properties of the STLC extended
+with arithmetic. Specifically:
+
+ - Copy the whole development of STLC that we went through above (from
+ the definition of values through the Type Soundness theorem), and
+ paste it into the file at this point.
+
+ - Extend the definitions of the $$subst$$ operation and the $$step$$
+ relation to include appropriate clauses for the arithmetic operators.
+
+ - Extend the proofs of all the properties (up to $$soundness$$) of
+ the original STLC to deal with the new syntactic forms. Make
+ sure Agda accepts the whole file.