Minimal version without agda2html

This commit is contained in:
Wen Kokke 2019-06-05 10:38:52 +01:00
parent 3da844c239
commit 56ebba4a73
51 changed files with 2148 additions and 2100 deletions

View file

@ -1,7 +1,7 @@
SHELL := /bin/bash
agda := $(shell find . -type f -and \( -path '*/src/*' -or -path '*/tspl/*' \) -and -name '*.lagda')
agda := $(shell find . -type f -and \( -path '*/src/*' -or -path '*/tspl/*' \) -and -name '*.lagda.md')
agdai := $(shell find . -type f -and \( -path '*/src/*' -or -path '*/tspl/*' \) -and -name '*.agdai')
markdown := $(subst tspl/,out/,$(subst src/,out/,$(subst .lagda,.md,$(agda))))
markdown := $(subst tspl/,out/,$(subst src/,out/,$(subst .lagda,,$(agda))))
PLFA_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
AGDA2HTML_FLAGS := --verbose --link-to-local-agda-names --use-jekyll=out/
@ -30,17 +30,13 @@ out/:
# Build PLFA pages
out/%.md: src/%.lagda | out/
set -o pipefail && agda2html $(AGDA2HTML_FLAGS) -i $< -o $@ 2>&1 \
| sed '/^Generating.*/d; /^Warning\: HTML.*/d; /^reached from the.*/d; /^\s*$$/d'
@sed -i '1 s|---|---\nsrc : $(<)|' $@
out/%.md: src/%.lagda.md | out/
./highlight.sh $< $@
# Build TSPL pages
out/%.md: tspl/%.lagda | out/
set -o pipefail && agda2html $(AGDA2HTML_FLAGS) -i $< -o $@ -- --include-path=$(realpath src) 2>&1 \
| sed '/^Generating.*/d; /^Warning\: HTML.*/d; /^reached from the.*/d; /^\s*$$/d'
@sed -i '1 s|---|---\nsrc : $(<)|' $@
out/%.md: tspl/%.lagda.md | out/
./highlight.sh $< $@ --include-path $(realpath src) --include-path $(realpath tspl)
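Under the new rules a page is built by the highlight script alone rather than by piping through agda2html; for example (module path illustrative), `make out/plfa/Adequacy.md` would now run roughly:

./highlight.sh src/plfa/Adequacy.lagda.md out/plfa/Adequacy.md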
# Start server

View file

@ -7,7 +7,6 @@ permalink: /GettingStarted/
[![Build Status](https://travis-ci.org/plfa/plfa.github.io.svg?branch=dev)](https://travis-ci.org/plfa/plfa.github.io)
[![Agda](https://img.shields.io/badge/agda-2.5.4.2-blue.svg)](https://github.com/agda/agda/releases/tag/v2.5.4.2)
[![agda-stdlib](https://img.shields.io/badge/agda--stdlib-0.17-blue.svg)](https://github.com/agda/agda-stdlib/releases/tag/v0.17)
[![agda2html](https://img.shields.io/badge/agda2html-0.2.4.0-blue.svg)](https://github.com/wenkokke/agda2html/releases/tag/v0.2.4.0)
# Getting Started with PLFA
@ -38,19 +37,12 @@ To do so, add the path to `plfa.agda-lib` to `~/.agda/libraries` and add `plfa`
To build and host a local copy of the book, there are several tools you need *in addition to those listed above*:
- [agda2html](https://github.com/wenkokke/agda2html)
- [Ruby](https://www.ruby-lang.org/en/documentation/installation/)
- [Bundler](https://bundler.io/#getting-started)
For most of the tools, you can simply follow their respective build instructions.
Most recent versions of Ruby should work.
We advise installing agda2html using [Stack](https://docs.haskellstack.org/en/stable/README/):
git clone https://github.com/wenkokke/agda2html.git
cd agda2html
stack install
Finally, you must install the Ruby dependencies---[Jekyll](https://jekyllrb.com/), [html-proofer](https://github.com/gjtorikian/html-proofer), *etc.*---using Bundler:
You install the Ruby dependencies---[Jekyll](https://jekyllrb.com/), [html-proofer](https://github.com/gjtorikian/html-proofer), *etc.*---using Bundler:
bundle install
@ -87,6 +79,7 @@ unzip, and from within the directory run
bundle install
bundle exec jekyll serve
## GNU sed and macOS
The version of sed that ships with macOS is not fully compatible with GNU sed.
@ -99,15 +92,6 @@ You can fix this error by installing a GNU compatible version of sed, e.g. using
brew install gnu-sed --with-default-names
```
## Updating `agda2html`
Sometimes we have to update agda2html.
To update your local copy, run the following commands from your clone of the
agda2html repository, or simply follow the installation instructions again:
git pull
stack install
## Unicode characters

View file

@ -32,9 +32,9 @@ markdown: kramdown
theme: minima
exclude:
- "hs/"
- "src/"
- "extra/"
- "depr/"
- "vendor/"
- "*.lagda"
- "*.lagda.md"
- "Gemfile"
- "Gemfile.lock"

25
highlight.sh Executable file
View file

@ -0,0 +1,25 @@
#!/bin/bash
SRC="$1"
shift
OUT="$1"
OUT_DIR="$(dirname $OUT)"
shift
# NOTE: this assumes $OUT is equivalent to out/ plus the module path
HTML_DIR="$(mktemp -d)"
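# Reconstruct the name of the file that Agda's HTML backend writes into $HTML_DIR:
# the module path with '/' replaced by '.', e.g. out/plfa/Adequacy.md -> plfa.Adequacy.md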
HTML="${OUT#out/}"
HTML="/${HTML//\//.}"
HTML="$HTML_DIR/$HTML"
set -o pipefail \
&& agda --html --html-highlight=code --html-dir="$HTML_DIR" "$SRC" "$@" \
| sed '/^Generating.*/d; /^Warning\: HTML.*/d; /^reached from the.*/d; /^\s*$/d'
sed -i "1 s|---|---\nsrc : $SRC |" "$HTML"
sed -i "s|<pre class=\"Agda\">|{% raw %}<pre class=\"Agda\">|" "$HTML"
sed -i "s|</pre>|</pre>{% endraw %}|" "$HTML"
mkdir -p "$OUT_DIR"
cp "$HTML" "$OUT"

View file

@ -6,9 +6,9 @@ permalink : /Adequacy/
next : /ContextualEquivalence/
---
\begin{code}
```
module plfa.Adequacy where
\end{code}
```
## Introduction
@ -69,7 +69,7 @@ The rest of this chapter is organized as follows.
## Imports
\begin{code}
```
open import plfa.Untyped
using (Context; _⊢_; ★; _∋_; ∅; _,_; Z; S_; `_; ƛ_; _·_;
rename; subst; ext; exts; _[_]; subst-zero)
@ -98,7 +98,7 @@ open import Data.Empty using (⊥-elim) renaming (⊥ to Bot)
open import Data.Unit
open import Relation.Nullary using (Dec; yes; no)
open import Function using (_∘_)
\end{code}
```
## The property of being greater or equal to a function
@ -106,25 +106,25 @@ open import Function using (_∘_)
We define the following short-hand for saying that a value is
greater than or equal to a function value.
\begin{code}
```
AboveFun : Value → Set
AboveFun u = Σ[ v ∈ Value ] Σ[ w ∈ Value ] v ↦ w ⊑ u
\end{code}
```
If a value `u` is greater than a function, then an even greater value `u'`
is too.
\begin{code}
```
AboveFun-⊑ : ∀{u u' : Value}
→ AboveFun u → u ⊑ u'
-------------------
→ AboveFun u'
AboveFun-⊑ ⟨ v , ⟨ w , lt' ⟩ ⟩ lt = ⟨ v , ⟨ w , Trans⊑ lt' lt ⟩ ⟩
\end{code}
```
The bottom value `⊥` is not greater than a function.
\begin{code}
```
AboveFun⊥ : ¬ AboveFun ⊥
AboveFun⊥ ⟨ v , ⟨ w , lt ⟩ ⟩
with sub-inv-fun lt
@ -133,12 +133,12 @@ AboveFun⊥ ⟨ v , ⟨ w , lt ⟩ ⟩
... | ⟨ A , ⟨ B , m ⟩ ⟩
with Γ⊆⊥ m
... | ()
\end{code}
```
If the join of two values `u` and `u'` is greater than a function, then
at least one of them is too.
\begin{code}
```
AboveFun-⊔ : ∀{u u'}
→ AboveFun (u ⊔ u')
→ AboveFun u ⊎ AboveFun u'
@ -150,12 +150,12 @@ AboveFun-⊔{u}{u'} ⟨ v , ⟨ w , v↦w⊑u⊔u' ⟩ ⟩
with Γ⊆u⊔u' m
... | inj₁ x = inj₁ ⟨ A , ⟨ B , (∈→⊑ x) ⟩ ⟩
... | inj₂ x = inj₂ ⟨ A , ⟨ B , (∈→⊑ x) ⟩ ⟩
\end{code}
```
On the other hand, if neither of `u` and `u'` is greater than a function,
then their join is also not greater than a function.
\begin{code}
```
not-AboveFun-⊔ : ∀{u u' : Value}
→ ¬ AboveFun u → ¬ AboveFun u'
→ ¬ AboveFun (u ⊔ u')
@ -163,12 +163,12 @@ not-AboveFun-⊔ naf1 naf2 af12
with AboveFun-⊔ af12
... | inj₁ af1 = contradiction af1 naf1
... | inj₂ af2 = contradiction af2 naf2
\end{code}
```
The converse is also true. If the join of two values is not above a
function, then neither of them is individually.
\begin{code}
```
not-AboveFun-⊔-inv : ∀{u u' : Value} → ¬ AboveFun (u ⊔ u')
→ ¬ AboveFun u × ¬ AboveFun u'
not-AboveFun-⊔-inv af = ⟨ f af , g af ⟩
@ -179,12 +179,12 @@ not-AboveFun-⊔-inv af = ⟨ f af , g af ⟩
g : ∀{u u' : Value} → ¬ AboveFun (u ⊔ u') → ¬ AboveFun u'
g{u}{u'} af12 ⟨ v , ⟨ w , lt ⟩ ⟩ =
contradiction ⟨ v , ⟨ w , ConjR2⊑ lt ⟩ ⟩ af12
\end{code}
```
The property of being greater than a function value is decidable, as
exhibited by the following function.
\begin{code}
```
AboveFun? : (v : Value) → Dec (AboveFun v)
AboveFun? ⊥ = no AboveFun⊥
AboveFun? (v ↦ w) = yes ⟨ v , ⟨ w , Refl⊑ ⟩ ⟩
@ -193,7 +193,7 @@ AboveFun? (u ⊔ u')
... | yes ⟨ v , ⟨ w , lt ⟩ ⟩ | _ = yes ⟨ v , ⟨ w , (ConjR1⊑ lt) ⟩ ⟩
... | no _ | yes ⟨ v , ⟨ w , lt ⟩ ⟩ = yes ⟨ v , ⟨ w , (ConjR2⊑ lt) ⟩ ⟩
... | no x | no y = no (not-AboveFun-⊔ x y)
\end{code}
```
## Relating values to closures
@ -206,10 +206,10 @@ to a closure `c'` in WHNF and `𝕍 v c'`. Regarding `𝕍 v c`, it will hold wh
`c` is in WHNF, and if `v` is a function, the body of `c` evaluates
according to `v`.
\begin{code}
```
𝕍 : Value → Clos → Set
𝔼 : Value → Clos → Set
\end{code}
```
We define `𝕍` as a function from values and closures to `Set` and not as a
data type because it is mutually recursive with `𝔼` in a negative
@ -219,7 +219,7 @@ application, then `𝕍` is false (`Bot`). If the term is a lambda
abstraction, we define `𝕍` by recursion on the value, which we
describe below.
\begin{code}
```
𝕍 v (clos (` x₁) γ) = Bot
𝕍 v (clos (M · M₁) γ) = Bot
𝕍 ⊥ (clos (ƛ M) γ) = ⊤
@ -227,7 +227,7 @@ describe below.
(∀{c : Clos} → 𝔼 v c → AboveFun w → Σ[ c' ∈ Clos ]
(γ ,' c) ⊢ N ⇓ c' × 𝕍 w c')
𝕍 (u ⊔ v) (clos (ƛ N) γ) = 𝕍 u (clos (ƛ N) γ) × 𝕍 v (clos (ƛ N) γ)
\end{code}
```
* If the value is `⊥`, then the result is true (`⊤`).
@ -243,9 +243,9 @@ describe below.
The definition of `𝔼` is straightforward. If `v` is greater than a
function, then `M` evaluates to a closure related to `v`.
\begin{code}
```
𝔼 v (clos M γ') = AboveFun v → Σ[ c ∈ Clos ] γ' ⊢ M ⇓ c × 𝕍 v c
\end{code}
```
The proof of the main lemma is by induction on `γ ⊢ M ↓ v`, so it goes
underneath lambda abstractions and must therefore reason about open
@ -254,7 +254,7 @@ semantic values to environments of closures. In the following, `𝔾`
relates `γ` to `γ'` if the corresponding values and closures are related
by `𝔼`.
\begin{code}
```
𝔾 : ∀{Γ} → Env Γ → ClosEnv Γ → Set
𝔾 {Γ} γ γ' = ∀{x : Γ ∋ ★} → 𝔼 (γ x) (γ' x)
@ -265,34 +265,34 @@ by `𝔼`.
𝔾 γ γ' → 𝔼 v c → 𝔾 (γ `, v) (γ' ,' c)
𝔾-ext {Γ} {γ} {γ'} g e {Z} = e
𝔾-ext {Γ} {γ} {γ'} g e {S x} = g
\end{code}
```
We need a few properties of the `𝕍` and `𝔼` relations. The first is that
a closure in the `𝕍` relation must be in weak-head normal form. We
define WHNF as follows.
\begin{code}
```
data WHNF : ∀ {Γ A} → Γ ⊢ A → Set where
ƛ_ : ∀ {Γ} {N : Γ , ★ ⊢ ★}
→ WHNF (ƛ N)
\end{code}
```
The proof goes by cases on the term in the closure.
\begin{code}
```
𝕍→WHNF : ∀{Γ}{γ : ClosEnv Γ}{M : Γ ⊢ ★}{v}
𝕍 v (clos M γ) → WHNF M
𝕍→WHNF {M = ` x} {v} ()
𝕍→WHNF {M = ƛ N} {v} vc = ƛ_
𝕍→WHNF {M = L · M} {v} ()
\end{code}
```
Next we have an introduction rule for `𝕍` that mimics the `⊔-intro`
rule. If both `u` and `v` are related to a closure `c`, then their join is
too.
\begin{code}
```
𝕍⊔-intro : ∀{c u v}
𝕍 u c → 𝕍 v c
---------------
@ -300,7 +300,7 @@ too.
𝕍⊔-intro {clos (` x) γ} () vc
𝕍⊔-intro {clos (ƛ N) γ} uc vc = ⟨ uc , vc ⟩
𝕍⊔-intro {clos (L · M) γ} () vc
\end{code}
```
In a moment we prove that `𝕍` is preserved when going from a greater
value to a lesser value: if `𝕍 v c` and `v' ⊑ v`, then `𝕍 v' c`.
@ -311,7 +311,7 @@ To prove `𝕍-sub`, we in turn need the following property concerning
values that are not greater than a function, that is, values that are
equivalent to `⊥`. In such cases, `𝕍 v (clos (ƛ N) γ')` is trivially true.
\begin{code}
```
not-AboveFun-𝕍 : ∀{v : Value}{Γ}{γ' : ClosEnv Γ}{N : Γ , ★ ⊢ ★ }
→ ¬ AboveFun v
-------------------
@ -321,20 +321,20 @@ not-AboveFun-𝕍 {v ↦ v'} af = ⊥-elim (contradiction ⟨ v , ⟨ v' , Refl
not-AboveFun-𝕍 {v₁ ⊔ v₂} af
with not-AboveFun-⊔-inv af
... | ⟨ af1 , af2 ⟩ = ⟨ not-AboveFun-𝕍 af1 , not-AboveFun-𝕍 af2 ⟩
\end{code}
```
The proofs of `𝕍-sub` and `𝔼-sub` are intertwined.
\begin{code}
```
sub-𝕍 : ∀{c : Clos}{v v'} → 𝕍 v c → v' ⊑ v → 𝕍 v' c
sub-𝔼 : ∀{c : Clos}{v v'} → 𝔼 v c → v' ⊑ v → 𝔼 v' c
\end{code}
```
We prove `𝕍-sub` by case analysis on the closure's term, to dispatch the
cases for variables and application. We then proceed by induction on
`v' ⊑ v`. We describe each case below.
\begin{code}
```
sub-𝕍 {clos (` x) γ} {v} () lt
sub-𝕍 {clos (L · M) γ} () lt
sub-𝕍 {clos (ƛ N) γ} vc Bot⊑ = tt
@ -374,7 +374,7 @@ sub-𝕍 {c} {v ↦ w ⊔ v ↦ w'} ⟨ vcw , vcw' ⟩ Dist⊑ ev1c ⟨ v' , ⟨
with AboveFun-⊔ ⟨ v' , ⟨ w'' , lt ⟩ ⟩
... | inj₁ af2 = ⊥-elim (contradiction af2 naf2)
... | inj₂ af3 = ⊥-elim (contradiction af3 naf3)
\end{code}
```
* Case `Bot⊑`. We immediately have `𝕍 ⊥ (clos (ƛ N) γ)`.
@ -445,12 +445,12 @@ sub-𝕍 {c} {v ↦ w ⊔ v ↦ w'} ⟨ vcw , vcw' ⟩ Dist⊑ ev1c ⟨ v' , ⟨
The proof of `sub-𝔼` is direct and explained below.
\begin{code}
```
sub-𝔼 {clos M γ} {v} {v'} 𝔼v v'⊑v fv'
with 𝔼v (AboveFun-⊑ fv' v'⊑v)
... | ⟨ c , ⟨ M⇓c , 𝕍v ⟩ ⟩ =
⟨ c , ⟨ M⇓c , sub-𝕍 𝕍v v'⊑v ⟩ ⟩
\end{code}
```
From `AboveFun v'` and `v' ⊑ v` we have `AboveFun v`. Then with `𝔼 v c` we
obtain a closure `c` such that `γ ⊢ M ⇓ c` and `𝕍 v c`. We conclude with an
@ -466,15 +466,15 @@ induction on the derivation of `γ ⊢ M ↓ v` we discuss each case below.
The following lemma, kth-x, is used in the case for the `var` rule.
\begin{code}
```
kth-x : ∀{Γ}{γ' : ClosEnv Γ}{x : Γ ∋ ★}
→ Σ[ Δ ∈ Context ] Σ[ δ ∈ ClosEnv Δ ] Σ[ M ∈ Δ ⊢ ★ ]
γ' x ≡ clos M δ
kth-x{γ' = γ'}{x = x} with γ' x
... | clos{Γ = Δ} M δ = ⟨ Δ , ⟨ δ , ⟨ M , refl ⟩ ⟩ ⟩
\end{code}
```
\begin{code}
```
↓→𝔼 : ∀{Γ}{γ : Env Γ}{γ' : ClosEnv Γ}{M : Γ ⊢ ★ }{v}
𝔾 γ γ' → γ ⊢ M ↓ v → 𝔼 v (clos M γ')
↓→𝔼 {Γ} {γ} {γ'} 𝔾γγ' (var{x = x}) fγx
@ -526,7 +526,7 @@ kth-x{γ' = γ'}{x = x} with γ' x
with ↓→𝔼 {Γ} {γ} {γ'} {M} 𝔾γγ' d (AboveFun-⊑ fv' v'⊑v)
... | ⟨ c , ⟨ M⇓c , 𝕍v ⟩ ⟩ =
⟨ c , ⟨ M⇓c , sub-𝕍 𝕍v v'⊑v ⟩ ⟩
\end{code}
```
* Case `var`. Looking up `x` in `γ'` yields some closure, `clos M' δ`,
and from `𝔾 γ γ'` we have `𝔼 (γ x) (clos M' δ)`. With the premise
@ -589,7 +589,7 @@ We have `∅ ⊢ ƛ N ↓ ⊥ ↦ ⊥`, so `ℰ M ≃ ℰ (ƛ N)`
gives us `∅ ⊢ M ↓ ⊥ ↦ ⊥`. Then the main lemma gives us
`∅ ⊢ M ⇓ clos (ƛ N) γ` for some `N` and `γ`.
\begin{code}
```
adequacy : ∀{M : ∅ ⊢ ★}{N : ∅ , ★ ⊢ ★} → ℰ M ≃ ℰ (ƛ N)
→ Σ[ Γ ∈ Context ] Σ[ N ∈ (Γ , ★ ⊢ ★) ] Σ[ γ ∈ ClosEnv Γ ]
∅' ⊢ M ⇓ clos (ƛ N) γ
@ -600,7 +600,7 @@ adequacy{M}{N} eq
with 𝕍→WHNF Vc
... | ƛ_ {N = N} =
⟨ Γ , ⟨ N , ⟨ γ , M⇓c ⟩ ⟩ ⟩
\end{code}
```
## Call-by-name is equivalent to beta reduction
@ -612,13 +612,13 @@ result, then the program beta reduces to a lambda abstraction. We now
prove the backward direction of the if-and-only-if, leveraging our
results about the denotational semantics.
\begin{code}
```
reduce→cbn : ∀ {M : ∅ ⊢ ★} {N : ∅ , ★ ⊢ ★}
→ M —↠ ƛ N
→ Σ[ Δ ∈ Context ] Σ[ N ∈ Δ , ★ ⊢ ★ ] Σ[ δ ∈ ClosEnv Δ ]
∅' ⊢ M ⇓ clos (ƛ N) δ
reduce→cbn M—↠ƛN = adequacy (soundness M—↠ƛN)
\end{code}
```
Suppose `M —↠ ƛ N`. Soundness of the denotational semantics gives us
`ℰ M ≃ ℰ (ƛ N)`. Then by adequacy we conclude that
@ -628,7 +628,7 @@ Putting the two directions of the if-and-only-if together, we
establish that call-by-name evaluation is equivalent to beta reduction
in the following sense.
\begin{code}
```
cbn↔reduce : ∀ {M : ∅ ⊢ ★}
→ (Σ[ N ∈ ∅ , ★ ⊢ ★ ] (M —↠ ƛ N))
iff
@ -636,7 +636,7 @@ cbn↔reduce : ∀ {M : ∅ ⊢ ★}
∅' ⊢ M ⇓ clos (ƛ N) δ)
cbn↔reduce {M} = ⟨ (λ x → reduce→cbn (proj₂ x)) ,
(λ x → cbn→reduce (proj₂ (proj₂ (proj₂ x)))) ⟩
\end{code}
```
## Unicode
@ -646,4 +646,3 @@ This chapter uses the following unicode:
𝔼 U+1D53C MATHEMATICAL DOUBLE-STRUCK CAPITAL E (\bE)
𝔾 U+1D53E MATHEMATICAL DOUBLE-STRUCK CAPITAL G (\bG)
𝕍 U+1D54D MATHEMATICAL DOUBLE-STRUCK CAPITAL V (\bV)

View file

@ -6,9 +6,9 @@ permalink : /Bisimulation/
next : /Inference/
---
\begin{code}
```
module plfa.Bisimulation where
\end{code}
```
Some constructs can be defined in terms of other constructs. In the
previous chapter, we saw how _let_ terms can be rewritten as an
@ -127,16 +127,16 @@ are in bisimulation.
We import our source language from
Chapter [More][plfa.More]:
\begin{code}
```
open import plfa.More
\end{code}
```
## Simulation
The simulation is a straightforward formalisation of the rules
in the introduction:
\begin{code}
```
infix 4 _~_
infix 5 ~ƛ_
infix 7 _~·_
@ -163,7 +163,7 @@ data _~_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
→ N ~ N†
----------------------
→ `let M N ~ (ƛ N†) · M†
\end{code}
```
The language in Chapter [More][plfa.More] has more constructs, which we could easily add.
However, leaving the simulation small lets us focus on the essence.
It's a handy technical trick that we can have a large source language,
@ -174,9 +174,9 @@ but only bother to include in the simulation the terms of interest.
Formalise the translation from source to target given in the introduction.
Show that `M † ≡ N` implies `M ~ N`, and conversely.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Simulation commutes with values
@ -184,7 +184,7 @@ Show that `M † ≡ N` implies `M ~ N`, and conversely.
We need a number of technical results. The first is that simulation
commutes with values. That is, if `M ~ M†` and `M` is a value then
`M†` is also a value:
\begin{code}
```
~val : ∀ {Γ A} {M M† : Γ ⊢ A}
→ M ~ M†
→ Value M
@ -194,7 +194,7 @@ commutes with values. That is, if `M ~ M†` and `M` is a value then
~val (~ƛ ~N) V-ƛ = V-ƛ
~val (~L ~· ~M) ()
~val (~let ~M ~N) ()
\end{code}
```
It is a straightforward case analysis, where here the only value
of interest is a lambda abstraction.
@ -203,9 +203,9 @@ of interest is a lambda abstraction.
Show that this also holds in the reverse direction: if `M ~ M†`
and `Value M†` then `Value M`.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Simulation commutes with renaming
@ -214,7 +214,7 @@ The next technical result is that simulation commutes with renaming.
That is, if `ρ` maps any judgment `Γ ∋ A` to a judgment `Δ ∋ A`,
and if `M ~ M†` then `rename ρ M ~ rename ρ M†`:
\begin{code}
```
~rename : ∀ {Γ Δ}
→ (ρ : ∀ {A} → Γ ∋ A → Δ ∋ A)
----------------------------------------------------------
@ -223,7 +223,7 @@ and if `M ~ M†` then `rename ρ M ~ rename ρ M†`:
~rename ρ (~ƛ ~N) = ~ƛ (~rename (ext ρ) ~N)
~rename ρ (~L ~· ~M) = (~rename ρ ~L) ~· (~rename ρ ~M)
~rename ρ (~let ~M ~N) = ~let (~rename ρ ~M) (~rename (ext ρ) ~N)
\end{code}
```
The structure of the proof is similar to the structure of renaming itself:
reconstruct each term with recursive invocation, extending the environment
where appropriate (in this case, only for the body of an abstraction).
@ -239,7 +239,7 @@ The proof first requires we establish an analogue of extension.
If `σ` and `σ†` both map any judgment `Γ ∋ A` to a judgment `Δ ⊢ A`,
such that for every `x` in `Γ ∋ A` we have `σ x ~ σ† x`,
then for any `x` in `Γ , B ∋ A` we have `exts σ x ~ exts σ† x`:
\begin{code}
```
~exts : ∀ {Γ Δ}
→ {σ : ∀ {A} → Γ ∋ A → Δ ⊢ A}
→ {σ† : ∀ {A} → Γ ∋ A → Δ ⊢ A}
@ -248,7 +248,7 @@ then for any `x` in `Γ , B ∋ A` we have `exts σ x ~ exts σ† x`:
→ (∀ {A B} → (x : Γ , B ∋ A) → exts σ x ~ exts σ† x)
~exts ~σ Z = ~`
~exts ~σ (S x) = ~rename S_ (~σ x)
\end{code}
```
The structure of the proof is similar to the structure of extension itself.
The newly introduced variable trivially relates to itself, and otherwise
we apply renaming to the hypothesis.
@ -257,7 +257,7 @@ With extension under our belts, it is straightforward to show
substitution commutes. If `σ` and `σ†` both map any judgment `Γ ∋ A`
to a judgment `Δ ⊢ A`, such that for every `x` in `Γ ∋ A` we have `σ
x ~ σ† x`, and if `M ~ M†`, then `subst σ M ~ subst σ† M†`:
\begin{code}
```
~subst : ∀ {Γ Δ}
→ {σ : ∀ {A} → Γ ∋ A → Δ ⊢ A}
→ {σ† : ∀ {A} → Γ ∋ A → Δ ⊢ A}
@ -268,7 +268,7 @@ x ~ σ† x`, and if `M ~ M†`, then `subst σ M ~ subst σ† M†`:
~subst ~σ (~ƛ ~N) = ~ƛ (~subst (~exts ~σ) ~N)
~subst ~σ (~L ~· ~M) = (~subst ~σ ~L) ~· (~subst ~σ ~M)
~subst ~σ (~let ~M ~N) = ~let (~subst ~σ ~M) (~subst (~exts ~σ) ~N)
\end{code}
```
Again, the structure of the proof is similar to the structure of
substitution itself: reconstruct each term with recursive invocation,
extending the environment where appropriate (in this case, only for
@ -277,7 +277,7 @@ the body of an abstraction).
From the general case of substitution, it is also easy to derive
the required special case. If `N ~ N†` and `M ~ M†`, then
`N [ M ] ~ N† [ M† ]`:
\begin{code}
```
~sub : ∀ {Γ A B} {N N† : Γ , B ⊢ A} {M M† : Γ ⊢ B}
→ N ~ N†
→ M ~ M†
@ -288,7 +288,7 @@ the required special case. If `N ~ N†` and `M ~ M†`, then
~σ : ∀ {A} → (x : Γ , B ∋ A) → _ ~ _
~σ Z = ~M
~σ (S x) = ~`
\end{code}
```
Once more, the structure of the proof resembles the original.
@ -315,7 +315,7 @@ Or, in a diagram:
We first formulate a concept corresponding to the lower leg
of the diagram, that is, its right and bottom edges:
\begin{code}
```
data Leg {Γ A} (M† N : Γ ⊢ A) : Set where
leg : ∀ {N† : Γ ⊢ A}
@ -323,14 +323,14 @@ data Leg {Γ A} (M† N : Γ ⊢ A) : Set where
→ M† —→ N†
--------
→ Leg M† N
\end{code}
```
For our formalisation, in this case, we can use a stronger
relation than `—↠`, replacing it by `—→`.
We can now state and prove that the relation is a simulation.
Again, in this case, we can use a stronger relation than
`—↠`, replacing it by `—→`:
\begin{code}
```
sim : ∀ {Γ A} {M M† N : Γ ⊢ A}
→ M ~ M†
→ M —→ N
@ -349,7 +349,7 @@ sim (~let ~M ~N) (ξ-let M—→)
with sim ~M M—→
... | leg ~M M†—→ = leg (~let ~M ~N) (ξ-·₂ V-ƛ M†—→)
sim (~let ~V ~N) (β-let VV) = leg (~sub ~N ~V) (β-ƛ (~val ~V VV))
\end{code}
```
The proof is by case analysis, examining each possible instance of `M ~ M†`
and each possible instance of `M —→ M†`, using recursive invocation whenever
the reduction is by a `ξ` rule, and hence contains another reduction.
@ -461,9 +461,9 @@ In its structure, it looks a little bit like a proof of progress:
Show that we also have a simulation in the other direction, and hence that we have
a bisimulation.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `products`
@ -473,9 +473,9 @@ are in bisimulation. The only constructs you need to include are
variables, and those connected to functions and products.
In this case, the simulation is _not_ lock-step.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Unicode

View file

@ -6,9 +6,9 @@ permalink : /CallByName/
next : /Denotational/
---
\begin{code}
```
module plfa.CallByName where
\end{code}
```
## Introduction
@ -33,7 +33,7 @@ single sub-computation has been completed.
## Imports
\begin{code}
```
open import plfa.Untyped
using (Context; _⊢_; _∋_; ★; ∅; _,_; Z; S_; `_; ƛ_; _·_; subst; subst-zero;
exts; rename)
@ -48,7 +48,7 @@ open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
open import Data.Product using (_×_; Σ; Σ-syntax; ∃; ∃-syntax; proj₁; proj₂)
renaming (_,_ to ⟨_,_⟩)
open import Function using (_∘_)
\end{code}
```
## Environments
@ -64,26 +64,26 @@ is made easier by aligning these choices.
We define environments and closures as follows.
\begin{code}
```
ClosEnv : Context → Set
data Clos : Set where
clos : ∀{Γ} → (M : Γ ⊢ ★) → ClosEnv Γ → Clos
ClosEnv Γ = ∀ (x : Γ ∋ ★) → Clos
\end{code}
```
As usual, we have the empty environment, and we can extend an
environment.
\begin{code}
```
∅' : ClosEnv ∅
∅' ()
_,'_ : ∀ {Γ} → ClosEnv Γ → Clos → ClosEnv (Γ , ★)
(γ ,' c) Z = c
(γ ,' c) (S x) = γ x
\end{code}
```
## Big-step evaluation
@ -92,7 +92,7 @@ written `γ ⊢ M ⇓ V`, where `γ` is the environment, `M` is the input
term, and `V` is the result value. A _value_ is a closure whose term
is a lambda abstraction.
\begin{code}
```
data _⊢_⇓_ : ∀{Γ} → ClosEnv Γ → (Γ ⊢ ★) → Clos → Set where
⇓-var : ∀{Γ}{γ : ClosEnv Γ}{x : Γ ∋ ★}{Δ}{δ : ClosEnv Δ}{M : Δ ⊢ ★}{V}
@ -108,7 +108,7 @@ data _⊢_⇓_ : ∀{Γ} → ClosEnv Γ → (Γ ⊢ ★) → Clos → Set where
γ ⊢ L ⇓ clos (ƛ N) δ → (δ ,' clos M γ) ⊢ N ⇓ V
---------------------------------------------------
γ ⊢ L · M ⇓ V
\end{code}
```
* The `⇓-var` rule evaluates a variable by finding the associated
closure in the environment and then evaluating the closure.
@ -131,7 +131,7 @@ If the big-step relation evaluates a term `M` to both `V` and
call-by-name relation is a partial function. The proof is a
straightforward induction on the two big-step derivations.
\begin{code}
```
⇓-determ : ∀{Γ}{γ : ClosEnv Γ}{M : Γ ⊢ ★}{V V' : Clos}
γ ⊢ M ⇓ V → γ ⊢ M ⇓ V'
→ V ≡ V'
@ -142,7 +142,7 @@ straightforward induction on the two big-step derivations.
⇓-determ (⇓-app mc mc₁) (⇓-app mc' mc'')
with ⇓-determ mc mc'
... | refl = ⇓-determ mc₁ mc''
\end{code}
```
## Big-step evaluation implies beta reduction to a lambda
@ -174,14 +174,14 @@ equivalent.
We make the two notions of equivalence precise by defining the
following two mutually-recursive predicates `V ≈ M` and `γ ≈ₑ σ`.
\begin{code}
```
_≈_ : Clos → (∅ ⊢ ★) → Set
_≈ₑ_ : ∀{Γ} → ClosEnv Γ → Subst Γ ∅ → Set
(clos {Γ} M γ) ≈ N = Σ[ σ ∈ Subst Γ ∅ ] γ ≈ₑ σ × (N ≡ ⟪ σ ⟫ M)
γ ≈ₑ σ = ∀{x} → (γ x) ≈ (σ x)
\end{code}
```
We can now state the main lemma:
@ -193,17 +193,17 @@ about equivalent environments and substitutions.
The empty environment is equivalent to the identity substitution.
\begin{code}
```
≈ₑ-id : ∅' ≈ₑ ids
≈ₑ-id {()}
\end{code}
```
We define an auxiliary function for extending a substitution.
\begin{code}
```
ext-subst : ∀{Γ Δ} → Subst Γ Δ → Δ ⊢ ★ → Subst (Γ , ★) Δ
ext-subst{Γ}{Δ} σ N {A} = ⟪ subst-zero N ⟫ ∘ exts σ
\end{code}
```
The next lemma states that if you start with an equivalent
environment and substitution `γ ≈ₑ σ`, extending them with
@ -211,7 +211,7 @@ an equivalent closure and term `c ≈ N` produces
an equivalent environment and substitution:
`(γ ,' V) ≈ₑ (ext-subst σ N)`.
\begin{code}
```
≈ₑ-ext : ∀ {Γ} {γ : ClosEnv Γ} {σ : Subst Γ ∅} {V} {N : ∅ ⊢ ★}
γ ≈ₑ σ → V ≈ N
--------------------------
@ -226,7 +226,7 @@ an equivalent environment and substitution:
goal
with ext-cons {x}
... | a rewrite sym (subst-zero-exts-cons{Γ}{∅}{σ}{★}{N}{★}) = a
\end{code}
```
To prove `≈ₑ-ext`, we make use of the fact that `ext-subst σ N` is
equivalent to `N • σ` (by `subst-zero-exts-cons`). So we just
@ -245,7 +245,7 @@ closure `V` in environment `γ`, and if `γ ≈ₑ σ`, then `⟪ σ ⟫ M` redu
to some term `N` that is equivalent to `V`. We describe the proof
below.
\begin{code}
```
⇓→—↠×𝔹 : ∀{Γ}{γ : ClosEnv Γ}{σ : Subst Γ ∅}{M : Γ ⊢ ★}{V : Clos}
γ ⊢ M ⇓ V → γ ≈ₑ σ
---------------------------------------
@ -269,7 +269,7 @@ below.
let rs = (ƛ ⟪ exts τ ⟫ N) · ⟪ σ ⟫ M —→⟨ ƛτN·σM—→ ⟩ —↠N' in
let g = —↠-trans (appL-cong σL—↠ƛτN) rs in
⟨ N' , ⟨ g , V≈N' ⟩ ⟩
\end{code}
```
The proof is by induction on `γ ⊢ M ⇓ V`. We have three cases
to consider.
@ -326,7 +326,7 @@ to consider.
With the main lemma complete, we establish the forward direction
of the equivalence between the big-step semantics and beta reduction.
\begin{code}
```
cbn→reduce : ∀{M : ∅ ⊢ ★}{Δ}{δ : ClosEnv Δ}{N : Δ , ★ ⊢ ★}
→ ∅' ⊢ M ⇓ clos (ƛ N) δ
-----------------------------
@ -336,7 +336,7 @@ cbn→reduce {M}{Δ}{δ}{N} M⇓c
... | ⟨ N , ⟨ rs , ⟨ σ , ⟨ h , eq2 ⟩ ⟩ ⟩ ⟩
rewrite sub-id{M = M} | eq2 =
⟨ ⟪ exts σ ⟫ N , rs ⟩
\end{code}
```
## Beta reduction to a lambda implies big-step evaluation
@ -401,4 +401,3 @@ This chapter uses the following unicode:
ₑ U+2091 LATIN SUBSCRIPT SMALL LETTER E (\_e)
⊢ U+22A2 RIGHT TACK (\|- or \vdash)
⇓ U+21D3 DOWNWARDS DOUBLE ARROW (\d= or \Downarrow)

View file

@ -6,9 +6,9 @@ permalink : /Compositional/
next : /Soundness/
---
\begin{code}
```
module plfa.Compositional where
\end{code}
```
## Introduction
@ -26,7 +26,7 @@ with such a definition and prove that it is equivalent to ℰ.
## Imports
\begin{code}
```
open import plfa.Untyped
using (Context; _,_; ★; _∋_; _⊢_; `_; ƛ_; _·_)
open import plfa.Denotational
@ -40,7 +40,7 @@ open import Data.Product using (_×_; Σ; Σ-syntax; ∃; ∃-syntax; proj₁; p
renaming (_,_ to ⟨_,_⟩)
open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Data.Unit using (⊤; tt)
\end{code}
```
## Equation for lambda abstraction
@ -57,12 +57,12 @@ subterm `M`, an environment `γ`, and a value `v`. If we define `ℱ` by
recursion on the value `v`, then it matches up nicely with the three
rules `↦-intro`, `⊥-intro`, and `⊔-intro`.
\begin{code}
```
ℱ : ∀{Γ} → Denotation (Γ , ★) → Denotation Γ
ℱ D γ (v ↦ w) = D (γ `, v) w
ℱ D γ ⊥ = ⊤
ℱ D γ (u ⊔ v) = (ℱ D γ u) × (ℱ D γ v)
\end{code}
```
[JGS: add comment about how ℱ can be viewed as curry.]
@ -76,7 +76,7 @@ smaller value `u`. The proof is a straightforward induction on the
derivation of `u ⊑ v`, using the `up-env` lemma in the case for the
`Fun⊑` rule.
\begin{code}
```
sub-ℱ : ∀{Γ}{N : Γ , ★ ⊢ ★}{γ v u}
→ ℱ (ℰ N) γ v
→ u ⊑ v
@ -90,7 +90,7 @@ sub-ℱ d (ConjR2⊑ lt) = sub-ℱ (proj₂ d) lt
sub-ℱ {v = v₁ ↦ v₂ ⊔ v₁ ↦ v₃} {v₁ ↦ (v₂ ⊔ v₃)} ⟨ N2 , N3 ⟩ Dist⊑ =
⊔-intro N2 N3
sub-ℱ d (Trans⊑ x₁ x₂) = sub-ℱ (sub-ℱ d x₂) x₁
\end{code}
```
[PLW:
If denotations were strengthened to be downward closed,
@ -102,7 +102,7 @@ direction of the semantic equation for lambda. The proof is by
induction on the semantics, using `sub-ℱ` in the case for the `sub`
rule.
\begin{code}
```
ℰƛ→ℱℰ : ∀{Γ}{γ : Env Γ}{N : Γ , ★ ⊢ ★}{v : Value}
→ ℰ (ƛ N) γ v
------------
@ -111,25 +111,25 @@ rule.
ℰƛ→ℱℰ ⊥-intro = tt
ℰƛ→ℱℰ (⊔-intro d₁ d₂) = ⟨ ℰƛ→ℱℰ d₁ , ℰƛ→ℱℰ d₂ ⟩
ℰƛ→ℱℰ (sub d lt) = sub-ℱ (ℰƛ→ℱℰ d) lt
\end{code}
```
The "inversion lemma" for lambda abstraction is a special case of the
above. The inversion lemma is useful in proving that denotations are
preserved by reduction.
\begin{code}
```
lambda-inversion : ∀{Γ}{γ : Env Γ}{N : Γ , ★ ⊢ ★}{v₁ v₂ : Value}
γ ⊢ ƛ N ↓ v₁ ↦ v₂
-----------------
→ (γ `, v₁) ⊢ N ↓ v₂
lambda-inversion{v₁ = v₁}{v₂ = v₂} d = ℰƛ→ℱℰ{v = v₁ ↦ v₂} d
\end{code}
```
The backward direction of the semantic equation for lambda is even
easier to prove than the forward direction. We proceed by induction on
the value v.
\begin{code}
```
ℱℰ→ℰƛ : ∀{Γ}{γ : Env Γ}{N : Γ , ★ ⊢ ★}{v : Value}
→ ℱ (ℰ N) γ v
------------
@ -137,16 +137,16 @@ the value v.
ℱℰ→ℰƛ {v = ⊥} d = ⊥-intro
ℱℰ→ℰƛ {v = v₁ ↦ v₂} d = ↦-intro d
ℱℰ→ℰƛ {v = v₁ ⊔ v₂} ⟨ d1 , d2 ⟩ = ⊔-intro (ℱℰ→ℰƛ d1) (ℱℰ→ℰƛ d2)
\end{code}
```
So indeed, the denotational semantics is compositional with respect
to lambda abstraction, as witnessed by the function `ℱ`.
\begin{code}
```
lam-equiv : ∀{Γ}{N : Γ , ★ ⊢ ★}
→ ℰ (ƛ N) ≃ ℱ (ℰ N)
lam-equiv γ v = ⟨ ℰƛ→ℱℰ , ℱℰ→ℰƛ ⟩
\end{code}
```
## Equation for function application
@ -172,12 +172,12 @@ any value `w` equivalent to `⊥`, for the `⊥-intro` rule, and to include any
value `w` that is the output of an entry `v ↦ w` in `D₁`, provided the
input `v` is in `D₂`, for the `↦-elim` rule.
\begin{code}
```
infixl 7 _●_
_●_ : ∀{Γ} → Denotation Γ → Denotation Γ → Denotation Γ
(D₁ ● D₂) γ w = w ⊑ ⊥ ⊎ Σ[ v ∈ Value ]( D₁ γ (v ↦ w) × D₂ γ v )
\end{code}
```
[JGS: add comment about how ● can be viewed as apply.]
@ -185,7 +185,7 @@ Next we consider the inversion lemma for application, which is also
the forward direction of the semantic equation for application. We
describe the proof below.
\begin{code}
```
ℰ·→●ℰ : ∀{Γ}{γ : Env Γ}{L M : Γ ⊢ ★}{v : Value}
→ ℰ (L · M) γ v
----------------
@ -213,7 +213,7 @@ describe the proof below.
... | inj₁ lt2 = inj₁ (Trans⊑ lt lt2)
... | inj₂ ⟨ v₁ , ⟨ L↓v12 , M↓v3 ⟩ ⟩ =
inj₂ ⟨ v₁ , ⟨ sub L↓v12 (Fun⊑ Refl⊑ lt) , M↓v3 ⟩ ⟩
\end{code}
```
We proceed by induction on the semantics.
@ -265,29 +265,29 @@ The forward direction is proved by cases on the premise `(ℰ L ● ℰ M) γ v`
In case `v ⊑ ⊥`, we obtain `Γ ⊢ L · M ↓ ⊥` by rule `⊥-intro`.
Otherwise, we conclude immediately by rule `↦-elim`.
\begin{code}
```
●ℰ→ℰ· : ∀{Γ}{γ : Env Γ}{L M : Γ ⊢ ★}{v}
→ (ℰ L ● ℰ M) γ v
----------------
→ ℰ (L · M) γ v
●ℰ→ℰ· {γ}{v} (inj₁ lt) = sub ⊥-intro lt
●ℰ→ℰ· {γ}{v} (inj₂ ⟨ v₁ , ⟨ d1 , d2 ⟩ ⟩) = ↦-elim d1 d2
\end{code}
```
So we have proved that the semantics is compositional with respect to
function application, as witnessed by the `●` function.
\begin{code}
```
app-equiv : ∀{Γ}{L M : Γ ⊢ ★}
→ ℰ (L · M) ≃ (ℰ L) ● (ℰ M)
app-equiv γ v = ⟨ ℰ·→●ℰ , ●ℰ→ℰ· ⟩
\end{code}
```
We also need an inversion lemma for variables.
If `Γ ⊢ x ↓ v`, then `v ⊑ γ x`. The proof is a straightforward
induction on the semantics.
\begin{code}
```
var-inv : ∀ {Γ v x} {γ : Env Γ}
→ ℰ (` x) γ v
-------------------
@ -296,16 +296,16 @@ var-inv (var) = Refl⊑
var-inv (⊔-intro d₁ d₂) = ConjL⊑ (var-inv d₁) (var-inv d₂)
var-inv (sub d lt) = Trans⊑ lt (var-inv d)
var-inv ⊥-intro = Bot⊑
\end{code}
```
To round-out the semantic equations, we establish the following one
for variables.
\begin{code}
```
var-equiv : ∀{Γ}{x : Γ ∋ ★}
→ ℰ (` x) ≃ (λ γ v → v ⊑ γ x)
var-equiv γ v = ⟨ var-inv , (λ lt → sub var lt) ⟩
\end{code}
```
@ -324,7 +324,7 @@ respect to lambda abstraction: that `ℰ N ≃ ℰ N′` implies `ℰ (ƛ N) ≃
ℰ (ƛ N′)`. We shall use the `lam-equiv` equation to reduce this question to
whether `ℱ` is a congruence.
\begin{code}
```
ℱ-cong : ∀{Γ}{D D′ : Denotation (Γ , ★)}
→ D ≃ D′
-----------
@ -337,7 +337,7 @@ whether `ℱ` is a congruence.
ℱ≃ {v = ⊥} fd dd = tt
ℱ≃ {γ}{v ↦ w} fd dd = proj₁ (dd (γ `, v) w) fd
ℱ≃ {γ}{u ⊔ w} fd dd = ⟨ ℱ≃{γ}{u} (proj₁ fd) dd , ℱ≃{γ}{w} (proj₂ fd) dd ⟩
\end{code}
```
The proof of `ℱ-cong` uses the lemma `ℱ≃` to handle both directions of
the if-and-only-if. That lemma is proved by a straightforward
@ -346,7 +346,7 @@ induction on the value `v`.
We now prove that lambda abstraction is a congruence by direct
equational reasoning.
\begin{code}
```
lam-cong : ∀{Γ}{N N′ : Γ , ★ ⊢ ★}
→ ℰ N ≃ ℰ N′
-----------------
@ -361,7 +361,7 @@ lam-cong {Γ}{N}{N′} N≃N′ =
≃⟨ ≃-sym lam-equiv ⟩
ℰ (ƛ N′)
\end{code}
```
Next we prove that denotational equality is a congruence for
application: that `ℰ L ≃ ℰ L′` and `ℰ M ≃ ℰ M′` imply
@ -369,7 +369,7 @@ application: that `ℰ L ≃ ℰ L′` and `ℰ M ≃ ℰ M′` imply
reduces this to the question of whether the `●` operator
is a congruence.
\begin{code}
```
●-cong : ∀{Γ}{D₁ D₁′ D₂ D₂′ : Denotation Γ}
→ D₁ ≃ D₁′ → D₂ ≃ D₂′
→ (D₁ ● D₂) ≃ (D₁′ ● D₂′)
@ -382,7 +382,7 @@ is a congruence.
●≃ (inj₁ v⊑⊥) eq₁ eq₂ = inj₁ v⊑⊥
●≃ {γ} {w} (inj₂ ⟨ v , ⟨ Dv↦w , Dv ⟩ ⟩) eq₁ eq₂ =
inj₂ ⟨ v , ⟨ proj₁ (eq₁ γ (v ↦ w)) Dv↦w , proj₁ (eq₂ γ v) Dv ⟩ ⟩
\end{code}
```
Again, both directions of the if-and-only-if are proved via a lemma.
This time the lemma is proved by cases on `(D₁ ● D₂) γ v`.
@ -390,7 +390,7 @@ This time the lemma is proved by cases on `(D₁ ● D₂) γ v`.
With the congruence of `●`, we can prove that application is a
congruence by direct equational reasoning.
\begin{code}
```
app-cong : ∀{Γ}{L L′ M M′ : Γ ⊢ ★}
→ ℰ L ≃ ℰ L′
→ ℰ M ≃ ℰ M′
@ -406,7 +406,7 @@ app-cong {Γ}{L}{L′}{M}{M′} L≅L′ M≅M′ =
≃⟨ ≃-sym app-equiv ⟩
ℰ (L′ · M′)
\end{code}
```
## Compositionality
@ -421,13 +421,13 @@ definition `Ctx` makes this idea explicit. We index the `Ctx` data
type with two contexts for variables: one for the hole and one for
terms that result from filling the hole.
\begin{code}
```
data Ctx : Context → Context → Set where
CHole : ∀{Γ} → Ctx Γ Γ
CLam : ∀{Γ Δ} → Ctx (Γ , ★) (Δ , ★) → Ctx (Γ , ★) Δ
CAppL : ∀{Γ Δ} → Ctx Γ Δ → Δ ⊢ ★ → Ctx Γ Δ
CAppR : ∀{Γ Δ} → Δ ⊢ ★ → Ctx Γ Δ → Ctx Γ Δ
\end{code}
```
* The constructor `CHole` represents the hole, and in this case the
variable context for the hole is the same as the variable context
@ -447,20 +447,20 @@ data Ctx : Context → Context → Set where
The action of surrounding a term with a context is defined by the
following `plug` function. It is defined by recursion on the context.
\begin{code}
```
plug : ∀{Γ}{Δ} → Ctx Γ Δ → Γ ⊢ ★ → Δ ⊢ ★
plug CHole M = M
plug (CLam C) N = ƛ plug C N
plug (CAppL C N) L = (plug C L) · N
plug (CAppR L C) M = L · (plug C M)
\end{code}
```
We are ready to state and prove the compositionality principle. Given
two terms `M` and `N` that are denotationally equal, plugging them both
into an arbitrary context `C` produces two programs that are
denotationally equal.
\begin{code}
```
compositionality : ∀{Γ Δ}{C : Ctx Γ Δ} {M N : Γ ⊢ ★}
→ ℰ M ≃ ℰ N
---------------------------
@ -473,7 +473,7 @@ compositionality {C = CAppL C L} M≃N =
app-cong (compositionality {C = C} M≃N) λ γ v → ⟨ (λ x → x) , (λ x → x) ⟩
compositionality {C = CAppR L C} M≃N =
app-cong (λ γ v → ⟨ (λ x → x) , (λ x → x) ⟩) (compositionality {C = C} M≃N)
\end{code}
```
The proof is a straightforward induction on the context `C`, using the
congruence properties `lam-cong` and `app-cong` that we established
@ -488,19 +488,19 @@ following function `⟦ M ⟧` that maps terms to denotations, using the
auxiliary curry `ℱ` and apply `●` functions in the cases for lambda
and application, respectively.
\begin{code}
```
⟦_⟧ : ∀{Γ} → (M : Γ ⊢ ★) → Denotation Γ
⟦ ` x ⟧ γ v = v ⊑ γ x
⟦ ƛ N ⟧ = ℱ ⟦ N ⟧
⟦ L · M ⟧ = ⟦ L ⟧ ● ⟦ M ⟧
\end{code}
```
The proof that `ℰ M` is denotationally equal to `⟦ M ⟧` is a
straightforward induction, using the three equations
`var-equiv`, `lam-equiv`, and `app-equiv` together
with the congruence lemmas for `ℱ` and `●`.
\begin{code}
```
ℰ≃⟦⟧ : ∀ {Γ} {M : Γ ⊢ ★}
→ ℰ M ≃ ⟦ M ⟧
ℰ≃⟦⟧ {Γ} {` x} = var-equiv
@ -523,7 +523,7 @@ with the congruence lemmas for `ℱ` and `●`.
≃⟨⟩
⟦ L · M ⟧
\end{code}
```
## Unicode
@ -532,4 +532,3 @@ This chapter uses the following unicode:
ℱ U+2131 SCRIPT CAPITAL F (\McF)
● U+25CF BLACK CIRCLE (\cib)

View file

@ -6,9 +6,9 @@ permalink : /Confluence/
next : /CallByName/
---
\begin{code}
```
module plfa.Confluence where
\end{code}
```
## Introduction
@ -60,7 +60,7 @@ confluence for parallel reduction.
## Imports
\begin{code}
```
open import plfa.Substitution
using (subst-commute; rename-subst-commute; Rename; Subst)
open import plfa.LambdaReduction
@ -74,13 +74,13 @@ open Eq using (_≡_; refl)
open import Function using (_∘_)
open import Data.Product using (_×_; Σ; Σ-syntax; ∃; ∃-syntax; proj₁; proj₂)
renaming (_,_ to ⟨_,_⟩)
\end{code}
```
## Parallel Reduction
The parallel reduction relation is defined as follows.
\begin{code}
```
infix 2 _⇛_
data _⇛_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -105,7 +105,7 @@ data _⇛_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
→ M ⇛ M′
-----------------------
→ (ƛ N) · M ⇛ N′ [ M′ ]
\end{code}
```
The first three rules are congruences that reduce each of their
parts simultaneously. The last rule reduces a lambda term and
term in parallel followed by a beta step.
@ -116,15 +116,15 @@ akin to the `ζ` rule and `pbeta` is akin to `β`.
Parallel reduction is reflexive.
\begin{code}
```
par-refl : ∀{Γ A}{M : Γ ⊢ A} → M ⇛ M
par-refl {Γ} {A} {` x} = pvar
par-refl {Γ} {★} {ƛ N} = pabs par-refl
par-refl {Γ} {★} {L · M} = papp par-refl par-refl
\end{code}
```
We define the sequences of parallel reduction as follows.
\begin{code}
```
infix 2 _⇛*_
infixr 2 _⇛⟨_⟩_
infix 3 _∎
@ -140,7 +140,7 @@ data _⇛*_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
→ M ⇛* N
---------
→ L ⇛* N
\end{code}
```
## Equivalence between parallel reduction and reduction
@ -149,7 +149,7 @@ The only-if direction is particularly easy. We start by showing
that if `M —→ N`, then `M ⇛ N`. The proof is by induction on
the reduction `M —→ N`.
\begin{code}
```
beta-par : ∀{Γ A}{M N : Γ ⊢ A}
→ M —→ N
------
@ -158,13 +158,13 @@ beta-par {Γ} {★} {L · M} (ξ₁ r) = papp (beta-par {M = L} r) par-refl
beta-par {Γ} {★} {L · M} (ξ₂ r) = papp par-refl (beta-par {M = M} r)
beta-par {Γ} {★} {(ƛ N) · M} β = pbeta par-refl par-refl
beta-par {Γ} {★} {ƛ N} (ζ r) = pabs (beta-par r)
\end{code}
```
With this lemma in hand we complete the only-if direction,
that `M —↠ N` implies `M ⇛* N`. The proof is a straightforward
induction on the reduction sequence `M —↠ N`.
\begin{code}
```
betas-pars : ∀{Γ A} {M N : Γ ⊢ A}
→ M —↠ N
------
@ -172,14 +172,14 @@ betas-pars : ∀{Γ A} {M N : Γ ⊢ A}
betas-pars {Γ} {A} {M₁} {.M₁} (M₁ []) = M₁ ∎
betas-pars {Γ} {A} {.L} {N} (L —→⟨ b ⟩ bs) =
L ⇛⟨ beta-par b ⟩ betas-pars bs
\end{code}
```
Now for the other direction, that `M ⇛* N` implies `M —↠ N`. The
proof of this direction is a bit different because it's not the case
that `M ⇛ N` implies `M —→ N`. After all, `M ⇛ N` performs many
reductions. So instead we shall prove that `M ⇛ N` implies `M —↠ N`.
\begin{code}
```
par-betas : ∀{Γ A}{M N : Γ ⊢ A}
→ M ⇛ N
------
@ -197,7 +197,7 @@ par-betas {Γ} {★} {(ƛ N) · M} (pbeta{N = N}{M = M} p₁ p₂) =
b = appR-cong{L = ƛ N} ih₂ in
let c = (ƛ N) · M —→⟨ β ⟩ N [ M ] [] in
—↠-trans (—↠-trans a b) c
\end{code}
```
The proof is by induction on `M ⇛ N`.
@ -222,14 +222,14 @@ The proof is by induction on `M ⇛ N`.
With this lemma in hand, we complete the proof that `M ⇛* N` implies
`M —↠ N` with a simple induction on `M ⇛* N`.
\begin{code}
```
pars-betas : ∀{Γ A} {M N : Γ ⊢ A}
→ M ⇛* N
------
→ M —↠ N
pars-betas (M₁ ∎) = M₁ []
pars-betas (L ⇛⟨ p ⟩ ps) = —↠-trans (par-betas p) (pars-betas ps)
\end{code}
```
## Substitution lemma for parallel reduction
@ -244,16 +244,16 @@ the substitution `σ` pointwise parallel reduces to `τ`,
then `subst σ N ⇛ subst τ N`. We define the notion
of pointwise parallel reduction as follows.
\begin{code}
```
par-subst : ∀{Γ Δ} → Subst Γ Δ → Subst Γ Δ → Set
par-subst {Γ}{Δ} σ σ′ = ∀{A}{x : Γ ∋ A} → σ x ⇛ σ′ x
\end{code}
```
Because substitution depends on the extension function `exts`, which
in turn relies on `rename`, we start with a version of the
substitution lemma specialized to renamings.
\begin{code}
```
par-rename : ∀{Γ Δ A} {ρ : Rename Γ Δ} {M M′ : Γ ⊢ A}
→ M ⇛ M′
------------------------
@ -264,7 +264,7 @@ par-rename (papp p₁ p₂) = papp (par-rename p₁) (par-rename p₂)
par-rename {Γ}{Δ}{A}{ρ} (pbeta{Γ}{N}{N′}{M}{M′} p₁ p₂)
with pbeta (par-rename{ρ = ext ρ} p₁) (par-rename{ρ = ρ} p₂)
... | G rewrite rename-subst-commute{Γ}{Δ}{N′}{M′}{ρ} = G
\end{code}
```
The proof is by induction on `M ⇛ M′`. The first four cases
are straightforward so we just consider the last one for `pbeta`.
@ -285,18 +285,18 @@ are straightforward so we just consider the last one for `pbeta`.
With this lemma in hand, it is straightforward to show that extending
substitutions preserves the pointwise parallel reduction relation.
\begin{code}
```
par-subst-exts : ∀{Γ Δ} {σ τ : Subst Γ Δ}
→ par-subst σ τ
→ ∀{B} → par-subst (exts σ {B = B}) (exts τ)
par-subst-exts s {x = Z} = pvar
par-subst-exts s {x = S x} = par-rename s
\end{code}
```
We are ready to prove the main lemma regarding substitution and
parallel reduction.
\begin{code}
```
subst-par : ∀{Γ Δ A} {σ τ : Subst Γ Δ} {M M′ : Γ ⊢ A}
→ par-subst σ τ → M ⇛ M′
--------------------------
@ -313,7 +313,7 @@ subst-par {Γ} {Δ} {★} {σ} {τ} {(ƛ N) · M} s (pbeta{N = N}{M = M
(subst-par {σ = σ} s p₂)
... | G rewrite subst-commute{N = N}{M = M}{σ = τ} =
G
\end{code}
```
We proceed by induction on `M ⇛ M′`.
@ -348,25 +348,25 @@ We proceed by induction on `M ⇛ M`.
Of course, if `M ⇛ M′`, then `subst-zero M` pointwise parallel reduces
to `subst-zero M′`.
\begin{code}
```
par-subst-zero : ∀{Γ}{A}{M M′ : Γ ⊢ A}
→ M ⇛ M′
→ par-subst (subst-zero M) (subst-zero M′)
par-subst-zero {M} {M′} p {A} {Z} = p
par-subst-zero {M} {M′} p {A} {S x} = pvar
\end{code}
```
We conclude this section with the desired corollary, that substitution
respects parallel reduction.
\begin{code}
```
sub-par : ∀{Γ A B} {N N′ : Γ , A ⊢ B} {M M′ : Γ ⊢ A}
→ N ⇛ N′
→ M ⇛ M′
--------------------------
→ N [ M ] ⇛ N′ [ M′ ]
sub-par pn pm = subst-par (par-subst-zero pm) pn
\end{code}
```
## Parallel reduction satisfies the diamond property
@ -377,7 +377,7 @@ property: that if `M ⇛ N` and `M ⇛ N′`, then `N ⇛ L` and `N′ ⇛ L` fo
some `L`. The proof is relatively easy; it is parallel reduction's
_raison d'etre_.
\begin{code}
```
par-diamond : ∀{Γ A} {M N N′ : Γ ⊢ A}
→ M ⇛ N
→ M ⇛ N′
@ -413,7 +413,7 @@ par-diamond {Γ}{A} (pbeta p1 p3) (pbeta p2 p4)
with par-diamond p3 p4
... | ⟨ M₃ , ⟨ p7 , p8 ⟩ ⟩ =
⟨ N₃ [ M₃ ] , ⟨ sub-par p5 p7 , sub-par p6 p8 ⟩ ⟩
\end{code}
```
The proof is by induction on both premises.
@ -466,7 +466,7 @@ if `M ⇒ N` and `M ⇒* N`, then
The proof is a straightforward induction on `M ⇒* N`,
using the diamond property in the induction step.
\begin{code}
```
strip : ∀{Γ A} {M N N′ : Γ ⊢ A}
→ M ⇛ N
→ M ⇛* N′
@ -479,13 +479,13 @@ strip{Γ}{A}{M}{N}{N′} mn (M ⇛⟨ mm' ⟩ m'n')
with strip m'l m'n'
... | ⟨ L , ⟨ ll' , n'l' ⟩ ⟩ =
⟨ L , ⟨ (N ⇛⟨ nl ⟩ ll') , n'l' ⟩ ⟩
\end{code}
```
The proof of confluence for parallel reduction is now proved by
induction on the sequence `M ⇛* N`, using the above lemma in the
induction step.
\begin{code}
```
par-confluence : ∀{Γ A} {L M₁ M₂ : Γ ⊢ A}
→ L ⇛* M₁
→ L ⇛* M₂
@ -498,7 +498,7 @@ par-confluence {Γ}{A}{L}{M₁}{M₂} (L ⇛⟨ L⇛M₁ ⟩ M₁⇛*M₁)
with par-confluence M₁⇛*M₁ M₁⇛*N
... | ⟨ N , ⟨ M₁⇛*N , N⇛*N ⟩ ⟩ =
⟨ N , ⟨ M₁⇛*N , (M₂ ⇛⟨ M₂⇛N ⟩ N⇛*N) ⟩ ⟩
\end{code}
```
The step case may be illustrated as follows:
@ -531,7 +531,7 @@ Then by confluence we obtain some `L` such that
`M₁ ⇛* N` and `M₂ ⇛* N`, from which we conclude that
`M₁ —↠ N` and `M₂ —↠ N` by `pars-betas`.
\begin{code}
```
confluence : ∀{Γ A} {L M₁ M₂ : Γ ⊢ A}
→ L —↠ M₁
→ L —↠ M₂
@ -541,7 +541,7 @@ confluence L↠M₁ L↠M₂
with par-confluence (betas-pars L↠M₁) (betas-pars L↠M₂)
... | ⟨ N , ⟨ M₁⇛N , M₂⇛N ⟩ ⟩ =
⟨ N , ⟨ pars-betas M₁⇛N , pars-betas M₂⇛N ⟩ ⟩
\end{code}
```
## Notes
@ -566,4 +566,3 @@ Homeier's (TPHOLs 2001).
This chapter uses the following unicode:
⇛ U+21DB RIGHTWARDS TRIPLE ARROW (\r== or \Rrightarrow)

View file

@ -6,9 +6,9 @@ permalink : /Connectives/
next : /Negation/
---
\begin{code}
```
module plfa.Connectives where
\end{code}
```
<!-- The ⊥ ⊎ A ≅ A exercise requires a (inj₁ ()) pattern,
which the reader will not have seen. Restore this
@ -29,7 +29,7 @@ principle known as _Propositions as Types_:
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open Eq.≡-Reasoning
open import Data.Nat using (ℕ)
open import Function using (_∘_)
open import plfa.Isomorphism using (_≃_; _≲_; extensionality)
open plfa.Isomorphism.≃-Reasoning
\end{code}
```
## Conjunction is product
@ -45,7 +45,7 @@ open plfa.Isomorphism.≃-Reasoning
Given two propositions `A` and `B`, the conjunction `A × B` holds
if both `A` holds and `B` holds. We formalise this idea by
declaring a suitable inductive type:
\begin{code}
```
data _×_ (A B : Set) : Set where
⟨_,_⟩ :
@ -53,14 +53,14 @@ data _×_ (A B : Set) : Set where
→ B
-----
→ A × B
\end{code}
```
Evidence that `A × B` holds is of the form `⟨ M , N ⟩`, where `M`
provides evidence that `A` holds and `N` provides evidence that `B`
holds.
Given evidence that `A × B` holds, we can conclude that either
`A` holds or `B` holds:
\begin{code}
```
proj₁ : ∀ {A B : Set}
→ A × B
-----
@ -72,18 +72,18 @@ proj₂ : ∀ {A B : Set}
-----
→ B
proj₂ ⟨ x , y ⟩ = y
\end{code}
```
If `L` provides evidence that `A × B` holds, then `proj₁ L` provides evidence
that `A` holds, and `proj₂ L` provides evidence that `B` holds.
Equivalently, we could also declare conjunction as a record type:
\begin{code}
```
record _×_ (A B : Set) : Set where
field
proj₁ : A
proj₂ : B
open _×_
\end{code}
```
Here record construction
record
@ -121,19 +121,19 @@ _Communications of the ACM_, December 2015.)
In this case, applying each destructor and reassembling the results with the
constructor is the identity over products:
\begin{code}
```
η-× : ∀ {A B : Set} (w : A × B) → ⟨ proj₁ w , proj₂ w ⟩ ≡ w
η-× ⟨ x , y ⟩ = refl
\end{code}
```
The pattern matching on the left-hand side is essential, since
replacing `w` by `⟨ x , y ⟩` allows both sides of the
propositional equality to simplify to the same term.
We set the precedence of conjunction so that it binds less
tightly than anything save disjunction:
\begin{code}
```
infixr 2 _×_
\end{code}
```
Thus, `m ≤ n × n ≤ p` parses as `(m ≤ n) × (n ≤ p)`.
Given two types `A` and `B`, we refer to `A × B` as the
@ -145,7 +145,7 @@ distinct members, and type `B` has `n` distinct members,
then the type `A × B` has `m * n` distinct members.
For instance, consider a type `Bool` with two members, and
a type `Tri` with three members:
\begin{code}
```
data Bool : Set where
true : Bool
false : Bool
@ -154,7 +154,7 @@ data Tri : Set where
aa : Tri
bb : Tri
cc : Tri
\end{code}
```
Then the type `Bool × Tri` has six members:
⟨ true , aa ⟩ ⟨ true , bb ⟩ ⟨ true , cc ⟩
@ -162,7 +162,7 @@ Then the type `Bool × Tri` has six members:
For example, the following function enumerates all
possible arguments of type `Bool × Tri`:
\begin{code}
```
×-count : Bool × Tri → ℕ
×-count ⟨ true , aa ⟩ = 1
×-count ⟨ true , bb ⟩ = 2
@ -170,7 +170,7 @@ possible arguments of type `Bool × Tri`:
×-count ⟨ false , aa ⟩ = 4
×-count ⟨ false , bb ⟩ = 5
×-count ⟨ false , cc ⟩ = 6
\end{code}
```
Product on types also shares a property with product on numbers in
that there is a sense in which it is commutative and associative. In
@ -182,7 +182,7 @@ For commutativity, the `to` function swaps a pair, taking `⟨ x , y ⟩` to
Instantiating the patterns correctly in `from∘to` and `to∘from` is essential.
Replacing the definition of `from∘to` by `λ w → refl` will not work;
and similarly for `to∘from`:
\begin{code}
```
×-comm : ∀ {A B : Set} → A × B ≃ B × A
×-comm =
record
@ -191,7 +191,7 @@ and similarly for `to∘from`:
; from∘to = λ{ ⟨ x , y ⟩ → refl }
; to∘from = λ{ ⟨ y , x ⟩ → refl }
}
\end{code}
```
Being _commutative_ is different from being _commutative up to
isomorphism_. Compare the two statements:
@ -210,7 +210,7 @@ For associativity, the `to` function reassociates two uses of pairing,
taking `⟨ ⟨ x , y ⟩ , z ⟩` to `⟨ x , ⟨ y , z ⟩ ⟩`, and the `from` function does
the inverse. Again, the evidence of left and right inverse requires
matching against a suitable pattern to enable simplification:
\begin{code}
```
×-assoc : ∀ {A B C : Set} → (A × B) × C ≃ A × (B × C)
×-assoc =
record
@ -219,7 +219,7 @@ matching against a suitable pattern to enable simplification:
; from∘to = λ{ ⟨ ⟨ x , y ⟩ , z ⟩ → refl }
; to∘from = λ{ ⟨ x , ⟨ y , z ⟩ ⟩ → refl }
}
\end{code}
```
Being _associative_ is not the same as being _associative
up to isomorphism_. Compare the two statements:
@ -237,22 +237,22 @@ corresponds to `⟨ 1 , ⟨ true , aa ⟩ ⟩`, which is a member of the latter.
Show that `A ⇔ B` as defined [earlier][plfa.Isomorphism#iff]
is isomorphic to `(A → B) × (B → A)`.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Truth is unit
Truth `⊤` always holds. We formalise this idea by
declaring a suitable inductive type:
\begin{code}
```
data ⊤ : Set where
tt :
--
⊤
\end{code}
```
Evidence that `⊤` holds is of the form `tt`.
There is an introduction rule, but no elimination rule.
@ -262,10 +262,10 @@ us nothing new.
The nullary case of `η-×` is `η-⊤`, which asserts that any
value of type `⊤` must be equal to `tt`:
\begin{code}
```
η-⊤ : ∀ (w : ⊤) → tt ≡ w
η-⊤ tt = refl
\end{code}
```
The pattern matching on the left-hand side is essential. Replacing
`w` by `tt` allows both sides of the propositional equality to
simplify to the same term.
@ -273,17 +273,17 @@ simplify to the same term.
We refer to `⊤` as the _unit_ type. And, indeed,
type `⊤` has exactly one member, `tt`. For example, the following
function enumerates all possible arguments of type `⊤`:
\begin{code}
```
⊤-count : ⊤ → ℕ
⊤-count tt = 1
\end{code}
```
For numbers, one is the identity of multiplication. Correspondingly,
unit is the identity of product _up to isomorphism_. For left
identity, the `to` function takes `⟨ tt , x ⟩` to `x`, and the `from`
function does the inverse. The evidence of left inverse requires
matching against a suitable pattern to enable simplification:
\begin{code}
```
⊤-identityˡ : ∀ {A : Set} → ⊤ × A ≃ A
⊤-identityˡ =
record
@ -292,7 +292,7 @@ matching against a suitable pattern to enable simplification:
; from∘to = λ{ ⟨ tt , x ⟩ → refl }
; to∘from = λ{ x → refl }
}
\end{code}
```
Having an _identity_ is different from having an identity
_up to isomorphism_. Compare the two statements:
@ -308,7 +308,7 @@ For instance, `⟨ tt , true ⟩`, which is a member of the former,
corresponds to `true`, which is a member of the latter.
Right identity follows from commutativity of product and left identity:
\begin{code}
```
⊤-identityʳ : ∀ {A : Set} → (A × ⊤) ≃ A
⊤-identityʳ {A} =
≃-begin
@ -318,7 +318,7 @@ Right identity follows from commutativity of product and left identity:
≃⟨ ⊤-identityˡ ⟩
A
≃-∎
\end{code}
```
Here we have used a chain of isomorphisms, analogous to that used for
equality.
@ -328,7 +328,7 @@ equality.
Given two propositions `A` and `B`, the disjunction `A ⊎ B` holds
if either `A` holds or `B` holds. We formalise this idea by
declaring a suitable inductive type:
\begin{code}
```
data _⊎_ (A B : Set) : Set where
inj₁ :
@ -340,14 +340,14 @@ data _⊎_ (A B : Set) : Set where
B
-----
→ A ⊎ B
\end{code}
```
Evidence that `A ⊎ B` holds is either of the form `inj₁ M`, where `M`
provides evidence that `A` holds, or `inj₂ N`, where `N` provides
evidence that `B` holds.
Given evidence that `A → C` and `B → C` both hold, then given
evidence that `A ⊎ B` holds we can conclude that `C` holds:
\begin{code}
```
case-⊎ : ∀ {A B C : Set}
→ (A → C)
→ (B → C)
@ -356,7 +356,7 @@ case-⊎ : ∀ {A B C : Set}
→ C
case-⊎ f g (inj₁ x) = f x
case-⊎ f g (inj₂ y) = g y
\end{code}
```
Pattern matching against `inj₁` and `inj₂` is typical of how we exploit
evidence that a disjunction holds.
@ -370,27 +370,27 @@ the former are sometimes given the names `⊎-I₁` and `⊎-I₂` and the
latter the name `⊎-E`.
Applying the destructor to each of the constructors is the identity:
\begin{code}
```
η-⊎ : ∀ {A B : Set} (w : A ⊎ B) → case-⊎ inj₁ inj₂ w ≡ w
η-⊎ (inj₁ x) = refl
η-⊎ (inj₂ y) = refl
\end{code}
```
More generally, we can also throw in an arbitrary function from a disjunction:
\begin{code}
```
uniq-⊎ : ∀ {A B C : Set} (h : A ⊎ B → C) (w : A ⊎ B) →
case-⊎ (h ∘ inj₁) (h ∘ inj₂) w ≡ h w
uniq-⊎ h (inj₁ x) = refl
uniq-⊎ h (inj₂ y) = refl
\end{code}
```
The pattern matching on the left-hand side is essential. Replacing
`w` by `inj₁ x` allows both sides of the propositional equality to
simplify to the same term, and similarly for `inj₂ y`.
We set the precedence of disjunction so that it binds less tightly
than any other declared operator:
\begin{code}
```
infix 1 _⊎_
\end{code}
```
Thus, `A × C ⊎ B × C` parses as `(A × C) ⊎ (B × C)`.
Given two types `A` and `B`, we refer to `A ⊎ B` as the
@ -411,14 +411,14 @@ members:
For example, the following function enumerates all
possible arguments of type `Bool ⊎ Tri`:
\begin{code}
```
⊎-count : Bool ⊎ Tri → ℕ
⊎-count (inj₁ true) = 1
⊎-count (inj₁ false) = 2
⊎-count (inj₂ aa) = 3
⊎-count (inj₂ bb) = 4
⊎-count (inj₂ cc) = 5
\end{code}
```
Sum on types also shares a property with sum on numbers in that it is
commutative and associative _up to isomorphism_.
@ -427,26 +427,26 @@ commutative and associative _up to isomorphism_.
Show sum is commutative up to isomorphism.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `⊎-assoc`
Show sum is associative up to isomorphism.
\begin{code}
```
-- Your code goes here
\end{code}
```
## False is empty
False `⊥` never holds. We formalise this idea by declaring
a suitable inductive type:
\begin{code}
```
data ⊥ : Set where
-- no clauses!
\end{code}
```
There is no possible evidence that `⊥` holds.
Dual to `⊤`, for `⊥` there is no introduction rule but an elimination rule.
@ -456,13 +456,13 @@ conclude anything! This is a basic principle of logic, known in
medieval times by the Latin phrase _ex falso_, and known to children
through phrases such as "if pigs had wings, then I'd be the Queen of
Sheba". We formalise it as follows:
\begin{code}
```
⊥-elim : ∀ {A : Set}
→ ⊥
--
→ A
⊥-elim ()
\end{code}
```
This is our first use of the _absurd pattern_ `()`.
Here since `⊥` is a type with no members, we indicate that it is
_never_ possible to match against a value of this type by using
@ -474,20 +474,20 @@ in the standard library.
The nullary case of `uniq-⊎` is `uniq-⊥`, which asserts that `⊥-elim`
is equal to any arbitrary function from `⊥`:
\begin{code}
```
uniq-⊥ : ∀ {C : Set} (h : ⊥ → C) (w : ⊥) → ⊥-elim w ≡ h w
uniq-⊥ h ()
\end{code}
```
Using the absurd pattern asserts there are no possible values for `w`,
so the equation holds trivially.
We refer to `⊥` as the _empty_ type. And, indeed,
type `⊥` has no members. For example, the following function
enumerates all possible arguments of type `⊥`:
\begin{code}
```
⊥-count : ⊥ → ℕ
⊥-count ()
\end{code}
```
Here again the absurd pattern `()` indicates that no value can match
type `⊥`.
@ -498,17 +498,17 @@ is the identity of sums _up to isomorphism_.
Show empty is the left identity of sums up to isomorphism.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `⊥-identityʳ`
Show empty is the right identity of sums up to isomorphism.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Implication is function {#implication}
@ -528,14 +528,14 @@ converts evidence that `A` holds into evidence that `B` holds.
Put another way, if we know that `A → B` and `A` both hold,
then we may conclude that `B` holds:
\begin{code}
```
→-elim : ∀ {A B : Set}
→ (A → B)
→ A
-------
→ B
→-elim L M = L M
\end{code}
```
In medieval times, this rule was known by the name _modus ponens_.
It corresponds to function application.
@ -544,10 +544,10 @@ is referred to as _introducing_ a function,
while applying a function is referred to as _eliminating_ the function.
Elimination followed by introduction is the identity:
\begin{code}
```
η-→ : ∀ {A B : Set} (f : A → B) → (λ (x : A) → f x) ≡ f
η-→ f = refl
\end{code}
```
Implication binds less tightly than any other operator. Thus, `A ⊎ B →
B ⊎ A` parses as `(A ⊎ B) → (B ⊎ A)`.
@ -568,7 +568,7 @@ three squared) members:
For example, the following function enumerates all possible
arguments of the type `Bool → Tri`:
\begin{code}
```
→-count : (Bool → Tri) → ℕ
→-count f with f true | f false
... | aa | aa = 1
@ -580,7 +580,7 @@ arguments of the type `Bool → Tri`:
... | cc | aa = 7
... | cc | bb = 8
... | cc | cc = 9
\end{code}
```
Exponential on types also shares a property with exponential on
numbers in that many of the standard identities for numbers carry
@ -598,7 +598,7 @@ Both types can be viewed as functions that given evidence that `A` holds
and evidence that `B` holds can return evidence that `C` holds.
This isomorphism sometimes goes by the name *currying*.
The proof of the right inverse requires extensionality:
\begin{code}
```
currying : ∀ {A B C : Set} → (A → B → C) ≃ (A × B → C)
currying =
record
@ -607,7 +607,7 @@ currying =
; from∘to = λ{ f → refl }
; to∘from = λ{ g → extensionality λ{ ⟨ x , y ⟩ → refl }}
}
\end{code}
```
Currying tells us that instead of a function that takes a pair of arguments,
we can have a function that takes the first argument and returns a function that
@ -634,7 +634,7 @@ we have the isomorphism:
That is, the assertion that if either `A` holds or `B` holds then `C` holds
is the same as the assertion that if `A` holds then `C` holds and if
`B` holds then `C` holds. The proof of the left inverse requires extensionality:
\begin{code}
```
→-distrib-⊎ : ∀ {A B C : Set} → (A ⊎ B → C) ≃ ((A → C) × (B → C))
→-distrib-⊎ =
record
@ -643,7 +643,7 @@ is the same as the assertion that if `A` holds then `C` holds and if
; from∘to = λ{ f → extensionality λ{ (inj₁ x) → refl ; (inj₂ y) → refl } }
; to∘from = λ{ ⟨ g , h ⟩ → refl }
}
\end{code}
```
Corresponding to the law
@ -657,7 +657,7 @@ That is, the assertion that if `A` holds then `B` holds and `C` holds
is the same as the assertion that if `A` holds then `B` holds and if
`A` holds then `C` holds. The proof of left inverse requires both extensionality
and the rule `η-×` for products:
\begin{code}
```
→-distrib-× : ∀ {A B C : Set} → (A → B × C) ≃ (A → B) × (A → C)
→-distrib-× =
record
@ -666,14 +666,14 @@ and the rule `η-×` for products:
; from∘to = λ{ f → extensionality λ{ x → η-× (f x) } }
; to∘from = λ{ ⟨ g , h ⟩ → refl }
}
\end{code}
```
## Distribution
Products distribute over sum, up to isomorphism. The code to validate
this fact is similar in structure to our previous results:
\begin{code}
```
×-distrib-⊎ : ∀ {A B C : Set} → (A ⊎ B) × C ≃ (A × C) ⊎ (B × C)
×-distrib-⊎ =
record
@ -690,10 +690,10 @@ this fact is similar in structure to our previous results:
; (inj₂ ⟨ y , z ⟩) → refl
}
}
\end{code}
```
Sums do not distribute over products up to isomorphism, but it is an embedding:
\begin{code}
```
⊎-distrib-× : ∀ {A B C : Set} → (A × B) ⊎ C ≲ (A ⊎ C) × (B ⊎ C)
⊎-distrib-× =
record
@ -708,7 +708,7 @@ Sums do not distribute over products up to isomorphism, but it is an embedding:
; (inj₂ z) → refl
}
}
\end{code}
```
Note that there is a choice in how we write the `from` function.
As given, it takes `⟨ inj₂ z , inj₂ z′ ⟩` to `inj₂ z`, but it is
easy to write a variant that instead returns `inj₂ z′`. We have
@ -730,42 +730,42 @@ one of these laws is "more true" than the other.
#### Exercise `⊎-weak-×` (recommended)
Show that the following property holds:
\begin{code}
```
postulate
⊎-weak-× : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
\end{code}
```
This is called a _weak distributive law_. Give the corresponding
distributive law, and explain how it relates to the weak version.
\begin{code}
```
-- Your code goes here
\end{code}
```
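A sketch of the weak law (it would replace the postulate above). The corresponding full distributive law is the `to` direction of `×-distrib-⊎`, which keeps `C` in the first disjunct as well; the weak version throws that copy of `C` away, which is why it is only an implication and not an isomorphism:
```
⊎-weak-× : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
⊎-weak-× ⟨ inj₁ a , c ⟩ = inj₁ a
⊎-weak-× ⟨ inj₂ b , c ⟩ = inj₂ ⟨ b , c ⟩
```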
#### Exercise `⊎×-implies-×⊎`
Show that a disjunct of conjuncts implies a conjunct of disjuncts:
\begin{code}
```
postulate
⊎×-implies-×⊎ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
\end{code}
```
Does the converse hold? If so, prove; if not, give a counterexample.
\begin{code}
```
-- Your code goes here
\end{code}
```
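A sketch of the forward direction (again replacing the postulate). The converse does not hold: taking `B` and `C` to be `⊥` and `A` and `D` to be `⊤`, the right-hand side is inhabited by `⟨ inj₁ tt , inj₂ tt ⟩`, while neither `A × B` nor `C × D` is inhabited.
```
⊎×-implies-×⊎ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
⊎×-implies-×⊎ (inj₁ ⟨ a , b ⟩) = ⟨ inj₁ a , inj₁ b ⟩
⊎×-implies-×⊎ (inj₂ ⟨ c , d ⟩) = ⟨ inj₂ c , inj₂ d ⟩
```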
## Standard library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Data.Product using (_×_; proj₁; proj₂) renaming (_,_ to ⟨_,_⟩)
import Data.Unit using (⊤; tt)
import Data.Sum using (_⊎_; inj₁; inj₂) renaming ([_,_] to case-⊎)
import Data.Empty using (⊥; ⊥-elim)
import Function.Equivalence using (_⇔_)
\end{code}
```
The standard library constructs pairs with `_,_` whereas we use `⟨_,_⟩`.
The former makes it convenient to build triples or larger tuples from pairs,
permitting `a , b , c` to stand for `(a , (b , c))`. But it conflicts with

View file

@ -6,13 +6,13 @@ permalink : /ContextualEquivalence/
next : /Acknowledgements/
---
\begin{code}
```
module plfa.ContextualEquivalence where
\end{code}
```
## Imports
\begin{code}
```
open import plfa.Untyped using (_⊢_; ★; ∅; _,_; ƛ_)
open import plfa.LambdaReduction using (_—↠_)
open import plfa.Denotational using (ℰ; _≃_; ≃-sym; ≃-trans; _iff_)
@ -23,7 +23,7 @@ open import plfa.CallByName using (_⊢_⇓_; cbn→reduce)
open import Data.Product using (_×_; Σ; Σ-syntax; ∃; ∃-syntax; proj₁; proj₂)
renaming (_,_ to ⟨_,_⟩)
\end{code}
```
## Contextual Equivalence
@ -36,20 +36,20 @@ results. As discussed in the Denotational chapter, the result of
a program in the lambda calculus is to terminate or not.
We characterize termination with the reduction semantics as follows.
\begin{code}
```
terminates : ∀{Γ} → (M : Γ ⊢ ★) → Set
terminates {Γ} M = Σ[ N ∈ (Γ , ★ ⊢ ★) ] (M —↠ ƛ N)
\end{code}
```
So two terms are contextually equivalent if plugging them into the
same context produces two programs that either terminate or diverge
together.
\begin{code}
```
_≅_ : ∀{Γ} → (M N : Γ ⊢ ★) → Set
(_≅_ {Γ} M N) = ∀ {C : Ctx Γ ∅}
→ (terminates (plug C M)) iff (terminates (plug C N))
\end{code}
```
The contextual equivalence of two terms is difficult to prove directly
based on the above definition because of the universal quantification
@ -71,7 +71,7 @@ The lemma states that if `M` and `N` are denotationally equal
and if `M` plugged into `C` terminates, then so does
`N` plugged into `C`.
\begin{code}
```
denot-equal-terminates : ∀{Γ} {M N : Γ ⊢ ★} {C : Ctx Γ ∅}
ℰ M ≃ ℰ N → terminates (plug C M)
-----------------------------------
@ -81,7 +81,7 @@ denot-equal-terminates {Γ}{M}{N}{C} M≃N ⟨ N , CM—↠ƛN ⟩ =
let CM≃CN = compositionality{Γ = Γ}{Δ = ∅}{C = C} M≃N in
let CN≃ƛN = ≃-trans (≃-sym CM≃CN) CM≃ƛN in
cbn→reduce (proj₂ (proj₂ (proj₂ (adequacy CN≃ƛN))))
\end{code}
```
The proof is direct. Because `plug C —↠ plug C (ƛN)`,
we can apply soundness to obtain
@ -108,7 +108,7 @@ so we conclude that
The main theorem follows by two applications of the lemma.
\begin{code}
```
denot-equal-contex-equal : ∀{Γ} {M N : Γ ⊢ ★}
ℰ M ≃ ℰ N
---------
@ -116,7 +116,7 @@ denot-equal-contex-equal : ∀{Γ} {M N : Γ ⊢ ★}
denot-equal-contex-equal{Γ}{M}{N} eq {C} =
⟨ (λ tm → denot-equal-terminates eq tm) ,
(λ tn → denot-equal-terminates (≃-sym eq) tn) ⟩
\end{code}
```
## Unicode

View file

@ -6,9 +6,9 @@ permalink : /DeBruijn/
next : /More/
---
\begin{code}
```
module plfa.DeBruijn where
\end{code}
```
The previous two chapters introduced lambda calculus, with a
formalisation based on named variables, and terms defined
@ -29,13 +29,13 @@ James Chapman, James McKinna, and many others.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Nat using (ℕ; zero; suc)
open import Relation.Nullary using (¬_)
\end{code}
```
## Introduction
@ -207,7 +207,7 @@ We now begin our formal development.
First, we get all our infix declarations out of the way.
We list separately operators for judgments, types, and terms:
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -221,7 +221,7 @@ infix 8 `suc_
infix 9 `_
infix 9 S_
infix 9 #_
\end{code}
```
Since terms are inherently typed, we must define types and
contexts before terms.
@ -230,30 +230,30 @@ contexts before terms.
As before, we have just two types, functions and naturals.
The formal definition is unchanged:
\begin{code}
```
data Type : Set where
_⇒_ : Type → Type → Type
`ℕ : Type
\end{code}
```
### Contexts
Contexts are as before, but we drop the names.
Contexts are formalised as follows:
\begin{code}
```
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
A context is just a list of types, with the type of the most
recently bound variable on the right. As before, we let `Γ`
and `Δ` range over contexts. We write `∅` for the empty
context, and `Γ , A` for the context `Γ` extended by type `A`.
For example
\begin{code}
```
_ : Context
_ = ∅ , `ℕ ⇒ `ℕ , `ℕ
\end{code}
```
is a context with two variables in scope, where the outer
bound one has type `` `ℕ ⇒ `ℕ ``, and the inner bound one has
type `` `ℕ ``.
@ -269,7 +269,7 @@ correspond to natural numbers. We write
for variables which in context `Γ` have type `A`. Their
formalisation looks exactly like the old lookup judgment, but
with all variable names dropped:
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -280,7 +280,7 @@ data _∋_ : Context → Type → Set where
→ Γ ∋ A
---------
→ Γ , B ∋ A
\end{code}
```
Constructor `S` no longer requires an additional parameter,
since without names shadowing is no longer an issue. Now
constructors `Z` and `S` correspond even more closely to the
@ -295,13 +295,13 @@ judgments:
* `` ∅ , "s" ⦂ `ℕ ⇒ `ℕ , "z" ⦂ `ℕ ∋ "s" ⦂ `ℕ ⇒ `ℕ ``
They correspond to the following inherently-typed variables:
\begin{code}
```
_ : ∅ , `ℕ ⇒ `ℕ , `ℕ ∋ `ℕ
_ = Z
_ : ∅ , `ℕ ⇒ `ℕ , `ℕ ∋ `ℕ ⇒ `ℕ
_ = S Z
\end{code}
```
In the given context, `"z"` is represented by `Z`
(as the most recently bound variable),
and `"s"` by `S Z`
@ -317,7 +317,7 @@ We write
for terms which in context `Γ` has type `A`. Their
formalisation looks exactly like the old typing judgment, but
with all terms and variable names dropped:
\begin{code}
```
data _⊢_ : Context → Type → Set where
`_ : ∀ {Γ} {A}
@ -356,7 +356,7 @@ data _⊢_ : Context → Type → Set where
→ Γ , A ⊢ A
----------
→ Γ ⊢ A
\end{code}
```
The definition exploits the close correspondence between the
structure of terms and the structure of a derivation showing
that it is well-typed: now we use the derivation _as_ the
@ -373,7 +373,7 @@ judgments:
* `` ∅ ⊢ ƛ "s" ⇒ ƛ "z" ⇒ ` "s" · (` "s" · ` "z") ⦂ (`ℕ ⇒ `ℕ) ⇒ `ℕ ⇒ `ℕ ``
They correspond to the following inherently-typed terms:
\begin{code}
```
_ : ∅ , `ℕ ⇒ `ℕ , `ℕ ⊢ `ℕ
_ = ` Z
@ -391,20 +391,20 @@ _ = ƛ (` S Z · (` S Z · ` Z))
_ : ∅ ⊢ (`ℕ ⇒ `ℕ) ⇒ `ℕ ⇒ `ℕ
_ = ƛ ƛ (` S Z · (` S Z · ` Z))
\end{code}
```
The final inherently-typed term represents the Church numeral
two.
### Abbreviating de Bruijn indices
We can use a natural number to select a type from a context:
\begin{code}
```
lookup : Context → ℕ → Type
lookup (Γ , A) zero = A
lookup (Γ , _) (suc n) = lookup Γ n
lookup ∅ _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
We intend to apply the function only when the natural is
shorter than the length of the context, which we indicate by
postulating an `impossible` term, just as we did
@ -412,26 +412,26 @@ postulating an `impossible` term, just as we did
Given the above, we can convert a natural to a corresponding
de Bruijn index, looking up its type in the context:
\begin{code}
```
count : ∀ {Γ} → (n : ℕ) → Γ ∋ lookup Γ n
count {Γ , _} zero = Z
count {Γ , _} (suc n) = S (count n)
count {∅} _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
This requires the same trick as before.
We can then introduce a convenient abbreviation for variables:
\begin{code}
```
#_ : ∀ {Γ} → (n : ℕ) → Γ ⊢ lookup Γ n
# n = ` count n
\end{code}
```
With this abbreviation, we can rewrite the Church numeral two more compactly:
\begin{code}
```
_ : ∅ ⊢ (`ℕ ⇒ `ℕ) ⇒ `ℕ ⇒ `ℕ
_ = ƛ ƛ (# 1 · (# 1 · # 0))
\end{code}
```
### Test examples
@ -443,7 +443,7 @@ You can find them
for comparison.
First, computing two plus two on naturals:
\begin{code}
```
two : ∀ {Γ} → Γ ⊢ `ℕ
two = `suc `suc `zero
@ -452,12 +452,12 @@ plus = μ ƛ ƛ (case (# 1) (# 0) (`suc (# 3 · # 0 · # 1)))
2+2 : ∀ {Γ} → Γ ⊢ `ℕ
2+2 = plus · two · two
\end{code}
```
We generalise to arbitrary contexts because later we will give examples
where `two` appears nested inside binders.
Next, computing two plus two on Church numerals:
\begin{code}
```
Ch : Type → Type
Ch A = (A ⇒ A) ⇒ A ⇒ A
@ -472,7 +472,7 @@ sucᶜ = ƛ `suc (# 0)
2+2ᶜ : ∀ {Γ} → Γ ⊢ `ℕ
2+2ᶜ = plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero
\end{code}
```
As before we generalise everything to arbitrary
contexts. While we are at it, we also generalise `twoᶜ` and
`plusᶜ` to Church numerals over arbitrary types.
@ -485,9 +485,9 @@ Write out the definition of a lambda term that multiplies
two natural numbers, now adapted to the inherently typed
DeBruijn representation.
\begin{code}
```
-- Your code goes here
\end{code}
```
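One possible definition, sketched by analogy with `plus` above; in the successor branch `# 3` is the recursive occurrence bound by `μ`, `# 0` the predecessor, and `# 1` the second argument:
```
mul : ∀ {Γ} → Γ ⊢ `ℕ ⇒ `ℕ ⇒ `ℕ
mul = μ ƛ ƛ (case (# 1) `zero (plus · # 1 · (# 3 · # 0 · # 1)))
```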
## Renaming
@ -504,14 +504,14 @@ from variables in one context to variables in another,
extension yields a map from the first context extended to the
second context similarly extended. It looks exactly like the
old extension lemma, but with all names and terms dropped:
\begin{code}
```
ext : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ∋ A)
-----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ∋ A)
ext ρ Z = Z
ext ρ (S x) = S (ρ x)
\end{code}
```
Let `ρ` be the name of the map that takes variables in `Γ`
to variables in `Δ`. Consider the de Bruijn index of the
variable in `Γ , B`:
@ -527,7 +527,7 @@ With extension under our belts, it is straightforward
to define renaming. If variables in one context map to
variables in another, then terms in the first context map to
terms in the second:
\begin{code}
```
rename : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ∋ A)
------------------------
@ -539,7 +539,7 @@ rename ρ (`zero) = `zero
rename ρ (`suc M) = `suc (rename ρ M)
rename ρ (case L M N) = case (rename ρ L) (rename ρ M) (rename (ext ρ) N)
rename ρ (μ N) = μ (rename (ext ρ) N)
\end{code}
```
Let `ρ` be the name of the map that takes variables in `Γ`
to variables in `Δ`. Let's unpack the first three cases:
@ -566,7 +566,7 @@ calculus.
Here is an example of renaming a term with one free
and one bound variable:
\begin{code}
```
M₀ : ∅ , `ℕ ⇒ `ℕ ⊢ `ℕ ⇒ `ℕ
M₀ = ƛ (# 1 · (# 1 · # 0))
@ -575,7 +575,7 @@ M₁ = ƛ (# 2 · (# 2 · # 0))
_ : rename S_ M₀ ≡ M₁
_ = refl
\end{code}
```
In general, `rename S_` will increment the de Bruijn index for
each free variable by one, while leaving the index for each
bound variable unchanged. The code achieves this naturally:
@ -613,14 +613,14 @@ map from variables in one context to _terms_ in another.
Given a map from variables in one context map to terms over
another, extension yields a map from the first context
extended to the second context similarly extended:
\begin{code}
```
exts : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ⊢ A)
----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ⊢ A)
exts σ Z = ` Z
exts σ (S x) = rename S_ (σ x)
\end{code}
```
Let `σ` be the name of the map that takes variables in `Γ`
to terms over `Δ`. Consider the de Bruijn index of the
variable in `Γ , B`:
@ -641,7 +641,7 @@ With extension under our belts, it is straightforward
to define substitution. If variable in one context map
to terms over another, then terms in the first context
map to terms in the second:
\begin{code}
```
subst : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ⊢ A)
------------------------
@ -653,7 +653,7 @@ subst σ (`zero) = `zero
subst σ (`suc M) = `suc (subst σ M)
subst σ (case L M N) = case (subst σ L) (subst σ M) (subst (exts σ) N)
subst σ (μ N) = μ (subst (exts σ) N)
\end{code}
```
Let `σ` be the name of the map that takes variables in `Γ`
to terms over `Δ`. Let's unpack the first three cases:
@ -675,7 +675,7 @@ bound variable.
From the general case of substitution for multiple free
variables it is easy to define the special case of
substitution for one free variable:
\begin{code}
```
_[_] : ∀ {Γ A B}
→ Γ , B ⊢ A
→ Γ ⊢ B
@ -686,7 +686,7 @@ _[_] {Γ} {A} {B} N M = subst {Γ , B} {Γ} σ {A} N
σ : ∀ {A} → Γ , B ∋ A → Γ ⊢ A
σ Z = M
σ (S x) = ` x
\end{code}
```
In a term of type `A` over context `Γ , B`, we replace the
variable of type `B` by a term of type `B` over context `Γ`.
To do so, we use a map from the context `Γ , B` to the context
@ -699,7 +699,7 @@ Consider the previous example:
`` ƛ "z" ⇒ sucᶜ · (sucᶜ · ` "z") ``
Here is the example formalised:
\begin{code}
```
M₂ : ∅ , `ℕ ⇒ `ℕ ⊢ `ℕ ⇒ `ℕ
M₂ = ƛ # 1 · (# 1 · # 0)
@ -711,7 +711,7 @@ M₄ = ƛ (ƛ `suc # 0) · ((ƛ `suc # 0) · # 0)
_ : M₂ [ M₃ ] ≡ M₄
_ = refl
\end{code}
```
Previously, we presented an example of substitution that we
did not implement, since it needed to rename the bound
@ -723,7 +723,7 @@ variable to avoid capture:
Say the bound `"x"` has type `` `ℕ ⇒ `ℕ ``, the substituted
`"y"` has type `` `ℕ ``, and the free `"x"` also has type ``
`ℕ ⇒ `ℕ ``. Here is the example formalised:
\begin{code}
```
M₅ : ∅ , `ℕ ⇒ `ℕ , `ℕ ⊢ (`ℕ ⇒ `ℕ) ⇒ `ℕ
M₅ = ƛ # 0 · # 1
@ -735,7 +735,7 @@ M₇ = ƛ (# 0 · (# 1 · `zero))
_ : M₅ [ M₆ ] ≡ M₇
_ = refl
\end{code}
```
The logician Haskell Curry observed that getting the
definition of substitution right can be a tricky business. It
@ -753,7 +753,7 @@ to sneak in.
The definition of value is much as before, save that the
added types incorporate the same information found in the
Canonical Forms lemma:
\begin{code}
```
data Value : ∀ {Γ A} → Γ ⊢ A → Set where
V-ƛ : ∀ {Γ A B} {N : Γ , A ⊢ B}
@ -768,7 +768,7 @@ data Value : ∀ {Γ A} → Γ ⊢ A → Set where
→ Value V
--------------
→ Value (`suc V)
\end{code}
```
Here `zero` requires an implicit parameter to aid inference,
much in the same way that `[]` did in
@ -783,7 +783,7 @@ have compatibility rules that reduce a part of a term,
labelled with `ξ`, and rules that simplify a constructor
combined with a destructor, labelled with `β`:
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -826,7 +826,7 @@ data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
β-μ : ∀ {Γ A} {N : Γ , A ⊢ A}
---------------
→ μ N —→ N [ μ N ]
\end{code}
```
The definition states that `M —→ N` can only hold of terms `M`
and `N` which _both_ have type `Γ ⊢ A` for some context `Γ`
and type `A`. In other words, it is _built-in_ to our
@ -842,7 +842,7 @@ definition of substitution.
The reflexive and transitive closure is exactly as before.
We simply cut-and-paste the previous definition:
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -865,7 +865,7 @@ begin_ : ∀ {Γ} {A} {M N : Γ ⊢ A}
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
## Examples
@ -873,7 +873,7 @@ begin M—↠N = M—↠N
We reiterate each of our previous examples. First, the Church
numeral two applied to the successor function and zero yields
the natural number two:
\begin{code}
```
_ : twoᶜ · sucᶜ · `zero {∅} —↠ `suc `suc `zero
_ =
begin
@ -887,11 +887,11 @@ _ =
—→⟨ β-ƛ (V-suc V-zero) ⟩
`suc (`suc `zero)
\end{code}
```
As before, we need to supply an explicit context to `` `zero ``.
Next, a sample reduction demonstrating that two plus two is four:
\begin{code}
```
_ : plus {∅} · two · two —↠ `suc `suc `suc `suc `zero
_ =
plus · two · two
@ -922,10 +922,10 @@ _ =
—→⟨ ξ-suc (ξ-suc β-zero) ⟩
`suc (`suc (`suc (`suc `zero)))
\end{code}
```
And finally, a similar sample reduction for Church numerals:
\begin{code}
```
_ : plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero —↠ `suc `suc `suc `suc `zero {∅}
_ =
begin
@ -955,7 +955,7 @@ _ =
—→⟨ β-ƛ (V-suc (V-suc (V-suc V-zero))) ⟩
`suc (`suc (`suc (`suc `zero)))
\end{code}
```
## Values do not reduce
@ -972,16 +972,16 @@ Following the previous development, show values do
not reduce, and its corollary, terms that reduce are not
values.
\begin{code}
```
-- Your code goes here
\end{code}
```
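A sketch, mirroring the proof for named terms; lambdas and zero admit no reduction rule, so the absurd pattern applies, and the successor case recurses:
```
V¬—→ : ∀ {Γ A} {M N : Γ ⊢ A} → Value M → ¬ (M —→ N)
V¬—→ V-ƛ        ()
V¬—→ V-zero     ()
V¬—→ (V-suc VM) (ξ-suc M—→N) = V¬—→ VM M—→N

—→¬V : ∀ {Γ A} {M N : Γ ⊢ A} → M —→ N → ¬ Value M
—→¬V M—→N VM = V¬—→ VM M—→N
```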
## Progress
As before, every term that is well-typed and closed is either
a value or takes a reduction step. The formulation of progress
is just as before, but annotated with types:
\begin{code}
```
data Progress {A} (M : ∅ ⊢ A) : Set where
step : ∀ {N : ∅ ⊢ A}
@ -993,13 +993,13 @@ data Progress {A} (M : ∅ ⊢ A) : Set where
Value M
----------
→ Progress M
\end{code}
```
The statement and proof of progress is much as before,
appropriately annotated. We no longer need
to explicitly refer to the Canonical Forms lemma, since it
is built-in to the definition of value:
\begin{code}
```
progress : ∀ {A} → (M : ∅ ⊢ A) → Progress M
progress (` ())
progress (ƛ N) = done V-ƛ
@ -1017,7 +1017,7 @@ progress (case L M N) with progress L
... | done V-zero = step (β-zero)
... | done (V-suc VL) = step (β-suc VL)
progress (μ N) = step (β-μ)
\end{code}
```
## Evaluation
@ -1027,13 +1027,13 @@ We can do much the same here, but we no longer need to explicitly
refer to preservation, since it is built-in to the definition of reduction.
As previously, gas is specified by a natural number:
\begin{code}
```
data Gas : Set where
gas : ℕ → Gas
\end{code}
```
When our evaluator returns a term `N`, it will either give evidence that
`N` is a value or indicate that it ran out of gas:
\begin{code}
```
data Finished {Γ A} (N : Γ ⊢ A) : Set where
done :
@ -1044,11 +1044,11 @@ data Finished {Γ A} (N : Γ ⊢ A) : Set where
out-of-gas :
----------
Finished N
\end{code}
```
Given a term `L` of type `A`, the evaluator will, for some `N`, return
a reduction sequence from `L` to `N` and an indication of whether
reduction finished:
\begin{code}
```
data Steps : ∀ {A} → ∅ ⊢ A → Set where
steps : ∀ {A} {L N : ∅ ⊢ A}
@ -1056,9 +1056,9 @@ data Steps : ∀ {A} → ∅ ⊢ A → Set where
→ Finished N
----------
→ Steps L
\end{code}
```
The evaluator takes gas and a term and returns the corresponding steps:
\begin{code}
```
eval : ∀ {A}
→ Gas
→ (L : ∅ ⊢ A)
@ -1069,7 +1069,7 @@ eval (gas (suc m)) L with progress L
... | done VL = steps (L ∎) (done VL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
The definition is a little simpler than previously, as we no longer need
to invoke preservation.
@ -1077,13 +1077,13 @@ to invoke preservation.
We reiterate each of our previous examples. We re-define the term
`sucμ` that loops forever:
\begin{code}
```
sucμ : ∅ ⊢ `ℕ
sucμ = μ (`suc (# 0))
\end{code}
```
To compute the first three steps of the infinite reduction sequence,
we evaluate with three steps worth of gas:
\begin{code}
```
_ : eval (gas 3) sucμ ≡
steps
`suc ` Z
@ -1096,10 +1096,10 @@ _ : eval (gas 3) sucμ ≡
∎)
out-of-gas
_ = refl
\end{code}
```
The Church numeral two applied to successor and zero:
\begin{code}
```
_ : eval (gas 100) (twoᶜ · sucᶜ · `zero) ≡
steps
((ƛ (ƛ ` (S Z) · (` (S Z) · ` Z))) · (ƛ `suc ` Z) · `zero
@ -1114,10 +1114,10 @@ _ : eval (gas 100) (twoᶜ · sucᶜ · `zero) ≡
∎)
(done (V-suc (V-suc V-zero)))
_ = refl
\end{code}
```
Two plus two is four:
\begin{code}
```
_ : eval (gas 100) (plus · two · two) ≡
steps
((μ
@ -1256,10 +1256,10 @@ _ : eval (gas 100) (plus · two · two) ≡
∎)
(done (V-suc (V-suc (V-suc (V-suc V-zero)))))
_ = refl
\end{code}
```
And the corresponding term for Church numerals:
\begin{code}
```
_ : eval (gas 100) (plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero) ≡
steps
((ƛ
@ -1316,7 +1316,7 @@ _ : eval (gas 100) (plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero) ≡
∎)
(done (V-suc (V-suc (V-suc (V-suc V-zero)))))
_ = refl
\end{code}
```
We omit the proof that reduction is deterministic, since it is
tedious and almost identical to the previous proof.
@ -1326,9 +1326,9 @@ tedious and almost identical to the previous proof.
Using the evaluator, confirm that two times two is four.
\begin{code}
```
-- Your code goes here
\end{code}
```
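Assuming a `mul` term as sketched in the earlier exercise (a hypothetical name, not defined in this chapter), one would form the term and inspect its evaluation, for instance with `C-c C-n`:
```
2*2 : ∅ ⊢ `ℕ
2*2 = mul · two · two
```
Evaluating `eval (gas 100) 2*2` should yield a reduction sequence ending in `` `suc `suc `suc `suc `zero `` together with `done`.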
## Inherently-typed is golden

View file

@ -6,9 +6,9 @@ permalink : /Decidable/
next : /Lists/
---
\begin{code}
```
module plfa.Decidable where
\end{code}
```
We have a choice as to how to represent relations:
as an inductive data type of _evidence_ that the relation holds,
@ -21,7 +21,7 @@ of a new notion of _decidable_.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open Eq.≡-Reasoning
open import Data.Unit using (⊤; tt)
open import Data.Empty using (⊥; ⊥-elim)
open import plfa.Relations using (_<_; z<s; s<s)
open import plfa.Isomorphism using (_⇔_)
\end{code}
```
## Evidence vs Computation
@ -43,7 +43,7 @@ Recall that Chapter [Relations][plfa.Relations]
defined comparison as an inductive datatype,
which provides _evidence_ that one number
is less than or equal to another:
\begin{code}
```
infix 4 _≤_
data _≤_ : ℕ → ℕ → Set where
@ -56,16 +56,16 @@ data _≤_ : → Set where
→ m ≤ n
-------------
→ suc m ≤ suc n
\end{code}
```
For example, we can provide evidence that `2 ≤ 4`,
and show there is no possible evidence that `4 ≤ 2`:
\begin{code}
```
2≤4 : 2 ≤ 4
2≤4 = s≤s (s≤s z≤n)
¬4≤2 : ¬ (4 ≤ 2)
¬4≤2 (s≤s (s≤s ()))
\end{code}
```
The occurrence of `()` attests to the fact that there is
no possible evidence for `2 ≤ 0`, which `z≤n` cannot match
(because `2` is not `zero`) and `s≤s` cannot match
@ -73,28 +73,28 @@ no possible evidence for `2 ≤ 0`, which `z≤n` cannot match
An alternative, which may seem more familiar, is to define a
type of booleans:
\begin{code}
```
data Bool : Set where
true : Bool
false : Bool
\end{code}
```
Given booleans, we can define a function of two numbers that
_computes_ to `true` if the comparison holds and to `false` otherwise:
\begin{code}
```
infix 4 _≤ᵇ_
_≤ᵇ_ : ℕ → ℕ → Bool
zero ≤ᵇ n = true
suc m ≤ᵇ zero = false
suc m ≤ᵇ suc n = m ≤ᵇ n
\end{code}
```
The first and last clauses of this definition resemble the two
constructors of the corresponding inductive datatype, while the
middle clause arises because there is no possible evidence that
`suc m ≤ zero` for any `m`.
For example, we can compute that `2 ≤ᵇ 4` holds,
and we can compute that `4 ≤ᵇ 2` does not hold:
\begin{code}
```
_ : (2 ≤ᵇ 4) ≡ true
_ =
begin
@ -118,7 +118,7 @@ _ =
≡⟨⟩
false
\end{code}
```
In the first case, it takes two steps to reduce the first argument to zero,
and one more step to compute true, corresponding to the two uses of `s≤s`
and the one use of `z≤n` when providing evidence that `2 ≤ 4`.
@ -131,11 +131,11 @@ and the one use of `()` when showing there can be no evidence that `4 ≤ 2`.
We would hope to be able to show these two approaches are related, and
indeed we can. First, we define a function that lets us map from the
computation world to the evidence world:
\begin{code}
```
T : Bool → Set
T true = ⊤
T false = ⊥
\end{code}
```
Recall that `⊤` is the unit type which contains the single element `tt`,
and the `⊥` is the empty type which contains no values. (Also note that
`T` is a capital letter t, and distinct from `⊤`.) If `b` is of type `Bool`,
@ -145,19 +145,19 @@ no possible evidence that `T b` holds if `b` is false.
Another way to put this is that `T b` is inhabited exactly when `b ≡ true`
is inhabited.
In the forward direction, we need to do a case analysis on the boolean `b`:
\begin{code}
```
T→≡ : ∀ (b : Bool) → T b → b ≡ true
T→≡ true tt = refl
T→≡ false ()
\end{code}
```
If `b` is true then `T b` is inhabited by `tt` and `b ≡ true` is inhabited
by `refl`, while if `b` is false then `T b` is uninhabited.
In the reverse direction, there is no need for a case analysis on the boolean `b`:
\begin{code}
```
≡→T : ∀ {b : Bool} → b ≡ true → T b
≡→T refl = tt
\end{code}
```
If `b ≡ true` is inhabited by `refl` we know that `b` is `true` and
hence `T b` is inhabited by `tt`.
@ -165,12 +165,12 @@ Now we can show that `T (m ≤ᵇ n)` is inhabited exactly when `m ≤ n` is inh
In the forward direction, we consider the three clauses in the definition
of `_≤ᵇ_`:
\begin{code}
```
≤ᵇ→≤ : ∀ (m n : ℕ) → T (m ≤ᵇ n) → m ≤ n
≤ᵇ→≤ zero n tt = z≤n
≤ᵇ→≤ (suc m) zero ()
≤ᵇ→≤ (suc m) (suc n) t = s≤s (≤ᵇ→≤ m n t)
\end{code}
```
In the first clause, we immediately have that `zero ≤ᵇ n` is
true, so `T (m ≤ᵇ n)` is evidenced by `tt`, and correspondingly `m ≤ n` is
evidenced by `z≤n`. In the middle clause, we immediately have that
@ -184,11 +184,11 @@ We recursively invoke the function to get evidence that `m ≤ n`, which
In the reverse direction, we consider the possible forms of evidence
that `m ≤ n`:
\begin{code}
```
≤→≤ᵇ : ∀ {m n : ℕ} → m ≤ n → T (m ≤ᵇ n)
≤→≤ᵇ z≤n = tt
≤→≤ᵇ (s≤s m≤n) = ≤→≤ᵇ m≤n
\end{code}
```
If the evidence is `z≤n` then we immediately have that `zero ≤ᵇ n` is
true, so `T (m ≤ᵇ n)` is evidenced by `tt`. If the evidence is `s≤s`
applied to `m≤n`, then `suc m ≤ᵇ suc n` reduces to `m ≤ᵇ n`, and we
@ -216,11 +216,11 @@ does the relation hold or does it not? Conversely, the evidence approach tells
us exactly why the relation holds, but we are responsible for generating the
evidence. But it is easy to define a type that combines the benefits of
both approaches. It is called `Dec A`, where `Dec` is short for _decidable_:
\begin{code}
```
data Dec (A : Set) : Set where
yes : A → Dec A
no : ¬ A → Dec A
\end{code}
```
Like booleans, the type has two constructors. A value of type `Dec A`
is either of the form `yes x`, where `x` provides evidence that `A` holds,
or of the form `no ¬x`, where `¬x` provides evidence that `A` cannot hold
@ -231,13 +231,13 @@ is less than or equal to the other, and provides evidence to justify its conclus
First, we introduce two functions useful for constructing evidence that
an inequality does not hold:
\begin{code}
```
¬s≤z : ∀ {m : ℕ} → ¬ (suc m ≤ zero)
¬s≤z ()
¬s≤s : ∀ {m n : ℕ} → ¬ (m ≤ n) → ¬ (suc m ≤ suc n)
¬s≤s ¬m≤n (s≤s m≤n) = ¬m≤n m≤n
\end{code}
```
The first of these asserts that `¬ (suc m ≤ zero)`, and follows by
absurdity, since any evidence of inequality has the form `zero ≤ n`
or `suc m ≤ suc n`, neither of which match `suc m ≤ zero`. The second
@ -247,14 +247,14 @@ form `s≤s m≤n` where `m≤n` is evidence that `m ≤ n`. Hence, we have
a contradiction, evidenced by `¬m≤n m≤n`.
Using these, it is straightforward to decide an inequality:
\begin{code}
```
_≤?_ : ∀ (m n : ℕ) → Dec (m ≤ n)
zero ≤? n = yes z≤n
suc m ≤? zero = no ¬s≤z
suc m ≤? suc n with m ≤? n
... | yes m≤n = yes (s≤s m≤n)
... | no ¬m≤n = no (¬s≤s ¬m≤n)
\end{code}
```
As with `_≤ᵇ_`, the definition has three clauses. In the first
clause, it is immediate that `zero ≤ n` holds, and it is evidenced by
`z≤n`. In the second clause, it is immediate that `suc m ≤ zero` does
@ -275,13 +275,13 @@ to derive them from `_≤?_`.
We can use our new function to _compute_ the _evidence_ that earlier we had to
think up on our own:
\begin{code}
```
_ : 2 ≤? 4 ≡ yes (s≤s (s≤s z≤n))
_ = refl
_ : 4 ≤? 2 ≡ no (¬s≤s (¬s≤s ¬s≤z))
_ = refl
\end{code}
```
You can check that Agda will indeed compute these values. Typing
`C-c C-n` and providing `2 ≤? 4` or `4 ≤? 2` as the requested expression
causes Agda to print the values given above.
@ -294,26 +294,26 @@ trouble normalising evidence of negation.)
#### Exercise `_<?_` (recommended)
Analogous to the function above, define a function to decide strict inequality:
\begin{code}
```
postulate
_<?_ : ∀ (m n : ℕ) → Dec (m < n)
\end{code}
```
\begin{code}
```
-- Your code goes here
\end{code}
```
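One way to discharge the exercise (replacing the postulate above, so the name does not clash), using `z<s` and `s<s` imported from the Relations chapter:
```
¬n<z : ∀ {n : ℕ} → ¬ (n < zero)
¬n<z ()

¬s<s : ∀ {m n : ℕ} → ¬ (m < n) → ¬ (suc m < suc n)
¬s<s ¬m<n (s<s m<n) = ¬m<n m<n

_<?_ : ∀ (m n : ℕ) → Dec (m < n)
m     <? zero   = no ¬n<z
zero  <? suc n  = yes z<s
suc m <? suc n with m <? n
...             | yes m<n  = yes (s<s m<n)
...             | no ¬m<n  = no (¬s<s ¬m<n)
```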
#### Exercise `_≡ℕ?_`
Define a function to decide whether two naturals are equal:
\begin{code}
```
postulate
_≡ℕ?_ : ∀ (m n : ℕ) → Dec (m ≡ n)
\end{code}
```
\begin{code}
```
-- Your code goes here
\end{code}
```
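A possible definition (again replacing the postulate); the mixed cases are handled by absurd lambdas:
```
_≡ℕ?_ : ∀ (m n : ℕ) → Dec (m ≡ n)
zero  ≡ℕ? zero   = yes refl
zero  ≡ℕ? suc n  = no (λ())
suc m ≡ℕ? zero   = no (λ())
suc m ≡ℕ? suc n with m ≡ℕ? n
...              | yes refl  = yes refl
...              | no  m≢n   = no (λ{ refl → m≢n refl })
```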
## Decidables from booleans, and booleans from decidables
@ -321,12 +321,12 @@ postulate
Curious readers might wonder if we could reuse the definition of
`m ≤ᵇ n`, together with the proofs that it is equivalent to `m ≤ n`, to show
decidability. Indeed, we can do so as follows:
\begin{code}
```
_≤?′_ : ∀ (m n : ℕ) → Dec (m ≤ n)
m ≤?′ n with m ≤ᵇ n | ≤ᵇ→≤ m n | ≤→≤ᵇ {m} {n}
... | true | p | _ = yes (p tt)
... | false | _ | ¬p = no ¬p
\end{code}
```
If `m ≤ᵇ n` is true then `≤ᵇ→≤` yields a proof that `m ≤ n` holds,
while if it is false then `≤→≤ᵇ` takes a proof that `m ≤ n` holds into a contradiction.
@ -355,20 +355,20 @@ section. If one really wants `_≤ᵇ_`, then it and its properties are easily
from `_≤?_`, as we will now show.
Erasure takes a decidable value to a boolean:
\begin{code}
```
⌊_⌋ : ∀ {A : Set} → Dec A → Bool
⌊ yes x ⌋ = true
⌊ no ¬x ⌋ = false
\end{code}
```
Using erasure, we can easily derive `_≤ᵇ_` from `_≤?_`:
\begin{code}
```
_≤ᵇ′_ : ℕ → ℕ → Bool
m ≤ᵇ′ n = ⌊ m ≤? n ⌋
\end{code}
```
Further, if `D` is a value of type `Dec A`, then `T ⌊ D ⌋` is
inhabited exactly when `A` is inhabited:
\begin{code}
```
toWitness : ∀ {A : Set} {D : Dec A} → T ⌊ D ⌋ → A
toWitness {A} {yes x} tt = x
toWitness {A} {no ¬x} ()
@ -376,16 +376,16 @@ toWitness {A} {no ¬x} ()
fromWitness : ∀ {A : Set} {D : Dec A} → A → T ⌊ D ⌋
fromWitness {A} {yes x} _ = tt
fromWitness {A} {no ¬x} x = ¬x x
\end{code}
```
Using these, we can easily derive that `T (m ≤ᵇ′ n)` is inhabited
exactly when `m ≤ n` is inhabited:
\begin{code}
```
≤ᵇ′→≤ : ∀ {m n : ℕ} → T (m ≤ᵇ′ n) → m ≤ n
≤ᵇ′→≤ = toWitness
≤→≤ᵇ′ : ∀ {m n : ℕ} → m ≤ n → T (m ≤ᵇ′ n)
≤→≤ᵇ′ = fromWitness
\end{code}
```
In summary, it is usually best to eschew booleans and rely on decidables.
If you need booleans, they and their properties are easily derived from the
@ -399,14 +399,14 @@ Each of these extends to decidables.
The conjunction of two booleans is true if both are true,
and false if either is false:
\begin{code}
```
infixr 6 _∧_
_∧_ : Bool → Bool → Bool
true ∧ true = true
false ∧ _ = false
_ ∧ false = false
\end{code}
```
In Emacs, the left-hand side of the third equation displays in grey,
indicating that the order of the equations determines which of the
second or the third can match. However, regardless of which matches
@ -414,14 +414,14 @@ the answer is the same.
Correspondingly, given two decidable propositions, we can
decide their conjunction:
\begin{code}
```
infixr 6 _×-dec_
_×-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A × B)
yes x ×-dec yes y = yes ⟨ x , y ⟩
no ¬x ×-dec _ = no λ{ ⟨ x , y ⟩ → ¬x x }
_ ×-dec no ¬y = no λ{ ⟨ x , y ⟩ → ¬y y }
\end{code}
```
The conjunction of two propositions holds if they both hold,
and its negation holds if the negation of either holds.
If both hold, then we pair the evidence for each to yield
@ -436,14 +436,14 @@ yield the contradiction, but it would be equally valid to pick the second.
The disjunction of two booleans is true if either is true,
and false if both are false:
\begin{code}
```
infixr 5 _∨_
_∨_ : Bool → Bool → Bool
true  ∨ _      = true
_     ∨ true   = true
false ∨ false  = false
\end{code}
```
In Emacs, the left-hand side of the second equation displays in grey,
indicating that the order of the equations determines which of the
first or the second can match. However, regardless of which matches
@ -451,14 +451,14 @@ the answer is the same.
Correspondingly, given two decidable propositions, we can
decide their disjunction:
\begin{code}
```
infixr 5 _⊎-dec_
_⊎-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A ⊎ B)
yes x ⊎-dec _ = yes (inj₁ x)
_ ⊎-dec yes y = yes (inj₂ y)
no ¬x ⊎-dec no ¬y = no λ{ (inj₁ x) → ¬x x ; (inj₂ y) → ¬y y }
\end{code}
```
The disjunction of two propositions holds if either holds,
and its negation holds if the negation of both hold.
If either holds, we inject the evidence to yield
@ -473,18 +473,18 @@ but it would be equally valid to pick the second.
The negation of a boolean is false if its argument is true,
and vice versa:
\begin{code}
```
not : Bool → Bool
not true = false
not false = true
\end{code}
```
Correspondingly, given a decidable proposition, we
can decide its negation:
\begin{code}
```
¬? : ∀ {A : Set} → Dec A → Dec (¬ A)
¬? (yes x) = no (¬¬-intro x)
¬? (no ¬x) = yes ¬x
\end{code}
```
We simply swap yes and no. In the first equation,
the right-hand side asserts that the negation of `¬ A` holds,
in other words, that `¬ ¬ A` holds, which is an easy consequence
@ -492,12 +492,12 @@ of the fact that `A` holds.
There is also a slightly less familiar connective,
corresponding to implication:
\begin{code}
```
_⊃_ : Bool → Bool → Bool
_ ⊃ true = true
false ⊃ _ = true
true ⊃ false = false
\end{code}
```
One boolean implies another if
whenever the first is true then the second is true.
Hence, the implication of two booleans is true if
@ -510,12 +510,12 @@ the answer is the same.
Correspondingly, given two decidable propositions,
we can decide if the first implies the second:
\begin{code}
```
_→-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A → B)
_ →-dec yes y = yes (λ _ → y)
no ¬x →-dec _ = yes (λ x → ⊥-elim (¬x x))
yes x →-dec no ¬y = no (λ f → ¬y (f x))
\end{code}
```
The implication holds if either the second holds or
the negation of the first holds, and its negation
holds if the first holds and the negation of the second holds.
@ -538,32 +538,32 @@ on which matches; but either is equally valid.
#### Exercise `erasure`
Show that erasure relates corresponding boolean and decidable operations:
\begin{code}
```
postulate
∧-× : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∧ ⌊ y ⌋ ≡ ⌊ x ×-dec y ⌋
∨-⊎ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∨ ⌊ y ⌋ ≡ ⌊ x ⊎-dec y ⌋
not-¬ : ∀ {A : Set} (x : Dec A) → not ⌊ x ⌋ ≡ ⌊ ¬? x ⌋
\end{code}
```
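The first of these can be proved by a four-way case analysis on the two decidables (a sketch that would replace the postulate); `∨-⊎` and `not-¬` follow exactly the same pattern:
```
∧-× : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∧ ⌊ y ⌋ ≡ ⌊ x ×-dec y ⌋
∧-× (yes x) (yes y) = refl
∧-× (yes x) (no ¬y) = refl
∧-× (no ¬x) (yes y) = refl
∧-× (no ¬x) (no ¬y) = refl
```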
#### Exercise `iff-erasure` (recommended)
Give analogues of the `_⇔_` operation from
Chapter [Isomorphism][plfa.Isomorphism#iff],
operation on booleans and decidables, and also show the corresponding erasure:
\begin{code}
```
postulate
_iff_ : Bool → Bool → Bool
_⇔-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A ⇔ B)
iff-⇔ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ iff ⌊ y ⌋ ≡ ⌊ x ⇔-dec y ⌋
\end{code}
```
\begin{code}
```
-- Your code goes here
\end{code}
```
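A sketch of the boolean connective and the decidable version. It assumes the `to` and `from` fields of `_⇔_` can be matched with record patterns; the erasure property `iff-⇔` then follows by a four-way case split, just as for `∧-×`:
```
_iff_ : Bool → Bool → Bool
true  iff true   = true
true  iff false  = false
false iff true   = false
false iff false  = true

_⇔-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A ⇔ B)
yes x ⇔-dec yes y = yes (record { to = λ _ → y ; from = λ _ → x })
yes x ⇔-dec no ¬y = no (λ{ record { to = A→B ; from = B→A } → ¬y (A→B x) })
no ¬x ⇔-dec yes y = no (λ{ record { to = A→B ; from = B→A } → ¬x (B→A y) })
no ¬x ⇔-dec no ¬y = yes (record { to = λ x → ⊥-elim (¬x x) ; from = λ y → ⊥-elim (¬y y) })
```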
## Standard Library
\begin{code}
```
import Data.Bool.Base using (Bool; true; false; T; _∧_; _∨_; not)
import Data.Nat using (_≤?_)
import Relation.Nullary using (Dec; yes; no)
@ -572,7 +572,7 @@ import Relation.Nullary.Negation using (¬?)
import Relation.Nullary.Product using (_×-dec_)
import Relation.Nullary.Sum using (_⊎-dec_)
import Relation.Binary using (Decidable)
\end{code}
```
## Unicode

View file

@ -6,9 +6,9 @@ permalink : /Denotational/
next : /Compositional/
---
\begin{code}
```
module plfa.Denotational where
\end{code}
```
The lambda calculus is a language about _functions_, that is, mappings
from input to output. In computing we often think of such
@ -54,7 +54,7 @@ down a denotational semantics of the lambda calculus.
## Imports
\begin{code}
```
open import Relation.Binary.PropositionalEquality
using (_≡_; _≢_; refl; sym; cong; cong₂; cong-app)
open import Data.Product using (_×_; Σ; Σ-syntax; ∃; ∃-syntax; proj₁; proj₂) renaming (_,_ to ⟨_,_⟩)
@ -68,7 +68,7 @@ open import Relation.Nullary using (¬_)
open import Relation.Nullary.Negation using (contradiction)
open import Data.Empty using (⊥-elim)
open import Function using (_∘_)
\end{code}
```
## Values
@ -88,7 +88,7 @@ either a single mapping or the empty set.
outputs according to both `v` and `w`. Think of it as taking the
union of the two sets.
\begin{code}
```
infixr 7 _↦_
infixl 5 _⊔_
@ -96,14 +96,14 @@ data Value : Set where
⊥ : Value
_↦_ : Value → Value → Value
_⊔_ : Value → Value → Value
\end{code}
```
The `⊑` relation adapts the familiar notion of subset to the Value data
type. This relation plays the key role in enabling self-application.
There are two rules that are specific to functions, `Fun⊑` and `Dist⊑`,
which we discuss below.
\begin{code}
```
infix 4 _⊑_
data _⊑_ : Value → Value → Set where
@ -141,7 +141,7 @@ data _⊑_ : Value → Value → Set where
Dist⊑ : ∀{v w w′}
---------------------------------
→ v ↦ (w ⊔ w′) ⊑ (v ↦ w) ⊔ (v ↦ w′)
\end{code}
```
The first five rules are straightforward.
@ -156,42 +156,42 @@ outputs.
The `⊑` relation is reflexive.
\begin{code}
```
Refl⊑ : ∀ {v} → v ⊑ v
Refl⊑ {⊥} = Bot⊑
Refl⊑ {v ↦ v′} = Fun⊑ Refl⊑ Refl⊑
Refl⊑ {v₁ ⊔ v₂} = ConjL⊑ (ConjR1⊑ Refl⊑) (ConjR2⊑ Refl⊑)
\end{code}
```
The `⊔` operation is monotonic with respect to `⊑`, that is, given two
larger values it produces a larger value.
\begin{code}
```
⊔⊑⊔ : ∀ {v w v′ w′}
→ v ⊑ v′ → w ⊑ w′
-----------------------
→ (v ⊔ w) ⊑ (v′ ⊔ w′)
⊔⊑⊔ d₁ d₂ = ConjL⊑ (ConjR1⊑ d₁) (ConjR2⊑ d₂)
\end{code}
```
The `Dist⊑` rule can be used to combine two entries even when the
input values are not identical. One can first combine the two inputs
using ⊔ and then apply the `Dist⊑` rule to obtain the following
property.
\begin{code}
```
Dist⊔↦⊔ : ∀{v v′ w w′ : Value}
→ (v ⊔ v′) ↦ (w ⊔ w′) ⊑ (v ↦ w) ⊔ (v′ ↦ w′)
Dist⊔↦⊔ = Trans⊑ Dist⊑ (⊔⊑⊔ (Fun⊑ (ConjR1⊑ Refl⊑) Refl⊑)
(Fun⊑ (ConjR2⊑ Refl⊑) Refl⊑))
\end{code}
```
<!-- above might read more nicely if we introduce inequational reasoning -->
If the join `u ⊔ v` is less than another value `w`,
then both `u` and `v` are less than `w`.
\begin{code}
```
⊔⊑-invL : ∀{u v w : Value}
→ u ⊔ v ⊑ w
---------
@ -209,7 +209,7 @@ then both `u` and `v` are less than `w`.
⊔⊑-invR (ConjR1⊑ lt) = ConjR1⊑ (⊔⊑-invR lt)
⊔⊑-invR (ConjR2⊑ lt) = ConjR2⊑ (⊔⊑-invR lt)
⊔⊑-invR (Trans⊑ lt1 lt2) = Trans⊑ (⊔⊑-invR lt1) lt2
\end{code}
```
## Environments
@ -217,13 +217,13 @@ then both `u` and `v` are less than `w`.
An environment gives meaning to the free variables in a term by
mapping variables to values.
\begin{code}
```
Env : Context → Set
Env Γ = ∀ (x : Γ ∋ ★) → Value
\end{code}
```
We have the empty environment, and we can extend an environment.
\begin{code}
```
`∅ : Env ∅
`∅ ()
@ -232,11 +232,11 @@ infixl 5 _`,_
_`,_ : ∀ {Γ} → Env Γ → Value → Env (Γ , ★)
(γ `, v) Z = v
(γ `, v) (S x) = γ x
\end{code}
```
We can recover the initial environment from an extended environment,
and the last value. Putting them back together again takes us where we started.
\begin{code}
```
init : ∀ {Γ} → Env (Γ , ★) → Env Γ
init γ x = γ (S x)
@ -249,40 +249,40 @@ init-last {Γ} γ = extensionality lemma
lemma : ∀ (x : Γ , ★ ∋ ★) → γ x ≡ (init γ `, last γ) x
lemma Z = refl
lemma (S x) = refl
\end{code}
```
The nth function takes a de Bruijn index and finds the corresponding
value in the environment.
\begin{code}
```
nth : ∀{Γ} → (Γ ∋ ★) → Env Γ → Value
nth x ρ = ρ x
\end{code}
```
We extend the `⊑` relation point-wise to environments with the
following definition.
\begin{code}
```
_`⊑_ : ∀ {Γ} → Env Γ → Env Γ → Set
_`⊑_ {Γ} γ δ = ∀ (x : Γ ∋ ★) → γ x ⊑ δ x
\end{code}
```
We define a bottom environment and a join operator on environments,
which takes the point-wise join of their values.
\begin{code}
```
`⊥ : ∀ {Γ} → Env Γ
`⊥ x = ⊥
_`⊔_ : ∀ {Γ} → Env Γ → Env Γ → Env Γ
(γ `⊔ δ) x = γ x ⊔ δ x
\end{code}
```
The `Refl⊑`, `ConjR1⊑`, and `ConjR2⊑` rules lift to environments. So
the join of two environments `γ` and `δ` is greater than the first
environment `γ` or the second environment `δ`.
\begin{code}
```
`Refl⊑ : ∀ {Γ} {γ : Env Γ} → γ `⊑ γ
`Refl⊑ {Γ} {γ} x = Refl⊑ {γ x}
@ -291,7 +291,7 @@ EnvConjR1⊑ γ δ x = ConjR1⊑ Refl⊑
EnvConjR2⊑ : ∀ {Γ} → (γ : Env Γ) → (δ : Env Γ) → δ `⊑ (γ `⊔ δ)
EnvConjR2⊑ γ δ x = ConjR2⊑ Refl⊑
\end{code}
```
## Denotational Semantics
@ -302,7 +302,7 @@ quite natural, but don't let the similarity fool you. There are
subtle but important differences! So here is the definition of the
semantics, which we discuss in detail in the following paragraphs.
\begin{code}
```
infix 3 _⊢_↓_
data _⊢_↓_ : ∀{Γ} → Env Γ → (Γ ⊢ ★) → Value → Set where
@ -337,7 +337,7 @@ data _⊢_↓_ : ∀{Γ} → Env Γ → (Γ ⊢ ★) → Value → Set where
→ w ⊑ v
---------
γ ⊢ M ↓ w
\end{code}
```
Consider the rule for lambda abstractions, `↦-intro`. It says that a
lambda abstraction results in a single-entry table that maps the input
@ -346,18 +346,18 @@ environment with `v` bound to its parameter produces the output `w`.
As a simple example of this rule, we can see that the identity function
maps `⊥` to `⊥`.
\begin{code}
```
id : ∅ ⊢ ★
id = ƛ # 0
\end{code}
```
\begin{code}
```
denot-id : ∀ {γ v} → γ ⊢ id ↓ v ↦ v
denot-id = ↦-intro var
denot-id-two : ∀ {γ v w} → γ ⊢ id ↓ (v ↦ v) ⊔ (w ↦ w)
denot-id-two = ⊔-intro denot-id denot-id
\end{code}
```
Of course, we will need tables with many rows to capture the meaning
of lambda abstractions. These can be constructed using the `⊔-intro`
@ -372,10 +372,10 @@ In the following we show that the identity function produces a table
containing both of the previous results, `⊥ ↦ ⊥` and `(⊥ ↦ ⊥) ↦ (⊥ ↦
⊥)`.
\begin{code}
```
denot-id3 : `∅ ⊢ id ↓ (⊥ ↦ ⊥) ⊔ (⊥ ↦ ⊥) ↦ (⊥ ↦ ⊥)
denot-id3 = denot-id-two
\end{code}
```
We most often think of the judgment `γ ⊢ M ↓ v` as taking the
environment `γ` and term `M` as input, producing the result `v`. However,
@ -398,10 +398,10 @@ the identity function to itself. Indeed, we have both that
`∅ ⊢ id ↓ (u ↦ u) ↦ (u ↦ u)` and also `∅ ⊢ id ↓ (u ↦ u)`, so we can
apply the rule `↦-elim`.
\begin{code}
```
id-app-id : ∀ {u : Value} → `∅ ⊢ id · id ↓ (u ↦ u)
id-app-id {u} = ↦-elim (↦-intro var) (↦-intro var)
\end{code}
```
Next we revisit the Church numeral two. This function has two
parameters: a function and an arbitrary value `u`, and it applies the
@ -415,7 +415,7 @@ In particular, we use the ConjR1⊑ and ConjR2⊑ to select `u ↦ v` and `v
twoᶜ is that it takes this table and parameter `u`, and it returns `w`.
Indeed we derive this as follows.
\begin{code}
```
denot-twoᶜ : ∀{u v w : Value} → `∅ ⊢ twoᶜ ↓ ((u ↦ v ⊔ v ↦ w) ↦ (u ↦ w))
denot-twoᶜ {u}{v}{w} =
↦-intro (↦-intro (↦-elim (sub var lt1) (↦-elim (sub var lt2) var)))
@ -424,7 +424,7 @@ denot-twoᶜ {u}{v}{w} =
lt2 : u ↦ v ⊑ u ↦ v ⊔ v ↦ w
lt2 = (ConjR1⊑ (Fun⊑ Refl⊑ Refl⊑))
\end{code}
```
Next we have a classic example of self application: `Δ = λx. (x x)`.
@ -435,14 +435,14 @@ output of `Δ` is `w`. The derivation is given below. The first occurrences
of `x` evaluates to `v ↦ w`, the second occurrence of `x` evaluates to `v`,
and then the result of the application is `w`.
\begin{code}
```
Δ : ∅ ⊢ ★
Δ = (ƛ (# 0) · (# 0))
denot-Δ : ∀ {v w} → `∅ ⊢ Δ ↓ ((v ↦ w ⊔ v) ↦ w)
denot-Δ = ↦-intro (↦-elim (sub var (ConjR1⊑ Refl⊑))
(sub var (ConjR2⊑ Refl⊑)))
\end{code}
```
One might worry whether this semantics can deal with diverging
programs. The `⊥` value and the `⊥-intro` rule provide a way to handle
@ -459,21 +459,21 @@ whenever we can show that a program evaluates to two values, we can apply
matches the input of the first occurrence of `Δ`, so we can conclude that
the result of the application is `⊥`.
\begin{code}
```
Ω : ∅ ⊢ ★
Ω = Δ · Δ
denot-Ω : `∅ ⊢ Ω ↓ ⊥
denot-Ω = ↦-elim denot-Δ (⊔-intro (↦-intro ⊥-intro) ⊥-intro)
\end{code}
```
A shorter derivation of the same result is by just one use of the
`⊥-intro` rule.
\begin{code}
```
denot-Ω' : `∅ ⊢ Ω ↓ ⊥
denot-Ω' = ⊥-intro
\end{code}
```
Just because one can derive `∅ ⊢ M ↓ ⊥` for some closed term `M` doesn't mean
that `M` necessarily diverges. There may be other derivations that
@ -489,7 +489,7 @@ Instead, the `↦-elim` rule seems to require an exact match. However,
because of the `sub` rule, application really does allow larger
arguments.
\begin{code}
```
↦-elim2 : ∀ {Γ} {γ : Env Γ} {M₁ M₂ v₁ v₂ v₃}
γ ⊢ M₁ ↓ (v₁ ↦ v₃)
γ ⊢ M₂ ↓ v₂
@ -497,7 +497,7 @@ arguments.
------------------
γ ⊢ (M₁ · M₂) ↓ v₃
↦-elim2 d₁ d₂ lt = ↦-elim d₁ (sub d₂ lt)
\end{code}
```
## Denotations and denotational equality
@ -506,41 +506,41 @@ Next we define a notion of denotational equality based on the above
semantics. Its statement makes use of an if-and-only-if, which we
define as follows.
\begin{code}
```
_iff_ : Set → Set → Set
P iff Q = (P → Q) × (Q → P)
\end{code}
```
Another way to view the denotational semantics is as a function that
maps a term to a relation from environments to values. That is, the
_denotation_ of a term is a relation from environments to values.
\begin{code}
```
Denotation : Context → Set₁
Denotation Γ = (Env Γ → Value → Set)
\end{code}
```
The following function gives this alternative view of the semantics,
which really just amounts to changing the order of the parameters.
\begin{code}
```
ℰ : ∀{Γ} → (M : Γ ⊢ ★) → Denotation Γ
ℰ M = λ γ v → γ ⊢ M ↓ v
\end{code}
```
In general, two denotations are equal when they produce the same
values in the same environment.
\begin{code}
```
infix 3 _≃_
_≃_ : ∀ {Γ} → (Denotation Γ) → (Denotation Γ) → Set
(_≃_ {Γ} D₁ D₂) = (γ : Env Γ) → (v : Value) → D₁ γ v iff D₂ γ v
\end{code}
```
Denotational equality is an equivalence relation.
\begin{code}
```
≃-refl : ∀ {Γ : Context} → {M : Denotation Γ}
→ M ≃ M
≃-refl γ v = ⟨ (λ x → x) , (λ x → x) ⟩
@ -558,7 +558,7 @@ Denotational equality is an equivalence relation.
→ M₁ ≃ M₃
≃-trans eq1 eq2 γ v = ⟨ (λ z → proj₁ (eq2 γ v) (proj₁ (eq1 γ v) z)) ,
(λ z → proj₂ (eq1 γ v) (proj₂ (eq2 γ v) z)) ⟩
\end{code}
```
Two terms `M` and `N` are denotational equal when their denotations are
equal, that is, `ℰ M ≃ ℰ N`.
@ -566,7 +566,7 @@ equal, that is, ` M ≃ N`.
The following submodule introduces equational reasoning for the `≃`
relation.
\begin{code}
```
module ≃-Reasoning {Γ : Context} where
infix 1 start_
@ -596,7 +596,7 @@ module ≃-Reasoning {Γ : Context} where
-----
→ x ≃ x
(x ☐) = ≃-refl
\end{code}
```
## Road map for the following chapters
@ -690,7 +690,7 @@ that `ρ` is a renaming that maps variables in `γ` into variables with
equal or larger values in `δ`. This lemma says that extending the
renaming produces a renaming `ext ρ` that maps `γ , v` to `δ , v`.
\begin{code}
```
ext-nth : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ (ρ : Rename Γ Δ)
γ `⊑ (δ ∘ ρ)
@ -698,7 +698,7 @@ ext-nth : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ (γ `, v) `⊑ ((δ `, v) ∘ ext ρ)
ext-nth ρ lt Z = Refl⊑
ext-nth ρ lt (S n) = lt n
\end{code}
```
We proceed by cases on the de Bruijn index `n`.
@ -713,7 +713,7 @@ results in `v` when evaluated in environment `γ`, then applying the
renaming to `M` produces a program that results in the same value `v` when
evaluated in `δ`.
\begin{code}
```
rename-pres : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ} {M : Γ ⊢ ★}
→ (ρ : Rename Γ Δ)
γ `⊑ (δ ∘ ρ)
@ -730,7 +730,7 @@ rename-pres ρ lt (⊔-intro d d₁) =
⊔-intro (rename-pres ρ lt d) (rename-pres ρ lt d₁)
rename-pres ρ lt (sub d lt′) =
sub (rename-pres ρ lt d) lt′
\end{code}
```
The proof is by induction on the semantics of `M`. As you can see, all
of the cases are trivial except the cases for variables and lambda.
@ -755,7 +755,7 @@ function. So we apply the renaming lemma with the identity renaming,
which gives us `δ ⊢ rename (λ {A} x → x) M ↓ v`, and then we apply the
`rename-id` lemma to obtain `δ ⊢ M ↓ v`.
\begin{code}
```
Env⊑ : ∀ {Γ} {γ : Env Γ} {δ : Env Γ} {M v}
γ ⊢ M ↓ v
γ `⊑ δ
@ -765,13 +765,13 @@ Env⊑{Γ}{γ}{δ}{M}{v} d lt
with rename-pres{Γ}{Γ}{v}{γ}{δ}{M} (λ {A} x → x) lt d
... | d′ rewrite rename-id {Γ}{★}{M} =
d′
\end{code}
```
In the proof that substitution reflects denotations, in the case for
lambda abstraction, we use a minor variation of `Env⊑`, in which just
the last element of the environment gets larger.
\begin{code}
```
up-env : ∀ {Γ} {γ : Env Γ} {M v u₁ u₂}
→ (γ `, u₁) ⊢ M ↓ v
→ u₁ ⊑ u₂
@ -782,7 +782,7 @@ up-env d lt = Env⊑ d (nth-le lt)
nth-le : ∀ {γ u₁ u₂} → u₁ ⊑ u₂ → (γ `, u₁) `⊑ (γ `, u₂)
nth-le lt Z = lt
nth-le lt (S n) = Refl⊑
\end{code}
```
## Inversion of the less-than relation for functions
@ -814,30 +814,30 @@ other contexts one can instead think of `⊥` as the empty set, but here
we must think of it as an element.) We write `u ∈ v` to say that `u` is
an element of `v`, as defined below.
\begin{code}
```
infix 5 _∈_
_∈_ : Value → Value → Set
u ∈ ⊥ = u ≡ ⊥
u ∈ v ↦ w = u ≡ v ↦ w
u ∈ (v ⊔ w) = u ∈ v ⊎ u ∈ w
\end{code}
```
So we can represent a collection of values simply as a value. We
write `v ⊆ w` to say that all the elements of `v` are also in `w`.
\begin{code}
```
infix 5 _⊆_
_⊆_ : Value → Value → Set
v ⊆ w = ∀{u} → u ∈ v → u ∈ w
\end{code}
```
The notions of membership and inclusion for values are closely related
to the less-than relation. They are narrower relations in that they
imply the less-than relation but not the other way around.
\begin{code}
```
∈→⊑ : ∀{u v : Value}
→ u ∈ v
-----
@ -856,31 +856,31 @@ imply the less-than relation but not the other way around.
⊆→⊑ {u ↦ u′} s with s {u ↦ u′} refl
... | x = ∈→⊑ x
⊆→⊑ {u ⊔ u′} s = ConjL⊑ (⊆→⊑ (λ z → s (inj₁ z))) (⊆→⊑ (λ z → s (inj₂ z)))
\end{code}
```
We shall also need some inversion principles for value inclusion. If
the union of `u` and `v` is included in `w`, then of course both `u` and
`v` are each included in `w`.
\begin{code}
```
⊔⊆-inv : ∀{u v w : Value}
→ (u ⊔ v) ⊆ w
---------------
→ u ⊆ w × v ⊆ w
⊔⊆-inv uvw = ⟨ (λ x → uvw (inj₁ x)) , (λ x → uvw (inj₂ x)) ⟩
\end{code}
```
In our value representation, the function value `v ↦ w` is both an
element and also a singleton set. So if `v ↦ w` is a subset of `u`,
then `v ↦ w` must be a member of `u`.
\begin{code}
```
↦⊆→∈ : ∀{v w u : Value}
→ v ↦ w ⊆ u
---------
→ v ↦ w ∈ u
↦⊆→∈ incl = incl refl
\end{code}
```
### Function values
@ -890,26 +890,26 @@ predicates. We write `Fun u` if `u` is a function value, that is, if
`u ≡ v ↦ w` for some values `v` and `w`. We write `Funs v` if all the elements
of `v` are functions.
\begin{code}
```
data Fun : Value → Set where
fun : ∀{u v w} → u ≡ (v ↦ w) → Fun u
Funs : Value → Set
Funs v = ∀{u} → u ∈ v → Fun u
\end{code}
```
The value `⊥` is not a function.
\begin{code}
```
¬Fun⊥ : ¬ (Fun ⊥)
¬Fun⊥ (fun ())
\end{code}
```
In our values-as-sets representation, our sets always include at least
one element. Thus, if all the elements are functions, there is at
least one that is a function.
\begin{code}
```
Funs∈ : ∀{u}
→ Funs u
→ Σ[ v ∈ Value ] Σ[ w ∈ Value ] v ↦ w ∈ u
@ -919,7 +919,7 @@ Funs∈ {v ↦ w} f = ⟨ v , ⟨ w , refl ⟩ ⟩
Funs∈ {u ⊔ u′} f
with Funs∈ λ z → f (inj₁ z)
... | ⟨ v , ⟨ w , m ⟩ ⟩ = ⟨ v , ⟨ w , (inj₁ m) ⟩ ⟩
\end{code}
```
### Domains and codomains
@ -933,7 +933,7 @@ To this end we define the following dom and cod functions. Given some
value `u` (that represents a set of entries), `dom u` returns the join of
their domains and `cod u` returns the join of their codomains.
\begin{code}
```
dom : (u : Value) → Value
dom ⊥ = ⊥
dom (v ↦ w) = v
@ -943,13 +943,13 @@ cod : (u : Value) → Value
cod ⊥ = ⊥
cod (v ↦ w) = w
cod (u ⊔ u′) = cod u ⊔ cod u′
\end{code}
```
We need just one property each for `dom` and `cod`. Given a collection of
functions represented by value `u`, and an entry `v ↦ w ∈ u`, we know
that `v` is included in the domain of `u`.
\begin{code}
```
↦∈→⊆dom : ∀{u v w : Value}
→ Funs u → (v ↦ w) ∈ u
----------------------
@ -962,13 +962,13 @@ that `v` is included in the domain of `v`.
↦∈→⊆dom {u ⊔ u} fg (inj₂ v↦w∈u) u∈v =
let ih = ↦∈→⊆dom (λ z → fg (inj₂ z)) v↦w∈u in
inj₂ (ih u∈v)
\end{code}
```
Regarding `cod`, suppose we have a collection of functions represented
by `u`, but all of them are just copies of `v ↦ w`. Then the `cod u` is
included in `w`.
\begin{code}
```
⊆↦→cod⊆ : ∀{u v w : Value}
→ u ⊆ v ↦ w
---------
@ -979,7 +979,7 @@ included in `w`.
... | refl = m
⊆↦→cod⊆ {u ⊔ u} s (inj₁ x) = ⊆↦→cod⊆ (λ {C} z → s (inj₁ z)) x
⊆↦→cod⊆ {u ⊔ u′} s (inj₁ x) = ⊆↦→cod⊆ (λ {C} z → s (inj₁ z)) x
⊆↦→cod⊆ {u ⊔ u′} s (inj₂ y) = ⊆↦→cod⊆ (λ {C} z → s (inj₂ z)) y
```
With the `dom` and `cod` functions in hand, we can make precise the
conclusion of the inversion principle for functions, which we package
@ -988,10 +988,10 @@ _factors_ `u` into `u` if `u` is a included in `u`, if `u` contains onl
functions, its domain is less than `v`, and its codomain is greater
than `w`.
\begin{code}
```
factor : (u : Value) → (u′ : Value) → (v : Value) → (w : Value) → Set
factor u u′ v w = Funs u′ × u′ ⊆ u × dom u′ ⊑ v × w ⊑ cod u′
\end{code}
```
We prove the inversion principle for functions by induction on the
derivation of the less-than relation. To make the induction hypothesis
@ -1017,7 +1017,7 @@ With these facts in hand, we proceed by induction on `u`
to prove that `(dom u) ↦ (cod u)` factors `u₂` into `u₃`.
We discuss each case of the proof in the text below.
\begin{code}
```
sub-inv-trans : ∀{u u₂ u : Value}
→ Funs u → u ⊆ u
→ (∀{v w} → v ↦ w ∈ u → Σ[ u₃ ∈ Value ] factor u₂ u₃ v w)
@ -1042,7 +1042,7 @@ sub-inv-trans {u₁ ⊔ u₂} {u₂} {u} fg u⊆u IH
u₂⊆u₂ : {C : Value} → C ∈ u₃₁ ⊎ C ∈ u₃₂ → C ∈ u₂
u₂⊆u₂ {C} (inj₁ x) = u₃₁⊆u₂ x
u₂⊆u₂ {C} (inj₂ y) = u₃₂⊆u₂ y
\end{code}
```
* Suppose `u ≡ ⊥`. Then we have a contradiction because
it is not the case that `Fun ⊥`.
@ -1075,7 +1075,7 @@ less-than for functions. We show that if `u₁ ⊑ u₂`, then for any
by induction on the derivation of `u₁ ⊑ u₂`, and describe each case in
the text after the Agda proof.
\begin{code}
```
sub-inv : ∀{u₁ u₂ : Value}
→ u₁ ⊑ u₂
→ ∀{v w} → v ↦ w ∈ u₁
@ -1112,7 +1112,7 @@ sub-inv {u₂₁ ↦ (u₂₂ ⊔ u₂₃)} {u₂₁ ↦ u₂₂ ⊔ u₂₁ ↦
g : (u₂₁ ↦ u₂₂ ⊔ u₂₁ ↦ u₂₃) ⊆ (u₂₁ ↦ u₂₂ ⊔ u₂₁ ↦ u₂₃)
g (inj₁ x) = inj₁ x
g (inj₂ y) = inj₂ y
\end{code}
```
Let `v` and `w` be arbitrary values.
@ -1194,7 +1194,7 @@ later proofs. We specialize the premise to just `v ↦ w ⊑ u₁`
and we modify the conclusion to say that for every
`v ↦ w ∈ u₂`, we have `v ⊑ v`.
\begin{code}
```
sub-inv-fun : ∀{v w u₁ : Value}
→ (v ↦ w) ⊑ u₁
-----------------------------------------------------
@ -1206,12 +1206,12 @@ sub-inv-fun{v}{w}{u₁} abc
⟨ u₂ , ⟨ f , ⟨ u₂⊆u₁ , ⟨ G , cc ⟩ ⟩ ⟩ ⟩
where G : ∀{D E} → (D ↦ E) ∈ u₂ → D ⊑ v
G{D}{E} m = Trans⊑ (⊆→⊑ (↦∈→⊆dom f m)) db
\end{code}
```
The second corollary is the inversion rule that one would expect for
less-than with functions on the left and right-hand sides.
\begin{code}
```
↦⊑↦-inv : ∀{v w v′ w′}
→ v ↦ w ⊑ v′ ↦ w′
-----------------
@ -1225,7 +1225,7 @@ less-than with functions on the left and right-hand sides.
... | refl =
let codΓ⊆w = ⊆↦→cod⊆ Γ⊆v34 in
⟨ lt1 u↦u∈Γ , Trans⊑ lt2 (⊆→⊑ codΓ⊆w) ⟩
\end{code}
```
## Notes

View file

@ -6,9 +6,9 @@ permalink : /Equality/
next : /Isomorphism/
---
\begin{code}
```
module plfa.Equality where
\end{code}
```
Much of our reasoning has involved equality. Given two terms `M`
and `N`, both of type `A`, we write `M ≡ N` to assert that `M` and `N`
@ -26,10 +26,10 @@ Since we define equality here, any import would create a conflict.
## Equality
We declare equality as follows:
\begin{code}
```
data _≡_ {A : Set} (x : A) : A → Set where
refl : x ≡ x
\end{code}
```
In other words, for any type `A` and for any `x` of type `A`, the
constructor `refl` provides evidence that `x ≡ x`. Hence, every value
is equal to itself, and we have no other way of showing values
@ -41,9 +41,9 @@ can be a parameter because it doesn't vary, while the second must be
an index, so it can be required to be equal to the first.
We declare the precedence of equality as follows:
\begin{code}
```
infix 4 _≡_
\end{code}
```
We set the precedence of `_≡_` at level 4, the same as `_≤_`,
which means it binds less tightly than any arithmetic operator.
It associates neither to left nor right; writing `x ≡ y ≡ z`
@ -55,13 +55,13 @@ is illegal.
An equivalence relation is one which is reflexive, symmetric, and transitive.
Reflexivity is built-in to the definition of equality, via the
constructor `refl`. It is straightforward to show symmetry:
\begin{code}
```
sym : ∀ {A : Set} {x y : A}
→ x ≡ y
-----
→ y ≡ x
sym refl = refl
\end{code}
```
How does this proof work? The argument to `sym` has type `x ≡ y`, but
on the left-hand side of the equation the argument has been
instantiated to the pattern `refl`, which requires that `x` and `y`
@ -120,14 +120,14 @@ the expected type:
This completes the definition as given above.
Transitivity is equally straightforward:
\begin{code}
```
trans : ∀ {A : Set} {x y z : A}
→ x ≡ y
→ y ≡ z
-----
→ x ≡ z
trans refl refl = refl
\end{code}
```
Again, a useful exercise is to carry out an interactive development,
checking how Agda's knowledge changes as each of the two arguments is
instantiated.
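As a further sketch (not in the original text; the name is ours), symmetry and transitivity combine to flip one premise of a chain:
```
-- Uses only `sym` and `trans` as defined above.
trans-flip : ∀ {A : Set} {x y z : A}
  → x ≡ y
  → z ≡ y
    -----
  → x ≡ z
trans-flip x≡y z≡y = trans x≡y (sym z≡y)
```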
@ -136,44 +136,44 @@ instantiated.
Equality satisfies _congruence_. If two terms are equal,
they remain so after the same function is applied to both:
\begin{code}
```
cong : ∀ {A B : Set} (f : A → B) {x y : A}
→ x ≡ y
---------
→ f x ≡ f y
cong f refl = refl
\end{code}
```
Congruence of functions with two arguments is similar:
\begin{code}
```
cong₂ : ∀ {A B C : Set} (f : A → B → C) {u x : A} {v y : B}
→ u ≡ x
→ v ≡ y
-------------
→ f u v ≡ f x y
cong₂ f refl refl = refl
\end{code}
```
Equality is also a congruence in the function position of an application.
If two functions are equal, then applying them to the same term
yields equal terms:
\begin{code}
```
cong-app : ∀ {A B : Set} {f g : A → B}
→ f ≡ g
---------------------
→ ∀ (x : A) → f x ≡ g x
cong-app refl x = refl
\end{code}
```
Equality also satisfies *substitution*.
If two values are equal and a predicate holds of the first then it also holds of the second:
\begin{code}
```
subst : ∀ {A : Set} {x y : A} (P : A → Set)
→ x ≡ y
---------
→ P x → P y
subst P refl px = px
\end{code}
```
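The two can also be combined. Here is a small sketch (not in the original text; the name `subst-app` is ours) that transports a proof along an equation between functions, using `cong-app` and `subst` together:
```
-- A sketch assuming only the definitions above.
subst-app : ∀ {A B : Set} {f g : A → B} (P : B → Set) (x : A)
  → f ≡ g
  → P (f x)
    -------
  → P (g x)
subst-app P x f≡g Pfx = subst P (cong-app f≡g x) Pfx
```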
## Chains of equations
@ -182,7 +182,7 @@ Here we show how to support reasoning with chains of equations, as
used throughout the book. We package the declarations into a module,
named `≡-Reasoning`, to match the format used in Agda's standard
library:
\begin{code}
```
module ≡-Reasoning {A : Set} where
infix 1 begin_
@ -214,7 +214,7 @@ module ≡-Reasoning {A : Set} where
x ∎ = refl
open ≡-Reasoning
\end{code}
```
This is our first use of a nested module. It consists of the keyword
`module` followed by the module name and any parameters, explicit or
implicit, the keyword `where`, and the contents of the module indented.
@ -226,7 +226,7 @@ available in the current environment.
As an example, let's look at a proof of transitivity
as a chain of equations:
\begin{code}
```
trans′ : ∀ {A : Set} {x y z : A}
→ x ≡ y
→ y ≡ z
@ -240,7 +240,7 @@ trans {A} {x} {y} {z} x≡y y≡z =
≡⟨ y≡z ⟩
z
\end{code}
```
According to the fixity declarations, the body parses as follows:
begin (x ≡⟨ x≡y ⟩ (y ≡⟨ y≡z ⟩ (z ∎)))
@ -275,7 +275,7 @@ As a second example of chains of equations, we repeat the proof that addition
is commutative. We first repeat the definitions of naturals and addition.
We cannot import them because (as noted at the beginning of this chapter)
it would cause a conflict:
\begin{code}
```
data ℕ : Set where
zero : ℕ
suc : ℕ → ℕ
@ -283,14 +283,14 @@ data ℕ : Set where
_+_ : ℕ → ℕ → ℕ
zero + n = n
(suc m) + n = suc (m + n)
\end{code}
```
To save space we postulate (rather than prove in full) two lemmas:
\begin{code}
```
postulate
+-identity : ∀ (m : ℕ) → m + zero ≡ m
+-suc : ∀ (m n : ℕ) → m + suc n ≡ suc (m + n)
\end{code}
```
This is our first use of a _postulate_. A postulate specifies a
signature for an identifier but no definition. Here we postulate
something proved earlier to save space. Postulates must be used with
@ -298,7 +298,7 @@ caution. If we postulate something false then we could use Agda to
prove anything whatsoever.
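As a quick check (a sketch, not in the original text), a postulated lemma is applied exactly like a proved one:
```
-- One plus zero is one, by the postulated right identity.
_ : suc zero + zero ≡ suc zero
_ = +-identity (suc zero)
```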
We then repeat the proof of commutativity:
\begin{code}
```
+-comm : ∀ (m n : ℕ) → m + n ≡ n + m
+-comm m zero =
begin
@ -318,7 +318,7 @@ We then repeat the proof of commutativity:
≡⟨⟩
suc n + m
\end{code}
```
The reasoning here is similar to that in the
preceding section. We use
`_≡⟨⟩_` when no justification is required.
@ -353,9 +353,9 @@ notation for `≡-Reasoning`. Define `≤-Reasoning` analogously, and use
it to write out an alternative proof that addition is monotonic with
regard to inequality. Rewrite all of `+-monoˡ-≤`, `+-monoʳ-≤`, and `+-mono-≤`.
\begin{code}
```
-- Your code goes here
\end{code}
```
@ -363,7 +363,7 @@ regard to inequality. Rewrite all of `+-monoˡ-≤`, `+-monoʳ-≤`, and `+-mon
Consider a property of natural numbers, such as being even.
We repeat the earlier definition:
\begin{code}
```
data even : ℕ → Set
data odd : ℕ → Set
@ -381,7 +381,7 @@ data odd where
→ even n
-----------
→ odd (suc n)
\end{code}
```
In the previous section, we proved addition is commutative. Given
evidence that `even (m + n)` holds, we ought also to be able to take
that as evidence that `even (n + m)` holds.
@ -390,18 +390,18 @@ Agda includes special notation to support just this kind of reasoning,
the `rewrite` notation we encountered earlier.
To enable this notation, we use pragmas to tell Agda which type
corresponds to equality:
\begin{code}
```
{-# BUILTIN EQUALITY _≡_ #-}
\end{code}
```
We can then prove the desired property as follows:
\begin{code}
```
even-comm : ∀ (m n : ℕ)
→ even (m + n)
------------
→ even (n + m)
even-comm m n ev rewrite +-comm n m = ev
\end{code}
```
Here `ev` ranges over evidence that `even (m + n)` holds, and we show
that it also provides evidence that `even (n + m)` holds. In
general, the keyword `rewrite` is followed by evidence of an
@ -454,11 +454,11 @@ the same type as the goal.
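Here is a further example in the same style (a sketch, not in the original text; the name is ours), this time for the `odd` predicate:
```
-- A single rewrite, exactly as in `even-comm` above.
odd-comm : ∀ (m n : ℕ)
  → odd (m + n)
    ------------
  → odd (n + m)
odd-comm m n ov rewrite +-comm n m = ov
```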
One may perform multiple rewrites, each separated by a vertical bar. For instance,
here is a second proof that addition is commutative, relying on rewrites rather
than chains of equalities:
\begin{code}
```
+-comm′ : ∀ (m n : ℕ) → m + n ≡ n + m
+-comm′ zero n rewrite +-identity n = refl
+-comm′ (suc m) n rewrite +-suc n m | +-comm′ m n = refl
\end{code}
```
This is far more compact. Among other things, whereas the previous
proof required `cong suc (+-comm m n)` as the justification to invoke
the inductive hypothesis, here it is sufficient to rewrite with
@ -472,14 +472,14 @@ when feasible.
The `rewrite` notation is in fact shorthand for an appropriate use of `with`
abstraction:
\begin{code}
```
even-comm′ : ∀ (m n : ℕ)
→ even (m + n)
------------
→ even (n + m)
even-comm′ m n ev with m + n | +-comm m n
... | .(n + m) | refl = ev
\end{code}
```
In general, one can follow `with` by any number of expressions,
separated by bars, where each following equation has the same number
of patterns. We often write expressions and the corresponding
@ -499,13 +499,13 @@ reversing the order of the clauses will cause Agda to report an error.
In this case, we can avoid rewrite by simply applying the substitution
function defined earlier:
\begin{code}
```
even-comm″ : ∀ (m n : ℕ)
→ even (m + n)
------------
→ even (n + m)
even-comm″ m n = subst even (+-comm m n)
\end{code}
```
Nonetheless, rewrite is a vital part of the Agda toolkit. We will use
it sparingly, but it is occasionally essential.
@ -529,10 +529,10 @@ converse, that every property `P` that holds of `y` also holds of `x`.
Let `x` and `y` be objects of type `A`. We say that `x ≐ y` holds if
for every predicate `P` over type `A` we have that `P x` implies `P y`:
\begin{code}
```
_≐_ : ∀ {A : Set} (x y : A) → Set₁
_≐_ {A} x y = ∀ (P : A → Set) → P x → P y
\end{code}
```
We cannot write the left-hand side of the equation as `x ≐ y`,
and instead we write `_≐_ {A} x y` to provide access to the implicit
parameter `A` which appears on the right-hand side.
@ -548,7 +548,7 @@ must use `Set₁`. We say a bit more about levels below.
Leibniz equality is reflexive and transitive,
where the first follows by a variant of the identity function
and the second by a variant of function composition:
\begin{code}
```
refl-≐ : ∀ {A : Set} {x : A}
→ x ≐ x
refl-≐ P Px = Px
@ -559,12 +559,12 @@ trans-≐ : ∀ {A : Set} {x y z : A}
-----
→ x ≐ z
trans-≐ x≐y y≐z P Px = y≐z P (x≐y P Px)
\end{code}
```
Symmetry is less obvious. We have to show that if `P x` implies `P y`
for all predicates `P`, then the implication holds the other way round
as well:
\begin{code}
```
sym-≐ : ∀ {A : Set} {x y : A}
→ x ≐ y
-----
@ -577,7 +577,7 @@ sym-≐ {A} {x} {y} x≐y P = Qy
Qx = refl-≐ P
Qy : Q y
Qy = x≐y Q Qx
\end{code}
```
Given `x ≐ y`, a specific `P`, we have to construct a proof that `P y`
implies `P x`. To do so, we instantiate the equality with a predicate
@ -590,18 +590,18 @@ Leibniz equality, and vice versa. In the forward direction, if we know
`x ≡ y` we need for any `P` to take evidence of `P x` to evidence of `P y`,
which is easy since equality of `x` and `y` implies that any proof
of `P x` is also a proof of `P y`:
\begin{code}
```
≡-implies-≐ : ∀ {A : Set} {x y : A}
→ x ≡ y
-----
→ x ≐ y
≡-implies-≐ x≡y P = subst P x≡y
\end{code}
```
This direction follows from substitution, which we showed earlier.
In the reverse direction, given that for any `P` we can take a proof of `P x`
to a proof of `P y` we need to show `x ≡ y`:
\begin{code}
```
≐-implies-≡ : ∀ {A : Set} {x y : A}
→ x ≐ y
-----
@ -614,7 +614,7 @@ to a proof of `P y` we need to show `x ≡ y`:
Qx = refl
Qy : Q y
Qy = x≐y Q Qx
\end{code}
```
The proof is similar to that for symmetry of Leibniz equality. We take
`Q` to be the predicate that holds of `z` if `x ≡ z`. Then `Q x` is
trivial by reflexivity of Martin Löf equality, and hence `Q y`
@ -639,9 +639,9 @@ two values of a type that belongs to `Set ` for some arbitrary level ``?
The answer is _universe polymorphism_, where a definition is made
with respect to an arbitrary level `ℓ`. To make use of levels, we
first import the following:
\begin{code}
```
open import Level using (Level; _⊔_) renaming (zero to lzero; suc to lsuc)
\end{code}
```
We rename constructors `zero` and `suc` to `lzero` and `lsuc` to avoid confusion
between levels and naturals.
@ -663,27 +663,27 @@ and so on. There is also an operator
that given two levels returns the larger of the two.
Here is the definition of equality, generalised to an arbitrary level:
\begin{code}
```
data _≡′_ {ℓ : Level} {A : Set ℓ} (x : A) : A → Set ℓ where
refl′ : x ≡′ x
\end{code}
```
Similarly, here is the generalised definition of symmetry:
\begin{code}
```
sym′ : ∀ {ℓ : Level} {A : Set ℓ} {x y : A}
→ x ≡′ y
------
→ y ≡′ x
sym′ refl′ = refl′
\end{code}
```
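Transitivity generalises in the same way (a sketch, not in the original text; we call it `trans″` to avoid clashing with the names above):
```
trans″ : ∀ {ℓ : Level} {A : Set ℓ} {x y z : A}
  → x ≡′ y
  → y ≡′ z
    ------
  → x ≡′ z
trans″ refl′ refl′ = refl′
```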
For simplicity, we avoid universe polymorphism in the definitions given in
the text, but most definitions in the standard library, including those for
equality, are generalised to arbitrary levels as above.
Here is the generalised definition of Leibniz equality:
\begin{code}
```
_≐′_ : ∀ {ℓ : Level} {A : Set ℓ} (x y : A) → Set (lsuc ℓ)
_≐′_ {ℓ} {A} x y = ∀ (P : A → Set ℓ) → P x → P y
\end{code}
```
Before the signature used `Set₁` as the type of a term that includes
`Set`, whereas here the signature uses `Set (lsuc ℓ)` as the type of a
term that includes `Set ℓ`.
@ -697,11 +697,11 @@ Further information on levels can be found in the [Agda Wiki][wiki].
Definitions similar to those in this chapter can be found in the
standard library:
\begin{code}
```
-- import Relation.Binary.PropositionalEquality as Eq
-- open Eq using (_≡_; refl; trans; sym; cong; cong-app; subst)
-- open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
\end{code}
```
Here the imports are shown as comments rather than code to avoid
collisions, as mentioned in the introduction.

View file

@ -6,13 +6,13 @@ permalink : /Fonts/
next : /Statistics/
---
\begin{code}
```
module plfa.Fonts where
\end{code}
```
Test page for fonts. Preferably, all vertical bars should line up.
\begin{code}
```
{-
--------------------------|
abcdefghijklmnopqrstuvwxyz|
@ -83,11 +83,11 @@ ABCDEFGHIJKLMNOPQRSTUVWXYZ|
∌∌∉∉|
----|
-}
\end{code}
```
In the book we use the em-dash to make big arrows.
\begin{code}
```
{-
----|
—→—→|
@ -96,11 +96,11 @@ In the book we use the em-dash to make big arrows.
—↠—↠|
----|
-}
\end{code}
```
Here are some characters that are often not monospaced.
\begin{code}
```
{-
----|
😇😇|
@ -117,4 +117,4 @@ Here are some characters that are often not monospaced.
----------|
-}
\end{code}
```

View file

@ -6,9 +6,9 @@ permalink : /Induction/
next : /Relations/
---
\begin{code}
```
module plfa.Induction where
\end{code}
```
> Induction makes you feel guilty for getting something out of nothing
> ... but it is one of the greatest ideas of civilization.
@ -25,12 +25,12 @@ _induction_.
We require equality as in the previous chapter, plus the naturals
and some operations upon them. We also import a couple of new operations,
`cong`, `sym`, and `_≡⟨_⟩_`, which are explained below:
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
open import Data.Nat using (ℕ; zero; suc; _+_; _*_; _∸_)
\end{code}
```
## Properties of operators
@ -79,9 +79,9 @@ and are associative, commutative, and distribute over one another.
Give an example of an operator that has an identity and is
associative but is not commutative.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Associativity
@ -95,7 +95,7 @@ Here `m`, `n`, and `p` are variables that range over all natural numbers.
We can test the proposition by choosing specific numbers for the three
variables:
\begin{code}
```
_ : (3 + 4) + 5 ≡ 3 + (4 + 5)
_ =
begin
@ -109,7 +109,7 @@ _ =
≡⟨⟩
3 + (4 + 5)
\end{code}
```
Here we have displayed the computation as a chain of equations,
one term to a line. It is often easiest to read such chains from the top down
until one reaches the simplest term (in this case, `12`), and
@ -225,7 +225,7 @@ If we can demonstrate both of these, then associativity of addition
follows by induction.
Here is the proposition's statement and proof:
\begin{code}
```
+-assoc : ∀ (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-assoc zero n p =
begin
@ -247,7 +247,7 @@ Here is the proposition's statement and proof:
≡⟨⟩
suc m + (n + p)
\end{code}
```
We have named the proof `+-assoc`. In Agda, identifiers can consist of
any sequence of characters not including spaces or the characters `@.(){};_`.
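Once proved, the proposition can be applied to particular numbers (a quick check, not in the original text):
```
-- The associativity test from the start of the chapter, now proved
-- by applying `+-assoc` rather than by computation.
_ : (3 + 4) + 5 ≡ 3 + (4 + 5)
_ = +-assoc 3 4 5
```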
@ -403,7 +403,7 @@ Our first lemma states that zero is also a right-identity:
m + zero ≡ m
Here is the lemma's statement and proof:
\begin{code}
```
+-identityʳ : ∀ (m : ℕ) → m + zero ≡ m
+-identityʳ zero =
begin
@ -419,7 +419,7 @@ Here is the lemma's statement and proof:
≡⟨ cong suc (+-identityʳ m) ⟩
suc m
\end{code}
```
The signature states that we are defining the identifier `+-identityʳ` which
provides evidence for the proposition:
@ -470,7 +470,7 @@ Our second lemma does the same for `suc` on the second argument:
m + suc n ≡ suc (m + n)
Here is the lemma's statement and proof:
\begin{code}
```
+-suc : ∀ (m n : ℕ) → m + suc n ≡ suc (m + n)
+-suc zero n =
begin
@ -490,7 +490,7 @@ Here is the lemma's statement and proof:
≡⟨⟩
suc (suc m + n)
\end{code}
```
The signature states that we are defining the identifier `+-suc` which provides
evidence for the proposition:
@ -532,7 +532,7 @@ yield the needed equation. This completes the second lemma.
### The proposition
Finally, here is our proposition's statement and proof:
\begin{code}
```
+-comm : ∀ (m n : ℕ) → m + n ≡ n + m
+-comm m zero =
begin
@ -552,7 +552,7 @@ Finally, here is our proposition's statement and proof:
≡⟨⟩
suc n + m
\end{code}
```
The first line states that we are defining the identifier
`+-comm` which provides evidence for the proposition:
@ -605,7 +605,7 @@ will suggest what lemmas to prove.
We can apply associativity to rearrange parentheses however we like.
Here is an example:
\begin{code}
```
+-rearrange : ∀ (m n p q : ℕ) → (m + n) + (p + q) ≡ m + (n + p) + q
+-rearrange m n p q =
begin
@ -617,7 +617,7 @@ Here is an example:
≡⟨ sym (+-assoc m (n + p) q) ⟩
(m + (n + p)) + q
\end{code}
```
No induction is required, we simply apply associativity twice.
A few points are worthy of note.
@ -703,20 +703,20 @@ Write out what is known about associativity of addition on each of the
first four days using a finite story of creation, as
[earlier][plfa.Naturals#finite-creation].
\begin{code}
```
-- Your code goes here
\end{code}
```
## Associativity with rewrite
There is more than one way to skin a cat. Here is a second proof of
associativity of addition in Agda, using `rewrite` rather than chains of
equations:
\begin{code}
```
+-assoc′ : ∀ (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-assoc′ zero n p = refl
+-assoc′ (suc m) n p rewrite +-assoc′ m n p = refl
\end{code}
```
For the base case, we must show:
@ -746,7 +746,7 @@ not only chains of equations but also the need to invoke `cong`.
Here is a second proof of commutativity of addition, using `rewrite` rather than
chains of equations:
\begin{code}
```
+-identity′ : ∀ (n : ℕ) → n + zero ≡ n
+-identity′ zero = refl
+-identity′ (suc n) rewrite +-identity′ n = refl
@ -758,7 +758,7 @@ chains of equations:
+-comm′ : ∀ (m n : ℕ) → m + n ≡ n + m
+-comm′ m zero rewrite +-identity′ m = refl
+-comm′ m (suc n) rewrite +-suc′ m n | +-comm′ m n = refl
\end{code}
```
In the final line, rewriting with two equations is
indicated by separating the two proofs of the relevant equations by a
vertical bar; the rewrite on the left is performed before that on the
@ -869,9 +869,9 @@ for all naturals `m`, `n`, and `p`. No induction is needed,
just apply the previous results which show addition
is associative and commutative.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `*-distrib-+` (recommended) {#times-distrib-plus}
@ -882,9 +882,9 @@ Show multiplication distributes over addition, that is,
for all naturals `m`, `n`, and `p`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `*-assoc` (recommended) {#times-assoc}
@ -895,9 +895,9 @@ Show multiplication is associative, that is,
for all naturals `m`, `n`, and `p`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `*-comm` {#times-comm}
@ -909,9 +909,9 @@ Show multiplication is commutative, that is,
for all naturals `m` and `n`. As with commutativity of addition,
you will need to formulate and prove suitable lemmas.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `0∸n≡0` {#zero-monus}
@ -922,9 +922,9 @@ Show
for all naturals `n`. Did your proof require induction?
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `∸-+-assoc` {#monus-plus-assoc}
@ -935,9 +935,9 @@ Show that monus associates with addition, that is,
for all naturals `m`, `n`, and `p`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `+*^` (stretch)
@ -956,12 +956,12 @@ for all `m`, `n`, and `p`.
Recall that
Exercise [Bin][plfa.Naturals#Bin]
defines a datatype of bitstrings representing natural numbers
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
and asks you to define functions
inc : Bin → Bin
@ -977,17 +977,17 @@ over bitstrings:
For each law: if it holds, prove; if not, give a counterexample.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Standard library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Data.Nat.Properties using (+-assoc; +-identityʳ; +-suc; +-comm)
\end{code}
```
## Unicode

View file

@ -6,9 +6,9 @@ permalink : /Inference/
next : /Untyped/
---
\begin{code}
```
module plfa.Inference where
\end{code}
```
So far in our development, type derivations for the corresponding
term have been provided by fiat.
@ -246,7 +246,7 @@ We are now ready to begin the formal development.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong; cong₂; _≢_)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Data.String using (String; _≟_)
open import Data.Product using (_×_; ∃; ∃-syntax) renaming (_,_ to ⟨_,_⟩)
open import Relation.Nullary using (¬_; Dec; yes; no)
\end{code}
```
Once we have a type derivation, it will be easy to construct
from it the inherently typed representation. In order that we
can compare with our previous development, we import
module `plfa.DeBruijn`:
\begin{code}
```
import plfa.DeBruijn as DB
\end{code}
```
The phrase `as DB` allows us to refer to definitions
from that module as, for instance, `DB._⊢_`, which is
@ -278,7 +278,7 @@ also be referred to as just `Type`.
First, we get all our infix declarations out of the way.
We list separately operators for judgments and terms:
\begin{code}
```
infix 4 _∋_⦂_
infix 4 _⊢_↑_
infix 4 _⊢_↓_
@ -293,10 +293,10 @@ infix 6 _↓_
infixl 7 _·_
infix 8 `suc_
infix 9 `_
\end{code}
```
Identifiers, types, and contexts are as before:
\begin{code}
```
Id : Set
Id = String
@ -307,14 +307,14 @@ data Type : Set where
data Context : Set where
∅ : Context
_,_⦂_ : Context → Id → Type → Context
\end{code}
```
The syntax of terms is defined by mutual recursion.
We use `Term⁺` and `Term⁻`
for terms with synthesized and inherited types, respectively.
Note the inclusion of the switching forms,
`M ↓ A` and `M ↑`:
\begin{code}
```
data Term⁺ : Set
data Term⁻ : Set
@ -330,7 +330,7 @@ data Term⁻ where
`case_[zero⇒_|suc_⇒_] : Term⁺ → Term⁻ → Id → Term⁻ → Term⁻
μ_⇒_ : Id → Term⁻ → Term⁻
_↑ : Term⁺ → Term⁻
\end{code}
```
The choice as to whether each term is synthesized or
inherited follows the discussion above, and can be read
off from the informal grammar presented earlier. Main terms in
@ -341,7 +341,7 @@ in deconstructors inherit.
We can recreate the examples from preceding chapters.
First, computing two plus two on naturals:
\begin{code}
```
two : Term⁻
two = `suc (`suc `zero)
@ -353,12 +353,12 @@ plus = (μ "p" ⇒ ƛ "m" ⇒ ƛ "n" ⇒
2+2 : Term⁺
2+2 = plus · two · two
\end{code}
```
The only change is to decorate with down and up arrows as required.
The only type decoration required is for `plus`.
Next, computing two plus two with Church numerals:
\begin{code}
```
Ch : Type
Ch = (`ℕ ⇒ `ℕ) ⇒ `ℕ ⇒ `ℕ
@ -375,7 +375,7 @@ sucᶜ = ƛ "x" ⇒ `suc (` "x" ↑)
2+2ᶜ : Term⁺
2+2ᶜ = plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero
\end{code}
```
The only type decoration required is for `plusᶜ`. One is not even
required for `sucᶜ`, which inherits its type as an argument of `plusᶜ`.
@ -383,7 +383,7 @@ required for `sucᶜ`, which inherits its type as an argument of `plusᶜ`.
The typing rules for variables are as in
[Lambda][plfa.Lambda]:
\begin{code}
```
data _∋_⦂_ : Context → Id → Type → Set where
Z : ∀ {Γ x A}
@ -395,11 +395,11 @@ data _∋_⦂_ : Context → Id → Type → Set where
→ Γ ∋ x ⦂ A
-----------------
→ Γ , y ⦂ B ∋ x ⦂ A
\end{code}
```
As with syntax, the judgments for synthesizing
and inheriting types are mutually recursive:
\begin{code}
```
data _⊢_↑_ : Context → Term⁺ → Type → Set
data _⊢_↓_ : Context → Term⁻ → Type → Set
@ -454,7 +454,7 @@ data _⊢_↓_ where
→ A ≡ B
-------------
→ Γ ⊢ (M ↑) ↓ B
\end{code}
```
We follow the same convention as
Chapter [Lambda][plfa.Lambda],
prefacing the constructor with `⊢` to derive the name of the
@ -476,9 +476,9 @@ the equality test in the application rule in the first
Rewrite your definition of multiplication from
Chapter [Lambda][plfa.Lambda], decorated to support inference.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `bidirectional-products` (recommended) {#bidirectional-products}
@ -486,9 +486,9 @@ Chapter [Lambda][plfa.Lambda], decorated to support inference.
Extend the bidirectional type rules to include products from
Chapter [More][plfa.More].
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `bidirectional-rest` (stretch)
@ -496,16 +496,16 @@ Chapter [More][plfa.More].
Extend the bidirectional type rules to include the rest of the constructs from
Chapter [More][plfa.More].
\begin{code}
```
-- Your code goes here
\end{code}
```
## Prerequisites
The rule for `M ↑` requires the ability to decide whether two types
are equal. It is straightforward to code:
\begin{code}
```
_≟Tp_ : (A B : Type) → Dec (A ≡ B)
`ℕ ≟Tp `ℕ = yes refl
`ℕ ≟Tp (A ⇒ B) = no λ()
@ -515,24 +515,24 @@ _≟Tp_ : (A B : Type) → Dec (A ≡ B)
... | no A≢ | _ = no λ{refl → A≢ refl}
... | yes _ | no B≢ = no λ{refl → B≢ refl}
... | yes refl | yes refl = yes refl
\end{code}
```
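As a quick sanity check (a sketch, not in the original text), the decision procedure computes on closed types:
```
-- Follows directly from the first clause of `_≟Tp_`.
_ : (`ℕ ≟Tp `ℕ) ≡ yes refl
_ = refl
```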
We will also need a couple of obvious lemmas; the domain
and range of equal function types are equal:
\begin{code}
```
dom≡ : ∀ {A A′ B B′} → A ⇒ B ≡ A′ ⇒ B′ → A ≡ A′
dom≡ refl = refl
rng≡ : ∀ {A A′ B B′} → A ⇒ B ≡ A′ ⇒ B′ → B ≡ B′
rng≡ refl = refl
\end{code}
```
We will also need to know that the types `` `ℕ ``
and `A ⇒ B` are not equal:
\begin{code}
```
ℕ≢⇒ : ∀ {A B} → `ℕ ≢ A ⇒ B
ℕ≢⇒ ()
\end{code}
```
## Unique types
@ -540,13 +540,13 @@ and `A ⇒ B` are not equal:
Looking up a type in the context is unique. Given two derivations,
one showing `Γ ∋ x ⦂ A` and one showing `Γ ∋ x ⦂ B`, it follows that
`A` and `B` must be identical:
\begin{code}
```
uniq-∋ : ∀ {Γ x A B} → Γ ∋ x ⦂ A → Γ ∋ x ⦂ B → A ≡ B
uniq-∋ Z Z = refl
uniq-∋ Z (S x≢y _) = ⊥-elim (x≢y refl)
uniq-∋ (S x≢y _) Z = ⊥-elim (x≢y refl)
uniq-∋ (S _ ∋x) (S _ ∋x′) = uniq-∋ ∋x ∋x′
\end{code}
```
If both derivations are by rule `Z` then uniqueness
follows immediately, while if both derivations are
by rule `S` then uniqueness follows by induction.
@ -559,12 +559,12 @@ it is not.
Synthesizing a type is also unique. Given two derivations,
one showing `Γ ⊢ M ↑ A` and one showing `Γ ⊢ M ↑ B`, it follows
that `A` and `B` must be identical:
\begin{code}
```
uniq-↑ : ∀ {Γ M A B} → Γ ⊢ M ↑ A → Γ ⊢ M ↑ B → A ≡ B
uniq-↑ (⊢` ∋x) (⊢` ∋x′) = uniq-∋ ∋x ∋x′
uniq-↑ (⊢L · ⊢M) (⊢L′ · ⊢M′) = rng≡ (uniq-↑ ⊢L ⊢L′)
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M′) = refl
\end{code}
```
There are three possibilities for the term. If it is a variable,
uniqueness of synthesis follows from uniqueness of lookup.
If it is an application, uniqueness follows by induction on
@ -578,7 +578,7 @@ follows since both terms are decorated with the same type.
Given `Γ` and two distinct variables `x` and `y`, if there is no type `A`
such that `Γ ∋ x ⦂ A` holds, then there is also no type `A` such that
`Γ , y ⦂ B ∋ x ⦂ A` holds:
\begin{code}
```
ext∋ : ∀ {Γ B x y}
→ x ≢ y
→ ¬ ∃[ A ]( Γ ∋ x ⦂ A )
@ -586,7 +586,7 @@ ext∋ : ∀ {Γ B x y}
→ ¬ ∃[ A ]( Γ , y ⦂ B ∋ x ⦂ A )
ext∋ x≢y _ ⟨ A , Z ⟩ = x≢y refl
ext∋ _ ¬∃ ⟨ A , S _ ⊢x ⟩ = ¬∃ ⟨ A , ⊢x ⟩
\end{code}
```
Given a type `A` and evidence that `Γ , y ⦂ B ∋ x ⦂ A` holds, we must
demonstrate a contradiction. If the judgment holds by `Z`, then we
must have that `x` and `y` are the same, which contradicts the first
@ -596,7 +596,7 @@ evidence that `Γ ∋ x ⦂ A`, which contradicts the second assumption.
Given a context `Γ` and a variable `x`, we decide whether
there exists a type `A` such that `Γ ∋ x ⦂ A` holds, or its
negation:
\begin{code}
```
lookup : ∀ (Γ : Context) (x : Id)
-----------------------
→ Dec (∃[ A ](Γ ∋ x ⦂ A))
@ -606,7 +606,7 @@ lookup (Γ , y ⦂ B) x with x ≟ y
... | no x≢y with lookup Γ x
... | no ¬∃ = no (ext∋ x≢y ¬∃)
... | yes ⟨ A , ⊢x ⟩ = yes ⟨ A , S x≢y ⊢x ⟩
\end{code}
```
Consider the context:
* If it is empty, then trivially there is no possible derivation.
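For instance, on a one-entry context the procedure computes a witness outright (a sketch, not in the original text):
```
-- Looking up "x" in a context that binds only "x".
_ : lookup (∅ , "x" ⦂ `ℕ) "x" ≡ yes ⟨ `ℕ , Z ⟩
_ = refl
```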
@ -635,14 +635,14 @@ auxiliary functions for a couple of the trickier cases.
If `Γ ⊢ L ↑ A ⇒ B` holds but `Γ ⊢ M ↓ A` does not hold, then
there is no term `B` such that `Γ ⊢ L · M ↑ B` holds:
\begin{code}
```
¬arg : ∀ {Γ A B L M}
→ Γ ⊢ L ↑ A ⇒ B
→ ¬ Γ ⊢ M ↓ A
-------------------------
→ ¬ ∃[ B ](Γ ⊢ L · M ↑ B)
¬arg ⊢L ¬⊢M ⟨ B′ , ⊢L′ · ⊢M′ ⟩ rewrite dom≡ (uniq-↑ ⊢L ⊢L′) = ¬⊢M ⊢M′
\end{code}
```
Let `⊢L` be evidence that `Γ ⊢ L ↑ A ⇒ B` holds and `¬⊢M` be evidence
that `Γ ⊢ M ↓ A` does not hold. Given a type `B` and evidence that
`Γ ⊢ L · M ↑ B` holds, we must demonstrate a contradiction. The
@ -656,14 +656,14 @@ type `A` and the other type `A`.
If `Γ ⊢ M ↑ A` holds and `A ≢ B`, then `Γ ⊢ (M ↑) ↓ B` does not hold:
\begin{code}
```
¬switch : ∀ {Γ M A B}
→ Γ ⊢ M ↑ A
→ A ≢ B
---------------
→ ¬ Γ ⊢ (M ↑) ↓ B
¬switch ⊢M A≢B (⊢↑ ⊢M′ A′≡B) rewrite uniq-↑ ⊢M ⊢M′ = A≢B A′≡B
\end{code}
```
Let `⊢M` be evidence that `Γ ⊢ M ↑ A` holds, and `A≢B` be evidence
that `A ≢ B`. Given evidence that `Γ ⊢ (M ↑) ↓ B` holds, we must
demonstrate a contradiction. The evidence must take the form `⊢↑ ⊢M
@ -685,7 +685,7 @@ returns a type `A` and evidence that `Γ ⊢ M ↑ A`, or its negation.
Inheritance is given a context `Γ`, an inheritance term `M`,
and a type `A` and either returns evidence that `Γ ⊢ M ↓ A`,
or its negation:
\begin{code}
```
synthesize : ∀ (Γ : Context) (M : Term⁺)
-----------------------
→ Dec (∃[ A ](Γ ⊢ M ↑ A))
@ -693,10 +693,10 @@ synthesize : ∀ (Γ : Context) (M : Term⁺)
inherit : ∀ (Γ : Context) (M : Term⁻) (A : Type)
---------------
→ Dec (Γ ⊢ M ↓ A)
\end{code}
```
We first consider the code for synthesis:
\begin{code}
```
synthesize Γ (` x) with lookup Γ x
... | no ¬∃ = no (λ{ ⟨ A , ⊢` ∋x ⟩ → ¬∃ ⟨ A , ∋x ⟩ })
... | yes ⟨ A , ∋x ⟩ = yes ⟨ A , ⊢` ∋x ⟩
@ -709,7 +709,7 @@ synthesize Γ (L · M) with synthesize Γ L
synthesize Γ (M ↓ A) with inherit Γ M A
... | no ¬⊢M = no (λ{ ⟨ _ , ⊢↓ ⊢M ⟩ → ¬⊢M ⊢M })
... | yes ⊢M = yes ⟨ A , ⊢↓ ⊢M ⟩
\end{code}
```
There are three cases:
* If the term is a variable `` ` x ``, we use lookup as defined above:
@ -759,7 +759,7 @@ There are three cases:
and `⊢↓ ⊢M` provides evidence that `Γ ⊢ (M ↓ A) ↑ A`.
We next consider the code for inheritance:
\begin{code}
```
inherit Γ (ƛ x ⇒ N) `ℕ = no (λ())
inherit Γ (ƛ x ⇒ N) (A ⇒ B) with inherit (Γ , x ⦂ A) N B
... | no ¬⊢N = no (λ{ (⊢ƛ ⊢N) → ¬⊢N ⊢N })
@ -786,7 +786,7 @@ inherit Γ (M ↑) B with synthesize Γ M
... | yes ⟨ A , ⊢M ⟩ with A ≟Tp B
... | no A≢B = no (¬switch ⊢M A≢B)
... | yes A≡B = yes (⊢↑ ⊢M A≡B)
\end{code}
```
We consider only the cases for abstraction
and for switching from inherited to synthesized:
@ -831,16 +831,16 @@ read directly from the corresponding typing rules.
First, we copy a function introduced earlier that makes it easy to
compute the evidence that two variable names are distinct:
\begin{code}
```
_≠_ : ∀ (x y : Id) → x ≢ y
x ≠ y with x ≟ y
... | no x≢y = x≢y
... | yes _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
Here is the result of typing two plus two on naturals:
\begin{code}
```
⊢2+2 : ∅ ⊢ 2+2 ↑ `ℕ
⊢2+2 =
(⊢↓
@ -859,13 +859,13 @@ Here is the result of typing two plus two on naturals:
refl))))))
· ⊢suc (⊢suc ⊢zero)
· ⊢suc (⊢suc ⊢zero))
\end{code}
```
We confirm that synthesis on the relevant term returns
natural as the type and the above derivation:
\begin{code}
```
_ : synthesize ∅ 2+2 ≡ yes ⟨ `ℕ , ⊢2+2 ⟩
_ = refl
\end{code}
```
Indeed, the above derivation was computed by evaluating the term on
the left, with minor editing of the result. The only editing required
was to replace Agda's representation of the evidence that two strings
@ -873,7 +873,7 @@ are unequal (which it cannot print nor read) by equivalent calls to
`_≠_`.
Here is the result of typing two plus two with Church numerals:
\begin{code}
```
⊢2+2ᶜ : ∅ ⊢ 2+2ᶜ ↑ `ℕ
⊢2+2ᶜ =
⊢↓
@ -914,13 +914,13 @@ Here is the result of typing two plus two with Church numerals:
refl))
· ⊢ƛ (⊢suc (⊢↑ (⊢` Z) refl))
· ⊢zero
\end{code}
```
We confirm that synthesis on the relevant term returns
natural as the type and the above derivation:
\begin{code}
```
_ : synthesize ∅ 2+2ᶜ ≡ yes ⟨ `ℕ , ⊢2+2ᶜ ⟩
_ = refl
\end{code}
```
Again, the above derivation was computed by evaluating the
term on the left and editing.
@ -931,72 +931,72 @@ but also that it fails as intended. Here are checks for
several possible errors:
Unbound variable:
\begin{code}
```
_ : synthesize ∅ ((ƛ "x" ⇒ ` "y" ↑) ↓ (`ℕ ⇒ `ℕ)) ≡ no _
_ = refl
\end{code}
```
Argument in application is ill-typed:
\begin{code}
```
_ : synthesize ∅ (plus · sucᶜ) ≡ no _
_ = refl
\end{code}
```
Function in application is ill-typed:
\begin{code}
```
_ : synthesize ∅ (plus · sucᶜ · two) ≡ no _
_ = refl
\end{code}
```
Function in application has type natural:
\begin{code}
```
_ : synthesize ∅ ((two ↓ `ℕ) · two) ≡ no _
_ = refl
\end{code}
```
Abstraction inherits type natural:
\begin{code}
```
_ : synthesize ∅ (twoᶜ ↓ `ℕ) ≡ no _
_ = refl
\end{code}
```
Zero inherits a function type:
\begin{code}
```
_ : synthesize ∅ (`zero ↓ `ℕ ⇒ `ℕ) ≡ no _
_ = refl
\end{code}
```
Successor inherits a function type:
\begin{code}
```
_ : synthesize ∅ (two ↓ `ℕ ⇒ `ℕ) ≡ no _
_ = refl
\end{code}
```
Successor of an ill-typed term:
\begin{code}
```
_ : synthesize ∅ (`suc twoᶜ ↓ `ℕ) ≡ no _
_ = refl
\end{code}
```
Case of a term with a function type:
\begin{code}
```
_ : synthesize ∅
((`case (twoᶜ ↓ Ch) [zero⇒ `zero |suc "x" ⇒ ` "x" ↑ ] ↓ `ℕ) ) ≡ no _
_ = refl
\end{code}
```
Case of an ill-typed term:
\begin{code}
```
_ : synthesize ∅
((`case (twoᶜ ↓ `ℕ) [zero⇒ `zero |suc "x" ⇒ ` "x" ↑ ] ↓ `ℕ) ) ≡ no _
_ = refl
\end{code}
```
Inherited and synthesised types disagree in a switch:
\begin{code}
```
_ : synthesize ∅ (((ƛ "x" ⇒ ` "x" ↑) ↓ `ℕ ⇒ (`ℕ ⇒ `ℕ))) ≡ no _
_ = refl
\end{code}
```
## Erasure
@ -1009,33 +1009,33 @@ It is easy to define an _erasure_ function that takes evidence of a
type judgment into the corresponding inherently typed term.
First, we give code to erase a type:
\begin{code}
```
∥_∥Tp : Type → DB.Type
∥ `ℕ ∥Tp = DB.`ℕ
∥ A ⇒ B ∥Tp = ∥ A ∥Tp DB.⇒ ∥ B ∥Tp
\end{code}
```
It simply renames to the corresponding constructors in module `DB`.
Next, we give the code to erase a context:
\begin{code}
```
∥_∥Cx : Context → DB.Context
∥ ∅ ∥Cx = DB.∅
∥ Γ , x ⦂ A ∥Cx = ∥ Γ ∥Cx DB., ∥ A ∥Tp
\end{code}
```
It simply drops the variable names.
Next, we give the code to erase a lookup judgment:
\begin{code}
```
∥_∥∋ : ∀ {Γ x A} → Γ ∋ x ⦂ A → ∥ Γ ∥Cx DB.∋ ∥ A ∥Tp
∥ Z ∥∋ = DB.Z
∥ S x≢ ⊢x ∥∋ = DB.S ∥ ⊢x ∥∋
\end{code}
```
It simply drops the evidence that variable names are distinct.
Finally, we give the code to erase a typing judgment.
Just as there are two mutually recursive typing judgments,
there are two mutually recursive erasure functions:
\begin{code}
```
∥_∥⁺ : ∀ {Γ M A} → Γ ⊢ M ↑ A → ∥ Γ ∥Cx DB.⊢ ∥ A ∥Tp
∥_∥⁻ : ∀ {Γ M A} → Γ ⊢ M ↓ A → ∥ Γ ∥Cx DB.⊢ ∥ A ∥Tp
@ -1049,7 +1049,7 @@ there are two mutually recursive erasure functions:
∥ ⊢case ⊢L ⊢M ⊢N ∥⁻ = DB.case ∥ ⊢L ∥⁺ ∥ ⊢M ∥⁻ ∥ ⊢N ∥⁻
∥ ⊢μ ⊢M ∥⁻ = DB.μ ∥ ⊢M ∥⁻
∥ ⊢↑ ⊢M refl ∥⁻ = ∥ ⊢M ∥⁺
\end{code}
```
Erasure replaces constructors for each typing judgment
by the corresponding term constructor from `DB`. The
constructors that correspond to switching from synthesized
@ -1058,13 +1058,13 @@ to inherited or vice versa are dropped.
We confirm that the erasure of the type derivations in
this chapter yield the corresponding inherently typed terms
from the earlier chapter:
\begin{code}
```
_ : ∥ ⊢2+2 ∥⁺ ≡ DB.2+2
_ = refl
_ : ∥ ⊢2+2ᶜ ∥⁺ ≡ DB.2+2ᶜ
_ = refl
\end{code}
```
Thus, we have confirmed that bidirectional type inference
converts decorated versions of the lambda terms from
Chapter [Lambda][plfa.Lambda]
@ -1079,9 +1079,9 @@ exercise [`bidirectional-mul`][plfa.Inference#bidirectional-mul], and show that
erasure of the inferred typing yields your definition of
multiplication from Chapter [DeBruijn][plfa.DeBruijn].
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `inference-products` (recommended)
@ -1089,18 +1089,18 @@ Using your rules from exercise
[`bidirectional-products`][plfa.Inference#bidirectional-products], extend
bidirectional inference to include products.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `inference-rest` (stretch)
Extend the bidirectional type rules to include the rest of the constructs from
Chapter [More][plfa.More].
\begin{code}
```
-- Your code goes here
\end{code}
```
## Bidirectional inference in Agda

View file

@ -6,9 +6,9 @@ permalink : /Isomorphism/
next : /Connectives/
---
\begin{code}
```
module plfa.Isomorphism where
\end{code}
```
This section introduces isomorphism as a way of asserting that two
types are equal, and embedding as a way of asserting that one type is
@ -19,13 +19,13 @@ distributivity.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; cong-app)
open Eq.≡-Reasoning
open import Data.Nat using (; zero; suc; _+_)
open import Data.Nat.Properties using (+-comm)
\end{code}
```
## Lambda expressions
@ -70,17 +70,17 @@ reader to search for the definition in the code.
## Function composition
In what follows, we will make use of function composition:
\begin{code}
```
_∘_ : ∀ {A B C : Set} → (B → C) → (A → B) → (A → C)
(g ∘ f) x = g (f x)
\end{code}
```
Thus, `g ∘ f` is the function that first applies `f` and
then applies `g`. An equivalent definition, exploiting lambda
expressions, is as follows:
\begin{code}
```
_∘′_ : ∀ {A B C : Set} → (B → C) → (A → B) → (A → C)
g ∘′ f = λ x → g (f x)
\end{code}
```
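As a quick check (not in the original text), composing `suc` with itself adds two:
```
-- Either definition of composition computes the same way.
_ : (suc ∘ suc) 3 ≡ 5
_ = refl
```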
## Extensionality {#extensionality}
@ -92,13 +92,13 @@ converse of `cong-app`, as introduced
[earlier][plfa.Equality#cong].
Agda does not presume extensionality, but we can postulate that it holds:
\begin{code}
```
postulate
extensionality : ∀ {A B : Set} {f g : A → B}
→ (∀ (x : A) → f x ≡ g x)
-----------------------
→ f ≡ g
\end{code}
```
Postulating extensionality does not lead to difficulties, as it is
known to be consistent with the theory that underlies Agda.
@ -106,28 +106,28 @@ As an example, consider that we need results from two libraries,
one where addition is defined, as in
Chapter [Naturals][plfa.Naturals],
and one where it is defined the other way around.
\begin{code}
```
_+′_ : ℕ → ℕ → ℕ
m +′ zero = m
m +′ suc n = suc (m +′ n)
\end{code}
```
Applying commutativity, it is easy to show that both operators always
return the same result given the same arguments:
\begin{code}
```
same-app : ∀ (m n : ℕ) → m +′ n ≡ m + n
same-app m n rewrite +-comm m n = helper m n
where
helper : ∀ (m n : ℕ) → m +′ n ≡ n + m
helper m zero = refl
helper m (suc n) = cong suc (helper m n)
\end{code}
```
However, it might be convenient to assert that the two operators are
actually indistinguishable. This we can do via two applications of
extensionality:
\begin{code}
```
same : _+′_ ≡ _+_
same = extensionality (λ m → extensionality (λ n → same-app m n))
\end{code}
```
We occasionally need to postulate extensionality in what follows.
@ -135,7 +135,7 @@ We occasionally need to postulate extensionality in what follows.
Two sets are isomorphic if they are in one-to-one correspondence.
Here is a formal definition of isomorphism:
\begin{code}
```
infix 0 _≃_
record _≃_ (A B : Set) : Set where
field
@ -144,7 +144,7 @@ record _≃_ (A B : Set) : Set where
from∘to : ∀ (x : A) → from (to x) ≡ x
to∘from : ∀ (y : B) → to (from y) ≡ y
open _≃_
\end{code}
```
Let's unpack the definition. An isomorphism between sets `A` and `B` consists
of four things:
+ A function `to` from `A` to `B`,
@ -159,7 +159,7 @@ The declaration `open _≃_` makes available the names `to`, `from`,
The above is our first use of records. A record declaration is equivalent
to a corresponding inductive data declaration:
\begin{code}
```
data _≃′_ (A B : Set): Set where
mk-≃′ : ∀ (to : A → B) →
∀ (from : B → A) →
@ -178,7 +178,7 @@ from∘to (mk-≃′ f g g∘f f∘g) = g∘f
to∘from : ∀ {A B : Set} → (A≃B : A ≃′ B) → (∀ (y : B) → to A≃B (from A≃B y) ≡ y)
to∘from (mk-≃′ f g g∘f f∘g) = f∘g
\end{code}
```
We construct values of the record type with the syntax
@ -202,7 +202,7 @@ where `f`, `g`, `g∘f`, and `f∘g` are values of suitable types.
Isomorphism is an equivalence, meaning that it is reflexive, symmetric,
and transitive. To show isomorphism is reflexive, we take both `to`
and `from` to be the identity function:
\begin{code}
```
≃-refl : ∀ {A : Set}
-----
→ A ≃ A
@ -213,7 +213,7 @@ and `from` to be the identity function:
; from∘to = λ{x → refl}
; to∘from = λ{y → refl}
}
\end{code}
```
In the above, `to` and `from` are both bound to identity functions,
and `from∘to` and `to∘from` are both bound to functions that discard
their argument and return `refl`. In this case, `refl` alone is an
@ -222,7 +222,7 @@ simplifies to `x`, and similarly for the right inverse.
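As a quick check (a sketch, not in the original text), the `to` component of `≃-refl` behaves as the identity on a sample value:
```
-- Instantiate the reflexive isomorphism at ℕ and project out `to`.
_ : to (≃-refl {ℕ}) 2 ≡ 2
_ = refl
```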
To show isomorphism is symmetric, we simply swap the roles of `to`
and `from`, and `from∘to` and `to∘from`:
\begin{code}
```
≃-sym : ∀ {A B : Set}
→ A ≃ B
-----
@ -234,11 +234,11 @@ and `from`, and `from∘to` and `to∘from`:
; from∘to = to∘from A≃B
; to∘from = from∘to A≃B
}
\end{code}
```
To show isomorphism is transitive, we compose the `to` and `from`
functions, and use equational reasoning to combine the inverses:
\begin{code}
```
≃-trans : ∀ {A B C : Set}
→ A ≃ B
→ B ≃ C
@ -269,7 +269,7 @@ functions, and use equational reasoning to combine the inverses:
y
∎}
}
\end{code}
```
## Equational reasoning for isomorphism
@ -279,7 +279,7 @@ isomorphism. We essentially copy the previous definition
of equality for isomorphism. We omit the form that corresponds to `_≡⟨⟩_`, since
trivial isomorphisms arise far less often than trivial equalities:
\begin{code}
```
module ≃-Reasoning where
infix 1 ≃-begin_
@ -305,7 +305,7 @@ module ≃-Reasoning where
A ≃-∎ = ≃-refl
open ≃-Reasoning
\end{code}
```
## Embedding
@ -317,7 +317,7 @@ included in the second; or, equivalently, that there is a many-to-one
correspondence between the second type and the first.
Here is the formal definition of embedding:
\begin{code}
```
infix 0 _≲_
record _≲_ (A B : Set) : Set where
field
@ -325,14 +325,14 @@ record _≲_ (A B : Set) : Set where
from : B → A
from∘to : ∀ (x : A) → from (to x) ≡ x
open _≲_
\end{code}
```
It is the same as an isomorphism, save that it lacks the `to∘from` field.
Hence, we know that `from` is left-inverse to `to`, but not that `from`
is right-inverse to `to`.
Embedding is reflexive and transitive, but not symmetric. The proofs
are cut down versions of the similar proofs for isomorphism:
\begin{code}
```
≲-refl : ∀ {A : Set} → A ≲ A
≲-refl =
record
@ -355,12 +355,12 @@ are cut down versions of the similar proofs for isomorphism:
x
∎}
}
\end{code}
```
It is also easy to see that if two types embed in each other, and the
embedding functions correspond, then they are isomorphic. This is a
weak form of anti-symmetry:
\begin{code}
```
≲-antisym : ∀ {A B : Set}
→ (A≲B : A ≲ B)
→ (B≲A : B ≲ A)
@ -384,7 +384,7 @@ weak form of anti-symmetry:
y
∎}
}
\end{code}
```
The first three components are copied from the embedding, while the
last combines the left inverse of `B ≲ A` with the equivalences of
the `to` and `from` components from the two embeddings to obtain
@ -395,7 +395,7 @@ the right inverse of the isomorphism.
We can also support tabular reasoning for embedding,
analogous to that used for isomorphism:
\begin{code}
```
module ≲-Reasoning where
infix 1 ≲-begin_
@ -421,37 +421,37 @@ module ≲-Reasoning where
A ≲-∎ = ≲-refl
open ≲-Reasoning
\end{code}
```
#### Exercise `≃-implies-≲`
Show that every isomorphism implies an embedding.
\begin{code}
```
postulate
≃-implies-≲ : ∀ {A B : Set}
→ A ≃ B
-----
→ A ≲ B
\end{code}
```
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `_⇔_` {#iff}
Define equivalence of propositions (also known as "if and only if") as follows:
\begin{code}
```
record _⇔_ (A B : Set) : Set where
field
to : A → B
from : B → A
\end{code}
```
Show that equivalence is reflexive, symmetric, and transitive.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `Bin-embedding` (stretch) {#Bin-embedding}
@ -459,12 +459,12 @@ Recall that Exercises
[Bin][plfa.Naturals#Bin] and
[Bin-laws][plfa.Induction#Bin-laws]
define a datatype of bitstrings representing natural numbers:
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
And ask you to define the following functions
to : → Bin
@ -475,20 +475,20 @@ which satisfy the following property:
from (to n) ≡ n
Using the above, establish that there is an embedding of `` into `Bin`.
\begin{code}
```
-- Your code goes here
\end{code}
```
Why do `to` and `from` not form an isomorphism?
## Standard library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Function using (_∘_)
import Function.Inverse using (_↔_)
import Function.LeftInverse using (_↞_)
\end{code}
```
The standard library `_↔_` and `_↞_` correspond to our `_≃_` and
`_≲_`, respectively, but those in the standard library are less
convenient, since they depend on a nested record structure and are

View file

@ -6,9 +6,9 @@ permalink : /Lambda/
next : /Properties/
---
\begin{code}
```
module plfa.Lambda where
\end{code}
```
The _lambda-calculus_, first published by the logician Alonzo Church in
1932, is a core calculus with only three syntactic constructs:
@ -51,7 +51,7 @@ four.
## Imports
\begin{code}
```
open import Relation.Binary.PropositionalEquality using (_≡_; _≢_; refl)
open import Data.String using (String; _≟_)
open import Data.Nat using (ℕ; zero; suc)
@ -59,7 +59,7 @@ open import Data.Empty using (⊥; ⊥-elim)
open import Relation.Nullary using (Dec; yes; no; ¬_)
open import Relation.Nullary.Negation using (¬?)
open import Data.List using (List; _∷_; [])
\end{code}
```
## Syntax of terms
@ -97,7 +97,7 @@ Here is the syntax of terms in Backus-Naur Form (BNF):
μ x ⇒ M
And here it is formalised in Agda:
\begin{code}
```
Id : Set
Id = String
@ -115,7 +115,7 @@ data Term : Set where
`suc_ : Term → Term
case_[zero⇒_|suc_⇒_] : Term → Term → Id → Term → Term
μ_⇒_ : Id → Term → Term
\end{code}
```
We represent identifiers by strings. We choose precedence so that
lambda abstraction and fixpoint bind least tightly, then application,
then successor, and tightest of all is the constructor for variables.
@ -127,7 +127,7 @@ Case expressions are self-bracketing.
Here are some example terms: the natural number two,
a function that adds naturals,
and a term that computes two plus two:
\begin{code}
```
two : Term
two = `suc `suc `zero
@ -136,7 +136,7 @@ plus = μ "+" ⇒ ƛ "m" ⇒ ƛ "n" ⇒
case ` "m"
[zero⇒ ` "n"
|suc "m" ⇒ `suc (` "+" · ` "m" · ` "n") ]
\end{code}
```
The recursive definition of addition is similar to our original
definition of `_+_` for naturals, as given in
Chapter [Naturals][plfa.Naturals#plus].
@ -158,7 +158,7 @@ second. This is called the _Church representation_ of the
naturals. Here are some example terms: the Church numeral two, a
function that adds Church numerals, a function to compute successor,
and a term that computes two plus two:
\begin{code}
```
twoᶜ : Term
twoᶜ = ƛ "s" ⇒ ƛ "z" ⇒ ` "s" · (` "s" · ` "z")
@ -168,7 +168,7 @@ plusᶜ = ƛ "m" ⇒ ƛ "n" ⇒ ƛ "s" ⇒ ƛ "z" ⇒
sucᶜ : Term
sucᶜ = ƛ "n" ⇒ `suc (` "n")
\end{code}
```
The Church numeral for two takes two arguments `s` and `z`
and applies `s` twice to `z`.
Addition takes two numerals `m` and `n`, a
@ -192,9 +192,9 @@ Write out the definition of a lambda term that multiplies
two natural numbers. Your definition may use `plus` as
defined earlier.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `mulᶜ`
@ -204,9 +204,9 @@ two natural numbers represented as Church numerals. Your
definition may use `plusᶜ` as defined earlier (or may not
— there are nice definitions both ways).
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `primed` (stretch)
@ -214,7 +214,7 @@ definition may use `plusᶜ` as defined earlier (or may not
Some people find it annoying to write `` ` "x" `` instead of `x`.
We can make examples with lambda terms slightly easier to write
by adding the following definitions:
\begin{code}
```
ƛ′_⇒_ : Term → Term → Term
ƛ′ (` x) ⇒ N = ƛ x ⇒ N
ƛ′ _ ⇒ _ = ⊥-elim impossible
case′ _ [zero⇒ _ |suc _ ⇒ _ ] = ⊥-elim impossible
μ′ (` x) ⇒ N = μ x ⇒ N
μ′ _ ⇒ _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
The definition of `plus` can now be written as follows:
\begin{code}
```
plus′ : Term
plus′ = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
case′ m
@ -241,7 +241,7 @@ plus = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
+ = ` "+"
m = ` "m"
n = ` "n"
\end{code}
```
Write out the definition of multiplication in the same style.
@ -340,7 +340,7 @@ as values; thus, `` plus `` by itself is considered a value.
The predicate `Value M` holds if term `M` is a value:
\begin{code}
```
data Value : Term → Set where
V-ƛ : ∀ {x N}
@ -355,7 +355,7 @@ data Value : Term → Set where
→ Value V
--------------
→ Value (`suc V)
\end{code}
```
In what follows, we let `V` and `W` range over values.
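For example (a quick check, not in the original text), the numeral two is a value:
```
-- two = `suc `suc `zero, so two applications of V-suc suffice.
_ : Value two
_ = V-suc (V-suc V-zero)
```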
@ -441,7 +441,7 @@ which will be adequate for our purposes.
Here is the formal definition of substitution by closed terms in Agda:
\begin{code}
```
infix 9 _[_:=_]
_[_:=_] : Term → Id → Term → Term
@ -460,7 +460,7 @@ _[_:=_] : Term → Id → Term → Term
(μ x ⇒ N) [ y := V ] with x ≟ y
... | yes _ = μ x ⇒ N
... | no _ = μ x ⇒ N [ y := V ]
\end{code}
```
Let's unpack the first three cases:
@ -484,7 +484,7 @@ simply push substitution recursively into the subterms.
Here is confirmation that the examples above are correct:
\begin{code}
```
_ : (ƛ "z" ⇒ ` "s" · (` "s" · ` "z")) [ "s" := sucᶜ ] ≡ ƛ "z" ⇒ sucᶜ · (sucᶜ · ` "z")
_ = refl
@ -499,7 +499,7 @@ _ = refl
_ : (ƛ "y" ⇒ ` "y") [ "x" := `zero ] ≡ ƛ "y" ⇒ ` "y"
_ = refl
\end{code}
```
#### Quiz
@ -522,9 +522,9 @@ Rewrite the definition to factor the common part of these three
clauses into a single function, defined by mutual recursion with
substitution.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Reduction
@ -580,7 +580,7 @@ case where we substitute by a term that is not a value.
Here are the rules formalised in Agda:
\begin{code}
```
infix 4 _—→_
data _—→_ : Term → Term → Set where
@ -623,7 +623,7 @@ data _—→_ : Term → Term → Set where
β-μ : ∀ {x M}
------------------------------
→ μ x ⇒ M —→ M [ x := μ x ⇒ M ]
\end{code}
```
The reduction rules are carefully designed to ensure that subterms
of a term are reduced to values before the whole term is reduced.
@ -673,7 +673,7 @@ We define reflexive and transitive closure as a sequence of zero or
more steps of the underlying relation, along lines similar to that for
reasoning about chains of equalities in
Chapter [Equality][plfa.Equality]:
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -695,7 +695,7 @@ begin_ : ∀ {M N}
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
We can read this as follows:
* From term `M`, we can take no steps, giving a step of type `M —↠ M`.
@ -712,7 +712,7 @@ appealing way, as we will see in the next section.
An alternative is to define reflexive and transitive closure directly,
as the smallest relation that includes `—→` and is also reflexive
and transitive. We could do so as follows:
\begin{code}
```
data _—↠_ : Term → Term → Set where
step : ∀ {M N}
@ -729,7 +729,7 @@ data _—↠_ : Term → Term → Set where
→ M —↠′ N
-------
→ L —↠′ N
\end{code}
```
The three constructors specify, respectively, that `—↠′` includes `—→`
and is reflexive and transitive. A good exercise is to show that
the two definitions are equivalent (indeed, one embeds in the other).
@ -739,9 +739,9 @@ the two definitions are equivalent (indeed, one embeds in the other).
Show that the first notion of reflexive and transitive closure
above embeds into the second. Why are they not isomorphic?
\begin{code}
```
-- Your code goes here
\end{code}
```
## Confluence
@ -796,7 +796,7 @@ systems studied in this text are trivially confluent.
We start with a simple example. The Church numeral two applied to the
successor function and zero yields the natural number two:
\begin{code}
```
_ : twoᶜ · sucᶜ · `zero —↠ `suc `suc `zero
_ =
begin
@ -810,10 +810,10 @@ _ =
—→⟨ β-ƛ (V-suc V-zero) ⟩
`suc (`suc `zero)
\end{code}
```
Here is a sample reduction demonstrating that two plus two is four:
\begin{code}
```
_ : plus · two · two —↠ `suc `suc `suc `suc `zero
_ =
begin
@ -855,10 +855,10 @@ _ =
—→⟨ ξ-suc (ξ-suc β-zero) ⟩
`suc (`suc (`suc (`suc `zero)))
\end{code}
```
And here is a similar sample reduction for Church numerals:
\begin{code}
```
_ : plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero —↠ `suc `suc `suc `suc `zero
_ =
begin
@ -890,7 +890,7 @@ _ =
—→⟨ β-ƛ (V-suc (V-suc (V-suc V-zero))) ⟩
`suc (`suc (`suc (`suc `zero)))
\end{code}
```
In the next chapter, we will see how to compute such reduction sequences.
@ -899,9 +899,9 @@ In the next chapter, we will see how to compute such reduction sequences.
Write out the reduction sequence demonstrating that one plus one is two.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Syntax of types
@ -919,13 +919,13 @@ Here is the syntax of types in BNF:
And here it is formalised in Agda:
\begin{code}
```
infixr 7 _⇒_
data Type : Set where
_⇒_ : Type → Type → Type
` : Type
\end{code}
```
### Precedence
@ -987,13 +987,13 @@ and variable `` "z" `` with type `` ` ``.
Contexts are formalised as follows:
\begin{code}
```
infixl 5 _,_⦂_
data Context : Set where
∅ : Context
_,_⦂_ : Context → Id → Type → Context
\end{code}
```
#### Exercise `Context-≃`
@ -1007,9 +1007,9 @@ to the list
[ ⟨ "z" , ` ⟩ , ⟨ "s" , ` ⇒ ` ⟩ ]
\begin{code}
```
-- Your code goes here
\end{code}
```
### Lookup judgment
@ -1038,7 +1038,7 @@ the other variables. For example,
Here `` "x" ⦂ ` ⇒ ` `` is shadowed by `` "x" ⦂ ` ``.
Lookup is formalised as follows:
\begin{code}
```
infix 4 _∋_⦂_
data _∋_⦂_ : Context → Id → Type → Set where
@ -1052,7 +1052,7 @@ data _∋_⦂_ : Context → Id → Type → Set where
→ Γ ∋ x ⦂ A
------------------
→ Γ , y ⦂ B ∋ x ⦂ A
\end{code}
```
The constructors `Z` and `S` correspond roughly to the constructors
`here` and `there` for the element-of relation `_∈_` on lists.
@ -1078,7 +1078,7 @@ For example:
* `` ∅ ⊢ ƛ "s" ⇒ ƛ "z" ⇒ ` "s" · (` "s" · ` "z") ⦂ (`ℕ ⇒ `ℕ) ⇒ `ℕ ⇒ `ℕ ``
Typing is formalised as follows:
\begin{code}
```
infix 4 _⊢_⦂_
data _⊢_⦂_ : Context → Term → Type → Set where
@ -1125,7 +1125,7 @@ data _⊢_⦂_ : Context → Term → Type → Set where
→ Γ , x ⦂ A ⊢ M ⦂ A
-----------------
→ Γ ⊢ μ x ⇒ M ⦂ A
\end{code}
```
Each type rule is named after the constructor for the
corresponding term.
@ -1154,13 +1154,13 @@ The rules are deterministic, in that at most one rule applies to every term.
### Checking inequality and postulating the impossible {#impossible}
The following function makes it convenient to assert an inequality:
\begin{code}
```
_≠_ : ∀ (x y : Id) → x ≢ y
x ≠ y with x ≟ y
... | no x≢y = x≢y
... | yes _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
Here `_≟_` is the function that tests two identifiers for equality.
We intend to apply the function only when the
two arguments are indeed unequal, and indicate that the second
@ -1207,7 +1207,7 @@ The typing derivation is valid for any `Γ` and `A`, for instance,
we might take `Γ` to be `∅` and `A` to be `` `ℕ ``.
Here is the above typing derivation formalised in Agda:
\begin{code}
```
Ch : Type → Type
Ch A = (A ⇒ A) ⇒ A ⇒ A
@ -1216,10 +1216,10 @@ Ch A = (A ⇒ A) ⇒ A ⇒ A
where
∋s = S ("s" ≠ "z") Z
∋z = Z
\end{code}
```
Here are the typings corresponding to computing two plus two:
\begin{code}
```
⊢two : ∀ {Γ} → Γ ⊢ two ⦂ `ℕ
⊢two = ⊢suc (⊢suc ⊢zero)
@ -1235,7 +1235,7 @@ Here are the typings corresponding to computing two plus two:
⊢2+2 : ∅ ⊢ plus · two · two ⦂ `ℕ
⊢2+2 = ⊢plus · ⊢two · ⊢two
\end{code}
```
In contrast to our earlier examples, here we have typed `two` and `plus`
in an arbitrary context rather than the empty context; this makes it easy
to use them inside other binding contexts as well as at the top level.
@ -1246,7 +1246,7 @@ contexts, the first where "n" is the last binding in the context, and
the second after "m" is bound in the successor branch of the case.
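As a further small example (a sketch, not in the original text; the name is ours), here is a typing for the identity function on naturals, again in an arbitrary context:
```
-- One abstraction rule followed by a variable lookup with Z.
⊢idℕ : ∀ {Γ} → Γ ⊢ ƛ "x" ⇒ ` "x" ⦂ `ℕ ⇒ `ℕ
⊢idℕ = ⊢ƛ (⊢` Z)
```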
And here are typings for the remainder of the Church example:
\begin{code}
```
⊢plusᶜ : ∀ {Γ A} → Γ ⊢ plusᶜ ⦂ Ch A ⇒ Ch A ⇒ Ch A
⊢plusᶜ = ⊢ƛ (⊢ƛ (⊢ƛ (⊢ƛ (⊢` ∋m · ⊢` ∋s · (⊢` ∋n · ⊢` ∋s · ⊢` ∋z)))))
where
@ -1262,7 +1262,7 @@ And here are typings for the remainder of the Church example:
⊢2+2ᶜ : ∅ ⊢ plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero ⦂ `ℕ
⊢2+2ᶜ = ⊢plusᶜ · ⊢twoᶜ · ⊢twoᶜ · ⊢sucᶜ · ⊢zero
\end{code}
```
### Interaction with Agda
@ -1312,13 +1312,13 @@ will show how to use Agda to compute type derivations directly.
The lookup relation `Γ ∋ x ⦂ A` is injective, in that for each `Γ` and `x`
there is at most one `A` such that the judgment holds:
\begin{code}
```
∋-injective : ∀ {Γ x A B} → Γ ∋ x ⦂ A → Γ ∋ x ⦂ B → A ≡ B
∋-injective Z Z = refl
∋-injective Z (S x≢ _) = ⊥-elim (x≢ refl)
∋-injective (S x≢ _) Z = ⊥-elim (x≢ refl)
∋-injective (S _ ∋x) (S _ ∋x′) = ∋-injective ∋x ∋x′
\end{code}
```
The typing relation `Γ ⊢ M ⦂ A` is not injective. For example, in any `Γ`
the term `ƛ "x" ⇒ "x"` has type `A ⇒ A` for any type `A`.
@ -1331,22 +1331,22 @@ a formal proof that it is not possible to type the term
requires that the first term in the application is both a natural and
a function:
\begin{code}
```
nope₁ : ∀ {A} → ¬ (∅ ⊢ `zero · `suc `zero ⦂ A)
nope₁ (() · _)
\end{code}
```
As a second example, here is a formal proof that it is not possible to
type `` ƛ "x" ⇒ ` "x" · ` "x" ``. It cannot be typed, because
doing so requires types `A` and `B` such that `A ⇒ B ≡ A`:
\begin{code}
```
nope₂ : ∀ {A} → ¬ (∅ ⊢ ƛ "x" ⇒ ` "x" · ` "x" ⦂ A)
nope₂ (⊢ƛ (⊢` ∋x · ⊢` ∋x′)) = contradiction (∋-injective ∋x ∋x′)
where
contradiction : ∀ {A B} → ¬ (A ⇒ B ≡ A)
contradiction ()
\end{code}
```
#### Quiz
@ -1370,9 +1370,9 @@ or explain why there are no such types.
Using the term `mul` you defined earlier, write out the derivation
showing that it is well-typed.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `mulᶜ-type`
@ -1380,9 +1380,9 @@ showing that it is well-typed.
Using the term `mulᶜ` you defined earlier, write out the derivation
showing that it is well-typed.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Unicode

View file

@ -6,19 +6,19 @@ permalink : /LambdaReduction/
next : /Confluence/
---
\begin{code}
```
module plfa.LambdaReduction where
\end{code}
```
## Imports
\begin{code}
```
open import plfa.Untyped using (_⊢_; ★; _·_; ƛ_; _,_; _[_])
\end{code}
```
## Full beta reduction
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -41,9 +41,9 @@ data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
→ N —→ N′
-----------
→ ƛ N —→ ƛ N′
\end{code}
```
\begin{code}
```
infix 2 _—↠_
infix 1 start_
infixr 2 _—→⟨_⟩_
@ -66,20 +66,20 @@ start_ : ∀ {Γ} {A} {M N : Γ ⊢ A}
------
→ M —↠ N
start M—↠N = M—↠N
\end{code}
```
\begin{code}
```
—↠-trans : ∀{Γ}{A}{L M N : Γ ⊢ A}
→ L —↠ M
→ M —↠ N
→ L —↠ N
—↠-trans (M []) mn = mn
—↠-trans (L —→⟨ r ⟩ lm) mn = L —→⟨ r ⟩ (—↠-trans lm mn)
\end{code}
```
## Reduction is a congruence
\begin{code}
```
—→-app-cong : ∀{Γ}{L L' M : Γ ⊢ ★}
→ L —→ L'
→ L · M —→ L' · M
@ -87,36 +87,35 @@ start M—↠N = M—↠N
—→-app-cong (ξ₂ ll') = ξ₁ (ξ₂ ll')
—→-app-cong β = ξ₁ β
—→-app-cong (ζ ll') = ξ₁ (ζ ll')
\end{code}
```
## Multi-step reduction is a congruence
\begin{code}
```
abs-cong : ∀ {Γ} {N N' : Γ , ★ ⊢ ★}
→ N —↠ N'
----------
→ ƛ N —↠ ƛ N'
abs-cong (M []) = ƛ M []
abs-cong (L —→⟨ r ⟩ rs) = ƛ L —→⟨ ζ r ⟩ abs-cong rs
\end{code}
```
\begin{code}
```
appL-cong : ∀ {Γ} {L L' M : Γ ⊢ ★}
→ L —↠ L'
---------------
→ L · M —↠ L' · M
appL-cong {Γ}{L}{L'}{M} (L []) = L · M []
appL-cong {Γ}{L}{L'}{M} (L —→⟨ r ⟩ rs) = L · M —→⟨ ξ₁ r ⟩ appL-cong rs
\end{code}
```
\begin{code}
```
appR-cong : ∀ {Γ} {L M M' : Γ ⊢ ★}
→ M —↠ M'
---------------
→ L · M —↠ L · M'
appR-cong {Γ}{L}{M}{M'} (M []) = L · M []
appR-cong {Γ}{L}{M}{M'} (M —→⟨ r ⟩ rs) = L · M —→⟨ ξ₂ r ⟩ appR-cong rs
\end{code}
```

View file

@ -6,9 +6,9 @@ permalink : /Lists/
next : /Lambda/
---
\begin{code}
```
module plfa.Lists where
\end{code}
```
This chapter discusses the list data type. It gives further examples
of many of the techniques we have developed so far, and provides
@ -16,7 +16,7 @@ examples of polymorphic types and higher-order functions.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong)
open Eq.≡-Reasoning
@ -29,19 +29,19 @@ open import Data.Product using (_×_; ∃; ∃-syntax) renaming (_,_ to ⟨_,_
open import Function using (_∘_)
open import Level using (Level)
open import plfa.Isomorphism using (_≃_; _⇔_)
\end{code}
```
## Lists
Lists are defined in Agda as follows:
\begin{code}
```
data List (A : Set) : Set where
[] : List A
_∷_ : A → List A → List A
infixr 5 _∷_
\end{code}
```
Let's unpack this definition. If `A` is a set, then `List A` is a set.
The next two lines tell us that `[]` (pronounced _nil_) is a list of
type `A` (often called the _empty_ list), and that `_∷_` (pronounced
@ -50,10 +50,10 @@ of type `List A` and returns a value of type `List A`. Operator `_∷_`
has precedence level 5 and associates to the right.
For example,
\begin{code}
```
_ : List ℕ
_ = 0 ∷ 1 ∷ 2 ∷ []
\end{code}
```
denotes the list of the first three natural numbers. Since `_∷_`
associates to the right, the term parses as `0 ∷ (1 ∷ (2 ∷ []))`.
Here `0` is the first element of the list, called the _head_,
@ -63,17 +63,17 @@ nothing in between, and the tail is itself another list!
As we've seen, parameterised types can be translated to
indexed types. The definition above is equivalent to the following:
\begin{code}
```
data List′ : Set → Set where
[]′ : ∀ {A : Set} → List′ A
_∷′_ : ∀ {A : Set} → A → List′ A → List′ A
\end{code}
```
Each constructor takes the parameter as an implicit argument.
Thus, our example list could also be written:
\begin{code}
```
_ : List ℕ
_ = _∷_ {ℕ} 0 (_∷_ {ℕ} 1 (_∷_ {ℕ} 2 ([] {ℕ})))
\end{code}
```
where here we have provided the implicit parameters explicitly.
Including the pragma:
@ -88,14 +88,14 @@ cons respectively, allowing a more efficient representation of lists.
## List syntax
We can write lists more conveniently by introducing the following definitions:
\begin{code}
```
pattern [_] z = z ∷ []
pattern [_,_] y z = y ∷ z ∷ []
pattern [_,_,_] x y z = x ∷ y ∷ z ∷ []
pattern [_,_,_,_] w x y z = w ∷ x ∷ y ∷ z ∷ []
pattern [_,_,_,_,_] v w x y z = v ∷ w ∷ x ∷ y ∷ z ∷ []
pattern [_,_,_,_,_,_] u v w x y z = u ∷ v ∷ w ∷ x ∷ y ∷ z ∷ []
\end{code}
```
This is our first use of pattern declarations. For instance,
the third line tells us that `[ x , y , z ]` is equivalent to
`x ∷ y ∷ z ∷ []`, and permits the former to appear either in
@ -108,13 +108,13 @@ on the right-hand side of an equation.
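As a quick illustration (a sketch that is not part of the original text; `swapFirstTwo` is our own name), the bracket patterns may appear on both sides of an equation:
```
swapFirstTwo : ∀ {A : Set} → List A → List A
swapFirstTwo [ x , y ] = [ y , x ]
swapFirstTwo xs        = xs

_ : swapFirstTwo [ 1 , 2 ] ≡ [ 2 , 1 ]
_ = refl
```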
Our first function on lists is written `_++_` and pronounced
_append_:
\begin{code}
```
infixr 5 _++_
_++_ : ∀ {A : Set} → List A → List A → List A
[] ++ ys = ys
(x ∷ xs) ++ ys = x ∷ (xs ++ ys)
\end{code}
```
The type `A` is an implicit argument to append, making it a
_polymorphic_ function (one that can be used at many types). The
empty list appended to another list yields the other list. A
@ -124,7 +124,7 @@ the first list appended to the second list.
Here is an example, showing how to compute the result
of appending two lists:
\begin{code}
```
_ : [ 0 , 1 , 2 ] ++ [ 3 , 4 ] ≡ [ 0 , 1 , 2 , 3 , 4 ]
_ =
begin
@ -138,7 +138,7 @@ _ =
≡⟨⟩
0 ∷ 1 ∷ 2 ∷ 3 ∷ 4 ∷ []
\end{code}
```
Appending two lists requires time linear in the
number of elements in the first list.
@ -147,7 +147,7 @@ number of elements in the first list.
We can reason about lists in much the same way that we reason
about numbers. Here is the proof that append is associative:
\begin{code}
```
++-assoc : ∀ {A : Set} (xs ys zs : List A)
→ (xs ++ ys) ++ zs ≡ xs ++ (ys ++ zs)
++-assoc [] ys zs =
@ -170,7 +170,7 @@ about numbers. Here is the proof that append is associative:
≡⟨⟩
x ∷ xs ++ (ys ++ zs)
\end{code}
```
The proof is by induction on the first argument. The base case instantiates
to `[]`, and follows by straightforward computation.
The inductive case instantiates to `x ∷ xs`,
@ -191,7 +191,7 @@ which is needed in the proof.
It is also easy to show that `[]` is a left and right identity for `_++_`.
That it is a left identity is immediate from the definition:
\begin{code}
```
++-identityˡ : ∀ {A : Set} (xs : List A) → [] ++ xs ≡ xs
++-identityˡ xs =
begin
@ -199,9 +199,9 @@ That it is a left identity is immediate from the definition:
≡⟨⟩
xs
\end{code}
```
That it is a right identity follows by simple induction:
\begin{code}
```
++-identityʳ : ∀ {A : Set} (xs : List A) → xs ++ [] ≡ xs
++-identityʳ [] =
begin
@ -217,7 +217,7 @@ That it is a right identity follows by simple induction:
≡⟨ cong (x ∷_) (++-identityʳ xs) ⟩
x ∷ xs
\end{code}
```
As we will see later,
these three properties establish that `_++_` and `[]` form
a _monoid_ over lists.
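For instance, a minimal check (not in the original text) that the associativity lemma applies to concrete lists:
```
_ : ([ 0 ] ++ [ 1 ]) ++ [ 2 ] ≡ [ 0 ] ++ ([ 1 ] ++ [ 2 ])
_ = ++-assoc [ 0 ] [ 1 ] [ 2 ]
```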
@ -225,18 +225,18 @@ a _monoid_ over lists.
## Length
Our next function finds the length of a list:
\begin{code}
```
length : ∀ {A : Set} → List A → ℕ
length [] = zero
length (x ∷ xs) = suc (length xs)
\end{code}
```
Again, it takes an implicit parameter `A`.
The length of the empty list is zero.
The length of a non-empty list
is one greater than the length of the tail of the list.
Here is an example showing how to compute the length of a list:
\begin{code}
```
_ : length [ 0 , 1 , 2 ] ≡ 3
_ =
begin
@ -250,7 +250,7 @@ _ =
≡⟨⟩
suc (suc (suc zero))
\end{code}
```
Computing the length of a list requires time
linear in the number of elements in the list.
@ -263,7 +263,7 @@ has insufficient information to infer the implicit parameter.
The length of one list appended to another is the
sum of the lengths of the lists:
\begin{code}
```
length-++ : ∀ {A : Set} (xs ys : List A)
→ length (xs ++ ys) ≡ length xs + length ys
length-++ {A} [] ys =
@ -284,7 +284,7 @@ length-++ (x ∷ xs) ys =
≡⟨⟩
length (x ∷ xs) + length ys
\end{code}
```
The proof is by induction on the first argument. The base case
instantiates to `[]`, and follows by straightforward computation. As
before, Agda cannot infer the implicit type parameter to `length`, and
@ -298,18 +298,18 @@ and it is promoted by the congruence `cong suc`.
## Reverse
Using append, it is easy to formulate a function to reverse a list:
\begin{code}
```
reverse : ∀ {A : Set} → List A → List A
reverse [] = []
reverse (x ∷ xs) = reverse xs ++ [ x ]
\end{code}
```
The reverse of the empty list is the empty list.
The reverse of a non-empty list
is the reverse of its tail appended to a unit list
containing its head.
Here is an example showing how to reverse a list:
\begin{code}
```
_ : reverse [ 0 , 1 , 2 ] ≡ [ 2 , 1 , 0 ]
_ =
begin
@ -339,7 +339,7 @@ _ =
≡⟨⟩
[ 2 , 1 , 0 ]
\end{code}
```
Reversing a list in this way takes time _quadratic_ in the length of
the list. This is because reverse ends up appending lists of lengths
`1`, `2`, up to `n - 1`, where `n` is the length of the list being
@ -351,21 +351,21 @@ list, and the sum of the numbers up to `n - 1` is `n * (n - 1) / 2`.
Show that the reverse of one list appended to another is the
reverse of the second appended to the reverse of the first:
\begin{code}
```
postulate
reverse-++-commute : ∀ {A : Set} {xs ys : List A}
→ reverse (xs ++ ys) ≡ reverse ys ++ reverse xs
\end{code}
```
#### Exercise `reverse-involutive` (recommended)
A function is an _involution_ if when applied twice it acts
as the identity function. Show that reverse is an involution:
\begin{code}
```
postulate
reverse-involutive : ∀ {A : Set} {xs : List A}
→ reverse (reverse xs) ≡ xs
\end{code}
```
## Faster reverse
@ -373,17 +373,17 @@ postulate
The definition above, while easy to reason about, is less efficient than
one might expect since it takes time quadratic in the length of the list.
The idea is that we generalise reverse to take an additional argument:
\begin{code}
```
shunt : ∀ {A : Set} → List A → List A → List A
shunt [] ys = ys
shunt (x ∷ xs) ys = shunt xs (x ∷ ys)
\end{code}
```
The definition is by recursion on the first argument. The second argument
actually becomes _larger_, but this is not a problem because the argument
on which we recurse becomes _smaller_.
Shunt is related to reverse as follows:
\begin{code}
```
shunt-reverse : ∀ {A : Set} (xs ys : List A)
→ shunt xs ys ≡ reverse xs ++ ys
shunt-reverse [] ys =
@ -408,7 +408,7 @@ shunt-reverse (x ∷ xs) ys =
≡⟨⟩
reverse (x ∷ xs) ++ ys
\end{code}
```
The proof is by induction on the first argument.
The base case instantiates to `[]`, and follows by straightforward computation.
The inductive case instantiates to `x ∷ xs` and follows by the inductive
@ -422,14 +422,14 @@ your quiver of arrows, ready to slay the right problem.
Having defined shunt by generalisation, it is now easy to respecialise to
give a more efficient definition of reverse:
\begin{code}
```
reverse′ : ∀ {A : Set} → List A → List A
reverse′ xs = shunt xs []
\end{code}
```
Given our previous lemma, it is straightforward to show
the two definitions equivalent:
\begin{code}
```
reverses : ∀ {A : Set} (xs : List A)
→ reverse′ xs ≡ reverse xs
reverses xs =
@ -442,10 +442,10 @@ reverses xs =
≡⟨ ++-identityʳ (reverse xs) ⟩
reverse xs
\end{code}
```
Here is an example showing fast reverse of the list `[ 0 , 1 , 2 ]`:
\begin{code}
```
_ : reverse′ [ 0 , 1 , 2 ] ≡ [ 2 , 1 , 0 ]
_ =
begin
@ -461,7 +461,7 @@ _ =
≡⟨⟩
2 ∷ 1 ∷ 0 ∷ []
\end{code}
```
Now the time to reverse a list is linear in the length of the list.
## Map {#Map}
@ -469,18 +469,18 @@ Now the time to reverse a list is linear in the length of the list.
Map applies a function to every element of a list to generate a corresponding list.
Map is an example of a _higher-order function_, one which takes a function as an
argument or returns a function as a result:
\begin{code}
```
map : ∀ {A B : Set} → (A → B) → List A → List B
map f [] = []
map f (x ∷ xs) = f x ∷ map f xs
\end{code}
```
Map of the empty list is the empty list.
Map of a non-empty list yields a list
with head the same as the function applied to the head of the given list,
and tail the same as map of the function applied to the tail of the given list.
Here is an example showing how to use map to increment every element of a list:
\begin{code}
```
_ : map suc [ 0 , 1 , 2 ] ≡ [ 1 , 2 , 3 ]
_ =
begin
@ -496,13 +496,13 @@ _ =
≡⟨⟩
1 ∷ 2 ∷ 3 ∷ []
\end{code}
```
Map requires time linear in the length of the list.
It is often convenient to exploit currying by applying
map to a function to yield a new function, and at a later
point applying the resulting function:
\begin{code}
```
sucs : List ℕ → List ℕ
sucs = map suc
@ -515,7 +515,7 @@ _ =
≡⟨⟩
[ 1 , 2 , 3 ]
\end{code}
```
Any type that is parameterised on another type, such as lists, has a
corresponding map, which accepts a function and returns a function
@ -528,37 +528,37 @@ _n_ functions.
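For example, an analogous map can be defined for an option type (a sketch, not part of the original text; `Maybe` and `mapᵐ` are our own local definitions here, not imports from the standard library):
```
data Maybe (A : Set) : Set where
  nothing : Maybe A
  just    : A → Maybe A

mapᵐ : ∀ {A B : Set} → (A → B) → Maybe A → Maybe B
mapᵐ f nothing  = nothing
mapᵐ f (just x) = just (f x)
```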
#### Exercise `map-compose`
Prove that the map of a composition is equal to the composition of two maps:
\begin{code}
```
postulate
map-compose : ∀ {A B C : Set} {f : A → B} {g : B → C}
→ map (g ∘ f) ≡ map g ∘ map f
\end{code}
```
The last step of the proof requires extensionality.
#### Exercise `map-++-commute`
Prove the following relationship between map and append:
\begin{code}
```
postulate
map-++-commute : ∀ {A B : Set} {f : A → B} {xs ys : List A}
→ map f (xs ++ ys) ≡ map f xs ++ map f ys
\end{code}
```
#### Exercise `map-Tree`
Define a type of trees with leaves of type `A` and internal
nodes of type `B`:
\begin{code}
```
data Tree (A B : Set) : Set where
leaf : A → Tree A B
node : Tree A B → B → Tree A B → Tree A B
\end{code}
```
Define a suitable map operator over trees:
\begin{code}
```
postulate
map-Tree : ∀ {A B C D : Set}
→ (A → C) → (B → D) → Tree A B → Tree C D
\end{code}
```
## Fold {#Fold}
@ -566,17 +566,17 @@ postulate
Fold takes an operator and a value, and uses the operator to combine
each of the elements of the list, taking the given value as the result
for the empty list:
\begin{code}
```
foldr : ∀ {A B : Set} → (A → B → B) → B → List A → B
foldr _⊗_ e [] = e
foldr _⊗_ e (x ∷ xs) = x ⊗ foldr _⊗_ e xs
\end{code}
```
Fold of the empty list is the given value.
Fold of a non-empty list uses the operator to combine
the head of the list and the fold of the tail of the list.
Here is an example showing how to use fold to find the sum of a list:
\begin{code}
```
_ : foldr _+_ 0 [ 1 , 2 , 3 , 4 ] ≡ 10
_ =
begin
@ -592,13 +592,13 @@ _ =
≡⟨⟩
1 + (2 + (3 + (4 + 0)))
\end{code}
```
Fold requires time linear in the length of the list.
It is often convenient to exploit currying by applying
fold to an operator and a value to yield a new function,
and at a later point applying the resulting function:
\begin{code}
```
sum : List ℕ → ℕ
sum = foldr _+_ 0
@ -611,7 +611,7 @@ _ =
≡⟨⟩
10
\end{code}
```
Just as the list type has two constructors, `[]` and `_∷_`,
so the fold function takes two arguments, `e` and `_⊗_`
@ -626,71 +626,71 @@ For example:
product [ 1 , 2 , 3 , 4 ] ≡ 24
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `foldr-++` (recommended)
Show that fold and append are related as follows:
\begin{code}
```
postulate
foldr-++ : ∀ {A B : Set} (_⊗_ : A → B → B) (e : B) (xs ys : List A) →
foldr _⊗_ e (xs ++ ys) ≡ foldr _⊗_ (foldr _⊗_ e ys) xs
\end{code}
```
#### Exercise `map-is-foldr`
Show that map can be defined using fold:
\begin{code}
```
postulate
map-is-foldr : ∀ {A B : Set} {f : A → B} →
map f ≡ foldr (λ x xs → f x ∷ xs) []
\end{code}
```
This requires extensionality.
#### Exercise `fold-Tree`
Define a suitable fold function for the type of trees given earlier:
\begin{code}
```
postulate
fold-Tree : ∀ {A B C : Set}
→ (A → C) → (C → B → C → C) → Tree A B → C
\end{code}
```
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `map-is-fold-Tree`
Demonstrate an analogue of `map-is-foldr` for the type of trees.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `sum-downFrom` (stretch)
Define a function that counts down as follows:
\begin{code}
```
downFrom : ℕ → List ℕ
downFrom zero = []
downFrom (suc n) = n ∷ downFrom n
\end{code}
```
For example:
\begin{code}
```
_ : downFrom 3 ≡ [ 2 , 1 , 0 ]
_ = refl
\end{code}
```
Prove that the sum of the numbers `(n - 1) + ⋯ + 0` is
equal to `n * (n ∸ 1) / 2`:
\begin{code}
```
postulate
sum-downFrom : ∀ (n : ℕ)
→ sum (downFrom n) * 2 ≡ n * (n ∸ 1)
\end{code}
```
## Monoids
@ -700,7 +700,7 @@ value is a left and right identity for the operator, meaning that the
operator and the value form a _monoid_.
We can define a monoid as a suitable record type:
\begin{code}
```
record IsMonoid {A : Set} (_⊗_ : A → A → A) (e : A) : Set where
field
assoc : ∀ (x y z : A) → (x ⊗ y) ⊗ z ≡ x ⊗ (y ⊗ z)
@ -708,11 +708,11 @@ record IsMonoid {A : Set} (_⊗_ : A → A → A) (e : A) : Set where
identityʳ : ∀ (x : A) → x ⊗ e ≡ x
open IsMonoid
\end{code}
```
As examples, sum and zero, multiplication and one, and append and the empty
list, are all examples of monoids:
\begin{code}
```
+-monoid : IsMonoid _+_ 0
+-monoid =
record
@ -736,11 +736,11 @@ list, are all examples of monoids:
; identityˡ = ++-identityˡ
; identityʳ = ++-identityʳ
}
\end{code}
```
If `_⊗_` and `e` form a monoid, then we can re-express fold on the
same operator and an arbitrary value:
\begin{code}
```
foldr-monoid : ∀ {A : Set} (_⊗_ : A → A → A) (e : A) → IsMonoid _⊗_ e →
∀ (xs : List A) (y : A) → foldr _⊗_ y xs ≡ foldr _⊗_ e xs ⊗ y
foldr-monoid _⊗_ e ⊗-monoid [] y =
@ -765,10 +765,10 @@ foldr-monoid _⊗_ e ⊗-monoid (x ∷ xs) y =
≡⟨⟩
foldr _⊗_ e (x ∷ xs) ⊗ y
\end{code}
```
As a consequence, using a previous exercise, we have the following:
\begin{code}
```
foldr-monoid-++ : ∀ {A : Set} (_⊗_ : A → A → A) (e : A) → IsMonoid _⊗_ e →
∀ (xs ys : List A) → foldr _⊗_ e (xs ++ ys) ≡ foldr _⊗_ e xs ⊗ foldr _⊗_ e ys
foldr-monoid-++ _⊗_ e monoid-⊗ xs ys =
@ -779,7 +779,7 @@ foldr-monoid-++ _⊗_ e monoid-⊗ xs ys =
≡⟨ foldr-monoid _⊗_ e monoid-⊗ xs (foldr _⊗_ e ys) ⟩
foldr _⊗_ e xs ⊗ foldr _⊗_ e ys
\end{code}
```
#### Exercise `foldl`
@ -789,9 +789,9 @@ operations associate to the left rather than the right. For example:
foldr _⊗_ e [ x , y , z ] = x ⊗ (y ⊗ (z ⊗ e))
foldl _⊗_ e [ x , y , z ] = ((e ⊗ x) ⊗ y) ⊗ z
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `foldr-monoid-foldl`
@ -799,9 +799,9 @@ operations associate to the left rather than the right. For example:
Show that if `_⊗_` and `e` form a monoid, then `foldr _⊗_ e` and
`foldl _⊗_ e` always compute the same result.
\begin{code}
```
-- Your code goes here
\end{code}
```
## All {#All}
@ -810,11 +810,11 @@ We can also define predicates over lists. Two of the most important
are `All` and `Any`.
Predicate `All P` holds if predicate `P` is satisfied by every element of a list:
\begin{code}
```
data All {A : Set} (P : A → Set) : List A → Set where
[] : All P []
_∷_ : ∀ {x : A} {xs : List A} → P x → All P xs → All P (x ∷ xs)
\end{code}
```
The type has two constructors, reusing the names of the same constructors for lists.
The first asserts that `P` holds for every element of the empty list.
The second asserts that if `P` holds of the head of a list and for every
@ -826,10 +826,10 @@ For example, `All (_≤ 2)` holds of a list where every element is less
than or equal to two. Recall that `z≤n` proves `zero ≤ n` for any
`n`, and that if `m≤n` proves `m ≤ n` then `s≤s m≤n` proves `suc m ≤
suc n`, for any `m` and `n`:
\begin{code}
```
_ : All (_≤ 2) [ 0 , 1 , 2 ]
_ = z≤n ∷ s≤s z≤n ∷ s≤s (s≤s z≤n) ∷ []
\end{code}
```
Here `_∷_` and `[]` are the constructors of `All P` rather than of `List A`.
The three items are proofs of `0 ≤ 2`, `1 ≤ 2`, and `2 ≤ 2`, respectively.
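Another small example (not part of the original text) uses an equality predicate instead:
```
_ : All (_≡ 0) [ 0 , 0 ]
_ = refl ∷ refl ∷ []
```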
@ -843,16 +843,16 @@ scope when the pattern is declared. That's not the case here, since
## Any
Predicate `Any P` holds if predicate `P` is satisfied by some element of a list:
\begin{code}
```
data Any {A : Set} (P : A → Set) : List A → Set where
here : ∀ {x : A} {xs : List A} → P x → Any P (x ∷ xs)
there : ∀ {x : A} {xs : List A} → Any P xs → Any P (x ∷ xs)
\end{code}
```
The first constructor provides evidence that the head of the list
satisfies `P`, while the second provides evidence that some element of
the tail of the list satisfies `P`. For example, we can define list
membership as follows:
\begin{code}
```
infix 4 _∈_ _∉_
_∈_ : ∀ {A : Set} (x : A) (xs : List A) → Set
@ -860,27 +860,27 @@ x ∈ xs = Any (x ≡_) xs
_∉_ : ∀ {A : Set} (x : A) (xs : List A) → Set
x ∉ xs = ¬ (x ∈ xs)
\end{code}
```
For example, zero is an element of the list `[ 0 , 1 , 0 , 2 ]`. Indeed, we can demonstrate
this fact in two different ways, corresponding to the two different
occurrences of zero in the list, as the first element and as the third element:
\begin{code}
```
_ : 0 ∈ [ 0 , 1 , 0 , 2 ]
_ = here refl
_ : 0 ∈ [ 0 , 1 , 0 , 2 ]
_ = there (there (here refl))
\end{code}
```
Further, we can demonstrate that three is not in the list, because
any possible proof that it is in the list leads to contradiction:
\begin{code}
```
not-in : 3 ∉ [ 0 , 1 , 0 , 2 ]
not-in (here ())
not-in (there (here ()))
not-in (there (there (here ())))
not-in (there (there (there (here ()))))
not-in (there (there (there (there ()))))
\end{code}
```
The five occurrences of `()` attest to the fact that there is no
possible evidence for `3 ≡ 0`, `3 ≡ 1`, `3 ≡ 0`, `3 ≡ 2`, and
`3 ∈ []`, respectively.
@ -889,7 +889,7 @@ possible evidence for `3 ≡ 0`, `3 ≡ 1`, `3 ≡ 0`, `3 ≡ 2`, and
A predicate holds for every element of one list appended to another if and
only if it holds for every element of both lists:
\begin{code}
```
All-++-⇔ : ∀ {A : Set} {P : A → Set} (xs ys : List A) →
All P (xs ++ ys) ⇔ (All P xs × All P ys)
All-++-⇔ xs ys =
@ -909,7 +909,7 @@ All-++-⇔ xs ys =
All P xs × All P ys → All P (xs ++ ys)
from [] ys ⟨ [] , Pys ⟩ = Pys
from (x ∷ xs) ys ⟨ Px ∷ Pxs , Pys ⟩ = Px ∷ from xs ys ⟨ Pxs , Pys ⟩
\end{code}
```
#### Exercise `Any-++-⇔` (recommended)
@ -917,41 +917,41 @@ Prove a result similar to `All-++-⇔`, but with `Any` in place of `All`, and a
replacement for `_×_`. As a consequence, demonstrate an equivalence relating
`_∈_` and `_++_`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `All-++-≃` (stretch)
Show that the equivalence `All-++-⇔` can be extended to an isomorphism.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `¬Any≃All¬` (stretch)
First generalise composition to arbitrary levels, using
[universe polymorphism][plfa.Equality#unipoly]:
\begin{code}
```
_∘′_ : ∀ {ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set ℓ₁} {B : Set ℓ₂} {C : Set ℓ₃}
→ (B → C) → (A → B) → A → C
(g ∘′ f) x = g (f x)
\end{code}
```
Show that `Any` and `All` satisfy a version of De Morgan's Law:
\begin{code}
```
postulate
¬Any≃All¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ Any P) xs ≃ All (¬_ ∘′ P) xs
\end{code}
```
Do we also have the following?
\begin{code}
```
postulate
¬All≃Any¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ All P) xs ≃ Any (¬_ ∘′ P) xs
\end{code}
```
If so, prove; if not, explain why.
@ -960,10 +960,10 @@ If so, prove; if not, explain why.
If we consider a predicate as a function that yields a boolean,
it is easy to define an analogue of `All`, which returns true if
a given predicate returns true for every element of a list:
\begin{code}
```
all : ∀ {A : Set} → (A → Bool) → List A → Bool
all p = foldr _∧_ true ∘ map p
\end{code}
```
The function can be written in a particularly compact style by
using the higher-order functions `map` and `foldr`.
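A tiny sanity check (not part of the original text, and assuming `true` is in scope from the chapter's `Data.Bool` import) runs `all` on a constant predicate:
```
_ : all (λ _ → true) [ 0 , 1 , 2 ] ≡ true
_ = refl
```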
@ -972,20 +972,20 @@ an analogue of `All`. First, return to the notion of a predicate `P` as
a function of type `A → Set`, taking a value `x` of type `A` into evidence
`P x` that a property holds for `x`. Say that a predicate `P` is _decidable_
if we have a function that for a given `x` can decide `P x`:
\begin{code}
```
Decidable : ∀ {A : Set} → (A → Set) → Set
Decidable {A} P = ∀ (x : A) → Dec (P x)
\end{code}
```
Then if predicate `P` is decidable, it is also decidable whether every
element of a list satisfies the predicate:
\begin{code}
```
All? : ∀ {A : Set} {P : A → Set} → Decidable P → Decidable (All P)
All? P? [] = yes []
All? P? (x ∷ xs) with P? x | All? P? xs
... | yes Px | yes Pxs = yes (Px ∷ Pxs)
... | no ¬Px | _ = no λ{ (Px ∷ Pxs) → ¬Px Px }
... | _ | no ¬Pxs = no λ{ (Px ∷ Pxs) → ¬Pxs Pxs }
\end{code}
```
If the list is empty, then trivially `P` holds for every element of
the list. Otherwise, the structure of the proof is similar to that
showing that the conjunction of two decidable propositions is itself
@ -1000,27 +1000,27 @@ predicate holds for every element of a list, so does `Any` have
analogues `any` and `Any?` which determine whether a predicate holds
for some element of a list. Give their definitions.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `All-∀`
Show that `All P xs` is isomorphic to `∀ {x} → x ∈ xs → P x`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `Any-∃`
Show that `Any P xs` is isomorphic to `∃[ x ∈ xs ] P x`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `filter?` (stretch)
@ -1028,17 +1028,17 @@ Show that `Any P xs` is isomorphic to `∃[ x ∈ xs ] P x`.
Define the following variant of the traditional `filter` function on lists,
which given a decidable predicate and a list returns all elements of the
list satisfying the predicate:
\begin{code}
```
postulate
filter? : ∀ {A : Set} {P : A → Set}
→ (P? : Decidable P) → List A → ∃[ ys ]( All P ys )
\end{code}
```
## Standard Library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Data.List using (List; _++_; length; reverse; map; foldr; downFrom)
import Data.List.All using (All; []; _∷_)
import Data.List.Any using (Any; here; there)
@ -1049,7 +1049,7 @@ import Data.List.Properties
import Algebra.Structures using (IsMonoid)
import Relation.Unary using (Decidable)
import Relation.Binary using (Decidable)
\end{code}
```
The standard library version of `IsMonoid` differs from the
one given here, in that it is also parameterised on an equivalence relation.

View file

@ -7,16 +7,16 @@ permalink : /Modules/
** Turn this into a Setoid example. Copy equivalence relation and setoid
from the standard library. **
\begin{code}
```
module plfa.Modules where
\end{code}
```
This chapter introduces modules as a way of structuring proofs,
and proves some general results which will be useful later.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong)
open Eq.≡-Reasoning
@ -30,7 +30,7 @@ open import Data.List using (List; []; _∷_; _++_; map; foldr; downFrom)
open import Data.List.All using (All; []; _∷_)
open import Data.List.Any using (Any; here; there)
open import plfa.Isomorphism using (_≃_; extensionality)
\end{code}
```
@ -44,7 +44,7 @@ some definitions, where we represent collections as lists. (We would
call collections *sets*, save that the name `Set` already plays a
special role in Agda.)
\begin{code}
```
Coll : ∀ {ℓ : Level} → Set ℓ → Set ℓ
Coll A = List A
@ -53,7 +53,7 @@ _∈_ {_≈_ = _≈_} x xs = All (x ≈_) xs
_⊆_ : ∀ { : Level} {A : Set } {_≈_ : A → A → Set } → Coll A → Coll A → Set
_⊆_ {_≈_ = _≈_} xs ys = ∀ {w} → _∈_ {_≈_ = _≈_} w xs → _∈_ {_≈_ = _≈_} w ys
\end{code}
```
This rapidly gets tired. Passing around the equivalence relation `_≈_`
takes a lot of space, hinders the use of infix notation, and obscures the
@ -61,7 +61,7 @@ essence of the definitions.
Instead, we can define a module parameterised by the desired concepts,
which are then available throughout.
\begin{code}
```
module Collection {ℓ : Level} (A : Set ℓ) (_≈_ : A → A → Set ℓ) where
Coll : ∀ {ℓ : Level} → Set ℓ → Set ℓ
@ -72,10 +72,10 @@ module Collection { : Level} (A : Set ) (_≈_ : A → A → Set ) wher
_⊆_ : Coll A → Coll A → Set ℓ
xs ⊆ ys = ∀ {w} → w ∈ xs → w ∈ ys
\end{code}
```
Use of a module
\begin{code}
```
open Collection (ℕ) (_≡_)
pattern [_] x = x ∷ []
@ -86,7 +86,7 @@ ex : [ 1 , 3 ] ⊆ [ 1 , 2 , 3 ]
ex (here refl) = here refl
ex (there (here refl)) = there (there (here refl))
ex (there (there ()))
\end{code}
```
## Abstract types
@ -94,7 +94,7 @@ ex (there (there ()))
Say I want to define a type of stacks, with operations push and pop.
I can define stacks in terms of lists, but hide the definitions from
the rest of the program.
\begin{code}
```
abstract
Stack : Set → Set
@ -115,15 +115,15 @@ abstract
lemma-pop-empty : ∀ {A} → pop {A} empty ≡ nothing
lemma-pop-empty = refl
\end{code}
```
## Standard Library
Definitions similar to those in this chapter can be found in the standard library.
\begin{code}
```
-- EDIT
\end{code}
```
The standard library version of `IsMonoid` differs from the
one given here, in that it is also parameterised on an equivalence relation.

View file

@ -6,9 +6,9 @@ permalink : /More/
next : /Bisimulation/
---
\begin{code}
```
module plfa.More where
\end{code}
```
So far, we have focussed on a relatively minimal language, based on
Plotkin's PCF, which supports functions, naturals, and fixpoints. In
@ -554,18 +554,18 @@ and leave formalisation of the remaining constructs as an exercise.
### Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Nat using (ℕ; zero; suc; _*_)
open import Relation.Nullary using (¬_)
\end{code}
```
### Syntax
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -581,29 +581,29 @@ infix 8 `suc_
infix 9 `_
infix 9 S_
infix 9 #_
\end{code}
```
### Types
\begin{code}
```
data Type : Set where
`ℕ : Type
_⇒_ : Type → Type → Type
Nat : Type
_`×_ : Type → Type → Type
\end{code}
```
### Contexts
\begin{code}
```
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
### Variables and the lookup judgment
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -614,11 +614,11 @@ data _∋_ : Context → Type → Set where
→ Γ ∋ B
---------
→ Γ , A ∋ B
\end{code}
```
### Terms and the typing judgment
\begin{code}
```
data _⊢_ : Context → Type → Set where
-- variables
@ -713,11 +713,11 @@ data _⊢_ : Context → Type → Set where
--------------
→ Γ ⊢ C
\end{code}
```
### Abbreviating de Bruijn indices
\begin{code}
```
lookup : Context → ℕ → Type
lookup (Γ , A) zero = A
lookup (Γ , _) (suc n) = lookup Γ n
@ -732,11 +732,11 @@ count {∅} _ = ⊥-elim impossible
#_ : ∀ {Γ} → (n : ℕ) → Γ ⊢ lookup Γ n
# n = ` count n
\end{code}
```
## Renaming
\begin{code}
```
ext : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ∋ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ∋ B)
ext ρ Z = Z
ext ρ (S x) = S (ρ x)
@ -756,11 +756,11 @@ rename ρ `⟨ M , N ⟩ = `⟨ rename ρ M , rename ρ N ⟩
rename ρ (`proj₁ L) = `proj₁ (rename ρ L)
rename ρ (`proj₂ L) = `proj₂ (rename ρ L)
rename ρ (case× L M) = case× (rename ρ L) (rename (ext (ext ρ)) M)
\end{code}
```
## Simultaneous Substitution
\begin{code}
```
exts : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ⊢ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ⊢ B)
exts σ Z = ` Z
exts σ (S x) = rename S_ (σ x)
@ -780,11 +780,11 @@ subst σ `⟨ M , N ⟩ = `⟨ subst σ M , subst σ N ⟩
subst σ (`proj₁ L) = `proj₁ (subst σ L)
subst σ (`proj₂ L) = `proj₂ (subst σ L)
subst σ (case× L M) = case× (subst σ L) (subst (exts (exts σ)) M)
\end{code}
```
## Single and double substitution
\begin{code}
```
_[_] : ∀ {Γ A B}
→ Γ , A ⊢ B
→ Γ ⊢ A
@ -808,11 +808,11 @@ _[_][_] {Γ} {A} {B} N V W = subst {Γ , A , B} {Γ} σ N
σ Z = W
σ (S Z) = V
σ (S (S x)) = ` x
\end{code}
```
## Values
\begin{code}
```
data Value : ∀ {Γ A} → Γ ⊢ A → Set where
-- functions
@ -845,14 +845,14 @@ data Value : ∀ {Γ A} → Γ ⊢ A → Set where
→ Value W
----------------
→ Value `⟨ V , W ⟩
\end{code}
```
Implicit arguments need to be supplied when they are
not fixed by the given arguments.
## Reduction
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -979,11 +979,11 @@ data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
----------------------------------
→ case× `⟨ V , W ⟩ M —→ M [ V ][ W ]
\end{code}
```
## Reflexive and transitive closure
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -1006,12 +1006,12 @@ begin_ : ∀ {Γ} {A} {M N : Γ ⊢ A}
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
## Values do not reduce
\begin{code}
```
V¬—→ : ∀ {Γ A} {M N : Γ ⊢ A}
→ Value M
----------
@ -1022,12 +1022,12 @@ V¬—→ (V-suc VM) (ξ-suc M—→M) = V¬—→ VM M—→M
V¬—→ V-con ()
V¬—→ V-⟨ VM , _ ⟩ (ξ-⟨,⟩₁ M—→M) = V¬—→ VM M—→M
V¬—→ V-⟨ _ , VN ⟩ (ξ-⟨,⟩₂ _ N—→N) = V¬—→ VN N—→N
\end{code}
```
## Progress
\begin{code}
```
data Progress {A} (M : ∅ ⊢ A) : Set where
step : ∀ {N : ∅ ⊢ A}
@ -1083,12 +1083,12 @@ progress (`proj₂ L) with progress L
progress (case× L M) with progress L
... | step L—→L = step (ξ-case× L—→L)
... | done (V-⟨ VM , VN ⟩) = step (β-case× VM VN)
\end{code}
```
## Evaluation
\begin{code}
```
data Gas : Set where
gas : ℕ → Gas
@ -1121,12 +1121,12 @@ eval (gas (suc m)) L with progress L
... | done VL = steps (L ∎) (done VL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
## Examples
\begin{code}
```
cube : ∅ ⊢ Nat ⇒ Nat
cube = ƛ (# 0 `* # 0 `* # 0)
@ -1197,7 +1197,7 @@ _ =
—→⟨ β-case× V-con V-zero ⟩
`⟨ `zero , con 42 ⟩
\end{code}
```
#### Exercise `More`
@ -1217,12 +1217,12 @@ to confirm it returns the expected answer:
Show that a double substitution is equivalent to two single
substitutions.
\begin{code}
```
postulate
double-subst :
∀ {Γ A B C} {V : Γ ⊢ A} {W : Γ ⊢ B} {N : Γ , A , B ⊢ C} →
N [ V ][ W ] ≡ (N [ rename S_ W ]) [ V ]
\end{code}
```
Note the arguments need to be swapped and `W` needs to have
its context adjusted via renaming in order for the right-hand
side to be well-typed.

View file

@ -6,9 +6,9 @@ permalink : /Naturals/
next : /Induction/
---
\begin{code}
```
module plfa.Naturals where
\end{code}
```
The night sky holds more stars than I can count, though fewer than five
thousand are visible to the naked eye. The observable universe
@ -45,11 +45,11 @@ as a pair of inference rules:
suc m :
And here is the definition in Agda:
\begin{code}
```
data ℕ : Set where
zero : ℕ
suc : ℕ → ℕ
\end{code}
```
Here `` is the name of the *datatype* we are defining,
and `zero` and `suc` (short for *successor*) are the
@ -80,9 +80,9 @@ successor of two; and so on.
Write out `7` in longhand.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Unpacking the inference rules
@ -234,9 +234,9 @@ code, with the exception of one special kind of comment, called a
_pragma_, which is enclosed between `{-#` and `#-}`.
Including the line
\begin{code}
```
{-# BUILTIN NATURAL ℕ #-}
\end{code}
```
tells Agda that `` corresponds to the natural numbers, and hence one
is permitted to type `0` as shorthand for `zero`, `1` as shorthand for
`suc zero`, `2` as shorthand for `suc (suc zero)`, and so on. The
@ -260,11 +260,11 @@ terms involving natural numbers. To support doing so, we import
the definition of equality and notations for reasoning
about it from the Agda standard library:
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _∎)
\end{code}
```
The first line brings the standard library module that defines
equality into scope and gives it the name `Eq`. The second line
@ -302,11 +302,11 @@ instances of addition and multiplication can be specified in
just a couple of lines.
Here is the definition of addition in Agda:
\begin{code}
```
_+_ : ℕ → ℕ → ℕ
zero + n = n
suc m + n = suc (m + n)
\end{code}
```
Let's unpack this definition. Addition is an infix operator. It is
written with underbars where the arguments go, hence its name is
@ -347,7 +347,7 @@ addition of larger numbers is defined in terms of addition of smaller
numbers. Such a definition is called _well founded_.
For example, let's add two and three:
\begin{code}
```
_ : 2 + 3 ≡ 5
_ =
begin
@ -363,10 +363,10 @@ _ =
≡⟨⟩ -- is longhand for
5
\end{code}
```
We can write the same derivation more compactly by only
expanding shorthand as needed:
\begin{code}
```
_ : 2 + 3 ≡ 5
_ =
begin
@ -380,7 +380,7 @@ _ =
≡⟨⟩
5
\end{code}
```
The first line matches the inductive case by taking `m = 1` and `n = 3`,
the second line matches the inductive case by taking `m = 0` and `n = 3`,
and the third line matches the base case by taking `n = 3`.
@ -399,10 +399,10 @@ consists of a series of terms separated by `≡⟨⟩`.
In fact, both proofs are longer than need be, and Agda is satisfied
with the following:
\begin{code}
```
_ : 2 + 3 ≡ 5
_ = refl
\end{code}
```
Agda knows how to
compute the value of `2 + 3`, and so can immediately
check it is the same as `5`. A binary relation is said to be _reflexive_
@ -430,20 +430,20 @@ other word for evidence, which we will use interchangeably, is _proof_.
Compute `3 + 4`, writing out your reasoning as a chain of equations.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Multiplication
Once we have defined addition, we can define multiplication
as repeated addition:
\begin{code}
```
_*_ : ℕ → ℕ → ℕ
zero * n = zero
(suc m) * n = n + (m * n)
\end{code}
```
Computing `m * n` returns the sum of `m` copies of `n`.
Again, rewriting turns the definition into two familiar equations:
@ -466,7 +466,7 @@ Again, the definition is well-founded in that multiplication of
larger numbers is defined in terms of multiplication of smaller numbers.
For example, let's multiply two and three:
\begin{code}
```
_ =
begin
2 * 3
@ -479,7 +479,7 @@ _ =
≡⟨⟩ -- simplify
6
\end{code}
```
The first line matches the inductive case by taking `m = 1` and `n = 3`,
The second line matches the inductive case by taking `m = 0` and `n = 3`,
and the third line matches the base case by taking `n = 3`.
@ -491,9 +491,9 @@ it can easily be inferred from the corresponding term.
Compute `3 * 4`, writing out your reasoning as a chain of equations.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `_^_` (recommended) {#power}
@ -505,9 +505,9 @@ Define exponentiation, which is given by the following equations:
Check that `3 ^ 4` is `81`.
\begin{code}
```
-- Your code goes here
\end{code}
```
@ -520,12 +520,12 @@ subtraction to naturals is called _monus_ (a twist on _minus_).
Monus is our first use of a definition that uses pattern
matching against both arguments:
\begin{code}
```
_∸_ : ℕ → ℕ → ℕ
m ∸ zero = m
zero ∸ suc n = zero
suc m ∸ suc n = m ∸ n
\end{code}
```
We can do a simple analysis to show that all the cases are covered.
* Consider the second argument.
@ -539,7 +539,7 @@ monus on bigger numbers is defined in terms of monus on
smaller numbers.
For example, let's subtract two from three:
\begin{code}
```
_ =
begin
3 ∸ 2
@ -550,10 +550,10 @@ _ =
≡⟨⟩
1
\end{code}
```
We did not use the second equation at all, but it will be required
if we try to subtract a larger number from a smaller one:
\begin{code}
```
_ =
begin
2 ∸ 3
@ -564,15 +564,15 @@ _ =
≡⟨⟩
0
\end{code}
```
#### Exercise `∸-examples` (recommended) {#monus-examples}
Compute `5 ∸ 3` and `3 ∸ 5`, writing out your reasoning as a chain of equations.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Precedence
@ -587,10 +587,10 @@ so write `m + n + p` to mean `(m + n) + p`.
In Agda the precedence and associativity of infix operators
needs to be declared:
\begin{code}
```
infixl 6 _+_ _∸_
infixl 7 _*_
\end{code}
```
This states operators `_+_` and `_∸_` have precedence level 6,
and operator `_*_` has precedence level 7.
Addition and monus bind less tightly than multiplication
@ -853,11 +853,11 @@ a program this simple, using `C-c C-c` to split cases can be helpful.
## More pragmas
Including the lines
\begin{code}
```
{-# BUILTIN NATPLUS _+_ #-}
{-# BUILTIN NATTIMES _*_ #-}
{-# BUILTIN NATMINUS _∸_ #-}
\end{code}
```
tells Agda that these three operators correspond to the usual ones,
and enables it to perform these computations using the corresponding
Haskell operators on the arbitrary-precision integer type.
@ -875,12 +875,12 @@ _m_ and _n_.
A more efficient representation of natural numbers uses a binary
rather than a unary system. We represent a number as a bitstring:
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
For instance, the bitstring
1011
@ -916,9 +916,9 @@ For the former, choose the bitstring to have no leading zeros if it
represents a positive natural, and represent zero by `x0 nil`.
Confirm that these both give the correct answer for zero through four.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Standard library
@ -928,9 +928,9 @@ definitions in the standard library. The naturals, constructors for
them, and basic operators upon them, are defined in the standard
library module `Data.Nat`:
\begin{code}
```
-- import Data.Nat using (ℕ; zero; suc; _+_; _*_; _^_; _∸_)
\end{code}
```
Normally, we will show an import as running code, so Agda will
complain if we attempt to import a definition that is not available.
@ -975,4 +975,3 @@ In place of left, right, up, and down keys, one may also use control characters:
We write `C-b` to stand for control-b, and similarly. One can also navigate
left and right by typing the digits that appear in the displayed list.

View file

@ -6,23 +6,23 @@ permalink : /Negation/
next : /Quantifiers/
---
\begin{code}
```
module plfa.Negation where
\end{code}
```
This chapter introduces negation, and discusses intuitionistic
and classical logic.
## Imports
\begin{code}
```
open import Relation.Binary.PropositionalEquality using (_≡_; refl)
open import Data.Nat using (ℕ; zero; suc)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Data.Product using (_×_)
open import plfa.Isomorphism using (_≃_; extensionality)
\end{code}
```
## Negation
@ -30,10 +30,10 @@ open import plfa.Isomorphism using (_≃_; extensionality)
Given a proposition `A`, the negation `¬ A` holds if `A` cannot hold.
We formalise this idea by declaring negation to be the same
as implication of false:
\begin{code}
```
¬_ : Set → Set
¬ A = A → ⊥
\end{code}
```
This is a form of _proof by contradiction_: if assuming `A` leads
to the conclusion `⊥` (a contradiction), then we must have `¬ A`.
@ -47,35 +47,35 @@ that `A` holds into evidence that `⊥` holds.
Given evidence that both `¬ A` and `A` hold, we can conclude that `⊥` holds.
In other words, if both `¬ A` and `A` hold, then we have a contradiction:
\begin{code}
```
¬-elim : ∀ {A : Set}
→ ¬ A
→ A
---
→ ⊥
¬-elim ¬x x = ¬x x
\end{code}
```
Here we write `¬x` for evidence of `¬ A` and `x` for evidence of `A`. This
means that `¬x` must be a function of type `A → ⊥`, and hence the application
`¬x x` must be of type `⊥`. Note that this rule is just a special case of `→-elim`.
We set the precedence of negation so that it binds more tightly
than disjunction and conjunction, but less tightly than anything else:
\begin{code}
```
infix 3 ¬_
\end{code}
```
Thus, `¬ A × ¬ B` parses as `(¬ A) × (¬ B)` and `¬ m ≡ n` as `¬ (m ≡ n)`.
In _classical_ logic, we have that `A` is equivalent to `¬ ¬ A`.
As we discuss below, in Agda we use _intuitionistic_ logic, where
we have only half of this equivalence, namely that `A` implies `¬ ¬ A`:
\begin{code}
```
¬¬-intro : ∀ {A : Set}
→ A
-----
→ ¬ ¬ A
¬¬-intro x = λ{¬x → ¬x x}
\end{code}
```
Let `x` be evidence of `A`. We show that assuming
`¬ A` leads to a contradiction, and hence `¬ ¬ A` must hold.
Let `¬x` be evidence of `¬ A`. Then from `A` and `¬ A`
@ -83,26 +83,26 @@ we have a contradiction, evidenced by `¬x x`. Hence, we have
shown `¬ ¬ A`.
An equivalent way to write the above is as follows:
\begin{code}
```
¬¬-intro : ∀ {A : Set}
→ A
-----
→ ¬ ¬ A
¬¬-intro x ¬x = ¬x x
\end{code}
```
Here we have simply converted the argument of the lambda term
to an additional argument of the function. We will usually
use this latter style, as it is more compact.
We cannot show that `¬ ¬ A` implies `A`, but we can show that
`¬ ¬ ¬ A` implies `¬ A`:
\begin{code}
```
¬¬¬-elim : ∀ {A : Set}
→ ¬ ¬ ¬ A
-------
→ ¬ A
¬¬¬-elim ¬¬¬x = λ x → ¬¬¬x (¬¬-intro x)
\end{code}
```
Let `¬¬¬x` be evidence of `¬ ¬ ¬ A`. We will show that assuming
`A` leads to a contradiction, and hence `¬ A` must hold.
Let `x` be evidence of `A`. Then by the previous result, we
@ -112,13 +112,13 @@ can conclude `¬ ¬ A`, evidenced by `¬¬-intro x`. Then from
Another law of logic is _contraposition_,
stating that if `A` implies `B`, then `¬ B` implies `¬ A`:
\begin{code}
```
contraposition : ∀ {A B : Set}
→ (A → B)
-----------
→ (¬ B → ¬ A)
contraposition f ¬y x = ¬y (f x)
\end{code}
```
Let `f` be evidence of `A → B` and let `¬y` be evidence of `¬ B`. We
will show that assuming `A` leads to a contradiction, and hence `¬ A`
must hold. Let `x` be evidence of `A`. Then from `A → B` and `A` we
@ -126,25 +126,25 @@ may conclude `B`, evidenced by `f x`, and from `B` and `¬ B` we may
conclude `⊥`, evidenced by `¬y (f x)`. Hence, we have shown `¬ A`.
Using negation, it is straightforward to define inequality:
\begin{code}
```
_≢_ : ∀ {A : Set} → A → A → Set
x ≢ y = ¬ (x ≡ y)
\end{code}
```
It is trivial to show distinct numbers are not equal:
\begin{code}
```
_ : 1 ≢ 2
_ = λ()
\end{code}
```
This is our first use of an absurd pattern in a lambda expression.
The type `M ≡ N` is occupied exactly when `M` and `N` simplify to
identical terms. Since `1` and `2` simplify to distinct normal forms,
Agda determines that there is no possible evidence that `1 ≡ 2`.
As a second example, it is also easy to validate
Peano's postulate that zero is not the successor of any number:
\begin{code}
```
peano : ∀ {m : ℕ} → zero ≢ suc m
peano = λ()
\end{code}
```
The evidence is essentially the same, as the absurd pattern matches
all possible evidence of type `zero ≡ suc m`.
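A symmetric check (not part of the original text; the name `peano′` is ours) works the same way, since there is also no possible evidence of type `suc m ≡ zero`:
```
peano′ : ∀ {m : ℕ} → suc m ≢ zero
peano′ = λ()
```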
@ -158,27 +158,27 @@ we know for arithmetic, where
Indeed, there is exactly one proof of `⊥ → ⊥`. We can write
this proof two different ways:
\begin{code}
```
id : ⊥ → ⊥
id x = x
id′ : ⊥ → ⊥
id′ ()
\end{code}
```
But, using extensionality, we can prove these equal:
\begin{code}
```
id≡id′ : id ≡ id′
id≡id′ = extensionality (λ())
\end{code}
```
By extensionality, `id ≡ id′` holds if for every
`x` in their domain we have `id x ≡ id x`. But there
is no `x` in their domain, so the equality holds trivially.
Indeed, we can show any two proofs of a negation are equal:
\begin{code}
```
assimilation : ∀ {A : Set} (¬x ¬x′ : ¬ A) → ¬x ≡ ¬x′
assimilation ¬x ¬x′ = extensionality (λ x → ⊥-elim (¬x x))
\end{code}
```
Evidence for `¬ A` implies that any evidence of `A`
immediately leads to a contradiction. But extensionality
quantifies over all `x` such that `A` holds, hence any
@ -192,9 +192,9 @@ Using negation, show that
[strict inequality][plfa.Relations#strict-inequality]
is irreflexive, that is, `n < n` holds for no `n`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `trichotomy`
@ -210,9 +210,9 @@ that is, for any naturals `m` and `n` exactly one of the following holds:
Here "exactly one" means that not only one of the three must hold,
but that when one holds the negation of the other two must also hold.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `⊎-dual-×` (recommended)
@ -223,9 +223,9 @@ version of De Morgan's Law.
This result is an easy consequence of something we've proved previously.
\begin{code}
```
-- Your code goes here
\end{code}
```
Do we also have the following?
@ -282,18 +282,18 @@ _Communications of the ACM_, December 2015.)
## Excluded middle is irrefutable
The law of the excluded middle can be formulated as follows:
\begin{code}
```
postulate
em : ∀ {A : Set} → A ⊎ ¬ A
\end{code}
```
As we noted, the law of the excluded middle does not hold in
intuitionistic logic. However, we can show that it is _irrefutable_,
meaning that the negation of its negation is provable (and hence that
its negation is never provable):
\begin{code}
```
em-irrefutable : ∀ {A : Set} → ¬ ¬ (A ⊎ ¬ A)
em-irrefutable = λ k → k (inj₂ (λ x → k (inj₁ x)))
\end{code}
```
The best way to explain this code is to develop it interactively:
em-irrefutable k = ?
@ -380,32 +380,32 @@ Consider the following principles:
Show that each of these implies all the others.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `Stable` (stretch)
Say that a formula is _stable_ if double negation elimination holds for it:
\begin{code}
```
Stable : Set → Set
Stable A = ¬ ¬ A → A
\end{code}
```
Show that any negated formula is stable, and that the conjunction
of two stable formulas is stable.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Standard Prelude
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Relation.Nullary using (¬_)
import Relation.Nullary.Negation using (contraposition)
\end{code}
```
## Unicode

View file

@ -6,9 +6,9 @@ permalink : /Properties/
next : /DeBruijn/
---
\begin{code}
```
module plfa.Properties where
\end{code}
```
This chapter covers properties of the simply-typed lambda calculus, as
introduced in the previous chapter. The most important of these
@ -19,7 +19,7 @@ sequences for us.
## Imports
\begin{code}
```
open import Relation.Binary.PropositionalEquality
using (_≡_; _≢_; refl; sym; cong; cong₂)
open import Data.String using (String; _≟_)
@ -33,7 +33,7 @@ open import Relation.Nullary using (¬_; Dec; yes; no)
open import Function using (_∘_)
open import plfa.Isomorphism
open import plfa.Lambda
\end{code}
```
## Introduction
@ -90,7 +90,7 @@ types without needing to develop a separate inductive definition of the
## Values do not reduce
We start with an easy observation. Values do not reduce:
\begin{code}
```
V¬—→ : ∀ {M N}
→ Value M
----------
@ -98,7 +98,7 @@ V¬—→ : ∀ {M N}
V¬—→ V-ƛ ()
V¬—→ V-zero ()
V¬—→ (V-suc VM) (ξ-suc M—→N) = V¬—→ VM M—→N
\end{code}
```
We consider the three possibilities for values:
* If it is an abstraction then no reduction applies
@ -110,13 +110,13 @@ We consider the three possibilities for values:
that reduces, which by induction cannot occur.
As a corollary, terms that reduce are not values:
\begin{code}
```
—→¬V : ∀ {M N}
→ M —→ N
---------
→ ¬ Value M
—→¬V M—→N VM = V¬—→ VM M—→N
\end{code}
```
If we expand out the negations, we have
V¬—→ : ∀ {M N} → Value M → M —→ N → ⊥
@ -134,7 +134,7 @@ and a zero or successor expression must be a natural.
Further, the body of a function must be well-typed in a context
containing only its bound variable, and the argument of successor
must itself be canonical:
\begin{code}
```
infix 4 Canonical_⦂_
data Canonical_⦂_ : Term → Type → Set where
@ -152,10 +152,10 @@ data Canonical_⦂_ : Term → Type → Set where
→ Canonical V ⦂ `ℕ
---------------------
→ Canonical `suc V ⦂ `ℕ
\end{code}
```
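For instance, here is one concrete canonical form (a check that is not part of the original text):
```
_ : Canonical `suc `zero ⦂ `ℕ
_ = C-suc C-zero
```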
Every closed, well-typed value is canonical:
\begin{code}
```
canonical : ∀ {V A}
→ ∅ ⊢ V ⦂ A
→ Value V
@ -168,7 +168,7 @@ canonical ⊢zero V-zero = C-zero
canonical (⊢suc ⊢V) (V-suc VV) = C-suc (canonical ⊢V VV)
canonical (⊢case ⊢L ⊢M ⊢N) ()
canonical (⊢μ ⊢M) ()
\end{code}
```
There are only three interesting cases to consider:
* If the term is a lambda abstraction, then well-typing of the term
@ -187,7 +187,7 @@ are not values.
Conversely, if a term is canonical then it is a value
and it is well-typed in the empty context:
\begin{code}
```
value : ∀ {M A}
→ Canonical M ⦂ A
----------------
@ -203,7 +203,7 @@ typed : ∀ {M A}
typed (C-ƛ ⊢N) = ⊢ƛ ⊢N
typed C-zero = ⊢zero
typed (C-suc CM) = ⊢suc (typed CM)
\end{code}
```
The proofs are straightforward, and again use induction in the
case of successor.
@ -230,7 +230,7 @@ that `M —→ N`.
To formulate this property, we first introduce a relation that
captures what it means for a term `M` to make progress:
\begin{code}
```
data Progress (M : Term) : Set where
step : ∀ {N}
@ -242,13 +242,13 @@ data Progress (M : Term) : Set where
Value M
----------
→ Progress M
\end{code}
```
A term `M` makes progress if either it can take a step, meaning there
exists a term `N` such that `M —→ N`, or if it is done, meaning that
`M` is a value.
If a term is well-typed in the empty context then it satisfies progress:
\begin{code}
```
progress : ∀ {M A}
→ ∅ ⊢ M ⦂ A
----------
@ -271,7 +271,7 @@ progress (⊢case ⊢L ⊢M ⊢N) with progress ⊢L
... | C-zero = step β-zero
... | C-suc CL = step (β-suc (value CL))
progress (⊢μ ⊢M) = step β-μ
\end{code}
```
We induct on the evidence that the term is well-typed.
Let's unpack the first three cases:
@ -321,10 +321,10 @@ or introduce subsidiary functions.
Instead of defining a data type for `Progress M`, we could
have formulated progress using disjunction and existentials:
\begin{code}
```
postulate
progress : ∀ M {A} → ∅ ⊢ M ⦂ A → Value M ⊎ ∃[ N ](M —→ N)
\end{code}
```
This leads to a less perspicuous proof. Instead of the mnemonic `done`
and `step` we use `inj₁` and `inj₂`, and the term `N` is no longer
implicit and so must be written out in full. In the case for `β-ƛ`
@ -336,27 +336,27 @@ determine its bound variable and body, `ƛ x ⇒ N`, so we can show that
Show that `Progress M` is isomorphic to `Value M ⊎ ∃[ N ](M —→ N)`.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `progress`
Write out the proof of `progress` in full, and compare it to the
proof of `progress` above.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `value?`
Combine `progress` and `—→¬V` to write a program that decides
whether a well-typed term is a value:
\begin{code}
```
postulate
value? : ∀ {A M} → ∅ ⊢ M ⦂ A → Dec (Value M)
\end{code}
```
## Prelude to preservation
@ -448,14 +448,14 @@ for lambda expressions, and similarly for case and fixpoint. To deal
with this situation, we first prove a lemma showing that if one context maps to another,
this is still true after adding the same variable to
both contexts:
\begin{code}
```
ext : ∀ {Γ Δ}
→ (∀ {x A} → Γ ∋ x ⦂ A → Δ ∋ x ⦂ A)
-----------------------------------------------------
→ (∀ {x y A B} → Γ , y ⦂ B ∋ x ⦂ A → Δ , y ⦂ B ∋ x ⦂ A)
ext ρ Z = Z
ext ρ (S x≢y ∋x) = S x≢y (ρ ∋x)
\end{code}
```
Let `ρ` be the name of the map that takes evidence that
`x` appears in `Γ` to evidence that `x` appears in `Δ`.
The proof is by case analysis of the evidence that `x` appears
@ -473,7 +473,7 @@ applying `ρ` to find the evidence that `x` appears in `Δ`.
With the extension lemma under our belts, it is straightforward to
prove renaming preserves types:
\begin{code}
```
rename : ∀ {Γ Δ}
→ (∀ {x A} → Γ ∋ x ⦂ A → Δ ∋ x ⦂ A)
----------------------------------
@ -485,7 +485,7 @@ rename ρ ⊢zero = ⊢zero
rename ρ (⊢suc ⊢M) = ⊢suc (rename ρ ⊢M)
rename ρ (⊢case ⊢L ⊢M ⊢N) = ⊢case (rename ρ ⊢L) (rename ρ ⊢M) (rename (ext ρ) ⊢N)
rename ρ (⊢μ ⊢M) = ⊢μ (rename (ext ρ) ⊢M)
\end{code}
```
As before, let `ρ` be the name of the map that takes evidence that
`x` appears in `Γ` to evidence that `x` appears in `Δ`. We induct
on the evidence that `M` is well-typed in `Γ`. Let's unpack the
@ -514,7 +514,7 @@ We have three important corollaries, each proved by constructing
a suitable map between contexts.
First, a closed term can be weakened to any context:
\begin{code}
```
weaken : ∀ {Γ M A}
→ ∅ ⊢ M ⦂ A
----------
@ -526,13 +526,13 @@ weaken {Γ} ⊢M = rename ρ ⊢M
---------
→ Γ ∋ z ⦂ C
ρ ()
\end{code}
```
Here the map `ρ` is trivial, since there are no possible
arguments in the empty context `∅`.
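For instance (a sketch that is not part of the original text), the closed typing `⊢2+2` from the previous chapter can be weakened into a non-empty context:
```
_ : ∅ , "x" ⦂ `ℕ ⊢ plus · two · two ⦂ `ℕ
_ = weaken ⊢2+2
```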
Second, if the last two variables in a context are equal then we can
drop the shadowed one:
\begin{code}
```
drop : ∀ {Γ x M A B C}
→ Γ , x ⦂ A , x ⦂ B ⊢ M ⦂ C
--------------------------
@ -546,7 +546,7 @@ drop {Γ} {x} {M} {A} {B} {C} ⊢M = rename ρ ⊢M
ρ Z = Z
ρ (S x≢x Z) = ⊥-elim (x≢x refl)
ρ (S z≢x (S _ ∋z)) = S z≢x ∋z
\end{code}
```
Here map `ρ` can never be invoked on the inner occurrence of `x` since
it is masked by the outer occurrence. Skipping over the `x` in the
first position can only happen if the variable looked for differs from
@ -555,7 +555,7 @@ found in the second position, which also contains `x`, this leads to a
contradiction (evidenced by `x≢x refl`).
Third, if the last two variables in a context differ then we can swap them:
\begin{code}
```
swap : ∀ {Γ x y M A B C}
→ x ≢ y
→ Γ , y ⦂ B , x ⦂ A ⊢ M ⦂ C
@ -570,7 +570,7 @@ swap {Γ} {x} {y} {M} {A} {B} {C} x≢y ⊢M = rename ρ ⊢M
ρ Z = S x≢y Z
ρ (S y≢x Z) = Z
ρ (S z≢x (S z≢y ∋z)) = S z≢y (S z≢x ∋z)
\end{code}
```
Here the renaming map takes a variable at the end into a variable one
from the end, and vice versa. The first line is responsible for
moving `x` from a position at the end to a position one from the end
@ -597,7 +597,7 @@ variables the context grows. So for the induction to go through,
we require an arbitrary context `Γ`, as in the statement of the lemma.
Here is the formal statement and proof that substitution preserves types:
\begin{code}
```
subst : ∀ {Γ x N V A B}
→ ∅ ⊢ V ⦂ A
→ Γ , x ⦂ A ⊢ N ⦂ B
@ -622,7 +622,7 @@ subst {x = y} ⊢V (⊢case {x = x} ⊢L ⊢M ⊢N) with x ≟ y
subst {x = y} ⊢V (⊢μ {x = x} ⊢M) with x ≟ y
... | yes refl = ⊢μ (drop ⊢M)
... | no x≢y = ⊢μ (subst ⊢V (swap x≢y ⊢M))
\end{code}
```
We induct on the evidence that `N` is well-typed in the
context `Γ` extended by `x`.
@ -784,9 +784,9 @@ should factor dealing with bound variables into a single function,
defined by mutual recursion with the proof that substitution
preserves types.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Preservation
@ -794,7 +794,7 @@ preserves types.
Once we have shown that substitution preserves types, showing
that reduction preserves types is straightforward:
\begin{code}
```
preserve : ∀ {M N A}
→ ∅ ⊢ M ⦂ A
→ M —→ N
@ -811,7 +811,7 @@ preserve (⊢case ⊢L ⊢M ⊢N) (ξ-case L—→L) = ⊢case (pre
preserve (⊢case ⊢zero ⊢M ⊢N) β-zero = ⊢M
preserve (⊢case (⊢suc ⊢V) ⊢M ⊢N) (β-suc VV) = subst ⊢V ⊢N
preserve (⊢μ ⊢M) (β-μ) = subst (⊢μ ⊢M) ⊢M
\end{code}
```
The proof never mentions the types of `M` or `N`,
so in what follows we choose type names as convenient.
@ -874,7 +874,7 @@ function that computes the reduction sequence from any given closed,
well-typed term to its value, if it has one.
Some terms may reduce forever. Here is a simple example:
\begin{code}
```
sucμ = μ "x" ⇒ `suc (` "x")
_ =
@ -888,7 +888,7 @@ _ =
`suc `suc `suc sucμ
-- ...
\end{code}
```
Since every Agda computation must terminate,
we cannot simply ask Agda to reduce a term to a value.
Instead, we will provide a natural number to Agda, and permit it
@ -910,13 +910,13 @@ per unit of gas.
By analogy, we will use the name _gas_ for the parameter which puts a
bound on the number of reduction steps. `Gas` is specified by a natural number:
\begin{code}
```
data Gas : Set where
gas : ℕ → Gas
\end{code}
```
When our evaluator returns a term `N`, it will either give evidence that
`N` is a value or indicate that it ran out of gas:
\begin{code}
```
data Finished (N : Term) : Set where
done :
@ -927,11 +927,11 @@ data Finished (N : Term) : Set where
out-of-gas :
----------
Finished N
\end{code}
```
Given a term `L` of type `A`, the evaluator will, for some `N`, return
a reduction sequence from `L` to `N` and an indication of whether
reduction finished:
\begin{code}
```
data Steps (L : Term) : Set where
steps : ∀ {N}
@ -939,10 +939,10 @@ data Steps (L : Term) : Set where
→ Finished N
----------
→ Steps L
\end{code}
```
The evaluator takes gas and evidence that a term is well-typed,
and returns the corresponding steps:
\begin{code}
```
eval : ∀ {L A}
→ Gas
→ ∅ ⊢ L ⦂ A
@ -953,7 +953,7 @@ eval {L} (gas (suc m)) ⊢L with progress ⊢L
... | done VL = steps (L ∎) (done VL)
... | step L—→M with eval (gas m) (preserve ⊢L L—→M)
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
Let `L` be the name of the term we are reducing, and `⊢L` be the
evidence that `L` is well-typed. We consider the amount of gas
remaining. There are two possibilities:
@ -984,15 +984,15 @@ remaining. There are two possibilities:
We can now use Agda to compute the non-terminating reduction
sequence given earlier. First, we show that the term `sucμ`
is well-typed:
\begin{code}
```
⊢sucμ : ∅ ⊢ μ "x" ⇒ `suc ` "x" ⦂ `ℕ
⊢sucμ = ⊢μ (⊢suc (⊢` ∋x))
where
∋x = Z
\end{code}
```
To show the first three steps of the infinite reduction
sequence, we evaluate with three steps worth of gas:
\begin{code}
```
_ : eval (gas 3) ⊢sucμ ≡
steps
(μ "x" ⇒ `suc ` "x"
@ -1005,12 +1005,12 @@ _ : eval (gas 3) ⊢sucμ ≡
∎)
out-of-gas
_ = refl
\end{code}
```
Similarly, we can use Agda to compute the reduction sequences given
in the previous chapter. We start with the Church numeral two
applied to successor and zero. Supplying 100 steps of gas is more than enough:
\begin{code}
```
_ : eval (gas 100) (⊢twoᶜ · ⊢sucᶜ · ⊢zero) ≡
steps
((ƛ "s" ⇒ (ƛ "z" ⇒ ` "s" · (` "s" · ` "z"))) · (ƛ "n" ⇒ `suc ` "n")
@ -1027,7 +1027,7 @@ _ : eval (gas 100) (⊢twoᶜ · ⊢sucᶜ · ⊢zero) ≡
∎)
(done (V-suc (V-suc V-zero)))
_ = refl
\end{code}
```
The example above was generated by using `C-c C-n` to normalise the
left-hand side of the equation and pasting in the result as the
right-hand side of the equation. The example reduction of the
@ -1035,7 +1035,7 @@ previous chapter was derived from this result, reformatting and
writing `twoᶜ` and `sucᶜ` in place of their expansions.
Next, we show two plus two is four:
\begin{code}
```
_ : eval (gas 100) ⊢2+2 ≡
steps
((μ "+" ⇒
@ -1195,12 +1195,12 @@ _ : eval (gas 100) ⊢2+2 ≡
∎)
(done (V-suc (V-suc (V-suc (V-suc V-zero)))))
_ = refl
\end{code}
```
Again, the derivation in the previous chapter was derived by
editing the above.
Similarly, we can evaluate the corresponding term for Church numerals:
\begin{code}
```
_ : eval (gas 100) ⊢2+2ᶜ ≡
steps
((ƛ "m" ⇒
@ -1264,7 +1264,7 @@ _ : eval (gas 100) ⊢2+2ᶜ ≡
∎)
(done (V-suc (V-suc (V-suc (V-suc V-zero)))))
_ = refl
\end{code}
```
And again, the example in the previous section was derived by editing the
above.
@ -1272,9 +1272,9 @@ above.
Using the evaluator, confirm that two times two is four.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise: `progress-preservation`
@ -1282,9 +1282,9 @@ Using the evaluator, confirm that two times two is four.
Without peeking at their statements above, write down the progress
and preservation theorems for the simply typed lambda-calculus.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `subject_expansion`
@ -1298,55 +1298,55 @@ Its opposite is _subject expansion_, which holds if
Find two counter-examples to subject expansion, one
with case expressions and one not involving case expressions.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Well-typed terms don't get stuck
A term is _normal_ if it cannot reduce:
\begin{code}
```
Normal : Term → Set
Normal M = ∀ {N} → ¬ (M —→ N)
\end{code}
```
A term is _stuck_ if it is normal yet not a value:
\begin{code}
```
Stuck : Term → Set
Stuck M = Normal M × ¬ Value M
\end{code}
```
Using progress, it is easy to show that no well-typed term is stuck:
\begin{code}
```
postulate
unstuck : ∀ {M A}
→ ∅ ⊢ M ⦂ A
-----------
→ ¬ (Stuck M)
\end{code}
```
Using preservation, it is easy to show that after any number of steps,
a well-typed term remains well-typed:
\begin{code}
```
postulate
preserves : ∀ {M N A}
→ ∅ ⊢ M ⦂ A
→ M —↠ N
---------
→ ∅ ⊢ N ⦂ A
\end{code}
```
An easy consequence is that starting from a well-typed term, taking
any number of reduction steps leads to a term that is not stuck:
\begin{code}
```
postulate
wttdgs : ∀ {M N A}
→ ∅ ⊢ M ⦂ A
→ M —↠ N
-----------
→ ¬ (Stuck N)
\end{code}
```
Felleisen and Wright, who introduced proofs via progress and
preservation, summarised this result with the slogan _well-typed terms
don't get stuck_. (They were referring to earlier work by Robin
@ -1358,17 +1358,17 @@ showed _well-typed terms don't go wrong_.)
Give an example of an ill-typed term that does get stuck.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `unstuck` (recommended)
Provide proofs of the three postulates, `unstuck`, `preserves`, and `wttdgs` above.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Reduction is deterministic
@ -1379,15 +1379,15 @@ A case term takes four arguments (three subterms and a bound
variable), and our proof will need a variant
of congruence to deal with functions of four arguments. It
is exactly analogous to `cong` and `cong₂` as defined previously:
\begin{code}
```
cong₄ : ∀ {A B C D E : Set} (f : A → B → C → D → E)
{s w : A} {t x : B} {u y : C} {v z : D}
→ s ≡ w → t ≡ x → u ≡ y → v ≡ z → f s t u v ≡ f w x y z
cong₄ f refl refl refl refl = refl
\end{code}
```
It is now straightforward to show that reduction is deterministic:
\begin{code}
```
det : ∀ {M M′ M″}
→ (M —→ M′)
→ (M —→ M″)
@ -1412,7 +1412,7 @@ det β-zero β-zero = refl
det (β-suc VL) (ξ-case L—→L″) = ⊥-elim (V¬—→ (V-suc VL) L—→L″)
det (β-suc _) (β-suc _) = refl
det β-μ β-μ = refl
\end{code}
```
The proof is by induction over possible reductions. We consider
three typical cases:

View file

@ -6,15 +6,15 @@ permalink : /Quantifiers/
next : /Decidable/
---
\begin{code}
```
module plfa.Quantifiers where
\end{code}
```
This chapter introduces universal and existential quantification.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl)
open import Data.Nat using (ℕ; zero; suc; _+_; _*_)
@ -22,7 +22,7 @@ open import Relation.Nullary using (¬_)
open import Data.Product using (_×_; proj₁) renaming (_,_ to ⟨_,_⟩)
open import Data.Sum using (_⊎_)
open import plfa.Isomorphism using (_≃_; extensionality)
\end{code}
```
## Universals
@ -51,14 +51,14 @@ M` provides evidence that `B M` holds. In other words, evidence that
Put another way, if we know that `∀ (x : A) → B x` holds and that `M`
is a term of type `A` then we may conclude that `B M` holds:
\begin{code}
```
∀-elim : ∀ {A : Set} {B : A → Set}
→ (L : ∀ (x : A) → B x)
→ (M : A)
-----------------
→ B M
∀-elim L M = L M
\end{code}
```
As with `→-elim`, the rule corresponds to function application.
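For instance, a small check along these lines should go through, specialising a universal statement about naturals to a particular number (the name `∀-elim-example` is ours, added for illustration):
```
-- A small illustrative sketch: instantiating a universal at 2.
∀-elim-example : ∀ {B : ℕ → Set} → (∀ (n : ℕ) → B n) → B 2
∀-elim-example ∀B = ∀-elim ∀B 2
```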
Functions arise as a special case of dependent functions,
@ -84,34 +84,34 @@ dependent product is ambiguous.
#### Exercise `∀-distrib-×` (recommended)
Show that universals distribute over conjunction:
\begin{code}
```
postulate
∀-distrib-× : ∀ {A : Set} {B C : A → Set} →
(∀ (x : A) → B x × C x) ≃ (∀ (x : A) → B x) × (∀ (x : A) → C x)
\end{code}
```
Compare this with the result (`→-distrib-×`) in
Chapter [Connectives][plfa.Connectives].
#### Exercise `⊎∀-implies-∀⊎`
Show that a disjunction of universals implies a universal of disjunctions:
\begin{code}
```
postulate
⊎∀-implies-∀⊎ : ∀ {A : Set} {B C : A → Set} →
(∀ (x : A) → B x) ⊎ (∀ (x : A) → C x) → ∀ (x : A) → B x ⊎ C x
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∀-×`
Consider the following type.
\begin{code}
```
data Tri : Set where
aa : Tri
bb : Tri
cc : Tri
\end{code}
```
Let `B` be a type indexed by `Tri`, that is `B : Tri → Set`.
Show that `∀ (x : Tri) → B x` is isomorphic to `B aa × B bb × B cc`.
@ -128,16 +128,16 @@ the proposition `B x` with each free occurrence of `x` replaced by
We formalise existential quantification by declaring a suitable
inductive type:
\begin{code}
```
data Σ (A : Set) (B : A → Set) : Set where
⟨_,_⟩ : (x : A) → B x → Σ A B
\end{code}
```
We define a convenient syntax for existentials as follows:
\begin{code}
```
Σ-syntax = Σ
infix 2 Σ-syntax
syntax Σ-syntax A (λ x → B) = Σ[ x ∈ A ] B
\end{code}
```
This is our first use of a syntax declaration, which specifies that
the term on the left may be written with the syntax on the right.
The special syntax is available only when the identifier
@ -148,12 +148,12 @@ Evidence that `Σ[ x ∈ A ] B x` holds is of the form
that `B M` holds.
Equivalently, we could also declare existentials as a record type:
\begin{code}
```
record Σ′ (A : Set) (B : A → Set) : Set where
field
proj₁ : A
proj₂ : B proj₁
\end{code}
```
Here record construction
record
@ -191,27 +191,27 @@ product and since existentials also have a claim to the name dependent sum.
A common notation for existentials is `∃` (analogous to `∀` for universals).
We follow the convention of the Agda standard library, and reserve this
notation for the case where the domain of the bound variable is left implicit:
\begin{code}
```
∃ : ∀ {A : Set} (B : A → Set) → Set
∃ {A} B = Σ A B
∃-syntax = ∃
syntax ∃-syntax (λ x → B) = ∃[ x ] B
\end{code}
```
The special syntax is available only when the identifier `∃-syntax` is imported.
We will tend to use this syntax, since it is shorter and more familiar.
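For example, a witness and its evidence can be packaged with the new syntax roughly as follows (a small check added for illustration):
```
-- A small illustrative check: 2 witnesses that some natural,
-- increased by one, equals 3.
_ : ∃[ n ] (n + 1 ≡ 3)
_ = ⟨ 2 , refl ⟩
```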
Given evidence that `∀ x → B x → C` holds, where `C` does not contain
`x` as a free variable, and given evidence that `∃[ x ] B x` holds, we
may conclude that `C` holds:
\begin{code}
```
∃-elim : ∀ {A : Set} {B : A → Set} {C : Set}
→ (∀ x → B x → C)
→ ∃[ x ] B x
---------------
→ C
∃-elim f ⟨ x , y ⟩ = f x y
\end{code}
```
In other words, if we know for every `x` of type `A` that `B x`
implies `C`, and we know for some `x` of type `A` that `B x` holds,
then we may conclude that `C` holds. This is because we may
@ -220,7 +220,7 @@ instantiate that proof that `∀ x → B x → C` to any value `x` of type
the evidence for `∃[ x ] B x`.
Indeed, the converse also holds, and the two together form an isomorphism:
\begin{code}
```
∀∃-currying : ∀ {A : Set} {B : A → Set} {C : Set}
→ (∀ x → B x → C) ≃ (∃[ x ] B x → C)
∀∃-currying =
@ -230,7 +230,7 @@ Indeed, the converse also holds, and the two together form an isomorphism:
; from∘to = λ{ f → refl }
; to∘from = λ{ g → extensionality λ{ ⟨ x , y ⟩ → refl }}
}
\end{code}
```
The result can be viewed as a generalisation of currying. Indeed, the code to
establish the isomorphism is identical to what we wrote when discussing
[implication][plfa.Connectives#implication].
@ -238,20 +238,20 @@ establish the isomorphism is identical to what we wrote when discussing
#### Exercise `∃-distrib-⊎` (recommended)
Show that existentials distribute over disjunction:
\begin{code}
```
postulate
∃-distrib-⊎ : ∀ {A : Set} {B C : A → Set} →
∃[ x ] (B x ⊎ C x) ≃ (∃[ x ] B x) ⊎ (∃[ x ] C x)
\end{code}
```
#### Exercise `∃×-implies-×∃`
Show that an existential of conjunctions implies a conjunction of existentials:
\begin{code}
```
postulate
∃×-implies-×∃ : ∀ {A : Set} {B C : A → Set} →
∃[ x ] (B x × C x) → (∃[ x ] B x) × (∃[ x ] C x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∃-⊎`
@ -264,7 +264,7 @@ Show that `∃[ x ] B x` is isomorphic to `B aa ⊎ B bb ⊎ B cc`.
Recall the definitions of `even` and `odd` from
Chapter [Relations][plfa.Relations]:
\begin{code}
```
data even : ℕ → Set
data odd : ℕ → Set
@ -282,7 +282,7 @@ data odd where
→ even n
-----------
→ odd (suc n)
\end{code}
```
A number is even if it is zero or the successor of an odd number, and
odd if it is the successor of an even number.
@ -299,7 +299,7 @@ the constant term in a sum last. Here we've reversed each of those
conventions, because doing so eases the proof.
Here is the proof in the forward direction:
\begin{code}
```
even-∃ : ∀ {n : ℕ} → even n → ∃[ m ] ( m * 2 ≡ n)
odd-∃ : ∀ {n : ℕ} → odd n → ∃[ m ] (1 + m * 2 ≡ n)
@ -309,7 +309,7 @@ even-∃ (even-suc o) with odd-∃ o
odd-∃ (odd-suc e) with even-∃ e
... | ⟨ m , refl ⟩ = ⟨ m , refl ⟩
\end{code}
```
We define two mutually recursive functions. Given
evidence that `n` is even or odd, we return a
number `m` and evidence that `m * 2 ≡ n` or `1 + m * 2 ≡ n`.
@ -333,7 +333,7 @@ substituting for `n`.
This completes the proof in the forward direction.
Here is the proof in the reverse direction:
\begin{code}
```
∃-even : ∀ {n : ℕ} → ∃[ m ] ( m * 2 ≡ n) → even n
∃-odd : ∀ {n : ℕ} → ∃[ m ] (1 + m * 2 ≡ n) → odd n
@ -341,7 +341,7 @@ Here is the proof in the reverse direction:
∃-even ⟨ suc m , refl ⟩ = even-suc (∃-odd ⟨ m , refl ⟩)
∃-odd ⟨ m , refl ⟩ = odd-suc (∃-even ⟨ m , refl ⟩)
\end{code}
```
Given a number that is twice some other number we must show it is
even, and a number that is one more than twice some other number we
must show it is odd. We induct over the evidence of the existential,
@ -367,18 +367,18 @@ How do the proofs become more difficult if we replace `m * 2` and `1 + m * 2`
by `2 * m` and `2 * m + 1`? Rewrite the proofs of `∃-even` and `∃-odd` when
restated in this way.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `∃-+-≤`
Show that `y ≤ z` holds if and only if there exists an `x` such that
`x + y ≡ z`.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Existentials, Universals, and Negation
@ -388,7 +388,7 @@ of a negation. Considering that existentials are generalised
disjunction and universals are generalised conjunction, this
result is analogous to the one which tells us that negation
of a disjunction is isomorphic to a conjunction of negations:
\begin{code}
```
¬∃≃∀¬ : ∀ {A : Set} {B : A → Set}
→ (¬ ∃[ x ] B x) ≃ ∀ x → ¬ B x
¬∃≃∀¬ =
@ -398,7 +398,7 @@ of a disjunction is isomorphic to a conjunction of negations:
; from∘to = λ{ ¬∃xy → extensionality λ{ ⟨ x , y ⟩ → refl } }
; to∘from = λ{ ∀¬xy → refl }
}
\end{code}
```
In the `to` direction, we are given a value `¬∃xy` of type
`¬ ∃[ x ] B x`, and need to show that given a value
`x` that `¬ B x` follows, in other words, from
@ -419,13 +419,13 @@ requires extensionality.
#### Exercise `∃¬-implies-¬∀` (recommended)
Show that existential of a negation implies negation of a universal:
\begin{code}
```
postulate
∃¬-implies-¬∀ : ∀ {A : Set} {B : A → Set}
→ ∃[ x ] (¬ B x)
--------------
→ ¬ (∀ x → B x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
@ -436,12 +436,12 @@ Recall that Exercises
[Bin-laws][plfa.Induction#Bin-laws], and
[Bin-predicates][plfa.Relations#Bin-predicates]
define a datatype of bitstrings representing natural numbers:
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
And ask you to define the following functions and predicates:
to : ℕ → Bin
@ -462,17 +462,17 @@ And to establish the following properties:
Using the above, establish that there is an isomorphism between `ℕ` and
`∃[ x ](Can x)`.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Standard library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Data.Product using (Σ; _,_; ∃; Σ-syntax; ∃-syntax)
\end{code}
```
## Unicode

View file

@ -0,0 +1,73 @@
---
title : "Reflection: Proof by Reflection"
layout : page
prev : /Decidable/
permalink : /Reflection/
next : /Lists/
---
```
module plfa.Reflection where
open import plfa.Lambda hiding (ƛ′_⇒_; _≠_; Ch; ⊢twoᶜ)
open import Function using (flip; _$_; const)
open import Data.Bool using (Bool; true; false)
open import Data.Empty using (⊥)
open import Data.Unit using (⊤)
open import Data.String using (_≟_)
open import Relation.Nullary using (¬_; Dec; yes; no)
open import Relation.Binary.PropositionalEquality using (_≢_)
T : Bool → Set
T true = ⊤
T false = ⊥
⌊_⌋ : {P : Set} → Dec P → Bool
⌊ yes _ ⌋ = true
⌊ no _ ⌋ = false
True : {P : Set} → Dec P → Set
True Q = T ⌊ Q ⌋
not : Bool → Bool
not false = true
not true = false
False : {P : Set} → Dec P → Set
False Q = T (not ⌊ Q ⌋)
toWitnessFalse : {P : Set} {Q : Dec P} → False Q → ¬ P
toWitnessFalse {Q = yes _} ()
toWitnessFalse {Q = no ¬p} _ = ¬p
data is-` : Term → Set where
`_ : (x : Id) → is-` (` x)
is-`? : (M : Term) → Dec (is-` M)
is-`? (` x) = yes (` x)
is-`? (ƛ x ⇒ N) = no (λ ())
is-`? (L · M) = no (λ ())
is-`? `zero = no (λ ())
is-`? (`suc M) = no (λ ())
is-`? case L [zero⇒ M |suc x ⇒ N ] = no (λ ())
is-`? (μ x ⇒ N) = no (λ ())
ƛ′_⇒_ : (x : Term) {{p : True (is-`? x)}} (N : Term) → Term
ƛ′ (` x) ⇒ N = ƛ x ⇒ N
S′ : ∀ {Γ x y A B}
→ {{p : False (x ≟ y)}}
→ Γ ∋ x ⦂ A
------------------
→ Γ , y ⦂ B ∋ x ⦂ A
S′ {{p}} Γ∋x⦂A = S (toWitnessFalse p) Γ∋x⦂A
Ch : Type → Type
Ch A = (A ⇒ A) ⇒ A ⇒ A
⊢twoᶜ : ∀ {Γ A} → Γ ⊢ twoᶜ ⦂ Ch A
⊢twoᶜ = ⊢ƛ (⊢ƛ (⊢` ∋s · (⊢` ∋s · ⊢` ∋z)))
where
∋s = S′ Z
∋z = Z
```

View file

@ -6,21 +6,21 @@ permalink : /Relations/
next : /Equality/
---
\begin{code}
```
module plfa.Relations where
\end{code}
```
After having defined operations such as addition and multiplication,
the next step is to define relations, such as _less than or equal_.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong)
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Data.Nat.Properties using (+-comm)
\end{code}
```
## Defining relations
@ -46,7 +46,7 @@ definition as a pair of inference rules:
suc m ≤ suc n
And here is the definition in Agda:
\begin{code}
```
data _≤_ : ℕ → ℕ → Set where
z≤n : ∀ {n : }
@ -57,7 +57,7 @@ data _≤_ : → Set where
→ m ≤ n
-------------
→ suc m ≤ suc n
\end{code}
```
Here `z≤n` and `s≤s` (with no spaces) are constructor names, while
`zero ≤ n`, and `m ≤ n` and `suc m ≤ suc n` (with spaces) are types.
This is our first use of an _indexed_ datatype, where the type `m ≤ n`
@ -91,10 +91,10 @@ For example, here in inference rule notation is the proof that
2 ≤ 4
And here is the corresponding Agda proof:
\begin{code}
```
_ : 2 ≤ 4
_ = s≤s (s≤s z≤n)
\end{code}
```
@ -120,29 +120,29 @@ If we wish, it is possible to provide implicit arguments explicitly by
writing the arguments inside curly braces. For instance, here is the
Agda proof that `2 ≤ 4` repeated, with the implicit arguments made
explicit:
\begin{code}
```
_ : 2 ≤ 4
_ = s≤s {1} {3} (s≤s {0} {2} (z≤n {2}))
\end{code}
```
One may also identify implicit arguments by name:
\begin{code}
```
_ : 2 ≤ 4
_ = s≤s {m = 1} {n = 3} (s≤s {m = 0} {n = 2} (z≤n {n = 2}))
\end{code}
```
In the latter format, you may only supply some implicit arguments:
\begin{code}
```
_ : 2 ≤ 4
_ = s≤s {n = 3} (s≤s {n = 2} z≤n)
\end{code}
```
It is not permitted to swap implicit arguments, even when named.
## Precedence
We declare the precedence for comparison as follows:
\begin{code}
```
infix 4 _≤_
\end{code}
```
We set the precedence of `_≤_` at level 4, so it binds less tightly
than `_+_` at level 6 and hence `1 + 2 ≤ 3` parses as `(1 + 2) ≤ 3`.
We write `infix` to indicate that the operator does not associate to
@ -168,26 +168,26 @@ want to go from bigger things to smaller things.
There is only one way to prove that `suc m ≤ suc n`, for any `m`
and `n`. This lets us invert our previous rule.
\begin{code}
```
inv-s≤s : ∀ {m n : ℕ}
→ suc m ≤ suc n
-------------
→ m ≤ n
inv-s≤s (s≤s m≤n) = m≤n
\end{code}
```
Not every rule is invertible; indeed, the rule for `z≤n` has
no non-implicit hypotheses, so there is nothing to invert.
But often inversions of this kind hold.
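For instance, inversion can be applied repeatedly; a check along these lines should be accepted:
```
-- A small illustrative check: inverting twice turns evidence of
-- 2 ≤ 4 into evidence of 0 ≤ 2.
_ : 0 ≤ 2
_ = inv-s≤s (inv-s≤s (s≤s (s≤s z≤n)))
```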
Another example of inversion is showing that there is
only one way a number can be less than or equal to zero.
\begin{code}
```
inv-z≤n : ∀ {m : ℕ}
→ m ≤ zero
--------
→ m ≡ zero
inv-z≤n z≤n = refl
\end{code}
```
## Properties of ordering relations
@ -227,15 +227,15 @@ partial order but not a total order.
Give an example of a preorder that is not a partial order.
\begin{code}
```
-- Your code goes here
\end{code}
```
Give an example of a partial order that is not a total order.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Reflexivity
@ -243,13 +243,13 @@ The first property to prove about comparison is that it is reflexive:
for any natural `n`, the relation `n ≤ n` holds. We follow the
convention in the standard library and make the argument implicit,
as that will make it easier to invoke reflexivity:
\begin{code}
```
≤-refl : ∀ {n : ℕ}
-----
→ n ≤ n
≤-refl {zero} = z≤n
≤-refl {suc n} = s≤s ≤-refl
\end{code}
```
The proof is a straightforward induction on the implicit argument `n`.
In the base case, `zero ≤ zero` holds by `z≤n`. In the inductive
case, the inductive hypothesis `≤-refl {n}` gives us a proof of `n ≤
@ -264,7 +264,7 @@ using holes and the `C-c C-c`, `C-c C-,`, and `C-c C-r` commands.
The second property to prove about comparison is that it is
transitive: for any naturals `m`, `n`, and `p`, if `m ≤ n` and `n ≤ p`
hold, then `m ≤ p` holds. Again, `m`, `n`, and `p` are implicit:
\begin{code}
```
≤-trans : ∀ {m n p : ℕ}
→ m ≤ n
→ n ≤ p
@ -272,7 +272,7 @@ hold, then `m ≤ p` holds. Again, `m`, `n`, and `p` are implicit:
→ m ≤ p
≤-trans z≤n _ = z≤n
≤-trans (s≤s m≤n) (s≤s n≤p) = s≤s (≤-trans m≤n n≤p)
\end{code}
```
Here the proof is by induction on the _evidence_ that `m ≤ n`. In the
base case, the first inequality holds by `z≤n` and must show `zero ≤
p`, which follows immediately by `z≤n`. In this case, the fact that
@ -291,7 +291,7 @@ inequality implies that it is `zero`. Agda can determine that such a
case cannot arise, and does not require (or permit) it to be listed.
Alternatively, we could make the implicit parameters explicit:
\begin{code}
```
≤-trans′ : ∀ (m n p : ℕ)
→ m ≤ n
→ n ≤ p
@ -299,7 +299,7 @@ Alternatively, we could make the implicit parameters explicit:
→ m ≤ p
≤-trans′ zero _ _ z≤n _ = z≤n
≤-trans′ (suc m) (suc n) (suc p) (s≤s m≤n) (s≤s n≤p) = s≤s (≤-trans′ m n p m≤n n≤p)
\end{code}
```
One might argue that this is clearer or one might argue that the extra
length obscures the essence of the proof. We will usually opt for
shorter proofs.
@ -318,7 +318,7 @@ using holes and the `C-c C-c`, `C-c C-,`, and `C-c C-r` commands.
The third property to prove about comparison is that it is
antisymmetric: for all naturals `m` and `n`, if both `m ≤ n` and `n ≤
m` hold, then `m ≡ n` holds:
\begin{code}
```
≤-antisym : ∀ {m n : ℕ}
→ m ≤ n
→ n ≤ m
@ -326,7 +326,7 @@ m` hold, then `m ≡ n` holds:
→ m ≡ n
≤-antisym z≤n z≤n = refl
≤-antisym (s≤s m≤n) (s≤s n≤m) = cong suc (≤-antisym m≤n n≤m)
\end{code}
```
Again, the proof is by induction over the evidence that `m ≤ n`
and `n ≤ m` hold.
@ -346,9 +346,9 @@ follows by congruence.
The above proof omits cases where one argument is `z≤n` and one
argument is `s≤s`. Why is it ok to omit them?
\begin{code}
```
-- Your code goes here
\end{code}
```
## Total
@ -358,7 +358,7 @@ for any naturals `m` and `n` either `m ≤ n` or `n ≤ m`, or both if
`m` and `n` are equal.
We specify what it means for inequality to be total:
\begin{code}
```
data Total (m n : ℕ) : Set where
forward :
@ -370,7 +370,7 @@ data Total (m n : ) : Set where
n ≤ m
---------
→ Total m n
\end{code}
```
Evidence that `Total m n` holds is either of the form
`forward m≤n` or `flipped n≤m`, where `m≤n` and `n≤m` are
evidence of `m ≤ n` and `n ≤ m` respectively.
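For example, such evidence can be given directly (a small check added for illustration):
```
-- A small illustrative check: for 1 and 3 the forward case applies.
_ : Total 1 3
_ = forward (s≤s z≤n)
```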
@ -382,7 +382,7 @@ be introduced in Chapter [Connectives][plfa.Connectives].)
This is our first use of a datatype with _parameters_,
in this case `m` and `n`. It is equivalent to the following
indexed datatype:
\begin{code}
```
data Total′ : ℕ → ℕ → Set where
forward′ : ∀ {m n : ℕ}
@ -394,7 +394,7 @@ data Total : → Set where
→ n ≤ m
----------
→ Total m n
\end{code}
```
Each parameter of the type translates as an implicit parameter of each
constructor. Unlike an indexed datatype, where the indexes can vary
(as in `zero ≤ n` and `suc m ≤ suc n`), in a parameterised datatype
@ -404,14 +404,14 @@ occasionally aid Agda's termination checker, so we will use them in
preference to indexed types when possible.
With that preliminary out of the way, we specify and prove totality:
\begin{code}
```
≤-total : ∀ (m n : ℕ) → Total m n
≤-total zero n = forward z≤n
≤-total (suc m) zero = flipped z≤n
≤-total (suc m) (suc n) with ≤-total m n
... | forward m≤n = forward (s≤s m≤n)
... | flipped n≤m = flipped (s≤s n≤m)
\end{code}
```
In this case the proof is by induction over both the first
and second arguments. We perform a case analysis:
@ -443,7 +443,7 @@ and the right-hand side of the equation.
Every use of `with` is equivalent to defining a helper function. For
example, the definition above is equivalent to the following:
\begin{code}
```
≤-total′ : ∀ (m n : ℕ) → Total m n
≤-total′ zero n = forward z≤n
≤-total′ (suc m) zero = flipped z≤n
@ -452,7 +452,7 @@ example, the definition above is equivalent to the following:
helper : Total m n → Total (suc m) (suc n)
helper (forward m≤n) = forward (s≤s m≤n)
helper (flipped n≤m) = flipped (s≤s n≤m)
\end{code}
```
This is also our first use of a `where` clause in Agda. The keyword `where` is
followed by one or more definitions, which must be indented. Any variables
bound on the left-hand side of the preceding equation (in this case, `m` and
@ -463,14 +463,14 @@ of the preceding equation.
If both arguments are equal, then both cases hold and we could return evidence
of either. In the code above we return the forward case, but there is a
variant that returns the flipped case:
\begin{code}
```
≤-total″ : ∀ (m n : ℕ) → Total m n
≤-total″ m zero = flipped z≤n
≤-total″ zero (suc n) = forward z≤n
≤-total″ (suc m) (suc n) with ≤-total″ m n
... | forward m≤n = forward (s≤s m≤n)
... | flipped n≤m = flipped (s≤s n≤m)
\end{code}
```
It differs from the original code in that it pattern
matches on the second argument before the first argument.
@ -486,14 +486,14 @@ is monotonic with regard to inequality, meaning:
The proof is straightforward using the techniques we have learned, and is best
broken into three parts. First, we deal with the special case of showing
addition is monotonic on the right:
\begin{code}
```
+-monoʳ-≤ : ∀ (n p q : ℕ)
→ p ≤ q
-------------
→ n + p ≤ n + q
+-monoʳ-≤ zero p q p≤q = p≤q
+-monoʳ-≤ (suc n) p q p≤q = s≤s (+-monoʳ-≤ n p q p≤q)
\end{code}
```
The proof is by induction on the first argument.
* _Base case_: The first argument is `zero` in which case
@ -508,25 +508,25 @@ The proof is by induction on the first argument.
Second, we deal with the special case of showing addition is
monotonic on the left. This follows from the previous
result and the commutativity of addition:
\begin{code}
```
+-monoˡ-≤ : ∀ (m n p : ℕ)
→ m ≤ n
-------------
→ m + p ≤ n + p
+-monoˡ-≤ m n p m≤n rewrite +-comm m p | +-comm n p = +-monoʳ-≤ p m n m≤n
\end{code}
```
Rewriting by `+-comm m p` and `+-comm n p` converts `m + p ≤ n + p` into
`p + m ≤ p + n`, which is proved by invoking `+-monoʳ-≤ p m n m≤n`.
Third, we combine the two previous results:
\begin{code}
```
+-mono-≤ : ∀ (m n p q : ℕ)
→ m ≤ n
→ p ≤ q
-------------
→ m + p ≤ n + q
+-mono-≤ m n p q m≤n p≤q = ≤-trans (+-monoˡ-≤ m n p m≤n) (+-monoʳ-≤ n p q p≤q)
\end{code}
```
Invoking `+-monoˡ-≤ m n p m≤n` proves `m + p ≤ n + p` and invoking
`+-monoʳ-≤ n p q p≤q` proves `n + p ≤ n + q`, and combining these with
transitivity proves `m + p ≤ n + q`, as was to be shown.
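For instance, the combined lemma can be applied to concrete numbers along these lines:
```
-- A small illustrative check: combining 1 ≤ 2 and 3 ≤ 4 yields
-- 1 + 3 ≤ 2 + 4.
_ : 1 + 3 ≤ 2 + 4
_ = +-mono-≤ 1 2 3 4 (s≤s z≤n) (s≤s (s≤s (s≤s z≤n)))
```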
@ -536,15 +536,15 @@ transitivity proves `m + p ≤ n + q`, as was to be shown.
Show that multiplication is monotonic with regard to inequality.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Strict inequality {#strict-inequality}
We can define strict inequality similarly to inequality:
\begin{code}
```
infix 4 _<_
data _<_ : ℕ → ℕ → Set where
@ -557,7 +557,7 @@ data _<_ : → Set where
→ m < n
-------------
→ suc m < suc n
\end{code}
```
The key difference is that zero is less than the successor of an
arbitrary number, but is not less than zero.
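For example, assuming the elided part of the definition names its constructors `z<s` and `s<s` as in the full chapter, a proof of `1 < 3` looks as follows:
```
-- A small illustrative check, assuming the constructors z<s and s<s
-- from the elided part of the definition above.
_ : 1 < 3
_ = s<s z<s
```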
@ -583,9 +583,9 @@ exploiting the corresponding properties of inequality.
Show that strict inequality is transitive.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `trichotomy` {#trichotomy}
@ -601,26 +601,26 @@ similar to that used for totality.
(We will show that the three cases are exclusive after we introduce
[negation][plfa.Negation].)
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `+-mono-<` {#plus-mono-less}
Show that addition is monotonic with respect to strict inequality.
As with inequality, some additional definitions may be required.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `≤-iff-<` (recommended) {#leq-iff-less}
Show that `suc m ≤ n` implies `m < n`, and conversely.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `<-trans-revisited` {#less-trans-revisited}
@ -628,9 +628,9 @@ Give an alternative proof that strict inequality is transitive,
using the relation between strict inequality and inequality and
the fact that inequality is transitive.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Even and odd
@ -638,7 +638,7 @@ the fact that inequality is transitive.
As a further example, let's specify even and odd numbers. Inequality
and strict inequality are _binary relations_, while even and odd are
_unary relations_, sometimes called _predicates_:
\begin{code}
```
data even : ℕ → Set
data odd : ℕ → Set
@ -659,7 +659,7 @@ data odd where
→ even n
-----------
→ odd (suc n)
\end{code}
```
A number is even if it is zero or the successor of an odd number,
and odd if it is the successor of an even number.
@ -693,7 +693,7 @@ one restrict overloading to related meanings, as we have done here,
but it is not required.
We show that the sum of two even numbers is even:
\begin{code}
```
e+e≡e : ∀ {m n : ℕ}
→ even m
→ even n
@ -710,7 +710,7 @@ e+e≡e zero en = en
e+e≡e (suc om) en = suc (o+e≡o om en)
o+e≡o (suc em) en = suc (e+e≡e em en)
\end{code}
```
Corresponding to the mutually recursive types, we use two mutually recursive
functions, one to show that the sum of two even numbers is even, and the other
to show that the sum of an odd and an even number is odd.
@ -735,9 +735,9 @@ successor of the sum of two even numbers, which is even.
Show that the sum of two odd numbers is even.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `Bin-predicates` (stretch) {#Bin-predicates}
@ -787,18 +787,18 @@ and back is the identity:
(Hint: For each of these, you may first need to prove related
properties of `One`.)
\begin{code}
```
-- Your code goes here
\end{code}
```
## Standard library
Definitions similar to those in this chapter can be found in the standard library:
\begin{code}
```
import Data.Nat using (_≤_; z≤n; s≤s)
import Data.Nat.Properties using (≤-refl; ≤-trans; ≤-antisym; ≤-total;
+-monoʳ-≤; +-monoˡ-≤; +-mono-≤)
\end{code}
```
In the standard library, `≤-total` is formalised in terms of
disjunction (which we define in
Chapter [Connectives][plfa.Connectives]),

View file

@ -6,9 +6,9 @@ permalink : /Soundness/
next : /Adequacy/
---
\begin{code}
```
module plfa.Soundness where
\end{code}
```
## Introduction
@ -32,7 +32,7 @@ expansion is false for most typed lambda calculi!
## Imports
\begin{code}
```
open import plfa.Untyped
using (Context; _,_; _∋_; _⊢_; ★; Z; S_; `_; ƛ_; _·_;
subst; _[_]; subst-zero; ext; rename; exts)
@ -57,7 +57,7 @@ open import Data.Empty using (⊥-elim)
open import Relation.Nullary using (Dec; yes; no)
open import Function using (_∘_)
-- open import plfa.Isomorphism using (extensionality) -- causes a bug!
\end{code}
```
## Forward reduction preserves denotations
@ -81,18 +81,18 @@ an environment `δ` in which, for every variable `x`, `σ x` results in the
same value as the one for `x` in the original environment `γ`.
We write `δ ⊢ σγ` for this condition.
\begin{code}
```
infix 3 _`⊢_↓_
_`⊢_↓_ : ∀{Δ Γ} → Env Δ → Subst Γ Δ → Env Γ → Set
_`⊢_↓_ {Δ}{Γ} δ σ γ = (∀ (x : Γ ∋ ★) → δ ⊢ σ x ↓ γ x)
\end{code}
```
As usual, to prepare for lambda abstraction, we prove an extension
lemma. It says that applying the `exts` function to a substitution
produces a new substitution that maps variables to terms that when
evaluated in `δ , v` produce the values in `γ , v`.
\begin{code}
```
subst-ext : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ (σ : Subst Γ Δ)
→ δ `⊢ σ ↓ γ
@ -100,7 +100,7 @@ subst-ext : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ δ `, v `⊢ exts σ ↓ γ `, v
subst-ext σ d Z = var
subst-ext σ d (S x) = rename-pres S_ (λ _ → Refl⊑) (d x)
\end{code}
```
The proof is by cases on the de Bruijn index `x`.
@ -114,7 +114,7 @@ The proof is by cases on the de Bruijn index `x`.
With the extension lemma in hand, the proof that simultaneous
substitution preserves meaning is straightforward. Let's dive in!
\begin{code}
```
subst-pres : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ} {M : Γ ⊢ ★}
→ (σ : Subst Γ Δ)
→ δ `⊢ σ ↓ γ
@ -130,7 +130,7 @@ subst-pres σ s ⊥-intro = ⊥-intro
subst-pres σ s (⊔-intro d₁ d₂) =
⊔-intro (subst-pres σ s d₁) (subst-pres σ s d₂)
subst-pres σ s (sub d lt) = sub (subst-pres σ s d) lt
\end{code}
```
The proof is by induction on the semantics of `M`. The two interesting
cases are for variables and lambda abstractions.
@ -154,7 +154,7 @@ we have that `γ , v ⊢ M ↓ w` and `γ ⊢ N ↓ v`.
So we need to show that `γ ⊢ M [ N ] ↓ w`, or equivalently,
that `γ ⊢ subst (subst-zero N) M ↓ w`.
\begin{code}
```
substitution : ∀ {Γ} {γ : Env Γ} {N M v w}
→ γ `, v ⊢ N ↓ w
→ γ ⊢ M ↓ v
@ -166,7 +166,7 @@ substitution{Γ}{γ}{N}{M}{v}{w} dn dm =
sub-z-ok : γ `⊢ subst-zero M ↓ (γ `, v)
sub-z-ok Z = dm
sub-z-ok (S x) = var
\end{code}
```
This result is a corollary of the lemma for simultaneous substitution.
To use the lemma, we just need to show that `subst-zero M` maps
@ -186,7 +186,7 @@ Let `y` be an arbitrary variable (de Bruijn index).
With the substitution lemma in hand, it is straightforward to prove
that reduction preserves denotations.
\begin{code}
```
preserve : ∀ {Γ} {γ : Env Γ} {M N v}
γ ⊢ M ↓ v
→ M —→ N
@ -200,7 +200,7 @@ preserve (↦-intro d) (ζ r) = ↦-intro (preserve d r)
preserve ⊥-intro r = ⊥-intro
preserve (⊔-intro d d₁) r = ⊔-intro (preserve d r) (preserve d₁ r)
preserve (sub d lt) r = sub (preserve d r) lt
\end{code}
```
We proceed by induction on the semantics of `M` with case analysis on
the reduction.
@ -243,7 +243,7 @@ we prove the opposite, that it reflects meaning. That is,
if `δ ⊢ rename ρ M ↓ v`, then `γ ⊢ M ↓ v`, where `(δ ∘ ρ) `⊑ γ`.
First, we need a variant of a lemma given earlier.
\begin{code}
```
nth-ext : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ (ρ : Rename Γ Δ)
→ (δ ∘ ρ) `⊑ γ
@ -251,10 +251,10 @@ nth-ext : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ}
→ ((δ `, v) ∘ ext ρ) `⊑ (γ `, v)
nth-ext ρ lt Z = Refl⊑
nth-ext ρ lt (S x) = lt x
\end{code}
```
The proof is then as follows.
\begin{code}
```
rename-reflect : ∀ {Γ Δ v} {γ : Env Γ} {δ : Env Δ} { M : Γ ⊢ ★}
→ {ρ : Rename Γ Δ}
→ (δ ∘ ρ) `⊑ γ
@ -277,7 +277,7 @@ rename-reflect {M = L · M} all-n (⊔-intro d₁ d₂) =
⊔-intro (rename-reflect all-n d₁) (rename-reflect all-n d₂)
rename-reflect {M = L · M} all-n (sub d₁ lt) =
sub (rename-reflect all-n d₁) lt
\end{code}
```
We cannot prove this lemma by induction on the derivation of
`δ ⊢ rename ρ M ↓ v`, so instead we proceed by induction on `M`.
@ -308,13 +308,13 @@ We cannot prove this lemma by induction on the derivation of
In the upcoming uses of `rename-reflect`, the renaming will always be
the increment function. So we prove a corollary for that special case.
\begin{code}
```
rename-inc-reflect : ∀ {Γ v′ v} {γ : Env Γ} { M : Γ ⊢ ★}
→ (γ `, v′) ⊢ rename S_ M ↓ v
----------------------------
γ ⊢ M ↓ v
rename-inc-reflect d = rename-reflect `Refl⊑ d
\end{code}
```
### Substitution reflects denotations, the variable case
@ -331,7 +331,7 @@ Next we define the environment that maps `x` to `v` and every other
variable to `⊥`, that is `const-env x v`. To tell variables apart, we
define the following function for deciding equality of variables.
\begin{code}
```
_var≟_ : ∀ {Γ} → (x y : Γ ∋ ★) → Dec (x ≡ y)
Z var≟ Z = yes refl
Z var≟ (S _) = no λ()
@ -343,27 +343,27 @@ Z var≟ (S _) = no λ()
var≟-refl : ∀ {Γ} (x : Γ ∋ ★) → (x var≟ x) ≡ yes refl
var≟-refl Z = refl
var≟-refl (S x) rewrite var≟-refl x = refl
\end{code}
```
Now we use `var≟` to define `const-env`.
\begin{code}
```
const-env : ∀{Γ} → (x : Γ ∋ ★) → Value → Env Γ
const-env x v y with x var≟ y
... | yes _ = v
... | no _ = ⊥
\end{code}
```
Of course, `const-env x v` maps `x` to value `v`
\begin{code}
```
same-const-env : ∀{Γ} {x : Γ ∋ ★} {v} → (const-env x v) x ≡ v
same-const-env {x = x} rewrite var≟-refl x = refl
\end{code}
```
and `const-env x v` maps `y` to `⊥`, so long as `x ≢ y`.
\begin{code}
```
diff-nth-const-env : ∀{Γ} {x y : Γ ∋ ★} {v}
→ x ≢ y
-------------------
@ -371,7 +371,7 @@ diff-nth-const-env : ∀{Γ} {x y : Γ ∋ ★} {v}
diff-nth-const-env {Γ} {x} {y} neq with x var≟ y
... | yes eq = ⊥-elim (neq eq)
... | no _ = refl
\end{code}
```
So we choose `const-env x v` for `δ` and obtain `δ ⊢ x ↓ v`
with the `var` rule.
@ -391,7 +391,7 @@ Thus, we have completed the variable case of the proof that
simultaneous substitution reflects denotations. Here is the proof
again, formally.
\begin{code}
```
subst-reflect-var : ∀ {Γ Δ} {γ : Env Δ} {x : Γ ∋ ★} {v} {σ : Subst Γ Δ}
→ γ ⊢ σ x ↓ v
-----------------------------------------
@ -404,42 +404,42 @@ subst-reflect-var {Γ}{Δ}{γ}{x}{v}{σ} xv
const-env-ok y with x var≟ y
... | yes x≡y rewrite sym x≡y | same-const-env {Γ}{x}{v} = xv
... | no x≢y rewrite diff-nth-const-env {Γ}{x}{y}{v} x≢y = ⊥-intro
\end{code}
```
### Substitutions and environment construction
Every substitution produces terms that can evaluate to `⊥`.
\begin{code}
```
subst-⊥ : ∀{Γ Δ}{γ : Env Δ}{σ : Subst Γ Δ}
-----------------
→ γ `⊢ σ ↓ `⊥
subst-⊥ x = ⊥-intro
\end{code}
```
If a substitution produces terms that evaluate to the values in
both `γ₁` and `γ₂`, then those terms also evaluate to the values in
`γ₁ ⊔ γ₂`.
\begin{code}
```
subst-⊔ : ∀{Γ Δ}{γ : Env Δ}{γ₁ γ₂ : Env Γ}{σ : Subst Γ Δ}
γ `⊢ σ ↓ γ₁
γ `⊢ σ ↓ γ₂
-------------------------
γ `⊢ σ ↓ (γ₁ `⊔ γ₂)
subst-⊔ γ₁-ok γ₂-ok x = ⊔-intro (γ₁-ok x) (γ₂-ok x)
\end{code}
```
### The Lambda constructor is injective
\begin{code}
```
lambda-inj : ∀ {Γ} {M N : Γ , ★ ⊢ ★ }
_≡_ {A = Γ ⊢ ★} (ƛ M) (ƛ N)
---------------------------
→ M ≡ N
lambda-inj refl = refl
\end{code}
```
### Simultaneous substitution reflects denotations
@ -450,7 +450,7 @@ the derivation of `γ ⊢ subst σ M ↓ v`. This requires a minor
restatement of the lemma, changing the premise to `γ ⊢ L ↓ v` and
`L ≡ subst σ M`.
\begin{code}
```
split : ∀ {Γ} {M : Γ , ★ ⊢ ★} {δ : Env (Γ , ★)} {v}
→ δ ⊢ M ↓ v
--------------------------
@ -504,7 +504,7 @@ subst-reflect {σ = σ} (⊔-intro d₁ d₂) eq
subst-reflect (sub d lt) eq
with subst-reflect d eq
... | ⟨ δ , ⟨ subst-δ , m ⟩ ⟩ = ⟨ δ , ⟨ subst-δ , sub m lt ⟩ ⟩
\end{code}
```
* Case `var`: We have subst `σ M ≡ y`, so `M` must also be a variable, say `x`.
We apply the lemma `subst-reflect-var` to conclude.
@ -560,7 +560,7 @@ We first prove a lemma about `subst-zero`, that if
`δ ⊢ subst-zero M ↓ γ`, then
`γ ⊑ (δ , w) × δ ⊢ M ↓ w` for some `w`.
\begin{code}
```
subst-zero-reflect : ∀ {Δ} {δ : Env Δ} {γ : Env (Δ , ★)} {M : Δ ⊢ ★}
→ δ `⊢ subst-zero M ↓ γ
----------------------------------------
@ -570,7 +570,7 @@ subst-zero-reflect {δ = δ} {γ = γ} δσγ = ⟨ last γ , ⟨ lemma , δσγ
lemma : γ `⊑ (δ `, last γ)
lemma Z = Refl⊑
lemma (S x) = var-inv (δσγ (S x))
\end{code}
```
We choose `w` to be the last value in `γ` and we obtain `δ ⊢ M ↓ w`
by applying the premise to variable `Z`. Finally, to prove
@ -582,7 +582,7 @@ using `var-inv` we conclude that `γ (S x) ⊑ (δ `, w) (S x)`.
Now to prove that substitution reflects denotations.
\begin{code}
```
substitution-reflect : ∀ {Δ} {δ : Env Δ} {N : Δ , ★ ⊢ ★} {M : Δ ⊢ ★} {v}
→ δ ⊢ N [ M ] ↓ v
------------------------------------------------
@ -590,7 +590,7 @@ substitution-reflect : ∀ {Δ} {δ : Env Δ} {N : Δ , ★ ⊢ ★} {M : Δ ⊢
substitution-reflect d with subst-reflect d refl
... | ⟨ γ , ⟨ δσγ , γNv ⟩ ⟩ with subst-zero-reflect δσγ
... | ⟨ w , ⟨ ineq , δMw ⟩ ⟩ = ⟨ w , ⟨ δMw , Env⊑ γNv ineq ⟩ ⟩
\end{code}
```
We apply the `subst-reflect` lemma to obtain
`δ ⊢ subst-zero M ↓ γ` and `γ ⊢ N ↓ v` for some `γ`.
@ -605,7 +605,7 @@ us `γ ⊑ (δ , w)` and `δ ⊢ M ↓ w`. We conclude that
Now that we have proved that substitution reflects denotations, we can
easily prove that reduction does too.
\begin{code}
```
reflect-beta : ∀{Γ}{γ : Env Γ}{M N}{v}
γ ⊢ (N [ M ]) ↓ v
γ ⊢ (ƛ N) · M ↓ v
@ -640,27 +640,27 @@ reflect ⊥-intro r mn = ⊥-intro
reflect (⊔-intro d₁ d₂) r mn rewrite sym mn =
⊔-intro (reflect d₁ r refl) (reflect d₂ r refl)
reflect (sub d lt) r mn = sub (reflect d r mn) lt
\end{code}
```
## Reduction implies denotational equality
We have proved that reduction both preserves and reflects
denotations. Thus, reduction implies denotational equality.
\begin{code}
```
reduce-equal : ∀ {Γ} {M : Γ ⊢ ★} {N : Γ ⊢ ★}
→ M —→ N
---------
M ≃ N
reduce-equal {Γ}{M}{N} r γ v =
⟨ (λ m → preserve m r) , (λ n → reflect n r refl) ⟩
\end{code}
```
We conclude with the _soundness property_, that multi-step reduction
to a lambda abstraction implies denotational equivalence with a lambda
abstraction.
\begin{code}
```
soundness : ∀{Γ} {M : Γ ⊢ ★} {N : Γ , ★ ⊢ ★}
→ M —↠ ƛ N
-----------------
@ -670,7 +670,7 @@ soundness {Γ} (L —→⟨ r ⟩ M—↠N) γ v =
let ih = soundness M—↠N in
let e = reduce-equal r in
≃-trans {Γ} e ih γ v
\end{code}
```
## Unicode

View file

@ -7,9 +7,9 @@ next : /LambdaReduction/
---
\begin{code}
```
module plfa.Substitution where
\end{code}
```
## Introduction
@ -45,7 +45,7 @@ system that _decides_ whether any two substitutions are equal.
## Imports
\begin{code}
```
open import plfa.Untyped
using (Type; Context; _⊢_; ★; _∋_; ∅; _,_; Z; S_; `_; ƛ_; _·_;
rename; subst; ext; exts; _[_]; subst-zero)
@ -54,40 +54,40 @@ open Eq using (_≡_; refl; sym; cong; cong₂; cong-app)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
open import Function using (_∘_)
-- open import plfa.Isomorphism using (extensionality) -- causes a bug!
\end{code}
```
\begin{code}
```
postulate
extensionality : ∀ {A B : Set} {f g : A → B}
→ (∀ (x : A) → f x ≡ g x)
-----------------------
→ f ≡ g
\end{code}
```
## Notation
We introduce the following shorthand for the type of a _renaming_ from
variables in context `Γ` to variables in context `Δ`.
\begin{code}
```
Rename : Context → Context → Set
Rename Γ Δ = ∀{A} → Γ ∋ A → Δ ∋ A
\end{code}
```
Similarly, we introduce the following shorthand for the type of a
_substitution_ from variables in context `Γ` to terms in context `Δ`.
\begin{code}
```
Subst : Context → Context → Set
Subst Γ Δ = ∀{A} → Γ ∋ A → Δ ⊢ A
\end{code}
```
We use the following more succinct notation for the `subst` function.
\begin{code}
```
⟪_⟫ : ∀{Γ Δ A} → Subst Γ Δ → Γ ⊢ A → Δ ⊢ A
⟪ σ ⟫ = λ M → subst σ M
\end{code}
```
## The σ algebra of substitution
@ -99,10 +99,10 @@ four operations for building such sequences: identity `ids`, shift
`↑`, cons `M • σ`, and sequencing `σ ⨟ τ`. The sequence `0, 1, 2, ...`
is constructed by the identity substitution.
\begin{code}
```
ids : ∀{Γ} → Subst Γ Γ
ids x = ` x
\end{code}
```
The shift operation `↑` constructs the sequence
@ -110,10 +110,10 @@ The shift operation `↑` constructs the sequence
and is defined as follows.
\begin{code}
```
↑ : ∀{Γ A} → Subst Γ (Γ , A)
↑ x = ` (S x)
\end{code}
```
Given a term `M` and substitution `σ`, the operation
`M • σ` constructs the sequence
@ -122,13 +122,13 @@ Given a term `M` and substitution `σ`, the operation
This operation is analogous to the `cons` operation of Lisp.
\begin{code}
```
infixr 6 _•_
_•_ : ∀{Γ Δ A} → (Δ ⊢ A) → Subst Γ Δ → Subst (Γ , A) Δ
(M • σ) Z = M
(M • σ) (S x) = σ x
\end{code}
```
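For instance, the head of a sequence built with cons is the term that was consed on; a small check of this should hold definitionally:
```
-- A small illustrative check: looking up the zeroth variable in
-- M • ids returns M, by the first defining equation of cons.
_ : ∀ {Γ A} {M : Γ ⊢ A} → (M • ids) Z ≡ M
_ = refl
```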
Given two substitutions `σ` and `τ`, the sequencing operation `σ ⨟ τ`
produces the sequence
@ -138,12 +138,12 @@ produces the sequence
That is, it composes the two substitutions by first applying
`σ` and then applying `τ`.
\begin{code}
```
infixr 5 _⨟_
_⨟_ : ∀{Γ Δ Σ} → Subst Γ Δ → Subst Δ Σ → Subst Γ Σ
σ ⨟ τ = ⟪ τ ⟫ ∘ σ
\end{code}
```
For the sequencing operation, Abadi et al. use the notation of
function composition, writing `σ ∘ τ`, but still with `σ` applied
@ -208,10 +208,10 @@ We have
where `ren` turns a renaming `ρ` into a substitution by post-composing
`ρ` with the identity substitution.
\begin{code}
```
ren : ∀{Γ Δ} → Rename Γ Δ → Subst Γ Δ
ren ρ = ids ∘ ρ
\end{code}
```
When the renaming is the increment function, then it is equivalent to
shift.
@ -267,19 +267,19 @@ cons'ing `M` onto `σ`.
We start with the proofs that are immediate from the definitions of
the operators.
\begin{code}
```
sub-head : ∀ {Γ Δ} {A} {M : Δ ⊢ A}{σ : Subst Γ Δ}
→ ⟪ M • σ ⟫ (` Z) ≡ M
sub-head = refl
\end{code}
```
\begin{code}
```
sub-tail : ∀{Γ Δ} {A B} {M : Δ ⊢ A} {σ : Subst Γ Δ}
→ (↑ ⨟ M • σ) {A = B} ≡ σ
sub-tail = extensionality λ x → refl
\end{code}
```
\begin{code}
```
sub-η : ∀{Γ Δ} {A B} {σ : Subst (Γ , A) Δ}
→ (⟪ σ ⟫ (` Z) • (↑ ⨟ σ)) {A = B} ≡ σ
sub-η {Γ}{Δ}{A}{B}{σ} = extensionality λ x → lemma
@ -287,9 +287,9 @@ sub-η {Γ}{Δ}{A}{B}{σ} = extensionality λ x → lemma
lemma : ∀ {x} → ((⟪ σ ⟫ (` Z)) • (↑ ⨟ σ)) x ≡ σ x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
\begin{code}
```
Z-shift : ∀{Γ}{A B}
→ ((` Z) • ↑) ≡ ids {Γ , A} {B}
Z-shift {Γ}{A}{B} = extensionality lemma
@ -297,15 +297,15 @@ Z-shift {Γ}{A}{B} = extensionality lemma
lemma : (x : Γ , A ∋ B) → ((` Z) • ↑) x ≡ ids x
lemma Z = refl
lemma (S y) = refl
\end{code}
```
\begin{code}
```
sub-idL : ∀{Γ Δ} {σ : Subst Γ Δ} {A}
→ ids ⨟ σ ≡ σ {A}
sub-idL = extensionality λ x → refl
\end{code}
```
\begin{code}
```
sub-dist : ∀{Γ Δ Σ : Context} {A B} {σ : Subst Γ Δ} {τ : Subst Δ Σ}
{M : Δ ⊢ A}
→ ((M • σ) ⨟ τ) ≡ ((subst τ M) • (σ ⨟ τ)) {B}
@ -314,13 +314,13 @@ sub-dist {Γ}{Δ}{Σ}{A}{B}{σ}{τ}{M} = extensionality λ x → lemma {x = x}
lemma : ∀ {x : Γ , A ∋ B} → ((M • σ) ⨟ τ) x ≡ ((subst τ M) • (σ ⨟ τ)) x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
\begin{code}
```
sub-app : ∀{Γ Δ} {σ : Subst Γ Δ} {L : Γ ⊢ ★}{M : Γ ⊢ ★}
→ ⟪ σ ⟫ (L · M) ≡ (⟪ σ ⟫ L) · (⟪ σ ⟫ M)
sub-app = refl
\end{code}
```
## Interlude: congruences
@ -334,7 +334,7 @@ the equational reasoning in the later sections of this chapter.
but I have not yet found a way to make that work. It seems that
various implicit parameters get in the way.]
\begin{code}
```
cong-ext : ∀{Γ Δ}{ρ ρ′ : Rename Γ Δ}{B}
→ (∀{A} → ρ ≡ ρ′ {A})
---------------------------------
@ -344,9 +344,9 @@ cong-ext{Γ}{Δ}{ρ}{ρ}{B} rr {A} = extensionality λ x → lemma {x}
lemma : ∀{x : Γ , B ∋ A} → ext ρ x ≡ ext ρ′ x
lemma {Z} = refl
lemma {S y} = cong S_ (cong-app rr y)
\end{code}
```
\begin{code}
```
cong-rename : ∀{Γ Δ}{ρ ρ′ : Rename Γ Δ}{B}{M M′ : Γ ⊢ B}
→ (∀{A} → ρ ≡ ρ′ {A}) → M ≡ M′
------------------------------
@ -356,9 +356,9 @@ cong-rename {ρ = ρ} {ρ = ρ} {M = ƛ N} rr refl =
cong ƛ_ (cong-rename {ρ = ext ρ}{ρ′ = ext ρ′}{M = N} (cong-ext rr) refl)
cong-rename {M = L · M} rr refl =
cong₂ _·_ (cong-rename rr refl) (cong-rename rr refl)
\end{code}
```
\begin{code}
```
cong-exts : ∀{Γ Δ}{σ σ′ : Subst Γ Δ}{B}
→ (∀{A} → σ ≡ σ′ {A})
-----------------------------------
@ -368,9 +368,9 @@ cong-exts{Γ}{Δ}{σ}{σ}{B} ss {A} = extensionality λ x → lemma {x}
lemma : ∀{x} → exts σ x ≡ exts σ′ x
lemma {Z} = refl
lemma {S x} = cong (rename S_) (cong-app (ss {A}) x)
\end{code}
```
\begin{code}
```
cong-sub : ∀{Γ Δ}{σ σ′ : Subst Γ Δ}{A}{M M′ : Γ ⊢ A}
→ (∀{A} → σ ≡ σ′ {A}) → M ≡ M′
------------------------------
@ -380,18 +380,18 @@ cong-sub {Γ} {Δ} {σ} {σ} {A} {ƛ M} ss refl =
cong ƛ_ (cong-sub {σ = exts σ}{σ′ = exts σ′} {M = M} (cong-exts ss) refl)
cong-sub {Γ} {Δ} {σ} {σ′} {A} {L · M} ss refl =
cong₂ _·_ (cong-sub {M = L} ss refl) (cong-sub {M = M} ss refl)
\end{code}
```
\begin{code}
```
cong-sub-zero : ∀{Γ}{B : Type}{M M′ : Γ ⊢ B}
→ M ≡ M′
-----------------------------------------
→ ∀{A} → subst-zero M ≡ (subst-zero M′) {A}
cong-sub-zero {Γ}{B}{M}{M′} mm' {A} =
extensionality λ x → cong (λ z → subst-zero z x) mm'
\end{code}
```
\begin{code}
```
cong-cons : ∀{Γ Δ}{A}{M N : Δ ⊢ A}{σ τ : Subst Γ Δ}
→ M ≡ N → (∀{A} → σ {A} ≡ τ {A})
--------------------------------
@ -401,9 +401,9 @@ cong-cons{Γ}{Δ}{A}{M}{N}{σ}{τ} refl st {A} = extensionality lemma
lemma : (x : Γ , A ∋ A) → (M • σ) x ≡ (M • τ) x
lemma Z = refl
lemma (S x) = cong-app st x
\end{code}
```
\begin{code}
```
cong-seq : ∀{Γ Δ Σ}{σ σ′ : Subst Γ Δ}{τ τ′ : Subst Δ Σ}
→ (∀{A} → σ {A} ≡ σ′ {A}) → (∀{A} → τ {A} ≡ τ′ {A})
→ ∀{A} → (σ ⨟ τ) {A} ≡ (σ′ ⨟ τ′) {A}
cong-seq {Γ}{Δ}{Σ}{σ}{σ′}{τ}{τ′} ss' tt' {A} = extensionality lemma
≡⟨⟩
(σ′ ⨟ τ′) x
\end{code}
```
## Relating `rename`, `exts`, `ext`, and `subst-zero` to the σ algebra
@ -439,7 +439,7 @@ Because `subst` uses the `exts` function, we need the following lemma
which says that `exts` and `ext` do the same thing except that `ext`
works on renamings and `exts` works on substitutions.
\begin{code}
```
ren-ext : ∀ {Γ Δ}{B C : Type} {ρ : Rename Γ Δ}
→ ren (ext ρ {B = B}) ≡ exts (ren ρ) {C}
ren-ext {Γ}{Δ}{B}{C}{ρ} = extensionality λ x → lemma {x = x}
@ -447,12 +447,12 @@ ren-ext {Γ}{Δ}{B}{C}{ρ} = extensionality λ x → lemma {x = x}
lemma : ∀ {x : Γ , B ∋ C} → (ren (ext ρ)) x ≡ exts (ren ρ) x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
With this lemma in hand, the proof is a straightforward induction on
the term `M`.
\begin{code}
```
rename-subst-ren : ∀ {Γ Δ}{A} {ρ : Rename Γ Δ}{M : Γ ⊢ A}
→ rename ρ M ≡ ⟪ ren ρ ⟫ M
rename-subst-ren {M = ` x} = refl
@ -469,11 +469,11 @@ rename-subst-ren {ρ = ρ}{M = ƛ N} =
⟪ ren ρ ⟫ (ƛ N)
rename-subst-ren {M = L · M} = cong₂ _·_ rename-subst-ren rename-subst-ren
\end{code}
```
The substitution `ren S_` is equivalent to `↑`.
\begin{code}
```
ren-shift : ∀{Γ}{A}{B}
→ ren S_ ≡ ↑ {A = B} {A}
ren-shift {Γ}{A}{B} = extensionality λ x → lemma {x = x}
@ -481,11 +481,11 @@ ren-shift {Γ}{A}{B} = extensionality λ x → lemma {x = x}
lemma : ∀ {x : Γ ∋ A} → ren (S_{B = B}) x ≡ ↑ {A = B} x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
The substitution `rename S_ M` is equivalent to shifting: `⟪ ↑ ⟫ M`.
\begin{code}
```
rename-shift : ∀{Γ} {A} {B} {M : Γ ⊢ A}
→ rename (S_{B = B}) M ≡ ⟪ ↑ ⟫ M
rename-shift{Γ}{A}{B}{M} =
@ -496,14 +496,14 @@ rename-shift{Γ}{A}{B}{M} =
≡⟨ cong-sub{M = M} ren-shift refl ⟩
⟪ ↑ ⟫ M
\end{code}
```
Next we prove the equation `exts-cons-shift`, which states that `exts`
is equivalent to cons'ing Z onto the sequence formed by applying `σ`
and then shifting. The proof is by case analysis on the variable `x`,
using `rename-subst-ren` for when `x = S y`.
\begin{code}
```
exts-cons-shift : ∀{Γ Δ} {A B} {σ : Subst Γ Δ}
→ exts σ {A} {B} ≡ (` Z • (σ ⨟ ↑))
exts-cons-shift = extensionality λ x → lemma{x = x}
@ -512,11 +512,11 @@ exts-cons-shift = extensionality λ x → lemma{x = x}
→ exts σ x ≡ (` Z • (σ ⨟ ↑)) x
lemma {x = Z} = refl
lemma {x = S y} = rename-subst-ren
\end{code}
```
As a corollary, we have a similar correspondence for `ren (ext ρ)`.
\begin{code}
```
ext-cons-Z-shift : ∀{Γ Δ} {ρ : Rename Γ Δ}{A}{B}
→ ren (ext ρ {B = B}) ≡ (` Z • (ren ρ ⨟ ↑)) {A}
ext-cons-Z-shift {Γ}{Δ}{ρ}{A}{B} =
@ -527,12 +527,12 @@ ext-cons-Z-shift {Γ}{Δ}{ρ}{A}{B} =
≡⟨ exts-cons-shift{σ = ren ρ} ⟩
((` Z) • (ren ρ ⨟ ↑))
\end{code}
```
Finally, the `subst-zero M` substitution is equivalent to cons'ing `M`
onto the identity substitution.
\begin{code}
```
subst-Z-cons-ids : ∀{Γ}{A B : Type}{M : Γ ⊢ B}
→ subst-zero M ≡ (M • ids) {A}
subst-Z-cons-ids = extensionality λ x → lemma {x = x}
@ -541,7 +541,7 @@ subst-Z-cons-ids = extensionality λ x → lemma {x = x}
→ subst-zero M x ≡ (M • ids) x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
## Proofs of sub-abs, sub-id, and rename-id
@ -549,7 +549,7 @@ subst-Z-cons-ids = extensionality λ x → lemma {x = x}
The equation `sub-abs` follows immediately from the equation
`exts-cons-shift`.
\begin{code}
```
sub-abs : ∀{Γ Δ} {σ : Subst Γ Δ} {N : Γ , ★ ⊢ ★}
→ ⟪ σ ⟫ (ƛ N) ≡ ƛ ⟪ (` Z) • (σ ⨟ ↑) ⟫ N
sub-abs {σ = σ}{N = N} =
@ -560,25 +560,25 @@ sub-abs {σ = σ}{N = N} =
≡⟨ cong ƛ_ (cong-sub{M = N} exts-cons-shift refl) ⟩
ƛ ⟪ (` Z) • (σ ⨟ ↑) ⟫ N
\end{code}
```
The proof of `sub-id` requires the following lemma which says that
extending the identity substitution produces the identity
substitution.
\begin{code}
```
exts-ids : ∀{Γ}{A B}
→ exts ids ≡ ids {Γ , B} {A}
exts-ids {Γ}{A}{B} = extensionality lemma
where lemma : (x : Γ , B ∋ A) → exts ids x ≡ ids x
lemma Z = refl
lemma (S x) = refl
\end{code}
```
The proof of `⟪ ids ⟫ M ≡ M` now follows easily by induction on `M`,
using `exts-ids` in the case for `M ≡ ƛ N`.
\begin{code}
```
sub-id : ∀{Γ} {A} {M : Γ ⊢ A}
→ ⟪ ids ⟫ M ≡ M
sub-id {M = ` x} = refl
@ -593,11 +593,11 @@ sub-id {M = ƛ N} =
ƛ N
sub-id {M = L · M} = cong₂ _·_ sub-id sub-id
\end{code}
```
The `rename-id` equation is a corollary of `sub-id`.
\begin{code}
```
rename-id : ∀ {Γ}{A} {M : Γ ⊢ A}
→ rename (λ {A} x → x) M ≡ M
rename-id {M = M} =
@ -610,13 +610,13 @@ rename-id {M = M} =
≡⟨ sub-id ⟩
M
\end{code}
```
## Proof of sub-idR
The proof of `sub-idR` follows directly from `sub-id`.
\begin{code}
```
sub-idR : ∀{Γ Δ} {σ : Subst Γ Δ} {A}
→ (σ ⨟ ids) ≡ σ {A}
sub-idR {Γ}{σ = σ}{A} =
@ -627,7 +627,7 @@ sub-idR {Γ}{σ = σ}{A} =
≡⟨ extensionality (λ x → sub-id) ⟩
σ
\end{code}
```
## Proof of sub-sub
@ -644,7 +644,7 @@ specialization for renaming.
This in turn requires the following lemma about `ext`.
\begin{code}
```
compose-ext : ∀{Γ Δ Σ}{ρ : Rename Δ Σ} {ρ′ : Rename Γ Δ} {A B}
→ ((ext ρ) ∘ (ext ρ′)) ≡ ext (ρ ∘ ρ′) {B} {A}
compose-ext = extensionality λ x → lemma {x = x}
@ -653,13 +653,13 @@ compose-ext = extensionality λ x → lemma {x = x}
→ ((ext ρ) ∘ (ext ρ′)) x ≡ ext (ρ ∘ ρ′) x
lemma {x = Z} = refl
lemma {x = S x} = refl
\end{code}
```
To prove that composing renamings is equivalent to applying one after
the other using `rename`, we proceed by induction on the term `M`,
using the `compose-ext` lemma in the case for `M ≡ ƛ N`.
\begin{code}
```
compose-rename : ∀{Γ Δ Σ}{A}{M : Γ ⊢ A}{ρ : Rename Δ Σ}{ρ′ : Rename Γ Δ}
→ rename ρ (rename ρ′ M) ≡ rename (ρ ∘ ρ′) M
compose-rename {M = ` x} = refl
@ -675,13 +675,13 @@ compose-rename {Γ}{Δ}{Σ}{A}{ƛ N}{ρ}{ρ} = cong ƛ_ G
rename (ext (ρ ∘ ρ′)) N
compose-rename {M = L · M} = cong₂ _·_ compose-rename compose-rename
\end{code}
```
The next lemma states that if a renaming and substitution commute on
variables, then they also commute on terms. We explain the proof in
detail below.
\begin{code}
```
commute-subst-rename : ∀{Γ Δ}{M : Γ ⊢ ★}{σ : Subst Γ Δ}
{ρ : ∀{Γ} → Rename Γ (Γ , ★)}
→ (∀{x : Γ ∋ ★} → exts σ {B = ★} (ρ x) ≡ rename ρ (σ x))
@ -716,7 +716,7 @@ commute-subst-rename{Γ}{Δ}{ƛ N}{σ}{ρ} r =
commute-subst-rename {M = L · M}{ρ = ρ} r =
cong₂ _·_ (commute-subst-rename{M = L}{ρ = ρ} r)
(commute-subst-rename{M = M}{ρ = ρ} r)
\end{code}
```
The proof is by induction on the term `M`.
@ -753,7 +753,7 @@ to prove this directly by equational reasoning in the σ algebra, but
that would require the `sub-assoc` equation, whose proof depends on
`sub-sub`, which in turn depends on this lemma.)
\begin{code}
```
exts-seq : ∀{Γ Δ Δ′} {σ₁ : Subst Γ Δ} {σ₂ : Subst Δ Δ′}
→ ∀ {A} → (exts σ₁ ⨟ exts σ₂) {A} ≡ exts (σ₁ ⨟ σ₂)
exts-seq = extensionality λ x → lemma {x = x}
@ -771,7 +771,7 @@ exts-seq = extensionality λ x → lemma {x = x}
≡⟨⟩
rename S_ ((σ₁ ⨟ σ₂) x)
\end{code}
```
The proof proceeds by cases on `x`.
@ -785,7 +785,7 @@ The proof proceed by cases on `x`.
Now we come to the proof of `sub-sub`, which we explain below.
\begin{code}
```
sub-sub : ∀{Γ Δ Σ}{A}{M : Γ ⊢ A} {σ₁ : Subst Γ Δ}{σ₂ : Subst Δ Σ}
→ ⟪ σ₂ ⟫ (⟪ σ₁ ⟫ M) ≡ ⟪ σ₁ ⨟ σ₂ ⟫ M
sub-sub {M = ` x} = refl
@ -800,7 +800,7 @@ sub-sub {Γ}{Δ}{Σ}{A}{ƛ N}{σ₁}{σ₂} =
ƛ ⟪ exts ( σ₁ ⨟ σ₂) ⟫ N
sub-sub {M = L · M} = cong₂ _·_ (sub-sub{M = L}) (sub-sub{M = M})
\end{code}
```
We proceed by induction on the term `M`.
@ -821,7 +821,7 @@ We proceed by induction on the term `M`.
The following corollary of `sub-sub` specializes the first
substitution to a renaming.
\begin{code}
```
rename-subst : ∀{Γ Δ Δ′}{M : Γ ⊢ ★}{ρ : Rename Γ Δ}{σ : Subst Δ Δ′}
→ ⟪ σ ⟫ (rename ρ M) ≡ ⟪ σρ ⟫ M
rename-subst {Γ}{Δ}{Δ′}{M}{ρ}{σ} =
@ -834,7 +834,7 @@ rename-subst {Γ}{Δ}{Δ′}{M}{ρ}{σ} =
≡⟨⟩
σρ ⟫ M
\end{code}
```
## Proof of sub-assoc
@ -842,7 +842,7 @@ rename-subst {Γ}{Δ}{Δ′}{M}{ρ}{σ} =
The proof of `sub-assoc` follows directly from `sub-sub` and the
definition of sequencing.
\begin{code}
```
sub-assoc : ∀{Γ Δ Σ Ψ : Context} {σ : Subst Γ Δ} {τ : Subst Δ Σ}
{θ : Subst Σ Ψ}
→ ∀{A} → (σ ⨟ τ) ⨟ θ ≡ (σ ⨟ τ ⨟ θ) {A}
@ -859,7 +859,7 @@ sub-assoc {Γ}{Δ}{Σ}{Ψ}{σ}{τ}{θ}{A} = extensionality λ x → lemma{x = x}
≡⟨⟩
(σ ⨟ τ ⨟ θ) x
\end{code}
```
## Proof of subst-zero-exts-cons
@ -868,7 +868,7 @@ The last equation we needed to prove `subst-zero-exts-cons` was
the equations for `exts` and `subst-zero` and then apply the σ algebra
equation to arrive at the normal form `M • σ`.
\begin{code}
```
subst-zero-exts-cons : ∀{Γ Δ}{σ : Subst Γ Δ}{B}{M : Δ ⊢ B}{A}
→ exts σ ⨟ subst-zero M ≡ (M • σ) {A}
subst-zero-exts-cons {Γ}{Δ}{σ}{B}{M}{A} =
@ -887,7 +887,7 @@ subst-zero-exts-cons {Γ}{Δ}{σ}{B}{M}{A} =
≡⟨ cong-cons refl (sub-idR{σ = σ}) ⟩
M • σ
\end{code}
```
## Proof of the substitution lemma
@ -908,7 +908,7 @@ normal form
We then do the same with the right-hand side, arriving at the same
normal form.
\begin{code}
```
subst-commute : ∀{Γ Δ}{N : Γ , ★ ⊢ ★}{M : Γ ⊢ ★}{σ : Subst Γ Δ }
→ ⟪ exts σ ⟫ N [ ⟪ σ ⟫ M ] ≡ ⟪ σ ⟫ (N [ M ])
subst-commute {Γ}{Δ}{N}{M}{σ} =
@ -941,14 +941,14 @@ subst-commute {Γ}{Δ}{N}{M}{σ} =
≡⟨ cong ⟪ σ ⟫ (sym (cong-sub{M = N} subst-Z-cons-ids refl)) ⟩
σ ⟫ (N [ M ])
\end{code}
```
A corollary of `subst-commute` is that `rename` also commutes with
substitution. In the proof below, we first exchange `rename ρ` for
the substitution `⟪ ren ρ ⟫`, then apply `subst-commute`, and
finally convert back to `rename ρ`.
\begin{code}
```
rename-subst-commute : ∀{Γ Δ}{N : Γ , ★ ⊢ ★}{M : Γ ⊢ ★}{ρ : Rename Γ Δ }
→ (rename (ext ρ) N) [ rename ρ M ] ≡ rename ρ (N [ M ])
rename-subst-commute {Γ}{Δ}{N}{M}{ρ} =
@ -964,12 +964,12 @@ rename-subst-commute {Γ}{Δ}{N}{M}{ρ} =
≡⟨ sym (rename-subst-ren) ⟩
rename ρ (N [ M ])
\end{code}
```
To present the substitution lemma, we introduce the following notation
for substituting a term `M` for index 1 within term `N`.
\begin{code}
```
_〔_〕 : ∀ {Γ A B C}
→ Γ , B , C ⊢ A
→ Γ ⊢ B
@ -977,17 +977,17 @@ __ : ∀ {Γ A B C}
→ Γ , C ⊢ A
_〔_〕 {Γ} {A} {B} {C} N M =
subst {Γ , B , C} {Γ , C} (exts (subst-zero M)) {A} N
\end{code}
```
The substitution lemma is stated as follows and proved as a corollary
of the `subst-commute` lemma.
\begin{code}
```
substitution : ∀{Γ}{M : Γ , ★ , ★ ⊢ ★}{N : Γ , ★ ⊢ ★}{L : Γ ⊢ ★}
→ (M [ N ]) [ L ] ≡ (M 〔 L 〕) [ (N [ L ]) ]
substitution{M = M}{N = N}{L = L} =
sym (subst-commute{N = M}{M = N}{σ = subst-zero L})
\end{code}
```
## Notes
@ -1009,4 +1009,3 @@ This chapter uses the following unicode:
⨟ U+2A1F Z NOTATION SCHEMA COMPOSITION (C-x 8 RET Z NOTATION SCHEMA COMPOSITION)
U+3014 LEFT TORTOISE SHELL BRACKET (\( option 9 on page 2)
U+3015 RIGHT TORTOISE SHELL BRACKET (\) option 9 on page 2)

View file

@ -6,9 +6,9 @@ permalink : /Untyped/
next : /Acknowledgements/
---
\begin{code}
```
module plfa.Untyped where
\end{code}
```
In this chapter we play with variations on a theme:
@ -43,7 +43,7 @@ the range of different lambda calculi one may encounter.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong)
open import Data.Empty using (⊥; ⊥-elim)
@ -56,7 +56,7 @@ open import Relation.Nullary using (¬_; Dec; yes; no)
open import Relation.Nullary.Decidable using (map)
open import Relation.Nullary.Negation using (contraposition)
open import Relation.Nullary.Product using (_×-dec_)
\end{code}
```
## Untyped is Uni-typed
@ -76,7 +76,7 @@ can now be defined in the language itself.
First, we get all our infix declarations out of the way:
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -84,48 +84,48 @@ infixl 5 _,_
infix 6 ƛ_
infix 6 ′_
infixl 7 _·_
\end{code}
```
## Types
We have just one type:
\begin{code}
```
data Type : Set where
★ : Type
\end{code}
```
#### Exercise (`Type≃⊤`)
Show that `Type` is isomorphic to `⊤`, the unit type.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Contexts
As before, a context is a list of types, with the type of the
most recently bound variable on the right:
\begin{code}
```
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
We let `Γ` and `Δ` range over contexts.
#### Exercise (`Context≃ℕ`)
Show that `Context` is isomorphic to `ℕ`.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Variables and the lookup judgment
Inherently typed variables correspond to the lookup judgment. The
rules are as before:
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -136,7 +136,7 @@ data _∋_ : Context → Type → Set where
→ Γ ∋ A
---------
→ Γ , B ∋ A
\end{code}
```
We could write the rules with all instances of `A` and `B`
replaced by `★`, but arguably it is clearer not to do so.
@ -152,7 +152,7 @@ Inherently typed terms correspond to the typing judgment, but with
`★` as the only type. The result is that we check that terms are
well-scoped — that is, that all variables they mention are in scope —
but not that they are well-typed:
\begin{code}
```
data _⊢_ : Context → Type → Set where
`_ : ∀ {Γ A}
@ -170,7 +170,7 @@ data _⊢_ : Context → Type → Set where
→ Γ ⊢ ★
------
→ Γ ⊢ ★
\end{code}
```
Now we have a tiny calculus, with only variables, abstraction, and
application. Below we will see how to encode naturals and
fixpoints into this calculus.
@ -180,24 +180,24 @@ fixpoints into this calculus.
As before, we can convert a natural to the corresponding de Bruijn
index. We no longer need to lookup the type in the context, since
every variable has the same type:
\begin{code}
```
count : ∀ {Γ} → ℕ → Γ ∋ ★
count {Γ , ★} zero = Z
count {Γ , ★} (suc n) = S (count n)
count {∅} _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
We can then introduce a convenient abbreviation for variables:
\begin{code}
```
#_ : ∀ {Γ} → ℕ → Γ ⊢ ★
# n = ` count n
\end{code}
```
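For example, in a context with two variables, `# 1` names the outer variable and `# 0` the inner one. The following anonymous term is an illustrative sketch of the abbreviation at work (it is not drawn from the original text):
```
-- Illustrative sketch: the outer variable applied to the inner one,
-- in a context of two variables.
_ : ∅ , ★ , ★ ⊢ ★
_ = # 1 · # 0
```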
## Test examples
Our only example is computing two plus two on Church numerals:
\begin{code}
```
twoᶜ : ∀ {Γ} → Γ ⊢ ★
twoᶜ = ƛ ƛ (# 1 · (# 1 · # 0))
@ -209,7 +209,7 @@ plusᶜ = ƛ ƛ ƛ ƛ (# 3 · # 1 · (# 2 · # 1 · # 0))
2+2ᶜ : ∅ ⊢ ★
2+2ᶜ = plusᶜ · twoᶜ · twoᶜ
\end{code}
```
Before, reduction stopped when we reached a lambda term, so we had to
compute `` plusᶜ · twoᶜ · twoᶜ · sucᶜ · `zero `` to ensure we reduced
to a representation of the natural four. Now, reduction continues
@ -220,18 +220,18 @@ two.
## Renaming
Our definition of renaming is as before. First, we need an extension lemma:
\begin{code}
```
ext : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ∋ A)
-----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ∋ A)
ext ρ Z = Z
ext ρ (S x) = S (ρ x)
\end{code}
```
We could replace all instances of `A` and `B` by `★`, but arguably it is
clearer not to do so.
Now it is straightforward to define renaming:
\begin{code}
```
rename : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ∋ A)
------------------------
@ -239,24 +239,24 @@ rename : ∀ {Γ Δ}
rename ρ (` x) = ` (ρ x)
rename ρ (ƛ N) = ƛ (rename (ext ρ) N)
rename ρ (L · M) = (rename ρ L) · (rename ρ M)
\end{code}
```
This is exactly as before, save that there are fewer term forms.
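One common instance of renaming is weakening: renaming with `S_` moves a term into a context with one extra, unused variable. The helper below is an illustrative sketch; the name `weaken` is ours and is not used elsewhere.
```
-- Illustrative sketch: shift every variable with S_, making room for
-- a new innermost variable that the term never mentions.
weaken : ∀ {Γ} → Γ ⊢ ★ → Γ , ★ ⊢ ★
weaken M = rename S_ M
```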
## Simultaneous substitution
Our definition of substitution is also exactly as before.
First we need an extension lemma:
\begin{code}
```
exts : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ⊢ A)
----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ⊢ A)
exts σ Z = ` Z
exts σ (S x) = rename S_ (σ x)
\end{code}
```
Again, we could replace all instances of `A` and `B` by `★`.
Now it is straightforward to define substitution:
\begin{code}
```
subst : ∀ {Γ Δ}
→ (∀ {A} → Γ ∋ A → Δ ⊢ A)
------------------------
@ -264,13 +264,13 @@ subst : ∀ {Γ Δ}
subst σ (` k) = σ k
subst σ (ƛ N) = ƛ (subst (exts σ) N)
subst σ (L · M) = (subst σ L) · (subst σ M)
\end{code}
```
Again, this is exactly as before, save that there are fewer term forms.
## Single substitution
It is easy to define the special case of substitution for one free variable:
\begin{code}
```
subst-zero : ∀ {Γ B} → (Γ ⊢ B) → ∀ {A} → (Γ , B ∋ A) → (Γ ⊢ A)
subst-zero M Z = M
subst-zero M (S x) = ` x
@ -281,21 +281,21 @@ _[_] : ∀ {Γ A B}
---------
→ Γ ⊢ A
_[_] {Γ} {A} {B} N M = subst {Γ , B} {Γ} (subst-zero M) {A} N
\end{code}
```
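As an illustrative sketch of single substitution at work, substituting `twoᶜ` for the innermost variable of an open term replaces `# 0` by `twoᶜ` and lowers the remaining free variable from `# 1` to `# 0`:
```
-- Illustrative sketch: the innermost variable becomes twoᶜ and the
-- other free variable drops from # 1 to # 0.
_ : (# 1 · # 0) [ twoᶜ ] ≡ # 0 · twoᶜ {∅ , ★}
_ = refl
```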
## Neutral and normal terms
Reduction continues until a term is fully normalised. Hence, instead
of values, we are now interested in _normal forms_. Terms in normal
form are defined by mutual recursion with _neutral_ terms:
\begin{code}
```
data Neutral : ∀ {Γ A} → Γ ⊢ A → Set
data Normal : ∀ {Γ A} → Γ ⊢ A → Set
\end{code}
```
Neutral terms arise because we now consider reduction of open terms,
which may contain free variables. A term is neutral if it is a
variable or a neutral term applied to a normal term:
\begin{code}
```
data Neutral where
`_ : ∀ {Γ A} (x : Γ ∋ A)
@ -307,11 +307,11 @@ data Neutral where
→ Normal M
---------------
→ Neutral (L · M)
\end{code}
```
A term is a normal form if it is neutral or an abstraction where the
body is a normal form. We use `′_` to label neutral terms.
Like `` `_ ``, it is unobtrusive:
\begin{code}
```
data Normal where
′_ : ∀ {Γ A} {M : Γ ⊢ A}
@ -323,32 +323,32 @@ data Normal where
→ Normal N
------------
→ Normal (ƛ N)
\end{code}
```
We introduce a convenient abbreviation for evidence that a variable is neutral:
\begin{code}
```
#′_ : ∀ {Γ} (n : ℕ) → Neutral {Γ} (# n)
#′ n = ` count n
\end{code}
```
For example, here is the evidence that the Church numeral two is in
normal form:
\begin{code}
```
_ : Normal (twoᶜ {∅})
_ = ƛ ƛ (′ #′ 1 · (′ #′ 1 · (′ #′ 0)))
\end{code}
```
The evidence that a term is in normal form is almost identical to
the term itself, decorated with some additional primes to indicate
neutral terms, and using `#′` in place of `#`.
We will also need to characterise terms that are applications:
\begin{code}
```
data Application : ∀ {Γ A} → Γ ⊢ A → Set where
ap : ∀ {Γ} {L M : Γ ⊢ ★}
-------------------
→ Application (L · M)
\end{code}
```
## Reduction step
@ -368,7 +368,7 @@ call-by-name and to enable full normalisation:
* A new rule `ζ` is added, to enable reduction underneath a lambda.
Here are the formalised rules:
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -393,7 +393,7 @@ data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
→ N —→ N
-----------
→ ƛ N —→ ƛ N
\end{code}
```
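The new `ζ` rule is what lets reduction go under a binder. As an illustrative sketch (the name `ζ-example` is ours), a single `ζ` step contracts a redex sitting inside an abstraction:
```
-- Illustrative sketch: one ζ step reduces the β-redex under the lambda.
ζ-example : ∅ ⊢ ★
ζ-example = ƛ ((ƛ # 0) · # 0)

_ : ζ-example —→ ƛ # 0
_ = ζ β
```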
#### Exercise (`variant-1`)
@ -401,9 +401,9 @@ How would the rules change if we want call-by-value where terms
normalise completely? Assume that `β` should not permit reduction
unless both terms are in normal form.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise (`variant-2`)
@ -412,15 +412,15 @@ do not reduce underneath lambda? Assume that `β`
permits reduction when both terms are values (that is, lambda
abstractions). What would `2+2ᶜ` reduce to in this case?
\begin{code}
```
-- Your code goes here
\end{code}
```
## Reflexive and transitive closure
We cut-and-paste the previous definition:
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -443,13 +443,13 @@ begin_ : ∀ {Γ} {A} {M N : Γ ⊢ A}
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
## Example reduction sequence
Here is the demonstration that two plus two is four:
\begin{code}
```
_ : 2+2ᶜ —↠ fourᶜ
_ =
begin
@ -467,7 +467,7 @@ _ =
—→⟨ ζ (ζ (ξ₂ (` S Z) (ξ₂ (` S Z) β))) ⟩
ƛ (ƛ # 1 · (# 1 · (# 1 · (# 1 · # 0))))
\end{code}
```
After just two steps the top-level term is an abstraction,
and `ζ` rules drive the rest of the normalisation. In the
applications of the `ξ₂` rule, the argument ``(` S Z)`` is
@ -487,7 +487,7 @@ it for open, well-scoped terms. The definition of normal form permits
free variables, and we have no terms that are not functions.
A term makes progress if it can take a step or is in normal form:
\begin{code}
```
data Progress {Γ A} (M : Γ ⊢ A) : Set where
step : ∀ {N : Γ ⊢ A}
@ -499,10 +499,10 @@ data Progress {Γ A} (M : Γ ⊢ A) : Set where
Normal M
----------
→ Progress M
\end{code}
```
If a term is well-scoped then it satisfies progress:
\begin{code}
```
progress : ∀ {Γ A} → (M : Γ ⊢ A) → Progress M
progress (` x) = done ( ` x)
progress (ƛ N) with progress N
@ -517,7 +517,7 @@ progress (L@(_ · _) · M) with progress L
... | done ( NeuL) with progress M
... | step M—→M = step (ξ₂ NeuL M—→M)
... | done NrmM = done ( NeuL · NrmM)
\end{code}
```
We induct on the evidence that the term is well-scoped:
* If the term is a variable, then it is in normal form.
@ -550,13 +550,13 @@ application.
As previously, progress immediately yields an evaluator.
Gas is specified by a natural number:
\begin{code}
```
data Gas : Set where
gas : ℕ → Gas
\end{code}
```
When our evaluator returns a term `N`, it will either give evidence that
`N` is normal or indicate that it ran out of gas:
\begin{code}
```
data Finished {Γ A} (N : Γ ⊢ A) : Set where
done :
@ -567,11 +567,11 @@ data Finished {Γ A} (N : Γ ⊢ A) : Set where
out-of-gas :
----------
Finished N
\end{code}
```
Given a term `L` of type `A`, the evaluator will, for some `N`, return
a reduction sequence from `L` to `N` and an indication of whether
reduction finished:
\begin{code}
```
data Steps : ∀ {Γ A} → Γ ⊢ A → Set where
steps : ∀ {Γ A} {L N : Γ ⊢ A}
@ -579,9 +579,9 @@ data Steps : ∀ {Γ A} → Γ ⊢ A → Set where
→ Finished N
----------
→ Steps L
\end{code}
```
The evaluator takes gas and a term and returns the corresponding steps:
\begin{code}
```
eval : ∀ {Γ A}
→ Gas
→ (L : Γ ⊢ A)
@ -592,14 +592,14 @@ eval (gas (suc m)) L with progress L
... | done NrmL = steps (L ∎) (done NrmL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
The definition is as before, save that the empty context `∅`
generalises to an arbitrary context `Γ`.
## Example
We reiterate our previous example. Two plus two is four, with Church numerals:
\begin{code}
```
_ : eval (gas 100) 2+2ᶜ ≡
steps
((ƛ
@ -649,7 +649,7 @@ _ : eval (gas 100) 2+2ᶜ ≡
(` (S Z)) ·
( (` (S Z)) · ( (` (S Z)) · ( (` (S Z)) · ( (` Z)))))))))
_ = refl
\end{code}
```
## Naturals and fixpoint
@ -680,7 +680,7 @@ zero branch of the case. (The cases could be in either order.
We put the successor case first to ease comparison with Church numerals.)
Here is the representation of naturals encoded with de Bruijn indexes:
\begin{code}
```
`zero : ∀ {Γ} → (Γ ⊢ ★)
`zero = ƛ ƛ (# 0)
@ -689,7 +689,7 @@ Here is the representation of naturals encoded with de Bruijn indexes:
case : ∀ {Γ} → (Γ ⊢ ★) → (Γ ⊢ ★) → (Γ , ★ ⊢ ★) → (Γ ⊢ ★)
case L M N = L · (ƛ N) · M
\end{code}
```
Here we have been careful to retain the exact form of our previous
definitions. The successor branch expects an additional variable to
be in scope (as indicated by its type), so it is converted to an
@ -710,14 +710,14 @@ This works because:
f · (μ f)
With de Bruijn indices, we have the following:
\begin{code}
```
μ_ : ∀ {Γ} → (Γ , ★ ⊢ ★) → (Γ ⊢ ★)
μ N = (ƛ ((ƛ (# 1 · (# 0 · # 0))) · (ƛ (# 1 · (# 0 · # 0))))) · (ƛ N)
\end{code}
```
The argument to fixpoint is treated similarly to the successor branch of case.
We can now define two plus two exactly as before:
\begin{code}
```
infix 5 μ_
two : ∀ {Γ} → Γ ⊢ ★
@ -728,7 +728,7 @@ four = `suc `suc `suc `suc `zero
plus : ∀ {Γ} → Γ ⊢ ★
plus = μ ƛ ƛ (case (# 1) (# 0) (`suc (# 3 · # 0 · # 1)))
\end{code}
```
Because `` `suc `` is now a defined term rather than primitive,
it is no longer the case that `plus · two · two` reduces to `four`,
but they do both reduce to the same normal term.
@ -738,9 +738,9 @@ but they do both reduce to the same normal term.
Use the evaluator to confirm that `plus · two · two` and `four`
normalise to the same term.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `multiplication-untyped` (recommended)
@ -749,9 +749,9 @@ multiplication from previous chapters with the Scott
representation and the encoding of the fixpoint operator.
Confirm that two times two is four.
\begin{code}
```
-- Your code goes here
\end{code}
```
#### Exercise `encode-more` (stretch)
@ -759,9 +759,9 @@ Along the lines above, encode all of the constructs of
Chapter [More][plfa.More],
save for primitive numbers, in the untyped lambda calculus.
\begin{code}
```
-- Your code goes here
\end{code}
```
## Unicode

View file

@ -18,5 +18,5 @@ permalink : /Citing/
author = {Philip Wadler and Wen Kokke},
title = {Programming Language Foundations in {A}gda},
note = {Available at \url{http://plfa.inf.ed.ac.uk/}},
year = 2019,
year = 2019,
}

View file

@ -4,9 +4,9 @@ layout : page
permalink : /Assignment1/
---
\begin{code}
```
module Assignment1 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -27,7 +27,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
open import Data.Nat using (ℕ; zero; suc; _+_; _*_; _∸_; _≤_; z≤n; s≤s
open import Data.Nat.Properties using (+-assoc; +-identityʳ; +-suc; +-comm;
≤-refl; ≤-trans; ≤-antisym; ≤-total; +-monoʳ-≤; +-monoˡ-≤; +-mono-≤)
open import plfa.Relations using (_<_; z<s; s<s; zero; suc; even; odd)
\end{code}
```
## Naturals
@ -73,12 +73,12 @@ Compute `5 ∸ 3` and `3 ∸ 5`, writing out your reasoning as a chain of equati
A more efficient representation of natural numbers uses a binary
rather than a unary system. We represent a number as a bitstring.
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
For instance, the bitstring
1011
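stands for eleven when its bits are read right to left. As an illustrative sketch (the name `eleven` is ours), that encoding is:
```
-- Illustrative sketch: eleven, i.e. binary 1011, with the least
-- significant bit outermost.
eleven : Bin
eleven = x1 x1 x0 x1 nil
```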
@ -132,7 +132,7 @@ days using a finite story of creation, as
[earlier][plfa.Naturals#finite-creation]
#### Exercise `+-swap` (recommended) {#plus-swap}
#### Exercise `+-swap` (recommended) {#plus-swap}
Show
@ -189,7 +189,7 @@ for all naturals `m`, `n`, and `p`.
#### Exercise `Bin-laws` (stretch) {#Bin-laws}
Recall that
Recall that
Exercise [Bin][plfa.Naturals#Bin]
defines a datatype `Bin` of bitstrings representing natural numbers
and asks you to define functions
@ -218,7 +218,7 @@ Give an example of a preorder that is not a partial order.
Give an example of a partial order that is not a preorder.
#### Exercise `≤-antisym-cases` {#leq-antisym-cases}
#### Exercise `≤-antisym-cases` {#leq-antisym-cases}
The above proof omits cases where one argument is `z≤n` and one
argument is `s≤s`. Why is it ok to omit them?
@ -267,7 +267,7 @@ Show that the sum of two odd numbers is even.
#### Exercise `Bin-predicates` (stretch) {#Bin-predicates}
Recall that
Recall that
Exercise [Bin][plfa.Naturals#Bin]
defines a datatype `Bin` of bitstrings representing natural numbers.
Representations are not unique due to leading zeros.
@ -310,6 +310,5 @@ and back is the identity.
---------------
to (from x) ≡ x
\end{code}
(Hint: For each of these, you may first need to prove related
properties of `One`.)

View file

@ -4,9 +4,9 @@ layout : page
permalink : /Assignment2/
---
\begin{code}
```
module Assignment2 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -27,7 +27,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
@ -50,7 +50,7 @@ open import Data.Product using (Σ; _,_; ∃; Σ-syntax; ∃-syntax)
open import plfa.Relations using (_<_; z<s; s<s)
open import plfa.Isomorphism using (_≃_; ≃-sym; ≃-trans; _≲_; extensionality)
open plfa.Isomorphism.≃-Reasoning
\end{code}
```
## Equality
@ -70,25 +70,25 @@ regard to inequality. Rewrite both `+-monoˡ-≤` and `+-mono-≤`.
#### Exercise `≃-implies-≲`
Show that every isomorphism implies an embedding.
\begin{code}
```
postulate
≃-implies-≲ : ∀ {A B : Set}
→ A ≃ B
-----
→ A ≲ B
\end{code}
→ A ≲ B
```
#### Exercise `_⇔_` (recommended) {#iff}
Define equivalence of propositions (also known as "if and only if") as follows.
\begin{code}
```
record _⇔_ (A B : Set) : Set where
field
to : A → B
from : B → A
open _⇔_
\end{code}
```
Show that equivalence is reflexive, symmetric, and transitive.
#### Exercise `Bin-embedding` (stretch) {#Bin-embedding}
@ -97,12 +97,12 @@ Recall that Exercises
[Bin][plfa.Naturals#Bin] and
[Bin-laws][plfa.Induction#Bin-laws]
define a datatype of bitstrings representing natural numbers.
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
And ask you to define the following functions:
to : ℕ → Bin
@ -129,7 +129,7 @@ Show sum is commutative up to isomorphism.
#### Exercise `⊎-assoc`
Show sum is associative up to isomorphism.
Show sum is associative up to isomorphism.
#### Exercise `⊥-identityˡ` (recommended)
@ -137,25 +137,25 @@ Show zero is the left identity of addition.
#### Exercise `⊥-identityʳ`
Show zero is the right identity of addition.
Show zero is the right identity of addition.
#### Exercise `⊎-weak-×` (recommended)
Show that the following property holds.
\begin{code}
```
postulate
⊎-weak-× : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
\end{code}
```
This is called a _weak distributive law_. Give the corresponding
distributive law, and explain how it relates to the weak version.
#### Exercise `⊎×-implies-×⊎`
Show that a disjunct of conjuncts implies a conjunct of disjuncts.
\begin{code}
```
postulate
⊎×-implies-×⊎ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
\end{code}
```
Does the converse hold? If so, prove; if not, give a counterexample.
@ -214,10 +214,10 @@ Show that each of these implies all the others.
#### Exercise `Stable` (stretch)
Say that a formula is _stable_ if double negation elimination holds for it.
\begin{code}
```
Stable : Set → Set
Stable A = ¬ ¬ A → A
\end{code}
```
Show that any negated formula is stable, and that the conjunction
of two stable formulas is stable.
@ -227,41 +227,41 @@ of two stable formulas is stable.
#### Exercise `∀-distrib-×` (recommended)
Show that universals distribute over conjunction.
\begin{code}
```
postulate
∀-distrib-× : ∀ {A : Set} {B C : A → Set} →
(∀ (x : A) → B x × C x) ≃ (∀ (x : A) → B x) × (∀ (x : A) → C x)
\end{code}
```
Compare this with the result (`→-distrib-×`) in
Chapter [Connectives][plfa.Connectives].
#### Exercise `⊎∀-implies-∀⊎`
Show that a disjunction of universals implies a universal of disjunctions.
\begin{code}
```
postulate
⊎∀-implies-∀⊎ : ∀ {A : Set} { B C : A → Set } →
(∀ (x : A) → B x) ⊎ (∀ (x : A) → C x) → ∀ (x : A) → B x ⊎ C x
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∃-distrib-⊎` (recommended)
Show that existentials distribute over disjunction.
\begin{code}
```
postulate
∃-distrib-⊎ : ∀ {A : Set} {B C : A → Set} →
∃[ x ] (B x ⊎ C x) ≃ (∃[ x ] B x) ⊎ (∃[ x ] C x)
\end{code}
```
#### Exercise `∃×-implies-×∃`
Show that an existential of conjunctions implies a conjunction of existentials.
\begin{code}
```
postulate
∃×-implies-×∃ : ∀ {A : Set} { B C : A → Set } →
∃[ x ] (B x × C x) → (∃[ x ] B x) × (∃[ x ] C x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∃-even-odd`
@ -278,13 +278,13 @@ Show that `y ≤ z` holds if and only if there exists a `x` such that
#### Exercise `∃¬-implies-¬∀` (recommended)
Show that an existential of a negation implies a negation of a universal.
\begin{code}
```
postulate
∃¬-implies-¬∀ : ∀ {A : Set} {B : A → Set}
→ ∃[ x ] (¬ B x)
--------------
→ ¬ (∀ x → B x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
@ -327,39 +327,38 @@ Using the above, establish that there is an isomorphism between `` and
#### Exercise `_<?_` (recommended)
Analogous to the function above, define a function to decide strict inequality.
\begin{code}
```
postulate
_<?_ : ∀ (m n : ℕ) → Dec (m < n)
\end{code}
```
#### Exercise `_≡?_`
Define a function to decide whether two naturals are equal.
\begin{code}
```
postulate
_≡?_ : ∀ (m n : ℕ) → Dec (m ≡ n)
\end{code}
```
#### Exercise `erasure`
Show that erasure relates corresponding boolean and decidable operations.
\begin{code}
```
postulate
∧-× : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∧ ⌊ y ⌋ ≡ ⌊ x ×-dec y ⌋
∨-⊎ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∨ ⌊ y ⌋ ≡ ⌊ x ⊎-dec y ⌋
not-¬ : ∀ {A : Set} (x : Dec A) → not ⌊ x ⌋ ≡ ⌊ ¬? x ⌋
\end{code}
```
#### Exercise `iff-erasure` (recommended)
Give analogues of the `_⇔_` operation from
Give analogues of the `_⇔_` operation from
Chapter [Isomorphism][plfa.Isomorphism#iff],
operating on booleans and decidables, and also show the corresponding erasure.
\begin{code}
```
postulate
_iff_ : Bool → Bool → Bool
_⇔-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A ⇔ B)
iff-⇔ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ iff ⌊ y ⌋ ≡ ⌊ x ⇔-dec y ⌋
\end{code}
iff-⇔ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ iff ⌊ y ⌋ ≡ ⌊ x ⇔-dec y ⌋
```

View file

@ -4,9 +4,9 @@ layout : page
permalink : /Assignment3/
---
\begin{code}
```
module Assignment3 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -27,7 +27,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
@ -49,62 +49,62 @@ open import plfa.Lists using (List; []; _∷_; [_]; [_,_]; [_,_,_]; [_,_,_,_];
_++_; reverse; map; foldr; sum; All; Any; here; there; _∈_)
open import plfa.Lambda hiding (ƛ_⇒_; case_[zero⇒_|suc_⇒_]; μ_⇒_; plus)
open import plfa.Properties hiding (value?; unstuck; preserves; wttdgs)
\end{code}
```
#### Exercise `reverse-++-commute` (recommended)
Show that the reverse of one list appended to another is the
reverse of the second appended to the reverse of the first.
\begin{code}
```
postulate
reverse-++-commute : ∀ {A : Set} {xs ys : List A}
→ reverse (xs ++ ys) ≡ reverse ys ++ reverse xs
\end{code}
```
#### Exercise `reverse-involutive` (recommended)
A function is an _involution_ if, when applied twice, it acts
as the identity function. Show that reverse is an involution.
\begin{code}
```
postulate
reverse-involutive : ∀ {A : Set} {xs : List A}
→ reverse (reverse xs) ≡ xs
\end{code}
```
#### Exercise `map-compose`
Prove that the map of a composition is equal to the composition of two maps.
\begin{code}
```
postulate
map-compose : ∀ {A B C : Set} {f : A → B} {g : B → C}
→ map (g ∘ f) ≡ map g ∘ map f
\end{code}
```
The last step of the proof requires extensionality.
#### Exercise `map-++-commute`
Prove the following relationship between map and append.
\begin{code}
```
postulate
map-++-commute : ∀ {A B : Set} {f : A → B} {xs ys : List A}
→ map f (xs ++ ys) ≡ map f xs ++ map f ys
\end{code}
```
#### Exercise `map-Tree`
Define a type of trees with leaves of type `A` and internal
nodes of type `B`.
\begin{code}
```
data Tree (A B : Set) : Set where
leaf : A → Tree A B
node : Tree A B → B → Tree A B → Tree A B
\end{code}
```
Define a suitable map operator over trees.
\begin{code}
```
postulate
map-Tree : ∀ {A B C D : Set}
→ (A → C) → (B → D) → Tree A B → Tree C D
\end{code}
```
#### Exercise `product` (recommended)
@ -116,31 +116,31 @@ For example,
#### Exercise `foldr-++` (recommended)
Show that fold and append are related as follows.
\begin{code}
```
postulate
foldr-++ : ∀ {A B : Set} (_⊗_ : A → B → B) (e : B) (xs ys : List A) →
foldr _⊗_ e (xs ++ ys) ≡ foldr _⊗_ (foldr _⊗_ e ys) xs
\end{code}
```
#### Exercise `map-is-foldr`
Show that map can be defined using fold.
\begin{code}
```
postulate
map-is-foldr : ∀ {A B : Set} {f : A → B} →
map f ≡ foldr (λ x xs → f x ∷ xs) []
\end{code}
```
This requires extensionality.
#### Exercise `fold-Tree`
Define a suitable fold function for the type of trees given earlier.
\begin{code}
```
postulate
fold-Tree : ∀ {A B C : Set}
→ (A → C) → (C → B → C → C) → Tree A B → C
\end{code}
```
#### Exercise `map-is-fold-Tree`
@ -149,23 +149,23 @@ Demonstrate an analogue of `map-is-foldr` for the type of trees.
#### Exercise `sum-downFrom` (stretch)
Define a function that counts down as follows.
\begin{code}
```
downFrom : ℕ → List ℕ
downFrom zero = []
downFrom (suc n) = n ∷ downFrom n
\end{code}
```
For example,
\begin{code}
```
_ : downFrom 3 ≡ [ 2 , 1 , 0 ]
_ = refl
\end{code}
```
Prove that the sum of the numbers `(n - 1) + ⋯ + 0` is
equal to `n * (n ∸ 1) / 2`.
\begin{code}
```
postulate
sum-downFrom : ∀ (n : ℕ)
→ sum (downFrom n) * 2 ≡ n * (n ∸ 1)
\end{code}
```
#### Exercise `foldl`
@ -199,25 +199,25 @@ Show that the equivalence `All-++-⇔` can be extended to an isomorphism.
First generalise composition to arbitrary levels, using
[universe polymorphism][plfa.Equality#unipoly].
\begin{code}
```
_∘′_ : ∀ {ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set ℓ₁} {B : Set ℓ₂} {C : Set ℓ₃}
→ (B → C) → (A → B) → A → C
(g ∘′ f) x = g (f x)
\end{code}
```
Show that `Any` and `All` satisfy a version of De Morgan's Law.
\begin{code}
```
postulate
¬Any≃All¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ Any P) xs ≃ All (¬_ ∘′ P) xs
\end{code}
```
Do we also have the following?
\begin{code}
```
postulate
¬All≃Any¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ All P) xs ≃ Any (¬_ ∘′ P) xs
\end{code}
```
If so, prove; if not, explain why.
@ -234,11 +234,11 @@ for some element of a list. Give their definitions.
Define the following variant of the traditional `filter` function on lists,
which given a list and a decidable predicate returns all elements of the
list satisfying the predicate.
\begin{code}
```
postulate
filter? : ∀ {A : Set} {P : A → Set}
→ (P? : Decidable P) → List A → ∃[ ys ]( All P ys )
\end{code}
```
## Lambda
@ -253,7 +253,7 @@ two natural numbers.
We can make examples with lambda terms slightly easier to write
by adding the following definitions.
\begin{code}
```
ƛ_⇒_ : Term → Term → Term
ƛ′ (` x) ⇒ N = ƛ x ⇒ N
ƛ′ _ ⇒ _ = ⊥-elim impossible
@ -268,9 +268,9 @@ case _ [zero⇒ _ |suc _ ⇒ _ ] = ⊥-elim impossible
μ′ (` x) ⇒ N = μ x ⇒ N
μ′ _ ⇒ _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
The definition of `plus` can now be written as follows.
\begin{code}
```
plus : Term
plus = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
case m
@ -280,7 +280,7 @@ plus = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
+ = ` "+"
m = ` "m"
n = ` "n"
\end{code}
```
Write out the definition of multiplication in the same style.
#### Exercise `_[_:=_]` (stretch)
@ -327,10 +327,10 @@ proof of `progress` above.
Combine `progress` and `—→¬V` to write a program that decides
whether a well-typed term is a value.
\begin{code}
```
postulate
value? : ∀ {A M} → ∅ ⊢ M ⦂ A → Dec (Value M)
\end{code}
```
#### Exercise `subst` (stretch)
@ -372,11 +372,3 @@ Give an example of an ill-typed term that does get stuck.
#### Exercise `unstuck` (recommended)
Provide proofs of the three postulates, `unstuck`, `preserves`, and `wttdgs` above.

View file

@ -4,9 +4,9 @@ layout : page
permalink : /Assignment4/
---
\begin{code}
```
module Assignment4 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -36,7 +36,7 @@ before and after code you add, to indicate your changes.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong; cong₂; _≢_)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Nat using (ℕ; zero; suc; _+_; _*_)
open import Data.Product using (_×_; ∃; ∃-syntax) renaming (_,_ to ⟨_,_⟩)
open import Data.String using (String; _≟_)
open import Relation.Nullary using (¬_; Dec; yes; no)
\end{code}
```
## DeBruijn
\begin{code}
```
module DeBruijn where
\end{code}
```
Remember to indent all code by two spaces.
\begin{code}
```
open import plfa.DeBruijn
\end{code}
```
#### Exercise (`mul`) (recommended)
@ -82,16 +82,16 @@ Using the evaluator, confirm that two times two is four.
## More
\begin{code}
```
module More where
\end{code}
```
Remember to indent all code by two spaces.
### Syntax
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -108,11 +108,11 @@ Remember to indent all code by two spaces.
infix 9 `_
infix 9 S_
infix 9 #_
\end{code}
```
### Types
\begin{code}
```
data Type : Set where
` : Type
_⇒_ : Type → Type → Type
@ -122,19 +122,19 @@ Remember to indent all code by two spaces.
` : Type
`⊥ : Type
`List : Type → Type
\end{code}
```
### Contexts
\begin{code}
```
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
### Variables and the lookup judgment
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -145,11 +145,11 @@ Remember to indent all code by two spaces.
→ Γ ∋ B
---------
→ Γ , A ∋ B
\end{code}
```
### Terms and the typing judgment
\begin{code}
```
data _⊢_ : Context → Type → Set where
-- variables
@ -244,11 +244,11 @@ Remember to indent all code by two spaces.
--------------
→ Γ ⊢ C
\end{code}
```
### Abbreviating de Bruijn indices
\begin{code}
```
lookup : Context → → Type
lookup (Γ , A) zero = A
lookup (Γ , _) (suc n) = lookup Γ n
@ -263,11 +263,11 @@ Remember to indent all code by two spaces.
#_ : ∀ {Γ} → (n : ) → Γ ⊢ lookup Γ n
# n = ` count n
\end{code}
```
## Renaming
\begin{code}
```
ext : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ∋ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ∋ B)
ext ρ Z = Z
ext ρ (S x) = S (ρ x)
@ -287,11 +287,11 @@ Remember to indent all code by two spaces.
rename ρ (`proj₁ L) = `proj₁ (rename ρ L)
rename ρ (`proj₂ L) = `proj₂ (rename ρ L)
rename ρ (case× L M) = case× (rename ρ L) (rename (ext (ext ρ)) M)
\end{code}
```
## Simultaneous Substitution
\begin{code}
```
exts : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ⊢ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ⊢ B)
exts σ Z = ` Z
exts σ (S x) = rename S_ (σ x)
@ -311,11 +311,11 @@ Remember to indent all code by two spaces.
subst σ (`proj₁ L) = `proj₁ (subst σ L)
subst σ (`proj₂ L) = `proj₂ (subst σ L)
subst σ (case× L M) = case× (subst σ L) (subst (exts (exts σ)) M)
\end{code}
```
## Single and double substitution
\begin{code}
```
_[_] : ∀ {Γ A B}
→ Γ , A ⊢ B
→ Γ ⊢ A
@ -339,11 +339,11 @@ Remember to indent all code by two spaces.
σ Z = W
σ (S Z) = V
σ (S (S x)) = ` x
\end{code}
```
## Values
\begin{code}
```
data Value : ∀ {Γ A} → Γ ⊢ A → Set where
-- functions
@ -376,14 +376,14 @@ Remember to indent all code by two spaces.
→ Value W
----------------
→ Value `⟨ V , W ⟩
\end{code}
```
Implicit arguments need to be supplied when they are
not fixed by the given arguments.
## Reduction
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -509,11 +509,11 @@ not fixed by the given arguments.
→ Value W
----------------------------------
→ case× `⟨ V , W ⟩ M —→ M [ V ][ W ]
\end{code}
```
## Reflexive and transitive closure
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -536,12 +536,12 @@ not fixed by the given arguments.
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
## Values do not reduce
\begin{code}
```
V¬—→ : ∀ {Γ A} {M N : Γ ⊢ A}
→ Value M
----------
@ -552,12 +552,12 @@ not fixed by the given arguments.
V¬—→ V-con ()
V¬—→ V-⟨ VM , _ ⟩ (ξ-⟨,⟩₁ M—→M) = V¬—→ VM M—→M
V¬—→ V-⟨ _ , VN ⟩ (ξ-⟨,⟩₂ _ N—→N) = V¬—→ VN N—→N
\end{code}
```
## Progress
\begin{code}
```
data Progress {A} (M : ∅ ⊢ A) : Set where
step : ∀ {N : ∅ ⊢ A}
@ -613,12 +613,12 @@ not fixed by the given arguments.
progress (case× L M) with progress L
... | step L—→L = step (ξ-case× L—→L)
... | done (V-⟨ VM , VN ⟩) = step (β-case× VM VN)
\end{code}
```
## Evaluation
\begin{code}
```
data Gas : Set where
gas : → Gas
@ -651,16 +651,16 @@ not fixed by the given arguments.
... | done VL = steps (L ∎) (done VL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
## Examples
\begin{code}
```
cube : ∅ ⊢ Nat ⇒ Nat
cube = ƛ (# 0 `* # 0 `* # 0)
_ : cube · con 2 —↠ con 8
_ =
_ =
begin
cube · con 2
—→⟨ β-ƛ V-con ⟩
@ -726,7 +726,7 @@ not fixed by the given arguments.
—→⟨ β-case× V-con V-zero ⟩
`⟨ `zero , con 42 ⟩
\end{code}
```
#### Exercise `More` (recommended in part)
@ -761,21 +761,21 @@ In this case, the simulation is _not_ lock-step.
## Inference
\begin{code}
```
module Inference where
\end{code}
```
Remember to indent all code by two spaces.
### Imports
\begin{code}
```
import plfa.More as DB
\end{code}
```
### Syntax
\begin{code}
```
infix 4 _∋_⦂_
infix 4 _⊢_↑_
infix 4 _⊢_↓_
@ -790,11 +790,11 @@ Remember to indent all code by two spaces.
infixl 7 _·_
infix 8 `suc_
infix 9 `_
\end{code}
```
### Identifiers, types, and contexts
\begin{code}
```
Id : Set
Id = String
@ -805,11 +805,11 @@ Remember to indent all code by two spaces.
data Context : Set where
∅ : Context
_,_⦂_ : Context → Id → Type → Context
\end{code}
```
### Terms
\begin{code}
```
data Term⁺ : Set
data Term⁻ : Set
@ -825,11 +825,11 @@ Remember to indent all code by two spaces.
`case_[zero⇒_|suc_⇒_] : Term⁺ → Term⁻ → Id → Term⁻ → Term⁻
μ_⇒_ : Id → Term⁻ → Term⁻
_↑ : Term⁺ → Term⁻
\end{code}
```
### Sample terms
\begin{code}
```
two : Term⁻
two = `suc (`suc `zero)
@ -838,11 +838,11 @@ Remember to indent all code by two spaces.
`case (` "m") [zero⇒ ` "n" ↑
|suc "m" ⇒ `suc (` "p" · (` "m" ↑) · (` "n" ↑) ↑) ])
` ⇒ ` ⇒ `
\end{code}
```
### Lookup
### Lookup
\begin{code}
```
data _∋_⦂_ : Context → Id → Type → Set where
Z : ∀ {Γ x A}
@ -854,11 +854,11 @@ Remember to indent all code by two spaces.
→ Γ ∋ x ⦂ A
-----------------
→ Γ , y ⦂ B ∋ x ⦂ A
\end{code}
```
### Bidirectional type checking
\begin{code}
```
data _⊢_↑_ : Context → Term⁺ → Type → Set
data _⊢_↓_ : Context → Term⁻ → Type → Set
@ -913,12 +913,12 @@ Remember to indent all code by two spaces.
→ A ≡ B
-------------
→ Γ ⊢ (M ↑) ↓ B
\end{code}
```
### Type equality
\begin{code}
```
_≟Tp_ : (A B : Type) → Dec (A ≡ B)
` ≟Tp ` = yes refl
` ≟Tp (A ⇒ B) = no λ()
@ -928,11 +928,11 @@ Remember to indent all code by two spaces.
... | no A≢ | _ = no λ{refl → A≢ refl}
... | yes _ | no B≢ = no λ{refl → B≢ refl}
... | yes refl | yes refl = yes refl
\end{code}
```
### Prerequisites
\begin{code}
```
dom≡ : ∀ {A A B B} → A ⇒ B ≡ A ⇒ B → A ≡ A
dom≡ refl = refl
@ -941,31 +941,31 @@ Remember to indent all code by two spaces.
ℕ≢⇒ : ∀ {A B} → ` ≢ A ⇒ B
ℕ≢⇒ ()
\end{code}
```
### Unique lookup
\begin{code}
```
uniq-∋ : ∀ {Γ x A B} → Γ ∋ x ⦂ A → Γ ∋ x ⦂ B → A ≡ B
uniq-∋ Z Z = refl
uniq-∋ Z (S x≢y _) = ⊥-elim (x≢y refl)
uniq-∋ (S x≢y _) Z = ⊥-elim (x≢y refl)
uniq-∋ (S _ ∋x) (S _ ∋x) = uniq-∋ ∋x ∋x
\end{code}
```
### Unique synthesis
\begin{code}
```
uniq-↑ : ∀ {Γ M A B} → Γ ⊢ M ↑ A → Γ ⊢ M ↑ B → A ≡ B
uniq-↑ (⊢` ∋x) (⊢` ∋x) = uniq-∋ ∋x ∋x
uniq-↑ (⊢L · ⊢M) (⊢L · ⊢M) = rng≡ (uniq-↑ ⊢L ⊢L)
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M) = refl
\end{code}
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M) = refl
```
## Lookup type of a variable in the context
\begin{code}
```
ext∋ : ∀ {Γ B x y}
→ x ≢ y
→ ¬ ∃[ A ]( Γ ∋ x ⦂ A )
@ -983,11 +983,11 @@ Remember to indent all code by two spaces.
... | no x≢y with lookup Γ x
... | no ¬∃ = no (ext∋ x≢y ¬∃)
... | yes ⟨ A , ⊢x ⟩ = yes ⟨ A , S x≢y ⊢x ⟩
\end{code}
```
### Promoting negations
\begin{code}
```
¬arg : ∀ {Γ A B L M}
→ Γ ⊢ L ↑ A ⇒ B
→ ¬ Γ ⊢ M ↓ A
@ -1001,12 +1001,12 @@ Remember to indent all code by two spaces.
---------------
→ ¬ Γ ⊢ (M ↑) ↓ B
¬switch ⊢M A≢B (⊢↑ ⊢M A≡B) rewrite uniq-↑ ⊢M ⊢M = A≢B A≡B
\end{code}
```
## Synthesize and inherit types
\begin{code}
```
synthesize : ∀ (Γ : Context) (M : Term⁺)
-----------------------
→ Dec (∃[ A ](Γ ⊢ M ↑ A))
@ -1023,7 +1023,7 @@ Remember to indent all code by two spaces.
... | yes ⟨ ` , ⊢L ⟩ = no (λ{ ⟨ _ , ⊢L · _ ⟩ → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ A ⇒ B , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (¬arg ⊢L ¬⊢M)
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
synthesize Γ (M ↓ A) with inherit Γ M A
... | no ¬⊢M = no (λ{ ⟨ _ , ⊢↓ ⊢M ⟩ → ¬⊢M ⊢M })
... | yes ⊢M = yes ⟨ A , ⊢↓ ⊢M ⟩
@ -1040,7 +1040,7 @@ Remember to indent all code by two spaces.
inherit Γ (`suc M) (A ⇒ B) = no (λ())
inherit Γ (`case L [zero⇒ M |suc x ⇒ N ]) A with synthesize Γ L
... | no ¬∃ = no (λ{ (⊢case ⊢L _ _) → ¬∃ ⟨ ` , ⊢L ⟩})
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L _ _) → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L _ _) → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ ` , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (λ{ (⊢case _ ⊢M _) → ¬⊢M ⊢M })
... | yes ⊢M with inherit (Γ , x ⦂ `) N A
@ -1054,11 +1054,11 @@ Remember to indent all code by two spaces.
... | yes ⟨ A , ⊢M ⟩ with A ≟Tp B
... | no A≢B = no (¬switch ⊢M A≢B)
... | yes A≡B = yes (⊢↑ ⊢M A≡B)
\end{code}
```
### Erasure
\begin{code}
```
∥_∥Tp : Type → DB.Type
` ∥Tp = DB.`
∥ A ⇒ B ∥Tp = ∥ A ∥Tp DB.⇒ ∥ B ∥Tp
@ -1084,7 +1084,7 @@ Remember to indent all code by two spaces.
∥ ⊢case ⊢L ⊢M ⊢N ∥⁻ = DB.case ∥ ⊢L ∥⁺ ∥ ⊢M ∥⁻ ∥ ⊢N ∥⁻
∥ ⊢μ ⊢M ∥⁻ = DB.μ ∥ ⊢M ∥⁻
∥ ⊢↑ ⊢M refl ∥⁻ = ∥ ⊢M ∥⁺
\end{code}
```
#### Exercise `bidirectional-mul` (recommended) {#bidirectional-mul}

View file

@ -5,9 +5,9 @@ permalink : /Exam/
---
\begin{code}
```
module Exam where
\end{code}
```
**IMPORTANT** For ease of marking, when modifying the given code please write
@ -18,7 +18,7 @@ before and after code you add, to indicate your changes.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong; _≢_)
open import Data.Empty using (⊥; ⊥-elim)
@ -28,15 +28,15 @@ open import Data.Product using (∃; ∃-syntax) renaming (_,_ to ⟨_,_⟩)
open import Data.String using (String; _≟_)
open import Relation.Nullary using (¬_; Dec; yes; no)
open import Relation.Binary using (Decidable)
\end{code}
```
## Problem 1
\begin{code}
```
module Problem1 where
open import Function using (_∘_)
\end{code}
```
Remember to indent all code by two spaces.
@ -51,13 +51,13 @@ Remember to indent all code by two spaces.
Remember to indent all code by two spaces.
\begin{code}
```
module Problem2 where
\end{code}
```
### Infix declarations
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -70,12 +70,12 @@ module Problem2 where
infix 8 `suc_
infix 9 `_
infix 9 S_
infix 9 #_
\end{code}
infix 9 #_
```
### Types and contexts
\begin{code}
```
data Type : Set where
_⇒_ : Type → Type → Type
` : Type
@ -83,11 +83,11 @@ module Problem2 where
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
### Variables and the lookup judgment
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -98,11 +98,11 @@ module Problem2 where
→ Γ ∋ A
---------
→ Γ , B ∋ A
\end{code}
```
### Terms and the typing judgment
\begin{code}
```
data _⊢_ : Context → Type → Set where
`_ : ∀ {Γ} {A}
@ -141,11 +141,11 @@ module Problem2 where
→ Γ , A ⊢ A
----------
→ Γ ⊢ A
\end{code}
```
### Abbreviating de Bruijn indices
\begin{code}
```
lookup : Context → → Type
lookup (Γ , A) zero = A
lookup (Γ , _) (suc n) = lookup Γ n
@ -160,11 +160,11 @@ module Problem2 where
#_ : ∀ {Γ} → (n : ) → Γ ⊢ lookup Γ n
# n = ` count n
\end{code}
```
### Renaming
\begin{code}
```
ext : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ∋ A)
-----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ∋ A)
@ -182,11 +182,11 @@ module Problem2 where
rename ρ (`suc M) = `suc (rename ρ M)
rename ρ (case L M N) = case (rename ρ L) (rename ρ M) (rename (ext ρ) N)
rename ρ (μ N) = μ (rename (ext ρ) N)
\end{code}
```
### Simultaneous Substitution
\begin{code}
```
exts : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ⊢ A)
----------------------------------
→ (∀ {A B} → Γ , B ∋ A → Δ , B ⊢ A)
@ -204,14 +204,14 @@ module Problem2 where
subst σ (`suc M) = `suc (subst σ M)
subst σ (case L M N) = case (subst σ L) (subst σ M) (subst (exts σ) N)
subst σ (μ N) = μ (subst (exts σ) N)
\end{code}
```
### Single substitution
\begin{code}
```
_[_] : ∀ {Γ A B}
→ Γ , B ⊢ A
→ Γ ⊢ B
→ Γ ⊢ B
---------
→ Γ ⊢ A
_[_] {Γ} {A} {B} N M = subst {Γ , B} {Γ} σ {A} N
@ -219,11 +219,11 @@ module Problem2 where
σ : ∀ {A} → Γ , B ∋ A → Γ ⊢ A
σ Z = M
σ (S x) = ` x
\end{code}
```
### Values
\begin{code}
```
data Value : ∀ {Γ A} → Γ ⊢ A → Set where
V-ƛ : ∀ {Γ A B} {N : Γ , A ⊢ B}
@ -238,11 +238,11 @@ module Problem2 where
→ Value V
--------------
→ Value (`suc V)
\end{code}
```
### Reduction
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -285,12 +285,12 @@ module Problem2 where
β-μ : ∀ {Γ A} {N : Γ , A ⊢ A}
---------------
→ μ N —→ N [ μ N ]
\end{code}
```
### Reflexive and transitive closure
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -313,12 +313,12 @@ module Problem2 where
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
### Progress
\begin{code}
```
data Progress {A} (M : ∅ ⊢ A) : Set where
step : ∀ {N : ∅ ⊢ A}
@ -348,11 +348,11 @@ module Problem2 where
... | done V-zero = step (β-zero)
... | done (V-suc VL) = step (β-suc VL)
progress (μ N) = step (β-μ)
\end{code}
```
### Evaluation
\begin{code}
```
data Gas : Set where
gas : → Gas
@ -385,25 +385,25 @@ module Problem2 where
... | done VL = steps (L ∎) (done VL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
## Problem 3
Remember to indent all code by two spaces.
\begin{code}
```
module Problem3 where
\end{code}
```
### Imports
\begin{code}
```
import plfa.DeBruijn as DB
\end{code}
```
### Syntax
\begin{code}
```
infix 4 _∋_⦂_
infix 4 _⊢_↑_
infix 4 _⊢_↓_
@ -416,34 +416,34 @@ module Problem3 where
infixl 7 _·_
infix 8 `suc_
infix 9 `_
\end{code}
```
### Types
\begin{code}
```
data Type : Set where
_⇒_ : Type → Type → Type
` : Type
\end{code}
```
### Identifiers
### Identifiers
\begin{code}
```
Id : Set
Id = String
\end{code}
```
### Contexts
\begin{code}
```
data Context : Set where
∅ : Context
_,_⦂_ : Context → Id → Type → Context
\end{code}
```
### Terms
\begin{code}
```
data Term⁺ : Set
data Term⁻ : Set
@ -459,11 +459,11 @@ module Problem3 where
`case_[zero⇒_|suc_⇒_] : Term⁺ → Term⁻ → Id → Term⁻ → Term⁻
μ_⇒_ : Id → Term⁻ → Term⁻
_↑ : Term⁺ → Term⁻
\end{code}
```
### Lookup
### Lookup
\begin{code}
```
data _∋_⦂_ : Context → Id → Type → Set where
Z : ∀ {Γ x A}
@ -475,11 +475,11 @@ module Problem3 where
→ Γ ∋ x ⦂ A
-----------------
→ Γ , y ⦂ B ∋ x ⦂ A
\end{code}
```
### Bidirectional type checking
\begin{code}
```
data _⊢_↑_ : Context → Term⁺ → Type → Set
data _⊢_↓_ : Context → Term⁻ → Type → Set
@ -534,12 +534,12 @@ module Problem3 where
→ A ≡ B
-------------
→ Γ ⊢ (M ↑) ↓ B
\end{code}
```
### Type equality
\begin{code}
```
_≟Tp_ : (A B : Type) → Dec (A ≡ B)
` ≟Tp ` = yes refl
` ≟Tp (A ⇒ B) = no λ()
@ -549,11 +549,11 @@ module Problem3 where
... | no A≢ | _ = no λ{refl → A≢ refl}
... | yes _ | no B≢ = no λ{refl → B≢ refl}
... | yes refl | yes refl = yes refl
\end{code}
```
### Prerequisites
\begin{code}
```
dom≡ : ∀ {A A B B} → A ⇒ B ≡ A ⇒ B → A ≡ A
dom≡ refl = refl
@ -562,31 +562,31 @@ module Problem3 where
ℕ≢⇒ : ∀ {A B} → ` ≢ A ⇒ B
ℕ≢⇒ ()
\end{code}
```
### Unique lookup
\begin{code}
```
uniq-∋ : ∀ {Γ x A B} → Γ ∋ x ⦂ A → Γ ∋ x ⦂ B → A ≡ B
uniq-∋ Z Z = refl
uniq-∋ Z (S x≢y _) = ⊥-elim (x≢y refl)
uniq-∋ (S x≢y _) Z = ⊥-elim (x≢y refl)
uniq-∋ (S _ ∋x) (S _ ∋x) = uniq-∋ ∋x ∋x
\end{code}
```
### Unique synthesis
\begin{code}
```
uniq-↑ : ∀ {Γ M A B} → Γ ⊢ M ↑ A → Γ ⊢ M ↑ B → A ≡ B
uniq-↑ (⊢` ∋x) (⊢` ∋x) = uniq-∋ ∋x ∋x
uniq-↑ (⊢L · ⊢M) (⊢L · ⊢M) = rng≡ (uniq-↑ ⊢L ⊢L)
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M) = refl
\end{code}
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M) = refl
```
## Lookup type of a variable in the context
\begin{code}
```
ext∋ : ∀ {Γ B x y}
→ x ≢ y
→ ¬ ∃[ A ]( Γ ∋ x ⦂ A )
@ -604,11 +604,11 @@ module Problem3 where
... | no x≢y with lookup Γ x
... | no ¬∃ = no (ext∋ x≢y ¬∃)
... | yes ⟨ A , ⊢x ⟩ = yes ⟨ A , S x≢y ⊢x ⟩
\end{code}
```
### Promoting negations
\begin{code}
```
¬arg : ∀ {Γ A B L M}
→ Γ ⊢ L ↑ A ⇒ B
→ ¬ Γ ⊢ M ↓ A
@ -622,12 +622,12 @@ module Problem3 where
---------------
→ ¬ Γ ⊢ (M ↑) ↓ B
¬switch ⊢M A≢B (⊢↑ ⊢M A≡B) rewrite uniq-↑ ⊢M ⊢M = A≢B A≡B
\end{code}
```
## Synthesize and inherit types
\begin{code}
```
synthesize : ∀ (Γ : Context) (M : Term⁺)
-----------------------
→ Dec (∃[ A ](Γ ⊢ M ↑ A))
@ -644,7 +644,7 @@ module Problem3 where
... | yes ⟨ ` , ⊢L ⟩ = no (λ{ ⟨ _ , ⊢L · _ ⟩ → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ A ⇒ B , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (¬arg ⊢L ¬⊢M)
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
synthesize Γ (M ↓ A) with inherit Γ M A
... | no ¬⊢M = no (λ{ ⟨ _ , ⊢↓ ⊢M ⟩ → ¬⊢M ⊢M })
... | yes ⊢M = yes ⟨ A , ⊢↓ ⊢M ⟩
@ -661,7 +661,7 @@ module Problem3 where
inherit Γ (`suc M) (A ⇒ B) = no (λ())
inherit Γ (`case L [zero⇒ M |suc x ⇒ N ]) A with synthesize Γ L
... | no ¬∃ = no (λ{ (⊢case ⊢L _ _) → ¬∃ ⟨ ` , ⊢L ⟩})
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L _ _) → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L _ _) → ℕ≢⇒ (uniq-↑ ⊢L ⊢L) })
... | yes ⟨ ` , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (λ{ (⊢case _ ⊢M _) → ¬⊢M ⊢M })
... | yes ⊢M with inherit (Γ , x ⦂ `) N A
@ -675,5 +675,4 @@ module Problem3 where
... | yes ⟨ A , ⊢M ⟩ with A ≟Tp B
... | no A≢B = no (¬switch ⊢M A≢B)
... | yes A≡B = yes (⊢↑ ⊢M A≡B)
\end{code}
```

View file

@ -4,9 +4,9 @@ layout : page
permalink : /PUC-Assignment1/
---
\begin{code}
```
module PUC-Assignment1 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -24,7 +24,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
open import Data.Nat using (ℕ; zero; suc; _+_; _*_; _∸_; _≤_; z≤n; s≤s
open import Data.Nat.Properties using (+-assoc; +-identityʳ; +-suc; +-comm;
≤-refl; ≤-trans; ≤-antisym; ≤-total; +-monoʳ-≤; +-monoˡ-≤; +-mono-≤)
open import plfa.Relations using (_<_; z<s; s<s; zero; suc; even; odd)
\end{code}
```
## Naturals
@ -70,12 +70,12 @@ Compute `5 ∸ 3` and `3 ∸ 5`, writing out your reasoning as a chain of equati
A more efficient representation of natural numbers uses a binary
rather than a unary system. We represent a number as a bitstring.
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
For instance, the bitstring
1011
@ -129,7 +129,7 @@ days using a finite story of creation, as
[earlier][plfa.Naturals#finite-creation]
#### Exercise `+-swap` (recommended) {#plus-swap}
#### Exercise `+-swap` (recommended) {#plus-swap}
Show
@ -186,7 +186,7 @@ for all naturals `m`, `n`, and `p`.
#### Exercise `Bin-laws` (stretch) {#Bin-laws}
Recall that
Recall that
Exercise [Bin][plfa.Naturals#Bin]
defines a datatype `Bin` of bitstrings representing natural numbers
and asks you to define functions
@ -215,7 +215,7 @@ Give an example of a preorder that is not a partial order.
Give an example of a partial order that is not a preorder.
#### Exercise `≤-antisym-cases` {#leq-antisym-cases}
#### Exercise `≤-antisym-cases` {#leq-antisym-cases}
The above proof omits cases where one argument is `z≤n` and one
argument is `s≤s`. Why is it ok to omit them?
@ -264,7 +264,7 @@ Show that the sum of two odd numbers is even.
#### Exercise `Bin-predicates` (stretch) {#Bin-predicates}
Recall that
Recall that
Exercise [Bin][plfa.Naturals#Bin]
defines a datatype `Bin` of bitstrings representing natural numbers.
Representations are not unique due to leading zeros.
@ -307,6 +307,5 @@ and back is the identity.
---------------
to (from x) ≡ x
\end{code}
(Hint: For each of these, you may first need to prove related
properties of `One`.)

View file

@ -4,9 +4,9 @@ layout : page
permalink : /PUC-Assignment2/
---
\begin{code}
```
module PUC-Assignment2 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -24,7 +24,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
@ -52,7 +52,7 @@ open import plfa.Isomorphism using (_≃_; ≃-sym; ≃-trans; _≲_; extensiona
open plfa.Isomorphism.≃-Reasoning
open import plfa.Lists using (List; []; _∷_; [_]; [_,_]; [_,_,_]; [_,_,_,_];
_++_; reverse; map; foldr; sum; All; Any; here; there; _∈_)
\end{code}
```
## Equality
@ -72,25 +72,25 @@ regard to inequality. Rewrite both `+-monoˡ-≤` and `+-mono-≤`.
#### Exercise `≃-implies-≲`
Show that every isomorphism implies an embedding.
\begin{code}
```
postulate
≃-implies-≲ : ∀ {A B : Set}
→ A ≃ B
-----
→ A ≲ B
\end{code}
→ A ≲ B
```
#### Exercise `_⇔_` (recommended) {#iff}
Define equivalence of propositions (also known as "if and only if") as follows.
\begin{code}
```
record _⇔_ (A B : Set) : Set where
field
to : A → B
from : B → A
open _⇔_
\end{code}
```
Show that equivalence is reflexive, symmetric, and transitive.
#### Exercise `Bin-embedding` (stretch) {#Bin-embedding}
@ -99,12 +99,12 @@ Recall that Exercises
[Bin][plfa.Naturals#Bin] and
[Bin-laws][plfa.Induction#Bin-laws]
define a datatype of bitstrings representing natural numbers.
\begin{code}
```
data Bin : Set where
nil : Bin
x0_ : Bin → Bin
x1_ : Bin → Bin
\end{code}
```
And ask you to define the following functions:
to : ℕ → Bin
@ -139,25 +139,25 @@ Show zero is the left identity of addition.
#### Exercise `⊥-identityʳ`
Show zero is the right identity of addition.
Show zero is the right identity of addition.
#### Exercise `⊎-weak-×` (recommended)
Show that the following property holds.
\begin{code}
```
postulate
⊎-weak-× : ∀ {A B C : Set} → (A ⊎ B) × C → A ⊎ (B × C)
\end{code}
```
This is called a _weak distributive law_. Give the corresponding
distributive law, and explain how it relates to the weak version.
#### Exercise `⊎×-implies-×⊎`
Show that a disjunct of conjuncts implies a conjunct of disjuncts.
\begin{code}
```
postulate
⊎×-implies-×⊎ : ∀ {A B C D : Set} → (A × B) ⊎ (C × D) → (A ⊎ C) × (B ⊎ D)
\end{code}
```
Does the converse hold? If so, prove; if not, give a counterexample.
@ -216,10 +216,10 @@ Show that each of these implies all the others.
#### Exercise `Stable` (stretch)
Say that a formula is _stable_ if double negation elimination holds for it.
\begin{code}
```
Stable : Set → Set
Stable A = ¬ ¬ A → A
\end{code}
```
Show that any negated formula is stable, and that the conjunction
of two stable formulas is stable.
@ -229,41 +229,41 @@ of two stable formulas is stable.
#### Exercise `∀-distrib-×` (recommended)
Show that universals distribute over conjunction.
\begin{code}
```
postulate
∀-distrib-× : ∀ {A : Set} {B C : A → Set} →
(∀ (x : A) → B x × C x) ≃ (∀ (x : A) → B x) × (∀ (x : A) → C x)
\end{code}
```
Compare this with the result (`→-distrib-×`) in
Chapter [Connectives][plfa.Connectives].
#### Exercise `⊎∀-implies-∀⊎`
Show that a disjunction of universals implies a universal of disjunctions.
\begin{code}
```
postulate
⊎∀-implies-∀⊎ : ∀ {A : Set} { B C : A → Set } →
(∀ (x : A) → B x) ⊎ (∀ (x : A) → C x) → ∀ (x : A) → B x ⊎ C x
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∃-distrib-⊎` (recommended)
Show that existentials distribute over disjunction.
\begin{code}
```
postulate
∃-distrib-⊎ : ∀ {A : Set} {B C : A → Set} →
∃[ x ] (B x ⊎ C x) ≃ (∃[ x ] B x) ⊎ (∃[ x ] C x)
\end{code}
```
#### Exercise `∃×-implies-×∃`
Show that an existential of conjunctions implies a conjunction of existentials.
\begin{code}
```
postulate
∃×-implies-×∃ : ∀ {A : Set} { B C : A → Set } →
∃[ x ] (B x × C x) → (∃[ x ] B x) × (∃[ x ] C x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
#### Exercise `∃-even-odd`
@ -272,7 +272,7 @@ How do the proofs become more difficult if we replace `m * 2` and `1 + m * 2`
by `2 * m` and `2 * m + 1`? Rewrite the proofs of `∃-even` and `∃-odd` when
restated in this way.
#### Exercise `∃-+-≤`
#### Exercise `∃-|-≤`
Show that `y ≤ z` holds if and only if there exists an `x` such that
`x + y ≡ z`.
@ -280,13 +280,13 @@ Show that `y ≤ z` holds if and only if there exists a `x` such that
#### Exercise `∃¬-implies-¬∀` (recommended)
Show that an existential of a negation implies a negation of a universal.
\begin{code}
```
postulate
∃¬-implies-¬∀ : ∀ {A : Set} {B : A → Set}
→ ∃[ x ] (¬ B x)
--------------
→ ¬ (∀ x → B x)
\end{code}
```
Does the converse hold? If so, prove; if not, explain why.
@ -329,41 +329,41 @@ Using the above, establish that there is an isomorphism between `` and
#### Exercise `_<?_` (recommended)
Analogous to the function above, define a function to decide strict inequality.
\begin{code}
```
postulate
_<?_ : ∀ (m n : ℕ) → Dec (m < n)
\end{code}
```
#### Exercise `_≡?_`
Define a function to decide whether two naturals are equal.
\begin{code}
```
postulate
_≡?_ : ∀ (m n : ℕ) → Dec (m ≡ n)
\end{code}
```
#### Exercise `erasure`
Show that erasure relates corresponding boolean and decidable operations.
\begin{code}
```
postulate
∧-× : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∧ ⌊ y ⌋ ≡ ⌊ x ×-dec y ⌋
∨-⊎ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ ∨ ⌊ y ⌋ ≡ ⌊ x ⊎-dec y ⌋
not-¬ : ∀ {A : Set} (x : Dec A) → not ⌊ x ⌋ ≡ ⌊ ¬? x ⌋
\end{code}
```
#### Exercise `iff-erasure` (recommended)
Give analogues of the `_⇔_` operation from
Give analogues of the `_⇔_` operation from
Chapter [Isomorphism][plfa.Isomorphism#iff],
operating on booleans and decidables, and also show the corresponding erasure.
\begin{code}
```
postulate
_iff_ : Bool → Bool → Bool
_⇔-dec_ : ∀ {A B : Set} → Dec A → Dec B → Dec (A ⇔ B)
iff-⇔ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ iff ⌊ y ⌋ ≡ ⌊ x ⇔-dec y ⌋
\end{code}
iff-⇔ : ∀ {A B : Set} (x : Dec A) (y : Dec B) → ⌊ x ⌋ iff ⌊ y ⌋ ≡ ⌊ x ⇔-dec y ⌋
```
## Lists
@ -371,56 +371,56 @@ postulate
Show that the reverse of one list appended to another is the
reverse of the second appended to the reverse of the first.
\begin{code}
```
postulate
reverse-++-commute : ∀ {A : Set} {xs ys : List A}
→ reverse (xs ++ ys) ≡ reverse ys ++ reverse xs
\end{code}
```
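As a quick sanity check of the statement on a concrete instance (this is
not a proof of the general law; it merely relies on `reverse` and `_++_`
computing as in the chapter):
```
-- both sides normalise to 3 ∷ 2 ∷ 1 ∷ []
_ : reverse ((1 ∷ 2 ∷ []) ++ (3 ∷ [])) ≡ reverse (3 ∷ []) ++ reverse (1 ∷ 2 ∷ [])
_ = refl
```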
#### Exercise `reverse-involutive` (recommended)
A function is an _involution_ if, when applied twice, it acts
as the identity function. Show that reverse is an involution.
\begin{code}
```
postulate
reverse-involutive : ∀ {A : Set} {xs : List A}
→ reverse (reverse xs) ≡ xs
\end{code}
```
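As an aside, a familiar involution is boolean negation; the following
check makes that concrete (the name is invented here, and `Bool`, `not`,
and `refl` are assumed to be in scope from the imports at the top of the
file):
```
-- negating twice gives back the original boolean
not-involutive : ∀ (b : Bool) → not (not b) ≡ b
not-involutive true  = refl
not-involutive false = refl
```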
#### Exercise `map-compose`
Prove that the map of a composition is equal to the composition of two maps.
\begin{code}
```
postulate
map-compose : ∀ {A B C : Set} {f : A → B} {g : B → C}
→ map (g ∘ f) ≡ map g ∘ map f
\end{code}
```
The last step of the proof requires extensionality.
#### Exercise `map-++-commute`
Prove the following relationship between map and append.
\begin{code}
```
postulate
map-++-commute : ∀ {A B : Set} {f : A → B} {xs ys : List A}
→ map f (xs ++ ys) ≡ map f xs ++ map f ys
\end{code}
```
#### Exercise `map-Tree`
Define a type of trees with leaves of type `A` and internal
nodes of type `B`.
\begin{code}
```
data Tree (A B : Set) : Set where
leaf : A → Tree A B
node : Tree A B → B → Tree A B → Tree A B
\end{code}
```
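For example, a small tree with naturals at the leaves and booleans at the
internal nodes can be written as follows (illustrative only, assuming `ℕ`
and `Bool` are in scope):
```
_ : Tree ℕ Bool
_ = node (leaf 0) true (leaf 1)
```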
Define a suitable map operator over trees.
\begin{code}
```
postulate
map-Tree : ∀ {A B C D : Set}
→ (A → C) → (B → D) → Tree A B → Tree C D
\end{code}
```
#### Exercise `product` (recommended)
@ -432,31 +432,31 @@ For example,
#### Exercise `foldr-++` (recommended)
Show that fold and append are related as follows.
\begin{code}
```
postulate
foldr-++ : ∀ {A B : Set} (_⊗_ : A → B → B) (e : B) (xs ys : List A) →
foldr _⊗_ e (xs ++ ys) ≡ foldr _⊗_ (foldr _⊗_ e ys) xs
\end{code}
```
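A quick concrete instance of the law, as a sanity check rather than a
proof (assuming `_+_` is in scope from the imports at the top of the
file):
```
-- both sides compute to 6
_ : foldr _+_ 0 ((1 ∷ 2 ∷ []) ++ (3 ∷ [])) ≡ foldr _+_ (foldr _+_ 0 (3 ∷ [])) (1 ∷ 2 ∷ [])
_ = refl
```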
#### Exercise `map-is-foldr`
Show that map can be defined using fold.
\begin{code}
```
postulate
map-is-foldr : ∀ {A B : Set} {f : A → B} →
map f ≡ foldr (λ x xs → f x ∷ xs) []
\end{code}
```
This requires extensionality.
#### Exercise `fold-Tree`
Define a suitable fold function for the type of trees given earlier.
\begin{code}
```
postulate
fold-Tree : ∀ {A B C : Set}
→ (A → C) → (C → B → C → C) → Tree A B → C
\end{code}
```
#### Exercise `map-is-fold-Tree`
@ -465,23 +465,23 @@ Demonstrate an analogue of `map-is-foldr` for the type of trees.
#### Exercise `sum-downFrom` (stretch)
Define a function that counts down as follows.
\begin{code}
```
downFrom : ℕ → List ℕ
downFrom zero = []
downFrom (suc n) = n ∷ downFrom n
\end{code}
```
For example,
\begin{code}
```
_ : downFrom 3 ≡ [ 2 , 1 , 0 ]
_ = refl
\end{code}
```
Prove that the sum of the numbers `(n - 1) + ⋯ + 0` is
equal to `n * (n ∸ 1) / 2`.
\begin{code}
```
postulate
sum-downFrom : ∀ (n : ℕ)
→ sum (downFrom n) * 2 ≡ n * (n ∸ 1)
\end{code}
```
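For instance, taking `n` to be `3`, both sides compute to `6`, so the
following sanity check goes through by `refl` (assuming `sum` is in scope
as in the chapter):
```
_ : sum (downFrom 3) * 2 ≡ 3 * (3 ∸ 1)
_ = refl
```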
#### Exercise `foldl`
@ -515,25 +515,25 @@ Show that the equivalence `All-++-⇔` can be extended to an isomorphism.
First generalise composition to arbitrary levels, using
[universe polymorphism][plfa.Equality#unipoly].
\begin{code}
```
_∘′_ : ∀ {ℓ₁ ℓ₂ ℓ₃ : Level} {A : Set ℓ₁} {B : Set ℓ₂} {C : Set ℓ₃}
→ (B → C) → (A → B) → A → C
(g ∘′ f) x = g (f x)
\end{code}
```
Show that `Any` and `All` satisfy a version of De Morgan's Law.
\begin{code}
```
postulate
¬Any≃All¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ Any P) xs ≃ All (¬_ ∘′ P) xs
\end{code}
```
Do we also have the following?
\begin{code}
```
postulate
¬All≃Any¬ : ∀ {A : Set} (P : A → Set) (xs : List A)
→ (¬_ ∘′ All P) xs ≃ Any (¬_ ∘′ P) xs
\end{code}
```
If so, prove; if not, explain why.
@ -550,8 +550,8 @@ for some element of a list. Give their definitions.
Define the following variant of the traditional `filter` function on lists,
which given a list and a decidable predicate returns all elements of the
list satisfying the predicate.
\begin{code}
```
postulate
filter? : ∀ {A : Set} {P : A → Set}
→ (P? : Decidable P) → List A → ∃[ ys ]( All P ys )
\end{code}
```

View file

@ -4,9 +4,9 @@ layout : page
permalink : /PUC-Assignment3/
---
\begin{code}
```
module PUC-Assignment3 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -24,7 +24,7 @@ Please ensure your files execute correctly under Agda!
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong; sym)
open Eq.≡-Reasoning using (begin_; _≡⟨⟩_; _≡⟨_⟩_; _∎)
@ -46,7 +46,7 @@ open import plfa.Lists using (List; []; _∷_; [_]; [_,_]; [_,_,_]; [_,_,_,_];
_++_; reverse; map; foldr; sum; All; Any; here; there; _∈_)
open import plfa.Lambda hiding (ƛ′_⇒_; case′_[zero⇒_|suc_⇒_]; μ′_⇒_; plus′)
open import plfa.Properties hiding (value?; unstuck; preserves; wttdgs)
\end{code}
```
## Lambda
@ -60,7 +60,7 @@ two natural numbers.
We can make examples with lambda terms slightly easier to write
by adding the following definitions.
\begin{code}
```
ƛ′_⇒_ : Term → Term → Term
ƛ′ (` x) ⇒ N = ƛ x ⇒ N
ƛ′ _ ⇒ _ = ⊥-elim impossible
@ -75,9 +75,9 @@ case _ [zero⇒ _ |suc _ ⇒ _ ] = ⊥-elim impossible
μ′ (` x) ⇒ N = μ x ⇒ N
μ′ _ ⇒ _ = ⊥-elim impossible
where postulate impossible : ⊥
\end{code}
```
The definition of `plus` can now be written as follows.
\begin{code}
```
plus′ : Term
plus′ = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
case′ m
@ -87,7 +87,7 @@ plus = μ′ + ⇒ ƛ′ m ⇒ ƛ′ n ⇒
+ = ` "+"
m = ` "m"
n = ` "n"
\end{code}
```
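For instance (purely illustrative, with a made-up name), a successor
function can be written in the same style:
```
inc : Term
inc = ƛ′ n ⇒ (`suc n)
  where
  n = ` "n"
```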
Write out the definition of multiplication in the same style.
#### Exercise `_[_:=_]` (stretch)
@ -134,10 +134,10 @@ proof of `progress` above.
Combine `progress` and `—→¬V` to write a program that decides
whether a well-typed term is a value.
\begin{code}
```
postulate
value? : ∀ {A M} → ∅ ⊢ M ⦂ A → Dec (Value M)
\end{code}
```
#### Exercise `subst` (stretch)
@ -179,11 +179,3 @@ Give an example of an ill-typed term that does get stuck.
#### Exercise `unstuck` (recommended)
Provide proofs of the three postulates, `unstuck`, `preserves`, and `wttdgs` above.

View file

@ -4,9 +4,9 @@ layout : page
permalink : /PUC-Assignment4/
---
\begin{code}
```
module PUC-Assignment4 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -33,7 +33,7 @@ before and after code you add, to indicate your changes.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong; cong₂; _≢_)
open import Data.Empty using (⊥; ⊥-elim)
@ -41,21 +41,21 @@ open import Data.Nat using (; zero; suc; _+_; _*_)
open import Data.Product using (_×_; ∃; ∃-syntax) renaming (_,_ to ⟨_,_⟩)
open import Data.String using (String; _≟_)
open import Relation.Nullary using (¬_; Dec; yes; no)
\end{code}
```
## DeBruijn
\begin{code}
```
module DeBruijn where
\end{code}
```
Remember to indent all code by two spaces.
\begin{code}
```
open import plfa.DeBruijn
\end{code}
```
#### Exercise (`mul`) (recommended)
@ -79,16 +79,16 @@ Using the evaluator, confirm that two times two is four.
## More
\begin{code}
```
module More where
\end{code}
```
Remember to indent all code by two spaces.
### Syntax
\begin{code}
```
infix 4 _⊢_
infix 4 _∋_
infixl 5 _,_
@ -105,11 +105,11 @@ Remember to indent all code by two spaces.
infix 9 `_
infix 9 S_
infix 9 #_
\end{code}
```
### Types
\begin{code}
```
data Type : Set where
`ℕ : Type
_⇒_ : Type → Type → Type
@ -119,19 +119,19 @@ Remember to indent all code by two spaces.
`⊤ : Type
`⊥ : Type
`List : Type → Type
\end{code}
```
### Contexts
\begin{code}
```
data Context : Set where
∅ : Context
_,_ : Context → Type → Context
\end{code}
```
### Variables and the lookup judgment
\begin{code}
```
data _∋_ : Context → Type → Set where
Z : ∀ {Γ A}
@ -142,11 +142,11 @@ Remember to indent all code by two spaces.
→ Γ ∋ B
---------
→ Γ , A ∋ B
\end{code}
```
### Terms and the typing judgment
\begin{code}
```
data _⊢_ : Context → Type → Set where
-- variables
@ -241,11 +241,11 @@ Remember to indent all code by two spaces.
--------------
→ Γ ⊢ C
\end{code}
```
### Abbreviating de Bruijn indices
\begin{code}
```
lookup : Context → ℕ → Type
lookup (Γ , A) zero = A
lookup (Γ , _) (suc n) = lookup Γ n
@ -260,11 +260,11 @@ Remember to indent all code by two spaces.
#_ : ∀ {Γ} → (n : ℕ) → Γ ⊢ lookup Γ n
# n = ` count n
\end{code}
```
## Renaming
\begin{code}
```
ext : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ∋ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ∋ B)
ext ρ Z = Z
ext ρ (S x) = S (ρ x)
@ -284,11 +284,11 @@ Remember to indent all code by two spaces.
rename ρ (`proj₁ L) = `proj₁ (rename ρ L)
rename ρ (`proj₂ L) = `proj₂ (rename ρ L)
rename ρ (case× L M) = case× (rename ρ L) (rename (ext (ext ρ)) M)
\end{code}
```
## Simultaneous Substitution
\begin{code}
```
exts : ∀ {Γ Δ} → (∀ {A} → Γ ∋ A → Δ ⊢ A) → (∀ {A B} → Γ , A ∋ B → Δ , A ⊢ B)
exts σ Z = ` Z
exts σ (S x) = rename S_ (σ x)
@ -308,11 +308,11 @@ Remember to indent all code by two spaces.
subst σ (`proj₁ L) = `proj₁ (subst σ L)
subst σ (`proj₂ L) = `proj₂ (subst σ L)
subst σ (case× L M) = case× (subst σ L) (subst (exts (exts σ)) M)
\end{code}
```
## Single and double substitution
\begin{code}
```
_[_] : ∀ {Γ A B}
→ Γ , A ⊢ B
→ Γ ⊢ A
@ -336,11 +336,11 @@ Remember to indent all code by two spaces.
σ Z = W
σ (S Z) = V
σ (S (S x)) = ` x
\end{code}
```
## Values
\begin{code}
```
data Value : ∀ {Γ A} → Γ ⊢ A → Set where
-- functions
@ -373,14 +373,14 @@ Remember to indent all code by two spaces.
→ Value W
----------------
→ Value `⟨ V , W ⟩
\end{code}
```
Implicit arguments need to be supplied when they are
not fixed by the given arguments.
## Reduction
\begin{code}
```
infix 2 _—→_
data _—→_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where
@ -506,11 +506,11 @@ not fixed by the given arguments.
→ Value W
----------------------------------
→ case× `⟨ V , W ⟩ M —→ M [ V ][ W ]
\end{code}
```
## Reflexive and transitive closure
\begin{code}
```
infix 2 _—↠_
infix 1 begin_
infixr 2 _—→⟨_⟩_
@ -533,12 +533,12 @@ not fixed by the given arguments.
------
→ M —↠ N
begin M—↠N = M—↠N
\end{code}
```
## Values do not reduce
\begin{code}
```
V¬—→ : ∀ {Γ A} {M N : Γ ⊢ A}
→ Value M
----------
@ -549,12 +549,12 @@ not fixed by the given arguments.
V¬—→ V-con ()
V¬—→ V-⟨ VM , _ ⟩ (ξ-⟨,⟩₁ M—→M) = V¬—→ VM M—→M
V¬—→ V-⟨ _ , VN ⟩ (ξ-⟨,⟩₂ _ N—→N) = V¬—→ VN N—→N
\end{code}
```
## Progress
\begin{code}
```
data Progress {A} (M : ∅ ⊢ A) : Set where
step : ∀ {N : ∅ ⊢ A}
@ -610,12 +610,12 @@ not fixed by the given arguments.
progress (case× L M) with progress L
... | step L—→L = step (ξ-case× L—→L)
... | done (V-⟨ VM , VN ⟩) = step (β-case× VM VN)
\end{code}
```
## Evaluation
\begin{code}
```
data Gas : Set where
gas : ℕ → Gas
@ -648,16 +648,16 @@ not fixed by the given arguments.
... | done VL = steps (L ∎) (done VL)
... | step {M} L—→M with eval (gas m) M
... | steps M—↠N fin = steps (L —→⟨ L—→M ⟩ M—↠N) fin
\end{code}
```
## Examples
\begin{code}
```
cube : ∅ ⊢ Nat ⇒ Nat
cube = ƛ (# 0 `* # 0 `* # 0)
_ : cube · con 2 —↠ con 8
_ =
_ =
begin
cube · con 2
—→⟨ β-ƛ V-con ⟩
@ -723,7 +723,7 @@ not fixed by the given arguments.
—→⟨ β-case× V-con V-zero ⟩
`⟨ `zero , con 42 ⟩
\end{code}
```
#### Exercise `More` (recommended in part)
@ -737,5 +737,3 @@ to confirm it returns the expected answer.
* an alternative formulation of unit type
* empty type (recommended)
* lists

View file

@ -4,9 +4,9 @@ layout : page
permalink : /PUC-Assignment5/
---
\begin{code}
```
module PUC-Assignment5 where
\end{code}
```
## YOUR NAME AND EMAIL GOES HERE
@ -33,7 +33,7 @@ before and after code you add, to indicate your changes.
## Imports
\begin{code}
```
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; trans; cong; cong₂; _≢_)
open import Data.Empty using (⊥; ⊥-elim)
@ -41,26 +41,26 @@ open import Data.Nat using (; zero; suc; _+_; _*_)
open import Data.Product using (_×_; ∃; ∃-syntax) renaming (_,_ to ⟨_,_⟩)
open import Data.String using (String; _≟_)
open import Relation.Nullary using (¬_; Dec; yes; no)
\end{code}
```
## Inference
\begin{code}
```
module Inference where
\end{code}
```
Remember to indent all code by two spaces.
### Imports
\begin{code}
```
import plfa.More as DB
\end{code}
```
### Syntax
\begin{code}
```
infix 4 _∋_⦂_
infix 4 _⊢_↑_
infix 4 _⊢_↓_
@ -75,11 +75,11 @@ Remember to indent all code by two spaces.
infixl 7 _·_
infix 8 `suc_
infix 9 `_
\end{code}
```
### Identifiers, types, and contexts
\begin{code}
```
Id : Set
Id = String
@ -90,11 +90,11 @@ Remember to indent all code by two spaces.
data Context : Set where
∅ : Context
_,_⦂_ : Context → Id → Type → Context
\end{code}
```
### Terms
\begin{code}
```
data Term⁺ : Set
data Term⁻ : Set
@ -110,11 +110,11 @@ Remember to indent all code by two spaces.
`case_[zero⇒_|suc_⇒_] : Term⁺ → Term⁻ → Id → Term⁻ → Term⁻
μ_⇒_ : Id → Term⁻ → Term⁻
_↑ : Term⁺ → Term⁻
\end{code}
```
### Sample terms
\begin{code}
```
two : Term⁻
two = `suc (`suc `zero)
@ -126,11 +126,11 @@ Remember to indent all code by two spaces.
2+2 : Term⁺
2+2 = plus · two · two
\end{code}
```
### Lookup
### Lookup
\begin{code}
```
data _∋_⦂_ : Context → Id → Type → Set where
Z : ∀ {Γ x A}
@ -142,11 +142,11 @@ Remember to indent all code by two spaces.
→ Γ ∋ x ⦂ A
-----------------
→ Γ , y ⦂ B ∋ x ⦂ A
\end{code}
```
### Bidirectional type checking
\begin{code}
```
data _⊢_↑_ : Context → Term⁺ → Type → Set
data _⊢_↓_ : Context → Term⁻ → Type → Set
@ -201,12 +201,12 @@ Remember to indent all code by two spaces.
→ A ≡ B
-------------
→ Γ ⊢ (M ↑) ↓ B
\end{code}
```
### Type equality
\begin{code}
```
_≟Tp_ : (A B : Type) → Dec (A ≡ B)
`ℕ ≟Tp `ℕ = yes refl
`ℕ ≟Tp (A ⇒ B) = no λ()
@ -216,11 +216,11 @@ Remember to indent all code by two spaces.
... | no A≢ | _ = no λ{refl → A≢ refl}
... | yes _ | no B≢ = no λ{refl → B≢ refl}
... | yes refl | yes refl = yes refl
\end{code}
```
### Prerequisites
\begin{code}
```
dom≡ : ∀ {A A′ B B′} → A ⇒ B ≡ A′ ⇒ B′ → A ≡ A′
dom≡ refl = refl
@ -229,31 +229,31 @@ Remember to indent all code by two spaces.
ℕ≢⇒ : ∀ {A B} → `ℕ ≢ A ⇒ B
ℕ≢⇒ ()
\end{code}
```
### Unique lookup
\begin{code}
```
uniq-∋ : ∀ {Γ x A B} → Γ ∋ x ⦂ A → Γ ∋ x ⦂ B → A ≡ B
uniq-∋ Z Z = refl
uniq-∋ Z (S x≢y _) = ⊥-elim (x≢y refl)
uniq-∋ (S x≢y _) Z = ⊥-elim (x≢y refl)
uniq-∋ (S _ ∋x) (S _ ∋x′) = uniq-∋ ∋x ∋x′
\end{code}
```
### Unique synthesis
\begin{code}
```
uniq-↑ : ∀ {Γ M A B} → Γ ⊢ M ↑ A → Γ ⊢ M ↑ B → A ≡ B
uniq-↑ (⊢` ∋x) (⊢` ∋x′) = uniq-∋ ∋x ∋x′
uniq-↑ (⊢L · ⊢M) (⊢L′ · ⊢M′) = rng≡ (uniq-↑ ⊢L ⊢L′)
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M′) = refl
\end{code}
uniq-↑ (⊢↓ ⊢M) (⊢↓ ⊢M′) = refl
```
## Lookup type of a variable in the context
\begin{code}
```
ext∋ : ∀ {Γ B x y}
→ x ≢ y
→ ¬ ∃[ A ]( Γ ∋ x ⦂ A )
@ -271,11 +271,11 @@ Remember to indent all code by two spaces.
... | no x≢y with lookup Γ x
... | no ¬∃ = no (ext∋ x≢y ¬∃)
... | yes ⟨ A , ⊢x ⟩ = yes ⟨ A , S x≢y ⊢x ⟩
\end{code}
```
### Promoting negations
\begin{code}
```
¬arg : ∀ {Γ A B L M}
→ Γ ⊢ L ↑ A ⇒ B
→ ¬ Γ ⊢ M ↓ A
@ -289,12 +289,12 @@ Remember to indent all code by two spaces.
---------------
→ ¬ Γ ⊢ (M ↑) ↓ B
¬switch ⊢M A≢B (⊢↑ ⊢M′ A′≡B) rewrite uniq-↑ ⊢M ⊢M′ = A≢B A′≡B
\end{code}
```
## Synthesize and inherit types
\begin{code}
```
synthesize : ∀ (Γ : Context) (M : Term⁺)
-----------------------
→ Dec (∃[ A ](Γ ⊢ M ↑ A))
@ -311,7 +311,7 @@ Remember to indent all code by two spaces.
... | yes ⟨ `ℕ , ⊢L ⟩ = no (λ{ ⟨ _ , ⊢L′ · _ ⟩ → ℕ≢⇒ (uniq-↑ ⊢L ⊢L′) })
... | yes ⟨ A ⇒ B , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (¬arg ⊢L ¬⊢M)
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
... | yes ⊢M = yes ⟨ B , ⊢L · ⊢M ⟩
synthesize Γ (M ↓ A) with inherit Γ M A
... | no ¬⊢M = no (λ{ ⟨ _ , ⊢↓ ⊢M ⟩ → ¬⊢M ⊢M })
... | yes ⊢M = yes ⟨ A , ⊢↓ ⊢M ⟩
@ -328,7 +328,7 @@ Remember to indent all code by two spaces.
inherit Γ (`suc M) (A ⇒ B) = no (λ())
inherit Γ (`case L [zero⇒ M |suc x ⇒ N ]) A with synthesize Γ L
... | no ¬∃ = no (λ{ (⊢case ⊢L _ _) → ¬∃ ⟨ `ℕ , ⊢L ⟩})
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L′ _ _) → ℕ≢⇒ (uniq-↑ ⊢L′ ⊢L) })
... | yes ⟨ _ ⇒ _ , ⊢L ⟩ = no (λ{ (⊢case ⊢L′ _ _) → ℕ≢⇒ (uniq-↑ ⊢L′ ⊢L) })
... | yes ⟨ `ℕ , ⊢L ⟩ with inherit Γ M A
... | no ¬⊢M = no (λ{ (⊢case _ ⊢M _) → ¬⊢M ⊢M })
... | yes ⊢M with inherit (Γ , x ⦂ `ℕ) N A
@ -342,11 +342,11 @@ Remember to indent all code by two spaces.
... | yes ⟨ A , ⊢M ⟩ with A ≟Tp B
... | no A≢B = no (¬switch ⊢M A≢B)
... | yes A≡B = yes (⊢↑ ⊢M A≡B)
\end{code}
```
### Erasure
\begin{code}
```
∥_∥Tp : Type → DB.Type
∥ `ℕ ∥Tp = DB.`ℕ
∥ A ⇒ B ∥Tp = ∥ A ∥Tp DB.⇒ ∥ B ∥Tp
@ -372,7 +372,7 @@ Remember to indent all code by two spaces.
∥ ⊢case ⊢L ⊢M ⊢N ∥⁻ = DB.case ∥ ⊢L ∥⁺ ∥ ⊢M ∥⁻ ∥ ⊢N ∥⁻
∥ ⊢μ ⊢M ∥⁻ = DB.μ ∥ ⊢M ∥⁻
∥ ⊢↑ ⊢M refl ∥⁻ = ∥ ⊢M ∥⁺
\end{code}
```
#### Exercise `bidirectional-mul` (recommended) {#bidirectional-mul}

View file

@ -39,7 +39,7 @@ Lectures and tutorials take place Fridays and some Thursdays in 548L.
<td><b>Fri 26 Apr</b></td>
<td><a href="/Equality/">Equality</a> &amp;
<a href="/Isomorphism/">Isomorphism</a> &amp;
<a href="/Connectives/">Connectives</a></td>
<a href="/Connectives/">Connectives</a></td>
</tr>
<tr>
<td><b>Fri 3 May</b></td>
@ -105,4 +105,3 @@ For instructions on how to set up Agda for PLFA see [Getting Started](/GettingSt
Submit assignments by email to [wadler@inf.ed.ac.uk](mailto:wadler@inf.ed.ac.uk).
Attach a single file named `PUC-Assignment1.lagda` or the like. Include
your name and email in the submitted file.

View file

@ -104,7 +104,7 @@ For instructions on how to set up Agda for PLFA see [Getting Started](/GettingSt
* [Assignment 5](/tspl/first-mock.pdf) cw5 due 4pm Thursday 22 November (Week 10)
<br />
Use file [Exam][Exam]. Despite the rubric, do **all three questions**.
Assignments are submitted by running
``` bash