updating Connectives, finishing exponentials

This commit is contained in:
wadler 2018-03-13 18:59:10 -03:00
parent bbee34baa2
commit 2e2ebb5665
6 changed files with 2495 additions and 12 deletions

src/Connectives-old.lagda — normal file, 1170 lines (diff suppressed because it is too large)

src/Connectives.lagda — normal file, 1153 lines (diff suppressed because it is too large)

View file

@@ -220,13 +220,20 @@ zero + n = n
(suc m) + n = suc (m + n)
\end{code}
To save space we postulate (rather than prove in full) two lemmas.
\begin{code}
postulate
  +-identity : ∀ (m : ℕ) → m + zero ≡ m
  +-suc : ∀ (m n : ℕ) → m + suc n ≡ suc (m + n)
\end{code}
This is our first use of a *postulate*. A postulate specifies a
signature for an identifier but no definition. Here we postulate
something proved earlier to save space. Postulates must be used with
caution. If we postulate something false then we could use Agda to
prove anything whatsoever.
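To see the danger concretely, here is a hypothetical sketch (not part of the chapter's development): postulating the false equation `zero ≡ suc zero` lets us derive further falsehoods using only machinery already in scope.
\begin{code}
-- Hypothetical: a false postulate, which Agda accepts without complaint.
postulate
  bogus : zero ≡ suc zero

-- From it, other falsehoods follow by ordinary reasoning.
one≡two : suc zero ≡ suc (suc zero)
one≡two = cong suc bogus
\end{code}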
We then repeat the proof of commutativity.
\begin{code}
+-comm : ∀ (m n : ℕ) → m + n ≡ n + m
+-comm m zero =
begin
@ -392,6 +399,49 @@ Nonetheless, rewrite is a vital part of the Agda toolkit,
as earlier examples have shown.
## Extensionality {#extensionality}
Extensionality asserts that the only way to distinguish functions is
by applying them; if two functions applied to the same argument always
yield the same result, then they are the same function. It is the
converse of `cong-app`, introduced earlier.
Agda does not presume extensionality, but we can postulate that it holds.
\begin{code}
postulate
extensionality : ∀ {A B : Set} {f g : A → B} → (∀ (x : A) → f x ≡ g x) → f ≡ g
\end{code}
Postulating extensionality does not lead to difficulties, as it is
known to be consistent with the theory that underlies Agda.
As an example, consider that we need results from two libraries,
one where addition is defined as above, and one where it is
defined the other way around.
\begin{code}
_+′_ : ℕ → ℕ → ℕ
m +′ zero = m
m +′ suc n = suc (m +′ n)
\end{code}
Applying commutativity, it is easy to show that both operators always
return the same result given the same arguments.
\begin{code}
same-app : ∀ (m n : ℕ) → m +′ n ≡ m + n
same-app m n rewrite +-comm m n = helper m n
  where
  helper : ∀ (m n : ℕ) → m +′ n ≡ n + m
  helper m zero = refl
  helper m (suc n) = cong suc (helper m n)
\end{code}
However, it might be convenient to assert that the two operators are
actually indistinguishable. This we can do via two applications of
extensionality.
\begin{code}
same : _+′_ ≡ _+_
same = extensionality λ{m → extensionality λ{n → same-app m n}}
\end{code}
We will occasionally have need to postulate extensionality in what follows.
## Leibniz equality
The form of asserting equivalence that we have used is due to Martin-Löf,

View file

@@ -22,7 +22,7 @@ open Eq.≡-Reasoning
## Isomorphism
In set theory, two sets are isomorphic if they are in one-to-one correspondence.
Here is a formal definition of isomorphism.
\begin{code}
record _≃_ (A B : Set) : Set where
field
@@ -106,12 +106,19 @@ is equivalent to a function `f` defined by the equations
f pᵢ = eᵢ
where the `pᵢ` are patterns (left-hand sides of an equation) and the
`eᵢ` are expressions (right-hand sides of an equation). Often using an
anonymous lambda expression is more convenient than using a named
function: it avoids a lengthy type declaration; and the definition
appears exactly where the function is used, so there is no need for
the writer to remember to declare it in advance, or for the reader to
search for the definition elsewhere in the code.
In the above, `to` and `from` are both bound to identity functions,
and `from∘to` and `to∘from` are both bound to functions that discard
their argument and return `refl`. In this case, `refl` alone is an
adequate proof since for the left inverse, `λ{y → y} (λ{x → x} x)`
simplifies to `x`, and similarly for the right inverse.
To show isomorphism is symmetric, we simply swap the roles of `to`
and `from`, and `from∘to` and `to∘from`.

View file

@@ -567,6 +567,36 @@ indicate that an operator associates to the right, or just `infix` to
indicate that parentheses are always required to disambiguate.
## Currying
We have chosen to represent a function of two arguments in terms
of a function of the first argument that returns a function of the
second argument. This trick goes by the name *currying*.
Agda, like other functional languages such as
ML and Haskell, is designed to make currying easy to use. Function
arrows associate to the left and application associates to the right.
    ℕ → ℕ → ℕ    stands for    ℕ → (ℕ → ℕ)

and

    _+_ 2 3    stands for    (_+_ 2) 3
The term `_+_ 2` by itself stands for the function that adds two to
its argument, hence applying it to three yields five.
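For instance (assuming `_≡_` and `refl` are in scope, as in earlier chapters), we can check this claim directly:
\begin{code}
_ : (_+_ 2) 3 ≡ 5
_ = refl
\end{code}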
Currying is named for Haskell Curry, after whom the programming
language Haskell is also named. Curry's work dates to the 1930's.
When I first learned about currying, I was told it was misattributed,
since the same idea was previously proposed by Moses Schönfinkel in
the 1920's. I was told a joke: "It should be called schönfinkeling,
but currying is tastier". Only later did I learn that the explanation
of the misattribution was itself a misattribution. The idea actually
appears in the *Begriffsschrift* of Gottlob Frege, published in 1879.
We should call it frigging!
## The story of creation, revisited
Just as our inductive definition defines the naturals in terms of the

View file

@@ -86,7 +86,80 @@ These rules differ only slightly from Gentzen's original. We
have written `A × B` where Gentzen wrote `A & B`, for reasons
that will become clear later.
## Natural deduction as a logic (no terms)
We adopt an idea from Gentzen's sequent calculus, writing `Γ ⊢ A` to
make explicit the set of assumptions `Γ` from which proposition `A`
follows.
    Γ ⊢ A
    Γ ⊢ B
    --------- ×-I
    Γ ⊢ A × B

    Γ ⊢ A × B
    --------- ×-E₁
    Γ ⊢ A

    Γ ⊢ A × B
    --------- ×-E₂
    Γ ⊢ B

    ----- ⊤-I
    Γ ⊢ ⊤

    Γ ⊢ A
    --------- +-I₁
    Γ ⊢ A ⊎ B

    Γ ⊢ B
    --------- +-I₂
    Γ ⊢ A ⊎ B

    Γ ⊢ A ⊎ B
    Γ , A ⊢ C
    Γ , B ⊢ C
    ----------- +-E
    Γ ⊢ C

    Γ ⊢ ⊥
    ----- ⊥-E
    Γ ⊢ C

    Γ , A ⊢ B
    --------- →-I
    Γ ⊢ A → B

    Γ ⊢ A → B
    Γ ⊢ A
    --------- →-E
    Γ ⊢ B

    Γ , A ⊢ ⊥
    --------- ¬-I
    Γ ⊢ ¬ A

    Γ ⊢ ¬ A
    Γ ⊢ A
    -------- ¬-E
    Γ ⊢ ⊥

    Γ , x : A ⊢ B
    ------------------ ∀-I (x does not appear free in Γ)
    Γ ⊢ ∀ (x : A) → B

    Γ ⊢ ∀ (x : A) → B
    Γ ⊢ M : A
    ----------------- ∀-E
    Γ ⊢ B [ x := M ]

    Γ ⊢ M : A
    Γ ⊢ B [ x := M ]
    ----------------- ∃-I
    Γ ⊢ ∃ [ x : A ] B

    Γ ⊢ ∃ [ x : A ] B
    Γ , x : A ⊢ C
    ----------------- ∃-E (x does not appear free in Γ or C)
    Γ ⊢ C
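As a small worked example of how these rules compose (assuming the usual assumption rule `Γ , A ⊢ A`, left implicit above), commutativity of conjunction can be derived:

    A × B ⊢ A × B              A × B ⊢ A × B
    ------------- ×-E₂         ------------- ×-E₁
    A × B ⊢ B                  A × B ⊢ A
    ---------------------------------------- ×-I
    A × B ⊢ B × A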