day 2 pfenning

This commit is contained in:
Michael Zhang 2024-06-04 10:31:23 -04:00
parent 299da9bef4
commit f5dc06fc69
11 changed files with 472 additions and 11 deletions

4
.gitignore vendored

@@ -1,2 +1,4 @@
*.pdf
.DS_Store
*.agdai
_build

4
Makefile Normal file

@@ -0,0 +1,4 @@
deploy:
	rsync -azr \
		--exclude .git \
		./ root@veil:/home/blogDeploy/public/public-files/oplss-2024

27
ahmed/day1.agda Normal file

@@ -0,0 +1,27 @@
module ahmed.day1 where

open import Data.Product

data Type : Set where
  Bool : Type
  _-→_ : Type → Type → Type

data Term : Set where
  True : Term
  False : Term
  If_Then_Else_ : Term → Term → Term → Term
  λ[_::_]_ : Term → Type → Term → Term
  _∙_ : Term → Term → Term

data isValue : Term → Set where
  TrueValue : isValue True
  FalseValue : isValue False
  LambdaValue : (x : Term) (τ : Type) (e : Term) → isValue (λ[ x :: τ ] e)

Value = Σ Term isValue

data evaluationContext : Set where
  dot : evaluationContext
  EIf_Then_Else_ : evaluationContext → Term → Term → evaluationContext
  EAppLeft : evaluationContext → Term → evaluationContext
  EAppRight : Value → evaluationContext → evaluationContext

ahmed/notes.typ

@@ -1,13 +1,9 @@
#import "../common.typ": *
#import "@preview/prooftrees:0.1.0": *
#set page(width: 5in, height: 8in, margin: 0.4in)
#show: doc => conf(doc)
#let ifthenelse(e, e1, e2) = $"if" #e "then" #e1 "else" #e2$
#let subst(x, v, e) = $#e [#v\/#x]$
#let isValue(x) = $#x "value"$
#let mapstostar = $op(arrow.r.long.bar)^*$
#let safe = $"safe"$
#let db(x) = $bracket.l.double #x bracket.r.double$
#let TODO = text(fill: red)[*TODO*]
= Mon. Jun 3 \@ 14:00
@@ -50,7 +46,6 @@ Statics
Operational semantics
#let mapsto = $arrow.r.long.bar$
#rect[$e mapsto e'$]

common.typ

@@ -1,3 +1,15 @@
#import "@preview/prooftrees:0.1.0": *
#set page(width: 5in, height: 8in, margin: 0.4in)
#let db(x) = $bracket.l.double #x bracket.r.double$
#let isValue(x) = $#x "value"$
#let mapsto = $arrow.r.long.bar$
#let mapstostar = $op(arrow.r.long.bar)^*$
#let subst(x, v, e) = $#e [#v\/#x]$
#let TODO = text(fill: red)[*TODO*]
#let conf(doc) = {
show link: underline
set page(width: 5in, height: 8in, margin: 0.4in)
doc
}

2
oplss-2024.agda-lib Normal file

@@ -0,0 +1,2 @@
include: .
depend: standard-library

4
oplss.typ Normal file

@@ -0,0 +1,4 @@
#include "./ahmed/notes.typ"
#include "./pfenning/notes.typ"
#include "./silva/notes.typ"

1
pfenning/.gitignore vendored Normal file

@@ -0,0 +1 @@
snax

BIN
pfenning/lec1.jpg Normal file

Binary file not shown.


pfenning/notes.typ

@@ -1,4 +1,11 @@
#import "../common.typ": *
#import "@preview/prooftrees:0.1.0": *
#import "@preview/algo:0.3.3": algo, i, d, comment, code
#show: doc => conf(doc)
#let n(t) = text(fill: rgb("#aa3333"))[#t]
#let evalto = $arrow.r.hook$
#let with = $op(\&)$
= Adjoint Functional Programming
@@ -24,8 +31,8 @@ Language to be studied is called "snax"
- #tree(axi[], uni[$() : 1$])
- #tree(axi[], uni[$isValue(())$])
- +
- ${e : A}{"inl" e : A + B}$
- ${e : B}{"inr" e : A + B}$
- #tree(axi[$e : A$], uni[$"inl" e : A + B$])
- #tree(axi[$e : B$], uni[$"inr" e : A + B$])
- #tree(axi[$isValue("v")$], uni[$isValue("inl" v)$])
- #tree(axi[$isValue("v")$], uni[$isValue("inr" v)$])
- $A + B :equiv +\{"inl":A, "inr": B\}$
@@ -133,3 +140,236 @@ Language to be studied is called "snax"
axi[$Gamma tack.r e : A_k (k in L)$],
uni[$Gamma tack.r k(e) : +{l : A_l}_(l in L)$],
)
== Lecture 2
Type system
- Proving something different from type soundness: we want to demonstrate that no garbage is created.
- Set up a close correspondence between the static rules for inference and the dynamic rules for evaluation.
- Ideally, this correspondence should be _very_ close.
At runtime, if you have $Gamma tack.r e : A$:
- $e$ is what you're executing
- you don't want to carry $A$ around at runtime
- $Gamma$ corresponds to a set of variables (you do need this); it is realized by an environment $eta : Gamma$
#rect[$eta : Gamma$]
#tree(axi[], uni[$(dot) : (dot)$])
#tree(axi[$eta : Gamma$], axi[$dot tack.r v : A$], bin[$eta , x mapsto v : Gamma , x : A$])
("environment" refers to $eta$, while context refers to $Gamma$)
Runtime values are _always_ closed.
*Theorem.*
If $Gamma tack.r e : A$ and $eta : Gamma$, then $eta tack.r e arrow.r.hook v$ and $dot tack.r v : A$.
However, this rule is problematic:
#tree(
axi[$Delta tack.r e_1 : A$],
axi[$Gamma tack.r e_2 : B$],
bin[$Delta,Gamma tack.r (e_1, e_2) : A times B$]
)
This evaluates to:
#tree(
axi[$e_1 arrow.r.hook v_1$],
axi[$e_2 arrow.r.hook v_2$],
bin[$eta tack.r (e_1, e_2) arrow.r.hook (v_1, v_2)$],
)
How to split the environment in this case?
It's not feasible to split the free variables.
One idea is we can just pass the whole environment into both sides:
#tree(
axi[$eta tack.r e_1 arrow.r.hook v_1$],
axi[$eta tack.r e_2 arrow.r.hook v_2$],
bin[$eta tack.r (e_1, e_2) arrow.r.hook (v_1, v_2)$],
)
The typechecker tells us that everything is used up correctly, so at runtime we can use this assumption.
Using the subtractive approach, this looks like:
#tree(
axi[$eta tack.r e_1 arrow.r.hook v_1 #n[$tack.l eta_1$]$],
axi[$#n[$eta_1$] tack.r e_2 arrow.r.hook v_2 #n[$tack.l eta_2$]$],
bin[$#n[$eta_2$] tack.r (e_1, e_2) arrow.r.hook (v_1, v_2)$],
)
The subtractive approach has problems:
- When you execute your program, you are forced to go left-to-right. You can't run in parallel.
Additive approach:
#rect[$Gamma tack.r e: A tack.l Omega$]
where $Omega$ is the set of variables that are _actually_ used
For example, rules for pairs:
#tree(
axi[$Gamma tack.r e_1 : A tack.l Omega_1$],
axi[$Gamma tack.r e_2 : B tack.l Omega_2$],
bin[$Gamma tack.r (e_1,e_2) : A times B tack.l Omega_1, Omega_2$],
)
Here $Omega_1 , Omega_2$ is a disjoint union, which is undefined if they share a variable.
The runtime rule corresponds to:
#tree(
axi[$eta tack.r e_1 arrow.r.hook v_1 tack.l omega_1$],
axi[$eta tack.r e_2 arrow.r.hook v_2 tack.l omega_2$],
bin[$eta tack.r (e_1, e_2) arrow.r.hook (v_1, v_2) tack.l omega_1, omega_2$]
)
If we typechecked correctly, $omega_1$ and $omega_2$ would be disjoint as well.
*Soundness.* If $Gamma tack.r e : A tack.l Omega$ then $Omega tack.r e : A$ ($Omega subset Gamma$)
*Completeness.* If $Omega tack.r e : A$ and $Gamma supset Omega$ then $Gamma tack.r e : A tack.l Omega$
The $Omega tack.r e : A$ is a _precise_ judgement: $Omega$ _only_ contains the variables used in $e$.
You would need two different sets of typing rules, one with the $Omega$ output and one without.
Prove this by rule induction.
Change the original theorem:
*New Theorem.*
If $Gamma tack.r e : A tack.l Omega$ and $eta : Gamma$ and $omega : Omega$,
then $eta tack.r e arrow.r.hook v tack.l omega$ and $dot tack.r v : A$.
This can now be proven with the new dynamics.
How to prove that there's no garbage?
For all top level computations, $eta tack.r e arrow.r.hook v tack.l eta$.
This means that everything is used.
This can be proven using rule induction.
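As a concrete (and unofficial) illustration of the additive dynamics, here is a minimal Agda sketch for a tiny fragment with variables, unit, eager pairs, and match. The names (`Term`, `eval`, `noGarbage`, ...) and the use of `Maybe` for failing lookups are my own choices, not from the lecture; a subtractive variant would instead thread the leftover environment from left to right. Evaluation returns the value together with the list $omega$ of variables actually used, and the top-level "no garbage" check asks that every variable bound in $eta$ appears in $omega$.

```agda
-- Hypothetical sketch of the additive dynamics η ⊢ e ↪ v ⊣ ω (not lecture code).
open import Data.Nat using (ℕ; _≡ᵇ_)
open import Data.Bool using (Bool; true; false; _∧_; if_then_else_)
open import Data.List using (List; []; _∷_; _++_)
open import Data.Maybe using (Maybe; just; nothing)
open import Data.Product using (_×_; _,_)

Var : Set
Var = ℕ

data Term : Set where
  var    : Var → Term                      -- x
  unit   : Term                            -- ()
  pair   : Term → Term → Term              -- (e₁, e₂)
  matchP : Term → Var → Var → Term → Term  -- match e with (x, y) ⇒ e′

data Val : Set where
  unitV : Val
  pairV : Val → Val → Val

Env : Set
Env = List (Var × Val)

lookupVar : Env → Var → Maybe Val
lookupVar [] _ = nothing
lookupVar ((y , v) ∷ η) x = if x ≡ᵇ y then just v else lookupVar η x

-- local bind for Maybe, to keep the sketch self-contained
_>>=_ : {A B : Set} → Maybe A → (A → Maybe B) → Maybe B
just x  >>= f = f x
nothing >>= _ = nothing

-- additive dynamics: also return the variables ω that were actually used
eval : Env → Term → Maybe (Val × List Var)
eval η (var x)      = lookupVar η x >>= λ v → just (v , x ∷ [])
eval η unit         = just (unitV , [])
eval η (pair e₁ e₂) =
  eval η e₁ >>= λ { (v₁ , ω₁) →
  eval η e₂ >>= λ { (v₂ , ω₂) →
  just (pairV v₁ v₂ , ω₁ ++ ω₂) } }   -- typing should guarantee ω₁, ω₂ disjoint
eval η (matchP e x y e′) =
  eval η e >>= λ { (pairV v₁ v₂ , ω) →
    eval ((x , v₁) ∷ (y , v₂) ∷ η) e′ >>= λ { (v′ , ω′) → just (v′ , ω ++ ω′) }
  ; (unitV , _) → nothing }

_∈ᵇ_ : Var → List Var → Bool
x ∈ᵇ []       = false
x ∈ᵇ (y ∷ ys) = if x ≡ᵇ y then true else x ∈ᵇ ys

-- "no garbage": every variable bound in η was actually used
noGarbage : Env → List Var → Bool
noGarbage []            ω = true
noGarbage ((x , _) ∷ η) ω = (x ∈ᵇ ω) ∧ noGarbage η ω
```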
The theorem is updated because $e arrow.r.hook v$ may not necessarily hold (termination is not proven by rule induction), so evaluation becomes an assumption:
*New Theorem.*
If $Gamma tack.r e : A tack.l Omega$ and $eta : Gamma$ and $omega : Omega$
#n[and $eta tack.r e arrow.r.hook v tack.l omega$,] then $dot tack.r v : A$.
(under affine logic, the top-level judgment $Gamma tack.r e : A tack.l Omega$ requires only that $Omega subset Gamma$, not that $Omega = Gamma$)
Example of evaluation rule that matches the typing rule:
#tree(
axi[$eta tack.r e arrow.r.hook (v_1, v_2) tack.l omega$],
axi[$eta, x mapsto v_1 , y mapsto v_2 tack.r e' arrow.r.hook v' tack.l (omega', x mapsto v_1, y mapsto v_2)$],
bin[$eta tack.r "match" e "with" (x, y) arrow.r.double e' arrow.r.hook v' tack.l (omega, omega')$]
)
=== Looking at typing rules as logic
#rect[Natural Deduction]
$Gamma ::= (dot) | Gamma , A$
#tree(
axi[],
uni[$A tack.r A$]
)
#tree(
axi[$Delta tack.r A$],
axi[$Gamma tack.r B$],
bin[$Delta, Gamma tack.r A times B$]
)
#tree(
axi[$Delta tack.r A times B$],
axi[$Gamma, A, B tack.r C$],
bin[$Delta, Gamma tack.r C$]
)
These are like a rule of logic. For plus:
#tree(
axi[$Delta tack.r A$],
uni[$Delta tack.r A + B$],
)
#tree(
axi[$Delta tack.r B$],
uni[$Delta tack.r A + B$],
)
Proof by cases:
#tree(
axi[$Delta tack.r A + B$],
axi[$Gamma , A tack.r C$],
axi[$Gamma , B tack.r C$],
tri[$Delta , Gamma tack.r C$],
)
Every assumption has to be used _exactly_ once. This is called *linear logic*.
(notation uses $times.circle$ and $plus.circle$ instead of $times$ and $plus$)
Linear logic is weak by itself, just as a linear type system is weak without global definitions.
Summary of operators:
#image("lec1.jpg")
$!A$ is read "of course A"; it lets you re-use assumptions. We don't have this except via top-level definitions.
In the judgement $Sigma ; Gamma tack.r e : A$, $Sigma$ contains definitions that you can use however many times you want, and $Gamma$ contains the definitions that are created linearly.
Distinction between positive and negative types:
- Lambdas cannot be pattern-matched against; you have to apply them.
- However, for $times.circle$ and $plus.circle$ you can directly observe their structure.
The negative version of the pair is $A with B$, read "A with B":
#tree(
axi[$Gamma tack.r A$],
axi[$Gamma tack.r B$],
bin[$Gamma tack.r A with B$],
)
This is sound because only one of them can be extracted:
#tree(
axi[$Gamma tack.r e_1:A$],
axi[$Gamma tack.r e_2:B$],
bin[$Gamma tack.r angle.l e_1,e_2 angle.r : A with B$],
)
#tree(
axi[$Gamma tack.r e : A with B$],
uni[$Gamma tack.r e.pi_1 : A$]
)
#tree(
axi[$Gamma tack.r e : A with B$],
uni[$Gamma tack.r e.pi_2 : B$]
)
"Lazy pair" you can only extract one side at a time. There are also "lazy records":
$with { l : A_l}_(l in L)$
#tree(
axi[$Gamma tack.r e_l : A_l (forall l in L)$],
uni[$Gamma tack.r {l = e_l}_(l in L) : with {l : A_l}_(l in L)$]
)
#tree(
axi[$Gamma tack.r e : with { l : A_l}_(l in L)$],
uni[$Gamma tack.r e.k : A_k (k in L)$]
)
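A tiny, unofficial Agda sketch of the negative character of $A with B$ (my own illustration; it ignores linearity): a lazy pair is a record determined by its two observations, and an inhabitant is given by copatterns, i.e. by saying what each projection returns. A lazy record $with {l : A_l}_(l in L)$ is the same idea with one field per label.

```agda
-- Hypothetical sketch (not lecture code): A & B as a type defined by observations.
open import Data.Nat using (ℕ)
open import Data.Bool using (Bool; true)

record _&_ (A B : Set) : Set where
  field
    π₁ : A
    π₂ : B
open _&_

-- ⟨3, true⟩ given by copatterns: we only say what each projection returns,
-- and any use of the pair picks exactly one of the two observations.
example : ℕ & Bool
π₁ example = 3
π₂ example = true
```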

174
silva/notes.typ Normal file

@@ -0,0 +1,174 @@
#import "../common.typ": *
#import "@preview/prooftrees:0.1.0": *
#show: doc => conf(doc)
#let tru = $"true"$
#let fls = $"false"$
= Program Analysis with Kleene Algebra with Tests
== Mon Jun 3 \@ 15:40
#quote(block: true)[
Kleene algebra with tests is an algebraic framework that can be used to reason about imperative programs. It has been applied across a wide variety of areas including program transformations, concurrency control, compiler optimizations, cache control, networking, and more. In these lectures, we will provide an overview of Kleene Algebra with Tests, including the syntax, semantics, and the coalgebraic theory underlying decision procedures for program equivalence. We will illustrate how it can be used as a core framework for program verification, including successful extensions and fundamental limitations.
]
#table(
columns: (1fr, auto, 1fr),
align: (auto, center, auto),
```
while a & b do
p;
while a do
q;
while a & b do
p;
```,
$eq.quest$,
```
while a do
if b then
p;
else
q;
```
)
Study equivalence of uninterpreted simple imperative programs.
=== Regular expressions
Basic elements
$e ::= 0 | 1 | a in A | ...$
where $A$ is finite.
$e ::= ... | e ; e | e + e | e^*$
Set of words:
#rect[$db(e) : 2^(A^*) = cal(P)(A^*)$]
- $db(0) = emptyset$
- $db(1) = {epsilon }$
- $db(a) = {a }$
- For every symbol there is also an expression denoting exactly that symbol.
Writing both the same way is an abuse of notation.
- $db(e + f) = db(e) union db(f)$
- $db(#[$e ; f$]) = db(e) circle.filled db(f)$
- $forall (U, V : 2^(A^*)) U circle.filled V = { u v | u in U , v in V }$
- this is word concatenation
- $db(e^*) = union.big_(n in bb(N)) db(e)^n$
These are the denotational semantics of regular expressions.
Examples:
- $(1 + a) ; (1 + b) mapsto {epsilon , a , b, a b }$
- $(a + b)^* mapsto {a , b}^*$
- $(a^* b)^* a^* mapsto {a , b}^*$
These last two are equal $(a + b)^* equiv (a^* b)^* a^*$ because they have the same denotational semantics.
This is _denesting_.
These denesting rules can also be applied to programs such as the example at the start.
For regular expressions and extensions of regular expressions, you can come up with a finite number of equational axioms (the $equiv$ equations) from which any equivalence between regular expressions can be proven.
#rect[
*Definition (Kleene's Theorem).*
Let $L$ be a regular language.
Then the following are equivalent:
1. $L = db(e)$ for some regexp $e$
2. $L$ is accepted by a DFA
]
Regular languages do not include things like ${ a^n b^n }$.
See: #link("https://en.wikipedia.org/wiki/Chomsky_hierarchy")[Chomsky Hierarchy of languages].
- Regular sets are defined by a construction similar to that of regular expressions.
Proof:
- #[
Direction 1 -> 2 first.
https://en.wikipedia.org/wiki/Brzozowski_derivative
($L = db(e)$ for some regexp $e$) $arrow.r$ ($L$ is accepted by a DFA)
Build an automaton directly from the expression. Use _derivatives_ as the transition function.
DFA :
- S : finite set of states
- $F : S arrow.r bb(2)$
- $t : S arrow.r S^A$
- this must be deterministic
To construct a DFA out of an expression, construct 2 functions:
1. #[
$E : "RE" arrow.r bb(2)$
E stands for "empty word": it asks "does the language contain the empty word?", because this indicates where you can stop.
- $E(0) = 0$
- $E(1) = 1$
- $E(a) = 0$
- $E(e ; f) = E(e) times E(f)$
- $E(e + f) = E(e) or E(f)$
- $E(e^*) = 1$
]
2. #[
For a language $L : cal(P)(A^*)$, its derivative by $a$ is
- $L_a = { u | a u in L }$
Syntactically, $D : "RE" arrow.r "RE"^A$
- Note: $X arrow.r X^A equiv (X arrow.r X)_(a in A)$, i.e. one function $D_a$ per letter $a$
Definition:
- $D_a (0) = 0$
- $D_a (1) = 0$
- $D_a (a) = 1$
- $D_a (b) = 0$
- $D_a (e ; f) = (D_a (e) ; f) + (E(e) ; D_a (f))$
- If $e$ does not contain the empty word, then $E(e) = 0$ and the second summand vanishes ($0 + A = A$).
This amounts to the case split:
$cases(#[$D_a (e) ; f "if" E(e) = 0$], #[$D_a (e) ; f + D_a (f) "if" E(e) = 1$])$
- $D_a (e + f) = D_a (e) + D_a (f)$
- $D_a (e^*) = D_a (e) ; e^*$
]
This is not a DFA yet, because it needs to be shown that the set of states is finite.
Pick a *start state*: you can pick $e$ itself.
From there, collect the states you can reach.
Then, argue that the set of reachable states is finite.
It's not actually true that this is finite!
- If $e = (a^*)^*$, then $D_a (e) = (1 ; a^*) ; (a^*)^*$
- Then $D_a (D_a (e)) = (0;a^* + 1;a^*) ; (a^*)^* + (a^*) ; (a^*)^*$
- TODO: This isn't completely right, but the point is that it keeps expanding while sharing a common part
Need to look at the equivalence classes of derivatives, since $D$ only understands syntax and keeps around a lot of unnecessary baggage.
ACI (associativity, commutativity, idempotence of $+$)
Full ACI isn't completely necessary to show finiteness, since the repeating term is the same.
Delete unnecessary $(+ f)$ terms in the $+$ case.
This will give you finiteness.
]
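Here is a small, unofficial Agda sketch of the derivative construction above (my own choices: the alphabet is $bb(N)$, $bb(2)$ is `Bool`, and the names `RE`, `E`, `D`, `accepts` are mine): $E$ decides whether the empty word is accepted, $D$ is the transition function on expressions, and a word is accepted exactly when the iterated derivative is nullable. The ACI quotient is not implemented, so this sketch does not by itself yield a finite automaton.

```agda
-- Hypothetical sketch of Brzozowski derivatives (not lecture code).
open import Data.Nat using (ℕ; _≡ᵇ_)
open import Data.Bool using (Bool; true; false; _∧_; _∨_; if_then_else_)
open import Data.List using (List; []; _∷_; foldl)

data RE : Set where
  ∅   : RE                -- 0
  ε   : RE                -- 1
  sym : ℕ → RE            -- a ∈ A
  _·_ : RE → RE → RE      -- e ; f
  _+_ : RE → RE → RE      -- e + f
  _*  : RE → RE           -- e*

-- E : "is the empty word in the language?" (marks the accepting states)
E : RE → Bool
E ∅       = false
E ε       = true
E (sym _) = false
E (e · f) = E e ∧ E f
E (e + f) = E e ∨ E f
E (e *)   = true

-- D a e : derivative of e by a, used as the transition function
D : ℕ → RE → RE
D a ∅       = ∅
D a ε       = ∅
D a (sym b) = if a ≡ᵇ b then ε else ∅
D a (e · f) = (D a e · f) + (if E e then D a f else ∅)
D a (e + f) = D a e + D a f
D a (e *)   = D a e · (e *)

-- a word is accepted iff the derivative along the word is nullable
accepts : RE → List ℕ → Bool
accepts e w = E (foldl (λ e′ a → D a e′) e w)

-- e.g. (a + b)* accepts the word a b b (with a = 0, b = 1)
test : Bool
test = accepts ((sym 0 + sym 1) *) (0 ∷ 1 ∷ 1 ∷ [])
```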
Merge states of a non-deterministic finite automaton, using the peanut approach.
Alternative: #link("https://en.wikipedia.org/wiki/Thompson%27s_construction")[*Thompson's construction*].