MessagesAndRefinement chapter: a pass through it all

Adam Chlipala 2016-05-08 18:59:36 -04:00
parent 6333420f53
commit 254f370544


@@ -4018,7 +4018,7 @@ The threads of the two complementary operations \emph{rendezvous} and pass the m
Packages of semantics and proof techniques for such languages are often called \emph{process algebras}\index{process algebra}, as they support an algebraic style of reasoning about the source code of message-passing programs.
That is, we prove laws very similar to the familiar equations of algebra, and use those laws to ``rewrite'' inside larger processes, by replacing their subprocesses with others we have shown suitably equivalent.
It's a powerful technique for highly modular proofs, which we develop in the rest of this chapter for one concrete synchronous language.
-Well-known process algebras include the $\pi$-calculus and the Calculus of Communicating Systems; the one we focus on is idiosyncratic and designed partly to make the Coq proofs manageable.
+Well-known process algebras include the $\pi$-calculus\index{$\pi$-calculus} and the Calculus of Communicating Systems\index{Calculus of Communicating Systems}; the one we focus on is idiosyncratic and designed partly to make the Coq proofs manageable.
\section{An Object Language with Synchronous Message Passing}
@@ -4041,6 +4041,7 @@ Here's the intuitive explanation of each syntax construction.
\item \textbf{Fresh channel generation}\index{fresh channel generation} $\newp{\vec{c}}{x}{p(x)}$ creates a new \emph{private} channel to be used by the body process $p(x)$, where we replace $x$ with the channel that is chosen. Following tradition, we use the Greek letter $\nu$\index{$\nu$}\index{nu} (nu) for this purpose. Each generation operation takes a parameter $\vec{c}$, which we call the \emph{support}\index{support} of the operation. It gives a list of channels already in use for other purposes, so that the fresh channel must not equal any of them. (We assume an infinite domain of channels, so that, for any specific list, it is always possible to find a channel not in that list.)
\item \textbf{Abstraction boundaries} $\block{c}{p}$ prevent ``the outside world'' from sending $p$ any messages on channel $c$ or receiving any messages from $p$ via $c$. That is, $c$ is treated as a local channel for $p$.
+\abstraction
\item \textbf{Sends} $\send{c}{v}{p}$ and \textbf{receives} $\recv{c}{x}{p(x)}$, where we use an exclamation mark to suggest ``telling something'' and a question mark to suggest ``asking something.'' Processes of these kinds can rendezvous when they agree on the channel. When $\send{c}{v}{p_1}$ and $\recv{c}{x}{p_2(x)}$ rendezvous, they respectively evolve to $p_1$ and $p_2(v)$.
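To connect this syntax to mechanized proof, here is a minimal Coq sketch of one way the syntax might be encoded as an inductive type. The constructor names are illustrative, messages are simplified to numbers, and an inert process Done is assumed, so this need not match the chapter's accompanying development exactly.

Definition channel := nat.

Inductive proc :=
| NewChannel (notThese : list channel) (k : channel -> proc) (* fresh generation with support notThese *)
| Block (ch : channel) (pr : proc)                           (* abstraction boundary hiding ch *)
| Send (ch : channel) (v : nat) (k : proc)                   (* !c(v); p *)
| Recv (ch : channel) (k : nat -> proc)                      (* ?c(x); p(x) *)
| Par (pr1 pr2 : proc)                                       (* parallel composition *)
| Dup (pr : proc)                                            (* duplication *)
| Done.                                                      (* inert process, assumed here for completeness *)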
@@ -4104,7 +4105,7 @@ $$\infer{\lts{\parl{p_1}{p_2}}{l}{\parl{p'_1}{p_2}}}{
Finally, a duplication can spawn a new copy (``thread'') at any time.
$$\infer{\lts{\dup{p}}{}{\parl{\dup{p}}{p}}}{}$$
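As a sketch of how rules like these last two might become constructors of a label-indexed step relation over the encoding above (the action type and the use of option for silent steps are assumptions of this sketch; the remaining rules are elided):

Inductive action :=
| Output (ch : channel) (v : nat)
| Input (ch : channel) (v : nat).

Inductive lstep : proc -> option action -> proc -> Prop :=
| StepPar1 : forall p1 l p1' p2,
    lstep p1 l p1' ->
    lstep (Par p1 p2) l (Par p1' p2)
| StepDup : forall p,
    lstep (Dup p) None (Par (Dup p) p)
(* ... rules for sends, receives, rendezvous, generation, and blocking are elided ... *).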
-The labelled transition system approach may seem a bit unwieldy for just explaining the behavior of programs.
+The labelled-transition-system approach may seem a bit unwieldy for just explaining the behavior of programs.
Where it really pays off is in supporting a modular, algebraic reasoning style about processes, which we turn to next.
@@ -4144,7 +4145,7 @@ p'_1 & p'_2 \arrow{l}{R^{-1}}
\newcommand{\refines}[2]{#1 \leq #2}
\invariants
-This notion of simulation has quite a lot in common with our well-worn concept of invariants of transition systems.
+Simulations have quite a lot in common with our well-worn concept of invariants of transition systems.
Simulation can be seen as a kind of natural generalization of invariants, which are predicates over single states, into relations that apply to states of two different transition systems that need to evolve in (approximate) lock-step.
We define \emph{refinement} $\refines{p_1}{p_2}$ to indicate that there exists a simulation $R$ such that $p_1 \; R \; p_2$.
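Continuing the Coq sketch, one straightforward transcription of simulation and refinement is below. It is the simple variant in which every labelled step must be matched by exactly one identically labelled step; the chapter's official definition may treat silent steps more permissively.

Definition simulation (R : proc -> proc -> Prop) : Prop :=
  forall p1 p2, R p1 p2
    -> forall l p1', lstep p1 l p1'
      -> exists p2', lstep p2 l p2' /\ R p1' p2'.

Definition refines (p1 p2 : proc) : Prop :=
  exists R, simulation R /\ R p1 p2.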
@@ -4154,7 +4155,7 @@ Luckily, this somewhat involved definition is easily related back to our intuiti
If $\refines{p_1}{p_2}$, then every trace generated by $p_1$ is also generated by $p_2$.
\end{theorem}
\begin{proof}
-By induction on any trace generated by $p_1$.
+By induction on executions of $p_1$.
\end{proof}
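For concreteness, here is how the theorem might be transcribed over the sketch definitions above, using a hypothetical couldGenerate relation that collects the visible actions along an execution; only the statement is sketched, and the proof is omitted.

Inductive couldGenerate : proc -> list action -> Prop :=
| GenNil : forall p, couldGenerate p nil
| GenSilent : forall p p' tr,
    lstep p None p' -> couldGenerate p' tr -> couldGenerate p tr
| GenAction : forall p a p' tr,
    lstep p (Some a) p' -> couldGenerate p' tr -> couldGenerate p (a :: tr).

Theorem refines_couldGenerate : forall p1 p2,
  refines p1 p2
  -> forall tr, couldGenerate p1 tr -> couldGenerate p2 tr.
Proof.
  (* Provable by induction on the derivation of couldGenerate p1 tr,
     carrying the simulation relation along; omitted in this sketch. *)
Admitted.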
Refinement is also a preorder\index{preorder}.
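That is, the following two laws hold, via the identity relation and composition of simulation relations, respectively:
$$\infer{\refines{p}{p}}{}
\quad
\infer{\refines{p_1}{p_3}}{
  \refines{p_1}{p_2}
  & \refines{p_2}{p_3}
}$$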
@@ -4251,7 +4252,7 @@ $$\begin{array}{l}
\end{array}$$
Note that, without the abstraction boundaries at the start, this fact would not be derivable.
-We would need to worry about meddlesome threads in our environment interacting directly with $c_1$ or $c_2$, spoiling the protocol, and forcing us to add extra cases to the righthand side of the refinement.
+We would need to worry about meddlesome threads in our environment interacting directly with $c_1$ or $c_2$, spoiling the protocol and forcing us to add extra cases to the righthand side of the refinement.