After this commit, when checking convertibility, the type checker first computes a normal form without expanding opaque terms.
If the terms are convertible, then we are done, and we saved a lot of time by not expanding unnecessary definitions.
If they are not, instead of throwing an error, we try again, this time expanding the opaque terms.
This seems to be the best of both worlds.
The opaque flag is a hint for the type checker, but it never prevents us from type checking a valid term.
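A minimal sketch of the two-pass strategy (the types and helper names below are stand-ins, not the actual kernel code):

#include <string>

struct expr { std::string repr; };

enum class unfold_opaque { no, yes };

// Placeholder normalizer: the real one would unfold opaque definitions only
// when the flag says so.
expr normalize(expr const & e, unfold_opaque) { return e; }

bool structurally_equal(expr const & a, expr const & b) { return a.repr == b.repr; }

bool is_convertible(expr const & t, expr const & s) {
    // Pass 1: normalize without expanding opaque definitions (cheap).
    if (structurally_equal(normalize(t, unfold_opaque::no),
                           normalize(s, unfold_opaque::no)))
        return true;
    // Pass 2: only if the cheap pass fails, retry with opaque definitions expanded.
    // Hence the opaque flag is just a hint: it never rejects a valid term.
    return structurally_equal(normalize(t, unfold_opaque::yes),
                              normalize(s, unfold_opaque::yes));
}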
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator produces better proof terms. This is particularly important when the remaining holes have to be proved using tactics.
For example, in one of the tests, the elaborator was producing the sub-expression
(λ x : N, if ((λ x::1 : N, if (P a x x::1) ⊥ ⊤) == (λ x : N, ⊤)) ⊥ ⊤)
After this commit, it produces
(λ x : N, ¬ ∀ x::1 : N, ¬ P a x x::1)
The expressions above are definitionally equal, but the second is easier to work with.
Question: do we really need hidden definitions?
Perhaps we can use only the opaque flag.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was failing in the following scenario:
- a constraint of the form
ctx |- ?m1 =:= ?m2
where
?m2 is assigned to ?m1,
and ?m1 is unassigned.
In this case, has_metavar(?m2, ?m1) returns true, and a cycle is incorrectly reported.
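One way to avoid the bogus cycle report, sketched below with stand-in types (not the actual elaborator code), is to instantiate an assigned metavariable before running the occurs/cycle check:

#include <map>
#include <string>

// Stand-in expression: either a metavariable or something else.
struct expr { bool is_metavar; std::string name; };

using assignment = std::map<std::string, expr>;   // metavar name -> assigned value

// Follow assignments: if e is an assigned metavariable, replace it by its value.
expr instantiate(expr const & e, assignment const & subst) {
    if (e.is_metavar) {
        auto it = subst.find(e.name);
        if (it != subst.end())
            return instantiate(it->second, subst);
    }
    return e;
}

// has_metavar-like check for this toy expr type.
bool occurs(std::string const & m, expr const & e) { return e.is_metavar && e.name == m; }

// Processing ctx |- ?m1 =:= rhs: instantiate first, then do the cycle check.
bool process_eq(expr const & m1, expr const & rhs, assignment const & subst) {
    expr r = instantiate(rhs, subst);
    if (r.is_metavar && r.name == m1.name)
        return true;               // ?m1 =:= ?m1: trivially consistent, not a cycle
    return !occurs(m1.name, r);    // genuine occurs/cycle check
}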
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The options in the io_state object were being lost.
This commit also includes a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The parser had a nasty ambiguity. For example,
f Type 1
had two possible interpretations
(f (Type) (1))
or
(f (Type 1))
To fix this issue, whenever we want to specify a particular universe, we have to enclose the 'Type' expression in parentheses.
Examples:
(Type 1)
(Type U)
(Type M + 1)
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was missing solutions because of a missing condition in is_simple_ho_match.
This commit also adds a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The scoped_map should not be used for caching values in the normalizer and type_checker. When we extend the context, the meaning of all variables is modified (we are essentially performing a lift). So, the values stored in the cache are not correct in the new context.
This commit also adds a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit also adds a new unit test that demonstrates non-termination due to this kind of constraint.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The "quota" hack used before this commit was inefficient, and too hackish.
This commit uses two lists of constraints: active and delayed.
The delayed constraints are only processed when there are no active constraints.
We use a simple index to quickly find which delayed constraints have assigned metavariables.
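An illustrative sketch of the two-queue scheme (the types and names are stand-ins, not the actual elaborator data structures):

#include <cstddef>
#include <deque>
#include <map>
#include <string>
#include <vector>

struct constraint {
    std::string              repr;
    std::vector<std::string> metavars;   // metavariables occurring in the constraint
};

struct constraint_queues {
    std::deque<constraint>                  active;    // processed eagerly
    std::deque<constraint>                  delayed;   // processed only when 'active' is empty
    std::multimap<std::string, std::size_t> index;     // metavar name -> positions in 'delayed'

    void delay(constraint const & c) {
        std::size_t pos = delayed.size();
        delayed.push_back(c);
        for (auto const & m : c.metavars)
            index.insert({m, pos});
    }

    // When a metavariable is assigned, the index tells us which delayed
    // constraints are affected; those are moved back to the active queue.
    // (Removal from 'delayed' and index maintenance are omitted here.)
    void on_metavar_assigned(std::string const & m) {
        auto range = index.equal_range(m);
        for (auto it = range.first; it != range.second; ++it)
            active.push_back(delayed[it->second]);
    }
};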
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
checkpoint
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The idea is to catch the inconsistency in constraints such as:
ctx |- ?m[inst:0 v] == fun x, ?m a x
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The modifications started at commit 1852c86948 made a big difference. For example, before these changes, the test tests/lean/implicit7.lean generated complicated constraints such as:
[x : Type; a : ?M::29[inst:1 ?M::0[lift:0:1]] x] ⊢ Pi B : Type, (Pi _ : x, (Pi _ : (?M::35[inst:0 #0, inst:1 #2, inst:2 #4, inst:3 #6, inst:5 #5, inst:6 #7, inst:7 #9, inst:9 #9, inst:10 #11, inst:13 ?M::0[lift:0:13]] x a B _), (?M::36[inst:1 #1, inst:2 #3, inst:3 #5, inst:4 #7, inst:6 #6, inst:7 #8, inst:8 #10, inst:10 #10, inst:11 #12, inst:14 ?M::0[lift:0:14]] x a B _ _))) ≈
?M::22 x a
After the changes, only very simple constraints are generated. The most complicated one is:
[] ⊢ Pi a : ?M::0, (Pi B : Type, (Pi _ : ?M::0, (Pi _ : B, ?M::0))) ≈ Pi x : ?M::17, ?M::18
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit also simplifies the method check_pi in the type_checker and type_inferer.
It also fixes process_meta_app in the elaborator.
The problem was in the methods process_meta_app and process_meta_inst.
They were processing convertibility constraints as equality constraints.
For example, process_meta_app would handle
ctx |- Type << ?f b
as
ctx |- Type =:= ?f b
This is not correct because a ?f that returns (Type U) for b satisfies the first but not the second.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This modification was motivated by a bug exposed by tst17 at tests/kernel/type_checker.
metavar_env is now a smart pointer to metavar_env_cell.
ro_metavar_env is a read-only smart pointer. It is useful to make sure we are using proof_state correctly.
Example showing that the approach for caching metavar_env is broken in the type_checker.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
http://luajit.org/install.html
If you're building a 64 bit application on OSX which links directly or
indirectly against LuaJIT, you need to link your main executable with
these flags:
-pagezero_size 10000 -image_base 100000000
The environment object is a "smart-pointer".
Before this commit, the use of "const &" for environment objects was broken.
For example, suppose we have a function f that should not modify the input environment.
Before this commit, its signature would be
void f(environment const & env)
This is broken: f's implementation can easily convert it to a read-write pointer by using
the copy constructor:
environment rw_env(env);
Now, f can use rw_env to update env.
To fix this issue, we now have ro_environment. It is a shared *const* pointer.
We can convert an environment into a ro_environment, but not the other way around.
ro_environment can also be seen as a form of documentation.
For example, now it is clear that type_inferer is not updating the environment, since its constructor takes a ro_environment.
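A simplified sketch of the idea (the real classes live in the kernel and carry much more state):

#include <memory>

struct environment_cell { /* definitions, axioms, ... */ };

class environment {                                            // read-write handle
    std::shared_ptr<environment_cell> m_ptr;
    friend class ro_environment;
public:
    environment() : m_ptr(std::make_shared<environment_cell>()) {}
    environment_cell * operator->() { return m_ptr.get(); }   // read-write access
};

class ro_environment {                                         // read-only handle
    std::shared_ptr<environment_cell const> m_ptr;             // shared *const* pointer
public:
    ro_environment(environment const & env) : m_ptr(env.m_ptr) {}  // env -> ro_env is allowed
    // There is no conversion back, so a ro_environment can never be used to mutate the cell.
    environment_cell const * operator->() const { return m_ptr.get(); }
};

// Taking ro_environment documents (and enforces) that the function does not update
// the environment, unlike 'environment const &'.
void type_inferer_like(ro_environment const & env);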
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Lean was spending 17% of the runtime "throwing exceptions" in the test tests/lean/implicit7.lean.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The synthesizer is not part of the elaborator anymore.
The elaborator fills the "easy" holes.
The remaining holes are filled using different techniques (e.g., tactic framework) that are independent of the elaborator.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
On Linux, -D STATIC=ON does not work if MULTI_THREAD support is enabled.
If we search for "pthread static crash" we find other projects with the same problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit allows us to build Lean without the pthread dependency.
It is also useful if we want to implement multi-threading on top of Boost.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
After this commit, a value of type 'expr' cannot be a reference to nullptr.
This commit also fixes several bugs due to the use of 'null' expressions.
TODO: do the same for kernel objects, sexprs, etc.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
I also reduced the stack size to 8 MB in the tests at tests/lean and tests/lean/slow. The idea is to simulate stack overflow conditions.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
We flatten applications. So, (g a b) is actually ((g a) b).
Thus, we must be able to unify (?f ?x) with (g a b).
Solution:
?f <- (g a)
?x <- b
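A minimal sketch of the corresponding matching step, using stand-in types (assuming the target application has at least one argument):

#include <string>
#include <utility>
#include <vector>

// Applications are kept flattened: (g a b) is stored as head "g" with args {a, b}.
struct expr {
    std::string       head;
    std::vector<expr> args;
};

// Since (g a b) is really ((g a) b), matching the pattern (?f ?x) peels off the
// last argument:  ?f <- (g a)  and  ?x <- b.
std::pair<expr, expr> match_app_pattern(expr const & target) {
    expr fn   = target;            // copy of (g a b)
    expr last = fn.args.back();    // b
    fn.args.pop_back();            // fn is now (g a)
    return {fn, last};
}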
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit fixes a problem exposed by t13.lean.
It has a theorem of the form:
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 := (show A by auto),
lemma2 := (show B by auto)
in (show B /\ A by auto)
When to_goal creates a goal for the metavariable associated with (show B /\ A by auto), it receives a context and a proposition of the form
[ A : Bool, B : Bool, assumption : A /\ B, lemma1 := Conjunct1 assumption, lemma2 := Conjunct2 assumption ] |- B /\ A
The context_entries "lemma1 := Conjunct1 assumption" and "lemma2 := Conjunct2 assumption" do not have a domain (aka type).
Before this commit, to_goal would simply replace any references to "lemma1" and "lemma2" in "B /\ A" with their definitions.
Note that "B /\ A" does not contain references to "lemma1" and "lemma2". Thus, the following goal is created
A : Bool, B : Bool, assumption : A /\ B |- B /\ A
That is, the lemmas are not available when solving B /\ A.
Thus, the tactic auto produced the following (weird) proof for T1, where the lemmas are computed but not used.
Theorem T1 (A B : Bool) (assumption : A ∧ B) : B ∧ A :=
let lemma1 := Conjunct1 assumption,
lemma2 := Conjunct2 assumption
in Conj (Conjunct2 assumption) (Conjunct1 assumption)
This commit fixes that. It computes the types of "Conjunct1 assumption" and "Conjunct2 assumption", and creates the goal
A : Bool, B : Bool, assumption : A /\ B, lemma1 : A, lemma2 : B |- B /\ A
After this commit, the proof for theorem T1 is
Theorem T1 (A B : Bool) (assumption : A ∧ B) : B ∧ A :=
let lemma1 := Conjunct1 assumption,
lemma2 := Conjunct2 assumption
in Conj lemma2 lemma1
as expected.
Finally, this example suggests that the encoding
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 : A := (by auto),
lemma2 : B := (by auto)
in (show B /\ A by auto)
is more efficient than
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 := (show A by auto),
lemma2 := (show B by auto)
in (show B /\ A by auto)
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The type checker (and type inferer) were not correctly handling Pi expressions whose type universe cannot be established due to the occurrence of metavariables. In this case, a max-constraint is created. The problem is that the domain and body of the Pi are in different contexts. The constraint generated before this commit was incorrect: it could contain a free variable. This commit fixes the issue by using the context of the body, and lifting the free variables in the domain by 1.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit improves the condition for showing that an equality (and convertibility) constraint cannot be solved. A nice consequence is that Lean produces nicer error messages. For example, the error message for the unit test elab1.lean is more informative.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Remark: on Windows, Ctrl-D does not seem to work.
So, this commit also changes the Lean startup message.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
When LEAN_THREAD_UNSAFE=ON, we:
- Do not run tests at tests/lua/threads
- Disable thread object at Lua API
- par tactical becomes an alias for interleave
- Disable some unit tests that use threads
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This is a very convenient feature for interrupting non-terminating user scripts.
Before this commit, the user had to manually invoke check_interrupt() in potentially expensive loops. Now, this is not needed anymore.
Remark: we still have to check whether this trick works with LuaJIT or not.
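For reference, a minimal sketch of a count-hook-based implementation of this kind of automatic check (the interrupt flag and the hook period below are illustrative, not necessarily what the commit uses):

#include <atomic>
#include <lua.hpp>

static std::atomic<bool> g_interrupt_requested{false};   // illustrative flag, set elsewhere

static void interrupt_hook(lua_State * L, lua_Debug *) {
    if (g_interrupt_requested)
        luaL_error(L, "script interrupted");   // unwinds the running Lua script
}

void install_interrupt_hook(lua_State * L) {
    // Invoke the hook every 1000 VM instructions; user scripts no longer need to
    // call check_interrupt() manually in their loops.
    lua_sethook(L, interrupt_hook, LUA_MASKCOUNT, 1000);
}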
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit tries to fix two failures in our unit tests:
tests/kernel/normalizer
tests/kernel/type_checker
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
operator bool() may produce unwanted conversions.
For example, we had the following bug in the code base.
...
object const & obj = find_object(const_name(n));
if (obj && obj.is_builtin() && obj.get_name() == n)
...
obj.get_name() has type lean::name
n has type lean::expr
Both have 'operator bool()', so the compiler uses the operator to
convert them to bool, and then compares the results.
Of course, this is not our intention.
After this commit, the compiler correctly signals the error.
The correct code is
...
object const & obj = find_object(const_name(n));
if (obj && obj.is_builtin() && obj.get_name() == const_name(n))
...
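One way to make the compiler reject such comparisons is to mark the conversion operators explicit; a minimal self-contained sketch (stand-in types, not the actual lean::name / lean::expr):

struct name {
    explicit operator bool() const { return true; }   // e.g. "is not anonymous"
};
struct expr {
    explicit operator bool() const { return true; }   // e.g. "is not null"
};

void demo(name const & a, expr const & b) {
    // With an implicit operator bool(), 'a == b' silently compiles as bool(a) == bool(b).
    // With explicit conversion operators the comparison below no longer compiles,
    // so the mistake is caught by the compiler:
    //   bool same = (a == b);   // error: no operator== for (name, expr)
    if (a && b) {
        // explicit operator bool() still works in boolean contexts like this one
    }
}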
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
When using tactics for proving theorems, a common pattern is
Theorem T : <proposition> := _.
apply <tactic>.
...
done.
This commit allows the user to write the simplified form:
Theorem T : <proposition>.
apply <tactic>.
...
done.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Recursive functions that may go very deep should invoke the function check_stack. It throws an exception if the amount of remaining stack space is small.
The function check_system() is syntactic sugar for
check_interrupted();
check_stack();
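A minimal sketch of the intended usage (the node type and visit function are illustrative):

#include <vector>

void check_interrupted() { /* throws if an interrupt was requested */ }
void check_stack()       { /* throws if the remaining stack space is small */ }

inline void check_system() { check_interrupted(); check_stack(); }

// Illustrative deeply recursive procedure: every level is guarded by check_system().
struct node { std::vector<node> children; };

void visit(node const & n) {
    check_system();
    for (auto const & c : n.children)
        visit(c);
}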
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Proof/Cex builders and tactics implemented in Lua had a "strong reference" to script_state. If they are stored in the Lua state, then we get a cyclic reference.
That is, script_state points to these objects, and they point back to script_state.
To avoid this memory leak, this commit defines a weak reference for script_state objects. The Proof/Cex builders and tactics now store a weak reference to the Lua state.
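A simplified sketch of the scheme (stand-in types; the real objects wrap a lua_State and more):

#include <memory>

struct script_state_cell { /* lua_State, registered objects, ... */ };

class script_state {                        // strong handle
    std::shared_ptr<script_state_cell> m_ptr;
    friend class weak_script_state;
public:
    script_state() : m_ptr(std::make_shared<script_state_cell>()) {}
};

class weak_script_state {                   // what proof/cex builders and tactics store
    std::weak_ptr<script_state_cell> m_ptr;
public:
    weak_script_state(script_state const & s) : m_ptr(s.m_ptr) {}
    // Returns an empty pointer if the state is gone, instead of keeping it alive
    // and forming a script_state <-> tactic reference cycle.
    std::shared_ptr<script_state_cell> lock() const { return m_ptr.lock(); }
};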
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
There was a bug in the app_rewriter1_tst. If we apply the ADD_COMM RW to
f(0), then the result should be f(0) since there is nothing to do for
ADD_COMM.
f(0) = f(0)
The proof for this equality should be Refl(Nat, f(0)). But it was
Refl(Nat -> Nat, f)
which is wrong. Somehow, the previous kernel didn't detect this type
mismatch, and recent changes to the kernel found the problem.
I fixed the bug and re-enabled the test as it was.
rewrite_* functions take the rewriting results of the sub-components and
construct the rewriting result for the main component.
For instance, the rewrite_app function takes env, ctx, and a value v s.t.
v = (e_0 e_1 ... e_n)
and the rewriting results for e_i's as a vector(buffer)
(e'_0, pf_0 -- proof of e_0 = e'_0)
(e'_1, pf_1 -- proof of e_1 = e'_1)
...
(e'_n, pf_n -- proof of e_n = e'_n).
Then the rewrite_app function constructs the new v'
v' = (e'_0 e'_1 ... e'_n)
and the proof of v = v', which is constructed from the pf_i's.
These functions are used in the component rewriters such as app_RW and
let_type_RW, as well as in more complicated rewriters such as the depth
rewriter.
In expression code blocks, we do not have to write a "return".
After this commit, the argument of an apply command is a Lua expression instead of a Lua block of code. That is, we can now write
apply (** REPEAT(ORELSE(imp_tactic, conj_tactic, conj_hyp_tactic, assumption_tactic)) **)
instead of
apply (** return REPEAT(ORELSE(imp_tactic, conj_tactic, conj_hyp_tactic, assumption_tactic)) **)
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
For example, after this commit, we can write
simple_tac = REPEAT(ORELSE(imp_tactic, conj_tactic)) .. assumption_tactic
instead of
simple_tac = REPEAT(ORELSE(imp_tactic(), conj_tactic())) .. assumption_tactic()
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Before this commit, the elaborator would only assign ?M <- P if P was normalized. This is bad since normalization may "destroy" the structure of P.
For example, consider the constraint
[a : Bool; b : Bool; c : Bool] ⊢ ?M::1 ≺ implies a (implies b (and a b))
Before this commit, ?M::1 would not be assigned to the "implies-term" because the "implies-term" is not normalized yet.
So, the elaborator would continue to process the constraint, and convert it into:
[a : Bool; b : Bool; c : Bool] ⊢ ?M::1 ≺ if Bool a (if Bool b (if Bool (if Bool a (if Bool b false true) true) false true) true) true
Now, ?M::1 is assigned to the term
if Bool a (if Bool b (if Bool (if Bool a (if Bool b false true) true) false true) true) true
This is bad, since the original structure was lost.
This commit also contains an example that only works after the commit is applied.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>