Projections build more general solutions. This commit also adds a test that demonstrates the issue. Before this commit, the elaborator would produce the "constant" predicate (fun x, a + b = b + a).
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The example tests/lua/simp1.lua demonstrates the issue.
The higher-order matcher matches closed terms that are definitionally equal.
So, given a definition
definition a := 1
it will match 'a' with '1' since they are definitionally equal.
Then, if we have a theorem
theorem a_eq_1 : a = 1
as a rewrite rule, it was triggering the following infinite loop when simplifying the expression "a"
a --> 1 --> 1 --> 1 ...
The first simplification is expected. The other ones are not.
The problem is that "1" is definitionally equal to "a", and they match.
The rewrite_rule_set manager accepts the rule a --> 1 since the left-hand-side does not occur in the right-hand-side.
To avoid this loop, we test if the new expression is not equal to the previous one.
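The termination test can be pictured with the following self-contained C++ toy model (illustrative only; the real simplifier threads contexts and equality proofs through each step, and matching is up to definitional equality):

#include <iostream>
#include <string>

// Toy model: terms are strings, and the only rewrite rule is a --> 1.
// Since "a" and "1" are definitionally equal, both match the rule;
// without the equality test below, "1" would keep rewriting forever.
using expr = std::string;

expr apply_rewrite_rules(expr const & e) {
    if (e == "a" || e == "1")
        return "1";
    return e;
}

expr simplify(expr e) {
    while (true) {
        expr next = apply_rewrite_rules(e);
        if (next == e) // stop when the rewrite made no progress
            return e;
        e = next;
    }
}

int main() {
    std::cout << simplify("a") << "\n"; // prints 1 and terminates
}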
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Convertability constraints are harder to solve than equality constraints, and it seems they don't buy us anything for definitions. They are just increasing the search space for the elaborator.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Now, we are again using the following invariant for simplifier_fn::result:
the type of the terms in the equality of the result is definitionally equal to the
type of the resultant expression.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
For example, in the hpiext axiom, the resultant equality is for (Type M+1)
axiom hpiext {A A' : TypeM} {B : A -> TypeM} {B' : A' -> TypeM} :
A = A' -> (∀ x x', x == x' -> B x = B' x') -> (∀ x, B x) = (∀ x, B' x)
even if the actual arguments A, A', B, B' "live" in a much smaller universe (e.g., Type).
So, it would be great if we could move the resultant equality back to the right universe.
I don't see how to do it right now.
The other solution would require a major rewrite of the code base.
We would have to support universe level arguments like Agda, and write the axiom hpiext as:
axiom hpiext {l : level} {A A' : (Type l)} {B : A -> (Type l)} {B' : A' -> (Type l)} :
A = A' -> (∀ x x', x == x' -> B x = B' x') -> (∀ x, B x) = (∀ x, B' x)
This is the first instance I found where it is really handy to have this feature.
I think this would be a super clean solution, but it would require a big rewrite in the code base.
Another problem is that the actual semantics that Agda has for this kind of construction is not clear to me.
For instance, sometimes Agda reports that the type of an expression is (Set omega).
An easier-to-implement hack is to support "axiom templates".
We create instances of hpiext "on-demand" for different universe levels.
This is essentially what Coq does, since the universe levels are implicit in Coq.
This is not as clean as the Agda approach, but it is much easier to implement.
A super dirty trick is to include some instances of hpiext for commonly used universes
(e.g., Type and (Type 1)).
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The optimization was incorrect if the term indirectly contained a metavariable.
It could happen if the term contained a free variable that was assigned in the context to a term containing a metavariable.
This commit also adds a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Before this commit, the elaborator was solving constraints of the form
ctx |- (?m x) == (f x)
as
?m <- (fun x : A, f x) where A is the domain of f.
In our kernel, the terms f and (fun x, f x) are not definitionally equal.
So, the solution above is not the only one. Another possible solution is
?m <- f
Depending on the circumstances, we want ?m <- (fun x : A, f x) OR ?m <- f.
For example, when Lean is elaborating the eta-theorem in kernel.lean, the first solution should be used:
?m <- (fun x : A, f x)
When we are elaborating the axiom_of_choice theorem, we need to use the second one:
?m <- f
Of course, we can always provide the parameters explicitly and bypass the elaborator.
However, this goes against the idea that the elaborator can do mechanical steps for us.
This commit addresses this issue by creating a case-split
?m <- (fun x : A, f x)
OR
?m <- f
Another solution is to implement eta-expanded normal forms in the Kernel.
With this change, we were able to clean up the following "hacks" in kernel.lean:
@eps_ax A (nonempty_ex_intro H) P w Hw
@axiom_of_choice A B P H
where we had to explicitly provide the implicit arguments.
This commit also improves the imitation step for Pi-terms that are actually arrows.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
It is not incorrect to use size, but it can easily overflow due to sharing.
The following script demonstrates the problem:
local f = Const("f")
local a = Const("a")
function mk_shared(d)
   if d == 0 then
      return a
   else
      local c = mk_shared(d-1)
      return f(c, c)
   end
end
print(mk_shared(33):size())
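Assuming size() counts one node per application plus the sizes of the arguments, size(mk_shared(d)) satisfies S(0) = 1 and S(d) = 2*S(d-1) + 2, i.e., S(d) = 3*2^d - 2. Hence mk_shared(33) has size around 2.6 * 10^10, which does not fit in a 32-bit counter, even though the term contains only 34 distinct subterms thanks to sharing. The depth of the same term is just 34.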
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
It can now handle (?m t) where t is not a locally bound variable, but ?m and all free variables in t are assigned.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The idea is to support conditional equations where the left-hand-side does not contain all theorem arguments, but the missing arguments can be inferred using type inference.
For example, we will be able to have the eta theorem as a rewrite rule:
theorem eta {A : TypeU} {B : A → TypeU} (f : ∀ x : A, B x) : (λ x : A, f x) = f
:= funext (λ x : A, refl (f x))
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was failing in the following higher-order constraint
ctx |- (?M a) = (?M b)
This constraint has a solution, but the missing condition was making the elaborator reduce this problem to
ctx |- a = b
which does not have a solution.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The method is_proposition was using an optimization that became incorrect after we identified Pi and forall.
It was assuming that any Pi expression is not a proposition.
This is not true anymore. Now, (Pi x : A, B) is a proposition if B is a proposition.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The universe constraint manager is more flexible now.
We don't need to start with a huge universe U >= 512.
We can start small, and increase it on demand.
If module mod1 needs it, it can always add
universe U >= 3
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was not correctly handling constraints of the form
ctx |- ?m << (Pi x : A, B)
and
ctx |- (Pi x : A, B) << ?m
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit affects different modules.
I used the following approach:
1- I store the metavariable environment at unification_failure_justifications. The idea is to capture the set of instantiated metavariables at the time of failure.
2- I added a remove_detail function. It removes propagation steps from the justification tree object. I also remove the backtracking search space associated with higher-order unification. I keep only the search related to case-splits due to coercions and overloads.
3- I use the metavariable environment captured at step 1 when pretty printing the justification of an elaborator_exception.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This modification improves the effectiveness of the process_metavar_inst procedure in the Lean elaborator.
For example, suppose we have the constraint
ctx |- ?M1[inst:0 ?M2] == a
If ?M1 and ?M2 are unassigned, then we have to consider the two possible solutions:
?M1 == a
or
?M1 == #0 and ?M2 == a
On the other hand, if ?M2 is assigned to b, then we can ignore the second case.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
There is a lot to be done. We should do the same for Nat, Int and Real.
We should also clean up the files builtin.cpp and builtin.h.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit also adds several new theorems that are useful for implementing the simplifier.
TODO: perhaps we should remove the declarations at basic_thms.h?
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The idea is to allow users to define their own commands using Lua.
The builtin command Find is now written in Lua.
This commit also fixes a bug in the get_formatter() Lua API.
It also adds String arguments to macros.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
After this commit, in the type checker, when checking convertability, we first compute a normal form without expanding opaque terms.
If the terms are convertible, then we are done, and we have saved a lot of time by not expanding unnecessary definitions.
If they are not, instead of throwing an error, we try again expanding the opaque terms.
This seems to be the best of both worlds.
The opaque flag is a hint for the type checker, but it would never prevent us from type checking a valid term.
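The two-phase strategy can be sketched with a toy model where terms are strings and unfolding a definition is a map lookup (illustrative names; the real type checker also threads contexts and constraints):

#include <iostream>
#include <map>
#include <string>

struct definition { std::string value; bool opaque; };
std::map<std::string, definition> g_defs = {{"a", {"1", true}}};

std::string normalize(std::string const & t, bool unfold_opaque) {
    auto it = g_defs.find(t);
    if (it != g_defs.end() && (unfold_opaque || !it->second.opaque))
        return normalize(it->second.value, unfold_opaque);
    return t;
}

bool is_convertible(std::string const & t1, std::string const & t2) {
    // Phase 1: cheap check that treats opaque definitions as atomic.
    if (normalize(t1, false) == normalize(t2, false))
        return true;
    // Phase 2: only on failure, retry with opaque terms unfolded.
    return normalize(t1, true) == normalize(t2, true);
}

int main() {
    std::cout << is_convertible("a", "1") << "\n"; // 1 (phase 2 succeeds)
    std::cout << is_convertible("a", "a") << "\n"; // 1 (phase 1, no unfolding)
}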
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator produces better proof terms. This is particularly important when we have to prove the remaining holes using tactics.
For example, in one of the tests, the elaborator was producing the sub-expression
(λ x : N, if ((λ x::1 : N, if (P a x x::1) ⊥ ⊤) == (λ x : N, ⊤)) ⊥ ⊤)
After this commit, it produces
(λ x : N, ¬ ∀ x::1 : N, ¬ P a x x::1)
The expressions above are definitionally equal, but the second is easier to work with.
Question: do we really need hidden definitions?
Perhaps, we can use only the opaque flag.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was failing in the following scenario:
- Failing constraint of the form
ctx |- ?m1 =:= ?m2
where
?m2 is assigned to ?m1,
and ?m1 is unassigned.
has_metavar(?m2, ?m1) returns true, and a cycle is incorrectly reported.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The options in the io_state object were being lost.
This commit also includes a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was missing solutions because of the missing condition at is_simple_ho_match.
This commit also adds a new test that exposes the problem.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit also adds a new unit test that demonstrates non-termination due to this kind of constraint.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The "quota" hack used before this commit was inefficient, and too hackish.
This commit uses two lists of constraints: active and delayed.
The delayed constraints are only processed when there are no active constraints.
We use a simple index to quickly find which delayed constraints have assigned metavariables.
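The discipline can be sketched as follows (illustrative data layout; the actual index maps metavariable names to the delayed constraints that mention them):

#include <deque>
#include <iostream>
#include <map>
#include <vector>

// Each delayed constraint records the metavariable it is blocked on;
// a real implementation would index a set of metavariables.
struct constraint { int id; int blocked_on; };

std::deque<constraint> active;
std::vector<constraint> delayed;
std::map<int, bool> assigned; // metavariable name -> assigned?

// Move back to 'active' every delayed constraint whose metavariable
// has been assigned since it was delayed.
void wake_ready() {
    std::vector<constraint> still_delayed;
    for (constraint const & c : delayed) {
        if (assigned[c.blocked_on])
            active.push_back(c);
        else
            still_delayed.push_back(c);
    }
    delayed.swap(still_delayed);
}

int main() {
    delayed.push_back({1, /* blocked on metavar */ 7});
    assigned[7] = true; // processing an active constraint assigned ?M7
    wake_ready();
    std::cout << active.size() << "\n"; // 1: the constraint is active again
}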
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
checkpoint
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The idea is to catch the inconsistency in constraints such as:
ctx |- ?m[inst:0 v] == fun x, ?m a x
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The modifications started at commit 1852c86948 made a big difference. For example, before these changes, the test tests/lean/implicit7.lean generated complicated constraints such as:
[x : Type; a : ?M::29[inst:1 ?M::0[lift:0:1]] x] ⊢ Pi B : Type, (Pi _ : x, (Pi _ : (?M::35[inst:0 #0, inst:1 #2, inst:2 #4, inst:3 #6, inst:5 #5, inst:6 #7, inst:7 #9, inst:9 #9, inst:10 #11, inst:13 ?M::0[lift:0:13]] x a B _), (?M::36[inst:1 #1, inst:2 #3, inst:3 #5, inst:4 #7, inst:6 #6, inst:7 #8, inst:8 #10, inst:10 #10, inst:11 #12, inst:14 ?M::0[lift:0:14]] x a B _ _))) ≈
?M::22 x a
After the changes, only very simple constraints are generated. The most complicated one is:
[] ⊢ Pi a : ?M::0, (Pi B : Type, (Pi _ : ?M::0, (Pi _ : B, ?M::0))) ≈ Pi x : ?M::17, ?M::18
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit also simplifies the method check_pi in the type_checker and type_inferer.
It also fixes process_meta_app in the elaborator.
The problem was in the methods process_meta_app and process_meta_inst.
They were processing convertability constraints as equality constraints.
For example, process_meta_app would handle
ctx |- Type << ?f b
as
ctx |- Type =:= ?f b
This is not correct because a ?f that returns (Type U) for b satisfies the first but not the second.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This modification was motivated by a bug exposed by tst17 at tests/kernel/type_checker.
metavar_env is now a smart pointer to metavar_env_cell.
ro_metavar_env is a read-only smart pointer. It is useful to make sure we are using proof_state correctly.
This commit also adds an example showing that the approach for caching metavar_env in the type_checker is broken.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The environment object is a "smart-pointer".
Before this commit, the use of "const &" for environment objects was broken.
For example, suppose we have a function f that should not modify the input environment.
Before this commit, its signature would be
void f(environment const & env)
This is broken: f's implementation can easily convert it to a read-write pointer by using
the copy constructor.
environment rw_env(env);
Now, f can use rw_env to update env.
To fix this issue, we now have ro_environment. It is a shared *const* pointer.
We can convert an environment into a ro_environment, but not the other way around.
ro_environment can also be seen as a form of documentation.
For example, now it is clear that type_inferer is not updating the environment, since its constructor takes a ro_environment.
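The idea, in a minimal sketch with illustrative types (the actual classes have many more members):

#include <memory>

struct environment_cell { int num_objects = 0; };

// environment: read-write handle (shared pointer to the cell).
struct environment {
    std::shared_ptr<environment_cell> m_ptr =
        std::make_shared<environment_cell>();
};

// ro_environment: shared pointer to a *const* cell. The conversion is
// one-way: environment -> ro_environment only.
struct ro_environment {
    std::shared_ptr<environment_cell const> m_ptr;
    ro_environment(environment const & env) : m_ptr(env.m_ptr) {}
};

void f(ro_environment env) {
    // env.m_ptr->num_objects = 1; // does not compile: the cell is const
}

int main() {
    environment env;
    f(env); // the signature now documents that f cannot update env
}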
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Lean was spending 17% of the runtime "throwing exceptions" in the test tests/lean/implicit7.lean.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Synthesizer is not part of the elaborator anymore.
The elaborator fills the "easy" holes.
The remaining holes are filled using different techniques (e.g., tactic framework) that are independent of the elaborator.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit allows us to build Lean without the pthread dependency.
It is also useful if we want to implement multi-threading on top of Boost.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
After this commit, a value of type 'expr' cannot be a reference to nullptr.
This commit also fixes several bugs due to the use of 'null' expressions.
TODO: do the same for kernel objects, sexprs, etc.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
I also reduced the stack size to 8 MB in the tests at tests/lean and tests/lean/slow. The idea is to simulate stack-overflow conditions.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
We flatten applications: (g a b) is actually ((g a) b).
So, we must be able to unify (?f ?x) with (g a b).
Solution:
?f <- (g a)
?x <- b
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit fixes a problem exposed by t13.lean.
It has a theorem of the form:
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 := (show A by auto),
lemma2 := (show B by auto)
in (show B /\ A by auto)
When to_goal creates a goal for the metavariable associated with (show B /\ A by auto), it receives a context and a proposition of the form
[ A : Bool, B : Bool, assumption : A /\ B, lemma1 := Conjunct1 assumption, lemma2 := Conjunct2 assumption ] |- B /\ A
The context_entries "lemma1 := Conjunct1 assumption" and "lemma2 := Conjunct2 assumption" do not have a domain (aka type).
Before this commit, to_goal would simply replace any references to "lemma1" and "lemma2" in "B /\ A" with their definitions.
Note that "B /\ A" does not contain references to "lemma1" and "lemma2". Thus, the following goal is created:
A : Bool, B : Bool, assumption : A /\ B |- B /\ A
That is, the lemmas are not available when solving B /\ A.
Thus, the tactic auto produced the following (weird) proof for T1, where the lemmas are computed but not used.
Theorem T1 (A B : Bool) (assumption : A ∧ B) : B ∧ A :=
let lemma1 := Conjunct1 assumption,
lemma2 := Conjunct2 assumption
in Conj (Conjunct2 assumption) (Conjunct1 assumption)
This commit fixes that: it computes the types of "Conjunct1 assumption" and "Conjunct2 assumption", and creates the goal
A : Bool, B : Bool, assumption : A /\ B, lemma1 : A, lemma2 : B |- B /\ A
After this commit, the proof for theorem T1 is
Theorem T1 (A B : Bool) (assumption : A ∧ B) : B ∧ A :=
let lemma1 := Conjunct1 assumption,
lemma2 := Conjunct2 assumption
in Conj lemma2 lemma1
as expected.
Finally, this example suggests that the encoding
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 : A := (by auto),
lemma2 : B := (by auto)
in (show B /\ A by auto)
is more efficient than
Theorem T1 (A B : Bool) : A /\ B -> B /\ A :=
fun assumption : A /\ B,
let lemma1 := (show A by auto),
lemma2 := (show B by auto)
in (show B /\ A by auto)
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The type checker (and type inferer) were not correctly handling Pi expressions whose type universe cannot be established due to the occurrence of metavariables. In this case, a max-constraint is created. The problem is that the domain and body of the Pi are in different contexts. The constraint generated before this commit was incorrect: it could contain a free variable. This commit fixes the issue by using the context of the body, and lifting the free variables in the domain by 1.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This commit improves the condition for showing that an equality (and convertability) constraint cannot be solved. A nice consequence is that Lean produces nicer error messages. For example, the error message for the unit test elab1.lean is more informative.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
operator bool() may produce unwanted conversions.
For example, we had the following bug in the code base.
...
object const & obj = find_object(const_name(n));
if (obj && obj.is_builtin() && obj.get_name() == n)
...
obj.get_name() has type lean::name
n has type lean::expr
Both have 'operator bool()', so the compiler uses the operator to
convert them to Boolean, and then compares the results.
Of course, this is not our intention.
After this commit, the compiler correctly signals the error.
The correct code is
...
object const & obj = find_object(const_name(n));
if (obj && obj.is_builtin() && obj.get_name() == const_name(n))
...
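One way to obtain this behavior, shown here only as an illustration and not necessarily the actual fix, is to make the conversion explicit: it still applies in boolean contexts such as conditions, but no longer fires inside a comparison.

// Toy types standing in for lean::name and lean::expr.
struct name {
    explicit operator bool() const { return true; }
};
struct expr {
    explicit operator bool() const { return true; }
};

int main() {
    name n; expr e;
    if (n && e) { } // ok: explicit conversions still apply in conditions
    // n == e;      // error: no operator== and no implicit bool conversion
}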
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Recursive functions that may go very deep should invoke the function check_stack. It throws an exception when the amount of remaining stack space is low.
The function check_system() is syntactic sugar for
check_interrupted();
check_stack();
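One possible implementation sketch for check_stack (platform-specific in practice, and only an illustration): record an address near the bottom of the stack when the thread starts, and estimate consumption from the address of a local variable.

#include <cstddef>
#include <stdexcept>

static char const * g_stack_base = nullptr;
static std::size_t const g_max_stack = 7 * 1024 * 1024; // headroom below 8 MB

void check_stack() {
    char local = 0;
    // On common platforms the stack grows downward, so the distance
    // from the recorded base approximates the space consumed so far.
    if (g_stack_base != nullptr &&
        static_cast<std::size_t>(g_stack_base - &local) > g_max_stack)
        throw std::runtime_error("deep recursion detected");
}

int main() {
    char base = 0;
    g_stack_base = &base; // set once, near the bottom of main
    check_stack();        // deep recursive code calls this periodically
}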
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Proof/Cex builders and tactics implemented in Lua had a "strong reference" to script_state. If they are stored in the Lua state, then we get a cyclic reference.
That is, script_state points to these objects, and they point back to script_state.
To avoid this memory leak, this commit defines a weak reference for script_state objects. The Proof/Cex builders and tactics now store a weak reference to the Lua state.
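In std::shared_ptr/std::weak_ptr terms, the scheme looks like this (a sketch; the actual classes manage a lua_State and a mutex):

#include <memory>

struct script_state_cell { /* lua_State *, mutex, ... */ };

// A tactic stored inside the Lua state must not own the state, or we
// get a cycle: state -> tactic -> state. A weak reference breaks it.
struct lua_tactic {
    std::weak_ptr<script_state_cell> m_state;
    void run() {
        if (std::shared_ptr<script_state_cell> s = m_state.lock()) {
            // the state is still alive: safe to call back into Lua
        }
    }
};

int main() {
    auto state = std::make_shared<script_state_cell>();
    lua_tactic tac{state};
    tac.run();
    // when 'state' dies, tac does not keep the Lua state alive
}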
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
rewrite_* functions take the rewriting results of the sub-components and
construct the rewriting result for the main component.
For instance, the rewrite_app function takes env, ctx, and the value v s.t.
v = (e_0 e_1 ... e_n)
and the rewriting results for e_i's as a vector(buffer)
(e'_0, pf_0 -- proof of e_0 = e'_0)
(e'_1, pf_1 -- proof of e_1 = e'_1)
...
(e'_n, pf_n -- proof of e_n = e'_n).
Then the rewrite_app function constructs the new v'
v' = (e'_0 e'_1 ... e'_n)
and the proof of v = v', which is constructed from the pf_i's.
These functions are used in the component rewriters such as app_RW and
let_type_RW, as well as more complicated rewriters such as depth
rewriter.
For example, after this commit, we can write
simple_tac = REPEAT(ORELSE(imp_tactic, conj_tactic)) .. assumption_tactic
instead of
simple_tac = REPEAT(ORELSE(imp_tactic(), conj_tactic())) .. assumption_tactic()
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Before this commit, the elaborator would only assign ?M <- P if P was normalized. This is bad since normalization may "destroy" the structure of P.
For example, consider the constraint
[a : Bool; b : Bool; c : Bool] ⊢ ?M::1 ≺ implies a (implies b (and a b))
Before this change, ?M::1 would not be assigned to the "implies-term" because the "implies-term" is not normalized yet.
So, the elaborator would continue to process the constraint, and convert it into:
[a : Bool; b : Bool; c : Bool] ⊢ ?M::1 ≺ if Bool a (if Bool b (if Bool (if Bool a (if Bool b false true) true) false true) true) true
Now, ?M::1 is assigned to the term
if Bool a (if Bool b (if Bool (if Bool a (if Bool b false true) true) false true) true) true
This is bad, since the original structure was lost.
This commit also contains an example that only works after the commit is applied.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This is very important when several Lua tactics are implemented in the
same Lua State object. In this case, even if we use the par
combinator, a Lua tactic will block the other Lua tactics running in
the same Lua State object.
With this commit, a Lua tactic can use yield to allow other tactics
in the same State object to execute.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The following call sequence is possible:
C++ -> Lua -> C++ -> Lua -> C++
The first block of C++ is the Lean main function.
The main function invokes the Lua interpreter.
The Lua interpreter invokes a C++ Lean API.
Then the Lean API invokes a callback implemented in Lua.
The Lua callback invokes another Lean API.
Now, suppose the Lean API throws an exception.
We want the C++ exception to propagate over the mixed C++/Lua call stack.
We use the clone/rethrow exception idiom to achieve this goal.
Before this commit, the C++ exceptions were converted into strings
using the method what(), and then they were propagated over the Lua
stack using lua_error. A lua_error was then converted into a lua_exception when going back to C++.
This solution was very unsatisfactory, since all C++ exceptions were being converted into a lua_exception, and consequently the structure of the exception was being lost.
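A minimal sketch of the idiom (simplified class names; the real hierarchy is larger):

#include <memory>
#include <exception>

// Before crossing into the Lua C API, the exception is cloned to the
// heap; once we are back in C++, rethrow() restores the original
// dynamic type, instead of flattening everything into one
// lua_exception built from what().
class exception : public std::exception {
public:
    virtual exception * clone() const { return new exception(*this); }
    [[noreturn]] virtual void rethrow() const { throw *this; }
};

class elaborator_exception : public exception {
public:
    exception * clone() const override { return new elaborator_exception(*this); }
    [[noreturn]] void rethrow() const override { throw *this; }
};

int main() {
    std::unique_ptr<exception> saved;
    try {
        throw elaborator_exception();
    } catch (exception const & ex) {
        saved.reset(ex.clone()); // store across the C stack frames
    }
    try {
        saved->rethrow();        // back in C++: original type restored
    } catch (elaborator_exception const &) {
        return 0;                // caught with its precise type
    }
}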
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
copy_values is not a big if-then-else anymore.
Before this change, whenever we added a new kind of userdata, we would have to update copy_values.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The directory bindings/lua was getting too big and had too many dependencies.
Moreover, it was getting too painful to edit/maintain two different places.
Now, the bindings for module X are in the directory that defines X.
For example, the bindings for util/name are located at util/name.cpp.
The only exception is the kernel. We do not want to inflate the kernel
with Lua bindings. The bindings for the kernel classes are located
at bindings/kernel_bindings.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Now, it produces the following outcomes:
1- A proof
2- A counterexample
3- A list of (unsolved) final states
Remark: the solve method does not check whether the proof or counterexample is correct.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The idea is to make it clear that io_state is distinct from proof_state and from leanlua_state.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The main motivation is to remove the dependency frontends/lean <-- bindings/lua.
This dependency is undesirable because we want to expose the frontends/lean parser and pretty printer objects at bindings/lua.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The new hash code has the property that given expr_cell * c1 and expr_cell * c2,
if c1 != c2 then there is a high probability that c1->hash_alloc() != c2->hash_alloc().
The structural hash code hash() does not have this property because we may have
c1 != c2, but c1 and c2 are structurally equal.
The new hash code is only compatible with pointer equality.
By compatible we mean, if c1 == c2, then c1->hash_alloc() == c2->hash_alloc().
This property is obvious because hash_alloc() does not have side-effects.
The test tests/lua/big.lua exposes the problem fixed by this commit.
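A sketch of the scheme, assuming a plain allocation counter (the actual implementation may mix the counter through a hash function):

#include <atomic>
#include <iostream>

// Every cell draws a value from a counter when it is created, so
// distinct cells hash differently with high probability, while
// structurally equal but distinct cells collide under the structural
// hash().
static std::atomic<unsigned> g_next_hash{0};

struct expr_cell {
    unsigned m_hash_alloc;
    expr_cell() : m_hash_alloc(g_next_hash++) {}
    unsigned hash_alloc() const { return m_hash_alloc; } // no side effects
};

int main() {
    expr_cell c1, c2; // think: two distinct cells for the same term 'a'
    std::cout << (c1.hash_alloc() != c2.hash_alloc()) << "\n"; // 1
}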
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Instead of having m_interrupted flags in several components, we use a thread_local global variable.
The new approach is much simpler to get right since there is no risk of "forgetting" to propagate
the set_interrupt method to sub-components.
The plan is to support set_interrupt methods and m_interrupted flags only in tactic objects.
We need to support them in tactics and tacticals because we want to implement combinators/tacticals such as (try_for T M) that fails if tactic T does not finish in M ms.
For example, consider the tactic:
try_for (T1 ORELSE T2) 5
It tries the tactic (T1 ORELSE T2) for 5ms.
Thus, if T1 does not finish after 5ms, an interrupt request is sent, and T1 is interrupted.
Now, if we do not have an m_interrupted flag marking each tactic, the ORELSE combinator will try T2.
The set_interrupt method for ORELSE tactical should turn on the m_interrupted flag.
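A sketch of the thread_local scheme (illustrative; in practice the requesting thread writes through a pointer registered by the target thread, since thread_local variables are per-thread):

#include <stdexcept>

// One flag per thread replaces per-component m_interrupted flags, so
// nothing has to forward set_interrupt to its sub-components.
thread_local bool g_interrupted = false;

struct interrupted : public std::exception {};

void check_interrupted() {
    if (g_interrupted)
        throw interrupted();
}

void request_interrupt(bool * flag) { *flag = true; }

int main() {
    request_interrupt(&g_interrupted); // normally done by another thread
    try {
        check_interrupted(); // long-running code calls this periodically
        return 1;
    } catch (interrupted const &) {
        return 0;
    }
}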
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
For example, this feature is useful when displaying the integer value 10 with coercions enabled. In this case, we want to display "nat_to_int 10" instead of "10".
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
It is incorrect to apply substitutions during normalization.
The problem is that we do not have support for tracking justifications in the normalizer. So, substitutions were being silently applied during normalization. Thus, the correctness of the conflict resolution in the elaboration was being affected.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
We need this when we normalize the assignments in a metavariable environment.
That is, we replace the metavariables occurring in a substitution with their own assignments.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The elaborator was not handling max constraints where one of the arguments was Bool. Example:
ctx |- max(Bool, Type) == ?M
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
We may miss solutions, but the solutions found are much more readable.
For example, without this option, for the elaboration problem
Theorem Example4 (a b c d e : N) (H: (a = b ∧ b = e ∧ b = c) ∨ (a = d ∧ d = c)) : (h a c) = (h c a) :=
DisjCases H
(fun H1 : _,
let AeqC := Trans (Conjunct1 H1) (Conjunct2 (Conjunct2 H1))
in CongrH AeqC (Symm AeqC))
(fun H1 : _,
let AeqC := Trans (Conjunct1 H1) (Conjunct2 H1)
in CongrH AeqC (Symm AeqC))
the elaborator generates
Theorem Example4 (a b c d e : N) (H : a = b ∧ b = e ∧ b = c ∨ a = d ∧ d = c) : (h a c) = (h c a) :=
DisjCases
H
(λ H1 : if
Bool
(if Bool (a = b) (if Bool (if Bool (if Bool (b = e) (if Bool (b = c) ⊥ ⊤) ⊤) ⊥ ⊤) ⊥ ⊤) ⊤)
⊥
⊤,
let AeqC := Trans (Conjunct1 H1) (Conjunct2 (Conjunct2 H1)) in CongrH AeqC (Symm AeqC))
(λ H1 : if Bool (if Bool (a = d) (if Bool (d = c) ⊥ ⊤) ⊤) ⊥ ⊤,
let AeqC := Trans (Conjunct1 H1) (Conjunct2 H1) in CongrH AeqC (Symm AeqC))
The solution is correct, but it is not very readable. The problem is that the elaborator expands the definitions of \/ and /\.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Motivations:
- We have been writing several comments of the form "... trace/justification..." and "this trace object justify ...".
- Avoid confusion with util/trace.h
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
This normalization rule is not really a computational rule.
It is essentially encoding the reflexivity axiom as computation.
It can also be abused. For example, with this rule,
the following definition is valid:
Theorem Th : a = a := Refl b
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
The printer and formatter objects are not trusted code.
We moved them to the kernel to be able to provide them as an argument to the trace objects.
Another motivation is to eliminate the kernel_exception_formatter hack.
With the formatter in the kernel, we can implement the pretty printer for kernel exceptions as a virtual method.
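An illustration of the design with simplified types (a real formatter produces a format object, not a string; the names below are only for the sketch):

#include <exception>
#include <iostream>
#include <string>

struct formatter {
    std::string operator()(std::string const & s) const { return s; }
};

// Each kernel exception subclass overrides pp() instead of relying on
// an external kernel_exception_formatter hook.
class kernel_exception : public std::exception {
public:
    virtual std::string pp(formatter const & fmt) const {
        return fmt("kernel exception");
    }
};

class unknown_object_exception : public kernel_exception {
    std::string m_name;
public:
    explicit unknown_object_exception(std::string n) : m_name(std::move(n)) {}
    std::string pp(formatter const & fmt) const override {
        return fmt("unknown object '" + m_name + "'");
    }
};

int main() {
    formatter fmt;
    unknown_object_exception ex("foo");
    std::cout << ex.pp(fmt) << "\n"; // unknown object 'foo'
}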
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
Trace objects will be used to justify steps performed by engines such as the elaborator. We use them to implement non-chronological backtracking in the elaborator. They are also used to explain to the user why something did not work.
The unification constraints are in the kernel because the type checker may create them when type checking a term containing metavariables.
Remark: a minimalistic kernel does not need to include metavariables, unification constraints, nor trace objects. We include these objects in our kernel to minimize code duplication.
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>
- Use hierarchical names instead of unsigned integers to identify metavariables.
- Associate type with metavariable.
- Replace metavar_env with substitution.
- Rename meta_ctx --> local_ctx
- Rename meta_entry --> local_entry
- Disable old elaborator
- Rename unification_problems to unification_constraints
- Add metavar_generator
- Fix metavar unit tests
- Modify type checker to use metavar_generator
- Fix placeholder module
Signed-off-by: Leonardo de Moura <leonardo@microsoft.com>