Can someone comment on the deeper distinctions between this and DeepMind's ILP paper on learning explanatory rules from noisy data? AFAIK that's a special case of SAT?
ILP and SAT are quite different. ILP is much higher level in that it can define new predicates and use recursion, which have no counterparts in SAT. On the other hand, SAT is much more scalable and can be applied to instances orders of magnitude larger. I think ILP's expressiveness and its lack of scalability are related.
Inductive Logic Programming (ILP) is a class of machine learning algorithms that learn logic programs from examples and background knowledge (where the background knowledge is a logic theory, i.e. another logic program, as are the examples).
SAT is the Boolean satisfiability problem: the problem of finding an assignment to the variables of a Boolean formula that makes the formula true.
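To make the definition concrete, here is a toy brute-force sketch (not how real solvers like MiniSat work, which use clause learning): it tries every assignment of a CNF formula, where each clause is a list of DIMACS-style integer literals (`i` means variable i is true, `-i` means it is false).

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment for a CNF formula, or None.

    clauses: list of clauses; each clause is a list of int literals,
    where literal i means "variable i is true" and -i means "false".
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if at least one of its literals holds.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
cnf = [[1, 2], [-1, 2], [-2, 3]]
print(brute_force_sat(cnf, 3))  # → {1: False, 2: True, 3: True}
```

Real solvers avoid the 2^n enumeration, but the input/output contract is the same.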
So the two are not similar and one is not an instance of the other.
The δILP paper describes a differentiable ILP system that learns logic programs with a differentiable logic representation.
Finally, the paper above (SATNet) describes a differentiable satisfiability solver that can be incorporated in a neural network architecture to enable the neural net to solve satisfiability problems and perform reasoning.
Looks like they (SATNet) are not incorporating a real (exact) SAT solver; their differentiable "SAT solver" is a MAXSAT relaxation, which approximates the maximum number of clauses that can be made true.
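For reference, the exact (non-differentiable) MAXSAT objective that SATNet relaxes can be sketched by brute force, using the same clause encoding as above. This is just to pin down the objective; SATNet itself solves a smooth semidefinite relaxation of it, not this enumeration.

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Return the maximum number of clauses any single assignment
    can satisfy (the MAXSAT optimum).

    clauses: list of clauses in DIMACS-style integer literals
    (i = variable i true, -i = variable i false).
    """
    best = 0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        satisfied = sum(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses)
        best = max(best, satisfied)
    return best

# Unsatisfiable formula: (x1) AND (NOT x1) AND (x1 OR x2)
# No assignment satisfies all 3 clauses, but 2 are always reachable.
print(max_sat([[1], [-1], [1, 2]], 2))  # → 2
```

The point of the relaxation is that this discrete max becomes a continuous optimization whose solution can be backpropagated through.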
I meant that ASP is a higher-level "interface" to SAT, sorry for the confusion. Work done at Imperial by Law and Evans on ILP compiles down to SAT solving via ASP. IDK if δILP goes to SAT.
In some senses we are past the halting problem once we accept that we will sometimes be wrong or unable to answer. Insofar as it has anything to do with the halting problem, this seems like just another way to do that. Might still be useful.