
TreeLook and transitive reduction

The paper by Heule et al. about hyper-binary resolution using intree-based lookahead is pretty funky. The idea is actually quite simple (and as usual, not exactly trivial to come up with): we re-use past propagations by reversing the order in which literals are normally enqueued.

A simple example

First, a queue is built that starts with a leaf literal and follows it up through binary clauses for as long as it can. It then backtracks (adding a special * element to the queue) and continues. The point of the queue is to fix the order in which literals are dequeued, so that they are probed in reverse propagation order and past propagations can be re-used. Obviously, there are many different orders in which this queue can be built, and I wouldn’t be surprised if there are some nice heuristics one could use. Let’s just assume we have such a queue.

For example, if y leads to x, then an example queue will have x as its first element, followed by y. So we first enqueue x, propagate, and then we enqueue y. If x already fails, there is no point in enqueuing y (y fails along with x). If both y and z lead to x but only z fails, then we don’t have to perform the propagations done by x twice: we enqueue x, propagate, create a new decision level, enqueue y, propagate (nothing fails), backtrack one decision level, enqueue z, and now we fail. Notice that we didn’t have to propagate x twice even though we probed two literals (y and z) that both entailed x.
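
To make the bookkeeping concrete, here is a minimal sketch of how such a queue could be consumed. The Solver interface and all names below are made up for illustration; this is not the actual CryptoMiniSat code.

// Hypothetical sketch of consuming the intree queue (all names made up)
struct QueueElem {
    bool isStar;   // the '*' backtrack marker
    Lit  lit;      // literal to probe (unused if isStar is true)
};

void intree_probe(Solver& solver, const vector<QueueElem>& queue) {
    for (const QueueElem& e : queue) {
        if (e.isStar) {
            solver.backtrack_one_level();  // undo only the deepest literal
            continue;
        }
        solver.new_decision_level();
        solver.enqueue(e.lit);
        // Propagations of ancestors (e.g. x) stay on the trail, so they are
        // never repeated for the descendants (e.g. y and z) probed later.
        if (!solver.propagate()) {
            // e.lit is a failed literal; see "Failing mid-way" below
        }
    }
}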

Failing mid-way

The paper mentions failed literals that fail mid-way while dequeueing elements. We obviously cannot simply enqueue these literals at the current decision level, as they would be unset the next time we backtrack. So they have to be kept in an array and set later, when we are back at decision level 0. Further, once we are in a failed state, anything dequeued at the same or a deeper level (i.e. until we backtrack above the level of the failure) also fails, so we need to keep an indicator of failure for these literals.
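
A rough sketch of this bookkeeping, again with made-up names:

// Hypothetical sketch: literals that fail mid-way are remembered and only
// enqueued once we are back at decision level 0.
vector<Lit> toEnqueueAtLevel0;
bool inFailedState = false;  // cleared once we backtrack above the failing level

// ... while dequeuing element e ...
if (inFailedState) {
    toEnqueueAtLevel0.push_back(~e.lit);      // fails along with the earlier failure
} else if (!solver.enqueue_and_propagate(e.lit)) {
    toEnqueueAtLevel0.push_back(~e.lit);      // failed literal found mid-way
    inFailedState = true;
}

// ... after the whole queue has been processed, back at decision level 0 ...
for (const Lit l : toEnqueueAtLevel0) solver.enqueue(l);
solver.propagate();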

Keeping reasons updated

Let’s suppose we enqueued x and propagated it. Next is y. We enqueue y… but we need to know the reason why x got set. The reason is of course the binary clause that we examined when we built the queue: (x, ~y). The reason needs to be set because we will be jumping backwards through the implication graph to the deepest common ancestor in order to attach the new hyper-binary clause there. When jumping back, we might need to go back all the way to y, through x. In order to perform transitive reduction (as explained later), we also need to know whether the binary clause (x, ~y) was redundant or irredundant. This information needs to be stored in the queue, and every time we dequeue a new literal y, the reason of the previously enqueued literal must be set to the inverse of the currently dequeued literal, i.e. ~y.
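
In code this could look roughly as follows; the extra queue fields and the Solver calls are again hypothetical:

// Hypothetical sketch: each queue element also remembers which already
// enqueued literal it implies through a binary clause, and whether that
// clause is redundant, so the parent's reason can be updated on dequeue.
struct QueueElem {
    Lit  lit;        // literal to enqueue now (e.g. y)
    Lit  parent;     // previously enqueued literal it implies (e.g. x)
    bool redundant;  // was the binary clause (parent, ~lit) redundant?
    // ... plus the isStar marker from the earlier sketch
};

void dequeue_and_propagate(Solver& solver, const QueueElem& e) {
    // x is set "because of" the binary clause (x, ~y): record ~y as its
    // reason so that jumping back through the implication graph works
    solver.set_binary_reason(e.parent, ~e.lit, e.redundant);
    solver.new_decision_level();
    solver.enqueue(e.lit);
    solver.propagate();
}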

Transitive reduction

Updating reasons becomes a real problem if we wish to perform transitive reduction. Transitive reduction removes binary clauses that are useless from a binary implication graph reachability perspective. However, if it removes a binary clause that is later used by the queue to update a reason, we have a problem: we may set a literal's reason to a binary clause that no longer exists, because it has been replaced (reachability-wise) by a chain of other binary clauses. Later transitive reductions will then assume that this binary clause exists (it doesn't) and may make further reductions that are incorrect. In particular, they might remove an element of the chain itself, kind of like biting our own tail.

There seem to be a couple of options to fix the problem:

  1. Not to perform transitive reduction at all. This may have been the intention of the designers, as the BCP_NHBR function does not perform transitive reduction.
  2. Update the queue to reflect the changed set of binary clauses. Unfortunately, this would be very expensive and thus basically not doable in a reasonable amount of time, as far as I can tell.
  3. Never remove binary clauses that are used by the queue. This means we need to mark such clauses and then check for markings when removing binary clauses. This is the implementation I chose (a small sketch follows this list). We could unmark a clause as soon as the corresponding element has been dequeued, making it possible to remove it later; in CryptoMiniSat I simply unmark all binary clauses at the end, which is faster.
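
A minimal sketch of option 3, with hypothetical names (the real CryptoMiniSat code is organised differently):

// Hypothetical sketch of option 3: each queue element is assumed to hold a
// pointer to the binary clause that put it into the queue. Before probing,
// all such clauses are marked; transitive reduction skips marked clauses.
for (const QueueElem& e : queue) {
    e.binClause->setMarked(true);    // the queue still needs this clause
}

bool removable_by_transitive_reduction(const BinClause& cl) {
    return !cl.isMarked();           // never remove a clause the queue needs
}

// Once an element has been dequeued its clause could be unmarked right away;
// CryptoMiniSat instead unmarks all binary clauses in one sweep at the end.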

Conclusions

I remember some people always asking me why I haven’t yet implemented intree-based probing. It is much faster than normal probing. However, it’s not perfect. First, it cannot be used to perform a fast depth-first walk of the tree, so stamping is not really possible while doing it: always updating closing times for already dequeued elements seems to defeat the purpose of the whole idea (i.e. reversing the propagation order). Second, I haven’t yet found a way to efficiently perform Stalmarck while doing intree probing. Third, it’s not exactly trivial to implement, as explained above.


Certified UNSAT and CryptoMiniSat

Marijn Heule kindly sent me an email on the 10th of April about DRUP, the new system used this year in the SAT Competition’13 for the UNSAT track. He encouraged me to implement DRUP, and he personally implemented it into MiniSat, which was a very helpful lead for me. In this post I will talk about my experiences implementing DRUP in CryptoMiniSat within a span of 3 days.

Implementation complexity

It took only 3 days and about 1400 lines of code to implement DRUP:

git diff f27c74bbd  c0b6ccc10 --shortstat
 25 files changed, 1372 insertions(+), 307 deletions(-)

It turns out that the biggest problem is that whenever I shorten a clause, I first have to add the shortened version, and then delete the old one. Since I always do in-place literal deletion, this means I have to save the old clause into a temporary place, add the new one and finally delete the old one that has been saved. I will eventually write a C++ wrapper that does this for me, but currently, it’s a lot of

vector<Lit> origCl(cl.size());
std::copy(cl.begin(), cl.end(), origCl.begin());
[blah...]
drup << cl << " 0" << endl;
drup << "d " << origCl << " 0" << endl;

So, the code is a bit messy. Other than this, the implementation went very smoothly. The biggest pain was not forgetting to add every changed clause to the DRUP output: since I have implicit binary and tertiary clauses and I manipulate them in-place, they are changed in quite complicated code paths.
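
For what it's worth, the wrapper mentioned above could be as simple as an RAII guard. The name and details below are made up; this is not what eventually went into CryptoMiniSat:

// Hypothetical RAII wrapper: copy the clause on construction, emit the
// add + delete pair to the DRUP stream on destruction.
struct DrupShorten {
    DrupShorten(std::ostream& drup, const vector<Lit>& cl) :
        drup(drup), cl(cl), origCl(cl) {}

    ~DrupShorten() {
        drup << cl << " 0" << endl;              // add the shortened clause
        drup << "d " << origCl << " 0" << endl;  // delete the original clause
    }

    std::ostream& drup;
    const vector<Lit>& cl;
    const vector<Lit> origCl;  // copy taken before the in-place shortening
};

// usage: { DrupShorten guard(drup, cl); /* shorten cl in place here */ }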

If you don't have such complicated code paths, you should be able to implement DRUP within a day or less. I encourage you to do so; it's quite fun!

Remaining uncertainties

I am a bit confused about whether some of the optimisations in CryptoMiniSat work correctly with DRUP. I have been fuzzing the DRUP implementation for about 1000 CPU hours, but not with all optimisations turned on. Some are a bit shaky; in particular, XOR and stamping&caching come to mind.

I cannot turn DRUP on for the top-level XOR manipulation, because otherwise I would need to tell DRUP about every Gaussian elimination step. Not funny, and not fast. Well, XOR is not such a big thing, and it is no longer natively implemented in CryptoMiniSat, so it's not a big deal, really.

The other, more troubling one is stamping and implied literal caching. Luckily I have on-the-fly hyper-binary resolution (this is needed for DRUP with Stalmarck, if you think about it), so the binary clauses stored by caching and stamping are there... but they may get deleted by variable elimination, blocked clause elimination and... well, maybe nothing else. Hopefully not. Anyway, I never block binary clauses (does clause blocking ever help? I am confused) and of course I cannot delete eliminated binary clauses from the DRUP output. However... that may make the verification very slow. So, I am at a crossroads here. I think I will submit a version with stamping&caching and one without.

In the end, every optimisation can be turned on except for XOR. I find that exceptionally good given the number of tweaks/hacks used by CryptoMiniSat.

Long-term advantages of having DRUP

I think DRUP allows for a lot of possibilities. Naturally, I first want to draw resolution graphs. There are plenty of libraries for 3D drawing, and I have already ordered the LEAP controller (a 3D controller), which will come in handy for playing with the resolution graphs (zooming in and out, rotating, etc.).

From there, I want to get stats out of the graph and present them next to the stats that I already generate. For example, how many of the deleted clauses get re-learnt later? How many clauses get used in the part of the resolution graph that leads to the empty clause? How often when cleaning with glues? How often when cleaning with activities? For which types of instances?

Linking this with real-world instances by coloring the graph points according to e.g. filter functions in stream ciphers is not very hard and should be quite a lot of fun.

Acknowledgements

I think Marijn Heule deserves a lot of thanks for the work he has put into DRUP (webpage, example, DIFF for MiniSat) and for all the help he has given me. I had some initial doubts about whether it was possible to implement at all, and I had some minor problems with the checker; he always replied kindly and promptly. Thanks!

On hyper-binary resolution

Hyper-binary resolution is actually quite straightforward, or at least appears to be. Let’s take the following example, where the clauses in our database are:

-a V b
-a V c
-b V -c V g
-b V -c V d
-d V g

Let’s set a to true, and see what happens. Immediately, b and c get set to true through binary clauses. If we now propagate g through the clause -b V -c V g, we ought to do hyper-binary resolution straight away, and add the clause -a V g — some call this lazy hyper-binary resolution. Good, one more binary clause!

But then… now we have nothing left to propagate using only binary clauses, so we have to propagate using a long clause, -b V -c V d. As good citizens, we also do (lazy) hyper-binary resolution, coming up with the clause -a V d. Good, one more binary clause! One slight glitch, though… d propagates g through a binary clause. But this means that setting a can propagate to g without -a V g, the first hyper-binary clause we added! So the first hyper-binary clause we added is in fact useless and needs to be removed. If we applied transitive reduction, it would remove the first hyper-binary clause -a V g automatically.
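
For reference, the lazy variant can be sketched roughly like this; the names and the Solver interface are hypothetical, just to illustrate the idea:

// Hypothetical sketch of lazy hyper-binary resolution (names made up).
// When a long clause is about to propagate 'prop' and all its other literals
// are false under the current decision, find the literal dominating all of
// them in the binary implication tree and add (~dominator V prop).
void lazy_hyper_bin(Solver& solver, const Clause& cl, const Lit prop) {
    Lit dom = lit_Undef;
    for (const Lit l : cl) {
        if (l == prop) continue;
        // ~l is currently true; fold it into the common dominator
        dom = (dom == lit_Undef) ? ~l : solver.common_ancestor(dom, ~l);
    }
    // In the example: propagating g through -b V -c V g after setting 'a'
    // gives dom == a, so the binary clause -a V g gets added.
    solver.add_binary_clause(~dom, prop);
}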

Let’s go a bit deeper here. How could we have avoided adding the first hyper-binary clause? The obvious answer is: we should have started with -b V -c V d instead of -b V -c V g. But how easy would it have been to know (i.e. calculate) that starting with that other long clause would have made our work easier? I am not sure it would have been easy to know. And of course the example above is very trivial. It could be made much, much more complicated: g could have been reached from d through any number of hyper-binary resolutions, so a simple binary look-ahead would not have helped.

I am stuck here. Any ideas?

Visiting Linz

Lately I had the pleasure of visiting Linz, Armin Biere’s workplace, where I gave a quick talk on SAT solver architectures. To me, it was really interesting to think through that presentation, not because it was entirely new or exciting, but because it recapped many issues that have been bothering me lately. Namely, that it’s difficult to make a really versatile SAT solver, because the low-level choices that must be made (e.g. the watchlist scheme) determine so many things when making higher-level architectural decisions such as clause sharing, or even something as simple as hyper-binary resolution. As for the latter, thanks to Armin Biere’s thoughts I have finally managed to realise why my hyper-binary resolution was so slow: I recently decided not to fully propagate binary clauses before propagating normal (i.e. longer) clauses, which meant that doing hyper-binary resolution was much slower, as I had to re-calculate the binary graph. Not fully propagating binary clauses before normal clauses also seemed to influence my much higher-level choice of using cached implications, as they (intuitively, and also in practice) help much more if binary clauses are not fully propagated first. This latter influence is interesting to think through, as something this trivial shouldn’t, at least in principle, influence such a high-level decision.

Also thanks to Armin Biere, I have managed to gain a better understanding of lingeling and its superior watchlist scheme. Some of the architectural elements of lingeling’s watchlist scheme are really interesting, and when they get published I will definitely port some of them to CryptoMiniSat. It seems to use much less space and stores information in a more cache-friendly manner, aiding the processor in its job. A related, interesting pointer that I have learnt about is the paper that originally introduced blocking literals, which also talks about a range of other ideas that can help. All in all, it was great to visit Linz and the group of Armin Biere, as I have managed to learn a lot from him and his colleagues.

Hyper-binary resolution: I was wrong, again

I am not perfect, so I make mistakes. One of the more memorable mistakes I have made concerns hyper-binary resolution. More specifically, in this post I wrote that I cannot add as many binary clauses using hyper-binary resolution as Armin Biere, one of the leading SAT solver experts, and I proposed a reason that would have meant that CryptoMiniSat was doing things differently and thus better. Well, I was wrong, on multiple counts.

First of all, there was an awful bug in the code that meant that hyper-binary resolution was not carried out on negated variables. Second, when it was decided that multiple implications need to be attached to a literal, I didn’t check for redundancy. For example, if v1 had to be connected to v2 and v3, I simply added the clauses
-v1 OR v2 (1)
-v1 OR v3 (2)
However, this may not be very efficient, since if there is already a binary clause
-v2 OR v3 (3)
then clause (2) doesn’t need to be added. The added redundant binary clauses reduced the speed of solving while adding no extra knowledge. There were many of them, too: approximately two-thirds of all binary clauses added were redundant.
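
A simple form of the missing check could look like this; the names and the Solver interface are hypothetical, and a one-step check like this only catches direct implications (longer chains would need full transitive reduction):

// Hypothetical sketch: before attaching -v1 OR v3, check whether v3 is
// already implied by v2, which we are connecting to v1 anyway.
bool already_implied(const Solver& solver, const Lit v2, const Lit v3) {
    // walk the binary implications of v2: if -v2 OR v3 exists, then
    // v1 -> v2 -> v3 already holds and clause (2) is redundant
    for (const Lit other : solver.binary_implications(v2)) {
        if (other == v3) return true;
    }
    return false;
}

// ...
solver.add_binary_clause(~v1, v2);              // clause (1)
if (!already_implied(solver, v2, v3)) {
    solver.add_binary_clause(~v1, v3);          // clause (2), only when needed
}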

Hyper-binary resolution is conceptually not too difficult, but takes a lot of thinking to code efficiently, is very time consuming and its benefits are not clear-cut. I believe the problem is that many binary clauses added this way are ultimately useless, since most are connected to variables that will soon be proved to be of a fixed value. Another possibility is that since problems are pretty structured, and it’s usually best to attack problems in a specific way (which is normally correctly guessed by the VSIDS and polarity-guessing&caching heuristics), the binary clauses added by hyper-binary resolution do not help resolving the problem given the (typically correct) attack method employed by SAT solvers. In other words, the information added is nice, but mostly unused. This is just wild speculation, and I may only think this because my code is slow — I believe Armin’s code is faster, so I should have a look.