CryptoMiniSat 5.8.0 Released

After many months of work, CryptoMiniSat 5.8.0 has been released. In this post I’ll go through the most important changes and how they helped the solver become faster and win a few awards, among them 1st place in the SAT Incremental track, 3rd place in the SAT Main track, and 2nd & 3rd place in the SMT BitVector tracks together with the STP and MinkeyRink solvers.

Gauss-Jordan Elimination

First and foremost, Gauss-Jordan elimination at all levels of the search is now enabled by default. This is thanks to the work detailed in the CAV 2020 paper (video here). The gist of the paper is that we take advantage of the bit-packed matrix and some clever bit field filters to quickly check whether an XOR constraint is propagating, conflicting, or neither. This, together with a variety of other improvements, leads to a roughly 3-10x speedup of the Gauss-Jordan elimination procedure.
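
To make the idea concrete, here is a minimal sketch of such a status check, assuming each matrix row and the assignment are bit-packed into 64-bit words. All names are illustrative, and the real CryptoMiniSat code (with its watch-based triggering) is considerably more involved; this only shows why the check is cheap.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Outcomes when a bit-packed XOR row is checked against the current
    // partial assignment.
    enum class XorStatus { Nothing, Propagating, Conflicting, Satisfied };

    // One matrix row: bit i of 'vars' is set iff variable i occurs in the
    // XOR; 'rhs' is the constant the XOR must equal.
    struct XorRow {
        std::vector<uint64_t> vars;
        bool rhs;
    };

    // The assignment, bit-packed the same way: 'assigned' marks set
    // variables, 'values' holds their truth values (meaningful only where
    // assigned). Both are assumed at least as long as the row.
    struct Assignment {
        std::vector<uint64_t> assigned;
        std::vector<uint64_t> values;
    };

    XorStatus check_row(const XorRow& row, const Assignment& a) {
        int unassigned = 0;
        bool parity = row.rhs;
        for (size_t w = 0; w < row.vars.size(); w++) {
            const uint64_t occ = row.vars[w];
            // Cheap filter: two or more unassigned variables means the row
            // can neither propagate nor conflict, so stop immediately.
            // (__builtin_popcountll is a GCC/Clang builtin.)
            unassigned += __builtin_popcountll(occ & ~a.assigned[w]);
            if (unassigned >= 2) return XorStatus::Nothing;
            // Fold in the parity of this word's assigned-to-TRUE variables.
            parity ^= __builtin_popcountll(occ & a.assigned[w] & a.values[w]) & 1;
        }
        if (unassigned == 1) return XorStatus::Propagating; // last var forced
        return parity ? XorStatus::Conflicting : XorStatus::Satisfied;
    }

The early exit once two unassigned variables are seen is the kind of cheap bit-level filter the paper relies on: the full parity only has to be computed when the row can actually propagate or conflict.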

With this speedup, the overhead is quite small, and we now enable Gauss-Jordan elimination at all times. However, there are still limits on the size and the number of matrices, and we disable the procedure if it doesn’t seem to improve performance.

As a bit of reflection: our original paper on CryptoMiniSat with Nohl and Castelluccia, featuring Gauss-Jordan elimination at all levels of the search tree, was published at SAT 2009. It took about 11 years of work, and in particular the work of Han and Jiang, to get to this point, but we finally arrived. The difference is day and night.

Target Phases

This one is really cool. It’s in CaDiCaL (direct code link here) by Armin Biere, with a description here (on page 8). If you look at the SAT Race of 2019, you will see that CaDiCaL solved a lot more satisfiable problems than any other solver. If you dig deep enough, you’ll see it’s because of target phases.

Basically, target phases are a variation of phase saving: instead of saving the phase every time the solver backtracks, the phase is only saved when backtracking from a depth deeper than anything seen before. Furthermore, it does more than just this: sometimes it picks only the TRUE phase, and sometimes only the FALSE phase. To spice it up, you can keep a “local deepest” and a “global deepest” if you like, and even pick inverted phases.

It’s pretty self-explanatory if you read this code (basically, just switching between normal, target, inverted, fixed FALSE, and fixed TRUE phases), and it helps tremendously. If you look at the graphs of the SAT 2020 competition results (slide no. 19 here), you will see a bunch of solvers way ahead of the rest. That’s target phases right there.
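
As a sketch of the mechanism, assuming a solver that stores a phase per variable, the following keeps classic saved phases alongside target phases and cycles through the different policies, e.g. at restarts. This is only an illustration of the idea described above, not CaDiCaL’s or CryptoMiniSat’s actual code.

    #include <cstddef>
    #include <vector>

    // Phase policies cycled between restarts.
    enum class PhaseMode { Saved, Target, Inverted, AlwaysFalse, AlwaysTrue };

    struct PhaseState {
        std::vector<bool> saved;   // classic phase saving, updated on backtrack
        std::vector<bool> target;  // phases at the deepest trail seen so far
        size_t max_depth = 0;      // deepest backtrack depth seen so far
        PhaseMode mode = PhaseMode::Saved;

        // Called on every backtrack: always update the saved phases, but
        // only overwrite the target phases when we got deeper than ever
        // before. (A real solver updates only the assigned variables; the
        // full-vector copy here is a simplification.)
        void on_backtrack(const std::vector<bool>& trail_values, size_t depth) {
            saved = trail_values;
            if (depth > max_depth) {
                max_depth = depth;
                target = trail_values;
            }
        }

        // Phase to use when branching on 'var' under the current policy.
        bool pick_phase(size_t var) const {
            switch (mode) {
                case PhaseMode::Saved:       return saved[var];
                case PhaseMode::Target:      return target[var];
                case PhaseMode::Inverted:    return !saved[var];
                case PhaseMode::AlwaysFalse: return false;
                case PhaseMode::AlwaysTrue:  return true;
            }
            return false;
        }

        // Rotate the policy, e.g. once per restart.
        void next_mode() {
            mode = static_cast<PhaseMode>((static_cast<int>(mode) + 1) % 5);
        }
    };

Resetting max_depth periodically gives the “local deepest” variant, while never resetting it gives the “global deepest” one.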

CCAnr Local Search Solver

CryptoMiniSat gained a new local search solver, CCAnr (paper here), and it’s now the default. CCAnr was written by Shaowei Cai, who very kindly let me add his solver to CryptoMiniSat and allowed me to add him as an author to the version of CryptoMiniSat that participated in the SAT competition. Being a local search solver, CCAnr can only solve satisfiable instances, and it does so by always working on a full candidate assignment that it tries to “massage” into a full solution.

Within CryptoMiniSat, CCAnr takes the starting candidate solution from the phases inside the CDCL solver, and tries to extend it to fit all the clauses. If it finds a satisfying assignment, this is emitted as a result. If it doesn’t, the best candidate solution (the one that satisfies the most clauses) is saved into the CDCL phase and is later used in the CDCL solver. Furthermore, some statistics during the local search phase are saved and then injected into the variable branching heuristics of the CDCL solver, see code here.
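
Sketched in code, the hand-off looks roughly like this. All types and function names are illustrative stand-ins rather than CryptoMiniSat’s real API, and run_ccanr is a stub for the actual CCAnr call.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct LocalSearchResult {
        bool satisfied = false;            // did local search find a full solution?
        std::vector<bool> best_candidate;  // assignment satisfying the most clauses
        std::vector<unsigned> flip_counts; // per-variable flip statistics
    };

    // Stub: in reality this runs CCAnr for a bounded number of steps,
    // flipping variables to increase the number of satisfied clauses.
    LocalSearchResult run_ccanr(const std::vector<bool>& start, int /*max_steps*/) {
        LocalSearchResult res;
        res.best_candidate = start;
        res.flip_counts.assign(start.size(), 0);
        return res;
    }

    struct CdclState {
        std::vector<bool> saved_phases;      // polarities used when branching
        std::vector<double> branch_activity; // per-variable branching scores
    };

    // One hybrid round: seed CCAnr from the CDCL phases, and feed whatever
    // it learned back into the CDCL solver.
    bool local_search_round(CdclState& cdcl) {
        LocalSearchResult res = run_ccanr(cdcl.saved_phases, 100000);
        if (res.satisfied) return true; // full solution found: emit it

        // Not solved: the best candidate becomes the new saved phases, and
        // the flip statistics nudge the variable branching heuristic.
        cdcl.saved_phases = res.best_candidate;
        const size_t n = std::min(res.flip_counts.size(),
                                  cdcl.branch_activity.size());
        for (size_t v = 0; v < n; v++)
            cdcl.branch_activity[v] += res.flip_counts[v];
        return false;
    }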

Hybrid Variable Branching

Variable branching in CryptoMiniSat has always been a mix of the VSIDS (Variable State Independent Decaying Sum, paper here) and Maple (multi-armed-bandit based, paper here) heuristics. However, both Maple and VSIDS have a bunch of internal parameters that work best for one type of SAT problem or another.

To get around the issue of trying to find a single optimal value for all, CryptoMiniSat now uses a combination of different configurations parsed from the command line, such as “maple1 + maple2 + vsids2 + maple1 + maple2 + vsids1”, which allows different configurations of both Maple and VSIDS (v1 and v2 of each) to be set up and used right from the command line. This configuration system allows a wider variety of problems to be solved efficiently.
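
A minimal sketch of the mechanism: split the configuration string on ‘+’ and rotate through the resulting schedule, e.g. at every restart. The exact option syntax in CryptoMiniSat may differ; this only shows the idea.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Split a schedule such as "maple1 + maple2 + vsids2" on '+',
    // trimming surrounding spaces from each entry.
    std::vector<std::string> parse_branch_schedule(const std::string& spec) {
        std::vector<std::string> schedule;
        std::stringstream ss(spec);
        std::string item;
        while (std::getline(ss, item, '+')) {
            const size_t b = item.find_first_not_of(' ');
            const size_t e = item.find_last_not_of(' ');
            if (b != std::string::npos)
                schedule.push_back(item.substr(b, e - b + 1));
        }
        return schedule;
    }

    int main() {
        const auto schedule = parse_branch_schedule(
            "maple1 + maple2 + vsids2 + maple1 + maple2 + vsids1");
        // Rotate through the configurations, switching e.g. at every restart.
        for (int restart = 0; restart < 8; restart++) {
            const std::string& conf = schedule[restart % schedule.size()];
            std::cout << "restart " << restart
                      << ": branching with " << conf << "\n";
        }
        return 0;
    }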

Final Remarks

CryptoMiniSat is now used in many systems, and it is the default SAT solver in a number of them.

I think these systems, especially given their track record of achieving high performance in their respective fields, show that CryptoMiniSat is indeed a well-performing and reliable workhorse. This is thanks to many people, including, but not limited to, Kuldeep Meel, Kian Ming A. Chai, Trevor Hansen, Arijit Shaw, Dan Liew, Andrew V. Jones, Daniel Fremont, Martin Hořeňovský, and others who have all contributed pull requests and valuable feedback. Thanks!

As always, let me know if you have any feedback regarding the solver. You can create a GitHub issue here and a pull request here. I am always interested in new use-cases and happy to help integrate the solver into new systems.

The variable speed of SAT solving

[Figure: timings from the article “Attacking Bivium Using SAT Solvers”. The authors don’t seem to have randomised the problems enough: the time to solve should increase exponentially, but instead it goes up and down like a roller coaster.]

Vegard Nossum asked me about the varying time it took for CryptoMiniSat to solve a certain satisfiable instance. This inspired me to write an overly long reply, which I think might interest others. So, why does the solving time vary for a specific instance if we permute the clauses? Intuitively, just by permuting the clauses, the problem itself doesn’t get any easier or harder, so why should that make any difference at all? I will now try to go through the reasons, though they can almost all be summed up as follows: the SAT solver has absolutely no clue what it is solving, so to solve a problem it relies on heuristics and randomisation, both of which are influenced by its starting seed, which in turn is influenced by the order of the clauses in the CNF file. And now on to the long explanation.

Let’s divide problems into two categories: satisfiable and unsatisfiable instances. First, let me talk about satisfiable instances. Let’s suppose that we have been solving the problem for some time, and we have connected the dots (variables) well enough through learnt clauses. All that the SAT solver is waiting for is a good guess that sets 3-4 variables right, after which it will propagate all variables to the right values and satisfy all clauses. The problem is therefore to get to the point of only needing to guess 3-4 variables, i.e. to connect the dots, which we do through resolution. We can trade some or all of this dot-connecting (i.e. resolution) for more guessing, and this can in fact be implemented by simply forgoing some of the conflict analysis we do. In the extreme case, all conflict analysis can be disabled, and then the solver would not restart its search. The solver would in essence be brute-forcing the instance through BCP (boolean constraint propagation). It is easy to see why this latter approach is prone to variation: depending on the ordering of the variables in the search tree, the tree can be orders of magnitude smaller or larger, and the first solution can be at any point in the search tree, essentially at a random place.
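
To illustrate that extreme end of the spectrum, here is a toy DPLL-style brute-forcer that does only BCP and chronological backtracking: no conflict analysis, no learning, no restarts. It is deliberately naive (the trail is a full copy of the assignment), but the fixed variable order in solve() is exactly the knob that makes the size of the explored tree swing wildly between permutations.

    #include <cstdlib>
    #include <vector>

    using Clause = std::vector<int>;

    struct BruteForcer {
        int num_vars;
        std::vector<Clause> clauses; // literals encoded as +v / -v
        std::vector<int> assign;     // index 1..num_vars; 0 unset, +1 true, -1 false

        bool lit_true(int lit) const {
            return assign[std::abs(lit)] == (lit > 0 ? 1 : -1);
        }
        bool lit_unset(int lit) const { return assign[std::abs(lit)] == 0; }

        // Boolean constraint propagation: returns false on conflict.
        bool propagate() {
            bool changed = true;
            while (changed) {
                changed = false;
                for (const Clause& cl : clauses) {
                    int unset_lit = 0, num_unset = 0;
                    bool satisfied = false;
                    for (int lit : cl) {
                        if (lit_true(lit)) { satisfied = true; break; }
                        if (lit_unset(lit)) { num_unset++; unset_lit = lit; }
                    }
                    if (satisfied) continue;
                    if (num_unset == 0) return false; // all literals false: conflict
                    if (num_unset == 1) {             // unit clause: forced value
                        assign[std::abs(unset_lit)] = unset_lit > 0 ? 1 : -1;
                        changed = true;
                    }
                }
            }
            return true;
        }

        // Chronological search in a fixed variable order; this order is
        // what makes the explored tree vary so wildly between permutations.
        bool solve(int var = 1) {
            const std::vector<int> saved = assign; // toy trail: copy and restore
            if (!propagate()) { assign = saved; return false; }
            while (var <= num_vars && assign[var] != 0) var++;
            if (var > num_vars) return true;       // everything set: satisfiable
            for (int val : {1, -1}) {
                const std::vector<int> before = assign;
                assign[var] = val;
                if (solve(var + 1)) return true;
                assign = before;
            }
            assign = saved;
            return false;
        }
    };

    int main() {
        // (x1 or x2) and (not x1 or x2) and (not x2 or x3)
        BruteForcer s{3, {{1, 2}, {-1, 2}, {-2, 3}}, std::vector<int>(4, 0)};
        return s.solve() ? 0 : 1;
    }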

If we decide to carry out resolution on satisfiable problems to counter the variation in the search tree, it is interesting to realise the following: in most cases, we cannot forgo guessing. The reason is simple yet leads to quite interesting properties. Essentially, a SAT instance can have multiple solutions. If there are two solutions, e.g. 01100... and 10011..., i.e. the solutions are the complete inverse of one another, then the SAT solver will not be able to prove any variable to take any specific value. The best it could do is create binary clauses; in the example case, for instance:

var1=0 -> var2=1, var3=1, var4=0...
and
var1=1 -> var2=0, var3=0, var4=1...

If we do enough resolutions, the “guessing” part will eventually become a solution-selector, i.e. it will select a solution from the set of available solutions. If there are 2^20 solutions, evenly distributed in the search space, we might need to set about 20 variables before a solution is found through a simple application of BCP: each guess roughly halves the set of remaining solutions, so after about 20 guesses a single solution remains, which propagation can then complete. Naturally, SAT solvers don’t normally do this, as they are content with finding one satisfying assignment, reporting it to the user, and exiting. It would be interesting to know the average number of variables that still needed to be guessed at the point the SAT solver actually found the solution, but as far as I know this has not been measured yet. We would then know the trade-off a SAT solver employs between resolution and search when solving satisfiable instances.

All right, so much for satisfiable instances. What happens with UNSAT instances? Well, the solver must either go through the whole tree and realise there is no solution, or do resolution until it reaches an empty clause, essentially building a resolution tree with the empty clause at its root. Since both of these can be done at the same time, there is a similar trade-off as above, but this time it’s somewhat upside-down. First of all, the search tree can be smaller or larger depending on the ordering of the variables (as before), and secondly, the resolution tree can be smaller or larger depending on the order of resolutions. The minimal resolution tree is (I believe) NP-hard to find, which doesn’t help, but at least there is a minimal resolution tree and a minimal search tree (which must be traversed completely) that bound the work from below. So, in contrast to finding a satisfying solution, both of these are complete in some sense, which should make searching for them robust in terms of speed. Finding a satisfying solution is not complete because, as I noted above, SAT solvers don’t find all satisfying solutions; if they did, they would have the same properties as solving an unsatisfiable problem, since they would have to prove that no more solutions remain, essentially proving unsatisfiability.

The above is probably why SAT solvers tend to behave more robustly when they encounter an unsatisfiable problem. In particular, CryptoMiniSat’s running time seems more robust when solving unsatisfiable problems. Also, for problems that contain a lot of randomly placed solutions in the search space, such as the (in)famous vmpc_33 and vmpc_34 problems, the solving times seem wholly unpredictable for at least two interesting SAT solvers: lingeling and CryptoMiniSat.

Dependent Literals

I have lately been trying to work on branching heuristics. The idea is relatively simple. If we have a cache of what propagates what, we can try to branch on literals that propagate a lot of other literals. The algorithm boils down to the following:

  1. For each literal L, we select a literal DL that propagates L and has the largest tree of propagated literals. We call this DL the dominating literal. Naturally, some literals don’t have a dominating literal, and some literals appear as dominating literals for many other literals.
  2. When branching, if we are supposed to branch on a literal that has a dominator, we pick the dominator instead of the literal (see the sketch after this list). This not only ensures that the literal will be set, but it will probably also set a number of other literals.
  3. Using the heuristic of step 2, we should reach a conflict earlier, with fewer decisions taken. This should lead to faster conflicting, and thus faster solving.
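
Here is a small sketch of that dominator-based branching, assuming an implication cache that maps each literal to the literals it propagates. All names are illustrative rather than CryptoMiniSat’s internals, and the probability parameter anticipates the random literal/dominator mixture discussed below.

    #include <cstddef>
    #include <cstdlib>
    #include <unordered_map>
    #include <vector>

    struct Dominators {
        // dom[L] = literal that propagates L and itself propagates the most
        // other literals; absent if L has no dominating literal.
        std::unordered_map<int, int> dom;

        // Build from a cache: cache[L] = literals that L propagates.
        void build(const std::unordered_map<int, std::vector<int>>& cache) {
            for (const auto& [parent, propagated] : cache) {
                for (int child : propagated) {
                    auto it = dom.find(child);
                    // Keep the parent with the biggest propagation tree.
                    if (it == dom.end() ||
                        tree_size(cache, parent) > tree_size(cache, it->second)) {
                        dom[child] = parent;
                    }
                }
            }
        }

        static size_t tree_size(
            const std::unordered_map<int, std::vector<int>>& cache, int lit) {
            auto it = cache.find(lit);
            return it == cache.end() ? 0 : it->second.size();
        }

        // Step 2 of the list above: branch on the dominator instead of the
        // literal, but only some of the time, implementing the random
        // literal/dominator mixture.
        int pick_branch_lit(int lit, double use_dominator_prob) const {
            auto it = dom.find(lit);
            if (it != dom.end() &&
                (double)std::rand() / RAND_MAX < use_dominator_prob) {
                return it->second; // setting this also propagates 'lit'
            }
            return lit;
        }
    };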

Of course things aren’t as simple as they seem. First of all, as George has indicated to me, Jarvisalo and Junttila have published a paper entitled Limitations of restricted branching in clause learning, which deals with issues related to this branching heuristic. The main message of that paper is that it’s a bad idea to branch only on dependent (they call them input) variables. I haven’t really got very deep into this, but there are some subtle differences between the heuristic above and dependent variables, notably that I have treated dependent literals above, while Matti and Tommi are treating dependent variables.

As for the advantages of the above heuristic, it seems to work on some instances in practice. In particular, it allows me to solve some cryptographic instances significantly faster; in fact, some instances that have been extremely hard to solve are now solvable. To me, this looks like an indication that the heuristic is not entirely useless. There are some subtle problems, but it seems that a mixture of randomly branching either on the literal or on its dominator fixes most of them. Indeed, I think CryptoMiniSat 3.0.0 will be released with this experimental feature turned on by default.

PS: CryptoMiniSat 3.0.0 is coming soon, I believe. It’s going to provide a fully multi-threaded library & executable with a MiniSat-compatible syntax, so anyone will be able to multi-thread his or her SMT solver on a whim.