Category Archives: Research

Research-related information

Certified UNSAT and CryptoMiniSat

Marijn Heule kindly sent me an email on the 10th of April about DRUP, the new system used this year in the SAT Competition'13 UNSAT track, and encouraged me to implement it. He had personally implemented it in MiniSat, which was a very helpful lead for me. In this post I will talk about my experience implementing DRUP in CryptoMiniSat within a span of 3 days.

Implementation complexity

It took only 3 days and about 1400 lines of code to implement DRUP:

git diff f27c74bbd  c0b6ccc10 --shortstat
 25 files changed, 1372 insertions(+), 307 deletions(-)

It turns out that the biggest issue is that whenever I shorten a clause, I first have to add the shortened version and then delete the old one. Since I always do in-place literal deletion, this means I have to save the old clause to a temporary place, add the new one, and finally delete the saved original. I will eventually write a C++ wrapper that does this for me, but currently it's a lot of

vector<Lit> origCl(cl.size());                   // save the original clause
std::copy(cl.begin(), cl.end(), origCl.begin());
[blah...]                                        // shorten 'cl' in place
drup << cl << " 0" << endl;                      // add the shortened clause
drup << "d " << origCl << " 0" << endl;          // then delete the original

So the code is a bit messy. Other than this, the implementation went very smoothly. The biggest pain was not forgetting to add every changed clause to the DRUP output. Since I have implicit binary and tertiary clauses and I manipulate them in place, they are changed in quite complicated code paths.
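
Coming back to the wrapper mentioned above, here is a rough sketch of what it could look like. The class name and the stream-based interface are my assumptions for this post, not actual CryptoMiniSat code; the idea is just to copy the clause before the in-place shortening and emit the add/delete pair afterwards.

#include <iostream>
#include <vector>

typedef int Lit; // DIMACS-style literal, only for this sketch

// Remembers the original clause, then prints the shortened clause followed
// by a "d" (delete) line for the saved original, in the order DRUP expects.
struct DrupShortenGuard {
    DrupShortenGuard(std::ostream& drup, const std::vector<Lit>& cl)
        : drup_(drup), orig_(cl) {}

    // Call this once the clause has been shortened in place.
    void finishedShortening(const std::vector<Lit>& shortened) {
        printClause(shortened);   // add the new, shorter clause
        drup_ << "d ";
        printClause(orig_);       // then delete the saved original
    }

    void printClause(const std::vector<Lit>& cl) {
        for (Lit l : cl) drup_ << l << " ";
        drup_ << "0" << std::endl;
    }

    std::ostream& drup_;
    std::vector<Lit> orig_;       // copy taken before shortening
};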

If you don't have such complicated code paths, you should be able to implement DRUP within a day or less. I encourage you to do so, it's quite fun!

Remaining uncertainties

I am a bit confused about whether some of the optimisations in CryptoMiniSat work with DRUP. I have been fuzzing the DRUP implementation for about 1000 CPU hours, but not with all optimisations turned on, and some are a bit shaky. In particular, XOR and stamping&caching come to mind.

I cannot turn DRUP on for the top-level XOR manipulation because otherwise I would need to tell DRUP every Gaussian elimination step. Not funny, and not fast. Well, XOR is not such a big thing, and it is no longer natively implemented in CryptoMiniSat, so not a big deal, really.

The other, more troubling ones are stamping and implied literal caching. Luckily I have on-the-fly hyper-binary resolution (this is needed for DRUP with Stalmarck if you think about it), so the binary clauses stored by caching and stamping are there... but they may get deleted by variable elimination, blocked clause elimination and... well, maybe nothing else. Hopefully not. Anyway, I never block binary clauses (does clause blocking ever help? I am confused) and I of course cannot delete eliminated binary clauses from the DRUP output. However... that may make the verification very slow. So, I am at a crossroads here. I think I will submit one version with stamping&caching and one without.

In the end, every optimisation can be turned on except for XOR. I find that exceptionally good given the number of tweaks/hacks used by CryptoMiniSat.

Long-term advantages of having DRUP

I think DRUP allows for a lot of possibilities. Naturally, I first want to draw resolution graphs. There are plenty of libraries for 3D drawing, and I have already ordered the LEAP controller (a 3D controller), which will come in handy for playing with the resolution graphs (zoom in & out, rotate, etc.).

From there, I want to extract stats from the graph and present them alongside the stats I already generate. For example, how many of the deleted clauses get re-learnt later? How many clauses end up being used in the resolution graph that derives the empty clause? How often does this happen when cleaning based on glues? How often when cleaning based on activities? For which types of instances?

Linking this with real-world instances by coloring the graph points according to e.g. filter functions in stream ciphers is not very hard and should be quite a lot of fun.

Acknowledgements

I think Marijn Heule deserves a lot of thanks for the work he has put into DRUP (webpage, example, DIFF for MiniSat) and all the help he has given me. I had some initial doubts about whether it was possible to implement at all, and I had some minor problems with the checker, but he always replied kindly and promptly. Thanks!

A variable elimination improvement

Lately, I have been thinking about how to improve variable elimination. It’s one of the most important things in SAT solvers, and it’s not exactly easy to do right.

Variable elimination

Variable elimination simply resolves every clause containing the literal v1 with every clause containing the literal \neg v1, removes the original clauses and adds the resolvents. For example, let's take the clauses

v1 \vee v2 \vee v3
v1 \vee v4 \vee v5
\neg v1 \vee v10 \vee v11
\neg v1 \vee v12 \vee v13

When v1 gets eliminated the resolvents become

v2 \vee v3 \vee v10 \vee v11
v2 \vee v3 \vee v12 \vee v13
v4 \vee v5 \vee v10 \vee v11
v4 \vee v5 \vee v12 \vee v13

The fun comes when a resolvent is tautological. This happens, for example, in this case:

v1 \vee v4
\neg v1 \vee v5\vee \neg v4

The resolvent is the clause

v4 \vee \neg v4 \vee v5

This clause contains both a literal and its negation and is therefore always true. It's good to find variables we can eliminate without any side effects, i.e. variables whose elimination leaves no resolvents behind at all. However, it's not so cheap to find these. Until now.
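
To make the operation concrete, here is a small sketch of resolving two clauses on a pivot literal and detecting the tautological case. The signed-integer literal encoding is an assumption of the sketch, not CryptoMiniSat's internal representation.

#include <set>
#include <vector>

typedef int Lit; // non-zero int, -lit is the negation (sketch encoding only)

// Resolve clause 'a' (which contains 'pivot') with clause 'b' (which
// contains '-pivot'). Returns false if the resolvent is tautological,
// otherwise fills 'resolvent'.
bool resolveOn(const std::vector<Lit>& a, const std::vector<Lit>& b,
               Lit pivot, std::vector<Lit>& resolvent)
{
    std::set<Lit> lits;
    for (Lit l : a) if (l != pivot)  lits.insert(l);
    for (Lit l : b) if (l != -pivot) lits.insert(l);

    for (Lit l : lits)
        if (lits.count(-l))
            return false; // contains both l and -l: always true, drop it

    resolvent.assign(lits.begin(), lits.end());
    return true;
}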

A fast procedure for calculating the no. of non-tautological resolvents

The method I came up with is the following. For every clause that contains v1, I go through its literals (skipping v1 itself) and, in an array the size of all possible literals, set a bit at each literal's position; each clause sets a different bit. Then, going through every clause that contains \neg v1, I OR together the array entries of the inverses of its literals, calculate the hamming weight of the result (a popcount(), a native instruction on modern CPUs) and subtract this hamming weight from the number of clauses v1 appears in. Summing these differences over all clauses of \neg v1 gives the number of non-tautological resolvents. Here is the pseudo-code:

# Literals are non-zero integers; -l is the negation of l.
# clauses[lit] is the occurrence list of 'lit'; a dict here stands in for
# the "array the size of all possible literals".

mybit = 1
num = 0                                  # number of clauses containing v1
myarray = {}                             # literal -> bitmask of v1-clauses containing it

for clause in clauses[v1]:
    for l in clause:
        if l != v1:                      # skip the pivot literal itself
            myarray[l] = myarray.get(l, 0) | mybit
    mybit = mybit << 1                   # each clause of v1 gets its own bit
    num += 1

count = 0
for clause in clauses[-v1]:
    tmp = 0
    for l in clause:
        if l != -v1:
            # v1-clauses containing the negation of l resolve tautologically
            tmp |= myarray.get(-l, 0)
    count += num - bin(tmp).count("1")   # popcount(tmp)

print("number of non-tautological resolvents: %d" % count)

I think this is pretty neat. Notice that this is linear in the number of literals of the clauses in which v1 and \neg v1 are present. The only limitation of this approach is that 'myarray' has to have enough bits in its elements to hold 'num' bits. This is of course non-trivial and can be expensive in terms of memory (and cache misses), but I still find this approach rather fun.
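
As a concrete variant, the same count can be done in C++ with one 64-bit word per literal, bailing out when the variable appears in more than 64 clauses. This is only a sketch under that assumption; the literal encoding and the occurrence-list types below are mine, not CryptoMiniSat's.

#include <cstdint>
#include <vector>

typedef int Lit; // non-zero int, -lit is the negation (sketch encoding only)

static size_t litIndex(Lit l, size_t numVars) {
    return (l > 0) ? (size_t)l : numVars + (size_t)(-l);
}

// Number of non-tautological resolvents of eliminating variable v, or -1
// if v appears in more than 64 clauses (one uint64_t of marks per literal
// is then not enough).
long countNonTautResolvents(Lit v,
                            const std::vector<std::vector<Lit> >& posOccs, // clauses with  v
                            const std::vector<std::vector<Lit> >& negOccs, // clauses with -v
                            size_t numVars)
{
    if (posOccs.size() > 64)
        return -1;

    std::vector<uint64_t> marks(2 * numVars + 1, 0);
    uint64_t bit = 1;
    for (const std::vector<Lit>& cl : posOccs) {
        for (Lit l : cl)
            if (l != v)
                marks[litIndex(l, numVars)] |= bit; // one distinct bit per clause of v
        bit <<= 1;
    }

    long count = 0;
    for (const std::vector<Lit>& cl : negOccs) {
        uint64_t taut = 0;
        for (Lit l : cl)
            if (l != -v)
                taut |= marks[litIndex(-l, numVars)]; // these pairs resolve tautologically
        // popcount: GCC/Clang builtin, a single instruction on modern CPUs
        count += (long)posOccs.size() - __builtin_popcountll(taut);
    }
    return count;
}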

Using this procedure, I can check whether all resolvents are tautological and, if so, eliminate the variable by simply removing all its clauses without calculating anything else at all. Since this happens very often, I save a lot of calculation.

CryptoMiniSat 3.1 released

CryptoMiniSat 3.1 has been released. The short changelog is:

$ git diff cryptoms-3.0 cryptoms-3.1 --shortstat
 84 files changed, 3079 insertions(+), 2751 deletions(-)

The changes made were threefold. First, memory usage has been greatly reduced. This is crucial, because memory usage was over 7GB on certain instances. Secondly, the implication cache wasn’t very well-used and an idea that came to my mind greatly improved performance on most problems. Finally, time limiting of some inprocessing techniques on certain types of problems has been improved.

Memory usage reduction

On instances that produced a lot of long learnt clauses, memory usage was very high. These learnt clauses were all automatically linked into the occurrence lists and consequently took large amounts of memory, sometimes up to 10GB. On other instances, the original clauses were too numerous and too large, so putting even them into the occurrence lists was too much. On these instances, variable elimination is not carried out (or is carried out only later, once enough original clauses have been removed or shortened). To debug some of these problems, I wrote a fuzzer that generates extremely large problems with many binary and many long clauses; it's available here as "largefuzzer". Since it generates a great many binary clauses, it is also good at fuzzing the problems encountered when probing extremely weird and large instances.

Implied literal usage improvement

CryptoMiniSat uses implied literals, i.e. it caches which literals were propagated by each literal during probing. It then re-uses this information to subsume and/or strengthen clauses. This is somewhat similar to stamping, though it uses more memory. It is actually useful to have alongside stamping, and I now do both — the DFS propagation that stamping requires is expensive, though updating the cache during DFS is just as easy as during quasi-BFS.

The trick I discovered while playing around with cached implied literals is this: if literal L1 propagates L2 and also !L2, that means there are conceptually two binary clauses in the solver, (!L1, L2) and (!L1, !L2), so !L1 is TRUE. This is of course trivial, but I had never checked for it. The question most would raise is: why would L1 propagate both L2 and !L2 and not fail? The answer is kind of tricky, but very interesting. Let's say at one point L1 propagates L2 due to a learnt clause, but that learnt clause is later removed. Another clause is then learnt, and with that clause in place, L1 propagates !L2. Without caching, this would be missed. Caching memorizes past conceptual binary clauses and re-uses this information.
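
A minimal sketch of the check is below. The signed-integer literal encoding and the idea that the cache is just a flat list of implied literals per literal are assumptions of the sketch; CryptoMiniSat's actual data structures are different.

#include <cstdlib>
#include <vector>

typedef int Lit; // non-zero int, -lit is the negation (sketch encoding only)

// Given the cached literals implied by L1, return true if both l and -l
// appear for some variable. In that case !L1 must be true.
// 'seen' must be zero-initialized and at least numVars+1 long; it is
// restored to all-zero before returning.
bool cacheImpliesNegation(const std::vector<Lit>& impliedByL1,
                          std::vector<signed char>& seen)
{
    bool bothPolarities = false;
    for (Lit l : impliedByL1) {
        const int var = std::abs(l);
        const signed char sign = (l > 0) ? 1 : -1;
        if (seen[var] != 0 && seen[var] != sign) {
            bothPolarities = true; // L1 implies both l and -l
            break;
        }
        seen[var] = sign;
    }
    for (Lit l : impliedByL1)      // clean up for the next call
        seen[std::abs(l)] = 0;
    return bothPolarities;
}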

This is not an optimization that only looks good on paper; it is very good to have. With this one optimization I gained 5 instances from the SAT Comp'09 set with a 1000s timeout (196 solved -> 201 solved). I can't currently imagine how this could be done effectively with stamping, but that doesn't mean it's not possible. In my experience, though, stamping doesn't preserve as much information over time, since it is updated (renumbered) frequently, while the cache is only ever improved, never shrunk. A possibility would be to have more than one stamp system and select between them round-robin. However, that would mean that the sorting of clauses (for shrinking) would need to be done more than once, and sorting them is already relatively expensive. I sometimes feel that what stamping gains in memory it loses in sorting (i.e. processing time) and lower coverage (re-numbering).

More precise time-limiting

Martin Maurer has been kind enough to file a lot of bug reports about probing and variable elimination taking too much time, sometimes upwards of 150s when they should take around 20-30s maximum. While investigating, it turned out that the problem was very weird indeed: eliminating or probing a single variable could take upwards of 100s. This was completely unexpected, as the code only checked for timeouts on a per-variable basis. In the end, the code had to be improved to track time on an intra-variable basis in both systems. While at it, I also added intra-variable time-tracking to implicit clause subsumption and strengthening. So, overshooting the time limits should be less prevalent from now on. As an interesting side-note, time-limiting on probing is now so fine-grained that a 32-bit unsigned integer would overflow within 15s if used as the time-tracker.
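
The fix amounts to charging the work done to a budget that is checked inside the per-variable loops, not just between variables. The sketch below only illustrates that idea with made-up names; it is not the actual CryptoMiniSat code.

#include <cstdint>
#include <vector>

// A shared work budget, decremented as work is done.
struct StepBudget {
    int64_t stepsLeft;
    bool spend(int64_t steps) {  // returns false once the budget runs out
        stepsLeft -= steps;
        return stepsLeft > 0;
    }
};

// Pretend the work is resolving clause pairs; each pair costs its combined
// size. The point is that the check happens inside the variable, so a single
// pathological variable cannot run for 100+ seconds on its own.
bool eliminateOneVar(const std::vector<std::vector<int> >& posOccs,
                     const std::vector<std::vector<int> >& negOccs,
                     StepBudget& budget)
{
    for (const std::vector<int>& a : posOccs) {
        for (const std::vector<int>& b : negOccs) {
            if (!budget.spend((int64_t)a.size() + (int64_t)b.size()))
                return false;    // abort mid-variable, not only between variables
            // ... build and check the resolvent of a and b here ...
        }
    }
    return true;
}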

On benchmark randomization

As many of you have heard, the SAT Competition for this year has been announced. You can send in your benchmarks between the 12th and the 22nd of April, so get started. I have a bunch of benchmarks that I already submitted about 2 years ago and am still waiting for any reply from those organizers — but the organizers are different this year, so fingers crossed.

What I want to talk about today is benchmark randomization. This is a very-very touchy topic. However, I fear that it’s touchy for the wrong reasons, and so I think it’s important to talk about it in detail.

What is benchmark randomization?

Benchmark randomization is when a benchmark that is submitted is shuffled around a bit. There are many ways to shuffle a problem, and I will discuss this in a bit, but the point is that the problem at hand that is described by the benchmark CNF should not be changed, or changed only in a very-very minor way, such that everyone agrees that it doesn’t affect the core problem itself as described by the CNF.

Why do we need shuffling?

We need shuffling because, simply put, there aren't enough good benchmarks, and so the benchmarks of yesteryear (and the year before, and before, and…) re-appear often. This would be OK if SAT solvers couldn't be tuned to solve specific problems faster. Note that I am not suggesting that SAT solvers are intentionally manipulated to solve specific problems faster by unscrupulous researchers. Instead, the following happens.

Unintentional random seed improvements

Researchers test the performance of their SAT solvers on specific instances and then tune their solvers, testing the performance again and again on the same instances to check whether it has improved. Logically, this is the best way to test and improve performance: use the same well-defined test set all the time for meaningful comparison. Since researchers want to use the instances that they think represent the current use-cases of SAT solvers, they naturally use the instances of the SAT competitions, since those are representative. I did and still do the same.

So, researchers add their idea to a SAT solver and test. If the idea does not improve things, some change is made and tested again. Since modern CDCL SAT solvers behave quite randomly, and since any change in the source code changes the behaviour quite significantly, a small change (tuning of a parameter, for example) will change the behaviour. And since the set of problems tested on is fixed, there is a chance that more problems will be solved. If more are solved, the researcher might interpret this as a general improvement rather than something specific to the problem set. However, while the improvement may very well be partly generic, it is also partly specific.

The above suggests that the randomness of the SAT solver is completely unintentionally tuned to specific problems — a subset of which will appear next year in the competition.

Easy fixes

Since there aren't enough benchmark problems, and in particular some benchmark types are rare, I suggest fixing the unintentional tuning of solvers to specific problems by changing the benchmarks in minor ways. Here is a list, with an explanation of why I think it's OK to perform each manipulation:

  1. Propagate variables. Unit clauses are often part of benchmarks. Propagating some of these, some recursively, gives quite a bit of problem-space variation. Propagation is performed by every CDCL SAT solver, and I think many would be surprised if it didn't help even SAT solvers that work differently from current ones. Agreeing on performing partial propagation is something that shouldn't be too difficult.
  2. Renumber variables. For some variable X that is not used (or is fixed to a value that has been propagated), every variable higher than X is decremented by one, and the CNF header is fixed to reflect this change. Such a minor renumbering should be acceptable to every researcher as something that doesn't change the problem or its structure. Note that if partial propagation is performed, there should be quite a number of variables that can be removed. Renumbering some, but not others, is a way to shuffle the problem (see the sketch after this list). A more radical option would be to completely shuffle the variable numbering; however, that would change the way the problem is described quite radically, so some would rightly object, and it's not necessary anyway.
  3. Replace equivalent literals. Perform strongly connected component analysis and replace equivalent literals. This has been shown to significantly improve performance, and I have never seen a case where it doesn't. Since equivalent literal replacement can be performed with a lot of freedom, it gives quite a bit of shuffling space. For example, if v1=v2=v3, then any of v1, v2 and v3 can be the one that replaces the rest in the CNF. Picking one randomly is a way to shuffle the instance.
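
Here is the renumbering sketch referred to in point (2). It assumes the DIMACS convention of signed-integer literals and that the removed (fixed or unused) variables no longer appear in any clause; it is only an illustration.

#include <cstdlib>
#include <vector>

typedef int Lit; // DIMACS-style literal, -lit is the negation

// Renumber variables so that the surviving ones become 1..N with no gaps.
// 'clauses' is rewritten in place; the return value is the new variable
// count that should go into the "p cnf" header.
int renumberVars(std::vector<std::vector<Lit> >& clauses, int oldNumVars)
{
    std::vector<char> used(oldNumVars + 1, 0);
    for (const std::vector<Lit>& cl : clauses)
        for (Lit l : cl)
            used[std::abs(l)] = 1;

    std::vector<int> newName(oldNumVars + 1, 0);
    int next = 1;
    for (int v = 1; v <= oldNumVars; v++)
        if (used[v])
            newName[v] = next++; // unused/fixed variables get no new name

    for (std::vector<Lit>& cl : clauses)
        for (Lit& l : cl)
            l = (l > 0) ? newName[l] : -newName[-l];

    return next - 1;
}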

There are other ways of shuffling, but they either change the instance too much (e.g. blocked clause removal) or can be undone quite easily (e.g. shuffling the order of the clauses). In fact, (3) is already quite a touchy issue I think, but on (1) and (2) all could agree. Neither requires the order of the literals or the order of the clauses to change — some clauses (e.g. unit ones) and literals (some of those that are set) would be removed, but that's all. The problem remains essentially unchanged, such that most probably even the original problem author would easily recognize it. However, it would be different from a SAT solver's point of view: these changes would change the random seed of the solver, forcing the solver to behave in a way that is less tuned to this specific problem instance.

Conclusion

SAT solvers are currently tuned too much to specific instances. This is not intentional on the researchers' part, but it still affects the results. To obtain better, less biased results, we should shuffle the problem instances we have. Above, I suggested three ways to shuffle the instances such that most would agree they don't disturb or change the complexity of the underlying problem described by the instance. I hope that some of these suggestions will be employed, if not this year then for next year's SAT Competition, so that we can reach better, more meaningful results.

Collecting solver data

Lately, I have been putting a lot of effort into collecting data about solver behaviour and dumping it on-the-fly into a MySQL database. Some of this data is then presented by a PHP & JavaScript interface, such as this. The goal is to better understand how solving works, and thus to possibly come up with better solving strategies. The requirements on the gathered data are that it must be fast to calculate, small to store and useful.

Gathered data

There are some obvious statistics that one would like to collect, such as the number of conflicts, propagations and conflicts per second, etc. — these are all gathered by almost every SAT solver out there. However, it would be interesting to have more. Here is what CryptoMiniSat now gathers and dumps using highly efficient prepared bulk INSERT statements, taking at most 5% of the solver's time.

For each variable, dumped once in a while:

  • no. times propagated false, and true (separately)
  • no. times decided false and true (sep.)
  • no. times flipped polarity
  • avg & variance of decision level
  • avg & variance of propagation level

For each clause longer than 3 literals, dumped once in a while:

  • activity
  • conflict at which it was introduced (if learnt)
  • number of propagations made
  • number of conflicts made
  • number of times any of its literals was looked at during BCP
  • number of times it was looked at (dereferenced) during BCP
  • number of times used to resolve with during learnt clause generation
  • number of resolutions needed during its generation (if learnt)

For each restart:

  • no. of reducible&irreducible (i.e. learnt&non-learnt) binary clauses
  • no. of red&irred tertiary clauses (separately)
  • no. of red&irred long clauses (sep)
  • avg, variance, min and max of learnt clause glues
  • avg, var, min, max of learnt clause sizes
  • avg, var, min, max of number of resolutions for 1st UIP
  • avg,var,min,max of branch depths
  • avg,var,min,max of backjump length — in the number of literals unassigned
  • avg,var,min,max of backjump length — in the number of levels backjumped
  • avg, var, min, max times a conflict followed a conflict without decisions being made
  • avg,var,min,max of agility
  • no. of propagations by red&irred binary clauses
  • no. of props by red&irred tertiary clauses
  • no. of props by red&irred long clauses
  • no. of conflicts by red&irred binary clauses (separately)
  • no. of conflicts by red&irred tertiary clauses (sep)
  • no. of conflicts by red&irred long clauses (sep)
  • no. of learnt unit, binary, tertiary, and long clauses (sep)
  • avg,var, min,max of watchlist size traversed during BCP
  • time spent
  • no. of decisions
  • no. of propagations
  • no. of variables flipped
  • no. of variables set positive&negative (sep)
  • no. of variables currently unset
  • no. of variables currently replaced
  • no. of variables currently eliminated
  • no. of variables currently set

For each learnt clause database cleaning:

  • for reducible clauses removed: all the per-clause data listed above, summed up
  • for reducible clauses not removed: all the per-clause data listed above, summed up
  • no. of clauses removed
  • no. of clauses not removed
  • for all irreducible clauses (these are not touched): all the per-clause data listed above, summed up

For every N conflicts:

  • clause size distribution
  • clause glue distribution
  • clause size and glue scatter data

This is the complete list, and not all of it is my idea. Suggestions were made by many, some as much as a year ago: George Katsirelos, Luca Melette, Vegard Nossum, Valentin Mayer-Eichberger, Ben Schlabs, Said Jabbour and Niklas Sörensson. Naturally, none of these people necessarily approve of how I gather data or what I do with the data gathered, but they helped, so listing them is only fair.

What’s not yet or cannot be gathered

Two suggestions are still on the TODO list:

  • counting the number of conflicts that happen while a learnt clause is "locked", i.e. while it has propagated a variable in the current search tree. This stat of a learnt clause could tell us whether the clause seemed essential to the search or not. If a clause propagated once, at the bottom of the search tree, and the variable it propagated was quickly unset, that's not the same as if the same clause propagated at the top of the search tree and the variable it set was essentially never unset.
  • for each variable, counting the number of conflicts done while the variable was set. This is interesting, because if a variable was propagated only once, at top-level, it will seem like it was never set (set only once), yet it may have had a huge influence on the search tree and consequently on the learnt clauses.

Both of these require a change in the backjumping algorithm, and although I am a bit worried about the performance, I will add them soon.

Unfortunately, the data about variables can't really be dumped that often, because it's huge for large problems with millions of variables. So I guess I will only dump it for the N most active variables, for some sane definition of "active". Similarly, the data about individual clauses is not dumped directly, only in a summed-up way during database cleaning.

Suggestions?

In case you have any ideas about what else could be interesting to dump, please put it in a comment. The idea is to dump data that is cheap to compute and cheap to dump, yet would be interesting for one reason or another. I am prepared to add stuff to the data structures, as you have probably guessed from the above. Yes, it makes the clause database less space-efficient and so increases cache misses. But on large problems it's going to be a cache miss most of the time anyway, and a cache fetch brings in 128B of data, which should be plenty for both the stats and the clause. Similarly with variables. So don't refrain from suggesting stuff that takes space. Also, weird stuff is interesting. The weirdest stat on the list above is definitely the "conflict after a conflict without decisions" one (suggested by Said Jabbour), which I would have guessed to be zero or near-zero, yet it is pretty high, in the 30+% range.

Suggestions are welcome!