Tag Archives: Release

CMSGen, a Fast Uniform-Like Sampler

Uniform sampling is a problem where you are given a solution space and you have to present solutions drawn uniformly at random. In some cases this is quite simple, say, for the lotto: just pick 5 random numbers from a box and we are done! For the lotto, the solution space is very easy to generate. However, when there are constraints on the solution space, it starts to get tricky.

Let’s say that I have a function I want to test, but the input to the function has some real-world constraints: e.g. the 1st parameter must be larger than the 2nd, the 2nd parameter must be divisible by the 3rd, etc. If I want to test that this function operates correctly, one way to do it is to generate 100 uniformly random inputs that don’t violate any of the constraints, run the function, and see if all is OK. For this, I need a fast way of generating uniform samples given the constraints on the solution space.
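To see why speed matters, consider the naive approach: rejection sampling, i.e. drawing random inputs and throwing away those that violate the constraints. A minimal sketch (the constraints and bounds here are made up for illustration):

import random

def valid(a, b, c):
    # Hypothetical real-world constraints on the function's inputs:
    # 1st parameter larger than the 2nd, 2nd divisible by the 3rd.
    return a > b and c != 0 and b % c == 0

def sample_inputs(n, lo=0, hi=1000):
    # Uniform over the valid region, but hopelessly slow once valid
    # inputs become a tiny fraction of the input space.
    samples = []
    while len(samples) < n:
        a, b, c = (random.randint(lo, hi) for _ in range(3))
        if valid(a, b, c):
            samples.append((a, b, c))
    return samples

print(sample_inputs(100)[:3])

Dedicated samplers exist precisely because this breaks down on complex constraints, where the valid region is a vanishingly small sliver of the space.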

Sampler speed vs. accuracy

There have been many samplers proposed in the literature. I personally have worked on one called UniGen, a guaranteed approximate probabilistic sampler, meaning that it’ll give approximately uniform samples most of the time, and we have a proof to back this up. It’s a great sampler and will work very fast on many instances. However, for really complex solution spaces, it can have trouble. Say you want to generate interesting test inputs for your deep learning algorithm. Deep neural networks tend to be extremely complex when translated to binary constraints, so UniGen will likely not be fast enough. It would give very high-quality samples (i.e. properly uniform samples), but if it’s too slow, we may want to trade sample quality for generation speed.

There are two well-known samplers, QuickSampler (code) and STS (code), that are supposed to generate uniform samples on complex solution spaces but give no guarantees; let’s call these “uniform-like” samplers. Unfortunately, the paper by Chakraborty et al. and its resulting code, Barbarik, showed that these uniform-like samplers are highly non-uniform. Barbarik is a pretty neat idea: it basically constructs solution spaces with known solution distributions and then asks the sampler to generate uniform samples. Knowing the solution space, Barbarik can then verify how non-uniform the sampler is. Imagine having a box with 1000 balls, half of them blue and the other half green. If I ask the sampler to give me 50 balls at random and all of them are green, I’d be a bit surprised, to say the least. It’d be like the lottery drawing the same 5 numbers 3 times in a row. Possible, but… not very likely. If I do this experiment 100 times and always get 50 green balls, it’s fair to conclude that the sampler is not uniform.
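To get a feel for the kind of statistical check this involves, here is a toy version (Barbarik’s actual test is far more clever and sample-efficient): given a solution space we know completely, draw samples and compare each solution’s observed frequency to the ideal one.

import random
from collections import Counter

def looks_uniform(sampler, solutions, n_samples=10000, tolerance=0.5):
    # Toy check: each solution's observed count must be within
    # tolerance*expected of the ideal count. Not Barbarik's algorithm.
    counts = Counter(sampler() for _ in range(n_samples))
    expected = n_samples / len(solutions)
    return all(abs(counts[s] - expected) <= tolerance * expected
               for s in solutions)

solutions = ["blue", "green"]  # the box of balls, known completely
fair = lambda: random.choice(solutions)
biased = lambda: "green" if random.random() < 0.95 else "blue"
print(looks_uniform(fair, solutions))    # True (with overwhelming probability)
print(looks_uniform(biased, solutions))  # False: far too many green balls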

Our new uniform-like sampler, CMSGen

Given an effective tester, Barbarik, we (Priyanka Golia, Sourav Chakraborty, Kuldeep S. Meel, and myself) thought perhaps we can follow the nowadays very successful test-driven development (TDD) methodology. All we have to do is to make our sampler pass the test of Barbarik, while being at least as fast as STS/QuickSampler, and we’ll be good to go. In fact, given Barbarik, it only took about a week of playing around with CryptoMiniSat’s options to beat both STS and QuickSampler in both accuracy and speed. This speaks volumes about how important it is to have a robust, reliable, and fast testing framework that can give immediate feedback about the quality of the samples generated.

Our new uniform-like sampler, based on CryptoMiniSat, is called CMSGen (research paper here). It effectively takes CryptoMiniSat and applies the following set of changes through pre-set command line options (a conceptual sketch of the resulting search loop follows the list):

  • Pick polarities at random. Normally, SAT solvers use a polarity caching scheme, but of course we want uniform samples over the whole search space, so we need to pick polarities at random.
  • Branch on variables at random. Normally, SAT solvers branch on variables that will most likely lead to a conflict, to maximize search efficiency (the VSIDS heuristic). However, we want to explore the solution space as evenly as possible, so we want to approach it from as many angles as possible. If you think of the search space as an N-dimensional binary cube, we are trying to approach this cube from as many directions as we can.
  • Turn off all pre- and inprocessing. Pre- and inprocessing in SAT solvers are used to minimize the instance, transforming it into something easier to solve, e.g. through Bounded Variable Elimination. A viable solution is later reconstructed from the solution to the transformed instance. However, the transformed instance may (and often will!) have a very different solution space. We cannot have that, so we must turn this off. To be fair, some pre- and inprocessing could be left intact, e.g. subsumption and self-subsuming elimination; perhaps a future paper :)
  • Restart at static intervals. Restarts are nowadays often dynamic in modern SAT solvers, or if not dynamic, they follow a non-regular pattern. However, that could distort how we find solutions. Let’s say that solutions with variable A set to TRUE are very easy to find, but solutions with A set to FALSE are very hard to find. What will happen? In restarts where A was randomly set to TRUE, we’ll quickly find a solution and output it. But in restarts where A was randomly set to FALSE, the system will struggle to find a solution and, after some conflicts, simply restart into a state where hopefully A is set to TRUE and a solution can be found again. It is quite clear that this leads to serious issues with sampling quality. Hence, we set an adjustable but static restart interval of 100 conflicts, with higher values typically leading to more uniform samples.
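To make the above concrete, here is a minimal sketch of the resulting search loop; the solver object and its methods are invented stand-ins for the usual CDCL internals, not CMSGen’s actual C++ code:

import random

RESTART_INTERVAL = 100  # static, in conflicts; higher => more uniform samples

def sample_one(solver):
    # One sampling round on a hypothetical CDCL engine ('solver' and its
    # methods are illustrative only). Pre- and inprocessing are assumed off.
    conflicts = 0
    while True:
        var = random.choice(solver.unassigned_vars())  # random branching
        polarity = random.choice([True, False])        # random polarity
        solver.assign(var, polarity)
        status = solver.propagate()
        if status == "SAT":
            return solver.model()
        if status == "CONFLICT":
            solver.analyze_and_backtrack()
            conflicts += 1
            if conflicts >= RESTART_INTERVAL:  # static restart interval
                solver.restart()
                conflicts = 0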

Performance of CMSGen

Performance of the system is on the ridiculous scale in comparison with other samplers:

When it comes to 2-wise coverage, i.e. the quality of samples, the data speaks for itself (note, UniGen is missing here because it was too slow):

Note that of STS and QuickSampler, STS is the more uniform sampler but also the slower one. CMSGen overcomes this trade-off: it’s both faster than QuickSampler and more uniform than STS.

And of course, the Barbarik tester gives “Accept” on CMSGen much more often than on STS or QuickSampler:

Conclusion

If you need fast sampling and can live without uniformity guarantees, I’d go and try out CMSGen. It’s really a completely different beast. It’s not a guaranteed uniform sampler, but it’s incredibly effective. In fact, it’s so effective and works so well that it took me a full year to figure out how best to generate problems on which it isn’t uniform. But that’s another paper, and another blog post! In the meantime, the sampler is here, go check it out!

CryptoMiniSat 5.8.0 Released

After many months of work, CryptoMiniSat 5.8.0 has been released. In this post I’ll go through the most important changes and how they helped the solver to be faster and win a few awards, among them 1st place in the SAT incremental track, 3rd place in the SAT main track, and 2nd & 3rd place in the SMT BitVector tracks together with the STP and MinkeyRink solvers.

Gauss-Jordan Elimination

First and foremost, Gauss-Jordan elimination at all levels of the search is now enabled by default. This is thanks to the work detailed in the CAV 2020 paper (video here). The gist of the paper is that we take advantage of the bit-packed matrix and some clever bit-field filters to quickly check whether an XOR constraint is propagating, conflicting, or neither. This, along with a variety of other improvements, led to about a 3-10x speedup for the Gauss-Jordan elimination procedure.
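To give a flavor of the trick (a simplified sketch, not the paper’s actual data layout), pack each XOR row into machine words, mask out the assigned variables, and use popcount to classify the row in a handful of instructions:

def row_status(row, assigned_mask, values_mask, rhs):
    # Sketch of checking one bit-packed XOR row. A Python int stands in
    # for the packed 64-bit words; bin(...).count("1") stands in for the
    # CPU popcount instruction the real code uses.
    unassigned = row & ~assigned_mask
    n_unassigned = bin(unassigned).count("1")
    parity = bin(row & assigned_mask & values_mask).count("1") & 1
    if n_unassigned == 0:
        return "conflicting" if parity != rhs else "satisfied"
    if n_unassigned == 1:
        return "propagating"  # the lone free variable is forced
    return "nothing"

# XOR over vars 0,1,2 with rhs=1; vars 0 and 1 assigned TRUE, var 2 free:
print(row_status(0b111, 0b011, 0b011, 1))  # propagating: var 2 forced to TRUE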

With this speedup, the overhead is quite small, so we now enable G-J elimination at all times. However, there are still limits on the size and number of matrices, and we disable it if it doesn’t seem to improve performance.

As a bit of reflection: our original paper on CryptoMiniSat, with Nohl and Castelluccia, featuring Gauss-Jordan elimination at all levels of the search tree, was published at SAT 2009. It took about 11 years of work, and in particular the work of Han and Jiang, to get to this point, but we finally arrived. The difference is day and night.

Target Phases

This one is really cool, and it’s in CaDiCaL (direct code link here) by Armin Biere, description here (on page 8). If you look at the SAT Race of 2019, you will see that CaDiCaL solved a lot more satisfiable problems than any other solver. If you dig deep enough, you’ll see it’s because of target phases.

Basically, target phases are a variation of phase saving: instead of saving the phase every time the solver backtracks, the phase is only saved when backtracking from a depth deeper than anything seen before. Furthermore, it does more than just this: sometimes it picks only the TRUE phase, and sometimes only the FALSE phase. To spice it up, you can keep a “local deepest” and a “global deepest” if you like, and even pick inverted phases.

It’s pretty self-explanatory if you read this code (basically, just switching between normal, target, inverted, fixed-FALSE, and fixed-TRUE phases), and it helps tremendously. If you look at the graphs of the SAT 2020 competition results (slide no. 19 here), you will see a bunch of solvers being way ahead of the competition. That’s target phases right there.
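A rough sketch of the rephasing idea (the names and schedule here are my own simplification, not CaDiCaL’s actual code):

from itertools import cycle

def make_phase_picker(saved, target, best):
    # 'saved' is the usual phase-saving array, 'target' the phases from
    # the deepest trail seen so far, 'best' a global-best copy; all are
    # dicts mapping var -> bool. All names are invented for illustration.
    schedule = cycle(["saved", "target", "best", "inverted",
                      "all_true", "all_false"])

    def pick(var):
        mode = pick.mode
        if mode == "saved":
            return saved[var]
        if mode == "target":
            return target[var]
        if mode == "best":
            return best[var]
        if mode == "inverted":
            return not saved[var]
        return mode == "all_true"  # fixed TRUE or fixed FALSE

    def next_restart():  # switch phase policy at each restart
        pick.mode = next(schedule)

    pick.mode = next(schedule)
    pick.next_restart = next_restart
    return pick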

CCAnr Local Search Solver

CryptoMiniSat gained a new local search solver, CCAnr (paper here), and it’s now the default. CCAnr is by Shaowei Cai, who very kindly let me add his solver to CryptoMiniSat and allowed me to add him as an author to the version of CryptoMiniSat that participated in the SAT competition. Being a local search solver, it can only solve satisfiable instances, which it does by always working on a full candidate assignment that it tries to “massage” into a full solution.

Within CryptoMiniSat, CCAnr takes its starting candidate solution from the phases inside the CDCL solver and tries to extend it to satisfy all the clauses. If it finds a satisfying assignment, that is emitted as the result. If it doesn’t, the best candidate (the one that satisfies the most clauses) is saved into the CDCL phases and used later by the CDCL solver. Furthermore, some statistics gathered during the local search phase are injected into the variable branching heuristics of the CDCL solver, see code here.
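Conceptually, the handoff looks something like this; every name below is invented for illustration and none of it is the real API:

def cdcl_with_local_search(cdcl, ccanr, clauses):
    # Sketch of the CDCL <-> CCAnr interplay described above.
    candidate = cdcl.current_phases()              # starting point from CDCL
    result = ccanr.search(clauses, start=candidate)
    if result.satisfying:
        return result.assignment                   # emit the full solution
    # Otherwise feed back the best candidate (most clauses satisfied),
    # and let local-search statistics nudge the branching heuristics.
    cdcl.set_phases(result.best_assignment)
    cdcl.bump_branching_stats(result.flip_counts)
    return None  # CDCL continues from an improved starting point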

Hybrid Variable Branching

Variable branching in CryptoMiniSat has always been a mix of the VSIDS (Variable State Independent Decaying Sum, paper here) and Maple (multi-armed bandit based, paper here) heuristics. However, both Maple and VSIDS have a bunch of internal parameters that work best for one type of SAT problem or another.

To get around the issue of trying to find a single optimal value for all problems, CryptoMiniSat now uses a combination of configurations parsed from the command line, such as “maple1 + maple2 + vsids2 + maple1 + maple2 + vsids1”, which allows different configurations of both Maple and VSIDS (v1 and v2 of each) to be set up and used right from the command line. This configuration system allows a wider variety of problems to be solved efficiently.
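For illustration, parsing such a string into a rotation of branching configurations could look like this (a sketch with made-up parameter values; the real parsing and parameters live inside CryptoMiniSat):

from itertools import cycle

HEURISTIC_CONFIGS = {
    # Hypothetical parameter sets, purely for illustration.
    "maple1": {"heuristic": "maple", "step_size_dec": 1e-6},
    "maple2": {"heuristic": "maple", "step_size_dec": 1e-7},
    "vsids1": {"heuristic": "vsids", "var_decay": 0.80},
    "vsids2": {"heuristic": "vsids", "var_decay": 0.95},
}

def parse_branch_schedule(spec):
    # Turn "maple1 + maple2 + ..." into an endless rotation of branching
    # configurations, to be switched at set intervals during search.
    names = [tok.strip() for tok in spec.split("+")]
    return cycle(HEURISTIC_CONFIGS[name] for name in names)

schedule = parse_branch_schedule("maple1 + maple2 + vsids2 + maple1 + maple2 + vsids1")
print(next(schedule))  # config used for the first stretch of the search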

Final Remarks

CryptoMiniSat is now used in many systems. It is the default SAT solver in:

I think the above, especially given these systems’ track record of high performance in their respective fields, shows that CryptoMiniSat is indeed a well-performing and reliable workhorse. This is thanks to many people, including, but not limited to, Kuldeep Meel, Kian Ming A. Chai, Trevor Hansen, Arijit Shaw, Dan Liew, Andrew V. Jones, Daniel Fremont, Martin Hořeňovský, and others who have all contributed pull requests and valuable feedback. Thanks!

As always, let me know if you have any feedback regarding the solver. You can create a GitHub issue here and a pull request here. I am always interested in new use-cases and I am happy to help integrate it into new systems.

ApproxMCv3, a modern approximate model counter

This blog post and the work underlying it have been brewing for many years, and I’m extremely happy to be able to share it with you now. Kuldeep Meel and I have been working very hard on speeding up approximate model counting for SAT, and I think we have made real progress. The research paper, accepted at AAAI-19, is available here. The code is available here (release with static binary here). The main result is that we can solve a lot more problems than before. The speed of solving is orders(!) of magnitude faster than the previous best system:

Background

The idea of approximate model counting, originally by Chakraborty, Meel and Vardi, was a huge hit back in 2013, and many papers have followed it, trying to improve its results. All of them were basically tied to CryptoMiniSat, the SAT solver that I maintain, as all of them relied on XOR constraints being added to the regular CNF of a typical SAT problem.

So it made sense to examine what CryptoMiniSat could do to improve the speed of approximate counting. This interestingly coincided with the period when I was giving up on XORs in CryptoMiniSat. The problem was the following: a lot of new in- and preprocessing systems were being invented, mostly by Armin Biere et al., and I quickly realised that I simply couldn’t keep adding them, because they didn’t take XOR constraints into account. They handled CNF just fine, but not XORs. So XORs became a burden, and I removed them in versions 3 and 4 of CryptoMiniSat. But there was a need, and Kuldeep made it very clear to me that this is an exciting area. So they had to come back.

Blast-Inprocess-Recover-Destroy

But how to both have and not have XOR constraints? Re-inventing all the algorithms for XORs was not a viable option. The solution I came up with was rather trivial: forget the XORs during inprocessing and recover them afterwards. The CNF would always remain the source of truth. Extracting all the XORs after in- and preprocessing would allow me to run Gauss-Jordan elimination on the recovered XORs. So I could have my cake and eat it too.

The process is conceptually quite easy:

  1. Blast all XORs that are in the input into clauses, using intermediate variables (see the sketch below). I had all the setup for this, as I was already doing Bounded Variable Addition (also by Biere et al.), so I didn’t have to write code to “hide” these additional variables.
  2. Perform pre- or inprocessing. I actually only do inprocessing nowadays (as it has faster startup time). But preprocessing is just inprocessing at the start ;)
  3. Recover the XORs from the CNF. There were some trivial methods around, but they didn’t work as well as one would have hoped; more on that later.
  4. Run the CDCL and Gauss-Jordan code at the same time.
  5. Destroy the XORs and goto 2.

This system allows for everything to be in CNF form, lifting the XORs out when necessary and then forgetting them when it’s convenient. All of these steps are rather trivial, except, as I later found out, recovery.
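As an example of step 1: an XOR over k variables blasted directly would need 2^(k-1) clauses, so long XORs are first cut into short chunks connected by fresh intermediate variables. A sketch of the encoding (my own illustration, not CryptoMiniSat’s code):

from itertools import combinations

def blast_small_xor(variables, rhs):
    # Direct encoding: block every assignment with the wrong parity.
    # Clauses with k negations (k of the wrong parity) do exactly that.
    clauses = []
    for k in range(len(variables) + 1):
        if k % 2 != rhs:
            for negs in combinations(range(len(variables)), k):
                clauses.append([-v if i in negs else v
                                for i, v in enumerate(variables)])
    return clauses

def blast_xor(variables, rhs, next_fresh_var, cut=3):
    # Cut a long XOR into chunks of 'cut' variables, chained together by
    # fresh variables; variables are DIMACS-style positive ints.
    clauses = []
    while len(variables) > cut:
        fresh = next_fresh_var()
        head, variables = variables[:cut - 1], [fresh] + variables[cut - 1:]
        clauses += blast_small_xor(head + [fresh], 0)  # fresh = XOR of head
    clauses += blast_small_xor(variables, rhs)
    return clauses

fresh_vars = iter(range(6, 1000))
cnf = blast_xor([1, 2, 3, 4, 5], 1, lambda: next(fresh_vars))
print(len(cnf), cnf[:2])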

XOR recovery

Recovering XORs sounds like a trivial task. Let’s say we have the following clauses:

 x1 V  x2 V  x3
-x1 V -x2 V  x3
 x1 V -x2 V -x3
-x1 V  x2 V -x3

This is conceptually equivalent to the XOR x1+x2+x3=1. So recovering it is trivial, and it has been done before, by Heule in particular in his PhD thesis. The issue is the following: a stronger system than the above still implies the XOR, but doesn’t look the same. Let me give an example:

 x1 V  x2 V  x3
-x1 V -x2 V  x3
 x1 V -x2 V -x3
-x1 V  x2

This is almost the same as the previous set of clauses, but one clause is missing a literal. Of course, it still implies the XOR. Now what? And what to do when missing literals mean that an entire clause can be missing? The algorithm to recover XORs in such cases is non-trivial: not only because the number of combinations of missing literals and clauses is exponential, but also because this work must be done extremely fast, as SAT solvers are sensitive to time.
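One way to convince yourself that the weaker-looking set still implies the XOR is brute force (exponential, so only a sanity check, nothing like the fast recovery algorithm in the paper):

from itertools import product

def implies_xor(clauses, n_vars, rhs):
    # Check: does every assignment satisfying the clauses also satisfy
    # x1 + ... + xn = rhs (mod 2)? Literals are DIMACS-style ints.
    for assignment in product([False, True], repeat=n_vars):
        sat = all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
                  for clause in clauses)
        if sat and sum(assignment) % 2 != rhs:
            return False
    return True

weaker = [[1, 2, 3], [-1, -2, 3], [1, -2, -3], [-1, 2]]  # the second set above
print(implies_xor(weaker, n_vars=3, rhs=1))  # True: the XOR still holds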

The paper explains all the bit-fiddling and cache-friendly data layout used, along with some fun algorithms that I’m sure some people will like. We even managed to use compiler intrinsics to emit target-specific assembly instructions for Hamming weight calculation. It’s a blast. Take a look.

The results

The results, as shown above, speak for themselves. Problems that took thousands of seconds to solve can now be solved in under 20. The reason for such an incredible speedup is basically the following: CryptoMiniSat v2 was way too clunky and didn’t have all the fun stuff that CryptoMiniSat v5 has, plus its XOR handling was incorrect, losing XORs and the like. The published algorithm solves the underlying issue and allows CNF pre- and inprocessing to happen independently of XORs, thus enabling CryptoMiniSat v5 to be used in all its glory. And CryptoMiniSat v5 is fast, as per this year’s SAT Competition results.

Some closing words

Finally, I want to say thank you to Kuldeep Meel, who got me into the National University of Singapore to do the work above and lots of other cool work that we will hopefully publish soon. I would also like to thank the National Supercomputing Centre Singapore, which allowed us to run a ton of benchmarks on their machines, using at least 200 thousand CPU hours to make this paper. This gave us the chance to debug all the weird edge cases and get this system up to a speed where it beats the best exact counters by a wide margin. And finally, thanks to all the great people I had the chance to meet, and sometimes work with, at NUS; it was a really nice time.

CryptoMiniSat 5.6.3 Released

The latest CryptoMiniSat, version 5.6.3, has been released. This release marks the 12’000th commit to this solver, which has weathered more than I originally intended it to weather. It’s been an interesting ride, and I have Kuldeep and NSCC’s ASPIRE-1 cluster to thank for this release. I have burned over 200k CPU hours to make this release, so it’s a pretty well-performing one (out-performing anything out there by a wide margin), though I’m working very hard to make sure that neither I nor anyone else will have to burn anything close to that to make a well-performing SAT solver.

The solver has some interesting new algorithms inside, the most interesting of which is Gauss-Jordan elimination using a Simplex-like method, generously contributed by Jie-Hong Roland Jiang and Cheng-Shen Han from National Taiwan University. This addition should finally settle the issues regarding Gaussian vs. Gauss-Jordan elimination in SAT solvers. Note that to use this novel system, you must configure with “cmake -DUSE_GAUSS=ON ..” and then re-compile.
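As a quick usage note: if I recall the Python bindings (pycryptosat) correctly (treat the exact signatures as an assumption and check the project docs), feeding the solver XOR constraints that the Gauss-Jordan code can work on looks roughly like this:

from pycryptosat import Solver

s = Solver()
s.add_clause([1, 2, 3])            # x1 OR x2 OR x3
s.add_xor_clause([1, 2, 3], True)  # x1 XOR x2 XOR x3 = TRUE
sat, solution = s.solve()
print(sat, solution)  # solution is indexed by variable number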

What’s also interesting is what’s not inside, though. I have been reading (maybe too much) Nassim Taleb, and he is very much into via negativa. So I tried removing algorithms that have been in the solver for a while and whose usefulness mostly nobody would question. In the end, I removed the following algorithms from running by default, with each removal leading to better solving times:

  • Regular probing. Intree probing is significantly better, so regular probing is not needed. Thanks Matti/Marijn/Armin!
  • Stamping. This was a big surprise, especially because I also had to remove caching, which is my own, crappy (“different”) version of stamping. I knew that it wasn’t always so good to have, but damn. It was a hard call, but if it’s just slowing it down, what can I do. It’s weird.
  • Burst searching. This is when I search for a short period with high randomness all over the search space. I thought it would allow me to explore parts of the search space that VSIDS/Maple doesn’t reach. Why this slows the solver down so much may say more about search heuristics/variable bumping/clause bumping than anything else.
  • Note that I have never had blocked clause elimination. It doesn’t work well for incremental solving. In fact, it doesn’t work at all, though apparently the authors have some new work that allows it to work. Super-interesting!

I’m nowadays committed to understanding this damned thing rather than adding another impossible-to-explain magic constant to make the solving 10% faster. I think there is interesting stuff out there that could be done to make SAT solvers 10x, not 10%, faster.