SAT Solvers as Smart Search Engines

Satisfiability problem solvers, or SAT solvers for short, try to find a solution to decidable, finite problems such as those arising in cryptography, planning, scheduling, and the like. They are very finely tuned engines that can be looked at in two main ways. One is to see them as proof generators, where the SAT solver builds a proof of unsatisfiability as it runs, i.e. it tries to prove that there is no solution to the problem. Another way is to see SAT solvers as smart search engines. In this blog post, I'll take the latter view and try to explain why I think intermediary variables are important. So, for the sake of argument, let's forget that SAT solvers sometimes restart the search (forgetting where they were before) and learn clauses (cutting down the search space and remembering where not to go again). Let's just pretend all they do is search.

Searching

The CryptoMiniSat SAT solver used to be able to generate graphs that show how a search through the search space went. Search spaces in these domains are exponential in size, say 2^n when there are n variables involved. I don't have the search visualization code anymore, but below is an example of such a search tree. The search starts at the very top, not far from the middle, descends towards the bottom left, then iteratively backtracks all the way to the top, and then descends towards the bottom right. Every pentagon at the bottom of a line is a place where the SAT solver backtracked. Observe that it never goes all the way back to the top — except once, when the top assignment needs to be flipped. Instead, it only goes back some way, partially unassigning variables. The bottom right corner is where the solution is found after many, many partial backtracks and associated partial unassignments:

What I want you to take away from this graph is the following: the solver iteratively tries to set a variable to a value, calculates forward, and if that doesn't work, it partially backtracks, flips the value to its opposite, and descends again.
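
To make this concrete, here is a minimal sketch of that set-propagate-backtrack-flip loop, written as a plain recursive DPLL-style search in Python. It is only an illustration of the search view, not CryptoMiniSat's actual code, and the helper logic is deliberately naive:

def search(clauses, assignment):
    # Simplify every clause under the current partial assignment
    remaining = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                        # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                     # clause falsified: backtrack
        remaining.append(reduced)
    if not remaining:
        return assignment                   # everything satisfied: solution
    for clause in remaining:
        if len(clause) == 1:                # unit clause: forced assignment
            return search(clauses, assignment | {clause[0]})
    var = abs(remaining[0][0])              # pick a branching variable
    for lit in (var, -var):                 # try one value, flip on failure
        result = search(clauses, assignment | {lit})
        if result is not None:
            return result
    return None                             # both values failed: backtrack more

# Tiny example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(search([[1, 2], [-1, 2], [-2, 3]], frozenset()))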

Brute force search vs. SAT solving

Trying one value and then trying the other sounds suspiciously like brute force search. Brute force search does exactly that, in a systematic and incredibly efficient way. We can build highly specialized executables, and even hardware, to perform this task. If you look at e.g. Bitcoin mining, you will see a lot of specialized hardware, ASICs, doing brute-force search. And if you look at rainbow tables, you'll see a lot of bit slicing.

So why waste our time doing all this fancy value propagation and backtracking when we could use a much more effective, systematic search system? The answer is: if you generated your problem description badly, then there is basically no good reason, and you are probably better off doing brute-force search. But if you did it well, then a SAT solver can perform a significantly better search than brute force. The trick lies in intermediary variables and partial value assignments.

Partial value assignments

So let's say that your brute force engine is about to check one input variable setting. It sets the input variables, runs the whole algorithm, and computes the output. The output is wrong, though. Here is where things get weird. The brute force engine now completely erases its state, takes another input, and runs the whole algorithm again.

So, brute force does the whole calculation again, starting from a clean state, every time. What we have to recognize is that this is actually a design choice. Another design choice is to calculate what variables were affected by one of the input bits, unset these variables, flip the input bit value, and continue running the calculation. This has the following requirements:

  1. A way to quickly determine which intermediate values depend on which other ones so we can unset variables and know which intermediate, already calculated, dependent variables also need to be unset.
  2. A way to quickly unset variables.
  3. A good set of intermediary values so we can keep as much state of the calculation as possible.

If you think about it, the above is what SAT solvers do, well, mostly. In fact, they do (1) only partially: they allow variables to be unset only in reverse chronological order. Calculating and maintaining a complete dependency graph seems too expensive. So we unset more variables than we need to. But we can unset them quickly and correctly, and we compensate for the lack of a correct dependency check in (1) by caching polarities. This caches the values of the independent-but-nevertheless-unset variables and then hopes to reassign them later to the correct value. Not perfect, but not too shabby either.
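
A rough sketch of this reverse-chronological unassignment and the polarity cache, in plain Python with invented names (a real solver does this with far more care and far less overhead):

trail = []           # variables in the order they were assigned
assignment = {}      # variable -> current value
polarity_cache = {}  # variable -> the value it had when it was last unset

def assign(var, value):
    assignment[var] = value
    trail.append(var)

def backtrack_to(trail_length):
    # Unset everything assigned after this point, newest first. Some of
    # these variables may not actually depend on the decision being flipped;
    # we unset them anyway, but remember the value they had.
    while len(trail) > trail_length:
        var = trail.pop()
        polarity_cache[var] = assignment.pop(var)

def preferred_value(var):
    # When the variable gets assigned again, first try its cached value.
    return polarity_cache.get(var, False)

assign(1, True); assign(2, False); assign(3, True)
backtrack_to(1)                  # variables 2 and 3 are unset
print(preferred_value(3))        # True: the cached polarity is reused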

Modeling and intermediary variables

To satisfy requirement (3) one must have a good set of intermediary variables in the input problem (described in DIMACS format), so the SAT solver can both backtrack and evaluate partially. Unfortunately, this is not really in the hands of the SAT solver. It is in the hands of the person describing the problem. Modeling is the art of transforming a problem that is usually expressed in natural language (such as “A person cannot be scheduled to be on a night shift twice in a row”) into a problem that can be given to a SAT solver.
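
To make that concrete, here is one possible encoding of the night-shift rule above, sketched in Python with DIMACS-style integer literals (the variable numbering and the seven-day horizon are my own choices, not taken from any particular tool):

days = 7
# Variable d+1 means "the person works the night shift on day d"
# (DIMACS variables are numbered from 1).
clauses = []
for d in range(days - 1):
    # "not night_d OR not night_(d+1)": no two night shifts in a row
    clauses.append([-(d + 1), -(d + 2)])

# Print in DIMACS format, ready to be handed to a SAT solver
print("p cnf %d %d" % (days, len(clauses)))
for clause in clauses:
    print(" ".join(map(str, clause)) + " 0")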

Modeling has lots of interesting constraints, one of which I often hear and am confused by: that it should minimize the number of variables. Given the above, that SAT solvers can be seen as partial evaluation engines that thrive on the fact that they can partially evaluate and partially backtrack, why would anyone try to minimize the number of variables? If the solver has no intermediary variables to backtrack to, it will simply backtrack all the way to the beginning every time, thus becoming a really bad brute-force engine that incidentally tracks a dependency graph and is definitely not optimized for the task at hand.

Some final thoughts

In the above I took a premise, i.e. that SAT solvers are just search engines, and ran with it. I don't think the results are that surprising. Of course, nothing is black-and-white. Having hundreds of millions of variables in your input is not exactly optimal. But minimizing the number of variables given to a SAT solver at the expense of expressive intermediate variables is a huge no-no.

Bosphorus, an ANF and CNF simplifier and converter

I am happy to finally release a piece of work that I started many years ago at Security Research Labs (many thanks to Karsten Nohl there). Back in the day, it helped us break multiple real-world ciphers. The released system is called Bosphorus, and it includes major, game-changing work by Davin Choo and Kian Ming A. Chai from DSO National Laboratories Singapore, with great help from Kuldeep Meel of NUS. The paper will be published at the DATE 2019 conference.

ANFs and CNFs

Algebraic Normal Form (ANF) is the form most cryptographers use to describe symmetric ciphers, hash algorithms, and lately a lot of post-quantum asymmetric ciphers. It's a very simple notation that basically looks like this:

x1 ⊕ x2 ⊕ x3 = 0
x1 * x2 ⊕ x2 * x3 ⊕ 1 = 0

Where "⊕" represents XOR and "*" represents the AND operator. So the first line here is an XOR of the binary variables x1, x2 and x3, and their XOR must be equal to 0. The second line means that "(x1 AND x2) XOR (x2 AND x3)" must be equal to 1. This normal form lets us see a bunch of interesting things. For example, it lets us see the so-called "maximum degree" of the set of equations, where the degree is the maximum number of variables AND-ed together in one line. The above set of equations has a maximum degree of 2, as (x1*x2) is of degree 2. Degrees can often be a good indicator of the complexity of a problem.
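
To see what the two equations above mean concretely, here is a tiny brute-force check in Python (nothing to do with Bosphorus itself) that enumerates all assignments of x1, x2, x3 and prints the ones satisfying both equations:

from itertools import product

for x1, x2, x3 in product([0, 1], repeat=3):
    eq1 = (x1 ^ x2 ^ x3) == 0                # x1 + x2 + x3 = 0
    eq2 = ((x1 & x2) ^ (x2 & x3) ^ 1) == 0   # x1*x2 + x2*x3 + 1 = 0
    if eq1 and eq2:
        print(x1, x2, x3)                    # prints 0 1 1 and 1 1 0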

What's good about ANFs is that there are a number of well-known algorithms to break problems described in them. For example, one can do (re)linearization and Gauss-Jordan elimination, or one can run Gröbner-basis algorithms such as F4/F5 on them. Sometimes, the ANFs can also be solved by converting them to another normal form, Conjunctive Normal Form (CNF), used by SAT solvers. The CNF normal form looks like this:

x1 V x2 V x3
-x1 V x3

Where x1, x2 and x3 are binary variables, "V" is the logical OR, and each line must be equal to TRUE. Using CNF is interesting because the solvers used to solve it, SAT solvers, typically provide a different set of trade-offs than ANF problem solvers. SAT solvers tend to use more CPU time but a lot less memory, sometimes making problems viable to solve in the "real world". Whereas sometimes it is enough to demonstrate the break of a cipher on paper, it also happens that one wants to break a cipher in the real world.
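
As a small aside, the two clauses above can be handed directly to a SAT solver. For example, with CryptoMiniSat's Python bindings, assuming the pycryptosat module is installed and exposes its usual add_clause/solve interface, the call would look roughly like this:

from pycryptosat import Solver

s = Solver()
s.add_clause([1, 2, 3])    # x1 V x2 V x3
s.add_clause([-1, 3])      # -x1 V x3
sat, solution = s.solve()
# solution is indexed by variable number; solution[0] is unused
print(sat, solution[1], solution[2], solution[3])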

Bridging and Simplifying

Bosphorus is, I believe, a first-of-its-kind system that allows ANFs to be simplified using both CNF- and ANF-based systems. It can also convert between the two normal forms and can act both as an ANF and as a CNF preprocessor, like SatELite (by Eén and Biere) was for CNF. I believe this makes Bosphorus unique and also uniquely useful, especially if you are working on ANFs.

Bosphorus uses an iterative architecture that performs the following set of steps, either until it runs out of time or until it reaches a fixed point:

  1. Replace variables and propagate constants in the ANF
  2. Run limited Extended Linearization (XL) and inject back unit and binary XORs
  3. Run limited ElimLin and inject back unit and binary XORs
  4. Convert to CNF, run a SAT solver for a limited number of conflicts and inject back unit and binary (and potentially longer) XORs

In other words, the system is an iterative simplifier/preprocessor that invokes multiple reasoning systems to try to simplify the problem as much as possible. It can outright solve the system, as most of these reasoning systems are complete, but the point is to run them only to a certain limit and inject the easily "digestible" information back into the ANF. The simplified ANF can then be output either as an ANF or as a CNF.

Bosphorus can also take a CNF as input, perform the trivial transformation of it to ANF and then treat it as an ANF. This allows the CNF to be simplified using techniques previously unavailable to CNF systems, such as XL.

ANF to CNF Conversion

I personally think that ANF-to-CNF conversion is actually not that hard, and that’s why there hasn’t been too much academic effort devoted to it. However, it’s an important step without which a lot of opportunities would be missed.

The implemented system contains a pretty advanced ANF-to-CNF converter, using Karnaugh tables through Espresso, XOR cutting, monomial reuse, etc. It should give you a close-to-optimal CNF for all ANFs. So you can use Bosphorus also just as an ANF-to-CNF converter, though it's so much more.
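
To give a flavour of what such a conversion does, take the single ANF equation x1*x2 ⊕ x3 = 0. A straightforward (and not Bosphorus-grade) conversion introduces an auxiliary variable for the monomial x1*x2 and emits clauses tying everything together. A toy Python check of that idea, with variable numbering of my own choosing:

from itertools import product

# ANF: x1*x2 XOR x3 = 0. Variable 4 is the auxiliary "a" for the monomial x1*x2.
clauses = [
    [-4, 1],           # a -> x1
    [-4, 2],           # a -> x2
    [4, -1, -2],       # (x1 AND x2) -> a
    [-4, 3], [4, -3],  # a XOR x3 = 0, i.e. a <-> x3
]

def satisfied(clauses, model):
    return all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

# The clauses are satisfiable (for some value of a) exactly when the ANF holds.
for x1, x2, x3 in product([False, True], repeat=3):
    anf_holds = not ((x1 and x2) ^ x3)
    cnf_ok = any(satisfied(clauses, {1: x1, 2: x2, 3: x3, 4: a})
                 for a in (False, True))
    assert anf_holds == cnf_ok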

Final Thoughts

What I find coolest about Bosphorus is that it can simplify/preprocess ANF systems so that more heavyweight ANF solvers can have a go at them. Its ANF simplification is so powerful, it can even help solve some CNFs by lifting them to ANF, running the ANF simplifiers, converting the result back to CNF, and solving that(!). I believe our initial results, published in the paper, are very encouraging. Further, the system is in a ready-to-use state: there is a Docker image, the source should build without a hitch, and there is even a precompiled Linux binary. Good luck using it, and let me know how it went!

ApproxMCv3, a modern approximate model counter

This blog post and its underlying work have been brewing for many years, and I'm extremely happy to be able to share it with you now. Kuldeep Meel and I have been working very hard on speeding up approximate model counting for SAT, and I think we have made real progress. The research paper, accepted at AAAI-19, is available here. The code is available here (release with static binary here). The main result is that we can solve a lot more problems than before, and the speed of solving is orders(!) of magnitude faster than the previous best system.

Background

The idea of approximate model counting, originally by Chakraborty, Meel and Vardi, was a huge hit back in 2013, and many papers have followed it, trying to improve its results. All of them were basically tied to CryptoMiniSat, the SAT solver that I maintain, as all of them relied on XOR constraints being added to the regular CNF of a typical SAT problem.
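
For readers new to the idea: the XOR constraints act as a hash that randomly splits the solution space into cells; you count the solutions in one cell and scale back up. Here is a toy, brute-force illustration of that intuition in Python; it is only the intuition, not ApproxMC's actual algorithm, its hash construction, or its guarantees:

import random
from itertools import product

def satisfied(clauses, model):
    return all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

def approx_count(clauses, n_vars, n_xors):
    # Each random XOR constraint keeps roughly half of the solutions.
    xors = [(random.sample(range(1, n_vars + 1), random.randint(1, n_vars)),
             random.randint(0, 1)) for _ in range(n_xors)]
    surviving = 0
    for values in product([False, True], repeat=n_vars):
        model = {v + 1: values[v] for v in range(n_vars)}
        if not satisfied(clauses, model):
            continue
        if all(sum(model[v] for v in vs) % 2 == rhs for vs, rhs in xors):
            surviving += 1
    return surviving * 2 ** n_xors          # scale the cell count back up

# Example: estimate the model count of (x1 or x2) and (not x2 or x3)
print(approx_count([[1, 2], [-2, 3]], n_vars=3, n_xors=1))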

So it made sense to examine what CryptoMiniSat could do to improve the speed of approximate counting. Interestingly, this coincided with me giving up on XORs in CryptoMiniSat. The problem was the following. A lot of new in- and preprocessing systems were being invented, mostly by Armin Biere et al., and I quickly realised that I simply couldn't keep adding them, because they didn't take XOR constraints into account. They handled CNF just fine, but not XORs. So XORs became a burden, and I removed them in versions 3 and 4 of CryptoMiniSat. But there was a need, and Kuldeep made it very clear to me that this is an exciting area. So, they had to come back.

Blast-Inprocess-Recover-Destroy

But how to both have and not have XOR constraints? Re-inventing all the algorithms for XORs was not a viable option. The solution I came up with was a rather trivial one: forget the XORs during inprocessing and recover them afterwards. The CNF would always remain the source of truth. Extracting all the XORs after in- and preprocessing would allow me to run the Gauss-Jordan elimination on the XORs post-recovery. So I can have my cake and eat it too.

The process is conceptually quite easy:

  1. Blast all XORs in the input into clauses, using intermediate variables. I had all the setup for this, as I was doing Bounded Variable Addition (also by Biere et al.), so I didn't have to write code to "hide" these additional variables.
  2. Perform pre- or inprocessing. I actually only do inprocessing nowadays (as it has faster startup time). But preprocessing is just inprocessing at the start ;)
  3. Recover the XORs from the CNF. There were some trivial methods around. They didn't work as well as one would have hoped, but more on that later.
  4. Run the CDCL and Gauss-Jordan code at the same time.
  5. Destroy the XORs and goto 2.

This system allows for everything to be in CNF form, lifting the XORs out when necessary and then forgetting them when it’s convenient. All of these steps are rather trivial, except, as I later found out, recovery.
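
For context on step (1): directly blasting a k-variable XOR needs 2^(k-1) clauses, so long XORs are first cut into short ones chained through fresh intermediate variables. Here is a rough Python sketch of both operations; the cut size, naming and structure are my own simplifications, not CryptoMiniSat's code:

from itertools import product

def cut_xor(xor_vars, rhs, next_free_var, cut=3):
    # Split a long XOR into chained XORs of at most `cut` variables each,
    # introducing fresh intermediate variables that carry the running parity.
    xors = []
    vars_left = list(xor_vars)
    while len(vars_left) > cut:
        head, vars_left = vars_left[:cut - 1], vars_left[cut - 1:]
        xors.append((head + [next_free_var], 0))
        vars_left.insert(0, next_free_var)
        next_free_var += 1
    xors.append((vars_left, rhs))
    return xors

def blast_xor(xor_vars, rhs):
    # Turn one short XOR (parity of xor_vars equals rhs) into CNF clauses
    # by forbidding every assignment with the wrong parity.
    clauses = []
    for values in product([0, 1], repeat=len(xor_vars)):
        if sum(values) % 2 != rhs:
            clauses.append([v if val == 0 else -v
                            for v, val in zip(xor_vars, values)])
    return clauses

print(cut_xor([1, 2, 3, 4, 5], 1, next_free_var=6))
print(blast_xor([1, 2, 3], 1))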

XOR recovery

Recovering XORs sounds like a trivial task. Let's say we have the following clauses:

 x1 V  x2 V  x3
-x1 V -x2 V  x3
 x1 V -x2 V -x3
-x1 V  x2 V -x3

This is conceptually equivalent to the XOR x1+x2+x3=1. So recovering this is trivial, and it has been done before, by Heule in particular, in his PhD thesis. The issue with the above is the following: a stronger system than the above still implies the XOR, but doesn't look the same. Let me give an example:

 x1 V  x2 V  x3
-x1 V -x2 V  x3
 x1 V -x2 V -x3
-x1 V  x2

This is almost equivalent to the previous set of clauses, but it misses a literal from one of the clauses. It still implies the XOR, of course. Now what? And what to do when missing literals mean that an entire clause can be missing? The algorithm to recover XORs in such cases is non-trivial. It's non-trivial not only because of how many combinations of missing literals and clauses there can be (it's exponential), but also because one must do this work extremely fast, as SAT solvers are sensitive to time.
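
In the easy, complete case the recovery really is simple: gather the clauses over the same variables and check that every wrong-parity combination is forbidden. A toy Python sketch of that check (the real code additionally has to cope with missing literals and missing clauses, and has to be very fast, which is where the difficulty lies):

from itertools import product

def encodes_xor(clauses, variables, rhs):
    # True if `clauses` contain every clause needed to encode
    # "XOR of variables == rhs" (the complete, easy case only).
    have = {tuple(sorted(c)) for c in clauses}
    for values in product([0, 1], repeat=len(variables)):
        if sum(values) % 2 != rhs:
            needed = tuple(sorted(v if val == 0 else -v
                                  for v, val in zip(variables, values)))
            if needed not in have:
                return False
    return True

full = [[1, 2, 3], [-1, -2, 3], [1, -2, -3], [-1, 2, -3]]
print(encodes_xor(full, [1, 2, 3], 1))        # True: this is x1+x2+x3 = 1
print(encodes_xor(full[:-1], [1, 2, 3], 1))   # False: one clause is missing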

The algorithm in the paper explains all the bit-fiddling and the cache-friendly data layout used, along with some fun algorithms that I'm sure some people will like. We even managed to use compiler intrinsics to emit target-specific assembly instructions for Hamming weight calculation. It's a blast. Take a look.

The results

The results, as shown above, speak for themselves. Problems that took thousands of seconds to solve can now be solved in under 20. The reason for such an incredible speedup is basically the following. CryptoMiniSatv2 was way too clunky and didn't have all the fun stuff that CryptoMiniSatv5 has; plus, its XOR handling was incorrect, losing XORs and the like. The published algorithm solves the underlying issue and allows CNF pre- and inprocessing to happen independently of XORs, thus enabling CryptoMiniSatv5 to be used in all its glory. And CryptoMiniSatv5 is fast, as per this year's SAT Competition results.

Some closing words

Finally, I want to say thank you to Kuldeep Meel, who got me into the National University of Singapore to do the work above and lots of other cool work that we will hopefully publish soon. I would also like to thank the National Supercomputing Center Singapore, which allowed us to run a ton of benchmarks on their machines, using at least 200 thousand CPU hours to make this paper. This gave us the chance to debug all the weird edge-cases and to get this system up to the speed where it beats the best exact counters by a wide margin. Lastly, thanks to all the great people I had the chance to meet and sometimes work with at NUS; it was a really nice time.

CryptoMiniSat and Parallel SAT Solving

Since CryptoMiniSat has been getting quite a number of awards in parallel SAT solving, it's about time I talk about how it does that. There is a ton of literature on parallel SAT solving, and unfortunately I have barely had time to read any of it. The only research within the parallel SAT solving area that I think has truly weathered the test of time is Plingeling and Treengeling, and they are really interesting to play with. The rest most likely has some merit too, but I am usually suspicious, as the results are often (unintentionally) skewed to show how well the new idea performs, and in the end they rarely win too many awards, especially not in the long run (this is where Plingeling and Treengeling truly shine). I personally haven't published what I do in this area because I have always found it to be a bit too easy and hence to have little merit for publication, but maybe one day I will.

Note: by unintentionally skewed results I mean that as you change parameters, some will inevitably be better than others because of randomness in the SAT solving. This randomness is easy to mistake for positive results. It has happened to everyone, I’m sure, including myself.

Exploiting CryptoMiniSat-specific features

CryptoMiniSat has many different inprocessing systems and many parameters to turn them on/off or to tune them. It has over 60k lines of code, which allows this kind of flexibility. This is unlike the Maple*/Glucose* set of solvers, all coming from MiniSat, which basically can do one thing, and one thing only, really well. That seriously helps in the single-threaded setup, but may be an issue when it comes to multi-threading. They have (almost) no inprocessing (there is now vivification in some Maple* solvers) and no complicated preprocessing techniques other than BVE, subsumption and self-subsuming resolution. So there is little to turn on and off, and there are very few parameters — and the few parameters that are there are all hard-coded into the solver, making them difficult to change.

CryptoMiniSat in parallel mode

To run in parallel mode, CMS takes advantage of its potential heterogeneity by running N different threads, each with radically different parameter settings, and exchanging nothing but unit and binary clauses(!) through the most rudimentary locking system. No exchange of longer clauses, no lockless exchanges, no complicated multi-lock system. One lock for unit clauses, one for binary clauses, even for 24 threads. Is this inefficient? Yes, but it seems good enough, and I haven't really had too many people asking for parallel performance. To illustrate, here are the parameter sets of the different threads used, and here is the sharing and locking system. It's seriously simple; I suggest you take a peek, especially at the parameter sets.
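
To give an idea of just how rudimentary such a scheme can be, here is a toy sketch using Python threads: one lock-protected list per clause type, each worker publishing what it learns and fetching what it has not yet seen. The names and structure here are mine, not CMS's actual code:

import threading

class ClauseExchange:
    # One lock and one list; the idea is one such exchange for unit clauses
    # and another for binary clauses.
    def __init__(self):
        self.lock = threading.Lock()
        self.clauses = []

    def publish(self, clause):
        with self.lock:
            self.clauses.append(clause)

    def fetch_new(self, seen_so_far):
        with self.lock:
            new = self.clauses[seen_so_far:]
        return new, seen_so_far + len(new)

units = ClauseExchange()

def worker(thread_id, params):
    seen = 0
    # ... run the search with this thread's own parameter set (params) ...
    units.publish([thread_id + 1])        # pretend we learned a unit clause
    new, seen = units.fetch_new(seen)     # import what the others found
    # ... add `new` to this thread's clause database ...

threads = [threading.Thread(target=worker, args=(i, {"restart_style": i % 2}))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()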

Note that the literature is full of papers explaining what kind of complicated methods can be used to exchange clauses using different heuristics, with pretty graphs, complicated reasoning, etc. I have to admit that it might be useful to do that; however, just running heterogeneous solvers in parallel and exchanging unit and binary clauses performs really well. In fact, it performs so well that I never, ever, in the entire 7-year development history of CMS, ran even one full experiment to check parallel performance. I usually concentrate on single-threaded performance because checking parallel performance is really expensive.

Checking the performance of a 24-thread setup is about 15x more expensive than checking the single-threaded variant. I don't really want to burn the resources for that, as I think it's good enough as it is. It's mostly beating solvers with horrendously complicated systems inside them, with many research papers backing them up, etc. I think the current performance is proof enough that making things complicated is not the only way to go. Maybe one day I will implement some more sophisticated clause sharing, e.g. sharing clauses that are longer than binary, and then I won't be able to claim that I am doing something quite simple. I will think about it.

Conclusions

I am kinda proud of the parallel performance of CMS as it can showcase the heterogeneity of the system and the different capabilities of the solver. It’s basically doing a form of acrobatics where the solver can behave like a very agile SAT solver with one set of parameters or like a huge monolith with another set of parameters. Since there are many different parameters, there are many different dimensions, and hence there are many orthogonal parameter sets. It’s sometimes interesting to read through the different parameter settings and wonder why one set works so much better than the other on a particular type of benchmark. Maybe there could be some value in investigating that.

CryptoMiniSat 5.6.3 Released

The latest CryptoMiniSat, version 5.6.3, has been released. This release marks the 12'000th commit to this solver, which has weathered more than I originally intended it to weather. It's been an interesting ride, and I have a lot to thank Kuldeep and NSCC's ASPIRE-1 cluster for in this release. I have burned over 200k CPU hours to make this release, so it's a pretty well-performing release (out-performing anything out there, by a wide margin), though I'm working very hard to make sure that neither I nor anyone else will have to burn anything close to that to make a well-performing SAT solver.

The solver has some interesting new algorithms inside, the most interesting of which is Gauss-Jordan elimination using a Simplex-like method, generously contributed by Jie-Hong Roland Jiang and Cheng-Shen Han from the National Taiwan University. This addition should finally settle the issues regarding Gaussian vs Gauss-Jordan elimination in SAT solvers. Note that to use this novel system, you must configure with “cmake -DUSE_GAUSS=ON ..” and then re-compile.

What's also interesting is what's not inside, though. I have been reading (maybe too much) Nassim Taleb, and he is very much into via negativa. So I tried removing algorithms that have been in the solver for a while and whose usefulness mostly nobody would question. In the end I removed the following algorithms from running by default, each removal leading to better solving time:

  • Regular probing. Intree probing is significantly better, so regular probing is not needed. Thanks Matti/Marijn/Armin!
  • Stamping. This was a big surprise, especially because I also had to remove caching, which is my own, crappy (“different”) version of stamping. I knew that it wasn’t always so good to have, but damn. It was a hard call, but if it’s just slowing it down, what can I do. It’s weird.
  • Burst searching. This is when I search for a short period with high randomness all over the search space. I thought it would allow me to explore the search space in places where VSIDS/Maple doesn’t. Why this is slowing the solver down so much may say more about search heuristics/variable bumping/clause bumping than anything.
  • Note that I never had blocked clause elimination. It doesn’t work well for incremental solving. In fact, it doesn’t work at all, though apparently the authors have some new work that allows it to work, super-interesting!

I'm nowadays committed to understanding this damned thing rather than adding another impossible-to-explain magic constant to make the solving 10% faster. I think there is interesting stuff out there that could be done to make SAT solvers 10x, not 10%, faster.