
SAT Solvers as Smart Search Engines

Satisfiability problem solvers, or SAT solvers for short, try to find a solution to decidable, finite problems such as those arising in cryptography, planning, scheduling, and the like. They are very finely tuned engines that can be looked at in two main ways. One is to see them as proof generators, where the SAT solver is building a proof of unsatisfiability as it runs, i.e. it tries to prove that there is no solution to the problem. Another way is to see SAT solvers as smart search engines. In this blog post, I’ll take this latter view and try to explain why I think intermediary variables are important. So, for the sake of argument, let’s forget that SAT solvers sometimes restart the search (forgetting where they were before) and learn clauses (cutting down the search space and remembering where not to go again). Let’s just pretend all they do is search.

Searching

The CryptoMiniSat SAT solver used to be able to generate graphs that show how a search through the search space went. Search spaces in these domains are exponential in size, e.g. 2^n when n variables are involved. I don’t have the search visualization code anymore, but below is an example of such a search tree. The search starts at the very top, not far from the middle, descends towards the bottom left, then iteratively backtracks all the way to the top, and then descends towards the bottom right. Every pentagon at the bottom of a line is a place where the SAT solver backtracked. Observe that it never goes all the way back to the top — except once, when the top assignment needs to be flipped. Instead, it only goes back some way, partially unassigning variables. The bottom right corner is where the solution is found after many-many partial backtracks and associated partial unassignments:

What I want you to take away from this graph is the following: the solver iteratively picks a variable, assigns it a value, calculates forward, and if that doesn’t work out, it partially backtracks, flips the value to its opposite, and descends again.
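To make this loop concrete, here is a minimal sketch in Python. This is my illustration, not how CryptoMiniSat is implemented: there is no clause learning and no restarting, and state is restored from snapshots instead of by unwinding a trail. Clauses are lists of non-zero integers in the DIMACS style, so 3 means x3=TRUE and -3 means x3=FALSE.

    def propagate(clauses, assignment):
        """Unit-propagate to fixpoint; return False if some clause has all its literals false."""
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue                          # clause already satisfied
                unassigned = [l for l in clause if abs(l) not in assignment]
                if not unassigned:
                    return False                      # every literal is false: conflict
                if len(unassigned) == 1:
                    lit = unassigned[0]
                    assignment[abs(lit)] = (lit > 0)  # the clause forces this value
                    changed = True
        return True

    def solve(clauses, num_vars):
        assignment = {}   # var -> bool: guesses plus their consequences
        trail = []        # one entry per guess: (var, value_tried, state_before_guess)
        while True:
            if propagate(clauses, assignment):
                var = next((v for v in range(1, num_vars + 1) if v not in assignment), None)
                if var is None:
                    return assignment                 # everything is set, no conflict: solution
                trail.append((var, False, dict(assignment)))
                assignment[var] = False               # guess FALSE first
            else:
                # Conflict: partially backtrack to the most recent guess whose
                # other value has not been tried yet, then flip that guess.
                while trail and trail[-1][1] is True:
                    trail.pop()
                if not trail:
                    return None                       # both values failed everywhere: UNSAT
                var, _, saved = trail.pop()
                assignment = dict(saved)              # partial unassignment
                trail.append((var, True, saved))
                assignment[var] = True

    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(solve([[1, 2], [-1, 2], [-2, 3]], 3))       # e.g. {1: False, 2: True, 3: True}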

Brute force search vs. SAT solving

Trying one value and then trying the other sounds suspiciously like brute force search. Brute force search does exactly that, in a systematic and incredibly efficient way. We can build highly specialized executables, and even hardware, to perform this task. If you look at e.g. Bitcoin mining, you will see a lot of specialized hardware, ASICs, doing brute-force search. And if you look at rainbow tables, you’ll see a lot of bit slicing.

So why waste our time doing all this fancy value propagation and backtracking when we could use a much more effective, systematic search system? The answer is: if you generated your problem description badly, then we are indeed wasting our time for no good reason, and you are probably better off doing brute-force search. But if you did it well, then a SAT solver can perform a significantly better search than brute force. The trick lies in intermediary variables and partial value assignments.

Partial value assignments

So let’s say that your brute force engine is about to check one input variable setting. It sets the input variables, runs the whole algorithm, and computes the output. The output is wrong though. Here is where things get weird. The brute force engine now completely erases its state, takes another input, and runs the whole algorithm again.
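In code, this design choice looks roughly like the following toy sketch (my illustration, in the same CNF/DIMACS terms as the sketch above): every candidate is checked from a completely clean state.

    from itertools import product

    def brute_force(clauses, num_vars):
        """Check every full assignment from scratch; nothing is kept between tries."""
        for bits in product([False, True], repeat=num_vars):
            assignment = {v: bits[v - 1] for v in range(1, num_vars + 1)}
            if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
                return assignment
        return None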

So, brute force does the whole calculation again, starting from a clean state, every time. What we have to recognize is that this is actually a design choice. Another design choice is to calculate which variables were affected by one of the input bits, unset those variables, flip the value of that input bit, and continue running the calculation. This has the following requirements:

  1. A way to quickly determine which intermediate values depend on which others, so that when we unset a variable we know which already-calculated, dependent intermediate values also need to be unset.
  2. A way to quickly unset variables.
  3. A good set of intermediary values, so we can keep as much of the state of the calculation as possible.

If you think about it, the above is what SAT solvers do, well, mostly. In fact, they do (1) only partially: they only allow variables to be unset in reverse chronological order. Calculating and maintaining a complete dependency graph seems too expensive. So we unset more variables than we strictly need to. But we can unset them quickly and correctly, and we compensate for the lack of a precise dependency check in (1) by caching polarities. This caches the values of the independent-but-nevertheless-unset variables, in the hope of reassigning them to the same value later. Not perfect, but not too shabby either.
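As a sketch, the “unset quickly, newest first, and cache the polarity” part could look like this (the names trail, level, assignment and polarity are illustrative; this is not CryptoMiniSat’s actual code):

    from dataclasses import dataclass, field

    @dataclass
    class SolverState:
        trail: list = field(default_factory=list)       # literals in the order they were set
        level: dict = field(default_factory=dict)       # var -> decision level where it was set
        assignment: dict = field(default_factory=dict)  # var -> bool
        polarity: dict = field(default_factory=dict)    # var -> last value it held

    def cancel_until(s: SolverState, backtrack_level: int):
        """Unset, newest first, every variable assigned above backtrack_level,
        remembering its old value so that value can be tried first next time."""
        while s.trail and s.level[abs(s.trail[-1])] > backtrack_level:
            lit = s.trail.pop()
            var = abs(lit)
            s.polarity[var] = (lit > 0)   # polarity caching ("phase saving")
            del s.assignment[var]
            del s.level[var]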

Modeling and intermediary variables

To satisfy requirement (3) one must have a good set of intermediary variables in the input problem (described in DIMACS format), so the SAT solver can both backtrack and evaluate partially. Unfortunately, this is not really in the hands of the SAT solver. It is in the hands of the person describing the problem. Modeling is the art of transforming a problem that is usually expressed in natural language (such as “A person cannot be scheduled to be on a night shift twice in a row”) into a problem that can be given to a SAT solver.

Modeling has lots of interesting constraints, one of which I often hear and am confused by: that it should minimize the number of variables. Given the above, i.e. that SAT solvers can be seen as partial evaluation engines that thrive on the fact that they can partially evaluate and partially backtrack, why would anyone try to minimize the number of variables? If the solver has no intermediary variables to backtrack to, the solver will simply backtrack all the way to the beginning every time, thus becoming a really bad brute-force engine that incidentally tracks a dependency graph and is definitely non-optimized for the task at hand.
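As a small illustration (my example, not one from the post), consider a single AND gate y = a AND b. Keeping y as an explicit intermediary variable gives the following DIMACS file, with a=1, b=2 and the auxiliary y=3:

    c y <-> (a AND b), with a=1, b=2, y=3
    p cnf 3 3
    -1 -2 3 0
    1 -3 0
    2 -3 0

With y in the problem, the solver can keep the already-computed value of the gate assigned while it backtracks over unrelated variables; if y is optimized away, that piece of partially evaluated state simply does not exist as a variable the solver could hold on to.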

Some final thoughts

In the above, I took the premise that SAT solvers are just search engines and ran with it. I don’t think the results are that surprising. Of course, nothing is black-and-white. Having hundreds of millions of variables in your input is not exactly optimal. But minimizing the number of variables given to a SAT solver at the expense of expressive intermediate variables is a huge no-no.

Clause glues are a mystery to me

Note: the work below was done in collaboration with Vegard Nossum, but the wording and some of the (mis-)conclusions are mine. If you are interested, check out his master’s thesis; it’s quite incredible.

Anyone who has ever tried to really understand clause glues in SAT solvers has probably wondered what they really mean. Their definition is simple: the glue is the number of different decision levels that the variables of the final conflict clause come from. An explanation of these terms can be found here. On the surface, this sounds very simple and clean: the number of different decision levels should somehow be connected to the number of variables that need to be set before the learnt clause activates itself, i.e. causes a propagation or a conflict. Unfortunately, this is just the surface, because if you try to draw 2-3 implication graphs, you will see that in fact the gap between the number of variables that need to be set (let’s call this the ‘activation number’) and the glue can be virtually anything, making the glue a potentially bad indicator of the activation number.
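As a sketch (my code, with a made-up clause and made-up decision levels), the computation simply counts distinct decision levels:

    def glue(clause, level):
        """Glue (LBD) of a clause: the number of distinct decision levels among
        its variables; level maps each variable to the level it was assigned at."""
        return len({level[abs(lit)] for lit in clause})

    # A 10-long clause whose variables sit at only two distinct decision levels
    # has glue 2, even though 9 of its literals must be falsified before it can
    # propagate -- the gap between glue and the 'activation number' above.
    clause = [-v for v in range(1, 11)]
    level = {v: (4 if v <= 5 else 9) for v in range(1, 11)}
    print(glue(clause, level))   # -> 2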

The original reasoning

The original reasoning behind glues is the following: variables in the same decision level, called ‘blocks of variables’, have a chance to be linked together through direct dependencies, and these dependencies should be expressed somehow in order to reduce the number of decisions needed to reach a conflict (and thus ultimately reduce the search space). To me, this reasoning is less clear than the one above. In fact, there are about as many intuitions about glues as there are people I have met.

Glucosetrack

With Vegard Nossum we developed (in exactly one day) something quite fun. On the face of it, it’s just glucose 1.0, plain and simple. However, it has an option, “-track”, which does the following: whenever a learnt clause is about to cause a conflict, it jumps over this learnt clause, saves the state of the solver, and carries on until the next conflict in order to measure the amount of work the SAT solver would have had to do if that particular learnt clause had not been there. Then, when the next clause wishes to cause a conflict, it records the number of propagations and decisions between the original conflict and this new conflict, resets the state to the saved one, and continues on its journey as if nothing had happened. The fun part here is that the state is completely reset, meaning that the solver behaves exactly as glucose does, but at the same time it records how much search that particular clause has saved. This is very advantageous because it doesn’t give some magical number like glue, but actually measures the usefulness of the given clause. Here is a typical output:

The output is in SQL format for easy SQL import. The “size” is the clause size, “glue” is the glue number, “conflicts” is the number of times the clause caused a conflict, “props” is the number of propagations gained by having that clause (i.e. by reaching the conflict early), “bogoprops” is an approximation of the amount of time gained, based on the number of watchlists and the types of clauses visited during propagation, and “decisions” is the number of decisions gained. The list is sorted according to “bogoprops”, though once the output is imported into MySQL, many orderings are possible. You might notice that the ‘glue’ is 1 for some clauses (e.g. on the second output line) — these are clauses that have caused a propagation at decision level 0, so they will eventually be removed through clause-cleaning, since they are satisfied. Notice that high up, there are some relatively large clauses (of size 102, for example) with glue 2 that gain quite a lot in terms of search time. The gained conflicts/propagations/etc. are all cleared after every clause-cleaning, though since clauses are uniquely indexed (‘idx’), they can be accumulated in all sorts of ways.

The program is 2-5x slower than normal glucose, but considering that it has to save an extreme amount of state due to the watchlists being so difficult to handle and the clauses changing all the time, I think it does the job quite well — as a research tool it’s certainly quite usable. In case you wish to download it, it’s up in Git here, and you can download a source tarball here. To build, issue “cmake .” and “make”. Note that the tool only measures the amount of search saved by having the clause around when it tries to conflict. It does not measure the usefulness of the propagations that a learnt clause makes. Also, it doesn’t measure the other side of the coin: the (potentially better) conflict generated by using this clause instead of another one. In other words, it gives a one-sided view (no measure of the help given through propagation) of a one-sided view (no measure of the difference in quality between the conflict clauses generated). Oh well, it was a one-day hack.

Experiments

I have made very few experiments with glucosetrack, but you might be interested in the following result. I took UTI-20-10p0, ran it until completion, imported the output into MySQL, and executed the following two queries. The first one:

calculates the average number of saved propagations between each round of cleaning for clauses of glue >= 2 (i.e. clauses that didn’t eventually cause a propagation at decision level 0), and of size >= 2, because unitary clauses are of no interest. The second is very similar:

which calculates the same as above, but for size.

Some explanation is in order here regarding why I didn’t use SUM(), and instead opted for AVG(). I did in fact make graphs for SUM(), but Vegard corrected me: there is no point in doing that. If I came up with a new glue calculation function that gave an output of ‘1’ for every clause, then the SUM for that function would look perfect: every clause would be in the same bracket, saving a lot of propagations. But that would not help me choose which clauses to throw out, and the point of glues is exactly that: to help me decide which clauses to throw out. So what we really want is a usefulness metric that tells me, if I keep the clauses in a given bracket, how much I gain per clause. AVG() gives me that.
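To make the argument concrete, here is a toy illustration with made-up numbers, where rows stands in for the (glue, props) columns of glucosetrack’s output: the bracket stuffed with many mediocre clauses wins on SUM(), while AVG() exposes the per-clause usefulness that a cleaning decision actually needs.

    from collections import defaultdict

    rows = [(2, 900), (2, 700), (3, 60)] + [(8, 15)] * 200   # (glue, saved propagations), made up

    sums, counts = defaultdict(int), defaultdict(int)
    for g, props in rows:
        sums[g] += props
        counts[g] += 1

    for g in sorted(sums):
        print(g, "SUM:", sums[g], "AVG:", sums[g] / counts[g])
    # glue 8 has the largest SUM (3000) only because there are many such clauses;
    # per clause it saves 15 propagations, versus 800 for the glue-2 clauses.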

Here goes the AVG() graph for the last clause cleaning (clause cleaning iteration 33):

Notice that the y axis is in logscale. In case you are interested in a larger graph, here it is. The graph for clause cleaning iteration 22 is:

(Iteration 11 has high fluctuations due to having less data, but for the interested, here it is). I think it’s visible that glues are good distinguishers. The graph for glues drops down early and stays low. For sizes, the graph is harder to read. Strangely, short clauses are not that good, and longer clauses are better on average. If I had to choose which clauses to keep based on the size graph, it would be a hard choice to make: I would have trouble selecting a group that is clearly better than the rest. There are no odd-one-out groups. On the other hand, it’s easier to characterise which clauses are good to have in terms of glues: take the low-glue ones, preferably below 10, though we can skip the very low ones if we are really picky. An interesting side-effect of the opposite trends of the size and glue graphs, and of the fact that “glue<=size”, is that maybe we could choose better clauses to keep if we went for larger clauses that have a low glue.

Conclusions

Unfortunately, there are no real conclusions to this post. I guess running glucosetrack on far more than just one example, and somehow also making it measure the difference in effectiveness between the final conflict clauses, would help to write some partially useful conclusion. Vegard and I have tried to put some time and effort into this, but to not much avail, I am afraid.

PS: Notice that glucosetrack allows you to generate many of the graphs here using the right SQL query.

Collecting solver data

Lately, I have been putting a lot of effort into collecting data about solver behaviour and dumping it on-the-fly to a MySQL database. Some of this data is then presented by a PHP&javascript interface, such as this. The goal is to better understand how solving works, and thus to possibly come up with better strategies for solving. The requirements on the data gathered are: it must be fast to calculate, small to store, and useful.

Gathered data

There are some obvious statistics that one would like to collect, such as the number of conflicts, the number of propagations, conflicts per second, etc. — these are all gathered by almost every SAT solver out there. However, it would be interesting to have more. Here is what CryptoMiniSat now gathers and dumps, using highly efficient prepared bulk insert statements, taking at most 5% of the running time.

For each variable, dumped once in a while:

  • no. times propagated false, and true (separately)
  • no. times decided false and true (sep.)
  • no. times flipped polarity
  • avg & variance of decision level
  • avg & variance of propagation level

For each clause larger than 3-long, dumped once in a while:

  • activity
  • conflict at which it was introduced (if learnt)
  • number of propagations made
  • number of conflicts made
  • number of times any of its literals was looked at during BCP
  • number of times it was looked at (dereferenced) during BCP
  • number of times used to resolve with during learnt clause generation
  • number of resolutions needed during its generation (if it is a learnt clause)

For each restart:

  • no. of reducible&irreducible (i.e. learnt&non-learnt) binary clauses
  • no. of red&irred tertiary clauses (separately)
  • no. of red&irred long clauses (sep)
  • avg, var, min, max of learnt clause glues
  • avg, var, min, max of learnt clause sizes
  • avg, var, min, max of number of resolutions for 1st UIP
  • avg, var, min, max of branch depths
  • avg, var, min, max of backjump length — in the number of literals unassigned
  • avg, var, min, max of backjump length — in the number of levels backjumped
  • avg, var, min, max of times a conflict followed a conflict without decisions being made
  • avg, var, min, max of agility
  • no. of propagations by red&irred binary clauses
  • no. of props by red&irred tertiary clauses
  • no. of props by red&irred long clauses
  • no. of conflicts by red&irred binary clauses (separately)
  • no. of conflicts by red&irred tertiary clauses (sep)
  • no. of conflicts by red&irred long clauses (sep)
  • no. of learnt unit, binary, tertiary, and long clauses (sep)
  • avg, var, min, max of watchlist size traversed during BCP
  • time spent
  • no. of decisions
  • no. of propagations
  • no. of variables flipped
  • no. of variables set positive&negative (sep)
  • no. of variables currently unset
  • no. of variables currently replaced
  • no. of variables currently eliminated
  • no. of variables currently set

For each learnt clause database cleaning:

  • for reducible clauses removed: all the data in the “clauses larger than 3-long” data above, summed up
  • for reducible clauses not removed: all the data in the “clauses larger than 3-long” data above, summed up
  • no. of clauses removed
  • no. of clauses not removed
  • for all irred clauses (these are not touched): all the data in the “clauses larger than 3-long” data above, summed up

For every N conflicts:

  • clause size distribution
  • clause glue distribution
  • clause size and glue scatter data

This is all, and it is not all mine. Suggestions were made by many, some as much as a year ago: George Katsirelos, Luca Melette, Vegard Nossum, Valentin-Mayer Eichberger, Ben Schlabs, Said Jabbour and Niklas Sörensson. Naturally, none of these people necessarily approve of how I gather data or what I do with the data gathered, but they helped, so listing them is only fair.

What’s not yet or cannot be gathered

Two suggestions are still on the TODO list:

  • counting the number of conflicts that occur while a learnt clause is “locked”, i.e. has propagated in the current search tree. This statistic could tell us whether the clause seemed essential to the search or not. If a clause propagated once, at the bottom of the search tree, and the variable it propagated was quickly unset, that’s not the same as if the same clause propagated at the top of the search tree and the variable it set was essentially never unset.
  • for each variable, counting the number of conflicts done while the variable was set. This is interesting, because if a variable was propagated only once, at top-level, it will seem like it was never set (set only once), yet it may have had a huge influence on the search tree and consequently on the learnt clauses.

Both of these require a change in the algorithm used to backjump, and although I am a bit worried about the performance, I will add them soon.

Unfortunately, the data about variables can’t really be dumped that often, because it’s huge for large problems with millions of variables. So I guess I will only dump it for the N most active variables, for some sane definition of “active”. Similarly, the data about individual clauses is not dumped individually, only in a summed-up form during database cleaning.

Suggestions?

In case you have any ideas about what else could be interesting to dump, please leave a comment. The idea is to dump data that is cheap to compute and cheap to dump, yet would be interesting for some reason or another. I am prepared to add stuff to the datastructures, as you have probably guessed from the above. Yes, it makes the clause database less space-efficient, and so increases cache misses. But on large problems it’s going to be a cache miss most of the time anyway, and a cache fetch brings in 128B of data, which should be plenty for both the stats and the clause. Similarly with variables. So, don’t refrain from suggesting stuff that takes space. Also, weird stuff is interesting. The weirdest stat on the list above is definitely the “conflict after a conflict without decisions” one (suggested by Said Jabbour), which I would have guessed to be zero or near-zero, yet is pretty high, in the 30+% range.

Suggestions are welcome!

Visualizing SAT solving

Visualizing the solving of mizh-md5-47-3.cnf


Visualizing what happens during SAT solving has been a long-term goal of mine, and finally, I have managed to pull together something that I feel confident about. The system is fully explained in the linked image on the right, including how to read the graphs and why I made them. Here, I would like to talk about the challenges I had to overcome to create the system.

Gathering information

Gathering information during solving is challenging for two reasons. First, it’s hard to know what to gather. Second, gathering the information should not affect the overall speed of the solver (or only minimally), so the code that gathers it has to be well-written. To top it all off, if a lot of information is gathered, it has to be structured in a sane way so that it’s easy to access later.

It took me about 1-1.5 months to write the code that gathers all the information I wanted. It took a lot of time to structure it correctly and to decide how to store and summarize the information gathered. There is much more gathered than what is shown on the webpage, but more about that below.

Selecting what to display, and how

This may sound trivial. Some would simply say: just display all information! But what we really want is not just plain information: what good is it to print 100’000 numbers on a screen? The data has to be displayed in a meaningful and visually understandable way.

Getting to the current layout took a lot of time and many-many discussions with all my friends and colleagues. I am eternally grateful for their input — it’s hard to know how good a layout is until someone sees it for the first time and completely misunderstands it. Then you know you have to change it: until then, it was trivial to you what the graph meant, after all, you made it!

What to display is a bit more complex. There is a lot of data gathered, but what is interesting? Naturally, I couldn’t display everything, so I had to select. But selection may become a form of misrepresentation: if some important data isn’t displayed, the system is effectively lying. So, I tried to add as much as possible that still made sense. This led to a very large table of graphs, but I think it’s still understandable. Further, the graphs can be moved around (just drag their labels), so doing comparative analysis is not hampered much by the large set of graphs.

The final layout is much influenced by Edward Tufte’s books. Most graphing libraries for javascript, including the one I used, Dygraphs, contain a lot of chartjunk by default. For example, the professional library HighCharts is full of chartjunk (just look at their webpage), and is apparently used by many Fortune 500 companies. I was appalled at this — so many graph libraries, and none that offers a clean look? Luckily, I could do away with all that colorful beautifying mess — the data is interesting, and demands no embellishments.

Creating the webpage

Creating the webpage to display what I wanted was quite difficult. I am no expert at PHP or HTML, and this was the first time I had touched javascript. Although the final page doesn’t show it much, I struggled quite a bit with all these different tools. If I had to do this again, I would choose to use a page generation framework. This time, I wrote everything by hand.

I am most proud of two things on the webpage. The first is the histogram at the bottom of the graphs. I know it may not seem like it, but that is all done with javascript I wrote, pulling data from an array that can be changed dynamically. I think it does what it’s supposed to do, and does it well. The second is that I had to tweak the graph library used (Dygraphs, the best library out there, hands down), because it was too slow at drawing these ~30 graphs. The graphs can be zoomed (just click and drag on the X axis), and when zooming in and out the speed was really terrible. It now works relatively fast, though I had to tweak the system to trade a bit of visual beauty for speed.

Final thoughts

Making the visualization webpage was a long marathon. I feel like it’s OK now, even though there were quite a number of ideas that weren’t implemented in the end. I hope you will enjoy playing with it as much as I have enjoyed making it.

Understanding Implication Graphs

Having won the SAT Race with CryptoMiniSat, I think I can now confess that I still don’t understand conflict generation. So today, I sat down and tried to understand it. The result is some fun code, a lot of reading, and great pictures. Let me share with you these auto-generated graphs — the generator will be released with CMSat 3.0.0, so you will be able to generate them, too. These graphs show something called an ‘implication graph’, which is nothing more than a graphical way to show how propagations were made by the reasoning engine. For instance, if variables x1 and x2 are both FALSE, then the clause ‘x1 V x2 V x3’ will force x3 to be TRUE to satisfy the clause. Okay, so much for talk, let’s see the graphs!

Our first implication graph had a clause “-70 55 42” (light green box) that caused a conflict — that is to say, somehow the variable setting {x70=TRUE, x55=FALSE, x42=FALSE} came about, the SAT solver realised that this was wrong, and something had to be done. Let’s look at what led here! Guesses are coloured orange here, so it seems we made the following guesses during our SAT solving: x71=TRUE, x73=FALSE, x42=FALSE. Now, we could just say: oh well, setting x73 to FALSE was a dumb idea, just reverse that guess and be done with it. That’s the easy (and fast) way, but we are less lazy than that and we want to understand the reasons. So we do something called conflict analysis to find something called an asserting clause, which will not only let us reverse the guess x73=FALSE, but will also give the SAT solver a reason why that is a necessary assignment given its previous guesses and their consequences.

Consequences in the graph are coloured dark green, and there are exactly 3 of them: x70=TRUE, x55=FALSE and x56=TRUE. Each of these is a consequence of a clause that is in the SAT solver, marked on the edge(s) leading to it. For example, x56 is set to TRUE because of the guess x73=FALSE and the clause “x56 V x73” marked on the edge leading to x56. A consequence that has one edge leading to it was propagated by a 2-long clause, and a consequence that has 2 edges leading to it was propagated by a 3-long clause (e.g. the consequence “x70=TRUE”, propagated by “x70 V x55 V x73”), etc. Okay, so how do we get to the reason? Well, we start out with the conflict, the clause “-x70 V x55 V x42” (at the bottom), and we do resolutions with this clause, going bottom up. We do as many resolutions as needed to reach the first UIP (also called the “articulation vertex” or “dominator”), which is a unique point on the graph through which all paths pass from the highest decision level guess, in this case x73=FALSE at decision level 56 (noted with @56). If you take a look, the paths from x73=FALSE diverge from the very first point onwards, and there is no single point where they converge again. This means that the (first and only) dominator will be x73=FALSE itself, and we have to resolve with all clauses on the path:

  1. Start with clause “-x70 V x55 V x42”
  2. Resolve with clause “x70 V x55 V x73”
  3. Resolve with clause “-x55 V -x56 V -x71”
  4. Resolve with clause “x56 V x73”

This list is noted on the bottommost box’s lowest entry. When we resolve with these clauses, we are actually doing cuts on the graph, like this:

My drawing capabilities aren’t exactly great, but you can see that the cuts are successively higher and higher, until we reach the cut that has on its outer part the literals -x73, x71 and -x42. Unsurprisingly, exactly the inverse of these literals are what make up the final conflict clause. Now, if we unroll the implications until decision level 47 (but not the guess x42=FALSE), then with this clause added, variable x73 will propagate to TRUE automatically, essentially reversing the wrong guess — but also giving a reason why it needs to be reversed.
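Writing these resolutions out (my reconstruction; at each step we resolve on the single variable that appears with opposite signs in the two clauses): resolving clauses 1 and 2 on x70 gives “x55 V x42 V x73”; resolving that with clause 3 on x55 gives “x42 V x73 V -x56 V -x71”; and resolving that with clause 4 on x56 gives “x42 V x73 V -x71”, which is exactly the asserting clause made up of the inverses of -x42, -x73 and x71 described above.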

If you enjoyed the previous graph, here is another to entertain you further:

And its corresponding, not too obvious solution:

Note that I was too lazy to draw cuts 10, 11 and 12 around the assignment x145=FALSE (at decision level 39). The UIP in this case is again at the decision variable, x164. If you are still interested, here are some other examples:

PS: Thanks to George Katsirelos for his help understanding this (disclaimer: all faults belong to me alone).