
Thoughts on SAT@home

If you have ever wondered how your multi-month SAT problem could be solved, the answer is here: SAT@home, by some kind Russian researchers! This idea had been brewing in my head too; we even started a mini-project with a friendly American researcher, but we never got it into working condition :(. It's great that someone took the time to do it, though. Creating such a setup has some easier and some harder parts. Setting up a BOINC server is not that hard, but it needs to run 24/7, which is difficult unless you are a department head at a university or have a company behind you. Then the scripts need to be written, which is relatively easy, but testing and debugging them is time-consuming. The really hard part, though, is how to distribute the workload. Treatises could be written on that, and the setup above uses the simplest method: cutting up the problem along a couple of variables, sketched below. It's a good start, I think. Starting easy is always the best way to start: the first automobile didn't try to reach 300 km/h, and it didn't have to.
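To illustrate what cutting along variables means, here is a minimal sketch; the code is hypothetical and not SAT@home's actual implementation. Fixing k chosen variables to all 2^k possible polarities yields 2^k independent subproblems, each of which can be shipped to a client as a set of assumptions:

```cpp
// Minimal sketch of splitting a SAT instance along k variables.
// Each polarity combination of the k variables is one independent job.
#include <cstdint>
#include <iostream>
#include <vector>

// A literal in DIMACS convention: positive var index means "set to true",
// negative means "set to false".
using Assumptions = std::vector<int>;

std::vector<Assumptions> split(const std::vector<int>& split_vars) {
    const uint64_t num_jobs = 1ULL << split_vars.size();
    std::vector<Assumptions> jobs;
    jobs.reserve(num_jobs);
    for (uint64_t bits = 0; bits < num_jobs; bits++) {
        Assumptions a;
        for (size_t i = 0; i < split_vars.size(); i++) {
            // Bit i of 'bits' decides the polarity of split_vars[i].
            a.push_back(((bits >> i) & 1) ? split_vars[i] : -split_vars[i]);
        }
        jobs.push_back(a);
    }
    return jobs;
}

int main() {
    // Splitting on 3 (made-up) variables yields 8 subproblems.
    for (const auto& job : split({4, 17, 23})) {
        for (int lit : job) std::cout << lit << ' ';
        std::cout << '\n';
    }
}
```

Choosing which variables to cut on is the genuinely hard part: a bad choice produces wildly unbalanced subproblems, which is exactly why treatises could be written about workload distribution.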

In the long run, I guess such setups will get more and more complicated, combining many techniques: cutting on variables, clause sharing with a multitude of clause-usefulness heuristics, and sharing statistics on parameter usefulness. Work units themselves could eventually be diversified: some doing 'simple' search without resolution, some doing only resolution (and clause cleaning?), some doing only (specific) simplification algorithms such as subsumption or strengthening, and some doing data analysis on the current state of the problem, such as reachability analysis or counting gates and other higher-level elements (adders, for instance) in the problem. In other words, there are many ways to improve this. Maybe a common interface to such a system could even be laid down, where different SAT solvers could be plugged in and swapped out without the rest of the system noticing; something like the sketch below.
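As a thought experiment, such a common interface could look like the abstract class below. Every name here is invented for illustration; this is not an existing API:

```cpp
// A hypothetical plug-in interface: a BOINC-style server could drive any
// solver through these calls, no matter which SAT solver sits underneath.
#include <vector>

enum class Result { SAT, UNSAT, UNKNOWN };

struct PluggableSolver {
    virtual ~PluggableSolver() = default;

    // Load the (sub)problem as DIMACS-style clauses.
    virtual void add_clause(const std::vector<int>& lits) = 0;

    // Solve under the assumptions handed out by the server, e.g. the
    // variable cuts from the splitting sketch above.
    virtual Result solve(const std::vector<int>& assumptions) = 0;

    // Learnt clauses worth sharing with other work units.
    virtual std::vector<std::vector<int>> export_learnts() = 0;

    // Clauses learnt elsewhere, injected into this solver.
    virtual void import_learnts(const std::vector<std::vector<int>>& cls) = 0;
};
```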

A friend of mine once wrote me a mail describing his work, which is essentially about paying for a webpage not by looking at adverts, but by running a JavaScript program. The idea is, of course, that we could write a SAT solver in JavaScript and make people run it by putting it on a well-visited webpage. The server would then distribute SAT problems to the scripts, which would send some computed data back (like learnt clauses) while the visitor is browsing the page. Cheap (maybe even free) and essentially unlimited CPU time for hard problems. We could use it to answer some fun math questions, or to break some cryptographic systems, for example… we just have to find someone crazy enough to write a SAT solver in JavaScript! :)

Wasting cluster time

Today marked the 100th cluster experiment I have conducted on the Grid'5000 cluster in Grenoble. The computers in this cluster are very powerful beasts, each equipped with two Xeon processors based on the Core i7 architecture, 4 cores per processor for 8 cores in total, with impressive memory size and connectivity. The sad part is that since CryptoMiniSat is sequential software, I only use 1 of the 8 cores: I cannot afford fluctuating test results caused by memory-bandwidth contention when running multiple instances of the solver on the same physical computer. I have just counted, and using a very conservative estimate, I have wasted 41.8 years of CPU core time to get CryptoMiniSat up to speed.

Was it worth wasting such an immense amount of CPU core time to speed up a SAT solver? I think yes. First of all, I only used the cluster at night and on weekends, when the load was negligible; so, in a way, I wasn't wasting much apart from my own time launching the experiments and collecting and analysing the results. Secondly, the feedback from the cluster immensely influenced my research and engineering decisions, and taught me many things about SAT. The best thing I have learnt is: never trust your intuition when it comes to SAT. The second-best: good ideas on paper rarely turn out to be good ideas when first implemented, but with heavy tuning they usually turn out to work well. Let me detail two examples.

Clause vivification (I called it asymmetric branching) is a relatively simple example of why tuning is always necessary. This optimisation shortens the clauses it is fed, but it is slow. The idea I used when tuning it is that the longest clauses are probably responsible both for the slow solving and for the long generated conflict clauses. So I sort clauses by size first, and then run vivification on the topmost (i.e. longest) clauses. Furthermore, I realised that enqueueing literals 2-by-2 on the stack (instead of enqueueing them 1-by-1) saves significant time and is almost as good: the difference is at most 1 literal in the final simplified clause. A sketch of this follows below.
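Here is a rough sketch of the size-sorted vivification loop with 2-by-2 enqueueing. The Solver methods are stubs standing in for a real solver's trail and propagation machinery; this is not CryptoMiniSat's actual code:

```cpp
#include <algorithm>
#include <vector>

using Clause = std::vector<int>;  // literals in DIMACS convention

struct Solver {
    // Stubs standing in for a real solver's trail/propagation machinery.
    bool enqueue(int /*lit*/) { return true; }  // assign at current level
    bool propagate() { return true; }           // false would mean conflict
    void new_decision_level() {}
    void backtrack_to_level_zero() {}
};

// Try to shorten one clause: enqueue the negations of its literals and
// propagate. As soon as propagation hits a conflict, the literals
// enqueued so far already form an implied, shorter clause.
Clause vivify_one(Solver& s, const Clause& cl) {
    Clause shortened;
    s.new_decision_level();
    for (size_t i = 0; i < cl.size(); i += 2) {
        // Enqueue 2-by-2: half as many propagate() calls, at the cost of
        // possibly keeping 1 extra literal in the simplified clause.
        s.enqueue(-cl[i]);
        shortened.push_back(cl[i]);
        if (i + 1 < cl.size()) {
            s.enqueue(-cl[i + 1]);
            shortened.push_back(cl[i + 1]);
        }
        if (!s.propagate()) break;  // conflict: the clause can be cut here
    }
    s.backtrack_to_level_zero();
    return shortened;
}

// Spend the budget on the longest clauses first: they are the likeliest
// culprits behind slow solving and long generated conflict clauses.
void vivify_longest(Solver& s, std::vector<Clause>& clauses, size_t budget) {
    std::sort(clauses.begin(), clauses.end(),
              [](const Clause& a, const Clause& b) { return a.size() > b.size(); });
    for (size_t i = 0; i < std::min(budget, clauses.size()); i++)
        clauses[i] = vivify_one(s, clauses[i]);
}
```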

A more complex example is that of speeding up self-subsuming resolution. The problem was that keeping all clauses attached in the watchlists slowed down self-subsuming resolution. However, the obvious solution of detaching all clauses and reattaching them after self-subsuming resolution was, strangely enough, incomplete. Multiple cluster experiments made it apparent that keeping clauses attached is important, because some propagations are made during the running of the algorithm, and propagating these (very few!) variables is crucial. The solution was to keep the natively stored clauses (the 2- and 3-long ones) in the watchlists. These natively kept clauses have no pointers in the watchlists, so their representation, stored where a pointer would otherwise be, can be updated in place. The watchlists are therefore partially cleaned, then self-subsuming resolution is carried out, then the watchlists are fully cleaned, and finally they are fully repopulated. Not exactly a trivial algorithm, but it achieves both goals: fast and complete self-subsuming resolution. The sketch below illustrates the data layout.
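A sketch of the watchlist discipline described above; the types and names are invented for illustration, and CryptoMiniSat's real data structures differ. The point is that 2- and 3-long clauses live inline in the watchlist entries, so they can stay attached and be rewritten in place while the pointer-stored long clauses are temporarily detached:

```cpp
#include <algorithm>
#include <vector>

struct Clause;  // a long clause, heap-allocated elsewhere

struct WatchEntry {
    enum class Kind { Binary, Ternary, Long };
    Kind kind;
    union {
        int other_lits[2];  // inline literals of a natively stored clause
        Clause* ptr;        // pointer to a long clause
    };
};

using WatchList = std::vector<WatchEntry>;

void self_subsuming_resolution(std::vector<WatchList>& watches) {
    // Phase 1: partial clean. Drop only the pointer entries; the natively
    // stored 2-/3-long clauses stay attached, so the few propagations that
    // happen mid-algorithm still work, and the inline literals can be
    // updated in place when a clause is shortened.
    for (auto& wl : watches)
        wl.erase(std::remove_if(wl.begin(), wl.end(),
                     [](const WatchEntry& w) {
                         return w.kind == WatchEntry::Kind::Long;
                     }),
                 wl.end());

    // Phase 2: run self-subsuming resolution (algorithm omitted here).
    // Phase 3: fully clean the watchlists.
    // Phase 4: repopulate them from the simplified clause database.
}
```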


Off-topic: For those who like contemporary art, here is a gem