STP and CryptoMiniSat in the competitions of 2015

Both the SAT Race and the SMT competition of 2015 are now over. The results are in and it’s time to draw some conclusions. CryptoMiniSat won the SAT Race’s incremental solving track, and CVC4, an SMT solver that uses (among other solvers) CryptoMiniSat, won the SMT competition. These two wins are not unrelated: CryptoMiniSat is used in an incremental way by the CVC4 team. What CryptoMiniSat and STP (the other project I work on) haven’t done is win their respective main competitions. Let me reflect on these quickly.

The SMT competition

The SMT competition’s quantifier-free bit-vector track (QF_BV) was won by Boolector. Boolector is an amazing piece of engineering, and so is the SAT solver it uses, lingeling. Second was CVC4 using CryptoMiniSat, and third was STP. The reason STP came third is that I managed to introduce a bug into STP and never fuzzed the result. Fuzzing is now part of the continuous integration suite. Without this bug, STP would have come pretty close to Boolector, which I’m proud of.
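To make the fuzzing concrete: the idea is to generate random instances and cross-check the solver’s answer against an independent solver, failing the CI run on any disagreement. Below is a minimal differential-fuzzing sketch of that loop; the generator invocation and solver paths are hypothetical stand-ins, not STP’s actual CI scripts.

```python
#!/usr/bin/env python3
# Minimal differential fuzzing sketch (hypothetical; not STP's actual CI code).
import random
import subprocess
import sys

SOLVER_UNDER_TEST = "./stp"       # hypothetical path to the solver being tested
REFERENCE_SOLVER = "./boolector"  # hypothetical path to a reference solver

def run_solver(binary, instance_file):
    """Run a solver on an SMT-LIB file and return 'sat', 'unsat' or 'unknown'."""
    out = subprocess.run([binary, instance_file],
                         capture_output=True, text=True, timeout=60)
    for answer in ("unsat", "sat"):  # check 'unsat' first: 'sat' is a substring
        if answer in out.stdout:
            return answer
    return "unknown"

def main(rounds=100):
    for _ in range(rounds):
        seed = random.randrange(2**32)
        # Hypothetical generator; any random QF_BV instance generator fits here.
        subprocess.run(["./generate_qfbv", "--seed", str(seed),
                        "--out", "test.smt2"], check=True)
        got = run_solver(SOLVER_UNDER_TEST, "test.smt2")
        expected = run_solver(REFERENCE_SOLVER, "test.smt2")
        if "unknown" not in (got, expected) and got != expected:
            print(f"Mismatch on seed {seed}: {got} vs {expected}")
            sys.exit(1)  # fail the CI job so the bug never reaches a release
    print("All rounds agreed.")

if __name__ == "__main__":
    main()
```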

In general, STP needs a thorough overhaul. It’s a legacy codebase with global variables, weird dependencies, and a long list of other oddities. It also performs amazingly well, so one needs to be careful about how to approach it. I have been trying to cut it up and slice it every way I can so that the code is easier to read and work on, but it’s still quite hard.

The SAT competition

The SAT competition had its share of surprises. One such surprise was that lingeling didn’t win any main prizes. I wonder why it wasn’t submitted to the incremental track, for example. Another surprise was the 1st and 2nd places going to two solvers by the same author, Jingchao Chen, whom unfortunately I have never met or corresponded with. I was also a bit disappointed that CryptoMiniSat didn’t place higher on the list. There were a number of issues I tried to fix, but apparently that wasn’t enough. I will be interested in the sources of the winning solvers, to see if I can learn something from them.

It’s really hard to create winning SAT solvers that behave well in all circumstances, because it takes a massive amount of time to code them up and properly test them. The latter in particular is complicated, and writing good test harnesses takes concentrated effort. My test scripts are easily over a thousand lines of code, mostly Python. The continuous integration and build systems (both necessary for timely releases) are another thousand, and my AWS scripts for performance testing are almost a thousand as well. That’s more code than some early, well-performing SAT solvers had.

Most competing SAT solvers (with the exception of lingeling) don’t have such elaborate test and release systems in place. Although maintaining such systems seems like a burden, in the long run they pay off well. For example, the incremental track is particularly hard to compete in without proper tests, and hence CryptoMiniSat had a good chance there. I don’t see why lingeling didn’t compete, given that it must have extremely elaborate test and release systems, too.
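For readers unfamiliar with the track: incremental solving means calling the solver many times on a growing clause database, often under different temporary assumptions, while it keeps its learnt clauses and heuristic state between calls. Below is a minimal sketch using CryptoMiniSat’s Python bindings (pycryptosat); I’m assuming here that solve() accepts an optional list of assumption literals.

```python
# Minimal sketch of incremental solving with CryptoMiniSat's Python bindings.
# Assumes pycryptosat's solve() accepts an optional list of assumption literals.
from pycryptosat import Solver

s = Solver()
s.add_clause([1, 2])    # x1 OR x2
s.add_clause([-1, 3])   # (NOT x1) OR x3

# First call: solve the formula as it stands.
sat, solution = s.solve()

# Incremental step: add more clauses and solve again. The solver reuses the
# learnt clauses and heuristic state from the previous call.
s.add_clause([-2, -3])
sat, solution = s.solve()

# Solve under a temporary assumption (force x1 to false) without permanently
# changing the formula; the assumption is dropped after this call.
sat, solution = s.solve([-1])
```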

Conclusions

It has been a good year for CryptoMiniSat and STP. Some of their strengths have been highlighted more than others, and progress has been made on all fronts. They might both be released as part of the standard packages in Debian, Fedora and other distros. All this sounds great. I have put a lot of effort into both, and some of that effort is paying off, which is good to see.

On the CryptoMiniSat side, I need to concentrate more on experimentation in the future, so that it can be as agile as some of the newly winning solvers. On the STP side, I need to put more effort into fuzzing, testing, and refactoring.