CryptoMiniSat: 8000 commits later

The GitHub repository for CryptoMiniSat just hit 8000 commits. To celebrate this rather weird and crazy fact, let me put together a bit of a history.

The Beginnings

CryptoMiniSat began as a way of trying to prove that a probabilistic cryptographic scheme could not be broken using SAT solvers. This was the year 2009, and I was working in Grenoble at INRIA. It was a fun time and I was working really hard to prove what I wanted to prove, to the point that I created a probabilistic SAT solver where one could add probability weights to clauses. The propagation and conflict engine would only fire if the conflict or propagation was supported by multiple clauses. Thus began a long line of my unpublished work in SAT. This SAT solver, curiously, still works and solves problems quite well. It's a lot of fun, actually: just add some random clause into the database that bans the only correct solution, and the solver will still find the correct solution. A bit of a hack, but hey, it works. Also, it's so much faster than doing that kind of solving with the "right" systems that it's not even worth comparing.
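The post doesn't give details of that weighted engine, but the core idea, only acting on a propagation when multiple clauses agree on it, can be sketched in a few lines. This is a toy illustration under my own assumptions, not the actual solver's code:

```python
from collections import Counter

def weighted_propagations(clauses, assignment, support=2):
    """Toy sketch: a literal is propagated only if at least `support`
    clauses become unit on it under the current partial assignment.
    `clauses` is a list of lists of non-zero ints (DIMACS-style literals);
    `assignment` maps variable -> True/False for assigned variables only."""
    forced = Counter()
    for clause in clauses:
        unassigned = []
        satisfied = False
        for lit in clause:
            var = abs(lit)
            if var not in assignment:
                unassigned.append(lit)
            elif assignment[var] == (lit > 0):
                satisfied = True
                break
        if not satisfied and len(unassigned) == 1:
            forced[unassigned[0]] += 1  # clause is unit on this literal
    # only propagate literals backed by enough clauses
    return [lit for lit, n in forced.items() if n >= support]
```

With this rule, one bogus clause that contradicts the rest of the database simply lacks the support to propagate anything, which matches the "ban the only correct solution and the solver still finds it" behaviour described above.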

The same year, my friend, who later became my boss, came around because he had the code of the Mifare protocol in his hands but couldn't crack the crypto. He pushed me to use the SAT solver for something practical, and in just two days we did it: we modelled the cipher and could break it, but it took too much time to solve. So for the rest of the year I worked on improving the solver as much as I could. He had the idea for the XOR clauses, which I implemented in one night in a flash of inspiration.
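To see why native XOR support matters for ciphers, consider the naive CNF encoding of a single XOR constraint: it needs 2^(n-1) clauses for n variables, so the long XOR chains in stream ciphers blow up quickly. A small illustrative sketch with DIMACS-style integer literals (my own helper, not CryptoMiniSat code):

```python
from itertools import product

def xor_to_cnf(variables, rhs):
    """Naive CNF encoding of variables[0] ^ ... ^ variables[-1] == rhs
    (rhs is 0 or 1). Emits 2^(n-1) clauses, which is why handling XORs
    natively matters for crypto instances with long XOR chains."""
    clauses = []
    for signs in product([1, -1], repeat=len(variables)):
        negatives = signs.count(-1)
        # each clause bans exactly one assignment: the one setting a
        # variable True precisely where its sign is -1. That assignment
        # has parity `negatives % 2`, so we ban it iff the parity is wrong.
        if negatives % 2 != rhs:
            clauses.append([s * v for s, v in zip(signs, variables)])
    return clauses
```

For a 10-variable XOR this already produces 512 clauses; a solver that stores the XOR as a single native constraint avoids the blowup entirely.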

Winning the 2010 SAT Race

Then came the end of 2009 and I was bored in Paris, wanting to work on SAT solvers no matter what. There was a completely unrelated project that I had to work on for about 2 months to get a full year of salary. This was all thanks to Claude Castelluccia, who pulled that project together amazingly well; we did so well that we got selected as "Project Phare" (I've already forgotten what that meant, but we were around the top 5%). So I had 10 months to spare, and I worked all weekdays and all weekends on my shiny new SAT solver, which my friend, later boss, also thought would be good to call CryptoMiniSat. I picked the name at the EuroCrypt'09 conference, when he flew in from Berlin. I wanted to give credit to the MiniSat guys, something that later nobody else did.

I put the solver through a lot of stress testing on Grid'5000, the French supercomputing cluster, and I submitted it to the SAT Race. Winning it was completely unanticipated. What was even more unanticipated was that everybody thought the win was because the solver had XOR clause support. Of course it had nothing to do with it; Gaussian elimination was not even compiled into the executable. It was interesting to win, but once I was back in the lab, things got a bit weird. My supervisor there was clearly not impressed. They kept on using MiniSat without ever asking me about SAT solvers.

The Berlin years: 2010-2014

I went to work for my friend in Berlin, with whom I had already worked on Crypto-1. I was now working in security consultancy, but of course I continued to work on my SAT solver, and I submitted it to the SAT Competition of 2011. The system was exceedingly fast. It should have won, but didn't, because it ran out of memory on a number of input instances. The problem had many causes, but essentially, the number of inprocessing systems had grown so large that making sure none of them blew up was hard. Also, my testing was inadequate, something I didn't properly start fixing for years. I still won some prizes, including a gold medal for parallel solving, which was super funny, as all I did was add a blocking structure that shared unit and binary clauses between threads, and that's it. I fondly remember the descriptions of some of the parallel solvers I beat that were so complicated that I gave up reading them. I can only wonder how much time it took to write those solvers.
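A sketch of what such a blocking clause-sharing structure might look like. This is a toy illustration of the idea (share only the shortest learnt clauses between solver threads), not the actual implementation:

```python
import threading

class ClauseExchange:
    """Toy sketch of a blocking clause-sharing structure: solver threads
    deposit the short clauses they learn, and other threads pick them up
    between restarts. Only unit (length-1) and binary (length-2) clauses
    are shared, since those are cheap to import and almost always useful."""
    def __init__(self):
        self._lock = threading.Lock()
        self._shared = []

    def publish(self, clause):
        if len(clause) <= 2:          # share only unit and binary clauses
            with self._lock:
                self._shared.append(tuple(clause))

    def fetch(self, already_seen):
        """Return clauses this thread has not imported yet."""
        with self._lock:
            fresh = [c for c in self._shared if c not in already_seen]
        already_seen.update(fresh)
        return fresh
```

The appeal of this scheme is exactly its simplicity: there is one lock, one list, and no per-thread tuning, in contrast to the elaborate portfolio architectures mentioned above.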

The years rolled by and I never got around to properly testing or performance-testing the system. Whenever I submitted it, it did OK and I wasn't too sad about it, but it never did too well. There was always something that didn't work, that broke just on the last day. Every time there was some unanticipated input with 2000 disconnected components or some such. It was fun to compete, and I tried to clean up the code, but I never got to win.

The past year of CryptoMiniSat

At the beginning of this year I decided to do something slightly different. It was clear that CryptoMiniSat was not winning any major prizes. At the same time, I started working at another company, Cisco, where I saw a lot of work being done on testing. This inspired me to put a lot more effort into both performance and regular testing. It's strange how an environment influences you. I started putting in hooks and building systems around the program. I built an AWS system for performance testing. I spun up a Jenkins server to perform regular long-running tests. I added code coverage metrics. I hooked in Coverity for good. I created ways to dump internal state and check it. Some of these were already in place for the SAT Race 2015, some weren't. But I was quite content to win a difficult prize there, the incremental track; I think that validated my approach to testing. The incremental prize was not difficult to win because it was difficult to be fast. It was difficult to win because you had to return the right result even in the middle of a run. The race also pointed to where bugs were still hiding: the parallel solving didn't work because I didn't early-exit the other threads when one thread found the satisfying solution. I had never properly performance-tested the parallel version.
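The early-exit fix mentioned above amounts to a standard portfolio pattern: every worker periodically polls a shared stop flag, and the first thread to finish sets it. A hedged sketch with hypothetical solver callables, not the actual fix:

```python
import threading

def portfolio_solve(solver_fns, problem):
    """Sketch of the missing early-exit logic: run several solver
    configurations on the same problem and stop the rest as soon as one
    finds an answer. `solver_fns` are hypothetical callables taking
    (problem, stop_event) and returning a result, or None if interrupted."""
    stop = threading.Event()
    results = []
    lock = threading.Lock()

    def run(fn):
        res = fn(problem, stop)
        if res is not None:
            with lock:
                results.append(res)
            stop.set()  # tell the other threads to exit early

    threads = [threading.Thread(target=run, args=(fn,)) for fn in solver_fns]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results[0] if results else None
```

Without the `stop.set()` call, the joins wait for every thread to run to completion, which is precisely the behaviour that cost the parallel track.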


There aren't many conclusions to be had. It turns out testing is hard, and performance testing is both hard and expensive. I tried to escape both, but in the end, they both bit me. I'm now concentrating on these areas. I think CryptoMiniSat has potential; it just needs to work right :)