“Representation is the essence of programming”
Frederick P. Brooks Jr., “The Mythical Man-Month”
Looking through the literature on SAT solvers, it is rare to find an algorithm that uses any form of time-memory trade-off. In fact, it is rare to find an algorithm that uses much memory at all, and the same holds in practice: it’s rare to see a SAT solver consuming a lot of memory. This translates to the following: if a SAT solver can effectively utilise the typically abundant memory, that SAT solver will have an advantage. Based on this logic, I think CryptoMiniSat is doing well.
What I have realised is that it’s very useful to know all the literals we can reach from any given literal. Let me explain. Let’s say we look at literal “v1”, i.e. “v1 = true”. We propagate this literal as usual and reach a set of implied literals, for example “v2, v3, -v4”. Modern SAT solvers regularly compute this knowledge, but they also quickly throw it away. I advocate keeping it, and here is why: there are a number of very interesting things we can do with this cache, three of which I have found useful.
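To make this concrete, here is a minimal sketch of how such a cache might be filled. All the names here (enqueue, propagate, trail, cancelUntil, implCache) are hypothetical stand-ins for whatever the real solver provides, and literals are encoded MiniSat-style as variable*2 + sign so that negation is lit ^ 1:

    #include <cstdint>
    #include <vector>

    using Lit = uint32_t; // var << 1 | sign; negation is (lit ^ 1)

    struct Solver {
        // Hypothetical stand-ins for the real solver's internals:
        std::vector<Lit> trail;                  // literals assigned so far
        std::vector<std::vector<Lit>> implCache; // implCache[l] = literals implied by l
        bool enqueue(Lit l);                     // assign l; false on immediate conflict
        bool propagate();                        // unit propagation; false on conflict
        void cancelUntil(int level);             // backtrack to the given level

        // Probe literal 'l' in isolation and cache everything it implies.
        bool cacheImpliedBy(Lit l) {
            const size_t trailStart = trail.size();
            if (!enqueue(l) || !propagate())
                return false; // 'l' fails: the caller can assert its negation at level 0
            // Everything pushed onto the trail after 'l' itself is implied by 'l'.
            implCache[l].assign(trail.begin() + trailStart + 1, trail.end());
            cancelUntil(0); // undo the probe
            return true;
        }
    };

In the example above, after probing “v1” the entry implCache[v1] would hold v2, v3 and -v4.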
Number one. This is the best, and the least obvious one. The algorithm used for computing equivalent literals (i.e. “v1 = -v2” or “v1 = v4”) is a variation of Tarjan’s algorithm for finding strongly connected components (SCC). This algorithm takes a set of binary clauses such as “-v1 v2” and “v1 -v2” as input and finds the equivalent literals; from this example pair it would derive “v1 = v2”. The most obvious way to “feed” this algorithm is to give it all the binary clauses we have. But that misses out on all the binary clauses that are not directly apparent in the problem, yet could be derived from it. For example, if “-v1 -v4” is not directly in the problem, SCC cannot use it, and the equivalence “v1 = -v4” can go undetected. Using our cache, however, we know that “-v1 -v4” is implied by the problem (since “v1” propagates “-v4”), so we can hand it to SCC anyway. This cache thus substantially increases the power of SCC. More equivalent literals lead to fewer variables, and overall faster solving.
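As a sketch of how this might look (illustrative names again, not CryptoMiniSat’s actual code), the only change needed on the SCC side is to add the cache entries as extra edges of the implication graph before running Tarjan’s algorithm:

    #include <cstdint>
    #include <utility>
    #include <vector>

    using Lit = uint32_t; // var << 1 | sign; negation is (lit ^ 1)

    // A binary clause (a v b) yields the implications ~a -> b and ~b -> a;
    // a cache entry "l implies m" directly yields the edge l -> m.
    std::vector<std::vector<Lit>> buildImplGraph(
        size_t numLits,
        const std::vector<std::pair<Lit, Lit>>& binClauses,
        const std::vector<std::vector<Lit>>& implCache)
    {
        std::vector<std::vector<Lit>> edges(numLits);
        for (const auto& [a, b] : binClauses) {
            edges[a ^ 1].push_back(b);
            edges[b ^ 1].push_back(a);
        }
        for (Lit l = 0; l < numLits; l++)
            for (Lit m : implCache[l])
                edges[l].push_back(m); // the extra edges the cache buys us
        return edges;
    }

Any standard SCC implementation can consume this graph: all literals inside one component are equivalent, and a component that contains both a literal and its negation proves the formula unsatisfiable.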
Number two. Clause vivification is a method to make the original clauses of the problem shorter. It’s a relatively simple algorithm that enqueues the negations of the literals in a clause one by one and checks whether their propagation leads to a conflict; if so, the clause can be shortened. With the cache, we can do something similar that is weaker in terms of reasoning power but far more effective in terms of speed. The trick is to simply check every literal in the clause: if a literal propagates another literal of the same clause, the propagating literal can be removed. For example, if the clause is “a b -c”, and in the cache for “b” there is “f, g, h, j, -c”, then we know that conceptually there exists a clause “-b -c”, which means we can now remove “b” from the original clause using the self-subsuming resolution rule. Although technically weaker than normal vivification, this form is typically 50-100x faster while remaining almost as powerful. This kind of speed advantage means it can essentially be carried out without (time) penalty.
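Here is a minimal sketch of that strengthening step, under the same assumptions as before. Removals are applied one at a time so that two equivalent literals cannot eliminate each other; a fuller version would also consult implCache[m ^ 1] for (l ^ 1), i.e. the contrapositive direction:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    using Lit = uint32_t; // var << 1 | sign; negation is (lit ^ 1)

    void strengthenWithCache(std::vector<Lit>& clause,
                             const std::vector<std::vector<Lit>>& implCache)
    {
        for (size_t i = 0; i < clause.size(); ) {
            const Lit l = clause[i];
            bool removed = false;
            for (Lit m : implCache[l]) {
                if (m != l &&
                    std::find(clause.begin(), clause.end(), m) != clause.end()) {
                    // l implies m and both are in the clause: resolving with the
                    // conceptual binary clause (~l v m) removes l (self-subsumption).
                    clause.erase(clause.begin() + i);
                    removed = true;
                    break;
                }
            }
            if (!removed) i++;
        }
    }

On the example clause “a b -c”, finding “-c” in the cache of “b” removes “b”, leaving “a -c”.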
Number three. When generating conflict clauses, MiniSat-type conflict minimisation has become commonplace. It uses the clauses involved in the conflict to carry out self-subsuming resolution on the generated conflict clause. Just as with clause vivification, we can use our cache to carry out self-subsuming resolution with the (conceptually binary) clauses stored in the cache. This is not much different from clause vivification, but it allows us to simplify in the middle of a restart, instead of patiently waiting for the restart to end. Furthermore, it can uncover that certain learnt clauses can become binary, thus alleviating the problem of cleaning “useless” learnt clauses that could have become binary through, e.g., clause vivification.
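In code, this can be the very same routine as in point two, applied to the freshly generated conflict clause before it is attached; something along these lines (purely illustrative):

    // After conflict analysis has produced 'learnt', before attaching it:
    void onLearntClause(std::vector<Lit>& learnt,
                        const std::vector<std::vector<Lit>>& implCache)
    {
        strengthenWithCache(learnt, implCache); // pure lookups, works mid-restart
        if (learnt.size() == 2) {
            // Store as a binary clause, which is typically exempt from
            // learnt-clause cleaning.
        }
    }

Because the cache lookups involve no propagation, this minimisation is cheap enough to run on every learnt clause.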
I am aware that all three of the above could be carried out without a cache; in fact, CryptoMiniSat could do (and most of the time did) all of the above without it. The advantage of having the cache is speed: things that used to take immense amounts of time can now be done really fast. However, what interests me most about this cache is the new uses that will come of it. I was originally only aware of number (3); then, when I realised (1), I dived deep and realised that (2) could be done as well. I think more uses will eventually emerge.