All posts by msoos

Thoughts on SAT@home

If you have ever wondered how your multi-month SAT problem could be solved, the answer is here: SAT@home by some kind Russian researchers! This idea has been brewing in my head, and we even started a mini-project with a friendly American researcher, but we never got it into a working condition :(. It’s great that someone took the time to do it, though. Creating such a setup has some easier and harder parts. Setting up a BOINC server is not that hard, but it needs to run 24/7, which is difficult unless you are a department head at a university or you have a company behind you. Then the scripts need to be written, which is relatively easy to do, but testing and debugging is time-consuming. The really hard part, though, is how to distribute the workload. Treatises could be written on that, and the setup above uses the simplest method: cutting up the problem along a couple of variables. It’s a good start, I think: starting easy is always the best way to start — the first automobile didn’t try to achieve 300km/h speeds, and it didn’t have to.
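As a toy illustration of that simplest method (in Python, with a function name of my choosing; this is not the actual SAT@home code), cutting on k variables just means generating 2^k subproblems, each fixing one assignment of the chosen variables:

```python
from itertools import product

def split_on_variables(clauses, split_vars):
    """Cut a CNF (a list of clauses, each a list of DIMACS-style
    non-zero integer literals) into 2^k subproblems by fixing every
    possible assignment of the k chosen variables."""
    subproblems = []
    for signs in product((1, -1), repeat=len(split_vars)):
        # each subproblem gets one extra unit clause per split variable
        units = [[s * v] for s, v in zip(signs, split_vars)]
        subproblems.append(clauses + units)
    return subproblems

# Tiny demo: split the CNF {x1 V x2, -x1 V x3} on variable x1
subs = split_on_variables([[1, 2], [-1, 3]], [1])
print(len(subs))  # 2: one subproblem with x1=True, one with x1=False
```

The subproblems are independent, which is exactly what makes this scheme so easy to distribute over BOINC clients; the hard questions (which variables to split on, how many) are where the treatises would come in.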

In the long run, I guess these setups will get more and more complicated, combining many techniques: cutting on variables, clause sharing with a multitude of clause usefulness heuristics, sharing parameter usefulness statistics, and so on. Work units themselves could eventually be diversified: some doing ‘simple’ search without resolution, some doing only resolution (and clause cleaning?), some doing only (specific) simplification algorithms such as subsumption or strengthening, and some doing data analysis on the current state of the problem, such as reachability analysis or counting the number of gates or other higher-level elements (adders, for example) in the problem. In other words, there are many ways to improve this. Maybe a common interface to such a system could even be laid down, where different SAT solvers could be plugged in and swapped at will.

A friend of mine once wrote me a mail describing his work, which essentially talks about paying for a webpage not by looking at adverts, but by running a JavaScript program. The idea is, of course, that we could write a SAT solver in JavaScript and make people run it by putting it on a well-visited webpage. The server would then distribute SAT problems to these scripts, which would send some computed data back (like learnt clauses) while the person is surfing the page. Cheap (maybe even free) and essentially unlimited CPU time for hard problems. We could use it to answer some fun math questions, or break some cryptographic systems, for example… we just have to find someone crazy enough to write a SAT solver in JavaScript! :)

CCC Camp’11

In case you’ve missed it, the CCC Camp was a great opportunity to meet people, both those working in security and otherwise. I even met a set of very kind Taiwanese researchers working on SAT and Gröbner bases, who recommended this paper to me. I think it’s the most interesting SAT paper I have read in the past year: if you haven’t had the chance to read it, I highly recommend it.

We at SRLabs have made two releases during this camp, one that breaks GPRS encryption, and one that breaks smart card ROM encryption. I was involved with the first release, essentially working on the crypto part. In case you are interested in the videos, the one on GPRS is uploaded here, and the one on smart card ROM encryption is here. This reminds me of something: the videos from the MIT SAT/SMT Summer School are missing :( Well, given my fail there, maybe that’s a good thing :)

Failed literal probing and UIP

I have just realised that CryptoMiniSat, having won a number of medals, does one of the most basic things, failed literal probing, all wrong. Let me tell you why. In essence, failed literal probing is trivial. We enqueue a fact, such as a, and then propagate it. If this fails, i.e. if two opposing facts such as g and -g are derived, we know that a cannot be true, so we set -a. Easy. Or maybe not so easy.

The devil is in the details, so let’s see how we derived both g and -g from a. Let’s assume that we have the following set of binary clauses:
-a V b
-b V c
-b V d
-d V e
-d V f
-e V g
-f V -g

which, from the point of view of a, is best described by the graph:

Propagating "a", deriving both "g" and "-g"
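As a toy illustration (the Python function and the encoding a=1, …, g=7 are mine, not CryptoMiniSat code), failed literal probing restricted to binary clauses can be sketched like this:

```python
from collections import defaultdict

def probe(binary_clauses, lit):
    """Failed literal probing restricted to binary clauses.
    A clause (x V y) gives the implications -x -> y and -y -> x.
    Returns (failed, visited): whether enqueueing `lit` derives two
    opposing literals, plus the set of literals it reached."""
    implies = defaultdict(list)
    for x, y in binary_clauses:
        implies[-x].append(y)
        implies[-y].append(x)
    visited = {lit}
    stack = [lit]
    while stack:
        cur = stack.pop()
        for nxt in implies[cur]:
            if -nxt in visited:
                return True, visited  # both g and -g derived: `lit` fails
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
    return False, visited

# The clause set above, encoded as a=1, b=2, ..., g=7
clauses = [(-1, 2), (-2, 3), (-2, 4), (-4, 5), (-4, 6), (-5, 7), (-6, -7)]
print(probe(clauses, 1)[0])  # True: probing "a" fails, so we can set -a
```

The visited set is returned as well, since it is exactly the set of literals whose own probes could later be skipped, as long as this probe didn’t fail.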

The problem is that if we only derive -a from this graph, we end up with just that one literal set, because -a doesn’t propagate anything on these clauses. This is quite sad because, in fact, we could derive something stronger. From the graph it is evident that setting d would also have failed: the graph would simply have its upper part cut away completely, but the lower part, including the derivation of both g and -g, would still stand:

Deriving both "g" and "-g" from "d"

What is special about node d? Well, it’s where the 1st UIP, the first unique implication point, lies. And it is quite simple to see that -d is in fact the strongest thing we can derive from this graph. It’s much stronger than simply -a, because setting -d propagates on the clauses, giving -b and -a, i.e. setting three variables instead of one, including -a. An interesting observation is the following: deriving -b is the 2nd UIP, and deriving -a is the last UIP. In other words, at least in this most simple case, the 1st UIP is in fact the strongest, and the 2nd, 3rd, …, last UIPs are less strong, in strict order.
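As a toy illustration of this observation, here is a quadratic-time Python sketch (names are mine; real solvers find the 1st UIP during conflict analysis, not like this) that picks out d as the strongest literal. It finds the dominators of the conflict, i.e. the literals lying on every path from the probed literal to both g and -g, and takes the deepest one:

```python
from collections import defaultdict, deque

def build_implications(binary_clauses):
    """(x V y) gives the implications -x -> y and -y -> x."""
    implies = defaultdict(list)
    for x, y in binary_clauses:
        implies[-x].append(y)
        implies[-y].append(x)
    return implies

def reach(implies, root, removed=None):
    """Literals reachable from root, optionally skipping one node."""
    seen, stack = set(), [root]
    while stack:
        cur = stack.pop()
        if cur in seen or cur == removed:
            continue
        seen.add(cur)
        stack.extend(implies[cur])
    return seen

def first_uip(implies, root, conf):
    """Given that both `conf` and `-conf` are derived from `root`,
    return the deepest literal lying on every path from root to conf
    and on every path from root to -conf: the 1st UIP."""
    doms = [x for x in reach(implies, root)
            if conf not in reach(implies, root, removed=x)
            and -conf not in reach(implies, root, removed=x)]
    # BFS depth from root; the dominator furthest from root is the 1st UIP
    depth = {root: 0}
    queue = deque([root])
    while queue:
        cur = queue.popleft()
        for nxt in implies[cur]:
            if nxt not in depth:
                depth[nxt] = depth[cur] + 1
                queue.append(nxt)
    return max(doms, key=depth.get)

clauses = [(-1, 2), (-2, 3), (-2, 4), (-4, 5), (-4, 6), (-5, 7), (-6, -7)]
implies = build_implications(clauses)
print(first_uip(implies, 1, 7))  # 4, i.e. "d": -d is the strongest derivation
```

Removing a node and re-running reachability is a hopelessly naive dominator test, but it makes the definition obvious: here the dominators are a, b and d (the last, 2nd and 1st UIPs), and d is the deepest.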

Let me remark on one more thing about failed literal probing. Once failed literal probing has been done on literal x and it visited the node set N, there is no need to do failed literal probing on any literal in N, since those probes cannot possibly fail. Although the failing of a literal can have consequences on the failing of other literals, if we ignore this side-effect, we can speed up failed literal probing by marking literals already visited, and only carrying out failed literal probing on the ones that haven’t been marked. This is really trivial, but I haven’t been using it :S

I am quite sure that some advanced SAT solvers (such as lingeling) do all of the above things right. It’s probably only CryptoMiniSat that fails miserably :)

Note: there is a subtle bug with marking literals visited as “OK”. If two different subgraphs could fail, but we abort on the first one, then we might mark a literal OK when in fact it isn’t. It is therefore easiest not to mark any literals if the probe failed (which is very rare).

Note to self: higher level autarkies

While reading this thesis, I have had a thought about autarkies. Let me first define what an autarky in SAT solving is:

A partial assignment phi is called a weak autarky for F if “phi*F is a subset of F” holds, while phi is an autarky for F if phi is a weak autarky for all sub-clause-sets F' of F. (Handbook of Satisfiability, p. 355)

What does this mean? Well, it means that in the clause-set {x V y, x}, the assignment y=False is a weak autarky, because assigning y to False leads to a clause that is already in the clause set (the clause x), while the assignment x=True is a (normal) autarky, because it satisfies all clauses that x is in. The point of autarkies is that we can remove clauses from the clause set by assigning values to a set of variables, while not changing the (un)satisfiability of the original problem. In other words, autarkies are a cheap way of reducing the problem size. The only problem is that they seem to be relatively expensive to search for, so conflict-driven SAT solvers don’t normally look for them (but some lookahead solvers, such as march by Marijn Heule, do).
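To make the definition concrete, here is a small Python sketch (the function name and the integer clause encoding are mine) that checks the weak-autarky condition "phi*F is a subset of F":

```python
def is_weak_autarky(phi, clauses):
    """phi is a dict mapping variable -> bool; clauses is a list of
    integer-literal lists. phi*F keeps the clauses phi does not
    satisfy, with phi-falsified literals removed; phi is a weak
    autarky if each such reduced clause already appears in F."""
    F = {frozenset(c) for c in clauses}
    for c in clauses:
        if any(phi.get(abs(l)) == (l > 0) for l in c if abs(l) in phi):
            continue  # satisfied by phi, so not part of phi*F
        reduced = frozenset(l for l in c if abs(l) not in phi)
        if reduced not in F:
            return False
    return True

# The example from the post, with x=1, y=2:
print(is_weak_autarky({2: False}, [[1, 2], [1]]))  # True: y=False is a weak autarky
print(is_weak_autarky({1: True},  [[1, 2], [1]]))  # True: x=True satisfies all of x's clauses
```

Checking the full (normal) autarky condition would mean running this on every sub-clause-set, which hints at why searching for autarkies is expensive.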

So, what is the idea? Well, I think we can have autarkies for equivalent literals, too. Here is an example CNF:

 a V d V -f
-b V d V -f
-a V e
 b V e

setting a = -b will not change the satisfiability of the problem.

We can add any number of clause pairs of the form X V Y, where Y is any set of literals not in {a, -a, b, -b}, and X is (-)a in the first clause of the pair and (-)b in the second. Further, one of the two variables, say a, can appear in any clause at will (in either polarity), though variable b can then only appear in clauses defined by the clause pairs. Example:

 a V  d V -f
-b V  d V -f
-a V  e
 b V  e
 a V  c V -h
 a V -f

An extension that is pretty simple from here, is that we can even have clauses whose Y part is somewhat different: some literals can be missing, though again in a controlled way. For example:

 a V d
-b V d V -f
-a V e
 b V e

is possible. It would restrict b a bit more than necessary, but if a is the only one of the pair missing literals, we are still OK, I believe.

Finally, let me give an example of what these higher-level autarkies are capable of. Let’s assume we have the clause set:

 a V  d
-b V  d V -f
-a V  e
 b V  e
 a V  c V -h
 a V -f
 a V  b V  f V -g

setting a=-b now simplifies this to:

 a V  d
-a V  e
 a V  c V -h
 a V -f

which is equisatisfiable to the first one. Further, if the latter is satisfiable, computing the satisfying solution to the former given a solution to the latter is trivial.
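As a toy illustration (in Python, with my own function name and the integer encoding a=1, …, h=8; this is not what any real solver does internally), the whole simplification amounts to: substitute b = -a, then drop tautologies, duplicate clauses and subsumed clauses:

```python
def substitute_equivalence(clauses, b, repl):
    """Apply the equivalence b = repl (here b = -a) to a CNF given as
    a list of integer-literal lists, then drop tautologies, duplicate
    clauses and subsumed clauses."""
    seen = set()
    for clause in clauses:
        c = frozenset(repl if l == b else (-repl if l == -b else l)
                      for l in clause)
        if any(-l in c for l in c):
            continue  # tautology such as (a V -a V ...): always satisfied
        seen.add(c)   # duplicate clauses collapse automatically in the set
    kept = [c for c in seen if not any(d < c for d in seen)]  # subsumption
    return sorted(sorted(c, key=abs) for c in kept)

# The 7-clause example above, with a=1, b=2, ..., h=8;
# applying a = -b means replacing literal 2 with -1:
cnf = [[1, 4], [-2, 4, -6], [-1, 5], [2, 5],
       [1, 3, -8], [1, -6], [1, 2, 6, -7]]
for c in substitute_equivalence(cnf, 2, -1):
    print(c)  # the four clauses a V d, -a V e, a V c V -h, a V -f remain
```

On the example, the clause a V b V f V -g becomes a tautology, b V e becomes a duplicate of -a V e, and -b V d V -f becomes a V d V -f, which is subsumed by a V d: exactly the four-clause result above.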

Learnt clause graphs

“Stay hungry. Stay foolish.”

In my last post, I talked a little bit about learnt clause size distributions (histograms, really), and after a long chat with Vegard Nossum, we came up with some nice 3D graphs of learnt clause size distributions et al. It’s basically a lot of brainstorming that is quite entertaining, I think. First of all, instead of waiting for Godot (i.e. someone to tell me that it has been done already), we got down and dirty with time-sliced histograms, so we can see how a specific problem gets solved. Again, I won’t go into much analysis, as I think it’s quite detrimental to novel ways of interpretation; instead, I will simply display the graphs. All simplification algorithms have been switched off for these graphs, though interestingly, they look pretty similar even if simplifications are turned on. Before we start, let me note that all tools to generate these simply and easily will be available in CryptoMS-2.9.2 (and are already available from the git repository of course — nothing I do on CMS is secret)
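The slicing itself is simple; here is a Python sketch of building such time-sliced histograms (the function name is mine, and the random data is just a stand-in for a real solver log):

```python
from collections import Counter
import random

def time_sliced_histograms(clause_sizes, slice_len=3000):
    """clause_sizes: the size of each learnt clause, in the order the
    conflicts happened. Returns one size-histogram per slice of
    `slice_len` conflicts -- the data behind the 3D graphs."""
    slices = []
    for start in range(0, len(clause_sizes), slice_len):
        slices.append(Counter(clause_sizes[start:start + slice_len]))
    return slices

# Fake data standing in for a solver log: 9'000 conflicts, 3 slices
random.seed(0)
sizes = [random.randint(2, 20) for _ in range(9000)]
hists = time_sliced_histograms(sizes)
print(len(hists))  # 3 slices of 3'000 conflicts each
```

Each histogram becomes one slice along the Z axis of the graphs below; clause size runs along X and the per-size counts along Y.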

Grain

Let’s first start with the Grain cipher — we have seen it before and it had quite a peculiar shape. The learnt clause size distribution for this cipher is the following:

The Z axis contains the slices: one slice is 3’000 conflicts, and time flows from the back to the front (we didn’t solve the instance). The Y axis is the number of clauses, and the X axis is the size of the clause. Notice the bump around clause size ~130. An interesting graph would now be to compare this with the glues of these clauses… so it goes:

Pretty similar, actually, except that the bump is around 110. Now, this bump could be fully useless. Or the holy grail. We don’t know… until we see some graphs ;) The graph for propagations caused by learnt clauses of specific sizes is the following:

The Z axis here is a bit different: it accumulates 3’000 propagations caused by learnt clauses, and then generates a slice. That’s the reason there are 600 of them in contrast to the one above, where there are 33 (for ~100’000 conflicts). Notice that most of the propagations are caused by small clauses. The same for glues:

Now the number of conflicts caused by learnt clauses of specific sizes:

Notice the slight colour change on this graph at around 130 — compared to the previous 2 graphs, conflicts are caused by a relatively good number of longer clauses, too. Let’s do the same with glues now:

Colour change (and a slightly visible bump, too) at around 110. Actually, it might be best to put the Y axis into logscale… next time.

ACG-20-10p0

Now that we have gone through Grain, let’s try another problem. For example the ACG-20-10p0 problem from SAT Comp’09. The clause size distributions are now:

And the same for glues:

And the propagations according to their size:

Notice that the number of slices is quite large. Now the conflicts caused by learnt clauses, according to their size:

Notice that compared to the previous graph, the distribution is shifted a lot more towards longer clauses, similarly to the Grain example. Also notice that there were 25 slices in total, and 12 slices’ worth of conflicts were caused by learnt clauses — in other words, about 50% of all conflicts are caused by learnt clauses. I was aware of this behaviour because CryptoMS 1.0 had a statistical interface, where this was quite obvious to see — unfortunately, that statistical interface is gone now, but may be back in one form or another in CryptoMS 3.0.0. For Grain, we had 33 slices, and 8 slices’ worth were caused by learnt clauses, which is not 50% but still pretty high, considering that the number of learnt clauses is usually far less than the number of original clauses.

A lot of thanks go to Vegard Nossum for all of this. He was the one who inquired about producing these using the new CryptoMiniSat, and once I put the ~5 lines of code into the solver and wrote a hackish awk script, he wrote the perl script to parse the data up, and showed me the beautiful results. I just couldn’t wait to share them with you :)