A great essay on security in the cyberworld

I just read this paper on cybersecurity by Daniel E. Geer, and I was very impressed. Unfortunately I hadn’t heard of the author before, but I used to read Bruce Schneier’s blog regularly, and this essay puts basically the same ideas into perspective.

I found the essay very interesting and very thought-provoking. It shows well that security in the cyber (or cyber-connected) world is very difficult to attain: there are no simple solutions. I personally think that security in the real world (not only the cyber world) is also very difficult to achieve, and unfortunately politicians tend to take the easy way out, bowing to the will of the people by implementing “security measures” that in the end help little (if at all) in terms of security, but reassure the public. An example of this is the ban on liquids on airplanes while cockpits in Europe are still not reinforced: the former achieves little but is very visible (and so is mostly security theater), while the latter would be much less visible but much more effective. It is also worth noting that the former takes a lot of manpower to implement and inconveniences travellers (costing their time, too), while the latter would be relatively cheap.

The mentality that leads us to believe bombing is the greater threat is that most people expect planes to be blown up, while hijacking is mostly an afterthought. This serious mistake is probably a psychological effect: people tend to remember visually spectacular incidents better, and Hollywood has overused the “blow-up” effect, etching it into the minds of most people, even decision-makers. However, it is important to remember that most serious incidents in airports and on airplanes were carried out with weapons other than bombs: to take a trivial example, none of the planes used on 9/11 were bombed. As a side note, reinforced cockpits would have prevented all of 9/11, yet European cockpits are still not reinforced while my toothpaste is always taken away. A serious defect, I say.

The dates of the SAT Race 2010 have been announced!

The deadline is the end of April, so I have to get CryptoMiniSat bug-free by then.

I have decided to merge the code into STP, the Simple Theorem Prover made by some MIT researchers, among them Vijay Ganesh, with whom I worked quite a lot during the final months of 2009. Hopefully, with the help of the STP team, CryptoMiniSat will be bug-free by the end of this month, and through testing in STP the fine-tuning of options will be carried out. There are 16 new modules in the solver, all with heuristic cut-offs that have to be tuned. Naturally, I tried to use sensible defaults, but problems vary widely, and different problems need different cut-offs. For instance, if the number of clauses is very low, even O(m^2) algorithms can be executed, while if the number of clauses is extremely large, e.g. 100,000, it might take too much time to execute even O(m*log(m)) algorithms.
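As an illustration of what such a cut-off looks like (a hypothetical sketch with made-up names and thresholds, not CryptoMiniSat’s actual code), each module can simply be gated on the size of the instance:

    #include <cstddef>

    void runQuadraticVersion();    // hypothetical O(m^2) pass
    void runQuasilinearVersion();  // hypothetical O(m*log(m)) pass

    // Placeholder thresholds: the whole point is that these need tuning.
    const std::size_t MAX_CLAUSES_QUADRATIC   = 50000;
    const std::size_t MAX_CLAUSES_QUASILINEAR = 500000;

    // Run the most thorough algorithm the instance size allows.
    void runSimplificationModule(std::size_t numClauses) {
        if (numClauses <= MAX_CLAUSES_QUADRATIC) {
            runQuadraticVersion();       // affordable on small instances
        } else if (numClauses <= MAX_CLAUSES_QUASILINEAR) {
            runQuasilinearVersion();     // fall back to the lighter pass
        }
        // otherwise, skip this module on the current instance
    }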

If you are interested in the new CryptoMiniSat, all you need to do is follow the developments in STP. I will also make the unstable executables available from this blog, through links, whenever I have a new version. Fingers crossed for a correct and fast CryptoMiniSat!

On research in general

I am not sure I am qualified to talk about research in general, but I will try my best.

To me, it seems that the research community of any given topic is pretty small. The reasons for this are manifold. Firstly, I suspect that the number of qualified individuals willing to work for relatively low pay (but with many benefits, like a flexible schedule, less stress, etc.) is relatively small. Secondly, any given topic usually reaches a maturity level where the subdomains are very clear, and it is very difficult to say anything reasonably good about a subdomain one is not acquainted with. For instance, Knuth’s books are brilliant, but even he (someone who is like a semi-god in computer science) acknowledges that he simply cannot be an authority on all the topics covered in the 4th volume of his series. (BTW, I just bought Vol4F0 and Vol4F1, and am now waiting for Amazon to ship them.)

Since the research community is small, everyone gets to know one another. This is great since it helps collaboration, but it can also work against newcomers (PhD students) and against people not well-acquainted with the field who genuinely have good ideas they wish to publish. I guess it’s a difficult integration process that gets all the more difficult because it rarely happens that someone can simply stay in the same specific subfield for an entire research career. And even if someone stays in the same field, the field may change so much over time, attracting researchers from many distinct research domains, that even an “old boy” can feel detached from his/her own topic after a while.

Research that deals with practical things moves even faster than other kinds of research. Just a couple of years ago, research on botnets didn’t exist, yet now it is a very rapidly evolving research domain. SAT solvers – I believe – also fall into the category of practical research. Year after year the solvers evolve so much that trying to compare two solvers whose release dates differ by only 1-2 years seems nonsensical. This is great because there is a lot of “buzz” going on, but at the same time it feels like a race against time: inspiring at first, but tiring in the end.

Very theoretical domains rarely see this speed of change. For instance, last year at the SAT’09 conference, I saw Stephen Cook, the person who basically invented the notion of NP-completeness (I felt honoured just to be in the same room with him, I must say). Although SAT has changed a lot in the past years (many new applications, e.g. cryptography), the fundamental problem didn’t change – therefore, he never had the ground taken out from under him. The ground has surely moved, but he still masters it, I am sure.

Oh well, legends. I met Shamir twice. Very kind person. Also, I met Daniel J. Bernstein at EUROCRYPT’09. He looked somewhat shorter and younger than I imagined, and I liked his openness. I met Lenstra at CCC’08. I was so shocked it was him, I couldn’t even say hello – very embarrassing. He was very friendly, and seemed much younger than his official age would suggest. I really want to meet Knuth, but I guess that might have to wait… forever, maybe. Unless I somehow manage to visit Stanford one day, in which case I will definitely show up at one of his classes. They say he is a terrific speaker.

Why CryptoMiniSat can’t use MATLAB to do Gaussian elimination

Some people, who may not have thought through the problem of implementing Gaussian elimination inside a SAT solver, seem to think that it’s just a matter of pulling a MATLAB function into the solver, and the job is done. Let me explain why I think this is not the case.

Firstly, we don’t wish to execute Gaussian elimination simply before the solving; instead, we wish to execute it during the solving. This means the matrix’s columns need to change often: as we move down the search tree, some variables get fixed, so their columns need to be cleared and the augmented column updated. But how would a MATLAB function know which column was changed? These functions are made to work on any given matrix, churn through it, and finish with a result. However, in many cases the change (= delta) between two matrices is minimal (e.g. only the 3rd column from the right was changed). In this case, the MATLAB routine will nevertheless start updating the matrix from the leftmost column, taking far more time than an algorithm that knows the delta was small.
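To make the contrast concrete, here is a minimal sketch (my own illustrative code, not CryptoMiniSat’s actual data structures) of the incremental update the solver wants: when a variable gets fixed, only its column is touched, and its value is folded into the augmented column.

    #include <vector>
    #include <cstdint>
    #include <cstddef>

    // Illustrative sketch: each row of the GF(2) matrix is a packed
    // bitset of coefficients plus an augmented bit (the right-hand side).
    struct Row {
        std::vector<uint64_t> coeffs; // packed column bits
        bool rhs;                     // augmented column

        bool getBit(std::size_t col) const {
            return (coeffs[col / 64] >> (col % 64)) & 1;
        }
        void clearBit(std::size_t col) {
            coeffs[col / 64] &= ~(uint64_t(1) << (col % 64));
        }
    };

    // When the solver fixes variable `col` to `value`, we do NOT rebuild
    // the matrix: we clear that single column, folding the fixed value
    // into the augmented column of every row that contained it. A generic
    // library routine, given only the new matrix, would churn through all
    // columns from the left.
    void onVariableFixed(std::vector<Row>& matrix, std::size_t col, bool value) {
        for (Row& row : matrix) {
            if (row.getBit(col)) {
                row.clearBit(col);
                row.rhs ^= value; // move the fixed term to the right-hand side
            }
        }
    }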

Secondly, let’s assume that a value like “x1=true” has been found by the MATLAB function. Since we don’t know where this information came from, there is only one way of adding it: putting it into the propagation queue. This, however, would be a grave mistake. By not giving the solver a hint where this propagation came from, we prevent it from using this information during conflict generation, and we lose most of the benefits. In case a conflict is found by our MATLAB function, the problem is even worse: what caused the conflict? We simply don’t know. We could send the solver back one decision level and hope for the best, but non-chronological backjumping is one of the main reasons SAT solvers perform so well. On the other hand, if we keep another matrix, not updated with the current assignments but updated with all row-xor and row-swap operations (as in CryptoMiniSat2), then we have all this information at our disposal, and the integration of Gaussian elimination into the SAT solving process is correct.
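A sketch of how the second matrix makes this possible (hypothetical types and names, not CryptoMiniSat’s actual API): since the twin row is kept in step with every row-xor and row-swap but never sees assignments, it still lists the original variables of the row, and those variables form the reason clause for the propagation.

    #include <vector>

    // Hypothetical types for illustration only.
    using Var = int;
    struct Lit { Var var; bool negated; };
    enum class Value { True, False, Unset };

    // When Gaussian elimination deduces `propagated`, every other variable
    // in the twin row is already assigned; their currently-false literals
    // explain the propagation, so conflict analysis can treat the result
    // like an ordinary clause reason.
    std::vector<Lit> reasonFromTwinRow(const std::vector<Var>& twinRowVars,
                                       const std::vector<Value>& assigns,
                                       Lit propagated)
    {
        std::vector<Lit> reason;
        for (Var v : twinRowVars) {
            if (v == propagated.var) continue;
            // add the literal that is false under the current assignment
            reason.push_back(Lit{v, assigns[v] == Value::True});
        }
        return reason;
    }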

These two reasons should be sufficient to see that MATLAB, or really any mathematical package that implements Gaussian elimination, is not useful for CryptoMiniSat. Yes, some of their “tricks” could be used, and I think some already are.

PS: As a side-note, many have told me that the matrices are sparse, and so I should use a sparse matrix data structure. Although the matrices are indeed sparse, they are also minuscule. On very small matrices (<200-300 columns) there is simply no point in doing sparse matrix elimination. Not to mention that since two different matrices need to be stored and handled, it is impossible to find a pivot that is optimal for both, so the density of at least one of the matrices must grow faster than optimal, leading to an early switch to a dense matrix representation.
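To see why dense representation wins at this size: a 300-column GF(2) row packs into ceil(300/64) = 5 machine words, so the core operation of elimination, xoring one row into another, is just a handful of instructions (illustrative sketch, not CryptoMiniSat’s actual code):

    #include <cstdint>
    #include <cstddef>

    // Dense GF(2) row-xor: dst ^= src. For a 300-column matrix this loop
    // runs 5 times; sparse bookkeeping (index lists, fill-in handling)
    // would cost more than it saves on matrices this small.
    void xorRows(uint64_t* dst, const uint64_t* src, std::size_t numWords) {
        for (std::size_t i = 0; i < numWords; ++i)
            dst[i] ^= src[i];
    }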

Could monomials be handled natively by SAT solvers?

I recently got a question that intrigued me:

I am new to this SAT solving world but I was wondering whether you thought considerable speedups were possible for crypto type problems (multivariate polys over GF(2)) by simply never converting the problem to cnf at all and thereby avoiding the combinatorial explosion that results in the conversion process. That is using the original xor formulation.

First of all, the question is a follow-up to xor-clauses, which implement XORs natively. Using them avoids a number of problems related to the increase in the number of variables. So why not implement monomials (i.e. “a*b” or “a*b*c”, where “*” is binary AND and the variables are binary) natively? They are the only thing left to do. Personally, I am not overly optimistic about them, though. Let me go through some of my reasons here.
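For context, this is the textbook conversion the question wants to avoid: each monomial gets a fresh variable t with t <-> (a AND b), encoded as three clauses (a standard Tseitin-style encoding, written here in my own illustrative code with DIMACS-like literals, not CryptoMiniSat’s):

    #include <vector>

    // Literal convention for this example: a positive int is a variable,
    // a negative int is its negation (DIMACS style).
    using Clause = std::vector<int>;

    // Standard Tseitin-style encoding of t <-> (a AND b). The fresh
    // variable t stands for the monomial a*b, at the price of one new
    // variable and three clauses. Note that two of the three are binary.
    std::vector<Clause> encodeAndMonomial(int t, int a, int b) {
        return {
            {-t, a},      // t -> a
            {-t, b},      // t -> b
            {t, -a, -b},  // (a AND b) -> t
        };
    }

A degree-3 monomial a*b*c is handled the same way, with clauses (-t, a), (-t, b), (-t, c) and (t, -a, -b, -c); the fresh variables introduced per monomial are exactly the variable increase mentioned above.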

Firstly, the “exponential explosion” mentioned in the question is in fact much less severe than people tend to think. The reason is that the intelligent variable activity heuristics, unit propagation, and conflict generation tend to take care of a lot of the potential problems. Since the propagation of one variable entails the propagation of many others (it varies; for crypto, around 100), there is no real explosion: instead of 2^n, more like 2^(n/100) combinations need to be explored. For example, with n = 1000 variables that is roughly 2^10 rather than 2^1000 branches. This argument takes away some of the potential benefits that native monomials could bring.

The real problem, though, is the following: by moving monomials into CryptoMiniSat and thus potentially speeding up the solving, conflict generation could become much more complex. If moving to an internal monomial representation entails making a mess of conflict generation, then using monomials internally may actually make the solving slower.

Another reason native monomials may not speed up solving much is that many of the clauses inserted when converting monomials to CNF are binary clauses, which are extremely well dealt with in the CNF world; it would be hard to do any better.
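To see why binary clauses are so cheap, consider how solvers commonly store them: a binary clause (~a OR b) can be kept as a direct implication in a per-literal list, so propagating over it is one contiguous scan with no clause memory to dereference (an illustrative sketch, not any particular solver’s code):

    #include <vector>

    // Illustrative only. A literal is an int: variable v as 2*v, its
    // negation as 2*v+1 (the usual packed representation).
    using Lit = int;

    // binImplications[l] lists the literals directly implied once l is
    // true; the binary clause (~a OR b) is stored as a -> b and ~b -> ~a.
    std::vector<std::vector<Lit>> binImplications;

    // Propagating literal `l` over all its binary clauses: a single
    // cache-friendly vector scan, with no pointer-chasing into clauses.
    void propagateBinary(Lit l, std::vector<Lit>& toEnqueue) {
        for (Lit implied : binImplications[l]) {
            toEnqueue.push_back(implied); // conflict checks omitted here
        }
    }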

As a last, but very minor, point, using monomials would increase the complexity of the program, which would mean not only a lot of man-hours lost to debugging, but also a loss of performance due to a (probably non-negligible) increase in instruction cache misses.

Oh well, those are my reasons. I would be interested to hear comments on them, though.