On benchmark randomization

As many of you have heard, the SAT Competition for this year has been announced. You can send in your benchmarks between the 12th and the 22nd of April, so get started. I have a bunch of benchmarks that I already submitted about two years ago and am still waiting for any reply from those organizers — but the organizers are different this year, so fingers crossed.

What I want to talk about today is benchmark randomization. This is a very touchy topic, but I fear that it's touchy for the wrong reasons, and so I think it's important to talk about it in detail.

What is benchmark randomization?

Benchmark randomization means shuffling a submitted benchmark around a bit. There are many ways to shuffle a problem, which I will discuss below, but the point is that the problem described by the benchmark CNF should not be changed, or only changed in such a minor way that everyone agrees it doesn't affect the core problem itself as described by the CNF.

Why do we need shuffling?

We need shuffling because, simply put, there aren't enough good benchmarks, and so the benchmarks of yesteryear (and the year before, and before, and…) re-appear often. This would be OK if SAT solvers couldn't be tuned to solve specific problems faster. Note that I am not suggesting that SAT solvers are intentionally manipulated to solve specific problems faster by unscrupulous researchers. Instead, the following happens.

Unintentional random seed improvements

Researchers test the performance of their SAT solvers on specific instances and then tune their solvers, testing the performance again and again on the same instances to check whether it has improved. Logically, this is the best way to test and improve performance: use the same well-defined test set all the time for meaningful comparison. Since researchers want to use the instances they believe represent the current use-cases of SAT solvers, they naturally use the instances from SAT competitions, since those are representative. I did and still do the same.

So, researchers add their idea to a SAT solver and test it. If the idea does not improve things, some change is made and it is tested again. Modern CDCL SAT solvers behave quite randomly, and any change in the source code (tuning a parameter, for example) changes that behaviour quite significantly. And since the set of problems tested on is fixed, there is a chance that more problems will be solved simply because of this changed behaviour. If more are solved, the researcher might interpret this as a general improvement, not one specific to the problem set. However, while the improvement may very well be partly generic, it is also partly specific to those instances.

The above suggests that the randomness of the SAT solver is, completely unintentionally, tuned to specific problems — a subset of which will re-appear in next year's competition.

Easy fixes

Since there aren't enough benchmark problems, and in particular some benchmark types are rare, I suggest fixing the unintentional tuning of solvers to specific problems by changing the benchmarks in minor ways. Here is a list, with an explanation of why I think each manipulation is acceptable:

  1. Propagate variables. Unit clauses are often part of benchmarks. Propagating some of them, some recursively, gives quite a bit of problem-space variation. Propagation is performed by every CDCL SAT solver, and I think many would be surprised if it didn't help even solvers that work differently from current SAT solvers. Agreeing on performing partial propagation shouldn't be too difficult.
  2. Renumber variables. For some variable X that is not used (or is fixed to a value that has been propagated), every variable higher than X is decremented by one, and the CNF header is fixed to reflect this change. Such a minor renumbering may be accepted by every researcher as something that doesn't change the problem or its structure. Note that if partial propagation is performed, there should be quite a number of variables that can be removed. Renumbering some of them, but not others, is a way to shuffle the problem (a sketch of fixes (1) and (2) follows this list). A more radical way of renumbering would be to shuffle the variables completely, but that would change the way the problem is described quite radically, so some would rightly object, and it isn't necessary anyway.
  3. Replace equivalent literals. Perform strongly connected component analysis and replace equivalent literals. This has been shown to significantly improve performance and I have never seen a case where it doesn't. Since equivalent literal replacement can be performed with a lot of freedom, this gives quite a bit of shuffling space. For example, if v1=v2=v3, then any of v1, v2, v3 can be the one that replaces the rest in the CNF. Picking one at random is a way to shuffle the instance.
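
To make fixes (1) and (2) more concrete, below is a minimal sketch of partial unit propagation followed by variable renumbering, operating on a CNF held as DIMACS-style signed integers. It is illustrative only; the function names and the way the random choices are made are my own invention, not part of any official competition tooling.

#include <cstdlib>
#include <map>
#include <set>
#include <vector>

typedef std::vector<std::vector<int> > CNF;  // clauses as DIMACS-style literals

// Fix (1): propagate roughly 'fraction' of the unit clauses (and, over repeated
// sweeps, some of the units they create). Satisfied clauses are dropped and
// falsified literals removed. If a clause becomes empty the instance is UNSAT;
// a real tool would detect and report that.
void partialPropagate(CNF& cnf, double fraction, std::set<int>& fixedLits)
{
    bool changed = true;
    while (changed) {
        changed = false;
        CNF next;
        for (size_t i = 0; i < cnf.size(); i++) {
            const std::vector<int>& cl = cnf[i];
            if (cl.size() == 1 && (double)rand() / RAND_MAX < fraction) {
                fixedLits.insert(cl[0]);  // remember the assignment, drop the clause
                changed = true;
                continue;
            }
            std::vector<int> newCl;
            bool satisfied = false;
            for (size_t j = 0; j < cl.size(); j++) {
                if (fixedLits.count(cl[j])) { satisfied = true; break; }
                if (!fixedLits.count(-cl[j])) newCl.push_back(cl[j]);
            }
            if (!satisfied) next.push_back(newCl);
        }
        cnf = next;
    }
}

// Fix (2): squeeze out variables that no longer occur, keeping the relative
// order of the remaining ones, and return the new variable count for the
// fixed-up "p cnf <vars> <clauses>" header.
int renumber(CNF& cnf)
{
    std::set<int> used;
    for (size_t i = 0; i < cnf.size(); i++)
        for (size_t j = 0; j < cnf[i].size(); j++)
            used.insert(std::abs(cnf[i][j]));

    std::map<int, int> newName;
    int next = 1;
    for (std::set<int>::const_iterator it = used.begin(); it != used.end(); ++it)
        newName[*it] = next++;

    for (size_t i = 0; i < cnf.size(); i++)
        for (size_t j = 0; j < cnf[i].size(); j++) {
            const int lit = cnf[i][j];
            cnf[i][j] = (lit > 0 ? 1 : -1) * newName[std::abs(lit)];
        }
    return next - 1;
}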

There are other ways of shuffling, but either they change the instance too much (e.g. blocked clause removal), or they can be undone quite easily (e.g. shuffling the order of the clauses). In fact, (3) is, I think, already quite a touchy issue, but everyone could agree on (1) and (2). Neither requires the order of the literals or the order of the clauses to change — some clauses (e.g. unit ones) and literals (some of those that are set) would be removed, but that's all. The problem remains essentially unchanged, so much so that the original problem author would most probably still easily recognize it. However, it would be different from a SAT solver's point of view: these changes would change the effective random seed of the solver, forcing it to behave in a way that is less tuned to this specific problem instance.

Conclusion

SAT solvers are currently tuned too much to specific instances. This is not intentional on the researchers' part, but it still affects the results. To obtain better, less biased results we should shuffle the problem instances we have. Above, I suggested three ways to shuffle the instances such that most would agree they don't disturb or change the complexity of the underlying problem described by the instance. I hope that some of these suggestions will be employed, if not this year then for next year's SAT competition, so that we can reach better, more meaningful results.

Collecting solver data

Lately, I have been putting a lot of effort into collecting data about solver behaviour and dumping it on-the-fly to a MySQL database. Some of this data is then presented by a PHP & JavaScript interface, such as this one. The goal is to better understand how solving works, and thus to possibly come up with better strategies for solving. The requirements for the gathered data are: it must be fast to calculate, small to store, and useful.

Gathered data

There are some obvious statistics that one would like to collect, such as the number of conflicts, the number of propagations, conflicts per second, etc. — these are all gathered by almost every SAT solver out there. However, it would be interesting to have more. Here is what CryptoMiniSat now gathers and dumps using highly efficient prepared bulk insert statements, taking up at most 5% of the solving time.
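
As a rough illustration of what such a dump can look like, here is a minimal sketch of inserting one row of statistics through a prepared statement with the MySQL C API. The table name, column names and the set of values are invented for this example; the real code prepares its statements once and batches many rows per INSERT.

#include <mysql/mysql.h>   // header location may differ; see mysql_config --cflags
#include <cstdio>
#include <cstring>

// Insert one row of (made-up) restart statistics using a prepared statement.
bool dumpRestartStats(MYSQL* conn, long long conflicts, long long propagations, long long decisions)
{
    const char* query =
        "INSERT INTO restart_stats (conflicts, propagations, decisions) VALUES (?, ?, ?)";

    MYSQL_STMT* stmt = mysql_stmt_init(conn);
    if (!stmt) return false;
    if (mysql_stmt_prepare(stmt, query, strlen(query)) != 0) {
        fprintf(stderr, "prepare failed: %s\n", mysql_stmt_error(stmt));
        mysql_stmt_close(stmt);
        return false;
    }

    // Bind the three 64-bit integer values to the '?' placeholders.
    long long vals[3] = {conflicts, propagations, decisions};
    MYSQL_BIND bind[3];
    memset(bind, 0, sizeof(bind));
    for (int i = 0; i < 3; i++) {
        bind[i].buffer_type = MYSQL_TYPE_LONGLONG;
        bind[i].buffer = &vals[i];
    }

    const bool ok = mysql_stmt_bind_param(stmt, bind) == 0
        && mysql_stmt_execute(stmt) == 0;
    if (!ok) fprintf(stderr, "insert failed: %s\n", mysql_stmt_error(stmt));
    mysql_stmt_close(stmt);
    return ok;
}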

For each variable, dumped once in a while:

  • no. of times propagated false and true (separately)
  • no. of times decided false and true (sep.)
  • no. of times the polarity flipped
  • avg & variance of decision level
  • avg & variance of propagation level (a cheap way to track such running averages and variances is sketched after this list)
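
Several entries above (and in the lists below) are running averages and variances. One cheap way to keep such statistics without storing every sample is Welford's online algorithm; the sketch below shows the idea, though it is not necessarily how CryptoMiniSat tracks these internally.

// Running mean and variance via Welford's online algorithm: one small struct
// per tracked quantity, updated in O(1) per sample.
struct RunningStat {
    unsigned long long n;
    double mean;
    double m2;    // sum of squared deviations from the current mean

    RunningStat() : n(0), mean(0.0), m2(0.0) {}

    void push(double x) {
        n++;
        const double delta = x - mean;
        mean += delta / (double)n;
        m2 += delta * (x - mean);
    }

    double avg() const { return mean; }
    double var() const { return n > 1 ? m2 / (double)n : 0.0; }

    void clear() { n = 0; mean = 0.0; m2 = 0.0; }  // e.g. after dumping to the DB
};

// Example use: one RunningStat for the decision level and one for the
// propagation level of a variable; call push() whenever the variable is assigned.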

For each clause longer than 3 literals, dumped once in a while:

  • activity
  • conflict at which it was introduced (if learnt)
  • number of propagations made
  • number of conflicts made
  • number of times any of its literals was looked at during BCP
  • number of times it was looked at (dereferenced) during BCP
  • number of times it was resolved with during learnt clause generation
  • number of resolutions needed during its generation (if it is a learnt clause)

For each restart:

  • no. of reducible & irreducible (i.e. learnt & non-learnt) binary clauses
  • no. of red & irred tertiary clauses (separately)
  • no. of red & irred long clauses (sep)
  • avg, variance, min and max of learnt clause glues
  • avg, var, min, max of learnt clause sizes
  • avg, var, min, max of the number of resolutions for the 1st UIP
  • avg, var, min, max of branch depths
  • avg, var, min, max of backjump length (in the number of literals unassigned)
  • avg, var, min, max of backjump length (in the number of levels backjumped)
  • avg, var, min, max of how many times a conflict followed a conflict without decisions being made
  • avg, var, min, max of agility
  • no. of propagations by red & irred binary clauses
  • no. of props by red & irred tertiary clauses
  • no. of props by red & irred long clauses
  • no. of conflicts by red & irred binary clauses (separately)
  • no. of conflicts by red & irred tertiary clauses (sep)
  • no. of conflicts by red & irred long clauses (sep)
  • no. of learnt unit, binary, tertiary, and long clauses (sep)
  • avg, var, min, max of watchlist size traversed during BCP
  • time spent
  • no. of decisions
  • no. of propagations
  • no. of variables flipped
  • no. of variables set positive & negative (sep)
  • no. of variables currently unset
  • no. of variables currently replaced
  • no. of variables currently eliminated
  • no. of variables currently set

For each learnt clause database cleaning:

  • for reducible clauses removed: all the per-clause data listed above, summed up
  • for reducible clauses not removed: all the per-clause data listed above, summed up
  • no. of clauses removed
  • no. of clauses not removed
  • for all irreducible clauses (these are not touched): all the per-clause data listed above, summed up

For every N conflicts:

  • clause size distribution
  • clause glue distribution
  • clause size and glue scatter data

This is all, and not all of it is my own idea. Suggestions were made by many, some as long as a year ago: George Katsirelos, Luca Melette, Vegard Nossum, Valentin-Mayer Eichberger, Ben Schlabs, Said Jabbour and Niklas Sörensson. Naturally, none of these people necessarily approve of how I gather data or what I do with the data gathered, but they helped, so listing them is only fair.

What’s not yet or cannot be gathered

Two suggestions are still on the TODO list:

  • counting the number of conflicts that happen while a learnt clause is “locked”, i.e. while it has propagated a variable in the current search tree. This stat could tell us whether the clause seemed essential to the search or not. If a clause propagated once, at the bottom of the search tree, and the variable it propagated was quickly unset, that is not the same as the clause propagating at the top of the search tree with the variable it set essentially never being unset.
  • for each variable, counting the number of conflicts done while the variable was set. This is interesting, because if a variable was propagated only once, at top-level, it will seem like it was never set (set only once), yet it may have had a huge influence on the search tree and consequently on the learnt clauses.

Both of these require a change in the algorithm used to backjump, and although I am a bit worried about the performance impact, I will add them soon.

Unfortunately, the data about variables can’t really be dumped so often, because it’s huge for large problems with millions of variables. So I guess I will only dump that for the most active N variables, for some sane definition of “active”. Similarly, the data about individual clauses is not dumped, only in a summed-up way during database cleaning.

Suggestions?

In case you have any ideas about what else could be interesting to dump, please put them in a comment. The idea is to dump data that is cheap to compute and cheap to dump, yet interesting for some reason or another. I am prepared to add stuff to the datastructures, as you have probably guessed from the above. Yes, it makes the clause database less space-efficient, and so increases cache misses. But on large problems it's going to be a cache miss most of the time anyway, and a cache fetch brings in 128B of data, which should be plenty for both the stats and the clause. Similarly with variables. So, don't refrain from suggesting stuff that takes space. Also, weird stuff is interesting. The weirdest stat on the list above is definitely the “conflict after a conflict without decisions” one (suggested by Said Jabbour), which I would have guessed to be zero or near-zero, yet it is pretty high, in the 30+% range.
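
To give a feel for the space argument, here is a hypothetical clause header with a few embedded counters. This is not CryptoMiniSat's actual memory layout, just an illustration that a handful of 32-bit statistics still fits comfortably in the cache line(s) that BCP touches anyway when it dereferences the clause.

#include <stdint.h>

// Hypothetical per-clause statistics kept right next to the clause data.
struct ClauseStats {
    uint32_t propagations;       // times this clause propagated
    uint32_t conflicts;          // times it was the conflicting clause
    uint32_t lookedAt;           // times dereferenced during BCP
    uint32_t usedInResolution;   // times resolved with during learnt clause generation
};

// Hypothetical clause layout: a small header, 16 bytes of stats, then the
// literals, all allocated in one contiguous block.
struct Clause {
    uint32_t size;               // number of literals
    uint32_t flags;              // learnt/irreducible flag, glue, etc.
    ClauseStats stats;
    uint32_t lits[1];            // literals follow the header ("struct hack")
};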

Suggestions are welcome!

My Dell Vostro 3560 review

In this post I would like to share with you a personal review of my new business-line laptop from Dell, the Vostro 3560. It's a 15.6″ laptop with a Full HD screen using TN display technology, a 750GB HDD spinning at 7200rpm, 6GB of DDR3 RAM clocked at 1600MHz, and the highest-end Core i7 processor available for this machine, the i7-3612QM, which goes up to 3.1GHz with all cores still active.

Overview of the laptop

The good

First, let me talk about what I found to be great about this laptop. The display is great: it’s sharp, the colors are good and it’s bright enough even in strong sunlight — though admittedly here in Berlin there is not much of that.

The keyboard is also magnificent. It’s one of the best keyboards I have ever used, even though I am very particular about my peripherals, especially keyboards and mice. I used to own business Thinkpads and this is the first keyboard that I have found to be as good as or maybe even better than a Thinkpad keyboard.

The system is built sturdily, and the materials used don’t feel cheap in your hands. Though the system isn’t light, it feels easy to hold due to its rigidity and smooth (but not slippery) surfaces.

Finally, the system contains extras that are only mentioned in the service manual. For example, there is an mSATA socket inside the laptop, meaning you can plug a 256GB mSATA SSD into the system in a couple of minutes for a reasonable price (around 200EUR), significantly speeding things up.

The mediocre

The CPU is a trade-off. It's not the i7-3610QM, which would go up to 3.3GHz with all cores still active. Interestingly, the reason for this is not price: the two CPUs cost the same from Intel — the reason is heat removal. The CPU installed generates only 35W of heat while the other generates 45W. Essentially, Dell didn't work hard enough on the cooling system, and the result is that the better CPU cannot be installed. Unfortunately, Dell is not the only one that couldn't fit the 3610QM: other manufacturers, such as Sony with the Vaio S series, had the same issue. Naturally, Apple with the new MacBook Pro lineup didn't mess this up.

The included charger is rather bulky. Interestingly, there is a slimmer charger available, for a mere 114 EUR — the one included costs 50EUR on the same Dell webpage. I find this to be rather disappointing.

Bulky charger included
Slim charger, available at 114EUR


The wireless card inside the system doesn't support 5GHz WiFi. This sounds minor, until you go to a conference or a public place with WiFi where everyone seems to be able to connect, except you. 2.4GHz WiFi uses a band that is substantially noisier, and since routers on that band cover more area, if many people are in the same place the connection can be very spotty. In comparison, my IBM X61t, released in 2007 (!), had 5GHz WiFi, so it's hardly new technology. In fact, for about 30EUR you can swap the WiFi card inside the case for one that supports Bluetooth+802.11abgn, i.e. everything that is inside plus 5GHz WiFi. I still cannot understand why a computer that costs 900+ EUR would cheap out on such a component — especially since Dell gets bulk discounts, so the upgrade would cost them around 10 EUR at most.

The system comes with a DVD writer included, but I don't have any use for that. Who uses DVDs nowadays? If I want to watch a movie, I use Blu-Ray — after all, the display is Full HD! Offering a Blu-Ray option would have been easy for the manufacturer, as the DVD drive included is a standard one, so it can easily be swapped for an internal notebook Blu-Ray reader. Such readers cost around 60EUR at any online retailer; in fact, an internal Blu-Ray writer would cost no more than 80EUR. Dell missed out on this completely, for no good reason. Yes, it's a business machine, but even business people need to back up their HDD, and a DVD writer with a capacity of 4.7GB won't be of any help.

The bad

Build quality is terrible. This is very surprising at this price range and in this category (i.e. business). First off, the screen's backplate fits so badly to the front that there are gaps larger than 2mm on the right side, and almost none on the left:

Left side of panel
Right side of panel



The same, though less accentuated, is true for the palmrest. You can clearly see that the plastic fits differently on the two sides, and in fact fits unevenly on both:

Left side of front
Right side of bottom

 

Similarly, the bottom sticker of the laptop with the most important piece of information, the Service Tag, is of such a low quality that after only 2 months of use the numbers got completely erased, and only the barcode (blacked out for privacy) remains readable:

My Service Tag — other than the barcode (blacked out) it’s unreadable


Finally, Linux support is terrible. I wanted to get reimbursed for the Windows 7 licence, but they refused, which I think is clearly illegal, but of course I don't have months to waste and money to burn to get some 50-100 EUR back. In the same spirit, there is no proper support for controlling the fan under Linux, as the highest fan level cannot be reached with the Linux driver, and there is no proper palm detection support for the touchpad. The AMD Linux driver available at the time of purchase crashed X11 at startup, but the new one (fglrx 8.980) is flawless, and I have to commend AMD on that.

Conclusions

The Vostro 3560 is a nice piece of machinery, but it's not quite business quality and its makers prefer not to hear the word Linux uttered. However, if you are ready to shell out some money for a Blu-Ray drive, don't intend to use Linux, and are content with mediocre build quality with gaps between elements the size of the edge of a penny, the Vostro 3560 is a good choice. Otherwise, maybe hold out for a better offer.


How to use CryptoMiniSat 2.9 as a library

There have been some questions asked about how to use CryptoMiniSat as a library. Here, I give a simple example usage to help any future potential users. If there are any questions that seem unanswered, please feel free to add them as a comment to this post.

First, we initialize the solver with high verbosity:

#include "cmsat/Solver.h"
using namespace CMSat;

SolverConf conf;
conf.verbosity = 2;
Solver solver(conf);

Then we add the variables 0..9:

for(int i = 0; i < 10; i++) {
  solver.newVar();
}

And add a clause that would look in the DIMACS input format as "2 -5 10 0":

vec<Lit> clause;
clause.push(Lit(1, false));
clause.push(Lit(4, true));
clause.push(Lit(9, false));
solver.addClause(clause);

We now create the assumptions that would look in DIMACS input format as "1 0" and "-5 0":

vec<Lit> assumps;
assumps.push(Lit(0, false));
assumps.push(Lit(4, true));

We now solve with these assumptions:

lbool ret = solver.solve(assumps);

Let's see if the state of the solver after solving is UNSAT regardless of the assumptions:

if (!solver.okay()) {
  printf("UNSAT!\n");
  exit(0);
}

Note that okay() can only be FALSE if "ret" was "l_False". However, it is important to distinguish between two kinds of l_False: one where only the assumptions couldn't be satisfied, and one where no matter what the assumptions are, the problem is unsatisfiable. See below for more.

Now let's check that the solution is UNSAT relative to the assumptions we have given. In this case, let's print the conflicting clause:

if (ret == l_False) {
  printf("Conflict clause expressed in terms of the assumptions:\n");
  for(unsigned i = 0; i != solver.conflict.size(); i++) {
    printf("%s%d ", (solver.conflict[i].sign()) ? "" : "-", solver.conflict[i].var()+1);
  }
  printf("0\n");
}

The above will print an empty clause if solver.okay() is FALSE.

Now, let's see if the solution is SAT, and if so, print the solution to console, using DIMACS notation:

if (ret == l_True) {
  printf("We have the solution, it's here: \n");
  for (unsigned var = 0; var != solver.nVars(); var++) {
    if (solver.model[var] != l_Undef) {
      printf("%s%d ", (solver.model[var] == l_True) ? "" : "-", var+1);
    }
  }
  printf("0\n");
}

If solver.okay() is still TRUE (which it must be, if the ret was l_True), we can now add new clauses and new variables without restriction, and solve again (potentially with different assumptions). Note that most of the above applies to MiniSat as well.
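
To make the incremental use concrete, here is a short, hypothetical continuation of the example above: we add one more clause ("3 7 0" in DIMACS terms) and solve again under a different assumption. The clause and assumption are arbitrary; only the API calls already shown above are used.

vec<Lit> clause2;
clause2.push(Lit(2, false));   // DIMACS literal "3"
clause2.push(Lit(6, false));   // DIMACS literal "7"
solver.addClause(clause2);

vec<Lit> assumps2;
assumps2.push(Lit(0, true));   // assume DIMACS "-1" this time

lbool ret2 = solver.solve(assumps2);
if (ret2 == l_True) {
  printf("Satisfiable under the new assumption\n");
}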

When compiling the code, you have to pass parameters such as:

$ g++ -fopenmp myprogram.cpp -lcryptominisat -o myprogram

PS: Thanks go to Martin Albrecht for pointing out some ambiguities in my explanation of the library usage, which have now been fixed.

Visualizing SAT solving

Visualizing the solving of mizh-md5-47-3.cnf


Visualizing what happens during SAT solving has been a long-term goal of mine, and finally, I have managed to pull together something that I feel confident about. The system is fully explained in the linked image on the right, including how to read the graphs and why I made them. Here, I would like to talk about the challenges I had to overcome to create the system.

Gathering information

Gathering information during solving is challenging for two reasons. First, it's hard to know what to gather. Second, gathering the information should not affect the overall speed of the solver (or only minimally), so the code to gather it has to be well written. To top it all off, if a lot of information is gathered, it has to be structured in a sane way so that it is easy to access later.

It took me about 1-1.5 months to write the code to gather all the information I wanted. It took a lot of time to structure it correctly and to decide how to store and summarize the gathered information. There is much more gathered than shown on the webpage, but more about that below.

Selecting what to display, and how

This may sound trivial. Some would simply say: just display all information! But what we really want is not just plain information: what good is it to print 100’000 numbers on a screen? The data has to be displayed in a meaningful and visually understandable way.

Getting to the current layout took a lot of time and many discussions with all my friends and colleagues. I am eternally grateful for their input — it's hard to know how good a layout is until someone sees it for the first time and completely misunderstands it. Then you know you have to change it: until then, what the graph meant was trivial to you, after all, you made it!

What to display is a bit more complex. There is a lot of data gathered, but what is interesting? Naturally, I couldn't display everything, so I had to select. But selection may become a form of misrepresentation: if some important data isn't displayed, the system is effectively lying. So, I tried to add as much as possible that still made sense. This led to a very large table of graphs, but I think it's still understandable. Further, the graphs can be moved around (just drag their labels), so doing comparative analysis is not hampered much by the large set of graphs.

The final layout is much influenced by Edward Tufte's books. Most graphing libraries for JavaScript, including the one I used, Dygraphs, contain a lot of chartjunk by default. For example, the professional library HighCharts is full of chartjunk (just look at their webpage), and is apparently used by many Fortune 500 companies. I was appalled at this — so many graph libraries, and none that offers a clean look? Luckily, I could do away with all that colorful beautifying mess — the data is interesting, and demands no embellishments.

Creating the webpage

Creating the webpage to display what I wanted was quite difficult. I am no expert at PHP or HTML, and this was the first time I had touched JavaScript. Although the final page doesn't show it much, I struggled quite a bit with all these different tools. If I had to do this again, I would use a page generation framework. This time, I wrote everything by hand.

I am most proud of two things on the webpage. The first is the histogram at the bottom of the graphs. I know it may not seem like it, but that is all done with JavaScript I wrote, pulling data from an array that can be changed dynamically. I think it does what it's supposed to do, and does it well. The second is that I had to tweak the graph library used (Dygraphs, the best library out there, hands down), because it was too slow at rendering these ~30 graphs. The graphs can be zoomed (just click and drag on the X axis), and when zooming in and out the speed was really terrible. It now works relatively fast, though I had to trade a bit of visual beauty for speed.

Final thoughts

Making the visualization webpage was a long marathon. I feel like it’s OK now, even though there were quite a number of ideas that weren’t implemented in the end. I hope you will enjoy playing with it as much as I have enjoyed making it.