This blog post promotes the tool here (paper here) that converts weighted counting and sampling problems into their unweighted counterparts. It turns out that weighted counting and sampling is a thing. The input is basically a conjunctive normal form formula, a CNF, with weights attached to some variables. One must count the number of solutions to the CNF, where each solution is worth not necessarily 1 point, but a number of points determined by the polarity of each weighted variable in the solution. One such formula could look like:
p cnf 2 1
c ind 1 2 0
1 2 0
w 1 0.9
This basically says that we have 2 variables, a single constraint (“v1 or v2”) where a solution with v1=True is worth 0.9 points, but a solution with v1=False is only worth 0.1 points. Or, if we are sampling, we want to bias our samples 9-to-1 for solutions that have v1=True.
Let’s go through this CNF. It has 3 solutions, with their associated points:
solution    points
 1  2       0.9
 1 -2       0.9
-1  2       0.1

Total points: 1.9
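To make the scoring concrete, here is a quick brute-force sketch in Python (not part of the tool): it enumerates all assignments, keeps the satisfying ones, and sums the weights. The clause list, variable count, and weight dictionary mirror the example above; unweighted variables contribute a factor of 1 either way, following the post’s scoring:

```python
from itertools import product

# Brute-force weighted model counting for a tiny CNF.
# clauses: list of clauses, each a list of DIMACS-style literals
# weights: var -> weight of the positive polarity (negative gets 1 - w)

def weighted_count(clauses, n_vars, weights):
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        # a clause is satisfied if at least one of its literals is
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            w = 1.0
            for v, pw in weights.items():
                w *= pw if assignment[v - 1] else (1.0 - pw)
            total += w
    return total

print(weighted_count([[1, 2]], 2, {1: 0.9}))  # prints 1.9
```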
The issue is that we have great tools for doing unweighted counting and sampling but few tools for doing weighted counting and sampling.
Converting Weighted CNFs to Unweighted CNFs
The solution is to use a converter to make an unweighted formula from a weighted formula, and then either divide the number of solutions of the unweighted formula (for counting) or simply cut off variables from the samples (for sampling). This work was done by Chakraborty et al. and presented at IJCAI’15. The basic idea is that we can approximate the weight with a binary representation and encode that in CNF. For example, 0.26 can be approximated with 1/4. If this weight is attached to v1, and there are 10 variables in the original CNF, we generate the extra set of clauses:
12 -1 0
11 -1 0
12 -11 1 0
This CNF has exactly 1 solution when v1=True and 3 solutions when v1=False. So the number of positive solutions is 1/4 of all solutions, i.e. 0.25, close to the requested 0.26.
The above set of clauses is generated from chain formulas that can be easily generated and grow only linearly with the number of bits of precision required: for k-bit precision (above: 2-bit precision), we need only k extra variables and k+1 clauses. See the paper for details, but it’s sufficient to say that the clauses are easy to generate and, looking through a few examples, quite easy to understand.
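To see why such a gadget controls the weight, here is a naive Python sketch. It is *not* the paper’s compact chain formula (which needs only k variables and k+1 clauses); instead it blocks individual assignments of the k auxiliary bits one by one, which costs 2**k clauses but is easy to follow. The effect is the same: with the weighted variable v set to True, exactly m of the 2**k auxiliary assignments survive, where w ≈ m/2**k:

```python
from itertools import product

# Naive weight gadget: approximate w ~= m / 2**k for variable v by adding
# k auxiliary variables and tying v to "auxiliary value < m".
# For illustration only -- this emits 2**k clauses, while the chain
# formulas in the paper achieve the same solution counts with k+1 clauses.

def weight_gadget(v, w, k, next_var):
    m = round(w * 2 ** k)                 # w approximated as m / 2**k
    aux = list(range(next_var, next_var + k))
    clauses = []
    for bits in product([0, 1], repeat=k):
        value = int("".join(map(str, bits)), 2)
        # literals describing this exact auxiliary assignment
        lits = [a if b else -a for a, b in zip(aux, bits)]
        if value < m:
            # allowed only together with v=True: forbid (not v) AND lits
            clauses.append([v] + [-l for l in lits])
        else:
            # allowed only together with v=False: forbid v AND lits
            clauses.append([-v] + [-l for l in lits])
    return aux, clauses

# weight 0.26 ~= 1/4 with 2-bit precision, original CNF has 10 variables
aux, clauses = weight_gadget(v=1, w=0.26, k=2, next_var=11)
print(clauses)
```

Brute-forcing the auxiliary variables confirms the counts from the example: 1 solution when v1=True, 3 solutions when v1=False.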
Example: Weighted Counting
Let’s do the full run, using our favorite counter, approxmc:
$ cat my.cnf
p cnf 2 1
c ind 1 2 0
1 2 0
w 1 0.9

$ ./weighted_to_unweighted.py my.cnf my-unweighted.cnf
Orig vars: 2  Added vars: 7
The resulting count you have to divide by: 2**7

$ ./approxmc my-unweighted.cnf
[…]
[appmc] FINISHED AppMC T: 0.01 s
[appmc] Number of solutions is: 62*2**2

$ python
>>> 62*2**2/2**7
1.9375
So, we got pretty close to the correct weighted count of 1.9. The discrepancy arises because we approximated 0.9 with a 7-bit value, namely 115/2**7 = 0.8984375, and because we used an approximate counter. With an exact counter such as sharpSAT, we would have got:
$ ./sharpSAT my-unweighted.cnf
[...]
# solutions
243
# END

$ python
>>> 243/2**7
1.8984375
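The exact count 243 is easy to verify by hand: with 0.9 approximated as 115/2**7, each original solution with v1=True expands into 115 assignments of the auxiliary variables, and the one solution with v1=False expands into 128-115 = 13. A quick sanity check:

```python
# Two original solutions have v1=True (1 2 and 1 -2), one has v1=False
# (-1 2). Weight 0.9 is approximated as 115/2**7, so each positive
# solution yields 115 auxiliary assignments, each negative one 128-115.
m, k = 115, 7
total = 2 * m + 1 * (2 ** k - m)
print(total)            # 2*115 + 13 = 243
print(total / 2 ** k)   # 1.8984375, matching sharpSAT's count divided by 2**7
```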
This is pretty close to the correct weighted count, and in fact the difference corresponds exactly to the approximation error 0.9-0.8984375=0.0015625:
$ python
>>> from decimal import Decimal
>>> Decimal("1.8984375") + \
... Decimal("0.9") - Decimal("115")/Decimal(2**7)
Decimal('1.9000000')
Example: Weighted Sampling
Weighted-to-unweighted sampling mimics counting. First, we transform our formula:
$ ./weighted_to_unweighted.py my.cnf my-unweighted.cnf
Orig vars: 2  Added vars: 7
The resulting count you have to divide by: 2**7
Then we sample it, e.g. using our favorite sampler, UniGen (research paper):
$ ./unigen my-unweighted.cnf --sampleout samples --samples 1000 > /dev/null
The samples now contain extra variables that were used for the translation, so we simply cut them off to the original sampling set size, here, 2:
$ cut -d " " -f 1-2 samples | sort | uniq -c
     55 -1 2
    478 1 -2
    490 1 2
And we are done! The sampling worked as expected: samples with v1=True outnumber those with v1=False roughly 18-to-1, matching the 1.8-to-0.1 ratio of the weights of the respective solutions.
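As a sanity check, we can compare the sample frequencies against the weights: solutions with v1=True carry total weight 0.9+0.9 = 1.8 out of 1.9, so about 94.7% of the samples should have v1=True. Plugging in the counts from the run above (your numbers will differ, since the sampler is randomized):

```python
# Observed sample counts from the UniGen run above.
samples = {(1, 2): 490, (1, -2): 478, (-1, 2): 55}

observed = sum(n for sol, n in samples.items() if sol[0] > 0) \
    / sum(samples.values())
expected = (0.9 + 0.9) / 1.9   # weight of v1=True solutions / total weight

print(f"observed {observed:.3f}, expected {expected:.3f}")
```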
Conclusions
Weighted counting is non-trivial but can be approximated using chain formulas, blowing up the original formula in only a minor way. This allows us to use well-optimized unweighted systems to perform the counting and sampling, and then trivially transform the output of these tools to match our original weighted problem.