All posts by msoos

Arjun, our New CNF Model Counting Preprocessor

After many years of work, I am very happy to present our new CNF model counting preprocessor, Arjun [PDF][static binary]. It improves upon our approximate model counter ApproxMC [static binary], and can be used as a stand-alone binary to preprocess CNFs for (projected) model counting. Usage is simple:

./arjun blasted_case110.cnf
c Arjun Version: 43b17ef5899c461b8e947b0f3ca286efcf71464c
c CMS SHA revision 9bf1d82a7b9fa5492cf4f0437cf7110b77ad7230
[..]
c [arjun] original sampling set size: 284
c [arjun] final set size:   15 percent of original: 5.2817 %

Here, Arjun took a CNF that had a projection set of 284 variables, and reduced it to an independent support of only 15 variables. Arjun is now part of ApproxMC by default. Let’s solve a problem with ApproxMC without Arjun:

./approxmc --arjun 0 blasted_squaring42.cnf.gz.no_w.cnf.gz 
c ApproxMC SHA revision 9fb4fdd01b40cc0526c4933e2f2dca402f0ab91f
c CMS SHA revision 9bf1d82a7b9fa5492cf4f0437cf7110b77ad7230
c [appmc] Orig sampling vars size: 349
[after waiting 5000 seconds, it times out]

Now let’s do the same with Arjun (this does not use code from SharpSAT-td):

./approxmc blasted_squaring42.cnf.gz.no_w.cnf.gz 
c ApproxMC SHA revision 9fb4fdd01b40cc0526c4933e2f2dca402f0ab91f
c CMS SHA revision 9bf1d82a7b9fa5492cf4f0437cf7110b77ad7230
c Arjun SHA revision 43b17ef5899c461b8e947b0f3ca286efcf71464c
c [appmc] Orig sampling vars size: 349
c [arjun] final set size: 155 percent of original: 44.4126 %
c [appmc+arjun] Total time: 12.80
c [appmc] Number of solutions is: 80*2**127
s SATISFIABLE
s mc 13611294676837538538534984297270728458240

Done in 12.8 seconds, NICE. Wanna try it out? Arjun+ApproxMC static Linux binary HERE, Arjun static Linux binary HERE. Research paper HERE. Arjun code HERE. Now that the demo is out of the way, let’s get into the nitty-gritty details!

Projected Model Counting

CNF model counting is the problem where you want to count the number of solutions to a set of equations written in the CNF form. This form is quite restrictive, but also very powerful. Here is an example:

 x1 OR  x2 OR -x3 = True
x1 OR -x2 = True
x1 OR -x3 = True
 x1 OR  x4 = True
-x1 OR -x4 = True

The above set of constraints clearly has a solution, e.g. setting x1=True and x4=False will satisfy all constraints. The question in model counting is how many solutions a system has. The above problem has 5 solutions in total:

[Where "1" means TRUE and "-1" means FALSE]
x1 x2 x3 x4
-----------------
-1 -2 -3 4 0
1 2 3 -4 0
1 2 -3 -4 0
1 -2 3 -4 0
1 -2 -3 -4 0
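To make this concrete, here is a brute-force counting sketch (plain Python, with clauses as lists of DIMACS-style literals; this is just an illustration, real model counters are far smarter than this) that confirms the count of 5:

# Brute-force model count of the example CNF above
from itertools import product

clauses = [[1, 2, -3], [1, -2], [1, -3], [1, 4], [-1, -4]]

count = 0
for bits in product([False, True], repeat=4):
    assign = {v: bits[v - 1] for v in range(1, 5)}
    true = lambda lit: assign[abs(lit)] == (lit > 0)
    if all(any(true(l) for l in c) for c in clauses):
        count += 1
print(count)  # prints: 5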

This was kinda easy. However, there are two complications. One complication tends to be that there is a massive number of solutions. For example, a system can easily have 2^200 solutions. Enumerating them one-by-one is just not going to work. Hence, we need some smart systems to do the counting for us that don’t count one-by-one. Secondly, often we are interested only in the number of distinct solutions over a certain set of variables. Let’s take the above example, and say we want the number of distinct solutions over variables x1, x2 and x4 only. So, let’s delete the column for x3 from the above table:

x1   x2  x4   Same?
-------------------
-1 -2 4 0
1 2 -4 0 *
1 2 -4 0 *
1 -2 -4 0 +
1 -2 -4 0 +

Notice there are only 3 distinct solutions: the rows marked with * are both the same, and the rows marked with + are also the same. This is called projected counting because we are projecting the solution space over a specific set of variables — in this case, x1, x2, and x4.
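The projected count is the same brute-force enumeration, except we collect each solution’s values on x1, x2, x4 into a set, so duplicates collapse (again just an illustrative sketch):

# Brute-force projected model count over x1, x2, x4
from itertools import product

clauses = [[1, 2, -3], [1, -2], [1, -3], [1, 4], [-1, -4]]

projections = set()
for bits in product([False, True], repeat=4):
    assign = {v: bits[v - 1] for v in range(1, 5)}
    true = lambda lit: assign[abs(lit)] == (lit > 0)
    if all(any(true(l) for l in c) for c in clauses):
        projections.add((assign[1], assign[2], assign[4]))
print(len(projections))  # prints: 3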

Computing the Independent Set of a Projection Set

In the above example, our projection set was x1, x2, and x4. However, notice that x1=-x4, due to the constraints:

 x1 OR  x4 = True
-x1 OR -x4 = True

Hence, it is good enough to count over x1 and x2, since x4 is determined by x1 and so x4 cannot possibly make the distinct number of solutions smaller or larger. This realization is at the heart of calculating an independent support — we basically want to minimize the set of variables that we project over. The independent support is simply a subset of the projection set, that’s hopefully a lot smaller than the projection set.

An early work on independent support calculation is the B+E preprocessor [PDF], which aims to do two things: “B”, which finds a small independent support, and “E”, which eliminates variables from the problem. Let’s concentrate on “B” here. In the B+E paper, the authors talk about independent support over all variables of the formula, not over a projection set. In other words, their code and paper were originally conceived to minimize a projection set that had all variables inside. In our case, their work would minimize x1..x4, finding that x1,x2,x3 is an independent support of x1..x4.

Implicit Definability

The trick of B+E is to use the idea of so-called implicit definability. Implicit definability is really simple. If I take x1, x2 and x3, does that define the value of x4? Yep. In fact, x1 on its own would define the value of x4, because x1=-x4. However, sometimes you have equations that define a variable through a (large) number of other variables. For example:

 x30 OR -x10 OR -x20 = True
-x30 OR x10 = True
-x30 OR x20 = True

This set of constraints describes x10 AND x20=x30. In this case, x10 and x20 together define x30. So if you have the above set of constraints in a CNF, and you have x10, x20, and x30 all in your projection set, you can confidently take out x30 from the set — and the number of distinct solutions projected will not change.

Implicit definability states that we can find these definitions in an implicit way, using a relatively straightforward system. We make two copies of the CNF, let’s call these CNF1 and CNF2. Then, we have variables v1…vN in the first CNF and w1…wN in the second CNF. Now, we create new variables called indicator variables, i1…iN, that indicate whether v1=w1, v2=w2… etc. After this setup, the query is simple. Given i1=TRUE (i.e. given v1=w1), is it possible that v2 is different from w2? If it is, then our new independent set must contain both 1 and 2. If not, v2 is defined by v1, and we can get rid of i2. Now we move on to v3: we set i1=TRUE (and i2=TRUE, if v2 turned out not to be defined), and ask whether it is possible that v3 is different from w3. This way, we test all the variables from v/w1…v/wN, and eventually end up with a set of indicator variables that tells us a minimal independent support.
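To make this concrete, here is a minimal sketch of that loop in Python, assuming the python-sat (pysat) package for the SAT queries. This is only an illustration of the idea, not Arjun’s actual implementation, and the function name is made up:

# Sketch of the implicit-definability loop (illustration only)
from pysat.solvers import Solver

def independent_support(clauses, n_vars, candidates):
    # Copy 1 uses variables 1..n (the v's), copy 2 uses n+1..2n (the w's),
    # indicator i_k is variable 2n+k, and i_k forces v_k = w_k.
    shift = lambda lit, off: lit + off if lit > 0 else lit - off
    cnf = [c[:] for c in clauses]
    cnf += [[shift(l, n_vars) for l in c] for c in clauses]
    for k in range(1, n_vars + 1):
        i, v, w = 2 * n_vars + k, k, n_vars + k
        cnf += [[-i, -v, w], [-i, v, -w]]  # i_k -> (v_k = w_k)
    support = []
    with Solver(bootstrap_with=cnf) as s:
        for k in candidates:
            assumps = [2 * n_vars + j for j in support]
            # Can v_k differ from w_k while the kept vars are forced equal?
            # Checking one polarity is enough: the two copies are symmetric.
            if s.solve(assumptions=assumps + [k, -(n_vars + k)]):
                support.append(k)  # not defined by the kept vars, keep it
    return support

# x4 is defined by x1, so it drops out of the projection set {x1, x2, x4}:
cnf = [[1, 2, -3], [1, -2], [1, -3], [1, 4], [-1, -4]]
print(independent_support(cnf, 4, [1, 2, 4]))  # prints: [1, 2]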

Explicit Definability

As an external observer, without knowing anything about implicit definability, CNF copying and renumbering and the like, it is quite easy to figure out that x1=-x4 just by looking at the CNF, since it contains:

 x1 OR  x4 = True
-x1 OR -x4 = True

Such a set of constraints can be found trivially using Tarjan’s algorithm. This is actually linear in the number of binary (i.e. 2-long) constraints, so it is super-quick to run. Similarly, we can very easily find the AND gate x10 AND x20=x30:

 x30 OR -x10 OR -x20 = True
-x30 OR x10 = True
-x30 OR x20 = True

It’s a quick check on all the constraints that are 3-long and have the corresponding 2-long constraints. Such syntactic checks are well-known parts of SAT solvers. For example, the extremely successful kissat solver has the following lines of code in gates.c:

if (kissat_find_equivalence_gate (solver, lit))
  res = true;
else if (kissat_find_and_gate (solver, lit, 0))
  res = true;
else if (kissat_find_and_gate (solver, not_lit, 1))
  res = true;
else if (kissat_find_if_then_else_gate (solver, lit, 0))
  res = true;
else if (kissat_find_if_then_else_gate (solver, not_lit, 1))
  res = true;
else if (kissat_find_definition (solver, lit))
  res = true;

So, kissat finds equivalence gates (i.e. x1=x2), AND gates (like x10 AND x20=x30), IF-THEN-ELSE gates, and curiously… “definition” gates. What could these be? Well, it turns out kissat uses a very cool trick to find definitions. It takes all constraints that contain a variable, say x30/-x30, and cuts off the x30/-x30:

-x10 OR -x20 = True  <-- we cut off "x30 OR"
x10 = True <-- we cut off "-x30 OR"
x20 = True <-- we cut off "-x30 OR"

Notice that this set of constraints is now unsatisfiable. In fact, for all gates that define a variable, if you take all the constraints that the variable is in, and you chop off the variable, the constraints will together be unsatisfiable. To find such gates, kissat uses a SAT solver called kitten. Kitten solves these problems really fast and allows for UNSAT core extraction (not present in kissat) to find the defining variables.
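As a rough illustration of this trick (not kissat’s or kitten’s actual code, and without the UNSAT core part), the check could look like this in Python, again assuming the pysat package:

# Sketch of the "chop off the variable" definition check (illustration)
from pysat.solvers import Solver

def is_defined(clauses, var):
    # Take every clause containing var or -var, and chop that literal off
    chopped = [[l for l in c if abs(l) != var]
               for c in clauses if var in c or -var in c]
    if any(len(c) == 0 for c in chopped):
        return True  # var occurs in a unit clause: trivially defined
    with Solver(bootstrap_with=chopped) as s:
        return not s.solve()  # UNSAT <=> var is defined by the rest

clauses = [[30, -10, -20], [-30, 10], [-30, 20]]  # x10 AND x20 = x30
print(is_defined(clauses, 30))  # prints: True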

Arjun

So what did we do with Arjun? We basically made the following set of improvements over B+E — without even once looking at B+E’s code, since it was closed-source, and by the time the authors open-sourced it, we already had our paper in the review queue:

  1. We allow taking in a projection set, rather than the full set of variables like B+E does. This allows us to compute an independent support of a projection set, which is extremely important for our model counter ApproxMC. With a small independent support, ApproxMC uses a smaller matrix, so Gauss-Jordan elimination, which is an O(n^3) algorithm in our implementation, is significantly faster.
  2. We use CryptoMiniSat to our advantage to simplify the CNF formula before running explicit or implicit definability detection. CryptoMiniSat is a powerful system that allows a fine-tuned set of heuristics to be run, completely controlled from the API, e.g. auto str = string("intree-probe, occ-backw-sub-str, distill-bins, "); solver.simplify(NULL, &str);
  3. We use all the different ways to syntactically recover gates: and/or gates, if-then-else gates, and parity (XOR) constraints. We also recover gates semi-syntactically using picosat (instead of kitten) — this is semi-syntactic, because we use syntactic methods to get the occurrence list of a variable and to construct the SAT query, but then use a SAT solver (i.e. a semantic method) to recover the definition, if any. We then construct a directed graph from the gates and greedily find a minimum cover. This is needed because there could be loops: we don’t want x1 to be defined by x2, and x2 by x1, and then accidentally remove both, which would be silly.
  4. We repeat steps 2 & 3 at least 2x to make sure it’s done to a good enough level. This allows us to save time on the expensive next step, implicit definability detection. At this stage, we also do empty occurrence detection — if a variable is in the projection set, but doesn’t even occur in the formula, then obviously the variable can be removed and the count increased by a factor of 2.
  5. We change the SAT solver’s assumptions method to improve implicit definability detection. From the standpoint of a SAT solver, the queries mentioned in the implicit definability section are so-called assumption queries: assuming i1=TRUE, v2=TRUE, v3=FALSE, is there a solution? The list of assumptions tends to be very long here, so SAT solvers can struggle. Our insight here is to change the traditional way of how assumptions are handled in the SAT solver, and effectively make the SAT solver not backtrack to the beginning every time a query is made. This makes quite a bit of difference in performance. (Note: it turns out this improvement idea has been invented, and re-invented, a few times already, e.g. by SharpSAT-td, and by IntelSAT — see Alexander Nadel’s presentation at SAT 2022.)

That’s all folks! These improvements add up to about 5k lines of code. All in all, they make Arjun more than an order of magnitude faster than B+E, and it can compute an independent support over a projection set, rather than over all variables in the formula.

Performance of Arjun

Here’s an example performance graph for B+E vs Arjun, using the set of benchmark instances we have been measuring the performance of ApproxMC on for ~5 years now — perhaps outdated, but at least it’s easy to compare all the values in all of our papers. The performance difference is quite big:

Basically, B+E took 4932 seconds to compute the independent sets of 1752 instances, while Arjun did the same in 117 seconds. Quite nice, about a 40x improvement. Notice that due to the way our implicit definability works, it’s possible to e.g. order the variables in such a way as to make querying faster, but end up with a larger independent set. So, here is the comparison of the independent sets computed by the same systems as per the graph above:

Essentially, the same, modulo some noise. There is some advantage to using B+E in a few cases, and there is some advantage using Arjun in other cases, from the perspective of the calculated independent set. However, notice that Arjun is so much faster, that we could actually run it 2x with different variable ordering, thereby getting a (much) smaller independent set, and often still be a lot faster than B+E. However, there is always a trade-off, so we decided to be approximately as good in the minimization as B+E, while being significantly faster.

Performance Impact on ApproxMC

So, Arjun is fast, but what does that mean for ApproxMC? It turns out, it means a lot. ApproxMC adds XOR constraints over the projection set, and the smaller the projection set, the smaller the XORs. Hence, the matrix that the Gauss-Jordan elimination algorithm (an O(n^3) algorithm in our implementation) has to run on inside ApproxMC is much smaller. Hence, the performance of ApproxMC4 vs Arjun+ApproxMC4 is remarkable:

Basically, we can count in 2.24 seconds with Arjun+ApproxMC4 as many instances as we could count in 5000s with ApproxMC4 alone: more than 3 orders of magnitude improvement. In fact, it’s so fast we’ll need to switch away from our set of benchmark instances, because we are getting close to the limit of 1896, the number of instances in total. Notice that over the years, ApproxMC has improved dramatically in speed. In fact, the earliest ApproxMC could only count ~400 instances within 5000 seconds. We can now count 400 instances within a time limit of only 0.09 seconds.

Conclusions

As it may be evident from winning a number of tracks at the Model Counting Competition, Arjun is not only making ApproxMC faster — we also won the non-projected exact(!) model counting track with an entry called Arjun+SharpSAT-td, where SharpSAT-td [PDF] is an exact model counter by Tuukka Korhonen and Matti Jarvisalo. We hope to improve Arjun further in the future, and we hope Arjun will become a standard preprocessor for model counters in the Model Counting Competition in the coming years, exploiting its CNF reduction capabilities that are currently available as: “./arjun input.cnf minimized.cnf”, using code and ideas from SharpSAT-td.

Checking Uniform-Like Samplers

Uniform sampling is pretty simple: there are a set of solutions to a set of equations, and I want you to give me N solutions uniformly at random. Say, I have a set of equations that only has the solutions x=1…6, and I ask you to give me solutions uniformly at random. In this case, if the system is not cheating, it should give me solutions exactly like a random dice would give: 1..6, each with the same probability of 1/6. Now, if you give me nothing but 1’s I’d be slightly confused, and eventually would think you may be trying to cheat.

Uniformity is useful not only in gambling, but also e.g. if you want to make sure to cover a good chunk of the potential states in a system. Say, your equations describe how a program can run. Now, uniform solutions to these equations will be examples of the state space of your program. If you want to check that your system is in most cases doing the right thing, you can ask for uniform solutions at random and check the states for inconsistencies or unexpected behaviors.

Uniform Sampling of CNFs

One type of equations that is quite popular is to have all variables boolean (i.e. True/False), and equations being nothing but an “AND of ORs”, i.e. something like: “(a OR b) AND (b OR c OR d) AND (-f OR -g)”. These types of equations, also known as CNF, can express quite complicated things, e.g. (parts of) computer programs, logical circuits (e.g. parts of CPUs), and more. What’s nice about CNF is that there are many different tools to convert your problem language into CNF.

Given CNF as the intermediate language, all you need to do is run a uniform sampler on the CNF, and interpret the samples given your transformation. For example, you could translate your Ethereum cryptocurrency contract into QF_BV logic and blast that to CNF using e.g. the STP solver. Then you can get uniform random examples of the execution of your cryptocurrency contract using e.g. UniGen.

There are many different samplers for CNFs, and they mostly fall into two categories: ones that give guarantees and are hence truly uniform, and ones that don’t give guarantees and therefore fall into what we’ll call uniform-like samplers. These latter samplers tend to be significantly faster than truly uniform samplers, however, they can, and sometimes do, give non-uniform samples. I personally help maintain one truly uniform sampler, UniGen (PDF), and one uniform-like sampler, CMSGen (PDF). Other popular truly uniform samplers include KUS (PDF) and SPUR (PDF).

Catching Uniform-Like Samplers: Barbarik

If I give you a sampler and ask you to tell me if it’s truly uniform or simply uniform-like, what should you do? How should we distinguish one from the other, without looking into the internals of the system? It turns out that this is not a simple question to answer. It is very reasonable to assume that there are e.g. 2^200 solutions, so e.g. trying to check uniformity by watching whether the sampler ever outputs the same sample twice is not really meaningful: you’d need approx 2^100 samples from a truly uniform sampler before it would likely give two colliding samples. While a uniform-like sampler may collide at e.g. only 2^60 samples (i.e. a trillion times earlier), that’s still a lot of computation, and only a single data point.

Doing this efficiently is the question that was on the minds of the authors of Barbarik. Basically, their idea was the following: take a CNF, remove all solutions but two, and blow up these two solutions into equally many solutions each. Say, you find two solutions to a CNF, one BLUE and one RED. Barbarik will take BLUE, make 100 blue balls out of it, and take RED, and make 100 red balls out of it. Then, it will give the CNF with the 200 solutions, half blue, half red, to the sampler under test. Then it asks the sampler to give it a bunch of balls. We of course expect approx 50%-50% red-blue distribution of balls from a truly uniform sampler.

Barbarik fails a sampler if it gives a distribution too far away from the 50%-50% we’d expect. Barbarik runs this check many times, with different example blue/red solutions that are “blown up” to multiple solutions. Given different base CNFs, and many tries, it is possible to differentiate the good from the bad… most of the time:

If you take a look at the table above, QuickSampler and STS, both uniform-like samplers, are rejected by Barbarik given 50 different CNFs. In contrast, UniGen survives all 50 tests with flying colors. Notice, however, that CMSGen also survived all tests.
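To give a flavor of the accept/reject logic, here is a toy sketch of the heart of the check (the real Barbarik uses rigorous statistical bounds and a carefully derived number of samples, so treat this as a cartoon):

# Toy Barbarik-style check: sample from a two-class instance and
# reject the sampler if the split strays too far from 50%-50%
import random

def looks_uniform(sample_one, n_samples=200, tolerance=0.2):
    # sample_one() returns "BLUE" or "RED" for a single sample
    blues = sum(1 for _ in range(n_samples) if sample_one() == "BLUE")
    return abs(blues / n_samples - 0.5) <= tolerance

fair   = lambda: random.choice(["BLUE", "RED"])
biased = lambda: "BLUE" if random.random() < 0.9 else "RED"
print(looks_uniform(fair), looks_uniform(biased))  # typically: True False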

The Birth of CMSGen

When Barbarik was being created, I was fortunate enough to be present, and so I decided to tweak my SAT solver, CryptoMiniSat, purely using command-line parameters, until Barbarik could not distinguish it from a truly uniform sampler. This, in my view, showcased how extremely important a test system was: I actually had a bug in CryptoMiniSat’s randomization that Barbarik clearly showed and that I had to fix to get higher quality samples. We called the resulting uniform-like sampler CMSGen, and it was used in the function synthesis tool Manthan (PDF), which blew all other function synthesis systems out of the water, thanks to its innovative design and access to high quality, fast samples from CMSGen:

Notice that within 200 seconds Manthan outperformed all other function synthesis systems, even if we give the other systems 7000 seconds to work with. If you are interested in the details of this crazy improvement over previous state-of-the-art, check out the slides here or the video here (these are all work of my coworkers, I am not an author).

While CMSGen was clearly fast and powerful, it bothered me endlessly that Barbarik couldn’t demonstrate that it wasn’t a truly uniform sampler. This eventually led to the development of ScalBarbarik.

The Birth of ScalBarbarik

Since CMSGen passed all the tests of Barbarik, we had to come up with a new trick to distinguish it from truly uniform samplers. ScalBarbarik‘s (PDF) underlying system is still the same as Barbarik: we take a CNF, take 2 samples from it, and blow both of these samples up to a certain number. However, how we blow them up is where the trick lies. Before, they were both blown up the same way into solutions that are equally easy/hard to find. However, this time around, we’ll make one of the solution types much harder to find than the other. For this, we’ll use Vegard Nossum’s SHA-1 CNF generator (PDF) to force the system to reverse a partial, reduced-round SHA-1 hash, with some fixed inputs & outputs. This allows us to both change the complexity of the problem and the number of solutions to it rather easily.

While one set of solutions will be hard to find, the other ones will be trivial to find — if one special variable is set to TRUE, finding a solution is trivial. But when it’s set to FALSE, the system has to reverse a reduced-round SHA-1. The logic behind this is that the uniform-like systems will likely be finding the easy solutions with much higher probability than the hard solutions, so they will sample much more unevenly. Indeed, that’s the case:

Note that as the hardness parameter is increased, CMSGen is rejected more and more often, and eventually it’s rejected for all CNFs. Also interesting to note is that QuickSampler and STS both get rejected, as before, but this time around, STS gets rejected for all 50 CNFs, rather than for only 36 out of 50. In other words, ScalBarbarik is overall a stronger/better distinguisher.

Conclusions

With the birth of Barbarik, a set of uniform-like samplers were shown to be less than ideal, and a new, more robust near-uniform sampler was born, CMSGen. But as the gauntlet has been thrown down by CMSGen, a new tool emerged, ScalBarbarik, to help find the non-truly uniform samplers. With this cycle in mind, I hope that new, more elaborate, and higher-quality uniform-like samplers will emerge that will be able to beat ScalBarbarik at its own game, improving the quality of the sampling while maintaining the speed advantage that uniform-like samplers enjoy over truly uniform samplers. With better uniform-like sampling tools, hopefully we’ll be able to make headway in automated test case generation (imagine having it as part of all development IDEs), higher performance function synthesis, and hopefully even more diverse areas of interest for the general public.

A Tale of Shift Left, Shift Right, and MEV

The NIST Cybersecurity Framework’s five Functions are: Identify, Protect, Detect, Respond, and Recover. Within this framework, “shift left” means that we shift our focus towards Identify and Protect, i.e. towards fixing issues before our system is deployed into production. Say, if you have a bug in your Apple Store/Google Play app, you’d prefer fixing it before you deploy it to the internet, or, similarly, you’d prefer fixing the bug in your cryptocurrency contract before you deploy it in the wild (i.e. “on the mainnet”). While shift left makes lots of sense (let’s prevent the issue in the first place!), in many cases, preventing all issues is likely impossible.

Shifting Left

Firstly, it is important to state that preventing all issues is often impossible because of what safety practitioners call the “blunt end”. When writing code, practitioners, i.e. developers, are at what safety literature would call the “sharp end”: they are like airplane pilots, directly at the controls of their craft. While it’s easy to imagine that pilots are in “full control”, in reality, airplane pilots depend on and are affected by many things: accurate weather reports, quality and amount of training, certification bodies, proper aircraft maintenance, production pressure from corporate, etc. These are what safety literature calls the “blunt end” — things that contribute to both success and failure, but are not directly visible/obvious. When the 10th plane drops out of the sky in the same year, and Boeing keeps blaming the (now dead) pilots, you know it’s the blunt end you ought to be looking at.

What this boils down to in terms of shift left is that while developers are in “full control”, in reality, there is a large blunt end that they have little to no control over, and which may set up the conditions such that failure is sometimes inevitable, even with the most careful of developers. An easy example of this is previously unknown vulnerability classes: for example, unknown issues with CPU cache timing side-channels in classical IT security, or flash loan attacks for cryptocurrency contracts, a relatively new phenomenon where anyone can instantly obtain a large amount of capital for a fraction of time, breaking assumptions about a contract and thereby helping attackers. Another slightly less obvious example is production pressure, where competing entities are known to be working on the same idea/system and releasing first is critical for business success. It is one thing to “ask for time”, it’s another thing when you gotta release or it was all for nothing. While techniques like fuzzing, formal verification, coding best practices, etc. can help, they are neither bulletproof, nor is it reasonable to expect them to be done fully for all contracts.

Notice that given the nature of cryptocurrencies, where “code is the law”, it seems counter-intuitive to think that there are factors at play other than the code itself. On the surface, the rules of the game seem set and all mistakes seem to happen to/with code, hence the culprit is always clear. In other words, it seems as if there is no “blunt end”. However, as with all interesting endeavors, in my view, blunt ends abound, even if they only manifest themselves in code in subtle ways. For example, before overflow protection was enabled by default on the most used Ethereum compiler, Solidity, overflow issues were possible (e.g. issues where 255+1=0, oops!), and could have led to attacks. In other words, changing the compiler defaults can prevent a type of attack. Similarly, the SMT Checker, shipped as part of Solidity and Remix, can also warn developers of potential mistakes. These systems change the landscape, preventing the “sharp end”, i.e. developers, from making mistakes.

Shifting Right

The IT security sector has realized for some time now that spending all their effort on shift left, i.e. identifying issues and preventing them pre-deployment, is not adequate. With some effort, time, and money being spent on detection, adequate response, and recovery, i.e. post-deployment and hence on the “right” side, the impact of attacks or unforeseen events can be minimized. This is incredibly important, because as per above, it is often impossible to prevent all attacks. From phishing attacks, novel attack classes, persistent attackers, to production pressures, some attacks will go through, and be successful — at which point the only question is how successful will they be, i.e. will they reach all (or any) of their intended goals, or will they be stopped early, minimizing the damage caused.

There are some interesting properties of spending proper time dealing with post-production incidents. Firstly, it allows the system designers and developers to understand what attack patterns are typical against systems. In classical IT security, e.g. honeypots are often used to understand attacker behavior. These are (often under-protected) systems deployed either on the public internet, or more interestingly, on the intranet(s) of organizations, to see if (and how) attackers try to attack them. These systems can help spot (novel) attack patterns, and attackers in general. Other information-sharing systems are also in place in the traditional IT security world: in Germany, the BSI is responsible for warning large enterprises of (novel/successful) attack patterns in the wild. Within the space of cryptocurrencies, attacks tend to be on the public chain, and can be observed directly both as they happen (unless they are in a so-called private mempool, more on this later), and retrospectively, thanks to the public ledger. There is plenty to learn from how others succeed, or fail.

MEV, and On-Chain Security-as-a-Service

With all the above out of the way, let’s get to the meat of this article: how behavior patterns in the cryptocurrency world that are often cited as undesirable are actually doing a form of shift right. This inadvertent shift right is something I believe we could exploit to improve the safety of the chain overall.

Generalized frontrunning is a technique that consists of people listening to the set of sent, but unconfirmed transactions on the chain, and seeing if copying the same transaction themselves would make them money. They then submit the same transaction with a higher fee, giving more fee to the miners who confirm transactions. This leads to both transactions being executed, but theirs being executed first (and the 2nd likely failing), thereby giving them the money. Clearly, miners (who confirm transactions) could always do such frontrunning, and could do it much easier, since they decide who is going to be first, so they can make themselves first without paying a higher (or any) fee. Transaction frontrunning is slightly more complicated: it tries to both front- and back-run a transaction (also called sandwiching), manipulating the price before the transaction is executed (the “front” part) and then selling the excess (the “back” part). Generalized and transaction frontrunning attacks all fit under the umbrella of transaction reordering attacks and are also called Miner Extractable Value, or MEV for short. MEV is big business, with 8 million USD extracted in the past 30 days on the Ethereum chain, as of writing.

When you perform generalized frontrunning, you do two things: (1) you constantly check for potential value to be extracted, just like an attacker would, and (2) you make sure that your transaction, which extracts the value, runs first (hence the name “front-run”), so you get the value for yourself. This clearly maps to the post-deployment part of the Cybersecurity Framework of NIST: detect & respond. In other words, it would actually make sense to constantly run a bot on-chain to detect and front run attacks against your own contracts. In a sense, the frontrunners are operating what in the traditional IT security world would be called a SOC — a Security Operations Center that is constantly on the lookout for attacks.

In this sense, MEV is not undesired, but a price to pay to have a non-stop security system. Of course, it is not a given that MEV bot operators extracting MEV will give back your money for the attack they front-ran. However, I posit that it’s much more likely that you’ll get your money back from them than the attackers. For example, at the MEV day for Devconnect, there was a discussion I participated in where a large MEV bot operator mentioned that they were willing to give back the funds of attacks they front-ran for a 10% fee. I expect that large organizations will in the future run their own MEV bots, and medium-sized ones will either simply pay the fee (10% or whatever it will be) when an attack happens, or perhaps pay a monthly fee for the protection service provided.

(Footnote: there is not only bad MEV. For example, backrunning (i.e. running after) liquidation events is incentivized to happen, and is considered normal for the system to run as designed.)

Current Solutions to MEV, and their Impact

There are two obvious ways to deal with the MEV issue: creating economic incentives to reduce its impact, and hiding pending transactions. Let’s try to get into both.

One way to deal with MEV is by not making pending transactions visible. So-called private mempools are operated by some miners (i.e. confirmers of transactions) that don’t allow the public to see the not-yet confirmed transactions. This is advantageous because it means that attackers cannot run clients to frontrun/sandwich/etc. pending transactions sent to these private mempools. In a way, private mempools are kind of a trusted environment, where you can be safe from people trying to extract MEV. However, it also means that malicious behavior submitted to these private pools cannot be front-run. Since these private pools are trust-based, it would be possible to allow some trusted parties to observe these private pools and frontrun attacks. Rather than being purely trust-based, it is possible that some sort of incentives-based system could also be set up, which seems like an open question/potential for improvement. This would be similar to what Flashbots does. Flashbots tries to use economic incentives through an open marketplace to make MEV extraction less disruptive to the ecosystem (among other benefits). However, it does not work for private mempools, since, well, they are private, and Flashbots requires publicly visible unconfirmed transactions to be able to work, due to its open marketplace concept.

MEV is not only big business, it’s also wasteful: often, the original transaction will fail, but still execute, consuming computational resources and burning money. MEV systems trying to outbid one another for the prize are effectively participating in an all-pay auction, i.e. everyone pays for the good but only one gets it. Thankfully, Flashbots solves this by running an off-chain auction that is not all-pay. However, once we move from proof-of-work to proof-of-stake (i.e. to ETH 2.0), the attack transaction can be directly proposed to the next validator, skipping the public pool entirely. Essentially, the attacker can choose to be in the private mempool by default (with some caveats, see Proposer-Builder Separation, and MEV-Boost), rather than by exception.

Conclusions

While Miner Extractable Value (MEV) seems on the surface to be an undesirable side-effect of the combination of current cryptocurrency protocols and the public nature of digital ledgers, it can also be seen as a feature. This feature could allow one to perform detection & response to attacks, allowing legitimate entities to mitigate the impact of attacks on contracts post-deployment. There is some evidence that some people running MEV extraction bots are already effectively providing a form of post-deployment attack-mitigation-as-a-service for a fee. However, as usual, preventative measures are both more reliable and cheaper, and should be the focus of most efforts against attacks.

Proof Traces for SAT solvers

If you have a look at a modern SAT solver, say kissat or CryptoMiniSat, you’ll see that they have well over 20k lines of code. Reviewing that for bugs is basically impossible. So how can you trust that the solver is giving a correct result? Well, if the result is that the input formula is satisfiable, it’s quite easy — we can simply substitute the solution (all modern SAT solvers provide a satisfying assignment) into the formula, and it should satisfy each constraint. This is really fast to do, linear in the size of the formula.
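As a quick illustration, in Python (clauses as lists of DIMACS-style literals), the whole check is a single pass over the clauses:

# Check a "satisfiable" answer: every clause must contain a true literal
def check_model(clauses, model):
    # model is the satisfying assignment as a set of literals, e.g. {1, -2}
    assigned = set(model)
    return all(any(lit in assigned for lit in clause) for clause in clauses)

print(check_model([[1, 2], [-1, 2]], [1, 2]))   # True
print(check_model([[1, 2], [-1, 2]], [1, -2]))  # False: [-1, 2] falsified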

However, if the solver gives a result that the formula is unsatisfiable (UNSAT), what do we do? Well, we need some form of assurance. One would be to check the code, but again, that’s just too much work. We could run it with another SAT solver, but there have been cases of the same bug being in more than one solver. Actually, this bug seems to have been a conceptual mistake copy-pasted, quite weird. In fact, there is a well-known GCC version that will miscompile most MiniSat-based SAT solvers such that they report wrong results(!). We could also do some fuzzing, but do you want to sit on an airplane knowing its flight control software was fuzz-tested for a full 10 minutes to “meet” compliance requirements? No. We need strong assurances. So, let’s ask the SAT solver to produce a proof for the UNSAT result! This is what proof traces are all about.

The Binary Resolution Operator

At its core, a proof trace essentially records the set of operations that must be performed by a system to arrive at the 0=1 equation (i.e. at obvious nonsense), starting with the input equations. Let’s see a simple example. Let’s have our input formula (so-called CNF) be:

 x1 V  x2 = TRUE
 x1 V -x2 = TRUE
-x1 V  x2 = TRUE
-x1 V -x2 = TRUE

Where x1 and x2 are boolean variables. Actually, let’s make this a bit easier to read by a computer:

 1  2 0
 1 -2 0
-1  2 0
-1 -2 0

This is the same as the above, but closer to the standard DIMACS format. Each line is called a clause, each line is made up of literals like 1, -1, 2, and -2, and each clause is terminated by a “0”, marking the end of the clause.

The above CNF has no solution to its set of clauses: no matter what we set x1 and x2, at least one of the clauses is not satisfied. OK, so how do I prove that there is no solution? Well, the easiest way to do it is through the binary resolution operator. The binary resolution basically says that if two clauses have one literal that’s inverted, such as “x1” and “-x1”, then we can do this: (A OR x1) RESOLVE (B OR -x1) => A OR B, where “A” and “B” are any set of literals. So, e.g. “x1 V x2 V -x3” resolved with “x3 V x4” becomes “x1 V x2 V x4”. Nice, so let’s try to prove that the above problem is unsatisfiable:

 1  2 0 RESOLVE  1 -2 0 =>  1 0
-1  2 0 RESOLVE -1 -2 0 => -1 0
 1  0   RESOLVE -1 0    =>  0

In essence, we resolved 2 clauses to prove that x1 is TRUE, then resolved another two to prove that x1 is FALSE, and then we resolved “x1” with “-x1” to arrive at EMPTY=TRUE, i.e. 0=1, which is obviously nonsense.
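In code, the operator itself is tiny. Here is a sketch, with clauses as lists of DIMACS-style literals:

# Binary resolution: (A OR x) resolved with (B OR -x) yields (A OR B)
def resolve(c1, c2, var):
    assert var in c1 and -var in c2
    return sorted(set(l for l in c1 if l != var) |
                  set(l for l in c2 if l != -var))

print(resolve([1, 2], [1, -2], 2))    # [1]
print(resolve([-1, 2], [-1, -2], 2))  # [-1]
print(resolve([1], [-1], 1))          # []: the empty clause, i.e. 0=1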

The RUP notation

In the so-called RUP notation (“Reverse Unit Propagation”), the above is simply written as:

a  1 0
a -1 0
a 0

Basically, we simply write down each derived clause one after the other as “added” (=”a”). It’s called Reverse Unit Propagation, because it’s possible to prove each of these clauses to be correct by starting from the top and performing simple propagation [where propagation is substitution of the values into the equations, and checking if any variable is forced to a value]. Let me explain. Say we want to prove “1 0” is correct. Then, we set all the literals’ inverted values, here “x1=FALSE”, and propagate on the original clauses plus any clause that’s above “1 0” (and hence has been proven to be correct). If we set x1=FALSE, then the original formula’s clause “1 2 0” will propagate x2=TRUE, and then the original formula’s “1 -2 0” conflicts immediately (all its literals are now FALSE). Hence, propagation led to a conflict! This means that indeed, “1 0” must hold.
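Here is a toy sketch of such a check in Python (unit propagation only; real verifiers are vastly more careful and efficient, but the idea is the same):

# Toy RUP check: assume the negation of the clause, unit propagate,
# and see whether we run into a conflict
def propagate(clauses, assignment):
    assignment = set(assignment)  # the set of literals assumed TRUE
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None  # conflict: every literal is FALSE
            if len(unassigned) == 1 and unassigned[0] not in assignment:
                assignment.add(unassigned[0])  # unit clause propagates
                changed = True
    return assignment

def rup_holds(clauses, clause):
    return propagate(clauses, [-l for l in clause]) is None

cnf = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
print(rup_holds(cnf, [1]))  # True: x1=FALSE propagates to a conflict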

What’s important to realize here is that each of the lines of the proof must be checkable by simple propagation. Let’s say we have this set of constraints:

8 9  2  3 0
8 9 -2  3 0
8 9  4 -3 0
8 9 -4 -3 0

Given these constraints, it is fairly easy to check that “8 9 0” holds. However, we can’t just write into the proof that “a 8 9 0”. Because for that, the propagation check as explained above should succeed: when I set both x8=FALSE and x9=FALSE, I should get a conflict. But I don’t! Instead, when I substitute x8=FALSE and x9=FALSE into the equations, I get this reduced formula:

 2  3 0
-2  3 0
 4 -3 0
-4 -3 0

Which does not force any variable to any value (i.e. they don’t propagate anything), and none of them is the empty clause, so I’m stuck. The proof verifiers will literally say they are “stuck”. So, how can I prove “x8 OR x9”? Let’s do it like this:

a 8 9 -3 0
a 8 9  3 0
a 8 9 0

Let’s see how this works. First, let’s try to validate “8 9 -3” by setting x8=FALSE, x9=FALSE, x3=TRUE. Then the original constraint “8 9 4 -3 0” will propagate x4=TRUE, and the original constraint “8 9 -4 -3 0” will be immediately conflicting (all its literals are now FALSE). So simple propagation leads to a conflict, hence the clause “8 9 -3” must be correct. While this is not the goal (that’s “8 9 0”), we are getting there. We now prove “8 9 3” in a similar fashion, and then we prove “8 9 0” in the same fashion. Easy!

Faster Proof Verification: DRUP

While RUP is indeed a good format, we don’t always need all the constraints to be in the active set in order to prove UNSAT. Let’s quickly re-visit this CNF, and try to prove “8 9 0”:

8 9  2  3 0
8 9 -2  3 0
8 9  4 -3 0
8 9 -4 -3 0

In order to reduce memory load, let’s delete all the clauses we don’t need after we used them! These steps are going to be noted with “d” (=”delete”) instead of “a” (“add”):

a 8 9 -3 0
d 8 9  4 -3 0
d 8 9 -4 -3 0
a 8 9  3 0
d 8 9 -2 3 0
d 8 9  2 3 0
a 8 9  0
d 8 9 -3 0
d 8 9  3 0

NICE! So we first derived “8 9 -3 0” and then removed both of the clauses we used to create it, they are not needed anymore! Begone “8 9 4 -3 0” and “8 9 -4 -3 0” ! Same thing with “8 9 3 0”. Then, once we derived “8 9 0”, we even deleted the intermediary constraints “8 9 -3 0” and “8 9 3 0” we previously derived. Begone! Not needed anymore, all we need is “8 9 0 “.
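Putting the pieces together, a toy DRUP replayer just maintains this active clause set. Here is a sketch that reuses the rup_holds() helper from the RUP sketch above:

# Toy DRUP replay: add clauses after a RUP check, drop deleted ones
def replay_drup(cnf, trace_lines):
    active = [list(c) for c in cnf]
    for line in trace_lines:
        tag, *lits = line.split()
        clause = [int(l) for l in lits[:-1]]  # strip the trailing "0"
        if tag == "a":
            assert rup_holds(active, clause), "proof step failed: " + line
            active.append(clause)
        elif tag == "d":
            active.remove(clause)
    return True

cnf = [[8, 9, 2, 3], [8, 9, -2, 3], [8, 9, 4, -3], [8, 9, -4, -3]]
trace = ["a 8 9 -3 0", "d 8 9 4 -3 0", "d 8 9 -4 -3 0",
         "a 8 9 3 0", "d 8 9 -2 3 0", "d 8 9 2 3 0", "a 8 9 0"]
print(replay_drup(cnf, trace))  # prints: True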

Extended resolution for proofs

Okay, I’m gonna cut this part short, mostly because I am terrible at this, but we have to talk about the elephant in the room: RAT, or, more beautifully, Resolution Asymmetric Tautologies. So, what is it? Essentially, it allows you to declare new variables in your proof and use them later. This is absolutely amazing, because with this, you can provide so-called extended resolution proofs (warning: 1968 research paper), which allows us to express certain proofs more compactly, in fact, exponentially more compactly!

Here’s how to do it. Let’s say you want to use the definition “x1 OR x2 = x99” in your proof, because it would make your proof smaller. This can happen, for example, because your constraints often have “x1 OR x2” in them, and now you can replace all of those parts with simply “x99”. Well then, you write this into your proof file:

a -99 2 1 0
a  99 -1 0
a  99 -2 0

Notice that this is just a simple OR gate definition. The only thing RAT requires is that (1) the first variable is the one that’s being defined and (2) the introduced variable must not be part of our input formula — so our input formula here should have at most 98 variables. That’s it. Now you can use x99 anywhere you like, as long as it makes sense. For example, you could take a clause “1 2 3 4 0” and convert it to “99 3 4 0”:

a  99 3 4 0
d  1 2 3 4 0

Notice that “99 3 4 0” is simply a RUP proof at this point: setting 99=FALSE, 3=FALSE, 4=FALSE will immediately propagate x1=FALSE, x2=FALSE (due to the clauses “99 -1 0” and “99 -2 0”), and x1,x2,x3, and x4 all being FALSE will immediately trigger a conflict for clause “1 2 3 4 0”, so we have the RUP property and we are fine. Then we can delete “1 2 3 4 0”, we don’t need it no more. Notice that I can’t write the reverse:

d  1 2 3 4 0
a  99 3 4 0

If I wrote that, “a 99 3 4 0” would not work, since it relies on “1 2 3 4 0”, but we already deleted that, oops!
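We can watch both orders play out with the toy rup_holds() checker from the RUP sketch earlier (the three definition clauses for x99 are assumed to have been added already):

cnf = [[1, 2, 3, 4], [-99, 2, 1], [99, -1], [99, -2]]
print(rup_holds(cnf, [99, 3, 4]))  # True: x99=x3=x4=FALSE forces
                                   # x1=x2=FALSE, falsifying "1 2 3 4 0"
cnf.remove([1, 2, 3, 4])           # delete first, as in the bad order
print(rup_holds(cnf, [99, 3, 4]))  # False: propagation gets stuck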

Besides the above trivial example with the OR gate, extended resolution can simulate, in polynomial time, some algorithms that are extremely hard (read: exponentially hard) for normal resolution to express. This means that using RAT, we can express even complicated mathematical concepts in this extremely simple system, for example in this case, pseudo-boolean reasoning.

What the solver does vs. what the proof checker does

So let’s say you implement DRAT proof tracing in your SAT solver and your proofs always verify. All good! Not so fast. One day you decide to check the proof that the DRAT checker recovered, and to your absolute horror, you realize that the recovered proof is substantially different from what the SAT solver thought the proof was! How can that be?

Well, the DRAT checker recovered a proof. It’s a valid proof, and it can be recovered given the set of clauses that haven’t been deleted using the “d” operator, as above. But notice that we never told the proof checker which set of clauses we resolved on to get to a clause! It just kind of figured it out by itself. Now, we can look at this and say, this is great because (1) I don’t have to do hand-holding of the proof checker, and really interestingly, (2) the proof checker could actually recover different proofs from the same proof trace(!). Ambiguity can be useful. Well, that’s great, but what if I really-really want to know what the SAT solver actually did?

Enter FRAT. This proof format is super-close to DRAT, with the following changes. Firstly, FRAT numbers each clause with a unique clause ID. This is needed because the same clause can be in the solver’s memory more than once, and we need to be able to distinguish them from each other — they may have been derived in completely different ways! Secondly, FRAT allows an optional hint at how the clause was derived. Let me give a hands-on example, it should be quite clear. As a reminder, here is the input formula we had before:

 1  2 0
 1 -2 0
-1  2 0
-1 -2 0

And here is the DRAT proof:

a  1 0
a -1 0
a 0

Now, for the FRAT proof:

o 10  1  2 0         -- input clause, let's set its ID to 10
o 11  1 -2 0         -- input clause, let's set its ID to 11
o 12 -1  2 0         
o 13 -1 -2 0         -- last input clause
a 14  1  0 l 10 11 0 -- first resolvent
d 10  1  2 0 
d 11  1 -2 0 
a 15 -1  0 l 12 13 0 -- second resolvent
d 12 -1  2 0 
d 13 -1 -2 0 
a 16 0 l 14 15 0     -- empty clause! It's UNSAT!
f 14  1  0           -- finishup from down here
f 15 -1  0
f 16 0

Okkkay. So… the original clauses are part of the proof trace now, they start with “o”. And each clause has an ID, after o/a/d/f. This can be any number, but they must be non-clashing (obviously). Furthermore, we are allowed to have an “l” after the closing “0” when adding a clause — these are the clause IDs we had to resolve to arrive at the new clause. So, e.g. IDs 10 and 11, i.e. “1 2 0” and “1 -2 0” must resolve to “1 0” (clause ID 14). And finally, we must “finalize” all our clauses, i.e. we must account for all clauses remaining in memory, with the “f” command.

Firstly, finalization means that in case your SAT solver forgot to delete a clause in the proof trace, but deleted it from memory, finalization will fail, because the solver will not finalize that clause, and so the checker will complain. In essence, finalization forces the hand of the SAT solver writer to make sure that the “view of the world” from the perspective of the SAT solver and that of the proof verifier match when it comes to clauses in memory. This is not the case for DRAT — the proof verifier there could have millions of clauses more in memory than the SAT solver :S

Secondly, the “l” notation means that we can now be precise about what clauses were actually resolved by the SAT solver to arrive at a new clause. However, this is only a “can” — adding the “l” is not necessary, and of course can be stripped. This means that if we want to, we can have ambiguity, or if we want to, we can have precision. The best of both worlds!

Trusting trust

Proofs are cool, and are very important so we can be sure our results are correct. But… are we sure they are correct? Well, not so much. You see, the proof verifiers themselves are not verified. Say, frat-rs will say “VERIFIED” on a FRAT proof, but… frat-rs itself is over 5000 lines of Rust. And before you think I’m really stretching things here, let me just remind you that I personally found a segmentation fault in drat-trim, the official DRAT proof verifier, used at the SAT competition. (I actually found the bug accidentally while fuzzing my own SAT solver.)

So, what can we do? Well, we need another verifier. I’m not kidding. Basically, frat-rs can take a proof that the SAT solver provides, and translate it to a fully annotated, clean proof with all the deletions at their earliest possible points (see this hack). We can then write a slow, but verified proof checker, that can do the trivial checking that the resolutions are indeed correct, reaching the empty clause (i.e. 0=1). The pipeline would then be:

Airplane software code + compliance requirements -> SAT_query
solution, proof  = SAT_solver(SAT_query)
solution, clean_proof = frat-rs(SAT_query, solution, proof)
solution = verified_proof_checker(SAT_query, solution, clean_proof)

What this allows us to do is that we don’t need to trust the SAT_solver or frat-rs. We can treat them as black boxes. All we need to trust is that the verified_proof_checker is correct: given the SAT_query, the solution can be trusted. In this sense, clean_proof is simply a hint for the verified_proof_checker to do its job. Notice that I completely skipped how SAT_query can be trusted — that’s an entirely different can of worms :D

In case you want to use such a verified proof checker, there is one in lean4, available here, and one for acl2, available here. As for the non-verified tool to “elaborate” simpler (e.g. DRAT) proofs into annotated (LRAT) proofs, you can use frat-rs.

Conclusions

So, we talked a lot about proof traces, proof formats, and how to trust the solution. In some cases, this is extremely important — software and hardware that controls trains, airplanes, and industrial systems (think: nuclear reactors, rockets… Ariane 5 crash anyone?) is sometimes verified in this way. We certainly don’t want any bugs in there.

What I didn’t talk about is that proof traces do something else as well: they allow us to understand how SAT solvers work. You see, while the proof is written in the forward fashion — the same direction that SAT solvers work — the proof actually only makes sense in the reverse direction. We always start at the empty clause, at the end of the proof, and walk backwards to see how the empty clause got proven! So, if you think about it, proofs can be used to look at SAT solvers in the reverse direction. And that’s super-exciting. For example, how much work did the SAT solver really have to do to reach the empty clause? How much complete nonsense did it do that was utterly useless for proving unsatisfiability? How many times did it forget proof fragments that it later had to re-learn? Or, what is the minimum memory footprint that its produced proof could fit in? And if we take advantage of DRAT’s and FRAT’s ambiguity: is there a proof with fewer steps it could have produced given the same total set of resolvents? In my opinion, there is a whole wealth of data in proof traces that we should do research on to learn more about SAT solvers.

(PS: Tracecheck is one of the earliest proof trace systems. I didn’t want to mention it because it’s not as relevant anymore as the above, and it would have made the story a tad more confusing)

CMSGen, a Fast Uniform-Like Sampler

Uniform sampling is a problem where you are given a solution space and you have to present solutions uniformly, at random. In some cases, this is quite simple, say, for the lotto. Just pick 5 random numbers from a box and we are done! For the lotto the solution space is very easy to generate. However, when there are constraints on the solution space, it starts to get tricky.

Let’s say that I have a function I want to test, but the input to the function has some real-world constraints, e.g. the 1st parameter must be larger than the 2nd, the 2nd parameter must be divisible by the 3rd, etc. If I want to test that this function operates correctly, one way to do it is to generate 100 uniformly random inputs that don’t violate any of the constraints, run the function, and see if all is OK. For this, I need a fast way of generating uniform samples given the constraints on the solution space.

Sampler speed vs. accuracy

There have been many samplers proposed in the literature. I personally have worked on one called UniGen, a guaranteed approximate probabilistic sampler, meaning that it’ll give approximately uniform samples most of the time, and we have a proof to back this up. It’s a great sampler and will work very fast on many instances. However, for really complex solution spaces, it can have trouble. Say, you want to generate interesting test inputs for your deep learning algorithm. Deep neural networks tend to be extremely complex when translated to binary constraints, so UniGen will likely not be fast enough. It would give very good quality samples (i.e. properly uniform samples), but if it’s too slow, we may want to exchange quality of samples for speed of generation.

There are two well-known samplers that are supposed to generate uniform samples on complex solution spaces, QuickSampler (code) and STS (code), but they give no guarantees; let’s call these “uniform-like” samplers. Unfortunately, the paper by Chakraborty et al and its resulting code, Barbarik, showed that these uniform-like samplers are highly non-uniform. Barbarik is a pretty neat idea that basically constructs solution spaces with known solution distributions and then asks the sampler to generate uniform samples. Knowing the solution space, Barbarik can then verify how non-uniform the sampler is. Imagine having a box with 1000 balls, half of them blue and the other half green. Now if I ask the sampler to give me 50 balls at random, and all of them are green, I’d be a bit surprised to say the least. It’d be like the lottery drawing the same 5 numbers 3 times in a row. Possible, but… not very likely. If I do this experiment 100 times, and I always get 50 green balls, it’s fair to conclude that the sampler is not uniform.

Our new uniform-like sampler, CMSGen

Given an effective tester, Barbarik, we (Priyanka Golia, Sourav Chakraborty, Kuldeep S. Meel, and myself) thought perhaps we can follow the nowadays very successful test-driven development (TDD) methodology. All we have to do is to make our sampler pass the test of Barbarik, while being at least as fast as STS/QuickSampler, and we’ll be good to go. In fact, given Barbarik, it only took about a week of playing around with CryptoMiniSat’s options to beat both STS and QuickSampler in both accuracy and speed. This speaks volumes to how important it is to have a robust, reliable, and fast testing framework that can give immediate feedback about the quality of samples generated.

Our new uniform-like sampler, based on CryptoMiniSat, is called CMSGen (research paper here), and effectively takes CryptoMiniSat and applies the following set of changes, through pre-set command line options:

  • Pick polarities at random. Normally, SAT solvers use a polarity caching scheme, but of course we want uniform samples over all of the search space, so we need to pick polarities at random.
  • Branch on variables at random. Normally, SAT solvers branch on variables that will most likely lead to a conflict to maximize search efficiency (the VSIDS heuristic). However, we want to explore the solution space as evenly as possible, and so we want to approach the solution space from as many angles as possible. If you think about the search space as an N-dimensional binary cube, then we are trying to approach this cube from as many directions as possible.
  • Turn off all pre- and inprocessing. Pre and inprocessing in SAT solvers are used to minimize the instance, transforming it into something easier to solve, e.g. through Bounded Variable Elimination. We later reconstruct a viable solution based on the solution to the transformed instance. However, the transformed instance may (and often will!) have a very different solution space. We cannot have that, so we must turn this off. To be fair, some pre- and inprocessing could be left intact, e.g. subsumption and self-subsuming elimination, perhaps a future paper :)
  • Restart at static intervals. Restarts are nowadays often dynamic in modern SAT solvers, or even if not dynamic, they follow a non-regular pattern. However, that could disturb how we find solutions. Let’s say that solutions with variable A set to TRUE are very easy to find, but solutions with A set to FALSE are very hard to find. What will happen? Well, in restarts where A was randomly set to TRUE, we’ll always quickly find a solution and output it. But for restarts when A was randomly set to FALSE, the system would struggle to find a solution, and after some conflicts, it will simply restart into a state where hopefully A is set to TRUE, and it can find a solution again. It is quite clear to see that this will lead to serious issues with sampling quality. Hence, we set an adjustable but static restart interval of 100 conflicts, with higher values typically leading to more uniform samples.

Performance of CMSGen

The performance of the system is on a ridiculous scale in comparison with other samplers:

When it comes to 2-wise coverage, i.e. the quality of samples, the data speaks for itself (note, UniGen is missing here because it was too slow):

Note that of STS and QuickSampler, STS is the more uniform sampler, but also the slower one. CMSGen overcomes this limitation: it’s both faster than QuickSampler and more uniform than STS.

And of course, the Barbarik tester gives “Accept” on CMSGen much more often than on STS or QuickSampler:

Conclusion

If you need non-guaranteed uniform but fast sampling, I’d go and try out CMSGen. It’s really a completely different beast. It’s not a guaranteed uniform sampler, but it’s incredibly effective. In fact, it’s so effective and works so well, it took me a full year to figure out how best to generate problems for it where it wouldn’t be uniform. But that’s another paper, and another blog post! In the meantime, the sampler is here, go check it out!