Ethereum in the age of AOL

Imagine you are super-excited about some new technology that’s enabled by the internet, say, Google, and instead of investing in Google, i.e. buying Google stock, you start buying AT&T stock. Or let’s say Facebook just came out, and it’s the hottest thing. Everyone wants Facebook. So instead of investing in Facebook stock, you start buying AT&T stock. It’d be just weird, right? But that’s what seems to be happening in the crypto space.

The more time I spend in this space, the more I’m baffled by people buying and holding ETH, the native currency of Ethereum, hoping that if the next Facebook comes around, the value of ETH will go up. To be fair, AT&T and Cisco stock probably did go up somewhat when Netflix became popular; you definitely need routers to watch Netflix. But ultimately, AT&T, Cisco, T-Mobile — they are just the underlying pipes. We all understand that the Internet is just the piping.

This is reminiscent of AOL. Back in the dark ages of the Internet, AOL was “the” internet: it provided all the services in one place, from directories to mail. If you wanted to invest in the internet, you bought AOL stock. But that’s no longer the case, and for a good reason.

Electricity, the Internet, and Infrastructure

Electricity plug designed to fit into a socket for light bulbs

I think most people would find it rather insane to buy electricity suppliers’ stocks when a new type of home appliance, say, robot vacuums, comes around. However, electricity stocks were a big deal back in the day. The original use-case for electricity at home was light bulbs. Light bulbs were huge — lighting used to be expensive (you had to get candles or oil), painful to light, and it sometimes burned down buildings. In fact, the original washing machines’ electric plugs actually plugged into a socket for light bulbs! You had to screw it in — and unscrew it in case there was an accident — which was very dangerous, hence the newer sockets that are easy to unplug. But it’s interesting to observe that there was an original use-case, and it was so popular that “electricity” just straight up meant light bulbs, and nothing else.

The original use-case of cryptocurrencies was the liquid, fast transfer of fungible value over the network, without a central system. But we have moved way beyond that now. We have currencies and banks operating over Ethereum. We have casinos, online games, and betting platforms. We have exchanges and brokers. However, somehow people still hope that the value of the infrastructure will go up.

Hoping your AT&T bill goes up

Imagine buying a monthly internet subscription from AT&T and hoping that the subscription costs go up. That’s what’s happening in crypto. The native currency of Ethereum is used to regulate transactions on the chain, deciding the competition for which transactions make it into the chain. When you hold ETH, you are effectively hoping that transaction costs go up. It’s like hoping that your AT&T bill gets higher. Who the hell would want that? It’s an infrastructure cost; it’s not supposed to go up over time. It’s supposed to go down.

The secondary use-case of ETH is the consensus protocol, making sure the things that do make it into the transaction block are what one would more or less expect. So it’s used to also secure the chain in a manner that’s called Proof-of-Stake. Here, the value of ETH is used to make sure it’s not economically viable to cheat. However, it wouldn’t be too difficult to move staking to stablecoins such as Dai, or other value-holders, e.g. a combination of the most used systems on the chain. After all, it’s in everyone’s best interest for the most value-generating systems to survive.

Betting on success

In my view, if you want to bet on the crypto system being successful, you need to bet not only on ETH but, much more importantly, on all the systems that make Ethereum useful. Looking at e.g. a DeFi token list is a good start. Ultimately, the internet is not useful because of AT&T. It’s useful because of email, websites, and online systems such as Ethereum itself, which wouldn’t exist without the Internet. Yet I’m pretty sure ETH holders are not in a rush to buy AT&T stock. In the same vein, they should not be in a rush to buy ETH. Rather, they should be investing in all the systems and tools built on top of Ethereum. Ethereum is “just” the infrastructure, similarly to how the internet on which Ethereum runs is “just” the infrastructure, which itself is built on electricity, which is “just” the infrastructure. Sure, we need the infrastructure. And we need to invest in the infrastructure (I have a separate rant on how little is actually being invested by anyone other than the Ethereum Foundation). But only investing in the infrastructure, and not in the systems built on top of it, is weird.

Post Scriptum: Bitcoin

Bitcoin is forever stuck at being electricity for light bulbs only. Make of that what you will. Light bulbs are useful, of course. But I’m typing this on a computer that’s connected to the internet, and my light bulbs are not even on. If electricity were only capable of lighting light bulbs, and not of powering my computer, my router, and the system of routers that route my packets to you, we would still be writing and reading articles in physical newspapers, and sending mail via physical post.

References

You may be interested in this article and this tweet thread for ideas and thoughts that others had independently, but well before me. Thanks to M. at the EF for these references; they are actually really relevant. I’m glad others are seeing things similarly to me.

Computing Tricky Probabilities

Probabilities of certain events are sometimes really hard to estimate. Often, it’s because we lack information about the underlying causal chains, but sometimes, it’s because the causes are so intertwined that even if we know the underlying probabilities of the events along the causal chains, it’s still impossible for us to untangle the web.

Let’s say event X has a probability of 0.4, event Y has a probability of 0.6, and Z happens if either X or Y happens. However, with a probability of 0.8, X and Y can’t happen at the same time. What’s the probability of Z happening?

             0.4                  0.6          
         ┌───────┐            ┌───────┐       
         │       │    0.8     │       │       
         │   X   │◄────!─────►│   Y   │       
         │       │            │       │       
         └──┬────┘            └────┬──┘       
            │                      │                    
            │      ┌───────┐       │          
            │      │       │       │          
            └─────►│   Z   │◄──────┘          
                   │       │                  
                   └───────┘  

Translating to a Model Counting Problem

The solution is easy to compute if you know how to use propositional model counters. They allow you to create a set of rules, such as:

X = 0.4
Y = 0.6
X or Y = True
0.8 -> NOT (X & Y)

The above perfectly describes the probability landscape, using “->”, i.e. implication (if the left side is true, the right side must also be true), and “&”, i.e. logical AND. Basically, for Z to happen, X or Y has to happen, and with probability 0.8, they must not happen at the same time. To solve the above, we add a helper variable H1 that allows us to translate the last equation into an implication:

X = 0.4
Y = 0.6
X or Y = True
H1 = 0.8
H1 -> NOT (X & Y)

To translate this to propositional model counting, we can do the following, where “-” is negation:

weight X = 0.4
weight Y = 0.6
weight H1 = 0.8
X or Y = True
-H1 or -X or -Y = True

The last two lines are what’s called a Tseitin transformation, which basically means that when H1 is TRUE, (NOT X or NOT Y) must be TRUE, i.e. at least one of them has to be FALSE, i.e. they can’t happen at the same time (but it’s possible that neither of them happens). The above is what’s called a conjunctive normal form, or CNF for short.
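
To make the weighted-counting semantics concrete, here is a tiny brute-force check in Python (my own illustrative sketch, not part of any tool): enumerate all eight assignments to X, Y, and H1, keep the ones satisfying both clauses, and sum up their weights, where a TRUE variable contributes its weight and a FALSE variable contributes one minus its weight.

from itertools import product

# clauses: (X or Y) and (-H1 or -X or -Y)
w = {"X": 0.4, "Y": 0.6, "H1": 0.8}
total = 0.0
for X, Y, H1 in product([False, True], repeat=3):
    if (X or Y) and (not H1 or not X or not Y):
        vals = {"X": X, "Y": Y, "H1": H1}
        weight = 1.0
        for name, value in vals.items():
            weight *= w[name] if value else 1 - w[name]
        total += weight
print(total)  # 0.568 (modulo floating-point rounding)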

The model counting DIMACS representation (formal description here) of the above is:

c t pwmc
p cnf 3 2
c comment X = 1, Y = 2, H1 = 3
c p weight 1 0.4 0
c p weight 2 0.6 0
c p weight 3 0.8 0
1 2 0
-3 -1 -2 0
c p show 1 2 3 0 

This is a straightforward 1-to-1 translation of the above, except we have to tell the counter that this is a projected weighted model counting problem (“pwmc”) and that it has 3 variables and 2 constraints (“p cnf 3 2”).

Solving with a Model Counter

Once you have a CNF representation, you can run a modern propositional weighted model counter to count the number of weighted solutions, i.e. the probability of Z. I recommend the one I develop, ganak (linux binary here, open source code will be available soon, ask me if you need it):

./ganak problem.cnf
c s type pwmc
s SATISFIABLE
c s exact arb float 5.68E-01

So Z has a probability of 0.568. This can easily be verified to be correct:

Python 3.13.1
>>> 0.6*(1-.4)+0.4*(1-0.6)+0.4*0.6*(1-0.8)
0.568

This adds up the probabilities that Y fires (but not X), that X fires (but not Y), and that both fire but the conflict does not happen (with probability 1-0.8).

Of course, the above is a rather simple problem. However, with the above steps, one can create arbitrarily complicated causal chains, with complex constraints between partial or intermediate elements of the chain, conditional probabilities, etc. Although there are limits to scalability, this approach is many orders of magnitude faster than anything done by hand, or via some form of brute-force checking/estimation.

Use-cases

In certain scenarios, it’s really important to know what the probability of some event is. For example, if you want to run a hazardous process such as a nuclear reactor or a chemical plant, or you want to provide insurance for some complicated (financial) product, accurately computing the probabilities of bad events is extremely important. I think it’s rather obvious that nuclear plants have very complicated causal chains for bad events — and they tend to be intertwined. If the external power lines are damaged, it’s likely a major event (e.g. an earthquake), and you have likely also lost the majority of your transport capabilities, and with them, much of the potential external help.

Similarly, if you want to bet for or against market moves where you know, or have a good idea of, the underlying causal chains and associated probabilities, you can build quant-trading or expert-advice systems based on the above. In other words, weighted model counting is not only useful for party tricks. Complex probability chains with thousands of interdependent random variables can easily be analyzed using the above system.

Some food for thought: great content on the Internet

The Internet is a pretty vast place. So I decided to curate a few links for you all for this special end-of-year period:

Ágnes Dénes – Wheatfield – A Confrontation, 1982, Battery Park Landfill, Downtown Manhattan, photo: John McGrall

On SAT/SMT Model Counting

I have recently had a few discussions with people who wanted to do some form of model counting, and I decided to put my thoughts about this problem into a single blog post, hopefully bringing some light to this area. Firstly, when talking about model counting, it’s very important to think through what one actually needs to achieve. When people talk about a model, they often mean a solution to a problem, described as a set of constraints over either purely boolean variables (so-called propositional model counting), expressed in so-called CNF, or over higher-level constraints, often expressed in the SMT-LIB format.

Let’s look at a CNF example:

c this is a comment
c below line means there are 5 boolean variables and 2 constraints
p cnf 5 2
1 -2 0
1 -2 3 0

The above CNF has 5 boolean variables and 2 constraints: “v1 or (not v2) = true”, and “v1 or (not v2) or v3 = true”. In this case, if “v1 or (not v2) = true” is satisfied, the other constraint is also satisfied. So we effectively only have to satisfy the “v1 or -v2” constraint. There are 3 solutions over the variables v1 and v2 to this constraint: (1, 0), (1, 1), and (0,0). For variables v3, v4, and v5 we can choose anything. So there are a total of 3*2^3 = 24 solutions to this CNF.
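
As a quick sanity check, a few lines of Python (my own throwaway sketch) can enumerate all 2^5 assignments and confirm this count:

from itertools import product

count = 0
for v1, v2, v3, v4, v5 in product([False, True], repeat=5):
    # the two clauses of the CNF above
    if (v1 or not v2) and (v1 or not v2 or v3):
        count += 1
print(count)  # 24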

Let’s look at an SMT problem:

(set-logic QF_BV) ; Set the logic to Quantifier-Free BitVectors
(declare-fun bv () (_ BitVec 8)) ; Declare an 8-bit value
(assert (not (= bv #x00))) ; Assert that 'bv' is not equal to zero

In this SMT-LIB problem, we declared that the logic we will be using is the so-called quantifier-free bitvector logic, which basically gives you access to fixed-width variables such as uint8_t, and of course all the logical and fixed-width bitvector operators such as modular addition, division, subtraction, etc. In this particular case, we declared an 8-bit variable and banned it from being 0, so this problem has 2^8-1 solutions, i.e. 255.
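
For a problem this small, one can even count by brute force with an SMT solver, repeatedly blocking each solution found. A minimal sketch with the z3 Python bindings (assuming z3 is installed; this naive loop obviously does not scale):

from z3 import BitVec, Solver, sat

bv = BitVec('bv', 8)
s = Solver()
s.add(bv != 0)
count = 0
while s.check() == sat:
    count += 1
    s.add(bv != s.model()[bv])  # block this solution, ask for another
print(count)  # 255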

Whichever format is being used, “counting” solutions is something that may sound trivial (it’s just a number!), but once we get a bit more fiddly, it turns out there are many tasks that are actually easier, or harder, than plain counting, even though they sound very close to, or are even synonymous with, counting:

  • Figuring out if there is at least one solution. This is more appropriately called satisfiability checking. This is significantly easier than any notion of counting listed below. However, even this can be challenging. Typically, in the SMT/SAT competition, this is the so-called SMT single query track or the SAT competition Main Track. All SAT and SMT solvers such as kissat or cvc5 can run this kind of task.
  • Next up in complexity is to figure out if there is exactly one solution. This is actually very close to satisfiability checking, and requires only two so-called “incremental calls” to the SMT/SAT solver (see the sketch after this list). I have seen this question pop up a lot in zero-knowledge (ZK) circuit correctness. What is needed is to run the query as per the above, but then add a constraint that bans the solution output by the solver, and ask the system to be solved again. If the solver outputs UNSAT, i.e. unsatisfiable, then we know there are no more solutions, and we are happy with the result. If not, it gives us another solution, which is a witness that there are at least two solutions. These queries are answered by so-called incremental SMT solvers (SMT competition track here), or incremental SAT solvers (incremental SAT competition track here). Almost all SMT solvers can work incrementally, for example cvc5 or z3. Most SAT solvers work incrementally, too, but e.g. kissat does not, and you need to use CaDiCaL instead.
  • Next up in complexity is to approximately count the solutions. Here, we are not interested in knowing that there are exactly, say, 87231214 solutions; instead, we are happy to know that there are approximately 2^26 solutions. This is often good enough, in case one wants to e.g. figure out probabilities. There are very few such systems available: for SMT there is csb, and for SAT there is ApproxMC. For the special case of approximate DNF counting, I’d recommend pepin.
  • Next up in complexity is exactly counting the solutions. Here, we want to know exactly how many solutions there are, and we really, really need the exact number. There are few tools for doing this over SMT, but doing it over SAT is quite developed. For SMT, I would recommend following the csb approach but running Ganak on the blasted SAT formula instead of ApproxMC; for SAT, there is the annual Model Counting Competition, where Ganak won every single category this year, so I’d recommend Ganak.
  • Next up is model enumeration, where it’s not enough to count the solutions: we need a way to compactly represent and store them all. This of course requires quite a bit of disk space, as there could be e.g. 2^100 solutions, and representing them all, even in a very compact format with lots of smart ideas, can take a lot of space — but not necessarily an exponential amount. Currently, there is no such system for SMT, and my understanding is that only the d4 model counter can do this on SAT/CNF problems, thanks to its proof checking system, in part due to Randal Bryant. Once Ganak can output a proof, it will also be able to do this.
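
To illustrate the “exactly one solution” check from the list above, here is a minimal sketch using the z3 Python bindings (my own toy example, with a made-up constraint): one call to get a witness, then one incremental call with that witness banned.

from z3 import BitVec, Solver, sat

x = BitVec('x', 8)
s = Solver()
s.add(x * 3 == 15)                 # over 8-bit arithmetic, x = 5 is the only solution
assert s.check() == sat            # at least one solution exists
first = s.model()[x]               # the witness
s.add(x != first)                  # ban the witness, solve again (incremental call)
if s.check() == sat:
    print("at least two solutions:", first, "and", s.model()[x])
else:
    print("exactly one solution:", first)  # prints: exactly one solution: 5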

All in all, there is a very large gap between “find if there is exactly one solution” and “enumerate all solutions” — even though I have seen these two notions being mixed up a few times. The first might take only a few seconds to solve, while the other might not even be feasible, because compactly representing all solutions may be too expensive in terms of space (and that’s not even discussing time).

One aspect to keep in mind is that often, one does not need all solutions. For example, if one only wants a good set of example solutions, then it is possible to use uniform-like samplers that give a “somewhat uniform” set of example solutions from the solution space (without guarantees of uniformity). A good tool for this is cmsgen for CNF/SAT problems. If one needs a guaranteed uniform set of samples (to be exact, a probabilistically approximately uniform sample), then unigen is the way to go. Obviously, it’s quite easy to get a uniform sample once we have an enumeration of all solutions — simply pick a random one from the whole set — but that’s way more expensive than running unigen.

While all of the above is about counting, it’s often the case that one wants to optimize for something within the solution space. Basically, we have some kind of objective function that we want to optimize for (e.g. maximize profit or minimize loss), and so we are looking for a solution that maximizes/minimizes some property. Of course, once we have done model enumeration, it’s quite easy to do this: simply run the objective function on all solutions, and pick the best! But enumeration is usually far too expensive for that. Instead, one can formulate the problem as a so-called MaxSAT problem, or as an SMT problem with an objective function. These systems run some really smart algorithms to significantly cut down the search time, and often even keep a “best so far” solution around, so we can early-abort them and still get some kind of “best so far” solution.
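
As a quick illustration (my own toy example, with made-up variables), z3 ships an optimizing solver that accepts an objective function directly:

from z3 import Optimize, Int, sat

x = Int('x')
profit = Int('profit')
opt = Optimize()
opt.add(x >= 0, x <= 100, profit == 3 * x - 50)
opt.maximize(profit)      # the objective function
if opt.check() == sat:
    print(opt.model())    # best solution: x = 100, profit = 250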

All in all, it’s extremely important to figure out what one actually needs, because there are vast gaps in complexity between the problems (and corresponding algorithms) above. Even between something like approximate and exact counting, the gap is gigantic, as there are huge swaths of problems that are almost trivial to count approximately, but essentially impossible to exactly count with current technology:

Here, the green line is a tool that changes its behaviour from exact to approximate model counting after 1800 seconds of solving, and the number of solved instances jumps significantly. Notice that the red line, the original system that runs for the full 1800 seconds, is the best in class here by a wide margin, and still the approximate counting that Ganak+ApproxMC switches into shows a massive difference.

Our tools for solving, counting and sampling

This post is just a bit of a recap of what we have developed over the years as part of our toolset of SAT solvers, counters, and samplers. Many of these tools depend on each other, and have taken greatly from other tools, papers, and ideas. These dependencies are too numerous to list here, but the list is long, probably starting somewhere around the Greek period and going all the way to recent work such as SharpSAT-td or B+E. My personal work stretches back to the beginning of CryptoMiniSat in 2009, and the latest addition to our list is Pepin.

Overview

Firstly, when I say “we”, I loosely refer to the work of my colleagues and myself, often but not always part of the research group led by Prof Kuldeep Meel. Secondly, almost all these tools depend on CryptoMiniSat, a SAT solver that I have been writing since around 2009. This is because most of these tools use DIMACS CNF as the input format and/or make use of a SAT solver, and CryptoMiniSat is excellent at reading, transforming, and solving CNFs. Thirdly, many of these tools have a Python interface, some connected to PySAT. Finally, all these tools are maintained by me personally, and all have a static Linux executable as part of their release; many also have a MacOS binary, and some even a Windows binary. All of them build with open source toolchains using open source libraries, and all of them are either MIT licensed or GPL licensed. There are no stale issues in their respective GitHub repositories, and most of them are fuzzed.

CryptoMiniSat

CryptoMiniSat (research paper) is our SAT solver that can solve and pre- and inprocess CNFs. It is currently approx 30k+ lines of code, with a large amount of the codebase dedicated to CNF transformations, which are also called “inprocessing” steps. These transformations are accessible to the outside via an API that many of the other tools take advantage of. CryptoMiniSat used to be a state-of-the-art SAT solver, and while it’s not too shabby even now, it hasn’t had the chance to shine at a SAT competition since 2020, when it came 3rd place. It’s hard to keep a SAT solver competitive; there are many aspects to such an endeavor, but mostly it takes energy and time, some of which I have lately redirected into other projects, see below. Nevertheless, it’s a cornerstone of many of our tools, and e.g. large portions of ApproxMC and Arjun are in fact implemented in CryptoMiniSat, so that improvements in one tool can benefit all the others.
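
For a taste of the API, here is roughly what solving a tiny CNF looks like through the pycryptosat Python bindings that ship with CryptoMiniSat (a sketch; see the project README for the exact current interface):

from pycryptosat import Solver

s = Solver()
s.add_clause([1, -2])     # v1 or (not v2)
s.add_clause([-1, 2, 3])  # (not v1) or v2 or v3
sat, solution = s.solve()
print(sat)       # True
print(solution)  # a tuple indexed by variable number; solution[0] is None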

Arjun

Arjun (research paper) is our tool to make CNFs easier to count with ApproxMC, our approximate counter. Arjun takes a CNF with or without a projection set, and computes a small projection set for it. What this means is that if, say, the question was: “How many solutions does this CNF have if we only count solutions to be distinct over variables v4, v5, and v6?”, Arjun can compute that it’s in fact sufficient to e.g. compute the solutions over variables v4 and v5, and that will be the same as the solutions over v4, v5, and v6. This can make a huge difference for large CNFs where e.g. the original projection set can be 100k variables, but Arjun can compute a projection set sometimes as small as a few hundred. Hence, Arjun is used as a preprocessor for our model counters ApproxMC and GANAK.
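
A toy illustration of the underlying idea (a made-up formula, not Arjun’s algorithm): if v3 is fully determined by v1 and v2, then counting solutions projected onto {v1, v2, v3} gives the same number as projecting onto {v1, v2} alone.

from itertools import product

# constraint: v3 = v1 XOR v2, i.e. v3 is defined by v1 and v2
sols = [(v1, v2, v3) for v1, v2, v3 in product([False, True], repeat=3)
        if v3 == (v1 != v2)]
over_123 = {(v1, v2, v3) for v1, v2, v3 in sols}
over_12 = {(v1, v2) for v1, v2, _ in sols}
print(len(over_123), len(over_12))  # 4 4 -- identical counts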

ApproxMC

ApproxMC (research paper) is our probabilistically approximate model counter for CNFs. This means that when ApproxMC gives a result, it gives it in the form of “The model count is between 0.9*M and 1.1*M, with a probability of 99%, and with a probability of 1%, it can be any value”. This is very often enough for most cases of counting, and is much easier to compute than an exact count. It counts by basically halving the solution space K times and then counting the remaining number of solutions. The count is then estimated to be 2^(how many times we halved)*(how many solutions remained). This halving is done using XOR constraints, which CryptoMiniSat is very efficient at. In fact, no other state-of-the-art SAT solver can currently perform XOR reasoning other than CryptoMiniSat.
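
Here is a toy Python sketch of the halving idea (illustrative only, nothing like ApproxMC’s actual implementation): each random XOR (parity) constraint keeps roughly half of the solutions, so after k XORs the original count is roughly 2^k times what remains.

import random
from itertools import product

n = 12
# some arbitrary solution set to count
sols = [s for s in product([0, 1], repeat=n) if sum(s[:6]) >= 3]
print("exact count:", len(sols))

k = 0
while len(sols) > 16:  # stop once few solutions remain
    subset = [i for i in range(n) if random.random() < 0.5]
    parity = random.randint(0, 1)
    sols = [s for s in sols if sum(s[i] for i in subset) % 2 == parity]
    k += 1
print("estimate:", len(sols) * 2 ** k)  # roughly the exact count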

UniGen

UniGen (research paper) is an approximate probabilistic uniform sample generator for CNFs. Basically, it generates samples that are probabilistically approximately uniform. This can be helpful, for example, if you want to generate test cases for a problem, and you need the samples to be almost uniform. It uses ApproxMC to first count, and then the same idea as ApproxMC to sample: add as many XORs as needed to halve the solution space, and then take K random elements from the remaining (small) set of solutions. These will be the samples returned. Notice that UniGen depends on ApproxMC for counting, Arjun for projection minimization, and CryptoMiniSat for the heavy lifting of solution/UNSAT finding.

GANAK

GANAK (research paper, binary) is our probabilistic exact model counter. In other words, it returns a result such as “This CNF has 847365 solutions, with a probability of 99.99%, and with 0.01% probability, any other value”. GANAK is based on SharpSAT and some parts of SharpSAT-td and GPMC. In its currently released form, it is in its infancy: while usable, it needs e.g. Arjun to be run on the CNF beforehand, and while competitive, its ease-of-use could be improved. Vast improvements are in the works, though, and hopefully things will be better for the next Model Counting Competition.

CMSGen

CMSGen (research paper) is our fast, weighted, uniform-like sampler, which means it tries to give uniform samples as best it can, but provides no guarantees of uniformity. Despite the lack of guarantees, it is surprisingly good at generating uniform samples. While these samples cannot be trusted in scenarios where the samples must be uniform, they are very effective in scenarios where a less-than-uniform sample will only degrade the performance of a system. For example, they are great at refining machine learning models, where the samples are taken uniformly at random from the area of input where the ML model performs poorly, to further train (i.e. refine) the model on inputs where it is performing poorly. Here, if the sample is not uniform, it will only slow down the learning, but not make it incorrect. At the same time, generating provably uniform samples in such scenarios may be prohibitively expensive. CMSGen is derived from CryptoMiniSat, but does not import it as a library.

Bosphorus

Bosphorus (research paper) is our ANF solver, where ANF stands for Algebraic Normal Form. It’s a format used widely in cryptography to describe constraints over a finite field via multivariate polynomials over the field GF(2). Essentially, it’s equations such as “a XOR b XOR (b AND c) XOR true = false” where a, b, c are booleans. These allow some problems to be expressed in a very compact way, and solving them can often be tantamount to breaking a cryptographic primitive such as a symmetric cipher. Bosphorus takes such a set of polynomials as input and tries to simplify them via a set of inprocessing steps and SAT solving, and/or to solve them via translation to a SAT problem. It can also output an equivalent CNF that can e.g. be counted via GANAK, which will give the count of solutions to the original ANF. In this sense, Bosphorus is a bridge from ANF into our set of CNF tools above, allowing cryptographers to make use of the wide array of tools we have developed for solving, counting, and sampling CNFs.

Pepin

Pepin (research paper) is our probabilistically approximate DNF counter. DNF is basically the reverse of CNF — it’s trivial to ascertain whether there is a solution, but it’s very hard to count exactly how many solutions there are. However, it is actually extremely fast to probabilistically approximate how many solutions a DNF has, and Pepin does exactly that. It’s one of the very few tools we have that doesn’t depend on CryptoMiniSat, as it deals with DNFs, not CNFs. It blows all other such approximate counters out of the water, and of course its speed is basically incomparable to that of exact counters. If you need to count a DNF formula and you don’t need an exact result, Pepin is a great tool of choice.
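
To give a flavor of why approximate DNF counting can be so fast, here is a toy Monte Carlo estimator in Python in the style of the classic Karp-Luby scheme (my own sketch; Pepin’s actual algorithm is different and far more refined):

import random

n = 20  # number of boolean variables
# a DNF as a list of cubes; each cube maps a variable index to its required value
dnf = [{0: 1, 1: 0}, {2: 1, 3: 1, 4: 0}, {0: 0, 5: 1}]

sizes = [2 ** (n - len(cube)) for cube in dnf]  # number of solutions per cube
U = sum(sizes)

hits, N = 0, 100_000
for _ in range(N):
    # pick a cube proportionally to its size, then a uniform point inside it
    i = random.choices(range(len(dnf)), weights=sizes)[0]
    point = {v: random.randint(0, 1) for v in range(n)}
    point.update(dnf[i])
    # count the point only for the first cube that covers it
    first = next(j for j, c in enumerate(dnf)
                 if all(point[v] == b for v, b in c.items()))
    if first == i:
        hits += 1
print(U * hits / N)  # unbiased estimate; ~589824 for this example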

Conclusions

My personal philosophy has been that if a tool is not easily accessible (e.g. you have to email the authors) and has no support, it essentially doesn’t exist. Hence, I try my best to keep the tools I feel responsible for accessible and well-supported. In fact, this runs so deep that e.g. CryptoMiniSat uses the symmetry-breaking tool BreakID, and so I made that tool into a robust library, which is now being packaged by Fedora, because it’s needed by CryptoMiniSat. In other words, I am pulling other people’s tools into the “maintained and supported” list of projects that I work with, because I want to make use of them (e.g. BreakID now builds on Linux, MacOS, and Windows). I did the same with e.g. the Louvain Community library, which had a few oddities/issues I wanted to fix.

Another oddity of mine is that I try my best to make our tools make sense to the user, work as intended, give meaningful (error) messages, and have good help pages. For example, none of the tools I develop call subprocesses that make it hard to stop a computation, and none use a random number seed that can lead to reproducibility issues. While I am aware that working tools are sometimes less respected than a highly cited research paper, and that in some sense I am investing my time in a slightly suboptimal way, I still feel obliged to make sure the tax money spent on my academic salary gives something tangible back to the people who pay for it.