**Introduction/Some Poetry**

I’ve recently been reading about Valiant’s framework of **holographic algorithms**, and I thought it might be a good way to revive this blog by posting a bit about what I’ve learned. Let’s begin with some motivation with respect to the famous P vs. NP problem: it is a widely held conjecture that the class of languages decidable in deterministic polynomial time can be separated from that of languages decidable in nondeterministic polynomial time. One of the prevailing reasons is simply the intuition that the algorithmic methods available to us for solving problems deterministically in polynomial time don’t seem sufficient for solving certain hard problems. In some sense, Valiant’s holographic approach serves as a potential warning against this kind of intuition: by giving rise to exotic polynomial-time algorithms for problems which previously might have been deemed intractable, it suggests that perhaps the reason we haven’t been able to come up with efficient solutions to NP-complete problems is that our understanding of the power of polynomial-time computation is currently limited. Roughly speaking, these algorithms make use of so-called **holographic reductions** which, unlike classical reductions in complexity theory that map one problem instance to another, map the sum of solution fragments over multiple instances of one problem to the sum of solution fragments over multiple instances of another problem in such a way that, while the problem instances do not correspond one-to-one, the sums agree. The utility of holographic algorithms mainly comes from reducing to counting problems which are tractable because their solutions involve making “cancellations” (think Strassen’s algorithm or Gaussian elimination) that arise for linear algebraic reasons and in some cases can yield exponential speedups in computation.
In this way, we can essentially import the power of cancellations to situations even where these linear algebraic reasons are no longer obvious and get a host of efficient solutions for seemingly difficult problems.

This blog series will build up to an exotic algorithm for the following bizarre-sounding problem: determine the number modulo 7 of satisfying assignments of any planar, read-twice, monotone 3-CNF, a problem Valiant refers to as “**#7Pl-Rtw-Mon-3CNF**,” and will seek to explain, using the machinery subsequently developed by Cai, where the number 7 comes from. Before we get to the core of the subject in subsequent posts, however, I’d like to spend one brief post discussing the famous algorithm due to Fisher, Kasteleyn, and Temperley for counting perfect matchings in planar graphs, which takes advantage of the abovementioned exponential cancellations. The algorithms we will discuss in later posts will hinge on this algorithm.

**Perfect Matchings**

Recall that a perfect matching of a graph $G = (V, E)$ is a set of disjoint edges the union of whose endpoints is all of $V$. The physical motivations for studying perfect matchings lie in the realm of chemistry and statistical mechanics: how many ways can you position a layer of diatomic molecules adsorbed on a surface, a collection of water molecules bonded in the form of ice, or a mixture of dimers covering a lattice? It turns out that the number of perfect matchings of the relevant graphs roughly corresponds to the energy of such systems. So exactly how hard is it to compute this quantity?

For general graphs $G$, finding a single perfect matching is easy. **Edmonds’ blossom algorithm** is a polynomial-time procedure for doing so; the basic idea is to start with an empty matching and iteratively build it up by finding “augmenting paths,” paths which end in uncovered vertices and whose edges alternate between lying inside and outside of the current matching, and flipping the membership in our matching of every edge in the augmenting path. It turns out that a matching is maximum if and only if no augmenting path exists, and furthermore these paths can be *found efficiently* by looking for certain kinds of odd cycles called “blossoms” which, once contracted to a point, preserve the number of augmenting paths left in $G$.
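The augmenting step itself is easy to sketch in code. Here is a minimal Python illustration (finding the augmenting paths, i.e. the blossom machinery, is the hard part and is not shown; the helper name `augment` is mine, not from any standard library):

```python
# Sketch of the augmenting step in Edmonds-style matching (illustration only).
# A matching is a set of frozenset edges; an augmenting path is a vertex list
# whose first and last vertices are uncovered by the matching.

def augment(matching, path):
    """Flip membership of each path edge in/out of the matching."""
    result = set(matching)
    for u, v in zip(path, path[1:]):
        result.symmetric_difference_update({frozenset((u, v))})
    return result

# Path graph 1-2-3-4 with current matching {2,3}; (1,2,3,4) is augmenting.
m = {frozenset((2, 3))}
m2 = augment(m, [1, 2, 3, 4])
print(sorted(sorted(e) for e in m2))  # [[1, 2], [3, 4]] -- one edge larger
```

Each augmentation grows the matching by exactly one edge, which is why iterating this step terminates after at most $|V|/2$ rounds.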

The problem of finding perfect matchings is, however, the canonical (and indeed the first known) example of a problem in P whose corresponding counting problem, quite shockingly, is #P-complete. In fact, by the #P-completeness of the permanent, this is the case even for counting perfect matchings in bipartite graphs. To get a grasp of the hardness of #P-complete problems, recall Toda’s theorem: give a deterministic Turing machine running in polynomial time a #P oracle, and with just a single query to that oracle it can solve any problem in the polynomial hierarchy.

The miracle that holographic algorithms exploit is that while the problem of counting perfect matchings in general graphs is intractable, if we restrict our attention to the case of planar graphs, all of a sudden the problem admits a *polynomial-time solution*. This is the famous FKT algorithm discovered independently by Kasteleyn and by Fisher and Temperley for the case of lattice graphs and then generalized by Kasteleyn to the case of planar graphs. We will spend the first part of this blog post describing the algorithm.

**Remark 1**: **Kuratowski’s Theorem** (the prototypical example of a forbidden graph characterization) says that a finite graph is planar if and only if it contains no subdivision of $K_5$, the complete graph on five vertices, or of $K_{3,3}$, the complete bipartite graph with three vertices on each side. Vijay Vazirani has shown that we can in fact drop the former condition and obtain a generalization of the FKT algorithm to all graphs with no $K_{3,3}$ subdivision.

**Remark 2**: Counting the number of *all* matchings is unfortunately #P-complete even for planar graphs.

**The Pfaffian**

We will think of a perfect matching first as a partition of the set $\{1, \dots, 2n\}$ into pairs. The first observation is that the number of perfect matchings of a graph $G$ is a polynomial in the entries of its $\{0,1\}$-valued adjacency matrix $A$:

$$\operatorname{PerfMatch}(A) = \sum_{M = \{(i_1, j_1), \dots, (i_n, j_n)\}} \prod_{k=1}^{n} A_{i_k j_k},$$

where the sum ranges over all partitions $M$ of $\{1, \dots, 2n\}$ into pairs.
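This polynomial can of course be evaluated directly (in exponential time) by enumerating pair-partitions. A minimal Python sketch, where the helper names `pairings` and `perf_match` are mine, not from the post:

```python
from math import prod

def pairings(elems):
    """Yield all partitions of an even-size list into unordered pairs."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def perf_match(A):
    """PerfMatch(A): sum over pair-partitions of products of A's entries."""
    return sum(prod(A[i][j] for i, j in M)
               for M in pairings(list(range(len(A)))))

# 4-cycle 1-2-3-4-1 (here 0-indexed): exactly two perfect matchings.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(perf_match(C4))  # 2
```

Each pairing contributes 1 exactly when all of its pairs are edges of the graph, so the sum counts perfect matchings.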

Now let’s slightly tweak this polynomial. We can also think of a perfect matching as a permutation: writing each pair with its smaller element first and sorting the pairs by their smaller elements, the matching $M = \{(i_1, j_1), \dots, (i_n, j_n)\}$ corresponds to the permutation $\pi_M$ sending $1, 2, 3, 4, \dots$ to $i_1, j_1, i_2, j_2, \dots$; for example, the matching given by the partition $\{\{1,4\},\{2,3\}\}$ corresponds to the permutation $(1, 4, 2, 3)$. So consider the following polynomial, the **Pfaffian** of $A$:

$$\operatorname{Pf}(A) = \sum_{M = \{(i_1, j_1), \dots, (i_n, j_n)\}} \operatorname{sgn}(\pi_M) \prod_{k=1}^{n} A_{i_k j_k}.$$

Now the conundrum is that while these two polynomials are basically the same up to the signs of their coefficients, the former is ridiculously hard to compute whereas the latter, I claim, can be determined in polynomial time. Indeed, the famous result of Muir tells us that in fact $\operatorname{Pf}(A)^2 = \det(A)$ for any skew-symmetric matrix $A$, and we know how to compute the determinant in polynomial time (Gaussian elimination).
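Muir's identity is easy to check numerically on a small example. The sketch below computes the Pfaffian straight from its definition and the determinant via the Leibniz formula (both brute force, for illustration only; all helper names are mine):

```python
from itertools import permutations
from math import prod

def pairings(elems):
    """All partitions of an even-size list into pairs (i < j, sorted by i)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def sign(perm):
    """Sign of a permutation in one-line notation, via inversion count."""
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def pfaffian(A):
    """Pf(A) by its definition: signed sum over pair-partitions."""
    return sum(sign([x for p in M for x in p]) * prod(A[i][j] for i, j in M)
               for M in pairings(list(range(len(A)))))

def det(A):
    """Determinant via the Leibniz formula (fine for tiny matrices)."""
    n = len(A)
    return sum(sign(list(p)) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# A small skew-symmetric matrix: Pf(A) = 2*6 - (-1)*(-5) + 3*4 = 19.
A = [[0,  2, -1,  3],
     [-2, 0,  4, -5],
     [1, -4,  0,  6],
     [-3, 5, -6,  0]]
assert pfaffian(A) ** 2 == det(A)  # Muir's identity
print(pfaffian(A), det(A))  # 19 361
```

For a $2n \times 2n$ matrix the Pfaffian sum has $(2n-1)!!$ terms, so this brute force is only for checking the identity; Gaussian elimination gives the determinant, and hence $\operatorname{Pf}(A)^2$, in polynomial time.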

In other words, the Pfaffian is easier to compute because all those different signs in its coefficients yield exponential cancellations that facilitate computation. These are the sorts of “accidental” cancellations that holographic algorithms are after.

**FKT’s Algorithm**

Now the way to compute $\operatorname{PerfMatch}(A)$ more efficiently becomes clear: modify the signs of the entries of the adjacency matrix by orienting the edges of $G$, and hope that we get an orientation $\sigma$, a so-called **Pfaffian orientation**, such that the coefficients of $\operatorname{Pf}(A^{\sigma})$ for the new skew-symmetric adjacency matrix $A^{\sigma}$ (where $A^{\sigma}_{ij} = 1$ if $\sigma$ orients the edge $\{i, j\}$ from $i$ to $j$, and $A^{\sigma}_{ij} = -1$ if from $j$ to $i$) are all the same, i.e. the signs of the matchings are all the same. The point of FKT is that all planar graphs have such a Pfaffian orientation.

**Theorem 1 (Kasteleyn):** An orientation in which each internal face has an odd number of edges oriented clockwise is a Pfaffian orientation.

Assuming this is true, it is straightforward to check that such an orientation can be found in polynomial time: basically just start with an arbitrary orientation of sufficiently many edges and fill in the remaining orientations in the right order. The algorithm is as follows:

- Compute a spanning tree $T$ of $G$ and orient the edges in $T$ arbitrarily.
- Construct the tree $T^{*}$ whose vertices correspond to the faces of $G$, connecting two vertices whenever the corresponding faces share an edge not in $T$.
- Starting at the leaves of $T^{*}$, orient the missing edges in each face appropriately (so that the face becomes clockwise odd).
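The whole pipeline can be sketched end to end on a tiny example: the $2 \times 3$ grid, whose three perfect matchings are the three domino tilings of a $2 \times 3$ board. The orientation below is one I checked by hand to make both internal faces clockwise odd, so it is Pfaffian by Theorem 1; the helper names are mine:

```python
from itertools import permutations
from math import isqrt, prod

# Vertices of the 2x3 grid, numbered   0 1 2
#                                      3 4 5
# A hand-checked orientation in which both internal faces have an odd
# number of clockwise edges, hence a Pfaffian orientation by Theorem 1:
oriented_edges = [(0, 1), (1, 2), (1, 4), (4, 3), (0, 3), (2, 5), (5, 4)]

n = 6
A = [[0] * n for _ in range(n)]
for u, v in oriented_edges:
    A[u][v], A[v][u] = 1, -1   # skew-symmetric signed adjacency matrix

def sign(perm):
    """Sign of a permutation in one-line notation, via inversion count."""
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def det(mat):
    """Determinant via the Leibniz formula (fine for a 6x6 matrix)."""
    m = len(mat)
    return sum(sign(list(p)) * prod(mat[i][p[i]] for i in range(m))
               for p in permutations(range(m)))

# FKT: the number of perfect matchings equals sqrt(det(A^sigma)).
print(isqrt(det(A)))  # 3 -- the three domino tilings of a 2x3 board
```

In a real implementation one would of course use Gaussian elimination for the determinant and the spanning-tree procedure above for the orientation; the brute force here is just to keep the example self-contained.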

It remains to justify the theorem. First, some terminology: for a matching $M$, denote the partner of a vertex $v$ inside $M$ by $M(v)$, and define the merging of two perfect matchings $M$ and $M'$ to be the cycle cover of $G$ defined as follows: each vertex $v$ is contained in the cycle through $v, M(v), M'(M(v)), M(M'(M(v))), \dots$, whose edges alternate between $M$ and $M'$. It is clear that each such cycle is necessarily even (otherwise some vertex would be matched to two different partners within the same matching). Next, we say an even cycle is **clockwise odd** if an odd number of its edges are oriented clockwise, and we say a cycle $C$ is **removable** if the subgraph induced by removing the vertices of $C$ from $G$ still admits a perfect matching. It’s clear that for any perfect matchings $M$ and $M'$, the cycles in the merging of $M$ and $M'$ are removable.
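The merging operation is simple enough to sketch directly; the following toy Python (the `merge` helper is my own name) builds the cycle cover of two matchings of a 6-cycle and confirms its cycles are even:

```python
def merge(M1, M2, vertices):
    """Cycle cover from two perfect matchings: follow M1 and M2 alternately."""
    # Matchings are given as dicts mapping each vertex to its partner.
    seen, cycles = set(), []
    for v in vertices:
        if v in seen:
            continue
        cycle, cur, use_first = [], v, True
        while cur not in seen:
            seen.add(cur)
            cycle.append(cur)
            cur = (M1 if use_first else M2)[cur]
            use_first = not use_first
        cycles.append(cycle)
    return cycles

# Two perfect matchings of the 6-cycle 1-2-3-4-5-6-1:
M  = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}
Mp = {2: 3, 3: 2, 4: 5, 5: 4, 6: 1, 1: 6}
cycles = merge(M, Mp, range(1, 7))
print(cycles)  # [[1, 2, 3, 4, 5, 6]] -- a single even cycle
assert all(len(c) % 2 == 0 for c in cycles)
```

Edges shared by both matchings show up as 2-cycles, and all other cycles alternate between the two matchings, which is exactly why every cycle in a merging has even length.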

Theorem 1 follows from the following two lemmas:

**Lemma 1:** If every cycle in the merging of $M$ and $M'$ is clockwise odd, then the signs of $M$ and $M'$ are the same.

**Lemma 2:** If every internal face has an odd number of clockwise edges, then every removable cycle is clockwise odd.

**Proof of lemma 1:** Consider any cycle in the cycle cover arising from the merging. Because the cycle has an odd number of clockwise edges, after swapping an even number of edges’ orientations we may assume the cycle has a single clockwise edge and all other edges counterclockwise; each swap flips the sign of exactly one of the two matchings’ terms, so an even number of swaps preserves whether the matchings’ signs agree. But now a direct computation shows that the sign of the permutation taking $\pi_M$ to $\pi_{M'}$ exactly cancels the sign difference coming from the entries of $A^{\sigma}$ along the cycle, so the signs of the two matchings agree. ∎

**Proof of lemma 2:** Let $C$ be a removable cycle. Let $f$, $v$, and $e$ respectively denote the number of faces, vertices, and edges *inside* $C$. Because the planar graph consisting of $C$ and everything inside it has Euler characteristic 2, and the total number of its faces is $f + 1$ (including the single face outside $C$), we get $v - e + f = 1$ (the vertices and edges of $C$ itself cancel in Euler’s formula). Let $c$ denote the number of clockwise edges on $C$, and let $c_i$ denote the number of clockwise edges on face $i$ inside $C$. We want to show $c$ is odd.

Because each face has an odd number of edges oriented clockwise, $\sum_{i=1}^{f} c_i$ and $f$ have the same parity. Furthermore, $\sum_{i=1}^{f} c_i = c + e$ (each edge inside $C$ is clockwise for exactly one of the two faces sharing it, and each edge of $C$ is clockwise for its unique inner face exactly when it is clockwise on $C$), so $c + e$ and $f$ have the same parity, meaning $c$ and $v$ have opposite parities by Euler’s formula: $c \equiv f + e \equiv 1 + v \pmod{2}$. But $v$ is even because once you remove $C$, the subgraph inside $C$ should still have a perfect matching, and we’re done. ∎

In the next post, we will put this algorithm to work by introducing the tools central to holographic reductions, namely the theory of matchgrids and Valiant’s Holant Theorem.