Holographic Algorithms Part 4: Clifford Algebras and Spinors

Recall that two posts ago, I mentioned that Landsberg et al. observed that the matchgate identities are merely the defining equations of the spinor varieties. In this post, I will explore what this really means, in the hopes of conveying to the reader their vision of the beautiful geometric reasons why holographic algorithms work. The big picture is that holographic algorithms are all about the efficient computation of some custom-made Holant that captures the answer to the combinatorial problem at hand. We come up with this exponentially large vector pairing (i.e. tensor contraction) and magically manage to compute it in polynomial time, basically by computing a determinant. When do such efficiently computable pairings occur in geometry? Well, if you take vectors in the cones over {G(k,V)} and {G(k,V^*)} (informally, vectors close enough to a {k}-plane that they can be regarded as decomposable in {\Lambda^kV} or {\Lambda^kV^*} respectively), they turn out to be vectors consisting of determinants of minors of some matrix, and their pairing is merely the determinant of a related matrix. As it will turn out, points in the variety of pure spinors can likewise be characterized as vectors of “sub-Pfaffians” of some matrix, and a pairing between two such vectors is merely the Pfaffian of a related matrix. These are the kinds of efficiently computable pairings holographic algorithms take advantage of.
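
To make the spinor side of this concrete, here is a small sketch (my own illustration; the helper names pfaffian and sub_pfaffian_vector are not from Landsberg et al.) of what a “vector of sub-Pfaffians” is: given a skew-symmetric matrix, take the Pfaffian of every principal submatrix indexed by an even-sized subset of rows and columns, and collect the results into one long vector indexed by those subsets.

```python
from itertools import combinations

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix (list of lists), by recursive
    expansion along the first row: Pf(A) = sum_j (-1)^(j+1) * A[0][j] * Pf(A
    with rows/cols 0 and j removed). Exponential time; for illustration only."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0
    total = 0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in rest] for r in rest]
        total += (-1) ** (j + 1) * A[0][j] * pfaffian(minor)
    return total

def sub_pfaffian_vector(A):
    """All sub-Pfaffians of A: one entry per even-sized subset S of the
    row/column indices, namely Pf of the principal submatrix A[S][S]."""
    n = len(A)
    return {S: pfaffian([[A[r][c] for c in S] for r in S])
            for size in range(0, n + 1, 2)
            for S in combinations(range(n), size)}

# Example: a 4x4 skew-symmetric matrix and its vector of sub-Pfaffians.
A = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]
print(sub_pfaffian_vector(A))   # {(): 1, (0, 1): 1, ..., (0, 1, 2, 3): 8}
```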

But it’s not enough to know that there exist special vectors whose pairings are easy to compute. After all, as a stupid example, the pairing between two {2^n}-vectors supported on only {\text{poly}(n)} coordinates is also computable in {\text{poly}(n)} time. What distinguishes the vectors that feature in holographic algorithms from these is the surprising availability of the former in complexity theory problems. Whereas only a negligible fraction of vectors in {\mathbb{C}^{2^n}} are supported on polynomially many coordinates, the codimension of {G(k,V)} for {V} of low dimension, and of the spinor variety in low dimensions, is quite small, and just as we’ve seen with matchgates, the vector spaces involved can often be decomposed into small enough “local” components (this local decomposition is key, because the abovementioned codimensions blow up very quickly).
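
For what it’s worth, here is that throwaway example in code (a sketch of my own; the dictionary representation is just for illustration): store each exponentially long vector by its polynomially many nonzero coordinates, and the pairing costs time proportional to the smaller support rather than to {2^n}.

```python
def sparse_pairing(u, v):
    """<u, v> for vectors in C^(2^n) stored as {coordinate: value} dicts with
    polynomially many nonzero entries; runs in time ~ min(|u|, |v|)."""
    small, large = (u, v) if len(u) <= len(v) else (v, u)
    return sum(val * large.get(idx, 0) for idx, val in small.items())

# Two vectors in C^(2^100), each supported on a handful of coordinates.
u = {0: 2.0, 17: -1.0, 2**99: 3.0}
v = {17: 5.0, 2**99: 1.0}
print(sparse_pairing(u, v))   # (-1.0)(5.0) + (3.0)(1.0) = -2.0
```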

In what follows, I hope to give a rigorous treatment of these ideas. My main references are Chevalley’s The Algebraic Theory of Spinors and Clifford Algebras and the fifth chapter of Varadarajan’s lecture notes on supersymmetry for the development of the basic theory of Clifford algebras and spinors, and Landsberg’s Tensors: Geometry and Applications and Manivel’s On Spinor Varieties and Their Secants for establishing the connection between the matchgate identities and the variety of pure spinors. As always, if there are any mistakes in the presentation below, they are entirely my fault.

Continue reading


Holographic Algorithms Part 3: Matchgate Identities

I still owe a proof of the identities invoked last post to characterize non-degenerate tensors. This relatively short post will be devoted to said proof. My primary reference for this section is Cai/Gorenstein’s paper “Matchgates Revisited.” The ideas it presents have roots in papers dating back to Valiant’s study of non-planar matchgate computation using characters in the context of simulating quantum computation by classical means. We haven’t mentioned this character-based approach yet because Cai showed it to be equivalent to the approach via signatures that we have been describing, though that equivalence may form the subject of a future post. In any case, the exposition that follows will be limited to the signature theory of planar matchgates and will thus make no reference to characters.

We claimed last time that the following identities are necessary for a vector to be the standard signature of a planar matchgate {\Gamma}. In fact, we will show that they are also sufficient.

Theorem 1: The vector {u\in\mathbb{C}^{2^n}}, with entries indexed by {\{0,1\}^n}, is the standard signature of a matchgate {\Gamma} of arity {n} iff given any {\alpha,\beta\in\{0,1\}^n},

\displaystyle \sum^{\ell}_{i=1}(-1)^iu_{\alpha+e_{\gamma_i}}u_{\beta+e_{\gamma_i}} =0,\ \ \ \ \ (1)

where {e_j} denotes the bit vector with support at the {j}th bit, addition of bit vectors is taken bitwise modulo 2 (i.e. XOR), {\gamma_i} denotes the position of the {i}th nonzero bit of {\alpha+\beta}, and {\ell} is the Hamming weight of {\alpha+\beta}.
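
As a quick sanity check of Theorem 1 (a hedged sketch; the function names and the toy matchgate are my own, not from Cai/Gorenstein), the snippet below evaluates the left-hand side of (1) for every pair {(\alpha,\beta)} and tests it on the standard signature of the simplest matchgate, a single edge of weight {w} between two external nodes, whose signature is {(u_{00},u_{01},u_{10},u_{11})=(w,0,0,1)}.

```python
from itertools import product

def matchgate_identities_hold(u, n, tol=1e-9):
    """Check identity (1) for every pair (alpha, beta) in {0,1}^n.
    u maps n-bit tuples to numbers; '+' on bit vectors is interpreted as XOR."""
    for alpha, beta in product(product((0, 1), repeat=n), repeat=2):
        gamma = [j for j in range(n) if alpha[j] != beta[j]]  # support of alpha+beta
        total = 0
        for i, g in enumerate(gamma, start=1):
            a = tuple(bit ^ (1 if j == g else 0) for j, bit in enumerate(alpha))
            b = tuple(bit ^ (1 if j == g else 0) for j, bit in enumerate(beta))
            total += (-1) ** i * u[a] * u[b]
        if abs(total) > tol:
            return False
    return True

# Standard signature of a single edge of weight w between two external nodes.
w = 3.0
u_edge = {(0, 0): w, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
print(matchgate_identities_hold(u_edge, 2))                    # True
print(matchgate_identities_hold({**u_edge, (0, 1): 1.0}, 2))   # False: not a matchgate signature
```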

Continue reading

Holographic Algorithms Part 2: Higher Dimensional Bases and Cai/Lu’s Collapse Theorem

Our treatment of holographic algorithms thus far has only worked with bases of size 1 (a basis of size {k} is a pair of {2^k}-dimensional vectors). In the first section, we will describe the few simple steps needed to generalize our framework so far to bases of higher size. One naturally wonders whether higher-dimensional basis vectors add power to holographic algorithms. We will first briefly explore an example mentioned in our first post in this series that might seem to suggest this to be the case. Pl-Rtw-Mon-3CNF, the problem of determining the number of satisfying assignments in any planar, read-twice, monotone 3-CNF, was shown by Valiant to be {\oplus P}-hard modulo 2 but to admit a polynomial-time algorithm modulo 7 using holographic matchgates and a basis of size 2. At the time, people wondered whether bases of higher size indeed added power.

Cai/Lu later showed, however, that this problem can actually be solved using matchgates over a basis of size 1. We sketch both algorithms in the second section. They subsequently showed that in fact all holographic algorithms using bases of size 2 can be “collapsed” to ones using bases of size 1 (the reason they stopped at size 2 at the time was that for higher sizes there is an additional constraint on the realizability of signatures, the so-called “Grassmann-Plücker identities,” which adds considerable complexity to the problem). They later extended the ideas behind the original collapse theorem to prove a general basis collapse theorem, by discovering that for so-called “non-degenerate” bases (degenerate bases give rise to trivial holographic algorithms and are uninteresting), it is possible to break any such basis down into an “embedded basis” of 2-vectors capturing all the expressiveness of the higher-dimensional basis. Long story short, Cai/Lu showed that higher-dimensional bases do not add power to holographic algorithms. The main goal of this post will be to present the proof of their final basis collapse theorem.

Continue reading

Holographic Algorithms Part 1: Matchgrids and the Holant

The last post armed us with a powerful piece of polynomial-time computation, the FKT algorithm. In this post we’ll begin to explore how to leverage this to solve other problems under Valiant’s holographic framework by developing the basic theory of matchgrids, including the original proof of Valiant’s flagship Holant theorem, a first glimpse at a holographic reduction using these tools, and a reformulation (due to Cai/Choudhary) of everything in terms of tensors.

At a high level, just as FKT solves a counting problem related to a graph {G}, namely {\text{PerfMatch}(G)}, by tweaking {G} until that counting problem becomes something we know how to compute efficiently, namely a Pfaffian, the Holant theorem will allow us to solve counting problems related to {G} by building out of {G} a (planar) grid {\Omega} of “gadgets” called matchgates which capture the constraints of the counting problem in question, so that the answer to the counting problem becomes something we know how to compute efficiently, namely {\text{PerfMatch}(\Omega)}. In particular, the counting problem is encoded by {\Omega} so that its answer is, by construction, equal to a particular function of the weighted edges of {\Omega}: an exponential sum of terms known as the Holant. We should think of the Holant as a global consolidation of all the local data needed to solve the counting problem at hand.

The whole point is this: we can rig the Holant to capture all of this information precisely because it is defined as a huge exponential sum, one we can’t hope to brute-force any more than we could our original problem; nevertheless, the FKT and Holant theorems imply that there are exponential cancellations among these terms, so that solving the counting problem is as easy as computing the Holant, which in turn is as easy as computing {\text{PerfMatch}}.
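
To make the “huge exponential sum” concrete, here is a brute-force evaluation of a Holant-style sum (a hedged sketch; the precise definition appears in the full post, and the Gate representation below is my own): each gate contributes one signature entry per assignment of 0/1 values to its incident edges, and the sum ranges over all {2^{|E|}} edge assignments of the product of these contributions.

```python
from itertools import product

class Gate:
    def __init__(self, edges, signature):
        self.edges = edges            # indices of incident edges, in order
        self.signature = signature    # dict: tuple of edge bits -> weight

def brute_force_holant(num_edges, gates):
    """Sum, over all 0/1 assignments to the edges, of the product of the
    gates' signature entries; exponential in num_edges, for illustration only."""
    total = 0
    for x in product((0, 1), repeat=num_edges):
        term = 1
        for g in gates:
            term *= g.signature[tuple(x[e] for e in g.edges)]
        total += term
    return total

# Toy example: two gates sharing edge 0, each also owning a private edge.
g1 = Gate([0, 1], {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2})
g2 = Gate([0, 2], {(0, 0): 1, (0, 1): 3, (1, 0): 0, (1, 1): 1})
print(brute_force_holant(3, [g1, g2]))   # 6
```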

Of course, the tricky part is to define what the abovementioned “gadgets” are and how they can encode the local constraints we want. Before we introduce these key players, however, let’s play around with how we might encode the constraints of perfect matching locally.

Continue reading

Holographic Algorithms Part 0: The FKT Algorithm

Introduction/Some Poetry

I’ve recently been reading about Valiant’s framework of holographic algorithms, and I thought it might be a good way to revive this blog by posting a bit about what I’ve learned. Let’s begin with some motivation with respect to the famous P vs. NP problem: it is a widely held conjecture that the class of languages decidable in deterministic polynomial time can be separated from that of languages decidable in nondeterministic polynomial time. One of the prevailing reasons for believing this is simply the intuition that the algorithmic methods available to us for solving problems deterministically in polynomial time don’t seem sufficient for solving certain hard problems. In some sense, Valiant’s holographic approach serves as a potential warning against this kind of intuition: by giving rise to exotic polynomial-time algorithms for problems which previously might have been deemed intractable, it suggests that perhaps the reason we haven’t been able to come up with efficient solutions to NP-complete problems is that our understanding of the power of polynomial-time computation is currently limited. Roughly speaking, these algorithms make use of so-called holographic reductions which, unlike classical reductions in complexity theory that map one problem instance to another, map the sum of solution fragments to multiple instances of a problem to the sum of solution fragments to multiple instances of another problem in such a way that while the problem instances do not correspond one-to-one, the sums agree. The utility of holographic algorithms mainly comes from reducing to counting problems which are tractable because their solutions involve making “cancellations” (think Strassen’s algorithm or Gaussian elimination) that arise for linear algebraic reasons and can in some cases yield exponential speedups in computation. In this way, we can essentially import the power of cancellations even to situations where these linear algebraic reasons are no longer obvious and get a host of efficient solutions for seemingly difficult problems.

This blog series will build up to an exotic algorithm for the following bizarre-sounding problem: determine, modulo 7, the number of satisfying assignments of any planar, read-twice, monotone 3-CNF, a problem Valiant refers to as “#7Pl-Rtw-Mon-3CNF”; we will also seek to explain, using the machinery subsequently developed by Cai, where the number 7 comes from. Before we get to the core of the subject in subsequent posts, however, I’d like to spend one brief post discussing the famous algorithm due to Fisher, Kasteleyn, and Temperley for counting perfect matchings in planar graphs, which takes advantage of the abovementioned exponential cancellations. The algorithms we will discuss in later posts will hinge on this algorithm.
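
To preview the kind of cancellation at play (a hedged sketch, not the FKT algorithm itself; the 4-cycle and its orientation are my own toy example), the snippet below counts the perfect matchings of a 4-cycle by brute force and compares the answer with the Pfaffian of a suitably oriented skew-symmetric adjacency matrix, recovered here via Pf(A)^2 = det(A).

```python
import numpy as np
from itertools import combinations

def count_perfect_matchings(n, edges):
    """Brute force: count ways to cover all n vertices by disjoint edges."""
    count = 0
    for subset in combinations(edges, n // 2):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == n:
            count += 1
    return count

# The 4-cycle 0-1-2-3-0, with edges oriented 0->1, 1->2, 2->3, 0->3
# (the bounded face then has an odd number of clockwise edges, which is
# the Kasteleyn condition the post discusses).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j], A[j, i] = 1, -1

pf_abs = abs(np.linalg.det(A)) ** 0.5   # |Pf(A)|, since Pf(A)^2 = det(A)
print(count_perfect_matchings(4, edges), pf_abs)   # 2 and 2.0 (up to floating point)
```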

Continue reading

Some Elementary Constructions of Small-Bias Spaces, Part 1

A small-bias space is a distribution over the Boolean hypercube \{0,1\}^n which fools parity tests. Since they were first introduced by Naor/Naor, these objects have been of central importance to the study of pseudorandomness and have found applications in constructing pseudorandom generators that fool small-width branching programs, derandomizing certain algorithms, constructing (k,\epsilon)-independent hash functions, and obtaining short PCPs.

In this post we examine four constructions, including an extremely simple construction over \mathbb{Z}^n_p by Alon/Mansour (2002) which generalizes one due to Alon/Goldreich/Hastad/Peralta (1992), another construction based on linear feedback shift registers due to AGHP, and Naor/Naor’s original construction (1990) based on k-wise independence (or alternatively Justesen codes) and random walks on expander graphs.
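
As a quick illustration of the definition above (a hedged sketch of my own, not one of the four constructions; the exhaustive computation is only feasible for tiny n), the bias of a multiset S of n-bit strings is the largest advantage any nonzero parity test achieves against a uniform sample from S.

```python
from itertools import product

def bias(S, n):
    """Max over nonzero alpha of |E_{x~S}[(-1)^{<alpha, x>}]|, where S is a
    list of n-bit tuples sampled uniformly. A small-bias space makes this tiny."""
    worst = 0.0
    for alpha in product((0, 1), repeat=n):
        if not any(alpha):
            continue
        corr = sum((-1) ** sum(a * x for a, x in zip(alpha, s)) for s in S)
        worst = max(worst, abs(corr) / len(S))
    return worst

n = 3
full_cube = list(product((0, 1), repeat=n))
print(bias(full_cube, n))                # 0.0: the uniform distribution fools every parity
print(bias([(0, 0, 0), (1, 1, 1)], n))   # 1.0: e.g. the parity x_1 + x_2 is constant on this set
```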

Continue reading

PRGs and Hardness Amplification Part 2- Local Decoding

(This is a continuation of a previous post.)

The goal now is to find such an encoding/decoding. We reduce this problem further to the problem of finding an algorithm that corrects a received word in \Sigma^{L'} to a nearby codeword (rather than decoding it to a message) with nontrivial accuracy, and the reduction will be valid as long as we can efficiently translate between messages and codewords. Formally,

Definition 7: A local \delta-correcting algorithm D for an encoding E is a probabilistic algorithm that, given oracle access to a received word g and an input position x, computes with probability at least 2/3 the value f'(x) for any codeword f' that is \delta-close to g. For coin tosses r, the output of the algorithm will be denoted by D^g(x;r), and the probability is taken over r.

Definition 8: An encoding E: \{0,1\}^L\to\Sigma^{L'} is systematic if there exists a “translation” map T: \{0,...,L-1\}\to\{0,...,L'-1\}, computable in time polynomial in the bit length of its input (i.e. \text{poly}(\log L)), and a map t: \Sigma\to\{0,1\} such that for every message f with codeword f' = E(f) and every position x, f(x) = t(f'(T(x))).

We can get a decoder by sending an input position x\in\{0,...,L-1\} through such a translation map and then through a local corrector:

Result 4: If E is a systematic encoding and the code has a local \delta-correcting algorithm in time t, then E has a local \delta-decoding algorithm running in time t + \text{poly}(\log L).
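
A minimal sketch of the construction behind Result 4 (the function names are mine; the corrector and the translation maps T and t are assumed to be supplied as in Definitions 7 and 8): the decoder simply translates a message position into a codeword position and hands it to the local corrector.

```python
def local_decoder(corrector, T, t):
    """Build a local decoder from a local corrector of a systematic encoding.
    corrector(g, y): given oracle access to g, returns the symbol of the nearby
                     codeword at codeword position y (Definition 7).
    T, t:            the translation maps of Definition 8.
    """
    def decode(g, x):
        # Message position x corresponds to codeword position T(x),
        # and t maps the codeword symbol back to a message bit.
        return t(corrector(g, T(x)))
    return decode

# Toy usage with the identity encoding over the binary alphabet, where the
# "corrector" simply reads the received word (no actual error correction here).
g = [0, 1, 1, 0]
decode = local_decoder(lambda g, y: g[y], T=lambda x: x, t=lambda s: s)
print(decode(g, 2))   # 1
```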

We will design a local corrector and a systematic encoding for the Reed-Muller code. We then concatenate with the Hadamard code to obtain a binary alphabet, and this will turn out to give the desired hardness amplification.

Continue reading

PRGs and Hardness Amplification Part 1- Preliminaries

In this series, we’ll be exploring some of the material from the last chapter of Vadhan’s survey, in particular results on amplifying the hardness of computing functions and on obtaining pseudorandom generators from hard functions. In this relatively short post, we begin by providing preliminary definitions and some motivation related to derandomization. We then close with a glimpse of our first step in hardness amplification.

Preliminaries

Roughly speaking, generators are deterministic functions that stretch uniformly random seeds into longer strings of bits that appear random. It turns out that our methods of quantifying “randomness” using both min-entropy H_{\infty} and statistical difference from the uniform distribution are insufficient here, because H_{\infty}(f(X))\le H_{\infty}(X) for any random variable X and deterministic f, so the output of a generator can never contain more randomness, in this sense, than its short seed. In place of statistical or information-theoretic measures, we thus use a complexity-theoretic one: a variable will seem random if it is impossible for a so-called nonuniform algorithm running within a certain time limit to tell it apart from a uniformly random variable. We first define nonuniformity:

Definition 0: An algorithm T deciding a language L is a-nonuniform for a: \mathbb{N}\to\mathbb{N} if there exist advice bitstrings \{\alpha_k\}_{k\in\mathbb{N}}\subseteq\{0,1\}^*, each \alpha_k of length at most a(k), and an algorithm T' such that for all x, T(x)=1 iff T'(x,\alpha_{|x|})=1.

We will not make use of this definition in its full generality; it will suffice to talk about algorithms which operate with some “advice” hardwired in, and the length of this advice will be reasonable enough that we won’t bother with the data of the length function a. We can now formalize our abovementioned ideas of “randomness” and pseudorandom generators.

Definition 1: Random variables X and Y are (nonuniformly) (t,\epsilon)-computationally indistinguishable if for every nonuniform algorithm T running in time at most t, |P(T(X)=1) - P(T(Y)=1)|\le\epsilon.

Definition 2: A (t,\epsilon)-pseudorandom generator is a deterministic function G: \{0,1\}^d\to\{0,1\}^m such that G(U_d) and U_m are (t,\epsilon)-computationally indistinguishable.
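
As a sanity check on Definitions 1 and 2 (a hedged toy example of my own, not from the survey): a “generator” that merely appends the parity of its seed fails to be pseudorandom, because a tiny parity-checking distinguisher achieves advantage 1/2.

```python
import random

def G(seed):
    """Candidate 'generator' {0,1}^d -> {0,1}^{d+1}: append the seed's parity."""
    return seed + [sum(seed) % 2]

def T(y):
    """Distinguisher: accept iff the last bit equals the parity of the rest."""
    return int(y[-1] == sum(y[:-1]) % 2)

d, trials = 8, 100_000
acc_G = sum(T(G([random.randint(0, 1) for _ in range(d)])) for _ in range(trials)) / trials
acc_U = sum(T([random.randint(0, 1) for _ in range(d + 1)]) for _ in range(trials)) / trials
print(acc_G, acc_U)   # ~1.0 vs ~0.5, so G(U_d) and U_{d+1} are distinguishable with advantage ~1/2
```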

Continue reading

Randomness Extractors and Relationships Between Pseudorandom Objects

“Everything is vitally connected, not merely artificially related, so that when I see anything, I am seeing everything. When I speak to anyone, I am speaking to everybody. When I touch anything, I touch all things, and when I know one thing, I know everything.” – Brihadaranyaka Upanishad

In this post, I’ll cover some of what I think are the coolest ideas in Vadhan’s survey: the principles unifying the main pseudorandom objects studied. We will continue where we left off in the last post by illustrating a connection between list encoding and expanders. We then introduce the concept of randomness extractors and relate these to hash functions, expanders, and list encoding.

NOTE: This post only scratches the surface of what Vadhan covers here.

Continue reading