Holographic Algorithms Part 2: Higher Dimensional Bases and Cai/Lu’s Collapse Theorem

Our treatment of holographic algorithms thus far has only worked with bases of size 1 (a basis of size {k} is a pair of {2^k}-dimensional vectors). In the first section, we will describe the few simple steps needed to generalize our framework to bases of higher size. One naturally wonders whether higher-dimensional basis vectors add power to holographic algorithms. We will first briefly revisit an example mentioned in the first post in this series that might seem to suggest this is the case. Pl-Rtw-Mon-3CNF, the problem of counting the satisfying assignments of a planar, read-twice, monotone 3-CNF, was shown by Valiant to be {\oplus P}-hard modulo 2 but to admit a polynomial-time algorithm modulo 7 using holographic matchgates and a basis of size 2. At the time, people wondered whether bases of higher size indeed added power.

Cai/Lu later showed, however, that this problem can in fact be solved using matchgates over a basis of size 1 alone. We sketch both algorithms in the second section. They subsequently showed that all holographic algorithms using bases of size 2 can be “collapsed” to ones using bases of size 1. (The reason they stopped at size 2 at the time was that for higher sizes there is an additional constraint on the realizability of signatures, the so-called Grassmann-Plücker identities, which adds considerable complexity to the problem.) They later extended the ideas behind the original collapse theorem to prove a general basis collapse theorem, by discovering that for so-called “non-degenerate bases” (degenerate bases give rise only to trivial holographic algorithms and are uninteresting), any such basis can be broken down into an “embedded basis” of 2-vectors capturing all the expressiveness of the higher-dimensional basis. Long story short, Cai/Lu showed that higher-dimensional bases do not add power to holographic algorithms. The main goal of this post is to present the proof of their final basis collapse theorem.

1. Bases of Size {k}

In general, if a tensor of arity {n} is realized over a basis {T} of size {k}, the corresponding matchgate has arity {nk}, and it’ll be convenient to think of its input/output nodes as split into {n} “blocks” of size {k}. Call the basis’ {2^k\times 2} transformation matrix {T}, and denote by {t^{\alpha}_i} the entry in row {\alpha\in\{0,1\}^k} and column {i\in\{0,1\}}.

We can then extend the tensor-theoretic formulation covered last time as follows. As usual, a generator {\Gamma} of arity {n} has contravariant signature {G} over {T} iff {\underline{G} = T^{\otimes n}G}. Concretely, for {\alpha_1,...,\alpha_n\in\{0,1\}^k},

\displaystyle \underline{G}^{\alpha_1\cdots\alpha_n} = \sum_{i_1,...,i_n\in\{0,1\}}G^{i_1\cdots i_n}t^{\alpha_1}_{i_1}\cdots t^{\alpha_n}_{i_n},

where {\underline{G}} denotes the standard signature of {\Gamma}. Likewise, a recognizer {\Gamma} of arity {n} has covariant signature {R} over {T} iff {R = \underline{R}T^{\otimes n}}. Concretely,

\displaystyle R_{i_1\cdots i_n} = \sum_{\alpha_1,...,\alpha_n\in\{0,1\}^k}\underline{R}_{\alpha_1\cdots\alpha_n}t^{\alpha_1}_{i_1}\cdots t^{\alpha_n}_{i_n},

where {\underline{R}} denotes the standard signature of {\Gamma}.
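Concretely, both transformations are straightforward to implement. Here is a small Python sketch (the helper names are illustrative, not from any library), representing the {2^k\times 2} matrix {T} as a dict mapping each row index {\alpha\in\{0,1\}^k} (a tuple) to the row {(t^{\alpha}_0, t^{\alpha}_1)}, and arity-{n} tensors as dicts keyed by index tuples:

```python
# Sketch of the two basis transformations for a size-k basis.
# T maps each alpha in {0,1}^k (a tuple) to the row [t^alpha_0, t^alpha_1].
from itertools import product

def generator_standard_signature(G, T, n, k):
    """underline(G)^{a1..an} = sum_{i1..in} G^{i1..in} t^{a1}_{i1} ... t^{an}_{in}"""
    out = {}
    for alphas in product(product((0, 1), repeat=k), repeat=n):
        total = 0
        for i in product((0, 1), repeat=n):
            term = G[i]
            for a, b in zip(alphas, i):
                term *= T[a][b]
            total += term
        out[alphas] = total
    return out

def recognizer_signature(uR, T, n, k):
    """R_{i1..in} = sum_{a1..an} underline(R)_{a1..an} t^{a1}_{i1} ... t^{an}_{in}"""
    out = {}
    for i in product((0, 1), repeat=n):
        total = 0
        for alphas in product(product((0, 1), repeat=k), repeat=n):
            term = uR[alphas]
            for a, b in zip(alphas, i):
                term *= T[a][b]
            total += term
        out[i] = total
    return out
```

With {k = 1} and {T} the identity, both maps are the identity, which is a quick way to sanity-check the index conventions.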

2. Solutions Modulo 7

First note that the constraints of Pl-Rtw-Mon-3CNF can be simulated using generators with symmetric signature {[1,0,1]} and recognizers with symmetric signature {[0,1,1,1]}. The generators represent variables and the recognizers clauses. The signature {[1,0,1]} captures the read-twice condition because the generator has arity 2, and it captures the constraint that the truth values of the two occurrences of the variable are consistent because {n\otimes p} and {p\otimes n} are emitted with weight 0. The signature {[0,1,1,1]} captures the 3CNF condition because the recognizer has arity 3, and it captures satisfiability because it assigns weight 0 precisely to the all-false assignment, i.e., it recognizes exactly the assignments satisfying the (monotone) clause.

Unfortunately, these signatures are not realizable in {\mathbb{C}} or {\mathbb{F}_2}, but Valiant showed that they are realizable in {\mathbb{F}_7}, specifically via the basis {n = (1,1,2,1)} and {p = (2,3,6,2)}.

Cai/Lu showed that these signatures are in fact realizable over the basis {n = (1,6)} and {p = (3,5)}. (For now we will defer the details of proving realizability in both cases; time permitting, I’ll write up a post some time on Cai/Lu’s successful characterization of realizability for all symmetric signatures.)
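Deferring the full realizability proofs, here is a quick sanity check one can run on the size-1 basis (a sketch assuming, per the discussion above, that the clause recognizer is the arity-3 OR {[0,1,1,1]}; we verify only the parity condition that every standard signature must satisfy, not the actual matchgate constructions):

```python
# Sanity check (mod 7): the size-1 basis n = (1,6), p = (3,5) maps the
# generator signature [1,0,1] and the clause signature (assumed here to
# be the arity-3 OR, [0,1,1,1]) to vectors supported on indices of a
# single Hamming-weight parity, a necessary condition for standard signatures.
from itertools import product
from math import prod

M = 7
T = [[1, 3], [6, 5]]      # columns are n = (1,6) and p = (3,5)
Tinv = [[5, 4], [1, 1]]   # T^{-1} mod 7 (det T = 1 mod 7)

def apply_power(vec, A, n):
    # returns A^{tensor n} applied to vec, entries reduced mod M
    return {a: sum(vec[i] * prod(A[x][y] for x, y in zip(a, i))
                   for i in product((0, 1), repeat=n)) % M
            for a in product((0, 1), repeat=n)}

def parities(sig):
    # set of Hamming-weight parities appearing in the support
    return {sum(a) % 2 for a, v in sig.items() if v % M != 0}

# generator direction: underline(G) = T^{tensor 2} G
uG = apply_power({i: [1, 0, 1][sum(i)] for i in product((0, 1), repeat=2)}, T, 2)
# recognizer direction: underline(R) = R (T^{-1})^{tensor 3}; pass the transpose
TinvT = [[Tinv[c][r] for c in range(2)] for r in range(2)]
uR = apply_power({i: [0, 1, 1, 1][sum(i)] for i in product((0, 1), repeat=3)}, TinvT, 3)

assert len(parities(uG)) == 1 and len(parities(uR)) == 1
```

Running this gives candidate standard signatures supported only on even-weight indices for the generator and only on odd-weight indices for the recognizer, as required.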

With this first instance of a dimension “collapse” result in mind, we will now begin to explore Cai/Lu’s remarkable result saying that such a collapse can always occur for nontrivial holographic algorithms.

3. Degenerate Generators

A new notion that Cai/Lu’s two papers on basis collapse introduced in moving beyond size-1 bases is that of degenerate and non-degenerate generators and bases. As it turns out, recognizers over bases of size greater than 1 are indeed more powerful. The saving grace is that such recognizers can only ever appear in conjunction with essentially “useless” generators that give rise to trivial holographic algorithms. Before giving the actual definitions, let’s get a feel for what such generators look like.

For starters, generators of arity 1 are pretty useless. Intuitively, without additional nodes to emit particles, there can be no holographic interference, defeating the purpose of using FKT to achieve exponential cancellations. More formally, assume that all the generators in a matchgrid {\Omega} are of arity 1; denote by {n_i} the number of input nodes of recognizer {R_i}, and denote by {G_{i,j}} the generator connected to the {j}th input node of {R_i}. Then the Holant is merely {\prod^r_{i=1}\left(\sum_{x\in\{0,1\}^{n_i}}\text{valR}(R_i,x)\prod^{n_i}_{j=1}\text{valG}(G_{i,j},x_j)\right)}; in other words, the Holant of all of {\Omega} is the product of the Holants of the subgraphs of {\Omega} consisting of each recognizer and the generators attached to it. What this tells us is that the counting problem in question can be solved merely by looking at its local parts; for instance, if our problem is satisfiability and the recognizers represent the clauses of a CNF, generators of arity 1 would mean the CNF is read-once, so that the total number of satisfying assignments is merely the product of the number of satisfying assignments for each clause.
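As a quick illustration of this locality (on a hypothetical toy instance, not from the posts), for a read-once monotone CNF the brute-force count indeed factors into per-clause counts:

```python
# Toy illustration: for a read-once monotone CNF (clauses over disjoint
# variables), the number of satisfying assignments factors as a product
# of per-clause counts -- the "local" structure described above.
from itertools import product

clauses = [(0, 1, 2), (3, 4), (5, 6, 7)]  # disjoint variable sets
nvars = 8

brute = sum(all(any(x[v] for v in c) for c in clauses)
            for x in product((0, 1), repeat=nvars))
local = 1
for c in clauses:
    local *= 2 ** len(c) - 1  # satisfying assignments of one monotone OR clause

assert brute == local  # both equal 7 * 3 * 7 = 147
```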

If arity 1 generators are uninteresting, anything that “decomposes” into such a generator is also uninteresting. In tensor language, we mean decomposition in the sense that the signature tensor {G} of an arity-{k} generator is equal to {G_1\otimes\cdots\otimes G_k}, where {G_i} is the signature tensor of an arity-1 generator. In such a case, we could replace {G} with {k} arity-1 generators without changing the Holant, and we’re back in the situation of the previous paragraph.

Definition 1: A generator whose signature tensor has such a decomposition into signature tensors of arity-1 generators is said to be a degenerate generator.
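For arity 2, degeneracy is easy to test concretely: {G} decomposes as {G_1\otimes G_2} exactly when the {2\times 2} matrix of entries {(G^{ij})} is singular. A minimal sketch:

```python
# For arity 2, G decomposes as G1 (x) G2 exactly when the 2x2 matrix
# M[i][j] = G^{ij} has rank <= 1, i.e., zero determinant.
def is_degenerate_arity2(G):
    return G[(0, 0)] * G[(1, 1)] - G[(0, 1)] * G[(1, 0)] == 0

# [1,0,1] (the variable generator from Section 2) is non-degenerate:
eq2 = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
assert not is_degenerate_arity2(eq2)

# while a tensor product such as (1,2) (x) (3,4) is degenerate:
prod_tensor = {(i, j): [1, 2][i] * [3, 4][j] for i in (0, 1) for j in (0, 1)}
assert is_degenerate_arity2(prod_tensor)
```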

To ignore degenerate generators, we will work only in bases for which at least one realizable tensor is non-degenerate. Cai/Lu’s first step is to give a necessary condition for such a basis. Let {T} be the {2^k\times 2} transformation matrix corresponding to a rank-2 basis of size {k}. Denote by {\text{wt}(x)} and {\oplus x} the Hamming weight of {x\in\{0,1\}^n} and its parity, respectively.

Definition 2: {T} is a degenerate basis if {t^{\alpha} = 0} for all {\alpha} such that {\oplus\alpha} is of a particular parity.

Theorem 3: If {T} is degenerate, then all signatures realizable over {T} are degenerate.

We will need the following collection of identities regarding standard signatures, the above-mentioned Grassmann-Plücker identities (also called matchgate identities), which are central to the theory of matchgates. In a future post, we will prove the necessity and sufficiency of these identities and explore the observation by Landsberg, Morton, and Norine that these identities are precisely the defining equations of the spinor varieties obtained by Chevalley. For now, we state the identities without proof:

If {u} is the standard signature of a matchgate {\Gamma} of arity {n}, then for any {\alpha,\beta\in\{0,1\}^n}, {\sum^{\ell}_{i=1}(-1)^iu_{\alpha+e_{\beta_i}}u_{\alpha+\beta+e_{\beta_i}} = 0,} where addition of bit vectors is bitwise mod 2, {e_j} denotes the bit vector with support at the {j}th bit, {\beta_i} denotes the position of the {i}th nonzero bit of {\beta}, and {\ell} is the Hamming weight of {\beta}.

Example 1: If {u} is the standard signature of a generator of arity 4, then one of the matchgate identities {u} must satisfy is {u_{0000}u_{1111} - u_{0011}u_{1100} + u_{0101}u_{1010}-u_{0110}u_{1001} = 0} (we obtain this from setting {\alpha = 1000} and {\beta = 1111}).
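The identity in this example can be checked directly on a concrete matchgate. Below is a brute-force sketch (convention assumed here: {u_{\alpha}} counts the perfect matchings of the graph with the external nodes flagged by {\alpha} deleted) using a 4-cycle with all four vertices external and unit edge weights:

```python
# Brute-force check of the arity-4 matchgate identity from Example 1 on a
# concrete matchgate: the 4-cycle with unit weights, all vertices external.
def perfect_matchings(nodes, edges):
    # count perfect matchings of the subgraph induced on `nodes`
    nodes = sorted(nodes)
    if not nodes:
        return 1
    v = nodes[0]
    total = 0
    for a, b in edges:
        if v in (a, b):
            other = b if a == v else a
            if other in nodes:
                total += perfect_matchings([x for x in nodes if x not in (a, b)], edges)
    return total

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]  # the 4-cycle

def u(alpha):
    # standard-signature entry: delete external nodes flagged by alpha
    keep = [i + 1 for i in range(4) if alpha[i] == 0]
    return perfect_matchings(keep, edges)

lhs = (u((0, 0, 0, 0)) * u((1, 1, 1, 1)) - u((0, 0, 1, 1)) * u((1, 1, 0, 0))
       + u((0, 1, 0, 1)) * u((1, 0, 1, 0)) - u((0, 1, 1, 0)) * u((1, 0, 0, 1)))
assert lhs == 0  # the identity from Example 1 holds: 2*1 - 1*1 + 0 - 1*1 = 0
```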

Proof: Assume that {t^{\alpha} = 0} for all {\alpha} with {\text{wt}(\alpha)} odd (the proof for {\text{wt}(\alpha)} even is essentially identical).

Now pick any tensor {G\in V^{n}_0} realizable on {T}; also call the generator that realizes it {G}. Denoting the standard signature of the generator as {\underline{G}}, we have by definition that {\underline{G} = T^{\otimes n}G}. If the standard signature is already the zero vector, then we’d conclude that {G = 0} and be done. If not, assume {\underline{G}^{\beta} \neq 0} for some {\beta}. In particular, by multiplying our tensor by a scalar, we can assume {\underline{G}^{\beta} = 1}, and by permuting indices, we can assume {\beta = 00\cdots 0}.

Denote the {i}th “block” of {G} as {G_i}. More precisely, this is the matchgate whose output nodes are the output nodes of {G} in the {i}th block. Note that it suffices to show that the standard signature decomposes, in an appropriate sense, as a product of the standard signatures of these blocks. By definition, {\underline{G_i}^{\alpha_i} = \underline{G}^{00\cdots 0\alpha_i 00\cdots 0}}. We claim the following.

Claim 1: For all {\alpha_1,...,\alpha_n\in\{0,1\}^k},

\displaystyle \underline{G}^{\alpha_1\alpha_2\cdots\alpha_n} = \underline{G_1}^{\alpha_1}\underline{G_2}^{\alpha_2}\cdots\underline{G_n}^{\alpha_n}.

Before we prove this, let’s make sure that it implies we’re done. Recall that we need to express {G} as a tensor product {G'_1\otimes\cdots\otimes G'_n} of signature tensors of arity-1 generators. Pick a {2\times 2^k} matrix {\tilde{T}} for which {\tilde{T}T} is the {2\times 2} identity matrix (possible because {T} has column rank 2), and define {G'_i = \tilde{T}\underline{G_i}}. Applying {\tilde{T}^{\otimes n}} to both sides of the claim then gives {G = G'_1\otimes\cdots\otimes G'_n}, as desired.

Proof: By our assumption that {t^{\alpha} = 0} whenever {\text{wt}(\alpha)} is odd, every entry of {\underline{G} = T^{\otimes n}G} with some block {\alpha_i} of odd weight vanishes, as does the corresponding factor {\underline{G_i}^{\alpha_i}}, so the claim is trivially true unless {\text{wt}(\alpha_i)} is even for all {i}; assume this from now on.

We proceed by induction on {\text{wt}(\alpha_1\alpha_2\cdots\alpha_n)}. Weight 0 is obvious, and for weight 2, the two nonzero bits have to be in the same block or {\text{wt}(\alpha_i)} is odd for two values of {i} and the claim is trivially true.

For weight {2i\ge 4}, the inductive step is mostly careful bookkeeping using the matchgate identities above. Let {t} be the position of the first nonzero bit of {\alpha_1} (we’ve implicitly assumed that {\alpha_1} is not all zero, but the same analysis works for any {\alpha_i} that is not all zero). In the matchgate identities, take {\alpha} to be {\alpha_1\alpha_2\cdots\alpha_n + e_t} and {\beta} to be {\alpha_1\alpha_2\cdots\alpha_n}. Pulling out the first term of the identity and noting that {\alpha + \beta = e_t} and {\underline{G}^{0\cdots 0} = 1}, we get

\displaystyle \underline{G}^{\alpha_1\alpha_2\cdots\alpha_n} = \sum^{2i}_{j=2}(-1)^j\underline{G}^{\alpha_1\alpha_2\cdots\alpha_n + e_t + e_{\beta_j}}\underline{G}^{e_t + e_{\beta_j}}.

But for {j} such that {\beta_j} lies outside of the first block, {\underline{G}^{e_t + e_{\beta_j}} = 0}, so we can drop those terms to get {\underline{G}^{\alpha_1\alpha_2\cdots\alpha_n} = \sum^{w}_{j=2}(-1)^j\underline{G}^{(\alpha_1+e_t+e_{\beta_j})\alpha_2\cdots\alpha_n}\underline{G}^{(e_t+e_{\beta_j})0\cdots0}}, where {w = \text{wt}(\alpha_1)}. The point now is that each remaining summand has smaller weight, so we can apply the inductive hypothesis to the first tensor in each summand (and the definition of {\underline{G_1}} to the second) to conclude that it suffices to show {\underline{G_1}^{\alpha_1} = \sum^{w}_{j=2}(-1)^j\underline{G_1}^{\alpha_1+e_t+e_{\beta_j}}\underline{G_1}^{e_t+e_{\beta_j}}}; one more application of the matchgate identities, this time with {\alpha = \alpha_1+e_t} and {\beta = \alpha_1}, gives exactly this. \Box

\Box

4. Embedded Basis Theorem

Now that we know what sorts of bases to ignore, we can establish the first crucial step towards the collapse theorem: in the {2^k\times 2} transformation matrix of any non-degenerate basis {T}, the rows span only two directions, one per parity! In our discussion below, we will refer to the row of {T} indexed by {\alpha\in\{0,1\}^k} as {(n^{\alpha},p^{\alpha})}.

Theorem 4: If {T} is non-degenerate, then {(n^{\alpha},p^{\alpha})} and {(n^{\beta},p^{\beta})} are linearly dependent whenever {\text{wt}(\alpha)} and {\text{wt}(\beta)} have the same parity.

Proof: Pick any {\alpha_0,\beta_0} of even weight and {\alpha_1,\beta_1} of odd weight. Denote by {T_0} the {2\times 2} matrix with rows {(n^{\alpha_0},p^{\alpha_0})} and {(n^{\beta_0},p^{\beta_0})}, and {T_1} the {2\times 2} matrix with rows {(n^{\alpha_1},p^{\alpha_1})} and {(n^{\beta_1},p^{\beta_1})}. The claim is that for all possible {T_0} and {T_1} defined this way, {\det(T_0) = \det(T_1) = 0}.

Because {T} is non-degenerate, some non-degenerate tensor {G} is realizable on {T} (this is exactly how we restricted our bases above); also call the generator realizing it {G}. Assume first that {G} has an odd number of vertices (the case of an even number of vertices is handled symmetrically), so that the entries of {\underline{G}} whose indices are of even weight are zero.

If the arity {n} of {G} is even, then {T^{\otimes n}_0G} and {T^{\otimes n}_1 G} are zero vectors: their entries are entries of the standard signature of {G} indexed by strings of even weight (the sum of an even number of integers of the same parity is even), and those entries vanish because {G} has an odd number of vertices. Because we’re assuming {G} is non-degenerate and thus certainly nonzero, this implies that {T_0} and {T_1} are not invertible, as desired.

If the arity of {G} is odd, then {T^{\otimes n}_0G} still consists of entries of the standard signature of {G} indexed by strings of even weight (the sum of an odd number of even integers is even), so the reasoning above tells us {T^{\otimes n}_0G = 0} and {T_0} is not invertible. We can no longer say that {T^{\otimes n}_1G = 0}, but instead we will “compress” {G} into a nonzero tensor {G'} in {V^{n-1}_0} for which {T^{\otimes(n-1)}_1G' = 0}.

Pick any {\alpha\in\{0,1\}^k} for which {(n^{\alpha},p^{\alpha})\neq (0,0)} and {\text{wt}(\alpha)} is even (such an {\alpha} exists precisely because {T} is non-degenerate). For each {t\in [n]}, replace {T^{\otimes n}_1} by {T_t := T^{\otimes(t-1)}_1\otimes(n^{\alpha},p^{\alpha})\otimes T^{\otimes(n-t)}_1}, so that {T_tG = 0} by the same parity reasoning as before. Rewriting this, define {G_t\in V^{n-1}_0} by {G_t^{i_1\cdots i_{n-1}} = G^{i_1\cdots i_{t-1}0i_t\cdots i_{n-1}}n^{\alpha} + G^{i_1\cdots i_{t-1}1i_t\cdots i_{n-1}}p^{\alpha}}, so that {T^{\otimes(n-1)}_1G_t = 0}.

The punch line is that one of these {G_t} is the {G'} we’re looking for, i.e. nonzero, or else we contradict the fact that {G} is non-degenerate. Indeed, suppose to the contrary that {G_t = 0} for all {t}. If {p^{\alpha}\neq 0}, the definition of {G_t} tells us that {G^{i_1\cdots i_{t-1}1i_t\cdots i_{n-1}} = (-n^{\alpha}/p^{\alpha})G^{i_1\cdots i_{t-1}0i_t\cdots i_{n-1}}}, i.e. we can basically “walk” from {G^{i_1\cdots i_{n}}} to {G^{0\cdots 0}} by switching off bits in the index at a cost of a factor of {-n^{\alpha}/p^{\alpha}} at each step. In other words, {G^{i_1\cdots i_n} = (-n^{\alpha}/p^{\alpha})^{\text{wt}(i_1\cdots i_n)}G^{0\cdots 0}}, so {G = G^{0\cdots 0}\cdot(1,-n^{\alpha}/p^{\alpha})^{\otimes n}}, a contradiction. Likewise, if {n^{\alpha}\neq 0}, we would get {G = G^{1\cdots 1}\cdot(-p^{\alpha}/n^{\alpha},1)^{\otimes n}}. \Box

5. Collapse Theorem for Generators

Now the end is in sight; we’ve shown there exist 2-vectors {\hat{n} = (n_0,n_1)^T} and {\hat{p} = (p_0,p_1)^T} such that every row {(n^{\alpha},p^{\alpha})} of {T} is a scalar multiple of either {(n_0,p_0)} or {(n_1,p_1)}. Define the transformation matrix for this basis by {\hat{T} = (\hat{t}^j_i)}, where {\hat{t}^0 = (n_0,p_0)} and {\hat{t}^1 = (n_1,p_1)}; Cai/Lu call this an embedded basis of size 1.

In other words, there is a ton of redundant information in signatures realizable over high-dimensional bases. Is it possible for such a signature to be realizable over the embedded basis? Specifically, we can basically fold {\underline{G}} into a particular {2^n}-vector {\hat{G}}, and the question is whether we can collapse {G} of arity {nk} defined above into a generator of arity {n} whose standard signature is this {\hat{G}}.

Before we jump into proving this in the affirmative, let’s clarify what this {\hat{G}} is. For any {(n^{\alpha},p^{\alpha})}, let {\lambda^{\alpha}} denote the scalar for which {(n^{\alpha},p^{\alpha}) = \lambda^{\alpha}\cdot(n_{\oplus\alpha},p_{\oplus\alpha})}. Then

\displaystyle \underline{G}^{\alpha_1\cdots\alpha_n} = \lambda^{\alpha_1}\cdots\lambda^{\alpha_n}\sum_{i_1,...,i_n}G^{i_1\cdots i_n}\hat{t}^{\oplus \alpha_1}_{i_1}\cdots\hat{t}^{\oplus\alpha_n}_{i_n}.

So if we define {\hat{G}} by {\hat{G}^{j_1\cdots j_n} = \sum_{i_1,...,i_n}G^{i_1\cdots i_n}\hat{t}^{j_1}_{i_1}\cdots\hat{t}^{j_n}_{i_n}}, this would imply that

\displaystyle \underline{G}^{\alpha_1\cdots\alpha_n} = \left(\prod^n_{i=1}\lambda^{\alpha_i}\right)\hat{G}^{\oplus\alpha_1\cdots\oplus\alpha_n}.\ \ \ \ \ (1)
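Equation (1) is a purely definitional identity, and it is easy to verify numerically. The sketch below (all concrete values hypothetical) builds a size-2 basis from scalar multiples of two embedded rows and checks the factorization for every index:

```python
# Numerical check of equation (1) for a toy size-2 basis: each row of T
# is lambda^alpha times the embedded row t-hat^{parity(alpha)}, and the
# transformed tensor factors through the embedded basis as claimed.
from itertools import product

that = {0: (1, 3), 1: (6, 5)}                        # embedded rows t-hat^0, t-hat^1
lam = {(0, 0): 1, (0, 1): 3, (1, 0): 4, (1, 1): 2}   # hypothetical scalars lambda^alpha
T = {a: tuple(lam[a] * v for v in that[sum(a) % 2]) for a in lam}

n = 2
G = {i: 1 + 2 * i[0] + 5 * i[1] for i in product((0, 1), repeat=n)}  # arbitrary tensor

def hatG(j):
    # hat(G)^{j1 j2} = sum_i G^i t-hat^{j1}_{i1} t-hat^{j2}_{i2}
    return sum(G[i] * that[j[0]][i[0]] * that[j[1]][i[1]]
               for i in product((0, 1), repeat=n))

checked = 0
for alphas in product(sorted(lam), repeat=n):
    lhs = sum(G[i] * T[alphas[0]][i[0]] * T[alphas[1]][i[1]]
              for i in product((0, 1), repeat=n))
    rhs = lam[alphas[0]] * lam[alphas[1]] * hatG(tuple(sum(a) % 2 for a in alphas))
    assert lhs == rhs  # equation (1)
    checked += 1
```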

 

Theorem 5: {\hat{G}} is the standard signature of some generator.

First, some simplifying assumptions that will also be useful for the collapse theorem for recognizers. Pick {\beta_0} and {\beta_1} of minimal Hamming distance such that {\oplus\beta_0 = 0}, {\oplus\beta_1 = 1}, and {\lambda^{\beta_0}} and {\lambda^{\beta_1}} are nonzero. By simple tweaks to {G}, we can assume that {\beta_0 = 00\cdots 0} and {\beta_1 = 11\cdots 10\cdots 0}. Let {a = \text{wt}(\beta_1)}. Note by minimality of the distance between {\beta_0} and {\beta_1} that {\lambda^{\beta'} = 0} for any {\beta'} strictly between {\beta_0} and {\beta_1}. As another piece of notation, denote the {j}th external node in block {i} of {G} by {(i,j)}.

Proof: Modify {G} into a new generator {\Gamma} as follows: for each block {i}, attach to node {(i,1)} a new node {i''} via an edge of weight {1/\lambda^{\beta_1}}, and attach to {i''} a further node {i'} via an edge of weight {1/\lambda^{\beta_0}}. Add edges of weight 1 between {(i,\ell)} and {(i,\ell+1)} for all {\ell = 2,4,...,a-1}. Take the output nodes to be all of the {i'}.

The point of connecting every other pair of adjacent nodes is as follows. Fix a block {i}, and consider whether the output node {i'} is omitted from the matching. If {i'} is omitted, then {i''} must be matched by the edge connecting it to {(i,1)}, so {(i,1)} is unavailable to the original block. The only matchings of the original block which omit {(i,1)} necessarily omit {(i,2),...,(i,a)} as well, again by minimality of the distance between {\beta_0} and {\beta_1}; in other words, the only such matchings of the original block correspond to {\beta_1}, with the nodes {(i,2),...,(i,a)} covered by the weight-1 edges connecting adjacent pairs.

On the other hand, if {i'} is present, then it must be matched by the edge connecting {i'} and {i''}, so {(i,1)} must be covered within the original block. The only matchings of the original block covering {(i,1)} correspond to {\beta_0}, for the same reasons as above.

It thus follows that {\underline{\Gamma}^{j_1\cdots j_n} = \frac{1}{\lambda^{\beta_{j_1}}}\cdots\frac{1}{\lambda^{\beta_{j_n}}}\underline{G}^{\beta_{j_1}\cdots\beta_{j_n}}}, which by (1) is exactly {\hat{G}^{j_1\cdots j_n}}; thus {\Gamma} has standard signature {\hat{G}}, as desired. \Box

6. Collapse Theorem for Recognizers

We conclude this post by showing a similar collapse result for recognizers. Let’s first find an analogue of {\hat{G}} from the previous section. For a signature {R} realized over basis {T} by a recognizer that we also call {R}, we know by definition that {R_{\ell_1\cdots\ell_n} = \sum_{\alpha_1,...,\alpha_n\in\{0,1\}^k}\underline{R}_{\alpha_1\cdots\alpha_n}t^{\alpha_1}_{\ell_1}\cdots t^{\alpha_n}_{\ell_n}}. But recall that {t^{\alpha}} is determined up to the scalar {\lambda^{\alpha}} by {\oplus\alpha}, so grouping the {\alpha_r} by parity gives {R_{\ell_1\cdots\ell_n} = \sum_{i_1,...,i_n\in\{0,1\}}\hat{t}^{i_1}_{\ell_1}\cdots\hat{t}^{i_n}_{\ell_n}\sum_{\oplus\alpha_r = i_r}\underline{R}_{\alpha_1\cdots\alpha_n}\lambda^{\alpha_1}\cdots\lambda^{\alpha_n}}.

The analogue of {\hat{G}} is thus {\hat{R}_{i_1\cdots i_n} := \sum_{\oplus\alpha_r = i_r}\underline{R}_{\alpha_1\cdots\alpha_n}\lambda^{\alpha_1}\cdots\lambda^{\alpha_n}}. Our claim is thus:

Theorem 6: {\hat{R}} is the standard signature of some recognizer.

Proof: The following result essentially finishes the proof.

Theorem 7: The collection of all {\lambda^{\alpha}} forms the condensed signature of an even matchgate {\Gamma_{\lambda}} of arity {k+1}.

Before we see how to prove this (the argument is very similar to that of Theorem 5), let’s see how it can be used to prove Theorem 6. The idea is to attach a copy of {\Gamma_{\lambda}} to each block of {R}, essentially scaling its contents and collapsing each block to a single input node. Specifically, for each block, attach the {k} input nodes to the first {k} input nodes of {\Gamma_{\lambda}} by edges of weight 1 (you should check that this can be done in a planar fashion), and designate the remaining node of {\Gamma_{\lambda}} as an input node. Call this new recognizer {\Gamma'}. Then {\underline{\Gamma'}_{i_1\cdots i_n} = \sum_{\alpha_1,...,\alpha_n}\underline{R}_{\alpha_1\cdots\alpha_n}\lambda^{\alpha_1}\cdots\lambda^{\alpha_n}}. But all the summands for which some {\alpha_r} satisfies {\oplus\alpha_r\neq i_r} are zero: if {i_r = 0}, so that the extra node of the {r}th copy of {\Gamma_{\lambda}} must be matched within it, then since {\Gamma_{\lambda}} is even, an even number of its remaining vertices must be matched by the weight-1 connecting edges, meaning {\alpha_r} must be even. Likewise, if {i_r = 1}, we conclude that {\alpha_r} must be odd; hence {\underline{\Gamma'} = \hat{R}}, and we’re done. \Box

Proof: We will build the desired matchgate {\Gamma_{\lambda}} out of {G}. The following claim provides an anchor for our construction: if a standard signature has more than one nonzero entry, there exist entries whose indices are of Hamming distance 2 away from each other (perhaps more on this in another post, but for now, we’ll take this for granted).

Applying this claim to {\hat{G}}, assume wlog that we can pick nonzero entries {G_0 = \hat{G}^{00j_3\cdots j_n}} and {G_1 = \hat{G}^{11j_3\cdots j_n}}. Now for our construction: do nothing to block 1, but call its nodes {1',...,k'}. Modify block 2 as we did to construct {\Gamma} above, but instead of weights {1/\lambda^{\beta_1}} and {1/\lambda^{\beta_0}}, assign weights {1/g_1} and {1/g_0}, to be decided later. Call the node that we referred to as {2'} in the previous proof {(k+1)'} instead. For blocks {i\ge 3}: if {j_i = 0}, then (by the usual minimality argument) {\beta_0} is already the only possible assignment of bits to the {i}th block’s nodes admitting at least one perfect matching in that block, so don’t change those blocks. If {j_i = 1}, then we want to ensure that {\beta_1} is the only possible assignment, so attach a new vertex to node {(i,1)} via an edge of weight 1 and pair up vertices {(i,2)} through {(i,a)} as before.

It’s straightforward to check that {\Gamma_{\lambda}} is even. By construction, we have for {\alpha} even and odd, respectively, that

\displaystyle \Gamma_{\lambda}^{\alpha 0} = \frac{1}{g_0}\underline{G}^{\alpha\beta_0\beta_{j_3}\cdots\beta_{j_n}} = \frac{1}{g_0}\lambda^{\alpha}\lambda^{\beta_0}\lambda^{\beta_{j_3}}\cdots\lambda^{\beta_{j_n}}G_0

\displaystyle \Gamma_{\lambda}^{\alpha 1} = \frac{1}{g_1}\underline{G}^{\alpha\beta_1\beta_{j_3}\cdots\beta_{j_n}} = \frac{1}{g_1}\lambda^{\alpha}\lambda^{\beta_1}\lambda^{\beta_{j_3}}\cdots\lambda^{\beta_{j_n}}G_1.

Pick {g_0 = G_0\lambda^{\beta_0}\lambda^{\beta_{j_3}}\cdots\lambda^{\beta_{j_n}}} and {g_1 = G_1\lambda^{\beta_1}\lambda^{\beta_{j_3}}\cdots\lambda^{\beta_{j_n}}}, so that {\Gamma_{\lambda}^{\alpha,\oplus\alpha} = \lambda^{\alpha}}, and we’re done. \Box

In the next post, we will cover matchgate identities as they relate to spinors.
