# PRGs and Hardness Amplification Part 2: Local Decoding

(This is a continuation of a previous post.)

The goal now is to find such an encoding/decoding. We reduce this problem further to the problem of finding an algorithm that adjusts a word in $\Sigma^{L'}$ to a codeword (rather than a message) with nontrivial accuracy, and the reduction will be valid as long as we can efficiently translate between the messages and codewords. Formally,

Definition 7: A local $\delta$-correcting algorithm $D$ for an encoding $E$ is a probabilistic algorithm that, given oracle access to a received word $g$ and an input position $x$, computes with probability 2/3 the value $f'(x)$ for any codeword $f'$ $\delta$-close to $g$. For coin tosses $r$, the output of the algorithm will be denoted by $D^g(x;r)$, and probability is taken over all $r$.

Definition 8: An encoding $E: \{0,1\}^L\to\Sigma^{L'}$ is systematic if there exists a “translation” map $T: \{0,...,L-1\}\to\{0,...,L'-1\}$ running in time polynomial in the bit length $\log L$ of its input and a map $t: \{0,1\}\to\Sigma$ such that for all messages $f$ and corresponding codewords $f'$, $f(x) = t(f'(T(x)))$.

We can get a decoder by sending input position $x\in\{0,...,L-1\}$ through such a translation map and then through a local corrector:

Result 4: If $E$ is a systematic encoding and the code has a local $\delta$-correcting algorithm in time $t$, then $E$ has a local $\delta$-decoding algorithm running in time $t + \text{poly}(\log L)$.

We will design a local corrector and systematic encoding for the Reed-Muller code. We then concatenate with the Hadamard code to obtain a binary alphabet, and this will turn out to give the desired hardness amplification.

Reed-Muller Corrector

Result 5: For $q \ge 36$ and $d\le q/9$, the $q$-ary, degree-$d$, $m$-dimensional Reed-Muller code has a local $1/12$-correcting algorithm running in time polynomial in $m$ and $q$.

Proof: The basic picture for our corrector is that we reduce the dimension of the input space $\mathbb{F}^m$ to $\mathbb{F}$ by evaluating the oracle close to our target codeword $p$ only on a line (a map $\ell_{x,y}: \mathbb{F}\to\mathbb{F}^m$ given by $\ell_{x,y}(t) = x+ty$ for $x,y\in \mathbb{F}^m$). The resulting univariate polynomial is a Reed-Solomon codeword that we then globally decode at a distance close enough that it is likely the codeword $p$ itself restricted to the line in question. Evaluating at zero then gives $p(x)$ as desired.

Specifically, suppose we want to compute a Reed-Muller codeword $p$ with nontrivial accuracy at an input $x$ using our oracle $g$, where $d_H(g,p)< \delta$. Randomly pick some $y\in\mathbb{F}^m$ and define $G: \mathbb{F}\to\mathbb{F}$ by $G(t) = g(\ell_{x,y}(t))$. This is a received word for the $q$-ary, degree-$d$ Reed-Solomon code (the restriction $p(\ell_{x,y}(\cdot))$ is a codeword), so run the global decoder given by Result 6 of our post on list decoding for distance $1/3$ (this is where we use the fact that $d\le q/9$). Evaluate the resulting message at zero.

We verify that this procedure outputs $p(x)$ with probability at least 2/3. The expected distance $d_H(G,p(\ell(\cdot)))$ over all lines $\ell$ through $x$ is at most $1/q + \delta \le 1/9$ rather than $\delta = 1/12$ because the point $\ell(0) = x$ is fixed rather than uniform over $\mathbb{F}^m$ (this is where we use $q\ge 36$). Markov’s inequality then tells us that the probability that this distance is at least $1/3$ is at most $1/3$, and we’re done by uniqueness of the degree-$d$ polynomial within distance $1/3$ of $G$.
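To make the corrector concrete, here is a minimal Python sketch over a prime field. All names are ours; the Reed-Solomon step uses a simple randomized interpolate-and-check decoder as a stand-in for the global decoder cited above, and the toy parameters are illustrative rather than satisfying $q\ge 36$, $d\le q/9$.

```python
import random

def poly_mul_linear(poly, c, p):
    """Multiply a polynomial (low-to-high coefficients) by (x - c) mod p."""
    out = [0] * (len(poly) + 1)
    for i, a in enumerate(poly):
        out[i + 1] = (out[i + 1] + a) % p
        out[i] = (out[i] - c * a) % p
    return out

def lagrange_interpolate(xs, ys, p):
    """Coefficients (low to high) of the unique degree < len(xs) polynomial
    through the given points, over the prime field F_p."""
    n = len(xs)
    coeffs = [0] * n
    for xi, yi in zip(xs, ys):
        basis, denom = [1], 1
        for xj in xs:
            if xj != xi:
                basis = poly_mul_linear(basis, xj, p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p  # Fermat inverse of denom
        for k in range(n):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % p
    return coeffs

def poly_eval(coeffs, t, p):
    acc = 0
    for a in reversed(coeffs):
        acc = (acc * t + a) % p
    return acc

def rs_unique_decode(received, d, p):
    """Find the degree-<=d polynomial agreeing with `received` (indexed by
    t = 0..p-1) on well over half the positions, by interpolating through
    random (d+1)-subsets; a simple randomized stand-in for a proper unique
    decoder such as Berlekamp-Welch."""
    thresh = (p + d) // 2 + 1  # two degree-d polys can't both clear this
    for _ in range(200):
        pts = random.sample(range(p), d + 1)
        poly = lagrange_interpolate(pts, [received[t] for t in pts], p)
        if sum(poly_eval(poly, t, p) == received[t] for t in range(p)) >= thresh:
            return poly
    return None

def local_correct(g, x, m, d, p):
    """One run of the Reed-Muller local corrector: restrict g to a random
    line through x, Reed-Solomon-decode the restriction, evaluate at 0."""
    y = [random.randrange(p) for _ in range(m)]
    line = lambda t: tuple((xi + t * yi) % p for xi, yi in zip(x, y))
    poly = rs_unique_decode([g(line(t)) for t in range(p)], d, p)
    return None if poly is None else poly[0]  # ell(0) = x
```

For example, with the codeword $p(x_1,x_2)=x_1^2+3x_2$ over $\mathbb{F}_{13}$ and an oracle corrupted at a few points, a majority vote over repeated runs recovers $p(x)$.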

To convert our corrector to a decoder, we need a systematic encoding.

Systematic Encoding

We begin by encoding $2^{\ell}$-bit messages $f: \{0,1\}^{\ell}\to\{0,1\}$ as extensions to the domain $\mathbb{F}^{\ell}$ of degree at most 1 in each variable. Indeed, for any such $f$, define $E(f) = f'$ by

$f'(x_1,...,x_{\ell}) = \sum_{\alpha\in\{0,1\}^{\ell}}f(\alpha)\left(\prod_{i: \alpha_i = 1}x_i\right)\left(\prod_{i: \alpha_i = 0}(1-x_i)\right)$.
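This multilinear extension translates directly into Python over a prime field (a sketch; the names are ours):

```python
import itertools

def multilinear_extension(f, ell, p):
    """Extend f: {0,1}^ell -> {0,1} to f': F_p^ell -> F_p with degree
    at most 1 in each variable, following the sum over alpha above."""
    def fprime(x):
        total = 0
        for alpha in itertools.product((0, 1), repeat=ell):
            term = f(alpha)
            for a, xi in zip(alpha, x):
                # x_i where alpha_i = 1, (1 - x_i) where alpha_i = 0
                term = term * (xi if a == 1 else (1 - xi)) % p
            total = (total + term) % p
        return total
    return fprime
```

For instance, parity on two bits extends to $x_1 + x_2 - 2x_1x_2$, which agrees with parity on $\{0,1\}^2$ and has degree 1 in each variable.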

This construction generalizes from $\{0,1\}$ to arbitrary subsets of $\mathbb{F}$, giving:

Result 6: For all $m$, finite fields $\mathbb{F}$, subsets $\mathbb{H}\subseteq\mathbb{F}$, and maps $F: \mathbb{H}^m\to\mathbb{F}$, there exists an extension $f': \mathbb{F}^m\to\mathbb{F}$ of degree at most $|\mathbb{H}|-1$ in each variable.

Our encoding will send any $f: \{0,1\}^{\ell}\to\mathbb{F}$ first to such an $F: \mathbb{H}^m\to\mathbb{F}$ by sending each input in $\{0,1\}^{\ell}$ efficiently to some element of $\mathbb{H}^m$ and then to the $f': \mathbb{F}^m\to\mathbb{F}$ given by Result 6. Let’s check that our criteria for encoding are met:

1)   This encoding is certainly systematic.

2)   To specify $f'$ completely requires $|\mathbb{H}|^m$ additions for each of the $q^m$ inputs. Taking $\mathbb{H}$ to be of size $\lceil\sqrt{q}\rceil$ and $m$ to be $\lceil\ell/\log_2|\mathbb{H}|\rceil$ gives $2^{O(\ell)}$ additions for each of the $2^{O(\ell)}$ inputs, so the encoding runs in time $2^{O(\ell)} = \text{poly}(L)$, as required.

3)   Input length is $m\log q = O(\ell)$, so codeword length is $q^m = 2^{O(\ell)}$.

Because the total degree of $f'$ is at most $m\sqrt{q}\le\ell\sqrt{q}$, we need $q\ge 81\ell^2$ so that $d\le q/9$.
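Result 6’s extension can be sketched the same way as the multilinear case, replacing the degree-1 indicators with Lagrange basis polynomials over $\mathbb{H}$ (a sketch over a prime field; the names are ours):

```python
import itertools

def low_degree_extension(F, H, m, p):
    """Extend F: H^m -> F_p to f': F_p^m -> F_p with degree at most
    |H| - 1 in each variable, via the Lagrange basis over the subset H."""
    def basis(h, xi):
        # The unique degree-(|H|-1) polynomial that is 1 at h, 0 on H \ {h}.
        num, den = 1, 1
        for hp in H:
            if hp != h:
                num = num * (xi - hp) % p
                den = den * (h - hp) % p
        return num * pow(den, p - 2, p) % p
    def fprime(x):
        total = 0
        for a in itertools.product(H, repeat=m):
            term = F(a)
            for ai, xi in zip(a, x):
                term = term * basis(ai, xi) % p
            total = (total + term) % p
        return total
    return fprime
```

As a sanity check, extending the squaring map on $\mathbb{H}=\{0,1,2\}\subseteq\mathbb{F}_{13}$ recovers $x^2$ on all of $\mathbb{F}_{13}$.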

Converting to Boolean

The last step will be to make the alphabet of the code binary. We have the following “concatenation” result:

Lemma: If $E_1: \{1,...,N\}\to\Sigma^{n_1}_1$ has minimum distance $\delta_1$ and $E_2: \Sigma_1\to\Sigma^{n_2}_2$ has minimum distance $\delta_2$, then their concatenation $E: \{1,...,N\}\to\Sigma^{n_1n_2}_2$ given by $E(x) = E_2(E_1(x)_1)E_2(E_1(x)_2)\cdots E_2(E_1(x)_{n_1})$ has minimum distance $\delta_1\delta_2$.

Proof: Take any two distinct messages $f,g$. Then $E_1(f)$ and $E_1(g)$ differ in at least a $\delta_1$ fraction of positions, and at each such position $i$, $E_2(E_1(f)_i)$ and $E_2(E_1(g)_i)$ differ in at least a $\delta_2$ fraction of letters, so distance is multiplicative under concatenation.
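The multiplicativity of distance can be checked on a toy concatenation: a length-2 repetition code over 2-bit symbols (relative distance 1) composed with the Hadamard code on 2-bit messages (relative distance 1/2). The codes and names here are our own illustration:

```python
def hadamard(s, m):
    """Hadamard encoding of an m-bit message s (as an int): the truth
    table of y -> <s, y> mod 2 over all y in {0,1}^m."""
    return [bin(s & y).count("1") % 2 for y in range(2 ** m)]

def concat(x):
    """Outer code: repetition (x, x) over 2-bit symbols; inner: Hadamard."""
    return hadamard(x, 2) + hadamard(x, 2)

def rel_dist(u, v):
    return sum(a != b for a, b in zip(u, v)) / len(u)

# Minimum relative distance over all pairs of distinct messages:
# should be at least delta_1 * delta_2 = 1 * (1/2).
min_dist = min(rel_dist(concat(a), concat(b))
               for a in range(4) for b in range(4) if a != b)
```

Here distinct Hadamard codewords on 2-bit messages differ in exactly half their positions, so the concatenated code’s minimum relative distance is exactly $1\cdot\frac{1}{2}$.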

To locally decode a concatenation of $E_1$ and $E_2$, then, we simply apply the $\delta_2$-decoding algorithm for the inner code to retrieve the $n_1$ letters and then run the local $\delta_1$-decoding algorithm on the $E_1$-codeword these letters form.

We’d like to concatenate with a code with a binary alphabet, and the most convenient one with a local decoder is the Hadamard code:

Lemma: For all $\epsilon>0$ and $m\in\mathbb{N}$, the Hadamard code of message length $m$ has a local $(1/4-\epsilon)$-decoding algorithm running in time $\text{poly}(m,1/\epsilon)$.

Proof: The identity map is a systematic encoding, so it suffices to find a $(1/4-\epsilon)$-correcting algorithm running in the desired time. We have some codeword $c$ that we’d like to compute at any given input $x$ with high probability, given oracle access to some word $g: \{0,1\}^m\to\{0,1\}$ that is $(1/4-\epsilon)$-close to it. By linearity of Hadamard codewords, $c(x) = c(x\oplus r)\oplus c(r)$, where $\oplus$ denotes exclusive-or. If we pick a random $r$, the probability that $g$ differs from $c$ at either $x\oplus r$ or $r$ is at most $2(1/4-\epsilon) = 1/2-2\epsilon$, so if we run this sufficiently many times (polynomial in $1/\epsilon$) on independent $r_i$ and take a majority vote on the outputs $g(x\oplus r_i)\oplus g(r_i)$, then we can get $c(x)$ with probability at least 2/3.

We concatenate our Reed-Muller code $\{0,1\}^L\to\mathbb{F}^{q^m}$ with a Hadamard code $\{0,1\}^{\log_2q}\to\{0,1\}^q$ to get a code $\{0,1\}^L\to\{0,1\}^{L'}$ with codeword length $L'$ polynomial in $L$ and for which there exists a local $1/48$-decoder running in time $\text{poly}(\ell)$, giving us our first result in hardness amplification.

Result 7: If there exists an $f: \{0,1\}^{\ell}\to\{0,1\}$ in E that is worst-case hard for nonuniform time $t(\ell)$, there is an $f': \{0,1\}^{O(\ell)}\to\{0,1\}$ in E that is $(t(\ell)/\text{poly}(\ell),1/48)$-average-case-hard.

We now want to amplify worst-case hardness into arbitrarily large average-case hardness $1/2-\epsilon$. To keep input length linear in the original input length, we will use so-called local list-decoding. Morally, given oracle access to some $g$, a list-decoding algorithm probabilistically outputs a list of guesses for codewords close to $g$, one of which, with high probability, decodes to the target message close to $g$. It will turn out to be helpful to define a list-variant of local correcting as well, and again, its definition will essentially be the same as that of decoding with all instances of the word “message” replaced by “codeword.” Formally:

Definition 9: A local $\delta$-list-decoding algorithm $(D_1,D_2)$ for an encoding $E$ is a pair of probabilistic algorithms that operate in two steps given oracle access to a received word $g$ and an input position $x$: $D_1$ probabilistically outputs a collection of advice strings $a_1,...,a_k$ such that for any message $f$ whose encoding $E(f)$ is $\delta$-close to $g$, $D^g_2(x,a_i)=f(x)$ with probability at least 2/3 for at least one of the advice strings $a_i$, where probability is taken over the distribution of collections of advice strings output by $D_1$.

Definition 10: A local $\delta$-list-correcting algorithm $(D_1,D_2)$ for an encoding $E$ is defined identically, except that the guarantee is for codewords: for any codeword $f'$ $\delta$-close to $g$, $D^g_2(x,a_i)=f'(x)$ with probability at least 2/3 for at least one of the advice strings $a_i$.

Analogously to Result 4 we get local list-decoders from local list-correctors given a systematic encoding, with the encoding adding an overhead polylogarithmic in the message length. Analogously to Result 3 we get hardness amplification out of local list-decoding, with time slowdown by a factor of $t_D$, where $t_D$ is the running time of $D_2$. The proof of the former remains the same, whereas the proof of the latter differs slightly from its counterpart because $D_2$ makes use of nonuniformity not in the form of fixed coin tosses but in the form of an advice string $a_i$.

The good news is that our discussion on systematic encoding applies just as soundly in the case of list-decoding, and moreover we already know the Hadamard code can be $(1/2-\epsilon)$-list-decoded globally (see Result 3 here). What remains is to construct a local $(1-\epsilon)$-list-corrector, and it turns out we can do this with the Reed-Muller code as well.

Result 8: There is some fixed $c$ such that for all $\epsilon \ge c\sqrt{d/q}$, the $q$-ary, $m$-dimensional, degree-$d$ RM code has a local $(1-\epsilon)$-list-corrector running in time polynomial in $q$ and $m$.

Proof: Call our unknown codeword $p$. The corrector is very similar to the one used in result 5: reduce the dimension of the code to 1 by restricting our oracle to a line $\ell$ through $x$ and use Reed-Solomon (global) list-decoding on this line. In particular, $D_1$ will pass to $D_2$ advice strings including a point on $p$, and $D_2$ will use Reed-Solomon list decoding to find all univariate polynomials close to $g(\ell (\cdot))$, and the intuition is that if only one of these polynomials contains the point we know to be on $p$, we’ve probably properly corrected received word $g$.

Specifically, the algorithm is as follows: in order to ensure $D_1$ passes in a point on $p$, just pick a random $y\in\mathbb{F}^m$ and, because it doesn’t cost much, pass all points of the form $(y,z)\in\mathbb{F}^m\times\mathbb{F}$, so that one of the advice strings, namely $(y,p(y))$, will be the right one. Given advice $(y,z)$, $D_2$ then $(1-\epsilon/2)$-list-decodes $g$ restricted to the line $\ell$ with $\ell(0)=x$ and $\ell(1)=y$ to get the polynomials $q_1,...,q_s$ that are $(1-\epsilon/2)$-close to $g(\ell(\cdot))$. If exactly one of these passes through $(1,z)$, output its value at 0.

As long as we can compute something $1/12$-close to $p$, we can use our unique corrector of result 5 to compute the actual $p$ with probability 2/3.

We will show that for at least half of all $y\in\mathbb{F}^m$, for more than $11/12$ of the lines $\ell$ passing through $y$ we have that $p(\ell(\cdot))$ is the unique univariate polynomial which is $(1-\epsilon/2)$-close to $g(\ell(\cdot))$ and which, evaluated at 1, is equal to $p(y)$.

It suffices to prove that for a randomly chosen $y$ and line through $y$, we can tune the parameter $c$ in $\epsilon$ to make the probability that this condition holds arbitrarily large.

We first prove that the probability that $g|_{\ell}$ and $p|_{\ell}$ agree on at most an $\epsilon/2$ fraction of inputs is arbitrarily small. The point is that if we pick our line randomly, the samples of points from this line are certainly not independent, but they are pairwise independent. Recalling Result 3 from here and noting that the expected agreement between $g|_{\ell}$ and $p|_{\ell}$ is at least $\epsilon$, we obtain that $P(d_H(g|_{\ell},p|_{\ell})\ge 1-\epsilon/2)\le \frac{1}{q(\epsilon/2)^2} = \frac{4}{c^2d}$, which we can make arbitrarily small by making $c$ arbitrarily large.

Lastly, we prove that the probability that there exists a univariate polynomial $q_i$ other than $p|_{\ell}$ that is $(1-\epsilon/2)$-close to $g|_{\ell}$ such that $q_i(1) = p(y)$ is also arbitrarily small. The key is that there are only $t=O(\sqrt{q/d})$ univariate polynomials $q_1,...,q_t$ that are $(1-\epsilon/2)$-close to $g|_{\ell}$ to begin with, and each $q_i\ne p|_{\ell}$ agrees with $p|_{\ell}$ on at most $d$ points. So if we first pick the line $\ell$ and then the point $y=\ell(1)$, the union bound tells us that the probability over all $y$ on this line that $q_i(1) = p(y)$ for any of these $q_i$ is at most $\frac{dt}{q} = O(\sqrt{d/q})$, and because $\epsilon = c\sqrt{d/q}<1$, we have $\sqrt{d/q} < 1/c$, so again, we can take $c$ arbitrarily large to make this probability small as well.

As we stated earlier, we can directly use the results on systematic encoding and concatenation above to get an explicit code of codeword length $2^{O(\ell)}$ with a local $(1/2-\epsilon)$-list-decoder. Specifically, as before we set dimension $m$ to be $\lceil\ell/\log_2\lceil\sqrt{q}\rceil\rceil$ so that degree $d$ is bounded above by $\ell\sqrt{q}$, and we take $q = \text{poly}(\ell,1/\epsilon)$. The running time depends polynomially on alphabet size $q$, so our decoding algorithm runs in time $\text{poly}(\ell,1/\epsilon)$.

Conclusion

With this Reed-Muller code, we can send a function $\{0,1\}^{\ell}\to\{0,1\}$ that is worst-case hard for time $s(\ell)$ to a function that is $(s(\ell)/t_D,1/2-1/s(\ell))$-average-case hard, where $t_D = \text{poly}(\ell,s(\ell))$. Such a function is certainly also $(s'(\ell),1/2-1/s'(\ell))$-average-case hard for $s'(\ell) = s(\ell)/t_D$, and Result 3 thus tells us that:

Corollary: Fix a time function $s: \mathbb{N}\to\mathbb{N}$ that is computable in time exponential in its input. If there is some $f$ in E that on length-$\ell$ inputs is worst-case hard for time $s(\ell)$, there is a mildly explicit $(m,1/m)$-generator $G: \{0,1\}^{d(m)}\to\{0,1\}^m$ with seed length $d(m) = O(s^{-1}(\text{poly}(m))^2/\log m)$.

In particular, if $s(\ell) = 2^{\Omega(\ell)}$ in the above result, then we would get that BPP = P! Even if the latter doesn’t turn out to be true and no function in E has such high circuit complexity, this would suggest that problems in E like satisfiability don’t have as high circuit complexity as previously thought.