(This is a continuation of a previous post.)

The goal now is to find such an encoding/decoding. We reduce this problem further to the problem of finding an algorithm that corrects a received word to a nearby codeword (rather than recovering a message) with nontrivial accuracy, and the reduction will be valid as long as we can efficiently translate between messages and codewords. Formally,

**Definition 7**: A **local $\delta$-correcting algorithm** for an encoding $\mathrm{Enc} : \{0,1\}^k \to \Sigma^n$ is a probabilistic algorithm $A$ that, given oracle access to a received word $g$ and an input position $x$, computes with probability 2/3 the value $c(x)$ for any codeword $c$ that is $\delta$-close to $g$. For coin tosses $r$, the output of the algorithm will be denoted by $A^g(x; r)$, and probability is taken over all $r$.

**Definition 8**: An encoding $\mathrm{Enc}$ is **systematic** if there exists a “translation” map $T$ running in time polynomial in the bit-length of its input and a map $\sigma : \Sigma \to \{0,1\}$ such that for all messages $m$ and corresponding codewords $c = \mathrm{Enc}(m)$, we have $\sigma(c(T(x))) = m(x)$ for every position $x$.

We can get a decoder by sending an input position $x$ through such a translation map and then through a local corrector:

**Result 4**: If $\mathrm{Enc}$ is a systematic encoding and the code has a local $\delta$-correcting algorithm running in time $t$, then $\mathrm{Enc}$ has a local $\delta$-decoding algorithm running in time $t + \mathrm{poly}(\log k)$.

We will design a local corrector and systematic encoding for the Reed-Muller code. We then concatenate with the Hadamard code to obtain a binary alphabet, and this will turn out to give the desired hardness amplification.

**Reed-Muller Corrector**

**Result 5**: For $\delta \le 1/12$ and $d < q/3$ (with, say, $q \ge 36$), the $q$-ary, degree-$d$, $m$-dimensional Reed-Muller code has a local $\delta$-correcting algorithm running in time polynomial in $q$ and $m$.

**Proof**: The basic picture for our corrector is that we reduce the dimension of the input space to 1 by evaluating the oracle $g$ close to our target codeword $c$ only on a **line** (a map $\ell : \mathbb{F}_q \to \mathbb{F}_q^m$ given by $\ell(t) = x + tv$ for $x, v \in \mathbb{F}_q^m$). The resulting univariate polynomial $c \circ \ell$ is a Reed-Solomon codeword that we then globally decode at a distance close enough that the result is likely the codeword $c$ itself restricted to the line in question. Evaluating at zero then gives $c(\ell(0)) = c(x)$ as desired.

Specifically, if we receive a word $g$ that is $\delta$-close to a Reed-Muller codeword $c$, to compute $c$ with nontrivial accuracy at an input index $x \in \mathbb{F}_q^m$ using our oracle, randomly pick some $v \in \mathbb{F}_q^m$ and define $\ell$ by $\ell(t) = x + tv$. Then $c \circ \ell$ is a $q$-ary, $(d+1)$-dimensional Reed-Solomon codeword, so run the global decoder given by Result 6 of our post on list decoding on $g \circ \ell$ for distance $1/3$ (this is where we use the fact that $d < q/3$, which puts $1/3$ within the unique-decoding radius $(1 - d/q)/2$). Evaluate the resulting message at zero.

We verify that this procedure outputs $c(x)$ with probability at least 2/3. The expected distance between $g \circ \ell$ and $c \circ \ell$ over all lines $\ell$ through $x$ is less than $\delta + 1/q \le 2\delta$ rather than $\delta$, because the input $\ell(0) = x$ to $g$ and $c$ is fixed and so no longer uniformly distributed over all of $\mathbb{F}_q^m$. Markov’s inequality then tells us that the probability that this distance is at least $1/3$ is at most $3(\delta + 1/q) \le 1/3$, and we’re done by uniqueness of the polynomial less than $(1 - d/q)/2$ away from the Reed-Solomon codeword $c \circ \ell$.
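The corrector above is short enough to sketch concretely. The following toy implementation is illustrative only: the field size, degree, the polynomial standing in for the codeword, and the corrupted positions are all my own choices, and unique decoding is done by brute force rather than by an efficient Reed-Solomon decoder.

```python
# Toy sketch of the Reed-Muller local corrector; the field size, degree,
# polynomial c, and corrupted positions are all illustrative choices.
import random
from itertools import product

q, m, d = 13, 2, 2

def c(x):  # the "true" codeword: a polynomial of total degree 2
    x1, x2 = x
    return (3 * x1 * x1 + 5 * x1 * x2 + 7 * x2 + 1) % q

corrupted = {(0, 1), (5, 5), (7, 2)}  # a few corrupted positions

def g(x):  # the received word: c with a small fraction of errors
    return (c(x) + 1) % q if x in corrupted else c(x)

def rs_decode(values):
    """Brute-force unique decoding: return the coefficients of the
    degree-<=d univariate polynomial agreeing with `values` most often."""
    def agreement(coeffs):
        return sum(
            sum(cf * pow(t, i, q) for i, cf in enumerate(coeffs)) % q == values[t]
            for t in range(q))
    return max(product(range(q), repeat=d + 1), key=agreement)

def correct(x):
    """Restrict g to a random line through x, decode the restriction as a
    Reed-Solomon codeword, and evaluate the result at t = 0."""
    v = (0,) * m
    while all(vi == 0 for vi in v):  # pick a nonzero direction vector
        v = tuple(random.randrange(q) for _ in range(m))
    line = lambda t: tuple((xi + t * vi) % q for xi, vi in zip(x, v))
    return rs_decode([g(line(t)) for t in range(q)])[0]  # value at t = 0

random.seed(0)
# even at corrupted positions, the corrector recovers c
assert all(correct(x) == c(x) for x in [(0, 1), (5, 5), (4, 9)])
```

Since only three positions are corrupted here, every line sees at most three errors, well within the unique-decoding radius $(q - d)/2$, so the sketch succeeds deterministically.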

To convert our corrector to a decoder, we need a systematic encoding.

**Systematic Encoding**

We begin by encoding $k$-bit messages, viewed as maps $f : \{0,1\}^{\log k} \to \{0,1\}$, as extensions $\hat{f}$ to the domain $\mathbb{F}_q^{\log k}$ of degree 1 in each variable. Indeed, for any such $f$, define $\hat{f}$ by

$$\hat{f}(x_1, \ldots, x_{\log k}) = \sum_{a \in \{0,1\}^{\log k}} f(a) \prod_{i : a_i = 1} x_i \prod_{i : a_i = 0} (1 - x_i).$$
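This interpolation formula is easy to check in code. The toy parameters below (the prime 13 and the particular message $f$) are illustrative choices of mine:

```python
# Toy check of the degree-1-per-variable extension formula; the prime p
# and the message f are illustrative choices.
from itertools import product

p = 13  # a small prime standing in for the field size q

def extension(f, ell):
    """The extension fhat of f : {0,1}^ell -> F_p, of degree <= 1 in
    each variable, via the interpolation formula above."""
    def fhat(x):
        total = 0
        for a in product((0, 1), repeat=ell):
            term = f[a]
            for xi, ai in zip(x, a):
                term = term * (xi if ai == 1 else 1 - xi) % p
            total = (total + term) % p
        return total
    return fhat

ell = 3
f = {a: a[0] ^ (a[1] & a[2]) for a in product((0, 1), repeat=ell)}
fhat = extension(f, ell)

# fhat agrees with f on every Boolean input...
assert all(fhat(a) == f[a] for a in product((0, 1), repeat=ell))
# ...and is affine in each variable, e.g. in the first coordinate:
assert fhat((2, 0, 1)) == (2 * fhat((1, 0, 1)) - fhat((0, 0, 1))) % p
```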

This construction generalizes from $\{0,1\}$ to arbitrary subsets $H$ of $\mathbb{F}_q$, giving:

**Result 6**: For all $m \in \mathbb{N}$, finite fields $\mathbb{F}_q$, subsets $H \subseteq \mathbb{F}_q$, and maps $f : H^m \to \mathbb{F}_q$, there exists an extension $\hat{f} : \mathbb{F}_q^m \to \mathbb{F}_q$ of $f$ of degree at most $|H| - 1$ in each variable.

Our encoding will send any $f : \{0,1\}^{\log k} \to \{0,1\}$ first to such an $f' : H^m \to \mathbb{F}_q$ by sending each input in $\{0,1\}^{\log k}$ efficiently to some element of $H^m$, and then to the $\hat{f'}$ given by Result 6. Let’s check that our criteria for an encoding are met:

1) This encoding is certainly systematic.

2) To specify $\hat{f'}$ completely requires $|H|^m$ additions for each of the $q^m$ inputs. Taking $H$ to be of size $\log k$ and $m$ to be $\log k / \log \log k$ gives $|H|^m = k$ additions for each of the $q^m = \mathrm{poly}(k)$ inputs, so the encoding time is correct.

3) Codeword length is $q^m = \mathrm{poly}(k)$.

Because the total degree of $\hat{f'}$ is at most $m(|H| - 1) < \log^2 k$, we need $q > 3\log^2 k$ (while keeping $q$ polylogarithmic in $k$) so that $d < q/3$ and Result 5 applies.
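As a quick numeric sanity check of these parameter choices (the concrete numbers below, like $k = 2^{16}$ and the prime 181, are illustrative):

```python
# Sanity check of the parameter setting |H| = log k, m = log k / log log k;
# the concrete k and the prime q below are illustrative.
import math

k = 2 ** 16                 # message length
H_size = int(math.log2(k))  # |H| = 16
m = int(math.log2(k) / math.log2(math.log2(k)))  # m = 4
d = m * (H_size - 1)        # total degree bound m(|H| - 1) = 60
q = 181                     # a prime slightly larger than 3d

assert H_size ** m >= k     # H^m has room for every message position
assert d < q / 3            # the degree condition needed for Result 5
assert q ** m < k ** 3      # codeword length q^m is polynomial in k
```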

**Converting to Boolean**

The last step will be to make the alphabet of the code binary. We have the following “concatenation” result:

**Lemma**: If $\mathrm{Enc}_1 : \{0,1\}^k \to \Sigma^{n_1}$ has minimum distance $\delta_1$ and $\mathrm{Enc}_2 : \Sigma \to \{0,1\}^{n_2}$ has minimum distance $\delta_2$, then their concatenation $\mathrm{Enc}$ given by $\mathrm{Enc}(m) = (\mathrm{Enc}_2(\mathrm{Enc}_1(m)_1), \ldots, \mathrm{Enc}_2(\mathrm{Enc}_1(m)_{n_1}))$ has minimum distance at least $\delta_1 \delta_2$.

**Proof**: Take any two messages $m \ne m'$. Then $\mathrm{Enc}_1(m)$ and $\mathrm{Enc}_1(m')$ differ in at least a $\delta_1$ fraction of all positions, and in each of these positions, the $\mathrm{Enc}_2$-encodings of the differing symbols disagree on at least a $\delta_2$ fraction of all letters, so distance is multiplicative under concatenation.
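The lemma can be verified by brute force on a toy pair of codes; the outer degree-1 Reed-Solomon code over $\mathbb{F}_5$ and the five inner codewords below are arbitrary illustrative choices:

```python
# Brute-force check that minimum distance multiplies under concatenation;
# both toy codes here are illustrative choices.
from itertools import product

q = 5
# Outer code: evaluations of all degree-<=1 polynomials a + b*t over F_5
outer = [tuple((a + b * t) % q for t in range(q))
         for a, b in product(range(q), repeat=2)]
# Inner code: one binary codeword per F_5 symbol, minimum distance 2 of 4
inner = {0: (0, 0, 0, 0), 1: (0, 0, 1, 1), 2: (0, 1, 0, 1),
         3: (0, 1, 1, 0), 4: (1, 0, 0, 1)}

def min_dist(code):
    code = list(code)
    return min(sum(a != b for a, b in zip(u, v))
               for u in code for v in code if u != v)

concat = [sum((inner[s] for s in cw), ()) for cw in outer]

d1 = min_dist(outer)           # 4 out of 5 positions
d2 = min_dist(inner.values())  # 2 out of 4 positions
assert min_dist(concat) >= d1 * d2  # at least 8 of the 20 positions
```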

To locally decode a concatenation of $\mathrm{Enc}_1$ and $\mathrm{Enc}_2$, then, we simply apply the (global) $\delta_2$-decoding algorithm for the inner code to retrieve the letters of the outer codeword and then run the local $\delta_1$-decoding algorithm on the $\mathrm{Enc}_1$-codeword these letters form.

We’d like to concatenate with a code that has a binary alphabet, and the most convenient one with a local decoder is the Hadamard code:

**Lemma**: For all $\delta < 1/4$ and $k \in \mathbb{N}$, the Hadamard code of message length $k$ has a local $\delta$-decoding algorithm running in time polynomial in $k$ and $1/(1/4 - \delta)$.

**Proof**: The encoding is systematic (the translation map sends position $x$ to the standard basis vector $e_x$), so it suffices to find a local $\delta$-correcting algorithm running in the desired time. We have some codeword $c$ that we’d like to compute at any given input $x$ with high probability, given oracle access to some word $g$ that is $\delta$-close to it. By linearity of Hadamard codewords, $c(x) = c(x \oplus r) \oplus c(r)$, where $\oplus$ denotes bitwise exclusive-or. If we pick a random $r$, the probability that $g$ differs from $c$ at either $x \oplus r$ or $r$ is at most $2\delta < 1/2$, so if we run this sufficiently many times (polynomial in $1/(1/4 - \delta)$) on different $r$ and take a majority vote on the outputs $g(x \oplus r) \oplus g(r)$, then we can compute $c(x)$ with probability 2/3.
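The decoder is short enough to sketch in full; the message length and noise level below are illustrative choices:

```python
# Toy Hadamard local decoder: output the majority of g(x ^ r) ^ g(r)
# over random r; the message length and noise rate are illustrative.
import random

random.seed(1)
k = 10
msg = tuple(random.randrange(2) for _ in range(k))  # hidden message

def c(x):  # Hadamard codeword: inner product <msg, x> mod 2
    return sum(mi * xi for mi, xi in zip(msg, x)) % 2

inputs = [tuple((i >> j) & 1 for j in range(k)) for i in range(2 ** k)]
noisy = set(random.sample(inputs, int(0.1 * 2 ** k)))  # delta = 0.1

def g(x):  # received word: the codeword with a delta fraction flipped
    return c(x) ^ 1 if x in noisy else c(x)

def decode(x, trials=101):
    votes = 0
    for _ in range(trials):
        r = tuple(random.randrange(2) for _ in range(k))
        x_xor_r = tuple(a ^ b for a, b in zip(x, r))
        votes += g(x_xor_r) ^ g(r)  # correct unless a query hits noise
    return int(votes > trials / 2)

# each vote is wrong with probability at most 2*delta = 0.2, so the
# majority vote recovers c(x) except with negligible probability
assert all(decode(x) == c(x) for x in random.sample(inputs, 20))
```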

By concatenating our Reed-Muller code with a Hadamard code (applied to the binary representations of the $q$-ary symbols), we get a code with codeword length polynomial in $k$ for which there exists a local $1/48$-decoder running in time $\mathrm{poly}(\log k)$, giving us our first result in hardness amplification.

**Result 7**: If there exists an $f$ in **E** that is worst-case hard in nonuniform time $t(\ell)$, there is an $f'$ in **E** that is $1/48$-average-case-hard in nonuniform time $t(\Omega(\ell))^{\Omega(1)}$.

We now want to amplify worst-case hardness into arbitrarily large average-case hardness $1/2 - \varepsilon$. To keep input length linear in the original input length, we will use so-called local *list*-decoding. Morally, given oracle access to some received word $g$, a list-decoding algorithm probabilistically outputs a bunch of guesses, one of which, with high probability, correctly decodes the target message whose encoding is close to $g$. It will turn out to be helpful to define a list-variant of local correcting as well, and again, its definition will essentially be the same as that of decoding with all instances of the word “message” replaced by “codeword.” Formally:

**Definition 9**: A **local $\delta$-list-decoding algorithm** for an encoding $\mathrm{Enc}$ is a probabilistic algorithm that, given oracle access to a received word $g$ and an input position $x$, operates in two steps using algorithms $(A_1, A_2)$, where $A_1$ probabilistically outputs advice strings $a_1, \ldots, a_s$ such that for any message $m$ whose encoding is $\delta$-close to $g$, with probability at least 2/3 we have $A_2^g(x, a_i) = m(x)$ for one of the advice strings $a_i$, where probability is taken over the distribution of collections of advice strings output by $A_1$.

**Definition 10**: A **local $\delta$-list-correcting algorithm** for an encoding $\mathrm{Enc}$ is a probabilistic algorithm that, given oracle access to a received word $g$ and an input position $x$, operates in two steps using algorithms $(A_1, A_2)$, where $A_1$ probabilistically outputs advice strings $a_1, \ldots, a_s$ such that for any codeword $c$ that is $\delta$-close to $g$, with probability at least 2/3 we have $A_2^g(x, a_i) = c(x)$ for one of the advice strings $a_i$, where probability is taken over the distribution of collections of advice strings output by $A_1$.

Analogously to Result 4, we get local list-decoders from local list-correctors given a systematic encoding, with the encoding adding an overhead polylogarithmic in the message length. Analogously to Result 3, we get hardness amplification out of local list-decoding, with a time slowdown by a factor polynomial in $t$, where $t$ is the running time of $A_2$. The proof of the former remains the same, whereas the proof of the latter differs slightly from its counterpart, because $A_2$ makes use of nonuniformity not in the form of fixed coin tosses but in the form of an advice string $a_i$.

The good news is that our discussion of systematic encoding applies just as soundly in the case of list-decoding, and moreover we already know the Hadamard code can be $(1/2 - \varepsilon)$-list-decoded globally (see Result 3 here). What remains is to construct a local $(1 - \varepsilon)$-list-corrector, and it turns out we can do this with the Reed-Muller code as well.

**Result 8**: There is some fixed constant $C$ such that for all $\varepsilon \ge C\sqrt{d/q}$, the $q$-ary, $m$-dimensional, degree-$d$ RM code has a local $(1 - \varepsilon)$-list-corrector running in time polynomial in $q$ and $m$.

**Proof**: Call our unknown codeword $c$. The corrector is very similar to the one used in Result 5: reduce the dimension of the code to 1 by restricting our oracle $g$ to a line through the target position and use Reed-Solomon (global) list-decoding on this line. In particular, $A_1$ will pass to $A_2$ advice strings including a point on $c$, and $A_2$ will use Reed-Solomon list decoding to find all univariate polynomials close to the restriction of $g$, and the intuition is that if only one of these polynomials contains the point we know to be on $c$, we’ve probably properly corrected the received word $g$.

Specifically, the algorithm is as follows: in order to ensure $A_1$ passes in a point on $c$, just pick some random $y \in \mathbb{F}_q^m$ and, because it doesn’t cost much, pass all $q$ points of the form $(y, a)$ for $a \in \mathbb{F}_q$, so that one of the advice strings, namely $(y, c(y))$, will be the right one. $A_2$ then $(1 - \varepsilon/2)$-list-decodes $g$ restricted to the line $\ell$ going through $x$ and $y$ (parametrized so that $\ell(0) = x$ and $\ell(1) = y$) to get the polynomials $(1 - \varepsilon/2)$-close to $g \circ \ell$. If exactly one of these passes through $(1, c(y))$, output its value at 0.
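Here is the disambiguation step of $A_2$ on a single line, with brute-force list decoding over a toy field. The polynomials and parameters are illustrative choices, and the received restriction is rigged to contain two plausible candidates:

```python
# Toy version of A2's step on one line: list-decode by brute force, then
# use the advice point at t = 1 to disambiguate; all values illustrative.
from itertools import product

q, d = 11, 1

def poly_eval(coeffs, t):
    return sum(cf * pow(t, i, q) for i, cf in enumerate(coeffs)) % q

true_line = (4, 3)  # c restricted to the line: 4 + 3t
decoy = (2, 7)      # a competing polynomial hiding in the received word
# received restriction: agrees with true_line on even t, decoy on odd t
received = [poly_eval(true_line if t % 2 == 0 else decoy, t)
            for t in range(q)]

# list decoding: keep every degree-<=1 polynomial agreeing >= 5 times
candidates = [cfs for cfs in product(range(q), repeat=d + 1)
              if sum(poly_eval(cfs, t) == received[t]
                     for t in range(q)) >= 5]
assert set(candidates) == {true_line, decoy}  # both survive the list

# the advice point (y, c(y)) = (l(1), c(l(1))) singles out the right one
advice_value = poly_eval(true_line, 1)
matches = [cfs for cfs in candidates if poly_eval(cfs, 1) == advice_value]
assert matches == [true_line]
corrected = poly_eval(matches[0], 0)  # the corrected value at position x
```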

As long as we can compute something $1/12$-close to $c$ this way, we can use our unique corrector of Result 5 on top to compute the actual $c(x)$ with probability 2/3.

We will show that for at least half of all $y$, for more than $11/12$ of the lines $\ell$ passing through $y$, we have that $c \circ \ell$ is the unique univariate polynomial which is $(1 - \varepsilon/2)$-close to $g \circ \ell$ and which, evaluated at 1, is equal to $c(y)$.

It suffices to prove that for a randomly chosen $y$ and a random line $\ell$ through $y$, we can tune the constant $C$ in Result 8 to make the probability that this condition holds arbitrarily large.

We first prove that the probability that $g \circ \ell$ and $c \circ \ell$ agree on at most $(\varepsilon/2)q$ inputs is arbitrarily small. The point is that if we pick our line randomly, the samples of points from this line are certainly not independent, but they are pairwise independent. Recalling Result 3 from here and noting that the expected agreement between $g \circ \ell$ and $c \circ \ell$ is at least $\varepsilon q$, we obtain that $\Pr[\text{agreement} \le \varepsilon q/2] = O(1/(\varepsilon q)) = O(1/C)$, which we can make arbitrarily small by making $C$ arbitrarily large.
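The pairwise independence claim can be checked exactly by enumeration in a toy one-dimensional setting, with $\ell(t) = x + t(y - x)$ for uniform $x$ and $y$; the field size 5 is an illustrative choice:

```python
# Exact check that for uniform x, y the pair (l(s), l(t)) with s != t is
# uniform, i.e. the points of a random line are pairwise independent.
from collections import Counter
from itertools import product

q = 5

def line_point(x, y, t):  # l(t) = x + t*(y - x): l(0) = x, l(1) = y
    return (x + t * (y - x)) % q

for s, t in [(2, 3), (0, 4), (1, 2)]:
    counts = Counter((line_point(x, y, s), line_point(x, y, t))
                     for x, y in product(range(q), repeat=2))
    # each of the q^2 value pairs arises from exactly one choice of (x, y)
    assert all(counts[pair] == 1 for pair in product(range(q), repeat=2))
```

The check works because $(x, y) \mapsto (\ell(s), \ell(t))$ is an invertible linear map whenever $s \ne t$.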

Lastly, we prove that the probability that there exists a univariate polynomial $h$ other than $c \circ \ell$ that is $(1 - \varepsilon/2)$-close to $g \circ \ell$ such that $h(1) = c(y)$ is also arbitrarily small. The key is that 1) there are only $O(1/\varepsilon)$ univariate polynomials that are $(1 - \varepsilon/2)$-close to $g \circ \ell$ to begin with, and 2) each agrees with $c \circ \ell$ at at most $d$ points. So if we first pick an arbitrary line and then a point $y$ on it, then for any line chosen, the union bound tells us that the probability over all $y$ on this line that some $h$ agrees with $c$ at $y$ is at most $O(1/\varepsilon) \cdot d/q$, and because $\varepsilon \ge C\sqrt{d/q}$, we have $d/q \le (\varepsilon/C)^2$, so this probability is $O(\varepsilon/C^2) = O(1/C^2)$, and again, we can take $C$ arbitrarily large to make this probability small as well.

As we stated earlier, we can directly use the results on systematic encoding and concatenation above to get an explicit code with a local $(1/2 - \varepsilon)$-list-decoder. Specifically, as before we set the dimension $m$ to be $\log k / \log \log k$ so that the degree $d$ is bounded above by $\log^2 k$ and the codeword length is $\mathrm{poly}(k)$. The running time depends linearly on the alphabet size $q = O(d/\varepsilon^2)$, so our decoding algorithm runs in time $\mathrm{poly}(\log k, 1/\varepsilon)$.

**Conclusion**

With this Reed-Muller code, we can send a function that is worst-case hard for time $t$ to a function that is $(1/2 - \varepsilon)$-average-case hard for time $t' = t^{\Omega(1)} \cdot \mathrm{poly}(\varepsilon)$, where $\varepsilon$ may be taken as small as $1/t'$. Such a function is certainly also $(1/2 - 1/t')$-average-case hard, and Result 3 thus tells us that:

**Corollary**: Fix a time function $t$ that is computable in time exponential in its input. If there is some $f$ in **E** that on length-$\ell$ inputs is worst-case hard for time $t(\ell)$, there is a mildly explicit $(m, 1/m)$-generator with seed length $O(t^{-1}(\mathrm{poly}(m)))$.

In particular, if $t(\ell) = 2^{\Omega(\ell)}$ in the above result, then the seed length would be $O(\log m)$, and we would get that **BPP = P**! Even if the latter doesn’t turn out to be true and no such function in **E** has such high circuit complexity, that would mean that problems in **E** like satisfiability don’t have as high circuit complexity as previously thought.