This post will cover some cool results in coding theory I’ve learned from Vadhan’s survey on pseudorandomness, with a focus on list-decoding algorithms and the tradeoff between rate and distance, culminating in a discussion of the folded Reed-Solomon code and its relationship to the Parvaresh-Vardy code.

We begin with some basic definitions.

**Definition 1**: For an alphabet $\Sigma$ of size $q$ and block length $n$, a **$q$-ary code** $\mathcal{C}$ is a multiset inside the set of $n$-words $\Sigma^n$. The members of $\mathcal{C}$ are called **codewords**.

**Definition 2**: An **encoding function** for a code $\mathcal{C}$ is a bijection $\mathrm{Enc} \colon \mathcal{M} \to \mathcal{C}$ from a set $\mathcal{M}$ of **messages** to the set of codewords, thought of as a labeling.

The set $\Sigma^n$ of $n$-words from the alphabet $\Sigma$ has an obvious structure as a metric space, and this notion of distance will help us in developing a model of decoding.

**Definition 3**: The relative **Hamming distance** $\delta(w, w')$ for $w, w' \in \Sigma^n$ is the fraction of positions $i$ such that $w_i \neq w'_i$. Under this metric, denote by $B(w, r)$ the open ball of $n$-words centered at $w$ with radius $r$. This happens to be called the Hamming ball of radius $r$ around $w$.

With these in mind, we will focus not on unique decoding, but on decoding algorithms that output *lists* of potential candidates, allowing us to handle larger numbers of errors.

**Definition 4**: Let $\mathrm{LIST}(w, \delta)$ denote the set of codewords lying inside the Hamming ball of radius $\delta$ centered about $w$. An encoding is said to be **$(\delta, L)$-list-decodable** if $|\mathrm{LIST}(w, \delta)| \leq L$ for all $n$-words $w \in \Sigma^n$.
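As a concrete illustration of these definitions, here is a brute-force Python sketch (the helper names `hamming_dist`, `LIST`, and `is_list_decodable` are mine, chosen to mirror the notation above):

```python
from itertools import product

def hamming_dist(x, y):
    """Relative Hamming distance: fraction of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def LIST(code, w, r):
    """Codewords strictly inside the open Hamming ball of radius r around w."""
    return [c for c in code if hamming_dist(c, w) < r]

def is_list_decodable(code, alphabet, n, r, L):
    """Check (r, L)-list-decodability by brute force over all n-words."""
    return all(len(LIST(code, w, r)) <= L for w in product(alphabet, repeat=n))

# A toy binary code with two codewords at relative distance 1.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
```

For this toy code, every 4-bit word has at most one codeword within relative distance $1/2$, so it is $(1/2, 1)$-list-decodable, but not $(3/4, 1)$-list-decodable.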

The parameter $L$ in the above definition can be thought of as capturing the difficulty of decoding: the smaller the distance between codewords, the closer codewords are to each other and the harder it is to distinguish among them, forcing larger lists.

A related parameter is the **minimum distance** of a code, defined to be the minimum relative Hamming distance between any two distinct codewords.

One of the tradeoffs central to coding theory is that between the efficient decodability captured by $(\delta, L)$-list-decodability and efficient encodability. This will serve as the motivation for the progression of codes we will present leading up to the FRS code. We will formalize efficient encodability as follows:

If we imagine each message as a binary string, then the length of that binary string gives what one can think of as a measure of the “breadth” of the code. If we further think of the elements of the alphabet as binary strings, then the length of these strings times the block length gives a measure of the “verbosity” of the code. With these images in mind, we can define the following parameters; in particular, we can think of the latter as the “expressiveness” of the code.

**Definition 5**: Call $k = \log_2 |\mathcal{C}|$ the **message length** of code $\mathcal{C}$ with alphabet $\Sigma$ of size $q$. Define the **rate** of $\mathcal{C}$ to be $\rho = \frac{k}{n \log_2 q}$.

We introduce a slight modification to this, here solely for the purpose of giving a probabilistic proof of the existence of very good codes. The slight modification $H_q(\delta, n)$ can be thought of both as the rate of the code consisting of all $n$-words lying within a Hamming ball of fixed radius $\delta$, and, more mysteriously, as a convergent (as $n \to \infty$) to the $q$-ary entropy $H_q(\delta)$, a measure of the entropy of an alphabet of size $q$ for a given distance $\delta$.

**Definition 6**: $H_q(\delta, n) := \frac{\log_q |B(w, \delta)|}{n}$, for any fixed $n$-word $w$ (the volume of a Hamming ball does not depend on its center).

**Result 1**: For fixed block length $n$, base $q$, list size $L$, and distance $\delta$, there exists a $(\delta, L)$-list-decodable $q$-ary code with a rate of at least $1 - H_q(\delta, n) - \frac{1}{L}$.

**Proof**: Randomly and independently pick $N = q^{\rho n}$ words from the set of $n$-words to be our code, for some rate $\rho$ that we will determine later to guarantee the existence of a good code. The probability that there exists a Hamming ball of radius $\delta$ containing more than $L$ codewords, i.e. the probability that we run into problems trying to decode, is bounded above, according to the union bound, by

$$q^n \binom{N}{L+1} \left( \frac{|B(w, \delta)|}{q^n} \right)^{L+1} \leq q^{\,n + \rho n (L+1) + (H_q(\delta, n) - 1)(L+1)n},$$

and by taking $\rho$ to be

$$1 - H_q(\delta, n) - \frac{1}{L},$$

the exponent above becomes negative, so the probability is less than 1 and a good code exists, and we’re done.
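For small parameters we can check numerically that this union bound really does drop below 1 at the stated rate; `ball_volume` and `existence_bound` are hypothetical helper names of mine, implementing the expressions from the proof:

```python
from math import comb, floor, log

def ball_volume(q, n, t):
    """Number of words in [q]^n within Hamming distance t of a fixed center."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

def existence_bound(q, n, delta, L):
    """Union bound on the probability that some radius-delta Hamming ball
    contains more than L of N random words, with N set by the rate
    1 - H_q(delta, n) - 1/L from Result 1."""
    vol = ball_volume(q, n, floor(delta * n))
    Hq = log(vol, q) / n                        # H_q(delta, n)
    N = floor(q ** ((1 - Hq - 1 / L) * n))      # number of random codewords
    # q^n choices of center; any fixed L+1 of the N words all land in the
    # ball with probability (vol / q^n)^(L+1)
    return q ** n * comb(N, L + 1) * (vol / q ** n) ** (L + 1)

bound = existence_bound(2, 50, 0.1, 2)   # comes out below 1: a good code exists
```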

We conclude our preliminaries with a two-part result on minimum distance that will prove useful later on.

**Result 2 (Johnson Bound)**: i) If $\mathcal{C}$ has minimum distance at least $1 - \varepsilon$, it is $(1 - \sqrt{2\varepsilon},\, O(1/\sqrt{\varepsilon}))$-list-decodable.

ii) If $\mathcal{C}$ is binary and has minimum distance at least $\frac{1}{2} - \varepsilon$, it is $(\frac{1}{2} - O(\sqrt{\varepsilon}),\, O(1/\varepsilon))$-list-decodable.

**Proof**: For part i), we will show that for any $n$-word $w$, there cannot exist $s = \lceil \sqrt{2/\varepsilon} \rceil$ codewords in $\mathrm{LIST}(w, 1 - \sqrt{2\varepsilon})$. Suppose we had such a collection of codewords $c^{(1)}, \ldots, c^{(s)}$: each agrees with $w$ on more than a $\sqrt{2\varepsilon}$ fraction of positions, while any two of them agree on at most an $\varepsilon$ fraction, by the minimum distance. We count the fraction of positions where $w$ agrees with at least one of these codewords. By inclusion-exclusion, this is at least

$$s \sqrt{2\varepsilon} - \binom{s}{2} \varepsilon \geq 1 + \sqrt{\varepsilon/2} > 1,$$

which is impossible for a fraction of positions.
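The arithmetic in this argument is easy to check numerically (helper name mine), under the assumptions that each codeword agrees with $w$ on more than a $\sqrt{2\varepsilon}$ fraction of positions and any two codewords agree on at most an $\varepsilon$ fraction:

```python
from math import ceil, sqrt

def agreement_lower_bound(eps, s):
    """Inclusion-exclusion lower bound on the fraction of positions where w
    agrees with at least one of s codewords: each agrees with w on more than
    a sqrt(2*eps) fraction, and any pair agrees on at most an eps fraction."""
    return s * sqrt(2 * eps) - s * (s - 1) / 2 * eps

# With s = ceil(sqrt(2/eps)) codewords, this "fraction" would exceed 1 --
# impossible, so the list size must stay below sqrt(2/eps) + 1.
checks = [agreement_lower_bound(e, ceil(sqrt(2 / e))) for e in (0.5, 0.1, 0.01)]
```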

We defer our proof of part ii) until after our discussion of randomness extractors in a future post.

For the remainder of our discussion, we will be interested in codes with time-efficient encoding and decoding.

**Definition 7**: An encoding is **fully explicit** if for any message $m$ and position $i$, the $i$-th symbol of $\mathrm{Enc}(m)$ can be found in time polynomial in $k$, $\log n$, and $\log q$.

We start with an explicit code with a small alphabet and the best distance we could hope for:

**Definition 8**: The **binary Hadamard code** of message length $k$ is the code with alphabet $\{0, 1\}$ and block length $2^k$ consisting of all linear maps from $\mathbb{Z}_2^k$ to $\mathbb{Z}_2$, where we regard $\mathbb{Z}_2^k$ as a $\mathbb{Z}_2$-module.

**Remark**: For the purposes of comparison to the RS code given below and of generalization to the RM code, these linear maps can also be thought of as homogeneous polynomials of degree 1 in as many variables as there are additive generators of $\mathbb{Z}_2^k$, the coefficient of each variable specifying where our map sends the corresponding generator.

**Result 3**: The Hadamard code has minimum distance $\frac{1}{2}$ and is $(\frac{1}{2} - \varepsilon,\, O(1/\varepsilon^2))$-list-decodable for all $\varepsilon > 0$. The encoding given by $\mathrm{Enc}(m) = \big( \langle m, x \rangle \big)_{x \in \mathbb{Z}_2^k}$ is fully explicit.

**Proof**: The first part follows from the fact that any two distinct linear maps differ on exactly half of all inputs. The second part of Result 3 follows immediately from the second part of the Johnson Bound. The third part comes from our discussion of finite field arithmetic in a previous post on pairwise independent hash functions.
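A quick empirical check of the distance claim, using a sketch encoder based on the inner-product view of linear maps from the Remark:

```python
from itertools import product

def hadamard_encode(msg):
    """Evaluate the linear map x -> <msg, x> over Z_2 at all 2^k inputs."""
    return tuple(sum(m * x for m, x in zip(msg, xs)) % 2
                 for xs in product((0, 1), repeat=len(msg)))

k = 4
code = [hadamard_encode(m) for m in product((0, 1), repeat=k)]

# Any two distinct linear maps differ on exactly half of all 2^k inputs.
dists = {sum(a != b for a, b in zip(u, v)) / len(u)
         for u in code for v in code if u != v}
```

Every pairwise relative distance comes out to exactly $1/2$, matching the minimum distance claim.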

Observe that the rate of the Hadamard code, $k / 2^k$, decreases exponentially in the message length. By considering fields of higher order, and codewords given by *single-variable* polynomial maps into a bigger alphabet, we get a constant rate, at the cost of this bigger alphabet.

**Definition 9**: The **$q$-ary, degree-$d$ Reed-Solomon (RS) code** is the code with alphabet $\mathbb{F}_q$ and block length $q$ consisting of (the evaluation tables of) all polynomials $p \in \mathbb{F}_q[x]$ of degree at most $d$.

**Result 4**: The RS code has minimum distance $1 - d/q$ and is $(1 - \sqrt{2d/q},\, O(\sqrt{q/d}))$-list-decodable. The encoding given by $\mathrm{Enc}(c_0, \ldots, c_d) = \big( \sum_i c_i \alpha^i \big)_{\alpha \in \mathbb{F}_q}$ is fully explicit.

**Proof**: The difference of two distinct polynomials of degree at most $d$ is a nonzero polynomial of degree at most $d$, and so has at most $d$ zeroes, giving the first part of the result on minimum distance. The second comes from the first part of the Johnson Bound.
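For a small instance ($q = 5$, $d = 2$) the minimum distance can be verified exhaustively; a minimal sketch:

```python
from itertools import product

q, d = 5, 2  # the 5-ary, degree-2 RS code

def rs_encode(coeffs):
    """Evaluate p(x) = sum_i c_i x^i at every point of F_q."""
    return tuple(sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
                 for a in range(q))

code = [rs_encode(c) for c in product(range(q), repeat=d + 1)]
min_dist = min(sum(a != b for a, b in zip(u, v))
               for u in code for v in code if u != v) / q
```

Here `min_dist` comes out to $3/5 = 1 - d/q$, as Result 4 predicts.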

We can generalize these two orthogonal codes into the following: the Hadamard code by considering larger alphabets and higher-degree polynomial maps, the RS code by considering polynomial maps in more variables.

**Definition 10**: The **$q$-ary, degree-$d$, $m$-dimensional Reed-Muller (RM) code** is the code with alphabet $\mathbb{F}_q$ and block length $q^m$ consisting of all polynomials in $\mathbb{F}_q[x_1, \ldots, x_m]$ of total degree at most $d$.

**Result 5**: The RM code has minimum distance at least $1 - d/q$ and is $(1 - \sqrt{2d/q},\, O(\sqrt{q/d}))$-list-decodable. The encoding function taking a vector of coefficients in $\mathbb{F}_q$ to the polynomial with those coefficients is fully explicit.

**Proof**: This result is essentially identical to the one for RS, except that because we are dealing with multivariate polynomials, we invoke the Schwartz-Zippel lemma instead of the bound on the number of roots of a univariate polynomial.

We can now discuss list-decoding algorithms. We want to efficiently determine $\mathrm{LIST}(w, \delta)$ for any received word $w$. We begin with a discussion of decoding RS using an algorithm due to Sudan. This method turns out to be useful in decoding a general class of “algebraic-geometric” codes.

**Result 6**: For the $q$-ary, degree-$d$ RS code, there exists a polynomial-time $\big(1 - \sqrt{2d/q}\big)$-list decoding algorithm.

**Proof**: Given a $q$-ary word $w = (w_\alpha)_{\alpha \in \mathbb{F}_q}$, we want to find the polynomials $p$ of degree at most $d$ which agree with $w$ on more than $\sqrt{2dq}$ elements of $\mathbb{F}_q$. The approach will be to find a low-degree nonzero bivariate polynomial $Q(x, y)$ which is solved by $(\alpha, w_\alpha)$ for all $\alpha \in \mathbb{F}_q$. By construction, factoring the polynomial $Q$ will give us the desired polynomials $p$.

In order for our “characteristic polynomial” $Q$ to be solved by all $q$ pairs $(\alpha, w_\alpha)$, we must satisfy a system of $q$ homogeneous linear equations in its unknown coefficients, which can be done with $Q \neq 0$ as long as the number of coefficients exceeds $q$. If we bound the $(1, d)$-weighted degree $\deg_x Q + d \deg_y Q$ by $D$, the number of available monomials $x^i y^j$ is roughly $D^2 / 2d$, so $D \approx \sqrt{2dq}$ suffices.

Once we have such a $Q$, consider the univariate polynomial $Q(x, p(x))$ for any codeword $p \in \mathrm{LIST}(w, 1 - \sqrt{2d/q})$. This polynomial has degree at most $D$ and vanishes at every point where $p$ agrees with $w$; since the number of agreements, more than $\sqrt{2dq}$, exceeds the degree of $Q(x, p(x))$, it is the zero polynomial, and hence $y - p(x)$ divides $Q(x, y)$. It suffices then to factor $Q$ and pick out the factors of the form $y - p(x)$, of which there are at most $D/d \approx \sqrt{2q/d}$, which also bounds the list size.
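Here is a small end-to-end sketch of the interpolation step over $\mathbb{F}_{13}$ with $d = 1$ and weighted-degree bound $D = 6$. Rather than implement bivariate factoring, it plants a nearby codeword $p$ and checks the key fact that $Q(x, p(x))$ vanishes identically, so $y - p(x)$ would indeed appear among the factors; all helper names are mine:

```python
q, d, D = 13, 1, 6     # field F_13, degree bound d, weighted-degree bound D
monos = [(i, j) for j in range(D + 1) for i in range(D + 1) if i + d * j <= D]

def null_vector(rows, p):
    """A nonzero solution of a homogeneous linear system over F_p (more
    unknowns than equations), via reduced row echelon form."""
    A = [row[:] for row in rows]
    ncols, pivot_row, r = len(A[0]), {}, 0
    for c in range(ncols):
        for i in range(r, len(A)):
            if A[i][c]:
                A[r], A[i] = A[i], A[r]
                inv = pow(A[r][c], p - 2, p)
                A[r] = [x * inv % p for x in A[r]]
                for j in range(len(A)):
                    if j != r and A[j][c]:
                        f = A[j][c]
                        A[j] = [(a - f * b) % p for a, b in zip(A[j], A[r])]
                pivot_row[c] = r
                r += 1
                break
    free = next(c for c in range(ncols) if c not in pivot_row)
    v = [0] * ncols
    v[free] = 1
    for c, i in pivot_row.items():
        v[c] = -A[i][free] % p
    return v

# Received word: p(x) = 3 + 2x, corrupted in 6 of the 13 positions.
p_true = [3, 2]
w = [(3 + 2 * a) % q for a in range(q)]
for a in range(6):
    w[a] = (w[a] + 1) % q

# Interpolate Q(x, y) = sum Q_ij x^i y^j vanishing at every (a, w_a).
rows = [[pow(a, i, q) * pow(w[a], j, q) % q for (i, j) in monos]
        for a in range(q)]
Q = null_vector(rows, q)

def compose(Q, poly):
    """Coefficient list of the univariate polynomial Q(x, poly(x)) mod q."""
    out = [0] * (2 * D + 1)
    for coef, (i, j) in zip(Q, monos):
        term = [1]
        for _ in range(j):                     # term = poly(x)^j
            new = [0] * (len(term) + len(poly) - 1)
            for s, ts in enumerate(term):
                for t, pt in enumerate(poly):
                    new[s + t] = (new[s + t] + ts * pt) % q
            term = new
        for s, ts in enumerate(term):          # add coef * x^i * poly(x)^j
            out[s + i] = (out[s + i] + coef * ts) % q
    return out

# p_true agrees with w on 7 > D positions, so Q(x, p_true(x)) must vanish.
```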

We can adapt this approach to a code with a better rate-distance tradeoff than the quadratic one for RS. This construction will involve a characteristic polynomial $Q$ in more than two variables that is defined as being solved by tuples of successive *powers* of $p$, so that, morally, we are getting more information out of our $Q$. These powers will be reduced modulo an irreducible polynomial $E$ in order to make sure our characteristic polynomial isn’t trivial.

**Definition 11**: For prime power $q$, the **$q$-ary Parvaresh-Vardy (PV) code** of degree $d$, power $h$, redundancy $m$, and irreducible $E \in \mathbb{F}_q[x]$ of degree $d + 1$ is the code with alphabet $\mathbb{F}_q^m$, block length $q$, and messages represented by polynomials $p \in \mathbb{F}_q[x]$ of degree at most $d$, with encoding function given by

$$\mathrm{Enc}(p) = \big( p_0(\alpha), p_1(\alpha), \ldots, p_{m-1}(\alpha) \big)_{\alpha \in \mathbb{F}_q}, \qquad p_j := p^{h^j} \bmod E.$$
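A toy sketch of this encoding map over $\mathbb{F}_5$, with $E(x) = x^4 + 3$ (irreducible over $\mathbb{F}_5$), power $h = 2$, and redundancy $m = 2$; the parameter choices are arbitrary and the helper names are mine:

```python
q, d, h, m = 5, 3, 2, 2        # toy parameters; deg E = d + 1 = 4
E = [3, 0, 0, 0, 1]            # E(x) = x^4 + 3, irreducible over F_5

def polymulmod(a, b):
    """Product of coefficient lists (lowest degree first) modulo E, over F_q."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    while len(prod) > len(E) - 1:              # reduce top term modulo E
        top, k = prod.pop(), len(prod) - (len(E) - 1)
        for i, e in enumerate(E[:-1]):
            prod[k + i] = (prod[k + i] - top * e) % q
    return prod

def polypow(a, e):
    """a(x)^e reduced modulo E(x)."""
    out = [1]
    for _ in range(e):
        out = polymulmod(out, a)
    return out

def peval(a, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(a)) % q

def pv_encode(p):
    """Symbol at alpha is (p_0(alpha), ..., p_{m-1}(alpha)), p_j = p^(h^j) mod E."""
    branches = [polypow(p, h ** j) for j in range(m)]
    return [tuple(peval(b, alpha) for b in branches) for alpha in range(q)]

codeword = pv_encode([1, 2, 0, 1])   # message p(x) = 1 + 2x + x^3
```

The first coordinate of each symbol is just the RS encoding of $p$; the remaining coordinates carry the redundant powers.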

**Result 7**: For $q$ a prime power, irreducible $E$ of degree $d + 1$, redundancy $m$, and power $h \approx (q/d)^{1/(m+1)}$, the $q$-ary PV code can be $\delta$-list-decoded in polynomial time for $\delta = 1 - (m+1)(d/q)^{m/(m+1)}$, at a rate of $\frac{d+1}{mq}$.

**Proof**: The approach is essentially the same as in the case of the RS code. Take some word $w$, represented as a map $\alpha \mapsto (w_0(\alpha), \ldots, w_{m-1}(\alpha))$ from $\mathbb{F}_q$ to $\mathbb{F}_q^m$, and seek a nonzero characteristic polynomial $Q(x, y_0, \ldots, y_{m-1})$ solved by $(\alpha, w_0(\alpha), \ldots, w_{m-1}(\alpha))$ for all $\alpha \in \mathbb{F}_q$. First, we can assume without loss of generality that $E$ does not divide $Q$ (viewed as a polynomial in the $y_j$ with coefficients in $\mathbb{F}_q[x]$): by irreducibility of $E$, we may simply divide out any factors of $E$. We want our characteristic polynomial to have low degree; denote by $D$ the degree of $Q$ in $x$, and let the degree of $Q$ in each of the other variables be less than the power $h$ of the code. In order for there to be a characteristic polynomial, the number of unknown coefficients $(D+1)h^m$ must exceed the number of equations $q$.

As before, we can conclude that $Q(x, p_0(x), \ldots, p_{m-1}(x))$ is the zero polynomial for every listed codeword $p$, as long as the agreement of $p$ with $w$ exceeds its degree $D + m(h-1)d$, and argue that it suffices to find the roots of the reduced polynomial $Q^*(y) := Q(x, y, y^h, \ldots, y^{h^{m-1}}) \bmod E$ over the extension field $\mathbb{F}_q[x]/(E)$, of which the residue of each such $p$ is one, since $p_j = p^{h^j} \bmod E$. By our assumption that $E$ does not divide $Q$, and by the fact that the degree of $Q$ in each power of $y$ is less than $h$ (i.e. we’ve “separated” our variables so that the substitutions $y_j \mapsto y^{h^j}$ carry distinct monomials to distinct powers of $y$ and don’t cancel each other), $Q^*$ is nonzero, and its degree, hence the list size, is less than $h^m$.

The parameters can be tuned to get the desired power and message length: taking $h \approx (q/d)^{1/(m+1)}$ and $D \approx q/h^m$, it turns out that we get a rate and distance of $\frac{d+1}{mq}$ and $1 - (m+1)(d/q)^{m/(m+1)}$ respectively.

We’re extremely close, but it turns out that to close the gap in the rate-distance tradeoff, we need to get rid of some redundancy. Consider $E(x) = x^{q-1} - \gamma$, which is irreducible for $\gamma$ a multiplicative generator of $\mathbb{F}_q^*$. Then for any message polynomial $p$,

$$p(x)^q \equiv p(x^q) \equiv p(\gamma x) \pmod{E(x)}.$$
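This identity can be verified computationally for a small field; a minimal sketch over $\mathbb{F}_5$ with generator $\gamma = 2$ (helper names mine):

```python
from itertools import product

q, gamma = 5, 2                            # 2 generates F_5^* = {2, 4, 3, 1}
E = [(-gamma) % q] + [0] * (q - 2) + [1]   # E(x) = x^(q-1) - gamma

def polymulmod(a, b):
    """Product of coefficient lists (lowest degree first) modulo E, over F_q."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    while len(prod) > len(E) - 1:          # reduce top term modulo E
        top, k = prod.pop(), len(prod) - (len(E) - 1)
        for i, e in enumerate(E[:-1]):
            prod[k + i] = (prod[k + i] - top * e) % q
    return prod

def frobq(coeffs):
    """p(x)^q reduced modulo E(x), by repeated multiplication."""
    out = [1]
    for _ in range(q):
        out = polymulmod(out, list(coeffs))
    return out

def pad(a, n=q - 1):
    return (list(a) + [0] * n)[:n]

# Check p(x)^q mod E == p(gamma * x) for every p of degree < 3 (a sample).
ok = all(pad(frobq(c)) == pad([ci * pow(gamma, i, q) % q
                               for i, ci in enumerate(c)])
         for c in product(range(q), repeat=3))
```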

Observe that, taking the power to be $h = q$, the PV symbols $\mathrm{Enc}(p)(\alpha) = (p(\alpha), p(\gamma\alpha), \ldots, p(\gamma^{m-1}\alpha))$ and $\mathrm{Enc}(p)(\gamma\alpha)$ overlap in $m - 1$ positions, suggesting we can skip ahead by factors of $\gamma^m$ at a time and get a “compressed” encoding with $m$ times the rate of the PV code. In particular, we can get rid of the factor of $m$ in the denominator of the rate $\frac{d+1}{mq}$ above, so that $\rho \approx \frac{d}{q}$.

**Definition 12**: For prime power $q$, the **$q$-ary folded RS (FRS)** code of degree $d$, folding parameter $m$, and generator $\gamma$ is the code with alphabet $\mathbb{F}_q^m$, block length $n$, and messages represented by polynomials $p \in \mathbb{F}_q[x]$ of degree at most $d$, with encoding function given by

$$\mathrm{Enc}(p) = \Big( \big( p(\gamma^{jm}), p(\gamma^{jm+1}), \ldots, p(\gamma^{jm+m-1}) \big) \Big)_{0 \leq j < n},$$

where $n = \lfloor (q-1)/m \rfloor$.
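A sketch of this encoder for a toy instance over $\mathbb{F}_{13}$, with $\gamma = 2$ (a generator of $\mathbb{F}_{13}^*$) and folding parameter $m = 3$:

```python
q, m, gamma = 13, 3, 2           # 2 generates F_13^*
n = (q - 1) // m                 # block length: 4 folded symbols

def frs_encode(p):
    """Group the RS evaluations p(gamma^0), p(gamma^1), ... into m-tuples."""
    def ev(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(p)) % q
    pts = [pow(gamma, t, q) for t in range(n * m)]
    return [tuple(ev(x) for x in pts[j * m:(j + 1) * m]) for j in range(n)]

cw = frs_encode([1, 1])          # message p(x) = 1 + x
```

Each folded symbol packs $m$ consecutive evaluations along the orbit of $\gamma$, which is exactly the overlap structure exploited in the decoding argument below.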

**Result 8**: For any $s \leq m$, the $q$-ary folded Reed-Solomon code of degree $d$, generator $\gamma$, and folding parameter $m$ can be $\delta$-list-decoded in time $q^{O(m)}$ for $\delta = 1 - \frac{1}{s+1} - \frac{m}{m-s+1} \cdot \frac{d}{q}$, at a rate of roughly $\frac{d}{q}$.

**Proof**: Take a received word $w \in (\mathbb{F}_q^m)^n$. To get rid of the redundancy, we will extract out of the FRS code a PV code of redundancy $s$ and power $q$, where each symbol of the folded code contains $m - s + 1$ overlapping PV symbols. Indeed, as we discussed above, the tuple $\big( p(\gamma^{jm}), \ldots, p(\gamma^{jm+m-1}) \big)$ contains the $m - s + 1$ contiguous subsequences of the form

$$\big( p(\gamma^i), p(\gamma^{i+1}), \ldots, p(\gamma^{i+s-1}) \big), \qquad jm \leq i \leq jm + m - s,$$

each of which is a symbol of the PV code with power $q$, redundancy $s$, and $E(x) = x^{q-1} - \gamma$, since $p^{q^t} \bmod E = p(\gamma^t x)$.

Taking $s$ to be on the order of $\sqrt{m}$ will turn out to allow us to extract the most information in total (this is also where the extra $O(1/\sqrt{m})$ term in the upper bound on $\delta$ comes from).

Unfolding $w$ into the resulting PV symbols gives a received word $w'$ of block length $n(m-s+1)$ such that for every symbol on which $w$ agrees with $\mathrm{Enc}(p)$ under FRS, $w'$ agrees with $m - s + 1$ symbols of the encoding of $p$ under PV, for a total of more than $(1-\delta)\, n (m-s+1)$ positions. The problem reduces to list-decoding $w'$ under PV.

The approach is identical to the one taken in our proof of the previous result, except we will take a characteristic polynomial $Q(x, y_0, \ldots, y_{s-1})$ of total degree at most 1 in all of the variables $y_j$, rather than individual degree at most $h - 1 = q - 1$ in each of these variables. This will yield a bigger list-size bound but ultimately yield the desired improvement on the rate-distance tradeoff. Our constraints are $(D+1)(s+1) > n(m-s+1)$ to get a characteristic polynomial, as well as agreement on more than $D + d$ of the PV positions to ensure that $Q(x, p_0(x), \ldots, p_{s-1}(x)) = 0$ for all listed $p$. Tuning $D$ to be roughly $\frac{n(m-s+1)}{s+1}$ gives us the desired upper bound on $\delta$ and the desired rate. Constructing and factoring the $Q^*$ described in the previous proof takes time polynomial in its degree, but because the power is now $q$, that degree can be as large as $q^{s-1}$, and the runtime grows exponentially in $s$, and hence with the folding parameter $m$.

In fact, because the power $q$ of the PV code is nonconstant, the list size also increases exponentially in the folding parameter, as does the alphabet size $q^m$. Beyond these drawbacks, we are able to get the desired rate-distance tradeoff $\delta \approx 1 - \rho - O(1/\sqrt{m})$, approaching the optimal $1 - \rho$ as $m$ grows.
