PRGs and Hardness Amplification Part 2 - Local Decoding

(This is a continuation of a previous post.)

The goal now is to find such an encoding/decoding pair. We reduce this problem further to the problem of finding an algorithm that corrects a word in \Sigma^{L'} to a nearby codeword (rather than decoding it to a message) with nontrivial accuracy; the reduction is valid as long as we can efficiently translate between message positions and codeword positions. Formally,

Definition 7: A local \delta-correcting algorithm D for an encoding E is a probabilistic algorithm that, given oracle access to a received word g and an input position x, outputs the value f'(x) with probability at least 2/3 for any codeword f' that is \delta-close to g. For coin tosses r, the output of the algorithm is denoted D^g(x;r), and the probability is taken over r.
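As a concrete illustration (a sketch of ours, not a construction from the survey), here is the classical two-query self-corrector for the Hadamard code, whose codewords are the \mathbb{F}_2-linear functions on \{0,1\}^n. Linearity gives f'(x) = f'(x\oplus r)\oplus f'(r) for any shift r, and since x\oplus r and r are each uniformly distributed, both queries land on uncorrupted positions with probability at least 1-2\delta; a majority vote over independent trials then pushes the success probability up to 2/3 once \delta is bounded below 1/4. The oracle g, the trial count, and the function names below are our own choices.

    import random

    def hadamard_local_correct(g, x, n, trials=15):
        # Local corrector sketch for the Hadamard code. Positions are
        # n-bit integers; g is an oracle mapping a position to a bit.
        votes = 0
        for _ in range(trials):
            r = random.getrandbits(n)    # uniform shift r in {0,1}^n
            votes += g(x ^ r) ^ g(r)     # two queries, combined via linearity
        return int(votes > trials // 2)  # majority vote over the trials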

Definition 8: An encoding E: \{0,1\}^L\to\Sigma^{L'} is systematic if there exists a “translation” map T: \{0,...,L-1\}\to\{0,...,L'-1\}, computable in time polynomial in the bit length of its input (i.e., \text{poly}(\log L)), and a map t: \Sigma\to\{0,1\} such that for all messages f and corresponding codewords f', f(x) = t(f'(T(x))).
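To make the definition concrete, here is a deliberately trivial systematic encoding (ours, purely for illustration, and far weaker than the Reed-Muller encoding used later): the codeword is just the message followed by a parity symbol, so T is the identity on positions and t reads the bit back out of the symbol.

    # Toy systematic encoding: codeword = message plus one parity symbol,
    # so L' = L + 1. Both T and t run in time poly(log L).

    def encode(f):            # f: list of L message bits
        return f + [sum(f) % 2]

    def T(x):                 # translation map {0,...,L-1} -> {0,...,L'-1}
        return x              # message bit x sits at codeword position x

    def t(sigma):             # map Sigma -> {0,1} extracting a message bit
        return sigma

    f = [1, 0, 1, 1]
    fp = encode(f)
    assert all(f[x] == t(fp[T(x)]) for x in range(len(f)))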

We can get a decoder by sending an input position x\in\{0,...,L-1\} through such a translation map and then through a local corrector:

Result 4: If E is a systematic encoding and its code has a local \delta-correcting algorithm running in time t, then E has a local \delta-decoding algorithm running in time t + \text{poly}(\log L).
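The construction behind Result 4 is just composition, as the following sketch (with hypothetical names) makes explicit: translate the message position with T, correct at the translated position, and map the recovered symbol back to a bit.

    def local_decode(corrector, T, t, g, x):
        # Local delta-decoder from Result 4: the only overhead on top of
        # the corrector's running time is the poly(log L) cost of T and t.
        return t(corrector(g, T(x)))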

We will design a local corrector and a systematic encoding for the Reed-Muller code. We then concatenate with the Hadamard code to obtain a binary alphabet, and this will turn out to give the desired hardness amplification.


PRGs and Hardness Amplification Part 1 - Preliminaries

In this series, we’ll be exploring some of the material from the last chapter of Vadhan’s survey, in particular results on amplifying the hardness of computing functions and on obtaining pseudorandom generators from hard functions. In this relatively short post, we begin by providing preliminary definitions and a motivation related to derandomization. We then close with a glimpse of our first step in hardness amplification.

Preliminaries

Roughly speaking, generators are deterministic functions that stretch uniformly random seeds into longer strings of bits that appear random. It turns out that our usual ways of quantifying “randomness”, via min-entropy H_{\infty} and via statistical difference from the uniform distribution, are both insufficient here: H_{\infty}(f(X))\le H_{\infty}(X) for any random variable X and deterministic f, so a generator stretching d bits to m > d bits outputs a distribution of min-entropy at most d, which is in turn statistically far from U_m. In place of statistical or information-theoretic measures, we thus use a complexity-theoretic one: a variable will seem random if it is impossible for a so-called nonuniform algorithm running within a certain time limit to tell it apart from a uniformly random variable. We first define nonuniformity:
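The min-entropy inequality has a one-line proof: if x^* is a most likely value of X, then every x with f(x) = f(x^*) contributes to the probability of the output f(x^*), so

\max_y P(f(X)=y) \;\ge\; P(f(X)=f(x^*)) \;\ge\; P(X=x^*) \;=\; \max_x P(X=x),

and taking -\log of both sides gives H_{\infty}(f(X)) \le H_{\infty}(X).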

Definition 0: An algorithm T deciding a language L is a-nonuniform for a: \mathbb{N}\to\mathbb{N} if there exist advice bitstrings \{\alpha_k\}_{k\in\mathbb{N}}, with each \alpha_k\in\{0,1\}^* of length at most a(k), and an algorithm T' deciding some language L' such that T(x)=1 iff T'(x,\alpha_{|x|})=1.

We will not make use of this definition in its full generality; it will suffice to talk about algorithms which operate with some “advice” hardwired in, and the length of this advice will be reasonable enough that we won’t bother with the data of the length function a. We can now formalize our abovementioned ideas of “randomness” and pseudorandom generators.

Definition 1: Random variables X and Y are (nonuniformly) (t,\epsilon)-computationally indistinguishable if for every nonuniform algorithm T running in time at most t, |P(T(X)=1) - P(T(Y)=1)|\le\epsilon.

Definition 2: A (t,\epsilon)-pseudorandom generator is a deterministic function G: \{0,1\}^d\to\{0,1\}^m such that G(U_d) and U_m are (t,\epsilon)-computationally indistinguishable.
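As a toy illustration of these two definitions (the generator, distinguisher, and names here are ours), consider a “generator” that pads its seed with a zero. A distinguisher that simply reads the last output bit accepts U_m half the time and G(U_d) never, so it achieves advantage about 1/2 in constant time, and this G is far from pseudorandom.

    import random

    def G(seed_bits):                 # d-bit seed -> (d+1)-bit output
        return seed_bits + [0]        # pad with a zero: a terrible generator

    def T(bits):                      # distinguisher: accept iff last bit is 1
        return bits[-1]

    def advantage(num_samples=100_000, d=8):
        # Empirically estimate |P(T(G(U_d))=1) - P(T(U_{d+1})=1)|.
        acc_G = sum(T(G([random.getrandbits(1) for _ in range(d)]))
                    for _ in range(num_samples)) / num_samples
        acc_U = sum(T([random.getrandbits(1) for _ in range(d + 1)])
                    for _ in range(num_samples)) / num_samples
        return abs(acc_G - acc_U)

    print(advantage())                # prints roughly 0.5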


Randomness Extractors and Relationships Between Pseudorandom Objects

“Everything is vitally connected, not merely artificially related, so that when I see anything, I am seeing everything. When I speak to anyone, I am speaking to everybody. When I touch anything, I touch all things, and when I know one thing, I know everything.” – Brihadaranyaka Upanishad

In this post, I’ll cover some of what I think are the coolest ideas in Vadhan’s survey: the principles unifying the main pseudorandom objects studied. We will continue where we left off in the last post by illustrating a connection between list encoding and expanders. We then introduce the concept of randomness extractors and relate these to hash functions, expanders, and list encoding.

NOTE: This post only scratches the surface of what Vadhan covers here.
