Fractional Moments of the Geometric Distribution

In one of my papers, I needed an upper bound on the fractional moments of the geometric distribution. The integer moments have a nice-looking upper bound involving factorials, and Mathematica suggested that the same bound holds for fractional moments once we replace the factorials with the Gamma function. The problem was, I could not prove it. Worse, I could not find a proof after much searching on the Internet.

This post contains a proof that for any real \lambda > 1, the \lambda-th moment of a geometric random variable with success probability \displaystyle p \geq 1/2 is at most \displaystyle \Gamma(\lambda + 1)/p^\lambda.
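
Before the proof, here is a quick numerical sanity check (my own script, not part of the argument). It takes the geometric distribution supported on \{1, 2, \cdots\}, i.e., P(X = k) = (1-p)^{k-1} p, truncates the moment sum, and compares it with \Gamma(\lambda+1)/p^\lambda:

```python
# Numerical sanity check of the claimed bound (illustrative only; the
# truncation point and the sample values of p and lambda are my choices).
from math import gamma

def geometric_moment(lam, p, terms=10_000):
    # E[X^lam] for X geometric on {1, 2, ...} with success probability p,
    # approximated by truncating the series.
    return sum(k**lam * (1 - p)**(k - 1) * p for k in range(1, terms))

for p in (0.5, 0.7, 0.9):
    for lam in (1.5, 2.5, 3.7):
        moment = geometric_moment(lam, p)
        bound = gamma(lam + 1) / p**lam
        print(f"p={p}, lambda={lam}: moment ~ {moment:.4f}, bound = {bound:.4f}")
```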

Continue reading “Fractional Moments of the Geometric Distribution”

My Presentation on Proof-of-Work vs. Proof-of-Stake Blockchain Protocols

I gave a talk in our seminar about the proof-of-work vs. proof-of-stake blockchain paradigms. Although I don’t have an audio/video recording, here is a Google Slides rendering of my original PowerPoint slides. Some of the animations are out of place or out of order, but overall it looks okay.


I meant this talk to be accessible, so I intentionally skipped many details and strove not to flaunt a single equation.

Advertised Summary: Bitcoin is a blockchain protocol where finalized transactions need a “proof of work”. Such protocols have been criticized for their high demand for computing power, i.e., electricity. There is another family of protocols that relies instead on a “proof of stake”. In these protocols, the ability to make a transaction depends on your “stake” in the system instead of your computing power. In both cases, it is notoriously difficult to mathematically prove that these protocols are secure. Only a handful of provably secure protocols exist today. In this talk, I will tell a lighthearted story about the basics of the proof-of-work vs. proof-of-stake protocols. No equations, but a lot of movie references.

Please enjoy, and let me know if you have questions or comments.


Notes on the PCP Theorem and the Hardness of Approximation: Part 1

In this note, we are going to state the PCP theorem and describe its relation to the hardness of approximating some NP-hard problems.

PCP Theorem: the Interactive Proof View

Intuitively, a PCP (Probabilistically Checkable Proof) system is an interactive proof system where the verifier is given r(n) random bits and is allowed to look into the proof \pi at q(n) locations. If the string x \in \{0,1\}^n is indeed in the language, then there exists a proof for which the verifier always accepts. However, if x is not in the language, no prover can convince the verifier with probability more than 1/2. The proof has to be short, i.e., of size at most q(n) 2^{r(n)}. This class of languages is denoted PCP[r(n), q(n)].
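
In symbols (my paraphrase of the description above): the verifier V, on input x and random string r, queries at most q(n) positions of \pi and must satisfy

\displaystyle x \in L \implies \exists \pi : \Pr_r[V^{\pi}(x, r) \text{ accepts}] = 1, \qquad x \notin L \implies \forall \pi : \Pr_r[V^{\pi}(x, r) \text{ accepts}] \leq 1/2.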

Theorem A (PCP theorem). Every NP language has a highly efficient PCP verifier. In particular,

NP = PCP[O(\log n), O(1)].

Continue reading “Notes on the PCP Theorem and the Hardness of Approximation: Part 1”

Forkable Strings are Rare

In a blockchain protocol such as Bitcoin, the users see the world as a sequence of states. A simple yet functional view of this world, for the purpose of analysis, is a Boolean string w = w_1, w_2, \cdots of zeros and ones, where each bit is independently set to 1 (the value favoring the “bad guys”) with some fixed probability.

A bad guy is activated when w_t = 1 for some t. He may try to present the good guys with a conflicting view of the world, such as multiple candidate blockchains of equal length. This view is called a “fork”. A string w that allows the bad guy to fork (with nonnegligible probability) is called a “forkable string”. Naturally, we would like to show that forkable strings are rare: that the manipulative power of the bad guys over the good guys is negligible.

Claim ([1], Bound 2). Suppose w = w_1, \cdots, w_n is a Boolean string, with every bit independently set to 1 with probability (1-\epsilon)/2 for some \epsilon \in (0, 1). The probability that w is forkable is at most \exp(-\epsilon^3 n/2).
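
Just to get a feel for Bound 2 (with numbers of my own choosing): the \epsilon^3 factor in the exponent means n has to be fairly large before \exp(-\epsilon^3 n/2) becomes small.

```python
# Evaluate the bound exp(-eps^3 * n / 2) for a few illustrative choices of
# eps and n (my own choices, just to see how the bound behaves).
from math import exp

for eps in (0.05, 0.1, 0.2):
    for n in (10**3, 10**4, 10**5):
        print(f"eps={eps}, n={n}: bound = {exp(-eps**3 * n / 2):.3e}")
```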

In this post, we present a commentary on the proof that forkable strings are rare. I like the proof because it uses simple facts about random walks, generating functions, and stochastic domination to bound an apparently difficult random process.

Continue reading “Forkable Strings are Rare”

A Publicly Verifiable Secret Sharing Scheme

We all know Shamir’s (t,n)-Secret Sharing Scheme: A dealer fixes a prime p and selects a random polynomial P(x) of degree at most t-1 over \mathbf{Z}_p. He sets the constant coefficient equal to your secret \sigma, then distributes evaluations of this polynomial at n distinct nonzero points as shares to the n parties. Any subset of t parties can pool their shares and reconstruct the polynomial via Lagrange interpolation.

What if we don’t want to trust the dealer or the parties? Feldman’s Verifiable Secret Sharing Scheme allows the dealer to publish, in addition to the regular Shamir shares, a generator g of the group \mathbf{Z}_p^* along with t commitments c_j, one for each coefficient of the polynomial P(x). Each party can verify that his share is correct by testing whether

\displaystyle g^{P(i)}\, \overset{?}{=} \, \prod_j{c_j^{i^j}} \bmod p.
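
Here is a toy run of dealing, verification, and reconstruction (my own sketch; for concreteness I use a common instantiation in which the shares live in \mathbf{Z}_q and g generates the order-q subgroup of \mathbf{Z}_p^* for a safe prime p = 2q+1, which differs slightly from the description above):

```python
# A toy sketch of Shamir dealing + Feldman verification.  The parameters are
# mine: a safe prime p = 2q + 1 and an element g of order q, so the shares
# live in Z_q and the exponent arithmetic in the check works out cleanly.
import random

q, p, g = 11, 23, 2          # 2 has multiplicative order 11 modulo 23
t, n = 3, 5                  # threshold t, number of parties n
secret = 7

# Dealer: random polynomial of degree at most t-1 over Z_q with P(0) = secret.
coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
P = lambda x: sum(a * x**j for j, a in enumerate(coeffs)) % q

shares = {i: P(i) for i in range(1, n + 1)}           # Shamir shares P(1), ..., P(n)
commitments = [pow(g, a, p) for a in coeffs]          # Feldman commitments c_j = g^{a_j}

def verify(i, share):
    # Party i checks g^{P(i)} == prod_j c_j^{i^j} (mod p).
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = rhs * pow(c, i**j, p) % p
    return pow(g, share, p) == rhs

def reconstruct(subset):
    # Lagrange interpolation at x = 0 from the shares indexed by `subset`.
    total = 0
    for i in subset:
        num = den = 1
        for j in subset:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        total += shares[i] * num * pow(den, -1, q)
    return total % q

print(all(verify(i, s) for i, s in shares.items()))   # expect True
print(reconstruct([1, 3, 5]) == secret)               # any t shares suffice
```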

However, the shares P(i) are still private. What if we want everyone to be able to verify the correctness of all shares without knowing what the shares are? This is achieved by a Publicly Verifiable Secret Sharing Scheme, such as the one developed by Berry Schoenmakers, assuming the discrete log problem is hard.

Continue reading “A Publicly Verifiable Secret Sharing Scheme”

Two MDS Array Codes for Disk Erasures: the Blaum-Bruck-Vardy Code and the BASIC Code

In this post, I am going to review two erasure codes: the Blaum-Bruck-Vardy (BBV) code and the BASIC code (also here). Being erasure codes, their purpose is to encode a number of data disks into a number of coding disks so that when one or more data/coding disks fail, the failed disks can be reconstructed from the surviving data and coding disks.

A strength of these codes is that although the algebra is described in terms of extension fields/rings over GF(2), the encoding/decoding process uses only Boolean addition and rotation operations and no finite field operations. These codes are also MDS (Maximum Distance Separable), which means they have the largest possible minimum distance for a given message length and codeword length.

(Recall that if a code has d data components and c parity components in its generator matrix in standard form, its distance is at most c + 1 by the Singleton bound. Hence the code is MDS if and only if it can tolerate c arbitrary disk failures.)

The BASIC code does the following in relation to the BBV code:

  1. Adds a virtual parity bit after each disk, giving each disk an even parity
  2. Does polynomial arithmetic modulo 1+x^p instead of modulo h(x) = 1+x+\cdots + x^{p-1} as in the BBV code
  3. Shows equivalence to the BBV code by making a nice observation via the Chinese Remainder Theorem (a toy check of this compatibility appears after the list)
  4. Proves the MDS property for any number of coding disks when p is “large enough” and has a certain structure
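
Regarding items 1-3: over GF(2) we have 1+x^p = (1+x) h(x), and the two factors are coprime for an odd prime p since h(1) = p = 1 in GF(2); moreover, a polynomial has even parity exactly when it is divisible by 1+x. In particular, reducing modulo 1+x^p and then projecting modulo h(x) agrees with reducing modulo h(x) throughout. Here is a toy check of that compatibility (my own, with p = 5 and polynomials stored as bitmasks, not the papers' construction):

```python
# Toy compatibility check (mine, not from either paper): computing modulo
# 1 + x^p and then reducing modulo h(x) = 1 + x + ... + x^{p-1} gives the same
# answer as computing modulo h(x) directly, because h(x) divides 1 + x^p.
# Polynomials over GF(2) are stored as integers, bit i being the coefficient of x^i.

p = 5
MOD_XP1 = (1 << p) | 1        # 1 + x^p
MOD_H   = (1 << p) - 1        # 1 + x + ... + x^{p-1}

def polymod(a, m):
    # Remainder of a modulo m over GF(2).
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def polymul(a, b, m):
    # Carry-less (GF(2)) multiplication of a and b, reduced modulo m.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return polymod(r, m)

a, b = 0b11000, 0b10100       # x^3 + x^4 and x^2 + x^4

lhs = polymod(polymul(a, b, MOD_XP1), MOD_H)                # mod 1+x^p, then project
rhs = polymul(polymod(a, MOD_H), polymod(b, MOD_H), MOD_H)  # mod h(x) throughout
print(bin(lhs), bin(rhs), lhs == rhs)                       # expect True
```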

Open Question: What is the least disk size for which these codes are MDS with arbitrary distance?

Continue reading “Two MDS Array Codes for Disk Erasures: the Blaum-Bruck-Vardy Code and the BASIC Code”

When does the Discrete Fourier Transform Matrix have Nonsingular Submatrices?

I am studying a coding theory problem. The question is this:

Open Question: Is there a prime p and a positive integer d such that all square submatrices of the p\times p Discrete Fourier Transform matrix over the field GF(2^d) are nonsingular?

Currently, I have only counterexamples: Let d be the degree of the smallest extension of GF(2) which contains a nontrivial pth root of unity. Then, I know many primes p for which the p\times p DFT matrix V has a singular submatrix.
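
For what it is worth, here is a brute-force sketch (mine, not from the post) of how one can search a small case, p = 7, whose nontrivial 7th roots of unity live in GF(8) = GF(2)[x]/(x^3+x+1). By my calculation this search does find singular submatrices; for instance, the rows and columns \{0, 1, 3\} already give a singular 3 \times 3 submatrix, consistent with the counterexamples mentioned above.

```python
# Brute-force search (mine) over all square submatrices of the 7 x 7 DFT
# matrix with entries in GF(8) = GF(2)[x]/(x^3 + x + 1).  Field elements are
# 3-bit integers; omega = x is a primitive 7th root of unity.
from itertools import combinations, permutations

MOD, DEG = 0b1011, 3           # the irreducible polynomial x^3 + x + 1
p = 7
omega = 0b010                  # the element x

def gf_mul(a, b):
    # Multiply two GF(8) elements represented as bitmasks.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# The p x p DFT matrix V[i][j] = omega^{ij mod p}.
V = [[gf_pow(omega, (i * j) % p) for j in range(p)] for i in range(p)]

def det(M):
    # Determinant over a field of characteristic 2: the signs vanish, so it
    # coincides with the permanent; fine for these tiny matrices.
    d = 0
    for perm in permutations(range(len(M))):
        prod = 1
        for i, j in enumerate(perm):
            prod = gf_mul(prod, M[i][j])
        d ^= prod
    return d

singular = [(rows, cols)
            for k in range(1, p + 1)
            for rows in combinations(range(p), k)
            for cols in combinations(range(p), k)
            if det([[V[i][j] for j in cols] for i in rows]) == 0]

print(len(singular), singular[:3] if singular else "none found")
```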

In this post, I am going to show a failed attempt to answer this question using the results in this paper by Evra, Kowalski, and Lubotzky.

Continue reading “When does the Discrete Fourier Transform Matrix have Nonsingular Submatrices?”

Discrete Fourier Transform: the Intuition

Every time I think “Now I understand the Fourier Transform,” I am wrong.

Jean-Baptiste Joseph Fourier, 1768 – 1830. (Image source: Wikipedia)

Doing the Fourier Transform of a function is just seeing it from “another” point of view. The “usual” view of a function is in the standard basis \{e_1, \cdots, e_n\}. For example, f can be seen as a vector (in the basis given by the elements in the domain) whose coordinates are the evaluations of f on the elements in the domain. It can also be seen as a polynomial (in the monomial basis) whose coefficients are these evaluations. Let us call this vector u.

The same function can also be seen in the “Fourier basis”, which is just another orthogonal basis, formed by the basis vectors \{v_t\} indexed by t. The tth coordinate in the new basis is given by the inner product of u with the tth basis vector v_t. We call these inner products the Fourier coefficients. The Discrete Fourier Transform matrix (the DFT matrix) “projects” a function from the standard basis to the Fourier basis in the usual sense of projection: taking the inner product along a given direction.
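
To make the “take inner products with the Fourier basis vectors” picture concrete, here is a small numpy check (mine; it uses numpy’s sign convention for the DFT):

```python
# The k-th DFT coefficient is the inner product of u with the k-th Fourier
# basis vector v_k[m] = exp(-2*pi*i*k*m/N) (numpy's sign convention).
import numpy as np

N = 8
u = np.random.randn(N)                        # a "function" given by its values

m = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(m, m) / N)  # DFT matrix; row k is the basis vector v_k
coeffs = F @ u                                # the inner products <u, v_k>

print(np.allclose(coeffs, np.fft.fft(u)))     # expect True
```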

In this post, I am going to use elementary group theoretic notions, polynomials, matrices, and vectors. The ideas in this post will be similar to this Wikipedia article on Discrete Fourier Transform.

Continue reading “Discrete Fourier Transform: the Intuition”

Why x^p-1 Factors over GF(2) the Way It Does

Let p be a prime number. We know that F_2=GF(2)=\{0,1\}. Let F_2[x] be the ring of polynomials with indeterminate x and coefficients in F_2. The polynomial (x^p-1) \in F_2[x] factors over F_2 as follows for various p:

x^3-1=(1+x) (1+x+x^2).
x^5-1=(1+x) (1+x+x^2+x^3+x^4).
x^7-1=(1+x) (1+x+x^3) (1+x^2+x^3).
x^{11}-1=(1+x) (1+x+x^2+x^3+x^4+x^5+x^6+x^7+x^8+x^9+x^{10}).
x^{13}-1=(1+x) (1+x+x^2+x^3+x^4+x^5+x^6+x^7+x^8+x^9+x^{10}+x^{11}+x^{12}).
x^{17}-1=(1+x) (1+x^3+x^4+x^5+x^8) (1+x+x^2+x^4+x^6+x^7+x^8).

None of the factors above can be factored any further, hence they are irreducible over F_2. Let us call x+1 the trivial factor, since the root x = 1 already belongs to F_2. But why do we have two nontrivial irreducible factors of x^7-1, each of degree 3, whereas x^{13}-1 has only one nontrivial irreducible factor, of degree 12? In every example above, all the nontrivial factors have the same degree. Why?
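
These factorizations are easy to reproduce with sympy; the snippet below (mine) also prints, for comparison, the multiplicative order of 2 modulo p, a quantity that will turn out to be relevant:

```python
# Reproduce the factorizations above with sympy and, for comparison, print the
# multiplicative order of 2 modulo p (computed by brute force).
from sympy import symbols, factor_list, degree

x = symbols('x')
for p in (3, 5, 7, 11, 13, 17):
    _, factors = factor_list(x**p - 1, modulus=2)
    degrees = sorted(degree(f, x) for f, _ in factors)
    order = next(k for k in range(1, p) if pow(2, k, p) == 1)
    print(f"p={p}: factor degrees {degrees}, order of 2 mod p = {order}")
```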

Continue reading “Why x^p-1 Factors over GF(2) the Way It Does”

The Structure of a Quotient Ring modulo x^p-1

Let f(X)=X^p-1 \in F_l[X] for a prime p not dividing l. We are interested in the structure of the ring R=F_l[X]/(f). Via the Chinese Remainder Theorem, this question is equivalent to finding a factorization of f over F_l. Suppose f factors over F_l as

\displaystyle f(X) = (X-1) \prod_{i=1}^{s}{q_i(X)} \qquad \qquad (*)

where each q_i is an irreducible polynomial. What are the degrees of q_1, \cdots, q_s? What is s?

We claim that R=F_l \oplus (F_{l^r})^s, where r is the multiplicative order of l mod p and s = (p-1)/r.
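
For a concrete instance (my own example, matching the factorization of x^7-1 from the previous post): take l = 2 and p = 7, so that r = 3 (because 2^3 \equiv 1 \pmod 7) and s = (7-1)/3 = 2, and indeed

\displaystyle F_2[X]/(X^7-1) \cong F_2[X]/(X+1) \times F_2[X]/(1+X+X^3) \times F_2[X]/(1+X^2+X^3) \cong F_2 \oplus F_8 \oplus F_8.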

This structure has been used in a beautifully instructive paper by Evra, Kowalski, and Lubotzky. (Yes, the Lubotzky of Lubotzky-Phillips-Sarnak.) In this paper, they establish connections among the Fourier transform, the uncertainty principle, and the dimensions of ideals generated by polynomials in the ring R above. We’ll talk about these connections in another post. The content of this post is Proposition 2.6(3) from their paper.

Continue reading “The Structure of a Quotient Ring modulo x^p-1”