Notes On Unbiased Random Walks

[Figure: Unbiased Random Walks (source: Wikipedia)]

Imagine that a particle is walking in a two-dimensional space, starting at the origin (0, 0). At every time-step (or “epoch”) n = 0, 1, 2, 3, \cdots, it takes a \pm 1 vertical step: the particle either moves up by +1 or down by -1. This walk is “unbiased” in the sense that the up and down steps are equiprobable.

In this post, we will discuss some natural questions about this “unbiased random walk.” For example, how long will it take for the particle to return to zero? What is the probability that it will ever reach +1? When will it touch a given level a for the first time? The contents of this post are a summary of the chapter “Random Walks” from the awesome “Introduction to Probability” (Volume I) by William Feller.
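To make the setup concrete, here is a minimal simulation sketch in Python (the function names and parameters are mine, purely illustrative): it runs symmetric \pm 1 walks and records the first epoch at which each walk returns to zero, if that happens within a finite horizon.

```python
import random

def first_return_time(max_steps=10_000):
    """Return the first epoch n > 0 at which the walk is back at 0, or None."""
    position = 0
    for n in range(1, max_steps + 1):
        position += random.choice((+1, -1))   # unbiased: up and down are equiprobable
        if position == 0:
            return n
    return None

samples = [first_return_time() for _ in range(1_000)]
returned = sorted(t for t in samples if t is not None)
print(f"{len(returned)} of {len(samples)} walks returned to zero;"
      f" median return time {returned[len(returned) // 2]}")
```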

Continue reading “Notes On Unbiased Random Walks”

Characterizing the Adversarial Grinding Power in a Proof-of-Stake Blockchain Protocol

[Contents of this post are based on an ongoing discussion with Alex Russell and Aggelos Kiayias. It contains potentially unpublished material.]

In a proof-of-stake blockchain protocol such as Ouroboros, at most half of the users are dishonest. While an honest user always extends the longest available blockchain, the dishonest users try to fool him into extending a manipulated blockchain. Here, the user who is allowed to issue a block at any time-slot is called the “slot leader.” As it happens, a number of future slot leaders are computed in advance using the random values present in the blocks. Although counterintuitive, such a scheme ensures that if the adversary does not control more than half the users now, it is very unlikely that he will control more than half the future slot leaders. The time-slots are divided into “epochs” of length R.

Continue reading “Characterizing the Adversarial Grinding Power in a Proof-of-Stake Blockchain Protocol”

Forkable Strings are Rare

In a blockchain protocol such as Bitcoin, the users see the world as a sequence of states. A simple yet functional view of this world, for the purpose of analysis, is a Boolean string w = w_1, w_2, \cdots of zeros and ones, where each bit is set independently and a 1 favors the “bad guys.”

A bad guy is activated when w_t = 1 for some t. He may try to present the good guys with a conflicting view of the world, such as presenting multiple candidate blockchains of equal length. This view is called a “fork”. A string w that allows the bad guy to fork (with nonnegligible probability) is called a “forkable string”. Naturally, we would like to show that forkable strings are rare: that the manipulative power of the bad guys over the good guys is negligible.

Claim ([1], Bound 2). Suppose w =w_1, \cdots, w_n is a Boolean string, with every bit independently set to 1 with probability (1-\epsilon)/2 for some \epsilon < 1. The probability that w is forkable is at most \exp(-\epsilon^3n/2).
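To make the parameters in the claim concrete, here is a small Python sketch (my own illustration, not from [1]): it samples a characteristic string with \Pr[w_t = 1] = (1-\epsilon)/2 and evaluates the stated bound \exp(-\epsilon^3 n/2). Deciding whether a particular string is actually forkable requires the machinery of [1] and is not attempted here.

```python
import math
import random

def characteristic_string(n, eps):
    """Each bit is 1 (adversarial) independently with probability (1 - eps) / 2."""
    p_one = (1 - eps) / 2
    return [1 if random.random() < p_one else 0 for _ in range(n)]

n, eps = 10_000, 0.1
w = characteristic_string(n, eps)
print(f"adversarial slots: {sum(w)} out of {n}")
print(f"bound on Pr[w is forkable]: exp(-eps^3 n / 2) = {math.exp(-eps**3 * n / 2):.3g}")
```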

In this post, we present a commentary on the proof that forkable strings are rare. I like the proof because it uses simple facts about random walks, generating functions, and stochastic domination to bound an apparently difficult random process.

Continue reading “Forkable Strings are Rare”

Some Facts about the Gambler’s Ruin Problem

Consider a random walk in the two-dimensional discrete space, where the horizontal direction is indexed by nonnegative time steps and the vertical direction is indexed by integers.

A particle is at its initial position z > 0 at time 0. At every time step, it independently takes a step up or down: up with probability p and down with probability q = 1-p. If p = q the walk is called symmetric. Let p_z and q_z be the probabilities that the particle ever reaches a fixed level a > z or 0, respectively. When it does, the walk stops. The quantity q_z is the ruin probability. The boundary conditions are q_0 = 1 and q_a = 0.

The Gambler’s Ruin Problem: What is the probability that a walk, starting at some z > 0, eventually reaches the origin?
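As a preview, here is a minimal Python sketch of the classical closed-form answer (stated rather than derived; the function and parameter names are mine): with absorbing barriers at 0 and a and up-probability p, the ruin probability starting from z is ((q/p)^a - (q/p)^z) / ((q/p)^a - 1) when p \ne q, and 1 - z/a in the symmetric case.

```python
def ruin_probability(z, a, p):
    """Probability of hitting 0 before a, starting from z, with up-probability p."""
    q = 1 - p
    if p == q:                      # symmetric walk
        return 1 - z / a
    r = q / p
    return (r**a - r**z) / (r**a - 1)

print(ruin_probability(z=3, a=10, p=0.50))   # 0.7 for the symmetric walk
print(ruin_probability(z=3, a=10, p=0.45))   # ruin becomes more likely when p < 1/2
```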

Continue reading “Some Facts about the Gambler’s Ruin Problem”

A Publicly Verifiable Secret Sharing Scheme

We all know Shamir’s (t,n)-Secret Sharing Scheme: A dealer fixes a prime field \mathbf{Z}_p and selects a random polynomial P(x) of degree at most t-1 over this field. He sets the constant coefficient equal to the secret \sigma, then distributes n evaluations of this polynomial at distinct nonzero points as shares to the n parties. Any subset of t parties can pool their shares and reconstruct the polynomial via Lagrange interpolation.

What if we want to trust neither the dealer nor the parties? Feldman’s Verifiable Secret Sharing Scheme allows the dealer to publish, in addition to the regular Shamir shares, a generator g of the group \mathbf{Z}_p^* along with t commitments c_j, one for each coefficient of the polynomial P(x). Each party can verify that his share is correct by testing whether

\displaystyle g^{P(i)}\, \overset{?}{=} \, \prod_j{c_j^{i^j}} \bmod p.
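The following Python sketch puts these pieces together for toy parameters. The safe prime p = 2q + 1, the generator g of the order-q subgroup, and the convention of doing share arithmetic modulo q are my own assumptions (they keep the exponent arithmetic consistent); the verification step is exactly the check displayed above.

```python
import random

q, p = 1019, 2039      # toy parameters (not secure): p = 2q + 1, both prime
g = 4                  # 4 = 2^2 generates the subgroup of order q in Z_p^*

def share(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t, plus Feldman commitments."""
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
    shares = [(i, sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    commits = [pow(g, a, p) for a in coeffs]              # c_j = g^{a_j} mod p
    return shares, commits

def verify(i, s_i, commits):
    """Check g^{P(i)} == prod_j c_j^{i^j} (mod p)."""
    rhs = 1
    for j, c in enumerate(commits):
        rhs = rhs * pow(c, pow(i, j, q), p) % p           # exponents reduced mod q = ord(g)
    return pow(g, s_i, p) == rhs

def reconstruct(subset):
    """Lagrange interpolation at x = 0 over Z_q from any t shares."""
    secret = 0
    for i, s_i in subset:
        lam = 1
        for m, _ in subset:
            if m != i:
                lam = lam * m % q * pow(m - i, -1, q) % q
        secret = (secret + s_i * lam) % q
    return secret

shares, commits = share(secret=123, t=3, n=5)
assert all(verify(i, s, commits) for i, s in shares)      # every share passes the check
assert reconstruct(shares[:3]) == 123                     # any 3 shares recover the secret
```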

However, the shares P(i) are still private. What if we want everyone to be able to verify the correctness of all shares without knowing what the shares are? This is achieved by a Publicly Verifiable Secret Sharing Scheme, such as the one developed by Berry Schoenmakers, assuming the discrete log problem is hard.

Continue reading “A Publicly Verifiable Secret Sharing Scheme”

Two MDS Array Codes for Disk Erasures: the Blaum-Bruck-Vardy Code and the BASIC Code

In this post, I am going to review two erasure codes: the Blaum-Bruck-Vardy (BBV) code and the BASIC code (also here). Being erasure codes, their purpose is to encode a number of data disks into a number of coding disks so that when one or more data/coding disks fail, the failed disks can be reconstructed from the surviving data and coding disks.

A strength of these codes is that although the algebra is described on extension fields/rings over GF(2), the encoding/decoding process uses only Boolean addition and rotation operations and no finite-field operations. These codes are also MDS (Maximum Distance Separable), which means they have the largest possible (minimum) distance for a fixed message-length and codeword-length.

(Recall that if a code has d data components and c parity components in its generator matrix in standard form, its distance is at most c + 1 by the Singleton bound. Hence the code is MDS if and only if it can tolerate c arbitrary disk failures.)
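To illustrate the only arithmetic these codes need, here is a toy Python sketch (not the actual BBV or BASIC encoder) of the ring GF(2)[x]/(x^p + 1) that the BASIC code works in (see item 2 of the list below): column addition is a bitwise XOR, and multiplication by x is a cyclic rotation of a p-bit column, so no finite-field multiplications are required.

```python
p = 5                              # toy column height; the real codes need a suitable prime p

def add(a, b):
    """Ring addition: XOR of two p-bit columns (columns stored as Python ints)."""
    return a ^ b

def rot(a, k=1):
    """Multiplication by x^k modulo x^p + 1: a cyclic rotation by k positions."""
    k %= p
    mask = (1 << p) - 1
    return ((a << k) | (a >> (p - k))) & mask

col_a, col_b = 0b00101, 0b10011
parity = add(col_a, rot(col_b))    # a typical parity symbol: XOR of (rotated) columns
assert rot(col_a, p) == col_a      # x^p acts as the identity, so rotations wrap around
```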

The BASIC code does the following things in relation to the BBV code:

  1. Adds a virtual parity bit after each disk, giving each disk an even parity
  2. Does polynomial arithmetic modulo 1+x^p instead of h(x) = 1+x+\cdots + x^{p-1} as in the case of the BBV code
  3. Shows equivalence to the BBV code by making a nice observation via Chinese Remainder Theorem
  4. Proves MDS property for any number of coding disks when p is “large enough” and has a certain structure

Open Question: What is the least disk size for which these codes are MDS with arbitrary distance?

Continue reading “Two MDS Array Codes for Disk Erasures: the Blaum-Bruck-Vardy Code and the BASIC Code”

Generalized Vandermonde Matrices with Missing Row-Powers

Let [n] := \{0, 1, 2, \cdots, n -1 \}. Let x = (x_0, \cdots, x_{n-1}) with distinct x_j's. Let R = \{0 = r_0, r_1, \cdots, r_{n-1} = r\} be a set of distinct nonnegative row-powers. Consider the n \times n Generalized Vandermonde matrix V(x ; R) := \left( x_j^{r_i} \right)_{i,j \in [n]}. When R = [n], this matrix becomes the ordinary Vandermonde matrix, V(x).

An equivalent description of R is given by the largest row-power r \in R together with the set of row-powers missing from [r]: that is, the elements that are in [r] but not in R. Let L_r = \{\ell_0, \ell_1, \cdots \} be this set. Define the punctured Generalized Vandermonde matrix V_{\perp}(x; L_r) := V(x; R). Let s_k(x) be the kth elementary symmetric polynomial.

Now we are ready to present some interesting results without proof, from the paper “Lower bound on Sequence Complexity” by Kolokotronis, Limniotis, and Kalouptsidis.

\displaystyle \det V_{\perp}(x ; \{\ell \}) = \det V(x) \; s_{n-\ell}(x).

\displaystyle \det V_{\perp}(x ; \{\ell_0, \ell_1\}) = \det V(x) \; \det \left( s_{n-\ell_i+j}(x) \right)_{i,j \in [2]}.

\displaystyle \det V_{\perp}(x ; L) = \det V(x) \; \det \left( s_{n-\ell_i+j}(x) \right)_{i,j \in [s]}, \text{ where } L = \{ \ell_0, \ell_1, \cdots, \ell_{s-1}\}.
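As a quick sanity check, here is a small sympy verification of the first identity (the example is mine): take n = 3 variables and a single missing row-power \ell = 1, so the row-powers are \{0, 2, 3\} and the claim reads \det V_{\perp} = \det V(x) \, s_2(x).

```python
import sympy as sp

x0, x1, x2 = xs = sp.symbols('x0 x1 x2')
V     = sp.Matrix([[x**r for x in xs] for r in (0, 1, 2)])   # ordinary Vandermonde, n = 3
Vperp = sp.Matrix([[x**r for x in xs] for r in (0, 2, 3)])   # row-power l = 1 removed
s2    = x0*x1 + x0*x2 + x1*x2                                # elementary symmetric s_{n-l} = s_2
assert sp.expand(Vperp.det() - V.det() * s2) == 0            # det V_perp == det V(x) * s_2(x)
```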

I will add some applications later on.