# Random Walk on Integers

Imagine that a particle is walking in a two-dimensional discrete space (time on the horizontal axis, position on the vertical axis), starting at the origin $(0, 0)$. At every time-step (or “epoch”) $n = 1, 2, 3, \cdots$ it takes a vertical step: up by $+1$ or down by $-1$, each with probability $1/2$. This walk is “unbiased” in the sense that the up/down steps are equiprobable.

In this post, we will discuss some natural questions about this “unbiased” or “simple” random walk. For example, how long will it take for the particle to return to zero? What is the probability that it will ever reach +1? When will it touch $a$ for the first time? Contents of this post are a summary of the Chapter “Random Walks” from the awesome “Introduction to Probability” (Volume I) by William Feller. Also, this is an excellent reference.

Edit: the Biased Case. The biased walk follows

$\displaystyle p:=\Pr[ \text{down}]\, ,\qquad q:=\Pr[ \text{up} ]\, ,\qquad p + q = 1$.

The lazy walk follows

$\displaystyle p:=\Pr[ \text{down}]\, ,\qquad q:=\Pr[ \text{up} ]\, , \qquad r := \Pr[\text{stay}]\, ,\qquad p + q + r = 1$.

In the exposition, we’ll first state the unbiased case and when appropriate, follow it up with the biased case.
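Before diving in, here is a quick Monte Carlo sketch (my own illustration, not from Feller) of the first question above: how long until the unbiased walk first reaches $+1$?

```python
import random

def first_passage_time(target=1, max_steps=10_000, rng=random.Random(0)):
    # Unbiased +/-1 steps; return the first time the walk hits `target`,
    # or None if it fails to do so within max_steps.
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((1, -1))
        if pos == target:
            return t
    return None

times = [first_passage_time() for _ in range(2000)]
hit = [t for t in times if t is not None]
print("fraction that reached +1:", len(hit) / len(times))
```

The walk reaches $+1$ with probability $1$, but the expected hitting time is infinite, which is why a step cap is needed. Note also that the hitting time is always odd, by parity.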

# Characterizing the Adversarial Grinding Power in a Proof-of-Stake Blockchain Protocol

[Contents of this post are based on an ongoing discussion with Alex Russell and Aggelos Kiayias. It contains potentially unpublished material.]

In a proof-of-stake blockchain protocol such as Ouroboros, at most half of the users are dishonest. While an honest user always extends the longest available blockchain, the dishonest users try to fool him into extending a manipulated blockchain. Here, the user who is allowed to issue a block at any time-slot is called the “slot leader.” As it happens, a number of future slot leaders are computed in advance using the random values present in the blocks. Although counterintuitive, such a scheme ensures that if the adversary does not control more than half the users now, it is very unlikely that he can control more than half the slot leaders. The time-slots are divided into “epochs” of length $R$.

# Forkable Strings are Rare

In a blockchain protocol such as Bitcoin, the users see the world as a sequence of states. A simple yet functional view of this world, for the purpose of analysis, is a Boolean string $w = w_1, w_2, \cdots$ of zeros and ones, where each bit is set independently and a $1$ favors the “bad guys.”

A bad guy is activated when $w_t = 1$ for some $t$. He may try to present the good guys with a conflicting view of the world, such as presenting multiple candidate blockchains of equal length. This view is called a “fork”. A string $w$ that allows the bad guy to fork (with nonnegligible probability) is called a “forkable string”. Naturally, we would like to show that forkable strings are rare: that the manipulative power of the bad guys over the good guys is negligible.

Claim ([1], Bound 2). Suppose $w =w_1, \cdots, w_n$ is a Boolean string, with every bit independently set to $1$ with probability $(1-\epsilon)/2$ for some $\epsilon < 1$. The probability that $w$ is forkable is at most $\exp(-\epsilon^3n/2)$.

In this post, we present a commentary on the proof that forkable strings are rare. I like the proof because it uses simple facts about random walks, generating functions, and stochastic domination to bound an apparently difficult random process.
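As a quick numerical illustration (my own, not from [1]), the bound can be inverted to ask how long $w$ must be before forkability is negligible:

```python
import math

def forkable_bound(n, eps):
    # Upper bound exp(-eps^3 * n / 2) on the probability that a length-n
    # string with bias (1 - eps)/2 is forkable (Bound 2 of [1]).
    return math.exp(-eps**3 * n / 2)

def slots_needed(eps, security_bits):
    # Smallest n making the bound at most 2^(-security_bits); my own helper.
    return math.ceil(2 * security_bits * math.log(2) / eps**3)

n = slots_needed(0.1, 80)
print(n, forkable_bound(n, 0.1))  # roughly 1.1e5 slots suffice for 2^-80
```

The cubic dependence on $\epsilon$ is the painful part: halving the honest advantage multiplies the required length by eight.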

# Some Facts about the Gambler’s Ruin Problem

Consider a random walk in the two-dimensional discrete space, where the horizontal direction is indexed by nonnegative time steps and the vertical direction is indexed by integers.

The particle is at its initial position $z > 0$ at time $0$. At every time step, it independently takes a step up or down: up with probability $p$ and down with probability $q = 1-p$. If $p = q$, the walk is called symmetric. Let $a > z$ be a constant. If the walk reaches either $a$ or $0$, it stops. Define the two quantities below.

Escape Probability.

$\displaystyle p_z := \Pr[ \text{walk reaches } a \text{ before hitting }0]$.

Ruin Probability.

$\displaystyle q_z := \Pr[ \text{walk hits zero before hitting }a]$ .

It just so happens that given enough time, the walk almost surely either ruins or escapes; that is, $p_z + q_z = 1$. The boundary conditions are $q_0 = 1$ and $q_a = 0$.

The Gambler’s Ruin Problem: What is the probability that a walk, starting at some $z > 0$, eventually reaches the origin?
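As a sanity check, here is a small simulation (my own sketch) against the closed-form ruin probability from Feller, Chapter XIV: $q_z = \frac{(q/p)^a - (q/p)^z}{(q/p)^a - 1}$ for $p \neq q$, and $q_z = (a-z)/a$ in the symmetric case.

```python
import random

def ruin_prob(z, a, p):
    # Closed-form ruin probability: start at z, absorb at 0 or a,
    # p = Pr[step up]. Symmetric case p = 1/2 gives (a - z) / a.
    if p == 0.5:
        return (a - z) / a
    r = (1 - p) / p
    return (r**a - r**z) / (r**a - 1)

def simulate(z, a, p, trials=100_000, rng=random.Random(1)):
    ruined = 0
    for _ in range(trials):
        pos = z
        while 0 < pos < a:
            pos += 1 if rng.random() < p else -1
        ruined += (pos == 0)
    return ruined / trials

# e.g. symmetric walk from z = 3 with barrier a = 10: ruin probability 0.7
assert abs(simulate(3, 10, 0.5) - ruin_prob(3, 10, 0.5)) < 0.01
```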

# A Publicly Verifiable Secret Sharing Scheme

We all know Shamir’s $(t,n)$-Secret Sharing Scheme: A dealer fixes a prime $p$ and the field $\mathbf{Z}_p$, and selects a random polynomial $P(x)$ of degree at most $t-1$ over this field. He sets the constant coefficient equal to the secret $\sigma$, then distributes evaluations of this polynomial at $n$ distinct nonzero points as shares to the $n$ parties. Any subset of $t$ parties can pool their shares and reconstruct the polynomial via Lagrange interpolation.

What if we don’t want to trust the dealer or the parties? Feldman’s Verifiable Secret Sharing Scheme allows the dealer to publish, in addition to the regular Shamir-shares, a generator $g$ of the group $\mathbf{Z}_p^*$ along with $t$ commitments $c_j = g^{a_j}$, one for each coefficient $a_j$ of the polynomial $P(x)$. Each party can verify that his share is correct by testing whether

$\displaystyle g^{P(i)}\, \overset{?}{=} \, \prod_j{c_j^{i^j}} \bmod p$.

However, the shares $P(i)$ are still private. What if we want everyone to be able to verify the correctness of all shares without knowing what the shares are? This is achieved by a Publicly Verifiable Secret Sharing Scheme, such as the one developed by Berry Schoenmakers, assuming the discrete log problem is hard.
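The Shamir/Feldman construction above can be sketched in a few lines. Toy parameters only (mine, not from any paper): $p = 23$, $q = 11$, and $g = 4$ generating the order-$q$ subgroup of $\mathbf{Z}_p^*$; shares live in $\mathbf{Z}_q$ so they sit in the exponent of $g$. A real deployment would use a large prime-order group.

```python
import random

p, q, g = 23, 11, 4   # g generates the order-q subgroup of Z_p^*

def share(secret, t, n):
    # Shamir: random degree-(t-1) polynomial over Z_q with constant term = secret.
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
    shares = [(i, sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    commits = [pow(g, c, p) for c in coeffs]   # Feldman commitments c_j = g^{a_j}
    return shares, commits

def verify(i, s, commits):
    # Check g^{P(i)} == prod_j c_j^{i^j} mod p; exponents reduce mod ord(g) = q.
    rhs = 1
    for j, c in enumerate(commits):
        rhs = rhs * pow(c, pow(i, j, q), p) % p
    return pow(g, s, p) == rhs

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over Z_q.
    secret = 0
    for i, si in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        secret = (secret + si * num * pow(den, q - 2, q)) % q
    return secret

shares, commits = share(7, t=3, n=5)
assert all(verify(i, s, commits) for i, s in shares)
assert reconstruct(shares[:3]) == 7
```

Note that in Feldman’s scheme the commitments leak $g^\sigma$, which is why it is only computationally hiding.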

# Two MDS Array Codes for Disk Erasures: the Blaum-Bruck-Vardy Code and the BASIC Code

In this post, I am going to review two erasure codes: the Blaum-Bruck-Vardy code and the BASIC code (also here). These are erasure codes, which means their purpose is to encode a number of data disks into a number of coding disks so that when one or more data/coding disks fail, the failed disks can be reconstructed from the surviving data and coding disks.

A strength of these codes is that although the algebra is described on extension fields/rings over $GF(2)$, the encoding/decoding process uses only Boolean addition/rotation operations and no finite-field operations. These codes are also MDS (Maximum Distance Separable), which means they have the largest possible (minimum) distance for a fixed message-length and codeword-length.

(Recall that if a code has $d$ data components and $c$ parity components in its generator matrix in standard form, its distance is at most $c + 1$ by the Singleton bound. Hence the code is MDS if and only if it can tolerate $c$ arbitrary disk failures.)

The BASIC code does the following, in relation to the BBV code:

1. Adds a virtual parity bit after each disk, giving each disk an even parity
2. Does polynomial arithmetic modulo $1+x^p$ instead of $h(x) = 1+x+\cdots + x^{p-1}$ as in the case of BBV code
3. Shows equivalence to the BBV code by making a nice observation via Chinese Remainder Theorem
4. Proves MDS property for any number of coding disks when $p$ is “large enough” and has a certain structure
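Item 2 is the whole trick: modulo $1 + x^p$, multiplying a column by a power of $x$ is a cyclic rotation of a $p$-bit word, and addition is XOR — the only operations these array codes need. A toy sketch (illustrative parameters and data, mine, not from either paper):

```python
p = 5
MASK = (1 << p) - 1

def rot(a, k):
    # Multiply a(x) by x^k modulo x^p + 1: cyclic left rotation by k bits.
    k %= p
    return ((a << k) | (a >> (p - k))) & MASK if k else a

# Two toy parity columns over three data columns:
#   P = d0 + d1 + d2,   Q = d0 + x*d1 + x^2*d2   (all arithmetic mod x^p + 1)
data = [0b10110, 0b01101, 0b11000]
P = data[0] ^ data[1] ^ data[2]
Q = data[0] ^ rot(data[1], 1) ^ rot(data[2], 2)

# Recover a single lost data column (say column 1) from P alone:
recovered = P ^ data[0] ^ data[2]
assert recovered == data[1]
```

Recovering two lost data columns uses both $P$ and $Q$ together with the rotation structure; that is where the choice of modulus ($1+x^p$ versus $1+x+\cdots+x^{p-1}$) starts to matter.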

Open Question: What is the least disk size for which these codes are MDS with arbitrary distance?

# Generalized Vandermonde Matrices with Missing Row-Powers

Let $[n] := \{0, 1, 2, \cdots, n -1 \}$. Let $x = (x_0, \cdots, x_{n-1})$ with distinct $x_j$s. Let $R = \{0 = r_0 < r_1 < \cdots < r_{n-1} = r\}$ be a set of distinct nonnegative row-powers. Consider the $n \times n$ Generalized Vandermonde matrix $V(x ; R) := \left( x_j^{r_i} \right)_{i,j \in [n]}$. When $R = [n]$, this matrix becomes the ordinary Vandermonde matrix, $V(x)$.

An equivalent description of $R$ is the largest row-power $r \in R$ and the set of missing rows from $[r]$: that is, the items that are in $[r]$ but not in $R$. Let $L_r = \{\ell_0, \ell_1, \cdots \}$ be this set. Define the punctured Generalized Vandermonde matrix $V_{\perp}(x; L_r) := V(x; R)$. Let $s_k(x)$ be the $k$th elementary symmetric polynomial.

Now we are ready to present some interesting results without proof, from the paper “Lower bound on Sequence Complexity” by Kolokotronis, Limniotis, and Kalouptsidis.

$\displaystyle \det V_{\perp}(x ; \{\ell \}) = \det V(x) \, s_{n-\ell}(x).$

$\displaystyle \det V_{\perp}(x ; \{\ell_0, \ell_1\}) = \det V(x) \, \det \left( s_{n-\ell_i+j}(x) \right)_{i,j \in [2]}.$

$\displaystyle \det V_{\perp}(x ; L) = \det V(x) \, \det \left( s_{n-\ell_i+j}(x) \right)_{i,j \in [s]} \text{ where } L = \{ \ell_0, \ell_1, \cdots, \ell_{s-1}\}.$
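A quick numerical check of the first identity (my own example): take $n = 3$, $x = (1, 2, 3)$, and drop the row-power $\ell = 1$, so $R = \{0, 2, 3\}$.

```python
from itertools import combinations
from math import prod

def det(M):
    # Exact integer determinant by Laplace expansion (fine for tiny n).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def elem_sym(xs, k):
    # k-th elementary symmetric polynomial s_k evaluated at xs.
    return sum(prod(c) for c in combinations(xs, k))

x, n, l = (1, 2, 3), 3, 1
V = [[xj**r for xj in x] for r in range(n)]          # ordinary Vandermonde
V_punct = [[xj**r for xj in x] for r in (0, 2, 3)]   # row-power 1 removed
assert det(V_punct) == det(V) * elem_sym(x, n - l)   # 22 == 2 * 11
```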

I will add some applications later on.

# Vandermonde Submatrices and Arithmetic Progressions

[This post, which is based on an ongoing discussion with Alex Russell and Ravi Sundaram, contains some unpublished results.]

Currently, we are asking whether all submatrices of the order-$p$ Vandermonde matrix over a finite extension of $GF(2)$ are invertible where $p$ is prime. The answer is “no” in general: there are examples of fields where the Vandermonde matrix has a singular submatrix.

We can ask an easier(?) question, though. What happens if we randomly sample a set of columns and look at the submatrices formed by subsets of the sampled columns? With a touch of beautiful insight, Professor Russell has connected Szemerédi’s theorem on arithmetic progressions with this question.

Let $AP_k$ denote an arithmetic progression of length $k$. Let $[N] := \{1, 2, \cdots, N\}$ for $N \in \mathbb{N}$.

The Szemerédi theorem says that any subset $S \subset \mathbb{N}$ of positive upper density contains an $AP_k$ for every $k \in \mathbb{N}$. A finitary version says: fix your favourite $k \in \mathbb{N}$ and $\delta \in (0, 1]$. Then there exists a natural number $N := N_{k, \delta}$ such that any subset $S \subset [N]$ of size at least $\delta N$ contains an $AP_k$. Yet another version says:

Szemerédi’s Theorem. The size of the largest subset $S \subset [N]$ without an $AP_k$ cannot be too large; in particular, it is $o(N)$.

Recall that a function $f$ is $o(g)$ if it grows slowly compared to $g$, so that $\lim_{x\rightarrow \infty}{f(x)/g(x)} = 0$.
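For tiny $N$, the extremal quantity in the theorem can be computed by brute force. A sketch (my own; `r3` is a hypothetical helper name) for 3-term progressions:

```python
from itertools import combinations

def r3(N):
    # Size of the largest subset of {1, ..., N} with no 3-term arithmetic
    # progression, by exhaustive search (only feasible for small N).
    for size in range(N, 0, -1):
        for S in combinations(range(1, N + 1), size):
            # a < b < c form an AP exactly when 2b = a + c.
            if not any(2 * b == a + c for a, b, c in combinations(S, 3)):
                return size
    return 0

print([r3(N) for N in range(1, 10)])
```

The slow growth of these values against $N$ is exactly the $o(N)$ statement above, in miniature.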

# When does the Discrete Fourier Transform Matrix have Nonsingular Submatrices?

I am studying a coding theory problem. The question is this:

Open Question: Is there a prime $p$ and a positive integer $d$ such that all submatrices of the $p\times p$ Discrete Fourier Transform matrix over the field $GF(2^d)$ are nonsingular?

Currently, I have only counterexamples: let $d$ be the degree of the smallest extension of $GF(2)$ which contains a nontrivial $p$th root of unity. Then, I know a lot of primes $p$ for which the corresponding DFT matrix has a singular submatrix.

In this post, I am going to show a failed attempt to answer this question using the results in this paper by Evra, Kowalski, and Lubotzky.
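A brute-force search for singular submatrices is easy to write for small parameters. The sketch below (my own; it does not settle the question) uses $p = 5$, $d = 4$ (since $5 \mid 2^4 - 1$), with $GF(16)$ built from the primitive polynomial $x^4 + x + 1$:

```python
from functools import reduce
from itertools import combinations
from operator import xor

MOD, D = 0b10011, 4          # GF(16) = GF(2)[x] / (x^4 + x + 1)

def gf_mul(a, b):
    # Carry-less multiplication with reduction modulo MOD.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << D):
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_det(M):
    # Laplace expansion; characteristic 2, so no sign alternation.
    if len(M) == 1:
        return M[0][0]
    return reduce(xor, (gf_mul(M[0][j], gf_det([row[:j] + row[j+1:] for row in M[1:]]))
                        for j in range(len(M))))

p = 5
omega = gf_pow(2, 3)         # x^3 has order 5 in GF(16)* (group order 15)
V = [[gf_pow(omega, (i * j) % p) for j in range(p)] for i in range(p)]

singular = [(rows, cols)
            for k in range(2, p + 1)
            for rows in combinations(range(p), k)
            for cols in combinations(range(p), k)
            if gf_det([[V[i][j] for j in cols] for i in rows]) == 0]
print(len(singular), "singular submatrices for p = 5, d = 4")
```

The full matrix is Vandermonde on the distinct points $\omega^0, \cdots, \omega^4$ and hence always nonsingular; it is the proper submatrices that can fail.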

# What I Learned from “Never Split the Difference,” a Book by Chris Voss

[In a hurry? Jump straight to the bullet points.]

HBO charged me $15 via Amazon for a subscription I forgot to cancel after Game of Thrones was over. Upon writing to Amazon, the representative said: “As per policy we are unable to issue a refund for the previous month’s charge (September) for your HBO subscription. However, I am able to make an exception for you and have issued a promotional credit of $15.14 in your account.”

A Walmart manager said he would not accept my returns (worth $107) without a receipt. Although the first employee said he would give me a gift card, the system didn’t let him proceed. The manager came in and told me they wouldn’t take these returns. Why? Because they had recently changed the rule: without a receipt, I can return items worth only up to $25.