Bounding the Supremum of a Gaussian Process: Talagrand’s Generic Chaining (Part 1)

This post is part of a series which answers a certain question about the supremum of a Gaussian process. I will present, as I have understood it, a proof given in Chapter 1 of the book “The Generic Chaining” by Michel Talagrand. I recommend that the reader also take a look at the excellent posts by James Lee on this topic. (I am a beginner; James Lee is a master.)

Let (T,d) be a finite metric space, and let \{X_t\}_{t\in T} be a Gaussian process in which each X_t is a zero-mean Gaussian random variable. The distance between two points s,t\in T is the canonical one: d(s,t) = \sqrt{\mathbb{E}(X_s - X_t)^2}. In this post, we are interested in upper-bounding the quantity Q defined below.

Question: How large can the quantity Q := \mathbb{E} \sup_{t\in T} X_t be?
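To make Q concrete before bounding it, here is a small Python sketch (my own illustration, not from Talagrand’s book) that estimates Q by Monte Carlo for a toy process; the 50-point index set and the random covariance matrix are arbitrary choices:

```python
import numpy as np

# Monte Carlo estimate of Q = E[sup_t X_t] for a toy zero-mean
# Gaussian process on a finite index set T (illustrative choice:
# 50 points with a randomly generated PSD covariance matrix).
rng = np.random.default_rng(0)

n_points = 50
A = rng.standard_normal((n_points, n_points))
cov = A @ A.T / n_points  # A @ A.T is PSD, so this is a valid covariance

# Each row of `samples` is one realization (X_t)_{t in T} of the process.
samples = rng.multivariate_normal(np.zeros(n_points), cov, size=20000)
Q_estimate = samples.max(axis=1).mean()  # average of sup_t X_t over draws
print(f"Estimated Q = E[sup_t X_t] ~ {Q_estimate:.3f}")
```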

In this post we are going to prove the following fact:

\boxed{\displaystyle \mathbb{E}\sup_{t\in T}X_t \leq O(1)\cdot  \sup_{t\in T}\sum_{n\geq 1}{2^{n/2}d(t,T_{n-1})} ,}

where d(t,A) := \min_{s\in A} d(t,s) is the distance from the point t to the set A, and \{T_n\} is a specific sequence of subsets of T. The construction of these sets will be discussed in a subsequent post.
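Although the construction of the sets T_n is deferred to a later post, the right-hand side itself is easy to evaluate once the sets are given. The following sketch (my own; the inputs dist and subsets are hypothetical) computes the chaining functional \sup_{t\in T}\sum_{n\geq 1} 2^{n/2} d(t,T_{n-1}) for a supplied distance matrix and sequence of subsets:

```python
import numpy as np

def chaining_bound(dist, subsets):
    """Evaluate sup_t sum_{n>=1} 2^{n/2} * d(t, T_{n-1}).

    dist    : (|T|, |T|) matrix of pairwise distances d(s, t)
    subsets : list [T_0, T_1, ...] of index lists, each T_n a subset of T
    """
    total = np.zeros(dist.shape[0])
    for n in range(1, len(subsets) + 1):
        T_prev = subsets[n - 1]
        # d(t, T_{n-1}) = distance from t to the nearest point of T_{n-1}
        total += 2 ** (n / 2) * dist[:, T_prev].min(axis=1)
    return total.max()

# Tiny usage example: 5 points on the real line, T_0 = {0}, T_1 = {0, 4}.
pts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
D = np.abs(pts[:, None] - pts[None, :])
print(chaining_bound(D, [[0], [0, 4]]))  # sup_t [ 2^{1/2} d(t,T_0) + 2 d(t,T_1) ]
```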

Continue reading “Bounding the Supremum of a Gaussian Process: Talagrand’s Generic Chaining (Part 1)”

Spectral Sparsification by Spielman and Srivastava: How They Connected the Dots

In this post, I will discuss my own understanding of the spectral sparsification paper by Daniel Spielman and Nikhil Srivastava (STOC 2008). I will assume the following:

  1. The reader is a beginner, like me, and has already glanced through the Spielman–Srivastava paper (from now on, the SS paper).
  2. The reader has, like me, a basic understanding of spectral sparsification and the associated concepts of matrix analysis. I will assume that she has read and understood Section 2 (Preliminaries) of the SS paper.
  3. The reader holds a copy of the SS paper while reading my post.

First, I will state the main theorems (or rather, only what they “roughly” say).

Continue reading “Spectral Sparsification by Spielman and Srivastava: How They Connected the Dots”

Chebyshev’s Inequality

If we have a random variable X and a number a, what is the probability that X \geq a? If we know the mean of X, Markov’s inequality gives an upper bound on this probability. As it turns out, this bound is rather loose, and it can be improved if we know the variance of X in addition to its mean. The improved result is known as Chebyshev’s inequality, after the famous mathematician Pafnuty Lvovich Chebyshev; his student Andrey Markov gave a proof in his 1884 PhD thesis (see Wikipedia).
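For reference, Chebyshev’s inequality states that Pr[|X - \mu| \geq a] \leq \sigma^2/a^2, where \mu and \sigma^2 are the mean and variance of X. Here is a quick numerical sanity check in Python (my own addition; the Exponential(1) distribution, with \mu = \sigma^2 = 1, is an arbitrary illustrative choice):

```python
import numpy as np

# Empirical check of Chebyshev's inequality
# Pr[|X - mu| >= a] <= var(X) / a^2, for X ~ Exponential(1),
# which has mean 1 and variance 1.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)
mu, var = 1.0, 1.0

for a in (1.0, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= a)
    print(f"a={a}: Pr[|X-mu|>={a}] ~ {empirical:.4f} <= bound {var / a**2:.4f}")
```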

Continue reading “Chebyshev’s Inequality”

Markov’s Inequality

How likely is it that the absolute value of a random variable X exceeds some specific value, say a? The Russian mathematician Andrey Andreyevich Markov proved a simple yet useful result which enables us to answer this question.

Markov’s Inequality states that if X is a random variable and a>0 is a positive number, then

Pr[|X| \geq a] \leq \frac{\mathbb{E}|X|}{a}
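As a quick sanity check (my own addition), the following Python snippet verifies the bound empirically for a standard normal X, for which \mathbb{E}|X| = \sqrt{2/\pi} \approx 0.798:

```python
import numpy as np

# Empirical check of Markov's inequality Pr[|X| >= a] <= E|X| / a
# for X ~ N(0, 1) (an arbitrary illustrative distribution).
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
E_abs = np.abs(x).mean()  # estimates E|X| = sqrt(2/pi) for N(0,1)

for a in (0.5, 1.0, 2.0):
    empirical = np.mean(np.abs(x) >= a)
    print(f"a={a}: Pr[|X|>={a}] ~ {empirical:.4f} <= bound {E_abs / a:.4f}")
```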

Continue reading “Markov’s Inequality”