This is an elementary (yet important) fact in matrix analysis.

Statement

Let $M$ be an $n\times n$ complex Hermitian matrix, which means $M=M^*$, where $*$ denotes the conjugate transpose operation. Let $\lambda_1 \neq \lambda_2$ be two distinct eigenvalues of $M$, and let $x, y$ be eigenvectors of $M$ corresponding to $\lambda_1$ and $\lambda_2$, respectively.

Then the following is true:

$\boxed{\lambda_1 \neq \lambda_2 \implies \langle x, y \rangle = 0.}$

Here $\langle x,y\rangle$ denotes the usual inner product of the two vectors $x,y$, i.e.,

$\langle x,y \rangle := y^*x$.
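As a quick numerical sanity check (my own addition, using NumPy rather than anything from the post), we can generate a random Hermitian matrix and verify that eigenvectors belonging to distinct eigenvalues are orthogonal:

```python
import numpy as np

# Build a random Hermitian matrix as M = A + A^* (a common construction).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = A + A.conj().T

# eigh returns real eigenvalues (ascending) and corresponding eigenvectors.
eigvals, eigvecs = np.linalg.eigh(M)

# With probability 1 the eigenvalues of a random M are distinct, so the
# eigenvectors for the two smallest eigenvalues should satisfy <x, y> = y^* x = 0.
x, y = eigvecs[:, 0], eigvecs[:, 1]
inner = y.conj() @ x
assert abs(inner) < 1e-10
```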

Informally speaking, Impagliazzo’s hardcore lemma says that if a boolean function is “hard to compute on average” by small circuits, then there exists a set of inputs on which the same function is “extremely hard to compute on average” by slightly smaller circuits.

In this post, I am going to explain how I understand the proof of the hardcore lemma presented in the Arora-Barak complexity book (here). Since the formal proof can be found in the book, I intend to write informally. I think some subtleties are involved in recasting the setting of the lemma as a suitable two-player zero-sum game. Doing so enables one to use von Neumann’s minimax theorem to effectively “exchange the quantifiers” in the contrapositive statement of the lemma. Although the Arora-Barak proof mentions these subtleties, I am going to explore them in more detail and in a way more accessible to a beginner like me.

We want to show that for $1\leq k\leq n$, the following holds: ${n\choose k} \leq \Big(\frac{n\cdot e}{k}\Big)^k.$

First, recall the power series $e^k=\sum_{i=0}^\infty{\frac{k^i}{i!}}$. For $k \geq 1$, this sum is certainly larger than its $k$th term alone. In other words, $e^k \geq k^k/k!$, which implies $k! \geq (k/e)^k$. Now,

$\begin{aligned} {n\choose k} &= \frac{n!}{k!\,(n-k)!} = \frac{n(n-1)(n-2)\cdots(n-(k-1))}{k!} \\ &\leq \frac{n^k}{k!} \leq \frac{n^k}{(k/e)^k} = \Big(\frac{en}{k}\Big)^k. \end{aligned}$
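The bound is easy to check numerically; here is a small Python loop (my own addition) that verifies ${n\choose k} \leq (en/k)^k$ over a range of values:

```python
from math import comb, e

# Verify C(n, k) <= (n*e/k)^k for all 1 <= k <= n up to n = 40.
for n in range(1, 41):
    for k in range(1, n + 1):
        assert comb(n, k) <= (n * e / k) ** k
```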

This proof is extracted from the StackExchange discussion here. For more inequalities/bounds for binomial coefficients, see Wikipedia.

We will show that the eigenvalues of the $n\times n$ Laplacian matrix $L$ of the complete graph $K_n$ are $0$ and $n$, where the eigenvalue $0$ has (algebraic) multiplicity $1$ and the eigenvalue $n$ has multiplicity $n-1$.
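Before the proof, a quick numerical check (my own addition): the Laplacian of $K_n$ is $L = nI - J$, where $J$ is the all-ones matrix, and its spectrum should be $0$ once and $n$ with multiplicity $n-1$.

```python
import numpy as np

n = 6
# Laplacian of K_n: degree n-1 on the diagonal, -1 off the diagonal,
# i.e. L = (n-1)I - (J - I) = nI - J.
L = (n - 1) * np.eye(n) - (np.ones((n, n)) - np.eye(n))
eigvals = np.sort(np.linalg.eigvalsh(L))

# Eigenvalue 0 with multiplicity 1, eigenvalue n with multiplicity n-1.
assert abs(eigvals[0]) < 1e-10
assert np.allclose(eigvals[1:], n)
```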

In this post, we provide a proof of Kirchhoff’s Matrix Tree theorem [1] which is quite beautiful in our biased opinion. This is a 160-year-old theorem which connects several fundamental concepts of matrix analysis and graph theory (e.g., eigenvalues, determinants, cofactors/minors, Laplacian matrices, incidence matrices, connectedness, graphs, and trees). There are already many expository articles/proof sketches/proofs of this theorem (for example, see [4-6, 8, 9]). Here we compile yet-another-expository-article and present the arguments in a way that should be accessible to beginners.

## Statement

Suppose  $L$ is the Laplacian of a connected simple graph $G$ with $n$ vertices. Then the number of spanning trees in $G$, denoted by $t(G)$, is the following:

$t(G)=\frac{1}{n} \times \{$ the product of all non-zero eigenvalues of $L\ \}$.
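As an illustration (my own addition, not part of the original post), we can check the statement on $K_4$: Cayley’s formula gives $t(K_4) = 4^{4-2} = 16$, and the eigenvalue form of the theorem should agree.

```python
import numpy as np

# Laplacian of K_4 is L = nI - J with n = 4.
n = 4
L = n * np.eye(n) - np.ones((n, n))
eigvals = np.linalg.eigvalsh(L)

# t(G) = (product of non-zero eigenvalues) / n; here 4^3 / 4 = 16.
nonzero = [ev for ev in eigvals if abs(ev) > 1e-9]
t = round(np.prod(nonzero) / n)
assert t == 16
```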

In this post I will discuss my own understanding of the spectral sparsification paper by Daniel Spielman and Nikhil Srivastava (STOC 2008). I will assume the following:

1. The reader is a beginner, like me, and has already glanced through the Spielman-Srivastava paper (from now on, the SS paper).
2. The reader has, like me, a basic understanding of spectral sparsification and the associated concepts of matrix analysis. I will assume that she has read and understood Section 2 (Preliminaries) of the SS paper.
3. The reader holds a copy of the SS paper while reading my post.

First, I will mention the main theorems (actually, I will mention only what they “roughly” say).

Stirling numbers (of the first and second kind) are famous in combinatorics. There are well-known recursive formulas for them, and they can be expressed through generating functions. Below we state and explain the recursive definitions of the Stirling numbers through combinatorial ideas.

Since the Stirling numbers of the second kind are more intuitive, we will start with them.
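As a sketch (my own addition), the standard recurrence for the second kind, $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$, translates directly into code: the $n$th element either joins one of the $k$ existing blocks or starts a block of its own.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition an n-element set into k non-empty blocks."""
    if n == k:
        return 1          # each element in its own block (covers S(0, 0) = 1)
    if k == 0 or k > n:
        return 0
    # Element n joins one of the k existing blocks, or starts its own block.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# {1,2,3,4} splits into 2 non-empty blocks in 7 ways.
assert stirling2(4, 2) == 7
```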

Here we give an illustrative proof of Kleene’s recursion theorem, a fundamental theorem in computability/recursion theory. The proof is so simple that it can be stated in a few lines. At the same time, it is a beauty in itself (once you get hold of it). Without further ado, we will first state the preliminary definitions/lemmas/theorems and then give the formal statement.

Here we show a beautiful application of Fermat’s Little Theorem.

An integer $a$ is a quadratic residue modulo a prime $p$ if $a \equiv x^2 \pmod{p}$ for some integer $x$.

Given a prime $p$ and an integer $a$, how fast can we decide whether $a$ is a quadratic residue modulo $p$? As it turns out (courtesy of Euler), pretty fast.
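As a preview (my own sketch, ahead of the formal discussion), Euler’s criterion says that for an odd prime $p$ and $a$ coprime to $p$, $a$ is a quadratic residue mod $p$ iff $a^{(p-1)/2} \equiv 1 \pmod{p}$. Python’s built-in three-argument `pow` makes this a one-liner:

```python
def is_quadratic_residue(a, p):
    """Euler's criterion: for an odd prime p with gcd(a, p) = 1,
    a is a quadratic residue mod p iff a^((p-1)/2) = 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1

# The non-zero squares mod 7 are exactly {1, 2, 4}.
assert [a for a in range(1, 7) if is_quadratic_residue(a, 7)] == [1, 2, 4]
```

The modular exponentiation runs in $O(\log p)$ multiplications, which is what makes the test fast.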

This is elementary, yet useful and interesting.

Statement: All eigenvalues of a Hermitian matrix are real.

Proof: