This is an elementary (yet important) fact in matrix analysis.

## Statement

Let $M$ be an $n\times n$ complex Hermitian matrix, which means $M=M^*$, where $*$ denotes the conjugate transpose operation. Let $\lambda_1 \neq \lambda_2$ be two distinct eigenvalues of $M$, and let $x, y$ be eigenvectors of $M$ corresponding to $\lambda_1$ and $\lambda_2$, respectively.

Then the following is true:

$\boxed{\langle x, y \rangle = 0.}$

Here $\langle x,y\rangle$ denotes the usual inner product of the vectors $x$ and $y$, i.e.,

$\langle x,y \rangle := y^*x$.
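The standard one-line argument uses $M=M^*$ together with the (separately provable) fact that the eigenvalues of a Hermitian matrix are real:

```latex
\lambda_1 \langle x,y\rangle
  = y^*(Mx)
  = y^* M^* x
  = (My)^* x
  = (\lambda_2 y)^* x
  = \overline{\lambda_2}\,\langle x,y\rangle
  = \lambda_2 \langle x,y\rangle.
```

Hence $(\lambda_1-\lambda_2)\langle x,y\rangle = 0$, and since $\lambda_1 \neq \lambda_2$, we must have $\langle x,y\rangle = 0$.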

Informally speaking, Impagliazzo’s hardcore lemma says that if a boolean function is “hard to compute on average” by small circuits, then there exists a set of inputs on which the same function is “extremely hard to compute on average” by slightly smaller circuits.

In this post, I am going to explain how I understand the proof of the hardcore lemma presented in the Arora-Barak complexity book (here). Because the formal proof can be found in the book, I will keep the discussion informal. Some subtleties are involved in turning the statement of the lemma into a suitable two-player zero-sum game; doing so enables one to use von Neumann’s minimax theorem to effectively “exchange the quantifiers” in the contrapositive statement of the lemma. Although the Arora-Barak proof mentions these subtleties, I am going to explore them in more detail and in a way that is more accessible to a beginner like me.

I changed my mind before entering the blue doors of the weight room in my gym. There was the soothing sound of water trickling down from the fountain. I felt thirstier as I waited.

— “Hey, Saad!” exclaimed a low voice. I looked to my right. He just came out of that blue door of the weight room.

— “You used to be my TA!” he said. It always feels good to meet your past students, especially ones who still remember you. I looked hard. Remembering my students’ names used to be my forte, but probably not anymore.

You probably know this already, but I can’t stop myself from showing off the interplay of const with pointers in C/C++.

Here goes the code. It’s simple.


int main() {
    int arr[] = {1, 2, 3, 4}; // data

    const int *p1 = &arr[0];       // non-const ptr to const data
    int const *p2 = &arr[0];       // same as p1: non-const ptr to const data
    int *const p3 = &arr[0];       // const ptr to non-const data
    const int *const p4 = &arr[0]; // const ptr to const data

    p1++;    // ok: the pointer itself is mutable
    p1[0]++; // compile error: modifying const data

    p2++;    // ok
    p2[0]++; // compile error: modifying const data

    p3++;    // compile error: modifying const ptr
    p3[0]++; // ok: the pointed-to data is mutable

    p4++;    // compile error: modifying const ptr
    p4[0]++; // compile error: modifying const data

    return 0;
}


We want to show that for $1\leq k\leq n$, the following holds: ${n\choose k} \leq \Big(\frac{n\cdot e}{k}\Big)^k.$

First, recall the power series $e^k=\sum_{i=0}^\infty{\frac{k^i}{i!}}$. For $k \geq 1$, this sum is at least its $k$th term alone. In other words, $e^k \geq k^k/k!$, which implies $k! \geq (k/e)^k$. Now,

$\begin{aligned} {n\choose k} &= \frac{n!}{k!\,(n-k)!} = \frac{n(n-1)(n-2)\cdots (n-(k-1))}{k!}\\ &\leq \frac{n^k}{k!} \leq \frac{n^k}{(k/e)^k} = \Big(\frac{en}{k}\Big)^k. \end{aligned}$

This proof is extracted from the StackExchange discussion here. For more inequalities/bounds for binomial coefficients, see Wikipedia.

We will show that the eigenvalues of the $n\times n$ Laplacian matrix $L$ of the complete graph $K_n$ are $\{0, n\}$, where the eigenvalue $0$ has (algebraic) multiplicity $1$ and the eigenvalue $n$ has multiplicity $n-1$.

In this post, we provide a proof of Kirchhoff’s Matrix Tree theorem [1] which is quite beautiful in our biased opinion. This is a 160-year-old theorem which connects several fundamental concepts of matrix analysis and graph theory (e.g., eigenvalues, determinants, cofactors/minors, Laplacian matrices, incidence matrices, connectedness, graphs, and trees). There are already many expository articles/proof sketches/proofs of this theorem (for example, see [4-6, 8, 9]). Here we compile yet another expository article and present the arguments in a way which should be accessible to beginners.

## Statement

Suppose $L$ is the Laplacian of a connected simple graph $G$ with $n$ vertices. Then the number of spanning trees in $G$, denoted by $t(G)$, is the following:

$t(G)=\frac{1}{n}\times\big(\text{the product of all non-zero eigenvalues of } L\big)$.
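As a quick check of the formula, take $G = K_3$ (the triangle), whose Laplacian eigenvalues are $0, 3, 3$:

```latex
t(K_3) \;=\; \frac{1}{3}\,(3\cdot 3) \;=\; 3,
```

which matches the three spanning trees obtained by deleting any one edge of the triangle. More generally, since the non-zero eigenvalues of $L(K_n)$ are $n$ with multiplicity $n-1$, we get $t(K_n)=n^{n-1}/n=n^{n-2}$, recovering Cayley’s formula.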

In this post I will discuss my own understanding of the spectral sparsification paper by Daniel Spielman and Nikhil Srivastava (STOC 2008). I will assume the following:

1. The reader is a beginner, like me, and has already glanced through the Spielman-Srivastava paper (from now on, the SS paper).
2. The reader has, like me, a basic understanding of spectral sparsification and the associated concepts of matrix analysis. I will assume that she has read and understood Section 2 (Preliminaries) of the SS paper.
3. The reader holds a copy of the SS paper while reading my post.

First, I will state the main theorems (actually, only what they “roughly” say).

Stirling numbers (of the first and second kind) are famous in combinatorics. There are well-known recursive formulas for them, and they can also be expressed through generating functions. Below we state and explain the recursive definitions of the Stirling numbers through combinatorial ideas.

Since the Stirling numbers of the second kind are more intuitive, we will start with them.

Here we give an illustrative proof of Kleene’s recursion theorem, a fundamental theorem in computability/recursion theory. The proof is so simple that it can be stated in a few lines. On the other hand, the proof is a beauty in itself (once you get hold of it). Without further ado, we will first state the preliminary definitions/lemmas/theorems and then give the formal statement.