Featured

Finding a p-th Root of Unity in a Finite Field, with C++ Code

To this day, no method of finding a generator of Z_p^* is known to be more efficient than essentially trying 2, then 3, and so on. Who cares? Well, the difficulty of breaking a certain public key cryptosystem (due to El Gamal) depends on the difficulty of working with generators of Z_p^*.

Keith Conrad

An nth root of unity in a finite field F is an element r \in F satisfying r^n=1, where n is a positive integer. If n is the smallest positive integer with this property, r is called a primitive nth root of unity. If r is a primitive nth root of unity, then all elements in the set \mu_n = \{1, r, r^2, \cdots, r^{n-1}\} are also nth roots of unity. In fact, the set \mu_n forms a cyclic group of order n under multiplication, with generator r.

Problem: Suppose you are given a finite field F=GF(2^d) of degree d, and you are promised that there indeed exists a primitive pth root of unity r\in F for a prime p. Find r, and in particular, produce C++ code that finds it.

In what follows, I talk about how to find such a root and provide my C++ code; the code uses the awesome NTL library.
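For a flavor of the approach, here is a minimal sketch of my own (not the code from the post), assuming p divides 2^d-1 and assuming the NTL interfaces BuildIrred, GF2E::init, random_GF2E, and power behave as I recall: raise a random nonzero field element to the power (2^d-1)/p. The result is a pth root of unity, and since p is prime it is primitive whenever it is not 1. The example values d=4, p=5 below are mine.

#include <NTL/GF2E.h>
#include <NTL/GF2XFactoring.h>
#include <NTL/ZZ.h>
#include <iostream>
using namespace NTL;

// Sketch: find a primitive pth root of unity in GF(2^d),
// assuming p is prime and p divides 2^d - 1.
int main() {
	long d = 4, p = 5;               // GF(16) contains the 5th roots of unity since 5 | 15

	GF2X P;
	BuildIrred(P, d);                // irreducible polynomial of degree d over GF(2)
	GF2E::init(P);                   // fix the field GF(2^d) = GF(2)[x]/(P)

	ZZ e = (power_ZZ(2, d) - 1) / p; // exponent (2^d - 1)/p

	GF2E r;
	do {
		GF2E x = random_GF2E();      // random field element
		r = power(x, e);             // for x != 0, r^p = x^(2^d - 1) = 1
	} while (IsZero(r) || IsOne(r)); // discard x = 0 and the trivial root r = 1

	std::cout << "r = " << r << "\n";
	return 0;
}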

Continue reading “Finding a p-th Root of Unity in a Finite Field, with C++ Code”


An Intuitive Explanation of Discrete Fourier Transform on Finite Groups

I’ve read about the Fourier Transform time and time again. Every time I thought I understood it, I was wrong. Right now, this is how I see it. And I despise the words “time domain” and “frequency domain”. High five if you do the same 🙂

Doing the Fourier Transform of a function is just seeing it from “another” point of view. The “usual” view of a function is in the standard basis \{e_1, \cdots, e_n\}, e.g., as a vector of its evaluations on its domain (in the Dirac delta basis given by the elements of the domain), or as a polynomial (in the monomial basis) whose coefficients are these evaluations. Call this vector u.

The same function can also be seen in the “Fourier basis”, which is just another orthogonal basis, formed by basis vectors \{v_t\} indexed by t. The tth coordinate in the new basis is given by the inner product of u with the tth basis vector v_t. We call these inner products the Fourier coefficients. The Discrete Fourier Transform matrix (the DFT matrix) “projects” a function from the standard basis onto the Fourier basis in the usual sense of projection: taking the inner product along a given direction.

In this post, I am going to use elementary group theoretic notions, polynomials, matrices, and vectors. The ideas in this post will be similar to this Wikipedia article on Discrete Fourier Transform.
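To make the change-of-basis picture concrete, here is a small sketch of my own for the cyclic group Z_n: each Fourier coefficient is computed literally as the inner product of u with an (unnormalized) basis vector v_t whose entries are the character values e^{2\pi i tk/n}.

#include <cmath>
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

// DFT on the cyclic group Z_n, written as a change of basis:
// the tth Fourier coefficient is the inner product <u, v_t>,
// where v_t(k) = exp(2*pi*i*t*k/n) is the tth character of Z_n.
std::vector<std::complex<double>> dft(const std::vector<std::complex<double>>& u) {
	const std::size_t n = u.size();
	const double two_pi = 2.0 * std::acos(-1.0);
	std::vector<std::complex<double>> coeff(n);
	for (std::size_t t = 0; t < n; ++t) {
		std::complex<double> sum(0.0, 0.0);
		for (std::size_t k = 0; k < n; ++k) {
			std::complex<double> v_tk = std::polar(1.0, two_pi * t * k / n);
			sum += u[k] * std::conj(v_tk);   // <u, v_t> = sum_k u[k] * conj(v_t[k])
		}
		coeff[t] = sum;
	}
	return coeff;
}

int main() {
	std::vector<std::complex<double>> u = {1.0, 2.0, 3.0, 4.0};  // a function on Z_4, in the delta basis
	for (const auto& c : dft(u)) std::cout << c << "\n";         // its four Fourier coefficients
	return 0;
}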

Continue reading “An Intuitive Explanation of Discrete Fourier Transform on Finite Groups”

Why x^p-1 Factors over GF(2) the Way It Does

Let p be a prime number. We know that F_2=GF(2)=\{0,1\}. Let F_2[x] be the ring of polynomials in the indeterminate x with coefficients in F_2. The polynomial (x^p-1) \in F_2[x] factors over F_2 as follows for various p:

x^3-1=(1+x) (1+x+x^2).
x^5-1=(1+x) (1+x+x^2+x^3+x^4).
x^7-1=(1+x) (1+x+x^3) (1+x^2+x^3).
x^{11}-1=(1+x) (1+x+x^2+x^3+x^4+x^5+x^6+x^7+x^8+x^9+x^{10}).
x^{13}-1=(1+x) (1+x+x^2+x^3+x^4+x^5+x^6+x^7+x^8+x^9+x^{10}+x^{11}+x^{12}).
x^{17}-1=(1+x) (1+x^3+x^4+x^5+x^8) (1+x+x^2+x^4+x^6+x^7+x^8).

None of the factors above can be factored any further; hence they are irreducible over F_2. Let us call x+1 the trivial factor, since the root x=1 already belongs to F_2. But why do we have two nontrivial irreducible factors of x^7-1, each of degree 3, whereas x^{13}-1 has only one nontrivial irreducible factor, of degree 12? It appears that the nontrivial factors always have the same degree. Why?
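As a numerical hint (a sketch of my own, anticipating the next post): the common degree is the multiplicative order of 2 modulo p, and the number of nontrivial factors is (p-1) divided by that order. The snippet below checks this against the factorizations listed above.

#include <iostream>

// Multiplicative order of a modulo the prime p (assumes 1 < a < p).
int mult_order(int a, int p) {
	int x = a % p, ord = 1;
	while (x != 1) { x = (x * a) % p; ++ord; }
	return ord;
}

int main() {
	const int primes[] = {3, 5, 7, 11, 13, 17};
	for (int p : primes) {
		int r = mult_order(2, p);
		std::cout << "p = " << p << ": nontrivial factors of degree " << r
		          << ", and there are " << (p - 1) / r << " of them\n";
	}
	return 0;
}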

Continue reading “Why x^p-1 Factors over GF(2) the Way It Does”

The Structure of a Quotient Ring modulo x^p-1

Let f(X)=X^p-1 \in F_l[X] for a prime p. We are interested in the structure of the ring R=F_l[X]/(f). Via the Chinese Remainder Theorem, this question is equivalent to finding a factorization of f over F_l. Suppose f factors in F_l as

\displaystyle f(X) = (X-1) \prod_{i=1}^{s} q_i(X) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \   (*)

where each q_i is an irreducible polynomial. What are the degrees of q_1, \cdots, q_s? What is s?

We claim that R=F_l \oplus (F_{l^r})^s, where r is the multiplicative order of l mod p.
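For a concrete instance, take l=2 and p=7: the multiplicative order of 2 mod 7 is r=3, so s=(7-1)/3=2 and

\displaystyle R = F_2[X]/(X^7-1) \cong F_2 \oplus F_{2^3} \oplus F_{2^3},

which matches the factorization x^7-1=(1+x)(1+x+x^3)(1+x^2+x^3) from the previous post.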

This structure has been used in a beautifully instructive paper by Evra, Kowalski, and Lubotzky. (Yes, the Lubotzky in Lubotzky-Phillips-Sarnak.) In this paper, they establish connections among the Fourier transform, the uncertainty principle, and the dimension of ideals generated by polynomials in the ring R above. We’ll talk about these connections in another post. The content of this post is Proposition 2.6(3) from their paper.

Continue reading “The Structure of a Quotient Ring modulo x^p-1”

Bounding the Supremum of a Gaussian Process: Talagrand’s Generic Chaining (Part 1)

This post is part of a series which answers a certain question about the supremum of a Gaussian process. I am going to write up, as I have understood it, a proof given in Chapter 1 of the book “The Generic Chaining” by Michel Talagrand. I recommend the reader take a look at the excellent posts by James Lee on this matter. (I am a beginner, James Lee is a master.)

Let (T,d) be a finite metric space. Let \{X_t\}_{t\in T} be a Gaussian process where each X_t is a zero-mean Gaussian random variable, and the distance between two points s,t\in T is given by d(s,t)=\left(\mathbb{E}(X_s-X_t)^2\right)^{1/2}, i.e., the L^2 distance between X_s and X_t. In this post we are interested in upper-bounding the quantity Q defined below.

Question: How large can the quantity Q := \mathbb{E} \sup_{t\in T} X_t be?

In this post we are going to prove the following fact:

\boxed{\displaystyle \mathbb{E}\sup_{t\in T}X_t \leq O(1)\cdot  \sup_{t\in T}\sum_{n\geq 1}{2^{n/2}d(t,T_{n-1})} ,}

where d(t, T_{n-1}) is the distance from the point t to the set T_{n-1}, and \{T_n\} is a specific sequence of subsets T_n\subset T. Constructions of these sets will be discussed in a subsequent post.
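As an aside of my own (not from Talagrand’s chapter), here is a tiny Monte Carlo sanity check of what Q measures, for the simplest possible process: n i.i.d. standard Gaussians, for which Q is known to grow like \sqrt{2\log n}.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

// Monte Carlo estimate of Q = E sup_t X_t for the simplest Gaussian process:
// n i.i.d. standard Gaussians (here d(s,t) = sqrt(2) for all s != t).
int main() {
	const int n = 1000;        // size of the index set T
	const int trials = 20000;  // Monte Carlo repetitions
	std::mt19937 gen(42);
	std::normal_distribution<double> gauss(0.0, 1.0);

	double sum_of_sup = 0.0;
	for (int i = 0; i < trials; ++i) {
		double sup = gauss(gen);
		for (int t = 1; t < n; ++t) sup = std::max(sup, gauss(gen));
		sum_of_sup += sup;
	}
	std::cout << "estimated Q   = " << sum_of_sup / trials << "\n";
	std::cout << "sqrt(2 log n) = " << std::sqrt(2.0 * std::log((double)n)) << "\n";
	return 0;
}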

Continue reading “Bounding the Supremum of a Gaussian Process: Talagrand’s Generic Chaining (Part 1)”

In a Hermitian Matrix, the Eigenvectors of Different Eigenvalues are Orthogonal

This is an elementary (yet important) fact in matrix analysis.

Statement

Let M be an n\times n complex Hermitian matrix, which means M=M^*, where * denotes the conjugate transpose operation. Let \lambda_1 \neq \lambda_2 be two distinct eigenvalues of M. Let x, y be two eigenvectors of M corresponding to the eigenvalues \lambda_1 and \lambda_2, respectively.

Then the following is true:

\boxed{\lambda_1 \neq \lambda_2 \implies \langle x, y \rangle = 0.}

Here \langle x,y\rangle denotes the usual inner product of two vectors x,y, i.e.,

\langle x,y \rangle := y^*x.
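As a quick numerical illustration (my own toy example, not from the post): the Hermitian matrix M with rows (2, i) and (-i, 2) has eigenvalues 3 and 1, with eigenvectors x=(i,1) and y=(-i,1). The snippet below checks that \langle x,y\rangle = 0 in the convention above.

#include <complex>
#include <iostream>

using cd = std::complex<double>;

// The inner product <x, y> := y^* x, as defined in the post.
cd inner(const cd x[2], const cd y[2]) {
	return std::conj(y[0]) * x[0] + std::conj(y[1]) * x[1];
}

int main() {
	const cd i(0.0, 1.0);
	// Eigenvectors of the Hermitian matrix M = [[2, i], [-i, 2]]:
	cd x[2] = { i, 1.0};   // eigenvector for lambda_1 = 3
	cd y[2] = {-i, 1.0};   // eigenvector for lambda_2 = 1
	std::cout << "<x, y> = " << inner(x, y) << "\n";   // prints (0,0)
	return 0;
}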

Continue reading “In a Hermitian Matrix, the Eigenvectors of Different Eigenvalues are Orthogonal”

Impagliazzo’s Hardcore Lemma: a Proof

Informally speaking, Impagliazzo’s hardcore lemma says that if a boolean function is “hard to compute on average” by small circuits, then there exists a set of inputs on which the same function is “extremely hard to compute on average” by slightly smaller circuits.

In this post, I am going to explain how I understand the proof of the hardcore lemma presented in the Arora-Barak complexity book (here). Because the formal proof can be found in the book, I will keep things informal. I think some subtleties are involved in turning the setting of the lemma into a suitable two-player zero-sum game. Doing so enables one to use von Neumann’s minimax theorem to effectively “exchange the quantifiers” in the contrapositive statement of the lemma. Although the Arora-Barak proof mentions these subtleties, I am going to explore them in more detail and in a way that is more accessible to a beginner like me.

Continue reading “Impagliazzo’s Hardcore Lemma: a Proof”

What Is Your Excuse Today?

I changed my mind before entering the blue doors of the weight room in my gym. There was the soothing sound of water trickling down from the fountain. I felt more thirsty as I waited.

— “Hey, Saad!” exclaimed a low voice. I looked to my right. He had just come out of that blue door of the weight room.

— “You used to be my TA!” he said. It always feels good to meet your past students, especially ones who still remember you. I looked hard. Remembering my students’ names used to be my forte, but probably not anymore.

Continue reading “What Is Your Excuse Today?”

Const-Pointer and Pointer-to-Const in C / C++

You probably know this already, but I can’t stop myself from showing off the interplay of const with pointers in C/C++.

Here goes the code. It’s simple.


int main() {
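	// Note: the lines marked "compile error" below are intentional;
	// comment them out if you want the file to actually build.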

	int arr[]={1,2,3,4}; // data

	const int *p1 = &arr[0]; // non-const ptr to const data
	int const *p2 = &arr[0]; // non-const ptr to const data
	int* const p3 = &arr[0]; // const ptr to non-const data
	const int* const p4 = &arr[0]; // const ptr to const data

	p1++; // ok
	p1[0]++; // compile error: modifying const data

	p2++; // ok
	p2[0]++; // compile error: modifying const data

	p3++; // compile error: modifying const ptr
	p3[0]++; // ok 

	p4++; // compile error: modifying const ptr
	p4[0]++; // compile error: modifying const data

}

Continue reading “Const-Pointer and Pointer-to-Const in C / C++”