If we have a random variable $X$ and any number $a > 0$, what is the probability that $X \geq a$? If we know the mean of $X$, Markov’s inequality can give an upper bound on the probability that $X \geq a$. As it turns out, this upper bound is rather loose, and it can be improved if we know the variance of $X$ in addition to its mean. This result is known as Chebyshev’s inequality, named after the famous mathematician Pafnuty Lvovich Chebyshev. It was first proved by his student Andrey Markov in his 1884 PhD thesis (see Wikipedia).

**Chebyshev’s Inequality**

Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$, and let $a$ be any positive number. We ask the following question: What is the probability that $X$ will assume a value which is at least $a$ away from its mean? Chebyshev’s inequality says,

$P(|X - \mu| \geq a) \leq \dfrac{\sigma^2}{a^2}$.
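As a quick numerical sanity check of the inequality (not part of the proof), one can sample a concrete distribution and compare the empirical tail probability against the bound. The sketch below uses a uniform random variable on $[0, 1]$, which is just an illustrative choice; its mean is $0.5$ and its variance is $1/12$.

```python
import random

# Empirically check Chebyshev's inequality P(|X - mu| >= a) <= sigma^2 / a^2
# for X uniform on [0, 1]: mu = 0.5, sigma^2 = 1/12 (illustrative choice).
random.seed(0)
n = 100_000
samples = [random.random() for _ in range(n)]
mu, var = 0.5, 1 / 12

for a in (0.1, 0.2, 0.3, 0.4):
    empirical = sum(abs(x - mu) >= a for x in samples) / n
    bound = var / a ** 2
    assert empirical <= bound  # the bound must hold for every a > 0
    print(f"a = {a}: empirical {empirical:.3f} <= bound {bound:.3f}")
```

Note how loose the bound is for small $a$ (for $a = 0.1$ it exceeds 1, which is trivially true of any probability); it only starts to bite as $a$ grows.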

**Proof**

From Markov’s inequality, we know that for a nonnegative random variable $Y$ with mean $E[Y]$ and any $b > 0$,

$P(Y \geq b) \leq \dfrac{E[Y]}{b}$.

Now consider the random variable $Y = (X - \mu)^2$. The expected value of this variable is $E\left[(X - \mu)^2\right]$ which, by definition, is the variance of $X$. Therefore,

$E[Y] = \sigma^2$.

Also, since $Y = (X - \mu)^2$ is a square, it follows that $Y \geq 0$, so Markov’s inequality applies to $Y$.

According to Markov’s inequality, for any $b > 0$, the probability that $Y \geq b$ is

$P(Y \geq b) \leq \dfrac{E[Y]}{b} = \dfrac{\sigma^2}{b}$.

Putting $b = a^2$, the above inequality becomes

$P\left((X - \mu)^2 \geq a^2\right) \leq \dfrac{\sigma^2}{a^2}$.

Since $(X - \mu)^2 \geq a^2$ occurs precisely when $|X - \mu| \geq a$, it follows that

$P(|X - \mu| \geq a) \leq \dfrac{\sigma^2}{a^2}$.

Proved.
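The key step above is the identity $E\left[(X - \mu)^2\right] = \mathrm{Var}(X)$. It can be checked numerically against the equivalent moment formula $E[X^2] - E[X]^2$; the discrete distribution below is made up purely for illustration.

```python
# Check that E[(X - mu)^2] equals Var(X) = E[X^2] - E[X]^2 for a small
# hypothetical discrete distribution (values/probabilities are illustrative).
values = (0.0, 1.0, 2.0, 3.0)
probs = (0.1, 0.4, 0.3, 0.2)
assert abs(sum(probs) - 1.0) < 1e-12  # probabilities must sum to 1

mu = sum(v * p for v, p in zip(values, probs))
var_centered = sum((v - mu) ** 2 * p for v, p in zip(values, probs))   # E[(X-mu)^2]
var_moments = sum(v * v * p for v, p in zip(values, probs)) - mu ** 2  # E[X^2]-E[X]^2

assert abs(var_centered - var_moments) < 1e-12
print(mu, var_centered)
```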

**What is the use?**

Suppose the random variable $X$ takes values between 0 and 1 (inclusive), has mean $0.5$, and variance $0.016$. We ask: what is the probability that $X$ assumes a value $\geq 0.9$?

According to Markov’s inequality,

$P(X \geq 0.9) \leq \dfrac{E[X]}{0.9} = \dfrac{0.5}{0.9} \approx 0.56$.

However, $X \geq 0.9$ implies $|X - 0.5| \geq 0.4$, so according to Chebyshev’s inequality,

$P(X \geq 0.9) \leq P(|X - 0.5| \geq 0.4) \leq \dfrac{0.016}{(0.4)^2} = 0.1$,

which is a much tighter bound.
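The comparison is a one-liner for each bound. The helper below works for any mean, variance, and threshold above the mean; the values passed in (mean $0.5$, variance $0.016$, threshold $0.9$) are purely illustrative example numbers.

```python
def markov_bound(mean: float, a: float) -> float:
    # Markov: for nonnegative X, P(X >= a) <= E[X] / a.
    return mean / a

def chebyshev_bound(mean: float, var: float, a: float) -> float:
    # Chebyshev: for a > mean, X >= a implies |X - mean| >= a - mean,
    # so P(X >= a) <= var / (a - mean)^2.
    return var / (a - mean) ** 2

print(markov_bound(0.5, 0.9))           # ≈ 0.556
print(chebyshev_bound(0.5, 0.016, 0.9)) # ≈ 0.1
```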
