
Central Limit Theorem

Let $\{X_n\}$ be a sequence of random variables. Let $\overline{X}_n$ be the sample mean of the first $n$ terms of the sequence:

$$\overline{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$

A Central Limit Theorem (CLT) is a proposition stating a set of conditions that are sufficient to guarantee the convergence of the (suitably standardized) sample mean $\overline{X}_n$ to a normal distribution as the sample size $n$ increases.

More precisely, a Central Limit Theorem is a proposition giving a set of conditions that are sufficient to guarantee that

$$\sqrt{n}\,\frac{\overline{X}_n-\mu}{\sigma} \xrightarrow{d} Z$$

where $Z$ is a standard normal random variable (i.e. a normal random variable with zero mean and unit variance), $\mu$ and $\sigma$ are two constants, and $\xrightarrow{d}$ indicates convergence in distribution.

Why is the ratio $\frac{\overline{X}_n-\mu}{\sigma}$ multiplied by the square root of $n$? If we do not multiply it by $\sqrt{n}$, then $\frac{\overline{X}_n-\mu}{\sigma}$ converges to zero, provided that the conditions of a Law of Large Numbers apply. By contrast, multiplying it by $\sqrt{n}$, we obtain a sequence that converges to a proper random variable (i.e. a random variable that is not constant). When the conditions of a Central Limit Theorem apply, this random variable has a normal distribution.
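To illustrate this point, here is a small simulation (an addition to the text; the Exponential(1) distribution and the replication counts are arbitrary choices) contrasting the unscaled ratio with the $\sqrt{n}$-scaled one as $n$ grows.

    # Illustrative simulation (not from the original text): contrast the
    # unscaled ratio (Xbar_n - mu) / sigma with the sqrt(n)-scaled ratio for
    # IID Exponential(1) variables (mu = sigma = 1, an arbitrary choice).
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 1.0, 1.0

    for n in [10, 100, 1_000]:
        # 5,000 independent replications of a sample of size n
        xbar = rng.exponential(scale=1.0, size=(5_000, n)).mean(axis=1)
        unscaled = (xbar - mu) / sigma
        scaled = np.sqrt(n) * unscaled
        print(f"n={n:5d}  std(unscaled)={unscaled.std():.3f}  std(scaled)={scaled.std():.3f}")

The standard deviation of the unscaled ratio shrinks towards zero (a degenerate limit), while that of the scaled ratio stays close to one, consistent with convergence to a standard normal random variable.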

In practice, the CLT is used as follows:

  1. we observe a sample consisting of $n$ observations $X_1$, $X_2$, $\ldots$, $X_n$;

  2. if $n$ is large enough, then a standard normal distribution is a good approximation of the distribution of $\sqrt{n}\,\frac{\overline{X}_n-\mu}{\sigma}$;

  3. therefore, we pretend that $\sqrt{n}\,\frac{\overline{X}_n-\mu}{\sigma}\sim N(0,1)$, where $N(0,1)$ indicates the normal distribution with mean 0 and variance 1;

  4. as a consequence, the distribution of the sample mean $\overline{X}_n$ is $\overline{X}_n\sim N\!\left(\mu,\frac{\sigma^{2}}{n}\right)$ (a short numerical sketch of these steps follows the list).
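Below is a minimal numerical sketch of the four steps. The values of $n$, $\mu$, $\sigma$ and the threshold 1.05 are hypothetical, chosen only for illustration; the standard normal CDF is computed directly from the error function.

    # Minimal numerical sketch of steps 1-4 above; n, mu, sigma and the
    # threshold 1.05 are hypothetical values chosen only for illustration.
    from math import erf, sqrt

    def normal_cdf(z):
        """Cumulative distribution function of the standard normal."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    n = 200                 # number of observations (step 1)
    mu, sigma = 1.0, 0.5    # mean and standard deviation of each observation

    # Steps 2-4: treat the sample mean as N(mu, sigma^2 / n) and use this
    # approximation, for example, to evaluate P(sample mean <= 1.05).
    z = (1.05 - mu) / (sigma / sqrt(n))
    print(f"approximate P(sample mean <= 1.05) = {normal_cdf(z):.4f}")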

There are several Central Limit Theorems. We report some examples below.

Examples

Lindeberg-Lévy Central Limit Theorem

The best-known Central Limit Theorem is probably the Lindeberg-Lévy CLT:

Proposition (Lindeberg-Lévy CLT) Let $\{X_n\}$ be an IID sequence of random variables such that

$$\mathrm{E}[X_n]=\mu<\infty, \qquad \mathrm{Var}[X_n]=\sigma^{2}<\infty$$

where $\sigma^{2}>0$. Then, a Central Limit Theorem applies to the sample mean $\overline{X}_n$:

$$\sqrt{n}\,\frac{\overline{X}_n-\mu}{\sigma}\xrightarrow{d} Z$$

where $Z$ is a standard normal random variable and $\xrightarrow{d}$ denotes convergence in distribution.

Proof

We will just sketch a proof. For a detailed and rigorous proof see, for example, Resnick (1999) and Williams (1991). First of all, denote by $\{Z_n\}$ the sequence whose generic term is

$$Z_n = \sqrt{n}\,\frac{\overline{X}_n-\mu}{\sigma} = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\frac{X_j-\mu}{\sigma}$$

Denote by $\varphi(s)$ the characteristic function of the standardized variables $Y_j=\frac{X_j-\mu}{\sigma}$ (which is the same for all $j$, because they are identically distributed). The characteristic function of $Z_n$ is

$$\varphi_{Z_n}(t)=\mathrm{E}\!\left[\exp\!\left(i t Z_n\right)\right]=\prod_{j=1}^{n}\mathrm{E}\!\left[\exp\!\left(i\,\frac{t}{\sqrt{n}}\,Y_j\right)\right]=\left[\varphi\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n}$$

where the second equality follows from independence. Now take a second-order Taylor series expansion of $\varphi(s)$ around the point $s=0$:

$$\varphi(s)=\varphi(0)+\varphi'(0)\,s+\tfrac{1}{2}\varphi''(0)\,s^{2}+o(s^{2})=1+i\,\mathrm{E}[Y_j]\,s-\tfrac{1}{2}\mathrm{E}[Y_j^{2}]\,s^{2}+o(s^{2})$$

where $o(s^{2})$ is an infinitesimal of higher order than $s^{2}$, i.e. a quantity that converges to 0 faster than $s^{2}$ does. Since $\mathrm{E}[Y_j]=0$ and $\mathrm{E}[Y_j^{2}]=1$, this simplifies to

$$\varphi(s)=1-\tfrac{1}{2}s^{2}+o(s^{2})$$

So, we have that

$$\lim_{n\to\infty}\varphi_{Z_n}(t)=\lim_{n\to\infty}\left[1-\frac{t^{2}}{2n}+o\!\left(\frac{t^{2}}{n}\right)\right]^{n}=\exp\!\left(-\frac{t^{2}}{2}\right)$$

where $\exp\!\left(-\frac{t^{2}}{2}\right)$ is the characteristic function of a standard normal random variable $Z$ (see the lecture entitled Normal distribution). A theorem, called Lévy continuity theorem, which we do not cover in these lectures, states that if a sequence of random variables $\{Z_n\}$ is such that their characteristic functions $\varphi_{Z_n}(t)$ converge to the characteristic function $\varphi_{Z}(t)$ of a random variable $Z$, then the sequence $\{Z_n\}$ converges in distribution to $Z$. Therefore, in our case the sequence $\{Z_n\}$ converges in distribution to a standard normal distribution.
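As a quick numerical sanity check on the last limit (an addition, not part of the original proof), one can verify that $\left(1-\frac{t^{2}}{2n}\right)^{n}$ approaches $\exp\!\left(-\frac{t^{2}}{2}\right)$ as $n$ grows, for a fixed $t$:

    # Numerical check (illustrative only) that (1 - t^2/(2n))^n converges to
    # exp(-t^2/2) as n grows; t = 1.5 is an arbitrary choice.
    from math import exp

    t = 1.5
    target = exp(-t * t / 2.0)
    for n in [10, 100, 10_000, 1_000_000]:
        approx = (1.0 - t * t / (2.0 * n)) ** n
        print(f"n={n:9d}  (1 - t^2/(2n))^n = {approx:.6f}  exp(-t^2/2) = {target:.6f}")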

So, roughly speaking, under the stated assumptions, the distribution of the sample mean $\overline{X}_n$ can be approximated by a normal distribution with mean $\mu$ and variance $\frac{\sigma^{2}}{n}$ (provided $n$ is large enough).
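The quality of this approximation can be gauged by simulation. The sketch below (an addition; the Exponential(1) distribution, $n=50$ and the threshold 1.1 are arbitrary choices) compares an empirical probability for the sample mean with the value implied by the $N\!\left(\mu,\frac{\sigma^{2}}{n}\right)$ approximation.

    # Illustrative Monte Carlo check (not from the original text) of the normal
    # approximation N(mu, sigma^2/n) for the sample mean of IID Exponential(1)
    # variables (mu = sigma = 1); n = 50 and the threshold 1.1 are arbitrary.
    from math import erf, sqrt
    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n = 1.0, 1.0, 50

    # 100,000 replications of the sample mean of n Exponential(1) variables
    xbar = rng.exponential(scale=1.0, size=(100_000, n)).mean(axis=1)
    empirical = np.mean(xbar <= 1.1)

    # CLT approximation of the same probability
    z = (1.1 - mu) / (sigma / sqrt(n))
    approximate = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    print(f"empirical P(Xbar_n <= 1.1) = {empirical:.4f}, CLT approximation = {approximate:.4f}")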

Also note that the conditions for the validity of the Lindeberg-Lévy Central Limit Theorem resemble the conditions for the validity of Kolmogorov's Strong Law of Large Numbers. The only difference is the additional requirement that $\mathrm{Var}[X_n]=\sigma^{2}<\infty$.

The Central Limit Theorem for correlated sequences

In the Lindeberg-Lévy CLT (see above), the sequence $\{X_n\}$ is required to be an IID sequence. The assumption of independence can be weakened as follows:

Proposition (CLT for correlated sequences) Let $\{X_n\}$ be a stationary and mixing sequence of random variables satisfying a CLT technical condition (defined in the proof below) and such that

$$\mathrm{E}[X_n]=\mu<\infty, \qquad V=\lim_{n\to\infty}\mathrm{Var}\!\left[\sqrt{n}\,\overline{X}_n\right]<\infty$$

where $V>0$. Then, a Central Limit Theorem applies to the sample mean $\overline{X}_n$:

$$\sqrt{n}\,\frac{\overline{X}_n-\mu}{\sqrt{V}}\xrightarrow{d} Z$$

where $Z$ is a standard normal random variable and $\xrightarrow{d}$ indicates convergence in distribution.

Proof

Several different technical conditions (beyond those explicitly stated in the above proposition) are imposed in the literature in order to derive Central Limit Theorems for correlated sequences. These conditions are usually very mild and differ from author to author. We do not mention these technical conditions here and just refer to them as CLT technical conditions.

For a proof, see for example Durrett (2010) and White (2001).

So, roughly speaking, under the stated assumptions, the distribution of the sample mean $\overline{X}_n$ can be approximated by a normal distribution with mean $\mu$ and variance $\frac{V}{n}$ (provided $n$ is large enough).

Also note that the conditions for the validity of the Central Limit Theorem for correlated sequences resemble the conditions for the validity of the ergodic theorem. The main differences (beyond some technical conditions that are not explicitly stated in the above proposition) are the additional requirement that the long-run variance be finite and strictly positive, $0<V<\infty$, and the fact that ergodicity is replaced by the stronger condition of mixing.

Finally, let us mention that the variance $V$ in the above proposition, which is defined as

$$V=\lim_{n\to\infty}\mathrm{Var}\!\left[\sqrt{n}\,\overline{X}_n\right]=\mathrm{Var}[X_1]+2\sum_{j=1}^{\infty}\mathrm{Cov}\!\left[X_1,X_{1+j}\right]$$

(the second equality holds under stationarity, provided the autocovariances are absolutely summable), is called the long-run variance of $\overline{X}_n$.
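As an illustration of how the long-run variance might be estimated in practice, here is a minimal sketch (an addition to the text) based on a Bartlett-weighted, Newey-West style truncated sum of sample autocovariances; the AR(1) data-generating process, the sample size and the truncation lag are arbitrary choices.

    # Minimal, illustrative sketch of estimating the long-run variance V of a
    # stationary sequence with a Bartlett-weighted (Newey-West style) truncated
    # sum of sample autocovariances. The AR(1) process and the truncation lag L
    # are arbitrary choices made for the example.
    import numpy as np

    def long_run_variance(x, L):
        """Truncated estimate of V = Var[X_1] + 2*sum_j Cov[X_1, X_{1+j}]."""
        x = np.asarray(x, dtype=float)
        n = x.size
        xc = x - x.mean()
        v = np.dot(xc, xc) / n                           # sample variance (lag 0)
        for j in range(1, L + 1):
            gamma_j = np.dot(xc[j:], xc[:-j]) / n        # sample autocovariance at lag j
            v += 2.0 * (1.0 - j / (L + 1)) * gamma_j     # Bartlett weights
        return v

    # Illustrative AR(1) sequence: X_t = 0.5 * X_{t-1} + e_t, with e_t ~ N(0, 1).
    rng = np.random.default_rng(0)
    n, rho = 100_000, 0.5
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]

    v_hat = long_run_variance(x, L=20)
    v_true = 1.0 / (1.0 - rho) ** 2    # exact long-run variance of this AR(1)
    print(f"estimated V = {v_hat:.3f}, theoretical V = {v_true:.3f}")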

Multivariate generalizations

The results illustrated above for sequences of random variables extend in a straightforward manner to sequences of random vectors. For example, the multivariate version of the Lindeberg-Lévy CLT is:

Proposition (Multivariate Lindeberg-Lévy CLT) Let $\{X_n\}$ be an IID sequence of $K\times 1$ random vectors such that

$$\mathrm{E}[X_n]=\mu, \qquad \mathrm{Var}[X_n]=\Sigma$$

where $\Sigma$ is a positive definite (covariance) matrix. Let $\overline{X}_n=\frac{1}{n}\sum_{i=1}^{n}X_i$ be the vector of sample means. Then

$$\sqrt{n}\,\Sigma^{-1/2}\left(\overline{X}_n-\mu\right)\xrightarrow{d} Z$$

where $Z$ is a standard multivariate normal random vector and $\xrightarrow{d}$ denotes convergence in distribution.

Proof

For a proof see, for example, Basu (2004), DasGupta (2008) and McCabe and Tremayne (1993).

In a similar manner, the CLT for correlated sequences generalizes to random vectors ($V$ becomes a matrix, called the long-run covariance matrix).
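To illustrate the multivariate Lindeberg-Lévy CLT numerically, here is a minimal sketch (an addition to the text). The bivariate distribution, with one Exponential(1) component and one Uniform(0,1) component, and the sample size are arbitrary choices; the point is that the standardized sample mean vector $\sqrt{n}\,\Sigma^{-1/2}\left(\overline{X}_n-\mu\right)$ behaves approximately like a standard multivariate normal vector.

    # Minimal sketch (not from the original text) of the multivariate CLT
    # approximation: IID bivariate vectors whose first component is
    # Exponential(1) and whose second is Uniform(0, 1), independent of each
    # other, so that the exact mean vector and covariance matrix are known.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    mu = np.array([1.0, 0.5])                 # E[Exp(1)] = 1, E[U(0,1)] = 1/2
    Sigma = np.diag([1.0, 1.0 / 12.0])        # Var[Exp(1)] = 1, Var[U(0,1)] = 1/12

    sample = np.column_stack([rng.exponential(1.0, size=n),
                              rng.uniform(0.0, 1.0, size=n)])
    xbar = sample.mean(axis=0)

    # The multivariate CLT suggests sqrt(n) * Sigma^(-1/2) (xbar - mu) is
    # approximately standard multivariate normal; here the inverse of the
    # Cholesky factor of Sigma serves as the matrix square root.
    L_inv = np.linalg.inv(np.linalg.cholesky(Sigma))
    z = np.sqrt(n) * L_inv @ (xbar - mu)
    print("standardized sample mean vector:", z)  # each entry roughly N(0, 1)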

Solved exercises

Below you can find some exercises with explained solutions:

  1. Exercise set 1 (use the Central Limit Theorem to find mean and variance of approximating normal distributions).

References

Basu, A. K. (2004) Measure theory and probability, PHI Learning PVT.

DasGupta, A. (2008) Asymptotic theory of statistics and probability, Springer.

Durrett, R. (2010) Probability: theory and examples, Cambridge University Press.

McCabe, B. and A. Tremayne (1993) Elements of modern asymptotic theory with statistical applications, Manchester University Press.

Resnick, S. I. (1999) A probability path, Birkhauser.

White, H. (2001) Asymptotic theory for econometricians, Academic Press.

Williams, D. (1991) Probability with martingales, Cambridge University Press.
