
Law of Large Numbers

by Marco Taboga, PhD

A Law of Large Numbers (LLN) is a proposition that provides a set of sufficient conditions for the convergence of the sample mean to a constant.

Typically, the constant is the expected value of the distribution from which the sample has been drawn.


The sample mean

Let $\{X_n\}$ be a sequence of random variables.

Let $\bar{X}_n$ be the sample mean of the first $n$ terms of the sequence: $$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$

A Law of Large Numbers (LLN) states some conditions that are sufficient to guarantee the convergence of $\bar{X}_n$ to a constant, as the sample size $n$ increases.


Typically, all the random variables in the sequence $\{X_n\}$ have the same expected value: $$\mathrm{E}[X_n] = \mu \quad \text{for all } n.$$ In this case, the constant to which the sample mean converges is $\mu$ (which is called the population mean).

But there are also Laws of Large Numbers in which the terms of the sequence $\{X_n\}$ are not required to have the same expected value. In these cases, which are not treated in this lecture, the constant to which the sample mean converges is an average of the expected values of the individual terms of the sequence.
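In the typical case, the convergence of $\bar{X}_n$ to $\mu$ is easy to see in a quick simulation. Below is a minimal sketch (the exponential distribution, its mean of 2, and the sample size are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: draw an iid sample and watch the running sample mean
# approach the population mean mu. Exponential(mu = 2) is an
# arbitrary illustrative choice.
rng = np.random.default_rng(0)
mu = 2.0
x = rng.exponential(scale=mu, size=100_000)

# Running sample mean Xbar_n for n = 1, ..., 100000
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

print(running_mean[99])   # n = 100: still noisy
print(running_mean[-1])   # n = 100000: close to mu = 2
```

As $n$ grows, the running mean settles closer and closer to $\mu$.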

There are literally dozens of LLNs; we report some important examples below. Some apply to independent sequences and some to correlated sequences; some guarantee convergence in probability, and some almost sure convergence.

Weak Laws

A LLN is called a Weak Law of Large Numbers (WLLN) if the sample mean converges in probability.

The adjective weak is used because convergence in probability is often called weak convergence. It distinguishes these laws from Strong Laws of Large Numbers, in which the sample mean is required to converge almost surely.

For a quick reminder of the main differences between the two modes of convergence, see the lectures entitled Convergence in probability and Almost sure convergence.

Chebyshev's Weak Law of Large Numbers

One of the best known WLLNs is Chebyshev's.

Proposition (Chebyshev's WLLN) Let $\{X_n\}$ be an uncorrelated and covariance stationary sequence: $$\mathrm{E}[X_n] = \mu \quad \forall n, \qquad \mathrm{Var}[X_n] = \sigma^2 < \infty \quad \forall n, \qquad \mathrm{Cov}[X_n, X_m] = 0 \quad \forall n \neq m.$$ Then, a Weak Law of Large Numbers applies to the sample mean: $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_n = \mu,$$ where $\operatorname{plim}$ denotes a probability limit.

Proof

The expected value of the sample mean $\bar{X}_n$ is $$\mathrm{E}[\bar{X}_n] = \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[X_i] = \mu.$$ The variance of the sample mean $\bar{X}_n$ is $$\mathrm{Var}[\bar{X}_n] = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}[X_i] = \frac{\sigma^2}{n},$$ where we have used the fact that the terms of the sequence are uncorrelated. Now we can apply Chebyshev's inequality to the sample mean $\bar{X}_n$: $$\mathrm{P}\left(\left\vert \bar{X}_n - \mathrm{E}[\bar{X}_n]\right\vert \geq k\right) \leq \frac{\mathrm{Var}[\bar{X}_n]}{k^2}$$ for any $k > 0$ (i.e., for any strictly positive real number $k$). Plugging in the values for the expected value and the variance derived above, we obtain $$\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \leq \frac{\sigma^2}{nk^2}.$$ Since $$\lim_{n\rightarrow\infty}\frac{\sigma^2}{nk^2} = 0$$ and $$\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \geq 0,$$ it must also be that $$\lim_{n\rightarrow\infty}\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) = 0.$$ Note that this holds for any arbitrarily small $k$. By the very definition of convergence in probability, this means that $\bar{X}_n$ converges in probability to $\mu$ (if you are wondering about strict and weak inequalities here and in the definition of convergence in probability, note that $\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \rightarrow 0$ implies $\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert > \varepsilon\right) \rightarrow 0$ for any strictly positive $\varepsilon < k$).
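The Chebyshev-inequality step of the proof can be checked numerically. The sketch below uses iid standard normal draws, so that $\mu = 0$ and $\sigma^2 = 1$; the values of $n$ and $k$ are arbitrary choices:

```python
import numpy as np

# Sketch: empirical check of Chebyshev's inequality applied to the
# sample mean, P(|Xbar_n - mu| >= k) <= Var[Xbar_n] / k^2, for iid
# standard normal draws (mu = 0, sigma^2 = 1; n and k are arbitrary).
rng = np.random.default_rng(5)
n, n_replications, k = 25, 100_000, 0.5

# Each row is one sample of size n.
sample_means = rng.standard_normal(size=(n_replications, n)).mean(axis=1)

lhs = np.mean(np.abs(sample_means) >= k)  # empirical P(|Xbar_n - mu| >= k)
rhs = (1.0 / n) / k**2                    # (sigma^2 / n) / k^2
print(lhs, "<=", rhs)                     # the bound holds (loosely)
```

As the proof suggests, the bound is far from tight, but it is all that is needed for convergence.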

Note that it is customary to state Chebyshev's Weak Law of Large Numbers as a result on the convergence in probability of the sample mean: $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_n = \mu.$$

However, the conditions of the above theorem guarantee the mean square convergence of the sample mean to $\mu$: $$\lim_{n\rightarrow\infty}\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = 0.$$

Proof

In the above proof of Chebyshev's WLLN, it is proved that $$\mathrm{E}[\bar{X}_n] = \mu$$ and that $$\mathrm{Var}[\bar{X}_n] = \frac{\sigma^2}{n}.$$ This implies that $$\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = \mathrm{Var}[\bar{X}_n] = \frac{\sigma^2}{n}.$$ As a consequence, $$\lim_{n\rightarrow\infty}\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = \lim_{n\rightarrow\infty}\frac{\sigma^2}{n} = 0,$$ but this is just the definition of mean square convergence of $\bar{X}_n$ to $\mu$.

Hence, in Chebyshev's WLLN, convergence in probability is just a consequence of the fact that convergence in mean square implies convergence in probability.
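The identity $\mathrm{Var}[\bar{X}_n] = \sigma^2 / n$, which drives both convergence results above, can be verified by Monte Carlo. A minimal sketch, assuming iid Uniform(0, 1) draws (so $\sigma^2 = 1/12$; the sample size and replication count are arbitrary choices):

```python
import numpy as np

# Sketch: Monte Carlo check that Var[Xbar_n] = sigma^2 / n for
# uncorrelated (here iid) terms. Uniform(0, 1) has sigma^2 = 1/12.
rng = np.random.default_rng(1)
n, n_replications = 50, 100_000

sample_means = rng.uniform(size=(n_replications, n)).mean(axis=1)

empirical_var = sample_means.var()
theoretical_var = (1 / 12) / n
print(empirical_var, theoretical_var)  # the two values are close
```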

Chebyshev's Weak Law of Large Numbers for correlated sequences

Chebyshev's WLLN sets forth the requirement that the terms of the sequence $\{X_n\}$ have zero covariance with each other. By relaxing this requirement and allowing for some correlation between the terms of the sequence, a more general version of Chebyshev's Weak Law of Large Numbers can be obtained.

Proposition (Chebyshev's WLLN for correlated sequences) Let $\{X_n\}$ be a covariance stationary sequence of random variables: $$\mathrm{E}[X_n] = \mu \quad \forall n, \qquad \mathrm{Var}[X_n] = \sigma^2 < \infty \quad \forall n, \qquad \mathrm{Cov}[X_n, X_{n-j}] = \gamma_j \quad \forall n, j,$$ where $\gamma_j$ depends only on $j$ and not on $n$. If covariances tend to be zero on average, that is, if $$\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=0}^{n-1}\gamma_j = 0,$$ then a Weak Law of Large Numbers applies to the sample mean: $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_n = \mu.$$

Proof

For a full proof see, e.g., Karlin and Taylor (1975). We give here a proof based on the assumption that covariances are absolutely summable: $$\sum_{j=1}^{\infty}\left\vert \gamma_j\right\vert < \infty,$$ which is a stronger assumption than the assumption made in the proposition that covariances tend to be zero on average. The expected value of the sample mean $\bar{X}_n$ is $$\mathrm{E}[\bar{X}_n] = \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[X_i] = \mu.$$ The variance of the sample mean $\bar{X}_n$ is $$\mathrm{Var}[\bar{X}_n] = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{l=1}^{n}\mathrm{Cov}[X_i, X_l] = \frac{1}{n^2}\left(n\gamma_0 + 2\sum_{j=1}^{n-1}(n-j)\gamma_j\right).$$ Note that $$\left\vert 2\sum_{j=1}^{n-1}(n-j)\gamma_j\right\vert \leq 2\sum_{j=1}^{n-1}(n-j)\left\vert \gamma_j\right\vert \leq 2n\sum_{j=1}^{\infty}\left\vert \gamma_j\right\vert.$$ But the covariances are absolutely summable, so that $$\sum_{j=1}^{\infty}\left\vert \gamma_j\right\vert = \bar{\gamma},$$ where $\bar{\gamma}$ is a finite constant. Therefore, $$\mathrm{Var}[\bar{X}_n] \leq \frac{1}{n^2}\left(n\gamma_0 + 2n\bar{\gamma}\right) = \frac{\gamma_0 + 2\bar{\gamma}}{n}.$$ Now we can apply Chebyshev's inequality to the sample mean $\bar{X}_n$: $$\mathrm{P}\left(\left\vert \bar{X}_n - \mathrm{E}[\bar{X}_n]\right\vert \geq k\right) \leq \frac{\mathrm{Var}[\bar{X}_n]}{k^2}$$ for any $k > 0$ (i.e., for any strictly positive real number $k$). Plugging in the values for the expected value and the variance derived above, we obtain $$\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \leq \frac{\gamma_0 + 2\bar{\gamma}}{nk^2}.$$ Since $$\lim_{n\rightarrow\infty}\frac{\gamma_0 + 2\bar{\gamma}}{nk^2} = 0$$ and $$\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \geq 0,$$ it must also be that $$\lim_{n\rightarrow\infty}\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) = 0.$$ Note that this holds for any arbitrarily small $k$. By the definition of convergence in probability, this means that $\bar{X}_n$ converges in probability to $\mu$ (if you are wondering about strict and weak inequalities here and in the definition of convergence in probability, note that $\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert \geq k\right) \rightarrow 0$ implies $\mathrm{P}\left(\left\vert \bar{X}_n - \mu\right\vert > \varepsilon\right) \rightarrow 0$ for any strictly positive $\varepsilon < k$).

Chebyshev's Weak Law of Large Numbers for correlated sequences has been stated as a result on the convergence in probability of the sample mean: $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_n = \mu.$$

However, the conditions of the above theorem also guarantee the mean square convergence of the sample mean to $\mu$: $$\lim_{n\rightarrow\infty}\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = 0.$$

Proof

In the above proof of Chebyshev's Weak Law of Large Numbers for correlated sequences, we proved that $$\mathrm{E}[\bar{X}_n] = \mu$$ and that $$\mathrm{Var}[\bar{X}_n] \leq \frac{\gamma_0 + 2\bar{\gamma}}{n}.$$ This implies $$\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = \mathrm{Var}[\bar{X}_n] \leq \frac{\gamma_0 + 2\bar{\gamma}}{n}.$$ Thus, taking limits on both sides, we obtain $$\lim_{n\rightarrow\infty}\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] \leq \lim_{n\rightarrow\infty}\frac{\gamma_0 + 2\bar{\gamma}}{n} = 0.$$ But $$\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] \geq 0,$$ so it must be that $$\lim_{n\rightarrow\infty}\mathrm{E}\left[(\bar{X}_n - \mu)^2\right] = 0.$$ This is just the definition of mean square convergence of $\bar{X}_n$ to $\mu$.

Hence, also in Chebyshev's Weak Law of Large Numbers for correlated sequences, convergence in probability follows from the fact that convergence in mean square implies convergence in probability.
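The "covariances tend to be zero on average" condition is easy to illustrate. A minimal sketch, assuming geometrically decaying covariances $\gamma_j = \gamma_0 \rho^j$ (an arbitrary illustrative choice, and an absolutely summable one):

```python
import numpy as np

# Sketch: for geometrically decaying covariances gamma_j = gamma_0 * rho^j
# (an arbitrary illustrative choice), the average covariance
# (1/n) * sum_{j=0}^{n-1} gamma_j shrinks to zero as n grows.
gamma_0, rho = 1.0, 0.9

def average_covariance(n):
    j = np.arange(n)
    return (gamma_0 * rho**j).sum() / n

print([round(average_covariance(n), 4) for n in (10, 100, 10_000)])
# → [0.6513, 0.1, 0.001]
```

The partial sums of $\gamma_j$ stay bounded, so dividing by $n$ drives the average to zero, exactly as the proposition requires.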

Strong Laws

A LLN is called a Strong Law of Large Numbers (SLLN) if the sample mean converges almost surely.

The adjective Strong is used to make a distinction from Weak Laws of Large Numbers, where the sample mean is required to converge in probability.

Kolmogorov's Strong Law of Large Numbers

Among SLLNs, Kolmogorov's is probably the best known.

Proposition (Kolmogorov's SLLN) Let $\{X_n\}$ be an iid sequence of random variables having finite mean: $$\mathrm{E}[X_n] = \mu < \infty.$$ Then, a Strong Law of Large Numbers applies to the sample mean: $$\bar{X}_n \xrightarrow{\text{a.s.}} \mu,$$ where $\xrightarrow{\text{a.s.}}$ denotes almost sure convergence.

Proof

See, for example, Resnick (1999) and Williams (1991).

Note the difference between the assumptions needed to prove the WLLN and the SLLN: Chebyshev's WLLN requires finite variance and uncorrelated terms, while Kolmogorov's SLLN requires iid terms with finite mean.
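Kolmogorov's SLLN can be illustrated by simulating several sample paths. A minimal sketch using fair coin flips (an arbitrary choice of iid distribution with finite mean $\mu = 0.5$):

```python
import numpy as np

# Sketch: several independent sample paths of Xbar_n for iid
# Bernoulli(0.5) draws (fair coin flips). Kolmogorov's SLLN says
# almost every path converges to mu = 0.5.
rng = np.random.default_rng(2)
n, n_paths = 100_000, 5

flips = rng.integers(0, 2, size=(n_paths, n))
paths = np.cumsum(flips, axis=1) / np.arange(1, n + 1)

print(paths[:, -1])  # every simulated path ends up near 0.5
```

Almost sure convergence is a statement about whole paths: with probability one, each realized trajectory of $\bar{X}_n$ settles at $\mu$, as every path in this simulation does.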

Ergodic theorem

In Kolmogorov's SLLN, the sequence $\{X_n\}$ is required to be an iid sequence. This requirement can be weakened by requiring $\{X_n\}$ to be stationary and ergodic.

Proposition (Ergodic Theorem) Let $\{X_n\}$ be a stationary and ergodic sequence of random variables having finite mean: $$\mathrm{E}[X_n] = \mu < \infty.$$ Then, a Strong Law of Large Numbers applies to the sample mean: $$\bar{X}_n \xrightarrow{\text{a.s.}} \mu.$$

Proof

See, for example, Karlin and Taylor (1975) and White (2001).

Laws of Large Numbers for random vectors

The LLNs we have just presented concern sequences of random variables. However, they can be extended in a straightforward manner to sequences of random vectors.

Proposition Let $\{X_n\}$ be a sequence of $K \times 1$ random vectors, let $\mathrm{E}[X_n] = \mu$ be their common expected value and $$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$ their sample mean. Denote the $j$-th component of $X_n$ by $X_{n,j}$ and the $j$-th component of $\bar{X}_n$ by $\bar{X}_{n,j}$. Then:

  • a Weak Law of Large Numbers applies to the sample mean $\bar{X}_n$ if and only if a Weak Law of Large Numbers applies to each of the components of the vector $\bar{X}_n$, that is, if and only if $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_{n,j} = \mu_j \quad \text{for all } j = 1, \ldots, K;$$

  • a Strong Law of Large Numbers applies to the sample mean $\bar{X}_n$ if and only if a Strong Law of Large Numbers applies to each of the components of the vector $\bar{X}_n$, that is, if and only if $$\bar{X}_{n,j} \xrightarrow{\text{a.s.}} \mu_j \quad \text{for all } j = 1, \ldots, K.$$

Proof

This is a consequence of the fact that a vector converges in probability (almost surely) if and only if all of its components converge in probability (almost surely). See the lectures entitled Convergence in probability and Almost sure convergence.
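The componentwise equivalence can be illustrated numerically. A minimal sketch for $K = 2$ (the component means and the normal distribution are arbitrary choices):

```python
import numpy as np

# Sketch: the sample mean of iid 2x1 random vectors converges
# componentwise to the vector of expected values (here (1, -3);
# arbitrary choices).
rng = np.random.default_rng(3)
mu = np.array([1.0, -3.0])
x = mu + rng.standard_normal(size=(200_000, 2))

xbar = x.mean(axis=0)
print(xbar)  # close to [ 1., -3.]
```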

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Let $\{Z_n\}$ be an IID sequence.

A generic term $Z_n$ of the sequence has mean $\mu$ and variance $\sigma^2$.

Let $\{X_n\}$ be a covariance stationary sequence such that a generic term of the sequence satisfies $$X_n = \rho X_{n-1} + Z_n,$$ where $-1 < \rho < 1$.

Denote by $$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$ the sample mean of the sequence.

Verify whether the sequence $\{X_n\}$ satisfies the conditions that are required by Chebyshev's Weak Law of Large Numbers. In the affirmative case, find its probability limit.

Solution

By assumption the sequence $\{X_n\}$ is covariance stationary, so all the terms of the sequence have the same expected value. Taking the expected value of both sides of the equation $$X_n = \rho X_{n-1} + Z_n,$$ we obtain $$\mathrm{E}[X_n] = \rho\mathrm{E}[X_{n-1}] + \mathrm{E}[Z_n] = \rho\mathrm{E}[X_n] + \mu.$$ Solving for $\mathrm{E}[X_n]$, we obtain $$\mathrm{E}[X_n] = \frac{\mu}{1-\rho}.$$ By the same token, the variance can be derived from $$\mathrm{Var}[X_n] = \rho^2\mathrm{Var}[X_{n-1}] + \mathrm{Var}[Z_n] = \rho^2\mathrm{Var}[X_n] + \sigma^2,$$ which, solving for $\mathrm{Var}[X_n]$, yields $$\mathrm{Var}[X_n] = \frac{\sigma^2}{1-\rho^2} = \gamma_0.$$ Now, we need to derive $\mathrm{Cov}[X_n, X_{n-j}]$. Note that, by recursive substitution, $$X_n = \rho^j X_{n-j} + \sum_{i=0}^{j-1}\rho^i Z_{n-i}.$$ Since $X_{n-j}$ is uncorrelated with $Z_{n-j+1}, \ldots, Z_n$, the covariance between two terms of the sequence is $$\gamma_j = \mathrm{Cov}[X_n, X_{n-j}] = \rho^j\mathrm{Var}[X_{n-j}] = \rho^j\gamma_0.$$ The sum of the covariances is $$\frac{1}{n}\sum_{j=0}^{n-1}\gamma_j = \frac{\gamma_0}{n}\sum_{j=0}^{n-1}\rho^j = \frac{\gamma_0}{n}\cdot\frac{1-\rho^n}{1-\rho}.$$ Thus, covariances tend to be zero on average: $$\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=0}^{n-1}\gamma_j = 0,$$ and the conditions of Chebyshev's Weak Law of Large Numbers for correlated sequences are satisfied. Therefore, the sample mean converges in probability to the population mean: $$\operatorname*{plim}_{n\rightarrow\infty}\bar{X}_n = \frac{\mu}{1-\rho}.$$
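The exercise can also be checked by simulation. A minimal sketch, assuming the recursion $X_n = \rho X_{n-1} + Z_n$ with normal innovations (the values of $\mu$, $\sigma$, $\rho$ and the sample size are arbitrary choices):

```python
import numpy as np

# Sketch: simulate the recursion X_n = rho * X_{n-1} + Z_n with {Z_n}
# iid normal (mean mu, std sigma; all values arbitrary choices) and
# check that the sample mean approaches mu / (1 - rho).
rng = np.random.default_rng(4)
mu, sigma, rho, n = 1.0, 2.0, 0.5, 500_000

z = rng.normal(loc=mu, scale=sigma, size=n)
x = np.empty(n)
x[0] = mu / (1 - rho)  # start at the stationary mean
for t in range(1, n):
    x[t] = rho * x[t - 1] + z[t]

print(x.mean())  # close to mu / (1 - rho) = 2.0
```

Because the terms are positively correlated, the sample mean converges more slowly than in the iid case, but it still settles at the population mean $\mu / (1 - \rho)$.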

References

Karlin, S. and H. E. Taylor (1975) A first course in stochastic processes, Academic Press.

Resnick, S. I. (1999) A probability path, Birkhauser.

White, H. (2001) Asymptotic theory for econometricians, Academic Press.

Williams, D. (1991) Probability with martingales, Cambridge University Press.

How to cite

Please cite as:

Taboga, Marco (2021). "Law of Large Numbers", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/asymptotic-theory/law-of-large-numbers.
