Statlect - The Digital Textbook

Convergence in probability

This lecture discusses convergence in probability, first for sequences of random variables, and then for sequences of random vectors.

Convergence in probability of a sequence of random variables

As we have discussed in the lecture entitled Sequences of random variables and their convergence, different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are). The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference is very small.

Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. Let $X$ be a random variable and $\varepsilon$ a strictly positive number. Consider the following probability:$$P\left(\left|X_n - X\right| > \varepsilon\right)$$Intuitively, $X_n$ is considered far from $X$ when $\left|X_n - X\right| > \varepsilon$; therefore, $P\left(\left|X_n - X\right| > \varepsilon\right)$ is the probability that $X_n$ is far from $X$. If $\{X_n\}$ converges to $X$, then $P\left(\left|X_n - X\right| > \varepsilon\right)$ should become smaller and smaller as $n$ increases. In other words, the probability of $X_n$ being far from $X$ should go to zero as $n$ increases. Formally, we should have$$\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$$Note that $\left\{P\left(\left|X_n - X\right| > \varepsilon\right)\right\}$ is a sequence of real numbers. Therefore, the above limit is the usual limit of a sequence of real numbers.

Furthermore, the condition $\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$ should be satisfied for any $\varepsilon$, including very small $\varepsilon$, which makes the criterion for deciding whether $X_n$ is far from $X$ very restrictive. This leads us to the following definition of convergence.

Definition Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random variable $X$ defined on $\Omega$ if and only if$$\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$$for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence, and convergence is indicated by$$X_n \overset{P}{\longrightarrow} X$$or by$$\underset{n\rightarrow\infty}{\operatorname{plim}}\ X_n = X$$
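Before turning to a worked example, the definition can be explored numerically. The following sketch (an addition to this lecture, not part of it) estimates $P\left(\left|X_n - X\right| > \varepsilon\right)$ by Monte Carlo simulation for the hypothetical sequence $X_n = X + Z/n$, where $Z$ is standard normal noise, so that $\left|X_n - X\right| = \left|Z\right|/n$:

```python
import random

def estimate_prob_far(n, eps=0.1, trials=20000, seed=0):
    """Monte Carlo estimate of P(|X_n - X| > eps) for the
    hypothetical sequence X_n = X + Z/n, with Z standard normal,
    so that |X_n - X| = |Z| / n."""
    rng = random.Random(seed)
    far = sum(1 for _ in range(trials)
              if abs(rng.gauss(0.0, 1.0)) / n > eps)
    return far / trials

# The estimated probability of X_n being "far" from X shrinks as n grows:
for n in (1, 10, 100):
    print(n, estimate_prob_far(n))
```

For $n = 100$ the event $\left|Z\right| > 10$ is essentially impossible, so the estimate is zero, matching the limit required by the definition.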

The following example illustrates the concept of convergence in probability.

Example Let $X$ be a discrete random variable with support $R_X = \{0, 1\}$ and probability mass function$$p_X(x) = \begin{cases} 1/3 & \text{if } x = 1 \\ 2/3 & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}$$Consider a sequence of random variables $\{X_n\}$ whose generic term is$$X_n = \left(1 + \frac{1}{n}\right) X$$We want to prove that $\{X_n\}$ converges in probability to $X$. Take any $\varepsilon > 0$. Note that$$\left|X_n - X\right| = \left|\left(1 + \frac{1}{n}\right) X - X\right| = \frac{1}{n} X$$When $X = 0$, which happens with probability $\frac{2}{3}$, we have that$$\left|X_n - X\right| = 0$$and, of course, $\left|X_n - X\right| \leq \varepsilon$. When $X = 1$, which happens with probability $\frac{1}{3}$, we have that$$\left|X_n - X\right| = \frac{1}{n}$$and $\left|X_n - X\right| \leq \varepsilon$ only if $\frac{1}{n} \leq \varepsilon$ (or only if $n \geq \frac{1}{\varepsilon}$). Therefore,$$P\left(\left|X_n - X\right| > \varepsilon\right) = \begin{cases} \frac{1}{3} & \text{if } n < \frac{1}{\varepsilon} \\ 0 & \text{if } n \geq \frac{1}{\varepsilon} \end{cases}$$and$$\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$$Thus, the sequence $\left\{P\left(\left|X_n - X\right| > \varepsilon\right)\right\}$ trivially converges to $0$, because it is identically equal to zero for all $n$ such that $n \geq \frac{1}{\varepsilon}$. Since $\varepsilon$ was arbitrary, we have obtained the desired result:$$\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$$for any $\varepsilon > 0$.
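The case analysis in the example can be checked mechanically. This short sketch (an addition, not part of the original example) computes the exact probability $P\left(\left|X_n - X\right| > \varepsilon\right)$ with rational arithmetic:

```python
from fractions import Fraction

def prob_far(n, eps):
    """Exact P(|X_n - X| > eps) for the example's sequence
    X_n = (1 + 1/n) X with P(X = 1) = 1/3 and P(X = 0) = 2/3.
    Since |X_n - X| = X / n, the event {|X_n - X| > eps} occurs
    exactly when X = 1 and 1/n > eps."""
    return Fraction(1, 3) if Fraction(1, n) > eps else Fraction(0)

# With eps = 1/10, the probability is 1/3 for n < 10 and 0 from n = 10 on:
eps = Fraction(1, 10)
print([prob_far(n, eps) for n in (5, 9, 10, 50)])
```

The probability is identically zero once $n \geq 1/\varepsilon$, which is why the limit is reached trivially.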

Convergence in probability of a sequence of random vectors

The above notion of convergence generalizes to sequences of random vectors in a straightforward manner.

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, where each random vector $X_n$ has dimension $K \times 1$. In the case of random variables, the sequence $\{X_n\}$ converges in probability to $X$ if and only if$$\lim_{n\rightarrow\infty} P\left(\left|X_n - X\right| > \varepsilon\right) = 0$$for any $\varepsilon > 0$, where $\left|X_n - X\right|$ is the distance of $X_n$ from $X$. In the case of random vectors, the definition of convergence in probability remains the same, but distance is measured by the Euclidean norm of the difference between the two vectors:$$\left\Vert X_n - X \right\Vert = \sqrt{\sum_{i=1}^{K} \left(X_{n,i} - X_{\bullet,i}\right)^2}$$where the second subscript is used to indicate the individual components of the vectors $X_n$ and $X$.
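As a concrete aid (an illustration added here, not part of the lecture), the Euclidean norm of the difference between two realized vectors can be computed as follows:

```python
import math

def euclidean_distance(x_n, x):
    """Euclidean norm of the difference of two K-dimensional vectors,
    the distance used in the definition for random vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_n, x)))

# Hypothetical realizations of X_n and X (K = 3):
print(euclidean_distance([1.0, 2.0, 2.0], [1.0, 0.0, 0.0]))  # sqrt(8)
```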

The following is a formal definition.

Definition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random vector $X$ defined on $\Omega$ if and only if$$\lim_{n\rightarrow\infty} P\left(\left\Vert X_n - X \right\Vert > \varepsilon\right) = 0$$for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence, and convergence is indicated by$$X_n \overset{P}{\longrightarrow} X$$or by$$\underset{n\rightarrow\infty}{\operatorname{plim}}\ X_n = X$$

Now, denote by $\left\{X_{n,i}\right\}$ the sequence of the $i$-th components of the vectors $X_n$. It can be proved that the sequence of random vectors $\{X_n\}$ is convergent in probability if and only if all of the $K$ sequences of random variables $\left\{X_{n,i}\right\}$ are convergent in probability.

Proposition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. Denote by $\left\{X_{n,i}\right\}$ the sequence of random variables obtained by taking the $i$-th component of each random vector $X_n$. The sequence $\{X_n\}$ converges in probability to the random vector $X$ if and only if the sequence $\left\{X_{n,i}\right\}$ converges in probability to the random variable $X_{\bullet,i}$ (the $i$-th component of $X$) for each $i = 1, \ldots, K$.
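The proposition rests on the sandwich inequality $\left|d_i\right| \leq \left\Vert d \right\Vert \leq \sqrt{K} \max_i \left|d_i\right|$ for the difference vector $d = X_n - X$: if the norm is small, every component is small, and vice versa. A minimal numeric check of this inequality (an illustration added here, not the proof):

```python
import math

def norm(d):
    """Euclidean norm of a difference vector d = X_n - X."""
    return math.sqrt(sum(c * c for c in d))

# Sandwich inequality |d_i| <= ||d|| <= sqrt(K) * max_i |d_i|,
# checked on an arbitrary difference vector:
d = [0.3, -1.2, 0.05]
K = len(d)
assert all(abs(c) <= norm(d) for c in d)
assert norm(d) <= math.sqrt(K) * max(abs(c) for c in d)
print("sandwich inequality holds for d =", d)
```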

More details

Uniform convergence in probability

When the terms of the sequence $\{X_n\}$ depend on the value of a parameter that can take many different values, it is often necessary to use a slightly modified concept of convergence in probability. This is presented in the lecture entitled Uniform convergence in probability.

Solved exercises

Below you can find some exercises with explained solutions:

  1. Exercise set 1 (verify convergence in probability and find the probability limit).
