
Convergence in probability

This lecture discusses convergence in probability, first for sequences of random variables, and then for sequences of random vectors.

Convergence in probability of a sequence of random variables

As we have discussed in the lecture entitled Sequences of random variables and their convergence, different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are). The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference is very small.

Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. Let $X$ be a random variable and $\varepsilon$ a strictly positive number. Consider the following probability:
\[ P\left( \left| X_n - X \right| > \varepsilon \right) \]
Intuitively, $X_n$ is considered far from $X$ when $\left| X_n - X \right| > \varepsilon$; therefore, $P\left( \left| X_n - X \right| > \varepsilon \right)$ is the probability that $X_n$ is far from $X$. If $\{X_n\}$ converges to $X$, this probability should become smaller and smaller as $n$ increases. In other words, the probability of $X_n$ being far from $X$ should go to zero as $n$ increases. Formally, we should have
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) = 0 \]
Note that $\left\{ P\left( \left| X_n - X \right| > \varepsilon \right) \right\}$ is a sequence of real numbers. Therefore, the above limit is the usual limit of a sequence of real numbers.

Furthermore, the condition
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) = 0 \]
should be satisfied for any $\varepsilon$ (even for very small $\varepsilon$, which makes the criterion for deciding whether $X_n$ is far from $X$ very restrictive). This leads us to the following definition of convergence.

Definition Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random variable $X$ defined on $\Omega$ if and only if
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) = 0 \]
for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence, and convergence is indicated by
\[ X_n \overset{P}{\longrightarrow} X \]
or by
\[ \operatorname*{plim}_{n\rightarrow\infty} X_n = X \]
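As a quick numerical illustration (a sketch added here, not part of the original lecture), the probability in the definition can be estimated by Monte Carlo simulation. The Python snippet below assumes a toy sequence $X_n = X + Z/n$, with $X$ and $Z$ independent standard normal random variables; the function name and sample sizes are arbitrary choices.

import numpy as np

def estimate_prob_far(n, eps, n_draws=100_000, seed=0):
    # Monte Carlo estimate of P(|X_n - X| > eps) for the assumed toy
    # sequence X_n = X + Z / n, with X, Z independent standard normals.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_draws)       # draws of X
    z = rng.standard_normal(n_draws)       # draws of Z
    x_n = x + z / n                        # corresponding draws of X_n
    return np.mean(np.abs(x_n - x) > eps)  # fraction of "far" draws

for n in (1, 10, 100, 1000):
    print(n, estimate_prob_far(n, eps=0.05))

The printed estimates shrink toward zero as $n$ grows, which is exactly the behavior the definition requires for each fixed $\varepsilon$.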

The following example illustrates the concept of convergence in probability.

Example Let $X$ be a discrete random variable with support
\[ R_X = \{0, 1\} \]
and probability mass function
\[ p_X(x) = \begin{cases} 2/3 & \text{if } x = 0 \\ 1/3 & \text{if } x = 1 \\ 0 & \text{otherwise} \end{cases} \]
Consider a sequence of random variables $\{X_n\}$ whose generic term is
\[ X_n = \left( 1 + \frac{1}{n} \right) X \]
We want to prove that $\{X_n\}$ converges in probability to $X$. Take any $\varepsilon > 0$. Note that
\[ \left| X_n - X \right| = \left| \left( 1 + \frac{1}{n} \right) X - X \right| = \frac{1}{n} \left| X \right| \]
When $X = 0$, which happens with probability $\frac{2}{3}$, we have that
\[ \left| X_n - X \right| = 0 \]
and, of course, $\left| X_n - X \right| \le \varepsilon$. When $X = 1$, which happens with probability $\frac{1}{3}$, we have that
\[ \left| X_n - X \right| = \frac{1}{n} \]
and $\left| X_n - X \right| \le \varepsilon$ only if $\frac{1}{n} \le \varepsilon$ (or, equivalently, only if $n \ge \frac{1}{\varepsilon}$). Therefore,
\[ P\left( \left| X_n - X \right| > \varepsilon \right) = \begin{cases} 1/3 & \text{if } n < 1/\varepsilon \\ 0 & \text{if } n \ge 1/\varepsilon \end{cases} \]
and
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) = 0 \]
Thus, the sequence $\left\{ P\left( \left| X_n - X \right| > \varepsilon \right) \right\}$ trivially converges to $0$, because it is identically equal to zero for all $n$ such that $n \ge 1/\varepsilon$. Since $\varepsilon$ was arbitrary, we have obtained the desired result:
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) = 0 \]
for any $\varepsilon > 0$.
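To make the cutoff at $n \ge 1/\varepsilon$ concrete, here is a short Python check (a hypothetical illustration, not part of the original example) that evaluates the exact probability derived above:

def prob_far(n, eps):
    # Exact P(|X_n - X| > eps) for X_n = (1 + 1/n) X with P(X = 1) = 1/3:
    # the difference |X_n - X| equals X/n, so it exceeds eps only when
    # X = 1 and 1/n > eps.
    return 1/3 if 1/n > eps else 0.0

eps = 0.01
for n in (10, 50, 99, 100, 1000):  # the cutoff is at n = 1/eps = 100
    print(n, prob_far(n, eps))

The probability sits at $1/3$ for every $n < 100$ and drops to exactly $0$ from $n = 100$ onward, matching the piecewise formula.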

Convergence in probability of a sequence of random vectors

The above notion of convergence generalizes to sequences of random vectors in a straightforward manner.

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, where each random vector $X_n$ has dimension $K \times 1$. In the case of random variables, the sequence $\{X_n\}$ converges in probability if and only if
\[ \lim_{n\rightarrow\infty} P\left( d\left( X_n, X \right) > \varepsilon \right) = 0 \]
for any $\varepsilon > 0$, where
\[ d\left( X_n, X \right) = \left| X_n - X \right| \]
is the distance of $X_n$ from $X$. In the case of random vectors, the definition of convergence in probability remains the same, but distance is measured by the Euclidean norm of the difference between the two vectors:
\[ d\left( X_n, X \right) = \left\| X_n - X \right\| = \sqrt{ \sum_{i=1}^{K} \left( X_{n,i} - X_{\bullet,i} \right)^2 } \]
where the second subscript is used to indicate the individual components of the vectors $X_n$ and $X$.

The following is a formal definition.

Definition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random vector $X$ defined on $\Omega$ if and only if
\[ \lim_{n\rightarrow\infty} P\left( \left\| X_n - X \right\| > \varepsilon \right) = 0 \]
for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence and convergence is indicated by
\[ X_n \overset{P}{\longrightarrow} X \]
or by
\[ \operatorname*{plim}_{n\rightarrow\infty} X_n = X \]

Now, denote by $\left\{ X_{n,i} \right\}$ the sequence of the $i$-th components of the vectors $X_n$. It can be proved that the sequence of random vectors $\{X_n\}$ is convergent in probability if and only if all the $K$ sequences of random variables $\left\{ X_{n,i} \right\}$ are convergent in probability.

Proposition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. Denote by $\left\{ X_{n,i} \right\}$ the sequence of random variables obtained by taking the $i$-th component of each random vector $X_n$. The sequence $\{X_n\}$ converges in probability to the random vector $X$ if and only if the sequence $\left\{ X_{n,i} \right\}$ converges in probability to the random variable $X_{\bullet,i}$ (the $i$-th component of $X$) for each $i = 1, \ldots, K$.
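A small simulation can illustrate both the Euclidean-norm criterion and the component-wise characterization. The sketch below (not from the lecture) uses an assumed two-dimensional toy sequence $X_n = X + Z/n$ with standard normal entries:

import numpy as np

rng = np.random.default_rng(0)
K, n_draws, eps = 2, 100_000, 0.05

for n in (1, 10, 100):
    x = rng.standard_normal((n_draws, K))            # draws of the limit vector X
    x_n = x + rng.standard_normal((n_draws, K)) / n  # corresponding draws of X_n
    dist = np.linalg.norm(x_n - x, axis=1)           # Euclidean norm of X_n - X
    joint = np.mean(dist > eps)                      # estimate of P(||X_n - X|| > eps)
    comps = [np.mean(np.abs(x_n[:, i] - x[:, i]) > eps) for i in range(K)]
    print(n, joint, comps)                           # joint and component probabilities vanish together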

More details

The following sections contain more details about convergence in probability.

Uniform convergence in probability

When the terms of the sequence [eq1] depend on the value of a parameter that can take many different values, it is often necessary to use a slightly modified concept of convergence in probability. This is presented in the lecture entitled Uniform convergence in probability.

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Let $U$ be a random variable having a uniform distribution on the interval $\left[ 0, 1 \right]$. In other words, $U$ is an absolutely continuous random variable with support
\[ R_U = \left[ 0, 1 \right] \]
and probability density function
\[ f_U(u) = \begin{cases} 1 & \text{if } u \in \left[ 0, 1 \right] \\ 0 & \text{otherwise} \end{cases} \]
Now, define a sequence of random variables $\{X_n\}$ as follows:
\[ X_n = 1_{\left[ k/2^m, (k+1)/2^m \right]}(U) \]
where $1_{\left[ k/2^m, (k+1)/2^m \right]}$ is the indicator function of the event $\left\{ U \in \left[ k/2^m, (k+1)/2^m \right] \right\}$ and, for each $n$, the integers $m$ and $k$ are uniquely determined by
\[ n = 2^m + k, \qquad 0 \le k < 2^m \]
(so that $2^m \le n < 2^{m+1}$).

Find the probability limit (if it exists) of the sequence $\{X_n\}$.

Solution

A generic term $X_n$ of the sequence, being an indicator function, can take only two values. It takes value $1$ with probability
\[ P\left( X_n = 1 \right) = P\left( \frac{k}{2^m} \le U \le \frac{k+1}{2^m} \right) = \frac{1}{2^m} \]
and value $0$ with probability
\[ P\left( X_n = 0 \right) = 1 - \frac{1}{2^m} \]
By the inequality $2^m \le n < 2^{m+1}$, $m$ goes to infinity as $n$ goes to infinity, and
\[ \lim_{n\rightarrow\infty} P\left( X_n = 0 \right) = \lim_{m\rightarrow\infty} \left( 1 - \frac{1}{2^m} \right) = 1 \]
Therefore, the probability that $X_n$ is equal to zero converges to $1$ as $n$ goes to infinity. So $\{X_n\}$ converges in probability to the constant random variable
\[ X = 0 \]
because, for any $\varepsilon > 0$,
\[ \lim_{n\rightarrow\infty} P\left( \left| X_n - X \right| > \varepsilon \right) \le \lim_{n\rightarrow\infty} P\left( X_n = 1 \right) = \lim_{m\rightarrow\infty} \frac{1}{2^m} = 0 \]
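The dyadic construction can also be checked numerically. The Python sketch below (an illustration under the parametrization $n = 2^m + k$ used above, not part of the original solution) prints the exact probability $P(X_n = 1) = 2^{-m}$ for increasing $n$:

def interval(n):
    # Dyadic interval attached to n = 2**m + k, with 0 <= k < 2**m.
    m = n.bit_length() - 1   # m = floor(log2(n)), via integer arithmetic
    k = n - 2**m
    return k / 2**m, (k + 1) / 2**m

for n in (1, 2, 3, 4, 8, 100, 1000):
    lo, hi = interval(n)
    print(n, (lo, hi), "P(X_n = 1) =", hi - lo)  # equals 2**(-m), which goes to 0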

Exercise 2

Does the sequence in the previous exercise also converge almost surely?

Solution

We can identify the sample space $\Omega$ with the support of $U$:
\[ \Omega = R_U = \left[ 0, 1 \right] \]
and the sample points $\omega \in \Omega$ with the realizations of $U$: that is, when the realization is $U = u$, then $\omega = u$. Almost sure convergence requires that
\[ \lim_{n\rightarrow\infty} X_n(\omega) = X(\omega) \quad \text{for all } \omega \in E^c \]
where $E$ is a zero-probability event and the superscript $c$ denotes the complement of a set. In other words, the set of sample points $\omega$ for which the sequence $\left\{ X_n(\omega) \right\}$ does not converge to $X(\omega)$ must be included in a zero-probability event $E$. In our case, it is easy to see that, for any fixed sample point $\omega \in \left[ 0, 1 \right]$, the sequence $\left\{ X_n(\omega) \right\}$ does not converge to $X(\omega) = 0$, because infinitely many terms in the sequence are equal to $1$: at every dyadic level $m$ there is at least one interval $\left[ k/2^m, (k+1)/2^m \right]$ containing $\omega$. Therefore,
\[ \left\{ \omega \in \Omega : \left\{ X_n(\omega) \right\} \text{ does not converge to } X(\omega) \right\} = \left[ 0, 1 \right] \]
and, trivially, there does not exist a zero-probability event including the set $\left[ 0, 1 \right]$. Thus, the sequence does not converge almost surely to $X$.
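To make the claim that infinitely many terms equal $1$ tangible, the snippet below (again an illustration, not part of the original solution) fixes a realization $u$ and lists the indices $n \le 1000$ at which $X_n(u) = 1$; an index appears in every dyadic block $2^m \le n < 2^{m+1}$:

def x_n(n, u):
    # Value of the indicator sequence at index n for realization u.
    m = n.bit_length() - 1   # n = 2**m + k with 0 <= k < 2**m
    k = n - 2**m
    return 1 if k / 2**m <= u <= (k + 1) / 2**m else 0

u = 0.3
hits = [n for n in range(1, 1001) if x_n(n, u) == 1]
print(hits)  # one hit in every dyadic block, so the 1's never stop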

Exercise 3

Let $\{X_n\}$ be a sequence of independent continuous random variables such that $X_n$ has a uniform distribution with support
\[ R_{X_n} = \left[ 0, \frac{1}{n} \right] \]
and probability density function
\[ f_{X_n}(x) = \begin{cases} n & \text{if } x \in \left[ 0, \frac{1}{n} \right] \\ 0 & \text{otherwise} \end{cases} \]
(Note that the terms of the sequence cannot be identically distributed, because the support shrinks as $n$ grows.)

Find the probability limit (if it exists) of the sequence $\{X_n\}$.

Solution

As $n$ tends to infinity, the probability density tends to become concentrated around the point $x = 0$. Therefore, it seems reasonable to conjecture that the sequence $\{X_n\}$ converges in probability to the constant random variable
\[ X = 0 \]
To rigorously verify this claim, we need to use the formal definition of convergence in probability. For any $\varepsilon > 0$,
\[ P\left( \left| X_n - X \right| > \varepsilon \right) = P\left( X_n > \varepsilon \right) = \begin{cases} 1 - n\varepsilon & \text{if } \varepsilon < 1/n \\ 0 & \text{if } \varepsilon \ge 1/n \end{cases} \]
which converges to $0$ because $\varepsilon \ge 1/n$ for all $n \ge 1/\varepsilon$.
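The piecewise expression above is easy to tabulate. This short check (not from the original solution) prints $P(X_n > \varepsilon) = \max(0, 1 - n\varepsilon)$ for a few values of $n$:

eps = 0.001
for n in (10, 100, 500, 999, 1000, 2000):
    p = max(0.0, 1 - n * eps)  # P(X_n > eps) when X_n is uniform on [0, 1/n]
    print(n, p)                # reaches 0 once n >= 1/eps = 1000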
