
Convergence in probability

by Marco Taboga, PhD

This lecture discusses convergence in probability, first for sequences of random variables, and then for sequences of random vectors.


The intuition

As we have discussed in the lecture on Sequences of random variables and their convergence, different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are).

The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference is very small.

How to measure closeness

Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$.

Take a random variable $X$ and a strictly positive number $\varepsilon$.

Suppose that we consider $X_n$ far from $X$ when $$\left\vert X_n - X \right\vert > \varepsilon.$$

Then, the probability $$P\left( \left\vert X_n - X \right\vert > \varepsilon \right)$$ is the probability that $X_n$ is far from $X$.

How to define convergence

If $\{X_n\}$ converges to $X$, the probability that $X_n$ and $X$ are far from each other should become smaller and smaller as $n$ increases.

In other words, we should have $$\lim_{n\rightarrow \infty} P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = 0. \quad (1)$$

Note that $$\left\{ P\left( \left\vert X_n - X \right\vert > \varepsilon \right) \right\}$$ is a sequence of real numbers. Therefore, the limit in equation (1) is the usual limit of a sequence of real numbers.

We want to be very restrictive in our criterion for deciding whether $X_n$ is far from $X$. As a consequence, condition (1) should be satisfied for any, arbitrarily small, $\varepsilon$.

Definition for sequences of random variables

The intuitive considerations above lead us to the following definition of convergence.

Definition Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random variable $X$ defined on $\Omega$ if and only if $$\lim_{n\rightarrow \infty} P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = 0$$ for any $\varepsilon > 0$.

The variable $X$ is called the probability limit of the sequence, and convergence is indicated by $$X_n \overset{P}{\longrightarrow} X$$ or by $$\operatorname*{plim}_{n\rightarrow \infty} X_n = X.$$
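The definition can also be explored numerically: fix $\varepsilon$, estimate $P(|X_n - X| > \varepsilon)$ by simulation for increasing $n$, and check that the estimates shrink toward zero. The sketch below is a hypothetical illustration, not taken from this lecture: it uses the sample mean of $n$ fair coin flips, which converges in probability to $1/2$ by the weak law of large numbers.

```python
import numpy as np

def prob_far(draw_xn, x, eps, n, n_draws=20_000, rng=None):
    """Monte Carlo estimate of P(|X_n - X| > eps) for a fixed n."""
    rng = rng or np.random.default_rng(0)
    xn = draw_xn(rng, n, n_draws)  # n_draws simulated realizations of X_n
    return float(np.mean(np.abs(xn - x) > eps))

# Hypothetical example (an assumption for illustration): X_n is the mean
# of n fair coin flips; by the weak law of large numbers, X_n converges
# in probability to X = 1/2.
def coin_mean(rng, n, n_draws):
    return rng.integers(0, 2, size=(n_draws, n)).mean(axis=1)

for n in (10, 100, 1000):
    print(n, prob_far(coin_mean, 0.5, 0.1, n))
```

The estimated probabilities decrease toward zero as $n$ grows, mirroring condition (1).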

Example

The following example illustrates the concept of convergence in probability.

Let $X$ be a discrete random variable with support $R_X = \{0, 1\}$ and probability mass function $$p_X(x) = \begin{cases} 2/3 & \text{if } x = 0 \\ 1/3 & \text{if } x = 1. \end{cases}$$

Consider a sequence of random variables $\{X_n\}$ whose generic term is $$X_n = \left( 1 + \frac{1}{n} \right) X.$$

We want to prove that $\{X_n\}$ converges in probability to $X$.

Take any $\varepsilon > 0$. Note that $$\left\vert X_n - X \right\vert = \left\vert \left( 1 + \frac{1}{n} \right) X - X \right\vert = \frac{X}{n}.$$

When $X = 0$, which happens with probability $2/3$, we have that $$\left\vert X_n - X \right\vert = 0$$ and, of course, $\left\vert X_n - X \right\vert \leq \varepsilon$.

When $X = 1$, which happens with probability $1/3$, we have that $$\left\vert X_n - X \right\vert = \frac{1}{n}$$ and $\left\vert X_n - X \right\vert \leq \varepsilon$ only if $\frac{1}{n} \leq \varepsilon$ (or only if $n \geq \frac{1}{\varepsilon}$).

Therefore, $$P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = \frac{1}{3} \quad \text{if } n < \frac{1}{\varepsilon}$$ and $$P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = 0 \quad \text{if } n \geq \frac{1}{\varepsilon}.$$

Thus, $P\left( \left\vert X_n - X \right\vert > \varepsilon \right)$ trivially converges to $0$, because it is identically equal to zero for all $n$ such that $n \geq \frac{1}{\varepsilon}$.

Since $\varepsilon$ was arbitrary, we have obtained the desired result: $$\lim_{n\rightarrow \infty} P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = 0$$ for any $\varepsilon > 0$.
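The case analysis above can be evaluated exactly. The sketch below assumes the generic term $X_n = \left(1 + \frac{1}{n}\right) X$ with $P(X=0) = 2/3$ and $P(X=1) = 1/3$ (a reconstruction consistent with the probabilities used in the argument), and computes $P(|X_n - X| > \varepsilon)$ for several values of $n$:

```python
from fractions import Fraction

def prob_far(n, eps):
    """Exact P(|X_n - X| > eps), assuming X_n = (1 + 1/n) X with
    P(X = 0) = 2/3 and P(X = 1) = 1/3, so that |X_n - X| = X / n."""
    # The difference exceeds eps only when X = 1 (probability 1/3)
    # and 1/n > eps, i.e. when n < 1/eps.
    return Fraction(1, 3) if Fraction(1, n) > eps else Fraction(0)

eps = Fraction(1, 100)
print([prob_far(n, eps) for n in (10, 99, 100, 1000)])
```

For $\varepsilon = 1/100$ the probability equals $1/3$ for every $n < 100$ and drops to exactly zero from $n = 100$ onwards, which is the behavior derived in the example.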

How to generalize the definition to the multivariate case

The above notion of convergence generalizes to sequences of random vectors in a straightforward manner.

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, where each random vector $X_n$ has dimension $K \times 1$.

In the case of random variables, the sequence $\{X_n\}$ converges in probability if and only if $$\lim_{n\rightarrow \infty} P\left( d\left( X_n, X \right) > \varepsilon \right) = 0$$ for any $\varepsilon > 0$, where $d\left( X_n, X \right) = \left\vert X_n - X \right\vert$ is the distance of $X_n$ from $X$.

In the case of random vectors, the definition of convergence in probability remains the same, but distance is measured by the Euclidean norm of the difference between the two vectors: $$d\left( X_n, X \right) = \left\Vert X_n - X \right\Vert = \sqrt{\sum_{i=1}^{K} \left( X_{n,i} - X_{\bullet,i} \right)^2}$$ where the second subscript is used to indicate the individual components of the vectors $X_n$ and $X$.

Definition for sequences of random vectors

The following is a formal definition.

Definition Let $\{X_n\}$ be a sequence of $K \times 1$ random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random vector $X$ defined on $\Omega$ if and only if $$\lim_{n\rightarrow \infty} P\left( \left\Vert X_n - X \right\Vert > \varepsilon \right) = 0$$ for any $\varepsilon > 0$.

Again, $X$ is called the probability limit of the sequence, and convergence is indicated by $$X_n \overset{P}{\longrightarrow} X$$ or by $$\operatorname*{plim}_{n\rightarrow \infty} X_n = X.$$

Connection between univariate and multivariate convergence

A sequence of random vectors is convergent in probability if and only if the sequences formed by their entries are convergent in probability.

Proposition Let $\{X_n\}$ be a sequence of $K \times 1$ random vectors defined on a sample space $\Omega$. Denote by $\{X_{n,i}\}$ the sequence of random variables obtained by taking the $i$-th entry of each random vector $X_n$. The sequence $\{X_n\}$ converges in probability to the random vector $X$ if and only if the sequence $\{X_{n,i}\}$ converges in probability to the random variable $X_{\bullet,i}$ (the $i$-th component of $X$) for each $i = 1, \ldots, K$.
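The proposition can be illustrated numerically: when the Euclidean distance $\Vert X_n - X \Vert$ is unlikely to exceed $\varepsilon$, so is each componentwise distance, and vice versa. The sketch below uses a hypothetical two-dimensional example (an assumption, not from the lecture): $X$ uniform on $[0,1]^2$ and $X_n = X + Z/n$ with standard normal noise $Z$, so that $X_n$ converges in probability to $X$.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_far_vector(n, eps, n_draws=20_000):
    """Estimate P(||X_n - X|| > eps) and the componentwise probabilities
    P(|X_{n,i} - X_i| > eps) for the hypothetical example X_n = X + Z/n,
    with X uniform on [0,1]^2 and Z standard normal (2-dimensional)."""
    x = rng.uniform(size=(n_draws, 2))
    xn = x + rng.normal(size=(n_draws, 2)) / n
    dist = np.linalg.norm(xn - x, axis=1)  # Euclidean norm of the difference
    comp = np.abs(xn - x)                  # entrywise absolute differences
    p_vec = float(np.mean(dist > eps))
    p_comp = [float(np.mean(comp[:, i] > eps)) for i in range(2)]
    return p_vec, p_comp

for n in (1, 10, 100):
    print(n, prob_far_vector(n, 0.1))
```

As $n$ grows, the vector probability and both componentwise probabilities shrink to zero together, as the proposition predicts.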

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Let $U$ be a random variable having a uniform distribution on the interval $\left[ 0, 1 \right]$.

In other words, $U$ is a continuous random variable with support $$R_U = [0, 1]$$ and probability density function $$f_U(u) = \begin{cases} 1 & \text{if } u \in [0, 1] \\ 0 & \text{otherwise.} \end{cases}$$

Now, define a sequence of random variables $\{X_n\}$ as follows: $$X_n = 1_{E_n}$$ where $1_{E_n}$ is the indicator function of the event $$E_n = \left\{ \frac{j}{2^m} \leq U \leq \frac{j+1}{2^m} \right\}$$ and $m$ and $j$ are the unique integers such that $n = 2^m + j$ and $0 \leq j \leq 2^m - 1$.

Find the probability limit (if it exists) of the sequence $\{X_n\}$.

Solution

A generic term $X_n$ of the sequence, being an indicator function, can take only two values:

  • it can take value 1 with probability $$P\left( X_n = 1 \right) = \frac{1}{2^m}$$ where $m$ is an integer satisfying $$2^m \leq n < 2^{m+1}$$ and $j$ is an integer satisfying $$0 \leq j \leq 2^m - 1;$$

  • it can take value 0 with probability $$P\left( X_n = 0 \right) = 1 - \frac{1}{2^m}.$$

By the previous inequality, $m$ goes to infinity as $n$ goes to infinity and $$\lim_{n\rightarrow \infty} \frac{1}{2^m} = 0.$$ Therefore, the probability that $X_n$ is equal to zero converges to 1 as $n$ goes to infinity. So, obviously, $\{X_n\}$ converges in probability to the constant random variable $$X = 0$$ because, for any $\varepsilon > 0$, $$\lim_{n\rightarrow \infty} P\left( \left\vert X_n - X \right\vert > \varepsilon \right) \leq \lim_{n\rightarrow \infty} P\left( X_n = 1 \right) = 0.$$

Exercise 2

Does the sequence in the previous exercise also converge almost surely?

Solution

We can identify the sample space $\Omega$ with the support of $U$: $$\Omega = [0, 1]$$ and the sample points $\omega \in \Omega$ with the realizations of $U$: i.e., when the realization is $U = u$, then $\omega = u$.

Almost sure convergence requires that $$\lim_{n\rightarrow \infty} X_n(\omega) = X(\omega) \quad \text{for all } \omega \in E^c$$ where $E$ is a zero-probability event and the superscript $c$ denotes the complement of a set. In other words, the set of sample points $\omega$ for which the sequence $\{X_n(\omega)\}$ does not converge to $X(\omega)$ must be included in a zero-probability event $E$.

In our case, it is easy to see that, for any fixed sample point $\omega \in [0, 1]$, the sequence $\{X_n(\omega)\}$ does not converge to $X(\omega) = 0$, because infinitely many terms in the sequence are equal to 1. Therefore, $$\left\{ \omega \in \Omega : \{X_n(\omega)\} \text{ does not converge to } 0 \right\} = [0, 1]$$ and, trivially, there does not exist a zero-probability event including the set $[0, 1]$. Thus, the sequence does not converge almost surely to $X$.
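The contrast between the two modes of convergence can be seen in a short simulation. The sketch below assumes a dyadic "typewriter" construction for Exercise 1 (an assumption consistent with its description: $X_n$ indicates an interval of length $2^{-m}$, where $n = 2^m + j$): the probability $P(X_n = 1) = 2^{-m}$ vanishes, yet for any fixed sample point $u$ every block of indices $2^m, \ldots, 2^{m+1} - 1$ contains one $n$ with $X_n(u) = 1$, so the realized sequence never settles at 0.

```python
def x_n(n, u):
    """X_n(u) for the assumed dyadic 'typewriter' sequence: write
    n = 2**m + j with 0 <= j < 2**m; then X_n(u) = 1 exactly when
    u lies in the interval [j / 2**m, (j + 1) / 2**m]."""
    m = n.bit_length() - 1       # the largest m with 2**m <= n
    j = n - 2**m
    return 1 if j / 2**m <= u <= (j + 1) / 2**m else 0

u = 0.3  # a fixed sample point omega = u
# For each m, count how many indices n in {2**m, ..., 2**(m+1)-1}
# give X_n(u) = 1; the count never drops to zero, so X_n(u) has
# infinitely many terms equal to 1 and cannot converge to 0.
ones_in_block = [sum(x_n(n, u) for n in range(2**m, 2**(m + 1)))
                 for m in range(10)]
print(ones_in_block)
```

Each block contributes exactly one index with $X_n(u) = 1$, even though the fraction of such indices within a block, $2^{-m}$, goes to zero: convergence in probability without almost sure convergence.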

Exercise 3

Let $\{X_n\}$ be a sequence of independent continuous random variables, where $X_n$ has a uniform distribution with support $$R_{X_n} = \left[ 0, \frac{1}{n} \right]$$ and probability density function $$f_{X_n}(x) = \begin{cases} n & \text{if } x \in \left[ 0, \frac{1}{n} \right] \\ 0 & \text{otherwise.} \end{cases}$$

Find the probability limit (if it exists) of the sequence $\{X_n\}$.

Solution

As $n$ tends to infinity, the probability density tends to become concentrated around the point $x = 0$. Therefore, it seems reasonable to conjecture that the sequence $\{X_n\}$ converges in probability to the constant random variable $$X = 0.$$ To rigorously verify this claim we need to use the formal definition of convergence in probability. For any $\varepsilon > 0$, $$\lim_{n\rightarrow \infty} P\left( \left\vert X_n - X \right\vert > \varepsilon \right) = \lim_{n\rightarrow \infty} P\left( X_n > \varepsilon \right) = 0$$ because $P\left( X_n > \varepsilon \right) = \max\left( 0, 1 - n\varepsilon \right)$, which equals zero for all $n \geq \frac{1}{\varepsilon}$.
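The claim can be checked by simulation. The sketch below assumes, as in the solution above, that $X_n$ is uniform on $[0, 1/n]$ (a reconstruction consistent with a density concentrating at $0$), and compares a Monte Carlo estimate of $P(X_n > \varepsilon)$ with the exact value $\max(0, 1 - n\varepsilon)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_far(n, eps, n_draws=100_000):
    """Monte Carlo estimate of P(|X_n - 0| > eps), assuming X_n is
    uniform on [0, 1/n]; the exact value is max(0, 1 - n * eps)."""
    xn = rng.uniform(0, 1 / n, size=n_draws)
    return float(np.mean(xn > eps))

eps = 0.05
for n in (2, 10, 20, 100):
    print(n, prob_far(n, eps), max(0.0, 1 - n * eps))
```

For $n \geq 1/\varepsilon = 20$ the support $[0, 1/n]$ lies entirely inside $[0, \varepsilon]$, so the estimated probability is exactly zero, matching the limit computed above.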

How to cite

Please cite as:

Taboga, Marco (2021). "Convergence in probability", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/asymptotic-theory/convergence-in-probability.
