This lecture discusses convergence in probability, first for sequences of random variables, and then for sequences of random vectors.
As we have discussed in the lecture entitled Sequences of random variables and their convergence, different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are). The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference is very small.
Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. Let $X$ be a random variable and $\varepsilon$ a strictly positive number. Consider the following probability:
$$P\left(|X_n - X| > \varepsilon\right)$$
Intuitively, $X_n$ is considered far from $X$ when $|X_n - X| > \varepsilon$; therefore, $P\left(|X_n - X| > \varepsilon\right)$ is the probability that $X_n$ is far from $X$. If $\{X_n\}$ converges to $X$, $P\left(|X_n - X| > \varepsilon\right)$ should become smaller and smaller as $n$ increases. In other words, the probability of $X_n$ being far from $X$ should go to zero when $n$ increases. Formally, we should have
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = 0$$
Note that $\left\{P\left(|X_n - X| > \varepsilon\right)\right\}$ is a sequence of real numbers. Therefore, the above limit is the usual limit of a sequence of real numbers.
Furthermore, the condition
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = 0$$
should be satisfied for any $\varepsilon$ (also for very small $\varepsilon$, which means that we are very restrictive in our criterion for deciding whether $X_n$ is far from $X$). This leads us to the following definition of convergence.
Definition Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random variable $X$ defined on $\Omega$ if and only if
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = 0$$
for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence and convergence is indicated by
$$X_n \overset{P}{\longrightarrow} X$$
or by
$$\operatorname{plim}_{n\to\infty} X_n = X$$
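To make the definition concrete, here is a small Python sketch (an illustration, not part of the lecture) that Monte Carlo estimates the sequence of probabilities $P\left(|X_n - X| > \varepsilon\right)$ for a hypothetical sequence $X_n = X + Z/\sqrt{n}$, where $Z$ is standard normal noise. The estimates shrink toward zero as $n$ grows, which is exactly what convergence in probability requires.

```python
import random

def prob_far(n, eps=0.1, trials=10_000, seed=0):
    """Monte Carlo estimate of P(|X_n - X| > eps) for the hypothetical
    sequence X_n = X + Z / sqrt(n), with Z standard normal noise.
    The difference X_n - X = Z / sqrt(n) does not depend on X itself."""
    rng = random.Random(seed)
    far = sum(abs(rng.gauss(0.0, 1.0)) / n ** 0.5 > eps for _ in range(trials))
    return far / trials

# The sequence of probabilities shrinks toward zero as n grows:
estimates = [prob_far(n) for n in (1, 100, 10_000)]
```

Each entry of `estimates` approximates one term of the real-number sequence $\left\{P\left(|X_n - X| > \varepsilon\right)\right\}$ whose ordinary limit the definition requires to be zero.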
The following example illustrates the concept of convergence in probability.
Example Let $X$ be a discrete random variable with support $R_X = \{0, 1\}$ and probability mass function
$$p_X(x) = \begin{cases} 1/3 & \text{if } x = 1 \\ 2/3 & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}$$
Consider a sequence of random variables $\{X_n\}$ whose generic term is
$$X_n = \left(1 + \frac{1}{n}\right) X$$
We want to prove that $\{X_n\}$ converges in probability to $X$. Take any $\varepsilon > 0$. Note that
$$|X_n - X| = \left|\left(1 + \frac{1}{n}\right) X - X\right| = \frac{X}{n}$$
When $X = 0$, which happens with probability $2/3$, we have that $|X_n - X| = 0$ and, of course, $|X_n - X| \le \varepsilon$. When $X = 1$, which happens with probability $1/3$, we have that $|X_n - X| = \frac{1}{n}$, and $\frac{1}{n} > \varepsilon$ only if $n < \frac{1}{\varepsilon}$ (or $\frac{1}{n} \le \varepsilon$ only if $n \ge \frac{1}{\varepsilon}$). Therefore,
$$P\left(|X_n - X| > \varepsilon\right) = \frac{1}{3} \cdot 1_{\left\{n < 1/\varepsilon\right\}}$$
and
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = \lim_{n\to\infty} \frac{1}{3} \cdot 1_{\left\{n < 1/\varepsilon\right\}} = 0$$
Thus, the sequence $\left\{P\left(|X_n - X| > \varepsilon\right)\right\}$ trivially converges to $0$, because it is identically equal to zero for all $n$ such that $n \ge \frac{1}{\varepsilon}$. Since $\varepsilon$ was arbitrary, we have obtained the desired result:
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = 0$$
for any $\varepsilon > 0$.
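The probability sequence in this example can be computed exactly. A quick Python check (illustrative; the function name is mine) of the fact that $P\left(|X_n - X| > \varepsilon\right)$ equals $1/3$ while $n < 1/\varepsilon$ and drops to $0$ afterwards:

```python
def prob_far(n, eps):
    """Exact P(|X_n - X| > eps) for X_n = (1 + 1/n) X, where X = 1
    with probability 1/3 and X = 0 with probability 2/3.
    Since |X_n - X| = X / n, the event occurs only when X = 1 and 1/n > eps."""
    return 1 / 3 if 1 / n > eps else 0.0

eps = 0.001
probs = [prob_far(n, eps) for n in (10, 100, 2000)]
```

The sequence stays constant at $1/3$ until $n$ reaches $1/\varepsilon = 1000$ and is identically zero from then on, so its limit is $0$.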
The above notion of convergence generalizes to sequences of random vectors in a straightforward manner.
Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, where each random vector $X_n$ has dimension $K$. In the case of random variables, the sequence of random variables $\{X_n\}$ converges in probability to $X$ if and only if
$$\lim_{n\to\infty} P\left(|X_n - X| > \varepsilon\right) = 0$$
for any $\varepsilon > 0$, where $|X_n - X|$ is the distance of $X_n$ from $X$. In the case of random vectors, the definition of convergence in probability remains the same, but distance is measured by the Euclidean norm of the difference between the two vectors:
$$\left\|X_n - X\right\| = \sqrt{\sum_{k=1}^{K} \left(X_{n,k} - X_k\right)^2}$$
where the second subscript is used to indicate the individual components of the vectors $X_n$ and $X$.
The following is a formal definition.
Definition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is convergent in probability to a random vector $X$ defined on $\Omega$ if and only if
$$\lim_{n\to\infty} P\left(\left\|X_n - X\right\| > \varepsilon\right) = 0$$
for any $\varepsilon > 0$. $X$ is called the probability limit of the sequence and convergence is indicated by
$$X_n \overset{P}{\longrightarrow} X$$
or by
$$\operatorname{plim}_{n\to\infty} X_n = X$$
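The same Monte Carlo idea used for random variables carries over, with the Euclidean norm replacing the absolute value. The sketch below is illustrative (the two-dimensional sequence $X_{n,k} = X_k + Z_k/n$ is a hypothetical example, not from the lecture):

```python
import random

def prob_far_vector(n, eps=0.1, trials=10_000, seed=0):
    """Monte Carlo estimate of P(||X_n - X|| > eps) for a hypothetical
    2-dimensional sequence whose components are X_k plus independent
    normal noise shrinking at rate 1/n: X_{n,k} = X_k + Z_k / n."""
    rng = random.Random(seed)
    far = 0
    for _ in range(trials):
        # Component-wise differences X_{n,k} - X_k = Z_k / n
        d1 = rng.gauss(0.0, 1.0) / n
        d2 = rng.gauss(0.0, 1.0) / n
        norm = (d1 ** 2 + d2 ** 2) ** 0.5  # Euclidean norm of the difference
        if norm > eps:
            far += 1
    return far / trials

probs = [prob_far_vector(n) for n in (1, 10, 100)]
```

As $n$ grows, the estimated probability that the norm of the difference exceeds $\varepsilon$ falls toward zero, as the definition requires.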
Now, denote by $\{X_{n,k}\}$ the sequence of the $k$-th components of the vectors $X_n$. It can be proved that the sequence of random vectors $\{X_n\}$ is convergent in probability if and only if all the sequences of random variables $\{X_{n,k}\}$, for $k = 1, \ldots, K$, are convergent in probability.
Proposition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. Denote by $\{X_{n,k}\}$ the sequence of random variables obtained by taking the $k$-th component of each random vector $X_n$. The sequence $\{X_n\}$ converges in probability to the random vector $X$ if and only if the sequence $\{X_{n,k}\}$ converges in probability to the random variable $X_k$ (the $k$-th component of $X$) for each $k = 1, \ldots, K$.
The following sections contain more details about convergence in probability.
When the terms of the sequence depend on the value of a parameter that can take many different values, it is often necessary to use a slightly modified concept of convergence in probability. This is presented in the lecture entitled Uniform convergence in probability.
Below you can find some exercises with explained solutions.
Exercise 1

Let $X$ be a random variable having a uniform distribution on the interval $[0,1]$. In other words, $X$ is an absolutely continuous random variable with support
$$R_X = [0,1]$$
and probability density function
$$f_X(x) = \begin{cases} 1 & \text{if } x \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$
Now, define a sequence of random variables $\{X_n\}$ as follows:
$$X_n = 1_{E_n}, \qquad E_n = \left\{X \in \left[j 2^{-m}, (j+1) 2^{-m}\right]\right\}, \qquad n = 2^m + j, \ m \in \mathbb{N}, \ 0 \le j \le 2^m - 1$$
where $1_{E_n}$ is the indicator function of the event $E_n$.
Find the probability limit (if it exists) of the sequence $\{X_n\}$.
Solution

A generic term $X_n$ of the sequence, being an indicator function, can take only two values:
it can take value $1$ with probability
$$P\left(X_n = 1\right) = P\left(X \in \left[j 2^{-m}, (j+1) 2^{-m}\right]\right) = 2^{-m}$$
where $m$ is an integer satisfying
$$2^m \le n \le 2^{m+1} - 1$$
and $j$ is an integer satisfying
$$j = n - 2^m$$
it can take value $0$ with probability
$$P\left(X_n = 0\right) = 1 - 2^{-m}$$
By the previous inequality ($2^m \le n$), $m$ goes to infinity as $n$ goes to infinity and
$$\lim_{n\to\infty} P\left(X_n = 1\right) = \lim_{m\to\infty} 2^{-m} = 0$$
Therefore, the probability that $X_n$ is equal to zero converges to $1$ as $n$ goes to infinity. So, obviously, $\{X_n\}$ converges in probability to the constant random variable
$$X = 0$$
because, for any $\varepsilon \in (0,1)$,
$$\lim_{n\to\infty} P\left(|X_n - 0| > \varepsilon\right) = \lim_{n\to\infty} P\left(X_n = 1\right) = 0$$
(and for $\varepsilon \ge 1$ the probability is identically $0$).
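The decomposition $n = 2^m + j$ and the probability $P\left(X_n = 1\right) = 2^{-m}$ can be checked with a few lines of Python (an illustrative sketch; the function names are mine):

```python
def index_to_mj(n):
    """Write n >= 1 uniquely as n = 2**m + j with 0 <= j <= 2**m - 1."""
    m = n.bit_length() - 1   # largest m with 2**m <= n
    j = n - 2 ** m
    return m, j

def prob_one(n):
    """P(X_n = 1): the length 2**-m of the interval [j/2^m, (j+1)/2^m]."""
    m, _ = index_to_mj(n)
    return 2.0 ** -m

pairs = [index_to_mj(n) for n in (1, 2, 3, 4, 5, 8)]
```

As $n \to \infty$, $m \to \infty$ and $P\left(X_n = 1\right) = 2^{-m} \to 0$, matching the computation above.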
Exercise 2

Does the sequence in the previous exercise also converge almost surely?
Solution

We can identify the sample space $\Omega$ with the support of $X$:
$$\Omega = R_X = [0,1]$$
and the sample points $\omega$ with the realizations $x$ of $X$: i.e. when the realization is $x$, then $\omega = x$. Almost sure convergence requires that
$$\left\{\omega \in \Omega : \lim_{n\to\infty} X_n(\omega) = 0\right\}^c \subseteq E$$
where $E$ is a zero-probability event and the superscript $c$ denotes the complement of a set. In other words, the set of sample points $\omega$ for which the sequence $\{X_n(\omega)\}$ does not converge to $0$ must be included in a zero-probability event $E$. In our case, it is easy to see that, for any fixed sample point $\omega \in [0,1]$, the sequence $\{X_n(\omega)\}$ does not converge to $0$, because infinitely many terms in the sequence are equal to $1$ (for every $m$ there is at least one $j$ such that $\omega \in \left[j 2^{-m}, (j+1) 2^{-m}\right]$). Therefore,
$$\left\{\omega \in \Omega : \lim_{n\to\infty} X_n(\omega) = 0\right\}^c = [0,1] = \Omega$$
and, trivially, there does not exist a zero-probability event including the set $\Omega$. Thus, the sequence does not converge almost surely to $0$.
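The pointwise failure can also be seen numerically. The sketch below (illustrative; `x_n` is my name for the $n$-th indicator evaluated at a realization) fixes a sample point $x = 0.3$ and lists the indices $n \le 64$ at which $X_n(x) = 1$: one index fires in every dyadic generation $m$, so the sequence keeps returning to $1$ and cannot converge to $0$ pointwise.

```python
def x_n(n, x):
    """Typewriter indicator: write n = 2**m + j (0 <= j <= 2**m - 1) and
    return 1 if x lies in [j / 2**m, (j + 1) / 2**m], else 0."""
    m = n.bit_length() - 1
    j = n - 2 ** m
    return 1 if j / 2 ** m <= x <= (j + 1) / 2 ** m else 0

x = 0.3  # an arbitrary fixed sample point in [0, 1]
ones = [n for n in range(1, 65) if x_n(n, x) == 1]
```

For $n$ up to $64$ (generations $m = 0, \ldots, 5$) the sequence takes the value $1$ six times, once per generation, and this pattern continues for every larger $m$.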
Exercise 3

Let $\{X_n\}$ be an IID sequence of continuous random variables having a uniform distribution with support
$$R_{X_n} = [0,1]$$
and probability density function
$$f_{X_n}(x) = \begin{cases} 1 & \text{if } x \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$
Find the probability limit (if it exists) of the sequence $\{Y_n\}$, where $Y_n = \max\left\{X_1, \ldots, X_n\right\}$ is the sample maximum.
Solution

As $n$ tends to infinity, the probability density of $Y_n = \max\left\{X_1, \ldots, X_n\right\}$ tends to become concentrated around the point $y = 1$. Therefore, it seems reasonable to conjecture that the sequence $\{Y_n\}$ converges in probability to the constant random variable
$$Y = 1$$
To rigorously verify this claim we need to use the formal definition of convergence in probability. For any $\varepsilon \in (0,1)$,
$$P\left(|Y_n - 1| > \varepsilon\right) = P\left(Y_n < 1 - \varepsilon\right) = \prod_{i=1}^{n} P\left(X_i \le 1 - \varepsilon\right) = (1 - \varepsilon)^n \longrightarrow 0$$
where the second equality uses the independence of the $X_i$ (and for $\varepsilon \ge 1$ the probability is identically $0$). Thus, $\operatorname{plim}_{n\to\infty} Y_n = 1$.
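One natural reading of this exercise is that the derived sequence is the sample maximum $Y_n = \max\left\{X_1, \ldots, X_n\right\}$ of IID uniforms, whose density piles up near $1$; under that assumption, the closed form $P\left(|Y_n - 1| > \varepsilon\right) = (1 - \varepsilon)^n$ can be checked by simulation. This Python sketch is illustrative, not part of the lecture:

```python
import random

def prob_far_from_one(n, eps=0.1, trials=20_000, seed=0):
    """Monte Carlo estimate of P(|Y_n - 1| > eps), where
    Y_n = max(X_1, ..., X_n) of n IID uniforms on [0, 1].
    Exact value: P(Y_n < 1 - eps) = (1 - eps)**n."""
    rng = random.Random(seed)
    far = 0
    for _ in range(trials):
        y = max(rng.random() for _ in range(n))
        if abs(y - 1.0) > eps:
            far += 1
    return far / trials

# (1 - 0.1)**n shrinks geometrically toward zero:
estimates = [prob_far_from_one(n) for n in (1, 10, 100)]
```

The estimates track $(0.9)^n$: roughly $0.9$, $0.35$, and essentially $0$, confirming the geometric decay of the probability of being far from $1$.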