
Random vectors

The concept of random vector is a multidimensional generalization of the concept of random variable.

Suppose that we conduct a probabilistic experiment and that the possible outcomes of the experiment are described by a sample space $\Omega$. A random vector is a vector whose value depends on the outcome of the experiment, as stated by the following definition:

Definition Let $\Omega$ be a sample space. A random vector $X$ is a function from the sample space $\Omega$ to the set of $K$-dimensional real vectors $\mathbb{R}^K$:

$$X: \Omega \rightarrow \mathbb{R}^K$$

In rigorous probability theory, the function $X$ is also required to be measurable (a concept found in measure theory; see the more rigorous definition of random vector below).

The real vector $X(\omega)$ associated with a sample point $\omega \in \Omega$ is called a realization of the random vector. The set of all possible realizations is called the support and is denoted by $R_X$.

Denote by $P(E)$ the probability of an event $E \subseteq \Omega$. When dealing with random vectors, the following conventions are used: for a set $A \subseteq \mathbb{R}^K$,

$$P(X \in A) = P(\{\omega \in \Omega : X(\omega) \in A\})$$

and, for a vector $x \in \mathbb{R}^K$,

$$P(X = x) = P(\{\omega \in \Omega : X(\omega) = x\})$$

Example Two coins are tossed. The possible outcomes of each toss can be either tail ($T$) or head ($H$). The sample space is

$$\Omega = \{TT, TH, HT, HH\}$$

The four possible outcomes are assigned equal probabilities:

$$P(\{TT\}) = P(\{TH\}) = P(\{HT\}) = P(\{HH\}) = \frac{1}{4}$$

If tail ($T$) is the outcome of a toss, we win one dollar; if head ($H$) is the outcome, we lose one dollar. A $2$-dimensional random vector $X$ indicates the amount we win (or lose) on each toss:

$$X(TT) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad X(TH) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad X(HT) = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \quad X(HH) = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$$

The probability of winning one dollar on both tosses is

$$P\left(X = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\right) = P(\{TT\}) = \frac{1}{4}$$

The probability of losing one dollar on the second toss is

$$P(X_2 = -1) = P(\{TH, HH\}) = \frac{1}{2}$$
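To make the example concrete, here is a minimal Python sketch (the names sample_space and prob and the helper X are ours, purely for illustration) that enumerates the sample space and recovers the two probabilities computed above.

    from fractions import Fraction

    # Sample space of two coin tosses; the four outcomes are equally likely.
    sample_space = ["TT", "TH", "HT", "HH"]
    prob = {omega: Fraction(1, 4) for omega in sample_space}

    # The random vector X maps an outcome to the winnings on each toss:
    # tail (T) wins one dollar, head (H) loses one dollar.
    def X(omega):
        return tuple(1 if toss == "T" else -1 for toss in omega)

    # P(X = (1, 1)): winning one dollar on both tosses.
    print(sum(prob[w] for w in sample_space if X(w) == (1, 1)))  # 1/4

    # P(X_2 = -1): losing one dollar on the second toss.
    print(sum(prob[w] for w in sample_space if X(w)[1] == -1))   # 1/2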

The next sections deal with discrete random vectors and absolutely continuous random vectors, two kinds of random vectors that have special properties and are often found in applications.

Discrete random vectors

Discrete random vectors are defined as follows:

Definition A random vector $X$ is discrete if:

  1. its support $R_X$ is a countable set;

  2. there is a function $p_X: \mathbb{R}^K \rightarrow [0,1]$, called the joint probability mass function (or joint pmf, or joint probability function) of $X$, such that, for any $x \in \mathbb{R}^K$:

$$p_X(x) = P(X = x)$$

The following notations are used interchangeably to indicate the joint probability mass function:

$$p_X(x), \quad p_{X_1, \ldots, X_K}(x_1, \ldots, x_K), \quad p_{X_1, \ldots, X_K}(x)$$

In the second and third notations the $K$ components of the random vector $X$ are explicitly indicated.

Example Suppose $X$ is a $2$-dimensional random vector whose components ($X_1$ and $X_2$) can take only two values: $1$ or $0$. Furthermore, the four possible combinations of $0$ and $1$ are all equally likely. Then $X$ is an example of a discrete random vector. Its support is

$$R_X = \left\{ \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$$

Its probability mass function is

$$p_X(x) = \begin{cases} 1/4 & \text{if } x \in R_X \\ 0 & \text{otherwise} \end{cases}$$
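A minimal Python sketch of this probability mass function (the dictionary-based representation is our own illustrative choice):

    from fractions import Fraction
    from itertools import product

    # Joint pmf of the example: every (x1, x2) in {0,1}^2 has probability 1/4.
    support = list(product([0, 1], repeat=2))
    pmf = {x: Fraction(1, 4) for x in support}

    def p_X(x):
        # The pmf is zero outside the support, as the definition requires.
        return pmf.get(x, Fraction(0))

    print(p_X((0, 1)))        # 1/4
    print(p_X((2, 2)))        # 0
    print(sum(pmf.values()))  # 1, as for any pmf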

Absolutely continuous random vectors

Absolutely continuous random vectors are defined as follows:

Definition A random vector $X$ is absolutely continuous (or, simply, continuous) if:

  1. its support $R_X$ is not countable;

  2. there is a function $f_X: \mathbb{R}^K \rightarrow [0, \infty)$, called the joint probability density function (or joint pdf, or joint density function) of $X$, such that, for any set $A \subseteq \mathbb{R}^K$ of the form

$$A = [a_1, b_1] \times \cdots \times [a_K, b_K]$$

the probability that $X$ belongs to $A$ can be calculated as follows:

$$P(X \in A) = \int_{a_1}^{b_1} \cdots \int_{a_K}^{b_K} f_X(x_1, \ldots, x_K) \, dx_K \cdots dx_1$$

provided the above multiple integral is well defined.

The following notations are used interchangeably to indicate the joint probability density function:

$$f_X(x), \quad f_{X_1, \ldots, X_K}(x_1, \ldots, x_K), \quad f_{X_1, \ldots, X_K}(x)$$

In the second and third notations the $K$ components of the random vector $X$ are explicitly indicated.

Example Suppose $X$ is a $2$-dimensional random vector whose components ($X_1$ and $X_2$) are independent uniform random variables on the interval $[0,1]$. Then $X$ is an example of an absolutely continuous random vector. Its support is

$$R_X = [0,1] \times [0,1]$$

Its joint probability density function is

$$f_X(x_1, x_2) = \begin{cases} 1 & \text{if } (x_1, x_2) \in R_X \\ 0 & \text{otherwise} \end{cases}$$

The probability that the realization of $X$ falls in a rectangle $[a_1, b_1] \times [a_2, b_2] \subseteq R_X$ is

$$P(a_1 \leq X_1 \leq b_1, a_2 \leq X_2 \leq b_2) = \int_{a_1}^{b_1} \int_{a_2}^{b_2} 1 \, dx_2 \, dx_1 = (b_1 - a_1)(b_2 - a_2)$$
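As a numerical check, here is a short Python sketch using SciPy's dblquad (the rectangle bounds are illustrative choices of ours):

    from scipy.integrate import dblquad

    # Joint pdf of two independent uniforms on [0, 1]:
    # equal to 1 on the unit square, 0 elsewhere.
    def f_X(x1, x2):
        return 1.0 if 0 <= x1 <= 1 and 0 <= x2 <= 1 else 0.0

    # P(X in [0.2, 0.5] x [0.1, 0.9]); dblquad integrates the inner
    # variable (x2) first, then the outer variable (x1).
    prob, _ = dblquad(lambda x2, x1: f_X(x1, x2), 0.2, 0.5, 0.1, 0.9)
    print(prob)  # 0.24 = (0.5 - 0.2) * (0.9 - 0.1)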

Random vectors in general

Random vectors, including those that are neither discrete nor absolutely continuous, are often described using their joint distribution function:

Definition Let $X$ be a random vector. The joint distribution function (or joint df, or joint cumulative distribution function, or joint cdf) of $X$ is a function $F_X: \mathbb{R}^K \rightarrow [0,1]$ such that

$$F_X(x) = P(X_1 \leq x_1, \ldots, X_K \leq x_K)$$

where the components of $X$ and $x$ are denoted by $X_k$ and $x_k$ respectively, for $k = 1, \ldots, K$.

The following notations are used interchangeably to indicate the joint distribution function:

$$F_X(x), \quad F_{X_1, \ldots, X_K}(x_1, \ldots, x_K), \quad F_{X_1, \ldots, X_K}(x)$$

In the second and third notations the $K$ components of the random vector $X$ are explicitly indicated.
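For a discrete random vector, the joint cdf can be evaluated by summing the joint pmf; here is a small Python sketch reusing the two-coin example above (the helper F_X is ours):

    from fractions import Fraction

    # Joint pmf of the two-coin example: each winnings vector has probability 1/4.
    pmf = {(1, 1): Fraction(1, 4), (1, -1): Fraction(1, 4),
           (-1, 1): Fraction(1, 4), (-1, -1): Fraction(1, 4)}

    def F_X(x1, x2):
        # F_X(x) = P(X_1 <= x_1, X_2 <= x_2), summed over the support.
        return sum(p for (v1, v2), p in pmf.items() if v1 <= x1 and v2 <= x2)

    print(F_X(0, 0))  # 1/4 (only the realization (-1, -1) qualifies)
    print(F_X(1, 0))  # 1/2
    print(F_X(1, 1))  # 1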

Sometimes, we talk about the joint distribution of a random vector, without specifying whether we are referring to the joint distribution function, or to the joint probability mass function (in the case of discrete random vectors), or to the joint probability density function (in the case of absolutely continuous random vectors). This ambiguity is legitimate, since:

  1. the joint probability mass function completely determines (and is completely determined by) the joint distribution function of a discrete random vector;

  2. the joint probability density function completely determines (and is completely determined by) the joint distribution function of an absolutely continuous random vector.

In the remainder of this lecture, we use the term joint distribution when we are making statements that apply both to the distribution function and to the probability mass (or density) function of a random vector.

More details

Random matrices

A random matrix is a matrix whose entries are random variables. It is not necessary to develop a separate theory for random matrices, because a random matrix can always be written as a random vector. Given a $K \times L$ random matrix $A$, its vectorization, denoted by $\operatorname{vec}(A)$, is the $KL \times 1$ random vector obtained by stacking the columns of $A$ on top of each other.

Example Let $A$ be the following $2 \times 2$ random matrix:

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$

The vectorization of $A$ is the following $4 \times 1$ random vector:

$$\operatorname{vec}(A) = \begin{bmatrix} A_{11} \\ A_{21} \\ A_{12} \\ A_{22} \end{bmatrix}$$
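In NumPy, stacking the columns corresponds to flattening in column-major (Fortran) order; a quick sketch with illustrative numeric entries:

    import numpy as np

    # A 2 x 2 matrix of (realized) entries.
    A = np.array([[1, 2],
                  [3, 4]])

    # vec(A): stack the columns of A on top of each other.
    vec_A = A.flatten(order="F")  # "F" = column-major (Fortran) order
    print(vec_A)  # [1 3 2 4]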

When $\operatorname{vec}(A)$ is a discrete random vector, we say that $A$ is a discrete random matrix, and the joint probability mass function of $A$ is just the joint probability mass function of $\operatorname{vec}(A)$. By the same token, when $\operatorname{vec}(A)$ is an absolutely continuous random vector, we say that $A$ is an absolutely continuous random matrix, and the joint probability density function of $A$ is just the joint probability density function of $\operatorname{vec}(A)$.

The marginal distribution of a random vector

Let $X_i$ be the $i$-th component of a $K$-dimensional random vector $X$. The distribution function $F_{X_i}(x)$ of $X_i$ is called the marginal distribution function of $X_i$. If $X$ is discrete, then $X_i$ is a discrete random variable and its probability mass function $p_{X_i}(x)$ is called the marginal probability mass function of $X_i$. If $X$ is absolutely continuous, then $X_i$ is an absolutely continuous random variable and its probability density function $f_{X_i}(x)$ is called the marginal probability density function of $X_i$.

Marginalization of a joint distribution

The process of deriving the distribution of a component $X_i$ of a random vector $X$ from the joint distribution of $X$ is known as marginalization. Marginalization can also have a broader meaning: it can refer to deriving the joint distribution of a subset of the components of $X$ from the joint distribution of $X$. For example, if $X$ is a random vector having three components ($X_1$, $X_2$ and $X_3$), we can marginalize the joint distribution of $X_1$, $X_2$ and $X_3$ to find the joint distribution of $X_1$ and $X_2$ (in this case, we say that $X_3$ is marginalized out of the joint distribution of $X_1$, $X_2$ and $X_3$).

The marginal distribution of a discrete random vector

Let $X_i$ be the $i$-th component of a $K$-dimensional discrete random vector $X$. The marginal probability mass function of $X_i$ can be derived from the joint probability mass function of $X$ as follows:

$$p_{X_i}(x) = \sum_{(x_1, \ldots, x_K) \in B} p_X(x_1, \ldots, x_K)$$

where the sum is over the set

$$B = \{(x_1, \ldots, x_K) \in R_X : x_i = x\}$$

In other words, the probability that $X_i = x$ is obtained as the sum of the probabilities of all the vectors in $R_X$ whose $i$-th component is equal to $x$.
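A Python sketch of this sum, applied to the equally likely $\{0,1\}^2$ example from the section on discrete random vectors (the helper marginal_pmf is ours):

    from fractions import Fraction
    from itertools import product

    # Joint pmf: uniform on {0,1}^2.
    pmf = {x: Fraction(1, 4) for x in product([0, 1], repeat=2)}

    def marginal_pmf(pmf, i, x):
        # p_{X_i}(x): sum the joint probabilities of all support points
        # whose i-th component equals x.
        return sum(p for point, p in pmf.items() if point[i] == x)

    print(marginal_pmf(pmf, 0, 0))  # 1/2
    print(marginal_pmf(pmf, 1, 1))  # 1/2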

Marginalization of a discrete distribution

Let $X_i$ be the $i$-th component of a discrete random vector $X$. By marginalizing $X_i$ out of the joint distribution of $X$, we obtain the joint distribution of the remaining components of $X$, that is, the joint distribution of the random vector

$$X_{-i} = [X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_K]$$

The joint probability mass function of $X_{-i}$ is computed as follows:

$$p_{X_{-i}}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_K) = \sum_{x_i \in R_{X_i}} p_X(x_1, \ldots, x_K)$$

where $R_{X_i}$ is the support of $X_i$. In other words, the joint probability mass function of $X_{-i}$ is computed by summing the joint probability mass function of $X$ over all values of $x_i$ that belong to the support of $X_i$.
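A sketch of the same operation in Python, again with a dictionary as our illustrative representation of the joint pmf:

    from collections import defaultdict
    from fractions import Fraction
    from itertools import product

    # Joint pmf of a 3-dimensional random vector, uniform on {0,1}^3.
    pmf = {x: Fraction(1, 8) for x in product([0, 1], repeat=3)}

    def marginalize_out(pmf, i):
        # Sum the joint pmf over all values of the i-th component.
        out = defaultdict(Fraction)
        for point, p in pmf.items():
            out[point[:i] + point[i + 1:]] += p
        return dict(out)

    pmf_12 = marginalize_out(pmf, 2)  # X_3 marginalized out
    print(pmf_12[(0, 1)])  # 1/4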

The marginal distribution of an absolutely continuous random vector

Let $X_i$ be the $i$-th component of a $K$-dimensional absolutely continuous random vector $X$. The marginal probability density function of $X_i$ can be derived from the joint probability density function of $X$ as follows:

$$f_{X_i}(x) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_X(x_1, \ldots, x_{i-1}, x, x_{i+1}, \ldots, x_K) \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_K$$

In other words, the joint probability density function, evaluated at $x_i = x$, is integrated with respect to all variables except $x_i$ (so it is integrated a total of $K-1$ times).
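A symbolic sketch with SymPy (the density $f(x_1, x_2) = x_1 + x_2$ on the unit square is an illustrative choice of ours, not taken from the text):

    import sympy as sp

    x1, x2 = sp.symbols("x1 x2")

    # An illustrative joint pdf on [0,1] x [0,1]: it is nonnegative
    # and integrates to 1 over the unit square.
    f = x1 + x2

    # Marginal pdf of X_1: integrate the joint pdf with respect to x2.
    f_X1 = sp.integrate(f, (x2, 0, 1))
    print(f_X1)  # x1 + 1/2

    # Sanity check: the marginal integrates to 1 over [0, 1].
    print(sp.integrate(f_X1, (x1, 0, 1)))  # 1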

Marginalization of a continuous distribution

Let $X_i$ be the $i$-th component of an absolutely continuous random vector $X$. By marginalizing $X_i$ out of the joint distribution of $X$, we obtain the joint distribution of the remaining components of $X$, that is, the joint distribution of the random vector

$$X_{-i} = [X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_K]$$

The joint probability density function of $X_{-i}$ is computed as follows:

$$f_{X_{-i}}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_K) = \int_{-\infty}^{\infty} f_X(x_1, \ldots, x_K) \, dx_i$$

In other words, the joint probability density function of $X_{-i}$ is computed by integrating the joint probability density function of $X$ with respect to $x_i$.
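The same SymPy pattern extends to marginalizing one component out of a higher-dimensional density (the density on the unit cube below is again an illustrative choice):

    import sympy as sp

    x1, x2, x3 = sp.symbols("x1 x2 x3")

    # Illustrative joint pdf on the unit cube [0,1]^3: nonnegative
    # and integrating to 1 (the product of three densities 2*x).
    f = 8 * x1 * x2 * x3

    # Marginalize X_3 out: integrate with respect to x3 over its support.
    f_12 = sp.integrate(f, (x3, 0, 1))
    print(f_12)  # 4*x1*x2, the joint pdf of (X_1, X_2)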

Absolutely continuous random vectors - Partial derivative of the distribution function

Note that, if $X$ is absolutely continuous, then

$$F_X(x_1, \ldots, x_K) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_K} f_X(t_1, \ldots, t_K) \, dt_K \cdots dt_1$$

Hence, by taking the $K$-th order cross-partial derivative with respect to $x_1, \ldots, x_K$ of both sides of the above equation, we obtain

$$f_X(x_1, \ldots, x_K) = \frac{\partial^K F_X(x_1, \ldots, x_K)}{\partial x_1 \cdots \partial x_K}$$
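A SymPy check of this identity in the 2-dimensional case, using the same illustrative density $f(t_1, t_2) = t_1 + t_2$ on the unit square as in the marginalization sketch above:

    import sympy as sp

    x1, x2, t1, t2 = sp.symbols("x1 x2 t1 t2")

    # Illustrative joint pdf on [0,1] x [0,1].
    f = t1 + t2

    # Joint cdf on the support: integrate the pdf up to (x1, x2).
    F = sp.integrate(f, (t2, 0, x2), (t1, 0, x1))

    # The K-th order (here: 2nd order) cross-partial derivative
    # recovers the pdf.
    print(sp.simplify(sp.diff(F, x1, x2)))  # x1 + x2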

A more rigorous definition of random vector

We report here a more rigorous definition of random vector.

Definition Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $X$ be a function $X: \Omega \rightarrow \mathbb{R}^K$. Let $\mathcal{B}(\mathbb{R}^K)$ be the Borel $\sigma$-algebra of $\mathbb{R}^K$ (i.e., the smallest $\sigma$-algebra containing all open hyper-rectangles in $\mathbb{R}^K$). If

$$\{\omega \in \Omega : X(\omega) \in B\} \in \mathcal{F}$$

for any $B \in \mathcal{B}(\mathbb{R}^K)$, then $X$ is a random vector on $\Omega$.

Thus, if $X$ satisfies this property, we are allowed to define

$$P(X \in B) = P(\{\omega \in \Omega : X(\omega) \in B\})$$

because the set $\{\omega \in \Omega : X(\omega) \in B\}$ is measurable by the very definition of a random vector.

Solved exercises

Some solved exercises on random vectors can be found below:

  1. Exercise set 1 (discrete random vectors and joint probability mass functions).

  2. Exercise set 2 (absolutely continuous random vectors and joint probability density functions).
