Statlect - The Digital Textbook

Properties of the expected value

This lecture discusses some fundamental properties of the expected value operator. Most of them can be understood and proved using the material presented in previous lectures; a few, gathered here for convenience, can be fully understood and proved only after reading the material presented in subsequent lectures.

Linearity of the expected value

The following properties are related to the linearity of the expected value.

Scalar multiplication of a random variable

If X is a random variable and $a\in \mathbb{R}$ is a constant, then $$\mathrm{E}[aX]=a\,\mathrm{E}[X].$$ This property has already been discussed in the lecture entitled Expected value.

Example Let X be a random variable with expected value [eq2] and let Y be a random variable defined as follows: [eq3] Then, [eq4]
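As a quick numerical check (not part of the original lecture), the scalar-multiplication property can be verified exactly on a small discrete distribution. The distribution of X, the constant a, and the helper `expectation` below are made up purely for illustration.

```python
# Exact check of E[aX] = a E[X] on a small finite distribution.
# The numbers are hypothetical toy values.

def expectation(dist):
    """Expected value of a discrete random variable given as {value: probability}."""
    return sum(x * p for x, p in dist.items())

a = 3.0
X = {1.0: 0.2, 2.0: 0.5, 4.0: 0.3}      # P(X = x)
aX = {a * x: p for x, p in X.items()}   # distribution of Y = aX

print(expectation(aX))        # E[aX]
print(a * expectation(X))     # a E[X], the same number
```

Because the distribution is finite, the two quantities agree exactly (up to floating-point rounding), not just approximately.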

Sums of random variables

If X_1, X_2, ..., $X_{K}$ are K random variables, then $$\mathrm{E}\left[\sum_{k=1}^{K}X_{k}\right]=\sum_{k=1}^{K}\mathrm{E}[X_{k}].$$ This property, too, has already been discussed in the lecture entitled Expected value.

Example Let X and Y be two random variables with expected values [eq6] and let Z be a random variable defined as follows: [eq7] Then, [eq8]
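A point worth stressing, which a small exact computation (not from the original lecture) can make concrete: E[X + Y] = E[X] + E[Y] holds even when X and Y are dependent. The joint probabilities below are hypothetical and deliberately not a product of marginals.

```python
# Exact check of E[X + Y] = E[X] + E[Y] on a small joint distribution.
# X and Y are dependent here: P(X=0, Y=1) = 0.3, but P(X=0) P(Y=1) = 0.4 * 0.5 = 0.2.

joint = {            # P(X = x, Y = y)
    (0, 1): 0.3,
    (0, 2): 0.1,
    (1, 1): 0.2,
    (1, 2): 0.4,
}

E_X = sum(x * p for (x, y), p in joint.items())
E_Y = sum(y * p for (x, y), p in joint.items())
E_sum = sum((x + y) * p for (x, y), p in joint.items())

print(E_sum, E_X + E_Y)   # the two numbers coincide
```

No independence assumption is needed anywhere in the computation: linearity works outcome by outcome.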

Linear combinations of random variables

If X_1, X_2, ..., $X_{K}$ are K random variables and $a_{1},\ldots,a_{K}$ are K constants, then $$\mathrm{E}\left[\sum_{k=1}^{K}a_{k}X_{k}\right]=\sum_{k=1}^{K}a_{k}\,\mathrm{E}[X_{k}].$$ This can be obtained by combining the two properties above (scalar multiplication and sum). Consider $a_{1},\ldots,a_{K}$ as the K entries of a $1\times K$ vector a and X_1, X_2, ..., $X_{K}$ as the K entries of a $K\times 1$ random vector X. Then the property above can be written as $$\mathrm{E}[aX]=a\,\mathrm{E}[X],$$ which is a multivariate generalization of the scalar multiplication property above.

Example Let X and Y be two random variables with expected values [eq13] and let Z be a random variable defined as follows: [eq14] Then, [eq15]
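The vector form E[aX] = a E[X] can be checked exactly by listing a finite sample space together with the realization of the random vector at each outcome. The sketch below (not from the original lecture) uses hypothetical toy numbers and a 1x3 constant vector a.

```python
# Exact check of E[a1 X1 + ... + aK XK] = a1 E[X1] + ... + aK E[XK],
# i.e. E[aX] = a E[X] with a a 1xK row vector.  Toy numbers, hypothetical.

a = [2.0, -1.0, 0.5]            # 1x3 vector of constants

outcomes = [                    # (probability, realization [X1, X2, X3])
    (0.25, [1.0, 0.0, 2.0]),
    (0.50, [3.0, 1.0, 0.0]),
    (0.25, [0.0, 4.0, 2.0]),
]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

E_X = [sum(p * x[k] for p, x in outcomes) for k in range(3)]   # E[X], entrywise
E_aX = sum(p * dot(a, x) for p, x in outcomes)                 # E[aX]

print(E_aX, dot(a, E_X))   # the two numbers coincide
```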

Addition of a constant matrix and a matrix with random entries

Let $\Sigma$ be a $K\times L$ random matrix, i.e., a $K\times L$ matrix whose entries are random variables. If A is a $K\times L$ matrix of constants, then $$\mathrm{E}[A+\Sigma]=A+\mathrm{E}[\Sigma].$$ This is easily proved by applying the linearity properties above to each entry of the random matrix $A+\Sigma$.

Note that a random vector is just a particular instance of a random matrix. So, if X is a $K\times 1$ random vector and a is a $K\times 1$ vector of constants, then $$\mathrm{E}[a+X]=a+\mathrm{E}[X].$$

Example Let X be a $2\times 1$ random vector such that its two entries X_1 and X_2 have expected values [eq18] Let A be the following $2\times 1$ constant vector: [eq19] Let the random vector Y be defined as follows: [eq20] Then, [eq21]
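Since the matrix property is just the scalar property applied entry by entry, it can be verified exactly with plain nested lists. The sketch below (not from the original lecture) uses a hypothetical 2x2 constant matrix A and a random matrix Sigma with a two-point distribution.

```python
# Exact entrywise check of E[A + Sigma] = A + E[Sigma].  Toy numbers, hypothetical.

A = [[1.0, 2.0],
     [3.0, 4.0]]                          # constant 2x2 matrix

scenarios = [                             # (probability, realization of Sigma)
    (0.4, [[0.0, 1.0], [2.0, 3.0]]),
    (0.6, [[5.0, 0.0], [1.0, 1.0]]),
]

def mat_expect(scen):
    """Entrywise expected value of a 2x2 random matrix."""
    return [[sum(p * m[i][j] for p, m in scen) for j in range(2)]
            for i in range(2)]

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

E_Sigma = mat_expect(scenarios)
E_A_plus_Sigma = mat_expect([(p, mat_add(A, m)) for p, m in scenarios])

print(E_A_plus_Sigma)        # E[A + Sigma]
print(mat_add(A, E_Sigma))   # A + E[Sigma], the same matrix
```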

Multiplication of a constant matrix and a matrix with random entries

Let $\Sigma$ be a $K\times L$ random matrix, i.e., a $K\times L$ matrix whose entries are random variables. If $B$ is an $M\times K$ matrix of constants, then $$\mathrm{E}[B\Sigma]=B\,\mathrm{E}[\Sigma].$$ If $C$ is an $L\times N$ matrix of constants, then $$\mathrm{E}[\Sigma C]=\mathrm{E}[\Sigma]\,C.$$ These are immediate consequences of the linearity properties above.

By iteratively applying this property, if $B$ is an $M\times K$ matrix of constants and $C$ is an $L\times N$ matrix of constants, we obtain $$\mathrm{E}[B\Sigma C]=B\,\mathrm{E}[\Sigma]\,C.$$

Example Let X be a $1\times 2$ random vector such that [eq25] where X_1 and X_2 are the two components of X. Let A be the following $2\times 2$ matrix of constants: [eq26] Let the random vector Y be defined as follows: [eq27] Then, [eq28]
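The two-sided property E[B Sigma C] = B E[Sigma] C can also be checked exactly, since constant matrices factor out of the expectation on both sides. The sketch below (not from the original lecture) uses hypothetical toy matrices: B is 1x2, Sigma is a random 2x2 matrix with a two-point distribution, and C is 2x1.

```python
# Exact check of E[B Sigma C] = B E[Sigma] C.  Toy numbers, hypothetical.

B = [[1.0, 2.0]]                      # 1x2 constant matrix
C = [[1.0], [-1.0]]                   # 2x1 constant matrix

scenarios = [                         # (probability, realization of Sigma, 2x2)
    (0.5, [[1.0, 0.0], [0.0, 1.0]]),
    (0.5, [[0.0, 2.0], [2.0, 0.0]]),
]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def expect(scen):
    """Entrywise expected value of a random matrix given as scenarios."""
    rows, cols = len(scen[0][1]), len(scen[0][1][0])
    return [[sum(p * m[i][j] for p, m in scen) for j in range(cols)]
            for i in range(rows)]

lhs = expect([(p, matmul(matmul(B, m), C)) for p, m in scenarios])  # E[B Sigma C]
rhs = matmul(matmul(B, expect(scenarios)), C)                       # B E[Sigma] C

print(lhs, rhs)   # equal 1x1 matrices
```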

Other properties

The following properties of the expected value are also very important.

Expectation of a positive random variable

Let X be an integrable random variable defined on a sample space $\Omega$. Let $X(\omega)\geq 0$ for all $\omega\in\Omega$ (i.e., X is a positive random variable). Then, $$\mathrm{E}[X]\geq 0.$$ Intuitively, this is obvious: the expected value of X is a weighted average of the values that X can take on, and X can take on only positive values, so its expected value must also be positive. Formally, the expected value is the Lebesgue integral of X, and X can be approximated to any degree of accuracy by positive simple random variables whose Lebesgue integral is positive. Therefore, the Lebesgue integral of X must also be positive.
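The weighted-average intuition is easy to see in the discrete case: every term of the sum defining E[X] is a product of a nonnegative value and a nonnegative probability. A minimal sketch, with a made-up toy distribution:

```python
# If every value X can take is >= 0 and every weight (probability) is >= 0,
# the weighted average E[X] cannot be negative.  Toy distribution, hypothetical.

X = {0.0: 0.1, 1.5: 0.6, 4.0: 0.3}    # P(X = x), all values >= 0

E_X = sum(x * p for x, p in X.items())
print(E_X >= 0)    # True: a sum of nonnegative terms
```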

Preservation of almost sure inequalities

Let X and Y be two integrable random variables defined on a sample space $\Omega$. Let X and Y be such that $X\leq Y$ almost surely (in other words, there exists a zero-probability event E such that $X(\omega)\leq Y(\omega)$ for all $\omega\in E^{c}$). Then, $$\mathrm{E}[X]\leq \mathrm{E}[Y].$$

Proof

Let E be a zero-probability event such that $X(\omega)\leq Y(\omega)$ for all $\omega\in E^{c}$. First, note that $$1=1_{E}+1_{E^{c}},$$ where $1_{E}$ is the indicator of the event E and $1_{E^{c}}$ is the indicator of the complement of E. As a consequence, we can write $$Y-X=(Y-X)1_{E}+(Y-X)1_{E^{c}}.$$ By the properties of indicators of zero-probability events, we have $$\mathrm{E}[(Y-X)1_{E}]=0.$$ Thus, we can write $$\mathrm{E}[Y-X]=\mathrm{E}[(Y-X)1_{E}]+\mathrm{E}[(Y-X)1_{E^{c}}]=\mathrm{E}[(Y-X)1_{E^{c}}].$$ Now, when $\omega\in E^{c}$, then $1_{E^{c}}(\omega)=1$ and $Y(\omega)-X(\omega)\geq 0$. On the contrary, when $\omega\in E$, then $1_{E^{c}}(\omega)=0$ and $(Y(\omega)-X(\omega))1_{E^{c}}(\omega)=0$. Therefore, $(Y(\omega)-X(\omega))1_{E^{c}}(\omega)\geq 0$ for all $\omega\in\Omega$ (i.e., $(Y-X)1_{E^{c}}$ is a positive random variable). Thus, by the previous property (expectation of a positive random variable), we have $$\mathrm{E}[(Y-X)1_{E^{c}}]\geq 0,$$ which implies $$\mathrm{E}[Y-X]\geq 0.$$ By the linearity of the expected value, we get $$\mathrm{E}[Y]-\mathrm{E}[X]\geq 0.$$ Therefore, $$\mathrm{E}[X]\leq \mathrm{E}[Y].$$
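The mechanism of the proof can be mimicked on a finite sample space: allow the inequality X <= Y to fail on an outcome of probability zero, and observe that this outcome contributes nothing to E[Y - X]. The sample space and values below are hypothetical toy numbers.

```python
# X <= Y everywhere except on a zero-probability event E; the event E
# contributes nothing to E[Y - X], so E[X] <= E[Y] still holds.
# Each outcome is (probability, X(omega), Y(omega)).  Toy numbers.

space = [
    (0.5, 1.0, 2.0),
    (0.5, 0.0, 3.0),
    (0.0, 9.0, 1.0),   # omega in E: here X > Y, but P(E) = 0
]

E_X = sum(p * x for p, x, y in space)
E_Y = sum(p * y for p, x, y in space)

# contribution of the outcomes where the inequality fails (the event E)
on_E = sum(p * (y - x) for p, x, y in space if x > y)

print(on_E == 0.0)   # True: the offending event carries no weight
print(E_X <= E_Y)    # True
```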

Solved exercises

Below you can find some exercises with explained solutions:

  1. Exercise set 1 (compute the expected value of linear transformations).
