This lecture discusses some fundamental properties of the expected value operator. Most of these properties can be understood and proved using the material presented in previous lectures; a few are gathered here for convenience, but can be fully understood and proved only after reading the material presented in subsequent lectures.

The following properties are related to the linearity of the expected value.

If X is a random variable and a is a constant, then E[aX] = aE[X]. This property has already been discussed in the lecture entitled Expected value.

Example Let X be a random variable with expected value E[X] = 2 and let Y be a random variable defined as follows: Y = 3X. Then, E[Y] = E[3X] = 3E[X] = 3 · 2 = 6.
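The scalar multiplication property can be verified exactly for a discrete random variable; a minimal sketch, in which the support, the probabilities, and the constant are illustrative choices, not values taken from the lecture:

```python
# Illustrative check of E[aX] = a E[X] for a discrete random variable.
# The support, probabilities, and the constant a are made-up numbers.
support = [0, 1, 2]          # values X can take
probs = [0.2, 0.5, 0.3]      # P(X = x) for each value in the support
a = 3                        # an arbitrary constant

ex = sum(x * p for x, p in zip(support, probs))        # E[X]
e_ax = sum(a * x * p for x, p in zip(support, probs))  # E[aX]

print(ex, e_ax)  # e_ax equals a * ex (up to floating-point rounding)
```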

If X1, X2, ..., Xn are random variables, then E[X1 + X2 + ... + Xn] = E[X1] + E[X2] + ... + E[Xn]. This property, too, has already been discussed in the lecture entitled Expected value.

Example Let X and Y be two random variables with expected values E[X] = 1 and E[Y] = 3, and let Z be a random variable defined as follows: Z = X + Y. Then, E[Z] = E[X + Y] = E[X] + E[Y] = 1 + 3 = 4.

If X1, ..., Xn are random variables and a1, ..., an are constants, then E[a1 X1 + ... + an Xn] = a1 E[X1] + ... + an E[Xn]. This can be trivially obtained by combining the two properties above (scalar multiplication and sum). Consider a1, ..., an as the entries of a 1×n constant vector a and X1, ..., Xn as the entries of an n×1 random vector X. Then the property above can be written as E[aX] = aE[X], which is a multivariate generalization of the scalar multiplication property above.
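The linear-combination property amounts to a dot product between the coefficient vector and the vector of expected values. A small sketch, with made-up coefficients and expected values:

```python
# Sketch: the linear-combination property written as a dot product a · E[X].
# The coefficients and expected values below are illustrative, not from the text.
import numpy as np

a = np.array([1.0, -2.0, 0.5])   # constants a_1, ..., a_n
ex = np.array([2.0, 3.0, 4.0])   # E[X_1], ..., E[X_n]

# E[a_1 X_1 + ... + a_n X_n] = a_1 E[X_1] + ... + a_n E[X_n] = a · E[X]
e_combo = a @ ex
print(e_combo)  # 1*2 - 2*3 + 0.5*4 = -2.0
```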

Let X be a K×L random matrix, i.e., a matrix whose entries are random variables. If A is an M×K matrix of constants, then E[AX] = AE[X]. This is easily proved by applying the linearity properties above to each entry of the random matrix AX.

Note that a **random vector is just a particular instance of a random
matrix**. So, if X is a K×1 random vector and a is a 1×K vector of constants,
then E[aX] = aE[X].

Example Let X be a 2×1 random vector such that its two entries X1 and X2 have expected values E[X1] = 1 and E[X2] = 2. Let a be the following 1×2 constant vector: a = [2 3]. Let the random variable Y be defined as follows: Y = aX = 2X1 + 3X2. Then, E[Y] = aE[X] = 2E[X1] + 3E[X2] = 2 · 1 + 3 · 2 = 8.

Let X be a K×L random matrix, i.e., a matrix whose entries are random variables. If A is an M×K matrix of constants, then E[AX] = AE[X]. If B is an L×N matrix of constants, then E[XB] = E[X]B. These are immediate consequences of the linearity properties above.

By iteratively applying this property, if A is an M×K matrix of constants and B is an L×N matrix of constants, we obtain E[AXB] = AE[X]B.

Example Let X be a 2×1 random vector such that E[X1] = 1 and E[X2] = 2, where X1 and X2 are the two components of X. Let A be the 2×2 matrix of constants whose first row is [1 2] and whose second row is [3 4]. Let the random vector Y be defined as follows: Y = AX. Then, E[Y] = AE[X], that is, E[Y1] = 1 · 1 + 2 · 2 = 5 and E[Y2] = 3 · 1 + 4 · 2 = 11.
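The property E[AX] = AE[X] can also be checked numerically. The sketch below uses an invented joint distribution for a 2×1 random vector and an arbitrary 2×2 constant matrix:

```python
# Numerical sketch of E[AX] = A E[X] for a 2x1 random vector X.
# The joint distribution of X and the matrix A are invented for illustration.
import numpy as np

outcomes = np.array([[0.0, 1.0],   # possible realizations of (X1, X2),
                     [2.0, 0.0],   # one per row
                     [1.0, 3.0]])
probs = np.array([0.5, 0.25, 0.25])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

ex = probs @ outcomes            # E[X], computed entry by entry
e_ax = probs @ (outcomes @ A.T)  # E[AX]: expectation of each entry of AX
print(np.allclose(e_ax, A @ ex)) # True: the two sides coincide
```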

The following properties of the expected value are also very important.

Let X be an integrable random variable defined on a sample space Ω. Let X(ω) ≥ 0 for all ω ∈ Ω (i.e., X is a positive random variable). Then, E[X] ≥ 0. Intuitively, this is obvious: the expected value of X is a weighted average of the values that X can take on, and X can take on only positive values, so its expected value must be positive as well. Formally, the expected value is the Lebesgue integral of X, and X can be approximated to any degree of accuracy by positive simple random variables whose Lebesgue integral is positive. Therefore, the Lebesgue integral of X must also be positive.

Let X and Y be two integrable random variables defined on a sample space Ω. Let X and Y be such that X ≥ Y almost surely (in other words, there exists a zero-probability event E such that X(ω) ≥ Y(ω) for all ω ∉ E). Then, E[X] ≥ E[Y].
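A Monte Carlo sketch of this monotonicity property, using arbitrary illustrative distributions in which X ≥ Y holds by construction:

```python
# Monte Carlo sketch of monotonicity: if X >= Y almost surely, then E[X] >= E[Y].
# The distributions below are arbitrary choices for illustration.
import random

random.seed(0)
n = 100_000
ys = [random.uniform(0.0, 1.0) for _ in range(n)]
xs = [y + random.uniform(0.0, 2.0) for y in ys]  # X = Y + nonnegative term, so X >= Y

mean_x = sum(xs) / n
mean_y = sum(ys) / n
print(mean_x >= mean_y)  # True: the sample means respect the ordering
```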

Proof

Let E be a zero-probability event such that X(ω) ≥ Y(ω) for all ω ∉ E. First, note that X − Y = (X − Y)1_E + (X − Y)1_{E^c}, where 1_E is the indicator of the event E and 1_{E^c} is the indicator of the complement of E. As a consequence, we can write E[X − Y] = E[(X − Y)1_E] + E[(X − Y)1_{E^c}]. By the properties of indicators of zero-probability events, we have E[(X − Y)1_E] = 0. Thus, we can write E[X − Y] = E[(X − Y)1_{E^c}]. Now, when ω ∈ E^c, then 1_{E^c}(ω) = 1 and X(ω) − Y(ω) ≥ 0. On the contrary, when ω ∈ E, then 1_{E^c}(ω) = 0 and (X(ω) − Y(ω))1_{E^c}(ω) = 0. Therefore, (X(ω) − Y(ω))1_{E^c}(ω) ≥ 0 for all ω ∈ Ω (i.e., (X − Y)1_{E^c} is a positive random variable). Thus, by the previous property (expectation of a positive random variable), we have E[(X − Y)1_{E^c}] ≥ 0, which implies E[X − Y] ≥ 0. By the linearity of the expected value, we get E[X] − E[Y] ≥ 0. Therefore, E[X] ≥ E[Y].

Below you can find some exercises with explained solutions.

Let X and Y be two random variables, having expected values: E[X] = 3 and E[Y] = -1.

Compute the expected value of the random variable Z defined as follows: Z = 2X + 3Y.

Solution

Using the linearity of the expected value operator, we obtain E[Z] = E[2X + 3Y] = 2E[X] + 3E[Y] = 2 · 3 + 3 · (-1) = 3.
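The computation can be checked in code; the expected values and the definition of Z below are one illustrative choice (E[X] = 3, E[Y] = -1, Z = 2X + 3Y), not necessarily the numbers of the original exercise:

```python
# Check of the exercise computation, under the illustrative assumptions
# E[X] = 3, E[Y] = -1, and Z = 2X + 3Y.
ex, ey = 3, -1
e_z = 2 * ex + 3 * ey  # linearity: E[2X + 3Y] = 2 E[X] + 3 E[Y]
print(e_z)  # 3
```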

Let X be a 2×1 random vector such that its two entries X1 and X2 have expected values E[X1] = 2 and E[X2] = 3.

Let A be the 2×2 matrix of constants whose first row is [2 1] and whose second row is [0 1].

Compute the expected value of the random vector Y defined as follows: Y = AX.

Solution

The linearity property of the expected value applies also to the multiplication of a constant matrix and a random vector: E[Y] = E[AX] = AE[X], that is, E[Y1] = 2 · 2 + 1 · 3 = 7 and E[Y2] = 0 · 2 + 1 · 3 = 3.
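As a check, the matrix-vector version of the property can be evaluated numerically; the expected values and the constant matrix below are illustrative choices, not necessarily those of the original exercise:

```python
# Check of the matrix-vector exercise, under the illustrative assumptions
# E[X] = (2, 3) and a constant matrix A with rows (2, 1) and (0, 1).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
ex = np.array([2.0, 3.0])  # E[X1], E[X2]
e_y = A @ ex               # E[Y] = E[AX] = A E[X]
print(e_y)                 # first entry 7, second entry 3
```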

Let X be a 2×2 matrix with random entries, such that all its entries have expected value equal to 1. Let a be the following 1×2 constant vector: a = [1 2]. Compute the expected value of the random vector Y defined as follows: Y = aX.

Solution

The linearity property of the expected value applies also to the multiplication of a constant vector and a matrix with random entries: E[Y] = E[aX] = aE[X] = [1 · 1 + 2 · 1  1 · 1 + 2 · 1] = [3 3].
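Similarly, the vector-matrix case can be checked numerically, assuming for illustration that every entry of the 2×2 random matrix has expected value 1 and that the constant vector is [1 2]:

```python
# Check of the vector-matrix exercise, under the illustrative assumptions
# that E[X] is the 2x2 matrix of ones and a = (1, 2).
import numpy as np

a = np.array([1.0, 2.0])
e_x = np.ones((2, 2))  # E[X], entry by entry
e_y = a @ e_x          # E[Y] = E[aX] = a E[X]
print(e_y)             # both entries equal 3
```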
