This lecture discusses how to derive the distribution of the sum of two independent random variables. We first explain how to derive the distribution function of the sum, and then how to derive its probability mass function (if the summands are discrete) or its probability density function (if the summands are continuous).
The following proposition characterizes the distribution function of the sum in terms of the distribution functions of the two summands:
Proposition Let $X$ and $Y$ be two independent random variables and denote by $F_X(x)$ and $F_Y(y)$ their distribution functions. Let
$$Z = X + Y$$
and denote the distribution function of $Z$ by $F_Z(z)$. The following holds:
$$F_Z(z) = \operatorname{E}\!\left[ F_X(z - Y) \right]$$
or:
$$F_Z(z) = \operatorname{E}\!\left[ F_Y(z - X) \right]$$
The first formula is derived as follows:
$$\begin{aligned} F_Z(z) &= \operatorname{P}(Z \leq z) \\ &= \operatorname{P}(X + Y \leq z) \\ &= \operatorname{E}\!\left[ \operatorname{P}(X + Y \leq z \mid Y) \right] \\ &= \operatorname{E}\!\left[ \operatorname{P}(X \leq z - Y \mid Y) \right] \\ &= \operatorname{E}\!\left[ F_X(z - Y) \right] \end{aligned}$$
where the third equality follows from the law of iterated expectations and the last from the independence of $X$ and $Y$. The second formula is symmetric to the first.
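The identity $F_Z(z) = \operatorname{E}[F_X(z - Y)]$ can be checked numerically. The sketch below (a Monte Carlo illustration; the uniform summands and all function names are illustrative choices, not taken from the text) compares the average of $F_X(z - Y)$ over draws of $Y$ with the empirical frequency of the event $\{X + Y \leq z\}$:

```python
import random

# Monte Carlo check of F_Z(z) = E[F_X(z - Y)] for two independent
# standard uniform variables (illustrative choice of distribution).
random.seed(0)

def F_X(x):
    # CDF of a uniform random variable on [0, 1]
    return min(max(x, 0.0), 1.0)

z = 0.8
n = 200_000

# Estimate E[F_X(z - Y)] by averaging over draws of Y ...
estimate = sum(F_X(z - random.random()) for _ in range(n)) / n

# ... and compare with the empirical frequency of {X + Y <= z}.
freq = sum(random.random() + random.random() <= z for _ in range(n)) / n

print(abs(estimate - freq) < 0.01)  # the two estimates agree closely
```

Both quantities estimate the same probability $\operatorname{P}(X + Y \leq z)$, so they agree up to Monte Carlo error.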
Example Let $X$ be a uniform random variable with support $R_X = [0,1]$ and probability density function
$$f_X(x) = \begin{cases} 1 & \text{if } x \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$
and $Y$ another uniform random variable, independent of $X$, with support $R_Y = [0,1]$ and probability density function
$$f_Y(y) = \begin{cases} 1 & \text{if } y \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$
The distribution function of $X$ is
$$F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{if } 0 \leq x \leq 1 \\ 1 & \text{if } x > 1 \end{cases}$$
The distribution function of $Z = X + Y$ is
$$F_Z(z) = \operatorname{E}\!\left[ F_X(z - Y) \right] = \int_{-\infty}^{\infty} F_X(z - y) f_Y(y) \, dy = \int_0^1 F_X(z - y) \, dy$$
There are four cases to consider:
If $z < 0$, then $z - y < 0$ for all $y \in [0,1]$, so:
$$F_Z(z) = \int_0^1 0 \, dy = 0$$
If $0 \leq z \leq 1$, then $F_X(z - y) = z - y$ for $y \leq z$ and $F_X(z - y) = 0$ for $y > z$, so:
$$F_Z(z) = \int_0^z (z - y) \, dy = \frac{z^2}{2}$$
If $1 < z \leq 2$, then $F_X(z - y) = 1$ for $y < z - 1$ and $F_X(z - y) = z - y$ for $z - 1 \leq y \leq 1$, so:
$$F_Z(z) = \int_0^{z-1} 1 \, dy + \int_{z-1}^1 (z - y) \, dy = (z - 1) + \frac{1 - (z-1)^2}{2} = 2z - 1 - \frac{z^2}{2}$$
If $z > 2$, then $z - y > 1$ for all $y \in [0,1]$, so:
$$F_Z(z) = \int_0^1 1 \, dy = 1$$
Combining these four possible cases, we obtain:
$$F_Z(z) = \begin{cases} 0 & \text{if } z < 0 \\ \dfrac{z^2}{2} & \text{if } 0 \leq z \leq 1 \\ 2z - 1 - \dfrac{z^2}{2} & \text{if } 1 < z \leq 2 \\ 1 & \text{if } z > 2 \end{cases}$$
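The four-case result can be verified numerically. The sketch below (function names are illustrative) approximates the integral $\int_0^1 F_X(z - y)\,dy$ with a midpoint Riemann sum and compares it with the closed-form triangular CDF:

```python
def F_X(x):
    """CDF of a uniform random variable on [0, 1]."""
    return 0.0 if x < 0 else (x if x <= 1 else 1.0)

def F_Z_numeric(z, n=100_000):
    """Approximate E[F_X(z - Y)] = integral of F_X(z - y) over y in [0, 1]
    with a midpoint Riemann sum."""
    h = 1.0 / n
    return sum(F_X(z - (i + 0.5) * h) for i in range(n)) * h

def F_Z_closed(z):
    """Closed-form CDF of Z = X + Y obtained from the four cases above."""
    if z < 0:
        return 0.0
    if z <= 1:
        return z ** 2 / 2
    if z <= 2:
        return 2 * z - 1 - z ** 2 / 2
    return 1.0

for z in [-0.5, 0.25, 0.5, 1.0, 1.5, 1.75, 2.5]:
    assert abs(F_Z_numeric(z) - F_Z_closed(z)) < 1e-4
print("numeric and closed-form CDFs agree")
```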
When the two summands are discrete random variables, the probability mass function of their sum can be derived as follows:
Proposition Let $X$ and $Y$ be two independent discrete random variables and denote by $p_X(x)$ and $p_Y(y)$ their respective probability mass functions and by $R_X$ and $R_Y$ their supports. Let
$$Z = X + Y$$
and denote the probability mass function of $Z$ by $p_Z(z)$. The following holds:
$$p_Z(z) = \sum_{y \in R_Y} p_X(z - y) \, p_Y(y)$$
or:
$$p_Z(z) = \sum_{x \in R_X} p_Y(z - x) \, p_X(x)$$
The first formula is derived as follows:
$$\begin{aligned} p_Z(z) &= \operatorname{P}(Z = z) \\ &= \operatorname{P}(X + Y = z) \\ &= \sum_{y \in R_Y} \operatorname{P}(X + Y = z \mid Y = y) \operatorname{P}(Y = y) \\ &= \sum_{y \in R_Y} \operatorname{P}(X = z - y) \, p_Y(y) \\ &= \sum_{y \in R_Y} p_X(z - y) \, p_Y(y) \end{aligned}$$
where the fourth equality follows from the independence of $X$ and $Y$. The second formula is symmetric to the first.
The two summations above are called convolutions (of two probability mass functions).
Example Let $X$ be a discrete random variable with support $R_X = \{0, 1\}$ and probability mass function
$$p_X(x) = \begin{cases} 1/2 & \text{if } x \in \{0,1\} \\ 0 & \text{otherwise} \end{cases}$$
and $Y$ another discrete random variable, independent of $X$, with support $R_Y = \{0, 1\}$ and probability mass function
$$p_Y(y) = \begin{cases} 1/2 & \text{if } y \in \{0,1\} \\ 0 & \text{otherwise} \end{cases}$$
Define
$$Z = X + Y$$
Its support is:
$$R_Z = \{0, 1, 2\}$$
The probability mass function of $Z$, evaluated at $z = 0$, is:
$$p_Z(0) = \sum_{y \in R_Y} p_X(-y) \, p_Y(y) = p_X(0) \, p_Y(0) = \frac{1}{4}$$
Evaluated at $z = 1$, it is:
$$p_Z(1) = p_X(1) \, p_Y(0) + p_X(0) \, p_Y(1) = \frac{1}{2}$$
Evaluated at $z = 2$, it is:
$$p_Z(2) = p_X(1) \, p_Y(1) = \frac{1}{4}$$
Therefore, the probability mass function of $Z$ is:
$$p_Z(z) = \begin{cases} 1/4 & \text{if } z = 0 \\ 1/2 & \text{if } z = 1 \\ 1/4 & \text{if } z = 2 \\ 0 & \text{otherwise} \end{cases}$$
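The discrete convolution above is easy to compute mechanically. The sketch below (a generic helper, `pmf_convolve` is an illustrative name) represents each pmf as a dictionary mapping support points to probabilities and sums $p_X(x)\,p_Y(y)$ over all pairs with $x + y = z$:

```python
from itertools import product
from fractions import Fraction

def pmf_convolve(p_x, p_y):
    """Convolution of two pmfs given as dicts {value: probability}:
    p_Z(z) = sum over (x, y) with x + y = z of p_X(x) * p_Y(y)."""
    p_z = {}
    for (x, px), (y, py) in product(p_x.items(), p_y.items()):
        p_z[x + y] = p_z.get(x + y, Fraction(0)) + px * py
    return p_z

half = Fraction(1, 2)
p_x = {0: half, 1: half}
p_y = {0: half, 1: half}
p_z = pmf_convolve(p_x, p_y)
print(p_z)  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```

Using `Fraction` keeps the arithmetic exact, so the output matches the pmf derived above with no rounding.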
When the two summands are absolutely continuous random variables, the probability density function of their sum can be derived as follows:
Proposition Let $X$ and $Y$ be two independent absolutely continuous random variables and denote by $f_X(x)$ and $f_Y(y)$ their respective probability density functions. Let
$$Z = X + Y$$
and denote the probability density function of $Z$ by $f_Z(z)$. The following holds:
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - y) f_Y(y) \, dy$$
or:
$$f_Z(z) = \int_{-\infty}^{\infty} f_Y(z - x) f_X(x) \, dx$$
The distribution function of a sum of independent variables is:
$$F_Z(z) = \operatorname{E}\!\left[ F_X(z - Y) \right] = \int_{-\infty}^{\infty} F_X(z - y) f_Y(y) \, dy$$
Differentiating both sides with respect to $z$ and using the fact that the density function is the derivative of the distribution function, we obtain:
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - y) f_Y(y) \, dy$$
The second formula is symmetric to the first.
The two integrals above are called convolutions (of two probability density functions).
Example Let $X$ be an exponential random variable with support $R_X = [0, \infty)$ and probability density function
$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x \geq 0 \\ 0 & \text{otherwise} \end{cases}$$
and $Y$ another exponential random variable, independent of $X$, with support $R_Y = [0, \infty)$ and probability density function
$$f_Y(y) = \begin{cases} \lambda e^{-\lambda y} & \text{if } y \geq 0 \\ 0 & \text{otherwise} \end{cases}$$
Define:
$$Z = X + Y$$
The support of $Z$ is:
$$R_Z = [0, \infty)$$
When $z \geq 0$, the probability density function of $Z$ is:
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - y) f_Y(y) \, dy = \int_0^z \lambda e^{-\lambda (z - y)} \lambda e^{-\lambda y} \, dy = \lambda^2 e^{-\lambda z} \int_0^z dy = \lambda^2 z e^{-\lambda z}$$
Therefore, the probability density function of $Z$ is:
$$f_Z(z) = \begin{cases} \lambda^2 z e^{-\lambda z} & \text{if } z \geq 0 \\ 0 & \text{otherwise} \end{cases}$$
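The convolution integral in the example can be checked numerically. The sketch below (function names are illustrative; the rate $\lambda = 1.5$ is an arbitrary test value) evaluates $\int_0^z f_X(z-y) f_Y(y)\,dy$ with a midpoint Riemann sum and compares it with the closed-form density $\lambda^2 z e^{-\lambda z}$:

```python
import math

def f_exp(t, lam):
    """Density of an exponential random variable with rate lam."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def f_Z_numeric(z, lam, n=100_000):
    """Midpoint approximation of the convolution integral
    f_Z(z) = integral from 0 to z of f_X(z - y) * f_Y(y) dy."""
    h = z / n
    return sum(f_exp(z - (i + 0.5) * h, lam) * f_exp((i + 0.5) * h, lam)
               for i in range(n)) * h

def f_Z_closed(z, lam):
    """Closed-form density derived above (a Gamma density with shape 2)."""
    return lam ** 2 * z * math.exp(-lam * z) if z >= 0 else 0.0

lam = 1.5
for z in [0.1, 0.5, 1.0, 2.0, 4.0]:
    assert abs(f_Z_numeric(z, lam) - f_Z_closed(z, lam)) < 1e-6
print("numeric convolution matches the closed form")
```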
We have discussed above how to derive the distribution of the sum of two independent random variables. How do we derive the distribution of the sum of more than two mutually independent random variables? Suppose $X_1$, $X_2$, ..., $X_n$ are mutually independent random variables and let $Y_n$ be their sum:
$$Y_n = \sum_{i=1}^{n} X_i$$
The distribution of $Y_n$ can be derived recursively, using the results for sums of two random variables given above:
first, define:
$$Y_2 = X_1 + X_2$$
and compute the distribution of $Y_2$;
then, define:
$$Y_3 = Y_2 + X_3$$
and compute the distribution of $Y_3$;
and so on, until the distribution of $Y_n$ can be computed from:
$$Y_n = Y_{n-1} + X_n$$
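The recursion above amounts to repeated pairwise convolution. The sketch below (helper names are illustrative) applies it to $n = 4$ independent fair Bernoulli variables, for which the sum is known to be Binomial$(n, 1/2)$:

```python
from functools import reduce
from fractions import Fraction
from math import comb

def pmf_convolve(p_x, p_y):
    """One step of the recursion: pmf of the sum of two
    independent discrete variables, given as dicts {value: probability}."""
    p_z = {}
    for x, px in p_x.items():
        for y, py in p_y.items():
            p_z[x + y] = p_z.get(x + y, Fraction(0)) + px * py
    return p_z

# Sum of n = 4 independent fair Bernoulli variables, computed recursively:
# Y_2 = X_1 + X_2, Y_3 = Y_2 + X_3, Y_4 = Y_3 + X_4.
bern = {0: Fraction(1, 2), 1: Fraction(1, 2)}
n = 4
p_sum = reduce(pmf_convolve, [bern] * n)

# The result matches the Binomial(n, 1/2) pmf.
for k, p in p_sum.items():
    assert p == Fraction(comb(n, k), 2 ** n)
print(p_sum)
```

`reduce` folds the list of identical pmfs left to right, which is exactly the recursion $Y_2, Y_3, \ldots, Y_n$ described above.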