This lecture presents some examples of point
estimation problems, focusing on **variance estimation**,
i.e. on using a sample to produce a point estimate of the
variance of an unknown distribution.

In this example we make assumptions that are similar to those we made in the example of mean estimation entitled Mean estimation - Normal IID samples. The reader is strongly advised to read that example before reading this one.

The sample is made of $n$ independent draws from a normal distribution having known mean $\mu$ and unknown variance $\sigma^2$. Specifically, we observe $n$ realizations $x_1$, ..., $x_n$ of $n$ independent random variables $X_1$, ..., $X_n$, all having a normal distribution with known mean $\mu$ and unknown variance $\sigma^2$. The sample is the $n$-dimensional vector $x = (x_1, \ldots, x_n)$, which is a realization of the random vector $X = (X_1, \ldots, X_n)$.

We use the following estimator of variance:
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^n \left(X_i - \mu\right)^2$$

The expected value of the estimator $\widehat{\sigma}^2_n$ is equal to the true variance $\sigma^2$:
$$\mathbb{E}\big[\widehat{\sigma}^2_n\big] = \sigma^2$$

Proof

This can be proved using linearity of the expected value:
$$\mathbb{E}\big[\widehat{\sigma}^2_n\big] = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2\right] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[(X_i-\mu)^2\big] = \frac{1}{n}\sum_{i=1}^n \operatorname{Var}\big[X_i\big] = \frac{1}{n}\cdot n\sigma^2 = \sigma^2$$

Therefore, the estimator is unbiased.
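As a quick sanity check, the known-mean estimator can be computed directly from a sample. The sketch below is a minimal Python implementation; the function name and the data are hypothetical:

```python
def var_known_mean(xs, mu):
    """Variance estimator when the true mean mu is known:
    sigma2_hat = (1/n) * sum_i (x_i - mu)^2."""
    n = len(xs)
    return sum((x - mu) ** 2 for x in xs) / n

# Hand-checkable example: with mu = 2 the deviations are -1 and 1,
# so the estimate is ((-1)**2 + 1**2) / 2 = 1.0
print(var_known_mean([1.0, 3.0], mu=2.0))  # 1.0
```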

The variance of the estimator is:
$$\operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{2\sigma^4}{n}$$

Proof

This can be proved using the fact that, for a normal distribution, $\operatorname{Var}\big[(X_i-\mu)^2\big] = 2\sigma^4$, and the formula for the variance of a sum of independent random variables:
$$\operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}\big[(X_i-\mu)^2\big] = \frac{1}{n^2}\cdot n\cdot 2\sigma^4 = \frac{2\sigma^4}{n}$$

Therefore, the variance of the estimator tends to zero as the sample size tends to infinity.
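Both moments can be checked by simulation. The sketch below, which assumes nothing beyond the Python standard library (the seed, sample size, and number of replications are arbitrary choices), repeatedly draws samples of size $n=5$ from $N(0,1)$ and compares the empirical mean and variance of the estimates with $\sigma^2 = 1$ and $2\sigma^4/n = 0.4$:

```python
import random

random.seed(0)
mu, sigma2, n, reps = 0.0, 1.0, 5, 20000

estimates = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    # known-mean estimator: (1/n) * sum (x_i - mu)^2
    estimates.append(sum((x - mu) ** 2 for x in xs) / n)

mean_est = sum(estimates) / reps
var_est = sum((e - mean_est) ** 2 for e in estimates) / reps

print(mean_est)  # close to sigma2 = 1.0 (unbiasedness)
print(var_est)   # close to 2 * sigma2**2 / n = 0.4
```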

The estimator $\widehat{\sigma}^2_n$ has a Gamma distribution with parameters $n$ and $\sigma^2$.

Proof

The estimator can be written as
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2 = \frac{\sigma^2}{n}\sum_{i=1}^n \left(\frac{X_i-\mu}{\sigma}\right)^2 = \frac{\sigma^2}{n}\sum_{i=1}^n Z_i^2 = \frac{\sigma^2}{n}\,Q_n$$
where the variables $Z_i = (X_i-\mu)/\sigma$ are independent standard normal random variables and $Q_n = \sum_{i=1}^n Z_i^2$, being a sum of squares of $n$ independent standard normal random variables, has a Chi-square distribution with $n$ degrees of freedom (see the lecture entitled Chi-square distribution for more details). Multiplying a Chi-square random variable with $n$ degrees of freedom by $\sigma^2/n$ one obtains a Gamma random variable with parameters $n$ and $\sigma^2$ (see the lecture entitled Gamma distribution for more details).

The mean squared error of the estimator is:
$$\operatorname{MSE}\big(\widehat{\sigma}^2_n\big) = \mathbb{E}\Big[\big(\widehat{\sigma}^2_n - \sigma^2\big)^2\Big] = \operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{2\sigma^4}{n}$$

The estimator $\widehat{\sigma}^2_n$ can be viewed as the sample mean of a sequence $\{Y_i\}$ where the generic term of the sequence is
$$Y_i = (X_i - \mu)^2$$
The sequence $\{Y_i\}$ satisfies the conditions of Kolmogorov's Strong Law of Large Numbers ($\{Y_i\}$ is an IID sequence with finite mean). Therefore, the sample mean of $\{Y_i\}$ converges almost surely to the true mean $\mathbb{E}\big[Y_i\big] = \sigma^2$:
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^n Y_i \;\xrightarrow{\text{a.s.}}\; \sigma^2$$
Therefore the estimator is strongly consistent. It is also weakly consistent, because almost sure convergence implies convergence in probability:
$$\widehat{\sigma}^2_n \;\xrightarrow{P}\; \sigma^2$$
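Consistency can be illustrated with a single run: as the sample size grows, the estimate settles near the true variance. A sketch using only the standard library, with an arbitrary seed and arbitrary true parameters:

```python
import random

random.seed(42)
mu, sigma2 = 3.0, 4.0

for n in (100, 10000, 200000):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    est = sum((x - mu) ** 2 for x in xs) / n  # known-mean estimator
    print(n, est)  # estimates approach sigma2 = 4.0 as n grows
```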

This example is similar to the previous one. The only difference is that we relax the assumption that the mean of the distribution is known.

The sample is made of $n$ independent draws from a normal distribution having unknown mean $\mu$ and unknown variance $\sigma^2$. Specifically, we observe $n$ realizations $x_1$, ..., $x_n$ of $n$ independent random variables $X_1$, ..., $X_n$, all having a normal distribution with unknown mean $\mu$ and unknown variance $\sigma^2$. The sample is the $n$-dimensional vector $x = (x_1, \ldots, x_n)$, which is a realization of the random vector $X = (X_1, \ldots, X_n)$.

In this example the mean $\mu$ of the distribution, being unknown, also needs to be estimated. It is estimated with the sample mean $\bar{X}_n$:
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$$

We use the following two estimators of variance: the **unadjusted sample variance**
$$S^2_n = \frac{1}{n}\sum_{i=1}^n \big(X_i - \bar{X}_n\big)^2$$
and the **adjusted sample variance**
$$s^2_n = \frac{1}{n-1}\sum_{i=1}^n \big(X_i - \bar{X}_n\big)^2$$
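Both estimators share the same sum of squared deviations and differ only in the divisor, so they can be computed together. A minimal sketch (function name and data are hypothetical):

```python
def sample_variances(xs):
    """Return (unadjusted, adjusted) sample variance of the data."""
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)  # sum of squared deviations
    return ss / n, ss / (n - 1)

# Hand-checkable example: xbar = 2, squared deviations sum to 2,
# so S^2 = 2/3 and s^2 = 2/2 = 1
unadj, adj = sample_variances([1.0, 2.0, 3.0])
print(unadj, adj)
```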

The expected value of the unadjusted sample variance is:
$$\mathbb{E}\big[S^2_n\big] = \frac{n-1}{n}\sigma^2$$

Proof

This can be proved as follows:
$$\mathbb{E}\big[S^2_n\big] = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n \big(X_i-\bar{X}_n\big)^2\right] = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}_n^2\right] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[X_i^2\big] - \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n \mathbb{E}\big[X_iX_j\big]$$
But $\mathbb{E}\big[X_iX_j\big] = \mathbb{E}\big[X_i\big]\mathbb{E}\big[X_j\big] = \mu^2$ when $i\neq j$ (because $X_i$ and $X_j$ are independent when $i\neq j$ - see Mutual independence via expectations). Therefore:
$$\mathbb{E}\big[S^2_n\big] = \frac{1}{n}\cdot n\big(\sigma^2+\mu^2\big) - \frac{1}{n^2}\Big[n\big(\sigma^2+\mu^2\big) + n(n-1)\mu^2\Big] = \sigma^2+\mu^2-\frac{\sigma^2}{n}-\mu^2 = \frac{n-1}{n}\sigma^2$$

Therefore, the unadjusted sample variance $S^2_n$ is a biased estimator of the true variance $\sigma^2$. The adjusted sample variance $s^2_n$, on the contrary, is an unbiased estimator of variance:
$$\mathbb{E}\big[s^2_n\big] = \sigma^2$$

Proof

This can be proved as follows:
$$\mathbb{E}\big[s^2_n\big] = \mathbb{E}\left[\frac{n}{n-1}S^2_n\right] = \frac{n}{n-1}\mathbb{E}\big[S^2_n\big] = \frac{n}{n-1}\cdot\frac{n-1}{n}\sigma^2 = \sigma^2$$

Thus, when the mean $\mu$ is also being estimated, we need to divide by $n-1$ rather than by $n$ to obtain an unbiased estimator. Intuitively, by considering squared deviations from the sample mean rather than squared deviations from the true mean, we are underestimating the true variability of the data. In fact, the sum of squared deviations from the true mean is always at least as large as the sum of squared deviations from the sample mean, because the sample mean is precisely the value that minimizes the sum of squared deviations. Dividing by $n-1$ rather than by $n$ exactly corrects this bias. The number $n-1$ by which we divide is called the **number of degrees of freedom** and it is equal to the number of sample points ($n$) minus the number of other parameters to be estimated (in our case $1$, the true mean $\mu$).
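The bias can also be seen numerically: averaging the unadjusted sample variance over many small samples lands near $\frac{n-1}{n}\sigma^2$, not $\sigma^2$, while the adjusted version averages to $\sigma^2$. A simulation sketch with arbitrary seed and parameters:

```python
import random

random.seed(7)
mu, sigma2, n, reps = 0.0, 1.0, 5, 20000

unadj_sum = adj_sum = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    unadj_sum += ss / n        # divides by n        -> biased
    adj_sum += ss / (n - 1)    # divides by n - 1    -> unbiased

print(unadj_sum / reps)  # close to (n - 1)/n * sigma2 = 0.8
print(adj_sum / reps)    # close to sigma2 = 1.0
```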

The factor by which we need to multiply the biased estimator $S^2_n$ to obtain the unbiased estimator $s^2_n$ is, of course:
$$\frac{n}{n-1}$$

This factor is known as the **degrees of freedom adjustment**, which explains why $S^2_n$ is called unadjusted sample variance and $s^2_n$ is called adjusted sample variance.
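The relation between the two estimators is just this multiplicative factor, which can be verified on any dataset (the data here are hypothetical):

```python
n = 4
xs = [2.0, 4.0, 6.0, 8.0]
xbar = sum(xs) / n                     # 5.0
ss = sum((x - xbar) ** 2 for x in xs)  # 9 + 1 + 1 + 9 = 20
unadjusted = ss / n                    # divides by n
adjusted = ss / (n - 1)                # divides by n - 1

# the degrees-of-freedom adjustment n/(n-1) maps one onto the other
print(adjusted == unadjusted * n / (n - 1))  # True
```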

The variance of the unadjusted sample variance is:
$$\operatorname{Var}\big[S^2_n\big] = \frac{2(n-1)}{n^2}\sigma^4$$

Proof

This is proved in the following subsection (distribution of the estimator).

The variance of the adjusted sample variance is:
$$\operatorname{Var}\big[s^2_n\big] = \frac{2\sigma^4}{n-1}$$

Proof

This is also proved in the following subsection (distribution of the estimator).

Therefore, both the variance of $S^2_n$ and the variance of $s^2_n$ converge to zero as the sample size $n$ tends to infinity. Also note that the unadjusted sample variance $S^2_n$, despite being biased, has a smaller variance than the adjusted sample variance $s^2_n$, which is instead unbiased.
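The two closed-form variances can be compared directly for any sample size. A short sketch encoding the formulas above (function names are hypothetical):

```python
def var_unadjusted(n, sigma2):
    # Var[S^2] = 2(n-1)/n^2 * sigma^4
    return 2 * (n - 1) / n ** 2 * sigma2 ** 2

def var_adjusted(n, sigma2):
    # Var[s^2] = 2 sigma^4 / (n-1)
    return 2 * sigma2 ** 2 / (n - 1)

for n in (2, 5, 10, 100, 1000):
    # the unadjusted estimator has the smaller variance for every n,
    # and both shrink toward zero as n grows
    assert var_unadjusted(n, 1.0) < var_adjusted(n, 1.0)
```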

The unadjusted sample variance $S^2_n$ has a Gamma distribution with parameters $n-1$ and $\frac{n-1}{n}\sigma^2$.

Proof

To prove this result, we need to use some facts on quadratic forms involving normal random variables, which have been introduced in the lecture entitled Normal distribution - Quadratic forms. To understand this proof, you need to first read that lecture, in particular the section entitled Sample variance as a quadratic form. Define the matrix
$$M = I - \frac{1}{n}\,\imath\imath^{\top}$$
where $I$ is the $n\times n$ identity matrix and $\imath$ is an $n\times 1$ vector of ones. $M$ is symmetric and idempotent. Denote by $X$ the $n\times 1$ random vector whose $i$-th entry is equal to $X_i$. The random vector $X$ has a multivariate normal distribution with mean $\mu\imath$ and covariance matrix $\sigma^2 I$.

Using the fact that the matrix $M$ is symmetric ($M^{\top}=M$) and idempotent ($MM=M$), the unadjusted sample variance can be written as
$$S^2_n = \frac{1}{n}\sum_{i=1}^n \big(X_i-\bar{X}_n\big)^2 = \frac{1}{n}(MX)^{\top}(MX) = \frac{1}{n}X^{\top}M^{\top}MX = \frac{1}{n}X^{\top}MX$$

Using the fact that the random vector
$$Z = \frac{1}{\sigma}\big(X - \mu\imath\big)$$
has a standard multivariate normal distribution and the fact that $M\imath = 0$, we can rewrite
$$S^2_n = \frac{1}{n}X^{\top}MX = \frac{\sigma^2}{n}Z^{\top}MZ$$
In other words, $S^2_n$ is proportional to a quadratic form in a standard normal random vector ($Z^{\top}MZ$) and the quadratic form involves a symmetric and idempotent matrix whose trace is equal to $n-1$. Therefore, the quadratic form $Z^{\top}MZ$ has a Chi-square distribution with $n-1$ degrees of freedom. Finally, we can write
$$S^2_n = \frac{n-1}{n}\sigma^2\cdot\frac{Z^{\top}MZ}{n-1}$$
i.e. $S^2_n$ is a Chi-square random variable divided by its number of degrees of freedom and multiplied by $\frac{n-1}{n}\sigma^2$. Thus, $S^2_n$ is a Gamma random variable with parameters $n-1$ and $\frac{n-1}{n}\sigma^2$ (see the lecture entitled Gamma distribution for an explanation). Also, by the properties of Gamma random variables, its expected value is
$$\mathbb{E}\big[S^2_n\big] = \frac{n-1}{n}\sigma^2$$
and its variance is
$$\operatorname{Var}\big[S^2_n\big] = \frac{2}{n-1}\left(\frac{n-1}{n}\sigma^2\right)^2 = \frac{2(n-1)}{n^2}\sigma^4$$

The adjusted sample variance $s^2_n$ has a Gamma distribution with parameters $n-1$ and $\sigma^2$.

Proof

The proof of this result is similar to the proof for the unadjusted sample variance found above. It can also be found in the lecture entitled Normal distribution - Quadratic forms. Here, we just note that $s^2_n$, being a Gamma random variable with parameters $n-1$ and $\sigma^2$, has expected value
$$\mathbb{E}\big[s^2_n\big] = \sigma^2$$
and variance
$$\operatorname{Var}\big[s^2_n\big] = \frac{2\sigma^4}{n-1}$$

The mean squared error of the unadjusted sample variance is:
$$\operatorname{MSE}\big(S^2_n\big) = \frac{2n-1}{n^2}\sigma^4$$

Proof

It can be proved as follows:
$$\operatorname{MSE}\big(S^2_n\big) = \operatorname{Var}\big[S^2_n\big] + \Big(\mathbb{E}\big[S^2_n\big] - \sigma^2\Big)^2 = \frac{2(n-1)}{n^2}\sigma^4 + \left(\frac{n-1}{n}\sigma^2 - \sigma^2\right)^2 = \frac{2(n-1)}{n^2}\sigma^4 + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\sigma^4$$

The mean squared error of the adjusted sample variance is:
$$\operatorname{MSE}\big(s^2_n\big) = \frac{2\sigma^4}{n-1}$$

Proof

It can be proved as follows:
$$\operatorname{MSE}\big(s^2_n\big) = \operatorname{Var}\big[s^2_n\big] + \Big(\mathbb{E}\big[s^2_n\big] - \sigma^2\Big)^2 = \frac{2\sigma^4}{n-1} + 0 = \frac{2\sigma^4}{n-1}$$

Therefore the mean squared error of the unadjusted sample variance is always smaller than the mean squared error of the adjusted sample variance:
$$\operatorname{MSE}\big(S^2_n\big) < \operatorname{MSE}\big(s^2_n\big)$$
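This inequality can be checked numerically from the two closed-form expressions. A sketch (function names are hypothetical):

```python
def mse_unadjusted(n, sigma2):
    # MSE(S^2) = (2n - 1)/n^2 * sigma^4
    return (2 * n - 1) / n ** 2 * sigma2 ** 2

def mse_adjusted(n, sigma2):
    # MSE(s^2) = 2 sigma^4 / (n - 1)
    return 2 * sigma2 ** 2 / (n - 1)

# strict inequality holds for every sample size checked
for n in (2, 5, 10, 100, 1000):
    assert mse_unadjusted(n, 1.0) < mse_adjusted(n, 1.0)
```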

The unadjusted sample variance
$$S^2_n = \frac{1}{n}\sum_{i=1}^n \big(X_i - \bar{X}_n\big)^2$$
can be written as
$$S^2_n = \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}_n^2 = \bar{Y}_n - \bar{X}_n^2$$
where we have defined
$$Y_i = X_i^2, \qquad \bar{Y}_n = \frac{1}{n}\sum_{i=1}^n Y_i$$
The two sequences $\{\bar{X}_n\}$ and $\{\bar{Y}_n\}$ are the sample means of $\{X_i\}$ and $\{Y_i\}$ respectively. The latter both satisfy the conditions of Kolmogorov's Strong Law of Large Numbers (they form IID sequences with finite means), which implies that their sample means $\bar{X}_n$ and $\bar{Y}_n$ converge almost surely to their true means:
$$\bar{X}_n \;\xrightarrow{\text{a.s.}}\; \mu, \qquad \bar{Y}_n \;\xrightarrow{\text{a.s.}}\; \mathbb{E}\big[X_i^2\big] = \sigma^2 + \mu^2$$
Since the function
$$g(x,y) = y - x^2$$
is continuous and almost sure convergence is preserved by continuous transformations, we obtain
$$S^2_n = g\big(\bar{X}_n, \bar{Y}_n\big) \;\xrightarrow{\text{a.s.}}\; g\big(\mu, \sigma^2+\mu^2\big) = \sigma^2 + \mu^2 - \mu^2 = \sigma^2$$
Therefore the estimator $S^2_n$ is strongly consistent. It is also weakly consistent, because almost sure convergence implies convergence in probability:
$$S^2_n \;\xrightarrow{P}\; \sigma^2$$
The adjusted sample variance can be written as
$$s^2_n = \frac{n}{n-1}S^2_n$$
The ratio $\frac{n}{n-1}$ can be thought of as a constant random variable
$$W_n = \frac{n}{n-1}$$
which converges almost surely to $1$. Therefore $s^2_n = W_n S^2_n$, where both $W_n$ and $S^2_n$ are almost surely convergent. Since the product is a continuous function and almost sure convergence is preserved by continuous transformations, we have
$$s^2_n = W_n S^2_n \;\xrightarrow{\text{a.s.}}\; 1\cdot\sigma^2 = \sigma^2$$
Thus, also $s^2_n$ is strongly consistent.

