An estimator of a given parameter is said to be unbiased if its expected value is equal to the true value of the parameter.
In other words, an estimator is unbiased if it produces parameter estimates that are on average correct.
Remember that in a parameter estimation problem:
we observe some data (a sample, denoted by $\xi$), which has been drawn from an unknown probability distribution;
we want to estimate a parameter $\theta$ (e.g., the mean or the variance) of the distribution that generated our sample;
we produce an estimate $\widehat{\theta}$ of $\theta$ (i.e., our best guess of $\theta$) by using the information provided by the sample $\xi$.
The estimate is usually obtained by using a predefined rule (a function) that associates an estimate $\widehat{\theta}$ to each sample $\xi$ that could possibly be observed.
The function is called an estimator.
Definition An estimator $\widehat{\theta}$ of a parameter $\theta$ is said to be unbiased if and only if $$\mathrm{E}\left[\widehat{\theta}\,\right]=\theta$$ where the expected value is calculated with respect to the probability distribution of the sample $\xi$.
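The definition can be checked numerically. The sketch below (an illustration not taken from the lectures linked above) averages the sample mean over many independent samples; for an unbiased estimator this Monte Carlo average approaches the true parameter as the number of replications grows. The distribution, parameter values, and sample size are arbitrary choices made for the example.

```python
import random

random.seed(0)

def sample_mean(xs):
    """The sample mean, an unbiased estimator of the expected value."""
    return sum(xs) / len(xs)

true_mean, true_sd = 5.0, 2.0  # parameters of the data-generating distribution
n, reps = 10, 20000            # sample size and number of Monte Carlo replications

# Draw many independent samples and average the estimates.
estimates = []
for _ in range(reps):
    xs = [random.gauss(true_mean, true_sd) for _ in range(n)]
    estimates.append(sample_mean(xs))

avg = sum(estimates) / reps
print(avg)  # close to the true mean of 5
```

The average of the 20,000 estimates differs from the true mean only by Monte Carlo error, consistent with the sample mean being unbiased.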
The following table contains examples of unbiased estimators (with links to lectures where unbiasedness is proved).
| Estimator | Estimated parameter | Lecture where proof can be found |
|---|---|---|
| Sample mean | Expected value | Estimation of the mean |
| Adjusted sample variance | Variance | Estimation of the variance |
| OLS estimator | Linear regression coefficients | Gauss-Markov theorem |
| Adjusted sample variance of the OLS residuals | Variance of the error of a linear regression | Normal linear regression model |
An estimator which is not unbiased is said to be biased.
The bias of an estimator $\widehat{\theta}$ is the expected difference between $\widehat{\theta}$ and the true parameter: $$\operatorname{Bias}\left(\widehat{\theta}\,\right)=\mathrm{E}\left[\widehat{\theta}\,\right]-\theta.$$
Thus, an estimator is unbiased if its bias is equal to zero, and biased otherwise.
Unbiasedness is discussed in more detail in the lecture entitled Point estimation.