In the theory of parameter estimation, an estimator is a function that associates a parameter estimate to each possible sample we can observe.
In a parameter estimation problem, we need to choose a parameter $\widehat{\theta}$ from a parameter space $\Theta$, using the information contained in a sample $\xi$ (a set of observations drawn from an unknown probability distribution). The parameter $\widehat{\theta}$ is our best guess of the true and unknown parameter $\theta_{0}$, which is associated with the probability distribution that generated the sample. The parameter $\widehat{\theta}$ is called an estimate of $\theta_{0}$.
When the estimate is produced using a predefined rule that associates a parameter estimate $\widehat{\theta}$ to each possible sample $\xi$, we can write $\widehat{\theta}$ as a function of $\xi$:
$$\widehat{\theta}=\widehat{\theta}(\xi)$$
The function $\widehat{\theta}(\cdot)$ is called an estimator.
Two commonly found examples of estimators are the sample mean, which is used to estimate the expected value of an unknown distribution, and the sample variance, which is used to estimate the variance of an unknown distribution.
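As a sketch, these two estimators can be written out in a few lines of Python. The code below draws a simulated sample from a normal distribution with known mean and variance (the distribution and its parameters are illustrative choices, not part of the definition) and applies each estimator to it:

```python
import random

def sample_mean(sample):
    """Estimator of the expected value: the average of the observations."""
    return sum(sample) / len(sample)

def sample_variance(sample):
    """Unbiased estimator of the variance (divides by n - 1)."""
    n = len(sample)
    m = sample_mean(sample)
    return sum((x - m) ** 2 for x in sample) / (n - 1)

# Simulate a sample of 10,000 observations from a normal distribution
# with mean 5 and standard deviation 2 (variance 4).
random.seed(0)
sample = [random.gauss(5, 2) for _ in range(10_000)]

# The estimates should be close to the true parameters 5 and 4.
print(sample_mean(sample))
print(sample_variance(sample))
```

Note that each function maps a whole sample to a single number; the function itself is the estimator, while its value on a particular sample is the estimate.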
More details can be found in the lecture entitled Point estimation, which discusses the concept of estimator in greater depth and presents the main criteria used to evaluate estimators.