
Estimator

by Marco Taboga, PhD

In statistics, an estimator is a function that associates a parameter estimate to each possible sample we can observe.


Description

In an estimation problem, we need to choose a parameter $\widehat{\theta }$ from a set $\Theta $.

We do so by using a set of observations from an unknown probability distribution.

The set of observations is called a sample and it is denoted by $\xi $.

The chosen $\widehat{\theta }$ is our best guess of the true and unknown parameter $\theta _{0}$, which characterizes the probability distribution that generated the sample.

The parameter $\widehat{\theta }$ is called an estimate of $\theta _{0}$.

When $\widehat{\theta }$ is chosen by using a predefined rule that associates an estimate $\widehat{\theta }$ to each possible sample $\xi $, we can write $\widehat{\theta }$ as a function of $\xi $: $\widehat{\theta }=\widehat{\theta }(\xi )$.

The function $\widehat{\theta }(\xi )$ is called an estimator.
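As a minimal sketch of this distinction (assuming Python with NumPy, and using the sample mean as the estimator; the data values are made up for illustration), the estimator is the rule, and applying it to an observed sample produces an estimate:

```python
import numpy as np

def sample_mean(xi):
    """Estimator of the expected value: maps a sample xi to an estimate."""
    return np.mean(xi)

xi = np.array([2.1, 1.8, 2.5, 2.0])  # an observed sample (made-up values)
theta_hat = sample_mean(xi)          # the estimate, i.e. theta_hat(xi)
print(theta_hat)                     # 2.1
```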

Estimators as random variables

The sample $\xi $, before being observed, is regarded as randomly drawn from the distribution of interest. Therefore, the estimator $\widehat{\theta }$, being a function of $\xi $, is regarded as a random variable.

After the sample $\xi $ is observed, the realization $\widehat{\theta }(\xi )$ of the estimator is called an estimate of the true parameter $\theta _{0}$.

In other words, an estimate is a realization of an estimator.
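The following simulation sketch shows the estimator behaving as a random variable (the normal distribution, true mean 5, sample size 50, and number of replications are illustrative assumptions): each simulated sample yields a different realization, that is, a different estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_0 = 5.0  # true parameter: the mean of the data-generating distribution

# Each simulated sample gives a different realization (estimate)
# of the same estimator (the sample mean).
estimates = np.array([
    np.mean(rng.normal(loc=theta_0, scale=2.0, size=50))
    for _ in range(10_000)
])

print(estimates.mean())  # close to theta_0
print(estimates.std())   # close to 2 / sqrt(50), approximately 0.283
```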

[Image: summary of the main differences between the concepts of estimate and estimator.]

Estimators as statistics

A function of a sample $\xi $ is called a statistic.

Therefore, an estimator $\widehat{\theta }(\xi )$ is a statistic.

However, not all statistics are estimators. For example, the z-statistic often used in hypothesis tests about the mean is not an estimator, because it is not meant to be a guess of an unknown parameter.

Examples

Commonly found examples of estimators are:

- the sample mean, which is used to estimate the expected value of a distribution;
- the sample variance, which is used to estimate the variance;
- the ordinary least squares (OLS) estimator, which is used to estimate the coefficients of a linear regression;
- maximum likelihood estimators.

How estimators are compared

Different estimators of the same parameter are often compared by looking at their mean squared error (MSE).

The MSE is equal to the expected value of the squared difference between the estimator and the true value of the parameter: $\mathrm{MSE}(\widehat{\theta })=\mathrm{E}\left[ \left( \widehat{\theta }-\theta _{0}\right) ^{2}\right]$.

The square provides a measure of the distance between the estimator and the true value.

Therefore, the lower the MSE, the smaller the average distance of the estimator from the true value, and the better the estimator.

For an example of such comparisons, see the lecture on Ridge estimation.
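As a self-contained sketch of such a comparison (this is not the Ridge example; the normal distribution, true variance 4, and sample size 20 are assumptions made for illustration), one can approximate the MSEs of two textbook estimators of the variance by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_0 = 4.0            # true variance of the data-generating distribution
n, n_reps = 20, 100_000

# Two estimators of the variance: divide by n (ddof=0, biased)
# or by n - 1 (ddof=1, unbiased).
se_biased = se_unbiased = 0.0
for _ in range(n_reps):
    xi = rng.normal(loc=0.0, scale=np.sqrt(theta_0), size=n)
    se_biased += (np.var(xi, ddof=0) - theta_0) ** 2
    se_unbiased += (np.var(xi, ddof=1) - theta_0) ** 2

print(se_biased / n_reps, se_unbiased / n_reps)
# For normal data the biased estimator attains the lower MSE:
# a small bias can be more than offset by a reduction in variance.
```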

Other metrics

The MSE is only one of the metrics used to assess estimators.

There are also other metrics, such as the mean absolute error (MAE): $\mathrm{MAE}(\widehat{\theta })=\mathrm{E}\left[ \left\vert \widehat{\theta }-\theta _{0}\right\vert \right]$.

These metrics are expected values of loss functions, which quantify the loss generated by the difference between the estimate $\widehat{\theta }$ and the true value $\theta _{0}$.
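In code, the two metrics differ only in the loss applied to the estimation error. Here is a small sketch that approximates each expected value by an average over simulated estimates (a standard Monte Carlo device, not something prescribed by the text):

```python
import numpy as np

def mse(estimates, theta_0):
    """Monte Carlo approximation of E[(theta_hat - theta_0)^2]."""
    return np.mean((estimates - theta_0) ** 2)   # squared-error loss

def mae(estimates, theta_0):
    """Monte Carlo approximation of E[|theta_hat - theta_0|]."""
    return np.mean(np.abs(estimates - theta_0))  # absolute-error loss
```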

Properties of estimators

When is an estimator considered well-behaved and reliable?

Besides a small MSE, the following two properties are often deemed desirable (see the sketch after the list):

- unbiasedness: the expected value of the estimator equals the true parameter, that is, $\mathrm{E}[\widehat{\theta }]=\theta _{0}$;
- consistency: the estimator converges in probability to the true parameter as the sample size grows.
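A quick simulation sketch shows both properties at work (normal data, true mean 5, and the sample mean as the estimator are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
theta_0 = 5.0  # true mean of the data-generating distribution

for n in (10, 100, 1_000, 10_000):
    estimates = np.array([
        np.mean(rng.normal(loc=theta_0, scale=2.0, size=n))
        for _ in range(2_000)
    ])
    print(n, round(estimates.mean(), 3), round(estimates.std(), 3))

# The average of the estimates stays near theta_0 (unbiasedness), while
# their spread shrinks roughly as 1 / sqrt(n) (consistency in action).
```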

Interval estimators

Until now we have discussed point estimators, that is, rules used to produce our best guess of a parameter value.

That best guess is a single number, or a vector of numbers.

There are also interval estimators (also called set estimators), which give us intervals of numbers that contain the true parameter value with high probability.

The intervals of numbers are called confidence intervals or confidence sets.
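For instance, here is a minimal sketch of an interval estimator (assuming normal data with a known standard deviation of 2, which keeps the formula simple; 1.96 is the 97.5% quantile of the standard normal distribution):

```python
import numpy as np

rng = np.random.default_rng(3)
xi = rng.normal(loc=5.0, scale=2.0, size=100)  # observed sample (simulated)

# 95% confidence interval for the mean with known sigma:
# sample mean +/- 1.96 * sigma / sqrt(n).
sigma, n = 2.0, xi.size
center = np.mean(xi)
half_width = 1.96 * sigma / np.sqrt(n)
print(center - half_width, center + half_width)

# Over repeated samples, intervals constructed this way contain the
# true mean (here 5.0) in about 95% of cases.
```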

To know more about interval estimators and their properties, you can consult the page on set estimators.

More details

More details about estimators can be found in the lecture entitled Point estimation, which discusses the concept of estimator and the main criteria used to evaluate estimators.


How to cite

Please cite as:

Taboga, Marco (2021). "Estimator", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/glossary/estimator.
