In the lecture entitled Point estimation we defined the concept of an estimator and discussed criteria for evaluating estimators, but we did not cover methods for deriving them. This lecture discusses general techniques that can be used to derive parameter estimators in a parametric estimation problem.
Before starting, let us recall the main elements of a parametric estimation problem:
a sample $\xi$ is used to make statements about the probability distribution that generated the sample;
the sample $\xi$ is regarded as the realization of a random vector $\Xi$, whose unknown joint distribution function, denoted by $F(\xi)$, is assumed to belong to a set of distribution functions $\Phi$, called statistical model;
the model $\Phi$ is put into correspondence with a set $\Theta$ of real vectors; $\Theta$ is called the parameter space and its elements $\theta$ are called parameters;
the parameter associated with the unknown distribution function that actually generated the sample is denoted by $\theta_{0}$ and it is called the true parameter (if several different parameters are put into correspondence with the unknown distribution, $\theta_{0}$ can be any one of them);
a predefined rule (a function) $\widehat{\theta}(\xi)$ that associates a parameter estimate to each $\xi$ in the support of $\Xi$ is called an estimator (the symbol $\widehat{\theta}$ is often used to denote both the estimate and the estimator and the meaning is usually clear from the context).
Several widely employed estimators fall within the class of extremum estimators. An estimator $\widehat{\theta}$ is an extremum estimator if it can be represented as the solution of a maximization problem: $$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} Q(\theta, \xi)$$ where $Q(\theta, \xi)$ is a function of both the parameter $\theta$ and the sample $\xi$.
General conditions can be derived for the consistency and asymptotic normality of extremum estimators. We do not discuss them here (see, e.g., Hayashi, F. (2000) Econometrics, Princeton University Press); instead, we give some examples of extremum estimators and refer the reader to the lectures that describe each example in more detail.
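As a minimal sketch of the extremum-estimator idea, the maximization over $\Theta$ can be approximated by a brute-force search over a grid of candidate parameters. The function names and the particular objective below are illustrative, not part of the lecture; the objective $Q(\theta, \xi) = -\sum_i (x_i - \theta)^2$ is chosen because its maximizer is known to be the sample mean.

```python
def extremum_estimator(Q, parameter_grid, sample):
    # discretized version of: theta_hat = argmax over Theta of Q(theta, xi)
    return max(parameter_grid, key=lambda theta: Q(theta, sample))

# illustrative objective: Q(theta, xi) = -sum_i (x_i - theta)^2,
# whose maximizer is the sample mean
def Q(theta, xi):
    return -sum((x - theta) ** 2 for x in xi)

grid = [i / 100 for i in range(501)]  # candidate parameters 0.00 .. 5.00
theta_hat = extremum_estimator(Q, grid, [1.0, 2.0, 3.0])
# theta_hat equals the sample mean, 2.0
```

Grid search is used here only to make the argmax concrete; in practice the maximization is carried out analytically or with a numerical optimizer.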
In maximum likelihood estimation, we maximize the likelihood of the sample: $$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} L(\theta; \xi)$$ where:
if $\Xi$ is discrete, the likelihood $L(\theta; \xi)$ is the joint probability mass function of $\Xi$ associated to the distribution that corresponds to the parameter $\theta$;
if $\Xi$ is absolutely continuous, the likelihood $L(\theta; \xi)$ is the joint probability density function of $\Xi$ associated to the distribution that corresponds to the parameter $\theta$.
$\widehat{\theta}$ is called the maximum likelihood estimator of $\theta_{0}$.
Maximum likelihood estimation is discussed in more detail in the lecture entitled Maximum Likelihood.
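To make the maximization concrete, here is a minimal sketch (not taken from the lecture; function names are illustrative) that estimates the success probability of a Bernoulli sample by maximizing the log-likelihood over a parameter grid. For this model the analytical maximizer is known to be the sample mean, so the grid search can be checked against it.

```python
import math

def bernoulli_log_likelihood(p, sample):
    # log-likelihood of an i.i.d. Bernoulli(p) sample:
    # sum_i [x_i * log(p) + (1 - x_i) * log(1 - p)]
    k = sum(sample)
    n = len(sample)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def mle_grid(sample, grid_size=1000):
    # brute-force maximization of the likelihood over a grid of
    # candidate parameters, standing in for the argmax over Theta
    candidates = [i / grid_size for i in range(1, grid_size)]
    return max(candidates, key=lambda p: bernoulli_log_likelihood(p, sample))

sample = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 7 successes in 10 trials
p_hat = mle_grid(sample)
# the analytical maximizer is the sample mean, 7/10
```

Maximizing the log-likelihood rather than the likelihood itself leaves the argmax unchanged, because the logarithm is strictly increasing.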
In generalized method of moments (GMM) estimation, the distributions associated to the parameters $\theta \in \Theta$ are such that they satisfy the moment condition $$\mathrm{E}_{\theta}\left[g(\Xi, \theta)\right] = 0$$ where $g$ is a (vector) function and $\mathrm{E}_{\theta}$ indicates that the expected value is computed using the distribution associated to $\theta$. The GMM estimator is obtained as $$\widehat{\theta} = \operatorname*{arg\,min}_{\theta \in \Theta} d\left(g(\xi, \theta), 0\right)$$ where $d$ is a measure of the distance of $g(\xi, \theta)$ from its expected value of zero, and the estimator is an extremum estimator because $$\operatorname*{arg\,min}_{\theta \in \Theta} d\left(g(\xi, \theta), 0\right) = \operatorname*{arg\,max}_{\theta \in \Theta} \left[-d\left(g(\xi, \theta), 0\right)\right].$$
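As an illustrative sketch (the model, sample, and function names are assumptions, not from the lecture), consider an Exponential($\lambda$) model with the single moment condition $\mathrm{E}[X] - 1/\lambda = 0$ and a squared-error distance. Minimizing the distance of the sample moment from zero over a grid recovers the familiar method-of-moments estimate $1/\bar{x}$.

```python
def gmm_objective(lam, sample):
    # squared distance of the sample moment from zero; assumed model:
    # Exponential(lam), with moment condition E[X] - 1/lam = 0
    sample_mean = sum(sample) / len(sample)
    return (sample_mean - 1.0 / lam) ** 2

def gmm_grid(sample, grid):
    # minimizing the distance is the same extremum problem as
    # maximizing its negative
    return min(grid, key=lambda lam: gmm_objective(lam, sample))

grid = [i / 1000 for i in range(100, 5001)]  # candidates 0.100 .. 5.000
sample = [0.5, 1.2, 0.3, 2.0, 1.0]           # sample mean is 1.0
lam_hat = gmm_grid(sample, grid)
# the minimizer is approximately 1 / mean(sample) = 1.0
```

With several moment conditions, $d$ is typically a quadratic form in the vector of sample moments, but the extremum structure is the same.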
In least squares estimation the sample comprises realizations $y_{1}$, ..., $y_{n}$ of a random variable $Y$, called the dependent variable, and observations $x_{1}$, ..., $x_{n}$ of a random vector $X$, whose components are called independent variables. It is postulated that there exists a function $f(x, \theta)$ such that $$\mathrm{E}\left[Y \mid X = x\right] = f(x, \theta_{0}).$$
The least squares estimator is obtained as $$\widehat{\theta} = \operatorname*{arg\,min}_{\theta \in \Theta} \sum_{i=1}^{n} \left(y_{i} - f(x_{i}, \theta)\right)^{2}.$$
The estimator is an extremum estimator because $$\operatorname*{arg\,min}_{\theta \in \Theta} \sum_{i=1}^{n} \left(y_{i} - f(x_{i}, \theta)\right)^{2} = \operatorname*{arg\,max}_{\theta \in \Theta} \left[-\sum_{i=1}^{n} \left(y_{i} - f(x_{i}, \theta)\right)^{2}\right].$$
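For a concrete sketch (an illustration under an assumed linear specification, not from the lecture), take $f(x, \theta) = \theta_{1} + \theta_{2} x$ with a scalar regressor. In this case the argmin has a closed form given by the normal equations, so no numerical search is needed.

```python
def least_squares_line(xs, ys):
    # minimize sum_i (y_i - a - b * x_i)^2; for the linear model the
    # argmin is given in closed form by the normal equations
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 1 + 2x with no noise
a_hat, b_hat = least_squares_line(xs, ys)
# the noiseless data are recovered exactly: a_hat = 1, b_hat = 2
```

For nonlinear $f$, no closed form is generally available and the minimization is carried out numerically, exactly as for other extremum estimators.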