
Model selection criteria


Model selection criteria are rules used to select a statistical model among a set of candidate models, based on observed data. Typically, the criteria try to minimize the expected dissimilarity, measured by the Kullback-Leibler divergence, between the chosen model and the true model (i.e., the probability distribution that generated the data).

In this lecture we focus on the selection of models that have been estimated by the maximum likelihood method.


Competing models

First of all, we need to define precisely what we mean by statistical model.

A statistical model is a set of probability distributions that could have generated the data we are analyzing.

Example Suppose we observe $n$ data points $x_{1},\ldots,x_{n}$ which have all been independently drawn from the same probability distribution (in technical terms, they are IID draws). If we assume that the draws come from a normal distribution, then we are formulating a statistical model: we are restricting our attention to the set of all normal distributions and we are ruling out all the probability distributions that are not normal. Note that the normal distribution has two parameters, the mean $\mu$ and the variance $\sigma^{2}$, so that the set of distributions we are considering (the statistical model) includes many normal distributions: one for each possible couple $(\mu,\sigma^{2})$. If instead we assume that the data has been drawn from an exponential distribution, then we are formulating an alternative model. The exponential distribution has one parameter $\lambda$, called the rate parameter. Our statistical model is a set including many possible distributions: one for each possible value of the parameter $\lambda$.

The previous example, although admittedly unrealistic, introduces in a simple manner the problem that we are going to deal with: how do we select one model (normal vs exponential distribution in the example) if we deem that two or more alternative models are plausible?

Notation and main assumptions

Let us denote the vector of observed data by $\xi$. We assume that the data is continuous, and that a model for $\xi$ is a family of joint probability density functions $f_{m}(\xi;\theta_{m})$, parametrized by a parameter vector $\theta_{m}$, for each model $m=1,\ldots,M$.

We focus on continuous distributions in order to simplify the discussion, but everything we say is valid also for discrete distributions, with straightforward modifications (replace probability densities with probability mass functions).

Example In the example above, the vector $\xi$ contains the $n$ data points: $\xi=(x_{1},\ldots,x_{n})$. The number of models is $M=2$. The two parameter vectors are $\theta_{1}=(\mu,\sigma^{2})$ for the normal distribution and $\theta_{2}=\lambda$ for the exponential distribution. The joint probability density function for the first model is $f_{1}(\xi;\theta_{1})=\prod_{j=1}^{n}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(x_{j}-\mu)^{2}}{2\sigma^{2}}\right)$ because the joint density of a vector of independent random variables is equal to the product of their marginal densities. The joint probability density function for the second model is $f_{2}(\xi;\theta_{2})=\prod_{j=1}^{n}\lambda\exp\left(-\lambda x_{j}\right)1_{\{x_{j}>0\}}$ where $1_{\{x_{j}>0\}}$ is an indicator function (equal to 1 if $x_{j}>0$ and to 0 otherwise).
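
To make the two joint densities concrete, the following minimal Python sketch (not part of the original lecture; the function names normal_loglik and exponential_loglik are illustrative) evaluates the two joint log-densities for a given data vector. Working on the log scale avoids numerical underflow when $n$ is large.

```python
import numpy as np

def normal_loglik(xi, mu, sigma2):
    # log f_1(xi; mu, sigma^2): sum of the log-densities of independent N(mu, sigma^2) draws
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2) - (xi - mu) ** 2 / (2 * sigma2))

def exponential_loglik(xi, lam):
    # log f_2(xi; lambda): the density is zero (log-density -inf) if any observation is non-positive
    if np.any(xi <= 0):
        return -np.inf
    return np.sum(np.log(lam) - lam * xi)
```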

We assume that model parameters are estimated by maximum likelihood (ML). We denote by $\widehat{\theta}_{1},\ldots,\widehat{\theta}_{M}$ the ML estimates of the parameters of the $M$ models.
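
For the two models of the running example the ML estimates have well-known closed forms: the sample mean and the (biased, $1/n$) sample variance for the normal model, and the reciprocal of the sample mean for the exponential rate. The sketch below (the name ml_estimates is illustrative) computes them.

```python
import numpy as np

def ml_estimates(xi):
    mu_hat = np.mean(xi)
    sigma2_hat = np.var(xi)         # np.var divides by n by default, which is the ML estimate
    lambda_hat = 1.0 / np.mean(xi)
    return (mu_hat, sigma2_hat), lambda_hat
```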

If you want to see some examples of how ML estimates are derived, you can have a look at the lectures on maximum likelihood estimation.

Finally, we will denote by $f_{0}(\xi)$ the density of the unknown probability distribution that generated the data, and by $S$ the index of the model selected by a model selection criterion. Clearly, $S$ can range between 1 and $M$.

The general criterion

Akaike (1973) was the first to propose a general criterion for selecting models estimated by maximum likelihood. He proposed to minimize the expected dissimilarity between the chosen model $f_{m}(\xi;\widehat{\theta}_{m})$, evaluated at the ML estimate, and the true distribution $f_{0}(\xi)$.

The dissimilarity between an estimated model and the true distribution is measured by the Kullback-Leibler divergence $D_{KL}\left(f_{0}\,\|\,f_{m}(\cdot\,;\widehat{\theta}_{m})\right)=\mathrm{E}\left[\ln\dfrac{f_{0}(\xi)}{f_{m}(\xi;\widehat{\theta}_{m})}\right]$, where the expected value is with respect to the true density $f_{0}(\xi)$.
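
Note that, by the properties of logarithms, the divergence can be rewritten as $D_{KL}\left(f_{0}\,\|\,f_{m}(\cdot\,;\widehat{\theta}_{m})\right)=\mathrm{E}\left[\ln f_{0}(\xi)\right]-\mathrm{E}\left[\ln f_{m}(\xi;\widehat{\theta}_{m})\right]$, where the first term does not depend on the candidate model. Hence, ranking models by their divergence from the truth amounts to ranking them by the expected log-density $\mathrm{E}\left[\ln f_{m}(\xi;\widehat{\theta}_{m})\right]$ of the estimated model, which is the quantity that the criteria presented below effectively try to estimate.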

The expected dissimilarity is computed as $\mathrm{E}\left[D_{KL}\left(f_{0}\,\|\,f_{m}(\cdot\,;\widehat{\theta}_{m})\right)\right]$, where the expectation is over the sampling distribution of the ML estimate $\widehat{\theta}_{m}$, which, being a function of the sample $\xi$, is regarded as stochastic.

Ideally, we would like to select the model that minimizes the expected dissimilarity: $S=\underset{m=1,\ldots,M}{\operatorname{arg\,min}}\;\mathrm{E}\left[D_{KL}\left(f_{0}\,\|\,f_{m}(\cdot\,;\widehat{\theta}_{m})\right)\right]$.
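
Although this ideal criterion cannot be applied to real data, it can be mimicked in a simulation in which the true distribution is chosen by us. The following Python sketch is purely illustrative (the gamma "truth", the sample size and the number of replications are arbitrary choices, not part of the original lecture): it approximates the expected dissimilarity of the two models of the running example by Monte Carlo, averaging the divergence over repeated samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, m_eval = 50, 200, 10_000              # sample size, replications, fresh draws

true_dist = stats.gamma(a=2.0, scale=1.0)      # plays the role of the true density f_0
fresh = true_dist.rvs(m_eval, random_state=rng)
const = np.mean(true_dist.logpdf(fresh))       # Monte Carlo estimate of E[ln f_0(xi)]

avg_kl = {"normal": 0.0, "exponential": 0.0}
for _ in range(reps):
    x = true_dist.rvs(n, random_state=rng)     # one simulated sample, hence one draw of theta_hat
    mu_hat, sigma2_hat = x.mean(), x.var()     # ML estimates, normal model
    lambda_hat = 1.0 / x.mean()                # ML estimate, exponential model
    # KL(f_0 || f_m(.; theta_hat)) = E[ln f_0] - E[ln f_m], approximated on the fresh draws
    avg_kl["normal"] += const - stats.norm.logpdf(fresh, mu_hat, np.sqrt(sigma2_hat)).mean()
    avg_kl["exponential"] += const - stats.expon.logpdf(fresh, scale=1.0 / lambda_hat).mean()

avg_kl = {k: v / reps for k, v in avg_kl.items()}
print(avg_kl)   # the (infeasible) ideal criterion would pick the model with the smallest value
```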

However, the expected dissimilarity cannot be computed exactly because the true distribution $f_{0}$ and the sampling distribution of $\widehat{\theta}_{m}$ are unknown.

Akaike (1973) proposed an approximation to the expected dissimilarity that can be easily computed, giving rise to the so-called Akaike Information Criterion (AIC).

As proved, for example, by Burnham and Anderson (2004), other popular selection criteria such as the AIC corrected for small-sample bias (AICc; Sugiura 1978, Hurvich and Tsai 1989) and the Bayesian Information Criterion (BIC; Schwarz 1978) are based on different approximations of the same measure of expected dissimilarity.

Popular criteria

We briefly present here the most popular selection criteria.

Akaike Information Criterion (AIC)

According to the Akaike Information Criterion, the selected model $S$ solves the minimization problem $S=\underset{m=1,\ldots,M}{\operatorname{arg\,min}}\;AIC_{m}$, where the value of the $m$-th model is $AIC_{m}=2K_{m}-2\ln\left(f_{m}(\xi;\widehat{\theta}_{m})\right)$ and $K_{m}$ is the number of parameters to be estimated in the $m$-th model.
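
As a minimal sketch (the function name is illustrative), the AIC value of a model is just its maximized log-likelihood penalized by the number of estimated parameters:

```python
def aic(loglik, k):
    # loglik: maximized log-likelihood ln f_m(xi; theta_hat); k: number of estimated parameters K_m
    return 2 * k - 2 * loglik
```

In the running example, $K_{1}=2$ for the normal model (mean and variance) and $K_{2}=1$ for the exponential model (rate).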

Note that an increasing linear transformation applied to all model values does not change the selected model. As a matter of fact, some references define the value of the $m$-th model as $AIC_{m}=K_{m}-\ln\left(f_{m}(\xi;\widehat{\theta}_{m})\right)$, that is, half of the value reported above.

Corrected Akaike Information Criterion (AICc)

An approximation that is more precise in small samples is the so-called corrected Akaike Information Criterion (AICc), according to which the value to be minimized is $AICc_{m}=2K_{m}-2\ln\left(f_{m}(\xi;\widehat{\theta}_{m})\right)+\dfrac{2K_{m}\left(K_{m}+1\right)}{N-K_{m}-1}$, where $N$ is the size of the sample being used for estimation.
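
A corresponding sketch of the corrected value (again, the function name is illustrative); note that the correction term is well defined only when $N>K_{m}+1$ and vanishes as $N$ grows large, so that AICc and AIC agree asymptotically:

```python
def aicc(loglik, k, n):
    # small-sample correction: requires n > k + 1; reduces to the AIC as n grows large
    return 2 * k - 2 * loglik + (2 * k * (k + 1)) / (n - k - 1)
```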

Bayesian Information Criterion (BIC)

Another popular criterion is the Bayesian Information Criterion, according to which the selected model is the one that achieves the minimum value of $BIC_{m}=K_{m}\ln\left(N\right)-2\ln\left(f_{m}(\xi;\widehat{\theta}_{m})\right)$.
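
The sketch below ties together the illustrative helpers defined earlier (normal_loglik, exponential_loglik, ml_estimates): it computes the BIC values of the two candidate models on simulated data and selects the model with the smallest value. The exponential "truth" used to generate the data is an arbitrary choice made for the illustration.

```python
import numpy as np

def bic(loglik, k, n):
    # K_m * ln(N) - 2 * ln f_m(xi; theta_hat)
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(1)
xi = rng.exponential(scale=2.0, size=100)       # simulated data (truly exponential here)

(mu_hat, sigma2_hat), lambda_hat = ml_estimates(xi)
loglik = {"normal": normal_loglik(xi, mu_hat, sigma2_hat),
          "exponential": exponential_loglik(xi, lambda_hat)}
K = {"normal": 2, "exponential": 1}

bic_values = {m: bic(loglik[m], K[m], len(xi)) for m in loglik}
selected = min(bic_values, key=bic_values.get)  # index S of the selected model
print(bic_values, selected)
```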

The penalty for complexity

As you might have noticed, all of these criteria penalize the dimension of the model: the higher the number of parameters $K_{m}$ is, the more model $m$ is penalized.

This penalty for complexity is typical of model selection criteria: a model with many parameters is more likely to over-fit, that is, to have a spuriously high value of the log-likelihood $\ln\left(f_{m}(\xi;\widehat{\theta}_{m})\right)$. For a discussion of over-fitting see the lecture on the R squared of a linear regression.

The complexity penalty is also related to the so-called bias-variance trade-off: by increasing model complexity, we usually decrease the bias and increase the variance; beyond a certain degree of complexity, increases in variance are larger than reductions in bias, and, as a consequence, the quality of our inferences becomes worse.

References

Akaike, H., 1973. Information theory and an extension of the maximum likelihood principle. In: Petrov, B.N. and Csaki, F. (eds.), Second International Symposium on Information Theory. Akademiai Kiado, Budapest, pp. 276-281.

Burnham, K.P. and Anderson, D.R., 2004. Multimodel inference: understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), pp. 261-304.

Hurvich, C.M. and Tsai, C.L., 1989. Regression and time series model selection in small samples. Biometrika, 76(2), pp. 297-307.

Schwarz, G., 1978. Estimating the dimension of a model. The Annals of Statistics, 6(2), pp. 461-464.

Sugiura, N., 1978. Further analysis of the data by Akaike's information criterion and the finite corrections. Communications in Statistics - Theory and Methods, 7(1), pp. 13-26.
