# Likelihood ratio test

The likelihood ratio (LR) test is a hypothesis test in which two different maximum likelihood estimates of a parameter are compared in order to decide whether or not to reject a restriction on the parameter.

## The null hypothesis

The likelihood ratio test is used to verify null hypotheses that can be written in the form $$H_0: g(\theta_0) = 0$$ where:

• $\theta_0$ is an unknown parameter belonging to a parameter space $\Theta \subseteq \mathbb{R}^p$;

• $g: \mathbb{R}^p \rightarrow \mathbb{R}^r$ is a vector-valued function ($r \leq p$).

The above formulation of a null hypothesis is quite general, as many common parameter restrictions can be written in the form $g(\theta_0) = 0$. To understand why, you should read the introductory lecture on Hypothesis testing in a maximum likelihood framework.
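As a concrete illustration of how several restrictions can be packed into one vector-valued function, the following minimal sketch encodes two hypothetical restrictions (that the second entry of the parameter is zero and that the first and third entries are equal) as a function $g$ whose zero set is the null hypothesis:

```python
# A minimal sketch (with hypothetical restrictions, not from this lecture):
# encode "theta_2 = 0" and "theta_1 = theta_3" as g(theta) = 0.

def g(theta):
    """Return the vector of restriction violations; H0 holds iff g(theta) = 0."""
    return [theta[1], theta[0] - theta[2]]

# A parameter vector that satisfies both restrictions:
print(g([2.0, 0.0, 2.0]))  # -> [0.0, 0.0]

# A parameter vector that violates the first restriction:
print(g([2.0, 0.5, 2.0]))  # -> [0.5, 0.0]
```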

## The likelihood ratio statistic

The likelihood ratio test is based on two different ML estimates of the parameter $\theta_0$.

One estimate, called the unrestricted estimate and denoted by $\widehat{\theta}_n$, is obtained from the solution of the unconstrained maximum likelihood problem $$\widehat{\theta}_n = \arg\max_{\theta \in \Theta} l(\theta; \xi_n)$$ where $\xi_n$ is the sample of observed data, and $l(\theta; \xi_n)$ is the log-likelihood function.

The other estimate, called the restricted estimate and denoted by $\widehat{\theta}_{R,n}$, is obtained from the solution of the constrained maximum likelihood problem $$\widehat{\theta}_{R,n} = \arg\max_{\theta \in \Theta_R} l(\theta; \xi_n)$$ where $$\Theta_R = \{\theta \in \Theta : g(\theta) = 0\}$$ is the set of parameters that satisfy the restriction being tested.

The test statistic, called the likelihood ratio statistic, is $$LR_n = 2\left[l(\widehat{\theta}_n; \xi_n) - l(\widehat{\theta}_{R,n}; \xi_n)\right]$$ where $n$ is the sample size. Equivalently, the statistic is twice the logarithm of the ratio between the unrestricted and the restricted maximized likelihoods, whence its name.
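Computing the statistic only requires the two maximized log-likelihood values. A minimal sketch, with hypothetical numbers:

```python
# A minimal sketch of the likelihood ratio statistic. The numeric values
# below are hypothetical placeholders.

def lr_statistic(loglik_unrestricted, loglik_restricted):
    """LR = 2 * (max log-likelihood without the restriction
                 - max log-likelihood with the restriction)."""
    return 2.0 * (loglik_unrestricted - loglik_restricted)

# The unrestricted maximum is never smaller than the restricted one,
# so the statistic is always non-negative.
print(lr_statistic(-115.1, -116.5))  # -> 2.8 (up to floating-point rounding)
```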

## Assumptions

In order to derive the asymptotic properties of the statistic $LR_n$, we are going to assume that:

• both the restricted and the unrestricted estimators are asymptotically normal and satisfy the set of sufficient conditions for asymptotic normality given in the lecture on maximum likelihood estimation;

• the entries of $g(\theta)$ are continuously differentiable on $\Theta$ with respect to all the entries of $\theta$;

• the $r \times p$ matrix of the partial derivatives of the entries of $g$ with respect to the entries of $\theta$, denoted by $J_g(\theta)$ and called the Jacobian of $g$, has rank $r$.
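The rank condition on the Jacobian can be checked mechanically. The sketch below uses a hypothetical pair of restrictions, $g(\theta) = (\theta_2,\ \theta_1 - \theta_3)$, whose Jacobian is constant; for a $2 \times 3$ matrix, full row rank is equivalent to some $2 \times 2$ minor being non-zero:

```python
# A minimal sketch (hypothetical restriction function g(theta) = (theta_2,
# theta_1 - theta_3)): its Jacobian is constant, so we can write it directly
# and verify that it has full row rank r = 2.

def jacobian_g():
    # rows: one per restriction; columns: one per entry of theta
    return [[0.0, 1.0, 0.0],    # d(theta_2)/d(theta)
            [1.0, 0.0, -1.0]]   # d(theta_1 - theta_3)/d(theta)

def row_rank_2x3(J):
    """Rank of a 2x3 matrix: it is 2 iff some 2x2 minor is non-zero."""
    (a, b, c), (d, e, f) = J
    minors = [a * e - b * d, a * f - c * d, b * f - c * e]
    if any(abs(m) > 1e-12 for m in minors):
        return 2
    return 1 if any(abs(x) > 1e-12 for x in (a, b, c, d, e, f)) else 0

print(row_rank_2x3(jacobian_g()))  # -> 2
```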

## Asymptotic distribution of the test statistic

Given the above assumptions, the following result can be proved.

Proposition If the null hypothesis $H_0: g(\theta_0) = 0$ is true and some technical conditions are satisfied (see above), the likelihood ratio statistic $LR_n$ converges in distribution to a Chi-square distribution with $r$ degrees of freedom.

Proof

By the Mean Value Theorem, the second-order expansion of the log-likelihood around the unrestricted estimate can be written as $$l(\widehat{\theta}_{R,n}; \xi_n) = l(\widehat{\theta}_n; \xi_n) + \nabla_\theta l(\widehat{\theta}_n; \xi_n)^{\top} (\widehat{\theta}_{R,n} - \widehat{\theta}_n) + \frac{1}{2} (\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\overline{\theta}_n) (\widehat{\theta}_{R,n} - \widehat{\theta}_n)$$ where $H(\theta)$ is the Hessian of the log-likelihood (a matrix of second partial derivatives) and $\overline{\theta}_n$ is an intermediate point between $\widehat{\theta}_{R,n}$ and $\widehat{\theta}_n$ (to be precise, there are several intermediate points, one for each row of the Hessian). Because the gradient is zero at an unconstrained maximum, we have that $$\nabla_\theta l(\widehat{\theta}_n; \xi_n) = 0$$ and, as a consequence, $$l(\widehat{\theta}_n; \xi_n) - l(\widehat{\theta}_{R,n}; \xi_n) = -\frac{1}{2} (\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\overline{\theta}_n) (\widehat{\theta}_{R,n} - \widehat{\theta}_n)$$ Thus, the likelihood ratio statistic can be written as $$LR_n = -(\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\overline{\theta}_n) (\widehat{\theta}_{R,n} - \widehat{\theta}_n) = \sqrt{n} (\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} \left[-\frac{1}{n} H(\overline{\theta}_n)\right] \sqrt{n} (\widehat{\theta}_{R,n} - \widehat{\theta}_n)$$

By results that can be found in the proof of convergence of the score test statistic, we have that $$\sqrt{n} (\widehat{\theta}_{R,n} - \widehat{\theta}_n) = \left[\frac{1}{n} H(\widetilde{\theta}_n)\right]^{-1} \frac{1}{\sqrt{n}} \nabla_\theta l(\widehat{\theta}_{R,n}; \xi_n)$$ where $\widetilde{\theta}_n$ is another intermediate point, and that the first-order condition of the constrained problem gives $$\frac{1}{\sqrt{n}} \nabla_\theta l(\widehat{\theta}_{R,n}; \xi_n) = J_g(\widehat{\theta}_{R,n})^{\top} \frac{\widehat{\lambda}_n}{\sqrt{n}}$$ where $J_g$ is the Jacobian of $g$ and $\widehat{\lambda}_n$ is a Lagrange multiplier, whose expression $$\frac{\widehat{\lambda}_n}{\sqrt{n}} = -\left[J_g(\ddot{\theta}_n) \left[\frac{1}{n} H(\widetilde{\theta}_n)\right]^{-1} J_g(\widehat{\theta}_{R,n})^{\top}\right]^{-1} \sqrt{n}\, g(\widehat{\theta}_n)$$ includes a third intermediate point $\ddot{\theta}_n$.

By putting all these things together, we obtain $$\sqrt{n} (\widehat{\theta}_{R,n} - \widehat{\theta}_n) = -A_n B_n \sqrt{n}\, g(\widehat{\theta}_n)$$ where we have defined $$A_n = \left[\frac{1}{n} H(\widetilde{\theta}_n)\right]^{-1} J_g(\widehat{\theta}_{R,n})^{\top}$$ If we also define $$B_n = \left[J_g(\ddot{\theta}_n) A_n\right]^{-1}$$ the test statistic can be written as $$LR_n = \sqrt{n}\, g(\widehat{\theta}_n)^{\top} B_n^{\top} A_n^{\top} \left[-\frac{1}{n} H(\overline{\theta}_n)\right] A_n B_n \sqrt{n}\, g(\widehat{\theta}_n)$$ where we have used the fact that the Hessian is symmetric.

Under the null hypothesis, both $\widehat{\theta}_{R,n}$ and $\widehat{\theta}_n$ converge in probability to $\theta_0$. As a consequence, also $\overline{\theta}_n$, $\widetilde{\theta}_n$ and $\ddot{\theta}_n$ converge in probability to $\theta_0$, because their entries are strictly comprised between the entries of $\widehat{\theta}_{R,n}$ and $\widehat{\theta}_n$. Furthermore, $-\frac{1}{n} H(\overline{\theta}_n)$ and $-\frac{1}{n} H(\widetilde{\theta}_n)$ converge in probability to $V^{-1}$, the inverse of $V$, the asymptotic covariance matrix of $\sqrt{n} (\widehat{\theta}_n - \theta_0)$. Therefore, by the continuous mapping theorem, we have the following results: $$A_n \overset{p}{\rightarrow} -V J_g(\theta_0)^{\top}, \qquad B_n \overset{p}{\rightarrow} -\left[J_g(\theta_0) V J_g(\theta_0)^{\top}\right]^{-1}$$ Thus, we can write the likelihood ratio statistic as a sequence of quadratic forms $$LR_n = z_n^{\top} W_n z_n$$ where $$z_n = \sqrt{n}\, g(\widehat{\theta}_n)$$ and $$W_n = B_n^{\top} A_n^{\top} \left[-\frac{1}{n} H(\overline{\theta}_n)\right] A_n B_n \overset{p}{\rightarrow} \left[J_g(\theta_0) V J_g(\theta_0)^{\top}\right]^{-1}$$ As we have proved in the lecture on the Wald test, such a sequence of quadratic forms converges in distribution to a Chi-square random variable with $r$ degrees of freedom.

Note that the likelihood ratio statistic, unlike the statistics used in the Wald test and in the score test, depends only on the parameter estimates and not on their asymptotic covariance matrices. This can be an advantage if the latter are difficult to estimate.
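The convergence result can also be checked by simulation. The sketch below uses an illustrative model that is not from this lecture (an exponential distribution with the restriction that its rate equals one, so $p = r = 1$ and the MLE is available in closed form): under the null hypothesis, the fraction of simulated samples whose LR statistic exceeds the 95% Chi-square(1) quantile should be close to 5%.

```python
# A Monte Carlo sketch of the proposition above: under the null hypothesis the
# LR statistic is approximately Chi-square. The model (exponential with unit
# rate) and all simulation settings are illustrative choices, not from the text.
import math
import random

def lr_exponential(sample, rate0=1.0):
    """LR statistic for H0: rate = rate0 in an exponential model.
    The MLE of the rate is 1 / sample mean, which gives the closed form
    LR = 2n * (rate0*xbar - ln(rate0*xbar) - 1), always non-negative."""
    n = len(sample)
    xbar = sum(sample) / n
    return 2.0 * n * (rate0 * xbar - math.log(rate0 * xbar) - 1.0)

random.seed(0)
n, reps = 200, 2000
crit = 3.8415  # 95% quantile of a Chi-square with r = 1 degree of freedom
rejections = sum(
    lr_exponential([random.expovariate(1.0) for _ in range(n)]) > crit
    for _ in range(reps)
)
print(rejections / reps)  # close to the nominal size 0.05
```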

## The test

In the likelihood ratio test, the null hypothesis is rejected if $$LR_n > z$$ where $z$ is a pre-specified critical value.

The size of the test can be approximated by its asymptotic value $$\alpha = 1 - F(z)$$ where $F$ is the cumulative distribution function of a Chi-square random variable having $r$ degrees of freedom.

By appropriately choosing $z$, it is possible to achieve a pre-specified size $\alpha$, as follows: $$z = F^{-1}(1 - \alpha)$$ where $F^{-1}$ is the inverse of the Chi-square distribution function (the quantile function).
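For $r = 2$ degrees of freedom, the Chi-square cdf has the closed form $F(z) = 1 - e^{-z/2}$, so the critical value can be computed directly; for other values of $r$ one would invert the cdf numerically (e.g., with `chi2inv` in MATLAB or `scipy.stats.chi2.ppf` in Python). A minimal sketch:

```python
# A minimal sketch of choosing the critical value z = F^{-1}(1 - alpha)
# in the special case of r = 2 degrees of freedom, where the Chi-square
# cdf is F(z) = 1 - exp(-z/2) and the quantile is available in closed form.
import math

def chi2_critical_value_2df(alpha):
    """Critical value for size alpha with a Chi-square(2) statistic:
    solve 1 - exp(-z/2) = 1 - alpha for z, i.e. z = -2 ln(alpha)."""
    return -2.0 * math.log(alpha)

print(round(chi2_critical_value_2df(0.10), 4))  # -> 4.6052
```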

## Example

This example illustrates how the likelihood ratio statistic can be used.

Let $\Theta = \mathbb{R}^3$, that is, the parameter space is the set of all $3$-dimensional real vectors.

Denote the three entries of the true parameter $\theta_0$ by $\theta_{0,1}$, $\theta_{0,2}$ and $\theta_{0,3}$.

The restrictions to be tested are $$\theta_{0,2} = 0 \quad \text{and} \quad \theta_{0,3} = 0$$ so that $g$ is a function $g: \mathbb{R}^3 \rightarrow \mathbb{R}^2$ defined by $$g(\theta) = \begin{bmatrix} \theta_2 \\ \theta_3 \end{bmatrix}$$

We have that $r = 2$ and the Jacobian of $g$ is $$J_g(\theta) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

It has rank $r = 2$ because its two rows are linearly independent.

Suppose that we have obtained the constrained estimate $\widehat{\theta}_{R,n}$ and the unconstrained one $\widehat{\theta}_n$, and that the values of the log-likelihoods corresponding to the two estimates are $$l(\widehat{\theta}_{R,n}; \xi_n) = -116.5 \quad \text{and} \quad l(\widehat{\theta}_n; \xi_n) = -115.1$$

These two values are used to compute the value of the test statistic: $$LR_n = 2\left[-115.1 - (-116.5)\right] = 2.8$$

According to the rank calculations above, the statistic is asymptotically distributed as a Chi-square random variable with $r = 2$ degrees of freedom.

Let us fix the size of the test at $\alpha = 10\%$.

Then, the critical value is $$z = F^{-1}(1 - 0.10) = F^{-1}(0.90) = 4.6052$$ where $F$ is the distribution function of a Chi-square random variable with $2$ degrees of freedom, and $F^{-1}(0.90)$ can be calculated with any statistical software (e.g., in MATLAB, with the command `chi2inv(0.90,2)`).

Thus, the test statistic is below the critical value: $$LR_n = 2.8 < 4.6052 = z$$

As a consequence, the null hypothesis cannot be rejected.
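The example's workflow can be assembled in a few lines of code; the two log-likelihood values below are hypothetical placeholders standing in for the constrained and unconstrained maxima:

```python
# The example's workflow in code. The log-likelihood values are hypothetical
# placeholders for the constrained and unconstrained maxima.
import math

loglik_restricted = -116.5    # log-likelihood at the constrained estimate
loglik_unrestricted = -115.1  # log-likelihood at the unconstrained estimate

lr = 2.0 * (loglik_unrestricted - loglik_restricted)
crit = -2.0 * math.log(0.10)  # F^{-1}(0.90) for a Chi-square(2): 4.6052

print(round(lr, 2), round(crit, 4))
print("reject H0" if lr > crit else "do not reject H0")  # -> do not reject H0
```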