 StatLect

# Wald test

The Wald test is a test of hypothesis usually performed on parameters that have been estimated by maximum likelihood (ML).

## The null hypothesis

We assume that an unknown $K$-dimensional parameter vector $\theta_0$ has been estimated by ML.

We want to test the null hypothesis that $r$ equations (possibly nonlinear) are satisfied: $$g(\theta_0)=0,$$ where $g(\theta)$ is a vector-valued function $g:\Theta\rightarrow\mathbb{R}^{r}$, with $r\le K$.

The lecture on hypothesis testing in the maximum likelihood framework explains that the most common null hypotheses can be written in this form.

## Assumptions

Let the parameter space be $\Theta\subseteq\mathbb{R}^{K}$.

We assume that the following technical conditions are satisfied:

• for each $\theta\in\Theta$, the entries of $g(\theta)$ are continuously differentiable with respect to all the entries of $\theta$;

• the $r\times K$ matrix of the partial derivatives of the entries of $g$ with respect to the entries of $\theta$, called the Jacobian of $g$ and denoted by $J_g(\theta)$, has rank $r$.

## The ML estimator

Let $\hat{\theta}_n$ be the estimate of the parameter $\theta_0$ obtained by maximizing the log-likelihood over the whole parameter space $\Theta$: $$\hat{\theta}_n=\operatorname*{arg\,max}_{\theta\in\Theta}\;\ln L(\theta;\xi_n),$$ where $L(\theta;\xi_n)$ is the likelihood function and $\xi_n$ is the sample.

We assume that the sample and the likelihood function satisfy some set of conditions that are sufficient to guarantee the consistency and asymptotic normality of $\hat{\theta}_n$ (see the lecture on maximum likelihood for a set of such conditions).
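For concreteness, an ML estimate of this kind can be computed numerically. The sketch below is not part of the lecture: it maximizes the log-likelihood of a normal sample with SciPy, and the data-generating values, sample size, and starting point are arbitrary illustrations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(loc=2.0, scale=1.5, size=500)  # illustrative sample

def neg_log_likelihood(theta, data):
    """Negative normal log-likelihood; theta = (mu, log_sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # enforce sigma > 0
    n = data.size
    return (0.5 * n * np.log(2 * np.pi) + n * log_sigma
            + 0.5 * np.sum((data - mu) ** 2) / sigma ** 2)

# Maximize the log-likelihood by minimizing its negative.
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(xi,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Parametrizing the standard deviation through its logarithm keeps the optimization unconstrained; for the normal model the resulting estimates coincide with the sample mean and the (uncorrected) sample standard deviation.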

## The Wald statistic

Here is the formula for the test statistic used in the Wald test: $$W_n=n\,g(\hat{\theta}_n)^{\top}\left[J_g(\hat{\theta}_n)\,\hat{V}_n\,J_g(\hat{\theta}_n)^{\top}\right]^{-1}g(\hat{\theta}_n),$$ where $n$ is the sample size and $\hat{V}_n$ is a consistent estimate of the asymptotic covariance matrix $V$ of $\hat{\theta}_n$ (see Covariance matrix of the MLE).
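The formula translates directly into code. This is a minimal sketch; the function name `wald_statistic` and the illustrative inputs at the bottom (a linear restriction $\theta_1-\theta_2=0$ with made-up estimates) are chosen here for exposition only.

```python
import numpy as np

def wald_statistic(theta_hat, V_hat, g, J_g, n):
    """Wald statistic  W_n = n * g' [J V J']^{-1} g  evaluated at theta_hat.

    theta_hat : (K,) ML estimate
    V_hat     : (K, K) consistent estimate of the asymptotic covariance of theta_hat
    g         : function R^K -> R^r encoding the restrictions g(theta) = 0
    J_g       : function returning the (r, K) Jacobian of g
    n         : sample size
    """
    gv = np.atleast_1d(g(theta_hat))
    J = np.atleast_2d(J_g(theta_hat))
    middle = J @ V_hat @ J.T                     # J V J'
    return float(n * gv @ np.linalg.solve(middle, gv))

# Illustrative use: restriction theta_1 - theta_2 = 0
theta_hat = np.array([1.1, 1.0])
V_hat = np.array([[0.5, 0.1], [0.1, 0.5]])
W = wald_statistic(theta_hat, V_hat,
                   lambda t: t[0] - t[1],
                   lambda t: np.array([[1.0, -1.0]]),
                   n=100)
```

Solving the linear system instead of explicitly inverting $J\hat{V}_nJ^{\top}$ is the standard numerically safer choice.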

## The asymptotic distribution of the test statistic

Asymptotically, the test statistic has a Chi-square distribution.

Proposition Under the null hypothesis that $g(\theta_0)=0$, the Wald statistic $W_n$ converges in distribution to a Chi-square distribution with $r$ degrees of freedom: $$W_n\xrightarrow{d}\chi^2_r.$$

Proof

We have assumed that $\hat{\theta}_n$ is consistent and asymptotically normal, which implies that $\sqrt{n}\left(\hat{\theta}_n-\theta_0\right)$ converges in distribution to a multivariate normal random variable with mean $0$ and asymptotic covariance matrix $V$, that is, $$\sqrt{n}\left(\hat{\theta}_n-\theta_0\right)\xrightarrow{d}N(0,V).$$ Now, by the delta method, we have that $$\sqrt{n}\left(g(\hat{\theta}_n)-g(\theta_0)\right)\xrightarrow{d}N\!\left(0,\;J_g(\theta_0)\,V\,J_g(\theta_0)^{\top}\right).$$ But $g(\theta_0)=0$ under the null, so that $$\sqrt{n}\,g(\hat{\theta}_n)\xrightarrow{d}N\!\left(0,\;J_g(\theta_0)\,V\,J_g(\theta_0)^{\top}\right).$$ We have assumed that $\hat{V}_n$ and $\hat{\theta}_n$ are consistent estimators, that is, $$\hat{V}_n\xrightarrow{p}V,\qquad\hat{\theta}_n\xrightarrow{p}\theta_0,$$ where $\xrightarrow{p}$ denotes convergence in probability. Therefore, by the continuous mapping theorem, we have that $$J_g(\hat{\theta}_n)\,\hat{V}_n\,J_g(\hat{\theta}_n)^{\top}\xrightarrow{p}J_g(\theta_0)\,V\,J_g(\theta_0)^{\top}.$$ Thus we can write the Wald statistic as a sequence of quadratic forms $$W_n=z_n^{\top}\Sigma_n^{-1}z_n,\qquad z_n=\sqrt{n}\,g(\hat{\theta}_n),\qquad\Sigma_n=J_g(\hat{\theta}_n)\,\hat{V}_n\,J_g(\hat{\theta}_n)^{\top},$$ where $z_n$ converges in distribution to a normal random vector with mean zero and covariance matrix $\Sigma=J_g(\theta_0)\,V\,J_g(\theta_0)^{\top}$, and $\Sigma_n$ converges in probability to $\Sigma$. By a standard result (see Exercise 2 in the lecture on Slutsky's theorem), such a sequence of quadratic forms converges in distribution to a Chi-square random variable with a number of degrees of freedom equal to $r$, the dimension of $z_n$.
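The limiting behaviour can be sanity-checked by simulation: if a vector $z$ is exactly normal with mean zero and covariance $\Sigma$, the quadratic form $z^{\top}\Sigma^{-1}z$ is exactly Chi-square with $r$ degrees of freedom, so its sample mean and variance should be close to $r$ and $2r$. The covariance matrix below is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
Sigma = np.array([[0.8, 0.2], [0.2, 0.5]])  # illustrative r x r covariance, r = 2
r = Sigma.shape[0]

# Draw many z ~ N(0, Sigma) and form the quadratic forms z' Sigma^{-1} z.
z = rng.multivariate_normal(np.zeros(r), Sigma, size=200_000)
Sigma_inv = np.linalg.inv(Sigma)
q = np.einsum("ij,jk,ik->i", z, Sigma_inv, z)

# A Chi-square with r degrees of freedom has mean r and variance 2r.
print(q.mean(), q.var())
```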

## The test

In the Wald test, the null hypothesis is rejected if $$W_n>z,$$ where $z$ is a pre-determined critical value.

The size of the test can be approximated by its asymptotic value $$P(W_n>z)\approx 1-F(z),$$ where $F$ is the distribution function of a Chi-square random variable with $r$ degrees of freedom.
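The distribution function $F$ and its inverse are available in any statistical package; for instance, with SciPy (the size of 5% and the single restriction below are illustrative choices):

```python
from scipy.stats import chi2

alpha = 0.05  # illustrative target size
r = 1         # illustrative number of restrictions

z = chi2.ppf(1 - alpha, df=r)   # critical value F^{-1}(1 - alpha)
size = 1 - chi2.cdf(z, df=r)    # asymptotic size P(W_n > z)
print(round(z, 4), round(size, 4))
```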

The critical value $z$ is chosen so as to achieve a pre-determined size $\alpha$, as follows: $$z=F^{-1}(1-\alpha).$$

## Example

This example shows how to use the Wald test to test a simple linear restriction.

Let the parameter space be the set of all $2$-dimensional vectors: $$\Theta=\mathbb{R}^{2}.$$

### The estimates

Suppose that we have obtained the following estimates of the parameter and of the asymptotic covariance matrix (the numerical values here are purely illustrative): $$\hat{\theta}_n=\begin{bmatrix}1.1\\ 1.0\end{bmatrix},\qquad\hat{V}_n=\begin{bmatrix}0.5&0.1\\ 0.1&0.5\end{bmatrix},$$ where the sample size is $n=100$.

### The null hypothesis

We want to test the restriction $$\theta_1=\theta_2,$$ where $\theta_1$ and $\theta_2$ denote the first and second entries of $\theta$.

Then, the function $g$ is a function $g:\mathbb{R}^{2}\rightarrow\mathbb{R}$ defined by $$g(\theta)=\theta_1-\theta_2.$$ In this case, $r=1$.

### The Jacobian

The Jacobian of $g$ is $$J_g(\theta)=\begin{bmatrix}1&-1\end{bmatrix},$$ which has rank $r=1$.

Note also that it does not depend on $\theta$.

### The test statistic

We have $$g(\hat{\theta}_n)=1.1-1.0=0.1,\qquad J_g(\hat{\theta}_n)\,\hat{V}_n\,J_g(\hat{\theta}_n)^{\top}=0.5-0.1-0.1+0.5=0.8.$$ We can substitute these values into the formula for the Wald statistic: $$W_n=n\,g(\hat{\theta}_n)^{\top}\left[J_g(\hat{\theta}_n)\,\hat{V}_n\,J_g(\hat{\theta}_n)^{\top}\right]^{-1}g(\hat{\theta}_n)=100\cdot\frac{0.1^{2}}{0.8}=1.25.$$ Our test statistic has a Chi-square distribution with one degree of freedom ($r=1$).

### The critical value

Suppose that we want our test to have size $\alpha=5\%$.

Then, our critical value is $$z=F^{-1}(1-\alpha)=F^{-1}(0.95)\approx 3.84,$$ where $F$ is the distribution function of a Chi-square random variable with $1$ degree of freedom.

The value of $z$ can be calculated with any statistical software (for example, in MATLAB with the command `chi2inv(0.95,1)`).

### The decision

The test statistic does not exceed the critical value ($1.25\le 3.84$), so we do not reject the null hypothesis.
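A worked example of this kind is easy to reproduce in code. In the sketch below the estimates, covariance matrix, and sample size are illustrative stand-ins (estimates of $(1.1, 1.0)$, a symmetric covariance estimate, $n=100$), and the restriction tested is the linear one $\theta_1-\theta_2=0$.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative inputs
n = 100
theta_hat = np.array([1.1, 1.0])
V_hat = np.array([[0.5, 0.1], [0.1, 0.5]])

g = theta_hat[0] - theta_hat[1]     # restriction g(theta) = theta_1 - theta_2
J = np.array([[1.0, -1.0]])         # Jacobian of g (constant)

middle = (J @ V_hat @ J.T).item()   # J V J' (a scalar, since r = 1)
W = n * g**2 / middle               # Wald statistic
z = chi2.ppf(0.95, df=1)            # critical value for a 5% test

reject = W > z
print(W, round(z, 3), reject)
```

With these inputs the statistic stays below the critical value, so the restriction is not rejected at the 5% level.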