Before reading this lecture, the reader is strongly advised to read the lecture entitled Maximum likelihood - Hypothesis testing, which introduces the basics of hypothesis testing in a maximum likelihood (ML) framework.
In what follows, we are going to assume that an unknown parameter $\theta_0$ has been estimated by ML, that it belongs to a parameter space $\Theta \subseteq \mathbb{R}^p$, and that we want to test the null hypothesis
$$H_0: g(\theta_0) = 0$$
where $g: \Theta \rightarrow \mathbb{R}^r$ is a vector-valued function, with $r \leq p$. The aforementioned lecture on hypothesis testing gives some examples of common null hypotheses that can be written in this form.
We are going to assume that the following technical conditions are satisfied:
for each $\theta \in \Theta$, the entries of $g(\theta)$ are continuously differentiable with respect to all entries of $\theta$;
the $r \times p$ matrix of the partial derivatives of the entries of $g(\theta)$ with respect to the entries of $\theta$, called the Jacobian of $g$ and denoted by $\nabla g(\theta)$, has rank $r$.
Let $\widehat{\theta}_n$ be the estimate of the parameter $\theta_0$ obtained by maximizing the log-likelihood over the whole parameter space $\Theta$:
$$\widehat{\theta}_n = \operatorname*{argmax}_{\theta \in \Theta} \ln L(\theta; \xi_n)$$
where $L(\theta; \xi_n)$ is the likelihood function and $\xi_n$ is the sample.
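For concreteness, here is a toy instance of this definition (our own illustration, not from the lecture): for an i.i.d. Bernoulli($\theta$) sample, the log-likelihood is $\sum_i \left[ x_i \ln\theta + (1 - x_i)\ln(1 - \theta) \right]$, and its maximizer over $\Theta = (0, 1)$ is the sample mean. A crude grid search recovers this:

```python
import math

def log_likelihood(theta, xs):
    """Bernoulli log-likelihood ln L(theta; xs) for an i.i.d. 0/1 sample."""
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta) for x in xs)

xs = [1, 0, 1, 1, 0, 1, 1, 0]                 # toy sample, mean = 0.625
grid = [i / 1000 for i in range(1, 1000)]     # crude search over Theta = (0, 1)
theta_ml = max(grid, key=lambda t: log_likelihood(t, xs))
print(theta_ml)                               # 0.625, the sample mean
```

In practice the maximization is done analytically or with a numerical optimizer; the grid search is only meant to make the definition tangible.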
Let us assume that the sample and the likelihood function satisfy some set of conditions that are sufficient to guarantee consistency and asymptotic normality of $\widehat{\theta}_n$ (see the lecture on maximum likelihood for a set of such conditions).
The test statistic employed in the Wald test is
$$W_n = n\, g(\widehat{\theta}_n)' \left[ \nabla g(\widehat{\theta}_n)\, \widehat{V}_n\, \nabla g(\widehat{\theta}_n)' \right]^{-1} g(\widehat{\theta}_n)$$
where $n$ is the sample size and $\widehat{V}_n$ is a consistent estimate of the asymptotic covariance matrix $V$ of $\widehat{\theta}_n$ (see the lecture entitled Maximum likelihood - Covariance matrix estimation).
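As a numerical sketch, the statistic can be assembled directly from its ingredients; the helper below (the function name and array shapes are our own choices, and NumPy is assumed) takes the value of $g$ at the estimate, its Jacobian there, the covariance estimate, and the sample size:

```python
import numpy as np

def wald_statistic(g_val, jac, v_hat, n):
    """Wald statistic  W_n = n * g' [J V J']^{-1} g.

    g_val : (r,) array, value of the restriction g at the ML estimate
    jac   : (r, p) array, Jacobian of g at the ML estimate
    v_hat : (p, p) array, consistent estimate of the asymptotic covariance
    n     : sample size
    """
    middle = jac @ v_hat @ jac.T                      # the r x r matrix J V J'
    return float(n * g_val @ np.linalg.solve(middle, g_val))
```

With a single linear restriction ($r = 1$), `middle` is a $1 \times 1$ matrix and the statistic reduces to $n\, g(\widehat{\theta}_n)^2 / \left( \nabla g\, \widehat{V}_n\, \nabla g' \right)$.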
Asymptotically, the test statistic has a Chi-square distribution, as stated by the following proposition.

Proposition Provided the null hypothesis is true, the Wald statistic $W_n$ converges in distribution to a Chi-square distribution with $r$ degrees of freedom.
Proof. We have assumed that $\widehat{\theta}_n$ is consistent and asymptotically normal, which implies that $\sqrt{n}\left( \widehat{\theta}_n - \theta_0 \right)$ converges in distribution to a multivariate normal random variable with mean $0$ and asymptotic covariance matrix $V$, that is,
$$\sqrt{n}\left( \widehat{\theta}_n - \theta_0 \right) \xrightarrow{d} N(0, V)$$
Now, by the delta method, we have that
$$\sqrt{n}\left( g(\widehat{\theta}_n) - g(\theta_0) \right) \xrightarrow{d} N\left( 0, \nabla g(\theta_0)\, V\, \nabla g(\theta_0)' \right)$$
But $g(\theta_0) = 0$ under the null hypothesis, so that
$$\sqrt{n}\, g(\widehat{\theta}_n) \xrightarrow{d} N\left( 0, \nabla g(\theta_0)\, V\, \nabla g(\theta_0)' \right)$$
We have assumed that $\widehat{\theta}_n$ and $\widehat{V}_n$ are consistent estimators, that is,
$$\widehat{\theta}_n \xrightarrow{p} \theta_0, \qquad \widehat{V}_n \xrightarrow{p} V$$
where $\xrightarrow{p}$ denotes convergence in probability. Therefore, by the continuous mapping theorem, we have that
$$\nabla g(\widehat{\theta}_n)\, \widehat{V}_n\, \nabla g(\widehat{\theta}_n)' \xrightarrow{p} \nabla g(\theta_0)\, V\, \nabla g(\theta_0)'$$
Thus, we can write the Wald statistic as a sequence of quadratic forms
$$W_n = z_n' \Sigma_n^{-1} z_n$$
where
$$z_n = \sqrt{n}\, g(\widehat{\theta}_n)$$
converges in distribution to a normal random vector with mean zero and covariance matrix $\Sigma = \nabla g(\theta_0)\, V\, \nabla g(\theta_0)'$, and
$$\Sigma_n = \nabla g(\widehat{\theta}_n)\, \widehat{V}_n\, \nabla g(\widehat{\theta}_n)'$$
converges in probability to $\Sigma$. By a standard result (see Exercise 2 in the lecture on Slutsky's theorem), such a sequence of quadratic forms converges in distribution to a Chi-square random variable with a number of degrees of freedom equal to the dimension $r$ of $z_n$.
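The asymptotic result can be sanity-checked by simulation. The sketch below uses a toy setup of our own choosing: the data are $N(\mu, 1)$, the ML estimator of $\mu$ is the sample mean with asymptotic variance $V = 1$, and the restriction is $g(\mu) = \mu$ with Jacobian $1$, so the Wald statistic reduces to $W_n = n \widehat{\mu}_n^2$. Under the null $\mu_0 = 0$, it should exceed the $95\%$ Chi-square quantile about $5\%$ of the time:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000

# Toy model (an assumption of this sketch): X ~ N(mu, 1). The ML estimator of
# mu is the sample mean, its asymptotic variance is V = 1, and the restriction
# g(mu) = mu has Jacobian 1, so the Wald statistic reduces to W_n = n * mu_hat^2.
samples = rng.normal(0.0, 1.0, size=(reps, n))   # data generated under H0: mu_0 = 0
w = n * samples.mean(axis=1) ** 2                # one Wald statistic per replication
rejection_rate = float((w > 3.841).mean())       # 3.841 ~ 95% quantile of chi2(1)
print(rejection_rate)                            # should be close to 0.05
```

In this particular model the convergence is exact for every $n$, since $\sqrt{n}\,\widehat{\mu}_n$ is standard normal, so the empirical rejection rate matches the nominal size up to simulation noise.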
In the Wald test, the null hypothesis is rejected if
$$W_n > z$$
where $z$ is a pre-determined critical value.
The size of the test can be approximated by its asymptotic value
$$P(W_n > z) \approx 1 - F(z; r)$$
where $F(\cdot\,; r)$ is the distribution function of a Chi-square random variable with $r$ degrees of freedom.
The critical value $z$ can be chosen so as to achieve a pre-determined size $\alpha$, as follows:
$$z = F^{-1}(1 - \alpha; r)$$
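When $r = 1$, this quantile can be obtained even without a statistics package, because a Chi-square variable with one degree of freedom is the square of a standard normal (for general $r$, a routine such as scipy.stats.chi2.ppf(1 - alpha, r) serves the same purpose). A minimal sketch under that assumption:

```python
from statistics import NormalDist

# With r = 1 degree of freedom, a Chi-square variable is the square of a
# standard normal, so its (1 - alpha)-quantile is Phi^{-1}(1 - alpha/2) squared.
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2) ** 2
print(round(z, 3))   # 3.841 -- matching MATLAB's chi2inv(0.95, 1)
```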
The following example shows how to use the Wald test to test a simple linear restriction.
Let the parameter space be the set of all $2$-dimensional vectors, that is, $\Theta = \mathbb{R}^2$. Suppose we have obtained the following estimates of the parameter and of its asymptotic covariance matrix:
$$\widehat{\theta}_n = \begin{bmatrix} 0.6 \\ 0.7 \end{bmatrix}, \qquad \widehat{V}_n = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
where $n = 100$ is the sample size. We want to test the restriction
$$\theta_1 + \theta_2 = 1$$
where $\theta_1$ and $\theta_2$ denote the first and second component of $\theta$. Then, the function
$$g(\theta) = \theta_1 + \theta_2 - 1$$
is a function from $\mathbb{R}^2$ to $\mathbb{R}$, so that $r = 1$. The Jacobian of $g$ is
$$\nabla g(\theta) = \begin{bmatrix} 1 & 1 \end{bmatrix}$$
which has rank $1$. Note also that it does not depend on $\theta$. As a consequence, the Wald statistic is
$$W_n = n\, g(\widehat{\theta}_n)' \left[ \nabla g\, \widehat{V}_n\, \nabla g' \right]^{-1} g(\widehat{\theta}_n) = 100 \cdot 0.3 \cdot \frac{1}{6} \cdot 0.3 = 1.5$$
Asymptotically, the test statistic has a Chi-square distribution with $r = 1$ degrees of freedom. Suppose we want our test to have size $\alpha = 5\%$. Then, our critical value is
$$z = F^{-1}(1 - \alpha; 1) = F^{-1}(0.95; 1) \approx 3.841$$
where $F(\cdot\,; 1)$ is the distribution function of a Chi-square random variable with $1$ degree of freedom, and the value of $F^{-1}(0.95; 1)$ can be calculated with any statistical software (for example, in MATLAB, with the command chi2inv(0.95,1)). Since the test statistic does not exceed the critical value ($1.5 < 3.841$), we do not reject the null hypothesis.
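The arithmetic of such an example is easy to replicate in code. The sketch below assumes illustrative values of our own, $\widehat{\theta}_n = (0.6, 0.7)$, $\widehat{V}_n = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, $n = 100$, together with the linear restriction $\theta_1 + \theta_2 = 1$:

```python
import numpy as np
from statistics import NormalDist

# Illustrative inputs (assumptions of this sketch): ML estimate, covariance
# estimate, sample size, and the linear restriction theta_1 + theta_2 = 1.
theta_hat = np.array([0.6, 0.7])
v_hat = np.array([[2.0, 1.0], [1.0, 2.0]])
n = 100

g = np.array([theta_hat[0] + theta_hat[1] - 1.0])   # g evaluated at the estimate
jac = np.array([[1.0, 1.0]])                        # Jacobian of g (constant in theta)
w = float(n * g @ np.linalg.solve(jac @ v_hat @ jac.T, g))

z = NormalDist().inv_cdf(0.975) ** 2                # chi2(1) 95% quantile, ~3.841
print(w, w > z)                                     # Wald statistic and the decision
```

With these inputs the statistic is $1.5$, below the critical value $3.841$, so the null hypothesis is not rejected, in line with the conclusion above.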
Please cite as:
Taboga, Marco (2021). "Wald test", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/Wald-test.