
Normal distribution - Maximum Likelihood Estimation

This lecture deals with maximum likelihood estimation of the parameters of the normal distribution. Before reading this lecture, you might want to revise the lecture entitled Maximum likelihood, which presents the basics of maximum likelihood estimation.

Assumptions

Our sample is made up of the first $n$ terms of an IID sequence $\{X_1, X_2, \ldots, X_n\}$ of normal random variables having mean $\mu_0$ and variance $\sigma_0^2$. The probability density function of a generic term of the sequence is
$$f_X(x_j) = \frac{1}{\sqrt{2\pi\sigma_0^2}} \exp\left(-\frac{(x_j - \mu_0)^2}{2\sigma_0^2}\right)$$

The mean $\mu_0$ and the variance $\sigma_0^2$ are the two parameters that need to be estimated.

The regularity conditions needed for the consistency and asymptotic normality of maximum likelihood estimators are assumed to be satisfied.
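Before moving on, it may help to fix ideas with a small simulation. The following Python sketch draws such an IID sample and checks the closed-form density above against scipy's implementation; the values of $\mu_0$, $\sigma_0^2$ and $n$ are arbitrary illustrative choices, not part of the lecture.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma02, n = 1.5, 4.0, 100          # illustrative true parameters
x = rng.normal(mu0, np.sqrt(sigma02), size=n)

# Closed-form density of each observation, as in the formula above
pdf_closed_form = np.exp(-(x - mu0) ** 2 / (2 * sigma02)) / np.sqrt(2 * np.pi * sigma02)

# Agreement with scipy's normal density
print(np.allclose(pdf_closed_form, norm.pdf(x, loc=mu0, scale=np.sqrt(sigma02))))  # True
```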

The likelihood function

The likelihood function is
$$L(\mu, \sigma^2; x_1, \ldots, x_n) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{j=1}^{n} (x_j - \mu)^2\right)$$

Proof

Given the assumption that the observations from the sample are IID, the likelihood function can be written as the product of the individual densities:
$$L(\mu, \sigma^2; x_1, \ldots, x_n) = \prod_{j=1}^{n} f_X(x_j; \mu, \sigma^2) = \prod_{j=1}^{n} (2\pi\sigma^2)^{-1/2} \exp\left(-\frac{(x_j - \mu)^2}{2\sigma^2}\right) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{j=1}^{n} (x_j - \mu)^2\right)$$
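As a quick numerical sanity check of this factorization, the following Python sketch (with arbitrary illustrative parameter values) verifies that the product of the individual densities equals the closed-form joint likelihood.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma2, n = 1.5, 4.0, 10             # illustrative parameter values
x = rng.normal(mu, np.sqrt(sigma2), size=n)

# Product of the n individual densities
product_form = np.prod(norm.pdf(x, loc=mu, scale=np.sqrt(sigma2)))

# Closed-form joint likelihood derived above
closed_form = (2 * np.pi * sigma2) ** (-n / 2) * np.exp(
    -np.sum((x - mu) ** 2) / (2 * sigma2))

print(np.isclose(product_form, closed_form))  # True
```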

The log-likelihood function

The log-likelihood function is
$$\ell(\mu, \sigma^2; x_1, \ldots, x_n) = -\frac{n}{2} \ln(2\pi) - \frac{n}{2} \ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{j=1}^{n} (x_j - \mu)^2$$

Proof

By taking the natural logarithm of the likelihood function, we get
$$\ell(\mu, \sigma^2; x_1, \ldots, x_n) = \ln L(\mu, \sigma^2; x_1, \ldots, x_n) = -\frac{n}{2} \ln(2\pi) - \frac{n}{2} \ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{j=1}^{n} (x_j - \mu)^2$$
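The closed form can again be checked numerically. The following Python sketch (sample and evaluation point are arbitrary illustrative choices) compares the formula above with the sum of log-densities computed by scipy.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(1.5, 2.0, size=20)        # illustrative sample
mu, sigma2 = 1.0, 3.0                    # arbitrary evaluation point

def log_likelihood(mu, sigma2, x):
    """Closed-form normal log-likelihood derived above."""
    n = x.size
    return (-n / 2 * np.log(2 * np.pi)
            - n / 2 * np.log(sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2))

# Agreement with the sum of scipy's log-densities
print(np.isclose(log_likelihood(mu, sigma2, x),
                 norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)).sum()))  # True
```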

The maximum likelihood estimators

The maximum likelihood estimators of the mean and the variance are
$$\widehat{\mu} = \frac{1}{n} \sum_{j=1}^{n} x_j \qquad \text{and} \qquad \widehat{\sigma}^2 = \frac{1}{n} \sum_{j=1}^{n} (x_j - \widehat{\mu})^2$$

Proof

We need to solve the following maximization problem:
$$\max_{\mu, \sigma^2} \ell(\mu, \sigma^2; x_1, \ldots, x_n)$$
The first-order conditions for a maximum are
$$\frac{\partial \ell}{\partial \mu} = 0 \qquad \text{and} \qquad \frac{\partial \ell}{\partial \sigma^2} = 0$$
The partial derivative of the log-likelihood with respect to the mean is
$$\frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2} \sum_{j=1}^{n} (x_j - \mu)$$
which is equal to zero only if
$$\sum_{j=1}^{n} (x_j - \mu) = 0$$
Therefore, the first of the two first-order conditions implies
$$\widehat{\mu} = \frac{1}{n} \sum_{j=1}^{n} x_j$$
The partial derivative of the log-likelihood with respect to the variance is
$$\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{j=1}^{n} (x_j - \mu)^2$$
which, if we rule out $\sigma^2 = 0$, is equal to zero only if
$$\sigma^2 = \frac{1}{n} \sum_{j=1}^{n} (x_j - \mu)^2$$
Thus, the system of first-order conditions is solved by
$$\widehat{\mu} = \frac{1}{n} \sum_{j=1}^{n} x_j \qquad \text{and} \qquad \widehat{\sigma}^2 = \frac{1}{n} \sum_{j=1}^{n} (x_j - \widehat{\mu})^2$$

Thus, the estimator $\widehat{\mu}$ is equal to the sample mean and the estimator $\widehat{\sigma}^2$ is equal to the unadjusted sample variance.
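The following Python sketch confirms this result numerically: a direct numerical maximization of the log-likelihood (here via scipy's general-purpose minimizer, applied to the negative log-likelihood) recovers the sample mean and the unadjusted sample variance. The sample and the starting values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.normal(1.5, 2.0, size=500)       # illustrative sample

def neg_log_likelihood(theta, x):
    """Negative normal log-likelihood, to be minimized numerically."""
    mu, sigma2 = theta
    n = x.size
    return (n / 2 * np.log(2 * np.pi) + n / 2 * np.log(sigma2)
            + np.sum((x - mu) ** 2) / (2 * sigma2))

res = minimize(neg_log_likelihood, x0=np.array([0.0, 1.0]), args=(x,),
               bounds=[(None, None), (1e-6, None)])

print(res.x)                              # numerical maximizer of the log-likelihood
print(x.mean(), x.var(ddof=0))            # closed-form MLEs: sample mean, unadjusted variance
```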

Asymptotic variance

The vector
$$\begin{bmatrix} \widehat{\mu} \\ \widehat{\sigma}^2 \end{bmatrix}$$
is asymptotically normal with asymptotic mean equal to
$$\begin{bmatrix} \mu_0 \\ \sigma_0^2 \end{bmatrix}$$
and asymptotic covariance matrix equal to
$$V = \begin{bmatrix} \sigma_0^2 & 0 \\ 0 & 2\sigma_0^4 \end{bmatrix}$$

Proof

The first entry of the score vector
$$s(\mu, \sigma^2) = \begin{bmatrix} \dfrac{\partial \ell}{\partial \mu} \\ \dfrac{\partial \ell}{\partial \sigma^2} \end{bmatrix}$$
is
$$\frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2} \sum_{j=1}^{n} (x_j - \mu)$$
The second entry of the score vector is
$$\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{j=1}^{n} (x_j - \mu)^2$$
In order to compute the Hessian
$$H = \begin{bmatrix} \dfrac{\partial^2 \ell}{\partial \mu^2} & \dfrac{\partial^2 \ell}{\partial \mu \, \partial \sigma^2} \\ \dfrac{\partial^2 \ell}{\partial \sigma^2 \, \partial \mu} & \dfrac{\partial^2 \ell}{\partial (\sigma^2)^2} \end{bmatrix}$$
we need to compute all second-order partial derivatives. We have
$$\frac{\partial^2 \ell}{\partial \mu^2} = -\frac{n}{\sigma^2}$$
and
$$\frac{\partial^2 \ell}{\partial (\sigma^2)^2} = \frac{n}{2\sigma^4} - \frac{1}{\sigma^6} \sum_{j=1}^{n} (x_j - \mu)^2$$
Finally,
$$\frac{\partial^2 \ell}{\partial \sigma^2 \, \partial \mu} = -\frac{1}{\sigma^4} \sum_{j=1}^{n} (x_j - \mu)$$
which, as you might want to check, is also equal to the other cross-partial derivative $\dfrac{\partial^2 \ell}{\partial \mu \, \partial \sigma^2}$. Therefore, the Hessian is
$$H = \begin{bmatrix} -\dfrac{n}{\sigma^2} & -\dfrac{1}{\sigma^4} \sum_{j=1}^{n} (x_j - \mu) \\ -\dfrac{1}{\sigma^4} \sum_{j=1}^{n} (x_j - \mu) & \dfrac{n}{2\sigma^4} - \dfrac{1}{\sigma^6} \sum_{j=1}^{n} (x_j - \mu)^2 \end{bmatrix}$$
By the information equality, the information matrix is the negative of the expected value of the Hessian evaluated at the true parameters. Since $\mathbb{E}\left[\sum_{j=1}^{n} (X_j - \mu_0)\right] = 0$ and $\mathbb{E}\left[\sum_{j=1}^{n} (X_j - \mu_0)^2\right] = n\sigma_0^2$, we have that
$$I(\mu_0, \sigma_0^2) = -\mathbb{E}[H] = \begin{bmatrix} \dfrac{n}{\sigma_0^2} & 0 \\ 0 & \dfrac{n}{2\sigma_0^4} \end{bmatrix}$$
As a consequence, the asymptotic covariance matrix is
$$V = \left(\lim_{n \to \infty} \frac{1}{n} I(\mu_0, \sigma_0^2)\right)^{-1} = \begin{bmatrix} \sigma_0^2 & 0 \\ 0 & 2\sigma_0^4 \end{bmatrix}$$
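The information equality used in this proof can also be checked by simulation: the covariance of the single-observation score, evaluated at the true parameters, should approximate the per-observation information matrix $\mathrm{diag}(1/\sigma_0^2, \, 1/(2\sigma_0^4))$. The following Python sketch uses arbitrary illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(4)
mu0, sigma02 = 1.5, 4.0                  # illustrative true parameters
x = rng.normal(mu0, np.sqrt(sigma02), size=1_000_000)

# Per-observation score entries, evaluated at the true parameters
score_mu = (x - mu0) / sigma02
score_s2 = -1 / (2 * sigma02) + (x - mu0) ** 2 / (2 * sigma02 ** 2)

# The covariance of the score approximates the per-observation information
print(np.cov(score_mu, score_s2))
print(1 / sigma02, 1 / (2 * sigma02 ** 2))   # theoretical diagonal: 0.25, 0.03125
```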

In other words, the distribution of the vector
$$\begin{bmatrix} \widehat{\mu} \\ \widehat{\sigma}^2 \end{bmatrix}$$
can be approximated by a multivariate normal distribution with mean
$$\begin{bmatrix} \mu_0 \\ \sigma_0^2 \end{bmatrix}$$
and covariance matrix
$$\frac{1}{n} \begin{bmatrix} \sigma_0^2 & 0 \\ 0 & 2\sigma_0^4 \end{bmatrix}$$
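A Monte Carlo experiment makes this approximation concrete: across many simulated samples, the variances of $\widehat{\mu}$ and $\widehat{\sigma}^2$ should be close to $\sigma_0^2/n$ and $2\sigma_0^4/n$, and their covariance close to zero. The following Python sketch uses arbitrary illustrative values for the true parameters, the sample size and the number of replications.

```python
import numpy as np

rng = np.random.default_rng(5)
mu0, sigma02, n, reps = 1.5, 4.0, 200, 50_000    # illustrative values

samples = rng.normal(mu0, np.sqrt(sigma02), size=(reps, n))
mu_hat = samples.mean(axis=1)                    # sample means
sigma2_hat = samples.var(axis=1, ddof=0)         # unadjusted sample variances

# Empirical covariance of the estimators across replications
print(np.cov(mu_hat, sigma2_hat))
# Theoretical approximation: diag(sigma0^2/n, 2*sigma0^4/n) = diag(0.02, 0.16)
print(sigma02 / n, 2 * sigma02 ** 2 / n)
```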
