
Exponential distribution - Maximum Likelihood Estimation


In this lecture, we derive the maximum likelihood estimator of the parameter of an exponential distribution. The theory needed to understand this lecture is explained in the lecture entitled Maximum likelihood.

We observe the first $n$ terms of an IID sequence $X_1, X_2, \ldots, X_n$ of random variables having an exponential distribution. A generic term of the sequence $X_{j}$ has probability density function \[ f_X(x_j; \lambda_0) = \lambda_0 \exp(-\lambda_0 x_j), \qquad x_j \in R_+ \] where $R_+ = [0, \infty)$ is the support of the distribution and the rate parameter $\lambda_0$ is the parameter that needs to be estimated.

We assume that the regularity conditions needed for the consistency and asymptotic normality of maximum likelihood estimators are satisfied.

The likelihood function

The likelihood function is \[ L(\lambda; x_1, \ldots, x_n) = \lambda^n \exp\left( -\lambda \sum_{j=1}^n x_j \right) \]


Since the terms of the sequence are independent, the likelihood function is equal to the product of their densities: \[ L(\lambda; x_1, \ldots, x_n) = \prod_{j=1}^n f_X(x_j; \lambda) = \prod_{j=1}^n \lambda \exp(-\lambda x_j) \] Because the observed values $x_1, \ldots, x_n$ can only belong to the support of the distribution, we can write \[ L(\lambda; x_1, \ldots, x_n) = \lambda^n \exp\left( -\lambda \sum_{j=1}^n x_j \right) \]

The log-likelihood function

The log-likelihood function is \[ \ell(\lambda; x_1, \ldots, x_n) = n \ln(\lambda) - \lambda \sum_{j=1}^n x_j \]


This is obtained by taking the natural logarithm of the likelihood function: \[ \ell(\lambda; x_1, \ldots, x_n) = \ln L(\lambda; x_1, \ldots, x_n) = \ln\left[ \lambda^n \exp\left( -\lambda \sum_{j=1}^n x_j \right) \right] = n \ln(\lambda) - \lambda \sum_{j=1}^n x_j \]
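The closed form $n \ln(\lambda) - \lambda \sum_j x_j$ is easy to verify numerically. The following minimal Python sketch (function name and sample values are illustrative, not part of the lecture) checks that it agrees with summing the log-densities directly:

```python
import numpy as np

def exp_log_likelihood(lam, x):
    """Closed-form exponential log-likelihood: n*ln(lam) - lam*sum(x)."""
    x = np.asarray(x)
    return x.size * np.log(lam) - lam * x.sum()

# Hypothetical observed sample, purely for illustration
x = np.array([0.5, 1.2, 0.3, 2.1, 0.8])

# Sanity check: the closed form equals the sum of the log-densities
direct = np.log(1.5 * np.exp(-1.5 * x)).sum()
print(np.isclose(exp_log_likelihood(1.5, x), direct))  # True
```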

The maximum likelihood estimator

The maximum likelihood estimator of $\lambda$ is \[ \widehat{\lambda} = \frac{n}{\sum_{j=1}^n x_j} \]


The estimator is obtained as a solution of the maximization problem \[ \widehat{\lambda} = \arg\max_{\lambda} \, \ell(\lambda; x_1, \ldots, x_n) \] The first order condition for a maximum is \[ \frac{\partial}{\partial \lambda} \ell(\lambda; x_1, \ldots, x_n) = 0 \] The derivative of the log-likelihood is \[ \frac{\partial}{\partial \lambda} \ell(\lambda; x_1, \ldots, x_n) = \frac{n}{\lambda} - \sum_{j=1}^n x_j \] By setting it equal to zero, we obtain \[ \widehat{\lambda} = \frac{n}{\sum_{j=1}^n x_j} \] Note that the division by $\sum_{j=1}^n x_j$ is legitimate because exponentially distributed random variables can take on only positive values (and strictly so with probability 1).

Therefore, the estimator $\widehat{\lambda}$ is just the reciprocal of the sample mean: \[ \widehat{\lambda} = \frac{1}{\overline{x}}, \qquad \text{where } \overline{x} = \frac{1}{n} \sum_{j=1}^n x_j \]
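As a quick illustration, the estimator can be computed from simulated data with a known rate. This is a sketch under illustrative assumptions (the seed, sample size, and true rate are arbitrary choices, not part of the lecture); note that NumPy parametrizes the exponential by its scale, which is the reciprocal of the rate:

```python
import numpy as np

# Simulated data with a known rate, chosen only for illustration
rng = np.random.default_rng(0)
lambda_0 = 2.0
x = rng.exponential(scale=1.0 / lambda_0, size=10_000)  # scale = 1/rate

# The MLE is the reciprocal of the sample mean
lambda_hat = 1.0 / x.mean()
print(lambda_hat)  # close to the true rate 2.0 for a sample this large
```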

Asymptotic variance

The estimator $\widehat{\lambda}$ is asymptotically normal with asymptotic mean equal to $\lambda_0$ and asymptotic variance equal to \[ V = \lambda_0^2 \]


The score is \[ s(\lambda; x_j) = \frac{\partial}{\partial \lambda} \ln f_X(x_j; \lambda) = \frac{1}{\lambda} - x_j \] The Hessian is \[ H(\lambda; x_j) = \frac{\partial}{\partial \lambda} s(\lambda; x_j) = -\frac{1}{\lambda^2} \] By the information equality, we have that \[ I(\lambda_0) = -\mathrm{E}\left[ H(\lambda_0; X_j) \right] = \frac{1}{\lambda_0^2} \] Finally, the asymptotic variance is \[ V = \left[ I(\lambda_0) \right]^{-1} = \lambda_0^2 \]

This means that the distribution of the maximum likelihood estimator $\widehat{\lambda}$ can be approximated by a normal distribution with mean $\lambda_0$ and variance $\lambda_0^2/n$.
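The normal approximation can be checked by Monte Carlo: draw many samples of size $n$, compute the MLE for each, and compare the empirical mean and variance of the estimates with $\lambda_0$ and $\lambda_0^2/n$. All numeric choices below (seed, rate, sample size, number of replications) are illustrative:

```python
import numpy as np

# Monte Carlo check of the normal approximation; all values illustrative
rng = np.random.default_rng(1)
lambda_0, n, reps = 2.0, 500, 2000

# reps independent samples of size n, one MLE per sample
samples = rng.exponential(scale=1.0 / lambda_0, size=(reps, n))
mles = 1.0 / samples.mean(axis=1)

# Empirical mean and variance of the MLE vs. the asymptotic values
print(mles.mean())                  # close to lambda_0 = 2.0
print(mles.var(), lambda_0**2 / n)  # empirical variance close to lambda_0^2/n
```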
