# Score vector

In the theory of maximum likelihood estimation, the score vector (or simply, the score) is the gradient (i.e., the vector of first derivatives) of the log-likelihood function with respect to the parameters being estimated.

## Definition

The concept is defined as follows.

**Definition** Let $\theta$ be a $K \times 1$ parameter vector describing the distribution of a sample $\xi$. Let $L(\theta ; \xi)$ be the likelihood function of the sample $\xi$, depending on the parameter $\theta$. Let $$l(\theta ; \xi) = \ln L(\theta ; \xi)$$ be the log-likelihood function. Then, the vector of first derivatives of $l(\theta ; \xi)$ with respect to the entries of $\theta$, denoted by $$\nabla_{\theta}\, l(\theta ; \xi)$$ is called the score vector.

The symbol $\nabla$ is read "nabla" and is often used to denote the gradient of a function.

## Example

In the next example, the likelihood depends on two parameters. As a consequence, the score is a $2 \times 1$ vector.

**Example** Suppose the sample $\xi$ is a vector of $n$ independent draws $x_1$, ..., $x_n$ from a normal distribution with mean $\mu$ and variance $\sigma^2$. As proved in the lecture on maximum likelihood estimation of the parameters of a normal distribution, the log-likelihood of the sample is $$l(\mu, \sigma^2 ; \xi) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2$$ The two parameters (mean and variance) together form a vector $$\theta = \begin{bmatrix} \mu \\ \sigma^2 \end{bmatrix}$$ The partial derivative of the log-likelihood with respect to $\mu$ is $$\frac{\partial l}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \mu)$$ and the partial derivative with respect to the variance is $$\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i - \mu)^2$$ The score vector is $$\nabla_{\theta}\, l(\theta ; \xi) = \begin{bmatrix} \dfrac{1}{\sigma^2}\displaystyle\sum_{i=1}^{n}(x_i - \mu) \\[2ex] -\dfrac{n}{2\sigma^2} + \dfrac{1}{2\sigma^4}\displaystyle\sum_{i=1}^{n}(x_i - \mu)^2 \end{bmatrix}$$
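As a sanity check, the analytic score of the normal example can be compared with a finite-difference gradient of the log-likelihood. The sketch below (function names and the parameter point are illustrative, not part of the original example) evaluates both at an arbitrary $(\mu, \sigma^2)$:

```python
import numpy as np

def normal_log_likelihood(mu, sigma2, x):
    """Log-likelihood of an i.i.d. normal sample x with mean mu, variance sigma2."""
    n = len(x)
    return (-n / 2 * np.log(2 * np.pi)
            - n / 2 * np.log(sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2))

def score(mu, sigma2, x):
    """Analytic score vector: partial derivatives w.r.t. mu and sigma2."""
    n = len(x)
    d_mu = np.sum(x - mu) / sigma2
    d_sigma2 = -n / (2 * sigma2) + np.sum((x - mu) ** 2) / (2 * sigma2 ** 2)
    return np.array([d_mu, d_sigma2])

# Compare against central finite differences at an arbitrary parameter point.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=50)
mu, sigma2, h = 0.5, 3.0, 1e-6
num_d_mu = (normal_log_likelihood(mu + h, sigma2, x)
            - normal_log_likelihood(mu - h, sigma2, x)) / (2 * h)
num_d_sigma2 = (normal_log_likelihood(mu, sigma2 + h, x)
                - normal_log_likelihood(mu, sigma2 - h, x)) / (2 * h)
print(score(mu, sigma2, x))       # analytic score
print([num_d_mu, num_d_sigma2])   # finite-difference approximation
```

The two printed vectors should agree to several decimal places, confirming that the partial derivatives above are the gradient of the log-likelihood.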

## How the score is used to find the maximum likelihood estimator

The maximum likelihood estimator $\hat{\theta}$ of the parameter $\theta$ solves the maximization problem $$\hat{\theta} = \operatorname*{arg\,max}_{\theta} \; l(\theta ; \xi)$$

Under some regularity conditions, the solution of this problem can be found by solving the first-order condition $$\nabla_{\theta}\, l(\theta ; \xi) = 0$$ that is, by equating the score vector to $0$.
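For the normal example, the first-order condition can be solved in closed form: setting the first entry of the score to zero gives the sample mean, and setting the second entry to zero gives the mean of squared deviations. A minimal NumPy sketch (the simulated data are illustrative) verifies that the score vanishes at these estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=200)
n = len(x)

# Closed-form solutions of the first-order condition for the normal model:
mu_hat = np.mean(x)                       # from sum(x_i - mu) / sigma2 = 0
sigma2_hat = np.mean((x - mu_hat) ** 2)   # from -n/(2 s) + sum((x_i-mu)^2)/(2 s^2) = 0

# The score, evaluated at the maximum likelihood estimates, is the zero vector.
d_mu = np.sum(x - mu_hat) / sigma2_hat
d_sigma2 = -n / (2 * sigma2_hat) + np.sum((x - mu_hat) ** 2) / (2 * sigma2_hat ** 2)
print(d_mu, d_sigma2)  # both numerically zero
```

Both components are zero up to floating-point error, as the first-order condition requires.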

## More details

More details about the log-likelihood and the score vector can be found in the lecture entitled Maximum likelihood.
