In statistical inference, a sample $\xi$ is employed to make statements about the probability distribution from which the sample has been generated (see the lecture entitled Statistical inference). The sample can be regarded as a realization of a random vector $\Xi$, whose joint distribution function, denoted by $F(\xi)$, is unknown, but is assumed to belong to a set of distribution functions $\Phi$, called a statistical model.
In a parametric model, the set $\Phi$ is put into correspondence with a set $\Theta \subseteq \mathbb{R}^{p}$ of $p$-dimensional real vectors. $\Theta$ is called the parameter space and its elements are called parameters. Denote by $\theta_0$ the true parameter, that is, the parameter that is associated with the unknown distribution function from which the sample was actually generated. For concreteness, $\theta_0$ is assumed to be unique. This lecture discusses a kind of inference about $\theta_0$ called set estimation.
Roughly speaking, set estimation is the act of choosing a subset $T$ of the parameter space ($T \subseteq \Theta$) in such a way that $T$ has a high probability of containing the true (and unknown) parameter $\theta_0$. The chosen subset $T$ is called a set estimate of $\theta_0$ or a confidence set for $\theta_0$.
When the parameter space $\Theta$ is a subset of the set of real numbers $\mathbb{R}$ and the subset $T$ is chosen among the intervals of $\mathbb{R}$ (e.g. intervals of the kind $[a,b] \subseteq \mathbb{R}$), we speak about interval estimation (instead of set estimation), interval estimate (instead of set estimate) and confidence interval (instead of confidence set).
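As a quick illustration of interval estimation, the sketch below computes a 95% confidence interval for the mean of a normal distribution. The sample values, the known standard deviation, and the use of the z-interval are all assumptions made for the sake of the example (the lecture on set estimation of the mean treats this problem in detail).

```python
import math

# Hypothetical data; sigma is assumed known for simplicity.
sample = [4.9, 5.3, 4.7, 5.1, 5.0, 5.2, 4.8, 5.1]
sigma = 0.2
z = 1.96  # 97.5% quantile of the standard normal distribution

n = len(sample)
xbar = sum(sample) / n                    # sample mean
half_width = z * sigma / math.sqrt(n)     # half-length of the interval
interval = (xbar - half_width, xbar + half_width)
print(interval)
```

The interval $[\bar{x} - 1.96\,\sigma/\sqrt{n},\ \bar{x} + 1.96\,\sigma/\sqrt{n}]$ is a set estimate: a subset of the parameter space chosen so as to contain the true mean with high probability.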
When the set estimate is produced using a predefined rule (a function) that associates a set estimate to each $\xi$ in the support of $\Xi$, we can write $$T = T(\xi).$$
The function $T(\xi)$ is called a set estimator. Often, the symbol $T$ is used to denote both the set estimate and the set estimator. The meaning is usually clear from the context.
As we already said, set estimation is the act of choosing a subset $T$ of the parameter space in such a way that $T$ has a high probability of containing the true parameter $\theta_0$. The probability that $T$ contains the true parameter is called the coverage probability and it is usually chosen by the statistician. Intuitively, before observing the data the statistician makes a statement: $$\theta_0 \in \Theta,$$ where $\Theta$ is the parameter space, containing all the parameters that are deemed plausible. The statistician believes the statement to be true, but the statement is not very informative, because $\Theta$ is a very large set. After observing the data, she makes a more informative statement: $$\theta_0 \in T(\xi).$$ This statement is more informative, because $T(\xi)$ is smaller than $\Theta$, but it has a positive probability of being wrong (this probability is the complement to 1 of the coverage probability). In controlling this probability, the statistician faces a trade-off: if she decreases the probability of being wrong, then her statements become less informative; on the contrary, if she increases the probability of being wrong, then her statements become more informative.
In formal terms, the coverage probability of a set estimator $T(\xi)$ is defined as follows: $$C(T) = P_{\theta_0}\left(\theta_0 \in T(\xi)\right),$$ where the notation $P_{\theta_0}$ is used to indicate the fact that the probability is calculated using the distribution function associated to the true parameter $\theta_0$. It is important to note that in the above definition of coverage probability the random quantity is the set $T(\xi)$, while the parameter $\theta_0$ is fixed.
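The definition of coverage probability can be checked empirically with a Monte Carlo simulation. The sketch below assumes (purely for illustration) that the data are drawn from a normal distribution with known standard deviation, and that the set estimator is the standard 95% z-interval for the mean; the fraction of simulated intervals that contain the true parameter approximates the coverage probability.

```python
import math
import random

# Illustrative assumed values: in a simulation we know theta0,
# which is exactly what lets us estimate the coverage probability.
random.seed(0)
theta0 = 2.0   # true parameter
sigma = 1.0    # assumed known standard deviation
n = 30         # sample size
z = 1.96       # 97.5% standard normal quantile
trials = 5000

covered = 0
for _ in range(trials):
    xs = [random.gauss(theta0, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    half = z * sigma / math.sqrt(n)
    # Does the realized interval T(xi) contain the true parameter?
    if xbar - half <= theta0 <= xbar + half:
        covered += 1

coverage = covered / trials
print(coverage)  # close to the nominal 0.95
```

Note that across the simulated repetitions it is the interval that varies while $\theta_0$ stays fixed, exactly as in the definition above.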
In practice, the coverage probability is seldom known, because it depends on the unknown parameter $\theta_0$ (although in some cases it is equal for all parameters belonging to the parameter space). When the coverage probability is not known, it is customary to compute the confidence coefficient $c$, which is defined as follows: $$c = \inf_{\theta \in \Theta} P_{\theta}\left(\theta \in T(\xi)\right).$$ In other words, the confidence coefficient is equal to the smallest possible coverage probability. The confidence coefficient is also often called the level of confidence.
We already mentioned that there is a trade-off in the construction and choice of a set estimator. On the one hand, we want our set estimator to have a high coverage probability, that is, we want the set $T(\xi)$ to include the true parameter with a high probability. On the other hand, we want the size of $T(\xi)$ to be as small as possible, so as to make our interval estimate more precise. What do we mean by the size of $T(\xi)$? If the parameter space $\Theta$ is unidimensional and $T(\xi)$ is an interval estimate, then the size of $T(\xi)$ is just its length. If the space $\Theta$ is multidimensional, then the size of $T(\xi)$ is its volume. The size of a confidence set is also called the measure of a confidence set (for those who have a grasp of measure theory, the name stems from the fact that the Lebesgue measure is the generalization of volume to multidimensional spaces).

If we denote by $\mu(T)$ the size of a confidence set $T$, then we can also define the expected size of a set estimator $T(\xi)$: $$E_{\theta_0}\left[\mu(T(\xi))\right],$$ where the notation $E_{\theta_0}$ is used to indicate the fact that the expected value is calculated using the distribution function associated to the true parameter $\theta_0$. Like the coverage probability, the expected size of a set estimator also depends on the unknown parameter $\theta_0$. Hence, unless it is a constant function of $\theta_0$, one needs to somehow estimate it, or to take the infimum over all possible values of the parameter, as we did above for coverage probabilities.
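The trade-off between coverage and size is easy to see in the z-interval example, where the length $2\,z\,\sigma/\sqrt{n}$ does not depend on the data, so the expected size coincides with the size itself. The sketch below uses illustrative assumed values of $\sigma$ and $n$ to show how the interval lengthens as the confidence level rises.

```python
import math

# Illustrative assumptions: known sigma, fixed sample size.
sigma, n = 1.0, 25
z90, z95, z99 = 1.645, 1.960, 2.576  # standard normal quantiles

def length(z):
    # Length of the interval [xbar - z*sigma/sqrt(n), xbar + z*sigma/sqrt(n)],
    # which is constant in xbar, hence equal to its own expected size.
    return 2 * z * sigma / math.sqrt(n)

for level, z in [("90%", z90), ("95%", z95), ("99%", z99)]:
    print(level, round(length(z), 3))
```

Raising the confidence level (reducing the probability of being wrong) makes the interval longer and therefore less informative, which is exactly the trade-off described above.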
Although size is probably the simplest criterion to evaluate and select set estimators, there are several other criteria. We do not discuss them here, but we refer the reader to the very nice exposition in Casella, G. and R. L. Berger (2002) "Statistical Inference", Duxbury Advanced Series.
Examples of set estimation problems can be found in the following lectures:
Set estimation of the mean (examples of set estimation of the mean of an unknown distribution);
Set estimation of the variance (examples of set estimation of the variance of an unknown distribution).