
Importance sampling

by Marco Taboga, PhD

Importance sampling is a variance reduction technique. We use it to reduce the variance of the approximation error that we make when we approximate an expected value with Monte Carlo integration.

In this lecture, we explain how importance sampling works and then we show with an example how effective it can be.


Equivalent expectations

Importance sampling is based on a simple method used to compute expected values in many different but equivalent ways.

Discrete vectors

The next proposition shows how the technique works for discrete random vectors.

Proposition Let X be a $K\times 1$ discrete random vector with support $R_X$ and joint probability mass function $p_X(x)$. Let $g(x)$ be a function $g:\mathbb{R}^K\rightarrow\mathbb{R}$. Let Y be another $K\times 1$ discrete random vector with joint probability mass function $p_Y(y)$ and such that $p_Y(x)>0$ whenever $p_X(x)>0$. Then, $$\mathrm{E}\left[g(X)\right]=\mathrm{E}\left[g(Y)\frac{p_X(Y)}{p_Y(Y)}\right]$$

Proof

This is obtained as follows: $$\mathrm{E}\left[g(Y)\frac{p_X(Y)}{p_Y(Y)}\right]=\sum_{y\in R_Y}g(y)\frac{p_X(y)}{p_Y(y)}p_Y(y)=\sum_{y\in R_Y}g(y)p_X(y)=\sum_{x\in R_X}g(x)p_X(x)=\mathrm{E}\left[g(X)\right]$$ where the third equality holds because $p_X(y)=0$ outside $R_X$, and $R_X\subseteq R_Y$ as a consequence of the assumption that $p_Y(x)>0$ whenever $p_X(x)>0$.
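As a quick numerical sanity check, the following Python sketch verifies the identity on a small discrete example (the support, the pmfs, and the function g below are arbitrary choices of ours, not part of the lecture):

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary discrete example: X and Y share the support {0, 1, 2}
x_vals = np.array([0.0, 1.0, 2.0])
p_X = np.array([0.2, 0.5, 0.3])   # pmf of X
p_Y = np.array([0.4, 0.4, 0.2])   # pmf of Y, positive wherever p_X is
g = lambda x: x ** 2              # an arbitrary function g

# E[g(X)] computed exactly by summing over the support of X
exact = np.sum(g(x_vals) * p_X)

# E[g(Y) p_X(Y) / p_Y(Y)] approximated by sampling from Y
y = rng.choice(x_vals, size=100_000, p=p_Y)
idx = np.searchsorted(x_vals, y)              # positions of the draws in x_vals
approx = np.mean(g(y) * p_X[idx] / p_Y[idx])

print(exact, approx)  # approx converges to exact (= 1.7) as the sample grows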

Continuous vectors

An almost identical proposition holds for continuous random vectors.

Proposition Let X be a continuous $K\times 1$ random vector with support $R_X$ and joint probability density function $f_X(x)$. Let $g(x)$ be a function $g:\mathbb{R}^K\rightarrow\mathbb{R}$. Let Y be another continuous $K\times 1$ random vector with joint probability density function $f_Y(y)$ and such that $f_Y(x)>0$ whenever $f_X(x)>0$. Then, $$\mathrm{E}\left[g(X)\right]=\mathrm{E}\left[g(Y)\frac{f_X(Y)}{f_Y(Y)}\right]$$

Proof

This is obtained as follows: $$\mathrm{E}\left[g(Y)\frac{f_X(Y)}{f_Y(Y)}\right]=\int_{R_Y}g(y)\frac{f_X(y)}{f_Y(y)}f_Y(y)\,dy=\int_{R_Y}g(y)f_X(y)\,dy=\int_{R_X}g(x)f_X(x)\,dx=\mathrm{E}\left[g(X)\right]$$ where we have used $$\int_{R_Y}\ldots\,dy$$ as a shorthand for the multiple integral over the K coordinates of $y$.
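An analogous sanity check for the continuous case (again with arbitrarily chosen distributions and g, not taken from the lecture) can be run by plain Monte Carlo:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
g = lambda x: np.cos(x) ** 2   # an arbitrary function g

# E[g(X)] with X standard normal, approximated by sampling from X
x = rng.standard_normal(1_000_000)
lhs = np.mean(g(x))

# E[g(Y) f_X(Y) / f_Y(Y)] with Y normal with mean 1, approximated by sampling from Y
y = 1 + rng.standard_normal(1_000_000)
rhs = np.mean(g(y) * norm.pdf(y, 0, 1) / norm.pdf(y, 1, 1))

print(lhs, rhs)  # the two averages approximate the same expected value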

Importance samples

Suppose that we need to compute the expected value $\mathrm{E}\left[g(X)\right]$ of a function of a random vector X by Monte Carlo integration.

The standard way to proceed is to produce a computer-generated sample of realizations of n independent random vectors $X_1,\ldots,X_n$ having the same distribution as X.

Then, we use the sample mean $$\frac{1}{n}\sum_{i=1}^{n}g(X_i)$$ to approximate the expected value.

Thanks to the propositions in the previous section, we can compute an alternative Monte Carlo approximation of $\mathrm{E}\left[g(X)\right]$ by extracting n independent draws $Y_1,\ldots,Y_n$ from the distribution of another random vector Y (in what follows we assume that Y is discrete, but everything we say applies also to continuous vectors).

Then, we use the sample mean $$\frac{1}{n}\sum_{i=1}^{n}g(Y_i)\frac{p_X(Y_i)}{p_Y(Y_i)}$$ as an approximation.

This technique is called importance sampling.

The reason why we use importance sampling is that we can often choose Y in such a way that the variance of the approximation error is much smaller than the variance of the standard Monte Carlo approximation.
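To make the recipe concrete, here is a minimal generic implementation; the function name and signature are our own illustration, not part of the lecture:

import numpy as np

def importance_sampling(g, pdf_x, pdf_y, sample_y, n, rng):
    # Approximate E[g(X)] by drawing from Y and reweighting each draw
    # by the likelihood ratio pdf_x / pdf_y.
    y = sample_y(n, rng)             # n independent draws from Y
    weights = pdf_x(y) / pdf_y(y)    # importance weights
    return np.mean(g(y) * weights)

Note that choosing pdf_y equal to pdf_x and letting sample_y draw from the distribution of X recovers the standard Monte Carlo approximation, since all the weights are then equal to 1.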

Approximation errors

In the case of the standard Monte Carlo approximation, the variance of the approximation error is $$\frac{1}{n}\operatorname{Var}\left[g(X)\right]$$

In the case of importance sampling, the variance of the approximation error is $$\frac{1}{n}\operatorname{Var}\left[g(Y)\frac{p_X(Y)}{p_Y(Y)}\right]$$

Proof

In the standard case, the approximation error is $$\frac{1}{n}\sum_{i=1}^{n}g(X_i)-\mathrm{E}\left[g(X)\right]$$ and its variance is $$\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}g(X_i)-\mathrm{E}\left[g(X)\right]\right]=\frac{1}{n^{2}}\sum_{i=1}^{n}\operatorname{Var}\left[g(X_i)\right]=\frac{1}{n}\operatorname{Var}\left[g(X)\right]$$ where we have used the fact that the draws $X_1,\ldots,X_n$ are independent and identically distributed. In the case of importance sampling, the same computation applied to the draws $Y_1,\ldots,Y_n$ gives $$\operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n}g(Y_i)\frac{p_X(Y_i)}{p_Y(Y_i)}-\mathrm{E}\left[g(X)\right]\right]=\frac{1}{n}\operatorname{Var}\left[g(Y)\frac{p_X(Y)}{p_Y(Y)}\right]$$
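These formulas are easy to check by simulation: repeat the standard estimate many times and compare the empirical variance of the estimator with $\operatorname{Var}\left[g(X)\right]/n$. The sketch below uses an arbitrary choice of g; for X standard normal and $g(x)=x^2$, $\operatorname{Var}\left[g(X)\right]=2$.

import numpy as np

rng = np.random.default_rng(0)
g = lambda x: x ** 2   # arbitrary choice; Var[g(X)] = 2 for X standard normal
n, reps = 100, 20_000

# Empirical variance of the standard Monte Carlo estimator across repetitions
estimates = np.array([np.mean(g(rng.standard_normal(n))) for _ in range(reps)])

print(estimates.var())  # close to the theoretical value Var[g(X)] / n
print(2 / n)            # = 0.02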

The ideal sample

Ideally, we would like to be able to choose Y in such a way that $$g(Y)\frac{p_X(Y)}{p_Y(Y)}$$ is a constant, which would imply that the variance of the approximation error is zero.

The next proposition shows when this ideal situation is achievable.

Proposition If $g(y)\geq 0$ for any $y$, then $$g(Y)\frac{p_X(Y)}{p_Y(Y)}=\mathrm{E}\left[g(X)\right]$$ when Y has joint probability mass function $$p_Y(y)=\frac{g(y)p_X(y)}{\mathrm{E}\left[g(X)\right]}\quad\quad (1)$$

Proof

The ratio $$g(y)\frac{p_X(y)}{p_Y(y)}$$ is constant if the proportionality condition $$p_Y(y)\propto g(y)p_X(y)$$ holds. By imposing that $p_Y(y)$ be a legitimate probability mass function, that is, $$\sum_{y\in R_Y}p_Y(y)=1$$ we get $$p_Y(y)=\frac{g(y)p_X(y)}{\sum_{x\in R_X}g(x)p_X(x)}$$ or $$p_Y(y)=\frac{g(y)p_X(y)}{\mathrm{E}\left[g(X)\right]}$$

Of course, the denominator $\mathrm{E}\left[g(X)\right]$ is unknown (otherwise we would not be discussing how to compute a Monte Carlo approximation of it), so the optimal choice for Y is not feasible.
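The zero-variance property, and its circularity, can be seen on a discrete toy example where $\mathrm{E}\left[g(X)\right]$ is known; the numbers below are our own illustration:

import numpy as np

rng = np.random.default_rng(0)

x_vals = np.array([0.0, 1.0, 2.0])
p_X = np.array([0.2, 0.5, 0.3])
g = lambda x: x + 1.0                # non-negative, so the optimal pmf exists

Eg = np.sum(g(x_vals) * p_X)         # E[g(X)] = 2.1, needed to build p_Y
p_Y = g(x_vals) * p_X / Eg           # the optimal pmf of equation (1)

y = rng.choice(x_vals, size=10, p=p_Y)
idx = np.searchsorted(x_vals, y)
print(g(y) * p_X[idx] / p_Y[idx])    # every weighted draw equals E[g(X)] = 2.1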

However, the formula for the probability mass function (pmf) of the optimal Y gives us some indications about the choice of Y.

In particular, equation (1) tells us that the pmf of Y should place more mass where the product between the pmf of X and the value of $g$ is larger.

This product is a measure of the importance of the possible values of X.

Therefore, the pmf of X should be tilted so as to give more weight to the more important values of X.

Intuition and main takeaways

Before showing an example, let us summarize the main takeaways from this lecture:

- we can approximate $\mathrm{E}\left[g(X)\right]$ by drawing from the distribution of a different random vector Y, provided each draw is reweighted by the ratio of the two probability mass (or density) functions;

- a good choice of Y can make the variance of the approximation error much smaller than that of the standard Monte Carlo approximation;

- the ideal Y, which achieves zero variance, has pmf proportional to $g(y)p_X(y)$, but it is infeasible because the proportionality constant is the unknown $\mathrm{E}\left[g(X)\right]$;

- in practice, we should choose a Y that places more probability mass on the important values of X, that is, on the values where the product $g(x)p_X(x)$ is large.

Example

Let us now illustrate importance sampling with an example.

The problem

Suppose that X has a standard normal distribution (i.e., with mean $\mu=0$ and standard deviation $\sigma=1$) and $$g(x)=10\exp\left(-5(x-3)^{4}\right)$$

The function $g(x)$ attains its maximum at the point $x=3$ and rapidly goes to 0 for values of x smaller or larger than $3$.

On the contrary, the probability density function $f_X(x)$ of a standard normal random variable is almost zero at $x=3$.

As a consequence, if we use a standard Monte Carlo approximation, most of the draws $X_i$ fall where $g$ is practically zero, while the few draws that happen to fall near $x=3$ produce large values of $g(X_i)$. This results in a high variance of the approximation error.
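A quick computation (our own check, not part of the original text) quantifies the problem by measuring how few standard-normal draws land where g is non-negligible:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
g = 10 * np.exp(-5 * (x - 3) ** 4)

# Fraction of draws for which g is not essentially zero
print(np.mean(g > 1e-6))  # only a small fraction of the draws contributes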

The solution

In order to shift weight towards $x=3$, we can sample Y from a normal distribution with mean $\mu=3$ and standard deviation $\sigma=1$.

[Plot: the probability distributions used in the example.]

The following Python code shows how to do so, and it computes both the standard Monte Carlo (MC) and the importance sampling (IS) approximations by using samples of $n=10{,}000$ independent draws from the distributions of X and Y.

The standard deviations of the two approximations (std_MC and std_IS) are estimated by using the sample variances of $g(X_i)$ and of $g(Y_i)f_X(Y_i)/f_Y(Y_i)$.

The Python code

If you run this example code, you can see that the importance sampling approximation indeed achieves a significant reduction in the standard deviation of the approximation error (from about 0.0080 to about 0.0012).

# Example of importance sampling in Python

import numpy as np
from scipy.stats import norm

n = 10000  # number of Monte Carlo samples

np.random.seed(0)  # initialize the random number generator for replicability

# Standard Monte Carlo: draw from the distribution of X
x = np.random.randn(n, 1)
g = 10 * np.exp(-5 * (x - 3) ** 4)
MC = np.mean(g)
std_MC = np.sqrt((1 / n) * np.var(g))
print('Standard Monte-Carlo estimate of the expected value: ' + str(MC))
print('Standard deviation of plain-vanilla Monte Carlo: ' + str(std_MC))
print(' ')

# Importance sampling: draw from Y (normal with mean 3) and reweight by f_X / f_Y
y = 3 + np.random.randn(n, 1)
g = 10 * np.exp(-5 * (y - 3) ** 4)
g_weighted = g * norm.pdf(y, 0, 1) / norm.pdf(y, 3, 1)
IS = np.mean(g_weighted)
std_IS = np.sqrt((1 / n) * np.var(g_weighted))
print('Importance-sampling Monte-Carlo estimate of the expected value: ' + str(IS))
print('Standard deviation of importance-sampling Monte Carlo: ' + str(std_IS))

The output is:

Standard Monte-Carlo estimate of the expected value: 0.08579415409780462
Standard deviation of plain-vanilla Monte Carlo: 0.007904811247115087
 
Importance-sampling Monte-Carlo estimate of the expected value: 0.09096069224808337
Standard deviation of importance-sampling Monte Carlo: 0.0011925073695279826
