Statlect is a free digital textbook on probability theory and mathematical statistics. Explore its main sections.
Read a rigorous yet accessible introduction to the main concepts of probability theory, such as random variables and random vectors, expected value, variance, correlation, conditional probability and conditional expectation.
Explore this compendium of common probability distributions, including the binomial, Poisson, uniform, exponential and normal distributions; find step-by-step derivations of the properties of the main probability distributions.
Learn about stochastic convergence, including convergence in probability, almost sure convergence and convergence in distribution; read about the Central Limit Theorem and the Law of Large Numbers.
This is a rigorous introduction to the basics of mathematical statistics; learn about statistical inference, point estimation, interval estimation and hypothesis testing.
Review the basics of differentiation and integration; learn about the fundamental concepts of combinatorial analysis, such as permutations and combinations; and discover some special functions used in statistics.
Use this glossary to quickly review the technical terms that are introduced in the digital textbook.
Explore some popular pages on Statlect.
A gentle introduction to the concept of expected value, with an informal definition and more formal definitions based on the Stieltjes and Lebesgue integrals.
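Informally, the expected value of a discrete random variable is the probability-weighted average of its possible values. A minimal sketch of this definition, using a fair six-sided die as a hypothetical example:

```python
# Hypothetical example: a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # each outcome is equally likely

# Expected value = sum of (value * probability) over all outcomes.
expected_value = sum(x * p for x, p in zip(outcomes, probs))
print(expected_value)  # 3.5
```

The same weighted-average idea is what the Stieltjes and Lebesgue integrals generalize to arbitrary random variables.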
The Beta distribution is a continuous probability distribution having two parameters. One of its most common uses is to model one's uncertainty about the probability of success of an experiment.
The Poisson distribution is a discrete probability distribution used to model the number of occurrences of an unpredictable event within a unit of time.
The exponential distribution is a continuous probability distribution used to model the time we need to wait before a given event occurs.
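The exponential and Poisson distributions are closely linked: if waiting times between events are exponential with rate lambda, the number of events in a unit of time is Poisson with mean lambda. A minimal simulation sketch of this link, with a hypothetical rate of 4 events per unit of time:

```python
import random

random.seed(0)
rate = 4.0  # hypothetical rate: 4 events per unit of time

def events_in_unit_time(rate):
    """Count events in [0, 1) by accumulating exponential waiting times."""
    t = random.expovariate(rate)
    count = 0
    while t < 1.0:
        count += 1
        t += random.expovariate(rate)
    return count

counts = [events_in_unit_time(rate) for _ in range(20_000)]
mean_count = sum(counts) / len(counts)  # close to rate, as the Poisson link predicts
```

Over many simulated unit intervals, the average event count settles near the rate parameter, which is the mean of the corresponding Poisson distribution.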
The binomial distribution is a discrete probability distribution used to model the number of successes obtained by repeating an experiment several times, where each repetition has two possible outcomes: success or failure.
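The binomial probability mass function follows directly from counting: choose which of the n repetitions succeed, then multiply the probabilities of the successes and failures. A minimal sketch, with hypothetical parameters n = 10 and p = 0.3:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): choose the k successes,
    then weight by the probability of that success/failure pattern."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Hypothetical example: 10 repetitions with success probability 0.3.
# The probabilities of all possible outcomes sum to one.
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
```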
Bayes' rule is a formula for computing the conditional probability of a given event after observing a second event whose conditional and unconditional probabilities are known in advance.
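A minimal numerical sketch of Bayes' rule, using hypothetical numbers for a diagnostic test (1% prevalence, 95% sensitivity, 5% false-positive rate):

```python
# Hypothetical numbers, chosen only for illustration.
prior = 0.01                 # P(disease)
p_pos_given_disease = 0.95   # P(positive | disease)
p_pos_given_healthy = 0.05   # P(positive | no disease)

# Unconditional probability of a positive result (law of total probability).
p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)

# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive).
posterior = p_pos_given_disease * prior / p_pos
```

Even with a fairly accurate test, the posterior probability here is only about 16%, because the event was rare to begin with.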
Maximum likelihood is an estimation method that uses observed data to estimate the parameters of the probability distribution that generated the data.
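The method picks the parameter value under which the observed data are most probable. A minimal sketch, maximizing the log-likelihood of a hypothetical Bernoulli sample by grid search (for the Bernoulli case the maximizer is known to be the sample mean):

```python
from math import log

data = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # hypothetical Bernoulli sample

def log_likelihood(p, data):
    """Log-probability of the sample under success probability p."""
    return sum(log(p) if x == 1 else log(1 - p) for x in data)

# Grid search over candidate values of p.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, data))
# p_hat coincides with the sample mean (7 successes out of 10).
```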
The moment generating function is often used to characterize the probability distribution of a random variable. Its derivatives at zero are equal to the moments of the random variable.
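The moment-property can be checked numerically: differentiating the MGF at zero (here by finite differences) recovers the moments. A minimal sketch for a hypothetical Bernoulli(0.3) variable, whose MGF is 1 - p + p*e^t:

```python
from math import exp

p = 0.3  # hypothetical Bernoulli success probability

def mgf(t):
    """MGF of a Bernoulli(p) random variable: E[e^(tX)] = 1 - p + p*e^t."""
    return 1 - p + p * exp(t)

h = 1e-5
# First derivative at 0 (central difference) = first moment E[X] = p.
first_moment = (mgf(h) - mgf(-h)) / (2 * h)
# Second derivative at 0 = second moment E[X^2], which is also p here
# because X^2 = X for a 0/1 variable.
second_moment = (mgf(h) - 2 * mgf(0) + mgf(-h)) / h**2
```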
The Beta function is a function of two variables that is often employed in probability theory and statistics, for example, as a normalizing constant in the probability density functions of the F distribution and of the Student's t distribution.
The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference will be very small.
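That intuition can be made concrete by simulation: for the sample mean of uniform draws, the probability of a deviation larger than a fixed epsilon shrinks as the sample size grows. A minimal sketch with hypothetical choices of epsilon and sample sizes:

```python
import random

random.seed(42)
eps = 0.05  # hypothetical tolerance

def deviation_prob(n, trials=2000):
    """Estimate P(|sample mean of n Uniform(0,1) draws - 0.5| > eps)."""
    hits = 0
    for _ in range(trials):
        m = sum(random.random() for _ in range(n)) / n
        if abs(m - 0.5) > eps:
            hits += 1
    return hits / trials

# The deviation probability falls toward zero as n grows.
probs = {n: deviation_prob(n) for n in (10, 100, 1000)}
```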
A Central Limit Theorem provides a set of conditions under which the sample mean is asymptotically normally distributed (as the sample size increases).
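The effect is easy to see by simulation: standardized sample means of a skewed distribution behave approximately like a standard normal variable. A minimal sketch using hypothetical exponential samples of size 50:

```python
import random

random.seed(1)
n, reps, rate = 50, 5000, 1.0
mu, sigma = 1 / rate, 1 / rate  # mean and standard deviation of Exponential(rate)

zs = []
for _ in range(reps):
    m = sum(random.expovariate(rate) for _ in range(n)) / n
    # Standardize the sample mean: subtract its mean, divide by its standard deviation.
    zs.append((m - mu) / (sigma / n**0.5))

# If the CLT holds, zs should look roughly standard normal:
# mean near 0 and variance near 1.
z_mean = sum(zs) / reps
z_var = sum(z * z for z in zs) / reps
```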
The theory of statistical inference is concerned with how to use observed data to draw conclusions about the unknown probability distribution that generated the data.
See what's new on Statlect.
The linear regression model is a conditional model in which the output variable is linearly related to the input variables and to an error term.
This lecture discusses the conditions under which the Ordinary Least Squares (OLS) estimators of the coefficients of a linear regression are consistent and asymptotically normal.
Most learning materials found on this website are now available in a traditional textbook format.