In statistics, the exponential family of probability density functions or probability mass functions comprises those that have the following form:

    f(x | η) = h(x) exp( η^T T(x) - A(η) )

where:

  • h(x) is the reference density,

  • η is the natural parameter, a column vector, so that η^T, its transpose, is a row vector,

  • T(x) is called the sufficient statistic, a column vector whose number of scalar components is the same as that of η. (However, the concept of sufficient statistic is broader than what may appear from this article.)

  • and A(η) is the logarithm of the normalizing factor, without which f(x | η) would not be a probability density or probability mass function. It is the cumulant-generating function of the probability distribution of the sufficient statistic T(X) when the distribution of X is the one whose density function is h (a worked example follows this list).

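As a concrete illustration (the Bernoulli case, which is not spelled out above), the Bernoulli distribution with success probability p, 0 < p < 1, can be rewritten in this form:

    f(x | p) = p^x (1 - p)^(1 - x) = exp( x log(p / (1 - p)) + log(1 - p) ),   x ∈ {0, 1},

so h(x) = 1, T(x) = x, η = log(p / (1 - p)) and A(η) = -log(1 - p) = log(1 + e^η). Differentiating A illustrates its role as a cumulant generator: A'(η) = e^η / (1 + e^η) = p is the mean of T(X) = X, and A''(η) = p(1 - p) is its variance.
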
The parameter space, i.e., the set of values of η for which the integral (or, in the discrete case, the sum) of h(x) exp(η^T T(x)) over x is finite, is necessarily convex.
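
The convexity can be seen with Hölder's inequality; a brief sketch (written with integrals, with sums working the same way in the discrete case): if η1 and η2 lie in the parameter space and 0 < t < 1, then

    ∫ h(x) exp( (t η1 + (1 - t) η2)^T T(x) ) dx
      = ∫ [ h(x) exp(η1^T T(x)) ]^t [ h(x) exp(η2^T T(x)) ]^(1 - t) dx
      ≤ ( ∫ h(x) exp(η1^T T(x)) dx )^t ( ∫ h(x) exp(η2^T T(x)) dx )^(1 - t) < ∞,

so t η1 + (1 - t) η2 also lies in the parameter space.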

The term exponential family is also frequently used to refer to any particular concrete case, i.e., any parametrized family of probability distributions of this form.

The Bernoulli, normal, gamma, Poisson and binomial (with a fixed number of trials) distributions are all exponential families.
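
For instance, the Poisson distribution with mean λ > 0 fits the general form with a one-dimensional natural parameter:

    f(x | λ) = λ^x e^(-λ) / x! = (1 / x!) exp( x log λ - λ ),   x = 0, 1, 2, ...,

so h(x) = 1 / x!, T(x) = x, η = log λ and A(η) = λ = e^η; the other distributions listed admit similar rewritings.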

According to the Pitman-Koopman-Darmois theorem, only in exponential families is there a sufficient statistic whose dimension remains bounded as the sample size increases. More precisely, suppose X1, X2, X3, ... are independent, identically distributed random variables whose common distribution is known to belong to some family of probability distributions. Only if that family is an exponential family is there a (possibly vector-valued) sufficient statistic T(X1, ..., Xn) whose number of scalar components does not increase as the sample size n increases.
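
One direction is immediate from the form itself: if X1, ..., Xn are independent with common density f(x | η) = h(x) exp( η^T T(x) - A(η) ), then the joint density is

    f(x1, ..., xn | η) = [ h(x1) ··· h(xn) ] exp( η^T ( T(x1) + ··· + T(xn) ) - n A(η) ),

so by the Fisher-Neyman factorization theorem the sum T(x1) + ··· + T(xn) is sufficient, and its number of scalar components equals that of η for every n; the substance of the theorem is the converse statement.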