The normal distribution is an extremely important probability distribution in many fields. It is also called the Gaussian distribution. It is actually a family of distributions of the same general form, differing only in their location and scale parameters: the mean and standard deviation. The standard normal distribution is the normal distribution with a mean of zero and a standard deviation of one. Because the graph of its probability density resembles a bell, it is often called the bell curve.
History
The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in the second edition of his The Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the Theorem of de Moivre-Laplace.
Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors.
The name "bell curve" goes back to Jouffret who used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875 [Stigler]. This terminology is unfortunate, since it reflects and encourages the fallacy that "everything is Gaussian". (See the discussion of "occurrence" below).
There are various ways to specify a random variable. The most visual is the probability density function (plot at the top), which represents how likely each value of the random variable is. The cumulative distribution function is a conceptually cleaner way to specify the same information, but to the untrained eye its plot is much less informative (see below). Equivalent ways to specify the normal distribution are: the moments, the cumulants, the characteristic function, the moment-generating function, and the cumulant-generating function. Some of these are very useful for theoretical work, but not intuitive. See probability distribution for a discussion.
All of the cumulants of the normal distribution are zero, except the first two.
Probability density function
The probability density function of the normal distribution with mean μ and standard deviation σ (equivalently, variance σ²) is an example of a Gaussian function,

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).

(See also exponential function and pi.) If a random variable X has this distribution, we write X ~ N(μ, σ²). If μ = 0 and σ = 1, the distribution is called the standard normal distribution, with formula

f(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right).

The picture at the top of this article is the graph of the probability density function of the standard normal distribution.
For all normal distributions, the density function is symmetric about its mean value. About 68% of the area under the curve lies within one standard deviation of the mean, about 95.5% within two standard deviations, and about 99.7% within three standard deviations. The inflection points of the curve occur at one standard deviation away from the mean.
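These percentages are easy to check numerically. A minimal sketch in Python, using the identity P(|X − μ| < kσ) = erf(k/√2) and the standard library's math.erf (the error function is discussed in the next section):

```python
import math

# Probability that a normal variable falls within k standard deviations
# of its mean: P(|X - mu| < k*sigma) = erf(k / sqrt(2)).
for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2.0))
    print(f"within {k} sigma: {100 * p:.2f}%")  # 68.27%, 95.45%, 99.73%
```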
Cumulative distribution function

The cumulative distribution function (hereafter cdf) is defined as the probability that a variable X has a value less than x, and it is expressed in terms of the density function as

F(x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{(u-\mu)^2}{2\sigma^2}\right) du.

The standard normal cdf, conventionally denoted Φ, is just the general cdf evaluated with μ = 0 and σ = 1:

\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} \exp\left(-\frac{u^2}{2}\right) du.

The standard normal cdf can be expressed in terms of a special function called the error function, as

\Phi(z) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{z}{\sqrt{2}}\right)\right].

The following graph shows the cumulative distribution function for values of z from −4 to +4:
On this graph, we see that the probability that a standard normal variable has a value less than 0.25 is approximately 0.60.
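The error-function identity above gives a one-line implementation of Φ; a minimal sketch in Python (the helper name std_normal_cdf is illustrative):

```python
import math

def std_normal_cdf(z):
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(std_normal_cdf(0.25))  # ~0.5987, i.e. approximately 0.60
```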
Generating functions

Moment generating function

The moment generating function is defined as the expected value of exp(tX). For a normal distribution, the moment generating function is

M_X(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right).

Characteristic function

The characteristic function is defined as the expected value of exp(itX). For a normal distribution, it can be shown that the characteristic function is

\phi_X(t) = \exp\left(i\mu t - \frac{\sigma^2 t^2}{2}\right),

as can be seen by completing the square in the exponent.
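To make that step explicit, completing the square in the exponent of the defining integral gives

itx - \frac{(x-\mu)^2}{2\sigma^2} = -\frac{(x - \mu - i\sigma^2 t)^2}{2\sigma^2} + i\mu t - \frac{\sigma^2 t^2}{2},

and the remaining integral over x is a full Gaussian integral contributing only the normalizing constant, which leaves exp(iμt − σ²t²/2).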
Properties

Some basic properties of the normal distribution (the first and third are used below):

1. If X ~ N(μ, σ²) and a and b are real numbers, then aX + b ~ N(aμ + b, (aσ)²): linear transformations of normal variables are again normal.
2. If X and Y are independent normal random variables, then their sum X + Y is also normally distributed.
3. If X and Y are independent standard normal variables, then X² + Y² has a chi-square distribution with two degrees of freedom.

Standardizing normal random variables

As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal.
If X is a normal random variable with mean μ and variance σ², then

Z = \frac{X - \mu}{\sigma}

is a standard normal random variable: Z ~ N(0, 1). An important consequence is that the cdf of a general normal distribution is therefore

F(x) = \Phi\left(\frac{x - \mu}{\sigma}\right).

Conversely, if Z is a standard normal random variable, then

X = \sigma Z + \mu

is a normal random variable with mean μ and variance σ².
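In code, the standardization identity reads as a minimal sketch (the function name normal_cdf is illustrative):

```python
import math

def normal_cdf(x, mu, sigma):
    # F(x) = Phi((x - mu) / sigma): standardize, then apply the standard cdf.
    z = (x - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(normal_cdf(12.0, 10.0, 2.0))  # equals Phi(1.0), about 0.8413
```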
The standard normal distribution has been tabulated, and every other normal distribution is a simple linear transformation of it. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.
Generating normal random variables
For computer simulations, it is often useful to generate values that have a normal distribution.
There are several methods; the most basic is to invert the standard normal cdf. More efficient methods are also known.
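A minimal sketch of the basic inversion method (the bisection search is purely for illustration; practical implementations use accurate rational approximations of the inverse cdf):

```python
import math
import random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_normal_inverse_cdf(p):
    # Invert Phi by bisection over a wide bracket; slow but simple.
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if std_normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z = std_normal_inverse_cdf(random.random())  # one standard normal draw
print(z)
```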
One such method is the Box-Muller transform.
The Box-Muller transform takes two uniformly distributed values as input and maps them to two normally distributed values.
This requires generating values from a uniform distribution, for which many methods are known. See also random number generators.
The Box-Muller transform is a consequence of Property 3 and the fact that the chi-square distribution with two degrees of freedom is an exponential distribution (which is easy to sample from).
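A minimal sketch of the Box-Muller transform in Python (the function name and parameters are illustrative):

```python
import math
import random

def box_muller():
    # Two independent Uniform samples; 1 - random() lies in (0, 1],
    # which avoids taking log(0).
    u1 = 1.0 - random.random()
    u2 = random.random()
    # -2 ln(u1) is exponentially distributed, i.e. chi-square with two
    # degrees of freedom: the squared radius. u2 gives a uniform angle.
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    # The two coordinates are independent standard normal variables.
    return r * math.cos(theta), r * math.sin(theta)

# A general N(mu, sigma^2) value is then obtained by shifting and scaling:
z1, z2 = box_muller()
mu, sigma = 5.0, 2.0
print(mu + sigma * z1)
```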
The central limit theorem

The normal distribution has the very important property that, under certain conditions, the distribution of a sum of a large number of independent variables is approximately normal. This is the so-called central limit theorem.
The practical importance of the central limit theorem is that the normal distribution can be used as an approximation to some other distributions. Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and on the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
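As an illustration, a minimal simulation sketch in Python: the standardized sum of twelve uniform variables is already close to standard normal.

```python
import random

# By the central limit theorem, the standardized sum of n independent
# Uniform(0,1) variables (mean n/2, variance n/12) is approximately N(0,1).
def approx_standard_normal(n=12):
    s = sum(random.random() for _ in range(n))
    return (s - n / 2.0) / (n / 12.0) ** 0.5

samples = [approx_standard_normal() for _ in range(100000)]
within_one = sum(abs(z) < 1.0 for z in samples) / len(samples)
print(within_one)  # close to 0.68, the normal one-sigma probability
```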
Occurrence

Approximately normal distributions occur in many situations, as a result of the central limit theorem.
When there is reason to suspect the presence of a large number of small effects acting additively, it is reasonable to assume that observations will be normal.
There are statistical methods to empirically test that assumption.
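For example, a minimal sketch of one such check, the Shapiro-Wilk test (using SciPy here, which is an assumed dependency; any normality test would do):

```python
import random
from scipy import stats  # assumed available

data = [random.gauss(0.0, 1.0) for _ in range(500)]
statistic, p_value = stats.shapiro(data)  # Shapiro-Wilk test of normality
print(p_value)  # a large p-value gives no evidence against normality
```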
Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called log-normal.
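A small simulation sketch of this effect (the shock range 0.9 to 1.1 is purely illustrative):

```python
import math
import random

# A variable built from many small multiplicative shocks: its logarithm is a
# sum of independent terms, hence approximately normal by the central limit
# theorem, so the variable itself is approximately log-normal.
def multiplicative_sample(n_shocks=200):
    x = 1.0
    for _ in range(n_shocks):
        x *= random.uniform(0.9, 1.1)
    return x

logs = [math.log(multiplicative_sample()) for _ in range(20000)]
mean = sum(logs) / len(logs)
sd = (sum((v - mean) ** 2 for v in logs) / len(logs)) ** 0.5
within = sum(abs(v - mean) < sd for v in logs) / len(logs)
print(within)  # ~0.68: the logs behave like a normal sample
```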
Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting distributions are indeed normal. The full distribution will be a mixture of normal distributions, which is not in general normal. This is related to the theory of errors (see below).
To summarize, approximate normality is sometimes assumed in the situations below, each of which is discussed more fully in its own subsection. Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.

Photon counts
Light intensity from a single source varies with time, and is usually assumed to be normally distributed. However, quantum mechanics interprets measurements of light intensity as photon counting. Ordinary light sources, which produce light by thermal emission, should follow a Poisson distribution or a Bose-Einstein distribution on very short time scales. On longer time scales (longer than the coherence time), the addition of independent variables yields an approximately normal distribution. The intensity of laser light, which is a quantum phenomenon, has an exactly normal distribution.
Measurement errors
Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account. Note that this is the central assumption of the mathematical theory of errors.
Physical characteristics of biological specimens
The overwhelming biological evidence is that bulk growth processes of living tissue proceed by multiplicative, not additive, increments, and that measures of body size should therefore, if anything, follow a lognormal rather than a normal distribution. Despite common claims of normality, the sizes of plants and animals are approximately lognormal. The evidence and an explanation based on models of growth were first published in the classic book
The assumption that the linear size of biological specimens is normal leads to a non-normal distribution of weight (since weight and volume scale roughly as the third power of length, and Gaussian distributions are preserved only by linear transformations); conversely, assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length and body mass, but not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers, so the "problem" goes away if lognormality is assumed. Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the joint distribution of sizes deviate from lognormality.
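To make the power argument explicit, suppose for illustration that weight scales as the cube of length, W = cL³. If ln L ~ N(μ, σ²), then

\ln W = \ln c + 3\ln L \sim N(\ln c + 3\mu,\ 9\sigma^2),

so W is again lognormal; by contrast, if L itself were normally distributed, W = cL³ would not be.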
Financial variables

Because of the exponential nature of interest and inflation, financial indicators such as interest rates, stock values, or commodity prices make good examples of multiplicative behaviour. As such, they should be expected to be lognormal rather than normal.
Mandelbrot, the popularizer of fractals, has claimed that even the assumption of lognormality is flawed.
Lifetime
Other examples of variables that are not normally distributed include the lifetimes of humans or mechanical devices. Examples of distributions used in this connection are the exponential distribution (memoryless) and the Weibull distribution. In general, there is no reason that waiting times should be normal, since they are not directly related to any kind of additive influence.
Test scores
The IQ score of an individual, for example, can be seen as the result of many small additive influences: many genes and many environmental factors all play a role.

Criticisms: test scores are discrete variables associated with the number of correct and incorrect answers, and as such they are related to the binomial distribution. Moreover (see this USENET post), raw IQ test scores are customarily "massaged" to force the distribution of IQ scores to be normal. Finally, there is no widely accepted model of intelligence, and the link between intelligence and IQ scores, let alone a relationship between influences on intelligence and additive variations of IQ, is subject to debate.
Further reading
External links and references