In statistics, an estimator is a function of the observed data that is used to estimate an unknown parameter. Many different estimators are possible for any given parameter, so some criterion is needed to choose between them. Often, no single criterion clearly favors one estimator over another.

There are two types of estimators: point estimators, which yield a single value for the parameter, and interval estimators, which yield a range of plausible values.

Point estimators

For a point estimator θ̂ of a parameter θ:

  1. The bias of θ̂ is defined as B(θ̂) = E[θ̂] − θ
  2. θ̂ is an unbiased estimator of θ if and only if B(θ̂) = 0 for all θ
  3. The mean squared error of θ̂ is defined as MSE(θ̂) = E[(θ̂ − θ)²]
  4. MSE(θ̂) = V(θ̂) + (B(θ̂))²
  5. The standard deviation of θ̂ is also called the standard error of θ̂.

where V(X) is the variance of X and E is the expected value operator.
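The decomposition in (4) can be checked numerically. The sketch below is a hypothetical example, assuming a normal population with known variance and using the divisor-n sample variance as a deliberately biased point estimator; bias, variance, and MSE are estimated by simulation:

```python
import random
import statistics

random.seed(0)

# Assumed setup: normal population with mean 0 and variance 4.
mu, sigma = 0.0, 2.0
theta = sigma ** 2          # true parameter: the population variance
n = 10                      # sample size
trials = 200_000

# Biased point estimator: sample variance with divisor n (not n - 1).
estimates = []
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(x) / n
    estimates.append(sum((xi - m) ** 2 for xi in x) / n)

mean_est = sum(estimates) / trials
bias = mean_est - theta                  # B(theta_hat) = E[theta_hat] - theta
var = statistics.pvariance(estimates)    # V(theta_hat)
mse = sum((e - theta) ** 2 for e in estimates) / trials

# MSE(theta_hat) = V(theta_hat) + (B(theta_hat))^2, up to rounding error.
print(bias, var, mse, var + bias ** 2)
```

For this estimator the exact bias is −σ²/n = −0.4, and the simulated values land close to that; the decomposition holds as an algebraic identity for the empirical moments.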

A common criterion is to choose, among the unbiased estimators, the one with the lowest variance. Sometimes it is preferable not to restrict attention to unbiased estimators, since a biased estimator may achieve a lower mean squared error; see Bias (statistics). Concerning such "best unbiased estimators", see also the Gauss-Markov theorem, the Lehmann-Scheffé theorem, and the Rao-Blackwell theorem.
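The point that an unbiased estimator need not be best can be illustrated by simulation. In this sketch (a hypothetical example, assuming a normal population), the biased divisor-n variance estimator attains a lower mean squared error than the unbiased divisor-(n − 1) version:

```python
import random

random.seed(1)

sigma2 = 4.0            # assumed true population variance
n, trials = 10, 200_000

mse_unbiased = mse_biased = 0.0
for _ in range(trials):
    x = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    # Unbiased estimator (divisor n - 1) vs biased estimator (divisor n).
    mse_unbiased += (ss / (n - 1) - sigma2) ** 2
    mse_biased += (ss / n - sigma2) ** 2

mse_unbiased /= trials
mse_biased /= trials
print(mse_biased, mse_unbiased)
```

For a normal population the exact values are 2σ⁴/(n − 1) for the unbiased estimator and (2n − 1)σ⁴/n² for the biased one, so the biased version has lower MSE for every sample size: its extra squared bias is more than offset by its smaller variance.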

See also Maximum likelihood.