In statistics, an **estimator** is a function of the observed data that is used to estimate an unknown parameter. Many different estimators are possible for any given parameter, so some criterion is used to choose among them; often, however, no criterion clearly picks one estimator over another.

There are two types of estimators: point estimators and interval estimators.

## Point estimators

For a point estimator **θ** of parameter θ:

- The *bias* of **θ** is defined as B(**θ**) = E[**θ**] − θ.
- **θ** is an *unbiased estimator* of θ iff B(**θ**) = 0 for all θ.
- The *mean square error* of **θ** is defined as MSE(**θ**) = E[(**θ** − θ)^{2}].
- MSE(**θ**) = V(**θ**) + (B(**θ**))^{2}.
- The standard deviation of **θ** is also called the *standard error* of **θ**.
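The definitions above can be checked numerically. The sketch below (a minimal Monte Carlo illustration; the distribution, sample size, and variable names are chosen for the example, not taken from the text) estimates the bias and MSE of the biased variance estimator Σ(x − x̄)²/n and verifies the identity MSE = V + B² on the simulated values:

```python
import random

random.seed(0)

TRUE_MEAN, TRUE_SD = 0.0, 1.0
TRUE_VAR = TRUE_SD ** 2
n, trials = 10, 200_000

# Repeatedly draw a sample and compute the (biased) variance estimator
# theta_hat = sum((x - xbar)^2) / n.
estimates = []
for _ in range(trials):
    xs = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]
    xbar = sum(xs) / n
    estimates.append(sum((x - xbar) ** 2 for x in xs) / n)

mean_est = sum(estimates) / trials
bias = mean_est - TRUE_VAR                                   # B = E[theta_hat] - theta
var = sum((e - mean_est) ** 2 for e in estimates) / trials   # V(theta_hat)
mse = sum((e - TRUE_VAR) ** 2 for e in estimates) / trials   # E[(theta_hat - theta)^2]

print(f"bias = {bias:.4f}  (theory: -sigma^2/n = {-TRUE_VAR / n:.4f})")
print(f"MSE  = {mse:.4f}  vs  V + B^2 = {var + bias ** 2:.4f}")
```

For this estimator the bias is known in closed form, E[**θ**] = ((n−1)/n)σ², so the simulated bias should be close to −σ²/n; the decomposition MSE = V + B² holds exactly for the empirical moments.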

A common criterion is to choose the unbiased estimator with the lowest variance. Sometimes, however, it is preferable not to restrict attention to unbiased estimators; see Bias (statistics). Concerning such "best unbiased estimators", see also the Gauss–Markov theorem, the Lehmann–Scheffé theorem, and the Rao–Blackwell theorem.
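One reason not to insist on unbiasedness is that a biased estimator can have a smaller mean square error. A standard example: for normal samples, the biased variance estimator with divisor n has lower MSE than the unbiased one with divisor n − 1. A hedged simulation sketch (distribution, sample size, and names are illustrative assumptions):

```python
import random

random.seed(1)
n, trials, sigma2 = 10, 200_000, 1.0

# Compare empirical MSE of the biased (1/n) and unbiased (1/(n-1))
# variance estimators on the same normal samples.
mse_biased = mse_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    mse_biased += (ss / n - sigma2) ** 2
    mse_unbiased += (ss / (n - 1) - sigma2) ** 2

print(f"MSE, biased (1/n):       {mse_biased / trials:.4f}")
print(f"MSE, unbiased (1/(n-1)): {mse_unbiased / trials:.4f}")
```

Theory predicts MSE values of ((2n−1)/n²)σ⁴ ≈ 0.19 and (2/(n−1))σ⁴ ≈ 0.22 here, so the biased estimator wins on MSE despite its bias.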

See also Maximum likelihood.