Many researchers wish to test a **statistical hypothesis** with their data. (Note: this article describes the traditional frequentist treatment of hypothesis testing.)

There are several preparations we make before we observe the data.

- The hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example: *The mean response to the treatment being tested is equal to the mean response to the placebo in the control group. Both responses have the normal distribution with this unknown mean and the same known standard deviation ... (value).*
- A test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. In the example given above, it might be the numerical difference between the two sample means, m_1 - m_2.
- The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or a union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor sqrt(1/n_1 + 1/n_2), where n_1 and n_2 are the sample sizes.
- Among all the sets of possible values, we must choose one that we think represents the most extreme evidence **against** the hypothesis. That is called the **critical region** of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the **alpha** value (or **size**) of the test.
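The preparations above can be sketched in code. This is a minimal illustration, not the article's own example worked out: the sample sizes, sample means, and known standard deviation below are all assumed values invented for demonstration.

```python
from math import sqrt

# All numbers here are hypothetical, chosen only for illustration.
sigma = 2.0          # known common standard deviation of individual responses
n1, n2 = 50, 60      # sample sizes of the treatment and control groups
m1, m2 = 5.1, 4.3    # observed sample means

# The test statistic: the numerical difference between the two sample means.
t = m1 - m2

# Under the hypothesis (equal means), t is normally distributed with mean 0
# and standard deviation sigma * sqrt(1/n1 + 1/n2).
sd_t = sigma * sqrt(1/n1 + 1/n2)

# A conventional two-sided critical region is |t| > 1.96 * sd_t, whose
# probability under the hypothesis (the alpha value, or size) is about 0.05.
critical = 1.96 * sd_t
print(t, sd_t, abs(t) > critical)
```

The choice of 1.96 fixes the alpha value at roughly 0.05; a different alpha would give a different critical region.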

If the test statistic is inside the critical region, then our conclusion is either:

- The hypothesis is incorrect, *or*
- An event of probability less than or equal to *alpha* has occurred.

In the example, we would say: the observed difference between the responses to treatment and placebo is statistically significant.

If the test statistic is outside the critical region, the only conclusion is that

*There is not enough evidence to reject the hypothesis.*
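The two possible conclusions can be expressed as a small decision rule. The function name and its arguments are illustrative, not from the article:

```python
def decide(t, sd_t, z_crit=1.96):
    """Return the test's conclusion for an observed statistic t.

    sd_t is the standard deviation of t under the hypothesis; z_crit
    defines the two-sided critical region |t| > z_crit * sd_t
    (alpha is about 0.05 for z_crit = 1.96). Names are hypothetical.
    """
    if abs(t) > z_crit * sd_t:
        return "reject: statistically significant"
    return "not enough evidence to reject the hypothesis"

print(decide(0.8, 0.383))   # falls inside the critical region
print(decide(0.3, 0.383))   # falls outside the critical region
```

Note that the second branch deliberately does not say "the hypothesis is true"; failing to reject is the weaker of the two conclusions.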

This is **not** the same as evidence for the hypothesis; that we cannot obtain. Statistical research progresses by eliminating error, not by *finding the truth*.

*Note: Statistics cannot "find the truth", but it can approximate it. The argument for the maximum likelihood principle illustrates this -- TedDunning*