A false positive occurs when a test incorrectly reports that it has found what it is looking for. Detection algorithms of all kinds tend to produce such false alarms.

For example, optical character recognition (OCR) may detect an 'a' where there are only some dots that happen to look like an 'a' to the algorithm in use.

This is especially problematic in biometric scans, such as retina scans or facial recognition, where the scanner may incorrectly identify someone as matching a known person, whether a person entitled to access the system or a suspected criminal.

When developing such software or hardware, there is always a tradeoff between false positives and false negatives (in which an actual match goes undetected).

Usually the algorithm uses some threshold, or trigger value, specifying how close a candidate must be to a given sample before a match is reported. The higher this threshold, the more similar an object must be to be detected, and the fewer false positives are produced, at the cost of more false negatives.
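The effect can be seen in a minimal Python sketch; the similarity scores below are made-up illustrative values, not output from any real matching algorithm:

    # Classifying similarity scores against a threshold.
    # All scores below are hypothetical illustrative data.

    genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.70]   # true matches
    impostor_scores = [0.40, 0.55, 0.62, 0.72, 0.35]   # non-matches

    def count_errors(threshold):
        # An impostor scoring at or above the threshold is a false positive;
        # a genuine match scoring below it is a false negative.
        false_positives = sum(s >= threshold for s in impostor_scores)
        false_negatives = sum(s < threshold for s in genuine_scores)
        return false_positives, false_negatives

    for threshold in (0.5, 0.65, 0.8):
        fp, fn = count_errors(threshold)
        print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")

Run on this data, raising the threshold from 0.5 to 0.8 drives the false positives from 3 down to 0 while introducing 2 false negatives, which is the tradeoff described above.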

False positives are also a significant issue in medical testing. In some cases, two or more tests are available, one of which is simpler and less expensive, but less accurate, than the other. For example, the simplest blood tests for HIV and hepatitis have a significant rate of false positives. These tests are used to screen out possible blood donors, while more expensive and more precise tests are used in medical practice to determine whether a person is actually infected with these viruses.
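A rough sketch, with entirely hypothetical costs and error rates (not actual figures for HIV or hepatitis tests), suggests why such two-stage testing is economical:

    # Two-stage testing: a cheap screen, then an expensive confirmatory
    # test only for samples the screen flags. All numbers are hypothetical.

    prevalence   = 0.001    # fraction of samples truly infected
    screen_fpr   = 0.02     # cheap screen wrongly flags 2% of clean samples
    screen_cost  = 1.00     # cost per cheap screen
    confirm_cost = 50.00    # cost per precise confirmatory test

    # Fraction of samples flagged by the screen, assuming (for simplicity)
    # that the screen misses no true cases.
    flagged = prevalence + screen_fpr * (1 - prevalence)

    cost_two_stage   = screen_cost + flagged * confirm_cost
    cost_confirm_all = confirm_cost

    print(f"two-stage cost per sample:   {cost_two_stage:.2f}")
    print(f"confirm-everything cost:     {cost_confirm_all:.2f}")

Under these assumed numbers, the two-stage strategy costs about 2.05 per sample against 50 for running the precise test on everyone, while the confirmatory test still removes the screen's false positives.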

False positives can produce serious and counterintuitive problems when the condition being searched for is rare. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the "positives" detected by the test will be false. The probability that an observed positive result is a false positive may be calculated, and the problem of false positives demonstrated, using Bayes' theorem.
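A short Python calculation with the rates above (a false positive rate of one in ten thousand and a prevalence of one in a million) makes the point; assuming the test never misses a true case is a simplification for the sketch:

    # Bayes' theorem: P(truly positive | positive result)
    #   = P(positive | infected) * P(infected) / P(positive)
    # Rates match the example in the text; perfect sensitivity is
    # a simplifying assumption.

    prevalence          = 1 / 1_000_000   # one in a million is a true positive
    false_positive_rate = 1 / 10_000      # test wrongly flags 1 in 10,000 negatives
    sensitivity         = 1.0             # assumed: no missed true cases

    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    p_true_given_positive = sensitivity * prevalence / p_positive

    print(f"P(truly positive | test positive) = {p_true_given_positive:.4f}")

This prints roughly 0.0099: even though the test is wrong only once in ten thousand, about 99% of its positive results are false, because true positives are so much rarer than the test's errors.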

See also: Receiver operating characteristic, Type I error