In probability theory, a (discrete-time) martingale is a stochastic process (i.e., a sequence of random variables) X1, X2, X3, ... that satisfies the identity

    E[Xn+1 | X1, ..., Xn] = Xn,

i.e., the conditional expected value of the next observation, given all of the past observations, is equal to the most recent past observation. Like many things in probability theory, the term was adopted from the language of gambling.

Somewhat more generally, a sequence Y1, Y2, Y3, ... is said to be a martingale with respect to another sequence X1, X2, X3, ... if

    E[Yn+1 | X1, ..., Xn] = Yn
for every n.
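
The defining identity can be checked numerically. Below is a minimal Python sketch (an illustration added here, not part of the standard exposition; the walk length, sample size, and seed are arbitrary choices) that simulates a fair-coin random walk and estimates the conditional mean of Xn+1 given Xn, which for a martingale must equal Xn itself:

    import random
    from collections import defaultdict

    random.seed(0)

    # Fair-coin random walk: X_{k+1} = X_k + 1 ("heads") or X_k - 1 ("tails").
    def walk(steps):
        x, path = 0, []
        for _ in range(steps):
            x += random.choice((1, -1))
            path.append(x)
        return path

    # Estimate E[X_{n+1} | X_n = k] by averaging over many simulated paths.
    # For this walk the increments are independent of the past, so conditioning
    # on X_n alone is equivalent to conditioning on the whole history.
    n = 10
    by_value = defaultdict(list)
    for _ in range(200_000):
        path = walk(n + 1)
        by_value[path[n - 1]].append(path[n])   # record (X_n, X_{n+1}) pairs

    for k in sorted(by_value):
        mean = sum(by_value[k]) / len(by_value[k])
        print(f"E[X_{n + 1} | X_n = {k:3d}] ≈ {mean:7.3f}")   # ≈ k for each k

Each printed conditional mean comes out close to the conditioning value k, as the identity requires.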

Table of contents
1 History
2 Examples of martingales
3 Convergence of martingales
4 Martingales and stopping times
5 Submartingales and supermartingales

History

Originally, martingale referred to a class of betting strategies popular in 18th century France. The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss, so that the first win would recover all previous losses plus win a profit equal to the original stake. Since a gambler with infinite wealth is guaranteed to eventually flip heads, the martingale betting strategy was seen as a sure thing by those who practiced it. Unfortunately, none of these practitioners in fact possessed infinite wealth, and the exponential growth of the bets would quickly bankrupt those foolish enough to use the martingale after even a moderately long run of bad luck.
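
The arithmetic of the doubling strategy is easy to simulate. The sketch below (a minimal illustration; the bankroll limit of 1,000 units is an arbitrary assumption) plays the strategy against a fair coin. Every session that ends with a win nets exactly the original stake, but a session that exhausts the bankroll loses everything staked so far, and on average the two effects cancel:

    import random

    def martingale_session(stake=1, bankroll=1_000, p_heads=0.5):
        """Double the bet after each loss; stop at the first win or when the
        remaining bankroll can no longer cover the next bet."""
        bet, lost = stake, 0
        while bet <= bankroll - lost:
            if random.random() < p_heads:   # heads: win the current bet
                return bet - lost           # always equals the original stake
            lost += bet                     # tails: record the loss ...
            bet *= 2                        # ... and double the next bet
        return -lost                        # ruined: next bet can't be covered

    random.seed(1)
    results = [martingale_session() for _ in range(100_000)]
    print("mean result per session:", sum(results) / len(results))   # ≈ 0
    print("worst session:", min(results))   # a rare, large loss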

Martingales in the probability-theory sense were invented by Lévy, and much of the original development of the theory was done by Doob. Part of the motivation for that work was to show the impossibility of successful betting strategies.

Examples of martingales

  • Suppose X0 = 0 and Xn+1 = Xn ± 1 for each n, with "+" in case of "heads" and "−" in case of "tails" on the (n + 1)st toss of a fair coin, so that Xn is an unbiased random walk. Let

    Yn = Xn² − n.

Then { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }. (Simulation sketches of this and of several of the other examples are given after the list.)

  • Let A be a fixed event, and let Yn = P(A | X1, ... , Xn). Then { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }.

  • (Pólya's urn) An urn initially contains r red and b blue marbles. One marble is chosen at random. If it is red, it is replaced and a new red marble is put into the urn. If it is blue, it is replaced and a new blue marble is put into the urn. Let Xn be the number of red marbles in the urn after n iterations of this procedure, and let Yn = Xn/(n + r + b). Then the sequence { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }.

  • (Likelihood-ratio testing in statistics) A population is thought to be distributed either according to a probability density f or according to another probability density g. A random sample is taken, the data being X1, ... , Xn. Let Yn be the "likelihood ratio"

    Yn = ( g(X1) g(X2) ··· g(Xn) ) / ( f(X1) f(X2) ··· f(Xn) )
(which, in applications, would be used as a test statistic). If the population is actually distributed according to the density f rather than according to g, then { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }.

  • Suppose each ameba either splits into two amebas, with probability p, or dies, with probability 1 − p. Let Xn be the number of amebas surviving in the nth generation (in particular Xn = 0 if the population has become extinct by that time). Let r be the probability of eventual extinction. (Finding r as a function of p is an instructive exercise. Hint: the probability that the descendants of a single ameba eventually die out is equal to the probability that the lines of descent of both of its immediate offspring eventually die out, given that the original ameba has split.) Then

    { r^Xn : n = 1, 2, 3, ... }
is a martingale with respect to { Xn: n = 1, 2, 3, ... }.
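
The first example can be checked by simulation. This minimal Python sketch (illustrative only; sample size and seed are arbitrary) estimates E[Yn+1 | Xn = k], which by the martingale property must equal Yn = k² − n:

    import random
    from collections import defaultdict

    random.seed(2)

    n, trials = 10, 200_000
    by_x = defaultdict(list)
    for _ in range(trials):
        x = 0
        for _ in range(n):
            x += random.choice((1, -1))            # X_n after n fair tosses
        x_next = x + random.choice((1, -1))        # one more toss: X_{n+1}
        by_x[x].append(x_next * x_next - (n + 1))  # Y_{n+1} = X_{n+1}^2 - (n+1)

    # E[Y_{n+1} | X_n = k] should equal Y_n = k^2 - n for every k.
    for k in sorted(by_x):
        vals = by_x[k]
        print(f"X_n = {k:3d}:  E[Y_n+1] ≈ {sum(vals) / len(vals):8.3f}   (Y_n = {k * k - n})")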
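
For the Pólya-urn example, the martingale property implies in particular that the expected red fraction E[Yn] never moves from its starting value r/(r + b). A minimal sketch (the urn composition, sample size, and seed are arbitrary illustrative choices):

    import random

    random.seed(3)

    def red_fraction(r, b, steps):
        """Run the Polya urn and return Y_steps = red / (steps + r + b)."""
        red, blue = r, b
        for _ in range(steps):
            if random.random() < red / (red + blue):
                red += 1                  # drew red: add an extra red marble
            else:
                blue += 1                 # drew blue: add an extra blue marble
        return red / (red + blue)

    r, b, runs = 2, 3, 100_000
    for steps in (1, 10, 50):
        mean = sum(red_fraction(r, b, steps) for _ in range(runs)) / runs
        print(f"E[Y_{steps}] ≈ {mean:.4f}   (r/(r+b) = {r / (r + b):.4f})")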
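
For the likelihood-ratio example, the martingale property forces E[Yn] = 1 for every n whenever f is the true density. The sketch below uses two hypothetical normal densities for f and g, chosen purely for illustration:

    import math
    import random

    random.seed(4)

    # Hypothetical choice for illustration: f is the density of N(0, 1),
    # g the density of N(0.5, 1).
    def f(x):
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def g(x):
        return math.exp(-(x - 0.5) ** 2 / 2) / math.sqrt(2 * math.pi)

    def likelihood_ratio(n):
        """Y_n = g(X_1)...g(X_n) / (f(X_1)...f(X_n)) with X_i drawn from f."""
        y = 1.0
        for _ in range(n):
            x = random.gauss(0, 1)        # sample from the true density f
            y *= g(x) / f(x)
        return y

    # Each factor has mean 1 under f, so E[Y_n] = 1 for all n.
    runs = 200_000
    for n in (1, 5, 10):
        mean = sum(likelihood_ratio(n) for _ in range(runs)) / runs
        print(f"E[Y_{n}] ≈ {mean:.3f}")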
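
For the ameba example, conditioning on the first step gives the equation r = (1 − p) + p·r², whose relevant root is min(1, (1 − p)/p). The sketch below (p, seed, and sample sizes are arbitrary choices) finds r by fixed-point iteration and checks by simulation that E[r^Xn] stays constant across generations:

    import random

    def extinction_prob(p, iterations=200):
        """Solve r = (1 - p) + p * r**2 by fixed-point iteration from r = 0."""
        r = 0.0
        for _ in range(iterations):
            r = (1 - p) + p * r * r
        return r    # converges to min(1, (1 - p) / p)

    def nth_generation(p, n):
        """Size of generation n of a branching population, starting from 1."""
        x = 1
        for _ in range(n):
            # each ameba independently splits into 2 (prob p) or dies
            x = sum(2 for _ in range(x) if random.random() < p)
        return x

    random.seed(5)
    p = 0.6
    r = extinction_prob(p)
    print("r =", round(r, 4))    # (1 - p) / p = 2/3 for p = 0.6

    # The martingale property gives E[r**X_n] = E[r**X_1] = r for every n.
    runs = 100_000
    for n in (1, 3, 6):
        mean = sum(r ** nth_generation(p, n) for _ in range(runs)) / runs
        print(f"E[r^X_{n}] ≈ {mean:.4f}")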

Convergence of martingales

A basic result is the martingale convergence theorem, due to Doob: if { Yn : n = 1, 2, 3, ... } is a martingale and the expected absolute values E[|Yn|] are bounded (i.e., there is a constant M with E[|Yn|] ≤ M for all n), then Yn converges with probability 1 to some random variable Y. In particular, every martingale whose values lie in a bounded interval converges almost surely; the Pólya-urn proportions Yn above always lie between 0 and 1, so they converge. The boundedness hypothesis cannot simply be dropped: the unbiased random walk of the first example is a martingale, but with probability 1 it visits every integer infinitely often, and therefore it does not converge.
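
A minimal sketch of the contrast (seed and horizons are arbitrary): a single Pólya-urn path settles toward a limiting fraction, while a random-walk path keeps wandering over an ever-wider range:

    import random

    random.seed(6)

    # Bounded martingale: one Polya-urn path; the fraction settles down.
    red, blue = 1, 1
    for n in (10, 100, 1_000, 10_000):
        while red + blue < n + 2:         # continue the same path up to step n
            if random.random() < red / (red + blue):
                red += 1
            else:
                blue += 1
        print(f"urn fraction after {n:6d} steps: {red / (red + blue):.4f}")

    # Unbounded martingale: a random-walk path; its range keeps growing.
    x, lo, hi = 0, 0, 0
    for _ in range(10_000):
        x += random.choice((1, -1))
        lo, hi = min(lo, x), max(hi, x)
    print("walk range over 10,000 steps:", lo, "to", hi)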

Martingales and stopping times

A stopping time with respect to a sequence of random variables X1, X2, ... is a random variable τ with the property that for each t, the occurrence or non-occurrence of the event τ = t depends only on the values of X1, X2, ..., Xt. The intuition behind the definition is that at any particular time t, you can look at the sequence so far and tell whether it is time to stop. An example in real life might be the time at which a gambler leaves the gambling table: it may depend on his previous winnings (for example, he might leave only when he goes broke), but he can't choose to go or stay based on the outcome of games that haven't been played yet.

Some mathematicians defined the concept of stopping time by requiring only that the occurrence or non-occurrence of the event τ = t be probabilistically independent of Xt+1, Xt+2, Xt+3, ...., but not that it be completely determined by the history of the process up to time t. That is a weaker condition than the one appearing in the paragraph above, but is strong enough to serve in some of the proofs in which stopping times are used.

The optional stopping theorem says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value. One version of the theorem is given below:

Let X1, X2, ... be a martingale and τ a stopping time with respect to X1, X2, .... If (a) Pr[τ < ∞] = 1, (b) E[τ] < ∞, and (c) there exists a constant c such that |Xi+1 − Xi| ≤ c for all i, then E[Xτ] = E[X1].

One standard application of the theorem is the gambler's-ruin calculation. Let Xn be an unbiased random walk, as in the first example, started at X1 = a, and let τ be the first time the walk reaches either 0 or b, where 0 < a < b. The hypotheses of the theorem hold (E[τ] is finite and the increments are bounded by 1), so E[Xτ] = a; since Xτ is either 0 or b, the walk reaches b before 0 with probability a/b. The theorem also makes precise the impossibility result mentioned in the history section: a gambler playing a sequence of fair games with bounded bets, who quits at a stopping time of finite expectation, leaves with expected fortune equal to his starting fortune.
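
A sketch of the gambler's-ruin computation (the endpoints a = 3, b = 10 and the sample size are arbitrary illustrative choices):

    import random

    def hits_b_first(a, b):
        """Run one unbiased walk from a until it reaches 0 or b."""
        x = a
        while 0 < x < b:
            x += random.choice((1, -1))
        return x == b

    random.seed(7)
    a, b, trials = 3, 10, 100_000
    est = sum(hits_b_first(a, b) for _ in range(trials)) / trials
    print(f"P(reach {b} before 0) ≈ {est:.3f}   (optional stopping: a/b = {a / b:.3f})")
    # E[X_tau] = b * (a/b) + 0 * (1 - a/b) = a, the starting value.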

Submartingales and supermartingales

A submartingale is like a martingale, except that the current value of the random variable is always less than or equal to the expected future value. Formally, this means

    Xn ≤ E[Xn+1 | X1, ..., Xn].

Similarly, in a supermartingale, the current value is always greater than or equal to the expected future value:

    Xn ≥ E[Xn+1 | X1, ..., Xn].

Examples of submartingales and supermartingales

  • Every martingale is also a submartingale and a supermartingale; conversely, any sequence that is both a submartingale and a supermartingale is a martingale.

  • If Xn is the unbiased random walk of the first example, then Xn² is a submartingale, since E[(Xn+1)² | X1, ..., Xn] = Xn² + 1 ≥ Xn². (Subtracting n, as in the first example of a martingale above, restores the martingale property; a simulation sketch follows this list.)

  • A gambler's fortune in a game that is unfavorable to him is a supermartingale; in a game favorable to him it is a submartingale.
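
As a check on the second example, here is a minimal sketch (horizon, sample size, and seed are arbitrary): the averaged value of Xn² drifts upward like n, while Xn² − n stays flat, matching the submartingale and martingale properties respectively:

    import random

    random.seed(8)

    trials, steps = 50_000, 20
    sq_sum = [0.0] * steps
    for _ in range(trials):
        x = 0
        for n in range(steps):
            x += random.choice((1, -1))
            sq_sum[n] += x * x            # accumulate X_{n+1}^2 over trials

    for n in (4, 9, 19):
        mean_sq = sq_sum[n] / trials      # ≈ n + 1, i.e. E[X_m^2] = m
        print(f"E[X_{n + 1}^2] ≈ {mean_sq:6.2f}   E[X_{n + 1}^2 - {n + 1}] ≈ {mean_sq - (n + 1):5.2f}")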