For historical reasons, and in order to have application to the solution of Diophantine equations, results in number theory have been scrutinised more than in other branches of mathematics to see whether their content is effectively computable. This, for example, brings into question any use of big O notation and its implied constants: are the assertions pure existence theorems for such constants, or can one recover a version in which (say) 1000 takes the place of the implied constant?
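As a schematic contrast (the functions and the particular numbers here are illustrative, not drawn from any specific theorem), an ineffective statement only asserts that some constant exists, whereas an effective version names one:

\[
f(x) = O(g(x)) \quad\Longleftrightarrow\quad \exists\, C > 0,\ \exists\, x_0:\ |f(x)| \le C\, g(x) \ \text{ for all } x \ge x_0,
\]

as opposed to an effective form such as

\[
|f(x)| \le 1000\, g(x) \quad \text{for all } x \ge 2.
\]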

Many of the principal results of analytic number theory proved in the period 1900-1950 were in fact ineffective. These included lower bounds for class numbers (ideal class groups grow, in a suitable sense, for certain families of number fields), and bounds for the best rational approximations to algebraic numbers in terms of their denominators. The latter could be read quite directly as results on Diophantine equations, after the work of Thue. The inequality used to construct Liouville numbers is effective, because its proof applies the mean value theorem explicitly; but the improvements (leading to what is now the Thue-Siegel-Roth theorem) were not.
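For example (standard statements, with the constants written generically), Liouville's inequality for a real algebraic number \(\alpha\) of degree \(d \ge 2\) comes with a constant \(c(\alpha)\) computable from the minimal polynomial of \(\alpha\):

\[
\left|\alpha - \frac{p}{q}\right| > \frac{c(\alpha)}{q^{d}} \quad\text{for all rationals } \frac{p}{q},
\]

whereas the Thue-Siegel-Roth improvement of the exponent states that for every \(\epsilon > 0\),

\[
\left|\alpha - \frac{p}{q}\right| > \frac{C(\alpha,\epsilon)}{q^{2+\epsilon}}
\]

holds for some \(C(\alpha,\epsilon) > 0\) which the proof gives no way to compute.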

Later results, particularly those of Baker, changed the position somewhat. Qualitatively weaker theorems, but with explicit constants, can actually be applied, in conjunction with machine computation, to prove that lists of solutions (suspected to be complete) are in fact the entire solution set.
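As a schematic illustration (not Baker's actual bounds), suppose an effective theorem supplies a bound B such that every integer solution of a Mordell equation y^2 = x^3 + k satisfies |x| <= B; an exhaustive machine search below that bound then certifies that a known list of solutions is complete. The equation, the value of k, and the bound used below are illustrative assumptions.

```python
from math import isqrt

def mordell_solutions(k: int, bound: int) -> list[tuple[int, int]]:
    """Enumerate all integer solutions (x, y) of y^2 = x^3 + k with |x| <= bound.

    If an effective theorem guarantees every solution satisfies |x| <= bound,
    this exhaustive search certifies that the returned list is complete.
    """
    solutions = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + k
        if rhs < 0:
            continue  # y^2 cannot be negative
        y = isqrt(rhs)
        if y * y == rhs:
            solutions.append((x, y))
            if y != 0:
                solutions.append((x, -y))
    return solutions

# Illustrative use: y^2 = x^3 - 2 has (3, +-5) as its only integer solutions;
# the bound 10**4 here stands in for an effective (typically far larger) bound.
print(mordell_solutions(-2, 10**4))
```

In actual applications the bounds coming from Baker's method are usually far too large for direct enumeration, and are first reduced, for instance by continued-fraction or lattice-basis reduction techniques, before a search of this kind becomes feasible.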

The difficulties here were met by radically different proof techniques, taking much more care about proofs by contradiction. The logic involved is closer to proof theory than to that of computability theory and recursive functions. It is rather loosely conjectured that the difficulties may lie in the realm of computational complexity theory. Ineffective results are still being proved in the shape "A or B", where we have no way of telling which alternative holds; the classical lower bounds for class numbers are of this kind, since their proofs split according to whether certain hypothetical zeros of L-functions exist.