A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are in use.

A major advantage of positional numeral systems over other systems of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number by each digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is much simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing all the products and then adding them together in a second phase, computers add each product to the result as soon as it is computed. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n^2 operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2).
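For illustration, here is a minimal sketch of long multiplication in Python, operating on lists of base-10 digits; the function name and the digit-list representation are choices made for this example only.

 def long_multiply(x, y):
     # x and y are lists of decimal digits, least significant first,
     # e.g. 123 is represented as [3, 2, 1]
     result = [0] * (len(x) + len(y))
     for i, xd in enumerate(x):
         carry = 0
         for j, yd in enumerate(y):
             # add each digit product into the result as soon as it is
             # computed, rather than storing shifted rows and summing later
             total = result[i + j] + xd * yd + carry
             result[i + j] = total % 10
             carry = total // 10
         result[i + len(y)] += carry
     return result  # the product, again least significant digit first

The two nested loops over the digits are where the n^2 operation count comes from.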

An old method of multiplication that does not require multiplication tables is the peasant multiplication algorithm; this is in effect a method of multiplying in base 2.
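A sketch of the peasant method in Python (the function name is illustrative): repeatedly halve one factor and double the other, adding the doubled value to the total whenever the halved factor is odd.

 def peasant_multiply(a, b):
     total = 0
     while a > 0:
         if a % 2 == 1:   # the halved column is odd: keep this row
             total += b
         a //= 2          # halve, discarding the remainder
         b *= 2           # double
     return total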

For systems that need to multiply huge numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems employ Karatsuba multiplication, which was discovered in 1962 and proceeds as follows: suppose we work in base 10 (unlike most computer implementations) and want to multiply two n-digit numbers x and y, and assume n = 2m is even (if not, add zeros at the left end). We can write

x = x1·10^m + x2
y = y1·10^m + y2

with m-digit numbers x1, x2, y1 and y2. The product is given by

xy = x1·y1·10^(2m) + (x1·y2 + x2·y1)·10^m + x2·y2
so we need to quickly determine the numbers x1·y1, x1·y2 + x2·y1 and x2·y2. The heart of Karatsuba's method lies in the observation that this can be done with only three rather than four multiplications:
  1. compute x1·y1, call the result A
  2. compute x2·y2, call the result B
  3. compute (x1 + x2)·(y1 + y2), call the result C
  4. compute C - A - B; since C = x1·y1 + x1·y2 + x2·y1 + x2·y2, this number is equal to x1·y2 + x2·y1.
To compute these three products of m-digit numbers, we can employ the same trick again, effectively using recursion. Once the numbers are computed, we need to add them together, which takes about n operations.
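A minimal recursive sketch in Python follows. It leans on Python's built-in arbitrary-precision integers to hold the pieces, whereas a real bignum library would work on digit arrays; the function name and the cut-off are illustrative.

 def karatsuba(x, y):
     # below a small threshold, fall back to ordinary multiplication
     # (the built-in operator stands in for long multiplication here)
     if x < 10 or y < 10:
         return x * y
     m = max(len(str(x)), len(str(y))) // 2
     x1, x2 = divmod(x, 10 ** m)   # x = x1*10^m + x2
     y1, y2 = divmod(y, 10 ** m)   # y = y1*10^m + y2
     a = karatsuba(x1, y1)                  # A = x1*y1
     b = karatsuba(x2, y2)                  # B = x2*y2
     c = karatsuba(x1 + x2, y1 + y2)        # C = (x1 + x2)(y1 + y2)
     # C - A - B equals x1*y2 + x2*y1, the middle coefficient
     return a * 10 ** (2 * m) + (c - a - b) * 10 ** m + b

For example, karatsuba(1234, 5678) splits the operands at m = 2 and returns 7006652 after three recursive multiplications instead of four.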

If T(n) denotes the time it takes to multiply two n-digit numbers with Karatsuba's method, then we can write

T(n) = 3 T(n/2) + cn + d
for some constants c and d, and this recurrence relation can be solved, giving a time complexity of Θ(n^(ln(3)/ln(2))). The number ln(3)/ln(2) is approximately 1.585, so this method is significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba multiplication is not very fast for small values of n; typical implementations therefore switch to long multiplication if n is below some threshold.
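The exponent can be seen by unrolling the recurrence. Ignoring the constant d and rounding issues, with k = log2(n):

 T(n) = 3 T(n/2) + c·n
      = 9 T(n/4) + c·n·(1 + 3/2)
      = ...
      = 3^k T(1) + c·n·(1 + 3/2 + ... + (3/2)^(k-1))
      = Θ(3^(log2 n))

and 3^(log2 n) = n^(log2 3) = n^(ln(3)/ln(2)).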

It is possible to verify experimentally whether a given system uses Karatsuba's method or long multiplication: take your favorite two 100,000-digit numbers, multiply them and measure the time it takes. Then take your favorite two 200,000-digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second multiplication will take about three times as long as the first; if long multiplication is being used, it will take about four times as long.
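Such an experiment might look as follows in Python; the digit counts and the printed message are illustrative. (CPython's built-in integer multiplication, for instance, switches to Karatsuba for large operands, so there the ratio should come out near three.)

 import random
 import time

 def time_multiply(digits):
     # time a single multiplication of two random numbers with the
     # given number of decimal digits
     x = random.randrange(10 ** (digits - 1), 10 ** digits)
     y = random.randrange(10 ** (digits - 1), 10 ** digits)
     start = time.perf_counter()
     _ = x * y
     return time.perf_counter() - start

 t1 = time_multiply(100_000)
 t2 = time_multiply(200_000)
 # a ratio near 3 suggests Karatsuba; a ratio near 4, long multiplication
 print(f"doubling the size multiplied the time by {t2 / t1:.1f}")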

Another method of multiplication is the Toom-Cook algorithm (also called Toom-3), which generalizes Karatsuba's approach by splitting each number into three parts rather than two.

There exist even faster algorithms, based on the fast Fourier transform. The idea, due to Strassen (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings. Instead of computing a convolution, one can first compute the discrete Fourier transforms, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See convolution theorem.) The fastest known method based on this idea was described in 1971 by Schönhage and Strassen and has a time complexity of Θ(n ln(n) ln(ln(n))). These approaches are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. The GIMPS distributed Internet prime search project deals with numbers having several million digits and employs a Fourier transform based multiplication algorithm. Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding-error problems, since they use exact modular arithmetic instead of floating-point complex numbers.
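The following Python sketch shows the convolution idea with a textbook recursive FFT over complex floating-point numbers. It is only a demonstration: floating-point rounding limits the size of numbers it can handle, which is exactly the problem the number-theoretic transforms mentioned above avoid.

 import cmath

 def fft(a, invert=False):
     # textbook recursive radix-2 Cooley-Tukey transform
     n = len(a)
     if n == 1:
         return a
     even = fft(a[0::2], invert)
     odd = fft(a[1::2], invert)
     sign = -1 if invert else 1
     out = [0j] * n
     for k in range(n // 2):
         w = cmath.exp(sign * 2j * cmath.pi * k / n)
         out[k] = even[k] + w * odd[k]
         out[k + n // 2] = even[k] - w * odd[k]
     return out

 def fft_multiply(x, y):
     # digit lists, least significant digit first
     xd = [int(d) for d in reversed(str(x))]
     yd = [int(d) for d in reversed(str(y))]
     n = 1
     while n < len(xd) + len(yd):   # pad to a power of two
         n *= 2
     fx = fft([complex(d) for d in xd + [0] * (n - len(xd))])
     fy = fft([complex(d) for d in yd + [0] * (n - len(yd))])
     # multiply entry by entry, then transform back (dividing by n)
     conv = fft([a * b for a, b in zip(fx, fy)], invert=True)
     digits = [round(c.real / n) for c in conv]
     # propagate carries to turn the convolution back into base-10 digits
     result, carry = 0, 0
     for i, d in enumerate(digits):
         carry += d
         result += (carry % 10) * 10 ** i
         carry //= 10
     return result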

All the above multiplication algorithms can also be used to multiply polynomials.

A simple improvement to the basic recursive multiplication algorithm:

x·0 = 0
x·y = x + x·(y-1)

where x is an arbitrary quantity and y is a natural number, is to use:

x·0 = 0
x·y = 2x·(y/2), if y is divisible by 2
x·y = x + 2x·(y/2), if y is not divisible by 2 (using integer division for y/2)

The major improvement in this algorithm arises because the number of operations required is O(log y) rather than O(y). For numbers which can be represented directly as computer words, a further benefit is that multiplying by 2 is equivalent to an arithmetic shift left, while dividing by 2 is equivalent to an arithmetic shift right. Clearly the major benefits arise when y is very large, in which case it cannot be represented as a single computer word.
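In Python the rules above might be written with shifts as follows (a sketch for non-negative integer y; the function name is illustrative):

 def shift_multiply(x, y):
     if y == 0:
         return 0
     if y % 2 == 0:
         return shift_multiply(x << 1, y >> 1)       # x·y = 2x·(y/2)
     return x + shift_multiply(x << 1, y >> 1)       # x·y = x + 2x·(y/2)

Each call halves y, so the recursion depth, and hence the number of additions, is O(log y).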

This may not help so much for multiplication by real or complex values, but is useful for multiplication of very large integers, which are supported in some programming languages such as Haskell, Ruby, and Common Lisp.

External links:

Multiplication Algorithms used by GMP