A color space encoding is a technique used to represent the colors of a particular color space in a format that digital computers can understand.

The problem of encoding colors into a format that digital computers recognize is limited by the binary number system; each bit can encode two distinct values, or colors. Two bits can encode 2^2 = 4 colors; eight bits can encode up to 2^8 = 256 colors.
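
Here's a quick C sketch of that doubling:

    #include <stdio.h>

    int main(void) {
        /* Each added bit doubles the number of encodable values. */
        for (int bits = 1; bits <= 8; bits++) {
            unsigned long colors = 1UL << bits;  /* 2^bits */
            printf("%d bit(s) -> %lu colors\n", bits, colors);
        }
        return 0;
    }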

Most modern computer graphics systems use an RGB encoding of 24 bits, sometimes called Truecolor. Typically, this encoding allocates eight bits each for red, green, and blue, permitting 256 different shades of each, or a total of approximately 16.7 million colors in all.
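
As a minimal sketch of this encoding in C, packing the three eight-bit channels into one integer (the 0xRRGGBB channel order here is just an assumption; real systems also use BGR and other orders):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack three 8-bit channels into one 24-bit Truecolor value. */
    uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
        return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
    }

    int main(void) {
        printf("orange = 0x%06X\n", (unsigned)pack_rgb(255, 165, 0)); /* 0xFFA500 */
        printf("total  = %lu colors\n", 1UL << 24);  /* 16,777,216 */
        return 0;
    }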

The average human is said to be able to distinguish several million different colors, depending upon the individual's perceptive capability and upon environmental conditions, so the 24-bit color space encoding is generally considered sufficient to represent all of the colors that humans are able to distinguish.

Most current computers have 32-bit processors. This means they can easily deal with numbers up to about four billion. They do this by representing each individual number as a pattern of zeros and ones... kind of like the example above.

Computers usually use television-style monitors as displays. Because the monitors are self-illuminated, they use _additive_ colour to generate the spectrum. The basic additive colours are red, green and blue. Combinations of these colours can form almost any colour that the human eye can see. Here's a simple mix...

red   green   blue
   \  /  \  /
  yellow  cyan
      \    /
      white

Mixing red and green gives yellow. Green and blue make cyan. And if you add cyan and yellow together, you get white. This is the opposite of what happens when you mix paints, or printer's ink. That's a _subtractive_ colour system, and there you start with cyan, yellow and magenta instead.
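
Here's a rough C sketch of that additive mixing, modelled as saturating per-channel addition -- a simplification, but it reproduces the mixes above:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t r, g, b; } Colour;

    /* Additive mixing: combine two lights by per-channel addition,
       clamped at the maximum brightness of 255. */
    Colour mix(Colour a, Colour b) {
        Colour out;
        int r = a.r + b.r, g = a.g + b.g, bl = a.b + b.b;
        out.r = (uint8_t)(r > 255 ? 255 : r);
        out.g = (uint8_t)(g > 255 ? 255 : g);
        out.b = (uint8_t)(bl > 255 ? 255 : bl);
        return out;
    }

    int main(void) {
        Colour red = {255, 0, 0}, green = {0, 255, 0}, blue = {0, 0, 255};
        Colour yellow = mix(red, green);   /* 255,255,0   */
        Colour cyan   = mix(green, blue);  /* 0,255,255   */
        Colour white  = mix(yellow, cyan); /* 255,255,255 */
        printf("white = %u,%u,%u\n",
               (unsigned)white.r, (unsigned)white.g, (unsigned)white.b);
        return 0;
    }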

By combining red, green and blue, you can make just about any colour. So, any colour displayed by your computer screen can be broken down into its red-green-blue, or RGB, components. The computer has 32 bits to work with. If we assign eight (another power of 2!) bits to each colour, we can fit them comfortably into this space. Like so:

1       8      16      24      32 bits
| red   | green | blue  | ?     | colours

That gives us 24 bits of colour resolution. 2 to the 24th power is... 16.7 million. "Wait a minute!", you say, "what about those leftover eight bits? Why not use them?" In the beginning it was a tradeoff between accuracy and convenience. Digital-to-analog converters got very expensive the more bits of accuracy you added to them. (They still do, but it's less pronounced.) This meant it was important to choose an accuracy that was good enough, but not so good that it became very expensive. Fortunately, eight bits fell right in the sweet spot. It also made the design of hardware to display these coloured pixels relatively easy and straightforward.
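
A small C sketch of pulling the channels back out of such a 32-bit pixel with shifts and masks, matching the layout drawn above: red in the lowest byte, then green, then blue, with the unused "?" byte on top (real frame buffers vary):

    #include <stdint.h>
    #include <stdio.h>

    /* Extract the 8-bit channels from a 32-bit pixel packed as 0x??BBGGRR. */
    void unpack_rgb(uint32_t pixel, uint8_t *r, uint8_t *g, uint8_t *b) {
        *r =  pixel        & 0xFF;
        *g = (pixel >> 8)  & 0xFF;
        *b = (pixel >> 16) & 0xFF;
    }

    int main(void) {
        uint8_t r, g, b;
        unpack_rgb(0x0000A5FFu, &r, &g, &b);  /* an orange pixel */
        printf("r=%u g=%u b=%u\n", (unsigned)r, (unsigned)g, (unsigned)b);
        return 0;
    }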

Some people did try other schemes, such as sixteen-bit colour, where one channel had a bit more than the others -- since 3 doesn't divide evenly into sixteen. Usually green got the extra bit, since the eye is most sensitive to it. Early PC graphics adapters called this "HiColour" mode. Other formats used ten bits per colour or even sixteen. These tended to be large and clumsy to work with, so they're not usually seen.
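
For illustration, here's the common 5-6-5 sixteen-bit layout sketched in C, with green taking the extra bit:

    #include <stdint.h>
    #include <stdio.h>

    /* Squeeze 8-bit channels into the 5-6-5 "HiColour" layout:
       5 bits red, 6 bits green, 5 bits blue in one 16-bit word. */
    uint16_t to_565(uint8_t r, uint8_t g, uint8_t b) {
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    int main(void) {
        printf("white: 0x%04X\n", (unsigned)to_565(255, 255, 255)); /* 0xFFFF */
        printf("red:   0x%04X\n", (unsigned)to_565(255, 0, 0));     /* 0xF800 */
        return 0;
    }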

So, for a while, those eight extra bits were wasted, casualties of convenience. Later on, though, a couple of smart guys were looking things over and realized that the last eight bits could be used to represent "coverage", also called "alpha". This controls the opacity of the pixel. Typically, alpha ranges from zero to 255, with zero meaning fully transparent and 255 meaning fully opaque. It doesn't mean much when you're just displaying the picture, but when you composite it with another -- it makes all the difference. More about this below. For now, rest assured that the eight bits did not go to waste in the end.
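
As a sketch of what compositing does with alpha, here's the classic per-channel "source over" blend. This is the textbook formula; any particular system may differ in details such as premultiplied alpha:

    #include <stdint.h>
    #include <stdio.h>

    /* Blend one channel of src over dst using an 8-bit alpha
       (0 = fully transparent, 255 = fully opaque). */
    uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha) {
        /* result = src*a + dst*(1 - a), with a = alpha/255 */
        return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
    }

    int main(void) {
        /* A half-transparent white pixel over a black background. */
        printf("blended channel = %u\n", (unsigned)blend(255, 0, 128));
        return 0;
    }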