An instruction pipeline is a technique used in modern microprocessors to enhance their performance.

Early, non-pipelined processors read one instruction at a time, executed it, and wrote the results back out, all as one step. The stepping of instructions through the processor was driven by a central clock, with many instructions taking several clock cycles to complete. An instruction consists of a number of steps, each performed by a different piece of hardware on the CPU. While the processor was adding numbers, for instance, the hardware dedicated to loading data from computer memory sat idle.

Pipelining seeks to improve performance by reducing this idle time. To do this, the CPU includes circuitry that examines each instruction and breaks it down into its sub-instructions. These smaller steps are then dispatched to the hardware in lockstep with the clock, with the pipeline controller watching the process.

For instance, a typical instruction to add two numbers might be ADD A, B, C, which adds the values found in memory locations A and B and then puts the result in memory location C. In a pipelined processor, the pipeline controller would break this into a series of sub-instructions similar to:

LOAD A, R1       ; read the value at memory location A into register R1
LOAD B, R2       ; read the value at memory location B into register R2
ADD R1, R2, R3   ; add R1 and R2, placing the result in R3
STORE R3, C      ; write R3 out to memory location C

The R locations are registers, a kind of temporary memory that is quick to access. The end result is the same: the numbers are added and the result is placed in C, and the time taken to drive a single addition to completion is no different than in the non-pipelined case.

The key to understanding the advantage of pipelining is to consider what happens when this ADD instruction is half-way done, at the ADD sub-instruction for instance. At this point the circuitry responsible for loading data from memory is no longer being used and would normally sit idle. Instead, the pipeline controller fetches the next instruction from memory and starts loading the data it needs into registers. That way, when the current ADD is complete, the data needed for the next ADD is already loaded and ready to go. Since the cost of accessing memory is high on modern machines, the overall effective speed of the machine can be greatly increased, because far fewer parts of the CPU sit idle.
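To put a rough number on the saving, here is a back-of-the-envelope sketch in Python. The cycle costs are invented purely for illustration (they do not describe any particular CPU); the point is only that the second instruction's loads run while the first instruction finishes:

	# Hypothetical per-step costs, chosen only to illustrate the overlap.
	LOAD_CYCLES  = 3    # fetch one operand from memory
	ADD_CYCLES   = 1    # perform the addition
	STORE_CYCLES = 1    # write the result back

	# One full "ADD A, B, C" sequence: two loads, an add, a store.
	one_add = 2 * LOAD_CYCLES + ADD_CYCLES + STORE_CYCLES   # 8 cycles

	# Non-pipelined: the second sequence starts only when the first ends.
	sequential = 2 * one_add                                # 16 cycles

	# Overlapped: the load hardware frees up at cycle 6, so the second
	# pair of loads runs during cycles 7-12 while the first instruction
	# executes and stores; its own add and store follow immediately.
	overlapped = 4 * LOAD_CYCLES + ADD_CYCLES + STORE_CYCLES  # 14 cycles

	print(sequential, overlapped)   # 16 vs 14

The gain looks small with only two instructions in flight; over a long stream the load hardware never goes idle and the savings compound.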

Each of these simple steps is usually called a pipeline stage; in the example above the pipeline is three stages long: a loader, an adder, and a storer. Pipelines of 7, 10, or even 20 stages (as in the Intel Pentium 4) are common in modern designs.
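If every stage takes one clock cycle, a pipeline with k stages finishes its first instruction after k cycles and then completes one more every cycle, so n instructions need k + n - 1 cycles instead of n times k. A quick sketch of that arithmetic, assuming an ideal pipeline with no stalls:

	def sequential_cycles(n, k):
	    """Each instruction runs all k stages before the next starts."""
	    return n * k

	def pipelined_cycles(n, k):
	    """Ideal k-stage pipeline: fill it once, then one result per cycle."""
	    return k + n - 1

	n, k = 1000, 3
	print(sequential_cycles(n, k))   # 3000 cycles
	print(pipelined_cycles(n, k))    # 1002 cycles, nearly a 3x speedup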

To better visualize the concept, we can look at a theoretical three-stage pipeline:

	Stage     Description
	Load      Read instruction from memory
	Execute   Execute instruction
	Store     Store result in memory and/or registers

and a pseudo-code assembly listing to be executed:

	LOAD  #40,A      ; load the value 40 into A
	MOVE  A,B        ; copy A into B
	ADD   #20,B      ; add 20 to B
	STORE B, 0x300   ; store B into memory cell 0x300

This is how it would be executed:

Clock 1
	Load     Execute   Store
	LOAD     -         -

The LOAD instruction is fetched from memory.

Clock 2
	Load     Execute   Store
	MOVE     LOAD      -

The LOAD instruction is executed, while the MOVE instruction is fetched from memory.

Clock 3
	Load     Execute   Store
	ADD      MOVE      LOAD

The LOAD instruction is in the Store stage, where its result (the number 40) will be stored in register A. In the meantime, the MOVE instruction is being executed. Since it must move the contents of A into B, it has to wait for the LOAD instruction to finish.

Clock 4
	Load     Execute   Store
	STORE    ADD       MOVE

The STORE instruction enters the Load stage, while the MOVE instruction is finishing off in the Store stage and the ADD is executing.

And so on. Note that sometimes an instruction will depend on the result of another one (as with our MOVE example). In that case a pipeline stall occurs: the pipeline stops, waiting for the offending instruction to finish before resuming work. When the pipeline flows freely, the throughput of the processor is one instruction completed per clock cycle, even though each individual instruction has been worked on for several cycles by the time it finishes.
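The clock-by-clock tables above can be reproduced with a few lines of code. This Python sketch was written for this article and models no real CPU; like the walkthrough, it assumes the value being stored can feed the Execute stage in the same clock, so this particular dependent sequence flows without bubbles. A machine without that shortcut would hold MOVE in Execute for an extra cycle until LOAD's result was actually written, which is exactly a stall:

	# Step the four instructions of the listing through the three stages.
	program = ["LOAD", "MOVE", "ADD", "STORE"]
	load = execute = store = None
	clock = 0
	while program or load or execute or store:
	    clock += 1
	    store, execute = execute, load                # each stage hands on
	    load = program.pop(0) if program else None    # fetch the next one
	    print(f"Clock {clock}:  Load={load or '-':7}"
	          f" Execute={execute or '-':7} Store={store or '-'}")

Running this prints one line per clock, matching the tables above, and takes six clocks in total: three to fill the pipeline, then one instruction completing per clock.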

A long pipeline suffers from latency: it takes a (relatively speaking) long time for an instruction to travel from the moment it is first read to the moment it finishes. This becomes a problem when the executed code contains many branches: the processor cannot know where to read the next instruction from, and must wait for the branch instruction to finish, leaving the pipeline behind it empty. After the branch is resolved, the next instruction has to travel the whole pipeline before its result becomes available and the processor appears to "work" again. Many branch prediction techniques have been employed to minimize this problem.
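One of the simplest and most common of these techniques is the two-bit saturating counter, a standard textbook scheme in which each branch outcome nudges a small counter toward "taken" or "not taken", so a single surprise does not flip an otherwise stable prediction. A minimal sketch (the code is illustrative, not drawn from any real processor):

	# Two-bit saturating counter: states 0-1 predict "not taken",
	# states 2-3 predict "taken". Each real outcome moves the state
	# one step, so one mispredicted loop exit does not flip an
	# otherwise stable prediction.
	class TwoBitPredictor:
	    def __init__(self):
	        self.state = 2              # start weakly predicting "taken"

	    def predict(self):
	        return self.state >= 2      # True means "predict taken"

	    def update(self, taken):
	        if taken:
	            self.state = min(3, self.state + 1)
	        else:
	            self.state = max(0, self.state - 1)

	# A typical loop branch: taken nine times, then not taken once.
	outcomes = ([True] * 9 + [False]) * 3
	predictor = TwoBitPredictor()
	hits = 0
	for taken in outcomes:
	    if predictor.predict() == taken:
	        hits += 1
	    predictor.update(taken)
	print(f"{hits}/{len(outcomes)} branches predicted correctly")  # 27/30

Each correct prediction keeps the pipeline full; each miss costs roughly one pipeline's worth of cycles while the correct instructions are fetched and travel the stages.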