Pipeline (computing)
In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements.
Computer-related pipelines include:
* Instruction pipelines, such as the classic RISC pipeline, which are used in processors to allow overlapping execution of multiple instructions with the same circuitry. The circuitry is usually divided up into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time.
* Graphics pipelines, found in most graphics cards, which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.).
* Software pipelines, consisting of multiple processes arranged so that the output stream of one process is automatically and promptly fed as the input stream of the next one. Unix pipelines are the classical implementation of this concept; a minimal sketch of the idea in code follows below.
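The same idea can be sketched in ordinary code. Below is a minimal, hypothetical Python sketch of a software pipeline built from generators, loosely analogous to a Unix pipeline such as cat file | grep ERROR | wc -l. The stage names, the file name, and the search string are invented for illustration, and unlike real Unix pipes all stages here run in a single process.

```python
def read_lines(path):
    """First stage: stream lines from a file, one at a time."""
    with open(path) as handle:
        for line in handle:
            yield line.rstrip("\n")

def matching(lines, pattern):
    """Second stage: keep only the lines containing the pattern."""
    for line in lines:
        if pattern in line:
            yield line

def count(lines):
    """Final stage: consume the stream and count its items."""
    return sum(1 for _ in lines)

# Each stage starts processing as soon as the previous one yields an item,
# so data flows through the pipeline one line at a time:
# total = count(matching(read_lines("example.log"), "ERROR"))
```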
Concept and motivation
Pipelining is a natural concept in everyday life, e.g. on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be worked on at a time would take 105 minutes. Using the assembly line instead, the total time to complete all three is 75 minutes. At this point, additional cars will come off the assembly line at 20-minute intervals.
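With the stage times given above, the saving can be made explicit (a small worked calculation; the general point is that the pipelined rate is set by the slowest stage):

```latex
\begin{align*}
T_{\text{sequential}} &= 3 \times (20 + 5 + 10) = 105 \text{ minutes},\\
T_{\text{pipelined}}  &= (20 + 5 + 10) + (3 - 1) \times 20 = 75 \text{ minutes},
\end{align*}
```

after which the line delivers one finished car per 20 minutes, the duration of its slowest step (engine installation).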
Costs, drawbacks, and benefits
As the assembly line example shows, pipelining does not decrease the time needed to process a single datum; it only increases the throughput of the system when processing a stream of data. Deep pipelining also increases latency, the time required for a signal to propagate through the full pipe.
A pipelined system typically requires more resources (circuit elements, processing units, computer memory, etc.) than one that executes one batch at a time, because its stages cannot reuse the resources of a previous stage. Moreover, pipelining may increase the time it takes for an instruction to finish.
Design considerations
One key aspect of pipeline design is balancing the pipeline stages. Using the assembly line example, we could save more time if engine installation and wheel installation each took 15 minutes. Although the system latency would still be 35 minutes, we would be able to output a new car every 15 minutes, since throughput is limited by the slowest stage (see the formula below).
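In general terms (a rough first-order model that ignores buffering and stalls), for a pipeline whose stages take times t_1, ..., t_k:

```latex
\[
\text{latency} = \sum_{i=1}^{k} t_i,
\qquad
\text{throughput} \approx \frac{1}{\max_i t_i}.
\]
```

With stage times of 15, 5, and 15 minutes this gives the 35-minute latency and one car per 15 minutes quoted above.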
Another design consideration is the provision of adequate buffering between the pipeline stages — especially when the processing times are irregular, or when data items may be created or destroyed along the pipeline.
Implementations
Buffered, synchronous pipelines
Conventional microprocessors are synchronous circuits that use buffered, synchronous pipelines. In these pipelines, "pipeline registers" are inserted between pipeline stages, and are clocked synchronously. The time between each clock signal is set to be greater than the longest delay between pipeline stages, so that when the registers are clocked, the data that is written to them is the final result of the previous stage.
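As a software analogy, here is a hypothetical Python sketch of such a pipeline: the registers list stands in for the pipeline registers, and each iteration of the outer loop corresponds to one clock edge on which every register captures the output of the stage in front of it. The function and variable names are invented for illustration; real pipeline registers are hardware latches, not Python objects.

```python
def run_synchronous_pipeline(stages, inputs):
    """Toy, cycle-level model of a buffered, synchronous pipeline.

    `stages` is a list of single-argument functions; between consecutive
    stages sits a pipeline register, modelled by the `registers` list.
    Each loop iteration is one clock edge, so every stage works on a
    different item during the same cycle.
    """
    registers = [None] * (len(stages) + 1)       # registers[0] latches new input
    outputs = []
    stream = iter(inputs)
    for _ in range(len(inputs) + len(stages)):   # enough cycles to drain the pipe
        # Update back to front so each value advances exactly one stage per
        # clock, emulating all registers being clocked simultaneously.
        for i in reversed(range(len(stages))):
            registers[i + 1] = stages[i](registers[i]) if registers[i] is not None else None
        registers[0] = next(stream, None)        # latch the next input, if any
        if registers[-1] is not None:            # last register holds a finished result
            outputs.append(registers[-1])
    return outputs

# Example: two stages, three inputs -> [4, 6, 8]
# run_synchronous_pipeline([lambda x: x + 1, lambda x: x * 2], [1, 2, 3])
```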
Buffered, asynchronous pipelines
Asynchronous pipelines are used in asynchronous circuits, and have their pipeline registers clocked asynchronously. Generally speaking, they use a request/acknowledge system, wherein each stage can detect when it is "finished". When a stage is finished and the next stage has sent it a "request" signal, the stage sends an "acknowledge" signal to the next stage, and a "request" signal to the previous stage. When a stage receives an "acknowledge" signal, it clocks its input registers, thus reading in the data from the previous stage.
The AMULET microprocessor is an example of a microprocessor that uses buffered, asynchronous pipelines.
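The handshake described above can be imitated in software. The following hypothetical Python sketch uses one thread per stage and a pair of events per link, with the same naming convention as above: the downstream stage raises "request" when it is ready for data, and the upstream stage answers with "acknowledge" once its output is valid. It is a toy model only; real asynchronous circuits implement the handshake in hardware, and all names here are invented.

```python
import threading

class HandshakeLink:
    """One request/acknowledge link between two adjacent pipeline stages."""
    def __init__(self):
        self.data = None
        self.request = threading.Event()      # downstream: "I am ready for data"
        self.acknowledge = threading.Event()  # upstream: "data is valid, latch it"

    def send(self, value):
        """Called by the upstream stage once its result is finished."""
        self.request.wait()                   # wait until the next stage asks for data
        self.request.clear()
        self.data = value
        self.acknowledge.set()                # acknowledge: the data may be latched

    def receive(self):
        """Called by the downstream stage; models clocking its input registers."""
        self.request.set()                    # request data from the previous stage
        self.acknowledge.wait()               # wait for the acknowledge
        self.acknowledge.clear()
        return self.data

def stage(work, upstream, downstream):
    """One pipeline stage running in its own thread, with no global clock."""
    while True:
        item = upstream.receive()
        downstream.send(None if item is None else work(item))
        if item is None:                      # None is used as a shutdown sentinel
            return

def feed(link, items):
    for item in items:
        link.send(item)
    link.send(None)

if __name__ == "__main__":
    links = [HandshakeLink() for _ in range(3)]   # source -> stage 1 -> stage 2 -> sink
    threading.Thread(target=stage, args=(lambda x: x + 1, links[0], links[1])).start()
    threading.Thread(target=stage, args=(lambda x: x * 2, links[1], links[2])).start()
    threading.Thread(target=feed, args=(links[0], [1, 2, 3])).start()
    while (result := links[2].receive()) is not None:
        print(result)                             # prints 4, 6, 8
```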
Unbuffered pipelines
Unbuffered pipelines, called "wave pipelines", do not have registers in between pipeline stages. Instead, the delays in the pipeline are "balanced" so that, for each stage, the difference between the first stabilized output data and the last is minimized. Thus, data flows in "waves" through the pipeline, and each wave is kept as short (synchronous) as possible.
The maximum rate at which data can be fed into a wave pipeline is determined by the maximum difference in delay between the first and the last piece of data of a wave to come out of the pipe. If data is fed in faster than this, successive waves of data can interfere with each other.
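As a rough first-order bound (ignoring register setup time and clock skew), if d_max and d_min are the longest and shortest propagation delays through the logic, new waves can enter no faster than about once per d_max - d_min:

```latex
\[
T_{\min} \gtrsim d_{\max} - d_{\min},
\qquad
f_{\max} \lesssim \frac{1}{d_{\max} - d_{\min}}.
\]
```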
See also
* Throughput
* Parallelism
* Instruction pipeline
** Classic RISC pipeline
* Graphics pipeline
* Pipeline (software)
** Pipeline (Unix)
** Hartmann pipeline for VM
** BatchPipes for MVS
* Geometry pipelines
* XML pipeline