Graphics pipeline
In 3D computer graphics, the terms graphics pipeline and rendering pipeline most commonly refer to the current state-of-the-art method of rasterization-based rendering as supported by commodity graphics hardware[1]. The graphics pipeline typically accepts some representation of three-dimensional primitives as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, both describing very similar graphics pipelines.
Stages of the graphics pipeline
Generations of graphics pipelines
Graphics pipelines constantly evolve. This article describes them as they can be found in OpenGL 4.2 and Direct3D 11.
Transformation
This stage consumes data about polygons with vertices, edges and faces that constitute the whole scene. A matrix controls the linear transformations (scaling, rotation, translation, etc.) and viewing transformations (world and view space) that are to be applied to this data.
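As an illustrative sketch (using NumPy rather than any particular graphics API), the following code composes a scale, a rotation, and a translation into a single 4x4 matrix and applies it to a vertex in homogeneous coordinates; the function and variable names are hypothetical:

```python
import numpy as np

def make_model_matrix(scale, angle_z, translation):
    """Compose scale -> rotate (about Z) -> translate into one 4x4 matrix."""
    s = np.diag([scale, scale, scale, 1.0])
    c, si = np.cos(angle_z), np.sin(angle_z)
    r = np.array([[c, -si, 0, 0],
                  [si,  c, 0, 0],
                  [0,   0, 1, 0],
                  [0,   0, 0, 1]])
    t = np.eye(4)
    t[:3, 3] = translation
    return t @ r @ s  # applied right-to-left: scale, then rotate, then translate

# Vertices get homogeneous coordinates (w = 1) so that translation can be
# expressed as a matrix multiply, just like the other transforms.
vertex = np.array([1.0, 0.0, 0.0, 1.0])
m = make_model_matrix(2.0, np.pi / 2, [0.0, 0.0, -5.0])
print(m @ vertex)  # -> approximately [0, 2, -5, 1]
```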
Per-vertex lighting
For more details on this topic, see Vertex shader.
Geometry in the complete 3D scene is lit according to the defined locations of light sources, reflectance, and other surface properties. Current hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered. The lighting values between vertices are then interpolated during rasterization. Per-fragment (i.e. per-pixel) lighting can be done on modern graphics hardware as a post-rasterization process by means of a shader program.
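A minimal sketch of per-vertex diffuse (Lambertian) lighting, with illustrative names and values rather than any particular API's:

```python
import numpy as np

def vertex_lighting(positions, normals, light_pos, light_color, albedo):
    """Intensity is computed only at each vertex; the rasterizer interpolates it."""
    to_light = light_pos - positions                        # per-vertex light vectors
    to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
    n_dot_l = np.clip(np.sum(normals * to_light, axis=1), 0.0, 1.0)
    return albedo * light_color * n_dot_l[:, None]          # one RGB value per vertex

positions = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
normals = np.tile([0.0, 0, 1], (3, 1))
print(vertex_lighting(positions, normals,
                      np.array([0.0, 0, 10]),    # light position
                      np.array([1.0, 1, 1]),     # white light
                      np.array([0.8, 0.2, 0.2])))  # reddish surface
```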
Viewing transformation or normalizing transformation
Objects are transformed from 3D world-space coordinates into a 3D coordinate system based on the position and orientation of a virtual camera. This results in the original 3D scene as seen from the camera's point of view, defined in what is called eye space or camera space. The normalizing transformation is the mathematical inverse of the viewing transformation, and maps from an arbitrary user-specified coordinate system (u, v, w) to a canonical coordinate system (x, y, z).
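The viewing transformation is commonly built as a "look-at" matrix: construct the camera's basis vectors, then invert the camera's rigid motion. The following helper is a sketch patterned after, but not taken from, any specific library:

```python
import numpy as np

def look_at(eye, target, up):
    """Build a view matrix that moves the camera to the origin, looking down -Z."""
    f = target - eye
    f /= np.linalg.norm(f)                          # forward direction
    s = np.cross(f, up); s /= np.linalg.norm(s)     # right direction
    u = np.cross(s, f)                              # true up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f  # rotation rows (inverse = transpose)
    view[:3, 3] = -view[:3, :3] @ eye               # translation brings eye to origin
    return view

v = look_at(np.array([0.0, 0, 5]), np.zeros(3), np.array([0.0, 1, 0]))
print(v @ np.array([0.0, 0, 0, 1]))  # world origin ends up at z = -5 in eye space
```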
Primitives generation
For more details on this topic, see Geometry shader.
After the transformation, new primitives can be generated from the primitives that were sent to the beginning of the graphics pipeline.
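As a sketch of what such a stage can do, the following code expands each input point into two triangles forming a quad, a common point-sprite technique; this is plain Python standing in for an actual geometry shader:

```python
def point_to_quad(center, half_size):
    """Consume one point primitive; emit two triangles forming a quad."""
    x, y, z = center
    h = half_size
    corners = [(x - h, y - h, z), (x + h, y - h, z),
               (x + h, y + h, z), (x - h, y + h, z)]
    # Two triangles sharing the diagonal: 0-1-2 and 0-2-3.
    return [(corners[0], corners[1], corners[2]),
            (corners[0], corners[2], corners[3])]

for tri in point_to_quad((0.0, 0.0, -1.0), 0.5):
    print(tri)
```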
Projection transformation
In the case of a perspective projection, objects which are distant from the camera are made smaller (foreshortened). In an orthographic projection, objects retain their original size regardless of distance from the camera.
In this stage of the graphics pipeline, geometry is transformed from the eye space of the rendering camera into a special 3D coordinate space called homogeneous clip space, which is very convenient for clipping. Clip space tends to range from -1 to 1 in X, Y, and Z, although the exact ranges vary by graphics API (in Direct3D, for example, Z ranges from 0 to 1). The projection transform is responsible for mapping the planes of the camera's viewing volume (or frustum) to the planes of the box which makes up clip space.
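A sketch of an OpenGL-style perspective projection matrix, which maps the view frustum into the [-1, 1] clip-space cube after the perspective divide by w (the Direct3D matrix differs because of the different Z range):

```python
import numpy as np

def perspective(fov, aspect, near, far):
    """fov is the vertical field of view in radians."""
    f = 1.0 / np.tan(fov / 2.0)
    return np.array([
        [f / aspect, 0, 0,                            0],
        [0,          f, 0,                            0],
        [0,          0, (far + near) / (near - far),  2 * far * near / (near - far)],
        [0,          0, -1,                           0],
    ])

p = perspective(np.pi / 2, 16 / 9, 0.1, 100.0)
clip = p @ np.array([0.0, 0.0, -0.1, 1.0])  # a point on the near plane
print(clip[:3] / clip[3])                   # z maps to -1 after the w-divide
```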
Clipping
For more details on this topic, see Clipping (computer graphics).
Geometric primitives that now fall outside of the viewing frustum will not be visible and are discarded at this stage. Clipping is not necessary to achieve a correct image output, but it accelerates the rendering process by eliminating the unneeded rasterization and post-processing on primitives that will not appear anyway.
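The following sketch shows only the trivial-reject part of clipping: a primitive whose vertices all lie beyond the same clip plane cannot intersect the view volume. Full clipping would also split primitives that straddle a plane, which is omitted here:

```python
def outside_same_plane(clip_vertices):
    """clip_vertices: list of (x, y, z, w) in homogeneous clip space.

    A point is inside the view volume when -w <= x, y, z <= w.
    """
    for axis in range(3):                             # x, y, z in turn
        if all(v[axis] < -v[3] for v in clip_vertices):
            return True                               # all beyond the negative plane
        if all(v[axis] > v[3] for v in clip_vertices):
            return True                               # all beyond the positive plane
    return False

triangle = [(5.0, 0.0, 0.0, 1.0), (6.0, 1.0, 0.0, 1.0), (5.5, -1.0, 0.0, 1.0)]
print(outside_same_plane(triangle))  # True: entirely beyond the x = w plane
```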
Viewport transformation
The post-clip vertices are transformed once again to be in window space. In practice, this transform is very simple: applying a scale (multiplying by half the window's width and height) and a bias (adding the offset from the screen origin). At this point, the vertices have coordinates which directly relate to pixels in a raster.
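A sketch of this scale-and-bias, assuming normalized device coordinates in [-1, 1] and a hypothetical window size:

```python
def viewport_transform(ndc_x, ndc_y, width, height, x0=0, y0=0):
    """Map NDC in [-1, 1] to window (pixel) coordinates."""
    win_x = (ndc_x + 1.0) * 0.5 * width + x0   # scale by half-width, then bias
    win_y = (ndc_y + 1.0) * 0.5 * height + y0
    return win_x, win_y

print(viewport_transform(0.0, 0.0, 1920, 1080))    # -> (960.0, 540.0), the center
print(viewport_transform(-1.0, -1.0, 1920, 1080))  # -> (0.0, 0.0), one corner
```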
Scan conversion or rasterization
For more details on this topic, see Render output unit.
Rasterization is the process by which the 2D image-space representation of the scene is converted into raster format and the correct resulting pixel values are determined. From this point on, operations are carried out on each individual pixel. This stage is rather complex, involving multiple steps often referred to as a group under the name of pixel pipeline.
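A sketch of one common rasterization approach, edge functions: a sample point is covered if it lies on the same side of all three triangle edges. Real rasterizers add fill rules, sub-pixel precision, and tiled traversal:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive when p is left of the edge a -> b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5            # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # assumes counter-clockwise winding
                covered.append((x, y))
    return covered

print(rasterize((0.0, 0.0), (8.0, 0.0), (0.0, 8.0), 8, 8)[:5])
```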
Texturing, fragment shading
For more details on this topic, see Texture mapping unit.
At this stage of the pipeline, individual fragments (or pre-pixels) are assigned a color based on values interpolated from the vertices during rasterization or from a texture in memory.
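A sketch of this interpolation and texture lookup, with barycentric weights and a tiny NumPy array standing in for a texture; all names are illustrative:

```python
import numpy as np

def shade_fragment(bary, vertex_colors, uvs, texture):
    """Blend vertex attributes with barycentric weights, then modulate by a texel."""
    color = bary @ vertex_colors                  # interpolate per-vertex color
    u, v = bary @ uvs                             # interpolate texture coordinates
    th, tw = texture.shape[:2]                    # nearest-neighbour texel lookup
    texel = texture[min(int(v * th), th - 1), min(int(u * tw), tw - 1)]
    return color * texel

bary = np.array([0.2, 0.3, 0.5])                  # weights sum to 1
vertex_colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
uvs = np.array([[0.0, 0], [1.0, 0], [0.0, 1.0]])
texture = np.ones((2, 2, 3)) * 0.5                # uniform grey 2x2 "texture"
print(shade_fragment(bary, vertex_colors, uvs, texture))  # -> [0.1, 0.15, 0.25]
```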
Display
The final colored pixels can then be displayed on a computer monitor or other display.
The graphics pipeline in hardware
The rendering pipeline is mapped onto current graphics acceleration hardware such that the input to the graphics card (GPU) is in the form of vertices. These vertices then undergo transformation and per-vertex lighting. At this point in modern GPU pipelines a custom vertex shader program can be used to manipulate the 3D vertices prior to rasterization. Once transformed and lit, the vertices undergo clipping and rasterization resulting in fragments. A second custom shader program can then be run on each fragment before the final pixel values are output to the frame buffer for display.
The graphics pipeline is well suited to the rendering process because it allows the GPU to function as a stream processor since all vertices and fragments can be thought of as independent. This allows all stages of the pipeline to be used simultaneously for different vertices or fragments as they work their way through the pipe. In addition to pipelining vertices and fragments, their independence allows graphics processors to use parallel processing units to process multiple vertices or fragments in a single stage of the pipeline at the same time.
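A toy illustration of that independence: because no fragment's result depends on another's, the shading step can be written as one data-parallel operation, with NumPy's vectorization standing in for the GPU's parallel shader units:

```python
import numpy as np

depths = np.random.rand(1080, 1920)             # one value per fragment

# "Shade" every fragment at once: map depth to a grey level. On a GPU,
# the same per-fragment function runs concurrently across many cores.
shaded = np.stack([1.0 - depths] * 3, axis=-1)  # each pixel computed independently
print(shaded.shape)                             # (1080, 1920, 3)
```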
References
- ^ Graphics pipeline. (n.d.). Computer Desktop Encyclopedia. Retrieved December 13, 2005, from Answers.com.
- ^ Raster Graphics and Color (2004) by Greg Humphreys, University of Virginia.