Camera resectioning
**Camera resectioning** (often called **camera calibration**) is the process of finding the true parameters of the camera that produced a given photograph or video. These parameters characterize the transformation that maps 3D points in the scene to 2D points in the camera's image plane. They include the focal length, format size, principal point, and lens distortion. Camera calibration is often used as an early stage in computer vision, and especially in the field of augmented reality.

When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a shaft of light from the original scene. Camera resectioning determines which incoming light is associated with each pixel on the resulting image. In an ideal pinhole camera, a simple projection matrix is enough to do this. With more complex camera systems, errors resulting from misaligned lenses and deformations in their structures can produce more complex distortions in the final image.

The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented as a series of transformations, e.g. a matrix of intrinsic camera parameters, a 3×3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space. Camera resectioning is often used in the application of stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.

Some people call this process camera calibration, but many restrict the term camera calibration to the estimation of the internal (intrinsic) parameters only.

**Algorithms**

There are many different approaches to calculating the intrinsic and extrinsic parameters for a specific camera setup. A classical approach is Roger Y. Tsai's algorithm. It is a two-stage algorithm: the first stage calculates the pose (3D orientation, and x-axis and y-axis translation); the second stage computes the focal length, the distortion coefficients, and the z-axis translation.
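The projection described above can be sketched directly. The following is a minimal illustration, with made-up intrinsic values (focal length, principal point) rather than parameters from any real camera: it builds the 3×4 projection matrix P = K [R | t] and maps a homogeneous 3D world point to pixel coordinates.

```python
import numpy as np

# Intrinsic matrix K: focal lengths fx, fy (in pixels) and principal point
# (cx, cy). These are illustrative values, not a calibrated camera.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsic parameters: a 3x3 rotation matrix and a translation vector.
# Here the camera sits at the world origin looking down the z-axis.
R = np.eye(3)
t = np.zeros((3, 1))

# The 3x4 camera projection matrix P = K [R | t].
P = K @ np.hstack([R, t])

# Project a 3D world point, given in homogeneous coordinates.
X = np.array([0.5, 0.25, 2.0, 1.0])   # a point 2 units in front of the camera
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]       # divide out depth to get pixel coords
print(u, v)  # -> 520.0 340.0
```

Dividing by the third (depth) component is what makes the projection perspective: points farther from the camera land closer to the principal point.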
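The stereo-vision use of projection matrices mentioned above can likewise be sketched. This is a hedged example, not any particular library's API: it uses the standard linear (DLT) triangulation method, with an assumed pair of cameras sharing made-up intrinsics and separated by a small baseline, to recover a 3D point from its two image projections.

```python
import numpy as np

# Shared intrinsics (illustrative values, not from a real camera).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Two camera projection matrices; the second camera is offset along the
# x-axis, forming a stereo baseline.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    """Project a homogeneous 3D point to 2D pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: build A X = 0 from both views and
    take the null vector of A via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]   # de-homogenize

# Synthesize matching image points from a known 3D point, then recover it.
X_true = np.array([0.3, -0.2, 2.5, 1.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # -> True (noise-free measurements)
```

With noisy real-world correspondences the linear solution is usually only a starting point, refined by minimizing reprojection error.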

**See also**

* Augmented reality
* Augmented virtuality
* Mixed reality
* Pinhole camera model

**External links**

* [http://campar.in.tum.de/twiki/pub/Far/AugmentedRealityIISoSe2004/L3-CamCalib.pdf Camera Calibration] - Augmented reality lecture at TU Muenchen, Germany
* [http://www.cs.cmu.edu/~rgw/TsaiDesc.html Tsai's Approach]
* [http://www.hitl.washington.edu/artoolkit/documentation/usercalibration.htm Camera calibration] (using ARToolKit)
* [http://www.vision.caltech.edu/bouguetj/calib_doc/papers/heikkila97.pdf A Four-step Camera Calibration Procedure with Implicit Image Correction]

*Wikimedia Foundation. 2010.*