# Affine shape adaptation

Affine shape adaptation is a methodology for iteratively adapting the shape of the smoothing kernels in an affine group of smoothing kernels to the local image structure in a neighbourhood of a specific image point. Equivalently, affine shape adaptation can be accomplished by iteratively warping a local image patch with affine transformations while applying a rotationally symmetric filter to the warped image patches. Provided that this iterative process converges, the resulting fixed point will be "affine invariant". In the area of computer vision, this idea has been used for defining affine-invariant interest point operators as well as affine-invariant texture analysis methods.

## Affine-adapted interest point operators

The interest points obtained from the scale-adapted Laplacian blob detector or the multi-scale Harris corner detector with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain interest points that are more robust to perspective transformations, a natural approach is to devise a feature detector that is "invariant to affine transformations".

Interestingly, affine invariance can be accomplished from measurements of the same multi-scale windowed second-moment matrix $\mu$ as is used in the multi-scale Harris operator, provided that we extend the regular scale-space concept obtained by convolution with rotationally symmetric Gaussian kernels to an "affine Gaussian scale-space" obtained by shape-adapted Gaussian kernels (Lindeberg and Garding 1997). For a two-dimensional image $I$, let $\bar{x} = (x, y)^T$ and let $\Sigma_t$ be a positive definite $2 \times 2$ matrix. Then, a non-uniform Gaussian kernel can be defined as

$$g(\bar{x}; \Sigma_t) = \frac{1}{2 \pi \sqrt{\det \Sigma_t}} e^{-\bar{x}^T \Sigma_t^{-1} \bar{x}/2}$$

and given any input image $I_L$ the affine Gaussian scale-space is the three-parameter scale-space defined as

$$L(\bar{x}; \Sigma_t) = \int_{\bar{\xi}} I_L(\bar{x} - \bar{\xi}) \, g(\bar{\xi}; \Sigma_t) \, d\bar{\xi}.$$

Next, introduce an affine transformation $\bar{\eta} = B \bar{\xi}$, where $B$ is a $2 \times 2$ matrix, and define a transformed image $I_R$ as

$$I_L(\bar{\xi}) = I_R(\bar{\eta}).$$

Then, the affine scale-space representations $L$ and $R$ of $I_L$ and $I_R$, respectively, are related according to

$$L(\bar{\xi}; \Sigma_L) = R(\bar{\eta}; \Sigma_R)$$

provided that the affine shape matrices $\Sigma_L$ and $\Sigma_R$ are related according to

$$\Sigma_R = B \Sigma_L B^T.$$

Disregarding mathematical details, which unfortunately become somewhat technical if one aims at a precise description of what is going on, the important message is that "the affine Gaussian scale-space is closed under affine transformations".
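To make the non-uniform kernel concrete, the following NumPy sketch samples $g(\bar{x}; \Sigma_t)$ on a discrete grid for an example covariance matrix (the matrix values and grid radius are illustrative choices, not taken from the text) and checks that the discrete mass is close to one and that the empirical second moments recover $\Sigma_t$:

```python
import numpy as np

def aniso_gaussian(sigma_t, radius):
    """Sample g(x; Sigma_t) = exp(-x^T Sigma_t^{-1} x / 2) / (2 pi sqrt(det Sigma_t))
    on a (2*radius+1)^2 grid centred at the origin."""
    sigma_t = np.asarray(sigma_t, dtype=float)
    inv = np.linalg.inv(sigma_t)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma_t)))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)          # x varies along columns, y along rows
    quad = inv[0, 0] * xx**2 + 2.0 * inv[0, 1] * xx * yy + inv[1, 1] * yy**2
    return norm * np.exp(-0.5 * quad), xx, yy

# An elongated, slightly tilted kernel (arbitrary example values).
Sigma_t = np.array([[8.0, 2.0],
                    [2.0, 2.0]])
g, xx, yy = aniso_gaussian(Sigma_t, radius=15)

# The samples should integrate to ~1, and their empirical second moments
# should recover Sigma_t up to discretisation error.
mass = g.sum()
emp = np.array([[(g * xx * xx).sum(), (g * xx * yy).sum()],
                [(g * xx * yy).sum(), (g * yy * yy).sum()]])
```

Convolving an image with kernels of this family, one per shape matrix $\Sigma_t$, is what generates the affine Gaussian scale-space.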

Next, given the notation $\nabla L = \left(L_x, L_y\right)^T$ as well as a local shape matrix $\Sigma_t$ and an integration shape matrix $\Sigma_s$, introduce an "affine-adapted multi-scale second-moment matrix" according to

$$\mu_L(\bar{x}; \Sigma_t, \Sigma_s) = \int_{\bar{\xi}} g(\bar{x} - \bar{\xi}; \Sigma_s) \, \nabla L(\bar{\xi}; \Sigma_t) \, \nabla L^T(\bar{\xi}; \Sigma_t) \, d\bar{\xi}.$$

It can be shown that under any affine transformation $\bar{q} = B \bar{p}$ the affine-adapted multi-scale second-moment matrix transforms according to

$$\mu_L(\bar{p}; \Sigma_t, \Sigma_s) = B^T \mu_R(\bar{q}; B \Sigma_t B^T, B \Sigma_s B^T) \, B.$$

Again, disregarding somewhat messy technical details, the important message here is that "given a correspondence between the image points $\bar{p}$ and $\bar{q}$, the affine transformation $B$ can be estimated from measurements of the multi-scale second-moment matrices $\mu_L$ and $\mu_R$ in the two domains".
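As a minimal illustration of the second-moment construction, the sketch below estimates $\mu$ at every pixel using *isotropic* local and integration scales (i.e. $\Sigma_t = t I$ and $\Sigma_s = s I$), which is the unadapted starting point of the adaptation loop; the test image, the scale values and the helper names are illustrative assumptions:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable isotropic Gaussian smoothing (NumPy only)."""
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def second_moment_matrix(img, s_local=1.0, s_int=3.0):
    """Multi-scale second-moment matrix mu at every pixel.

    Isotropic sketch; the affine-adapted version replaces the two
    rotationally symmetric kernels by shape-adapted ones."""
    L = gaussian_smooth(img, s_local)
    Ly, Lx = np.gradient(L)               # rows = y, columns = x
    mu = np.empty(img.shape + (2, 2))
    mu[..., 0, 0] = gaussian_smooth(Lx * Lx, s_int)
    mu[..., 0, 1] = mu[..., 1, 0] = gaussian_smooth(Lx * Ly, s_int)
    mu[..., 1, 1] = gaussian_smooth(Ly * Ly, s_int)
    return mu

# On a pattern varying only along x, mu is dominated by the Lx^2 entry.
img = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
mu = second_moment_matrix(img)
center = mu[32, 32]
```

The eigenvalue structure of `center` then directly encodes the local anisotropy that the adaptation loop tries to drive toward isotropy.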

An important consequence of this study is that if we can find an affine transformation $B$ such that $\mu_R$ is a constant times the unit matrix, then we obtain a "fixed point that is invariant to affine transformations". For the purpose of practical implementation, this property can often be reached in either of two main ways. The first approach is based on "transformations of the smoothing filters" and consists of:

• estimating the second-moment matrix $\mu$ in the image domain,
• determining a new adapted smoothing kernel with covariance matrix proportional to $\mu^{-1}$,
• smoothing the original image by the shape-adapted smoothing kernel, and
• repeating this operation until the difference between two successive second-moment matrices is sufficiently small.
The second approach is based on "warpings in the image domain" and implies:
• estimating $\mu$ in the image domain,
• estimating a local affine transformation proportional to $\hat{B} = \mu^{1/2}$, where $\mu^{1/2}$ denotes the square-root matrix of $\mu$,
• warping the input image by the affine transformation $\hat{B}^{-1}$, and
• repeating this operation until $\mu$ is sufficiently close to a constant times the unit matrix.
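The fixed-point property behind the warping-based scheme can be checked in pure linear algebra: with $\hat{B} = \mu^{1/2}$, warping the domain by $\hat{B}^{-1}$ transforms the second-moment matrix as $\mu \mapsto \hat{B}^{-T} \mu \hat{B}^{-1} = \mu^{-1/2} \mu \, \mu^{-1/2} = I$. A small sketch (the numeric $\mu$ is a made-up example; in a real implementation $\mu$ must be re-estimated from the warped image at every step, which is why the process is iterated):

```python
import numpy as np

def sqrtm_spd(m):
    """Square root of a symmetric positive definite 2x2 matrix
    via its eigendecomposition: m = V diag(w) V^T -> V diag(sqrt(w)) V^T."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(w)) @ v.T

# A hypothetical measured second-moment matrix (anisotropic).
mu = np.array([[5.0, 1.0],
               [1.0, 2.0]])

# One idealised adaptation step: warp with B_hat^{-1}, B_hat = mu^{1/2}.
B_hat = sqrtm_spd(mu)
B_inv = np.linalg.inv(B_hat)
mu_new = B_inv.T @ mu @ B_inv   # lands exactly on the identity matrix
```

In this idealised single step `mu_new` is the unit matrix; in practice the re-estimated $\mu$ of the warped image only approaches a constant times the unit matrix gradually, which is the convergence criterion above.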
This overall process is referred to as "affine shape adaptation" (Lindeberg and Garding 1997; Baumberg 2000; Mikolajczyk and Schmid 2004). In the ideal continuous case, the two approaches are mathematically equivalent. In practical implementations, however, the first filter-based approach is usually more accurate in the presence of noise while the second warping-based approach is usually faster.

In practice, the affine shape adaptation process described here is often combined with interest point detection and automatic scale selection as described in the articles on blob detection and corner detection, to obtain interest points that are invariant to the full affine group, including scale changes. Besides the commonly used multi-scale Harris operator, this affine shape adaptation can also be applied to other types of interest point operators such as the Laplacian/Difference of Gaussians blob operator, the determinant of the Hessian and the Hessian-Laplace blob operator. Affine shape adaptation can also be used for affine-invariant texture recognition and affine-invariant texture segmentation.

## See also

* corner detection
* blob detection
* Harris affine region detector
* Hessian affine region detector
* scale-space
* Gaussian function

## References

* A. Baumberg, "Reliable feature matching across widely separated views", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages I:1774–1781, 2000. http://citeseer.ist.psu.edu/baumberg00reliable.html (First reference on the multi-scale Harris operator.)

* T. Lindeberg and J. Garding, "Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure", Image and Vision Computing, 15: 415–434, 1997. http://www.nada.kth.se/~tony/abstracts/LG94-ECCV.html

* K. Mikolajczyk and C. Schmid, "Scale and affine invariant interest point detectors", International Journal of Computer Vision, 60(1): 63–86, 2004. doi:10.1023/B:VISI.0000027790.02288.f2. http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf (Integration of the multi-scale Harris operator with the methodology for automatic scale selection as well as with affine shape adaptation.)
