Point distribution model
The point distribution model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes. It was developed by Cootes [T. F. Cootes, "Statistical models of appearance for computer vision", May 2004, http://www.isbe.man.ac.uk/~bim/Models/app_models.pdf], Taylor et al. [D. H. Cooper, T. F. Cootes, C. J. Taylor and J. Graham, "Active shape models—their training and application", Computer Vision and Image Understanding, vol. 61, pp. 38–59, 1995] and became a standard in computer vision for the statistical study of shape [Rhodri H. Davies, Carole J. Twining, P. Daniel Allen, Tim F. Cootes and Chris J. Taylor, "Shape discrimination in the Hippocampus using an MDL Model", IPMI 2003, http://www2.wiau.man.ac.uk/caws/Conferences/10/proceedings/8/papers/133/rhhd_ipmi03%2Epdf] and for segmentation of medical images, where shape priors greatly help the interpretation of noisy, low-contrast pixels/voxels. The latter point leads to active shape models (ASM) and active appearance models (AAM).

Point distribution models rely on landmark points.
A landmark is a point annotated by an anatomist at a given locus on every shape instance across the training set population. For instance, the same landmark designates the tip of the index finger in a training set of 2D hand outlines.

Principal component analysis (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks among the training set population. Typically, it might detect that all the landmarks located along the same finger move exactly together across training set examples that show different finger spacing in a collection of flat-posed hands.

The implementation of the procedure is roughly the following (a code sketch is given after the list):
# Annotate the training set outlines with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes.
# Align the clouds of landmarks using generalized Procrustes analysis (minimization of the overall distance between landmarks of the same label). The key idea is that shape information is not related to affine pose parameters, which must be removed before any shape study. A mean shape can now be computed by averaging the aligned landmark positions.
# Now that the shape outlines are reduced to sequences of n landmarks, the training set can be seen as a cloud of points in a 2n- or 3n-dimensional (2D/3D) space, where each shape instance is a single point. Assuming the scattering is Gaussian in this space, PCA is supposedly the most straightforward tool for analysing it.
# PCA computes normalized eigenvectors and eigenvalues of the training set's covariance matrix. Each eigenvector describes a principal mode of variation along the set, the corresponding eigenvalue indicating the importance of this mode in the shape space scattering. Since correlation was found between landmarks, the total variation of the space is concentrated on the very first eigenvectors, showing a very fast descent. If no correlation is found, this suggests either that the training set shows no variation or that the landmarks are not properly posed.

An eigenvector, interpreted in Euclidean space, can be seen as a sequence of n Euclidean vectors, each associated with its corresponding landmark, together designating a compound move for the whole shape. Global nonlinear variation is usually well handled provided it is kept to a reasonable level. Typically, a twisting nematode worm is used as an example in the teaching of kernel PCA-based methods.

Due to the PCA properties, the eigenvectors are mutually orthogonal, form a basis of the training set cloud in the shape space, and cross at the 0 in this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation.

The big idea of PDM is that eigenvectors can be linearly combined to create an infinity of new shape instances that will 'look like' the ones in the training set: a new shape x is generated as x = x̄ + Pb, where x̄ is the mean shape and P the matrix of retained eigenvectors. Each coefficient b_i is bounded according to the corresponding eigenvalue λ_i (typically |b_i| ≤ 3√λ_i), so as to ensure the generated 2n/3n-dimensional point remains inside the hyper-ellipsoidal allowed domain, the allowable shape domain (ASD), as sketched below.
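As an illustration of steps 2–4, here is a minimal sketch in Python/NumPy. It is not the original authors' code: the (K, n, 2) array layout, the function names and the simplified similarity alignment are assumptions made for this example.

 import numpy as np
 
 def align_procrustes(shapes, iters=5):
     # shapes: (K, n, 2) array of K training shapes, n 2-D landmarks each.
     # Crude generalized Procrustes alignment: remove translation and scale,
     # then iteratively rotate every shape onto the current mean shape.
     aligned = np.asarray(shapes, dtype=float)
     aligned = aligned - aligned.mean(axis=1, keepdims=True)         # translation
     aligned /= np.linalg.norm(aligned, axis=(1, 2), keepdims=True)  # scale
     for _ in range(iters):
         mean = aligned.mean(axis=0)
         mean /= np.linalg.norm(mean)
         for k, s in enumerate(aligned):
             u, _, vt = np.linalg.svd(s.T @ mean)  # orthogonal Procrustes fit
             aligned[k] = s @ (u @ vt)             # rotate shape onto the mean
     return aligned
 
 def build_pdm(aligned):
     # Flatten each shape to one 2n-dimensional point, then run PCA.
     k, n, d = aligned.shape
     x = aligned.reshape(k, n * d)
     mean_shape = x.mean(axis=0)
     cov = np.cov(x - mean_shape, rowvar=False)
     eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
     order = np.argsort(eigval)[::-1]      # strongest modes of variation first
     return mean_shape, eigvec[:, order], eigval[order]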
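The generation step can then be sketched as follows; the ±3√λ_i clipping bound is the conventional choice, and the names are again only illustrative.

 def generate_shape(mean_shape, modes, eigval, b, limit=3.0):
     # Synthesize x = mean + P b, clipping each coefficient to
     # +/- limit * sqrt(eigenvalue) so the generated point stays
     # near the allowable shape domain (ASD).
     m = len(b)
     bound = limit * np.sqrt(np.maximum(eigval[:m], 0.0))
     b = np.clip(b, -bound, bound)
     x = mean_shape + modes[:, :m] @ b
     return x.reshape(-1, 2)  # back to n 2-D landmarks
 
 # Example: walk along the first mode of variation
 # mean_shape, modes, eigval = build_pdm(align_procrustes(shapes))
 # new_shape = generate_shape(mean_shape, modes, eigval,
 #                            b=np.array([2.0 * np.sqrt(eigval[0])]))

Note that clipping each coefficient independently bounds a box rather than the exact hyper-ellipsoid; a stricter variant rescales b so that the Mahalanobis-like sum Σ b_i²/λ_i stays below a threshold.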
See also
* Procrustes analysis

External links
* [http://www.isbe.man.ac.uk/~bim/Models/index.html Flexible Models for Computer Vision], Tim Cootes, Manchester University.
* [http://www.icaen.uiowa.edu/~dip/LECTURE/Understanding3.html A practical introduction to PDM and ASMs].