Kernel principal component analysis
Kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space, which corresponds to a non-linear mapping of the input data.
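Because the non-linear mapping is never formed explicitly, the whole computation reduces to an eigendecomposition of the centered kernel (Gram) matrix. The following is a minimal NumPy sketch of that procedure under the standard formulation; the function name kernel_pca and its interface are illustrative and not part of the original article.

```python
import numpy as np

def kernel_pca(X, kernel, n_components=2):
    """Project the rows of X onto the leading principal components of the
    feature space induced by `kernel` (a function of two 1-D arrays)."""
    n = X.shape[0]
    # Gram matrix: K[i, j] = k(x_i, x_j).
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    # Center in feature space: K' = K - 1K - K1 + 1K1,
    # where 1 is the n-by-n matrix with every entry equal to 1/n.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Symmetric eigendecomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # For a unit eigenvector u_k with eigenvalue lambda_k, the projection of
    # training point x_i onto the k-th component is sqrt(lambda_k) * u_k[i].
    return eigvecs[:, :n_components] * np.sqrt(np.maximum(eigvals[:n_components], 0.0))
```

For instance, kernel_pca(X, lambda x, y: (x @ y + 1) ** 2) applies the second-degree polynomial kernel used in the example below.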
Example
The two figures show a set of data points before and after kernel PCA. The color of the points is not part of the algorithm; it is included only to show how the data group together before and after the transformation. Note in particular that the first principal component alone is enough to distinguish the three groups, which is impossible using only linear PCA.
The kernel used in this example was the polynomial kernel

$k(\boldsymbol{x},\boldsymbol{y}) = (\boldsymbol{x}^\mathrm{T}\boldsymbol{y} + 1)^2.$

If instead a Gaussian kernel is used,

$k(\boldsymbol{x},\boldsymbol{y}) = e^{-\frac{\|\boldsymbol{x} - \boldsymbol{y}\|^2}{2\sigma^2}},$

the result is shown in the next figure.
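The figures themselves are not reproduced here, but the experiment is straightforward to re-create. The sketch below uses scikit-learn's KernelPCA with both kernels; the three concentric rings are an assumed stand-in for the three groups of points described above, since the original data are not given.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

def ring(radius, n=100):
    # Noisy circle of points at the given radius.
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    pts = np.c_[radius * np.cos(angles), radius * np.sin(angles)]
    return pts + rng.normal(scale=0.1, size=pts.shape)

# Three groups that no single linear direction can separate.
X = np.vstack([ring(1.0), ring(3.0), ring(5.0)])

# Polynomial kernel (x^T y + 1)^2: scikit-learn computes
# (gamma * <x, y> + coef0) ** degree.
poly = KernelPCA(n_components=2, kernel="poly", degree=2, gamma=1.0, coef0=1.0)
X_poly = poly.fit_transform(X)

# Gaussian (RBF) kernel exp(-gamma * ||x - y||^2), i.e. gamma = 1 / (2 * sigma**2).
rbf = KernelPCA(n_components=2, kernel="rbf", gamma=2.0)
X_rbf = rbf.fit_transform(X)

print(X_poly.shape, X_rbf.shape)  # (300, 2) (300, 2)
```

One can then check whether the first component of X_rbf separates the three rings, mirroring the behavior described for the figures.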
See also
* Kernel trick

External links
* [http://www.face-rec.org/algorithms/Kernel/kernelPCA_scholkopf.pdf Nonlinear Component Analysis as a Kernel Eigenvalue Problem]