Generalized Hebbian Algorithm

The Generalized Hebbian Algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning, used primarily in principal components analysis. First defined in 1989 (Sanger 1989), it is similar to Oja's rule in its formulation and stability, except that it can be applied to networks with multiple outputs.

Theory

GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form

:\Delta w_{ij} = \eta\left(y_j x_i - y_j \sum_{k=1}^{j} w_{ik} y_k\right),

where w_{ij} defines the synaptic weight or connection strength between the ith input and jth output neurons, x and y are the input and output vectors, respectively, and \eta is the learning-rate parameter.
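
To make the rule concrete, the following is a minimal sketch (not part of the original article) of a single GHA weight update in Python/NumPy, assuming w[i, j] stores the weight w_{ij} from the ith input to the jth output; the function name gha_step and the default learning rate are illustrative only.

    import numpy as np

    def gha_step(w, x, eta=0.01):
        """One Generalized Hebbian (Sanger) update of the weight matrix w.

        w   : (n_inputs, n_outputs) array with w[i, j] = w_ij
        x   : (n_inputs,) input vector
        eta : learning rate
        """
        y = w.T @ x                          # outputs: y_j = sum_i w_ij x_i
        dw = np.zeros_like(w)
        n_inputs, n_outputs = w.shape
        for j in range(n_outputs):
            for i in range(n_inputs):
                # Delta w_ij = eta * (y_j x_i - y_j * sum_{k<=j} w_ik y_k)
                dw[i, j] = eta * (y[j] * x[i] - y[j] * np.dot(w[i, :j + 1], y[:j + 1]))
        return w + dw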

Derivation

In matrix form, Oja's rule can be written

:\frac{d w(t)}{d t} = w(t)\, Q - \mathrm{diag}\left(w(t)\, Q\, w(t)^{T}\right) w(t),

and the Gram-Schmidt algorithm is

:\Delta w(t) = -\mathrm{lower}\left[w(t)\, w(t)^{T}\right] w(t),

where w(t) is any matrix, in this case representing the synaptic weights, Q = \eta\, \mathbf{x}\mathbf{x}^{T} is the autocorrelation matrix, simply the outer product of inputs, \mathrm{diag} is the function that diagonalizes a matrix (sets all off-diagonal elements to 0), and \mathrm{lower} is the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form,

:\Delta w(t) = \eta(t)\left(\mathbf{y}(t)\,\mathbf{x}(t)^{T} - \mathrm{LT}\left[\mathbf{y}(t)\,\mathbf{y}(t)^{T}\right] w(t)\right),

where the function \mathrm{LT} sets all matrix elements above the diagonal equal to 0, and note that the output \mathbf{y}(t) = w(t)\,\mathbf{x}(t) is that of a linear neuron.
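
As an illustration (not from the original article), the matrix-form rule can be written compactly with NumPy, where np.tril plays the role of LT; here the weight matrix is stored with one row per output so that y = w x, which transposes the w_{ij} orientation used in the element-wise sketch above.

    import numpy as np

    def gha_step_matrix(w, x, eta=0.01):
        """Matrix-form GHA update: dw = eta * (y x^T - LT[y y^T] w)."""
        y = w @ x                                           # linear outputs, y = w x
        dw = eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ w)
        return w + dw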

Stability and PCA

The stability of GHA is analyzed in the same way as that of Oja's rule: with a suitably decreasing learning rate, the rows of the weight matrix converge to the principal component eigenvectors of the input autocorrelation matrix, ordered by decreasing eigenvalue (Oja 1982; Haykin 1998).
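
The following illustrative script (an assumption-laden sketch, not taken from the cited works; the data sizes and learning rate are arbitrary choices) checks this PCA behaviour numerically: after training on zero-mean Gaussian data, the rows of w should align, up to sign, with the leading eigenvectors of the sample covariance matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_inputs, n_outputs = 20000, 5, 2

    # Zero-mean data with a known diagonal covariance, so the principal
    # components are (close to) the standard basis vectors e_1 and e_2.
    cov = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])
    X = rng.multivariate_normal(np.zeros(n_inputs), cov, size=n_samples)

    w = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
    eta = 1e-3
    for x in X:
        y = w @ x
        w += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ w)

    # Leading eigenvectors of the sample covariance, as rows.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    top = eigvecs[:, ::-1][:, :n_outputs].T
    print(np.abs(np.sum(w * top, axis=1)))   # |cosines|; values near 1 indicate convergence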

Applications

GHA is used in applications where a self-organizing map is necessary, or where feature analysis or principal components analysis can be used. Examples of such cases include artificial intelligence and speech and image processing.

Its importance comes from the fact that learning is a single-layer process; that is, a synaptic weight changes only in response to the inputs and outputs of that layer, avoiding the multi-layer dependence associated with the backpropagation algorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence, as set by the learning-rate parameter \eta.

See also

*Hebbian learning
*Oja's rule
*Factor analysis
*Principal components analysis
*PCA network

References

*Sanger, Terence D. (1989). "Optimal unsupervised learning in a single-layer linear feedforward neural network". Neural Networks 2 (6): 459–473. doi:10.1016/0893-6080(89)90044-0. http://ece-classweb.ucsd.edu/winter06/ece173/documents/Sanger%201989%20--%20Optimal%20Unsupervised%20Learning%20in%20a%20Single-layer%20Linear%20FeedforwardNN.pdf
*Oja, Erkki (1982). "Simplified neuron model as a principal component analyzer". Journal of Mathematical Biology 15 (3): 267–273. doi:10.1007/BF00275687.
*Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. ISBN 0132733501.