Hebbian theory

Hebbian theory describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's "repeated" and "persistent" stimulation of the postsynaptic cell. Introduced by Donald Hebb in 1949, it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states:

:Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell "A" is near enough to excite a cell "B" and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that "A"'s efficiency, as one of the cells firing "B", is increased.

The theory is often summarized as "cells that fire together, wire together", although this is an oversimplification of the nervous system not to be taken literally, as well as not accurately representing Hebb's original statement on cell connectivity strength changes. The theory is commonly evoked to explain some types of associative learning in which simultaneous activation of cells leads to pronounced increases in synaptic strength. Such learning is known as Hebbian learning.

Hebbian engrams and cell assembly theory

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following:

:"The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated', so that activity in one facilitates activity in the other." harv|Hebb|1949|p=70

:"When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell." harv|Hebb|1949|p=63

Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:

:"If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram." harv|Hebb|1949|p=44

Hebbian theory has been the primary basis for the conventional view that when analyzed from a holistic level, engrams are neuronal nets or neural networks.

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod "Aplysia californica".

Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments that indicate that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.

Principles

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons will increase if the two neurons activate simultaneously; it is reduced if they activate separately. Nodes which tend to be either both positive or both negative at the same time will have strong positive weights while those which tend to be opposite will have strong negative weights. It is sometimes stated more simply as "neurons that fire together, wire together."

This original principle is perhaps the simplest form of weight selection. While this means it can be relatively easily coded into a computer program and used to update the weights for a network, it also limits the range of applications of Hebbian learning. Today, the term "Hebbian learning" generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves weights between learning nodes being adjusted so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.

The following is a formulaic description of Hebbian learning (note that many other descriptions are possible):

:w_{ij} = x_i x_j

where w_{ij} is the weight of the connection from neuron j to neuron i and x_i the input for neuron i. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections w_{ij} are set to zero if i=j (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
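
As an illustration of this pattern-by-pattern update, the following minimal sketch (in Python with NumPy; the function name and the three-neuron example are illustrative choices, not from Hebb) accumulates the outer product x_i x_j into the weight matrix after a single training pattern and zeroes the diagonal, as in a Hopfield network:

 import numpy as np
 
 def hebbian_pattern_update(w, x):
     """One pattern-learning step of w_ij = x_i * x_j (illustrative sketch)."""
     w = w + np.outer(x, x)    # add x_i * x_j for every pair (i, j)
     np.fill_diagonal(w, 0.0)  # no reflexive (self) connections, as in a Hopfield network
     return w
 
 # Example: three binary neurons (activations 0 or 1), one training pattern
 w = np.zeros((3, 3))
 x = np.array([1, 0, 1])
 w = hebbian_pattern_update(w, x)
 # w[0, 2] and w[2, 0] are now 1: the two co-active neurons become connected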

Another formulaic description is:

:w_{ij} = \frac{1}{n} \sum_{k=1}^{p} x_i^k x_j^k

where w_{ij} is the weight of the connection from neuron j to neuron i, n is the dimension of the input vector, p the number of training patterns, and x_i^k the k-th input for neuron i. This is learning by epoch (weights updated after all the training examples are presented). Again, in a Hopfield network, connections w_{ij} are set to zero if i=j (no reflexive connections).
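
A minimal sketch of this epoch-based form (again Python/NumPy; the function name and the two-pattern example are illustrative assumptions) computes all p outer products in a single matrix product and divides by the input dimension n:

 import numpy as np
 
 def hebbian_epoch_weights(patterns):
     """Compute w_ij = (1/n) * sum_k x_i^k x_j^k over all p training patterns."""
     patterns = np.asarray(patterns, dtype=float)  # shape (p, n)
     n = patterns.shape[1]
     w = (patterns.T @ patterns) / n   # entry (i, j) sums x_i^k * x_j^k over k
     np.fill_diagonal(w, 0.0)          # no reflexive connections
     return w
 
 # Example: p = 2 training patterns of dimension n = 4
 w = hebbian_epoch_weights([[1, 0, 1, 0],
                            [1, 1, 0, 0]])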

A variation of Hebbian learning that takes into account phenomena such as blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena, and is also simple to implement.

Generalization and stability

Hebb's Rule is often generalized as

:\Delta w_i = \eta\, x_i y

or the change in the i-th synaptic weight w_i is equal to a learning rate \eta times the i-th input x_i times the postsynaptic response y. Often cited is the case of a linear neuron,

:y = \sum_j w_j x_j

and the previous section's simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable, as in any network with a dominant signal the synaptic weights will increase or decrease exponentially. However, it can be shown that for "any" neuron model, Hebb's rule is unstable. Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule (Shouval, Harel (2005-01-03). "The Physics of the Brain". The Synaptic Basis for Learning and Memory: A Theoretical Approach. The University of Texas Health Science Center at Houston. http://nba.uth.tmc.edu/homepage/shouval/Hebb_PCA.ppt. Retrieved 2007-11-14.), or the Generalized Hebbian Algorithm.
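
The instability can be made concrete with a small simulation. The sketch below (Python/NumPy; the learning rate, input distribution, and iteration count are arbitrary choices for illustration) drives one linear neuron with correlated inputs, once with the plain generalized rule and once with Oja's rule, \Delta w_i = \eta y (x_i - y w_i). The plain Hebbian weight vector grows without bound, while Oja's decay term keeps the weights bounded and drives them toward the first principal component of the input.

 import numpy as np
 
 rng = np.random.default_rng(0)
 eta = 0.01                          # learning rate
 w_hebb = 0.1 * rng.normal(size=2)   # weights for the plain generalized rule
 w_oja = w_hebb.copy()               # weights for Oja's rule
 
 for _ in range(2000):
     x = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.0], [1.0, 1.0]])  # correlated input
     # Plain generalized Hebb: dw_i = eta * x_i * y  -- weights grow without bound
     y = w_hebb @ x
     w_hebb += eta * y * x
     # Oja's rule: dw_i = eta * y * (x_i - y * w_i) -- the decay term keeps ||w|| bounded
     y = w_oja @ x
     w_oja += eta * y * (x - y * w_oja)
 
 print(np.linalg.norm(w_hebb))  # very large: the plain rule diverges
 print(np.linalg.norm(w_oja))   # close to 1: Oja's rule converges to the first principal component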

See also

* Neural networks
* BCM theory
* Long-term potentiation
* Memory
* Tetanic stimulation
* Coincidence detection in neurobiology
* Metaplasticity

References

* Hebb, D. O. (1949). "The Organization of Behavior: A Neuropsychological Theory". New York: Wiley and Sons.


External links

* [http://diwww.epfl.ch/~gerstner/SPNM/node71.html Overview]
* Hebbian Learning tutorial ([http://blog.peltarion.com/2006/05/11/the-talented-dr-hebb-part-1-novelty-filtering Part 1: Novelty Filtering], [http://blog.peltarion.com/2006/06/20/the-talented-drhebb-part-2-pca/ Part 2: PCA])

