Neural backpropagation

Neural backpropagation is the phenomenon in which the action potential of a neuron creates a voltage spike both at the end of the axon (normal propagation) and back through the dendritic arbor, from which much of the original input current originated. It has been shown that this simple process can be used in a manner similar to the backpropagation algorithm used in multilayer perceptrons, a type of artificial neural network. In addition to active backpropagation of the action potential, there is also passive electrotonic spread.

Mechanism

When a neuron fires an action potential, it is initiated at the axon hillock and spreads down the axon through the gating of voltage-gated sodium channels and voltage-gated potassium channels. However, the cell body, or soma, can also become depolarized when the action potential is initiated, and this depolarization can spread into the dendritic tree, where voltage-gated calcium channels are present. These calcium channels can then give rise to a dendritic action potential that, in most cases, propagates. EPSPs from synaptic activation are not large enough to activate the dendritic voltage-gated calcium channels (each is usually on the order of a couple of millivolts), so backpropagation is believed to occur only when the cell is activated to fire an action potential.
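
The thresholding logic described above can be illustrated with a minimal sketch (all numbers below are hypothetical placeholders, not measured values): summed EPSPs of a few millivolts fall short of an assumed activation threshold for dendritic voltage-gated calcium channels, while the much larger depolarization accompanying a somatic action potential exceeds it.

    # Minimal sketch of the threshold idea above; all values are illustrative.
    RESTING_MV = -70.0               # assumed resting membrane potential
    CA_THRESHOLD_MV = -40.0          # assumed Ca-channel activation threshold

    def reaches_ca_threshold(depolarizations_mv):
        """True if the summed depolarization reaches the assumed threshold."""
        return RESTING_MV + sum(depolarizations_mv) >= CA_THRESHOLD_MV

    epsps = [2.0, 3.0, 1.5]          # a few EPSPs, each a couple of millivolts
    somatic_ap = [25.0]              # depolarization spreading from a somatic AP

    print(reaches_ca_threshold(epsps))              # False: EPSPs alone fail
    print(reaches_ca_threshold(epsps + somatic_ap)) # True: AP-driven backpropagation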

History

Since the 1950s, evidence has existed that neurons in the central nervous system generate an action potential, or voltage spike, that travels both down the axon to signal the next neuron and back through the dendrites, sending a retrograde signal to its presynaptic neurons. This current decays significantly with distance traveled along the dendrites, so effects are predicted to be more pronounced for neurons whose synapses lie near the postsynaptic cell body, with magnitude depending mainly on the sodium-channel density in the dendrite. It also depends on the shape of the dendritic tree and, more importantly, on the rate of signal currents to the neuron. On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres.
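
If the attenuation is approximated as simple exponential decay, V(x) = V0·exp(−x/λ) (a first-order assumption, not something the half-decay figure above establishes on its own), then losing half the voltage over roughly 500 micrometres implies an effective length constant λ ≈ 500/ln 2 ≈ 720 micrometres:

    # Sketch under an assumed exponential-decay model of bAP attenuation.
    import math

    half_distance_um = 500.0              # distance at which half the voltage is lost
    lam = half_distance_um / math.log(2)  # implied length constant, ~721 um

    def bap_amplitude(v0_mv, x_um):
        """Backpropagating spike amplitude after traveling x_um along the dendrite."""
        return v0_mv * math.exp(-x_um / lam)

    print(round(lam, 1))                          # ~721.3
    print(round(bap_amplitude(100.0, 500.0), 1))  # ~50.0 mV: half the starting voltage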

Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, while in the cerebellum it occurs relatively passively. This is consistent with observations that synaptic plasticity is much more apparent in areas like the hippocampus, which controls memory, than the cerebellum, which controls more unconscious and vegetative functions.

The backpropagating current also causes a voltage change that increases the concentration of Ca2+ in the dendrites, an event that coincides with certain models of synaptic plasticity. This change also affects future integration of signals, leading to at least a short-term response difference between the presynaptic signals and the postsynaptic spike.[1]
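
One family of such models ties the sign of the synaptic change to the dendritic calcium level itself: moderate Ca2+ elevations weaken a synapse, while the larger elevations produced when a backpropagating spike invades the dendrite strengthen it. The sketch below encodes that idea with hypothetical thresholds, not parameters from any particular study.

    # Sketch of a calcium-threshold plasticity rule; thresholds are arbitrary.
    THETA_DEPRESSION = 0.3    # assumed lower Ca2+ threshold (arbitrary units)
    THETA_POTENTIATION = 0.6  # assumed upper Ca2+ threshold (arbitrary units)

    def weight_change(ca_level):
        """Sign of the synaptic change as a function of dendritic Ca2+."""
        if ca_level >= THETA_POTENTIATION:
            return +0.05      # LTP-like strengthening
        if ca_level >= THETA_DEPRESSION:
            return -0.05      # LTD-like weakening
        return 0.0            # below both thresholds: no change

    print(weight_change(0.1))  # 0.0   (baseline)
    print(weight_change(0.4))  # -0.05 (EPSPs alone)
    print(weight_change(0.9))  # +0.05 (EPSPs paired with a backpropagating spike)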

Functions

There are a number of hypotheses regarding the function of backpropagating action potentials. In addition to synaptic plasticity, backpropagation is hypothesized to be involved in dendrodendritic inhibition, boosting synaptic responses, resetting membrane potential, retrograde actions at synapses, and conditional axonal output. It is believed to help establish long-term potentiation (LTP) and Hebbian plasticity at hippocampal synapses. Since artificial LTP induction, using microelectrode stimulation, voltage clamp, etc., requires the postsynaptic cell to be slightly depolarized when EPSPs are elicited, backpropagation can serve as the means of depolarizing the postsynaptic cell.
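
The pairing requirement can be sketched as a simple coincidence rule (an illustration of the idea, not a published induction protocol): a synapse is eligible for potentiation only when its EPSP arrives within a short window before the postsynaptic, backpropagating spike depolarizes the dendrite.

    # Sketch of the pre-before-post pairing requirement; the window is assumed.
    PAIRING_WINDOW_MS = 20.0   # hypothetical coincidence window

    def ltp_induced(epsp_time_ms, post_spike_time_ms):
        """True if the EPSP precedes the postsynaptic spike within the window."""
        dt = post_spike_time_ms - epsp_time_ms
        return 0.0 < dt <= PAIRING_WINDOW_MS

    print(ltp_induced(epsp_time_ms=10.0, post_spike_time_ms=25.0))  # True: paired
    print(ltp_induced(epsp_time_ms=10.0, post_spike_time_ms=80.0))  # False: too late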

Algorithm

While a backpropagating action potential can presumably cause changes in the weight of the presynaptic connections, there is no simple mechanism for an error signal to propagate through multiple layers of neurons, as in the computer backpropagation algorithm. However, studies of simple linear topologies have shown that effective computation is possible through signal backpropagation in this biological sense.[2]
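
For contrast, the engineering algorithm referred to above propagates an explicit error signal backward through every layer via the chain rule. The sketch below is a minimal two-layer example of that algorithm, not a model of any biological process.

    # Minimal sketch of machine-learning backpropagation in a two-layer network.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))         # 4 samples, 3 input features
    y = rng.normal(size=(4, 1))         # target outputs
    W1 = 0.1 * rng.normal(size=(3, 5))  # input -> hidden weights
    W2 = 0.1 * rng.normal(size=(5, 1))  # hidden -> output weights
    lr = 0.1

    for step in range(1000):
        h = np.tanh(x @ W1)             # forward pass: hidden activations
        y_hat = h @ W2                  # forward pass: linear output
        err = y_hat - y                 # output-layer error signal
        grad_W2 = h.T @ err             # backward pass, output layer
        grad_h = err @ W2.T             # error propagated back to hidden layer
        grad_W1 = x.T @ (grad_h * (1.0 - h ** 2))  # tanh derivative
        W1 -= lr * grad_W1 / len(x)
        W2 -= lr * grad_W2 / len(x)

    print(float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))  # training error shrinks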

References

  1. ^ Stuart, Greg; Spruston, Nelson; Sakmann, Bert; Häusser, Michael (1997). "Action potential initiation and backpropagation in neurons of the mammalian CNS". Trends in Neurosciences 20 (3). http://www.columbia.edu/cu/biology/faculty/yuste/reprints/s/stuart_tins_1997.pdf. Retrieved 2010-12-23.
  2. ^ Bogacz, Rafal; Brown, Malcolm W.; Giraud-Carrier, Christophe (2000). "Frequency-based Error Back-propagation in a Cortical Network". Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como (Italy) 2: 211–216. doi:10.1109/IJCNN.2000.857899. ISBN 0-7695-0619-4. Archived from the original on June 14, 2007: http://web.archive.org/web/20070614031320/http://www.math.princeton.edu/~rbogacz/papers/Como00.pdf. Retrieved 2007-11-18.

Further reading

  1. Buzsáki G, Kandel A. Somadendritic backpropagation of action potentials in cortical pyramidal cells of the awake rat. J Neurophysiol. 1998 Mar;79(3):1587-91. PMID 9497436
  2. Bereshpolova Y, Amitai Y, Gusev AG, Stoelzel CR, Swadlow HA. Dendritic backpropagation and the state of the awake neocortex. J Neurosci. 2007 Aug 29;27(35):9392-9. PMID 17728452
  3. Rózsa B, Katona G, Kaszás A, Szipöcs R, Vizi ES. Dendritic nicotinic receptors modulate backpropagating action potentials and long-term plasticity of interneurons. Eur J Neurosci. 2008 Jan;27(2):364-77. PMID 18215234
  4. Waters J, Schaefer A, Sakmann B. Backpropagating action potentials in neurones: measurement, mechanisms and potential functions. Prog Biophys Mol Biol. 2005 Jan;87(1):145-70. Review. PMID 15471594
  5. Bender VA, Feldman DE. A dynamic spatial gradient of Hebbian learning in dendrites. Neuron. 2006 Jul 20;51(2):153-5. Review. PMID 16846850
  6. Migliore M, Shepherd GM. Dendritic action potentials connect distributed dendrodendritic microcircuits. J Comput Neurosci. 2007 Aug 3; PMID 17674173
  7. Lowe G. Inhibition of backpropagating action potentials in mitral cell secondary dendrites. J Neurophysiol. 2002 Jul;88(1):64-85. PMID 12091533
