Biological neural network

Figure: from "Texture of the Nervous System of Man and the Vertebrates" by Santiago Ramón y Cajal, illustrating the diversity of neuronal morphologies in the auditory cortex.

In neuroscience, a biological neural network describes a population of physically interconnected neurons or a group of disparate neurons whose inputs or signalling targets define a recognizable circuit. Communication between neurons often involves an electrochemical process. The interface through which a neuron interacts with its neighbours usually consists of several dendrites (input connections), connected via synapses to other neurons, and one axon (output connection). If the sum of the input signals surpasses a certain threshold, the neuron generates an action potential (AP) at the axon hillock and transmits this electrical signal along the axon.

In contrast, a neuronal circuit is a functional entity of interconnected neurons that influence each other (similar to a control loop in cybernetics).


Early study

Early treatments of neural networks can be found in Herbert Spencer's Principles of Psychology, 3rd edition (1872), Theodor Meynert's Psychiatry (1884), William James' Principles of Psychology (1890), and Sigmund Freud's Project for a Scientific Psychology (composed 1895). The first rule of neuronal learning was described by Donald Hebb in 1949 and is now known as Hebbian learning: Hebbian pairing of pre-synaptic and post-synaptic activity can substantially alter the dynamic characteristics of the synaptic connection and thereby facilitate or inhibit signal transmission. The neuroscientists Warren Sturgis McCulloch and Walter Pitts published the first theoretical work on processing in neural networks, showing that networks of artificial neurons could implement logical, arithmetic, and symbolic functions; they later co-authored the influential paper "What the Frog's Eye Tells the Frog's Brain". Simplified models of biological neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation, i.e., the assumption that potentials at the post-synaptic membrane summate in the cell body. Later models also provided for excitatory and inhibitory synaptic transmission.
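The two ideas just described can be made concrete with a minimal Python sketch (all names and constants here are illustrative, not taken from the original sources): a McCulloch-Pitts-style threshold unit that sums its weighted inputs, and a Hebbian update that strengthens a weight when pre-synaptic and post-synaptic activity coincide.

    # Minimal sketch of a McCulloch-Pitts-style threshold unit with a
    # Hebbian weight update. All names and constants are illustrative.

    def fire(weights, inputs, threshold=1.0):
        """Return 1 if the weighted sum of the inputs reaches threshold, else 0."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    def hebbian_update(weights, inputs, output, rate=0.25):
        """Hebb's rule: increase each weight by the product of
        pre-synaptic input and post-synaptic output."""
        return [w + rate * x * output for w, x in zip(weights, inputs)]

    weights = [0.5, 0.5, 0.0]
    inputs = [1, 1, 0]           # two active pre-synaptic cells

    out = fire(weights, inputs)  # 0.5 + 0.5 = 1.0 meets threshold: the unit fires
    weights = hebbian_update(weights, inputs, out)
    print(out, weights)          # 1 [0.75, 0.75, 0.0]: active connections strengthened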

Connections between neurons

The connections between neurons are much more complex than those implemented in neural computing architectures. The basic kinds of connections between neurons are chemical synapses and electrical gap junctions. One principle by which neurons work is neural summation: potentials at the postsynaptic membrane sum up in the cell body. If the depolarization of the neuron at the axon hillock exceeds threshold, an action potential occurs and travels down the axon to the terminal endings to transmit a signal to other neurons. Excitatory and inhibitory synaptic transmission is realized mostly through excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs).
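Summation can be sketched as a toy calculation (the membrane values below are invented for illustration): excitatory inputs depolarize and inhibitory inputs hyperpolarize the membrane relative to rest, and an action potential occurs only if the combined potential crosses threshold.

    # Toy illustration of neural summation: EPSPs and IPSPs add up at the
    # cell body, and a spike occurs only above threshold.
    # Membrane values (in mV) are illustrative, not measured.

    V_REST = -70.0       # resting membrane potential
    V_THRESHOLD = -55.0  # spike threshold at the axon hillock

    def summate(psps):
        """Sum postsynaptic potentials (EPSPs positive, IPSPs negative)."""
        return V_REST + sum(psps)

    def spikes(psps):
        return summate(psps) >= V_THRESHOLD

    print(spikes([5.0, 4.0, 3.0]))             # False: -58 mV, below threshold
    print(spikes([5.0, 4.0, 3.0, 6.0]))        # True: -52 mV, spike
    print(spikes([5.0, 4.0, 3.0, 6.0, -8.0]))  # False: the IPSP vetoes the spike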

On the electrophysiological level, there are various phenomena which alter the response characteristics of individual synapses (synaptic plasticity) and individual neurons (intrinsic plasticity). These are often divided into short-term plasticity and long-term plasticity. Long-term synaptic plasticity is widely held to be the most likely substrate of memory. Usually the term "plasticity" refers to changes in the brain that are caused by activity or experience.

Connections display temporal and spatial characteristics. Temporal characteristics refer to the continuously modified, activity-dependent efficacy of synaptic transmission, known as spike-timing-dependent plasticity (STDP). It has been observed in several studies that synaptic efficacy can undergo short-term increase (facilitation) or decrease (depression) according to the activity of the presynaptic neuron. The induction of long-term changes in synaptic efficacy, by long-term potentiation (LTP) or long-term depression (LTD), depends strongly on the relative timing of the onset of the EPSP generated by the pre-synaptic AP and the post-synaptic action potential. LTP is induced by a series of action potentials which cause a variety of biochemical responses. Eventually the reactions cause the insertion of new receptors into the cellular membrane of the dendrites or increase the efficacy of existing receptors through phosphorylation.
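The timing dependence of LTP and LTD is often captured by the classic exponential STDP window. The Python sketch below uses this common textbook form with invented constants; it is an illustration, not a model taken from the article.

    import math

    # Classic exponential STDP window (textbook form; constants illustrative).
    # dt = t_post - t_pre, in milliseconds.

    A_PLUS, A_MINUS = 0.01, 0.012     # maximum potentiation / depression
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

    def stdp_dw(dt):
        """Weight change as a function of the spike-time difference."""
        if dt > 0:    # pre fires before post: potentiation (LTP)
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        if dt < 0:    # post fires before pre: depression (LTD)
            return -A_MINUS * math.exp(dt / TAU_MINUS)
        return 0.0

    for dt in (-40, -10, 10, 40):
        print(dt, round(stdp_dw(dt), 5))  # effect decays as |dt| grows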

An action potential cannot propagate back along the axonal segment it has just traversed: once it passes, the voltage-gated sodium channels' (Na+ channels) h (inactivation) gate closes, so a transient opening of the m (activation) gate can no longer cause a change in the intracellular [Na+], preventing the generation of an action potential back towards the cell body. In some cells, however, neural backpropagation does occur through the dendritic arbor and may have important effects on synaptic plasticity and computation.
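In the Hodgkin-Huxley description, the sodium current scales as I_Na = g_Na · m³ · h · (V − E_Na), so a closed inactivation (h) gate suppresses the current no matter what the activation (m) gate does. The snippet below, with illustrative constants, makes that point numerically.

    # Hodgkin-Huxley sodium current: I_Na = g_Na * m^3 * h * (V - E_Na).
    # Constants are illustrative. With the inactivation gate closed (h = 0),
    # no sodium current flows even if the activation gate (m) is fully open.

    G_NA = 120.0  # maximal sodium conductance (mS/cm^2)
    E_NA = 50.0   # sodium reversal potential (mV)

    def i_na(v, m, h):
        return G_NA * m**3 * h * (v - E_NA)

    print(i_na(-40.0, m=1.0, h=1.0))  # large inward (negative) current: channel open
    print(i_na(-40.0, m=1.0, h=0.0))  # 0.0: refractory, h gate closed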

At a neuromuscular junction, a single impulse from the presynaptic motor neuron suffices to make the postsynaptic muscle cell fire and contract. In the spinal cord, by contrast, at least 75 afferent neurons are required to produce firing. The picture is further complicated by variation in the membrane time constant between neurons, as some cells can integrate their EPSPs over a longer time window than others; the sketch below illustrates the effect.
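A leaky-integrator sketch (a standard simplification, with invented numbers) shows how the time constant matters: the same train of EPSPs piles up in a slowly decaying cell but not in a fast one.

    import math

    # Leaky-integrator sketch of temporal summation (numbers illustrative).
    # A long membrane time constant lets successive EPSPs pile up;
    # a short one lets each EPSP fade before the next arrives.

    def residual_after_train(tau_ms, epsp=4.0, interval_ms=10.0, n=5):
        """Residual depolarization after n EPSPs, measured just before
        the next one would arrive."""
        v = 0.0
        for _ in range(n):
            v += epsp                             # an EPSP arrives
            v *= math.exp(-interval_ms / tau_ms)  # decay until the next one
        return v

    print(round(residual_after_train(tau_ms=30.0), 2))  # slow cell: EPSPs summate
    print(round(residual_after_train(tau_ms=3.0), 2))   # fast cell: almost none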

While synaptic depression has been particularly widely observed at synapses in the developing brain, it has been speculated that these synapses shift to facilitation in adult brains.

Representations in neural networks

A receptive field is a small region within the entire visual field. Any given neuron only responds to a subset of stimuli within its receptive field. This property is called tuning. In the earlier visual areas, neurons have simpler tuning. For example, a neuron in V1 may fire in response to any vertical stimulus in its receptive field. In the higher visual areas, neurons have complex tuning. For example, in the fusiform gyrus, a neuron may only fire when a certain face appears in its receptive field. It is also known that many parts of the brain generate patterns of electrical activity that correspond closely to the layout of the retinal image (this is known as retinotopy). It seems further that imagery originating from the senses and internally generated imagery may have a shared ontology at higher levels of cortical processing (see e.g. Language of thought). For many parts of the brain, some characterization has been made of the tasks that correlate with their activity.
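Tuning is often idealized as a bell-shaped response curve over stimulus values. The sketch below (a common Gaussian idealization with invented constants, not a model from the article) describes a hypothetical V1 cell that responds maximally to vertical orientations.

    import math

    # Gaussian idealization of orientation tuning for a hypothetical V1 cell.
    # All constants are illustrative.

    PREFERRED = 90.0  # preferred orientation (degrees; vertical)
    WIDTH = 20.0      # tuning width (degrees)
    R_MAX = 50.0      # peak firing rate (spikes/s)

    def rate(orientation_deg):
        """Firing rate as a Gaussian function of stimulus orientation."""
        d = abs(orientation_deg - PREFERRED) % 180.0
        d = min(d, 180.0 - d)  # orientation is circular with period 180 degrees
        return R_MAX * math.exp(-(d * d) / (2 * WIDTH ** 2))

    for theta in (0, 45, 90, 135):
        print(theta, round(rate(theta), 1))  # peak response at 90 degrees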

In the brain, memories are very likely represented by patterns of activation amongst networks of neurons. However, how these representations are formed, retrieved and reach conscious awareness is not completely understood. Cognitive processes that characterize human intelligence are mainly ascribed to the emergent properties of the complex dynamical systems that neural networks constitute. Therefore, the study and modeling of these networks have attracted broad interest under different paradigms, and many different theories have been formulated to explain various aspects of their behavior. One of these, and the subject of several theories, is a property considered special to neural networks: the ability to learn complex patterns.
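The idea that a memory is a pattern of activation over a network can be illustrated with a Hopfield-style associative memory (a classical model, offered here purely as an illustration): a pattern is stored in the weights by a Hebbian outer-product rule, and a corrupted cue is driven back to the stored pattern by the network dynamics.

    import numpy as np

    # Hopfield-style associative memory (classical model, illustrative only).
    # A pattern of +/-1 activations is stored in the weights via a Hebbian
    # outer-product rule; a corrupted cue is then driven back to the memory.

    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

    W = np.outer(pattern, pattern).astype(float)  # Hebbian storage
    np.fill_diagonal(W, 0.0)                      # no self-connections

    cue = pattern.copy()
    cue[0] *= -1        # corrupt two units of the stored memory
    cue[3] *= -1

    state = cue
    for _ in range(5):  # synchronous updates until the state settles
        state = np.where(W @ state >= 0, 1, -1)

    print(np.array_equal(state, pattern))  # True: the memory is recovered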

Philosophical issues

Today most researchers believe in mental representations of some kind (representationalism) or, more generally, in particular mental states (cognitivism). For instance, perception can be viewed as information processing in which information is transferred from the world into the brain/mind, where it is further processed and related to other information (cognitive processes). A few others envisage a direct path back into the external world in the form of action (radical behaviorism).

Another issue, called the binding problem, relates to the question of how the activity of more or less distinct populations of neurons dealing with different aspects of perception is combined to form a unified perceptual experience that has qualia.

Neuronal networks are not full reconstructions of any cognitive system found in the human brain, and are therefore unlikely to form a complete representation of human perception. Some researchers argue that human perception must be studied as a whole; hence, the system cannot be taken apart and studied without destroying its original functionality. Furthermore, there is evidence that cognition is gained through a well-orchestrated barrage of sub-threshold synaptic activity throughout the network.

Study methods

Different neuroimaging techniques have been developed to investigate the activity of neural networks. The use of "brain scanners", or functional neuroimaging, to investigate the structure or function of the brain is common, either simply as a way of better assessing brain injury with high-resolution pictures, or by examining the relative activations of different brain areas. Such technologies include fMRI (functional magnetic resonance imaging), PET (positron emission tomography) and CAT (computed axial tomography). Functional neuroimaging uses specific brain imaging technologies to take scans of the brain, usually while a person is performing a particular task, in an attempt to understand how the activation of particular brain areas relates to the task. Functional neuroimaging relies especially on fMRI, which measures hemodynamic activity closely linked to neural activity, as well as on PET and electroencephalography (EEG).

Connectionist models serve as a test platform for different hypotheses of representation, information processing, and signal transmission. Lesioning studies in such models, e.g. in artificial neural networks where some of the nodes are deliberately destroyed to see how the network performs, can also yield important insights into the workings of cell assemblies; a sketch of such an experiment follows below. Similarly, simulations of dysfunctional neurotransmitters in neurological conditions (e.g., dopamine in the basal ganglia of Parkinson's patients) can yield insights into the underlying mechanisms for the patterns of cognitive deficits observed in the particular patient group. Predictions from these models can be tested in patients or via pharmacological manipulations, and these studies can in turn be used to inform the models, making the process recursive.
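A lesioning experiment of the kind just described can be sketched with a toy network (random weights, purely illustrative): hidden units are silenced at random, and the effect on the output is measured against the intact network.

    import numpy as np

    # Toy lesioning study (illustrative): silence a fraction of the hidden
    # units in a small random network and measure how much the output changes.

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(32, 16))  # input -> hidden weights
    W2 = rng.normal(size=(16, 4))   # hidden -> output weights

    def forward(x, lesion_mask=None):
        h = np.tanh(x @ W1)
        if lesion_mask is not None:
            h = h * lesion_mask     # "destroyed" units contribute nothing
        return h @ W2

    x = rng.normal(size=(1, 32))
    baseline = forward(x)

    for fraction in (0.0, 0.25, 0.5):
        mask = (rng.random(16) >= fraction).astype(float)
        damaged = forward(x, mask)
        print(fraction, float(np.abs(damaged - baseline).mean()))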
