Recurrent neural network

A recurrent neural network (RNN) is a class of neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior.

Recurrent neural networks must be approached differently from feedforward neural networks, both when analyzing their behavior and when training them. Recurrent neural networks can also behave chaotically; usually, dynamical systems theory is used to model and analyze them. While a feedforward network propagates data in one direction, from input to output, a recurrent network also propagates data from later processing stages back to earlier stages.

Architectures

Some of the most common recurrent neural network architectures are described here. The Elman and Jordan networks are also known as "simple recurrent networks" (SRN).

Elman network

This variation on the multilayer perceptron was invented by Jeff Elman. A three-layer network is used, with the addition of a set of "context units" in the input layer. There are connections from the middle (hidden) layer to these context units, fixed with a weight of one [Holk Cruse, Neural Networks as Cybernetic Systems, 2nd and revised edition, http://www.brains-minds-media.org/archive/615/bmm615.pdf]. At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule is applied. The fixed back-connections mean that the context units always maintain a copy of the previous values of the hidden units (since the values propagate over the connections before the learning rule is applied). The network can thus maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.
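The copy-back mechanism can be sketched in a few lines of NumPy (sizes, random weights, and random inputs here are illustrative, not part of Elman's original formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

W_in = rng.standard_normal((n_hidden, n_in)) * 0.1       # input -> hidden
W_ctx = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # context -> hidden
W_out = rng.standard_normal((n_out, n_hidden)) * 0.1     # hidden -> output

context = np.zeros(n_hidden)  # context units start at zero

def step(x, context):
    """One feed-forward pass; the context units hold the previous hidden
    activations, copied back over the fixed weight-one connections."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    output = W_out @ hidden
    return output, hidden  # the new context is simply a copy of `hidden`

outputs = []
for t in range(4):
    x = rng.standard_normal(n_in)
    y, context = step(x, context)
    outputs.append(y)
```

Because `context` carries the previous hidden state into the next step, the same input can produce different outputs depending on what came before.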

Jordan network

This network architecture is similar to the Elman network; however, the context units are fed from the output layer instead of the hidden layer.

MMC network

See [http://www.tech.plym.ac.uk/socce/ncpw9/Kuehn.pdf]

Hopfield network

The Hopfield network is a recurrent neural network in which all connections are symmetric. Invented by John Hopfield in 1982, this network guarantees that its dynamics will converge. If the connections are trained using Hebbian learning, the Hopfield network can serve as a robust content-addressable memory, resistant to connection alteration.
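A toy illustration of Hebbian storage and convergent recall (one stored bipolar pattern, asynchronous sign updates; the pattern and number of sweeps are arbitrary choices):

```python
import numpy as np

# One pattern with values in {+1, -1}.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = len(pattern)

# Hebbian learning: symmetric weights, no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted copy (two flipped units) and update until stable.
state = pattern.copy()
state[0] *= -1
state[3] *= -1
for _ in range(5):  # a few asynchronous sweeps suffice for this tiny network
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

recovered = np.array_equal(state, pattern)
```

The symmetric weights guarantee that each update can only lower the network's energy, so the dynamics settle into the stored pattern acting as a content-addressable memory.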

Echo state network

The echo state network (ESN) is a recurrent neural network with a sparsely connected random hidden layer. The weights of the output neurons are the only part of the network that is trained. ESNs are good at (re)producing temporal patterns.
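A minimal ESN sketch under common assumptions (sparse random reservoir rescaled to spectral radius below one, readout fit by least squares on a one-step sine-prediction task; all sizes and constants are arbitrary choices, not canonical values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 50

# Sparse random reservoir, rescaled so its spectral radius is below 1
# (a common way to obtain the "echo state" property).
W = rng.standard_normal((n_res, n_res))
W[rng.random((n_res, n_res)) > 0.2] = 0.0   # remove ~80% of connections
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, 1)) * 0.5

# Drive the reservoir with a sine wave; target = the next sample.
u = np.sin(0.2 * np.arange(400))
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    states.append(x.copy())
X = np.array(states[100:])   # discard an initial washout period
y = u[101:]                  # targets aligned with the collected states

# Train the readout only: ordinary least squares on the reservoir states.
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((X @ w_out - y) ** 2)
```

Only `w_out` is learned; the reservoir and input weights stay fixed at their random values, which is what makes ESN training fast.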

Long short term memory network

Long short-term memory (LSTM) is an artificial neural network structure that, unlike traditional RNNs, does not suffer from the vanishing gradient problem. It can therefore bridge long time delays and can handle signals that mix low- and high-frequency components.
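The gating that protects the gradient can be sketched as one step of a standard LSTM cell (the widely used formulation with a forget gate; sizes and random weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [input, previous hidden state].
Wi, Wf, Wo, Wc = (rng.standard_normal((n_hid, n_in + n_hid)) * 0.1
                  for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(Wi @ z)                    # input gate
    f = sigmoid(Wf @ z)                    # forget gate
    o = sigmoid(Wo @ z)                    # output gate
    c = f * c_prev + i * np.tanh(Wc @ z)   # additive, gated cell update
    h = o * np.tanh(c)                     # hidden state
    return h, c

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):
    h, c = lstm_step(rng.standard_normal(n_in), h, c)
```

Because the cell state `c` is updated additively rather than squashed through a nonlinearity at every step, error signals can flow across many time steps without vanishing.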

RNN with parametric bias

In this setup the recurrent neural network with parametric bias (RNNPB) is trained to reproduce a sequence given a constant bias input. The network can learn different sequences with different parametric biases. With a trained network, it is also possible to find the bias associated with an observed sequence: the sequence is backpropagated through the network to recover the bias that would produce it.

Continuous-time RNN

A continuous-time recurrent neural network (CTRNN) models its neurons' activations as evolving continuously in time, with each unit governed by an ordinary differential equation rather than a discrete update.

Hierarchical RNN

Here RNN are sparsely connected together through bottlenecks with the idea to isolate different hierarchical functions to different parts of the composite network. [ [http://www.bdc.brain.riken.go.jp/~rpaine/PaineTaniSAB2004_h.pdf Dynamic Representation of Movement Primitives in an Evolved Recurrent Neural Network ] ] [ [http://www-clmc.usc.edu/publications/P/paine-NN2004.pdf doi:10.1016/j.neunet.2004.08.005 ] ] [http://adb.sagepub.com/cgi/reprint/13/3/211.pdf]

Recurrent Multilayer Perceptron

(RMLP)

Pollack’s Sequential Cascaded Networks

Training

Training in recurrent neural networks is generally very slow.

Backpropagation through time (BPTT)

In this approach the recurrent network is unfolded in time for a number of steps and then trained with backpropagation as if it were a feedforward network.
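A minimal illustration on a scalar RNN, checking the gradient obtained from the unrolled network against finite differences (the network, loss, and values are toy choices):

```python
import numpy as np

# h_t = tanh(w*h_{t-1} + u*x_t), loss = (h_T - target)^2.
def loss(w, u, xs, target):
    h = 0.0
    for x in xs:
        h = np.tanh(w * h + u * x)
    return (h - target) ** 2

def bptt_grad_w(w, u, xs, target):
    # Forward pass, storing the pre-activations of the unrolled network.
    hs = [0.0]
    pre = []
    for x in xs:
        a = w * hs[-1] + u * x
        pre.append(a)
        hs.append(np.tanh(a))
    # Backward pass through the unrolled steps, accumulating dL/dw.
    dh = 2.0 * (hs[-1] - target)
    gw = 0.0
    for t in reversed(range(len(xs))):
        da = dh * (1.0 - np.tanh(pre[t]) ** 2)
        gw += da * hs[t]     # w is shared across every unrolled step
        dh = da * w
    return gw

xs = [0.5, -0.3, 0.8]
w, u, target = 0.7, 1.2, 0.4
g = bptt_grad_w(w, u, xs, target)
eps = 1e-6
g_num = (loss(w + eps, u, xs, target) - loss(w - eps, u, xs, target)) / (2 * eps)
```

The key point is that the recurrent weight `w` is shared across all unrolled copies, so its gradient is the sum of the per-step contributions.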

Real-time recurrent learning (RTRL)

Unlike BPTT, this algorithm is "local in time but not local in space" [J.C. Principe, N.R. Euliano, W.C. Lefebvre, Neural and Adaptive Systems: Fundamentals through Simulation].

Genetic algorithms

Since gradient-based RNN learning is very slow, genetic algorithms are a feasible alternative for weight optimization, especially in unstructured networks.
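A sketch of evolutionary weight search for a tiny RNN (a simple mutation-selection loop standing in for a full genetic algorithm with crossover; the prediction task and all hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def rnn_error(params, seq):
    """Next-step prediction error of a scalar RNN with weights (w, u, v)."""
    w, u, v = params
    h, err = 0.0, 0.0
    for t in range(len(seq) - 1):
        h = np.tanh(w * h + u * seq[t])
        err += (v * h - seq[t + 1]) ** 2
    return err

seq = np.sin(0.5 * np.arange(30))
best = rng.standard_normal(3)
best_err0 = rnn_error(best, seq)   # fitness of the random initial individual
best_err = best_err0
for generation in range(200):
    # Mutate the current best individual; keep any child that improves fitness.
    children = best + 0.2 * rng.standard_normal((10, 3))
    for child in children:
        e = rnn_error(child, seq)
        if e < best_err:
            best, best_err = child.copy(), e
```

No gradients are computed anywhere; the search only needs to evaluate the network's error, which is why this approach also works for unstructured or non-differentiable architectures.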

References

* Mandic, D. & Chambers, J. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley.
* Elman, J.L. (1990). "Finding Structure in Time". Cognitive Science 14: 179–211. doi:10.1016/0364-0213(90)90002-E.


Wikimedia Foundation. 2010.
