Neuroevolution
Not to be confused with Evolution of nervous systems or Neural development.
Neuroevolution, or neuro-evolution, is a form of machine learning that uses evolutionary algorithms to train artificial neural networks. It is useful for applications such as games and robot motor control, where it is easy to measure a network's performance at a task but difficult or impossible to create a syllabus of correct input–output pairs for use with a supervised learning algorithm. In the classification scheme for neural network learning, these methods usually belong to the reinforcement learning category.[citation needed]
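The basic idea can be illustrated with a minimal sketch (not any published algorithm; the topology, operator names, and constants below are invented for illustration): a fixed-topology network for the XOR task whose weights are evolved by a simple (1+4) evolution strategy, using only a scalar fitness score rather than gradient information or target outputs per connection.

```python
import math
import random

random.seed(0)

# Toy task: XOR. The network is scored only by a scalar fitness
# (negative total squared error); no gradients are used.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # fixed 2-2-1 topology, 9 weights (6 for the hidden layer incl.
    # biases, 3 for the output neuron)
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def mutate(w, sigma=0.5):
    return [wi + random.gauss(0, sigma) for wi in w]

# (1+4) evolution strategy: keep the best of the parent and 4 mutants.
parent = [random.uniform(-1, 1) for _ in range(9)]
initial_error = -fitness(parent)
for generation in range(2000):
    parent = max([parent] + [mutate(parent) for _ in range(4)], key=fitness)
final_error = -fitness(parent)
print(final_error <= initial_error)  # prints True: elitism never worsens fitness
```

Because the parent survives each generation, the error is non-increasing; the only feedback the algorithm receives is the fitness value, which is what makes the approach applicable when supervised targets are unavailable.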
Features
There are many neuroevolutionary algorithms. A distinction is made between those that evolve the values of the connection weights for a network of pre-specified topology, vs. those that evolve the topology of the network in addition to the weights. Although there are no standardized terms for this distinction as a whole, adding or removing a network's connections during evolution may be referred to as complexification or simplification, respectively.[1] Networks that have both their connection weights and topology evolved are referred to as TWEANNs (Topology & Weight Evolving Artificial Neural Networks).
A further distinction is made between methods that evolve the structure (topology) of the neural networks in parallel to the parameters (e.g. synaptic weights) and those that develop them separately.
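A hypothetical TWEANN genome makes this distinction concrete: mutating connection weights changes only the parameters, while add-connection and add-node mutations complexify the topology itself. The genome layout and function names below are illustrative only, not taken from any cited method:

```python
import random

random.seed(1)

# A sketch of a TWEANN genome: node ids plus a list of connection genes.
# Weight mutation changes parameters; the structural mutations grow
# the topology (complexification).

def new_genome():
    # minimal network: inputs 0 and 1 feed output node 2
    return {
        "nodes": [0, 1, 2],
        "conns": [  # each gene: [source, target, weight, enabled]
            [0, 2, random.uniform(-1, 1), True],
            [1, 2, random.uniform(-1, 1), True],
        ],
    }

def mutate_weights(g, sigma=0.3):
    for c in g["conns"]:
        c[2] += random.gauss(0, sigma)

def add_connection(g):
    src, dst = random.sample(g["nodes"], 2)
    if not any(c[0] == src and c[1] == dst for c in g["conns"]):
        g["conns"].append([src, dst, random.uniform(-1, 1), True])

def add_node(g):
    # split an existing connection: a->b becomes a->new->b
    c = random.choice([c for c in g["conns"] if c[3]])
    c[3] = False                      # disable the old gene
    new_id = max(g["nodes"]) + 1
    g["nodes"].append(new_id)
    g["conns"].append([c[0], new_id, 1.0, True])
    g["conns"].append([new_id, c[1], c[2], True])

g = new_genome()
add_node(g)        # structural mutation: topology grows
add_connection(g)  # structural mutation: may add a new link
mutate_weights(g)  # parametric mutation: weights only
print(len(g["nodes"]), len(g["conns"]))
```

A weight-only neuroevolution method would use just `mutate_weights` on a genome of fixed shape; a TWEANN applies all three operators, so the search explores topologies as well as parameter values.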
Direct and Indirect Encoding of Networks
Evolutionary algorithms operate on a population of genotypes (in some algorithms these may be divided into species with differing genomes). In neuroevolution, a genotype is some representation of a neural network, which is the corresponding phenotype.
In direct encoding schemes the genotype and the phenotype coincide: every neuron and connection is specified directly and explicitly in the genotype. In contrast, in indirect encoding schemes the genotype specifies rules or some other structure for generating the network.[2]
Indirect encodings are often used to achieve several aims:[2][3][4][5][6]
- allow recurring structures or features in the network to form (modularity and other regularities);
- compress the phenotype into a smaller genotype, providing a smaller search space;
- map the search space (genome) to the problem domain.
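The compression aim can be sketched as follows, using an invented weight-generation rule (loosely CPPN-flavoured, but not any published encoding): a genotype of three numbers generates an entire 20×20 weight matrix, so evolution searches over three genes rather than 400 independent weights, and the resulting matrix inherits regular structure from the rule.

```python
import math

# Indirect encoding sketch: the genotype is a handful of parameters
# for a rule mapping neuron coordinates to a connection weight,
# rather than a direct list of every weight.

def make_weight_rule(genes):
    a, b, c = genes
    def weight(i, j):  # i, j: source/target neuron positions in [0, 1]
        return a * math.sin(b * math.pi * (i + j)) + c * (i - j)
    return weight

genes = (0.8, 2.0, 0.5)   # three genes ...
rule = make_weight_rule(genes)

n = 20                     # ... generate a 20x20 weight matrix
W = [[rule(i / n, j / n) for j in range(n)] for i in range(n)]
print(len(genes), len(W) * len(W[0]))  # prints: 3 400
```

Mutating one gene changes the whole pattern coherently, which is how indirect encodings produce the regularities and compact search spaces listed above.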
Taxonomy of Embryogenic Systems for Indirect Encoding
Traditionally, indirect encodings that employ artificial embryogeny (also known as artificial development) have been categorised along the lines of a grammatical approach versus a cell chemistry approach.[5] The former evolves sets of rules in the form of grammatical rewrite systems; the latter attempts to mimic how physical structures emerge in biology through gene expression. However, this distinction is largely superficial: although the two approaches often differ in a number of important ways, many of these differences exist only for historical reasons rather than because of any intrinsic requirement of either approach. Indirect encoding systems often use aspects of both approaches.
Stanley and Miikkulainen[5] propose a taxonomy for embryogenic systems that is intended to reflect their underlying properties. The taxonomy identifies five continuous dimensions along which any embryogenic system can be placed and thus compared with others:
- Cell (Neuron) Fate: the final characteristics and role of the cell in the mature phenotype. This dimension ranges from a single method for determining the fate of a cell to having many determination methods.
- Targeting: the method by which connections are directed from source cells to target cells. This ranges from specific targeting (source and target are explicitly identified) to only relative targeting (e.g. based on locations of cells relative to each other).
- Heterochrony: the timing and ordering of events during embryogeny. Ranges from no mechanisms for changing the timing of events to many mechanisms.
- Canalization: how tolerant the genome is to mutations (brittleness). Ranges from requiring precise genotypic instructions to a high tolerance of imprecision or mutation.
- Complexification: the ability of the system (including evolutionary algorithm and genotype to phenotype mapping) to allow complexification of the genome (and hence phenotype) over time. Ranges from allowing only fixed-size genomes to allowing highly variable length genomes.
Examples
Examples of neuroevolution methods (those with direct encodings are necessarily non-embryogenic):

| Method | Encoding | Evolutionary algorithm | Aspects evolved |
|---|---|---|---|
| Cellular Encoding (CE) by F. Gruau, 1994[7] | Indirect, embryogenic (grammar tree using S-expressions) | Genetic programming | Structure and parameters (simultaneous, complexification) |
| GNARL by Angeline et al., 1994[8] | Direct | Evolutionary programming | Structure and parameters (simultaneous, complexification) |
| EPNet by Yao and Liu, 1997[9] | | | |
| NeuroEvolution of Augmenting Topologies (NEAT) by Stanley and Miikkulainen, 2002[10][11] | Direct | Genetic algorithm. Tracks genes with historical markings to allow crossover between different topologies, protects innovation via speciation. | Structure and parameters (simultaneous, complexification) |
| Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) by Stanley, D'Ambrosio and Gauci, 2008[3] | Indirect, non-embryogenic (spatial patterns generated by a compositional pattern-producing network (CPPN) within a hypercube are interpreted as connectivity patterns in a lower-dimensional space) | Genetic algorithm. The NEAT algorithm (above) is used to evolve the CPPN. | Parameters; structure fixed (functionally fully connected) |
| Evolutionary Acquisition of Neural Topologies (EANT/EANT2) by Kassahun and Sommer, 2005[12] / Siebel and Sommer, 2007[13] | Direct and indirect, potentially embryogenic (Common Genetic Encoding[2]) | Evolutionary programming / Evolution strategies | Structure and parameters (separately, complexification) |

See also
- NeuroEvolution of Augmenting Topologies (NEAT)
- HyperNEAT (a generative version of NEAT)
- Evolutionary Acquisition of Neural Topologies (EANT/EANT2)
References
- ^ http://www.ucs.louisiana.edu/~dxj2534/james_gecco04.pdf
- ^ a b c Yohannes Kassahun, Mark Edgington, Jan Hendrik Metzen, Gerald Sommer and Frank Kirchner. Common Genetic Encoding for Both Direct and Indirect Encodings of Networks. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2007), London, UK, 1029-1036, 2007.
- ^ a b Gauci, Stanley. Generating Large-Scale Neural Networks Through Discovering Geometric Regularities. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2007). New York, NY: ACM. [1]
- ^ F. Gruau. Neural Network Synthesis Using Cellular Encoding and the Genetic Algorithm. PhD thesis, Ecole Normale Superieure de Lyon, Laboratoire de l’Informatique du Parallelisme, France, January 1994.
- ^ a b c Stanley, Miikkulainen. A Taxonomy for Artificial Embryogeny. The MIT Press Journals, 2003. [2]
- ^ Clune J, Stanley KO, Pennock RT, and Ofria C. On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation, 2011.
- ^ Gruau, F. (1994). Neural network synthesis using cellular encoding and the genetic algorithm. Doctoral dissertation, Ecole Normale Superieure de Lyon, France.
- ^ Peter J Angeline, Gregory M Saunders, and Jordan B Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5:54–65, 1994. [3]
- ^ Xin Yao and Yong Liu. A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks, 8(3):694–713, May 1997. [4]
- ^ Kenneth O. Stanley, Bobby D. Bryant and Risto Miikkulainen. Real-Time Neuroevolution in the NERO Video Game. IEEE Transactions on Evolutionary Computation, 9(6):653–668, 2005. http://nn.cs.utexas.edu/downloads/papers/stanley.ieeetec05.pdf
- ^ Kenneth O. Stanley and Risto Miikkulainen. Evolving Neural Networks through Augmenting Topologies. Evolutionary Computation, 10(2):99–127, 2002. http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
- ^ Yohannes Kassahun and Gerald Sommer. Efficient reinforcement learning through evolutionary acquisition of neural topologies. In Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN 2005), pages 259–266, Bruges, Belgium, April 2005. [5]
- ^ Nils T Siebel and Gerald Sommer. Evolutionary reinforcement learning of artificial neural networks. International Journal of Hybrid Intelligent Systems 4(3): 171-183, October 2007. [6]
External links
- University of Texas neuroevolution page (has downloadable papers on NEAT and applications)
- ANNEvolve, an open-source AI research project (downloadable source code in C and Python for a variety of problems, plus a tutorial and miscellaneous writings and illustrations)
- Web page on evolutionary learning with EANT/EANT2 (information and articles on EANT/EANT2 with applications to robot learning)