Auto-encoder

An auto-encoder is an artificial neural network used for learning efficient codings. The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data, which means it can be used for dimensionality reduction; more specifically, it is a feature extraction method. Auto-encoders use three or more layers:

* An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
* A number of considerably smaller hidden layers, which form the encoding.
* An output layer, where each neuron has the same meaning as in the input layer.

If linear neurons are used, an auto-encoder is very similar to principal component analysis (PCA): both learn a low-dimensional linear projection of the data.
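To make this layer structure concrete, the following Python/NumPy sketch builds a single-hidden-layer auto-encoder that maps a 64-value input (for example, an 8×8 pixel patch) down to a 16-value encoding and back. The dimensions, sigmoid non-linearity, and random weights are illustrative assumptions, not part of any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 64, 16            # e.g. an 8x8 pixel patch compressed to 16 values

# Encoder and decoder parameters (random values for illustration only)
W_enc = rng.normal(0, 0.1, (n_input, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_hidden, n_input))
b_dec = np.zeros(n_input)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(x):
    # Hidden layer: the compressed representation (encoding)
    return sigmoid(x @ W_enc + b_enc)

def decode(h):
    # Output layer: same dimensionality and meaning as the input layer
    return sigmoid(h @ W_dec + b_dec)

x = rng.random(n_input)               # a dummy "image patch"
reconstruction = decode(encode(x))
print(reconstruction.shape)           # (64,) -- same shape as the input
```

With untrained weights the reconstruction is of course meaningless; training (below) adjusts the weights so that the output reproduces the input as closely as possible.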

Training

An auto-encoder is often trained using one of the many backpropagation variants (conjugate gradient method, steepest descent, etc.). Though often reasonably effective, backpropagation has fundamental problems when used to train networks with many hidden layers: by the time the errors have been backpropagated to the first few layers, they are minuscule and have little effect, so the network almost always learns to reconstruct the average of all the training data. More advanced backpropagation methods (such as the conjugate gradient method) help with this to some degree, but learning remains very slow and the solutions poor. This problem can be remedied by using initial weights that approximate the final solution; the process of finding these initial weights is often called pretraining.
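As a rough illustration of this standard training setup, the sketch below trains a linear auto-encoder with plain gradient descent on the squared reconstruction error; the toy data, layer sizes, learning rate, and iteration count are assumptions chosen for illustration. The two gradient expressions are what backpropagation computes for this two-layer linear network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data lying in an 8-dimensional subspace of a 64-dimensional space,
# so a 16-unit encoding can in principle reconstruct it almost perfectly.
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 64)) / 8.0

n_hidden = 16
W_enc = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W_dec = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights

lr = 0.1
for epoch in range(1000):
    H = X @ W_enc                            # encode
    X_hat = H @ W_dec                        # decode
    err = X_hat - X                          # reconstruction error
    # Backpropagate the squared-error loss through decoder and encoder weights.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))
```

With only two layers this works reasonably well; the vanishing-gradient problem described above appears when many more hidden layers are stacked between input and output.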

A pretraining technique developed by Geoffrey Hinton for training many-layered "deep" auto-encoders treats each neighboring pair of layers as a restricted Boltzmann machine, pretraining the stack to approximate a good solution, and then fine-tunes the whole network with backpropagation.
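The sketch below gives a rough sense of that greedy layer-wise procedure, using one-step contrastive divergence (CD-1) to train each restricted Boltzmann machine in turn. The binary toy data, layer sizes, and hyperparameters are assumptions, and this is a simplification of the published method rather than a faithful reproduction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.05):
    """Train one binary RBM with CD-1 (contrastive divergence, one Gibbs step)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden activations driven by the data.
        ph0 = sigmoid(data @ W + b_hid)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: reconstruct the visible units, then re-infer the hiddens.
        pv1 = sigmoid(h0 @ W.T + b_vis)
        ph1 = sigmoid(pv1 @ W + b_hid)
        # CD-1 update: data statistics minus reconstruction statistics.
        W += lr * (data.T @ ph0 - pv1.T @ ph1) / len(data)
        b_vis += lr * (data - pv1).mean(axis=0)
        b_hid += lr * (ph0 - ph1).mean(axis=0)
    return W, b_hid

# Greedy layer-wise pretraining: each layer's hidden probabilities
# become the "data" for the next RBM in the stack.
X = (rng.random((200, 64)) > 0.5).astype(float)   # dummy binary data
layer_sizes = [32, 16, 8]
weights, layer_input = [], X
for n_hidden in layer_sizes:
    W, b_hid = train_rbm(layer_input, n_hidden)
    weights.append(W)        # would initialise the encoder half of the auto-encoder
    layer_input = sigmoid(layer_input @ W + b_hid)

# The decoder half is initialised with the transposed weights ("unrolling"),
# after which ordinary backpropagation fine-tunes the whole auto-encoder.
```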

External links

* [http://hebb.mit.edu/people/seung/talks/continuous/sld007.htm Presentation introducing auto-encoders for number recognition]
* [http://www.sciencemag.org/cgi/content/abstract/313/5786/504 Reducing the Dimensionality of Data with Neural Networks] (Science, 28 July 2006, Hinton & Salakhutdinov)

