Generalization error

The generalization error of a machine learning model is a function that measures how far the student machine is from the teacher machine, averaged over the entire set of possible data that the teacher can generate, after each iteration of the learning process. The name reflects what the function measures: the capacity of a machine trained with the specified algorithm to infer (or "generalize") the rule the teacher machine uses to generate data, from only a few examples.

The performance of a machine learning algorithm is assessed by plotting the generalization error at each stage of the learning process; such plots are called learning curves.
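As a minimal sketch of how a learning curve arises in the teacher-student setting described here (all names and parameter values below are illustrative assumptions, not part of the original text): a random "teacher" perceptron labels the data, a "student" perceptron learns with the classical perceptron rule, and the test error recorded after each pass over the data traces out the learning curve.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_train, n_test = 20, 200, 5000  # illustrative sizes

# A "teacher" perceptron generates all labels
w_teacher = rng.standard_normal(n)
X_train = rng.standard_normal((n_train, n))
y_train = np.sign(X_train @ w_teacher)
X_test = rng.standard_normal((n_test, n))
y_test = np.sign(X_test @ w_teacher)

# The student learns with the classical perceptron rule; the test
# error recorded after each epoch is one point on the learning curve
w_student = np.zeros(n)
curve = []
for epoch in range(20):
    for x, y in zip(X_train, y_train):
        if np.sign(x @ w_student) != y:
            w_student += y * x  # perceptron update on a mistake
    curve.append(np.mean(np.sign(X_test @ w_student) != y_test))

print(curve)  # errors should tend to shrink as learning proceeds
```

Plotting `curve` against the epoch number gives the learning curve; because the teacher itself is a perceptron, the data are linearly separable and the student's error keeps falling as it sees more updates.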

The generalization error of a perceptron is the probability that the student perceptron classifies an example differently from the teacher. It is determined by the overlap of the student and teacher synaptic vectors and is a function of their scalar product.
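For spherically symmetric inputs this probability has a well-known closed form: it equals the angle between the two synaptic vectors divided by π, i.e. ε = arccos(R)/π, where R is the normalized scalar product (overlap) of the teacher and student weight vectors. The sketch below (the random vectors and sample sizes are illustrative assumptions) checks the formula against a direct Monte Carlo estimate of the disagreement probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # input dimension, chosen arbitrarily for illustration

# Hypothetical teacher and student synaptic vectors
w_teacher = rng.standard_normal(n)
w_student = rng.standard_normal(n)

# Normalized overlap R of the two synaptic vectors
R = w_teacher @ w_student / (
    np.linalg.norm(w_teacher) * np.linalg.norm(w_student)
)

# Closed form: disagreement probability = angle between vectors / pi
eps_theory = np.arccos(R) / np.pi

# Monte Carlo check: fraction of random examples on which the two
# perceptrons output different labels
X = rng.standard_normal((200_000, n))
eps_mc = np.mean(np.sign(X @ w_teacher) != np.sign(X @ w_student))

print(eps_theory, eps_mc)  # the two estimates should nearly agree
```

The geometric picture behind the formula: an example is classified differently exactly when it falls in the "wedge" between the two separating hyperplanes, and for isotropic inputs that wedge has probability mass proportional to the angle between the weight vectors.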


Wikimedia Foundation. 2010.


