- Winnow (algorithm)
The winnow algorithm [Littlestone, N. (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". Machine Learning 2, 285–318] is a technique from machine learning. It is closely related to the perceptron, but it uses a multiplicative weight-update scheme that allows it to perform much better than the perceptron when many dimensions are irrelevant (hence its name). It is not a sophisticated algorithm, but it scales well to high-dimensional data. During training, winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane, which it can then use to classify novel examples. It can also be used in the online learning setting, where the learning and the classification phases are not clearly separated.
The algorithm
The basic algorithm, Winnow1, is given as follows. The instance space is $\{0,1\}^n$, so each instance is a vector of $n$ Boolean features. The algorithm maintains non-negative weights $w_i$ for $i \in \{1, \dots, n\}$, which are initially set to 1. When the learner is given an example $(x_1, \dots, x_n)$, it applies the following prediction rule:
* If $\sum_{i=1}^n w_i x_i > \Theta$, then it predicts 1
* Otherwise it predicts 0

where $\Theta$ is a real number called the 'threshold'. Good bounds are obtained if $\Theta = n/2$.
The update rule is (loosely):
* If an example is correctly classified, do nothing.
* If an example is predicted to be 1 but the correct result was 0, all of the weights involved in the mistake (the weights $w_i$ with $x_i = 1$) are set to 0 (demotion step).
* If an example is predicted to be 0 but the correct result was 1, all of the weights involved in the mistake are multiplied by $\alpha$ (promotion step).

A good value for $\alpha$ is 2.
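The prediction and update rules above can be sketched as a small class. This is a minimal illustration (the class and method names are invented, not taken from Littlestone's paper), assuming Boolean feature vectors and the default $\Theta = n/2$, $\alpha = 2$:

```python
class Winnow1:
    """Sketch of Winnow1: multiplicative promotion, zeroing demotion."""

    def __init__(self, n, alpha=2.0, theta=None):
        self.alpha = alpha
        self.theta = n / 2 if theta is None else theta  # Theta = n/2 gives good bounds
        self.w = [1.0] * n                              # all weights start at 1

    def predict(self, x):
        # Predict 1 iff the weighted sum of active features exceeds the threshold.
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > self.theta else 0

    def update(self, x, y):
        # Mistake-driven update: only weights of active features (x_i = 1) change.
        yhat = self.predict(x)
        if yhat == y:
            return                            # correctly classified: do nothing
        for i, xi in enumerate(x):
            if xi:
                if yhat == 1 and y == 0:
                    self.w[i] = 0.0           # demotion step
                else:
                    self.w[i] *= self.alpha   # promotion step
```

For example, after one promotion on `[1, 0, 0]` (true label 1) and one demotion on `[0, 1, 1]` (true label 0), the learner separates these two instances.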
Variations are also used. For example, Winnow2 is the same as Winnow1 except that in the demotion step the weights are divided by $\alpha$ instead of being set to 0.
Mistake bounds
If Winnow1 is run with $\alpha > 1$ and $\Theta \geq 1/\alpha$ on a target function that is a $k$-literal monotone disjunction given by $f(x_1, \dots, x_n) = x_{i_1} \lor \cdots \lor x_{i_k}$, then for any sequence of instances the total number of mistakes is bounded by $\alpha k (\log_\alpha \Theta + 1) + n/\Theta$.
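The bound can be checked empirically by running Winnow1 against a $k$-literal monotone disjunction and counting mistakes. The following sketch (the function name and experiment parameters are illustrative assumptions) does this on random Boolean instances:

```python
import math
import random

def winnow1_mistakes(n, k, alpha=2.0, theta=None, rounds=2000, seed=0):
    """Run Winnow1 on random instances labelled by the disjunction
    x_1 or ... or x_k, and count the total number of prediction mistakes."""
    theta = n / 2 if theta is None else theta
    rng = random.Random(seed)
    w = [1.0] * n
    mistakes = 0
    for _ in range(rounds):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = 1 if any(x[:k]) else 0  # k-literal monotone disjunction as target
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
        if yhat != y:
            mistakes += 1
            for i in range(n):
                if x[i]:  # only weights involved in the mistake change
                    w[i] = w[i] * alpha if y == 1 else 0.0
    return mistakes
```

With $n = 32$, $k = 2$, $\alpha = 2$ and $\Theta = n/2 = 16$, the bound evaluates to $2 \cdot 2 \cdot (\log_2 16 + 1) + 32/16 = 22$, and the mistake count of any run should stay at or below that value.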