Weighted Majority Algorithm

In machine learning, the Weighted Majority Algorithm (WMA) is a meta-learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithm, classifier, or even a real human expert. The algorithm assumes no prior knowledge about the accuracy of the algorithms in the pool, but that there is sufficient reason to believe one or more of them will perform well.

Assume that the problem is a binary decision problem. To construct the compound algorithm, each algorithm in the pool is assigned a positive weight. The compound algorithm then collects weighted votes from all the algorithms in the pool and outputs the prediction with the higher total vote. If the compound algorithm makes a mistake, each algorithm in the pool that contributed to the wrong prediction has its weight discounted by a factor β, where 0 < β < 1.
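
This update rule can be written in a few lines. Below is a minimal sketch in Python, assuming 0/1 labels, one prediction per algorithm per round, and the discount-on-compound-mistake rule described above; the function name, data layout, and default value of β are illustrative choices, not part of the original description.

    def weighted_majority(expert_predictions, outcomes, beta=0.5):
        """expert_predictions[i][t] is algorithm i's 0/1 prediction at round t;
        outcomes[t] is the true 0/1 label at round t; 0 < beta < 1."""
        n_experts = len(expert_predictions)
        weights = [1.0] * n_experts      # every algorithm starts with the same positive weight
        mistakes = 0
        for t, y in enumerate(outcomes):
            # Collect the weighted vote for each of the two options.
            votes = [0.0, 0.0]
            for i in range(n_experts):
                votes[expert_predictions[i][t]] += weights[i]
            prediction = 0 if votes[0] >= votes[1] else 1   # side with the higher weighted vote
            if prediction != y:
                mistakes += 1
                # The compound algorithm erred: discount the algorithms that
                # voted for the wrong side by the factor beta.
                for i in range(n_experts):
                    if expert_predictions[i][t] != y:
                        weights[i] *= beta
        return mistakes, weights

    # Hypothetical usage: three algorithms, four rounds.
    # m, w = weighted_majority([[1, 1, 0, 1], [0, 1, 0, 0], [1, 0, 0, 1]], [1, 1, 0, 1])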

It can be shown that the upper bound on the number of mistakes made on a given sequence of predictions, using a pool of algorithms A, is

    O(log|A| + m)

if one algorithm in A makes at most m mistakes.
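
This bound follows from a standard potential argument on the total weight W, sketched here under the usual assumptions (all initial weights equal to 1, and the β-discount applied as described above). Whenever the compound algorithm errs, at least half of the total weight backed the wrong prediction and is multiplied by β, so W shrinks by a factor of at most (1+β)/2, while the weight of an algorithm that makes at most m mistakes never falls below β^m. Hence, after M compound mistakes,

    \beta^{m} \;\le\; W \;\le\; |A|\left(\frac{1+\beta}{2}\right)^{M}
    \quad\Longrightarrow\quad
    M \;\le\; \frac{m\,\log(1/\beta) + \log|A|}{\log\bigl(2/(1+\beta)\bigr)} \;=\; O(m + \log|A|).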

There are many variations of the Weighted Majority Algorithm that handle different situations, such as shifting targets, infinite pools, or randomized predictions. The core mechanism remains similar, with the final performance of the compound algorithm bounded by a function of the performance of the best-performing algorithm in the pool.
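
For example, in the randomized variant, the compound prediction follows a single algorithm drawn with probability proportional to its weight rather than a deterministic vote. A minimal one-round sketch, with illustrative names and an assumed per-round multiplicative update:

    import random

    def randomized_wm_step(weights, expert_preds, outcome, beta=0.5):
        """One round: follow a randomly chosen algorithm (probability proportional
        to its weight), then discount every algorithm that was wrong. The signature
        and update rule here are illustrative assumptions."""
        chosen = random.choices(range(len(weights)), weights=weights, k=1)[0]
        prediction = expert_preds[chosen]
        new_weights = [w * beta if p != outcome else w
                       for w, p in zip(weights, expert_preds)]
        return prediction, new_weights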

References

* Littlestone, N. & Warmuth, M. (1989). "The Weighted Majority Algorithm". IEEE Symposium on Foundations of Computer Science.

