Inductive bias

The inductive bias of a learning algorithm is the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered (Mitchell, 1980).

In machine learning, one aims to construct algorithms that are able to "learn" to predict a certain target output. To achieve this, the learning algorithm is presented with training examples that demonstrate the intended relation between input and output values. The learner is then expected to approximate the correct output, even for examples it has not seen during training. Without any additional assumptions, this problem cannot be solved exactly, since unseen situations might have arbitrary output values. The necessary assumptions about the nature of the target function are subsumed under the term "inductive bias" (Mitchell, 1980; desJardins and Gordon, 1995).
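
As a concrete illustration, consider the following minimal Python sketch (not from the original article; the toy data and the choice of a linear hypothesis class are assumptions made purely for demonstration). The decision to fit a straight line is the inductive bias: it is what lets the learner commit to an output for an input it has never seen.

 import numpy as np

 # Training examples demonstrating the intended input-output relation
 # (toy data chosen for illustration).
 x_train = np.array([0.0, 1.0, 2.0, 3.0])
 y_train = np.array([1.0, 3.0, 5.0, 7.0])

 # Inductive bias: assume the target function is linear. Without some
 # such assumption, the output at an unseen input could be anything.
 slope, intercept = np.polyfit(x_train, y_train, deg=1)

 x_unseen = 10.0
 print(slope * x_unseen + intercept)  # ~21.0 under the linearity assumption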

A classic example of an inductive bias is Occam's razor, which assumes that the simplest consistent hypothesis about the target function is actually the best. Here "consistent" means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm.

Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. Unfortunately, this strict formalism fails in many practical cases, where the inductive bias can only be given as a rough description (e.g. in the case of neural networks), or not at all.
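
One way to write this schematically (the notation is our own shorthand for the idea just described, not taken from the cited papers): let D denote the training data, x an unseen instance, L(x, D) the output the learner produces for x after training on D, and ⊢ logical entailment. The inductive bias is then a formula B such that

 \forall x \in X : \quad (B \wedge D \wedge x) \vdash L(x, D)

where X is the space of all instances.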

Types of inductive biases

The following is a list of common inductive biases in machine learning algorithms.
* Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier (sketched after this list).
* Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error (sketched after this list). Although cross-validation may seem to be free of bias, the No Free Lunch theorems show that cross-validation must be biased.
* Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary (sketched after this list). This is the bias used in support vector machines. The assumption is that distinct classes tend to be separated by wide boundaries.
* Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. The assumption is that simpler hypotheses are more likely to be true. See Occam's razor.
* Minimum features: unless there is good evidence that a feature is useful, it should be deleted. This is the assumption behind feature selection algorithms.
* Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Given a case for which the class is unknown, guess that it belongs to the same class as the majority in its immediate neighborhood (sketched after this list). This is the bias used in the k-nearest neighbors algorithm. The assumption is that cases that are near each other tend to belong to the same class.
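
A minimal Python sketch of the conditional-independence bias (the binary toy data and the smoothing constant are assumptions for illustration): Naive Bayes scores each class as if every feature were independent of the others given the class.

 import numpy as np

 # Toy binary features and class labels (made-up data for illustration).
 X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 1]])
 y = np.array([1, 1, 0, 0, 1, 0])

 def naive_bayes_predict(X_train, y_train, x_new, alpha=1.0):
     """Score each class assuming the features are conditionally
     independent given the class (with Laplace smoothing)."""
     classes = np.unique(y_train)
     log_scores = []
     for c in classes:
         Xc = X_train[y_train == c]
         prior = len(Xc) / len(X_train)
         p1 = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)  # P(x_j=1 | c)
         likelihood = np.where(x_new == 1, p1, 1.0 - p1)
         log_scores.append(np.log(prior) + np.log(likelihood).sum())
     return classes[int(np.argmax(log_scores))]

 print(naive_bayes_predict(X, y, np.array([1, 0])))  # predicts class 1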
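
The minimum cross-validation error bias can be sketched as follows (again a toy example; the noisy sine data and the candidate polynomial degrees are assumptions for illustration). Each candidate hypothesis class is scored by leave-one-out cross-validation, and the one with the lowest error is selected.

 import numpy as np

 rng = np.random.default_rng(0)
 x = np.linspace(0.0, 1.0, 20)
 y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)

 def loo_cv_error(degree):
     """Leave-one-out cross-validation error of a polynomial fit."""
     errors = []
     for i in range(len(x)):
         mask = np.arange(len(x)) != i
         coeffs = np.polyfit(x[mask], y[mask], degree)
         errors.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
     return np.mean(errors)

 # Bias: among the candidate hypotheses (polynomial degrees), prefer
 # the one with the lowest cross-validation error.
 print("selected degree:", min(range(1, 7), key=loo_cv_error))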
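
In one dimension the maximum-margin bias reduces to a line of arithmetic (toy class values assumed for illustration): among all thresholds that separate the two classes, the midpoint between the closest opposing points is the one that maximizes the width of the boundary.

 # Two linearly separable classes on the real line (toy values).
 class_a = [0.1, 0.4, 0.5]
 class_b = [1.3, 1.6, 2.0]

 # Any threshold in (0.5, 1.3) separates the classes; the midpoint
 # between the closest opposing points maximizes the margin.
 threshold = (max(class_a) + min(class_b)) / 2
 print(threshold)  # 0.9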
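
Finally, the nearest-neighbor bias as a short Python sketch (the 2-D toy data and the choice k=3 are assumptions for illustration): an unseen case is assigned the majority class of its k nearest training cases.

 import numpy as np
 from collections import Counter

 def knn_predict(X_train, y_train, x_new, k=3):
     """Majority vote among the k nearest training cases."""
     dists = np.linalg.norm(X_train - x_new, axis=1)
     nearest = np.argsort(dists)[:k]
     return Counter(y_train[nearest]).most_common(1)[0][0]

 # Two toy classes in a 2-D feature space (made-up data).
 X_train = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                     [1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])
 y_train = np.array([0, 0, 0, 1, 1, 1])

 print(knn_predict(X_train, y_train, np.array([0.15, 0.2])))  # class 0
 print(knn_predict(X_train, y_train, np.array([1.1, 0.9])))   # class 1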

Shift of bias

Although most learning algorithms have a static bias, some algorithms are designed to shift their bias as they acquire more data (Utgoff, 1984). This does not avoid bias, since the bias-shifting process itself must have a bias.

See also

* Bias
* Cognitive bias
* No free lunch in search and optimization

References

desJardins, M., and Gordon, D.F. (1995). [http://citeseer.ist.psu.edu/article/desjardins95evaluation.html Evaluation and selection of biases in machine learning]. Machine Learning, 5:1–17.

Mitchell, T.M. (1980). [http://citeseer.ist.psu.edu/mitchell80need.html The need for biases in learning generalizations]. CBM-TR 5-110, Rutgers University, New Brunswick, NJ.

Utgoff, P.E. (1984). Shift of bias for inductive concept learning. Doctoral dissertation, Department of Computer Science, Rutgers University, New Brunswick, NJ.

