Instance-based learning

In machine learning, instance-based learning or memory-based learning is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Instance-based learning is a kind of lazy learning.

It is called instance-based because it constructs hypotheses directly from the training instances themselves.[1] This means that the hypothesis complexity can grow with the data:[1] in the worst case, the hypothesis is a list of all n training items and classifying a single new instance takes O(n) time. One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Where other methods generally require the entire set of training data to be re-examined when one instance is changed, instance-based learners may simply store a new instance or throw an old instance away.[citation needed]
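To make this concrete, the following minimal Python sketch (the class name InstanceMemory and the use of Euclidean distance are illustrative assumptions, not drawn from the cited sources) shows a learner whose only "model" is the set of stored instances: updating it means appending or deleting an instance, while classifying a query means scanning all n stored instances.

```python
import math

class InstanceMemory:
    """Minimal memory-based learner: the "model" is just the stored instances (illustrative sketch)."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, label) pairs

    def add(self, features, label):
        # Updating the learner is simply storing one more instance;
        # nothing is re-fitted or re-examined.
        self.instances.append((tuple(features), label))

    def remove(self, features, label):
        # Forgetting an instance is equally cheap.
        self.instances.remove((tuple(features), label))

    def classify(self, query):
        # Classification scans every stored instance (hence O(n) per query)
        # and returns the label of the nearest one.
        nearest = min(self.instances, key=lambda inst: math.dist(query, inst[0]))
        return nearest[1]


memory = InstanceMemory()
memory.add((1.0, 1.0), "a")
memory.add((5.0, 5.0), "b")
print(memory.classify((1.2, 0.9)))   # -> "a"
memory.add((1.1, 1.1), "b")          # incremental update: just one more stored instance
```

In practice the linear scan is often replaced by an index structure such as a k-d tree, but the cheap, lazy update of the stored instances is unchanged.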

A simple example of an instance-based learning algorithm is the k-nearest neighbor algorithm. Daelemans and Van den Bosch describe variations of this algorithm for use in natural language processing (NLP), claiming that memory-based learning is both more psychologically realistic than other machine-learning schemes and more effective in practice.[2]
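As a rough illustration of plain k-nearest-neighbor classification (a generic sketch, not the NLP-specific variants of Daelemans and Van den Bosch), the example below uses Euclidean distance and a majority vote over the k closest stored instances; the toy feature vectors and labels are invented for the example.

```python
import math
from collections import Counter

def knn_classify(query, training_data, k=3):
    # Sort the stored instances by distance to the query and keep the k closest.
    neighbors = sorted(training_data, key=lambda inst: math.dist(query, inst[0]))[:k]
    # A majority vote among the neighbors' labels decides the class.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]


training = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
            ((5.0, 5.0), "ham"), ((4.8, 5.1), "ham")]
print(knn_classify((1.1, 0.9), training, k=3))  # -> "spam"
```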

References

  1. Stuart Russell and Peter Norvig (2003). Artificial Intelligence: A Modern Approach, 2nd edition, p. 733. Prentice Hall. ISBN 0-13-080302-2.
  2. Walter Daelemans and Antal van den Bosch (2005). Memory-Based Language Processing. Cambridge University Press.
