Bag of words model
The bag-of-words model is a simplifying assumption used in natural language processing and information retrieval. In this model, a text (such as a sentence or a document) is represented as an unordered collection of words, disregarding grammar and even word order.
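A minimal sketch of this representation (tokenization here is a plain whitespace split after lowercasing; real systems handle punctuation and morphology more carefully):

```python
from collections import Counter

def bag_of_words(text):
    # Tokenize naively on whitespace after lowercasing; each word's
    # position is discarded and only its count survives.
    return Counter(text.lower().split())

bow = bag_of_words("the cat sat on the mat")
# Order is lost: {"the": 2, "cat": 1, "sat": 1, "on": 1, "mat": 1}
```

Two sentences with the same words in a different order map to the same bag, which is exactly the information the model chooses to throw away.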
The bag-of-words model is used in some methods of document classification. When a naive Bayes classifier is applied to text, for example, the conditional independence assumption leads to the bag-of-words model. (Lewis, David. "Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval." Proceedings of ECML-98, 10th European Conference on Machine Learning, Chemnitz, DE. Heidelberg: Springer Verlag, 1998, pp. 4–15. http://citeseer.ist.psu.edu/lewis98naive.html) Other methods of document classification that use this model are latent Dirichlet allocation and latent semantic analysis. (Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet Allocation." Journal of Machine Learning Research 3 (2003): 993–1022. Cambridge, MA: MIT Press. doi:10.1162/jmlr.2003.3.4-5.993)
Example: Spam filtering
In Bayesian spam filtering, an e-mail message is modeled as an unordered collection of words selected from one of two probability distributions: one representing spam and one representing legitimate e-mail ("ham"). Imagine that there are two literal bags full of words. One bag is filled with words found in spam messages, and the other bag is filled with words found in legitimate e-mail. While any given word is likely to be found somewhere in both bags, the "spam" bag will contain spam-related words such as "stock", "Viagra", and "buy" much more frequently, while the "ham" bag will contain more words related to the user's friends or workplace.
To classify an e-mail message, the Bayesian spam filter assumes that the message is a pile of words that has been poured out randomly from one of the two bags, and uses Bayesian probability to determine which bag it is more likely to be.
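A toy sketch of that comparison, with hypothetical per-word probabilities for the two bags (a real filter estimates these from labeled training mail and handles unseen words with proper smoothing):

```python
import math

# Hypothetical word probabilities for the "spam" and "ham" bags.
p_spam = {"buy": 0.20, "stock": 0.15, "meeting": 0.01}
p_ham = {"buy": 0.02, "stock": 0.03, "meeting": 0.20}

def log_score(words, probs, prior):
    # Bag-of-words assumption: words are conditionally independent given
    # the class, so the joint probability is a product (a sum of logs).
    # Unknown words get a tiny floor probability as a crude smoothing.
    return math.log(prior) + sum(math.log(probs.get(w, 1e-6)) for w in words)

def classify(message, prior_spam=0.5):
    words = message.lower().split()
    spam = log_score(words, p_spam, prior_spam)
    ham = log_score(words, p_ham, 1 - prior_spam)
    return "spam" if spam > ham else "ham"

print(classify("buy stock"))  # the spam bag explains these words better
print(classify("meeting"))    # the ham bag explains this word better
```

Because only word counts enter the product, the filter's verdict is identical for any reordering of the same message, which is the bag-of-words assumption at work.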
See also
Natural language processing
Bag of words model in computer vision