Bag-of-words model
The bag-of-words model is a simplifying assumption used in natural language processing and information retrieval. In this model, a text (such as a sentence or a document) is represented as an unordered collection of words, disregarding grammar and even word order. The bag-of-words model is used in some methods of
document classification. When a naive Bayes classifier is applied to text, for example, the conditional independence assumption leads to the bag-of-words model. [Lewis, David. "Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval". Proceedings of ECML-98, 10th European Conference on Machine Learning, Chemnitz, DE. Springer Verlag, Heidelberg, DE, 1998, pp. 4–15. http://citeseer.ist.psu.edu/lewis98naive.html] Other methods of document classification that use this model are latent Dirichlet allocation and latent semantic analysis. [Blei, David M.; Ng, Andrew Y.; Jordan, Michael I. "Latent Dirichlet Allocation". J. Mach. Learn. Res. 3 (2003), pp. 993–1022. MIT Press, Cambridge, MA. doi:10.1162/jmlr.2003.3.4-5.993]

Example: Spam filtering
In Bayesian spam filtering, an e-mail message is modeled as an unordered collection of words selected from one of two probability distributions: one representing spam and one representing legitimate e-mail ("ham"). Imagine that there are two literal bags full of words. One bag is filled with words found in spam messages, and the other bag is filled with words found in legitimate e-mail. While any given word is likely to be found somewhere in both bags, the "spam" bag will contain spam-related words such as "stock", "Viagra", and "buy" much more frequently, while the "ham" bag will contain more words related to the user's friends or workplace.

To classify an e-mail message, the Bayesian spam filter assumes that the message is a pile of words that has been poured out randomly from one of the two bags, and uses Bayesian probability to determine which bag it more likely came from.
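Under the bag-of-words assumption, such a filter only needs per-class word counts. The following is a minimal sketch of this idea, assuming a multinomial naive Bayes formulation with Laplace smoothing; the toy word counts and messages are invented purely for illustration:

```python
import math
from collections import Counter

# Invented toy training data: word counts observed in example
# spam and ham messages (the two "bags" of words).
spam_bag = Counter("buy viagra stock buy cheap stock offer".split())
ham_bag = Counter("meeting tomorrow project notes lunch meeting".split())

def log_posterior(message, bag, prior=0.5):
    """Log P(class) + sum of log P(word | class), treating the message as
    an unordered collection of words (conditional independence assumption),
    with Laplace smoothing for unseen words."""
    total = sum(bag.values())
    vocab = len(set(spam_bag) | set(ham_bag))
    score = math.log(prior)
    for word in message.split():
        score += math.log((bag[word] + 1) / (total + vocab))
    return score

def classify(message):
    # Word order is ignored: only which words occur, and how often, matters.
    spam_score = log_posterior(message, spam_bag)
    ham_score = log_posterior(message, ham_bag)
    return "spam" if spam_score > ham_score else "ham"

print(classify("buy cheap stock"))         # → spam
print(classify("project meeting tomorrow"))  # → ham
```

Note that "cheap stock buy" would receive exactly the same score as "buy cheap stock": under the bag-of-words model the two messages are indistinguishable.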
See also
* Natural language processing
* Document classification
* Machine learning
* Document-term matrix
* Bag of words model in computer vision

References
Wikimedia Foundation. 2010.