Text Retrieval Conference
The Text REtrieval Conference (TREC) is an ongoing series of workshops focusing on different information retrieval (IR) research areas, or "tracks." It is co-sponsored by the National Institute of Standards and Technology (NIST) and the Disruptive Technology Office of the U.S. Department of Defense, and began in 1992 as part of the TIPSTER Text program. Its purpose is to support and encourage research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies and to increase the speed of lab-to-product transfer of technology.
Each track has a challenge wherein NIST provides participating groups with data sets and test problems. Depending on the track, test problems might be questions, topics, or target extractable features. Uniform scoring is performed so that the systems can be fairly evaluated. After evaluation of the results, a workshop provides a place for participants to collect their thoughts and ideas and to present current and future research work.
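To make the uniform-scoring step concrete, the following is a minimal sketch, in Python, of the kind of measure TREC reports, such as the average precision computed by NIST's trec_eval tool. It is an illustration only, not NIST's actual implementation, and the topic and document identifiers are hypothetical.

```python
# Sketch of uniform IR scoring: average precision (AP) per topic and
# mean average precision (MAP) across topics. Simplified stand-in for
# a tool like trec_eval; all data below is hypothetical.

def average_precision(ranked_docs, relevant):
    """AP for one topic: mean of the precision values observed at the
    rank of each relevant document that the system retrieved."""
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs, qrels):
    """MAP: `runs` maps topic id -> ranked list of retrieved doc ids,
    `qrels` maps topic id -> set of judged-relevant doc ids."""
    aps = [average_precision(runs[t], qrels.get(t, set())) for t in runs]
    return sum(aps) / len(aps) if aps else 0.0

# Hypothetical example: one system's ranked results on two topics.
runs = {
    "301": ["d4", "d1", "d9", "d2"],
    "302": ["d7", "d3", "d5"],
}
qrels = {
    "301": {"d1", "d2"},
    "302": {"d3"},
}
print(f"MAP = {mean_average_precision(runs, qrels):.4f}")  # MAP = 0.5000
```

Because every participating system is scored against the same relevance judgments with the same measures, results from different groups can be compared directly.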
Participation
The conference is made up of a varied, international group of researchers and developers. In 2003, 93 groups from both academia and industry, representing 22 countries, participated.
Conference Contributions
TREC claims that within the first six years of the workshops, the effectiveness of retrieval systems approximately doubled. The conference was also the first to hold large-scale evaluations of non-English documents, speech, video, and retrieval across languages. Additionally, the challenges have inspired a large body of [http://trec.nist.gov/pubs.html publications]. Technology first developed in TREC is now included in many of the world's commercial search engines.
Tracks
"New tracks are added as new research needs are identified, this list is current for 2007."
* Blog Track - Goal: to explore information-seeking behavior in the blogosphere.
* Enterprise Track - Goal: to study search over the data of an organization to complete some task.
* Genomics Track - Goal: to study the retrieval of genomic data, not just gene sequences but also supporting documentation such as research papers, lab reports, etc.
* Legal Track - Goal: to develop search technology that meets the needs of lawyers to engage in effective discovery in digital document collections.
* Million Query Track - Goal: to test the hypothesis that a test collection built from many very incompletely judged topics is a better tool than one built using traditional TREC collection pooling (see the pooling sketch after this list). New for 2007.
* Question Answering Track - Goal: to achieve more information retrieval than just document retrieval by answering factoid, list, and definition-style questions.
* Spam Track - Goal: to provide a standard evaluation of current and proposed spam filtering approaches.
Past tracks
* Cross-Language Track - Goal: to investigate the ability of retrieval systems to find topically relevant documents regardless of source language.
* Filtering Track - Goal: to make a binary retrieval decision on each new incoming document given a stable information need.
* HARD Track - Goal: to achieve High Accuracy Retrieval from Documents by leveraging additional information about the searcher and/or the search context.
* Interactive Track - Goal: to study user interaction with text retrieval systems.
* Novelty Track - Goal: to investigate systems' abilities to locate new (i.e., non-redundant) information.
* Robust Retrieval Track - Goal: to focus on individual topic effectiveness.
* Terabyte Track - Goal: to investigate whether and how the IR community can scale traditional IR test-collection-based evaluation to significantly larger collections.
* Video Track - Goal: to research automatic segmentation, indexing, and content-based retrieval of digital video. In 2003, this track became its own independent evaluation named TRECVID.
* Web Track - Goal: to search on a document set that is a snapshot of the World Wide Web.
In 1997, a Japanese counterpart of TREC was launched, called [http://research.nii.ac.jp/ntcir/ NTCIR] (NII Test Collection for IR Systems), and in 2001, a European counterpart was launched, called [http://www.clef-campaign.org/ CLEF] (Cross Language Evaluation Forum).
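The pooling method that the Million Query Track re-examines can be sketched briefly: the set of documents judged for a topic is the union of the top-ranked documents from every submitted run, and only pooled documents are shown to human assessors. The Python below is a minimal, hypothetical illustration of that idea, not NIST's actual pooling code.

```python
# Sketch of traditional TREC pooling: for one topic, pool the union
# of the top-`depth` documents across all participating systems' runs.
# Run contents and document ids here are hypothetical.

def build_pool(runs, depth=100):
    """Union the top-`depth` documents of each ranked run; only the
    pooled documents are sent to assessors for relevance judging."""
    pool = set()
    for ranked_docs in runs:
        pool.update(ranked_docs[:depth])
    return pool

# Three hypothetical system runs for one topic, pooled to depth 2.
runs = [
    ["d1", "d5", "d9"],
    ["d5", "d2", "d7"],
    ["d1", "d2", "d3"],
]
print(sorted(build_pool(runs, depth=2)))  # ['d1', 'd2', 'd5']
```

Deep pools over a few dozen topics are expensive to judge; the Million Query Track instead asks whether many topics with shallow, incomplete judgments yield a more reliable test collection for the same assessor effort.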
References
* [http://trec.nist.gov/ TREC website at NIST]
External links
* [http://www.nist.gov/itl/div894/894.02/related_projects/tipster/ TIPSTER]