Computational lexicology

Computational lexicology is the branch of computational linguistics concerned with the use of computers in the study of the lexicon. It has been more narrowly described by some scholars (Amsler, 1980) as the use of computers in the study of machine-readable dictionaries. It is distinguished from computational lexicography, which is properly the use of computers in the construction of dictionaries, although some researchers have used the two terms synonymously.

History

Computational lexicology emerged as a separate discipline within computational linguistics with the appearance of machine-readable dictionaries, starting with the creation of the machine-readable tapes of the Merriam-Webster Seventh Collegiate Dictionary and the Merriam-Webster New Pocket Dictionary in the 1960s by John Olney et al. at System Development Corporation. Today, computational lexicology is best known through the creation and applications of WordNet.

Study of lexicon

Computational lexicology has contributed to the understanding of the content and limitations of print dictionaries for computational purposes; in other words, it clarified that previous lexicographic work was not sufficient for the needs of computational linguistics. Through the work of computational lexicologists, almost every portion of a print dictionary entry has been studied, including:

  1. what constitutes a headword - used to generate spelling-correction lists;
  2. what variants and inflections the headword forms - used to empirically study morphology;
  3. how the headword is delimited into syllables;
  4. how the headword is pronounced - used in speech-generation systems;
  5. the parts of speech the headword takes on - used for POS taggers;
  6. any special subject or usage codes assigned to the headword - used to identify the subject matter of text documents;
  7. the headword's definitions and their syntax - used as an aid to disambiguating words in context;
  8. the etymology of the headword - used to characterize text vocabulary by its languages of origin;
  9. the example sentences;
  10. the run-ons (additional words and multi-word expressions formed from the headword); and
  11. related words such as synonyms and antonyms.
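The entry components listed above can be collected into a simple machine-readable record. The sketch below is purely illustrative: the class and field names are hypothetical and do not follow any particular dictionary format.

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """A hypothetical machine-readable dictionary entry, with one
    field per component of a print entry listed above."""
    headword: str
    variants: list = field(default_factory=list)       # inflected forms
    syllables: list = field(default_factory=list)      # syllable breaks
    pronunciation: str = ""                            # e.g. an IPA string
    pos: list = field(default_factory=list)            # parts of speech
    subject_codes: list = field(default_factory=list)  # subject/usage labels
    definitions: list = field(default_factory=list)
    etymology: str = ""
    examples: list = field(default_factory=list)       # example sentences
    run_ons: list = field(default_factory=list)        # derived expressions
    synonyms: list = field(default_factory=list)
    antonyms: list = field(default_factory=list)

# A made-up entry showing how the fields line up with the list above.
entry = DictionaryEntry(
    headword="run",
    variants=["runs", "ran", "running"],
    pos=["noun", "verb"],
    definitions=["to move swiftly on foot"],
)
print(entry.headword, entry.pos)
```

Each numbered study area above corresponds to extracting, validating, or exploiting one of these fields.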

Many computational linguists were disenchanted with print dictionaries as a resource for computational linguistics because they lacked sufficient syntactic and semantic information for computer programs. The work on computational lexicology quickly led to efforts in two additional directions.

Successors to Computational Lexicology

First, collaborative activities between computational linguists and lexicographers led to an understanding of the role that corpora played in creating dictionaries. Most computational lexicologists moved on to build large corpora to gather the basic data that lexicographers had used to create dictionaries. The ACL/DCI (Data Collection Initiative) and the LDC (Linguistic Data Consortium) went down this path. The advent of markup languages led to the creation of tagged corpora that could be more easily analyzed to create computational linguistic systems. Part-of-speech tagged corpora and semantically tagged corpora were created in order to test and develop POS taggers and word semantic disambiguation technology.
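A part-of-speech tagged corpus pairs every token with a label so that taggers can be trained and scored against it. A minimal sketch, assuming the simple "word/TAG" text convention used by several early tagged corpora (the tagset here is invented):

```python
def parse_tagged(line):
    """Split a 'word/TAG word/TAG ...' line into (word, tag) pairs.

    rsplit on the last '/' so tokens containing a slash still parse.
    """
    return [tuple(tok.rsplit("/", 1)) for tok in line.split()]

pairs = parse_tagged("The/DET cat/NOUN sat/VERB ./PUNCT")
print(pairs)
# [('The', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'), ('.', 'PUNCT')]
```

A tagger's output in the same format can then be compared pair by pair against the gold corpus to measure accuracy.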

The second direction was toward the creation of Lexical Knowledge Bases (LKBs). A lexical knowledge base was deemed to be what a dictionary should be for computational linguistic purposes, especially for computational lexical-semantic purposes. It was to have the same information as a print dictionary, but with the meanings of words and the appropriate links between senses made fully explicit. Many researchers began building the resources they wished dictionaries had been: resources designed from the outset for computational analysis. WordNet can be considered such a development, as can newer efforts at describing syntactic and semantic information such as the FrameNet work of Fillmore. Outside of computational linguistics, the ontology work in artificial intelligence can be seen as an evolutionary effort to build a lexical knowledge base for AI applications.
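The defining feature of an LKB is that word senses, not surface words, are the nodes, and links between senses are explicit and typed. A toy sketch in the spirit of WordNet, with invented sense identifiers and glosses:

```python
# Senses are explicit nodes; typed links (here, hypernymy) connect
# senses rather than surface words. All data below is a made-up fragment.
lkb = {
    "dog.n.01":    {"gloss": "a domesticated canine", "lemmas": ["dog"]},
    "canine.n.01": {"gloss": "a mammal of the dog family", "lemmas": ["canine"]},
    "animal.n.01": {"gloss": "a living organism", "lemmas": ["animal"]},
}
links = [
    ("dog.n.01", "hypernym", "canine.n.01"),
    ("canine.n.01", "hypernym", "animal.n.01"),
]

def hypernym_chain(sense):
    """Follow hypernym links upward from a sense to the root."""
    chain = [sense]
    while True:
        nxt = [t for s, rel, t in links if s == sense and rel == "hypernym"]
        if not nxt:
            return chain
        sense = nxt[0]
        chain.append(sense)

print(hypernym_chain("dog.n.01"))
# ['dog.n.01', 'canine.n.01', 'animal.n.01']
```

A print dictionary leaves such chains implicit in its definition text; an LKB makes them directly traversable by a program.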

Standardization

Optimizing the production, maintenance, and extension of computational lexicons is one of the crucial aspects impacting NLP. The main problem is interoperability: different lexicons are frequently incompatible. The most frequent situation is having to merge two lexicons, or fragments of lexicons. A secondary problem is that a lexicon is usually tailored to a specific NLP program and is difficult to reuse in other NLP programs or applications.
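Even in the best case, where two lexicons happen to share a key structure, merging requires deciding how to reconcile colliding entries. A minimal sketch assuming both lexicons key entries on (headword, part of speech); real lexicons rarely share even this much structure, which is precisely what motivates standardization:

```python
def merge_lexicons(a, b):
    """Union two lexicons keyed by (headword, pos); on collision,
    union the sense lists instead of overwriting."""
    merged = {key: list(senses) for key, senses in a.items()}
    for key, senses in b.items():
        if key in merged:
            # keep order from the first lexicon, drop duplicate senses
            seen = merged[key]
            merged[key] = seen + [s for s in senses if s not in seen]
        else:
            merged[key] = list(senses)
    return merged

# Invented fragments of two incompatible-in-spirit lexicons.
lex_a = {("bank", "noun"): ["financial institution"]}
lex_b = {("bank", "noun"): ["river edge", "financial institution"],
         ("bank", "verb"): ["to tilt an aircraft"]}
print(merge_lexicons(lex_a, lex_b))
```

The hard part in practice is not the union itself but establishing that two resources mean the same thing by "headword", "part of speech", and "sense" in the first place.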

In this respect, the various data models of computational lexicons have been studied by ISO/TC 37 since 2003 within the Lexical Markup Framework project, which led to an ISO standard in 2008.

References

Amsler, Robert A. 1980. "The Structure of the Merriam-Webster Pocket Dictionary". Ph.D. dissertation, The University of Texas at Austin.

Wikimedia Foundation. 2010.
