Terms of CL

Artificial Intelligence (AI)

A branch of science concerned with modelling intelligent human behaviour - such as the representation of knowledge, teaching and learning, association, planning, explanation, the acquisition of language etc. - in machines, by technical or programmed means. See Elithorn and Banerji 1984; Schmitz et al. 1990; Jacobs 1992.

Natural Language Processing (NLP)

A general term used to refer to all processes related to the analysis of texts in natural languages (i.e. languages spoken or written by human beings, as opposed to programming languages), as well as to their understanding and synthesis by machine. Natural Language Processing is closely related to the other fields of Computational Linguistics: Machine Translation (MT), Artificial Intelligence (AI) and Corpus Linguistics. See Sparck Jones 1992; Galliers and Sparck Jones 1993; Rustin 1973.

Computational Linguistics (CL)

A branch of linguistics in which computational techniques and concepts are applied to the elucidation of linguistic and phonetic problems. Several research areas have developed, including speech synthesis, corpus linguistics, speech recognition, machine translation, concordance compilation, testing of grammars, and many other areas where statistical counts and analyses are required. See Newmeyer 1988: Ch. 11; McEnery 1992; Souter and Atwell 1993.

Corpus Linguistics

The study of language through all processes related to the processing, use and analysis of written or spoken machine-readable corpora. Corpus linguistics is a relatively modern term used to refer to a methodology based on examples of ‘real life’ language use. At present, the effectiveness and usefulness of corpus linguistics are closely tied to the development of computer science. See McEnery and Wilson 1996; Aarts and Meijs 1990; Leech 1991; Svartvik 1992.

Corpus Processing

A general term used to refer to all processes related to annotation, presentation and analysis of corpora. See Aarts and Meijs 1990; McEnery and Wilson 1996: Ch. 2.

Alignment

A term used to refer to the practice of defining explicit links between the elements (sentences, phrases or words) that are mutual translations of each other in a parallel corpus. Sentence and word alignment (a program performing this operation is called an aligner) can be carried out automatically with a high degree of accuracy. See McEnery and Oakes 1996; McEnery and Wilson 1996: Ch. 2.
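
For illustration, a deliberately naive alignment sketch in Python: it pairs sentences one-to-one by position and flags pairs whose length ratio looks suspicious. Real aligners (e.g. the Gale-Church algorithm) instead use dynamic programming over length statistics and also handle 1:2, 2:1 and 0:1 matches; all names below are illustrative.

```python
def naive_align(source_sents, target_sents, max_ratio=1.8):
    """Pair sentences by position; flag pairs whose character-length
    ratio exceeds max_ratio as candidates for manual checking."""
    pairs = []
    for src, tgt in zip(source_sents, target_sents):
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        pairs.append((src, tgt, ratio <= max_ratio))
    return pairs

english = ["The cat sat on the mat.", "It was raining."]
french = ["Le chat était assis sur le tapis.", "Il pleuvait."]
for src, tgt, ok in naive_align(english, french):
    print("OK" if ok else "??", src, "<->", tgt)
```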

Annotation

A term used to refer to (i) the practice of adding explicit additional information to machine-readable text; (ii) the physical representation of such information. Annotation (or markup) makes it quicker and easier to retrieve and analyse information about the language contained in the corpus. A corpus may be annotated manually, by a single person or by a number of people; alternatively, the annotation may be carried out completely automatically or semi-automatically (in the latter case the output needs to be post-edited by human beings) by a computer program. Certain kinds of linguistic annotation, which involve the attachment of special codes to words in order to indicate particular features, are frequently known as tagging rather than annotation, and the codes which are assigned are known as tags. See McEnery and Wilson 1996: Ch. 2; Leech 1993; Aarts and Meijs 1990; Brill 1992; Källgren 1996; Leech and Wilson 1994.

anaphoric annotation A form of annotation that refers to the marking of pronoun reference in corpora. Anaphoric annotation can at present only be carried out by human analysts, since one of the aims of the annotation is to provide the data on which to train computer programs to carry out this task (see bootstrapping). It is of great importance to NLP, since a large amount of the conceptual content of a text is carried by pronouns. See McEnery and Wilson 1996: Ch. 2; Halliday and Hasan 1976; Garside 1993.

discoursal annotation A type of annotation that is used to annotate items whose role in the discourse is primarily to do with discourse management (i.e. politeness, level of formality etc.) rather than with propositional content. Discoursal annotations have never become widely used in corpus linguistics, since their identification in texts is a difficult task and a great source of dispute between different linguists. See McEnery and Wilson 1996: Ch. 2; Aone and Bennet 1994; Stenström 1984.

ditto tagging, ditto tag A term used to refer to the practice of assigning the same tag to each word in an idiomatic sequence to indicate that they belong to a single phraseological unit. See McEnery and Wilson 1996: Ch. 1; Garside 1987.

part-of-speech tagging The most basic type of linguistic corpus annotation (also grammatical tagging, morphosyntactic annotation or part-of-speech annotation); its aim is to assign to each lexical unit in the text a code (or tag) indicating its part-of-speech (e.g. NN for singular common noun, VBN for past participle). Part-of-speech information is a fundamental basis for increasing the specificity of data retrieval from corpora and also forms an essential foundation for further forms of analysis such as syntactic parsing and semantic field annotation. See McEnery and Wilson 1996: Ch. 2; Leech and Wilson 1994; Garside 1987; Brill 1992.
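
For illustration, a minimal tagging run with the NLTK toolkit in Python (NLTK is not discussed in this glossary; its default tagger uses the Penn Treebank tagset, in which NN and VBN carry the same meanings as in the examples above):

```python
import nltk

# One-time model downloads:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("The committee has approved the annotated corpus.")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('committee', 'NN'), ('has', 'VBZ'),
#       ('approved', 'VBN'), ...]
```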

phonetic transcription A form of phonetic annotation that is used to transcribe spoken corpora. Not many examples of publicly available, fully phonetically transcribed corpora exist at the present time; most phonetic annotation exists at the level of prosodic annotation. Phonetic transcription needs to be carried out by human beings rather than computer programs - and, moreover, by human beings who are well skilled in the perception and transcription of speech sounds. See McEnery and Wilson 1996: Ch. 2.

portmanteau tag A term used to refer to the practice of assigning two tags to some words in order to help the user in cases where there is a strong chance that the computer might otherwise have selected the wrong part-of-speech from the choices available to it. See McEnery and Wilson 1996: Ch. 1.

problem-oriented tagging A particular type of annotation that is used to annotate only the phenomena directly relevant to the research rather than the whole corpus or text (each word, each sentence etc.). It is not exhaustive. Problem-oriented tagging uses an annotation scheme which is selected not for its broad coverage and consensus-based theory-neutrality but for the relevance of the distinctions which it makes to the specific questions which each analyst wishes to ask of his or her data. See McEnery and Wilson 1996: Ch. 2; Haan 1984.

prosodic annotation A type of annotation that aims to capture in a written form the suprasegmental features of spoken language - primarily stress, intonation and pauses. Prosodic annotation (or prosodic transcription) is a task which requires the manual involvement of highly skilled phoneticians: unlike part-of-speech analysis, it is not a task which can be delegated to the computer. See McEnery and Wilson 1996: Ch. 2; Nespor and Vogel 1990; Johansson et al. 1991; O’Connor and Arnold 1961.

recoverability A term used to refer to the possibility for the user to recover the basic original text from any text which has been annotated with further information. See McEnery and Wilson 1996: Ch. 2.

semantic annotation A type of annotation that is used to mark semantic relationships between items in the text (e.g. agents or patients of particular actions) or semantic features of words in a text (the annotation of word senses in one form or another). See McEnery and Wilson 1996: Ch. 2; Jansen 1990; Schmidt 1991.

tag A term used to refer to (i) a code attached to words in a text representing some feature or set of features relating to those words; (ii) in the TEI, to refer to the physical markup of an element such as a paragraph. See McEnery and Wilson 1996: Ch. 2.

tagset A term used to refer to a collection of tags in the form of a scheme for annotating corpora. See McEnery and Wilson 1996: Ch. 2; Johansson et al. 1986; Garside et al. 1987.

Concordance

A term that signifies a list of the occurrences of a particular word or sequence of words, each presented in its context. The concordance is at the centre of corpus linguistics, because it gives access to many important language patterns in texts. Concordances of major works such as the Bible and Shakespeare have been available for many years. The computer has made concordances easy to compile.

Computer-generated concordances can be very flexible: the context of a word can be selected on various criteria (for example, by counting the words on either side or by finding the sentence boundaries), and the sets of examples can be ordered in various ways. See Sinclair 1991: Ch. 2; McEnery and Wilson 1996: Ch. 1; Collier 1994; Kaye 1990; Hockey and Martin 1988.

co-text A more precise term than context or verbal context used to refer to the words on either side of a selected word or phrase. See Sinclair 1991: Ch. 9.

collocate A term used to refer to the words that occur to the left and to the right of the node. See Sinclair 1991: Ch. 8; Kennedy 1991; Kjellmer 1991; Kjellmer 1990; Renouf and Sinclair 1991; Jackson 1988.

collocation A term used to refer to the combination of words that have a certain mutual expectancy, i.e. words that regularly keep company with certain other words. When a collocation appears with a frequency greater than chance would predict, it is called a significant collocation. The usual measure of proximity is a maximum of four words intervening. The identification of patterns of word co-occurrence in textual data is particularly important in dictionary writing, natural language processing and language teaching. See Sinclair 1991: Ch. 8; Kennedy 1991; Kjellmer 1991; Kjellmer 1990; Renouf and Sinclair 1991; Jackson 1988.
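
As a hedged sketch of the counting step behind significant collocations, the Python function below tallies every word occurring within a span of four words on either side of the node; a real system would go on to compare these counts against chance with a statistic such as mutual information or the t-score, which is omitted here.

```python
from collections import Counter

def collocates(tokens, node, span=4):
    """Count word forms co-occurring with `node` within `span`
    words on either side (the node's verbal environment)."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            window = tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span]
            counts.update(window)
    return counts

tokens = "he made a strong case for a strong cup of strong tea".split()
print(collocates(tokens, "strong").most_common(3))
```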

KWAL An abbreviation for key word and line; a form of concordance which can allow several lines of context either side of the key word. See McEnery and Wilson 1996.

KWIC An abbreviation for key word in context; a form of concordance in which a word is given within x words of context and is normally centered down the middle of the page. See Sinclair 1991: Ch. 2; Kaye 1989.
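
A minimal KWIC concordancer, as an illustrative sketch in Python (not any of the published programs cited here; context is measured in words, and the keyword is centred by padding the left context):

```python
def kwic(tokens, keyword, context=4, width=30):
    """Print each occurrence of `keyword` centred, with `context`
    words of co-text on either side."""
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - context):i])
            right = " ".join(tokens[i + 1:i + 1 + context])
            print(f"{left:>{width}}  {tok}  {right}")

text = ("the corpus is annotated and the corpus is parsed "
        "before the corpus is distributed").split()
kwic(text, "corpus")
```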

node A term used to refer to the word or phrase in a collocation whose lexical behaviour is under examination. See Sinclair 1991: Ch. 8; Jackson 1988.

span A term used to refer to the measurement, in words, of the co-text of a word selected for study. A span of -4, +4 means that four words on either side of the node word will be taken to be its relevant verbal environment. See Sinclair 1991; Jackson 1988.

Text Chunking

A term used to refer to the practice of dividing sentences into non-overlapping segments on the basis of fairly superficial analysis. Text chunking is a useful preliminary step to parsing. Chunking includes identifying the non-recursive portions of noun phrases; it can also be useful for other purposes, including index term generation. See Ramshaw and Marcus 1995; Sinclair 1991: Ch. 9.
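
As an illustration, a sketch of base noun-phrase chunking over already-tagged input using NLTK's rule-based RegexpParser (a different technique from Ramshaw and Marcus's transformation-based learning; the single rule below is illustrative, not a serious grammar):

```python
import nltk

# One hand-written rule for non-recursive ("base") noun phrases:
# optional determiner, any number of adjectives, one or more nouns.
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")

tagged = [("the", "DT"), ("old", "JJ"), ("corpus", "NN"),
          ("contains", "VBZ"), ("spoken", "JJ"), ("texts", "NNS")]
print(chunker.parse(tagged))
# (S (NP the/DT old/JJ corpus/NN) contains/VBZ (NP spoken/JJ texts/NNS))
```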

Disambiguation

A term used to refer to the practice of resolving ambiguity by choosing one specific analysis, or code (tag), from a variety of possibilities in corpus processing. Disambiguation may be applied at many levels, from deciding the part-of-speech of an ambiguous word (i.e. a word that may be associated with a number of different parts-of-speech) through to choosing one possible translation from many. Disambiguation may be probabilistic, i.e. carried out using statistically based methods, or rule-based, i.e. performed using rules created by drawing on a linguist’s intuitive knowledge. See McEnery and Wilson 1996: Ch. 5; Jansen 1990; Hindle 1989; DeRose 1988.
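
To make the probabilistic route concrete, here is the simplest possible statistical disambiguator, as a hedged sketch: each ambiguous word is resolved to the tag it carried most often in training data, ignoring context entirely (real probabilistic taggers also condition on the tags of neighbouring words).

```python
from collections import Counter, defaultdict

def train_most_frequent_tag(tagged_words):
    """Learn, for every word, the tag it bears most often."""
    freq = defaultdict(Counter)
    for word, tag in tagged_words:
        freq[word.lower()][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in freq.items()}

train = [("record", "NN"), ("record", "NN"), ("record", "VB"),
         ("the", "DT"), ("they", "PRP")]
model = train_most_frequent_tag(train)
print(model["record"])  # NN - chosen although 'record' also occurs as VB
```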

Encoding

A term used to refer to the practice of representing textual and linguistic data (i.e. annotations, or tags) in a certain format in a corpus. The demand for extensive reusability of large text collections requires standardisation of encoding formats. A standard encoding format must provide the greatest possible generality and flexibility, i.e. accommodate all potential types of information and processing. See Bryan 1988; McEnery and Wilson 1996: Ch. 2; Ide 1996.

CES An abbreviation for Corpus Encoding Standard used to refer to a set of encoding standards developed by MULTEXT (one of the largest EU projects in the domain of language tools and resources). The CES is an application of SGML, based on and in broad agreement with the TEI Guidelines and is optimally suited for use in corpus linguistics and language engineering applications. See Ide and Véronis 1995; Erjavec et al. 1995.

COCOA references The name of a very early computer program used for extracting indexes of words in context from machine-readable texts. Its conventions were carried forward into several other programs (e.g. the Oxford Concordance Program (OCP)). COCOA references represent only an informal convention for encoding specific types of textual information, for example authors, dates and titles. See McEnery and Wilson 1996: Ch. 2; Hockey and Martin 1988.

DTD An abbreviation for Document Type Definition, used in the TEI. A TEI DTD is a formal representation which tells the user or a computer program what elements a text contains and how these elements are combined. A TEI DTD is composed of the core tagsets, a single base tagset, and any number of user-selected additional tagsets, built up according to a set of rules documented in the TEI Guidelines. See Ide 1995; McEnery and Wilson 1996: Ch. 2; Sperberg-McQueen and Burnard 1994.

EAGLES An abbreviation for Expert Advisory Groups on Language Engineering Standards, an EU sponsored project to define standards for the computational treatment (e.g. annotation) of EU languages, and also used to refer to a base set of features for the annotation of parts-of-speech. See McEnery and Wilson 1996: Ch. 2.

entity reference A term in the TEI used to refer to a shorthand way of encoding information in a text. See Sperberg-McQueen and Burnard 1994.

SGML An abbreviation for Standard Generalized Markup Language, an internationally recognized text encoding standard (the TEI Guidelines are an application of SGML). SGML-aware software is widely used in corpus processing. See McEnery and Wilson 1996: Ch. 2; Erjavec 1995; Ide 1995; Goldfarb 1990; Bryan 1988.

TEI An abbreviation for Text Encoding Initiative, an international cooperative research project established in 1988 to develop a general and flexible set of guidelines for the preparation and interchange of electronic texts. The TEI employs an already existing form of document markup known as SGML; its own original contribution is a detailed set of guidelines as to how this standard is to be used in text encoding. See Ide 1995; McEnery and Wilson 1996: Ch. 2; Sperberg-McQueen and Burnard 1994.

base tagset A term used in the TEI to refer to a particular group of codes (tags) which determines the basic structure of the document with which it is to be used. Eight distinct TEI base tagsets are proposed: prose, verse, drama, transcribed speech, letters and memos, dictionary entries, terminological entries, and language corpora and collections. See Ide 1995; Sperberg-McQueen and Burnard 1994.

TEI Guidelines A term used to refer to standardized encoding conventions for encoding and interchange of machine-readable texts. TEI Guidelines (issued in May 1994) provide standardized encoding conventions for a large range of text types and features relevant for a broad range of applications, including NLP, information retrieval, hypertext, electronic publishing, various forms of literary and historical analysis, lexicography, etc. The Guidelines are intended to apply to texts, written or spoken, in any natural language, of any date, in any genre or text type, without restriction on form or content. SGML is the framework for development of the Guidelines. See Sperberg-McQueen and Burnard 1994; Ide 1995; McEnery and Wilson 1996: Ch. 2.

header A term used to refer to the part of an electronic document preceding the text proper and containing information about the document such as author, title, source and so on. See Ide 1995; McEnery and Wilson 1996: Ch. 2; Sperberg-McQueen and Burnard 1994.
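
For illustration, a sketch that pulls the author and title out of a simplified, TEI-like header with Python's standard XML parser. The element names follow the TEI teiHeader layout, but this toy header is far poorer than a real one (and historical TEI headers were SGML rather than XML):

```python
import xml.etree.ElementTree as ET

doc = """<teiHeader>
  <fileDesc>
    <titleStmt>
      <title>Sample Corpus Text</title>
      <author>Anonymous</author>
    </titleStmt>
    <sourceDesc><p>Transcribed from a 1994 newspaper article.</p></sourceDesc>
  </fileDesc>
</teiHeader>"""

header = ET.fromstring(doc)
print(header.findtext("fileDesc/titleStmt/title"))   # Sample Corpus Text
print(header.findtext("fileDesc/titleStmt/author"))  # Anonymous
```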

WSD An abbreviation for Writing System Declaration used in the TEI to define the character set used in encoding an electronic text. See Sperberg-McQueen and Burnard 1994.

Lemmatisation

A term referring to the practice of reducing word forms to their respective lexemes (the head word forms that one would look up in a dictionary) in a corpus. For example, the forms kicks, kicked, and kicking would all be reduced to the lexeme KICK; these variants are said to form the lemma of the lexeme KICK. Lemmatisation applies equally to morphologically irregular forms, so that went, as well as goes, going, and gone, belongs to the lemma of GO. Lemmatisation allows the researcher to extract and examine all the variants of a particular lexeme without having to input all the possible variants. (Software performing lemmatisation is called a lemmatizer.) See McEnery and Wilson 1996: Ch. 2; Beale 1987; Sinclair 1991: Ch. 3.
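
Using the entry's own examples, a minimal lemmatizer run with NLTK's WordNet-based lemmatizer (one concrete lemmatizer among many; the pos="v" argument tells it to treat each form as a verb):

```python
from nltk.stem import WordNetLemmatizer

# One-time data download: nltk.download("wordnet")
lemmatizer = WordNetLemmatizer()

for form in ["kicks", "kicked", "kicking"]:
    print(form, "->", lemmatizer.lemmatize(form, pos="v"))  # all -> kick

# Irregular forms are handled through WordNet's exception lists:
print("went ->", lemmatizer.lemmatize("went", pos="v"))     # -> go
```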

Parsing

A term used to refer to the practice of assigning the syntactic structure to a text. Parsing is usually performed after basic morphosyntactic categories have been identified in a text; it brings these categories into higher level syntactic relationships with one another. Parsing is probably the most commonly encountered form of corpus annotation after part-of-speech tagging. Corpora which have been parsed are sometimes known as treebanks. See McEnery and Wilson 1996: Ch. 2; Garside and McEnery 1993; Sampson 1992; Aarts and Heuvel 1985.
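
A hedged sketch of what parsing produces - the kind of tree stored in a treebank - using a toy context-free grammar with NLTK's chart parser (real treebank grammars run to thousands of rules):

```python
import nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> Det N
  VP  -> V NP
  Det -> 'the' | 'a'
  N   -> 'linguist' | 'corpus'
  V   -> 'parsed'
""")
parser = nltk.ChartParser(grammar)
for tree in parser.parse("the linguist parsed a corpus".split()):
    print(tree)
# (S (NP (Det the) (N linguist)) (VP (V parsed) (NP (Det a) (N corpus))))
```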

full parsing A type of parsing that aims to provide as detailed an analysis of the sentence structure as possible. See McEnery and Wilson 1996: Ch. 2.

skeleton parsing A less detailed approach to parsing which tends to use a less finely distinguished set of syntactic constituent types and ignores, for example, the internal structure of certain constituent types. See Garside and McEnery 1993; Leech and Garside 1991.

Validation

A term used to refer to the investigation of the conformance of any products or elements to certain acknowledged standards: the corpus has to be the size it claims to be, it must be composed and encoded the way it claims, all encoded features can be used for retrieval, annotations conform to a given standard, and the error rate for encoding and annotation does not exceed a certain level. Validation guarantees clients that they get what they ordered and that they can rely on the resources to the extent stated by the validation certificate. Validation has to be carried out on an unbiased and neutral basis, which means not by the institution where the resources were created. See Teubert 1995.

Language/Linguistic Resources

A general term used to refer to such resources as corpora of spoken and written language, frequency lists, lexicons, computational linguistic lexicons and tools to extract linguistic knowledge to develop and optimize products. Linguistic resources are divided into corpora, lexical resources and tools. However, the borderline is not very distinct. See Gellerstam 1995; McEnery and Wilson 1996; Aarts and Meijs 1990; Edwards 1994.

Corpora

A central term in corpus linguistics used to refer to (i) (loosely) any body of text; (ii) (most commonly) a body of machine-readable text; (iii) (more strictly) a finite collection of machine-readable texts, sampled to be maximally representative of a language variety. See McEnery and Wilson 1996: Ch. 2; Sinclair 1982 and 1991; Johansson 1991; Collins 1988; Meyer 1986; Aarts and Meijs 1990; Biber and Finegan 1991; Edwards 1994.

annotated corpus A type of corpus enhanced with various types of linguistic information (or tagged corpus). An annotated corpus may be considered to be a repository of linguistic information, because the information which was implicit in the plain text has been made explicit through concrete annotation. See McEnery and Wilson 1996: Ch. 1; Aarts and Meijs 1990.

balanced corpus A type of corpus composed according to parameters such as text type, genre or domain. See Teubert 1995.

comparable (reference) corpus A type of corpus used for the comparison of different languages. A comparable corpus consists of a number of corpora, one per language, which follow the same composition pattern. The Commission of the European Community is funding a project whose main goal is the creation of comparable reference corpora (of 50 million words each) for all the official languages of the European Union. Comparable corpora are an indispensable source for bilingual and multilingual lexicons and a new generation of dictionaries. See LE-PAROLE 1995: Ann. 1.

monitor corpus A type of corpus which is a growing, non-finite collection of texts, of primary use in lexicography. A monitor corpus reflects language change through the constant growth of the corpus, leaving untouched the relative weight of its components (i.e. its balance) as defined by the parameters. The same composition schema should be followed year by year, the basis being a reference corpus of texts spoken or written in one single year. See Sinclair and Ball 1995; Sinclair 1991: Ch. 1; Clear 1987.

monolingual corpus A type of corpus which contains texts in a single language. See McEnery and Wilson 1996: Ch. 2.

multilingual corpus A type of corpus which represents a small collection of individual monolingual corpora (or subcorpora) in the sense that the subcorpora use the same or similar sampling procedures and categories for each language but contain completely different texts in those several languages (a corpus covering two languages is a bilingual corpus). See McEnery and Wilson 1996: Ch. 2; McEnery and Oakes 1994.

opportunistic corpus A type of corpus which is an inexpensive collection of electronic texts that can be obtained, converted, and used free or at a very modest price, but which is often unfinished and incomplete: the users are left to fill in blank spots for themselves. Its place is in environments where size and corpus access do not pose a problem. The opportunistic corpus is a virtual corpus in the sense that the selection of an actual corpus (from the opportunistic corpus) is up to the needs of a particular project. Today’s monitor corpora usually are opportunistic corpora. See Sinclair and Ball 1995.

parallel (aligned) corpus A type of multilingual corpus where texts in one language and their translations into other languages are aligned, sentence by sentence, preferably phrase by phrase. Sometimes reciprocal parallel corpora are set up - corpora containing authentic texts as well as translations in each of the languages involved. This allows translation equivalents to be double-checked.

Note: Some corpus linguists employ a different terminology for multilingual corpora: they refer to parallel corpora (as defined here) as ‘translation corpora’ and use the term ‘parallel corpora’ instead to refer to the other kind of multilingual corpus, which does not contain the same texts in different languages. See Sinclair and Ball 1995; McEnery and Wilson 1996: Ch. 2; McEnery and Oakes 1994 and 1996; Zanettin 1994; Erjavec et al. 1995.

reference corpus A type of corpus composed on the basis of relevant parameters agreed upon by the linguistic community; it should include spoken and written, formal and informal language representing various social and situational strata. Reference corpora are used as benchmarks for lexicons and for the performance of generic tools and specific language technology applications. They are large in size: 50 million words is considered to be the absolute minimum, and 100 million will become the European standard in a few years. See Sinclair and Ball 1995.

sampled corpus A type of corpus which contains a finite collection of texts, often chosen with great care and studied in detail. Once a sampled corpus is established, it is not added to or changed in any way. See Sinclair 1991: Ch. 1.

saturated corpus A type of corpus whose vocabulary growth rate stops decreasing and becomes constant (i.e. the corpus is saturated). Saturation is thus a point beyond which there will be perhaps eight new words for each 10,000 additional words of text. Saturation of corpora is a fairly new concept, and no one knows what it leads to in terms of corpus size. See Teubert 1995.
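
A minimal sketch of how the vocabulary growth rate behind this concept can be measured: count how many previously unseen word forms each successive slice of the corpus contributes (the tiny slice size below is for demonstration only; the entry's figure refers to 10,000-word slices).

```python
def new_words_per_slice(tokens, slice_size=10_000):
    """Return, for each successive slice of the corpus, the number of
    word forms not seen in any earlier slice; a corpus approaches
    saturation when this figure levels off."""
    seen, rates = set(), []
    for start in range(0, len(tokens), slice_size):
        chunk = tokens[start:start + slice_size]
        fresh = {t for t in chunk if t not in seen}
        seen |= fresh
        rates.append(len(fresh))
    return rates

tokens = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()
print(new_words_per_slice(tokens, slice_size=5))  # [4, 4, 4, 2]
```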

special corpus A type of corpus assembled for a specific purpose; special corpora vary in size and composition according to their purpose. They are not balanced (except within the scope of their given purpose) and, if used for other purposes, give a distorted view of the language segment. Their main advantage is that the texts can be selected in such a way that the phenomena one is looking for occur much more frequently than in a balanced corpus. A corpus that is enriched in such a way can be much smaller than a balanced corpus providing the same data. See Sinclair and Ball 1995.

spoken corpus A type of corpus containing texts of spoken language. Spoken corpora are annotated using a form of phonetic transcription. Not many examples of publicly available, fully phonetically transcribed corpora exist at the present time. Phonetically transcribed corpora are a useful addition to the battery of annotated corpora, especially for the linguist who lacks the technological tools and expertise for the laboratory analysis of recorded speech. See McEnery and Wilson 1996: Ch. 2; Crowdy 1993; Greenbaum 1990.

treebank A type of corpus which has been annotated with phrase structure information (also parsed corpus). The term alludes to the representation of syntactic relationships (see parsing) by tree diagrams or phrase markers. See McEnery and Wilson 1996: Ch. 2; Garside and McEnery 1993; Souter and Atwell 1994.

unannotated corpus A type of corpus in the raw state of plain text, as opposed to an annotated corpus. Unannotated corpora (or raw corpora) have been, and are, of considerable use in language study, but the utility of a corpus is considerably increased by the provision of annotation. See McEnery and Wilson 1996: Ch. 2.

Lexical Resources/Data

A general term used to refer to lexical data, preferably in machine-readable form, that can be used in lexical research and/or form the basis of commercial products. See Gellerstam 1995; Calzolari 1989.

computational linguistic lexicon A more complex type of lexicon for parsing, for artificial intelligence (question-answering) and for machine translation. See Gellerstam 1995.

frequency list A term used to refer to a list of the words (or other textual elements) in a text together with the frequencies of their appearance. At present, the making of frequency lists is one of the most trivial functions that lingware performs. See Sinclair 1991: Ch. 2; Johansson and Hofland 1989; Woods et al. 1986; McEnery and Wilson 1996.
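
Trivial indeed; a complete frequency-list sketch in Python:

```python
from collections import Counter

tokens = "the cat sat on the mat and the dog sat too".split()
for word, count in Counter(tokens).most_common(3):
    print(f"{count:5d}  {word}")
#     3  the
#     2  sat
#     1  cat
```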

lexical data base (LDB) A term used to refer to a database which contains formalized lexical information at many descriptive levels. It is one of the chief tools today for processing great quantities of lexical data. It can be used for various types of linguistic applications and for general research in the lexical field. A database management system provides the user with tools for accessing the data without requiring familiarity with its internal or physical organisation - only with the type of information that can be retrieved. See Gellerstam 1995; Halteren and Heuvel 1990; Haan 1987; Kaye 1988; Calzolari 1989.

lexicon A term essentially synonymous with ‘dictionary’ - a collection of words and information about them - but used more commonly than ‘dictionary’ to refer to machine-readable dictionary databases (or electronic dictionaries). See Beale 1987; McEnery and Wilson 1996: Ch. 5; Garside and McEnery 1993; Garside 1987; Zernik 1991; Sinclair 1996; Calzolari 1989.

machine lexicon A type of lexicon which is not designed to be read by humans but provides explicit lexical information for performing specific tasks, e.g. automatic lemmatisation. See Gellerstam 1995.

Products

A general concept which includes any tools or applications that are worth investing money in. See Engelien and McBryde 1991.

automatic hyphenizer A tool that automatically hyphenates a text according to grammatical conventions. See Gellerstam 1995.

computer-aided learning / computer-assisted language learning (CALL) A term used to refer to computer applications and software based on lexical data that can be used in various types of interactive teaching of written or spoken language skills, such as sentence restructuring, checking of translation, dictation tasks, dictionary look-up, etc. One method of language learning is the data-driven learning approach, which attempts to give the learner direct access to the data and cut out the middleman; it is based on the assumption that effective language learning is a form of research performed by the learners themselves. See Johns 1991; McEnery and Wilson 1993; McEnery et al. 1995; Wilson and McEnery 1994.

computer-aided translation (CAT) (or translator’s workbench) A term used to refer to computer systems, programs or applications which contain tools and facilities which help translators to increase their productivity and the quality of their work. These include monolingual or bilingual lexicons, translation memories (which help to avoid translating the same or similar fragments more than once), spelling checkers, terminology databases, terminology management systems, translation editors, terminology extraction, access to previously translated texts, document comparison, thesauruses, etc. See Krauwer 1995; McEnery and Wilson 1996: Ch. 5.

general text checker A tool that checks practical things like starting a new sentence with a capital letter, spotting extra spaces between words, etc. See Gellerstam 1995.

spelling checker A tool, usually based on a collection of word forms representing an actual corpus or on a list of word forms generated from a dictionary, that is used to find spelling errors in a text. The spelling checker is probably the number one commercial application, and its facilities are more or less standard ingredients in word processing today. See Teubert 1995; Gellerstam 1995.
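
At its core this is a word-list lookup; a hedged sketch (real checkers add affix handling, correction suggestions and far larger lists):

```python
def spell_check(text, known_forms):
    """Return the tokens of `text` not found in the word-form list."""
    return [tok for tok in text.lower().split() if tok not in known_forms]

known = {"the", "corpus", "is", "annotated", "and", "parsed"}
print(spell_check("The corpas is anotated and parsed", known))
# ['corpas', 'anotated']
```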

style checker A tool that checks particular words from a stylistic point of view (“why do you use the passive form?”), performs parsing to spot grammatical errors (such as agreement errors), and checks contextual data (“have you used the right preposition after the verb?”). See Gellerstam 1995.

Lingware/Language Engineering Tools

A general term used to refer to relatively small independent pieces of software as well as larger systems, meant for extracting linguistic information from lexical resources or corpora. Generally all language engineering tools are divided into rule-based tools (hand-crafted rules) and statistical tools. However, most systems are hybrid in one sense or another and they form a continuum of approaches. Thus the division is not very distinct and is not used here. The section includes general types of tools and description of their functions as well as some well known and recognized specific lingware. See Erjavec 1995; Engelien and McBryde 1991.

CL tools A general term standing for Computational Linguistics tools, used to refer to software that belongs to computational linguistics proper, including morphological analysers, implementations of formalisms, and lexicon environments. These systems can hardly be considered ‘tools’, as they are often large and complex; they are, furthermore, only distantly connected to corpus development or exploitation. See Erjavec 1995: Ch. 5; Engelien and McBryde 1991; Manandhar 1993.

CLAWS The leading English part-of-speech tagger (developed by R. Garside in 1987). The CLAWS system employs probabilistic disambiguation based on probabilities which were automatically derived from the previously constructed Brown corpus. See Garside 1987; McEnery and Wilson 1996: Ch. 5.

Cutting tagger A part-of-speech tagger (developed by D. Cutting in 1992) which employs probabilistic techniques similar to those of CLAWS. The Cutting tagger has a success rate comparable to that achieved by the leading English-language part-of-speech taggers, and it can train on unannotated sections of text. Its developers also claim that it constructs its lexicon and trains its probabilistic model directly from an automatically analysed corpus. See Cutting et al. 1992; McEnery and Wilson 1996: Ch. 5.

parser/syntactic parser A type of tool that syntactically analyses a text, i.e. performs parsing. A parser determines what part of speech to assign to each of the words in a sentence and combines these part-of-speech tagged words into larger and larger grammatical fragments, using some kind of grammar that tells it what combinations are possible and/or likely. The output of this analysis, either a single-rooted tree or a string of tree fragments, then goes through semantic analysis, which determines the literal meaning of a sentence in isolation. Developers of parsers have employed a variety of approaches; however, it must be mentioned that all the existing systems are far from robust and their rate of accuracy is still rather low, so that at present they are of little practical use to the corpus linguist. See McEnery and Wilson 1996: Ch. 5; Marcus 1995; Eeg-Olofsson 1990.

part-of-speech tagger A tool that assigns a part-of-speech to each word form in a corpus, i.e. performs part-of-speech tagging. A part-of-speech tagger takes as its input a word form together with all its possible morphosyntactic interpretations and outputs its most likely interpretation, given the context in which the word form appears. Automated part-of-speech taggers are amongst the very best NLP applications in use today in terms of reliability and hence usefulness. Both probabilistic (or stochastic, i.e. based on a statistical grammar) and rule-based taggers (i.e. based on traditional hand-crafted rule grammars) have been developed. See for specific taggers: Cutting tagger, CLAWS, TAGGIT. See Cutting et al. 1992; Eeg-Olofsson 1990; Greene and Rubin 1971.
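
To show the probabilistic route in miniature, a sketch that trains NLTK's unigram tagger on the tagged Brown corpus (the corpus from which CLAWS's original probabilities were derived); a unigram model picks each word's most frequent training tag, which is much cruder than the context-sensitive disambiguation of CLAWS or the Cutting tagger:

```python
import nltk
from nltk.corpus import brown  # one-time download: nltk.download("brown")

train = brown.tagged_sents(categories="news")
tagger = nltk.UnigramTagger(train)  # most-frequent-tag-per-word model
print(tagger.tag("the jury said it".split()))
# e.g. [('the', 'AT'), ('jury', 'NN'), ('said', 'VBD'), ('it', 'PPS')]
```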

public domain tools A term used to refer to freely available software (sometimes public domain generic tools) that can be used for any purpose. Public domain tools allow a particular technology to be explored even if it is not yet fully mature. See Erjavec 1995; Engelien and McBryde 1991.

query(ing) tools A type of tool which allows all or part of the specific data in a corpus to be retrieved. In other words, querying tools answer your questions about your lexical or corpus data. An integrated corpus query system must combine speed, a powerful querying language, and a display engine. The choice of corpus querying tools is quite limited at present. See Erjavec 1995: Ch. 4; Engelien and McBryde 1991; Jacobs 1992; Kobsa and Wahlster 1989.

TAGGIT One of the earliest part-of-speech tagging programs (developed by Greene and Rubin in 1971), which achieved a success rate of 77 per cent correctly tagged words. The program made use of rule-based templates for disambiguation. See Greene and Rubin 1971; McEnery and Wilson 1996: Ch. 2.

terminological data bank (TDB) Terminological tools that are more or less sophisticated organizational structures established for the handling and maintenance of terminological data with the help of TMS (the abbreviation for terminological management systems). TDBs can comprise several or many terminology databases. See Galinski 1995: Ch. 2; Daille 1995.

terminology management system (TMS) Terminological tools used to record, store, process, and output terminological data in a professional manner. TMS modules are integrated into all kinds of application software for co-operative writing, documentation, or co-operative terminology work. See Galinski 1995; Daille 1995.

terminology databases Terminological tools that consist of terminological data and a TMS (terminology management system) to handle this data. Several terminology databases can be included in one terminological data bank (TDB). See Galinski 1995; Daille 1995; Pearson and Kenny 1991.

Language Engineering (LE)

The aim of Language Engineering (sometimes referred to as language technology) is to facilitate the use of telematics applications and to increase the possibilities for communication in and between world languages by integrating new spoken and written language processing methods. Language Engineering covers the following action lines: (i) creation and improvement of pilot applications (document creation and management, information and communication services, translation and foreign language acquisition); (ii) corpora; (iii) language engineering research; (iv) support issues specific to language engineering (i.e. standards, assessment and evaluation, awareness activities, user surveys). See Andersen 1995; Cohen et al. 1990.

Machine Translation (MT)

A branch of computational linguistics that includes all the processes related to automatic translation. Literally, machine translation refers to the imitation of a human translator by a computer or machine. See Krauwer 1995; Hutchins and Sommers 1992; Copeland et al. 1991; Hutchins 1986; Nagao 1989.