Full metadata record
dc.contributor.author: Delmonte, Rodolfo
dc.description.abstract: The topic of this book is the theoretical foundations of a theory, LSLT (Lexical Semantic Language Theory), and its implementation in the system for text analysis and understanding called GETARUN, developed at the University of Venice, Laboratory of Computational Linguistics, Department of Language Sciences. LSLT encompasses a psycholinguistic theory of the way the language faculty works; a grammatical theory of the way in which sentences are analysed and generated (for this we will be using Lexical-Functional Grammar); a semantic theory of the way in which meaning is encoded and expressed in utterances (for this we will be using Situation Semantics); and a parsing theory of the way in which the components of the theory interact in a common architecture to produce the language representation needed to be eventually spoken aloud or interpreted by the phonetic/acoustic language interface. LSLT will then be put to use to show how discourse relations are mapped automatically from text using the tools available in the four sub-theories; in particular, we will focus on Causal Relations, showing how the various sub-theories contribute to addressing different types of causality. We assume that the main task the child is faced with is creating an internal mental LEXICON, which we further assume should contain two types of information: grammatical, to feed the grammatical component of the language faculty, and semantic, to allow meaning to be associated with each lexical entry. This activity is guided by two criteria:
Semantic Criterion: the goal of the language faculty is that of creating meaning relations between words and (mental representations of) reality, that is, events, entities and their attributes.
Communicative Criterion: the goal of the language faculty is that of allowing communication between humans to take place.
Both criteria are taken as primitives: the developmental evidence of communicative intentions is given in Bara (2007).
That communicating implies understanding, and hence the need for semantic processing, follows. We start by addressing the psycholinguistic theory, whose basic goal is the creation of meaning relations between linguistic objects (words) and bits of reality (situations, for short). To do that we set forth the strong claim that, in order for Analysis and Generation to become two facets of the same coin, Semantics needs to be called in and lexical information must be specified in such a way as to have the Parser/Generator work properly. In this respect, syntax only represents a subcomponent of the Grammatical theory and as such contributes to the definition of the primitives of LSLT. We assume that some type of X-bar syntax is inherited and innate, together with innate knowledge of basic syntactic principles and parameters, or Universal Grammar (UG). As is usually assumed by all linguistic theories, language acquisition activates both universal grammar and some peripheral grammar rules. Both morphological and syntactic principles are learnt together with semantic ones, which alone can guarantee their consistency. We will take the stance that the existence of a backbone of rewriting rules with reference to recursion is inherently innate (see Hauser, Chomsky, 2002). However, together with Hinzen (2006; 2007), we assume that syntactic structure should be underspecified with regard to the semantic (and pragmatic) task to be performed: "domain-general principles of organizing information economically lend themselves to semantic uses, they engender semantic consequences" (2007c). As will become clear in the chapter on Text Generation, it is the Planning phase that organizes meaning structures, which are then externalized by the Realization component, where language-dependent syntactic constraints flesh out the appropriate surface forms.
At the same time, we claim that recursion is a mechanism driven by communicative needs and, in the last resort, relates to the second important goal of a psycholinguistic theory, that is, the child's need to communicate. Communicating with the external world will slowly make the child aware of the existence of a point of view external to his own. Here by recursion we only refer to high, sentence-level recursion and not to the more technical concept of recursion in formal grammars, where recursion occurs every time a rewriting rule contains the same symbol on both the right- and left-hand sides. We are only concerned with recursive calls to sentence-level rules, which constitute a problem for parsers and an increase in complexity for sentence comprehension. From a linguistic point of view, recursion in utterances is basically represented by two types of structures: sentential complements, which have a reportive semantic content, and relative clauses, which have a supportive semantic content. Reportive contents are governed by communication predicates, which have the semantic content of introducing two propositions related to two separate situations in spatiotemporal terms. Supportive contents are determined by the need to bring in, at the interpretation level, a situation which helps better individuate the entity represented by the governing nominal predicate. Thus, we might assume that recursion is triggered by communicative processes and by the referential semantic properties of utterances and the underlying propositions. The Grammatical Theory (hence GT) defines the way in which lexical entries need to be organized. However, the Lexicon is informed both by the Grammatical and the Semantic Theory, which alone can provide the link to the Ontology or Knowledge of the World Repository.
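The two recursion-triggering structure types just described can be illustrated with a minimal sketch. This is not the GETARUN implementation: the word lists and the flat-token detection heuristic are invented for the example, and serve only to show how reportive complements (governed by communication predicates) and supportive relatives each open one more sentence-level recursion.

```python
# Minimal sketch (not GETARUN's actual mechanism): each reportive complement
# ("X said that S") or supportive relative ("the N who S") opens one more
# embedded S-level clause. Word lists are illustrative assumptions.

COMM_VERBS = {"said", "reported", "claimed"}   # reportive communication predicates
REL_PRONOUNS = {"who", "which", "that"}        # introduce supportive relatives

def clause_depth(tokens):
    """Count how many embedded S-level clauses a flat token list opens."""
    depth = 0
    for i, tok in enumerate(tokens):
        if tok in COMM_VERBS and i + 1 < len(tokens) and tokens[i + 1] == "that":
            depth += 1  # reportive: complement clause of a communication verb
        elif tok in REL_PRONOUNS and i > 0 and tokens[i - 1] not in COMM_VERBS:
            depth += 1  # supportive: relative clause on a nominal head
    return depth
```

On this toy analysis, "John said that the man who left won" opens two embedded clauses: one reportive (under "said that") and one supportive (under "who").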
As in LFG, we assume the existence of lexical forms where lexical knowledge is encoded, composed of grammatical information: categorial, morphological, syntactic, and selectional restrictions. These are then mapped onto semantic forms, where semantic roles are encoded and aspectual lexical classes are associated. In Analysis, c-structures are mapped onto f-structures and eventually turned into s-structures. Rules associating lexical representations with c-structures are part of GT. The mapping is effortless, being just a bijective process, and is done by means of FSA (finite state automata). C-structure building is done in two phases. After grammatical categories are associated with inflected wordforms, a disambiguation phase takes place on the basis of local and available lexical information. The disambiguated tagged words are organized into local X-bar based head-dependent structures, which are then further developed into a complete clause-level hierarchical structure, through a cascaded series of FSA which make use of recursion only when there are lexical constraints, both grammatical and semantic, requiring it. C-structure is mapped onto f-structure by interpretation processes based on rules defined in the grammar and translated into parsing procedures. Grammatical relations are limited to what are usually referred to as Predicate-Argument relations, which may only encompass obligatory and optional arguments of a predicate. The Semantic Theory will add a number of important items of interpretation to the Grammatical representation, working at the propositional level: negation, quantification, modality and pronominal binding. These items will appear in the semantic representation associated with each clause and are activated by means of parsing procedures specialized for those tasks. The Semantic Theory also has the task of taking care of non-grammatical objects usually referred to by the two terms Modifiers and Adjuncts.
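The c-structure to f-structure mapping described above can be sketched in miniature. This is a hedged illustration, not GETARUN's parser: the triple-based clause format, the function name and the agreement check are our own simplifications, chosen only to show how constituency labels plus agreement features yield grammatical functions (e.g. the subject is the NP that agrees with the verb).

```python
# Toy sketch of functional mapping (invented format, not GETARUN's):
# constituency labels plus agreement checks yield grammatical functions.

def map_to_fstructure(cstructure, verb_agr):
    """cstructure: list of (label, head, agr) triples in surface order;
    verb_agr: the finite verb's agreement features, e.g. {'num': 'sg'}."""
    fstruct = {}
    for label, head, agr in cstructure:
        if label == "NP" and agr == verb_agr and "SUBJ" not in fstruct:
            fstruct["SUBJ"] = head                      # NP agreeing with the verb
        elif label == "NP":
            fstruct["OBJ"] = head                       # remaining NP: direct object
        elif label == "PP":
            fstruct.setdefault("OBL", []).append(head)  # oblique argument
        elif label == "V":
            fstruct["PRED"] = head
    return fstruct
```

For a clause like "the dog chases cats", the singular NP agreeing with the verb maps to SUBJ and the remaining plural NP to OBJ.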
In order to properly interpret meaning relations for these two optional components of utterance linguistic content, the Semantic Theory may access Knowledge of the World as represented by a number of specialized lexical resources: an Ontology, for inferential relations; Associative Lexical Fields, for semantic similarity relations; Collocates, for the most frequent modifier and adjunct relations; and Idiomatic and Metonymic relations as well as Paraphrases, for stylistic purposes. In Generation, a plan is created and predicates are inserted in predicate-argument structures (hence PAS) with attributes, i.e. modifiers and adjuncts. Syntax plays only a secondary role, in that syntactic choices are hooked to stylistic and rhetorical rules which are genre and domain related. They are also highly idiosyncratic, depending strongly on each individual's social background. Surface forms will be produced according to rhetorical and discourse rules, by instantiating features activated by semantic information. In particular, our theory of linguistic knowledge acquisition, being strongly semantically founded, helps explain why lexical knowledge is coupled to the way in which words are used in sentences and how they are used to convey and comprehend knowledge of the world. In our perspective, lexical knowledge is gathered from, and is constituted of, the following semantic and pragmatic items:
o Events or Situations with Participants characterized by Semantic Roles, a Perspective or Point of View, and a Temporal Extension of the Event.
As a consequence, there are a number of issues that are strictly related to lexical acquisition and need appropriate description in lexical entries, as they have in our lexicon:
a. whether the meaning of a lexical entry is related to the actual world or not (factuality)
b. whether events carry consequences on the state of affairs described (causality)
c. whether relations of events to spatiotemporal locations of arguments may change or not (aspectuality)
d. the complexity of lexical meaning contained in the lexical entry (semantic decomposition)
The basic issue we will tackle in setting out our theory is the way in which this knowledge is filtered from sentences as they are produced in the context of acquisition for the child. As we know from any semantic theory (here we will be referring to Situation Semantics), sentences are to be interpreted as follows: sentential surface structures are propositions which contain, among others, the following linguistic items for semantic interpretation, organized into primary (i.) and secondary (ii.) items:
i. Predicates, Arguments, Adjuncts, Spatiotemporal Locations
ii. Modality, Negation, Conditionality, Quantification, Opacity.
From a computational point of view, we assume that to encode such knowledge we need to posit the following four levels of lexical knowledge and consequent mapping operations, organized into increasingly restrictive layers of representation, where syntax and morphology provide the starting elements, i.e. heads or lexemes for lexical encoding, which corresponds to level 0:
I. Level 1. GRAMMATICAL FUNCTIONS, associated with each head/predicate from constituency labels by functional mapping and syntactic information (the subject being the NP that agrees with the verb in features): SUBJect, OBJect, OBLique, ARG-MOD-agent, ADJunct, MODifier, PROPosition, etc.
II. Level 2. SEMANTIC MAPPING of each f-structure to a predicate-argument structure with modifiers and adjuncts (ARG0, ARG1, etc.) and the other secondary components.
III. Level 3. PRAGMATIC MAPPING from domain-bound definitions with semantically disambiguated meaning, via an ontology-like knowledge base (WordNet), into a semantically structured representation.
As a consequence, there are two main tenets of the theory supporting the construction of the system: one is that it is possible to reduce access to domain world knowledge by means of contextual reasoning, i.e.
reasoning triggered independently by contextual or linguistic features of the text or discourse under analysis. In other words, the theory adopts what could be termed the Shallow Processing Hypothesis: access to the Ontology is reduced and, whenever links are missing, substituted by inferences on the basis of hand-coded lexical and grammatical knowledge given to the system, worked out in a fully general manner. In exploring this possibility we make one fundamental assumption: that the psychological processes needed for language analysis and understanding are controlled by a processing device which is completely separate from that of language generation, with which it nonetheless shares a common lexicon. In our approach there is no statistical processing, but only algorithms based on symbolic rules, even though we use FSA to help with tag disambiguation and parsing. The reason for this is twofold. The objective reason is that statistical language models need linguistic resources which are in turn very time-consuming and highly error-prone to produce. In more general terms, one needs to consider that highly sophisticated linguistic resources are always language and genre dependent, besides needing to comply with requirements of statistical representativeness. No such limitations apply to symbolic algorithms, which on the contrary are more general and easily portable from one language to another. Differences in genre can also be easily accounted for by scaling rules adequately. It is sensible to assume that when understanding a text a human reader or listener makes use of his encyclopaedia parsimoniously. Contextual reasoning is the only way in which a system for Natural Language Understanding should tap external knowledge of the domain. In other words, a system should be allowed to perform an inference on the basis of domain world knowledge when needed, and only then.
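The Shallow Processing Hypothesis just stated can be sketched as a lookup strategy. The tiny knowledge tables and the function below are invented for illustration, not taken from GETARUN: the point is only that hand-coded lexical knowledge is consulted first, and the external ontology is tapped only where the lexical link is missing.

```python
# Illustrative sketch of the Shallow Processing Hypothesis: prefer
# hand-coded lexical knowledge; fall back to the external ontology
# only when a link is missing. Both tables are invented examples.

LEXICAL_ISA = {"dog": "animal", "rose": "flower"}          # hand-coded lexicon
ONTOLOGY_ISA = {"animal": "organism", "flower": "plant"}   # external ontology

def isa(word, concept):
    """True if `word` can be chained up to `concept`, consulting the
    ontology only where the lexical ISA link is absent."""
    seen, current = set(), word
    while current and current not in seen:
        if current == concept:
            return True
        seen.add(current)
        # lexical knowledge first; ontology only as a fallback
        current = LEXICAL_ISA.get(current) or ONTOLOGY_ISA.get(current)
    return False
```

Here `isa("dog", "organism")` succeeds by one lexical step ("dog" to "animal") plus one ontological fallback step ("animal" to "organism"); a query with no licensed chain fails without further ontology access.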
In this way, the system can simulate actual human behaviour, in that access to extralinguistic knowledge is triggered by contextual factors independently present in the text and detected by the system itself. This would be required only for implicit linguistic relations, as can happen with bridging descriptions, to cope with anaphora resolution phenomena, for instance. In other words, we want to show that there are principled ways by which linguistic processes must interact with knowledge representation or the ontology. It is also our view that humans understand texts only when all the relevant information is supplied and available. Descriptive and narrative texts are usually self-explanatory (not so literary texts), in order to allow even naive readers to grasp their meaning. Note that we are not here dealing with spoken dialogues, where much of what is meant can be left unsaid or must be implicitly understood. In the best current systems for natural language, the linguistic components are kept separate from the knowledge representation, and work which could otherwise be done directly by the linguistic analysis is duplicated by the inferential mechanism. The linguistic representation is usually mapped onto a logical representation which is in turn fed into the knowledge representation of the domain in order to understand and validate a given utterance or query. We shall comment on and discuss some such systems in the book. Thus the domain world model or ontology must be built in advance, usually in view of a given task the system is set out to perform. This modelling is domain and task limited, and generality can only be achieved from coherent lexical representations, as will be discussed in the book.
In some of these systems, the main issue is how to make the two realms interact as soon as possible, in order to take advantage of the inferential mechanism to reduce ambiguities present in the text or to allow reasoning on linguistic data which could not otherwise be understood. We assume that an integration between linguistic information and knowledge of the world can and must be carried out at all levels of linguistic description, and that contextual reasoning can thus be performed on the fly rather than sequentially. This does not imply that external knowledge of the world is useless and should not be provided at all: it simply means that access to this knowledge must be filtered by the analysis of the linguistic content of surface linguistic forms and the abstract representations of the utterances making up the text. As we said, the task we are faced with when trying to simulate human understanding of texts is to scientifically isolate the contexts in which external knowledge of the world should be made available to the system, as well as to provide the tools to deal with this task adequately. There is a description of our task which deserves quoting, taken from P. Bosch's contribution to the book by Herzog & Rollinger (eds), Text Understanding in LILOG, which we take to be the best example of an attempt to come to terms with the problem at stake. In his paper, the author identifies what he takes to be the main problem to be tackled: identifying in a text "inferentially unstable" concepts, which are to be kept distinct from "inferentially stable" ones. The latter should be analysed solely on the basis of linguistic description, while the former should tap external knowledge of the world.
Before entering into a comment on this issue, we would like to quote from his Conclusions: "The central point of this paper is to try to give a direction to work on the interaction of linguistic analysis and knowledge representation in knowledge-based NL Systems. I have tried to argue and to demonstrate that without a full linguistic analysis there is little hope that we shall ever have reasonably general and portable language modules in NL systems. It has also become clear, I hope, that this is not a trivial task but requires a decent amount of empirical research for many years to come. But the linguistic research required is not isolated research in pure linguistics, but close cooperation with work on knowledge representation and - although this is a point I have not argued for - psychological work on conceptual systems, is imperative. The most difficult problem to overcome, I believe, is that the most generally held belief in the scientific community with respect to our problem is that the distinction between linguistic and conceptual facts is arbitrary and hence not a proper research question, but a matter of pragmatic decisions. It is this belief more than anything else that inhibits further progress of the kind Brachman found lacking." (p. 257) Our book can then be regarded as a contribution towards this final goal, which we identify tout court with contextual reasoning, i.e. performing inferential processes on the basis of linguistic information while keeping under control the contribution of external knowledge in order to achieve understanding of a text.
dc.publisher: Nova Science Publishers - New York
dc.subject: Computational Linguistic Text Processing – Logical Form, Semantic Interpretation, Discourse Relations and Question Answering
dc.title: Computational Linguistic Text Processing – Logical Form, Semantic Interpretation, Discourse Relations and Question Answering
Appears in Collections: Articles, book chapters by CLS members

Files in This Item:
File: CLTPrevised.pdf | Description: Book 2 pdf format | Size: 3.73 MB | Format: Adobe PDF
