In the discussion of English and other languages in 1.2.2, it was seen that Agents and Patients (and, indirectly, Subjects and Objects) are distinguished in terms of:
(i) word order,
(ii) morphology of the noun or pronoun,
(iii) agreement with the verb.
The use of the term ‘Subject’ implies, of course, the identification of S and A (in active sentences), that identification being made in terms of these criteria. There are, however, languages in which S is identified with P (using the same criteria), these being generally known as ‘ergative languages’, though as argued in 1.3.1 (and see below) we should talk rather about ‘ergative systems’.
It may be that there are languages in which ergativity is marked by word order, i.e. where S and P occupy the same position but a different one from that of A. This distinction would not be possible in languages where both A and P precede or follow the verb (and this covers very many languages of the world), because in such languages A and P have different positions only in relation to each other; but it would be possible if one of the terms preceded, and the other followed, the verb.
In 1.3.1, Dyirbal was given as an example of a language with an ergative system, ergativity being shown in the morphology of the noun: A alone is in the ergative case (with suffix -ngu), while both P and S are in the (unmarked) absolutive case.
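The two alignment types can be summarised as mappings from the core roles S, A and P to case labels. The following is a minimal illustrative sketch (the Python encoding and names are our own, not anything from the text); it simply makes explicit that an accusative system groups S with A, while an ergative system such as Dyirbal's groups S with P:

```python
# Sketch of the two alignment types as mappings from core roles to cases.
# (Illustrative encoding only; the role and case labels follow the text.)
ACCUSATIVE_SYSTEM = {"S": "nominative", "A": "nominative", "P": "accusative"}
ERGATIVE_SYSTEM   = {"S": "absolutive", "A": "ergative",   "P": "absolutive"}

def case_of(role, system):
    """Return the case assigned to a core role under a given system."""
    return system[role]

# In Dyirbal, A alone is in the ergative case (suffix -ngu),
# while S and P share the unmarked absolutive.
assert case_of("A", ERGATIVE_SYSTEM) == "ergative"
assert case_of("S", ERGATIVE_SYSTEM) == case_of("P", ERGATIVE_SYSTEM)
```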
This monograph advocates a relativized approach to the binding-theoretic notions of binder and SUBJECT, making it possible to subsume polarity items under the Binding theory. The Binding theory was primarily designed to cover locality conditions between anaphors and pronominals and their antecedents. I will defend the view that binding/obviation principles only happened to be formulated for reflexives/pronouns first, but that there is no reason to think that they are restricted to them. In fact, in a maximally restrictive model, the same locality conditions should hold for all dependent and anti-dependent phenomena. Since reflexives and negative polarity items are both dependent, they both must be in the scope of (or bound to) their licenser or antecedent, and since pronouns and positive polarity items are both anti-dependent, they both must be interpreted outside of the scope of a local antecedent.
These scope properties of polarity items must be stipulated in other frameworks, but follow directly from the binding analysis. It is a long-noted puzzle that negative polarity items always receive a narrow-scope interpretation with respect to negation, and that positive polarity items must receive a wide-scope interpretation with respect to local negation, but either a narrow- or wide-scope interpretation with respect to superordinate negation. This sets polarity items apart from other quantifiers, which enter freely into scope ambiguities with negation, exhibiting both wide- and narrow-scope readings (cf. many and every in (1)).
This book offers cross-linguistic data on negative polarity, reflexive binding and the subjunctive mood, and proposes a unified analysis for various languages. It can be used as a textbook for intermediate or advanced courses in syntax and semantics, especially courses dealing with negative polarity, binding, or the syntax/semantics interface in general. The discussion presupposes acquaintance with only an elementary syntax text, such as Haegeman's or Radford's introduction to syntactic theory.
The monograph is based on my (1988) dissertation “A Binding Approach to Polarity Sensitivity”, although it has been considerably modified and updated. There are three major additions to the dissertation: (i) a novel treatment of the subjunctive mood, as involving deletion of the functional categories Infl and Comp at LF (chapter 6); (ii) the discussion of the Relativized SUBJECT approach to long-distance anaphora (section 0.2); and (iii) an overview of the basic issues in Serbian/Croatian syntax (section 1.1). Moreover, the monograph provides a discussion of some recent developments in the treatment of negative polarity, such as Zanuttini (1991) and Laka (1990), especially concerning the analysis of negative concord.
I am indebted to many people for their valuable help with the dissertation. The deepest thanks go to my advisor, Joseph Aoun, who at the same time taught, directed, and inspired, and continues to do so. For their precious comments, I owe a great deal to Marc Authier, Mürvet Enç, Irene Heim, Larry Hyman, Osvaldo Jaeggli, Audrey Yen-Hui Li, and Dominique Sportiche.
The virtues of viewing the lexicon as an inheritance network are its succinctness and its tendency to highlight significant clusters of linguistic properties. From its succinctness follow two practical advantages: ease of maintenance and ease of modification. In this chapter we present a feature-based foundation for lexical inheritance. We shall argue that the feature-based foundation is both more economical and expressively more powerful than non-feature-based systems. It is more economical because it employs only mechanisms already assumed to be present elsewhere in the grammar (viz., in the feature system), and it is more expressive because feature systems are more expressive than other mechanisms used in expressing lexical inheritance (cf. DATR). The lexicon furthermore allows the use of default inheritance, based on the ideas of default unification, defined by Bouma (1990a).
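The contrast between strict and default unification can be sketched concretely. The following is a minimal illustration in the spirit of Bouma-style default unification, with feature structures modelled as nested dictionaries; the representation and function names are our own assumptions, not the chapter's formalism:

```python
def unify(a, b):
    """Strict unification of feature structures (nested dicts).
    Returns the merged structure, or None on a value clash."""
    if a == b:
        return a
    if not (isinstance(a, dict) and isinstance(b, dict)):
        return None  # conflicting atomic values: unification fails
    out = dict(a)
    for key, val in b.items():
        if key in out:
            sub = unify(out[key], val)
            if sub is None:
                return None
            out[key] = sub
        else:
            out[key] = val
    return out

def default_unify(strict, default):
    """Default unification: strict information always wins; default
    features are added only where they do not conflict with it."""
    if not (isinstance(strict, dict) and isinstance(default, dict)):
        return strict  # a strict atomic value overrides the default
    out = dict(strict)
    for key, val in default.items():
        out[key] = default_unify(out[key], val) if key in out else val
    return out

# A hypothetical 'verb' class supplies defaults; the entry overrides them.
verb_class = {"cat": "V", "infl": {"tense": "pres", "pers": "3"}}
entry = {"stem": "lach", "infl": {"tense": "past"}}
result = default_unify(entry, verb_class)
# result: the entry keeps tense 'past', but inherits cat 'V' and pers '3'
```

Note the asymmetry: `unify(entry, verb_class)` would fail outright on the tense clash, whereas `default_unify` lets the more specific (strict) information persist, which is exactly the behaviour an inheritance lexicon with overridable defaults requires.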
These claims are buttressed in sections sketching the opportunities for lexical description in feature-based lexicons in two central lexical topics: inflection and derivation. Briefly, we argue that the central notion of paradigm may be defined directly in feature structures, and that it may be more satisfactorily (in fact, immediately) linked to the syntactic information in this fashion. Our discussion of derivation is more programmatic; but here, too, we argue that feature structures of a suitably rich sort provide a foundation for the definition of lexical rules.
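The idea that a paradigm can be stated directly in feature structures can be sketched as follows. This is an illustrative encoding of our own (the German verb, feature names and nesting are assumptions, not the chapter's actual analysis): the paradigm is simply a feature structure whose agreement features are the same objects the syntax manipulates, so no separate linking mechanism is needed:

```python
# Sketch: a present-tense paradigm stated directly as a feature structure
# (nested dicts), indexed by the agreement features 'num' and 'pers'.
lachen = {
    "stem": "lach",
    "paradigm": {
        "sg": {"1": "lache", "2": "lachst", "3": "lacht"},
        "pl": {"1": "lachen", "2": "lacht", "3": "lachen"},
    },
}

def inflect(entry, num, pers):
    """Select a surface form by following the agreement features
    into the paradigm feature structure."""
    return entry["paradigm"][num][pers]

form = inflect(lachen, "sg", "3")  # the 3rd-person singular form
```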
We illustrate theoretical claims in application to German lexical structure.
Natural language lexicons form an obvious application for techniques involving default inheritance developed for knowledge representation in artificial intelligence (AI). Many of the schemes that have been proposed are highly complex – simple tree-form taxonomies are thought to be inadequate, and a variety of additional mechanisms are employed. As Touretzky et al. (1987) show, the intuitions underlying the behaviour of such systems may be unstable, and in the general case they are intractable (Selman and Levesque, 1989).
It is an open question whether the lexicon requires this level of sophistication – by sacrificing some of the power of a general inheritance system one may arrive at a simpler, more restricted version, which is nevertheless sufficiently expressive for the domain. The particular context within which the lexicon described here has been devised seems to permit further reductions in complexity. It has been implemented as part of the ELU unification grammar development environment for research in machine translation, comprising parser, generator, lexicon, and transfer mechanism.
Overview of Formalism
An ELU lexicon consists of a number of ‘classes’, each of which is a structured collection of constraint equations and/or macro calls encoding information common to a set of words, together with links to other more general ‘superclasses’. Lexical entries are themselves classes, and any information they contain is standardly specific to an individual word; lexical and non-lexical classes differ in that analysis and generation take only the former as entry points to the lexicon.
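The class-and-superclass organisation just described can be sketched schematically. The following is our own illustrative data structure, not ELU's actual syntax: each class holds local constraints plus links to more general superclasses, and resolving an entry collects inherited information with more specific classes overriding more general ones:

```python
class LexClass:
    """A lexicon 'class': local constraint information plus links to
    more general superclasses (an ELU-style organisation; the Python
    encoding here is a hypothetical sketch)."""

    def __init__(self, name, constraints=None, superclasses=()):
        self.name = name
        self.constraints = dict(constraints or {})
        self.superclasses = list(superclasses)

    def resolve(self):
        """Gather superclass features first, then local constraints,
        so that more specific classes override more general ones."""
        features = {}
        for sup in self.superclasses:
            features.update(sup.resolve())
        features.update(self.constraints)
        return features

# Non-lexical classes encode information common to sets of words...
noun = LexClass("noun", {"cat": "N"})
german_noun = LexClass("german-noun", {"decl": "strong"}, [noun])

# ...while a lexical entry is itself a class whose information is
# specific to one word, and serves as the entry point to the lexicon.
kind = LexClass("Kind", {"stem": "Kind", "gender": "neut"}, [german_noun])
resolved = kind.resolve()
# resolved combines the entry's own features with cat 'N' and decl 'strong'
```

The design point this illustrates is the one made above: lexical and non-lexical classes are the same kind of object, differing only in how they are used, with analysis and generation entering the network at the lexical classes.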