This chapter develops the first collection of results around change of enrichment along a (possibly nonsymmetric) multifunctor. As in Chapter 6, it is shown that this theory extends the classical theory for enrichment over (possibly symmetric) monoidal categories. Compositionality and 2-functoriality for the change-of-enrichment constructions are treated in Sections 7.4 and 7.5, respectively.
This chapter describes the theory of self-enrichment for closed multicategories, and of standard enrichment for multifunctors between closed multicategories. The self-enrichment of the multicategory of permutative categories, from Chapter 8, is a special case. Compositionality of standard enrichment is discussed in Section 9.3, and applied to the factorization of Elmendorf–Mandell K-theory in Section 9.4.
This work develops techniques and basic results concerning the homotopy theory of enriched diagrams and enriched Mackey functors. Presentation of a category of interest as a diagram category has become a standard and powerful technique in a range of applications. Diagrams that carry enriched structures provide deeper and more robust applications. With an eye to such applications, this work provides further development of both the categorical algebra of enriched diagrams, and the homotopy theoretic applications in K-theory spectra. The title refers to certain enriched presheaves, known as Mackey functors, whose homotopy theory classifies that of equivariant spectra. More generally, certain stable model categories are classified as modules (in the form of enriched presheaves) over categories of generating objects. This text contains complete definitions, detailed proofs, and all the background material needed to understand the topic. It will be indispensable for graduate students and researchers alike.
How do we understand any sentence, from the most ordinary to the most creative? The traditional assumption is that we rely on formal rules combining words (compositionality). However, psycho- and neuro-linguistic studies point to a linguistic representation model that aligns with the assumptions of Construction Grammar: there is no sharp boundary between stored sequences and productive patterns. Evidence suggests that interpretation alternates compositional (incremental) and noncompositional (global) strategies. Accordingly, systematic processes of language productivity are explainable by analogical inferences rather than compositional operations: novel expressions are understood 'on the fly' by analogy with familiar ones. This Element discusses compositionality, alternative mechanisms in language processing, and explains why Construction Grammar is the most suitable approach for formalizing language comprehension.
This Element covers the interaction of two research areas: linguistic semantics and deep learning. It focuses on three phenomena central to natural language interpretation: reasoning and inference; compositionality; extralinguistic grounding. Representation of these phenomena in recent neural models is discussed, along with the quality of these representations and ways to evaluate them (datasets, tests, measures). The Element closes with suggestions on possible deeper interactions between theoretical semantics and language technology based on deep learning models.
In this chapter the focus moves beyond Pāṇini’s grammar to address a topic of major concern within the broader Indian tradition: semantics. While some observations regarding semantics can be drawn from Pāṇini, for the most part semantics was treated as a separate field of inquiry within the Indian tradition until the early modern period. This chapter provides introductions to the traditions of semantic analysis in ancient India and the modern West, and a comparison of their approaches to one issue of central concern in semantic theory: compositionality.
This paper provides evidence that the inveterate way of assessing linguistic items’ degrees of analysability by calculating their derivation to base frequency ratios may obfuscate the difference between two meaning processing models: one based on the principle of compositionality and another on the principle of parsability. I propose to capture the difference between these models by estimating the ratio of two transitional probabilities for complex words: P(affix | base) and P(base | affix). When transitional probabilities are comparably low, each of the elements entering into combination is equally free to vary. The combination itself is judged by speakers to be semantically transparent, and its derivational element tends to be more linguistically productive. In contrast, multi-morphemic words that are characterised by greater discrepancies between transitional probabilities are similar to collocations in the sense that they also consist of a node (conditionally independent element) and a collocate (conditionally dependent element). Such linguistic expressions are also considered to be semantically complex but appear less transparent because the collocate’s meaning does not coincide with the meaning of the respective free element (even if it exists) and has to be parsed out from what is available.
Motor neuroscience centers on characterizing human movement, and the way it is represented and generated by the brain. A key concept in this field is that despite the rich repertoire of human movements and their variability across individuals, both the behavioral and neuronal aspects of movement are highly stereotypical, and can be understood in terms of basic principles or low dimensional systems. Highlighting this concept, this chapter outlines three core topics in this research field: (1) Trajectory planning, where prominent theories based on optimal control and geometric invariance aim at describing end-effector kinematics using basic unifying principles; (2) Compositionality, and specifically the ideas of motor primitives and muscle synergies that account for motion generation and muscle activations, using hierarchical low-dimensional structures; and (3) Neural control models, which regard the neural machinery that gives rise to sequences of motor commands, exploiting dynamical systems and artificial neural network approaches.
It is controversial which idioms can occur with which syntactic structures. For example, can Mary kicked the bucket (figurative meaning: ‘Mary died’) be passivized to The bucket was kicked by Mary? We present a series of experiments in which we test which structures are compatible with which idioms in German (for which there are few experimental data so far) and English, using acceptability judgments. For some of the tested structures – including German left dislocation, scrambling, and prefield fronting – it is particularly contested to what extent they are restricted by semantic factors and, as a consequence, to what extent they are compatible with idioms. In our data, these structures consistently showed similar limitations: they were fully compatible with one subset of our test idioms (those categorized as semantically compositional) and degraded with another (those categorized as non-compositional). Our findings only partly align with previously proposed hierarchies of structures with respect to their compatibility with idioms.
Critics of Berkeley's divine language argument usually dismiss it for one of two main reasons: (1) it appears to be a mere variation on Descartes's argument for the existence of other minds, or (2) there is too little similarity between human languages and the ‘discourse of nature’. I will first show that the compositional features of language on which Berkeley partially bases his argument include systematicity and productivity – not merely the generativity on which Descartes's is based. I will then show that the analogy between human languages and the discourse of nature is stronger than typically appreciated, even given contemporary understandings of language.
This article aims to examine to what extent English and Jordanian Arabic (JA) have the same classification of N + N compounds based on their degree of compositionality. It also attempts to propose a universally applicable classification of compositionality in N + N compounds. I suggest a modified version of the degree of compositionality based on previous classifications by Fernando (1996), Dirven and Verspoor (1998), and Kavka (2009). The new classification is based on the semantic contribution of the head and the non-head to the meaning of the whole compound. After I have applied the new scale to the JA data, I argue that English and JA have compounds that exhibit the four degrees of compositionality; namely completely compositional, semi-compositional, semi-non-compositional, and completely non-compositional. The article concludes with some recommendations for future research.
This chapter reviews the theoretical and conceptual issues central to acceptability judgment tasks, and related paradigms, at the syntax–semantics interface, and provides a broad overview of core results obtained from research in this domain. Challenges faced by studies in experimental semantics are distinct from those in experimental syntax, which at times requires different linking hypotheses, research questions, or experimental paradigms. However, the current state of affairs suggests that acceptability and other offline judgments will continue to contribute highly informative and profitable tools for exploration of phenomena at the syntax–semantics interface.
This chapter covers morphological productivity. We first look at the main factors that contribute to or detract from productivity, including transparency, compositionality, and usefulness. We review types of restrictions on word formation rules including categorial, phonological, syntactic, semantic, and etymological restrictions. Students consider the difficulty of measuring productivity and learn to calculate productivity using Baayen’s P formula. We then take a historical perspective and look at changes to productivity over time. The chapter ends with a consideration of the distinction between morphological productivity and morphological creativity.
We give an in-depth account of compositional matrix-space models (CMSMs), a type of generic models for natural language, wherein compositionality is realized via matrix multiplication. We argue for the structural plausibility of this model and show that it is able to cover and combine various common compositional natural language processing approaches. Then, we consider efficient task-specific learning methods for training CMSMs and evaluate their performance in compositionality prediction and sentiment analysis.
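The core idea of a CMSM can be illustrated with a minimal sketch: each word is mapped to a square matrix, and the meaning of a phrase is the ordered product of its word matrices. The word matrices below are made-up toy values for illustration only, not trained parameters from the paper.

```python
# Toy sketch of a compositional matrix-space model (CMSM):
# a word's meaning is a square matrix, and a phrase's meaning is
# the product of its word matrices, so composition is matrix
# multiplication. All matrix entries here are invented examples.

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 2x2 word matrices (illustrative values only).
lexicon = {
    "very": [[1.5, 0.0], [0.0, 1.5]],  # intensifier: scales its argument
    "good": [[1.0, 0.5], [0.0, 1.0]],
    "not":  [[0.0, 1.0], [1.0, 0.0]],  # swaps the two dimensions
}

def compose(phrase):
    """Meaning of a phrase = ordered product of its word matrices."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # start from the identity matrix
    for word in phrase.split():
        result = matmul(result, lexicon[word])
    return result

# Matrix multiplication is associative but not commutative, so the
# model is sensitive to word order:
print(compose("very good"))  # [[1.5, 0.75], [0.0, 1.5]]
print(compose("not good"))   # differs from compose("good not")
```

Because matrix multiplication is associative, any bracketing of a phrase yields the same meaning, while non-commutativity keeps word order significant; these two algebraic properties are what make matrix spaces a plausible carrier for compositional structure.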
Semantic communication is about transmitting mental representations of reality. Three research questions address the nature of this process in primates. Can primates produce signals that are meaningful in a lexical sense? Are they capable of compositional semantics? Can they create and infer meaning by integrating context and intention? There is good evidence that, as recipients, primates have capacities at all three levels, whereas for signallers the evidence is less compelling. This difference may have cognitive roots, due to the fact that primate signallers are typically engaged in the here-and-now and, unlike humans, less able to refer to memory content. Future research will have to clarify what mental structures primates can take into account during communication, including entities that are not physically present.
This pioneering study combines insights from philosophy and linguistics to develop a novel framework for theorizing about linguistic meaning and the role of context in interpretation. A key innovation is to introduce explicit representations of context (assignment variables) in the syntax and semantics of natural language. The proposed theory systematizes a spectrum of 'shifting' phenomena in which the context relevant for interpreting certain expressions depends on features of the linguistic environment. Central applications include local and non-local contextual dependencies with quantifiers, attitude ascriptions, conditionals, questions, and relativization. The result is an innovative philosophically informed compositional semantics compatible with the truth-conditional paradigm. At the forefront of contemporary interdisciplinary research into meaning and communication, Semantics with Assignment Variables is essential reading for researchers and students in a diverse range of fields.
This chapter begins with an overview of several terms important to a discussion of meaning in language, and introduces the reader to the theory of linguistic relativism and the relationship between language and thought. This section transitions into a review of the extension, reference, and the features that begin to form a comprehensive theory of semantics. With this foundation, students turn to a deeper investigation of formal semantics, including definitions for logical expressions and relationships, and then to a presentation of word sense, and the interactions between various parts of speech in the lexicon. The principle of compositionality is introduced, and it is used to explore several examples of non-compositional language. The end of the chapter ties these concepts to an investigation of pragmatics, including politeness, Gricean Maxims, and implicature.
This chapter introduces the theoretical context for the compositional semantic framework to be developed in the book. A key innovation is to posit explicit representations of context – formally, variables for assignment functions – in the syntax and semantics of natural language. A primary focus is on a spectrum of linguistic shifting phenomena, in which the context relevant for interpretation depends on features of the linguistic environment. The proposed theory affords a standardization of quantification across domains, and an improved framework for theorizing about linguistic meaning and the role of context in interpretation. Comparisons with alternative operator-based theories are briefly considered. An outline of the subsequent chapters is presented.
This chapter develops an improved assignment-variable-based compositional semantics for head-raising analyses of restrictive relative clauses, and applies the account to certain types of pronominal anaphora. The speculative choice-function-based analysis of names from Chapter 4 is extended to certain indefinites, relative words, and donkey pronouns. An analysis of donkey pronouns as copies of their linguistic antecedent is supported by crosslinguistic data. Nominal quantifiers are treated as introducing quantification over assignments. The proposed semantics for quantifiers helps capture linguistic shifting data in universal, existential, and asymmetric readings of donkey sentences. Additional composition rules or principles for interpreting reconstructed phrases aren’t required (e.g., Predicate Abstraction, Predicate Modification, Trace Conversion). The semantics is fully compositional. Critical challenges are discussed.
This chapter draws on independent work on the syntax–semantics interface to motivate a more complex clausal architecture for an assignment-variable-based theory. Binding across syntactic categories and semantic domains is captured uniformly from a generalized binder-index feature, which attaches directly to expressions undergoing movement for type reasons. World-binding (intensionality) arises from the complementizer, which moves from the world-argument position of the clause’s main predicate; assignment-binding arises from modal elements, which move from an internal assignment-argument position of the complementizer. The semantics is fully compositional. The remainder of the book develops the account and applies it to a range of constructions and types of linguistic shifting phenomena.