The multilingual landscape of Canada creates opportunities for many heterogeneous bilingual communities to experience systematic phonetic variation within and across languages and dialects, and exposes listeners to different pronunciation variants. This paper examines phonetic variation through the lens of an ongoing sound change in Cantonese involving word-initial [n] and [l] across two primed lexical decision tasks (Experiment 1: immediate repetition priming; Experiment 2: long-distance repetition priming). Our main question is: how are sound change pronunciation variants recognized and represented in a Cantonese-English bilingual lexicon? The results of both experiments suggest that [n]- and [l]-initial variants facilitate processing in both short- and long-term spoken word recognition. Thus, regular exposure to Cantonese endows bilingual listeners with the perceptual flexibility to dually and gradiently map pronunciation variants to a single lexical representation.
Work by Chomsky et al. (2019) and Epstein et al. (2018) develops a third-factor principle of computational efficiency called “Determinacy”, which rules out “ambiguous” syntactic rule-applications by requiring one-to-one correspondences between the input or output of a rule and a single term in the domain of that rule. This article first adopts the concept of “Input Determinacy” articulated by Goto and Ishii (2019, 2020), who apply Determinacy specifically to the input of operations like Merge, and then proposes to extend Determinacy to the labeling procedure developed by Chomsky (2013, 2015). In particular, Input Determinacy can explain restrictions on labeling in contexts where multiple potential labels are available (labeling ambiguity), and it can also provide an explanation for Chomsky's (2013, 2015) proposal that syntactic movement of an item (“Internal Merge”) renders that item invisible to the labeling procedure.
For a finite abelian p-group A and a subgroup $\Gamma \le \operatorname {\mathrm {Aut}}(A)$, we say that the pair $(\Gamma ,A)$ is fusion realizable if there is a saturated fusion system ${\mathcal {F}}$ over a finite p-group $S\ge A$ such that $C_S(A)=A$, $\operatorname {\mathrm {Aut}}_{{\mathcal {F}}}(A)=\Gamma $ as subgroups of $\operatorname {\mathrm {Aut}}(A)$, and . In this paper, we develop tools to show that certain representations are not fusion realizable in this sense. For example, we show, for $p=2$ or $3$ and $\Gamma $ one of the Mathieu groups, that the only ${\mathbb {F}}_p\Gamma $-modules that are fusion realizable (up to extensions by trivial modules) are the Todd modules and in some cases their duals.
This chapter discusses phonological motivations for morpho-syntactic changes in the history of Chinese. In general, Old Chinese was monosyllabic: the overwhelming majority of words were represented by a single syllable, regardless of whether they were content or function words. In Middle Chinese, the phonological system was dramatically simplified; the number of consonants and vowels was reduced, and syllable structures were simplified. To restore the phonological distinctions of lexical items, the language increased the number of syllables per word, typically by adding one syllable to originally monosyllabic words. This disyllabification tendency has lasted nearly two millennia. The new prosodic unit stimulated the fusion of two monosyllabic items, a key factor in the emergence of the resultative construction and other grammatical morphemes.
Building on previous chapters’ conclusions, this chapter posits various organizational levels of intentionality and speculates as to how they interconnect through powers-based processes and biological evolutionary developments. These levels of intentionality form an intentionality continuum. The Intentionality Continuum Thesis holds that intentionality is not a local phenomenon, present only in creatures with minds, but a global phenomenon present in fundamental physical phenomena, biological cells, plants, animals and humans, and human societies. The continuum of intentionality in nature thus runs from the physical intentionality of fundamental powers to the complex, higher-level psychological powers of organisms and social groups. This might require emergent powers, a possibility defended here with an account inspired by the fusion theory of emergent properties advanced by Paul Humphreys. Given the panintentionality implied by the Intentionality Continuum Thesis, along with some other defensible assumptions, a reasonable though certainly tentative case can be made for pantheism. The chapter, and the book, conclude by identifying open questions.
We show that Miller partition forcing preserves selective independent families and P-points, which implies the consistency of $\mbox {cof}(\mathcal {N})=\mathfrak {a}=\mathfrak {u}=\mathfrak {i}<\mathfrak {a}_T=\omega _2$. In addition, we show that Shelah’s poset for destroying the maximality of a given maximal ideal preserves tight mad families and so we establish the consistency of $\mbox {cof}(\mathcal {N})=\mathfrak {a}=\mathfrak {i}=\omega _1<\mathfrak {u}=\mathfrak {a}_T=\omega _2$.
Shifting to an examination of identity formation from below, Chapter 4 observes popular culture through music and opens a discussion on the nature of Iranian identity. Music is not only a cultural expression; in Iran it has also been used as a political tool and as part of resistance movements. Iranians voiced their allegiance to the revolution and their identity as Shiite Muslims through song-like protest chants and musical tracks. Protest chants and group singing heighten the meaning of words and help facilitate a sense of unity. These techniques were employed as an emotive force during the revolution, and by later generations as a proclamation of identity and a form of resistance after the controversial election of 2009. The Green Movement is a pertinent example of how popular music is utilized by Iranians as a mode of expression. Consequently, popular music can be used as a tool for investigating contemporary Iranian identity and society.
Chapter 23 reviews the role of grammaticalization at different levels of grammar: phonological, morpho-syntactic, paradigmatic, and semantic-pragmatic. It first discusses the diachronic principles and mechanisms that have been proposed in previous grammaticalization studies. The chapter then examines the prominent patterns by source category, dividing these into nouns and verbs. In the nominal category, four source construction types are illustrated with the reanalyses that led to their emergence as grammatical forms such as postpositions, conjunctions, and auxiliaries. In the verbal category, four source construction types are illustrated, with special focus on the grammaticalization of de-verbal postpositions and auxiliary verbs. The chapter further addresses select aspects of Korean grammaticalization from a typological perspective. It discusses the productive use of affixes, converbs, and predicatives known as “mermaid constructions”, which are among the characteristics shared by many languages in the eastern Eurasian region. It also discusses the presence of large inventories of de-verbal postpositions and numeral classifiers.
Chapter 6 reviews the historical record, with the help of the Oxford English Dictionary Online, other older corpora, and studies by historical linguists, in an attempt to identify earlier forms of general extenders and to trace the development of those in current use. One clear pathway of change is identified: phrases with specific reference become more general in their range of reference and may even lose referential function over time. Detailed paths of development are provided for all the most common forms. The different processes involved in grammaticalization are also described and illustrated, with attention given to lexical replacement, semantic bleaching, morphosyntactic and phonological change, and pragmatic shift.
The interaction of intense, ultrashort laser pulses with ordered nanostructure arrays offers a path to the efficient creation of ultra-high-energy-density (UHED) matter and the generation of high-energy particles with compact lasers. Irradiation of deuterated nanowire arrays results in a near-solid-density environment with extremely high temperatures and large electromagnetic fields, in which deuterons are accelerated to multi-megaelectronvolt energies, resulting in deuterium–deuterium (D–D) fusion. Here we focus on the method of fabrication and the characteristics of ordered arrays of deuterated polyethylene nanowires. The irradiation of these array targets with femtosecond pulses of relativistic intensity and joule-level energy creates a micro-scale fusion environment that produced $2\times {10}^6$ neutrons per joule, an increase of about 500 times with respect to flat solid CD$_2$ targets irradiated with the same laser pulses. Irradiation with 8 J laser pulses was measured to generate up to $1.2\times {10}^7$ D–D fusion neutrons per shot.
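As a rough sanity check on those figures, the per-joule and per-shot yields are mutually consistent under a naive linear scaling; this back-of-envelope sketch is an illustration, not a calculation from the paper:

```python
# Naive linear-scaling check of the reported D-D neutron yields
# (numbers from the abstract; geometry and shot-to-shot variation ignored).
yield_per_joule = 2e6        # neutrons per joule (nanowire array targets)
pulse_energy = 8.0           # joules per shot
estimate = yield_per_joule * pulse_energy
print(f"linear estimate: {estimate:.1e} neutrons/shot")   # ~1.6e7
# The measured 1.2e7 neutrons/shot is the same order of magnitude.
```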
During the latter half of the Reconstruction era, Republicans in the South faced major electoral defeats due to the enfranchisement of white voters, dismal economic conditions, and Democratic Party-sponsored terror against black voters. As a result, by 1877 the Democrats had won unified control of state governments across the region – and largely held it for the succeeding two decades. Yet this decline in Republican electoral strength did not reduce the South’s influence at the GOP national convention. Indeed, from 1877 to 1896, the eleven states of the former Confederacy made up around 25 percent of Republican convention delegates. There were three reasons for this. First, many Republican national leaders remained hopeful that the end of Reconstruction was not the final word on the GOP’s role in the South and believed that a winning electoral strategy could be devised to keep the party a viable political force there. Second, Southern delegates passionately – and, to a large extent, correctly – argued that their states’ inability to produce electoral votes and congressional seats for the GOP was due to Democratic sabotage of the electoral process. With Southern blacks increasingly excluded from the democratic process at home, the Republican National Convention remained one of the few political arenas in which they could participate. For the party of Lincoln to try to strip these delegates of their role within the party was, for some, problematic. Finally, Southern delegates were very helpful to presidential hopefuls from other parts of the country because their support could be easily acquired through patronage and other forms of bribery. Thus, whoever could afford to court the South could go into the convention with a sizable bloc of votes.
Law and equity fused administratively in the nineteenth century in most jurisdictions. But fusion is a prominent theme in equity today: it has become the means by which lawyers access the fundamental questions presented by equity in common law systems. What is the place of equity? Is it certain or open-ended? And so on. This chapter considers how a modern lawyer can best approach those questions. A wider perspective on fusion is needed than has recently prevailed, along with a theory of equity to which the evidence lends itself. The features of such a theory are identified, and the practical significance of fusion is discussed with specific reference to relief from forfeiture and modern writing on the law of restitution.
David Dudley Field was the architect of the union – or fusion or merger – of equity and law in New York state, and the Field Code was widely adopted in other states. Field’s vision of the union of law and equity has prevailed in the United States, including at the federal level, at least in theory. However, the practice of law and the acts of the courts indicate that the reality is rather different. Equity was not extinguished by the Field Code or its federal counterpart, the Federal Rules of Civil Procedure of 1938. Equity continues to operate distinctly in various ways, even if it is less well understood now. Field’s own behaviour as an attorney was also ambivalent: while he maintained a strong posture against equity in theory, his practice as an attorney revealed his willingness to continue to recognise and rely on equity even under his Code.
The fusion of law and equity in common law systems was a crucial moment in the development of the modern law. Common law and equity were historically the two principal sources of rules and remedies in the judge-made law of England, and this bifurcated system travelled to other countries whose legal systems were derived from the English legal system. The division of law and equity – their fission – was a pivotal legal development and is a feature of most common law systems. The fusion of the common law and equity has brought about major structural, institutional and juridical changes within the common law tradition. In this volume, leading scholars undertake historical, comparative, doctrinal and theoretical analysis that aims to shed light on the ways in which law and equity have fused, and the ways in which they have remained distinct even in a 'post-fusion' world.
The most frequently expressed concern is whether early bilinguals will be able to differentiate their languages. Research on child bilingualism has demonstrated that this is indeed the case. Children acquiring two languages simultaneously are able to differentiate their lexical and grammatical systems from very early on. Occasional language mixing is not an indication of an underlying unitary system fusing two or more linguistic competences. Rather, mixing is a performance phenomenon. Most mixed utterances are instances of code-switching, i.e. linguistic behaviour constrained by grammatical and sociolinguistic principles. Early mixes may also result from a choice of language that is unexpected from an adult perspective. Choosing the adequate language in bilingual settings requires sociolinguistic knowledge that is acquired in the course of children’s socialization. Parents can support bilingual L1 acquisition by their own linguistic behaviour. Following the ‘one person, one language’ (OPOL) principle is a method that has been applied successfully for over 100 years.
In this paper we propose a new theory and methodology to tackle the problem of unifying Monte Carlo samples from distributed densities into a single Monte Carlo draw from the target density. This surprisingly challenging problem arises in many settings (for instance, expert elicitation, multiview learning, and distributed ‘big data’ problems), but to date the framework and methodology proposed in this paper (Monte Carlo fusion) are the first general approach which avoids any form of approximation error in obtaining the unified inference. In this paper we focus on the key theoretical underpinnings of this new methodology, and on simple (direct) Monte Carlo interpretations of the theory. There is considerable scope to tailor the theory introduced in this paper to particular application settings (such as the big data setting), construct efficient parallelised schemes, understand the approximation and computational efficiencies of other such unification paradigms, and explore new theoretical and methodological directions.
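To make the problem concrete, here is a toy sketch, not the paper's algorithm (which is exact): the target is proportional to a product of sub-densities, each sampled separately, and the draws must somehow be combined. The Gaussian sub-densities and the naive importance-resampling step are illustrative assumptions, and unlike Monte Carlo fusion this naive approach does incur approximation error:

```python
import numpy as np

# Toy fusion problem: target density proportional to f_1 * f_2, where
# f_1 = N(-1, 1) and f_2 = N(2, 1.5^2) have been sampled separately.
rng = np.random.default_rng(0)
mu, s = np.array([-1.0, 2.0]), np.array([1.0, 1.5])

x1 = rng.normal(mu[0], s[0], size=100_000)   # draws from f_1 only

# Naive unification: use f_1 draws as proposals, weight by f_2 so the
# weighted sample targets f_1 * f_2, then resample.
log_w = -0.5 * ((x1 - mu[1]) / s[1]) ** 2
w = np.exp(log_w - log_w.max())
fused = rng.choice(x1, size=10_000, p=w / w.sum())

# The product of Gaussians has a closed-form mean, as a sanity check.
prec = 1.0 / s**2
print(fused.mean(), (prec * mu).sum() / prec.sum())   # both ~ -0.08
```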
We finish the classification, begun in two earlier papers, of all simple fusion systems over finite nonabelian p-groups with an abelian subgroup of index p. In particular, this gives many new examples illustrating the enormous variety of exotic examples that can arise. In addition, we classify all simple fusion systems over infinite nonabelian discrete p-toral groups with an abelian subgroup of index p. In all of these cases (finite or infinite), we reduce the problem to one of listing all ${\mathbb {F}}_pG$-modules (for $G$ finite) satisfying certain conditions: a problem which was solved in the earlier paper [15] using the classification of finite simple groups.
Light sheet fluorescence microscopy (LSFM) allows for high-resolution three-dimensional imaging with minimal photo-damage. By viewing the sample from different directions, different regions of large specimens can be imaged optimally. Moreover, owing to their good spatial resolution and high signal-to-noise ratio, LSFM data are well suited for image deconvolution. Here we present the Huygens Fusion and Deconvolution Wizard, an integrated solution for restoring LSFM images, and show that improvements in signal and resolution of 1.5-fold and higher are feasible.
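Huygens itself is commercial software, so as a hedged open-source illustration of the deconvolution step only, the sketch below runs Richardson–Lucy deconvolution from scikit-image on a synthetic image. The Gaussian PSF is an assumption standing in for a measured light-sheet PSF, and the num_iter keyword assumes scikit-image 0.19 or later:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic ground truth: two point emitters on a dark background.
rng = np.random.default_rng(1)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 45] = 1.0

# Toy PSF (assumed Gaussian, not a measured light-sheet PSF).
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()

# Simulate blur plus a little noise, then deconvolve.
blurred = gaussian_filter(truth, sigma=1.5) + 0.01 * rng.random((64, 64))
restored = richardson_lucy(blurred, psf, num_iter=30)

# The brightest restored pixel should sit at one of the emitters.
print(np.unravel_index(restored.argmax(), restored.shape))
```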
Development of a detonation wave due to α heating following short-pulse laser irradiation of pre-compressed deuterium–tritium (DT) plasma is considered. The laser parameters required for development of a detonation wave are calculated. We find that a laser irradiance and energy of $I_L = 1.75\times {10}^{23}$ W/cm$^2$ and 12.8 kJ, respectively, delivered over 1.0 ps into a pre-compressed target at 900 g/cm$^3$, create an α-heating fusion detonation wave. In this case, the nuclear fusion ignition conditions for the pre-compressed DT plasma are achieved along the detonation wave trajectory.
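As a quick consistency check on those parameters (assuming, purely for illustration, a flat-top pulse in space and time), the quoted irradiance, energy, and duration together imply a micrometre-scale focal spot:

```python
import math

# Back-of-envelope spot size implied by the abstract's laser parameters,
# under a flat-top assumption: E = I * area * t.
I = 1.75e23       # W/cm^2, irradiance
E = 12.8e3        # J, pulse energy
t = 1.0e-12       # s, pulse duration

area = E / (I * t)                   # cm^2
radius = math.sqrt(area / math.pi)   # cm
print(f"spot area   ~ {area:.2e} cm^2")           # ~7.3e-8 cm^2
print(f"spot radius ~ {radius * 1e4:.2f} um")     # ~1.5 micrometres
```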
A criterion for nuclear fusion ignition in a two-temperature plasma is derived using a common model. In particular, deuterium–tritium (DT) and proton–boron-11 (pB11) fuels are considered for pre-compressed plasma. The ignition criterion is described by a surface in the three-dimensional space defined by the electron and ion temperatures Te and Ti and the plasma density times the hot-spot dimension, ρ·R. The appropriate fusion ion temperatures Ti are larger than 10 keV for DT and 150 keV for pB11. The required value of ρ·R for pB11 ignition is larger than for DT by a factor of 50 or more, depending on the electron temperature. Furthermore, the ignition criterion obtained here implies that pB11 ignition is practically impossible for equal electron and ion temperatures. We therefore suggest using a two-temperature laser-induced shock wave in the intermediate domain between relativistic and non-relativistic shock waves. The laser parameters required for fast ignition are calculated. In particular, we find that for the DT case one needs a 3 kJ/1 ps laser to ignite a pre-compressed target at about 600 g/cm$^3$. For pB11 ignition, more than three orders of magnitude more laser energy is necessary for the same pulse duration.
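The energy gap between the two fuels can be read straight off those numbers; a minimal sketch, taking "three orders of magnitude" as a lower bound of 10³ (the abstract says more than that):

```python
# Driver-energy scaling implied by the abstract's numbers (illustrative only).
E_DT = 3e3       # J: fast-ignition laser energy quoted for DT
factor = 1e3     # "more than three orders of magnitude" -> lower bound
E_pB11 = factor * E_DT
tau = 1e-12      # s: same 1 ps pulse duration in both cases

print(f"DT:    {E_DT:.0e} J  -> power  {E_DT / tau:.0e} W")      # ~3e15 W
print(f"pB11: >{E_pB11:.0e} J -> power >{E_pB11 / tau:.0e} W")   # >3e18 W
```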