The integration of unmanned aerial vehicles (UAVs) into agriculture has emerged as a transformative approach to enhance resource efficiency and enable precision farming. UAVs are used for various agricultural tasks, including monitoring, mapping and spraying of pesticides, providing detailed data that support targeted and sustainable practices. However, effective deployment of UAVs in these applications faces complex control challenges. This paper presents a comprehensive review of UAVs in agricultural applications, highlighting the sophisticated control strategies required to address these challenges. Key obstacles, such as modelling inaccuracies, unstable centre of gravity (COG) due to shifting payloads, fluid sloshing within pesticide tanks and external disturbances like wind, are identified and analysed. The review delves into advanced control methodologies, with particular focus on adaptive algorithms, backstepping control and machine learning-enhanced systems, which collectively enhance UAV stability and responsiveness in dynamic agricultural environments. Through an in-depth examination of flight dynamics, stability control and payload adaptability, this paper highlights how UAVs can achieve precise and reliable operation despite environmental and operational complexities. The insights drawn from this review underscore the importance of integrating adaptive control frameworks and real-time sensor data processing, enabling UAVs to autonomously adjust to changing conditions and ensuring optimal performance in agriculture. Future research directions are proposed, advocating for the development of control systems that enhance UAV resilience, accuracy and sustainability. By addressing these control challenges, UAVs have the potential to significantly advance precision agriculture, offering practical and environmental benefits crucial to sustaining global food production demands.
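To make the adaptive-control idea concrete, the following minimal single-axis sketch (my illustration, not from the paper; the plant gain, adaptation rate, and all variable names are assumptions) adjusts a feedback gain online, in the spirit of model-reference adaptive control, so that attitude tracking survives an unknown, payload-dependent control effectiveness:

    # Minimal single-axis sketch of adaptive gain adjustment (illustrative only;
    # plant gain, rates, and variable names are assumptions, not from the paper).
    # The effective control gain b of the roll axis is unknown (it changes as the
    # pesticide payload shifts); the controller adapts k_hat online so that the
    # closed loop tracks a first-order reference model.
    dt, gamma = 0.01, 5.0   # integration step (s) and adaptation rate
    b_true = 2.0            # unknown plant gain (drifts as the tank drains)
    a_m = 4.0               # desired closed-loop pole of the reference model
    theta = theta_m = 0.0   # roll angle and reference-model state (rad)
    k_hat = 0.5             # adaptive feedback-gain estimate
    r = 1.0                 # step reference (rad)

    for _ in range(2000):
        u = k_hat * (r - theta)                   # adaptive feedback control law
        theta += dt * b_true * u                  # simplified plant: theta' = b*u
        theta_m += dt * a_m * (r - theta_m)       # reference model response
        e = theta - theta_m                       # tracking error vs. the model
        k_hat += dt * (-gamma * e * (r - theta))  # MIT-rule style gradient update

    print(f"angle {theta:.3f} rad, adapted gain {k_hat:.3f} (ideal {a_m / b_true:.1f})")

The same gradient-style update generalises to the multi-axis, disturbance-laden setting the review surveys, where it is typically combined with backstepping or learned components.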
Focuses on Ovid’s portrayal of the armillary sphere of Archimedes in book 6 of the Fasti. Ovid, taking a certain cue from Cicero, turns to the armillary sphere of Archimedes to develop an ekphrastic vision of the universe, which at first glance appears to be divinely designed. The armillary sphere is envisaged as a miniature representation of the cosmos, with its creator operating as a foil for a creationist divinity, closely associated with the divine craftsman or demiurge from Plato’s Timaeus. The armillary sphere, however, also presents a series of challenges to both human and divine craftsmanship. It highlights the fallacy of human attempts to create working replicas of the complex movements of the heavenly bodies, while also indicating how the cosmos might be seen as dependent upon such models for its very generation. Despite being fundamentally flawed, models of the cosmos have the capacity to construct the realities they depict, while the multiplicity of such models (and the philosophical systems they are based upon) continues to disturb our sense of a fixed and stable reality.
In Chapter 9, I offer a discussion of the main theoretical contributions of this study. I elaborate on how these findings tie in with three concepts known to be well-supported functional principles at work in various languages: the principles of competition, iconicity and economy of expression. As for the principle of competition, I unfold a model of competition that can account for a) the specialization and non-specialization of the CPs, b) an interaction between the token frequencies of the simple verb and the strength of semantic specialization in the CP and c) why certain CPs do not fall under the scope of the hypothesis. I also briefly discuss how psycholinguistic experiments on the activation levels of competing constructions can extend our perspective beyond cases of semantic competition. The principle of iconicity, in turn, can account for why formal and semantic changes do not entirely drift apart. Finally, speakers’ preference for shorter over longer expressions helps explain why the simple verbs are preferred over the CPs in those contexts where they are in semantic competition.
Suicidal and self-harming behaviours present a significant challenge for mental health services. Recent national guidelines advocate abandoning tools based on box-ticking and a move towards a personalised psychosocial assessment. This article examines evidence from theoretical and empirical research in this area and attempts to integrate it by introducing the source–problem–solution–motive (SPSM) model. The model, which builds on the contributions of other suicidologists, especially Jean Baechler, could be used as a framework for the assessment and management of these behaviours. The four stages of the model provide a comprehensive approach that enables an exploration of the internal logic of the behaviour. The model covers ‘because’ and ‘in-order-to’ motives. This allows a personalised approach, but also a structured one that can be taught and generalised.
The study of universal algebra, that is, the description of algebraic structures by means of symbolic expressions subject to equations, dates back to the end of the 19th century. It was motivated by the large number of fundamental mathematical structures fitting into this framework: groups, rings, lattices, and so on. From the 1970s on, the algorithmic aspect became prominent and led to the notion of term rewriting system. This chapter briefly revisits these ideas from a polygraphic viewpoint, introducing only what is strictly necessary for understanding. Term rewriting systems are introduced as presentations of Lawvere theories, which are particular cartesian categories. It is shown that a term rewriting system can also be described by a 3-polygraph in which variables are handled explicitly, i.e., by taking into account their duplication and erasure. Finally, a precise meaning is given to the statement that term rewriting systems are "cartesian polygraphs".
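As a standard illustration (a textbook example, not drawn from the chapter itself): the theory of monoids is presented by a signature with a binary operation $\mu$ and a constant $e$, subject to the rewriting rules $\mu(e, x) \to x$, $\mu(x, e) \to x$ and $\mu(\mu(x, y), z) \to \mu(x, \mu(y, z))$. Rules whose right-hand side duplicates or erases a variable, such as distributivity $\mu(x, \sigma(y, z)) \to \sigma(\mu(x, y), \mu(x, z))$, which uses $x$ twice, are precisely those for which the explicit handling of duplication and erasure in the 3-polygraphic presentation matters.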
This chapter surveys some of the many types of models used in science, and some of the many ways scientists use models. Of particular interest for our purposes are the relationships between models and other aspects of scientific inquiry, such as data, experiments, and theories. Our discussion shows important ways in which modeling can be thought of as a distinct and autonomous scientific activity, but also ways in which models can be crucial for making use of data and theories and for performing experiments. The growing reliance on simulation models has raised new and important questions about the kind of knowledge gained by simulations and the relationship between simulation and experimentation. Is it important to distinguish between simulation and experimentation, and if so, why?
This chapter comes in two related but distinct parts. The first presents general trends in the neurosciences and considers how these impact upon psychiatry as a clinical science. The second picks up a recent and important development in neuroscience which seeks to explain mental functions such as perception and has been profitably extended into explanations of psychopathology. The second part can be viewed as a working example of the first’s overarching themes.
We introduce derivative securities and ask how to determine their price from a financial perspective. We discover that the cashflows of zero-coupon bonds and forward contracts can be replicated artificially by adopting a static trading strategy in primary assets. With almost no math, we obtain the central result that the price of every product whose payoff is a linear function of the future price of tradeable assets can be computed without relying on any model. The story is different for products whose payoff is a non-linear function of the future price of assets, such as European calls and puts. In such cases, pricing by replication may still be possible, but it is more complex: it requires a model and features a dynamic replicating strategy that evolves through time. We use the law of one price to give a clear interpretation to the no-arbitrage price of derivatives. We conclude the chapter with the general expression of a derivative’s price, given by the risk-neutral expectation of its payoff discounted at the risk-free rate. The purpose of the book is to introduce all the concepts needed to understand why and when this result holds, and how it can be evaluated in practice.
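For concreteness (a standard formulation, consistent with but not quoted from the chapter): static replication gives the model-free forward price $F_0 = S_0 e^{rT}$ for an asset paying no income, while for a non-linear payoff $\Phi(S_T)$, such as a European call with $\Phi(S_T) = \max(S_T - K, 0)$, the no-arbitrage price takes the risk-neutral form $V_0 = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}[\Phi(S_T)]$, where $\mathbb{Q}$ denotes the risk-neutral measure and $r$ the risk-free rate.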
This chapter provides an overview of posterior-based specification testing methods and model selection criteria that have been developed in recent years. Among the specification testing methods, the first is a posterior-based version of the IOSA test; the second is motivated by the power enhancement technique. For the model selection criteria, we first review the deviance information criterion (DIC). We discuss its asymptotic justification and shed light on the circumstances in which DIC fails to work. One practically relevant circumstance is when latent variables are treated as parameters; another is when the candidate model is misspecified. We then review DICL for latent variable models and DICM for misspecified models.
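For orientation, the standard definition (due to Spiegelhalter and co-authors, stated here rather than quoted from the chapter) sets the deviance $D(\theta) = -2\log p(y \mid \theta)$ and defines $\mathrm{DIC} = D(\bar{\theta}) + 2p_D$, where $\bar{\theta}$ is the posterior mean and $p_D = \overline{D(\theta)} - D(\bar{\theta})$, the posterior mean deviance minus the plug-in deviance, acts as the effective number of parameters. Roughly speaking, the failure modes mentioned above arise because this construction relies on a plug-in estimate and an asymptotic argument that break down when latent variables are treated as parameters or when the model is misspecified, motivating the DICL and DICM variants.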
Climate change is significantly altering our planet, with greenhouse gas emissions and environmental changes bringing us closer to critical tipping points. These changes are impacting species and ecosystems worldwide, creating an urgent need to understand and mitigate climate change risks. In this study, we examined global research on assessing climate change risks to species and ecosystems. We found that interest in this field has grown rapidly, with researchers identifying key factors such as species' vulnerability, adaptability, and exposure to environmental changes. Our work highlights the importance of developing better tools to predict risks and create effective protection strategies.
Technical summary
The rising concentration of greenhouse gases, coupled with environmental changes such as albedo shifts, is accelerating the approach to critical climate tipping points. These changes have triggered significant biological responses on a global scale, underscoring the urgent need for robust climate change risk assessments for species and ecosystems. We conducted a systematic literature review using the Web of Science database. Our bibliometric analysis shows exponential growth in publications since 2000, with over 200 papers published annually since 2019. High-frequency keywords such as ‘impact’, ‘risk’, ‘vulnerability’, ‘response’, ‘adaptation’, and ‘prediction’ were prevalent, highlighting the growing importance of assessing climate change risks. We then identified five universally accepted concepts for assessing climate change risks to species and ecosystems: exposure, sensitivity, adaptivity, vulnerability, and response. We provide an overview of the principles, applications, advantages, and limitations of climate change risk modeling approaches, including correlative, mechanistic, and hybrid approaches. Finally, we emphasize that emerging trends in climate change risk assessment encompass leveraging the concept of telecoupling, harnessing the potential of geography, and developing early warning mechanisms.
Social media summary
Climate change risks to biodiversity and ecosystems: key insights, modeling approaches, and emerging strategies.
The extended $3/2$ short rate model is a mean reverting model of the short rate which, for suitably chosen parameters, permits a sensible term structure of bond yields and closed-form valuation formulae of zero-coupon bonds and options. This article supplies proofs of the formulae for the expected present values of future cash flows under the real-world probability measure, known as actuarial valuation. Finally, we give formulae for asymptotic levels of bond yields and formulae for bond option prices for the extended $3/2$ model, under particular conditions on its parameters.
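For orientation, in one common parameterisation the $3/2$ short rate solves $dr_t = \kappa\, r_t(\theta - r_t)\,dt + \sigma\, r_t^{3/2}\,dW_t$ (an assumption about the baseline form; the extension considered in the article builds on this basic specification), retaining the mean-reverting drift and the state-dependent $r^{3/2}$ volatility that make closed-form bond and option formulae possible.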
The University of California (UC) Davis Clinical and Translational Science Center has established the “Join the Team” model, a Clinical Research Coordinator workforce pipeline utilizing a community-based approach. The model has been extensively tested at UC Davis and demonstrated to generate a viable pathway for qualified candidates for employment in clinical research. The model combines the following elements: community outreach; professional training materials created by the Association for Clinical Research Professionals and adapted to the local environment; financial support to trainees to encourage ethnic and socioeconomic diversity; and internship/shadowing opportunities. The program is tailored for academic medical centers (AMCs) in recognition of administrative barriers specific to AMCs. UC Davis’s model can be replicated at other locations using information in this article, such as key program features and barriers faced and surmounted. We also discuss innovative theories for future program iterations.
Following the introduction of the one-child policy in China, the capital-labor ratio of China increased relative to that of India, while FDI/GDP inflows to China versus India simultaneously declined. These observations are explained in the context of a simple neoclassical overlapping generations paradigm. The adjustment mechanism works as follows: the reduction in the growth rate of the (urban) labor force due to the one-child policy increases the capital per worker inherited from the previous generation. The resulting increase in China’s domestic capital-labor ratio thus "crowds out" the need for foreign direct investment (FDI) in China relative to India. Our paper is a contribution to the nascent literature exploring demographic transitions and their effects on FDI flows.
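The mechanism can be made explicit in a textbook Diamond-style OLG economy (an illustrative formulation, not necessarily the paper's exact setup): with savings rate $s$, wage $w(k_t)$ and labor-force growth $n$, capital per worker evolves as $k_{t+1} = s\,w(k_t)/(1+n)$, so a policy-induced fall in $n$ raises $k_{t+1}$, lowers the marginal return $f'(k)$ on domestic capital, and thereby weakens the return differential that attracts FDI.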
The function of elastin is to endow vertebrate tissues with elasticity so that they can adapt to local mechanical constraints. The hydrophobicity and insolubility of the mature elastin polymer have hampered studies of its molecular organisation and structure-elasticity relationships. Nevertheless, a growing number of studies from a broad range of disciplines have provided invaluable insights, and several structural models of elastin have been proposed. However, many questions remain regarding how the primary sequence of elastin (and of its soluble precursor tropoelastin) governs the molecular structure, its organisation into a polymeric network, and the mechanical properties of the resulting material. The elasticity of elastin is known to be largely entropic in origin, a property understood to arise from both its disordered molecular structure and its hydrophobic character. Despite a high degree of hydrophobicity, elastin does not form compact, water-excluding domains and remains highly disordered. However, elastin contains both stable and labile secondary structure elements. Current models of elastin structure and function are drawn from data collected on tropoelastin and on elastin-like peptides (ELPs), but at the tissue level elasticity is only achieved after polymerisation of the mature elastin. In tissues, the reticulation of tropoelastin chains in water defines the polymer elastin that bears elasticity. Similarly, ELPs require polymerisation to become elastic. There is considerable interest in elastin, especially in the biomaterials and cosmetic fields, where ELPs are widely used. This review aims to provide an up-to-date survey of, and perspective on, current knowledge about the interplay between elastin structure, solvation, and entropic elasticity.
Soil amelioration via strategic deep tillage is occasionally utilized within conservation tillage systems to alleviate soil constraints, but its impact on weed seed burial and subsequent growth within the agronomic system is poorly understood. This study assessed the effects of different strategic deep-tillage practices, including soil loosening (deep ripping), soil mixing (rotary spading), or soil inversion (moldboard plow), on weed seed burial and subsequent weed growth, compared with a no-till control. The tillage practices were applied in 2019 at Yerecoin and Darkan, WA, and data on weed seed burial and growth were collected during the following 3-yr winter crop rotation (2019 to 2021). Soil inversion buried 89% of rigid ryegrass (Lolium rigidum Gaudin) and ripgut brome (Bromus diandrus Roth) seeds to a depth of 10 to 20 cm at both sites, while soil loosening and mixing left between 31% and 91% of the seeds in the top 0 to 10 cm of soil, with broad variation between sites. Few seeds were buried beyond 20 cm despite tillage working depths exceeding 30 cm at both sites. Soil inversion reduced the density of L. rigidum to <1 plant m−2 for 3 yr after strategic tillage. Bromus diandrus density was initially reduced to 0 to 1 plant m−2 by soil inversion, but increased to 4 plants m−2 at Yerecoin in 2020 and 147 plants m−2 at Darkan in 2021. Soil loosening or mixing did not consistently decrease weed density. The field data were used to parameterize a model that predicted weed density following strategic tillage with greater accuracy for soil inversion than for loosening or mixing. The findings provide important insights into the effects of strategic deep tillage on weed management in conservation agriculture systems and demonstrate the potential of models for optimizing weed management strategies.
South Africa is a net importer of fertilizer products, importing all of its potassium, as well as 60–70% of its nitrogen requirements. Thus, domestic prices are impacted significantly by international prices, shipping costs, and exchange rates. Producing these fertilizers locally would be far more economical. Phlogopite, a rich source of potassium, is discarded in large quantities during mining operations; the objective of the present study, therefore, was to determine the acid-leaching characteristics and behavior of phlogopite as a means of releasing potassium. Phlogopite samples were leached with nitric acid (a source of nitrogen for fertilizers) at various concentrations, temperatures, and reaction times. The feed phlogopite and leached residue samples corresponding to conversions of 14% (LP1), 44% (LP2), and 100% (LP3) were collected and analyzed using X-ray fluorescence spectroscopy (XRF), X-ray diffractometry (XRD), attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR), Brunauer–Emmett–Teller surface area and porosity analysis (BET), thermogravimetric analysis (TGA), and field emission gun-scanning electron microscopy (FEG-SEM). The feed phlogopite was highly crystalline. The absence of defects in the lattice meant that the motion of H+ ions penetrating into the lattice was restricted, suggesting internal diffusion-controlled leaching. Furthermore, results obtained from the various analytical techniques corroborated each other in terms of the release of cations during leaching. All leaching experiments were conducted batchwise, in a closed system. The gravimetric data from the experiments were used to identify a suitable model that accurately predicts the leaching behavior. The reaction was found to be internal diffusion-controlled, and the D1 model, which represents one-dimensional diffusion through a flat plate, predicted the leaching behavior most accurately. The observed activation energies (Ea) and pre-exponential constants (k0) varied with initial nitric acid concentration ([H+]0).
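For reference, the D1 model referred to above is the standard one-dimensional diffusion expression of solid-state kinetics, $g(\alpha) = \alpha^{2} = k\,t$, where $\alpha$ is the fractional conversion; combined with the Arrhenius relation $k = k_{0}\exp(-E_{a}/RT)$, it links the gravimetric conversion data to the reported activation energies $E_a$ and pre-exponential constants $k_0$.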
Fe(II)–Fe(III) green rust identified in soil as a natural mineral is responsible for the blue-green color of gley horizons, and exerts the main control on Fe dynamics. A previous EXAFS study of the structure of the mineral confirmed that the mineral belongs to the group of green rusts (GR), but showed that there is a partial substitution of Fe(II) by Mg(II), which leads to the general formula of the mineral: $[\mathrm{Fe}^{2+}_{1-x}\mathrm{Fe}^{3+}_{x}\mathrm{Mg}_{y}(\mathrm{OH})_{2+2y}]^{x+}[x\mathrm{OH}^{-} \cdot m\mathrm{H}_{2}\mathrm{O}]^{x-}$. The regular binary solid-solution model proposed previously must be extended to a ternary one, with provision for the incorporation of Mg in the mineral. Assuming ideal substitution between Mg(II) and Fe(II), the chemical potential of any Fe(II)–Fe(III)–Mg(II) hydroxy-hydroxide is obtained as: $\mu = X_{1}\mu_{1}^{\circ} + X_{2}\mu_{2}^{\circ} + X_{3}\mu_{3}^{\circ} + RT[X_{1}\ln X_{1} + X_{2}\ln X_{2} + X_{3}\ln X_{3}] + A_{12}X_{2}(1 - X_{2})$. All experimental data show that the mole ratio $X_{2} = \mathrm{Fe(III)}/[\mathrm{Fe}_{\mathrm{total}} + \mathrm{Mg}]$ is constrained (1) structurally and (2) geochemically. Structurally, Fe(III) ions cannot neighbor each other, which leads to the inequality $X_{2} \leq 1/3$. Geochemically, Fe(III) ions cannot be too remote from each other for GR to form, as Fe(OH)2 and Mg(OH)2 are very soluble, so $X_{2} \geq 1/4$. A linear relationship is obtained between the Gibbs free energy of formation of GR, normalized to one Fe atom, and the electronegativity $\varkappa$ of the interlayer anion: $\mu^{\circ}/n = -76.887\,\varkappa - 491.5206$ ($r^{2} = 0.9985$, $N = 4$), from which the chemical potential of the mineral fougerite is obtained in the limiting case $X_{3} = 0$. Knowing $\mu_{1}^{\circ} = -489.8$ kJ mol−1 for Fe(OH)2 and $\mu_{3}^{\circ} = -832.16$ kJ mol−1 for Mg(OH)2, the two unknown thermodynamic parameters of the solid-solution model are determined as $\mu_{2}^{\circ} = +119.18$ kJ mol−1 for Fe(OH)3 (virtual) and $A_{12} = -1456.28$ kJ mol−1 (non-ideality parameter). From in situ Mössbauer measurements and our model, the chemical composition of the GR mineral is constrained to a narrow range and the soil solution–mineral equilibria are computed. Soil solutions appear to be largely oversaturated with respect to the two forms observed.
Chapter 2 introduces a framework for how to think about war reparations. It discusses how a reparation transfer can be smoothed out over time by borrowing the money. I then discuss other ways a transfer can be paid, by taxes or printing money, and the effects this has on the balance of payments and the terms of trade. Finally, in a technical section, I show how changed terms of trade affect the current account and national income.
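As a simple illustration of such smoothing (mine, not the chapter's): a reparation bill $T$ financed by borrowing at interest rate $r$ and repaid with a constant annual payment $a$ over $n$ years must satisfy $T = a\,\frac{1 - (1+r)^{-n}}{r}$, so a larger $n$ spreads the burden into smaller payments at the cost of more total interest.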
Chapter 3 discusses sovereign debt theory and practice. It goes through the history of sovereign debt and how the current theories of borrowing and lending developed in the 1980s. I argue that countries want to be part of global society, and that means they sometimes repay unsustainable debt. The chapter dives into why countries might default, when they might default, how often countries have defaulted, and what the economic and political costs are. I then describe what happens when countries need to restructure their sovereign debt, both in theory and with a practical guide for the process. Finally, in another technical section, I describe a sovereign debt model. The model explains when countries should have no willingness to repay their debt. It allows me to characterise a set of stylised macroeconomic facts that usually accompany sovereign debt defaults. The default set that comes out of the model states when countries should default. These facts and default set (not part of the technical section) are used in Chapters 6, 8, and 10. Chapter 3 is the last overview chapter; the rest are case studies.
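In the Eaton–Gersovitz tradition that such models typically follow (stated here as a generic sketch, not as the chapter's exact formulation), a country with debt $b$ and income $y$ repays when the value of repaying exceeds the value of default, so the default set is $D(b) = \{\,y : V^{D}(y) > V^{R}(b, y)\,\}$, which formalises when countries should default.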
Most biological ideas can be viewed as models of nature we create to explain phenomena and predict outcomes in new situations. We use data to determine these models’ credibility. We translate our biological models into statistical ones, then confront those models with data. A mismatch suggests the biological model needs refinement. A biological idea can also be considered a signal that appears in the data among the background noise. Fitting the model to the data lets us see if such a signal exists and, importantly, measure its strength. This approach only works well if our biological hypotheses are clear, the statistical models match the biology, and we collect the data appropriately. This clarity is the starting point for any biological research program.
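As a minimal sketch of this workflow (my illustration, not the chapter's; the data, model, and effect size are invented), the following fits a statistical model to noisy data and reports the strength of the fitted signal relative to the noise:

    # A biological "signal" as a slope hidden in noisy data, fitted with a
    # statistical model and measured against the background noise.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 50)              # e.g., an environmental gradient
    y = 0.8 * x + rng.normal(0, 2, x.size)  # true signal (slope 0.8) plus noise

    # Translate the biological idea into the statistical model y = a + b*x
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    se_b = np.sqrt(resid @ resid / (x.size - 2) / ((x - x.mean()) ** 2).sum())

    print(f"estimated slope {beta[1]:.2f} +/- {se_b:.2f}")  # signal vs. noise

A slope estimate that is large relative to its standard error is evidence that the hypothesised signal is present, which is exactly the confrontation between biological model and data described above.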