A short chapter that outlines the book’s content. It covers the core principles and discusses some ways in which the book’s treatment of them differs from less technical accounts.
This chapter covers variational quantum algorithms, which act as a primitive ingredient for larger quantum algorithms in several application areas, including quantum chemistry, combinatorial optimization, and machine learning. Variational quantum algorithms are built from parameterized quantum circuits whose parameters are trained to optimize a cost function. They are often shallow circuits, which potentially makes them suitable for near-term devices that lack error correction.
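To make the training loop concrete, here is a minimal sketch, assuming a toy single-qubit circuit simulated in NumPy rather than any specific algorithm from the chapter: a parameterized rotation is trained by gradient descent, using the parameter-shift rule, to minimize the expectation value of Pauli-Z.

```python
# Minimal variational loop: a single Ry(theta) rotation on |0>, with the
# cost <Z> = cos(theta), minimized by gradient descent. This is a toy
# illustration of the circuit-evaluation / classical-update pattern, not
# an algorithm from the chapter.
import numpy as np

def expectation_z(theta):
    # State after Ry(theta) on |0> is [cos(theta/2), sin(theta/2)],
    # so <Z> = cos^2(theta/2) - sin^2(theta/2) = cos(theta).
    return np.cos(theta)

def parameter_shift_gradient(theta, shift=np.pi / 2):
    # Parameter-shift rule: the exact gradient from two circuit evaluations.
    return 0.5 * (expectation_z(theta + shift) - expectation_z(theta - shift))

theta, lr = 0.1, 0.4
for step in range(50):
    theta -= lr * parameter_shift_gradient(theta)  # classical outer loop

print(f"trained theta = {theta:.3f}, cost = {expectation_z(theta):.3f}")
# Converges toward theta = pi, where <Z> = -1 (the minimum).
```

The same outer loop of quantum circuit evaluations interleaved with classical parameter updates is what the larger-scale algorithms described in the chapter share.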
I introduce quaternions by recounting the story of how Hamilton discovered them, but in far more detail than other authors give. This detail is necessary for the reader to understand why Hamilton wrote his quaternion equations in the way that he did. I describe the role of quaternions in rotation, show how to convert between them and matrices, and discuss their role in modern computer graphics. I describe in detail a modern problem whereby Hamilton’s original definition has been ‘hijacked’ in a way that has produced much confusion. I end by describing how quaternions play a role in topology and quantum mechanics.
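As a small sketch of the quaternion-to-matrix conversion the chapter discusses, assuming the Hamilton convention (w + xi + yj + zk, active rotation of column vectors); the chapter’s point about ‘hijacked’ definitions is precisely that other sign and ordering conventions exist, so treat the choices below as one option among several.

```python
# Convert a unit quaternion to a 3x3 rotation matrix (one standard convention).
import numpy as np

def quaternion_to_matrix(w, x, y, z):
    # Normalize first, so the result is a proper rotation even for
    # slightly non-unit input.
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90-degree rotation about the z axis: q = (cos 45deg, 0, 0, sin 45deg).
print(quaternion_to_matrix(np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)).round(6))
```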
This chapter covers a number of disparate applications of quantum computing in the area of machine learning. We only consider situations where the dataset is classical (rather than quantum). We cover quantum algorithms for big-data problems relying upon high-dimensional linear algebra, such as Gaussian process regression and support vector machines. We discuss the prospect of achieving a quantum speedup with these algorithms, which face certain input/output caveats and must compete against quantum-inspired classical algorithms. We also cover heuristic quantum algorithms for energy-based models, which are generative machine learning models that learn to produce outputs similar to those in a training dataset. Next, we cover a quantum algorithm for the tensor principal component analysis problem, where a quartic speedup may be available, as well as quantum algorithms for topological data analysis, which aim to compute topologically invariant properties of a dataset. We conclude by covering quantum neural networks and quantum kernel methods, where the machine learning model itself is quantum in nature.
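As a toy illustration of the quantum kernel idea mentioned at the end of the chapter (my example, not a construction from the chapter): data points are encoded as quantum states, and the kernel is the squared overlap between those states, here simulated classically for a single qubit.

```python
# Quantum kernel sketch: k(x, x') = |<phi(x)|phi(x')>|^2, with a toy
# single-qubit rotation encoding simulated in NumPy.
import numpy as np

def feature_state(x):
    # Encode a scalar as a single-qubit state via a rotation.
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    return abs(feature_state(x1) @ feature_state(x2)) ** 2

xs = [0.1, 0.8, 2.0]
K = np.array([[quantum_kernel(a, b) for b in xs] for a in xs])
print(K.round(3))  # Gram matrix usable by any classical kernel method, e.g. an SVM
```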
When using machine learning to model environmental systems, it is often a model’s ability to predict extreme behaviors that yields the highest practical value to policy makers. However, most existing error metrics used to evaluate the performance of environmental machine learning models weigh error equally across test data. Thus, routine performance is prioritized over a model’s ability to robustly quantify extreme behaviors. In this work, we present a new error metric, termed Reflective Error, which quantifies the degree to which model error is distributed around the extremes, in contrast to existing model evaluation methods that aggregate error over all events. The suitability of our proposed metric is demonstrated on a real-world hydrological modeling problem, where extreme values are of particular concern.
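The abstract does not give the formula for Reflective Error, so the following is only a hypothetical toy metric built on the stated idea: weight each error by how extreme the observed value is, so that tail behavior dominates the score, unlike a plain MSE that weights all test points equally.

```python
# Hypothetical extreme-weighted error metric (NOT the paper's Reflective
# Error): errors are weighted by each observation's distance from the median.
import numpy as np

def toy_extreme_weighted_error(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    # Extreme observations get the largest weights.
    weights = np.abs(y_true - np.median(y_true))
    weights = weights / weights.sum()
    return float(np.sum(weights * (y_true - y_pred) ** 2))

# A model that fits routine flows but misses a flood peak scores badly here,
# even if its unweighted MSE looks respectable.
y_obs = [1.0, 1.2, 0.9, 1.1, 9.5]   # last value: an extreme event
y_hat = [1.0, 1.2, 0.9, 1.1, 4.0]   # peak badly underestimated
print(toy_extreme_weighted_error(y_obs, y_hat))
```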
Reduced-order models encapsulating complex whole-body dynamics have facilitated stable walking in various bipedal robots. These models have enabled intermittent control, which alternates between zero input and feedback input, allowing robots to follow their natural dynamics and walk in an energetically and computationally efficient way. However, because closed-form solutions cannot be derived for the angular momentum generated by swing motions and other dynamic effects, constructing a precise model for the walking phase with zero input is challenging, and controlling walking behavior with an intermittent controller remains problematic. This paper proposes an intermittent controller for bipedal robots, modeled as a multi-mass system consisting of an inverted pendulum and an additional mass representing the swing leg. The proposed controller alternates between feedback control during the double support (DS) phase and zero-input control during the single support (SS) phase. A derived trajectory constraint makes the system behave as a conservative system during the SS phase, enabling closed-form solutions to the equations of motion. This constraint allows the robot to track the target behavior accurately, with energy adjusted intermittently during the DS phase. The effectiveness of the proposed method is validated through simulations and experiments with a bipedal robot, demonstrating its capability to track the target walking velocity accurately and stably using intermittent control.
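A minimal sketch of why a zero-input phase can be predicted in closed form, assuming only the textbook linear inverted pendulum (LIP) with constant center-of-mass height; the paper’s model adds a swing-leg mass and a derived constraint, which are not reproduced here.

```python
# Closed-form LIP dynamics: x'' = (g/z) x, so no numerical integration is
# needed to predict the state during a zero-input (single-support) phase.
import numpy as np

G, Z_COM = 9.81, 0.8            # gravity [m/s^2], constant CoM height [m]
TC = np.sqrt(Z_COM / G)         # LIP time constant [s]

def lip_state(x0, v0, t):
    # Exact solution of the LIP equations from initial position/velocity.
    x = x0 * np.cosh(t / TC) + TC * v0 * np.sinh(t / TC)
    v = (x0 / TC) * np.sinh(t / TC) + v0 * np.cosh(t / TC)
    return x, v

# Predict the CoM state 0.4 s into a single-support phase.
print(lip_state(x0=-0.05, v0=0.35, t=0.4))
```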
Tunnel boring machines (TBMs) are essential equipment for tunnel excavation. The main rock-breaking component of a TBM is the disc cutter, whose design and performance directly affect the effectiveness and productivity of TBM operations. This study investigates the effects of confining stress on the breaking force of disc cutters with various diameters. Both saturated and dry specimens of low-strength concrete, medium-strength marble, and high-strength granite are used in the tests. It is found that larger-diameter disc cutters can reduce the influence of the confining stress. Moreover, the influence of confining stress is more notable in higher-strength rocks, especially in the dry condition as opposed to the saturated condition. A multivariate linear regression model relates the failure load to the confining stress, cutter diameter, and compressive strength of the rock, and suggests that the confining stress is more significant than the other variables. These results highlight the importance of considering in-situ stress conditions when excavating tunnels with TBMs.
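A hedged sketch of the kind of multivariate linear regression the abstract describes, with failure load regressed on confining stress, cutter diameter, and compressive strength; the numbers below are illustrative placeholders, not the study’s data.

```python
# Fit failure load ~ confining stress + cutter diameter + UCS by least squares.
import numpy as np

# Columns: confining stress [MPa], cutter diameter [mm], UCS [MPa] (made up).
X = np.array([
    [0.0, 432,  30], [5.0, 432, 110], [10.0, 432,  60],
    [0.0, 483, 110], [5.0, 483,  60], [10.0, 483,  30],
])
y = np.array([120.0, 270.0, 200.0, 260.0, 215.0, 190.0])  # failure load [kN]

A = np.column_stack([X, np.ones(len(X))])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["confining_stress", "diameter", "ucs", "intercept"], coef)))
# Standardizing the predictors before fitting makes coefficient magnitudes
# directly comparable, which is how a claim like "confining stress is the
# most significant variable" can be checked.
```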
We investigate causal computations, which take sequences of inputs to sequences of outputs such that the $n$th output depends only on the first $n$ inputs. We model these in category theory via a construction taking a Cartesian category $\mathbb{C}$ to another category $\mathrm{St}(\mathbb{C})$ with a novel trace-like operation called “delayed trace,” which lacks the yanking and dinaturality axioms of the usual trace. The delayed trace operation provides a feedback mechanism in $\mathrm{St}(\mathbb{C})$ with an implicit guardedness guarantee. When $\mathbb{C}$ is equipped with a Cartesian differential operator, we construct a differential operator for $\mathrm{St}(\mathbb{C})$ using an abstract version of backpropagation through time (BPTT), a technique from machine learning based on the unrolling of functions. This yields a number of properties of BPTT, including a chain rule and a Schwartz theorem. Our differential operator can also compute the derivative of a stateful network without requiring the network to be unrolled.
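A minimal concrete instance of this abstract setting, assuming a scalar linear stateful network s' = a·s + b·u with output y = c·s (my example, not one from the paper): unrolling the step function yields a causal map on sequences, and plain BPTT, the unrolled baseline that the paper abstracts, differentiates it by a reverse sweep.

```python
# Causal computation by unrolling, plus BPTT for d(sum of y_t^2)/da.
def unroll(step, s0, inputs):
    # Causality is visible here: the n-th output depends only on the
    # first n inputs, via the state threaded through the loop.
    s, outputs = s0, []
    for u in inputs:
        s, y = step(s, u)
        outputs.append(y)
    return outputs

a, b, c = 0.9, 0.5, 2.0
step = lambda s, u: (a * s + b * u, c * s)

def bptt_grad_a(s0, inputs):
    # Forward pass: record the state trajectory s_0, ..., s_N.
    states = [s0]
    for u in inputs:
        states.append(a * states[-1] + b * u)
    # Reverse pass: adjoint of the state, swept backwards through time.
    lam, grad_a = 0.0, 0.0
    for t in reversed(range(len(inputs))):
        grad_a += lam * states[t]                # a enters via s_{t+1} = a*s_t + b*u_t
        lam = 2.0 * c * c * states[t] + a * lam  # d(y_t^2)/ds_t plus carried adjoint
    return grad_a

us = [1.0, 0.0, -0.5, 0.25]
print(unroll(step, 0.0, us))   # outputs y_0..y_3, each depending on a prefix of us
print(bptt_grad_a(0.0, us))    # gradient of the summed squared outputs w.r.t. a
```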
Peat is formed by the accumulation of organic material in water-saturated soils. Drainage of peatlands and peat extraction contribute to carbon emissions and biodiversity loss. Most peat extracted for commercial purposes is used for energy production or as a growing substrate. Many countries aim to reduce peat usage but this requires tools to detect its presence in substrates. We propose a decision support system based on deep learning to detect peat-specific testate amoebae in microscopy images. We identified six taxa that are peat-specific and frequent in European peatlands. The shells of two taxa (Archerella sp. and Amphitrema sp.) were well preserved in commercial substrate and can serve as indicators of peat presence. Images from surface and commercial samples were combined into a training set. A separate test set exclusively from commercial substrates was also defined. Both datasets were annotated and YOLOv8 models were trained to detect the shells. An ensemble of eight models was included in the decision support system. Test set performance (average precision) reached values above 0.8 for Archerella sp. and above 0.7 for Amphitrema sp. The system processes thousands of images within minutes and returns a concise list of crops of the most relevant shells. This allows a human operator to quickly make a final decision regarding peat presence. Our method enables the monitoring of peat presence in commercial substrates. It could be extended by including more species for applications in restoration ecology and paleoecology.
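A hedged sketch of ensemble inference in the style described, using the ultralytics YOLOv8 API; the weight file names and the confidence threshold are hypothetical placeholders, not the authors’ released models.

```python
# Pool detections from an ensemble of YOLOv8 models and rank them by
# confidence, so an operator can review the most relevant crops first.
from ultralytics import YOLO

WEIGHTS = [f"peat_model_{i}.pt" for i in range(8)]   # hypothetical paths
models = [YOLO(w) for w in WEIGHTS]

def detect_shells(image_path, conf_threshold=0.25):
    detections = []
    for model in models:
        for box in model(image_path)[0].boxes:
            score = float(box.conf)
            if score >= conf_threshold:
                cls_name = model.names[int(box.cls)]
                detections.append((score, cls_name, box.xyxy[0].tolist()))
    # Highest-confidence detections first.
    return sorted(detections, reverse=True)

# print(detect_shells("substrate_sample_0001.jpg")[:5])
```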
Increasing penetration of variable and intermittent renewable energy resources on the energy grid poses a challenge for reliable and efficient grid operation, necessitating the development of algorithms that are robust to this uncertainty. However, standard algorithms incorporating uncertainty for generation dispatch are computationally intractable when costs are nonconvex, and machine learning-based approaches lack worst-case guarantees on their performance. In this work, we propose a learning-augmented algorithm, RobustML, that exploits the good average-case performance of a machine-learned algorithm for minimizing dispatch and ramping costs of dispatchable generation resources while providing provable worst-case guarantees on cost. We evaluate the algorithm on a realistic model of a combined cycle cogeneration plant, where it exhibits robustness to distribution shift while enabling improved efficiency as renewables penetration increases.
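A hedged sketch of the generic learning-augmented pattern (follow the machine-learned policy while it stays competitive, fall back to a robust baseline otherwise); this illustrates the idea of combining average-case ML performance with worst-case guarantees, not the paper’s actual RobustML algorithm.

```python
# Consistency/robustness switch: trust the ML advice only while its running
# cost stays within (1 + eps) of the robust baseline's running cost.
def learning_augmented_dispatch(ml_policy, robust_policy, cost, demands, eps=0.1):
    ml_cost = robust_cost = 0.0
    schedule = []
    for d in demands:
        ml_action, robust_action = ml_policy(d), robust_policy(d)
        ml_cost += cost(ml_action, d)
        robust_cost += cost(robust_action, d)
        ok = ml_cost <= (1 + eps) * robust_cost
        schedule.append(ml_action if ok else robust_action)
    return schedule

# Toy usage: quadratic dispatch cost, ML policy tracks demand, flat baseline.
cost = lambda g, d: (g - d) ** 2 + 0.1 * g
print(learning_augmented_dispatch(lambda d: d, lambda d: 5.0, cost, [3, 6, 9, 4]))
```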
In recent years, passive motion paradigms (PMPs), derived from the equilibrium point hypothesis and impedance control, have been utilised as manipulation methods for humanoid robots and robotic manipulators. These paradigms are typically achieved by creating a kinematic chain that enables the manipulator to perform goal-directed actions without explicitly solving the inverse kinematics. This approach leverages a kinematic model constructed through the training of artificial neural networks, aligning well with principles of cybernetics and cognitive computation by enabling adaptive and flexible control. Specifically, these networks model the relationship between joint angles and end-effector positions, facilitating the computation of the Jacobian matrix. Although this method does not require an accurate robot model, traditional neural networks often suffer from drawbacks such as overfitting and inefficient training, which can compromise the accuracy of the final PMP model. In this paper, we implement the method using a deep neural network and investigate the impact of activation functions and network depth on the performance of the kinematic model. Additionally, we propose a transfer learning approach to fine-tune the pre-trained model, enabling it to be transferred to other manipulator arms with different kinematic properties. Finally, we implement and evaluate the deep neural network-based PMP on the Universal Robots platform, comparing it with traditional kinematic controllers and assessing its physical interaction capabilities and accuracy.
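A minimal sketch of the kinematic-model idea, assuming PyTorch: a neural network approximates the forward kinematics (joint angles to end-effector position), and automatic differentiation supplies the Jacobian the PMP needs, with no analytic robot model. The network size and the 6-joint/3-D choice are illustrative, not the paper’s setup.

```python
# Learned forward kinematics plus autograd Jacobian for a PMP-style update.
import torch

fk_net = torch.nn.Sequential(          # stand-in for the trained model
    torch.nn.Linear(6, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

def jacobian_at(q):
    # 3x6 Jacobian of the learned forward kinematics at configuration q.
    return torch.autograd.functional.jacobian(fk_net, q)

q = torch.zeros(6)
J = jacobian_at(q)
# PMP-style attractor step: pull the end effector toward a goal through the
# transposed Jacobian (a task-space force field mapped to joint velocities).
goal = torch.tensor([0.3, 0.1, 0.5])
force = goal - fk_net(q)
q_dot = J.T @ force
print(J.shape, q_dot.shape)
```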
A topological space has a domain model if it is homeomorphic to the maximal point space $\mathrm{Max}(P)$ of a domain $P$. Lawson proved that every Polish space $X$ has an $\omega$-domain model $P$ and, for such a model $P$, $\mathrm{Max}(P)$ is a $G_{\delta}$-set of the Scott space of $P$. Martin (2003) then asked whether it is true that for every $\omega$-domain $Q$, $\mathrm{Max}(Q)$ is a $G_{\delta}$-set of the Scott space of $Q$. In this paper, we give a negative answer to Martin’s long-standing open problem by constructing a counterexample. The counterexample actually shows that the answer is no even for $\omega$-algebraic domains. In addition, we construct an $\omega$-ideal domain $\widetilde{Q}$ for the constructed $Q$ such that their maximal point spaces are homeomorphic. Therefore, $\mathrm{Max}(Q)$ is a $G_{\delta}$-set of the Scott space of the new model $\widetilde{Q}$.
Smooth Infinitesimal Analysis (SIA) is a remarkable late twentieth-century theory of analysis. It is based on nilsquare infinitesimals, and does not rely on limits. SIA poses a challenge of motivating its use of intuitionistic logic beyond merely avoiding inconsistency. The classical-modal account(s) provided here attempt to do just that. The key is to treat the identity of an arbitrary nilsquare, e, in relation to 0 or any other nilsquare, as objectually vague or indeterminate—pace a famous argument of Evans [10]. Thus, we interpret the necessity operator of classical modal logic as “determinateness” in truth-value, naturally understood to satisfy the modal system, S4 (the accessibility relation on worlds being reflexive and transitive). Then, appealing to the translation due to Gödel et al., and its proof-theoretic faithfulness (“mirroring theorem”), we obtain a core classical-modal interpretation of SIA. Next we observe a close connection with Kripke semantics for intuitionistic logic. However, to avoid contradicting SIA’s non-classical treatment of identity relating nilsquares, we translate “=” with a non-logical surrogate, ‘E,’ with requisite properties. We then take up the interesting challenge of adding new axioms to the core CM interpretation. Two mutually incompatible ones are considered: one being the positive stability of identity and the other being a kind of necessity of indeterminate identity (among nilsquares). Consistency of the former is immediate, but the proof of consistency of the latter is a new result. Finally, we consider moving from CM to a three-valued, semi-classical framework, SCM, based on the strong Kleene axioms. This provides a way of expressing “indeterminacy” in the semantics of the logic, arguably improving on our CM. SCM is also proof-theoretically faithful, and the extensions by either of the new axioms are consistent.
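As a standard illustration of how nilsquares replace limits (my example, not drawn from the article), the Kock–Lawvere axiom lets derivatives be computed algebraically:

```latex
% Kock--Lawvere axiom: for every f and every nilsquare e (so e^2 = 0)
% there is a unique f'(x) with
\[
  f(x + e) = f(x) + f'(x)\, e .
\]
% For f(x) = x^2 the derivative falls out with no limit taken:
\[
  (x + e)^2 = x^2 + 2xe + e^2 = x^2 + 2x \cdot e
  \qquad\Longrightarrow\qquad f'(x) = 2x .
\]
% Uniqueness of f'(x) requires that one cannot prove e = 0 for every
% nilsquare, which is where the intuitionistic treatment of identity enters.
```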
On both global and local levels, one can observe a trend toward the adoption of algorithmic regulation in the public sector, with the Chinese social credit system (SCS) serving as a prominent and controversial example of this phenomenon. Within the SCS framework, cities play a pivotal role in its development and implementation, both as evaluators of individuals and enterprises and as subjects of evaluation themselves. This study engages in a comparative analysis of SCS scoring mechanisms for individuals and enterprises across diverse Chinese cities while also scrutinizing the scoring system applied to cities themselves. We investigate the extent of algorithmic regulation exercised through the SCS, elucidating its operational dynamics at the city level in China and assessing its interventionism, especially concerning the involvement of algorithms. Furthermore, we discuss ethical concerns surrounding the SCS’s implementation, particularly regarding transparency and fairness. By addressing these issues, this article contributes to two research domains: algorithmic regulation and discourse surrounding the SCS, offering valuable insights into the ongoing utilization of algorithmic regulation to tackle governance and societal challenges.
Africa had a busy election calendar in 2024, with at least 19 countries holding presidential or general elections. In a continent with a large youth population, a common theme across these countries is citizens’ desire to have their voices heard, and a busy election year offers an opportunity for the continent to redeem its democratic credentials and demonstrate its commitment to strengthening free and fair elections and more responsive, democratic governance. Given the central role that governance plays in security in Africa, the stakes in many of these elections are high: not only achieving a democratically elected government but also achieving stability and development. Since governance norms, insecurity, and economic buoyancy are rarely contained by borders, the conduct and outcomes of each of these elections will also have implications for neighbouring countries and the continent overall. This article considers how the results of recent elections across Africa have been challenged in courts based on mistrust in the use of technology platforms, how the deployment of emerging technology, including AI, is casting a shadow on the integrity of elections in Africa, and the policy options to address these emerging trends, with a particular focus on governance of AI technologies through a human rights-based approach and equitable public procurement practices.
This book applies rotation theory to problems involving vectors and coordinates, with an approach that combines easily visualised procedures with smart mathematics. It constructs rotation theory from the ground up, building from basic geometry through to the motion and attitude equations of rockets, and the tensor analysis of relativity. The author replaces complicated pictures of superimposed axes with a simple and intuitive procedure of rotating a model aircraft, to create rotation sequences that are easily turned into mathematics. He combines the best of the 'active' and 'passive' approaches to rotation into a single coherent theory, and discusses many potential traps for newcomers. This volume will be useful to astronomers and engineers sighting planets and satellites, computer scientists creating graphics for movies, and aerospace engineers designing aircraft; also to physicists and mathematicians who study its abstract aspects.
Sustainable agricultural practices have become increasingly important due to growing environmental concerns and the urgent need to mitigate the climate crisis. Digital agriculture, through advanced data analysis frameworks, holds promise for promoting these practices. Pesticides are a common tool in agricultural pest control; they are key to ensuring food security but also contribute significantly to the climate crisis. To combat this, Integrated Pest Management (IPM) stands as a climate-smart alternative. We propose a causal and explainable framework for enhancing digital agriculture, using pest management and its sustainable alternative, IPM, as a key example to highlight the contributions of causality and explainability. Despite its potential, IPM faces low adoption rates due to farmers’ skepticism about its effectiveness. To address this challenge, we introduce an advanced data analysis framework tailored to enhance IPM adoption. Our framework provides (i) robust pest population predictions across diverse environments with invariant and causal learning, (ii) explainable pest presence predictions using transparent models, (iii) actionable advice through counterfactual explanations for in-season IPM interventions, (iv) field-specific treatment effect estimations, and (v) assessments of the effectiveness of our advice using causal inference. By incorporating these features, our study illustrates the potential of causality and explainability to enhance digital agriculture in promoting climate-smart and sustainable agricultural practices, focusing on the specific case of pest management. In this case, our framework aims to alleviate skepticism and encourage wider adoption of IPM practices among policymakers, agricultural consultants, and farmers.
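A hedged sketch of one component, field-specific treatment effect estimation (item (iv)), via a simple T-learner (separate outcome models for treated and untreated fields); the features, data, and learner choice are illustrative, not the paper’s pipeline.

```python
# T-learner for heterogeneous treatment effects on synthetic field data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. temperature, humidity, crop stage
t = rng.integers(0, 2, size=500)         # 1 = IPM intervention applied
# Synthetic outcome: the intervention helps more in warm fields (feature 0).
y = X[:, 0] + t * (0.5 + 0.8 * X[:, 0]) + rng.normal(scale=0.1, size=500)

m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

def field_treatment_effect(x):
    # CATE estimate: predicted outcome with minus without the intervention.
    x = np.atleast_2d(x)
    return (m1.predict(x) - m0.predict(x)).item()

print(field_treatment_effect([1.5, 0.0, 0.0]))   # warm field: larger benefit
print(field_treatment_effect([-1.5, 0.0, 0.0]))  # cool field: smaller benefit
```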
Navigation is an essential capability for an autonomous robot, as information about the robot’s location is necessary for making decisions about upcoming events. The objective of a localization technique is to determine the location at which data are collected. In previous work, several deep learning methods were used for localization, but none of them achieves sufficient accuracy. To address this issue, an Enhanced Capsule Generation Adversarial Network and an optimized Dual Interactive Wasserstein Generative Adversarial Network for landmark detection and localization of autonomous robots in outdoor environments (ECGAN-DIWGAN-RSO-LAR) are proposed in this manuscript. The outdoor robot localization data are taken from the Virtual KITTI dataset. The method comprises two phases: landmark detection and localization. The landmark detection phase uses an Enhanced Capsule Generation Adversarial Network to detect landmarks in the captured images. The robot localization phase then uses a Dual Interactive Wasserstein Generative Adversarial Network (DIWGAN) to determine the robot’s location coordinates and compass orientation from the identified landmarks. The weight parameters of the DIWGAN are optimized by the Rat Swarm Optimization (RSO) algorithm. The proposed ECGAN-DIWGAN-RSO-LAR is implemented in Python. The proposed technique achieves accuracy 22.67%, 12.45%, and 8.89% higher than the existing methods.
The ubiquity of social media platforms allows individuals to easily share and curate their personal lives with friends, family, and the world. The selective nature of sharing one’s personal life may reinforce the memories and details of the shared experiences while simultaneously inducing the forgetting of related, unshared memories/experiences. This is a well-established psychological phenomenon known as retrieval-induced forgetting (RIF, Anderson et al.). To examine this phenomenon in the context of social media, two experiments were conducted using an adapted version of the RIF paradigm in which participants shared either experimenter-contrived (Study 1) or personal (Study 2) photographs on social media platforms. Study 1 revealed that participants had more accurate recall of the details surrounding the shared photographs as well as enhanced recognition of the shared photographs. Study 2 revealed that participants had more consistent recall of event details captured in the shared photographs than details captured or uncaptured in the unshared photographs. These results suggest that selectively sharing photographs on social media may specifically enhance the recollection of event details associated with the shared photographs. The novel and ecologically embedded methods provide fodder for future research to better understand the important role of social media in shaping how individuals remember their personal experiences.