This chapter lays the syntactic foundations for the book, covering topics in both first-order and higher-order logic. It introduces untyped lambda-terms and their properties, including beta-conversion and beta-normal forms. The chapter then defines types, signatures, and typed terms, restricting the typing judgment to beta-normal formulas. Finally, it introduces the concept of formulas and sequents, which are central to the proof-theoretic approach discussed in the book. Bibliographic notes provide references for further reading on these foundational concepts.
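As a quick illustration of the beta-conversion and beta-normal forms mentioned above (our example, not one drawn from the chapter), a beta-redex $(\lambda x.\, M)\, N$ rewrites by substituting the argument $N$ for the bound variable $x$ in $M$; repeating this until no redex remains yields a beta-normal form, e.g. $(\lambda x.\, \lambda y.\, x\, y)\, f\, a \to_{\beta} (\lambda y.\, f\, y)\, a \to_{\beta} f\, a$, and $f\, a$ is beta-normal.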
This chapter introduces higher-order quantification and its role in logic programming, discussing both its syntax and its proof theory. It explores the concept of near-focused proofs in the context of building proof systems for higher-order quantification, and it builds upon the proof-theoretic foundations established in earlier chapters to extend the logic programming paradigms to the higher-order setting.
Applications of cryptography are plentiful in everyday life. This guidebook is about the security analysis or 'cryptanalysis' of the basic building blocks on which these applications rely. Rather than covering a variety of techniques at an introductory level, this book provides a comprehensive and in-depth treatment of linear cryptanalysis. The subject is introduced from a mathematical point of view, providing an overview of the most influential papers on linear cryptanalysis and placing them in a consistent framework based on linear algebra. A large number of examples and exercises are included, drawing upon practice as well as theory. The book is accessible to students with no prior knowledge of cryptography. It covers linear cryptanalysis from the basics, including linear approximations and trails, correlation matrices, automatic search, and key-recovery techniques, up to advanced topics such as multiple and multidimensional linear cryptanalysis, zero-correlation approximations, and the geometric approach.
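To make the book's central notion concrete, the following small Python sketch (our own illustration, not code from the book) computes the correlation of a linear approximation of a 4-bit S-box; the choice of the PRESENT S-box and of the masks is incidental:

```python
# Illustrative only: correlation of a linear approximation over a 4-bit S-box.
# The S-box and masks are examples, not material taken from the book.

def parity(x: int) -> int:
    """Parity (XOR of all bits) of an integer."""
    return bin(x).count("1") & 1

def correlation(sbox, in_mask, out_mask):
    """Correlation c = 2^-n * sum_x (-1)^(<in_mask,x> XOR <out_mask,S(x)>)."""
    n = len(sbox)
    total = sum((-1) ** (parity(in_mask & x) ^ parity(out_mask & sbox[x]))
                for x in range(n))
    return total / n

# A small example S-box (the 4-bit PRESENT S-box, widely used in the literature)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

if __name__ == "__main__":
    for u in range(16):
        for v in range(16):
            c = correlation(SBOX, u, v)
            if u and v and abs(c) == 0.5:  # strongest non-trivial approximations
                print(f"masks u={u:#x}, v={v:#x}: correlation {c:+.2f}")
```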
We introduce Displayed Type Theory (dTT), a multi-modal homotopy type theory with discrete and simplicial modes. In the intended semantics, the discrete mode is interpreted by a model for an arbitrary $\infty$-topos, while the simplicial mode is interpreted by Reedy fibrant augmented semi-simplicial diagrams in that model. This simplicial structure is represented inside the theory by a primitive notion of display or dependency, guarded by modalities, yielding a partially-internal form of unary parametricity. Using the display primitive, we then give a coinductive definition, at the simplicial mode, of a type of semi-simplicial types. Roughly speaking, a semi-simplicial type consists of a type together with, for each point of that type, a displayed semi-simplicial type over that point. This mimics how simplices can be generated geometrically through repeated cones, and is made possible by the display primitive at the simplicial mode. The discrete part of this type of semi-simplicial types then yields the usual infinite indexed definition of semi-simplicial types, both semantically and syntactically. Thus, dTT enables working with semi-simplicial types in full semantic generality.
We present a critical survey of the consistency of uncertainty quantification as used in deep learning, highlighting that existing approaches often cover uncertainty only partially and contain many inconsistencies. We then provide a comprehensive and statistically consistent framework for uncertainty quantification in deep learning, targeting regression problems, that accounts for all major sources of uncertainty: input data, training and testing data, neural network weights, and machine-learning model imperfections. We systematically quantify each source by applying Bayes’ theorem and conditional probability densities, and we introduce a fast, practical implementation method. We demonstrate its effectiveness on a simple regression problem and a real-world application: predicting cloud autoconversion rates using a neural network trained on aircraft measurements from the Azores and guided by a two-moment bin model of the stochastic collection equation. In this application, uncertainty from the training and testing data dominates, followed by input data, neural network model, and weight variability. Finally, we highlight the practical advantages of this methodology, showing that explicitly modeling training data uncertainty improves robustness to new inputs that fall outside the training data and enhances model reliability in real-world scenarios.
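As a generic illustration of how Bayes’ theorem and conditional densities enter such a framework (a standard Bayesian regression formulation, not the authors’ exact equations), the predictive density of an output $y$ for an input $x$ given data $\mathcal{D}$ marginalizes the weight uncertainty: $p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w$, with $p(w \mid \mathcal{D}) \propto p(\mathcal{D} \mid w)\, p(w)$; additional conditioning terms of the same shape account for input-data and model-imperfection uncertainty.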
In the 1980s, Erdős and Sós initiated the study of Turán problems with a uniformity condition on the distribution of edges: the uniform Turán density of a hypergraph $H$ is the infimum over all $d$ for which any sufficiently large hypergraph with the property that all its linear-size subhypergraphs have density at least $d$ contains $H$. In particular, they asked to determine the uniform Turán densities of $K_4^{(3)-}$ and $K_4^{(3)}$. After more than 30 years, the former was solved in [Israel J. Math. 211 (2016), 349–366] and [J. Eur. Math. Soc. 20 (2018), 1139–1159], while the latter still remains open. To date, constructions of $3$-uniform hypergraphs are known only with uniform Turán density equal to $0$, $1/27$, $4/27$, and $1/4$. We extend this list by a fifth value: we prove an easy-to-verify sufficient condition for the uniform Turán density to be equal to $8/27$ and identify hypergraphs satisfying this condition.
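In symbols, one natural way to formalize the informal definition above (our paraphrase, not the notation of the cited papers) is $\pi_u(H) = \inf\{\, d \in [0,1] : \forall \varepsilon > 0\ \exists n_0 \text{ such that every 3-uniform hypergraph on } n \ge n_0 \text{ vertices, all of whose vertex subsets } S \text{ with } |S| \ge \varepsilon n \text{ span at least } d\binom{|S|}{3} \text{ edges, contains } H \,\}$.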
Blockchain technology has attracted attention from public sector agencies, mainly for its perceived potential to improve transparency, data integrity, and administrative processes. However, its concrete value and applicability within government settings remain contested, and real-world adoption has been limited and uneven. This raises questions regarding the conditions that promote or impede adoption at the institutional level. Fuzzy-set qualitative comparative analysis is employed in this research to explore how the combined effects of national-level regulatory clarity, financial provision, digital readiness, and ecosystem engagement shape patterns of blockchain adoption in the European public sector. Rather than identifying any single factor as decisive, our findings reveal a plurality of institutional paths leading to high adoption intensity, with regulatory certainty and European Union funding appearing most frequently on high-consistency paths. In contrast, digital readiness indicators and national research and development budgets are substitutable, challenging resource-based perceptions of technology adoption and supporting a configurational understanding that accounts for institutional interdependence and contextuality. We argue that policy strategies should not aim for overall readiness but should instead align key institutional strengths with local conditions and public value objectives.
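For readers unfamiliar with fuzzy-set QCA, the consistency scores referred to above are conventionally computed with Ragin's inclusion formula; the Python sketch below is a generic textbook illustration with made-up membership scores, not the paper's data or code:

```python
# Illustrative fsQCA consistency (Ragin's inclusion measure): the degree to
# which cases with membership in configuration X also show outcome Y.
# The membership scores below are invented numbers, not the study's data.

def consistency(x_memberships, y_memberships):
    """Consistency(X <= Y) = sum(min(x, y)) / sum(x)."""
    numerator = sum(min(x, y) for x, y in zip(x_memberships, y_memberships))
    denominator = sum(x_memberships)
    return numerator / denominator

# Example: fuzzy membership of five hypothetical countries
x = [0.8, 0.6, 0.9, 0.3, 0.7]   # e.g. a configuration such as "regulatory clarity AND EU funding"
y = [0.9, 0.7, 0.8, 0.4, 0.6]   # e.g. the outcome "high blockchain adoption intensity"
print(f"consistency = {consistency(x, y):.2f}")
```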
Although design research is a relatively recent academic field, it has developed several influential typologies over the past decades. This study conducts a systematic review to evaluate how design research approaches relate to the design process, with a specific focus on two overlooked dimensions: the point of research integration in design and the research attitude guiding the inquiry. Drawing on foundational models by Frayling, Cross and Buchanan, the paper proposes a conceptual framework that cross-analyzes research typologies with these two dimensions. Seventy peer-reviewed studies in architecture and related disciplines were identified and analyzed following the PRISMA guidelines and the Critical Appraisal Skills Programme (CASP) checklist. The findings reveal four distinct clusters: (1) research about design – basic research – design epistemology, (2) research through design – applied research – design praxeology, (3) research for design – clinical research – design phenomenology and (4) a fourth category, research through design (II) – applied research – design epistemology. Moreover, five research attitudes were identified across the studies: practitioner, practitioner with user, practitioner with AI, researcher and user. These findings provide a more nuanced understanding of how design knowledge is produced in architectural research.
The core topics at the intersection of human-computer interaction (HCI) and US law -- privacy, accessibility, telecommunications, intellectual property, artificial intelligence (AI), dark patterns, human subjects research, and voting -- can be hard to understand without a deep foundation in both law and computing. Every member of the author team of this unique book brings expertise in both law and HCI to provide an in-depth yet understandable treatment of each topic area for professionals, researchers, and graduate students in computing and/or law. Two introductory chapters explaining the core concepts of HCI (for readers with a legal background) and US law (for readers with an HCI background) are followed by in-depth discussions of each topic.
In recent years, the manufacturing sector has seen an influx of artificial intelligence applications seeking to harness its capabilities to improve productivity. However, manufacturing organizations have a limited understanding of the risks posed by the use of artificial intelligence, especially those related to trust, responsibility, and ethics. While significant effort has been put into developing various general frameworks and definitions to capture these risks, manufacturing and supply chain practitioners face difficulties in implementing them and in understanding their impact. These issues can have a significant effect on manufacturing companies, not only at the organizational level but also on their employees, clients, and suppliers. This paper aims to increase understanding of trustworthy, responsible, and ethical artificial intelligence challenges as they apply to manufacturing and supply chains. We first conduct a systematic mapping study on concepts relevant to trust, responsibility, and ethics and their interrelationships. We then use a broadened view of the machine learning lifecycle as a basis for understanding how risks and challenges related to these concepts emanate from each phase of the lifecycle. We follow a case-study-driven approach, providing several illustrative examples that focus on how these challenges manifest themselves in actual manufacturing practice. Finally, we propose a series of research questions as a roadmap for future research in trustworthy, responsible, and ethical artificial intelligence applications in manufacturing, to ensure that the envisioned economic and societal benefits are delivered safely and responsibly.
In many contexts, an individual’s beliefs and behavior are affected by the choices of their social or geographic neighbors. This influence results in local correlation in people’s actions, which in turn affects how information and behaviors spread. Previously developed frameworks capture local social influence using network games, but discard local correlation in players’ strategies. This paper develops a network games framework that allows for local correlation in players’ strategies by incorporating a richer partial information structure than previous models. Using this framework we also examine the dependence of equilibrium outcomes on network clustering—the probability that two individuals with a mutual neighbor are connected to each other. We find that clustering reduces the number of players needed to provide a public good and allows for market sharing in technology standards competitions.
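For concreteness, the clustering measure described above corresponds to the standard notion of transitivity (or clustering coefficient); the following minimal Python illustration uses a toy network rather than anything from the paper:

```python
# Illustrative computation of network clustering (not from the paper):
# transitivity = fraction of connected triples that close into triangles,
# i.e., the probability that two nodes with a mutual neighbor are linked.
import networkx as nx

G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=0)  # toy clustered network
print("transitivity:      ", nx.transitivity(G))
print("average clustering:", nx.average_clustering(G))
```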
Air pollution is a major environmental and public health risk globally, leading to millions of premature deaths annually and negative economic effects. One of the key challenges in managing air quality is the availability of actionable spatial air quality data. Sparse monitoring networks or the outright absence of air quality monitoring stations in many places mean that data and information on air pollution are limited in locations without coverage. Spatial prediction of air quality can contribute to increasing data access for locations without air quality monitoring, ultimately improving awareness of the risk of air pollution exposure for vulnerable people. In this study, we investigated the air quality prediction task in two cities in Uganda (i.e., Jinja and Kampala) with unique geographic and economic contexts. Primarily, we used Gaussian processes to predict the PM$_{2.5}$ levels in the two cities, selected because of their relative importance in the country and their varying characteristics. We achieved promising results with an average root-mean-square error (RMSE) of 18.32 μg/m³ and 16.88 μg/m³ in Kampala and Jinja, respectively. These results provide valuable insights into the air quality profiles of two urban sub-Saharan cities with different demographics, which can in turn aid decision-making for targeted actions at different levels.
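As a rough sketch of what Gaussian-process prediction of PM$_{2.5}$ with an RMSE check can look like in code (a generic illustration with synthetic coordinates and values, not the authors' pipeline or data):

```python
# Illustrative Gaussian-process regression for PM2.5 prediction.
# Inputs and targets below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical features: longitude and latitude of monitoring sites
X = rng.uniform(low=[32.4, 0.2], high=[32.7, 0.45], size=(80, 2))
y = 40 + 10 * np.sin(20 * X[:, 0]) + rng.normal(scale=5, size=80)  # fake PM2.5

X_train, X_test, y_train, y_test = X[:60], X[60:], y[:60], y[60:]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X_train, y_train)
y_pred, y_std = gp.predict(X_test, return_std=True)  # mean and uncertainty

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"RMSE: {rmse:.2f} ug/m^3")
```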
Language models (LMs) have attracted the attention of researchers from the natural language processing (NLP) and machine learning (ML) communities working in specialized domains, including climate change. NLP and ML practitioners have been making efforts to reap the benefits of LMs of various sizes, including large language models, to simplify and accelerate the processing of large collections of text data and, in doing so, to help climate change stakeholders gain a better understanding of past and current climate-related developments and stay on top of both ongoing changes and increasing amounts of data. This paper presents a brief history of language models and traces how LMs have become an emerging technology for analysing and interacting with texts in the specialized domain of climate change. The paper reviews existing domain-specific LMs and systems based on general-purpose large language models for analysing climate change data, paying special attention to the LMs’ and LM-based systems’ functionalities, intended use and audience, architecture, the data used in their development, the applied evaluation methods, and their accessibility. The paper concludes with a brief overview of potential avenues for future research vis-à-vis the advantages and disadvantages of deploying LMs and LM-based solutions in a high-stakes scenario such as climate change research. For the convenience of readers, explanations of specialized terms used in NLP and ML are provided.
This study presents the control of an omnidirectional automated guided vehicle (AGV) with mecanum wheels using a hybrid optimization algorithm that combines a modified A* algorithm and the dynamic window approach (ADWA-HO). The method ensures efficient and precise navigation in both static and dynamic environments. The modified A* algorithm generates global paths, removes redundant nodes, and refines trajectories to improve efficiency and smoothness. At the same time, the dynamic window approach (DWA) enables real-time local path planning and obstacle avoidance. By evaluating the AGV’s motion commands in real time, ADWA-HO selects optimal velocity commands within a dynamically updated window, thereby reducing route conflicts and ensuring stable movement. Compared with benchmark methods including dynamic A* (D*), artificial potential field (APF), DWA, probabilistic roadmap (PRM) & rapidly exploring random tree (RRT) fusion, and PRM & DWA fusion, the proposed ADWA-HO achieves improvements in average path length of 28.10%, 22.95%, 21.16%, 17.35%, and 10.71% and in average motion time of 23.48%, 17.85%, 15.47%, 11.86%, and 7.53% on both Map 1 and Map 2, respectively. The difference between simulation and real-world experiments is limited to 5.35% in path length and 4.38% in motion time, confirming the method’s practical reliability. Furthermore, the algorithm achieves lower standard deviation in both metrics, indicating higher consistency of performance. This work also introduces a novel map-building strategy based on geometric and semantic data modules, which enhances the adaptability of real-world AGV deployment.
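The local-planning step can be pictured with a generic dynamic-window velocity-selection loop such as the Python sketch below; the cost terms, limits, and sampling resolution are placeholders and do not reproduce the paper's ADWA-HO:

```python
# Generic dynamic-window-approach (DWA) velocity selection, sketched to
# illustrate the local-planning idea described above. All weights and limits
# are illustrative defaults, not the paper's ADWA-HO implementation.
import numpy as np

def dwa_select(v, w, goal, obstacles, dt=0.1, horizon=1.0,
               v_max=1.0, w_max=2.0, a_v=0.5, a_w=1.5):
    """Pick the (v, w) command in the current dynamic window with lowest cost."""
    best, best_cost = (0.0, 0.0), np.inf
    # Dynamic window: velocities reachable within one control step
    for v_c in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 11):
        for w_c in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 21):
            # Forward-simulate a short trajectory for this candidate command
            x, y, th = 0.0, 0.0, 0.0
            for _ in range(int(horizon / dt)):
                th += w_c * dt
                x += v_c * np.cos(th) * dt
                y += v_c * np.sin(th) * dt
            end = np.array([x, y])
            clearance = min(np.linalg.norm(end - np.array(o)) for o in obstacles)
            cost = (np.linalg.norm(np.array(goal) - end)   # progress toward goal
                    + 2.0 / (clearance + 1e-6)             # keep away from obstacles
                    - 0.1 * v_c)                           # prefer faster motion
            if cost < best_cost:
                best, best_cost = (v_c, w_c), cost
    return best

# Example: current v=0.3 m/s, w=0.0 rad/s, goal 2 m ahead, one obstacle nearby
print(dwa_select(0.3, 0.0, goal=(2.0, 0.0), obstacles=[(1.0, 0.3)]))
```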
Digital oratory skills have become essential for academic and professional success in today’s digital world, making it imperative to integrate digital oratory training into public speaking pedagogy. This study examined second language (L2) speakers’ public speaking anxiety and nonverbal speech performance in the context of digital oratory. Participants were 40 English as a second language students enrolled in a public speaking course at a Hong Kong university. Each student recorded and uploaded an 8-minute speech to a digital learning platform. They also completed a questionnaire measuring digital oratory anxiety and participated in semi-structured interviews sharing their perceptions of digital presentation. Nonverbal speech performance was assessed, and correlations with digital oratory anxiety were analyzed. The results showed that cognitive and physiological factors had a greater influence on digital oratory anxiety than behavioral and technical factors. Although no significant correlations were found between digital oratory anxiety and nonverbal speech performance, the technical factor had the least impact on L2 students’ anxiety, leading to positive outcomes regarding the technical quality of the speech videos. Comparatively, eye contact and gestures attained much lower mean scores than voice control and facial expressions. Interview results further elucidated the benefits and challenges students experienced during digital presentations. Pedagogically, the findings highlight the need for a holistic approach that considers cognitive, physiological, behavioral, and technical factors to address L2 learners’ digital oratory anxiety. Given its affordability and accessibility, digital oratory can be effectively integrated into instruction to develop L2 students’ multimodal communication and nonverbal delivery skills.