This research proposes a novel conceptual framework that combines the concepts of Human-Computer Interaction (HCI) and Ambient Intelligence (AmI). The proposed framework aims to shed light on the importance of considering the needs and the social interactions of various building occupants in different types of buildings and designing Human-Building Interaction (HBI) strategies accordingly. Specifically, we take educational buildings, a case less explored in HBI research, and apply the proposed framework, investigating how HBI strategies and interactions should be designed to address the needs of students, the primary occupants. Focus groups and semi-structured interviews were conducted among students in a flagship smart engineering building at Virginia Tech. Qualitative coding and concept mapping were used to analyze the qualitative data and determine the impact of occupant-specific needs on the learning experience of students. “Finding study space” was found to have the highest direct impact on the learning experience of students, and “Indoor Environmental Quality (IEQ)” was found to have the highest indirect impact. The results show a clear need to integrate occupant needs into the design of HBI strategies in different types of buildings. Finally, we discuss new ideas for designing potential Intelligent User Interfaces (IUI) to address the identified needs.
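A minimal sketch of how the concept-mapping result could be scored: needs and outcomes as a weighted directed graph, where a need’s direct impact is its edge weight into the learning-experience node and its indirect impact flows through intermediate needs. The node names, weights, and two-step scoring below are illustrative assumptions, not the study’s actual data or procedure.

```python
# Concept map as a weighted directed graph (illustrative values only).
edges = {
    ("finding_study_space", "learning_experience"): 0.9,
    ("IEQ", "thermal_comfort"): 0.7,
    ("thermal_comfort", "learning_experience"): 0.8,
}

def direct_impact(need):
    # Direct impact: weight of the edge straight into the outcome node.
    return edges.get((need, "learning_experience"), 0.0)

def indirect_impact(need):
    # Indirect impact: sum over two-step paths need -> mid -> outcome.
    return sum(w * edges.get((mid, "learning_experience"), 0.0)
               for (src, mid), w in edges.items()
               if src == need and mid != "learning_experience")

for need in ("finding_study_space", "IEQ"):
    print(need, direct_impact(need), indirect_impact(need))
```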
This study explores the relationship between alter centrality in various social domains and the perception of linguistic similarity within personal networks. Linguistic similarity perception is defined as the extent to which individuals perceive others to speak similarly to themselves. A survey of 126 college students and their social connections (n = 1035) from the French-speaking region of Switzerland was conducted. We applied logistic multilevel regressions to account for the hierarchical structure of dyadic ties. The results show that alters holding central positions in supportive networks are positively associated with perceived linguistic similarity, while those who are central in conflict networks show a negative association. The role of ambivalence yielded mixed results, with a positive and significant association emerging when ambivalence was linked to family members.
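As an illustration of the modeling setup, the sketch below fits a two-level logistic regression (ties nested in egos) with statsmodels; the abstract does not name the software used, and the data and column names (perceived_similar, support_centrality, conflict_centrality, ego_id) are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic stand-in for the dyadic data: one row per ego-alter tie.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "perceived_similar": rng.integers(0, 2, n),   # binary outcome
    "support_centrality": rng.normal(size=n),
    "conflict_centrality": rng.normal(size=n),
    "ego_id": rng.integers(0, 40, n),             # level-2 grouping
})

# Fixed effects for the centrality measures, plus a random intercept
# per ego to respect the hierarchical structure of dyadic ties.
model = BinomialBayesMixedGLM.from_formula(
    "perceived_similar ~ support_centrality + conflict_centrality",
    {"ego": "0 + C(ego_id)"},
    df,
)
result = model.fit_vb()   # variational Bayes estimate
print(result.summary())
```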
Digital twins are a new paradigm for our time, offering the possibility of interconnected virtual representations of the real world. The concept is very versatile and has been adopted by multiple communities of practice, policymakers, researchers, and innovators. A significant part of the digital twin paradigm is about interconnecting digital objects, many of which have previously not been combined. As a result, members of the newly forming digital twin community often talk at cross-purposes, based on different starting points, assumptions, and cultural practices. These differences stem from the philosophical world-view adopted within specific communities. In this paper, we explore the philosophical context which underpins the digital twin concept. We offer the building blocks for a philosophical framework for digital twins, consisting of 21 principles that are intended to help facilitate their further development. Specifically, we argue that the philosophy of digital twins is fundamentally holistic and emergentist. We further argue that in order to enable emergent behaviors, digital twins should be designed to reconstruct the behavior of a physical twin by “dynamically assembling” multiple digital “components”. We also argue that digital twins naturally include aspects relating to the philosophy of artificial intelligence, including learning and exploitation of knowledge. We discuss the following four questions: (i) What is the distinction between a model and a digital twin? (ii) What previously unseen results can we expect from a digital twin? (iii) How can emergent behaviors be predicted? (iv) How can we assess the existence and uniqueness of digital twin outputs?
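One way to read the “dynamically assembling” principle in code is a twin composed at runtime from interchangeable component models, where system behavior arises from the interaction of components rather than from any single model. The class names and update rules below are illustrative assumptions, not a design from the paper.

```python
# A digital twin assembled dynamically from component models.
class ThermalModel:
    def step(self, state):
        state["temp"] = state.get("temp", 20.0) + 0.1 * state.get("load", 0.0)
        return state

class WearModel:
    def step(self, state):
        state["wear"] = state.get("wear", 0.0) + 0.001 * state.get("load", 0.0)
        return state

class DigitalTwin:
    def __init__(self, components):
        self.components = list(components)   # assembled at runtime

    def step(self, state):
        # Behavior emerges from the chained interaction of components.
        for component in self.components:
            state = component.step(state)
        return state

twin = DigitalTwin([ThermalModel(), WearModel()])
print(twin.step({"load": 5.0}))
```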
This article establishes a data-driven modeling framework for lean hydrogen ($ {\mathrm{H}}_2 $)-air reaction rates for the Large Eddy Simulation (LES) of turbulent reactive flows. This is particularly challenging since $ {\mathrm{H}}_2 $ molecules diffuse much faster than heat, leading to large variations in burning rates, thermodiffusive instabilities at the subfilter scale, and complex turbulence-chemistry interactions. Our data-driven approach leverages a Convolutional Neural Network (CNN), trained to approximate filtered burning rates from emulated LES data. First, five different lean premixed turbulent $ {\mathrm{H}}_2 $-air flame Direct Numerical Simulations (DNSs) are computed, each with a unique global equivalence ratio. Second, DNS snapshots are filtered and downsampled to emulate LES data. Third, a CNN is trained to approximate the filtered burning rates as a function of LES scalar quantities: progress variable, local equivalence ratio, and flame thickening due to filtering. Finally, the performance of the CNN model is assessed on test solutions never seen during training. The model retrieves burning rates with very high accuracy. It is also tested on two filter and downsampling parameter sets and on two global equivalence ratios lying between those used during training. For these interpolation cases, the model approximates burning rates with low error even though the cases were not included in the training dataset. This a priori study shows that the proposed data-driven machine learning framework is able to address the challenge of modeling lean premixed $ {\mathrm{H}}_2 $-air burning rates. It paves the way for a new modeling paradigm for the simulation of carbon-free hydrogen combustion systems.
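A minimal sketch of the kind of network described, assuming a fully convolutional mapping over 3D LES fields; the layer sizes and kernels are illustrative, and the authors’ actual architecture and training pipeline are not specified in the abstract.

```python
import torch
import torch.nn as nn

class BurningRateCNN(nn.Module):
    """Maps filtered LES scalar fields to a filtered burning-rate field.
    Input channels follow the abstract: progress variable, local
    equivalence ratio, and flame thickening due to filtering."""
    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):        # x: (batch, 3, nx, ny, nz)
        return self.net(x)       # (batch, 1, nx, ny, nz)

model = BurningRateCNN()
fields = torch.rand(2, 3, 16, 16, 16)   # emulated (filtered) LES snapshots
predicted_rate = model(fields)
```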
Artificial intelligence (AI) requires new ways of evaluating national technology use and strategy for African nations. We conduct a survey of existing “readiness” assessments both for general digital adoption and AI policy in particular. We conclude that existing global readiness assessments do not fully capture African states’ progress in AI readiness and lay the groundwork for how assessments can be better used for the African context. We consider the extent to which these indicators map to the African context and what these indicators miss in capturing African states’ on-the-ground work in meeting AI capability. Through case studies of four African nations of diverse geographic and economic dimensions, we identify nuances missed by global assessments and offer high-level policy considerations for how states can best improve their AI readiness standards and prepare their societies to capture the benefits of AI.
We present PCFTL (Probabilistic CounterFactual Temporal Logic), a new probabilistic temporal logic for the verification of Markov Decision Processes (MDP). PCFTL introduces operators for causal inference, allowing us to express interventional and counterfactual queries. Given a path formula ϕ, an interventional property is concerned with the satisfaction probability of ϕ if we apply a particular change I to the MDP (e.g., switching to a different policy); a counterfactual formula allows us to compute, given an observed MDP path τ, what the outcome of ϕ would have been had we applied I in the past and under the same random factors that led to observing τ. Our approach represents a departure from existing probabilistic temporal logics that do not support such counterfactual reasoning. From a syntactic viewpoint, we introduce a counterfactual operator that subsumes both interventional and counterfactual probabilities as well as the traditional probabilistic operator. This makes our logic strictly more expressive than PCTL⋆. The semantics of PCFTL rely on a structural causal model translation of the MDP, which provides a representation amenable to counterfactual inference. We evaluate PCFTL in the context of safe reinforcement learning using a benchmark of grid-world models.
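The abstract does not detail the structural causal model translation, but for discrete MDPs a common construction uses an inverse-CDF structural equation for transitions, under which a counterfactual query follows the abduction-action-prediction recipe. The toy sketch below uses that construction with invented numbers; note that counterfactual answers generally depend on which SCM is chosen for a given MDP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP with 2 states and 2 actions: P[a][s, s'] (invented numbers).
P = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),
     1: np.array([[0.5, 0.5], [0.9, 0.1]])}

def step(s, a, u):
    """Inverse-CDF structural equation: s' = F_{P(.|s,a)}^{-1}(u)."""
    return int(np.searchsorted(np.cumsum(P[a][s]), u))

def abduct_noise(s, a, s_next):
    """Posterior noise: u is uniform on the CDF segment of the observed s'."""
    cdf = np.concatenate(([0.0], np.cumsum(P[a][s])))
    return rng.uniform(cdf[s_next], cdf[s_next + 1])

# Observed transition under action a=0, counterfactual under a:=1.
s, a, s_next = 0, 0, 1
u = abduct_noise(s, a, s_next)     # abduction: infer the random factors
cf_next = step(s, 1, u)            # action + prediction under intervention
print(f"observed s'={s_next}, counterfactual s'={cf_next}")
```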
Federal and local agencies have identified a need to create building databases to help ensure that critical infrastructure and residential buildings are accounted for in disaster preparedness and to aid the decision-making processes in subsequent recovery efforts. To respond effectively, we need to understand the built environment—where people live, work, and the critical infrastructure they rely on. Yet, a major discrepancy exists in the way data about buildings are collected across the United States. There is no harmonization in what data are recorded by city, county, or state governments, let alone at the national scale. We demonstrate how existing open-source datasets can be spatially integrated and subsequently used as training data for machine learning (ML) models to predict building occupancy type, a major component needed for disaster preparedness and decision-making. Multiple ML algorithms are compared. We address strategies to handle significant class imbalance and introduce Bayesian neural networks to handle prediction uncertainty. The 100-year flood in North Carolina is presented as a practical application in disaster preparedness.
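As a rough illustration of the two modeling ingredients named above, the sketch below pairs a frequency-weighted loss for class imbalance with Monte Carlo dropout as one approximate Bayesian treatment of prediction uncertainty; the feature dimension, class counts, and the MC-dropout choice are assumptions rather than the paper’s exact setup.

```python
import torch
import torch.nn as nn

K, D = 5, 12   # occupancy classes, tabular features (assumed sizes)
net = nn.Sequential(
    nn.Linear(D, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, K),
)

# Class imbalance: weight the loss inversely to class frequency.
class_counts = torch.tensor([5000.0, 1200.0, 300.0, 80.0, 20.0])
loss_fn = nn.CrossEntropyLoss(weight=class_counts.sum() / (K * class_counts))

def predict_with_uncertainty(x, samples=50):
    """MC dropout: keep dropout active at inference and average
    several stochastic forward passes to estimate uncertainty."""
    net.train()   # leaves dropout enabled
    probs = torch.stack([net(x).softmax(-1) for _ in range(samples)])
    return probs.mean(0), probs.std(0)   # predictive mean and spread

mean, spread = predict_with_uncertainty(torch.rand(8, D))
```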
The Cambridge Handbook of Emerging Issues at the Intersection of Commercial Law and Technology is a timely and interdisciplinary examination of the legal and societal implications of nascent technologies in the global commercial marketplace. Featuring contributions from leading international experts in the field, this volume offers fresh and diverse perspectives on a range of topics, including non-fungible tokens, blockchain technology, the Internet of Things, product liability for defective goods, smart readers, liability for artificial intelligence products and services, and privacy in the era of quantum computing. This work is an invaluable resource for academics, policymakers, and anyone seeking a deeper understanding of the social and legal challenges posed by technological innovation, as well as the role of commercial law in facilitating and regulating emerging technologies.
Analysing hierarchical design processes is difficult due to the technical and organizational dependencies that span multiple levels. The V-Model of Systems Engineering considers multiple levels; it is, however, not quantitative. We propose a model for simulating hierarchical product design processes based on the V-Model. It includes, first, a product model which structures physical product properties in a hierarchical dependency graph; second, an organizational model which formalizes the assignment of stakeholder responsibility; third, a process model which describes the top-down and bottom-up flow of design information; and fourth, an actor model which simulates the combination of product, organization, and process by using computational agents. The quantitative model is applied to a simple design problem with three stakeholders and three separate areas of responsibility. The results show the following phenomena observed in real-world product design: design iterations occur naturally as a consequence of the designers’ individual behaviour; inconsistencies in designs emerge and are resolved. The simple design problem is used to compare point-based and interval-based requirement decomposition quantitatively. It is shown that development time can be reduced significantly by using interval-based requirements if requirements are always broken down immediately.
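The point-based versus interval-based contrast can be made concrete with a toy consistency check: an interval requirement gives each designer slack before a bottom-up result triggers another iteration, whereas a point requirement is violated by any deviation. All property names, bounds, and values below are invented for illustration.

```python
def meets_interval(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

# Point-based decomposition: a single target per subsystem property.
point_targets = {"mass_kg": 12.0, "stiffness_kN_per_mm": 3.0}

# Interval-based decomposition: an admissible range per property.
interval_targets = {"mass_kg": (10.0, 14.0), "stiffness_kN_per_mm": (2.5, 4.0)}

achieved = {"mass_kg": 12.9, "stiffness_kN_per_mm": 3.4}  # bottom-up results
for prop, value in achieved.items():
    point_ok = value == point_targets[prop]          # almost never holds
    interval_ok = meets_interval(value, interval_targets[prop])
    print(prop, "point:", point_ok, "| interval:", interval_ok)
```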
A modeling method of multi-objective optimization design for parallel mechanisms (PMs) is proposed, and its implementation is illustrated with the 2RPU-RPS mechanism as an example. The orientation of the biased output axis on the moving platform is described by spherical attitude angles, and the kinematic model is derived through the vector method. With screw theory as the mathematical tool, a comprehensive evaluation method of kinematic performance for the PM is established. On this basis, the expensive constrained multi-objective optimization model of the dimensional parameters for the discussed mechanism is constructed. The NSDE-II algorithm, formed by replacing the genetic algorithm operators in the non-dominated sorting genetic algorithm II (NSGA-II) with differential evolution (DE) operators, is utilized to solve this multi-objective optimization problem, thus obtaining multiple Pareto optimal solutions of engineering significance, which demonstrates the feasibility and effectiveness of the proposed modeling method and algorithm. Moreover, the normalization coverage space and the minimum adjacent vector angle are proposed to evaluate the computational performance of NSDE-II. Finally, the potential engineering application value of the optimized 2RPU-RPS PM is presented.
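For readers unfamiliar with the variation step, the sketch below implements the classic DE/rand/1/bin operator, i.e., the kind of differential evolution variation that replaces NSGA-II’s crossover and mutation in NSDE-II; the exact DE variant and the F/CR settings used by the authors are not stated in the abstract, so these are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_rand_1_bin(pop, F=0.8, CR=0.9):
    """DE/rand/1/bin: differential mutation plus binomial crossover."""
    n, d = pop.shape
    trial = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
        cross = rng.random(d) < CR                   # binomial crossover mask
        cross[rng.integers(d)] = True                # guarantee one gene
        trial[i, cross] = mutant[cross]
    return trial

# 20 candidate vectors of 4 dimensional parameters (illustrative sizes).
offspring = de_rand_1_bin(rng.random((20, 4)))
```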
This paper initiates the explicit study of face numbers of matroid polytopes and their computation. We prove that, for the large class of split matroid polytopes, their face numbers depend solely on the number of cyclic flats of each rank and size, together with information on the modular pairs of cyclic flats. We provide a formula which allows us to calculate $f$-vectors without the need of taking convex hulls or computing face lattices. We discuss the particular cases of sparse paving matroids and rank two matroids, which are of independent interest due to their appearances in other combinatorial and geometric settings.
In this paper, fractional-order (FO), intelligent, and robust sliding mode control (SMC) and stabilization of inherently nonlinear, multi-input, multi-output 6-DOF robot manipulators are investigated. To ensure robust control and better performance of the robot system, various control schemes are explored. First, a sliding proportional-integral-derivative (PID) surface is conceived, and then its FO counterpart is developed. In the proposed SMC, the reaching phase is fast and chattering is abated in the sliding phase. In particular, the discontinuity in the SMC is avoided by means of a boundary layer obtained by combining the sigmoid function with fuzzy logic to eliminate the chattering phenomenon. A hybrid tuning method consisting of gray wolf optimization and particle swarm optimization (GWO-PSO) algorithms is applied to tune the parameters of the PID sliding mode control (PIDSMC), FO PIDSMC (FOPIDSMC), fuzzy PIDSMC (FPIDSMC), and FO fuzzy PIDSMC (FOFPIDSMC) controllers. In the simulation results, the tuned FOFPIDSMC controller consistently outperforms the PIDSMC, FOPIDSMC, and FPIDSMC controllers tuned by GWO-PSO in dynamic performance, trajectory tracking, disturbance rejection, and mass uncertainty scenarios. A thorough performance analysis shows that improvements of 91.93% and 44.13% are obtained in the mean absolute error (MAE) and the root mean square (RMS) of the joint torques, respectively, when moving from the PIDSMC to the FOFPIDSMC. Finally, the simulation outcomes reveal the superior aspects of the designed FOFPIDSMC and also demonstrate that the FOFPIDSMC controller enhances the dynamic performance of the six-revolute Universal Robots 5 (6R UR5) manipulator under a variety of operating conditions.
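A minimal sketch of the chattering-avoidance idea: a PID sliding surface whose discontinuous sign function is replaced by a sigmoid boundary layer. The scalar setting, gains, and sigmoid form are illustrative; the paper’s controllers additionally use fractional-order operators and fuzzy logic, with all parameters tuned by GWO-PSO.

```python
import numpy as np

def smooth_sign(x):
    """Sigmoid stand-in for sign(x): continuous inside the boundary layer."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def pid_smc(e, e_int, e_dot, Kp=5.0, Ki=1.0, Kd=2.0, K=10.0, phi=0.05):
    s = Kp * e + Ki * e_int + Kd * e_dot   # PID sliding surface
    return -K * smooth_sign(s / phi)       # chattering-free reaching law

u = pid_smc(e=0.1, e_int=0.02, e_dot=-0.3)   # control torque for one joint
```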
Sexual and gender-based violence (SGBV) is a multifaceted, endemic, and nefarious phenomenon that remains poorly measured and understood, despite greater global awareness of the issue. While efforts to improve data collection methods have increased, including the implementation of the Demographic and Health Survey (DHS) in some countries, the lack of reliable SGBV data remains a significant challenge to developing targeted policy interventions and advocacy initiatives. Using a recent mixed-methods research project conducted by the authors in Sierra Leone as a case study, this paper discusses the current status of SGBV data, the challenges faced, and potential research approaches.
Climate change exacerbates existing risks and vulnerabilities for people globally, and migration is a longstanding adaptation response to climate risk. The mechanisms through which climate change shapes human mobility are complex, however, and gaps in data and knowledge persist. In response to these gaps, the United Nations Development Programme’s (UNDP) Predictive Analytics, Human Mobility, and Urbanization Project employed a hybrid approach that combined predictive analytics with participatory foresight to explore climate change-related mobility in Pakistan and Viet Nam from 2020 to 2050. Focusing on Karachi and Ho Chi Minh City, the project estimated temporal and spatial mobility patterns under different climate change scenarios and evaluated the impact of such in-migration across key social, political, economic, and environmental domains. Findings indicate that net migration into these cities could significantly increase under extreme climate scenarios, highlighting both the complex spatial patterns of population change and the potential for anticipatory policies to mitigate these impacts. While extensive research exists on foresight methods and theory, process reflections are underrepresented. The innovative approach employed within this project offers valuable insights on foresight exercise design choices and their implications for effective stakeholder engagement, as well as the applicability and transferability of insights in support of policymaking. Beyond substantive findings, this paper offers a critical reflection on the methodological alignment of data-driven and participatory foresight with the aim of anticipatory policy ideation, seeking to contribute to the enhanced effectiveness of foresight practices.
This informative Handbook provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI's impact on society and how it should be regulated. This title is also available as Open Access on Cambridge Core.
One of the key challenges of regulating internet platforms is international cooperation. This chapter offers some insights into platform responsibility reforms by relying on forty years of experience in regulating cross-border financial institutions. Internet platforms and cross-border banks have much in common from a regulatory perspective. They both operate in an interconnected global market that lacks a supranational regulatory framework. And they also tend to generate cross-border spillovers that are difficult to control. Harmful content and systemic risks – the two key regulatory challenges for platforms and banks, respectively – can be conceptualized as negative externalities.
One of the main lessons learned in regulating cross-border banks is that, under certain conditions, international regulatory cooperation is possible. We have witnessed that in the successful design and implementation of the Basel Accord – the global banking standard that regulates banks’ solvency and liquidity risks. In this chapter, I will analyze the conditions under which cooperation can ensue and what the history of the Basel Accord can teach platform responsibility reforms. In the last part, I will discuss what can be done when cooperation is more challenging.
Digital behavior does not occur in isolation; rather, it is ever-present, vying for a user’s time against alternative behaviors. The determinants of behavior choice are diverse, yet in the behavioral sciences this “behavioral competition” is operationalized through alterations in the value of the reinforcer contingent on latent behaviors. In this competitive environment, where the user has limited time to enact certain behaviors, they must choose by seeking a balance that maximizes their satisfaction (the law of diminishing marginal utility and the utility maximization model). This shifting process is known as behavioral contrast: a variation in some behavioral component due to a change in the value of the reinforcers associated with any of the present behaviors. In the design of digital behaviors, understanding this process is fundamental, as it directs the designer toward potential enhancements of the digital service (through improving its reinforcers) to better its positioning against competitors.
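A toy rendering of the utility-maximization view described above: a fixed time budget is allocated minute by minute to whichever behavior currently offers the highest marginal utility under diminishing returns, so raising one reinforcer’s value reallocates time away from competitors (the behavioral-contrast effect). The behaviors, values, and log-shaped utility are illustrative assumptions.

```python
# Time allocation under diminishing marginal utility (illustrative values).
reinforcer_value = {"app_A": 5.0, "app_B": 3.0, "offline": 4.0}
time_spent = {behavior: 0.0 for behavior in reinforcer_value}

def marginal_utility(behavior):
    # Each extra minute on a behavior is worth less than the last.
    return reinforcer_value[behavior] / (1.0 + time_spent[behavior])

for _ in range(60):   # sixty one-minute choices
    best = max(reinforcer_value, key=marginal_utility)
    time_spent[best] += 1.0

print(time_spent)
# Improving app_B's reinforcer (raising its value) and rerunning shifts
# minutes toward app_B: the behavioral contrast described above.
```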