We interrogate efforts to legislate artificial intelligence (AI) through Canada’s Artificial Intelligence and Data Act (AIDA) and argue it represents a series of missed opportunities that so delayed the Act that it died. We note how much of this bill was explicitly tied to economic development and implicitly tied to a narrow jurisdictional form of shared prosperity. Instead, we contend that the benefits of AI are not shared but disproportionately favour specific groups, in this case, the AI industry. This trend appears typical of many countries’ AI and data regulations, which tend to privilege the few, despite promises to favour the many. We discuss the origins of AIDA, drafted by Canada’s federal Department for Innovation, Science and Economic Development (ISED). We then consider four problems: (1) AIDA relied on public trust in a digital and data economy; (2) ISED tried to both regulate and promote AI and data; (3) Public consultation was insufficient for AIDA; and (4) Workers’ rights in Canada and worldwide were excluded from AIDA. Without strong checks and balances built into regulation like AIDA, innovation will fail to deliver on its claims. We recommend that the Canadian government and, by extension, other governments invest in an AI act that prioritises: (1) Accountability mechanisms and tools for the public and private sectors; (2) Robust workers’ rights in terms of data handling; and (3) Meaningful public participation in all stages of legislation. These policies are essential to countering wealth concentration in the industry, which would otherwise stifle progress and widespread economic growth.
This paper presents an efficient trajectory planning method for a 4-DOF robotic arm designed for pick-and-place manipulation tasks. The method addresses challenges faced by existing approaches: traditional optimization methods struggle with high dimensionality, while data-driven methods require costly data collection. The proposed approach leverages Bézier curves for computationally efficient, smooth trajectory generation, minimizing abrupt changes in motion. When continuous solutions for the end-effector angle are unavailable, joint angles are interpolated using Bézier or Hermite interpolation. Additionally, we use custom metrics to evaluate the deviation between the interpolated trajectory and the original trajectory, as well as the overall smoothness of the path. When a continuous solution exists, the trajectory is treated as a Gaussian process, where a prior factor is generated using the centerline. This prior is then combined with a smoothness factor to optimize the trajectory via stochastic gradient descent, ensuring it remains as smooth as possible within the feasible solution space. The method is evaluated through simulations in Nvidia Isaac Sim; the results highlight the method’s suitability, and future work will explore enhancements in prior trajectory integration and smoothing techniques.
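The abstract does not give implementation details; the minimal Python sketch below only illustrates the underlying idea of Bézier-based trajectory generation, evaluating a cubic Bézier curve through hypothetical joint-space control points (all values are placeholders, not the paper’s setup).

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n_samples=100):
    """Evaluate a cubic Bezier curve defined by four control points.

    Each control point is a vector of joint angles; the curve gives a
    smooth interpolation with continuous derivatives, avoiding abrupt
    changes in motion.
    """
    t = np.linspace(0.0, 1.0, n_samples)[:, None]  # curve parameter
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical 4-DOF joint-angle control points (radians).
q_start  = np.array([0.0, 0.2, -0.1, 0.0])
q_ctrl_a = np.array([0.3, 0.5, -0.3, 0.2])
q_ctrl_b = np.array([0.7, 0.6, -0.5, 0.4])
q_goal   = np.array([1.0, 0.4, -0.2, 0.6])

trajectory = cubic_bezier(q_start, q_ctrl_a, q_ctrl_b, q_goal)
print(trajectory.shape)  # (100, 4): 100 samples of 4 joint angles
```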
Veltman semantics is the basic Kripke-like semantics for interpretability logic. Verbrugge semantics is a generalization of Veltman semantics. An appropriate notion of bisimulation between a Verbrugge model and a Veltman model is developed in this paper. We show that each Verbrugge model can be transformed to a bisimilar Veltman model.
The problem of reconstructing a distribution with bounded support from its moments is practically relevant in many fields, such as chemical engineering, electrical engineering, and image analysis. The problem is closely related to a classical moment problem, called the truncated Hausdorff moment problem (THMP). We call a method that finds or approximates a solution to the THMP a Hausdorff moment transform (HMT). In practice, selecting the right HMT for specific objectives remains a challenge. This study introduces a systematic and comprehensive method for comparing HMTs based on accuracy, computational complexity, and precision requirements. To enable fair comparisons, we present approaches for generating representative moment sequences. The study also enhances existing HMTs by reducing their computational complexity. Our findings show that the performances of the approximations differ significantly in their convergence, accuracy, and numerical complexity and that the decay order of the moment sequence strongly affects the accuracy requirement.
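For orientation, the truncated Hausdorff moment problem admits a compact statement; the normalisation to the unit interval below is the usual convention and is an assumption here rather than something fixed by the abstract.

```latex
% Truncated Hausdorff moment problem (THMP): given finitely many moments
% m_0, m_1, \ldots, m_N, find a measure \mu supported on [0,1] with
\[
  m_k \;=\; \int_0^1 x^k \,\mathrm{d}\mu(x), \qquad k = 0, 1, \ldots, N .
\]
% In the abstract's terminology, a Hausdorff moment transform (HMT) is any
% method that recovers or approximates such a \mu from (m_0, \ldots, m_N).
```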
This commentary examines the dual role of artificial intelligence (AI) in shaping electoral integrity and combating misinformation, with a focus on the 2025 Philippine elections. It investigates how AI has been weaponised to manipulate narratives and suggests strategies to counteract disinformation. Drawing on case studies from the Philippines, Taiwan, and India—regions in the Indo-Pacific with vibrant democracies, high digital engagement, and recent experiences with election-related misinformation—it highlights the risks of AI-driven content and the innovative measures used to address its spread. The commentary advocates for a balanced approach that incorporates technological solutions, regulatory frameworks, and digital literacy to safeguard democratic processes and promote informed public participation. The rise of generative AI tools has significantly amplified the risks of disinformation, such as deepfakes, and algorithmic biases. These technologies have been exploited to influence voter perceptions and undermine democratic systems, creating a pressing need for protective measures. In the Philippines, social media platforms have been used to spread revisionist narratives, while Taiwan employs AI for real-time fact-checking. India’s proactive approach, including a public misinformation tipline, showcases effective countermeasures. These examples highlight the complex challenges and opportunities presented by AI in different electoral contexts. The commentary stresses the need for regulatory frameworks designed to address AI’s dual-use nature, advocating for transparency, real-time monitoring, and collaboration between governments, civil society, and the private sector. It also explores the criteria for effective AI solutions, including scalability, adaptability, and ethical considerations, to guide future interventions. Ultimately, it underscores the importance of digital literacy and resilient information ecosystems in supporting informed democratic participation.
Increasing sustainability expectations require support for the design of systems that are reactive in minimizing potential negative impact and proactive in guiding engineering decision-making toward more value-robust long-term decisions. This article identifies a gap in the methodological support for the design of circular systems, building on the hypothesis that computer-based simulation models will drive the development of more value-robust systems designed to behave positively in a changeable operational environment during the whole lifecycle. The article presents a framework for value-robust circular systems design, complementing the current approaches for circular design aimed at increasing decision-makers’ awareness about the complexity of circular systems to be designed. The framework is theoretically described and demonstrated through its applications in four case studies in the field of construction machinery investigating new circular solutions for the future of mining, quarrying and road construction. The framework supports the development of more resilient and sustainable systems, strengthening the feedback loop between exploring new technologies, proposing innovative concepts and evaluating system performance.
This article surveys spatial music and sonic art influenced by the traditional Japanese concept of ma – translated as space, interval, or pause – against the cultural backdrop of Shintoism and Zen Buddhism. Works by Jōji Yuasa, Midori Takada, Michael Fowler, Akiko Hatakeyama, Kaija Saariaho and Jim Franklin created in conscious engagement with ma are analysed with respect to diverse manifestations of ma in Japanese arts and social sciences, including theatre, poetry, painting, rock garden, shakuhachi and psychotherapy. Jean-Baptiste Barrière provided the Max patch for Saariaho’s Only the Sound Remains (2015) for this survey. I propose a framework of six interlinking dimensions of ma – temporal, physical, musical, semantic, therapeutic and spiritual – for discussing creative approaches to ma, alongside their resonance with Hisamatsu Shin’ichi’s seven interconnected characteristics of Zen art: Asymmetry, Simplicity, Austere Sublimity/Lofty Dryness, Naturalness, Subtle Profundity/Deep Reserve, Freedom from Attachment and Tranquility. The aim is first to examine how each composer uses different techniques, technologies and systems to engage with specific dimensions of ma. Second, to illuminate possible futures of exploring these dimensions in spatial music and sonic art through three methods: Inspiration, Transmediation and Expansion.
The rise of generative artificial intelligences (AIs) has quickly made them auxiliary tools in numerous fields, especially creative ones. Many scientific works already compare the creative capacity of AIs with that of human beings. In the field of Engineering Design, however, numerous design methodologies have been developed that enhance the designer’s creativity during the idea-generation phase. Therefore, this work aims to extend previous studies by guiding a Generative Pre-trained Transformer 4 (GPT-4)-based generative AI to use a design methodology to generate creative concepts. The results suggest that these types of tools can be useful for designers in that they can inspire novel ideas, but they still lack the necessary capacity to discern technically feasible ideas.
A structurally transforming multi-mode product can realize a changing set of functions across its modes, replacing multiple related products while offering increased cost, space, and time efficiency. However, there is a lack of connected methods that address the additional design complexities due to the product’s physical transformations and the resulting structural component-sharing between modes. A framework, grounded in standard design practice and built upon existing methods, is proposed to help navigate the two most impacted design stages: 1. Problem Definition and 2. Conceptual Design. The Problem Definition stage in this new framework involves identifying the external factor that determines the product’s modes and defining the functional requirements for the modes and transformation methods. The Conceptual Design stage involves iteratively linking conceptualized forms of each mode to adjacent modes through conceptualized transformation methods. The framework is demonstrated in a case study involving the design of a structurally transforming multi-mode piece of children’s furniture that transforms between a cradle, a floor seat and a multipurpose toddler step stool. The proposed framework is a promising step toward systematically, cohesively, and comprehensively addressing design challenges during the development of a wide variety of structurally transforming multi-mode products, therefore facilitating better, more effective product design.
Artificial neural networks are increasingly used for geophysical modeling to extract complex nonlinear patterns from geospatial data. However, it is difficult to understand how networks make predictions, limiting trust in the model, debugging capacity, and physical insights. EXplainable Artificial Intelligence (XAI) techniques expose how models make predictions, but XAI results may be influenced by correlated features. Geospatial data typically exhibit substantial autocorrelation. With correlated input features, learning methods can produce many networks that achieve very similar performance (e.g., arising from different initializations). Since the networks capture different relationships, their attributions can vary. Correlated features may also cause inaccurate attributions because XAI methods typically evaluate isolated features, whereas networks learn multifeature patterns. Few studies have quantitatively analyzed the influence of correlated features on XAI attributions. We use a benchmark framework of synthetic data with increasingly strong correlation, for which the ground truth attribution is known. For each dataset, we train multiple networks and compare XAI-derived attributions to the ground truth. We show that correlation may dramatically increase the variance of the derived attributions, and investigate the cause of the high variance: is it because different trained networks learn highly different functions or because XAI methods become less faithful in the presence of correlation? Finally, we show that XAI applied to superpixels, instead of single grid cells, substantially decreases attribution variance. Our study is the first to quantify the effects of strong correlation on XAI, to investigate the reasons that underlie these effects, and to offer a promising way to address them.
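The abstract does not name the network architectures or attribution methods; as a hedged illustration of the core experiment, the sketch below trains several small networks on synthetic data with a strongly correlated distractor feature and measures how much a simple attribution (permutation importance) varies across random initializations. All dataset parameters and model sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: two strongly correlated inputs, but only x0 drives the target.
n = 2000
x0 = rng.normal(size=n)
x1 = 0.95 * x0 + 0.05 * rng.normal(size=n)   # correlated "distractor" feature
X = np.column_stack([x0, x1])
y = np.sin(x0) + 0.1 * rng.normal(size=n)    # ground truth depends on x0 only

# Train several networks that differ only in their random initialization,
# then compare the attribution each one assigns to the two features.
attributions = []
for seed in range(5):
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
    net.fit(X, y)
    imp = permutation_importance(net, X, y, n_repeats=10, random_state=seed)
    attributions.append(imp.importances_mean)

attributions = np.array(attributions)
print("per-seed attributions:\n", attributions)
print("variance across seeds:", attributions.var(axis=0))
```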
The ARLE GPS tool provides computer-aided design support for solving problems with the spatial planning and design of houses, using a robust design model with physical-biological and cost strategies. This enables architects to eliminate uncertainties and to make robust decisions by applying computational thinking to decision making and action implementation. This support enables the architect to deal with the complexity arising from the interrelationships between the design variables and transforms the spatial planning problem, which is conceptualized as ill-defined, into a well-defined problem. A scientific method is used, based on mathematical modeling of the action-decision field of design geometric variables, rather than a drawn method involving sketches. This tool acts as an aid mechanism, an assembler, a simulator, and an evaluator of geometric prototypes (virtual or graphical) and can be used to systematize the assembly or modeling of the FPL structure, particularly with respect to the performance required of a house. The candidate solution provided by the tool defines the spatial dimensions of the rooms in the house, the topological data of the assembly sequence, and the connections between rooms. The architect converts this virtual prototype into a graphical FPL prototype, which is then modeled, refined and evaluated continuously and objectively with the aid of ARLE GPS until a solution is obtained that satisfies the requirements, constraints and objectives of the problem. In this way, a solution to the problem (i.e., the project) can be captured and generated.
Recommender systems are ubiquitous in modern life and are one of the main monetization channels for Internet technology giants. This book helps graduate students, researchers and practitioners to get to grips with this cutting-edge field and build the thorough understanding and practical skills needed to progress in the area. It not only introduces the applications of deep learning and generative AI for recommendation models, but also focuses on the industrial architecture of recommender systems. The authors include a detailed discussion of the implementation solutions used by companies such as YouTube, Alibaba, Airbnb and Netflix, as well as the related machine learning frameworks, including model serving, model training, feature storage and data stream processing.
Emerging wildlife pathogens often display geographic variability due to landscape heterogeneity. Modeling approaches capable of learning complex, non-linear spatial dynamics of diseases are needed to rigorously assess and mitigate the effects of pathogens on wildlife health and biodiversity. We propose a novel machine learning (ML)-guided approach that leverages prior physical knowledge of ecological systems, using partial differential equations. We present our approach, taking advantage of the universal function approximation property of neural networks for flexible representation of the underlying dynamics of the geographic spread and growth of wildlife diseases. We demonstrate the benefits of our approach by comparing its forecasting power with commonly used methods and highlighting the insights obtained on disease dynamics. Additionally, we establish theoretical guarantees for the approximation error of our model. We illustrate the implementation of our ML-guided approach using data from white-nose syndrome (WNS) outbreaks in bat populations across the US. WNS is an infectious fungal disease responsible for significant declines in bat populations. Our results on WNS are useful for disease surveillance and bat conservation efforts. Our methods can be broadly used to assess the effects of environmental and anthropogenic drivers impacting wildlife health and biodiversity.
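The abstract does not specify the PDE family or network architecture; the sketch below is a minimal physics-informed formulation under the assumption of a one-dimensional Fisher-KPP reaction-diffusion model for spread and growth, with hypothetical rate constants and no observational data term.

```python
import torch

torch.manual_seed(0)

# Small network representing disease prevalence u(x, t); the reaction-diffusion
# form below (Fisher-KPP) is an illustrative assumption, not the paper's model.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
D, r = 0.1, 1.0                      # hypothetical diffusion and growth rates
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t):
    """Residual of u_t - D*u_xx - r*u*(1 - u) at collocation points."""
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - D * u_xx - r * u * (1 - u)

for step in range(500):
    x = torch.rand(256, 1, requires_grad=True)   # collocation points in space
    t = torch.rand(256, 1, requires_grad=True)   # and time
    loss = pde_residual(x, t).pow(2).mean()      # physics loss only; data terms omitted
    opt.zero_grad()
    loss.backward()
    opt.step()
```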
As the global population ages, effective home healthcare solutions become essential. Over a decade ago, ambient-assisted living (AAL) emerged as a promising solution, especially when combined with the potential of the Internet of Things (IoT) to revolutionize healthcare delivery. However, integrating diverse smart home devices with healthcare systems poses challenges regarding interoperability and real-time, context-aware responses. Addressing these challenges, this study introduces an ontology for AAL that seamlessly merges IoT and Smart Home ontologies with the established healthcare ontology, SNOMED CT. This ontology-centric approach facilitates semantic interoperability and knowledge sharing, paving the way for more personalized healthcare delivery. The core of this work lies in developing an AAL monitoring system grounded in this ontology. By incorporating Semantic Web Rule Language (SWRL) rules, the system can provide context-sensitive automated alerts and responses, taking into account patient-specific attributes, household features, and instantaneous sensor data. Empirical testing in the Halmstad Intelligent Home (HINT) highlights the system’s viability for practical deployment. Preliminary results indicate that the proposed integrative ontology-driven strategy holds significant potential to enhance healthcare services in AAL environments, marking an essential step towards achieving personalized, patient-centric care.
Deep geological repositories are critical for the long-term storage of hazardous materials, where understanding the mechanical behavior of emplacement drifts is essential for safety assurance. This study presents a surrogate modeling approach for the mechanical response of emplacement drifts in rock salt formations, utilizing Gaussian processes (GPs). The surrogate model serves as an efficient substitute for high-fidelity mechanical simulations in many-query scenarios, including time-dependent sensitivity analyses and calibration tasks. By significantly reducing computational demands, this approach facilitates faster design iterations and enhances the interpretation of monitoring data. The findings indicate that only a few key parameters are sufficient to accurately reflect in-situ conditions in complex rock salt models. Identifying these parameters is crucial for ensuring the reliability and safety of deep geological disposal systems.
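As a hedged illustration of the surrogate idea (the stand-in simulator, parameter range and kernel below are placeholders, not the study’s model), a Gaussian process can be fit to a handful of expensive simulation runs and then queried cheaply inside sensitivity or calibration loops:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_simulation(theta):
    """Placeholder for a high-fidelity mechanical simulation; here, a cheap
    analytic stand-in mapping a material parameter to drift closure (mm)."""
    return 50.0 * np.exp(-theta) + 2.0 * np.sin(3.0 * theta)

# A few training runs of the "simulator" over the parameter range of interest.
theta_train = rng.uniform(0.1, 2.0, size=(12, 1))
y_train = expensive_simulation(theta_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(theta_train, y_train)

# The surrogate can now be queried thousands of times at negligible cost,
# e.g., inside a sensitivity analysis or calibration loop.
theta_query = np.linspace(0.1, 2.0, 1000).reshape(-1, 1)
mean, std = gp.predict(theta_query, return_std=True)
print(mean[:3], std[:3])
```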
Aerosol-cloud interactions contribute significant uncertainty to modern climate model predictions. Analysis of complex observed aerosol-cloud parameter relationships is a crucial step toward reducing this uncertainty. Here, we apply two machine learning methods to explore variability in in-situ observations from the NASA ACTIVATE mission. These observations consist of flights over the Western North Atlantic Ocean, providing a large repository of data including aerosol, meteorological, and microphysical conditions in and out of clouds. We investigate this dataset using principal component analysis (PCA), a linear dimensionality reduction technique, and an autoencoder, a deep learning non-linear dimensionality reduction technique. We find that we can reduce the dimensionality of the parameter space by more than a factor of 2 and verify that the deep learning method outperforms a PCA baseline by two orders of magnitude. Analysis in the low-dimensional space of both these techniques reveals two consistent, physically interpretable regimes: a low pollution regime and an in-cloud regime. Through this work, we show that unsupervised machine learning techniques can learn useful information from in-situ atmospheric observations and provide interpretable results of low-dimensional variability.
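As a hedged sketch of the two techniques (feature counts, bottleneck size and architecture are illustrative assumptions, not the ACTIVATE configuration), the comparison below reduces synthetic data to two dimensions with PCA and with a small neural autoencoder, and contrasts their reconstruction errors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for in-situ observations: 10 measured variables driven
# non-linearly by 2 latent factors (e.g., pollution level and cloud state).
latent = rng.normal(size=(5000, 2))
X = np.column_stack([np.sin(latent[:, 0]), latent[:, 0] * latent[:, 1],
                     np.cos(latent[:, 1])] +
                    [latent @ rng.normal(size=2) for _ in range(7)])
X = StandardScaler().fit_transform(X)

# Linear baseline: project to 2 components and reconstruct.
pca = PCA(n_components=2).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))

# Non-linear alternative: an MLP trained to reproduce its input through a
# 2-unit bottleneck acts as a simple autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(16, 2, 16), max_iter=500, random_state=0)
ae.fit(X, X)
X_ae = ae.predict(X)

print("PCA reconstruction MSE:        ", np.mean((X - X_pca) ** 2))
print("Autoencoder reconstruction MSE:", np.mean((X - X_ae) ** 2))
```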
The 1994 discovery of Shor's quantum algorithm for integer factorization—an important practical problem in the area of cryptography—demonstrated quantum computing's potential for real-world impact. Since then, researchers have worked intensively to expand the list of practical problems that quantum algorithms can solve effectively. This book surveys the fruits of this effort, covering proposed quantum algorithms for concrete problems in many application areas, including quantum chemistry, optimization, finance, and machine learning. For each quantum algorithm considered, the book clearly states the problem being solved and the full computational complexity of the procedure, making sure to account for the contribution from all the underlying primitive ingredients. Separately, the book provides a detailed, independent summary of the most common algorithmic primitives. It has a modular, encyclopedic format to facilitate navigation of the material and to provide a quick reference for designers of quantum algorithms and quantum computing researchers.
In this original and modern book, the complexities of quantum phenomena and quantum resource theories are meticulously unravelled, from foundational entanglement and thermodynamics to the nuanced realms of asymmetry and beyond. Ideal for those aspiring to grasp the full scope of quantum resources, the text integrates advanced mathematical methods and physical principles within a comprehensive, accessible framework. Containing over 760 exercises throughout to develop and expand key concepts, it offers readers an unrivalled understanding of the topic. With its unique blend of pedagogical depth and cutting-edge research, it not only paves the way for a deep understanding of quantum resource theories but also illuminates the path toward innovative research directions. Providing the latest developments in the field as well as established knowledge within a unified framework, this book will be indispensable to students, educators, and researchers interested in quantum science’s profound mysteries and applications.