To address the accuracy degradation, localization drift, and even outright failure that Simultaneous Localization and Mapping (SLAM) algorithms suffer in unstructured environments with sparse geometric features, such as outdoor parks, highways, and urban roads, a multi-metric light detection and ranging (LiDAR) SLAM system based on the fusion of geometric and intensity features is proposed. Firstly, an adaptive method for extracting multiple types of geometric features and salient intensity features is proposed to address the issue of insufficient sparse feature extraction. In addition to traditional edge and planar features, vertex features are extracted to fully utilize the geometric information, and intensity edge features are extracted in areas with significant intensity changes to add a further level of perception of the environment. Secondly, state estimation uses a multi-metric error formulation based on point-to-point, point-to-line, and point-to-plane distances, together with a two-step decoupling strategy that enhances pose estimation accuracy. Finally, qualitative and quantitative experiments on public datasets demonstrate that, compared to state-of-the-art pure geometric and intensity-assisted LiDAR SLAM algorithms, the proposed algorithm achieves superior localization accuracy and mapping clarity, with an improvement in absolute trajectory error (ATE) of 28.93% and real-time performance with processing times of at most 62.9 ms. Additionally, tests conducted in real campus environments further validate the effectiveness of the approach in complex, unstructured scenarios.
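As a concrete reading of the multi-metric error terms, the sketch below computes the three point-wise residuals named above in NumPy; the function names and arguments are our own illustration, not the paper's implementation.

```python
import numpy as np

# Illustrative residuals for the three metrics (point-to-point,
# point-to-line, point-to-plane); an illustrative sketch only.

def point_to_point(p, q):
    """Euclidean distance between a query point p and its match q."""
    return np.linalg.norm(p - q)

def point_to_line(p, a, b):
    """Distance from p to the line through edge points a and b."""
    d = (b - a) / np.linalg.norm(b - a)        # unit direction of the edge
    return np.linalg.norm(np.cross(p - a, d))  # rejection of (p - a) from d

def point_to_plane(p, q, n):
    """Distance from p to the plane through q with unit normal n."""
    return abs(np.dot(p - q, n))

p = np.array([1.0, 0.5, 0.2])
print(point_to_plane(p, np.zeros(3), np.array([0.0, 0.0, 1.0])))  # 0.2
```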
As artificial intelligence grows, human–robot collaboration becomes more common for efficient task completion. Effective communication between humans and AI-assisted robots is crucial for maximizing collaboration potential. This study explores human–robot interactions, focusing on the differing mental models used by humans and collaborative robots. Humans communicate using knowledge, skills, and emotions, while robotic systems rely on algorithms and technology. This communication disparity can hinder productivity. Integrating emotional intelligence with cognitive intelligence is key for successful collaboration. To address this, a communication model tailored for human–robot teams is proposed, incorporating robots’ observation of human emotions to optimize workload allocation. The model’s efficacy is demonstrated through a case study in an SAP system. By enhancing understanding and proposing practical solutions, this study contributes to optimizing teamwork between humans and AI-assisted robots.
Static analysis is an essential component of many modern software development tools. Unfortunately, the ever-increasing complexity of static analyzers makes their coding error-prone. Even analysis tools based on rigorous mathematical techniques, such as abstract interpretation, are not immune to bugs. Ensuring the correctness and reliability of software analyzers is critical if they are to be inserted in production compilers and development environments. While compiler validation has seen notable success, formal validation of static analysis tools remains relatively unexplored. In this paper we present checkification, a simple, automatic method for testing static analyzers. Broadly, it consists in checking, over a suite of benchmarks, that the properties inferred statically are satisfied dynamically. The main advantage of our approach lies in its simplicity, which stems directly from framing it within the Ciao assertion-based validation framework and its blended static/dynamic assertion checking approach. We demonstrate that in this setting, the analysis can be tested with little effort by combining the following components already present in the framework: 1) the static analyzer, which outputs its results as the original program source with assertions interspersed; 2) the assertion run-time checking mechanism, which instruments a program to ensure that no assertion is violated at run time; 3) the random test case generator, which generates random test cases satisfying the properties present in assertion preconditions; and 4) the unit-test framework, which executes those test cases. We have applied our approach to the CiaoPP static analyzer, resulting in the identification of many bugs with reasonable overhead. Most of these bugs have been either fixed or confirmed, helping us detect a range of errors not only related to analysis soundness but also within other aspects of the framework.
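The checkification loop can be pictured with a toy, self-contained example: a (pretend) static analysis result is turned into a runtime assertion, and random inputs try to violate it. The Python setting and all names are ours; the real system operates on Ciao assertions and the CiaoPP analyzer.

```python
import random

# Toy rendering of checkification: check dynamically, on random inputs,
# a property that a "static analyzer" claims to have inferred.

def inferred_property(result):
    # Stand-in analyzer output: "this function always returns a value >= 0".
    return result >= 0

def abs_diff(x, y):
    return x - y if x > y else y - x   # satisfies the inferred property

def buggy_abs_diff(x, y):
    return x - y                       # violates it whenever x < y

def checkify(func, prop, trials=1000):
    """Return a counterexample if the static property fails dynamically."""
    for _ in range(trials):
        x, y = random.randint(-100, 100), random.randint(-100, 100)
        if not prop(func(x, y)):
            return (x, y)              # analyzer or program is wrong here
    return None

print(checkify(abs_diff, inferred_property))        # None: property holds
print(checkify(buggy_abs_diff, inferred_property))  # a counterexample, w.h.p.
```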
Every directed graph $G$ induces a locally ordered metric space $\mathcal{X}_G$ together with a local order $\tilde{\mathcal{X}}_G$ that is locally dihomeomorphic to the standard pospace $\mathbb{R}$; both are related by a morphism $\beta_G:\tilde{\mathcal{X}}_G\to\mathcal{X}_G$ satisfying a universal property. The underlying set of $\tilde{\mathcal{X}}_G$ admits a non-Hausdorff atlas $\mathcal{A}_G$ equipped with a non-vanishing vector field $f_G$; the latter is associated to $\tilde{\mathcal{X}}_G$ through the correspondence between local orders and cone fields on manifolds. The above constructions are compatible with Cartesian products, so the geometric model of a conservative program is lifted through $\beta_{G_1}\times\cdots\times\beta_{G_n}$ to a subset $M$ of the parallelized manifold $\mathcal{A}_{G_1}\times\cdots\times\mathcal{A}_{G_n}$. By assigning the suitable norm to each tangent space of $\mathcal{A}_{G_1}\times\cdots\times\mathcal{A}_{G_n}$, the length of every directed smooth path $\gamma$ on $M$, i.e. $\int|\gamma'(t)|_{\gamma(t)}\,dt$, corresponds to the execution time of the sequence of multi-instructions associated to $\gamma$. This induces a pseudometric $d_{\mathcal{A}}$ whose restrictions to sufficiently small open sets of $\mathcal{A}_{G_1}\times\cdots\times\mathcal{A}_{G_n}$ (with respect to the manifold topology, which is strictly finer than the pseudometric topology) are isometric to open subspaces of $\mathbb{R}^n$ with the $\alpha$-norm for some $\alpha\in[1,\infty]$. The transition maps of $\mathcal{A}_G$ are translations, so the representation of a tangent vector does not depend on the chart of $\mathcal{A}_G$ in which it is represented; consequently, differentiable maps between open subsets of $\mathcal{A}_{G_1}\times\cdots\times\mathcal{A}_{G_n}$ are handled as if they were maps between open subsets of $\mathbb{R}^n$. For every directed path $\gamma$ on $M$ (possibly the representation of a sequence $\sigma$ of multi-instructions), there is a shorter directed smooth path on $M$ that is arbitrarily close to $\gamma$ and that can replace $\gamma$ as a representation of $\sigma$.
In this paper, we propose a novel online informative path planner for 3-D modeling of unknown structures using micro aerial vehicles. Unlike the explore-then-exploit strategy, our planner handles exploration and coverage simultaneously and thus obtains complete and high-quality 3-D models. We first devise a set of evaluation metrics that account for the sensor's perception constraints to efficiently evaluate the coverage quality of the reconstructed surfaces. The coverage quality is then used to guide the subsequent informative path planning. Specifically, our hierarchical planner consists of two planning stages: a local coverage stage for inspecting surfaces with low coverage quality and a global exploration stage for moving the robot to unexplored regions at the global scale. The local coverage stage computes a coverage path that accounts for both the exploration and coverage objectives based on the estimated coverage quality and frontiers, while the global exploration stage maintains a sparse roadmap in the explored space to achieve fast global exploration. We conduct both simulated and real-world experiments to validate the proposed method. The results show that our planner outperforms state-of-the-art algorithms and, in particular, reduces the reconstruction error by at least 12.5% on average.
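A minimal, self-contained sketch of the two-stage decision described above, under assumed data structures and an assumed quality threshold (neither taken from the paper):

```python
import math

# Local coverage stage: visit under-covered surfaces and frontiers first;
# global exploration stage: fall back to the nearest unexplored region.

def plan_next_targets(pose, surfaces, frontiers, unexplored_nodes, q_min=0.8):
    # Surfaces whose estimated coverage quality is low, plus frontiers,
    # ordered greedily by distance from the robot.
    targets = [s["pos"] for s in surfaces if s["quality"] < q_min] + frontiers
    if targets:
        return sorted(targets, key=lambda t: math.dist(pose, t))
    # No local work left: transit through the sparse roadmap's nodes
    # toward the nearest unexplored region.
    if unexplored_nodes:
        return [min(unexplored_nodes, key=lambda n: math.dist(pose, n))]
    return []  # exploration and coverage complete

# Example: one under-covered surface and one frontier near the robot.
surfaces = [{"pos": (2.0, 1.0, 3.0), "quality": 0.4},
            {"pos": (9.0, 9.0, 2.0), "quality": 0.95}]
print(plan_next_targets((0.0, 0.0, 1.0), surfaces,
                        [(1.0, 1.0, 1.0)], [(20.0, 0.0, 2.0)]))
```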
Adaptation to climate change requires robust climate projections, yet the uncertainty in the projections produced by ensembles of Earth system models (ESMs) remains large. This is mainly due to uncertainties in the representation of subgrid-scale processes such as turbulence or convection, which are partly alleviated at higher resolution. New developments in machine learning-based hybrid ESMs demonstrate great potential for systematically reduced errors compared to traditional ESMs. Building on this work on hybrid (physics + AI) ESMs, we discuss the additional potential of further improving and accelerating climate models with quantum computing. We discuss how quantum computers could accelerate climate models by solving the underlying differential equations faster; how quantum machine learning could better represent subgrid-scale phenomena in ESMs even with currently available noisy intermediate-scale quantum devices; how quantum algorithms aimed at solving optimization problems could assist in tuning the many parameters in ESMs, a currently time-consuming and challenging process; and how quantum computers could aid in the analysis of climate models. We also discuss hurdles and obstacles facing current quantum computing paradigms. Strong interdisciplinary collaboration between climate scientists and quantum computing experts could help overcome these hurdles and harness the potential of quantum computing for this urgent topic.
Due to the ever-increasing complexity of technical products, the quantity of system requirements, typically expressed in natural language, is inevitably rising. Model-based formalization through Model-based Systems Engineering is a common way to cope with this complexity. Grouping requirements into use cases forms the first step towards model-based requirements and improves understanding of the system. Because this task is manual and subjective, it calls for support through automation with artificial intelligence and natural language processing methods. This contribution proposes a novel pipeline that derives use cases from natural language requirements while accounting for incomplete manual mappings and for the possibility that one requirement contributes to multiple use cases. The approach uses semi-supervised requirements graph generation followed by overlapping graph clustering. Each identified use case is described by keyphrases to increase accessibility for the user. Industrial requirement sets from the automotive industry are used to evaluate the pipeline in two application scenarios. The proposed pipeline overcomes limitations of prior work in practical application, as emphasized in critical discussions with industry experts. It automatically generates proposals for the use cases defined in the requirement set, forming the basis for use case diagrams.
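To illustrate the clustering step, the hedged sketch below builds a similarity graph over requirements and applies k-clique percolation, an overlapping community method, so that one requirement can fall into several candidate use cases. TF-IDF similarity, the threshold, and the example requirements are stand-ins for the pipeline's actual graph generation.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The vehicle shall unlock the doors when the key is detected.",
    "The vehicle shall lock the doors automatically when driving.",
    "The wipers shall activate when rain is detected.",
    "The rain sensor shall report intensity to the wiper controller.",
]

# Build a requirements graph: nodes are requirements, edges connect
# sufficiently similar pairs.
sim = cosine_similarity(TfidfVectorizer().fit_transform(requirements))
G = nx.Graph()
G.add_nodes_from(range(len(requirements)))
for i in range(len(requirements)):
    for j in range(i + 1, len(requirements)):
        if sim[i, j] > 0.1:            # similarity threshold, tuned per corpus
            G.add_edge(i, j)

# Overlapping communities: one requirement may appear in several use cases.
for use_case in k_clique_communities(G, 2):
    print(sorted(use_case))
```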
Data governance has emerged as a pivotal area of study over the past decade, yet despite its growing importance, a comprehensive analysis of the academic literature on this subject remains notably absent. This paper addresses this gap by presenting a systematic review of all academic publications on data governance from 2007 to 2024. By synthesizing insights from more than 3500 documents authored by more than 9000 researchers across various sources, this study offers a broad yet detailed perspective on the evolution of data governance research.
Designers often rely on their self-evaluations, either independently or using design tools, to make concept selection decisions. When evaluating designs for sustainability, novice designers, given their lack of experience, can exhibit psychological distance from sustainability-related issues, leading to faulty concept evaluations. We investigate the accuracy of novice designers' self-evaluations of the sustainability of their solutions and the moderating role of (1) their trait empathy and (2) their beliefs, attitudes and intentions toward sustainability on this accuracy. We conducted an experiment with first-year engineering students built around a sustainable design activity, in which participants evaluated the sustainability of their own designs, and these self-evaluations were compared against expert evaluations. First, participants' self-evaluations were consistent with the expert evaluations on the following sustainable design heuristics: (1) longevity and (2) finding wholesome alternatives. Second, trait empathy moderated the accuracy of self-evaluations, with lower levels of fantasy and perspective-taking relating to more accurate self-evaluations. Finally, beliefs, attitudes and intentions toward sustainability also moderated the accuracy of self-evaluations, with effects that vary by sustainable design heuristic. Taken together, these findings suggest that novice designers' individual differences (e.g., trait empathy) can moderate the accuracy of their design evaluations in the context of sustainability.
New musical instruments of the electronic and digital eras have explored spatialisation through multidimensional speaker arrays. Many facets of 2D and 3D sound localisation have been investigated, often in tandem with immersive fixed-media compositions: spatial trajectory and panning; physics-based effects such as artificial acoustics, reverberation and Doppler shifts; and spatially derived synthesis methods. Within the realm of augmented spatial string instruments, the EV distinguishes itself through a unique realisation of the possibilities afforded by these technologies. Initially conceived as a tool for convolving the timbres of synthesised and acoustic string signals, the EV’s exploration of spatial sound has led to new experiments with timbre. Over time, additional sound-generation modules have been integrated, resulting in an increasingly versatile palette for immersive composition. Looking forward, the EV presents compelling opportunities for sonic innovation.
This paper presents the design, control strategy, and preliminary testing of Epi.Q, a modular unmanned ground vehicle (UGV) tailored for challenging environments and tasks such as exploration and surveillance. To manage the complexities of the articulated structure, including lateral slip and the risk of jackknifing, a fuzzy logic-based traction control system was implemented. The system improves traction stability by modulating the power distribution between modules and optimally controlling steering and traction. The paper then introduces the fuzzy control system and presents preliminary validation experiments, including hill-climbing, obstacle navigation, steering, and realignment tests. Preliminary results indicate that the proposed fuzzy control strategy significantly improves traction and maneuverability, even on steep inclines and uneven surfaces. These findings highlight the potential of fuzzy logic control to improve UGV performance.
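For intuition, here is a minimal Mamdani-style fuzzy rule evaluation for traction control; the membership functions, rule base, and torque-scaling output are illustrative assumptions, not Epi.Q's actual controller tuning.

```python
# Fuzzify two inputs (wheel slip, incline), fire a small rule base,
# and defuzzify to a torque scale in [0, 1].

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def traction_scale(slip, incline_deg):
    # Fuzzify inputs: wheel slip ratio in [0, 1], incline angle in degrees.
    slip_high = tri(slip, 0.2, 0.6, 1.0)
    slip_low = tri(slip, -0.4, 0.0, 0.4)
    steep = tri(incline_deg, 10.0, 30.0, 50.0)

    # Rule base (min as fuzzy AND); each rule proposes a torque scale.
    rules = [
        (min(slip_high, steep), 0.3),  # high slip on a steep slope: cut torque
        (slip_high, 0.5),              # high slip: reduce torque
        (slip_low, 1.0),               # low slip: full torque
    ]
    # Weighted-average defuzzification.
    den = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / den if den else 1.0

print(traction_scale(0.5, 25.0))  # ~0.4: torque reduced on a steep climb
```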
The automation of assembly operations with industrial robots is pivotal in modern manufacturing, particularly for multi-variety, low-volume, and customized production. Traditional programming methods are time-consuming and lack adaptability to complex, variable environments. Reinforcement learning-based assembly has shown success in simulation environments but faces challenges such as the simulation-to-reality gap and safety concerns when transferred to real-world applications. This article addresses these challenges by proposing a low-cost, image-segmentation-driven deep reinforcement learning strategy tailored for insertion tasks, such as the assembly of peg-in-hole components in satellite manufacturing, which involve extensive contact interactions. Our approach integrates visual and force feedback into a prior dueling deep Q-network for insertion skill learning, enabling precise alignment of components. To bridge the simulation-to-reality gap, we transform the raw image input space into a canonical space based on image segmentation. Specifically, we employ a U-Net-based segmentation model, pretrained in simulation and fine-tuned with real-world data, significantly reducing the need for labor-intensive labeling of real segmentation masks. To handle the frequent contact inherent in peg-in-hole tasks, we integrated safety protections and impedance control into the training process, providing active compliance and reducing the risk of assembly failures. Our approach was evaluated in both simulated and real robotic environments, demonstrating robust performance under camera position errors and varying ambient light intensities and lighting colors. Finally, the algorithm was validated in a real satellite assembly scenario, succeeding in 15 out of 20 tests.
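The dueling Q-network at the core of such a strategy can be sketched in a few lines of PyTorch; the feature size, action count, and the assumption that segmented-image and force features arrive as one flat vector are ours, not the article's.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network: separate value and advantage streams."""

    def __init__(self, obs_dim=64, n_actions=6):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)               # state value V(s)
        self.advantage = nn.Linear(128, n_actions)   # advantages A(s, a)

    def forward(self, obs):
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet()
obs = torch.randn(1, 64)      # e.g. canonical-space image + force features
print(q(obs).argmax(dim=-1))  # greedy insertion action
```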
This paper presents the design and dynamic analysis of a reconfigurable four-wheeled mobile robot whose front wheels can transform from a conventional circular wheel into a five-spoke wheel-legged (wheg) configuration. The transformation is achieved through a reconfiguration mechanism integrating a slider-crank chain with a rack-and-pinion system. A comprehensive dynamic analysis of the mechanism is conducted to evaluate the torque requirements for actuation and to support the selection of a suitable off-the-shelf motor. The required actuation torque is primarily influenced by the normal contact (reaction) force between the wheel and the ground, which varies with terrain conditions. This contact force is computed using system dynamics, and its variations are further analyzed through the robot's dynamic response. Numerical simulations, supported by real-world field tests, validate the effectiveness of the proposed design in moderately uneven environments.
Providing in-depth coverage, this book presents the fundamentals of computation and programming in the C language. Essential concepts, including operators and expressions, input and output statements, loop statements, arrays, pointers, functions, strings and preprocessors, are described in a lucid manner. A unique 'Learn by quiz' approach features questions based on a confidence-based learning methodology; each question helps the reader identify the right answer, with adequate explanation and reasoning as to why the other options are incorrect. Computer programs and review questions are interspersed throughout the text. The book is appropriate for undergraduate students of engineering, computer science and information technology. It can be used for self-study and assists in the understanding of theoretical concepts and their applications.
Artificial intelligence (AI) helps designers generate creative ideas. How designers can work with AI to effectively stimulate and enhance their creativity is a pressing question in design education and creation. This study conducted a controlled experiment in a design education context to explore the effects of generative AI tools and visual stimuli on designers' creativity in the early design stages. The results show that the use of text-to-image (T2I) AI tools and the stimulation of abstract visuals enhance design creativity in both divergent and convergent thinking processes. However, it is important to be aware of design fixation during this process. This study sheds light on the role and value of different AI tools for designers in the design process, offers designers a more effective way of using AI to improve creativity and design quality, and provides a theoretical basis for the application of AI-assisted design processes.
Unintended technical interactions across system interfaces can lead to costly failures and rework, particularly in the early design stages of complex products. This study examines how structured risk assessment tools influence teams’ ability to identify, evaluate and mitigate risks from such indirect interactions. In a controlled experiment, 14 engineering teams (comprising professionals and graduate students) engaged in simulated design decisions across three system configurations. Tool usage – including models of direct and indirect risk propagation and value-based trade-offs – was continuously logged and linked to outcomes. Teams that engaged earlier and more deliberately with the tools identified risks sooner and selected mitigation actions with more favourable cost–benefit profiles. Results show that strategic, not merely frequent, tool use improves risk management performance, particularly when addressing cascading effects from indirect physical interactions. These findings support the use of structured supports to enhance both the efficiency of early-stage risk evaluation and the efficacy of risk treatment.
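One standard way to screen for the indirect interactions discussed above is reachability through an interface adjacency matrix: squaring it exposes components coupled only via an intermediary, where risks can cascade. The tiny three-component system below is invented for illustration and is not the study's tool.

```python
import numpy as np

# A[i, j] = True when components i and j share a direct interface.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=bool)

two_step = (A.astype(int) @ A.astype(int)) > 0       # length-2 interaction paths
indirect = two_step & ~A & ~np.eye(3, dtype=bool)    # indirect only, no self-pairs
print(np.argwhere(indirect))  # components 0 and 2 interact via component 1
```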
In recent decades, researchers have analyzed professional military education (PME) organizations to understand the characteristics and transformation of the core of military culture, the officer corps. Several historical studies have demonstrated the potential of this approach, but they were limited by both theoretical and methodological hurdles. This paper presents a new historical-institutionalist framework for analyzing officership and PME, integrating computational social science methods for large-scale data collection and analysis to overcome limited access to military environments and the intensive manual labor required for data collection and analysis. Furthermore, in an era where direct demographic data are increasingly being removed from the public domain, our indirect estimation methods provide one of the few viable alternatives for tracking institutional change. This approach will be demonstrated using web-scraping and a quantitative text analysis of the entire repository of theses from an elite American military school.
The global food system puts enormous pressure on the environment. Managing these pressures requires understanding not only where they occur (i.e., where food is produced), but also who drives them (i.e., where food is consumed). However, the size and complexity of global supply chains make it difficult to trace food production to consumption. Here, we provide the most comprehensive dataset of bilateral trade flows of environmental pressures stemming from food production from producing to consuming nations. The dataset provides environmental pressures for greenhouse gas emissions, water use, nitrogen and phosphorus pollution, and the area of land/water occupancy of food production for crops and animals from land, freshwater, and ocean systems. To produce these data, we improved upon reported food trade and production data to identify producing and consuming nations for each food item, allowing us to match food flows with appropriate environmental pressure data. These data provide a resource for research on sustainable global food consumption and the drivers of environmental impact.
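To show the distinction between where pressures occur and who drives them, the sketch below aggregates a hypothetical slice of bilateral flows by producing nation (production-based accounting) and by consuming nation (consumption-based accounting); the column names and numbers are assumptions, not the dataset's actual schema.

```python
import pandas as pd

# Each row: an environmental pressure embodied in one food trade flow.
flows = pd.DataFrame({
    "producer": ["BRA", "BRA", "USA"],
    "consumer": ["CHN", "USA", "CHN"],
    "item":     ["soy", "beef", "maize"],
    "ghg_mt_co2e": [1.2, 3.4, 0.8],
})

# "Where pressures occur" vs "who drives them": the same flows
# aggregated by producer and by consumer.
print(flows.groupby("producer")["ghg_mt_co2e"].sum())
print(flows.groupby("consumer")["ghg_mt_co2e"].sum())
```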
This article aims to facilitate the widespread application of Energy Management Systems (EMSs), especially in buildings and cities, in order to support the realization of future carbon-neutral energy systems. We argue that economic viability is a serious obstacle to utilizing EMSs at scale and that provisioning forecasting and optimization algorithms as a service can make a major contribution to achieving it. To this end, we present the Energy Service Generics software framework, which allows fully functional services to be derived from existing forecasting or optimization code with ease. This work documents the systematic development of the framework, beginning with a requirements analysis, from which the design concept is derived, followed by a description of the framework's implementation. Furthermore, we present the concept of the Open Energy Services community, our effort not only to maintain the service framework continuously but also to provide ready-to-use forecasting and optimization services. Finally, we evaluate our framework and community concept and position our work relative to the current state of the art.
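As a flavor of what "forecasting as a service" means in practice, the sketch below wraps a trivial forecast behind an HTTP endpoint with FastAPI; this is emphatically not the Energy Service Generics API, just a generic illustration of the service pattern.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ForecastRequest(BaseModel):
    history: list[float]  # recent measurements, e.g. building load
    horizon: int          # number of steps to predict

@app.post("/forecast")
def forecast(req: ForecastRequest) -> list[float]:
    # Stand-in model: a persistence forecast repeating the last observation;
    # a real deployment would call the wrapped forecasting code here.
    last = req.history[-1] if req.history else 0.0
    return [last] * req.horizon
```

Served with, e.g., uvicorn, such an endpoint lets one existing forecast function be consumed by many EMSs at once, which is the economic argument made above.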
While the Sustainable Development Goals (SDGs) were being negotiated, global policymakers assumed that advances in data technology and statistical capabilities, dubbed the "data revolution", would accelerate development outcomes by improving policy efficiency and accountability. The 2014 report to the United Nations Secretary-General, "A World That Counts", framed the data-for-development agenda and proposed four pathways to impact: measuring for accountability, generating disaggregated and real-time data supplies, improving policymaking, and increasing implementation efficiency. Subsequent experience suggests that while many recommendations were implemented globally to advance the production of data and statistics, the impact on SDG outcomes has been inconsistent. Progress towards SDG targets has stalled despite advances in statistical systems capability, data production, and data analytics. The coherence of the SDG policy agenda has undoubtedly improved aspects of data collection and supply, with SDG frameworks standardizing greater indicator reporting. However, other events, including the response to COVID-19, have played catalytic roles in statistical system innovation. Overall, increased financing for statistical systems has not materialized, though planning and monitoring of these national systems may have longer-term impacts. This article reviews how assumptions about the data revolution have evolved and where new assumptions are needed to advance impact across the data value chain. These include focusing on measuring what matters most for decision-making needs across polycentric institutions, leveraging the SDGs for global data standardization and strategic financial mobilization, closing data gaps while enhancing policymakers' analytic capabilities, and fostering collective intelligence to drive data innovation, credible information, and sustainable development outcomes.