Life cycle assessment (LCA) reports are commonly used for sustainability documentation, but extracting useful information from them is challenging and requires expert oversight. Designers frequently face technical obstacles and time constraints when interpreting LCA documents. As AI-driven tools become increasingly integrated into design workflows, there is an opportunity to improve access to sustainability data. This study used a mixed-methods approach to develop life cycle design heuristics to help non-LCA experts acquire relevant design knowledge from LCA reports. Developed through in-depth interviews with LCA experts (n = 9), these heuristics revealed five prominent categories of information: (1) scope of analysis, (2) priority components, (3) eco hotspots, (4) key metrics, and (5) design strategies. The utility of these heuristics was tested in a need-finding study with designers (n = 17), who annotated an LCA report using the heuristics. Findings suggest a need for additional support to help designers contextualize quantitative metrics (e.g., carbon footprints) and identify relevant design strategies. A follow-up reflective interview study with LCA experts gathered feedback on the heuristics. These heuristics offer designers a framework for engaging with sustainability data and supporting product redesign, as well as a foundation for AI-assisted knowledge extraction that integrates life cycle information into design workflows efficiently.
The value-creation opportunities enabled by the ubiquitous availability of data indisputably lead to the necessity of restructuring innovation processes. Moreover, the variety of stakeholders potentially involved in innovation processes and the apparent heterogeneity of scenarios and contexts mean that practices and routines are far less established and that reference frameworks to guide the transition to data-driven product innovation have yet to be constituted. In this context, drawing on an analysis of the data-driven innovation processes of 36 Italian companies, the paper seeks to identify the emerging innovation opportunities offered by the rich network of resulting data flows. However, these opportunities also imply new tasks, which in turn raise further concerns. Building on data-driven design literature and on industrial practices in the field of innovation management, the authors discuss the role that research achievements in the field of engineering design can play in addressing such concerns.
Speaking is often a challenging skill for language learners to develop due to factors such as anxiety and limited practice opportunities. Dialogue-based computer-assisted language learning (CALL) systems have the potential to address these challenges. While there is evidence of their usefulness in second language (L2) learning, the effectiveness of these systems on speaking development remains unclear. The present meta-analysis attempts to provide a comprehensive overview of the effect of dialogue-based CALL in facilitating L2 speaking development. After an extensive literature search, we identified 16 studies encompassing 89 effect sizes. Through a three-level meta-analysis, we calculated the overall effect size and investigated the potential moderating effect of 13 variables spanning study context, study design and treatment, and measures. Results indicated a moderate overall effect size (g = .61) of dialogue systems on L2 learners’ speaking development. Notably, three moderators were found to have significant effects: type of system, system meaning constraint, and system modality. No significant moderating effect was identified for education stage, L2 proficiency, learning location, corrective feedback, length of intervention, type of interaction, measure, and key assessment component. These findings suggest directions for future research, including the role of corrective feedback in dialogue-based CALL, the effectiveness of such systems across proficiency levels, and their potential in diverse learning contexts with the integration of generative artificial intelligence.
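The pooling step behind such an overall effect size can be illustrated with a simple random-effects computation. The sketch below uses the classical two-level DerSimonian-Laird estimator rather than the three-level model the meta-analysis itself employs, and the per-study Hedges' g values and sampling variances are hypothetical:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g., Hedges' g) with a
    DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# hypothetical studies: (g, sampling variance)
g_pooled, se = dersimonian_laird([0.4, 0.7, 0.9, 0.35],
                                 [0.05, 0.08, 0.10, 0.04])
```

A three-level model would additionally account for the dependence of the 89 effect sizes nested within 16 studies, which this two-level sketch ignores.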
In response to the prevailing trend of an aging society and the increasing requirements of rehabilitation, this paper presents an approach involving brain-machine interaction (BMI) for a single-degree-of-freedom (1-DOF) sit-to-stand transfer robot. Based on a 1-DOF rehabilitation robot, three experiment paradigms involving motor imagery (MI), action observation of motor imagery (AO-MI) and motor execution are designed using both electroencephalography (EEG) and electromyography (EMG). To enhance motion intention recognition accuracy, a Gumbel-ResNet-KANs decoding model is established. The Gumbel-ResNet-KANs model integrates the Gumbel-Softmax method with the ResNet-KANs network module and shows strong decoding capability, as demonstrated by comparative tests in this paper. To assess the impact of robotic assistance on rehabilitation from a neuromuscular perspective, EEG–EMG coherence is analyzed in both assisted and unassisted conditions. We also assessed the effect of robotic assistance from an emotional perspective by analyzing the difference in differential entropy between the right and left hemispheres. The study further reveals that the movement-related cortical potentials in AO-MI are beneficial for promoting the performance of BMI in sit-to-stand training, which provides a possible approach for the development of new types of robots for lower limb rehabilitation.
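The Gumbel-Softmax method at the core of the decoding model can be sketched in a few lines: it perturbs class logits with Gumbel noise and applies a temperature-scaled softmax, yielding a differentiable, near-one-hot sample from the categorical distribution. This is a generic NumPy illustration, not the paper's implementation:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximate sampling from a categorical
    distribution via the Gumbel-Softmax trick."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Gumbel(0, 1) noise from uniform samples
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + g) / tau          # lower tau -> closer to one-hot
    y = np.exp(y - y.max())         # numerically stable softmax
    return y / y.sum()

probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

As the temperature `tau` approaches zero, the output concentrates on a single class while remaining differentiable with respect to the logits, which is what makes discrete selections trainable by gradient descent.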
Capturing a non-cooperative tumbling target with a free-floating space robot is a crucial task in on-orbit servicing. However, the strong dynamic coupling between the base-spacecraft and the manipulator seriously disturbs the base-spacecraft, which reduces the power generation efficiency of the solar panels and the quality of communication with the earth station. In this paper, a trajectory planning method based on deep reinforcement learning is proposed for the free-floating space robot capturing a non-cooperative tumbling target, which can reduce the disturbance of the base-spacecraft during capture. First, the generalized Jacobian matrix of the space robot is derived, from which the dynamics model is obtained; the kinematics model of the non-cooperative tumbling target is established, and the contact collision dynamics between the space robot and the tumbling target are analysed. Second, the twin delayed deep deterministic policy gradient (TD3) algorithm is introduced to plan the capture trajectory, where, in addition to the motion parameters of the manipulator and the generalized manipulability of the space robot, the pose disturbance of the base-spacecraft is newly incorporated into the reward function. Finally, simulations of target capture are carried out. The results show that, compared with the existing method, the proposed method converges faster with a larger reward, and the pose disturbance of the base-spacecraft is reduced. Moreover, the method performs well for capturing non-cooperative tumbling targets with different initial rotational angular velocities.
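The reward-shaping idea, adding a base-spacecraft disturbance penalty alongside manipulator-related terms, can be illustrated schematically. All weights and signal names below are hypothetical and are not taken from the paper:

```python
import numpy as np

def capture_reward(dist_to_target, manipulability, base_pose_err,
                   w_dist=1.0, w_manip=0.2, w_base=0.5):
    """Hypothetical shaped reward for RL-based capture trajectory
    planning: reward approaching the target and keeping high
    manipulability, penalise any base-spacecraft pose disturbance."""
    return (-w_dist * dist_to_target
            + w_manip * manipulability
            - w_base * np.linalg.norm(base_pose_err))

# a larger base pose error yields a strictly lower reward
r_quiet = capture_reward(1.0, 0.5, np.zeros(6))
r_disturbed = capture_reward(1.0, 0.5, 0.3 * np.ones(6))
```

The design choice mirrors the abstract: without the `w_base` term the policy would be free to trade base disturbance for faster capture; the penalty steers TD3 toward trajectories that leave the base-spacecraft quiet.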
We prove a Poisson process approximation result for stabilising functionals of a determinantal point process. Our results use concrete couplings of determinantal processes with different Palm measures and exploit their association properties. We then focus on the Ginibre process and show, in the asymptotic scenario of an increasing observation window, that the process of points with a large nearest neighbour distance converges after a suitable scaling to a Poisson point process. As a corollary, we obtain the scaling of the maximum nearest neighbour distance in the Ginibre process, which turns out to be different from its analogue for independent points.
In recent years, the removal of orbital debris has become an increasingly urgent task due to advancements in human space exploration. Capturing space debris through caging manipulation offers notable advantages in terms of robustness and controllability. This paper proposes a configuration-based caging manipulation design method for a cable-driven flexible arm. First, the cable-driven flexible arm with multiple self-lockable links is introduced. To quantify the caging configurations formed by different self-lockable link selections, a novel planar caging quality metric is then presented for arbitrary planar objects. Using this metric, a caging design method is developed for the flexible arm to conduct caging manipulations. Finally, the caging manipulation strategies are discussed with lock selection maps for different objects, and a robust caging strategy considering uncertainty is further explored. Simulation and experimental results validate the effectiveness of the proposed caging design method and demonstrate better performance compared to conventional caging methods.
Design computing refers to the usage of computer frameworks, models or systems in design-related activities. Design computing research, in turn, refers to the development of these frameworks/models/systems, and so forth. As design practice increasingly relies on computer tools, the demand for research in design computing grows. While this opens innumerable avenues for research, the profusion of information in the field poses significant challenges for researchers. Therefore, meta-level surveys of the field are called for. To provide researchers with a useful overview of design computing research, we set out to identify some of the main clusters of activity in the field. By “clusters of activity”, we refer to groups of researchers pursuing similar or identical research questions. Our PRISMA-style review focuses on the identification of such clusters, based on the complete proceedings (N = 404) of a long-standing conference (Design Computing and Cognition, DCC, 2004–2024), which captures the richness and diversity of the field. The primary contribution of this work is a map that organizes the main questions explored and the approaches taken in exploring them, which are informative for researchers and educators alike. This map may also help to execute large-scale surveys via automation, toward obtaining a comprehensive view of the field.
Modeling detailed chemical kinetics is a primary challenge in combustion simulations. We present a novel framework to enforce physical constraints, specifically total mass and elemental conservation, during the training of machine learning (ML) models for the reduced composition space chemical kinetics of large chemical mechanisms in combustion. In these models, the transport equations for a subset of representative species are solved with the ML approaches, while the remaining nonrepresentative species are “recovered” with a separate artificial neural network trained on data. Given the strong correlation between full and reduced solution vectors, our method utilizes a small neural network to establish an accurate and physically consistent mapping. By leveraging this mapping, we enforce physical constraints in the training process of the ML model for reduced composition space chemical kinetics. The framework is demonstrated here for methane (CH4) oxidation. The resulting solution vectors from our deep operator network (DeepONet)-based approach are accurate and align more consistently with physical laws.
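The conservation constraint can be pictured as an extra penalty in the training loss: predicted species mass fractions must leave elemental (and hence total) mass unchanged. The elemental matrix and weighting below are a minimal hand-built sketch for a four-species CH4 system, not the paper's mechanism or architecture:

```python
import numpy as np

# Hypothetical elemental composition matrix for species [CH4, O2, CO2, H2O];
# entries are kg of element per kg of species (molar masses 16, 32, 44, 18).
E = np.array([
    [12/16, 0.0, 12/44, 0.0],    # carbon
    [ 4/16, 0.0, 0.0,   2/18],   # hydrogen
    [ 0.0,  1.0, 32/44, 16/18],  # oxygen
])

def constrained_loss(y_pred, y_true, lam=10.0):
    """MSE plus a penalty keeping predicted mass fractions consistent
    with elemental mass conservation (each column of E sums to 1, so
    elemental conservation implies total mass conservation)."""
    mse = np.mean((y_pred - y_true) ** 2)
    elem_err = E @ y_pred - E @ y_true   # per-element mass imbalance
    return mse + lam * np.sum(elem_err ** 2)
```

During training, the penalty weight `lam` trades pointwise accuracy against physical consistency; the paper's mapping network plays the role of reconstructing the full vector before such a constraint is evaluated.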
This paper discusses the development of synthetic cohomology in Homotopy Type Theory (HoTT), as well as its computer formalisation. The objectives of this paper are (1) to generalise previous work on integral cohomology in HoTT by the current authors and Brunerie (2022) to cohomology with arbitrary coefficients and (2) to provide the mathematical details of, as well as extend, results underpinning the computer formalisation of cohomology rings by the current authors and Lamiaux (2023). With respect to objective (1), we provide new direct definitions of the cohomology group operations and of the cup product, which, just as in the previous work by the current authors and Brunerie (2022), enable significant simplifications of many earlier proofs in synthetic cohomology theory. In particular, the new definition of the cup product allows us to give the first complete formalisation of the axioms needed to turn the cohomology groups into a graded commutative ring. We also establish that this cohomology theory satisfies the HoTT formulation of the Eilenberg–Steenrod axioms for cohomology and study the classical Mayer–Vietoris and Gysin sequences. With respect to objective (2), we characterise the cohomology groups and rings of various spaces, including the spheres, torus, Klein bottle, real/complex projective planes, and infinite real projective space. All results have been formalised in Cubical Agda, and we obtain multiple new numbers, similar to the famous ‘Brunerie number’, which can be used as benchmarks for computational implementations of HoTT. Some of these numbers are infeasible to compute in Cubical Agda and hence provide new computational challenges and open problems which are much easier to define than the original Brunerie number.
The family of relevant logics can be faceted by a hierarchy of increasingly fine-grained variable sharing properties—requiring that in valid entailments $A\to B$, some atom must appear in both A and B with some additional condition (e.g., with the same sign or nested within the same number of conditionals). In this paper, we consider an incredibly strong variable sharing property of lericone relevance that takes into account the path of negations and conditionals in which an atom appears in the parse trees of the antecedent and consequent. We show that this property of lericone relevance holds of the relevant logic $\mathbf {BM}$ (and that a related property of faithful lericone relevance holds of $\mathbf {B}$) and characterize the largest fragments of classical logic with these properties. Along the way, we consider the consequences of lericone relevance for the theory of subject-matter, for Logan’s notion of hyperformalism, and for the very definition of a relevant logic itself.
In previous publications, it was shown that finite non-deterministic matrices are quite powerful in providing semantics for a large class of normal and non-normal modal logics. However, some modal logics, such as those whose axiom systems contain the Löb axiom or the McKinsey formula, have not been analyzed via non-deterministic semantics. Furthermore, modal rules other than the rule of necessitation have not yet been characterized in this framework.
In this paper, we overcome this shortcoming and present a novel approach for constructing semantics for normal and non-normal modal logics based on restricted non-deterministic matrices. This approach not only offers a uniform semantic framework for modal logics, keeping the interpretation of the involved modal operators the same and thus making different systems of modal logic comparable; it might also lead to a new understanding of the concept of modality.
In this paper, we aim to investigate the fluid model associated with an open large-scale storage network of non-reliable file servers with finite capacity, where new files can be added, and a file with only one copy can be lost or duplicated. The Skorokhod problem with oblique reflection in a bounded convex domain is used to identify the fluid limits. This analysis involves three regimes: the under-loaded, the critically loaded, and the overloaded regimes. The overloaded regime is of particular importance. To identify the fluid limits, new martingales are derived, and an averaging principle is established. This paper extends the results of El Kharroubi and El Masmari [7].
This paper introduces the Arachne System, a scalable, cost-effective mobile microrobot swarm platform designed for educational and research applications. It details the design, functionalities, and potential of the Arachne Bots, emphasizing their accessibility to users with minimal robotics expertise. By providing a comprehensive overview of the system’s hardware, sensory capabilities, and control algorithms, the paper demonstrates the platform’s capacity to democratize and reduce entry barriers in mobile robotic swarm research, fostering innovation and educational opportunities in the field. Extensive experimental validation of the system showcases its broad range of capabilities and effectiveness in real-world implementation.
We discuss the logical principle of extensionality for set-valued operators and its relation to mathematical notions of continuity for these operators in the context of systems of finite types as used in proof mining. Concretely, we initially exhibit an issue that arises with treating full extensionality in the context of the prevalent intensional approach to set-valued operators in such systems. Motivated by these issues, we discuss a range of useful fragments of this full extensionality statement where these issues are avoided, and we discuss their interrelations. Further, we study the continuity principles associated with these fragments of extensionality and show how they can be introduced in the logical systems via a collection of axioms that do not contribute to the growth of extractable bounds from proofs. In particular, we place an emphasis on a variant of extensionality and continuity formulated using the Hausdorff metric. In the course of this discussion, we employ a tame treatment of suprema over bounded sets, developed by the author in previous work, to provide the first proof-theoretically tame treatment of the Hausdorff metric in systems geared for proof mining. To illustrate the applicability of these treatments for the extraction of quantitative information from proofs, we provide an application of proof mining to the Mann iteration of set-valued mappings which are nonexpansive w.r.t. the Hausdorff metric and extract highly uniform and effective quantitative information on the convergence of that method.
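For finite point sets, the Hausdorff metric referred to above reduces to a simple max-min computation: the largest distance from any point of one set to its nearest neighbour in the other, symmetrised. A minimal sketch (using Python's `math.dist`, available since Python 3.8):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane."""
    # directed distance: worst-case nearest-neighbour distance from X to Y
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

# a single point at the origin vs. a single point at (3, 4)
d = hausdorff([(0.0, 0.0)], [(3.0, 4.0)])
```

The symmetrisation in the last line is what makes this a metric on compact sets; the directed distance alone is not symmetric.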
Structural induction is pervasively used by functional programmers and researchers for both informal reasoning as well as formal methods for program verification and semantics. In this paper, we promote its dual—structural coinduction—as a technique for understanding corecursive programs only in terms of the logical structure of their context. We illustrate this technique as an informal method of proofs which closely match the style of informal inductive proofs, where it is straightforward to check that all cases are covered and the coinductive hypotheses are used correctly. This intuitive idea is then formalized through a syntactic theory for deriving program equalities, which is justified purely in terms of the computational behavior of abstract machines and proved sound with respect to observational equivalence.
Counting independent sets in graphs and hypergraphs under a variety of restrictions is a classical question with a long history. It is the subject of the celebrated container method, which has found numerous spectacular applications over the years. We consider the question of how many independent sets a graph can have under structural restrictions. We show that any $n$-vertex graph with independence number $\alpha$ without $bK_a$ as an induced subgraph has at most $n^{O(1)} \cdot \alpha ^{O(\alpha )}$ independent sets. This substantially improves the trivial upper bound of $n^{\alpha }$ whenever $\alpha \le n^{o(1)}$ and gives a characterisation of the graphs whose forbidding allows for such an improvement. It is also in general tight up to a constant in the exponent, since there exist triangle-free graphs with $\alpha ^{\Omega (\alpha )}$ independent sets. We also prove that if one additionally assumes the ground graph is $\chi$-bounded, the bound improves to $n^{O(1)} \cdot 2^{O(\alpha )}$, which is tight up to a constant factor in the exponent.
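For small graphs, the quantity being bounded here can be checked directly by brute force. A minimal sketch; on the path with 4 vertices the count (including the empty set) follows the well-known Fibonacci pattern for paths:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count all independent sets (including the empty set) of a graph
    on vertices 0..n-1 by exhaustive enumeration."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                count += 1
    return count

# the path 0-1-2-3 has exactly 8 independent sets
c = count_independent_sets(4, [(0, 1), (1, 2), (2, 3)])
```

This enumeration is exponential in $n$ and only serves to make the counted object concrete; the point of the abstract's result is precisely to bound such counts without enumeration when an induced $bK_a$ is forbidden.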
Most research on UAV swarm architectures remains confined to simulation-based studies, with limited real-world implementation and validation. To address this gap, this research presents an improved task allocation and formation control system within ARCog-NET (Aerial Robot Cognitive Architecture), aimed at deploying autonomous UAV swarms as a unified and scalable solution. The proposed architecture integrates perception, planning, decision-making, and adaptive learning, enabling UAV swarms to dynamically adjust path planning, task allocation, and formation control in response to evolving mission demands. Inspired by artificial intelligence and cognitive science, ARCog-NET employs an Edge-Fog-Cloud (EFC) computing model, where edge UAVs handle real-time data acquisition and local processing, fog nodes coordinate intermediate control, and cloud servers manage complex computations, storage, and human supervision. This hierarchical structure balances real-time autonomy at the UAV level with high-level optimization and decision-making, creating a collective intelligence system that automatically fine-tunes decision parameters based on configurable triggers. To validate ARCog-NET, a realistic simulation framework was developed using SITL (Software-In-The-Loop) with actual flight controller firmware and ROS-based middleware, enabling high-fidelity emulation. This framework bridges the gap between virtual simulations and real-world deployments, allowing evaluation of performance in environmental monitoring, search and rescue, and emergency communication network deployment. Results demonstrate superior energy efficiency, adaptability, and operational effectiveness compared to conventional robotic swarm methodologies.
By dynamically optimizing data processing based on task urgency, resource availability, and network conditions, ARCog-NET bridges the gap between theoretical swarm intelligence models and real-world UAV applications, paving the way for future large-scale deployments.
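The Edge-Fog-Cloud split described above can be caricatured as a tiered dispatch rule: keep urgent work on the edge UAV when it has headroom, hand moderate loads to a fog node, and offload the rest to the cloud. The thresholds and signal names here are purely hypothetical and not part of ARCog-NET:

```python
def dispatch(task_urgency, data_mb, edge_cpu_free,
             urgency_cutoff=0.8, fog_mb_limit=5.0):
    """Hypothetical tiered dispatch for an Edge-Fog-Cloud model.
    task_urgency and edge_cpu_free are in [0, 1]; data_mb is the
    payload size. Returns which tier should process the task."""
    if task_urgency >= urgency_cutoff and edge_cpu_free > 0.2:
        return "edge"    # latency-critical and local headroom available
    if data_mb <= fog_mb_limit:
        return "fog"     # moderate payload, intermediate coordination
    return "cloud"       # heavy computation or storage
```

A real implementation would replace these static cutoffs with the configurable triggers the abstract mentions, adjusted online from network conditions and resource availability.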