The generation of floor plan layouts has been extensively studied in recent years, driven by the need for efficient and functional architectural designs. Despite significant advancements, existing methods often face limitations when dealing with specific input adjacency graphs, room shapes, or boundary layouts. When an adjacency graph contains separating triangles, the floor plan must include rectilinear rooms (non-rectangular rooms with concave corners). From a design perspective, minimizing corners or bends in rooms is crucial for functionality and aesthetics. In this article, we present a Python-based application called G-Drawer for automatically generating floor plans with a minimum number of bends. G-Drawer takes any plane triangulated graph (PTG) as input and outputs a floor plan layout with minimum bends. It prioritizes generating a rectangular floor plan (RFP); if an RFP is not feasible, it generates an orthogonal floor plan or an irregular floor plan instead. G-Drawer modifies orthogonal drawing techniques based on flow networks and applies them to the dual graph of a given PTG to generate the required floor plans. Our results demonstrate the efficacy of G-Drawer in creating efficient floor plans. Future work will address generating dimensioned floor plans with non-rectangular rooms as well as non-rectangular boundaries. These enhancements will address both mathematical and architectural challenges, advancing the automated generation of floor plans toward more practical and versatile applications.
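As a concrete aside, the separating-triangle test that triggers the fallback away from an RFP can be implemented directly. The brute-force sketch below (using networkx, and assuming small inputs) checks whether deleting the vertices of any triangle disconnects the rest of the graph; it is an illustration, not G-Drawer's published code.

```python
# Brute-force separating-triangle detector for a (small) graph.
from itertools import combinations
import networkx as nx

def has_separating_triangle(g: nx.Graph) -> bool:
    """A triangle is separating if deleting its three vertices
    disconnects the remaining graph."""
    for u, v, w in combinations(g.nodes, 3):
        if g.has_edge(u, v) and g.has_edge(v, w) and g.has_edge(u, w):
            rest = g.copy()
            rest.remove_nodes_from([u, v, w])
            if rest.number_of_nodes() > 0 and not nx.is_connected(rest):
                return True
    return False

print(has_separating_triangle(nx.complete_graph(4)))  # False
g = nx.Graph([(0, 1), (1, 2), (0, 2),
              (3, 0), (3, 1), (3, 2),
              (4, 0), (4, 1), (4, 2)])
print(has_separating_triangle(g))  # True: {0,1,2} separates 3 from 4
```

In the flow described in the abstract, a positive result would route the input to the bend-minimizing orthogonal branch rather than the RFP branch.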
We consider the hard-core model on a finite square grid graph with stochastic Glauber dynamics parametrized by the inverse temperature $\beta$. We investigate how the transition between its two maximum-occupancy configurations takes place in the low-temperature regime $\beta \to \infty$ in the case of periodic boundary conditions. The hard-core constraints and the grid symmetry make the structure of the critical configurations for this transition, also known as essential saddles, very rich and complex. We provide a comprehensive geometrical characterization of these configurations, which together constitute a bottleneck for the Glauber dynamics in the low-temperature limit. In particular, we develop a novel isoperimetric inequality for hard-core configurations with a fixed number of particles and show how the essential saddles are characterized not only by the number of particles but also by their geometry.
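For readers who want to experiment, here is a minimal sketch of the textbook single-site Glauber (heat-bath) dynamics for the hard-core model on an $L \times L$ periodic grid with activity $e^{\beta}$ (not the paper's code); for even $L$, the two maximum-occupancy configurations are the even and odd checkerboards.

```python
# One Glauber (heat-bath) update for the hard-core model on an L x L
# torus: pick a site uniformly and resample it given its neighbors.
import math
import random

def glauber_step(occupied: set, L: int, beta: float) -> None:
    x, y = random.randrange(L), random.randrange(L)
    nbrs = [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]
    occupied.discard((x, y))
    if any(n in occupied for n in nbrs):
        return  # hard-core constraint: a blocked site must stay empty
    lam = math.exp(beta)
    if random.random() < lam / (1.0 + lam):
        occupied.add((x, y))  # occupation increasingly favored as beta grows
```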
Applications of deep learning to physical simulations such as Computational Fluid Dynamics have recently experienced a surge in interest, and their viability has been demonstrated in different domains. However, due to the highly complex, turbulent, and three-dimensional flows involved, such methods have not yet been proven usable for turbomachinery applications. Multistage axial compressors for gas turbine applications represent a remarkably challenging case, due to the high dimensionality of the regression of the flow field from geometrical and operational variables. This paper demonstrates the development and application of a deep learning framework for predictions of the flow field and aerodynamic performance of multistage axial compressors. A physics-based dimensionality reduction approach unlocks the potential for flow-field predictions, as it re-formulates the regression problem from an unstructured to a structured one and reduces the number of degrees of freedom. Compared to traditional “black-box” surrogate models, it provides explainability for the predictions of overall performance by identifying the corresponding aerodynamic drivers. The model is applied to manufacturing and build variations, as the associated performance scatter is known to have a significant impact on $\mathrm{CO}_2$ emissions, which poses a challenge of great industrial and environmental relevance. The proposed architecture is proven to achieve an accuracy comparable to that of the CFD benchmark, in real-time, for an industrially relevant application. The deployed model is readily integrated within the manufacturing and build process of gas turbines, thus providing the opportunity to analytically assess the impact on performance with actionable and explainable data.
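The abstract does not disclose the physics-based reduction itself, so the sketch below only illustrates the generic pattern it alludes to: regressing a low-dimensional, structured representation of the flow field (here plain PCA coefficients on synthetic data) from geometry and operating variables, instead of the raw unstructured field. All data and layer sizes are illustrative.

```python
# Generic reduced-order regression pattern: PCA on flow-field snapshots,
# then an MLP mapping input variables to the PCA coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # geometry + operating variables
fields = rng.normal(size=(500, 10_000))   # flattened flow-field snapshots

pca = PCA(n_components=20).fit(fields)    # reduced degrees of freedom
coeffs = pca.transform(fields)            # structured regression targets

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, coeffs)
predicted_field = pca.inverse_transform(model.predict(X[:1]))  # full field back
```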
This study examines Nigeria’s National Information Technology Development Agency Code of Practice for Interactive Computer Service Platforms as one of Africa’s first pushes towards digital and social media co-regulation. Already established as a regulatory practice in Europe, co-regulation emphasises the need to impose duties of care on platforms and to hold them, instead of users, accountable for safe online experiences. It is markedly different from the prior (and existing) regulatory paradigm in Nigeria, which is based on direct user regulation. By analysing the Code of Practice, this study therefore considers what Nigeria’s radical turn towards co-regulation means for digital policy and social media regulation in relation to standards, information-gathering, and enforcement. It further sheds light on what co-regulation entails for digital regulatory practice in the wider African context, particularly in terms of the balance-of-power realities between Global North platforms and Global South countries.
Migrants encounter multiple challenges, such as learning new languages and adapting to a new life. While digital technologies help them learn, limited research has been conducted on their digital skills development. In this article, we report on migrants’ digital skills development while learning language through culture using a web app developed by an EU-funded project that aimed to promote social cohesion through a two-way exchange of knowledge and skills. Forty-six migrant and 43 home community members in Finland, Spain, Türkiye, and the UK participated in intercultural and intergenerational pairs to engage with and co-create interactive digital cultural activities in multiple languages. Participants’ digital, linguistic, and cultural gains were measured before and after the workshops. We report on participants’ digital skills, measured by a digital competence self-assessment tool based on DigComp, and on interviews with the participants. Quantitative data were analysed using descriptive and inferential statistics; qualitative data were analysed deductively using the categories of the DigComp framework. Findings indicate a statistically significant improvement in migrants’ self-reported digital skills, with the highest gains in the competency area of digital content creation. A comparison of migrants’ digital skill development with that of home community members showed no statistically significant differences, supporting our argument against the deficiency perspective towards migrant populations. Interview data suggested overall positive evaluations and highlighted the role of the web app instructions for content creation. We conclude with suggestions for further research and argue for inclusive pedagogies, emphasising how both communities learned from and with each other during the workshops.
The multi-robot path planning problem is NP-hard. The coati optimization algorithm (COA) is a novel metaheuristic that has been successfully applied in many fields. To solve multi-robot path planning optimization problems, we embed two differential evolution (DE) strategies into COA and propose a self-adaptive differential evolution-based coati optimization algorithm (SDECOA). The proposed algorithm adaptively selects the more suitable of these strategies for different problems, effectively balancing global and local search capabilities. To validate the algorithm’s effectiveness, we tested it on the CEC2020 benchmark functions and 48 CEC2020 real-world constrained optimization problems. In the latter experiments, the proposed algorithm achieved the best overall results compared to the top five algorithms from the CEC2020 competition. Finally, we applied SDECOA to the multi-robot online path planning optimization problem. In extreme environments with multiple static and dynamic obstacles of varying sizes, SDECOA consistently outperformed several classical and state-of-the-art algorithms; compared to DE and COA, it achieved average improvements of 46% and 50%, respectively. Extensive experimental testing confirmed that the proposed algorithm is highly competitive. The source code of the algorithm is accessible at: https://ww2.mathworks.cn/matlabcentral/fileexchange/164876-HDECOA.
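The abstract does not name the two embedded DE strategies; the sketch below uses DE/rand/1 and DE/best/1, two common choices, purely to illustrate probability-based strategy selection.

```python
# Illustrative mutation step choosing between two DE strategies by a
# (self-adapted) probability p_rand; pop is a 2-D array of candidate vectors.
import numpy as np

def de_mutate(pop, best, i, F, p_rand, rng):
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    if rng.random() < p_rand:
        return pop[r1] + F * (pop[r2] - pop[r3])  # DE/rand/1: exploratory
    return best + F * (pop[r1] - pop[r2])         # DE/best/1: exploitative
```

In a full self-adaptive loop, `p_rand` would be updated from the recent success rates of the two strategies, so the better-performing one is chosen more often.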
A transverse ledge climbing robot inspired by athletic locomotion is a customized robot designed to traverse horizontal ledges on vertical walls. Due to safety issues and the complex configurations of graspable ledges, such as horizontal and inclined ledges and gaps between ledges, well-known vision-based navigation methods, which suffer from occlusion problems, may not be applicable to this special kind of application. This study develops a force-feedback-based motion planning strategy that lets the robot explore and make feasible grasping actions as it continuously travels along reachable ledges. A contact force detection algorithm based on a momentum observer approach is implemented to estimate the contact force between the robot’s exploring hand and the ledge. To minimize detection errors due to dynamic model uncertainties and noise, a time-varying threshold is integrated. When the estimated contact force exceeds the threshold, the robot control system feeds the estimated force into an admittance controller to revise the joint motion trajectories for a smooth transition. To handle gaps between ledges, several ledge-searching algorithms are developed that allow the robot to grasp the next target ledge and safely negotiate the gap transition. The effectiveness of the proposed motion planning and searching strategy has been verified in simulation, where the four-link transverse climbing robot successfully navigates a set of obstacle scenarios modeled to approximate the actual environment. The performance of the developed ledge-searching methods for various obstacle characteristics has also been evaluated.
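A minimal single-joint version of the momentum-observer idea, assuming constant inertia and a start from rest; the gain, model terms, and signals are illustrative, not the paper’s.

```python
# Discrete-time momentum-observer residual for one joint: r[k] converges
# to the external (contact) torque when the dynamic model is accurate.
def momentum_residual(tau_m, qdot, tau_model, M, dt, K=50.0):
    """tau_m: applied torque; tau_model: gravity + friction model;
    M: joint inertia; returns the residual sequence r."""
    r = [0.0]
    integral = 0.0
    p0 = M * qdot[0]  # initial generalized momentum (zero if at rest)
    for k in range(1, len(tau_m)):
        integral += (tau_m[k] - tau_model[k] + r[-1]) * dt
        r.append(K * (M * qdot[k] - p0 - integral))
    return r
```

Contact would then be flagged whenever |r[k]| exceeds the (possibly time-varying) threshold described in the abstract.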
While previous studies in computer-assisted language learning have extensively explored sociolinguistic factors, such as cultural competence, important psycholinguistic factors such as the online L2 motivational self-system, L2 grit, and online self-regulation in relation to virtual exchange (VE) have remained largely unexplored. To address this gap, a study was conducted with 92 Spanish learners of English as a foreign language who exchanged language and culture with Cypriot and Irish students and responded to questionnaires adapted to the study context, as part of the SOCIEMOVE (Socioemotional Skills Through Virtual Exchange) Project. The partial least squares structural equation modeling approach showed that language learners who set positive personal goals for the future and evaluate their current learning progress in VE can regulate their learning in it. Interestingly, a sign of an authenticity gap was found in the study context: learners’ motivation to learn in VE was higher than in their previous language learning contexts, resulting in more effort and consistency of interest in setting goals, evaluating progress, and asking others for help. Furthermore, learners’ L2 grit moderated and mediated the correlation between learners’ online motivation and online self-regulation, indicating that VE success requires long-term perseverance of effort and consistency of interest. Accordingly, a new conceptual framework for VE was developed. One of the main implications is that teachers who employ VE should focus more on learners’ current needs and the goals they wish to achieve when exchanging information, rather than focusing only on accomplishments tied to the course syllabus.
This paper reports on the experiences of using an early assessment intervention, specifically a Use-Modify-Create scaffold, to teach first-year undergraduate functional programming. The intervention trialled was an early assessment instrument in which students had to use code given to them, or slightly modify it, to achieve certain goals. The intended outcome was that students would thus engage earlier with the functional language, leaving them better prepared for the second piece of assessment, in which they create code to solve given problems. The intervention showed promise: students’ scores on the Create assignment improved by an average of 9% in the year after the intervention was implemented, a small effect.
The aim of this paper is to give a full exposition of Leibniz’s mereological system. My starting point is his papers on Real Addition and the distinction between the containment and the part-whole relations. In the first part (§2), I expound the Real Addition calculus; in the second part (§3), I introduce the mereological calculus by restricting the containment relation via the notion of homogeneity, which yields the parthood relation (this corresponds to an extension of the Real Addition calculus via what I call the Homogeneity axiom). I analyze this notion in detail and argue that it implies a gunk conception of (proper) parthood. Finally, in the third part (§4), I scrutinize some applications of the containment-parthood distinction, showing that a number of famous Leibnizian doctrines depend on it.
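For orientation, the Real Addition calculus is standardly reconstructed as an idempotent, commutative, associative operation with containment defined from it; the axioms below are this common reconstruction, not a quotation from the paper.

```latex
% Standard reconstruction of Leibniz's Real Addition calculus.
\begin{align*}
  &A \oplus A = A
    && \text{(idempotence)}\\
  &A \oplus B = B \oplus A
    && \text{(commutativity)}\\
  &(A \oplus B) \oplus C = A \oplus (B \oplus C)
    && \text{(associativity)}\\
  &A \text{ is in } B \iff A \oplus B = B
    && \text{(containment)}
\end{align*}
```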
The two main sources of difficulty for a group of mobile robots employing sensors to find a source are robot collisions and ambient noise (light, sound, and other signals). This paper introduces a novel approach to multi-robot cooperation and collision avoidance: a modified source-seeking control with noise-cancelation technology. The robot team works together to climb the gradient of a light-source field; the team’s movement follows the direction of the upward gradient while maintaining a particular formation pattern. The proposed approach also takes into account each robot’s size, speed limit, obstacles, and noise. The noise-cancelation technique is used to avoid delays and false decisions in finding the target point of the source. When the noise is canceled, all control inputs to the algorithm are accurate, and the feedback decision is correct. In this study, we use MATLAB simulation tools to test the velocity, position, time delay, and performance of each robot in the group. The simulation and practical results of the robots searching for a light source show very satisfactory performance compared with results in the literature.
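As an illustrative sketch only (the paper’s noise-cancelation technique is not specified in the abstract), a gradient-climbing step with a simple moving-average filter standing in for noise cancelation might look like this; `intensity_fn` is a hypothetical light-intensity sensor model.

```python
# One source-seeking step: estimate the local light gradient by finite
# differences on filtered readings, then move up the gradient.
import numpy as np

def seek_step(pos, intensity_fn, history, step=0.05, eps=0.01, window=5):
    raw = np.array([intensity_fn(pos + d) for d in
                    (np.array([eps, 0]), np.array([-eps, 0]),
                     np.array([0, eps]), np.array([0, -eps]))])
    history.append(raw)
    filtered = np.mean(history[-window:], axis=0)  # suppress ambient noise
    grad = np.array([filtered[0] - filtered[1],
                     filtered[2] - filtered[3]]) / (2 * eps)
    return pos + step * grad / (np.linalg.norm(grad) + 1e-9)
```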
A method is proposed for identifying robot gravity and friction torques based on joint currents. The minimum gravity term parameters are obtained using the Modified Denavit–Hartenberg (MDH) parameters, and the dynamic equations are linearized. The robot’s friction torque is identified using the Stribeck friction model. Additionally, a zero-force drag algorithm is designed to address the issue of excessive start-up torque during dragging. A sinusoidal compensation algorithm is proposed to perform periodic friction compensation for each stationary joint, utilizing the identified maximum static friction torque. Experimental results show that when the robot operates at a uniform low speed, the theoretical current calculated based on the identified gravity and friction fits the actual current well, with a maximum root mean square error within 50 mA, confirming the accuracy of the identification results. The start-up torque compensation algorithm reduces the robot’s start-up torque by an average of 60.58%, improving the compliance of the dragging process and demonstrating the effectiveness of the compensation algorithm.
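The Stribeck model named above is commonly written as $\tau_f(v) = \big(f_c + (f_s - f_c)\,e^{-(v/v_s)^2}\big)\operatorname{sign}(v) + f_v v$; a direct Python transcription with purely illustrative parameters:

```python
# Common form of the Stribeck friction model: Coulomb term f_c, static
# (breakaway) term f_s, Stribeck velocity v_s, viscous coefficient f_v.
import numpy as np

def stribeck_torque(v, f_c=0.8, f_s=1.5, v_s=0.1, f_v=0.3):
    """Friction torque at joint velocity v (elementwise for arrays)."""
    return (f_c + (f_s - f_c) * np.exp(-(v / v_s) ** 2)) * np.sign(v) + f_v * v
```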
Global platforms present novel challenges. They are powerful conduits of commerce and global community, and their potential to influence behavior is enormous. Defeating Disinformation explores how to balance free speech and dangerous online content to reduce societal risks of digital platforms. The volume offers an interdisciplinary approach, drawing upon insights from different geographies and parallel challenges of managing global phenomena with national policies and regulations. Chapters also examine the responsibility of platforms for their content, which is limited by national laws such as Section 230 of the Communications Decency Act in the US. This balance between national rules and the need for appropriate content moderation threatens to splinter platforms and reduce their utility across the globe. Timely and expansive, Defeating Disinformation develops a global approach to address these tensions while maintaining, and even enhancing, the social contribution of platforms. This title is also available as open access on Cambridge Core.
Providing an in-depth treatment of an exciting research area, this text's central topics are initial algebras and terminal coalgebras, primary objects of study in all areas of theoretical computer science connected to semantics. It contains a thorough presentation of iterative constructions, giving both classical and new results on terminal coalgebras obtained as limits of canonical chains and initial algebras obtained as colimits. These constructions are also developed in enriched settings, especially those enriched over complete partial orders and complete metric spaces, connecting the book to topics like domain theory. Also included are an extensive treatment of set functors and the first book-length presentation of the rational fixed point of a functor, as well as lifting results that connect fixed points of set functors with fixed points of endofunctors on other categories. Representing more than fifteen years of work, this will be the leading text on the subject for years to come.
To address the problems of numerous path inflection points, unsmooth paths, and poor local obstacle avoidance in the path planning of inspection robots operating in static-dynamic scenes under the complex geological conditions of coal mine roadways, a hybrid path planning method based on an improved A* algorithm and the dynamic window approach (DWA) is proposed. First, the inspection robot platform and system model are constructed, and an improved heuristic function incorporating target weight information is proposed for the A* global path planning algorithm. Additionally, redundant nodes are eliminated, and the path is smoothed using the Floyd algorithm and B-spline curves. Second, the global A* algorithm and the local DWA planner are fused: key nodes extracted from the improved A* global path are set as successive local target points for the DWA algorithm, which carries out the dynamic path planning. On this basis, a grid map is established to simulate and analyze the proposed path planning algorithm. Finally, autonomous path planning and locomotion experiments with the inspection robot are carried out in a simulated roadway environment. The results show that the proposed hybrid method is more efficient and safer, and can meet the motion requirements of inspection robots in coal mine roadways.
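To make the fusion step concrete: one simple way to extract the key nodes handed to DWA as local goals is to keep only the inflection points of the global grid path, as in this small sketch (an illustration, not code from the paper).

```python
# Key-node extraction from a global grid path: keep the start, the goal,
# and every point where the step direction changes.
def key_nodes(path):
    keep = [path[0]]
    for a, b, c in zip(path, path[1:], path[2:]):
        if (b[0] - a[0], b[1] - a[1]) != (c[0] - b[0], c[1] - b[1]):
            keep.append(b)  # direction changes at b: an inflection point
    keep.append(path[-1])
    return keep

# Each key node in turn becomes the local goal handed to the DWA planner.
print(key_nodes([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
# -> [(0, 0), (2, 0), (2, 2)]
```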
In this paper, we provide a systematic review of existing artificial intelligence (AI) regulations in Europe, the United States, and Canada. We build on the qualitative analysis of 129 AI regulations (enacted and not enacted) to identify patterns in regulatory strategies and in AI transparency requirements. Based on the analysis of this sample, we suggest that there are three main regulatory strategies for AI: AI-focused overhauls of existing regulation, the introduction of novel AI regulation, and the omnibus approach. We argue that although these types emerge as distinct strategies, their boundaries are porous as the AI regulation landscape is rapidly evolving. We find that across our sample, AI transparency is effectively treated as a central mechanism for meaningful mitigation of potential AI harms. We therefore focus on AI transparency mandates in our analysis and identify six AI transparency patterns: human in the loop, assessments, audits, disclosures, inventories, and red teaming. We contend that this qualitative analysis of AI regulations and AI transparency patterns provides a much-needed bridge between the policy discourse on AI, which is all too often bound up in very detailed legal discussions, and applied sociotechnical research on AI fairness, accountability, and transparency.
Pouch-type actuators have recently garnered significant interest and are increasingly utilized in diverse fields, including soft wearable robotics and prosthetics, largely due to their light weight, high output force, and low cost. However, their inherent hysteresis behavior markedly affects the stability and force control of pouch-driven systems. This study proposes a modified generalized Prandtl–Ishlinskii (MGPI) model, which includes generalized play operators, the tangent envelope function, and one-sided dead-zone operators, to describe the asymmetric and non-convex hysteresis characteristics of pouch-type actuators. Compared to a classical Prandtl–Ishlinskii (PI) model incorporating one-sided dead-zone functions, the MGPI model exhibits smaller relative errors at six different air pressures, demonstrating its capability to accurately describe asymmetric and non-convex hysteresis curves. The MGPI hysteresis model is then integrated with displacement sensing technology to establish a load compensation control system for maintaining human posture. Four healthy subjects were recruited for a 1 kg load compensation test, achieving efficiencies of 85.84%, 84.92%, 83.63%, and 68.86%, respectively.
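For reference, the classical PI model that the MGPI model is compared against superposes weighted play (backlash) operators; the sketch below uses the standard discrete play-operator recursion with illustrative thresholds and weights (the MGPI extensions are not reproduced here).

```python
# Classical Prandtl-Ishlinskii model: weighted sum of play operators.
# Play recursion: y_i[k] = max(x[k] - r_i, min(x[k] + r_i, y_i[k-1])).
import numpy as np

def pi_model(x, thresholds, weights):
    """Evaluate the PI hysteresis output over an input sequence x."""
    y = np.zeros(len(thresholds))  # internal operator states
    out = []
    for xk in x:
        y = np.maximum(xk - thresholds, np.minimum(xk + thresholds, y))
        out.append(float(weights @ y))
    return out

r = np.array([0.0, 0.1, 0.2, 0.3])   # illustrative play thresholds
w = np.array([0.5, 0.3, 0.15, 0.05]) # illustrative weights
print(pi_model([0.0, 0.5, 1.0, 0.5, 0.0], r, w))
```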
Real-time systems need to be built out of tasks for which the worst-case execution time is known. To enable accurate estimates of worst-case execution time, some researchers propose to build processors that simplify that analysis. These architectures are called precision-timed machines or time-predictable architectures. But what does this term mean? This paper explores the meaning of time predictability and how it can be quantified. We show that time predictability is hard to quantify. Rather, the worst-case performance of the combination of a processor, a compiler, and a worst-case execution time analysis tool is the important property in the context of real-time systems. Note that the actual software also influences the worst-case performance. We propose to define a standard set of benchmark programs that can be used to evaluate a time-predictable processor, a compiler, and a worst-case execution time analysis tool. We define worst-case performance as the geometric mean of worst-case execution time bounds on a standard set of benchmark programs.
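The proposed metric is directly computable; a minimal sketch with hypothetical per-benchmark bounds:

```python
# Worst-case performance as the geometric mean of per-benchmark WCET bounds.
import math

def worst_case_performance(wcet_bounds):
    """Geometric mean, computed in log space for numerical robustness."""
    return math.exp(sum(math.log(b) for b in wcet_bounds) / len(wcet_bounds))

print(worst_case_performance([1200, 3400, 870]))  # hypothetical bounds (cycles)
```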