Additive manufacturing (AM) enables the production of innovative, lightweight component designs in the aerospace industry. However, AM processes introduce new production feasibility considerations that must be addressed during product development. Therefore, engineers require effective design support and a new design approach to fully exploit AM’s capabilities while balancing its constraints. Through an interview study involving 20 AM aerospace industry professionals from nine countries and 10 organisations, this research identifies AM design opportunities and challenges and explores the design supports used to realise the former and overcome the latter. The findings indicate that Laser Powder Bed Fusion is a predominant AM process in aeronautical and space applications. Further, the study identifies practical and computational design supports, describes how AM design is approached during product development and provides a model outlining a general AM design approach. Key AM design challenges identified include insufficient knowledge of material properties, limited sharing of design knowledge and a lack of understanding of the relationship between AM design and post-processing requirements. Consequently, skills gaps and educational needs for Design for AM in aerospace engineering are highlighted. Additionally, the study suggests that further AM aerospace standards, enhanced computer-aided engineering software for AM and artificial intelligence integration could improve design support.
In this paper, we show that if $\mathscr{C}$ is a category and $F\colon \mathscr{C}^{\textrm{op}} \to \mathfrak{Cat}$ is a pseudofunctor such that for each object $X$ of $\mathscr{C}$ the category $F(X)$ is a tangent category, for each morphism $f$ of $\mathscr{C}$ the functor $F(f)$ is part of a strong tangent morphism $(F(f), {}_{f}\alpha)$, and furthermore the natural transformations ${}_{f}\alpha$ vary pseudonaturally in $\mathscr{C}^{\textrm{op}}$, then there is a tangent structure on the pseudolimit $\mathbf{PC}(F)$ induced by the tangent structures on the categories $F(X)$ together with how they vary through the functors $F(f)$. We use this observation to show that the forgetful $2$-functor $\operatorname{Forget}\colon \mathfrak{Tan} \to \mathfrak{Cat}$ creates and preserves pseudolimits indexed by $1$-categories. As an application, this allows us to describe how equivariant descent interacts with the tangent structures on the category of smooth (real) manifolds and on various categories of (algebraic) varieties over a field.
Learners’ confidence in using a second language (L2 self-efficacy) and their L2 grit are key psychological factors in developing intercultural competence (ICC). As English as a foreign language (EFL) students increasingly encounter diverse cultures through informal digital learning of English (IDLE), this study examines whether IDLE serves as a pathway connecting these psychological traits to ICC. Grounded in the broaden-and-build theory, this explanatory mixed-methods research investigates how L2 self-efficacy and grit contribute to ICC through IDLE among 416 Chinese EFL students. Structural equation modeling revealed that higher L2 self-efficacy fosters greater L2 grit, which in turn promotes more frequent engagement in both receptive IDLE activities (e.g. watching English media) and productive ones (e.g. participating in online conversations). This increased engagement was positively linked to higher levels of ICC. Qualitative findings further illuminated the mechanisms behind this process, illustrating how psychological strengths support meaningful digital encounters across cultures. The findings offer pedagogical insights: by cultivating students’ self-efficacy and grit, educators can encourage deeper engagement in IDLE, thereby equipping learners for effective and culturally sensitive communication in an increasingly interconnected digital world.
The first word in the title is intended in a sense suggested by Lawvere and Schanuel whereby finite sets are objective natural numbers. At the objective level, the axioms defining abstract Mackey and Tambara functors are categorically familiar. The first step was taken by Harald Lindner in 1976 when he recognized that Mackey functors, defined as pairs of functors, were equivalently single functors with domain a category of spans. In 1993, Tambara recognized that TNR-functors (that is, functors designed to have abstract trace, norm and restriction operations, and now called Tambara functors) were equivalently certain functors out of a category of polynomials. We define objective Mackey and objective Tambara functors as parametrized categories that have local finite products and satisfy some parametrized completeness and cocompleteness restrictions. However, we can replace the original parametrizing base for objective Mackey functors by a bicategory of spans while the replacement for objective Tambara functors is a bicategory obtained by iterating the span construction; these iterated spans are polynomials. There is an objective Mackey functor of ordinary Mackey functors. We show that there is a distributive law relating objective Mackey functors to objective Tambara functors analogous to the distributive law relating abelian groups to commutative rings. We remark on hom enrichment matters involving the 2-category $\textrm{Cat}_{+}$ of categories admitting finite coproducts and functors preserving them, both as a closed base and as a skew-closed base.
Lower limb-assisted exoskeletons can provide payload and support, but the hip joints of current lower limb-assisted exoskeletons suffer from single-mechanism designs, few degrees of freedom, and limited coverage of gait types. To address these problems, a novel double-cam hip joint assist mechanism is proposed for walking, running, and load-carrying gaits. A double cam and two rectangular compression springs with different stiffnesses are used to satisfy the differing assistance requirements of the exoskeleton across multiple gaits. First, biomechanical simulations of walking, running, and carrying movements are carried out in OpenSim to obtain hip joint angle and torque data, and the assist mechanism is then structurally designed. Hip joint angle planning and contour solving are carried out for the cam profiles, so that the cams provide assistance matched to different hip joint angles; the stiffness of the compression springs is determined by D’Alembert’s principle, and the assist torques are analyzed. Meanwhile, a human–machine coupling model is established for theoretical analysis. Finally, muscle power change curves are exported from OpenSim, and the assistance is verified by comparison.
It is well known that almost all graphs are canonizable by a simple combinatorial routine known as colour refinement, also referred to as the 1-dimensional Weisfeiler–Leman algorithm. With high probability, this method assigns a unique label to each vertex of a random input graph and, hence, it is applicable only to asymmetric graphs. The strength of combinatorial refinement techniques becomes a subtle issue if the input graphs are highly symmetric. We prove that the combination of colour refinement and vertex individualization yields a canonical labelling for almost all circulant digraphs (i.e., Cayley digraphs of a cyclic group). This result provides the first evidence of good average-case performance of combinatorial refinement within the class of vertex-transitive graphs. Remarkably, we do not even need the full power of the colour refinement algorithm. We show that the canonical label of a vertex $v$ can be obtained just by counting walks of each length from $v$ to an individualized vertex. Our analysis also implies that almost all circulant graphs are compact in the sense of Tinhofer, that is, their polytopes of fractional automorphisms are integral. Finally, we show that a canonical Cayley representation can be constructed for almost all circulant graphs by the more powerful 2-dimensional Weisfeiler–Leman algorithm.
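The colour refinement routine itself is very short. The following is a minimal sketch of the standard 1-WL iteration on a digraph in adjacency-list form, written for illustration; it is not the paper's implementation, and the representation (a dict mapping each vertex to its out-neighbours) is an assumption:

```python
def colour_refinement(adj):
    """Colour refinement (1-dimensional Weisfeiler-Leman).

    `adj` maps each vertex to a list of its out-neighbours. Starting
    from a uniform colouring, repeatedly replace each vertex's colour
    by (old colour, multiset of neighbours' colours) until the colour
    partition stops refining, then return the stable colouring.
    """
    colour = {v: 0 for v in adj}
    while True:
        # Signature = old colour plus sorted multiset of neighbour colours.
        signature = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
                     for v in adj}
        # Canonically renumber the distinct signatures 0, 1, 2, ...
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: palette[signature[v]] for v in adj}
        if new_colour == colour:  # partition is stable: done
            return colour
        colour = new_colour
```

On a path 0–1–2, for example, the endpoints receive one colour and the middle vertex another; on a highly symmetric graph such as a directed cycle, every vertex keeps the same colour, which is exactly why the paper adds vertex individualization.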
Plan repair is the problem of solving a given planning problem by using a solution plan of a similar problem. This paper presents the first approach where the repair has to be done optimally, that is, we aim at finding a minimum distance plan from an input plan; we do so by introducing a number of compilation schemes that convert a classical planning problem into another where optimal plans correspond to plans with the minimum distance from an input plan. We also address the problem of finding a minimum distance plan from a set of input plans, instead of just one plan. Our experiments using a number of planners show that such a simple approach can solve many problems optimally and more effectively than replanning from scratch for a large number of cases. Also, the approach proves competitive with ${\mathsf{LPG}\textrm{-}\mathsf{adapt}}$, a state-of-the-art approach for the plan repair problem.
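To make the "minimum distance plan" objective concrete, here is an illustrative sketch using one common plan-distance measure, the symmetric difference of the plans' action sets; the abstract does not specify which distance the paper uses, so the metric and the helper names are assumptions:

```python
def plan_distance(plan_a, plan_b):
    """Symmetric-difference distance between the action sets of two plans.

    One common plan-stability metric: the number of actions that appear
    in exactly one of the two plans. The paper may use a different measure.
    """
    a, b = set(plan_a), set(plan_b)
    return len(a - b) + len(b - a)


def min_distance_plan(candidate_plans, input_plan):
    """Among candidate solution plans, return the one closest to the input plan."""
    return min(candidate_plans, key=lambda p: plan_distance(p, input_plan))
```

The compilation schemes in the paper bake such a distance into the planning problem's cost function, so that an optimal planner minimising plan cost automatically minimises the distance to the input plan.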
Cyber risk is an important consideration in today’s risk management and insurance industries. However, the statistical features of cyber risk, including concerns of solvency for cyber insurance providers, are still emerging. This study investigates the dynamics of ransomware severity, specifically focusing on different statistical dimensions of extortion payments from ransomware attacks across various ransomware strains and/or variants. Our results indicate that extortion payments are not identically distributed across ransomware strains/variants, and thus violate necessary assumptions for solvency determinations using classical ruin theory. These findings emphasize the importance of re-examining these assumptions under empirical data and implementing dynamic cyber risk modelling for portfolio losses from extortion payments from ransomware attacks. Additionally, such findings suggest that removing coverage for extortion payments from insurance policies may protect cyber insurance firms from insolvency, as well as create a potential deterrence effect against ransomware threat actors due to the lack of extortion payments from victims. Our work has implications for insurance regulators, policymakers, and national security advisors focused on the financial impact of extortion payments from ransomware attacks.
Managing cognitive load is central to designing interactive systems, particularly within augmented reality (AR) environments that impose complex and immersive demands. This study investigates, in two parts, two complementary approaches to managing cognitive load in AR: refining interaction modalities and integrating adaptive physiological feedback. In Part 1, eye-tracking and hand-based modalities are evaluated across tasks of varying difficulty, using skin conductance responses (SCRs) as a proxy for cognitive load. Results show that while hand gestures improved task performance in simple tasks, cognitive load levels were comparable across modalities. In Part 2, an adaptive feedback system based on a signal-derived metric, cumulative SCR (CSCR), is developed to trigger short rest interventions during sustained cognitive load. Statistical analyses illustrate that rest interventions significantly reduced cumulative cognitive load, though their effect on task performance was inconclusive. These findings emphasize the trade-offs between cognitive relief and performance continuity and highlight the potential of physiologically adaptive systems in supporting cognitive-aware interaction design.
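The trigger logic of such a CSCR-based adaptive system can be sketched in a few lines. The threshold value and the reset-after-rest policy below are illustrative assumptions, not the study's actual parameters:

```python
def rest_triggers(scr_amplitudes, threshold):
    """Accumulate SCR amplitudes into a cumulative SCR (CSCR) signal and
    record the indices where a rest intervention would be triggered.

    The accumulator resets after each trigger, modelling the assumption
    that a rest intervention relieves the accumulated load. Both the
    threshold and the reset policy are illustrative, not the paper's.
    """
    cscr = 0.0
    triggers = []
    for i, amplitude in enumerate(scr_amplitudes):
        cscr += amplitude
        if cscr >= threshold:
            triggers.append(i)  # intervene: schedule a short rest here
            cscr = 0.0          # assume the rest resets accumulated load
    return triggers
```

For a stream of per-event SCR amplitudes, the function returns the event indices at which rests would be inserted, which is the behaviour the Part 2 system evaluates.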
Collective memories of intergroup history persist as dynamic structures that shape how societies perceive foreign others. This article proposes a framework for understanding stereotypes rooted in collective memory as both premises for journalistic coverage – guiding story selection – and tools within it, offering adaptable templates for framing. Analysing Israeli media’s coverage of Poland across two decades of conflict, conciliation, and routine reporting, I show how journalists reproduce and renegotiate stereotyped perceptions, clarifying their dual role as memory agents: sustaining stereotype-laden perceptions anchored in collective memory, while recalibrating these perceptions in light of shifting political and narrative contexts. The study foregrounds journalism’s dual role in carrying forward and adapting the collective memory structures through which foreign nations are perceived.
Multiple mobile manipulators (MMs) show superiority in tasks requiring mobility and dexterity compared with a single robot, especially when manipulating/transporting bulky objects. However, the closed-chain constraint of the system, the redundancy of each MM, and obstacles in the environment bring challenges to the motion planning problem. In this paper, we propose a novel semi-coupled hierarchical framework (SCHF), which decomposes the problem into two semi-coupled sub-problems. To be specific, the centralized layer plans the object’s motion first, and then the decentralized layer independently explores the redundancy of each robot in real-time. A notable feature is that the lower bound of the redundancy constraint metric is ensured, besides the closed-chain and obstacle-avoidance constraints in the centralized layer, which ensures the object’s motion can be executed by each robot in the decentralized layer. Simulation results show that the success rate and time cost of SCHF significantly outperform those of a fully centralized planner and a fully decoupled hierarchical planner. In addition, cluttered real-world experiments also show the feasibility of the SCHF in transportation tasks. A video clip of various scenarios can be found at https://youtu.be/Y8ZrnspIuBg.
Drought forecasting is a critical tool for mitigating the severe impacts of water scarcity, particularly in regions like North Benin, where agriculture is a cornerstone of livelihoods. Despite the vital importance of accurate prediction for resource management, the inability to quantify uncertainties in forecasts remains a significant obstacle to more informed and trustworthy decision-making. This study therefore aims to develop an uncertainty-aware prediction model for drought forecasting in six key localities within the Alibori department—Banikoara, Gogounou, Kandi, Karimama, Malanville, and Segbana—each facing unique challenges due to drought. To achieve this, we conducted a comprehensive experiment involving six machine learning models (linear regression, ridge regression, random forest, XGBoost, LightGBM, and SVM) and four deep learning models (Conv1D, LSTM, GRU, and Conv1D-LSTM) using the Standardized Precipitation Index at a 6-month scale. To address the uncertainty quantification challenge, we employed the Ensemble Batch Prediction Interval, a conformal prediction method specifically designed for time series data. Our comparative analysis, framed within the Borda count methodology, utilized performance metrics such as R2, RMSE, MSE, and carbon footprint, as well as uncertainty quantification metrics, including empirical coverage and the width of prediction intervals. The top-performing models achieved $ {R}^2 $ scores of 98.29, 97.84, 97.76, 97.42, 96.61, and 97.07%, and prediction interval coverages of 0.94, 0.79, 0.93, 0.77, 0.73, and 0.93, respectively, for Banikoara, Gogounou, Malanville, Kandi, Segbana, and Karimama. The Conv1D-LSTM model stood out as the most effective, offering an optimal balance between predictive accuracy and uncertainty coverage.
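The two uncertainty metrics the study reports, empirical coverage and interval width, are simple to compute once a method such as EnbPI has produced per-observation prediction intervals. A minimal sketch (function names are illustrative, not from the paper):

```python
def empirical_coverage(y_true, lower, upper):
    """Fraction of observations that fall inside their prediction intervals.

    A coverage of 0.94 means 94% of the held-out observations landed
    between their interval bounds, matching the paper's reporting style.
    """
    hits = sum(lo <= y <= hi for y, lo, hi in zip(y_true, lower, upper))
    return hits / len(y_true)


def mean_interval_width(lower, upper):
    """Average width of the prediction intervals (narrower is sharper)."""
    return sum(hi - lo for lo, hi in zip(lower, upper)) / len(lower)
```

Comparing models on these two numbers together captures the trade-off the abstract describes: a model can trivially reach high coverage by issuing very wide intervals, so the best model balances coverage near the nominal level against narrow widths.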
Societal challenges such as climate change and health inequalities require complex policy decisions, for which governmental organizations rely on a good information position. Having access to data from various domains is seen as a facilitator of making evidence-informed decisions that are more legitimate and less uncertain. To identify and make available data that is stored at various organizations, stakeholders participate in sociotechnical networks, also known as data ecosystems. Data ecosystems aimed at addressing societal challenges are characterized as complex because knowledge about societal issues is uncertain, information is scattered among (governmental) actors, collaboration extends beyond existing organizational networks, and the values and interests of network actors can be conflicting. In this translational article, we examine how to successfully establish and maintain data ecosystems aimed at addressing societal challenges, given these complexities. We analyze two cases of successful data ecosystems in the Netherlands and present five narratives about how these data ecosystems navigated these complexities. We find that establishing collaboration among network actors, using bottom-up approaches, contributed to the success of both cases. The cases created structures in which participants were able to prioritize the right questions, find common interests, and work together. The narratives present insights for government officials about collaboration in data ecosystems and add to the literature by highlighting the importance of organizational capabilities.
This chapter introduces linear cryptanalysis from the point of view that historically led to its discovery. This “original” description has the advantage of being concrete, but it is not very effective. However, it raises important questions that motivate later chapters.
This chapter details the mathematical tools and techniques required by some of the advanced algorithms. Beginners may choose to skip this section and refer back to it as needed. The chapter discusses the spectral theorem, density matrices and the partial trace, Schmidt decomposition and state purification, as well as various operator decompositions.
The main extensions of linear cryptanalysis were introduced in previous chapters; they are multiple, multidimensional, and zero-correlation linear cryptanalysis. However, these are far from the only extensions proposed in the literature. This chapter is a tour of some of the most important proposals. Most of the extensions of linear cryptanalysis discussed in this chapter are partly conjectural: they show how certain combinatorial properties might be used to attack cryptographic primitives, but do not provide a clear way to analyze or find these properties. Chapter 11 returns to this issue.