Inequality is a critical global issue, particularly in the United States, where economic disparities are among the most pronounced. Social justice research traditionally studies attitudes towards inequality—perceptions, beliefs, and judgments—using latent variable approaches. Recent scholarship adopts a network perspective, showing that these attitudes are interconnected within inequality belief systems. However, scholars often compare belief systems using split-sample approaches without examining how emotions, such as anger, shape these systems. Moreover, they rarely investigate Converse’s seminal idea that changes in central attitudes can lead to broader shifts in belief systems. Addressing these gaps, we applied a tripartite analytical strategy using U.S. data from the 2019 ISSP Social Inequality module. First, we used a mixed graphical model to demonstrate that inequality belief systems form cohesive small-world networks, with perception of large income inequality and belief in public redistribution as central nodes. Second, a moderated network model revealed that anger towards inequality moderates nearly one-third of network edges, consolidating the belief system by polarizing associations. Third, Ising model simulations showed that changes to central attitudes produce broader shifts across the belief system. This study advances belief system research by introducing innovative methods for comparing structures and testing dynamics of attitude change. It also contributes to social justice research by integrating emotional dynamics and highlighting anger’s role in structuring inequality belief systems.
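The Converse-style cascade tested in the third step can be illustrated with a toy Ising simulation: clamp a central attitude node and let the remaining binary attitudes relax under Glauber dynamics, then compare the resulting mean of the other attitudes for opposite clamp values. The node names and coupling weights below are purely illustrative assumptions, not the model fitted to the ISSP data.

```python
import math
import random

# Illustrative belief network: binary attitudes (+1/-1) with assumed couplings.
EDGES = {
    ("perception_large_inequality", "support_redistribution"): 1.0,
    ("support_redistribution", "gov_responsibility"): 0.8,
    ("perception_large_inequality", "taxes_rich_too_low"): 0.6,
}
NODES = sorted({n for pair in EDGES for n in pair})
CENTRAL = "perception_large_inequality"

def local_field(state, node):
    """Weighted sum of neighbouring attitudes acting on `node`."""
    h = 0.0
    for (a, b), w in EDGES.items():
        if a == node:
            h += w * state[b]
        elif b == node:
            h += w * state[a]
    return h

def mean_other_attitudes(clamp_value, beta=2.0, burn=1000, sample=2000, seed=0):
    """Clamp the central node, Glauber-sample the rest, and return the
    time-averaged mean of the non-central attitudes."""
    rng = random.Random(seed)
    state = {n: rng.choice([-1, 1]) for n in NODES}
    state[CENTRAL] = clamp_value
    others = [n for n in NODES if n != CENTRAL]
    total = 0.0
    for step in range(burn + sample):
        node = rng.choice(others)  # the central node stays clamped
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * local_field(state, node)))
        state[node] = 1 if rng.random() < p_up else -1
        if step >= burn:
            total += sum(state[n] for n in others) / len(others)
    return total / sample
```

Flipping the clamped value of the central node drags the rest of the toy network with it, which is the qualitative effect the simulations probe.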
The human hand’s exceptional dexterity and compliance, derived from its rigid-soft coupling structure and tendon-driven interphalangeal coordination, inspire robotic grippers capable of versatile grasping and force adaptation. Traditional rigid manipulators lack compliance for delicate tasks, while soft robots often suffer from instability and low load capacity. To bridge this gap, we propose a biomimetic multi-joint composite finger integrating 3D-printed rigid phalanges (46–51 mm) with dual fabric-reinforced pneumatic bladders, mimicking human finger biomechanics. This hybrid design combines hinge-jointed rigidity and anisotropic fabric constraints, enabling two rotational degrees of freedom with higher radial stiffness and a critical burst pressure (240 kPa) 2.18× that of non-reinforced bladders, while preserving axial compliance. Experimental validation demonstrates a 4.77 N maximum fingertip force at 200 kPa and rapid recovery (< 2 s) post-impact. The composite finger exhibits human-like gestures (enveloping, pinching, flipping) and adapts to irregular/fragile objects (e.g., eggs, screws) through coordinated bladder actuation. Assembled into a modular gripper, it sustains 1 kg payloads and executes thin-object flipping via proximal-distal joint synergy. This rigid-soft coupling design bridges compliance and robustness, offering high environmental adaptability for applications in industrial automation, human–robot interaction, and delicate manipulation.
Implementing changes to digital health systems in real-life contexts poses many challenges. Design as a field has the potential to tackle some of these. This article illustrates how design knowledge, through published literature, is currently referenced in relation to the implementation of digital health. To map design literature’s contribution to this field, we conducted a scoping review on digital health implementation publications and their use of references from nine prominent design journals. The search in Scopus and Web of Science yielded 382 digital health implementation publications, of which 70 were included for analysis. From those, we extracted data on publication characteristics and how they cited the design literature. The 70 publications cited 58 design articles, whose characteristics were also extracted. The results show that design is mainly cited to provide information about specific design methods and approaches, guidelines for using them and evidence of their benefits. Examples of referenced methods and approaches were co-design, prototyping, human-centered design, service design, understanding user needs and design thinking. The results thus show that design knowledge primarily contributed to digital health implementation with insights into methods and approaches. In addition, our method showcases a new way of understanding how design literature influences other fields.
We define the generalized equilibrium distribution, that is, the equilibrium distribution of a random variable with support in $\mathbb{R}$. This concept allows us to prove a new probabilistic generalization of Taylor’s theorem. The generalized equilibrium distribution of two ordered random variables is then considered, and a probabilistic analog of the mean value theorem is proved. Results regarding distortion-based models and mean-median-mode relations are illustrated as well. Conditions for the unimodality of such distributions are obtained. We show that various stochastic orders and aging classes are preserved under the proposed equilibrium transformations. Further applications are provided in actuarial science, aiming to employ the new unimodal equilibrium distributions for some risk measures, such as Value-at-Risk and Conditional Tail Expectation.
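For orientation, the classical objects being generalized can be sketched as follows. This is a statement of the standard definitions (the equilibrium distribution of a nonnegative random variable, and the probabilistic mean value theorem in the form due to Di Crescenzo), which the paper extends to support in $\mathbb{R}$.

```latex
% Classical equilibrium density of a nonnegative random variable X
% with cdf F and finite mean \mu = \mathbb{E}[X]:
f_e(x) \;=\; \frac{1 - F(x)}{\mu}, \qquad x \ge 0.

% Probabilistic mean value theorem: for X \le_{st} Y with
% \mathbb{E}[X] < \mathbb{E}[Y] and g differentiable with suitable
% integrability conditions,
\mathbb{E}[g(Y)] - \mathbb{E}[g(X)]
  \;=\; \mathbb{E}[g'(Z)]\,\bigl(\mathbb{E}[Y] - \mathbb{E}[X]\bigr),
\qquad
f_Z(z) \;=\; \frac{F_X(z) - F_Y(z)}{\mathbb{E}[Y] - \mathbb{E}[X]}.
```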
Design-by-analogy (DbA) is a powerful method for product innovation design, leveraging multidomain design knowledge to generate new ideas. Previous studies have relied heavily on designers’ experiences to retrieve analogical knowledge from other domains, lacking a structured method to organize and understand multidomain analogical knowledge. This presents a significant challenge in recommending high-quality analogical sources, which needs to be addressed. To tackle these issues, a knowledge graph-assisted DbA approach via structured analogical knowledge retrieval is proposed. First, an improved function-effect-structure ontology model is constructed to extract functions and effects as potential analogical sources, six semantic matching rules are established to output entity triplets, and the DbA knowledge graph (DbAKG) is developed. Second, based on the semantic relationships in the DbAKG, the domain distance and similarity between the design target and the analogical sources are introduced to establish an analogical value model, ensuring the novelty and feasibility of analogical sources. Then, with function as the design target, an analogical-source transfer strategy is formed to support innovative solution generation, and TRIZ theory is used to resolve design conflicts. Finally, a pipeline inspection robot case study is employed to verify the proposed approach. Additionally, a knowledge graph-assisted analogical design system has been developed to assist in managing multidomain knowledge and the analogical process, facilitate the adoption of innovative design strategies, and assist companies in providing more competitive products to seize the market.
Underwater robots conducting inspections require autonomous obstacle avoidance capabilities to ensure safe operations. Training methods based on reinforcement learning (RL) can effectively develop autonomous obstacle avoidance strategies for underwater robots; however, training in real environments carries significant risks and can easily result in robot damage. This paper proposes a Sim-to-Real pipeline for RL-based training of autonomous obstacle avoidance in underwater robots, addressing the challenges associated with training and deploying RL methods for obstacle avoidance in this context. We establish a simulation model and environment for underwater robot training based on the mathematical model of the robot, comprehensively reducing the gap between simulation and reality in terms of system inputs, modeling, and outputs. Experimental results demonstrate that our high-fidelity simulation system effectively facilitates the training of autonomous obstacle avoidance algorithms, achieving a 94% success rate in obstacle avoidance and collision-free operation exceeding 5000 steps in virtual environments. The trained strategy, transferred directly to a real robot, successfully performed obstacle avoidance in a pool, validating the effectiveness of our method for autonomous strategy training and sim-to-real transfer in underwater robots.
Our family album is often the first medium through which we encounter war: nestled in the heart of home life and revisited throughout childhood, its pages intertwine peacetime photos of vacations and gatherings with wartime images featuring smiling soldiers and pastoral landscapes from missions abroad, blending these contrasting realities into one familiar story. This article introduces, for the first time, this overlooked heritage, tracing its roots to WWI – the first conflict photographed by the public. With the outbreak of war, the amateur photography industry, focused on leisure and holidays, came to a halt. Kodak found an unexpected solution: rebranding the camera as a tool to transform harsh realities into peaceful moments by capturing images that portrayed war as joyfully as a summer vacation. It marketed the zoom as a way to avoid violence by keeping it out of the frame while promoting one-click shooting as a means to preserve fleeting moments of beauty amid chaos. The flash was positioned as a source of optimism in dark times, and the family album was framed as a nostalgic object creating a view of the ongoing war as if it had already ended. Capitalizing on witnesses’ longing for peace, this campaign achieved unprecedented success, establishing norms for amateur war photography. This article defines this model that shapes how we see, capture, and share the experience of war, acquiring renewed significance as amateur war photography expands from family albums to the global reach of social media.
This study investigates the applicability of generative artificial intelligence (AI) in early-stage architectural design by evaluating the daylight performance of AI-generated sustainable housing plans across five distinct climate zones. A three-phase methodology was implemented: (1) plan generation using text-to-image diffusion models (ChatGPT, Copilot, and LookX); (2) digital reconstruction in AutoCAD; and (3) daylight simulation via Velux Daylight Visualizer. Climate-adaptive prompts were formulated to guide the AI tools in producing context-specific floor plans with passive strategies. Out of 31 initial plans, eight valid outputs (five from ChatGPT and three from Copilot) were reconstructed in AutoCAD and simulated. Quantitative simulations were conducted on equinox and solstice dates, and average illuminance values were analyzed for key interior spaces (living room, kitchen, and bedroom). ChatGPT-generated plans demonstrated higher spatial clarity and more balanced daylight performance, whereas Copilot outputs varied significantly, and LookX was excluded due to insufficient architectural legibility. Results revealed that none of the models consistently integrated solar orientation or seasonal lighting considerations, indicating a gap between generative representation and environmental logic. The research contributes a replicable workflow that bridges generative AI and performance-based evaluation, offering critical insight into the current limitations and future potential of AI-assisted architectural design. The findings underscore the need for next-generation AI systems capable of semantic, spatial, and climatic reasoning to support environmentally responsive design practices.
The Grothendieck construction establishes an equivalence between fibrations (also known as fibred categories) and indexed categories, and is one of the fundamental results of category theory. Cockett and Cruttwell introduced the notion of fibrations into the context of tangent categories and proved that the fibres of a tangent fibration inherit a tangent structure from the total tangent category. The main goal of this paper is to provide a Grothendieck construction for tangent fibrations. Our first attempt focuses on providing a correspondence between tangent fibrations and indexed tangent categories, which are collections of tangent categories and tangent morphisms indexed by the objects and morphisms of a base tangent category. We show that this construction inverts Cockett and Cruttwell’s result, but it does not provide a full equivalence between these two concepts. To understand how to define a genuine Grothendieck equivalence in the context of tangent categories, inspired by Street’s formal approach to monad theory, we introduce a new concept: tangent objects. We show that tangent fibrations arise as tangent objects of a suitable $2$-category, and we employ this characterisation to lift the Grothendieck construction between fibrations and indexed categories to a genuine Grothendieck equivalence between tangent fibrations and tangent indexed categories.
This text accompanies the performance A Foot, A Mouth, A Hundred Billion Stars, which premiered at the Lapworth Museum of Geology in the United Kingdom on 18 March 2023, as part of the Flatpack film festival. It includes both the text and a film version, developed during a residency at the museum. Over 18 months, I had full access to the collection and archives, selecting objects that served as prompts for stories about time and memory. A central theme of the work is slippage – misremembering and misunderstanding – as a generative methodology for exploring the connection between the collection, our past, and possible futures.
A Foot, A Mouth, A Hundred Billion Stars combines analogue media and digital technologies to examine our understanding of remembering and forgetting. I used a live digital feed and two analogue slide projectors to explore the relationships between image and memory. This article does not serve as a guide to the performance but instead reflects on the process and the ideas behind the work. My goal is to share my practice of rethinking memory through direct engagement with materials. In line with the performance’s tangential narrative, this text weaves together diverse references, locations, thoughts, and ideas, offering a deeper look into the conceptual framework of the work.
Earth’s forests play an important role in the fight against climate change and are in turn negatively affected by it. Effective monitoring of different tree species is essential to understanding and improving the health and biodiversity of forests. In this work, we address the challenge of tree species identification by performing tree crown semantic segmentation using an aerial image dataset spanning over a year. We compare models trained on single images versus those trained on time series to assess the impact of tree phenology on segmentation performance. We also introduce a simple convolutional block for extracting spatio-temporal features from image time series, enabling the use of popular pretrained backbones and methods. We leverage the hierarchical structure of tree species taxonomy by incorporating a custom loss function that refines predictions at three levels: species, genus, and higher-level taxa. Our best model achieves a mean Intersection over Union (mIoU) of 55.97%, outperforming single-image approaches particularly for deciduous trees where phenological changes are most noticeable. Our findings highlight the benefit of exploiting the time series modality via our Processor module. Furthermore, leveraging taxonomic information through our hierarchical loss function often, and in key cases significantly, improves semantic segmentation performance.
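The taxonomy-aware idea behind such a loss can be sketched in a few lines: species probabilities are summed up to genus and higher-taxon probabilities, and a cross-entropy term is applied at each level, so a wrong species in the right genus is penalized less than a wrong genus. The class names, level weights, and exact formulation below are assumptions for illustration, not the authors' implementation.

```python
import math

# Hypothetical three-level taxonomy (species -> genus -> higher taxon).
SPECIES_TO_GENUS = {"acer_rubrum": "acer", "acer_saccharum": "acer",
                    "pinus_strobus": "pinus"}
GENUS_TO_TAXON = {"acer": "deciduous", "pinus": "conifer"}

def aggregate(probs, mapping):
    """Sum class probabilities that share a parent in the taxonomy."""
    out = {}
    for cls, p in probs.items():
        out[mapping[cls]] = out.get(mapping[cls], 0.0) + p
    return out

def hierarchical_loss(species_probs, true_species, weights=(1.0, 0.5, 0.25)):
    """Weighted cross-entropy at species, genus, and higher-taxon levels."""
    genus_probs = aggregate(species_probs, SPECIES_TO_GENUS)
    taxon_probs = aggregate(genus_probs, GENUS_TO_TAXON)
    true_genus = SPECIES_TO_GENUS[true_species]
    true_taxon = GENUS_TO_TAXON[true_genus]
    w_s, w_g, w_t = weights
    return (-w_s * math.log(species_probs[true_species])
            - w_g * math.log(genus_probs[true_genus])
            - w_t * math.log(taxon_probs[true_taxon]))
```

With this structure, a prediction that confuses two maples incurs only the species-level penalty, while confusing a maple with a pine is penalized at all three levels.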
A wrist-hand exoskeleton designed to assist individuals with wrist and hand limitations is presented in this paper. The novel design is developed based on specific selection criteria, addressing all the degrees of freedom (DOF). In the conceptual design phase, design concepts are created and assessed before being screened and scored to determine which concept is the most promising. Performance and possible restrictions are assessed using kinematic and dynamic analysis. Using polylactic acid material, the exoskeleton is prototyped to ensure structural integrity and fit. The control strategies investigated include manual control, master-slave control, and electroencephalography (EEG) dataset-based control. Manual control allows direct manipulation, whereas master-slave control uses sensors to map user motions. EEG dataset-based control interprets brain signals for hand opening and closing and manages the opening and closing of the exoskeleton’s hand. This study introduces a novel wrist-hand exoskeleton that improves usefulness, modularity, and mobility. While the various control techniques give versatility based on user requirements, the 3D printing process ensures personalization and flexibility in design.
A finite-time adaptive composite integral sliding mode control strategy based on a fast finite-time observer is proposed for trajectory tracking of the Stewart parallel robot, considering unmodeled uncertainties and external disturbances. First, a global finite-time-converging sliding mode surface composed of intermediate variables and integral terms is established to eliminate steady-state tracking errors. Next, a fast finite-time extended state observer is designed to compensate for uncertainties and external disturbances, improving the robustness of the control system. Finally, based on this, a finite-time sliding mode control law is designed. The gain value is adjusted through an adaptive reaching law to reduce sliding mode chattering, and global finite-time convergence of the system is theoretically proven using Lyapunov theory. Experimental verification shows that the proposed control strategy has stronger robustness to uncertainties and external disturbances, faster error convergence, less chattering, and higher steady-state accuracy.
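The core mechanism, an integral-type sliding surface driven by a switching term that dominates a bounded matched disturbance, can be sketched on a 1-DOF double integrator. This is a minimal illustration under assumed gains and a boundary-layer approximation of sign(s); the paper's controller additionally uses a fast finite-time observer and an adaptive reaching law on a 6-DOF platform.

```python
import math

def simulate(T=8.0, dt=1e-3, lam=2.0, ki=1.0, K=2.0, phi=0.01):
    """Track x_d(t) = sin(t) on a double integrator with a matched
    disturbance, using an integral sliding surface and a boundary layer."""
    x, xd, ze = 0.0, 0.0, 0.0  # position, velocity, integral of the error
    t = 0.0
    while t < T:
        r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)  # reference
        e, ed = x - r, xd - rd
        ze += e * dt
        s = ed + lam * e + ki * ze            # integral-type sliding surface
        sat = max(-1.0, min(1.0, s / phi))    # boundary layer replaces sign(s)
        u = rdd - lam * ed - ki * e - K * sat  # equivalent + switching control
        d = 0.5 * math.sin(3.0 * t)           # unknown disturbance, |d| < K
        xdd = u + d                           # plant: x'' = u + d
        x, xd = x + xd * dt, xd + xdd * dt
        t += dt
    return abs(x - math.sin(t))               # final tracking error
```

Because the switching gain K exceeds the disturbance bound, the surface s is driven into the boundary layer and the integral term removes the residual steady-state error, mirroring the role the surface plays in the paper.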
Federated learning (FL) is a machine learning technique that distributes model training to multiple clients while allowing clients to keep their data local. Although the technique allows one to break free from data silos while keeping data local, it requires an orchestrator, usually a central server, to coordinate the distributed training. Consequently, organisational issues of governance might arise and hinder its adoption in both competitive and collaborative markets for data. In particular, the question of how to govern FL applications is recurring for practitioners. This research commentary addresses this important issue by inductively proposing a layered decision framework to derive organisational archetypes for FL’s governance. The inductive approach is based on an expert workshop and post-workshop interviews with specialists and practitioners, as well as the consideration of real-world applications. Our proposed framework assumes decision-making occurs within a black box that contains three formal layers: data market, infrastructure, and ownership. Our framework allows us to map organisational archetypes ex-ante. We identify two key archetypes: consortia for collaborative markets and in-house deployment for competitive settings. We conclude by providing managerial implications and proposing research directions that are especially relevant to interdisciplinary and cross-sectional disciplines, including organisational and administrative science, information systems research, and engineering.
The remote center of motion (RCM) mechanism is one of the key components of minimally invasive surgical robots. Nevertheless, the most widely used parallelogram-based RCM mechanism tends to have a large footprint, thereby increasing the risk of collisions between the robotic arms during surgical procedures. To solve this problem, this study proposes a compact RCM mechanism based on the coupling of three rotational motions realized by nonlinear transmission. Compared to the parallelogram-based RCM mechanism, the proposed design offers a smaller footprint, thereby reducing the risk of collisions between the robotic arms. To address the possible errors caused by the elasticity of the transmission belts, an error model is established for the transmission structure that includes both circular and non-circular pulleys. A prototype is developed to verify the feasibility of the proposed mechanism, whose footprint is further compared with that of the parallelogram-based RCM mechanism. The results indicate that our mechanism satisfies the constraints of minimally invasive surgery, provides sufficient stiffness, and exhibits a more compact design. The current study provides a new direction for the miniaturization design of robotic arms in minimally invasive surgical robots.
This study explores the potential of applying machine learning (ML) methods to identify and predict areas at risk of food insufficiency using a parsimonious set of publicly available data sources. We combine household survey data that captures monthly reported food insufficiency with remotely sensed measures of factors influencing crop production and maize price observations at the census enumeration area (EA) level in Malawi. We consider three machine-learning models of different levels of complexity suitable for tabular data (TabNet, random forests, and LASSO) and classical logistic regression and examine their performance against the historical occurrence of food insufficiency. We find that the models achieve similar accuracy levels with differential performance in terms of precision and recall. The Shapley additive explanation decomposition applied to the models reveals that price information is the leading contributor to model fits. A possible explanation for the accuracy of simple predictors is the high spatiotemporal path dependency in our dataset, as the same areas of the country are repeatedly affected by food crises. Recurrent events suggest that immediate and longer-term responses to food crises, rather than predicting them, may be the bigger challenge, particularly in low-resource settings. Nonetheless, ML methods could be useful in filling important data gaps in food crisis prediction, if followed by measures to strengthen food systems affected by climate change. Hence, we discuss the tradeoffs in training these models and their use by policymakers and practitioners.
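Shapley-value attribution of the kind used here has a simple exact form for small models: average each feature's marginal contribution over all orderings, with absent features held at a baseline. The toy linear "score" below is a hypothetical stand-in for the fitted models, chosen so that price dominates, echoing the study's finding; the study itself used the SHAP library on its trained models.

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature orderings.
    Features not yet 'revealed' are set to the baseline value."""
    n = len(x)
    phi = [0.0] * n
    for order in itertools.permutations(range(n)):
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]          # reveal feature i
            new = model(current)
            phi[i] += new - prev       # marginal contribution in this order
            prev = new
    k = math.factorial(n)
    return [p / k for p in phi]

def model(z):
    """Hypothetical linear food-insufficiency score: price dominates."""
    price, rainfall, vegetation = z
    return 0.7 * price - 0.2 * rainfall - 0.1 * vegetation

x = [2.0, 1.0, 1.0]        # observed (standardized) features
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
```

By the efficiency property, the attributions sum exactly to the gap between the model output at x and at the baseline, and for a linear model each attribution reduces to the weight times the feature deviation.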
We present a deep learning architecture that reconstructs a source of data at given spatio-temporal coordinates using other sources. The model can be applied to multiple sources in a broad sense: the number of sources may vary between samples, the sources can differ in dimensionality and sizes, and cover distinct geographical areas at irregular time intervals. The network takes as input a set of sources that each include values (e.g., the pixels for two-dimensional sources), spatio-temporal coordinates, and source characteristics. The model is based on the Vision Transformer, but separately embeds the values and coordinates and uses the embedded coordinates as relative positional embedding in the computation of the attention. To limit the cost of computing the attention between many sources, we employ a multi-source factorized attention mechanism, introducing an anchor-points-based cross-source attention block. We name the architecture MoTiF (multi-source transformer via factorized attention). We present a self-supervised setting to train the network, in which one source chosen randomly is masked and the model is tasked to reconstruct it from the other sources. We test this self-supervised task on tropical cyclone (TC) remote-sensing images, ERA5 states, and best-track data. We show that the model is able to forecast TC ERA5 fields and wind intensity from multiple sources, and that using more sources leads to an improvement in forecasting accuracy.
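The anchor-point idea can be sketched with plain dot-product attention: source tokens are first summarized into a small set of anchors, and target tokens then attend to that summary, reducing the cost from O(T·S) to O((T+S)·A). The single-head numpy version below, with random values and assumed dimensions, is an illustration of the factorization rather than the MoTiF implementation (which also embeds coordinates and uses them as relative positional information).

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Scaled dot-product attention (single head, no learned projections)."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def factorized_cross_attention(targets, sources, anchors):
    """Route cross-source information through A anchor points:
    anchors gather from all source tokens, targets read the summary."""
    summary = attend(anchors, sources, sources)  # (A, d): cost O(S * A)
    return attend(targets, summary, summary)     # (T, d): cost O(T * A)

rng = np.random.default_rng(0)
d = 16
targets = rng.normal(size=(10, d))   # tokens of the source being reconstructed
sources = rng.normal(size=(500, d))  # concatenated tokens of the other sources
anchors = rng.normal(size=(8, d))    # anchor points (learned in practice)
out = factorized_cross_attention(targets, sources, anchors)
```

Each output row is a convex combination of convex combinations of source tokens, so the factorization preserves the value range of the sources while never forming the full target-by-source attention matrix.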
In this article, we focus on the systemic expected shortfall and marginal expected shortfall in a multivariate continuous-time risk model with a general càdlàg process. Additionally, we conduct our study under a mild moment condition that is easily satisfied when the general càdlàg process is determined by some important investment return processes. In the presence of heavy tails, we derive asymptotic formulas for the systemic expected shortfall and marginal expected shortfall under the framework that includes wide dependence structures among losses, covering pairwise strong quasi-asymptotic independence and multivariate regular variation. Our results quantify how the general càdlàg process, heavy-tailed property of losses, and dependence structures influence the systemic expected shortfall and marginal expected shortfall. To discuss the interplay of dependence structures and heavy-tailedness, we apply an explicit order 3.0 weak scheme to estimate the expectations related to the general càdlàg process. This enables us to validate the moment condition from a numerical perspective and perform numerical studies. Our numerical studies reveal that the asymptotic dependence structure has a significant impact on the systemic expected shortfall and marginal expected shortfall, but heavy-tailedness has a more pronounced effect than the asymptotic dependence structure.
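The interplay examined in the numerical studies can be illustrated with a crude Monte Carlo estimate of the marginal expected shortfall, E[X1 | X1 + X2 > VaR_q], under two dependence structures: independent versus comonotone Pareto losses. The tail index, confidence level, and omission of the càdlàg return process are simplifying assumptions for this demo only.

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha, q = 200_000, 3.0, 0.99

# Independent Pareto(alpha) losses on [1, inf) via inverse-cdf sampling.
u = rng.uniform(size=(n, 2))
x_indep = (1.0 - u) ** (-1.0 / alpha)

# Comonotone (perfectly dependent) Pareto losses: both driven by one uniform.
v = rng.uniform(size=n)
x_dep = np.column_stack([(1.0 - v) ** (-1.0 / alpha)] * 2)

def mes(x, q):
    """Empirical marginal expected shortfall of the first loss component."""
    s = x.sum(axis=1)
    var_q = np.quantile(s, q)   # empirical VaR of the aggregate loss
    return x[s > var_q, 0].mean()

mes_indep, mes_dep = mes(x_indep, q), mes(x_dep, q)
```

Under comonotonicity both components are large in every tail event, so the estimated MES exceeds the independent case, consistent with the finding that the asymptotic dependence structure materially inflates systemic risk measures.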