A seminal result of Komlós, Sárközy, and Szemerédi states that any $n$-vertex graph $G$ with minimum degree at least $(1/2+\alpha )n$ contains every $n$-vertex tree $T$ of bounded degree. Recently, Pham, Sah, Sawhney, and Simkin extended this result to show that such graphs $G$ in fact support an optimally spread distribution on copies of a given $T$, which implies, using the recent breakthroughs on the Kahn-Kalai conjecture, the robustness result that $T$ is a subgraph of sparse random subgraphs of $G$ as well. Pham, Sah, Sawhney, and Simkin construct their optimally spread distribution by following closely the original proof of the Komlós-Sárközy-Szemerédi theorem which uses the blow-up lemma and the Szemerédi regularity lemma. We give an alternative, regularity-free construction that instead uses the Komlós-Sárközy-Szemerédi theorem (which has a regularity-free proof due to Kathapurkar and Montgomery) as a black box. Our proof is based on the simple and general insight that, if $G$ has linear minimum degree, almost all constant-sized subgraphs of $G$ inherit the same minimum degree condition that $G$ has.
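The inheritance insight in the final sentence can be made quantitative by a standard concentration argument. A hedged, illustrative restatement (with convenient constants; not the authors' exact lemma): if $\delta (G)\ge (1/2+\alpha )n$ and $S$ is a uniformly random $k$-subset of $V(G)$ for a sufficiently large constant $k$, then
\[ \Pr \Bigl [\,\delta (G[S])\ge \bigl (\tfrac 12+\tfrac {\alpha }{2}\bigr )(k-1)\Bigr ]\;\ge\; 1-k\,e^{-c(\alpha )k} \]
for some $c(\alpha )>0$: the degree of any fixed vertex of $S$ into $S$ is hypergeometrically distributed with mean at least roughly $(1/2+\alpha )(k-1)$, so a Chernoff-type tail bound together with a union bound over the $k$ vertices of $S$ gives the claim.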
This paper presents a novel robust control method for a hip-assist exoskeleton robot’s joint module, addressing dynamic performance under variable loads. The proposed approach integrates traditional PID control with robust, model-based strategies, utilizing the system’s dynamic model and a Lyapunov-based robust controller to handle uncertainties. This method not only enhances traditional PID control but also offers practical advantages in implementation. Theoretical analysis confirms the system’s uniform boundedness and ultimate boundedness. A MATLAB prototype was developed for simulation, demonstrating the control scheme’s feasibility and effectiveness. Numerical simulations show that the proposed fractional-order hybrid PD (FHPD) controller significantly reduces tracking error by 58.70% compared to the traditional PID controller, 55.41% compared to the MPD controller, and 32.32% compared to ADRC, highlighting its superior tracking performance and stability.
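As a rough illustration of the control structure described above (PD feedback augmented with a bounded robust compensation term), the following sketch simulates a one-degree-of-freedom joint under a bounded load disturbance. All plant parameters, gains, and the disturbance bound are invented placeholders; this is not the paper's FHPD controller or its dynamic model.

```python
# Illustrative sketch only: a one-DOF joint with a PD controller plus a
# simple Lyapunov-style robust term. Plant parameters, gains, and the
# uncertainty bound rho are made-up placeholders, not values from the paper.
import numpy as np

# Assumed 1-DOF joint dynamics: J*qdd + b*qd + d(t) = tau, with |d(t)| <= rho
J, b = 0.05, 0.1          # nominal inertia and damping (placeholders)
rho, eps = 2.0, 0.05      # disturbance bound and boundary-layer width
Kp, Kd = 40.0, 4.0        # PD gains (placeholders)

def controller(q, qd, q_ref, qd_ref):
    e, ed = q_ref - q, qd_ref - qd
    s = ed + Kp / Kd * e                 # sliding-type combined error variable
    robust = rho * s / (abs(s) + eps)    # smooth bounded robust compensation
    return Kp * e + Kd * ed + robust

# Simple Euler simulation tracking a sinusoidal hip-joint trajectory.
dt, T = 1e-3, 3.0
q, qd = 0.0, 0.0
errs = []
for k in range(int(T / dt)):
    t = k * dt
    q_ref = 0.5 * np.sin(2 * np.pi * t)
    qd_ref = np.pi * np.cos(2 * np.pi * t)
    tau = controller(q, qd, q_ref, qd_ref)
    d = 1.5 * np.sin(5 * t)              # bounded load disturbance
    qdd = (tau - b * qd - d) / J
    qd += qdd * dt
    q += qd * dt
    errs.append(abs(q_ref - q))

print(f"mean abs tracking error: {np.mean(errs):.4f} rad")
```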
Pipeline inspection robots play a crucial role in maintaining the integrity of pipeline systems across various industries. In this paper, a novel pipeline inspection robot is designed based on a four degrees-of-freedom (DOF) generalized parallel mechanism (GPM). First, a four-DOF mechanism is synthesized using numerical and graph-based methods to achieve an ideal symmetric configuration, enhancing the robot’s adaptability and mobility. The coupling mid-platform, inspired by parallelogram mechanisms, enables synchronized contraction motion, allowing the robot to adjust to different pipe diameters. Then, the constraints on the pipeline inspection robot in elbow sections are analyzed based on task requirements. Through kinematic and performance analyses using screw theory, the mechanism’s feasibility in practical applications is confirmed. Theoretical analysis, simulations, and experiments demonstrate the robot’s ability to achieve active steering in T-branches and elbows. Experimental validation in straight and bent pipes shows that the robot meets the expected speed targets and can successfully navigate complex pipeline environments. This research highlights the potential of GPMs in advancing the capabilities of pipeline inspection robots for real-world applications.
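As a toy illustration of the kind of screw-theoretic constraint analysis mentioned above (not the GPM studied in the paper), the sketch below computes the constraint (reciprocal) wrench system of a planar three-revolute limb as the null space of its joint-screw matrix.

```python
# Toy screw-theory constraint analysis: reciprocal wrenches of a planar
# 3R limb. The limb geometry is a made-up example, not the paper's mechanism.
import numpy as np
from scipy.linalg import null_space

def screw(w, p):
    """Twist of a revolute joint with axis direction w through point p."""
    w = np.asarray(w, float)
    p = np.asarray(p, float)
    return np.hstack([w, np.cross(p, w)])   # (omega; p x omega)

limb = np.array([
    screw([0, 0, 1], [0, 0, 0]),   # revolute about z at the origin
    screw([0, 0, 1], [1, 0, 0]),   # parallel revolute, offset in x
    screw([0, 0, 1], [1, 1, 0]),   # parallel revolute, offset in x and y
])

# A wrench c is reciprocal to every joint twist when  limb @ Delta @ c = 0,
# where Delta swaps the angular and linear 3-vector parts.
Delta = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [np.eye(3), np.zeros((3, 3))]])
constraints = null_space(limb @ Delta)
print("constraint wrench system dimension:", constraints.shape[1])  # expect 3
```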
This work proposes an optimization approach for the time-consuming parts of Light Detection and Ranging (LiDAR) data processing and IMU-LiDAR data fusion in the LiDAR-inertial odometry (LIO) method. Two key novelties enable faster and more accurate navigation in complex, noisy environments. Firstly, to improve map update and point cloud registration efficiency, we employ a sparse voxel map with a new update function to construct a local map around the mobile robot and utilize an improved Generalized Iterative Closest Point algorithm based on sparse voxels to achieve LiDAR point cloud association, thereby boosting both map updating and computational speed. Secondly, to enhance real-time accuracy, this paper analyzes the residuals and covariances of both IMU and LiDAR data in a tightly coupled manner, and achieves system state estimation by fusing sensor information through the Gauss-Newton method, effectively mitigating localization deviations by appropriately weighting the LiDAR covariances. The performance of our method is evaluated against advanced LIO algorithms using eight open datasets and five self-collected campus datasets. Results show a 24.7–60.1% reduction in average processing time per point cloud frame, along with improved robustness and higher-precision motion trajectory estimation in most of the cluttered and complex indoor and outdoor environments tested.
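To make the covariance-weighted fusion step concrete, here is a minimal, generic weighted Gauss-Newton update on a toy two-dimensional state observed by two "sensors" with very different noise levels. The residual models and covariances are placeholders and are not the paper's IMU/LiDAR formulations.

```python
# Minimal sketch of one covariance-weighted Gauss-Newton update, showing how
# residuals from two sensors can be fused by weighting with their covariances.
import numpy as np

def gauss_newton_step(x, residual_blocks):
    """residual_blocks: list of (r(x), J(x), Sigma) tuples."""
    H = np.zeros((x.size, x.size))
    g = np.zeros(x.size)
    for r_fn, J_fn, Sigma in residual_blocks:
        r, J = r_fn(x), J_fn(x)
        W = np.linalg.inv(Sigma)        # information matrix = inverse covariance
        H += J.T @ W @ J                # Gauss-Newton approximation of the Hessian
        g += J.T @ W @ r
    return x - np.linalg.solve(H, g)    # x_new = x - H^{-1} g

# Toy 2D state observed by a precise "LiDAR" and a noisy "IMU" measurement.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
z_lidar = x_true + 0.01 * rng.standard_normal(2)
z_imu = x_true + 0.2 * rng.standard_normal(2)
lidar = (lambda x: x - z_lidar, lambda x: np.eye(2), 0.01**2 * np.eye(2))
imu = (lambda x: x - z_imu, lambda x: np.eye(2), 0.2**2 * np.eye(2))

x = np.zeros(2)
for _ in range(5):
    x = gauss_newton_step(x, [lidar, imu])
print("fused estimate:", x)   # dominated by the better-weighted LiDAR residual
```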
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
With the growing amount of historical infrastructure data available to engineers, data-driven techniques have been increasingly employed to forecast infrastructure performance. In addition to algorithm selection, data preprocessing strategies for machine learning implementations play an equally important role in ensuring accuracy and reliability. The present study focuses on pavement infrastructure and identifies four categories of strategies to preprocess data for training machine-learning-based forecasting models. The Long-Term Pavement Performance (LTPP) dataset is employed to benchmark these categories. Employing random forest as the machine learning algorithm, the comparative study examines the impact of data preprocessing strategies, the volume of historical data, and forecast horizon on the accuracy and reliability of performance forecasts. The strengths and limitations of each implementation strategy are summarized. Multiple pavement performance indicators are also analysed to assess the generalizability of the findings. Based on the results, several findings and recommendations are provided for short- to medium-term infrastructure management and decision-making: (i) in data-scarce scenarios, strategies that incorporate both explanatory variables and historical performance data provide better accuracy and reliability, (ii) to achieve accurate forecasts, the volume of historical data should at least span a time duration comparable to the intended forecast horizon, and (iii) for International Roughness Index and transverse crack length, a forecast horizon of up to five years is generally achievable, but forecasts beyond a three-year horizon are not recommended for longitudinal crack length. These quantitative guidelines ultimately support more effective and reliable application of data-driven techniques in infrastructure performance forecasting.
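As a deliberately simplified illustration of one preprocessing strategy of the kind compared above, the sketch below combines explanatory variables with lagged performance history to train a random forest on synthetic, IRI-like data. The column names and data generator are invented and do not reflect the LTPP schema or the paper's benchmark.

```python
# Sketch: combine explanatory variables with lagged historical performance
# (synthetic IRI-like values) before fitting a random forest forecaster.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sections, n_years = 200, 10
rows = []
for s in range(n_sections):
    traffic, thickness = rng.uniform(0.5, 2.0), rng.uniform(10, 30)
    iri = 1.0
    for year in range(n_years):
        iri += 0.05 * traffic - 0.001 * thickness + 0.02 * rng.standard_normal()
        rows.append({"section": s, "year": year, "traffic": traffic,
                     "thickness": thickness, "iri": iri})
df = pd.DataFrame(rows)

# Lag features: previous two observations plus explanatory variables are used
# to forecast the value `horizon` years ahead.
horizon = 3
df["iri_lag1"] = df.groupby("section")["iri"].shift(1)
df["iri_lag2"] = df.groupby("section")["iri"].shift(2)
df["target"] = df.groupby("section")["iri"].shift(-horizon)
data = df.dropna()

features = ["traffic", "thickness", "iri", "iri_lag1", "iri_lag2"]
train = data[data["section"] < 150]
test = data[data["section"] >= 150]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["target"])
print("R^2 on held-out sections:", model.score(test[features], test["target"]))
```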
Vibration control in structures is essential to mitigate undesired dynamic responses, thereby enhancing stability, safety, and performance under varying loading conditions. Mechanical metamaterials have emerged as effective solutions, enabling tailored dynamic properties for vibration attenuation. This study introduces a convolutional autoencoder framework for the inverse design of local resonators embedded in mechanical metamaterials. The model learns from the dynamic behaviour of primary structures coupled with ideal absorbers to predict the geometric parameters of resonators that achieve desired vibration control performance. Unlike conventional approaches requiring full numerical models, the proposed method operates as a data-driven tool, where the target frequency to be mitigated is provided as input, and the model directly outputs the resonator geometry. A large dataset, generated through physics-informed simulations of ideal absorber dynamics, supports training while incorporating both spectral and geometric variability. Within the architecture, the encoder maps input receptance spectra to resonator geometries, while the decoder reconstructs the target receptance response, ensuring dynamic consistency. Once trained, the framework predicts resonator configurations that satisfy predefined frequency targets with high accuracy, enabling efficient design of passive controllers of the tuned-mass type. This study specifically demonstrates the application of the methodology to resonators embedded in wind turbine metastructures, a critical context for mitigating structural vibrations and improving operational efficiency. Results confirm strong agreement between predicted and target responses, underscoring the potential of deep learning techniques to support on-demand inverse design of mechanical metamaterials for smart vibration control in wind energy and related engineering applications.
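A minimal sketch of the encoder/decoder split described above, in PyTorch: a 1-D convolutional encoder maps a receptance spectrum to a small set of resonator geometry parameters, and a dense decoder reconstructs the spectrum. The spectrum length, layer sizes, and number of geometry parameters are assumptions, not the architecture used in the study.

```python
# Illustrative convolutional autoencoder for inverse resonator design.
# Sizes and parameter names are placeholders, not the paper's model.
import torch
import torch.nn as nn

SPECTRUM_LEN = 256   # samples of the receptance spectrum (assumed)
N_GEOMETRY = 3       # e.g. beam length, width, tip mass (assumed)

class ResonatorAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (SPECTRUM_LEN // 4), 64), nn.ReLU(),
            nn.Linear(64, N_GEOMETRY),       # latent code = geometry parameters
        )
        self.decoder = nn.Sequential(
            nn.Linear(N_GEOMETRY, 64), nn.ReLU(),
            nn.Linear(64, SPECTRUM_LEN),     # reconstructed receptance spectrum
        )

    def forward(self, spectrum):
        geometry = self.encoder(spectrum)
        reconstruction = self.decoder(geometry)
        return geometry, reconstruction

# Training would penalise both geometry error and spectrum reconstruction error.
model = ResonatorAutoencoder()
spectra = torch.randn(8, 1, SPECTRUM_LEN)            # dummy batch
geometry, reconstruction = model(spectra)
print(geometry.shape, reconstruction.shape)          # (8, 3), (8, 256)
```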
Here we consider the hypergraph Turán problem in uniformly dense hypergraphs as was suggested by Erdős and Sós. Given a $3$-graph $F$, the uniform Turán density $\pi _{\boldsymbol{\therefore }}(F)$ of $F$ is defined as the supremum over all $d\in [0,1]$ for which there is an $F$-free uniformly $d$-dense $3$-graph, where uniformly $d$-dense means that every linearly sized subhypergraph has density at least $d$. Recently, Glebov, Král’, and Volec and, independently, Reiher, Rödl, and Schacht proved that $\pi _{\boldsymbol{\therefore }}(K_4^{(3)-})=\frac {1}{4}$, solving a conjecture by Erdős and Sós. Despite substantial attention, the uniform Turán density is still only known for very few hypergraphs. In particular, the problem due to Erdős and Sós to determine $\pi _{\boldsymbol{\therefore }}(K_4^{(3)})$ remains wide open.
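One common way to formalise these notions (conventions vary slightly across the literature, so the following is a hedged restatement rather than a quotation): a $3$-graph $H$ on $n$ vertices is uniformly $(d,\eta )$-dense if
\[ e_{H}(U)\;\ge\; d\binom {|U|}{3}-\eta n^{3}\qquad \text {for every } U\subseteq V(H), \]
and
\[ \pi _{\boldsymbol{\therefore }}(F)\;=\;\sup \Bigl \{\,d\in [0,1]\;:\;\text {for every } \eta >0 \text { and } n_{0}\in \mathbb {N} \text { there is an } F\text {-free, uniformly } (d,\eta )\text {-dense } 3\text {-graph on at least } n_{0} \text { vertices}\Bigr \}. \]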
In this work, we determine the uniform Turán density of the $3$-graph on five vertices that is obtained from $K_4^{(3)-}$ by adding an additional vertex whose link forms a matching on the vertices of $K_4^{(3)-}$. Further, we point to two natural intermediate problems on the way to determining $\pi _{\boldsymbol{\therefore }}(K_4^{(3)})$, and solve the first of these.
The adoption of corpus technology in school classroom settings remains limited, largely due to insufficient technological pedagogical content knowledge (TPACK) training for pedagogical corpus use. To address this gap, we investigated how teacher education in corpus-based language pedagogy (CBLP), a subdomain of TPACK for corpus technology tailored to language teachers, influenced student TESOL teachers’ self-efficacy for independent language learning and teaching. Employing a mixed-methods approach, including a CBLP training intervention (n = 120), survey data (n = 96), and interviews (n = 8) with student teachers at a university in Hong Kong SAR, China, the research validates a theoretical model through confirmatory factor analysis and structural equation modelling. Results demonstrate that corpus literacy (CL) is foundational for effective CBLP implementation and development of independent learning self-efficacy, which in turn fosters innovative, resource-rich instructional strategies. CBLP also enhances teachers’ self-efficacy for student engagement, fostering more interactive and motivating classrooms. These findings emphasise the value of embedding CL and CBLP within TESOL teacher-education programmes to prepare future language teachers for self-efficacy within dynamic, technology-enhanced classrooms.
Excellent products often contain profound cultural connotations. To improve the quality of cultural products, it is important to study how typical cultural carriers can be more promptly and efficiently identified and incorporated into products through a detailed and easy-to-use design process. In this article, we propose an approach operating at three different levels to assist designers in incorporating cultural features into products, including: (1) the integrated framework of the composition and division of cultural carriers, (2) the extraction and translation model from cultural carriers, via cultural elements, to cultural features and (3) the cultural product design process. The proposed approach was applied in a large and complex cultural product case, that is, inter-city train design. The evaluation of the recognition of cultural features indicated that the approach contributed to conferring culture on products through thoughtful design and could ensure that the product schemes reflect cultural features as well as interesting cultural connotations.
It is of great importance to integrate human-centered design concepts at the core of both algorithmic research and the implementation of applications. In order to do so, it is essential to gain an understanding of human–computer interaction and collaboration from the perspective of the user. To address this issue, this chapter initially presents a description of the process of human–AI interaction and collaboration, and subsequently proposes a theoretical framework for it. In accordance with this framework, the current research hotspots are identified in terms of interaction quality and interaction mode. Among these topics, user mental modeling, interpretable AI, trust, and anthropomorphism are currently the subject of academic interest with regard to interaction quality. The level of interaction mode encompasses a range of topics, including interaction paradigms, role assignment, interaction boundaries, and interaction ethics. To further advance the related research, this chapter identifies three areas for future exploration: cognitive frameworks for human–AI interaction, adaptive learning, and the complementary strengths of humans and AI.
In the technological wave of the twenty-first century, artificial intelligence (AI), as a transformative technology, is rapidly reshaping our society, economy, and daily life. Since the concept of AI was first proposed, the field has experienced many technological innovations and application expansions, developing rapidly through three booms over the past half century. The first boom arose in the 1960s, marked by the Turing test and the application of knowledge reasoning systems and other technologies. Computer scientists at that time began to explore how to let computers simulate human intelligence, and early AI research focused on rule systems and logical reasoning. The rise of expert systems and artificial neural networks brought a second wave of enthusiasm (McDermott, 1982). The third boom is marked by deep learning and big data, especially the widespread application of artificial intelligence-generated content represented by ChatGPT. During this period, AI technology shifted from traditional rule systems to methods that relied on algorithms to learn patterns from data. The rise of deep learning enabled AI to achieve significant breakthroughs in areas such as image recognition and natural language processing.
This chapter mainly investigates the role of Artificial Intelligence (AI) in augmenting search interactions to enhance users’ understanding across various domains. The chapter begins by examining the current limitations of traditional search interfaces in meeting diverse user needs and cognitive capacities. It then discusses how AI-driven enhancements can revolutionize search experiences by providing tailored, contextually relevant information and facilitating intuitive interactions. Through case studies and empirical analysis, the effectiveness of AI-supported search interaction in improving users’ understanding is evaluated in different scenarios. This chapter contributes to the literature on AI and human–computer interaction by highlighting the transformative potential of AI in optimizing search experiences for users, leading to enhanced comprehension and decision-making. It concludes with implications for research and practice, emphasizing the importance of human-centered design principles in developing AI-driven search systems.
AI-supported crowdsourcing for knowledge sharing is a collaborative approach that leverages artificial intelligence (AI) technologies to facilitate the gathering, organizing, and sharing of information or expertise among a large group of people, known as crowd workers. Despite the growing body of research on motivations in crowdsourcing, the impact of AI-supported crowdsourcing on workers’ motives remains unclear, as does the extent to which their participation can effectively address societal challenges. A systematic review is first conducted to identify trends and gaps in AI-supported crowdsourcing. This chapter then presents a case study of a crowdsourcing platform for locating missing children to demonstrate the pivotal role of AI-supported crowdsourcing in managing a major societal challenge. Emerging trends and technologies shaping motivations in AI-supported crowdsourcing are also discussed. Additionally, we offer recommendations for practitioners and researchers on integrating AI into crowdsourcing projects to address societal challenges.