Monads make it possible to perform equational reasoning about computational effects in a purely functional programming language. Even though equational reasoning for effectful programs is desirable, it is not yet mainstream, partly because it is difficult to maintain pencil-and-paper proofs of large examples. We propose a formalization of a hierarchy of effects using monads in the Coq proof assistant that makes monadic equational reasoning practical. Our main idea is to formalize the hierarchy of effects and their algebraic laws as interfaces, as is done when formalizing hierarchies of algebraic structures in dependent type theory. Thanks to this approach, we clearly separate equational laws from models. We can then take advantage of the sophisticated rewriting capabilities of Coq and build libraries of lemmas to achieve concise proofs of programs. We can also use the resulting framework to leverage Coq’s mathematical theories and formalize models of monads. In this article, we explain how we formalize a rich hierarchy of effects (nondeterminism, state, probability, etc.), how we mechanize examples of monadic equational reasoning from the literature, and how we apply our framework to the design of equational laws for a subset of ML with references.
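To make "algebraic laws as interfaces" concrete, here is a minimal statement of the laws any model of the base monad interface must satisfy, plus one representative effect law; the choice notation below is ours, not necessarily the article's.

```latex
% Minimal statement of the monad laws (>>= is "bind"); any model of the
% base interface must validate these equations.
\newcommand{\bind}{\mathbin{>\!\!>\!\!=}}
\begin{align*}
\mathsf{ret}\,x \bind f &= f\,x && \text{(left identity)}\\
m \bind \mathsf{ret} &= m && \text{(right identity)}\\
(m \bind f) \bind g &= m \bind (\lambda x.\; f\,x \bind g) && \text{(associativity)}
\end{align*}
% One representative algebraic law from the nondeterminism interface,
% writing \sqcup for nondeterministic choice (notation is ours):
\[ (m \sqcup m') \bind f = (m \bind f) \sqcup (m' \bind f) \]
```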
Numerical solutions of partial differential equations require expensive simulations, limiting their application in design optimization, model-based control, and large-scale inverse problems. Surrogate modeling techniques aim to decrease computational expense while retaining dominant solution features and characteristics. Existing frameworks based on convolutional neural networks and snapshot-matrix decomposition often rely on lossy pixelization and data preprocessing, limiting their effectiveness in realistic engineering scenarios. Recently, coordinate-based multilayer perceptron networks have been found to be effective at representing 3D objects and scenes by regressing volumetric implicit fields. These concepts are leveraged and adapted here in the context of physical-field surrogate modeling. Two methods toward generalization are proposed and compared: design-variable multilayer perceptron (DV-MLP) and design-variable hypernetworks (DVH). Each method utilizes a main network that consumes pointwise spatial information to provide a continuous representation of the solution field, allowing discretization independence and a decoupling of solution and model size. DV-MLP achieves generalization through the use of a design-variable embedding vector, while DVH conditions the main-network weights on the design variables using a hypernetwork. The methods are applied to predict steady-state solutions around complex, parametrically defined geometries on non-parametrically-defined meshes, with model predictions obtained in less than a second. The incorporation of random Fourier features greatly enhanced prediction and generalization accuracy for both approaches. DVH models have more trainable weights than a similar DV-MLP model, but an efficient batch-by-case training method allows DVH to be trained in a similar amount of time to DV-MLP. A vehicle aerodynamics test problem is chosen to assess the methods’ feasibility. Both methods exhibit promising potential as viable options for surrogate modeling, being able to process snapshots of data that correspond to different mesh topologies.
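As a hedged illustration of the DV-MLP idea (layer sizes, the Fourier scale, and all names below are our assumptions, not the authors' architecture): a coordinate MLP with random Fourier features, conditioned on a design-variable vector, which can be queried at arbitrary points independent of any mesh.

```python
# Sketch of a DV-MLP-style coordinate network with random Fourier features.
import torch
import torch.nn as nn

class RandomFourierFeatures(nn.Module):
    def __init__(self, in_dim, n_features=128, scale=10.0):
        super().__init__()
        # Fixed Gaussian projection B; gamma(x) = [cos(2*pi*xB), sin(2*pi*xB)]
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, x):
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

class DVMLP(nn.Module):
    def __init__(self, coord_dim=3, dv_dim=8, hidden=256, out_dim=1):
        super().__init__()
        self.rff = RandomFourierFeatures(coord_dim)  # outputs 2*128 = 256 dims
        self.net = nn.Sequential(
            nn.Linear(256 + dv_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords, design_vars):
        # coords: (N, 3) arbitrary query points -> discretization independence
        feats = torch.cat([self.rff(coords), design_vars], dim=-1)
        return self.net(feats)

model = DVMLP()
coords = torch.rand(1024, 3)           # any point cloud, no fixed mesh
dv = torch.rand(8).expand(1024, 8)     # one design-variable vector, tiled
pred = model(coords, dv)               # (1024, 1) field values
```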
Rapid urbanization poses several challenges, especially in the absence of a controlled urban development plan; it often leads to anarchic occupation and expansion of cities, resulting in the phenomenon of urban sprawl (US). To support sustainable decision-making in urban planning and policy development, a more effective approach to addressing this issue through US simulation and prediction is essential. Despite the work published in the literature on the use of deep learning (DL) methods to simulate US indicators, almost no work has been published that assesses what has already been done, the potential, the issues, and the challenges ahead. By synthesising existing research, we aim to assess the current landscape of the use of DL in modelling US. This article begins by demystifying US, elucidating its multifaceted challenges and implications. Through an examination of DL methodologies, we highlight their effectiveness in capturing the complex spatial patterns and relationships associated with US. The article also examines the synergy between DL and conventional methods, highlighting the advantages and disadvantages of each. It emerges that the use of DL in the simulation and forecasting of US indicators is increasing, and its potential is very promising for guiding strategic decisions to control and mitigate this phenomenon. Of course, this is not without major challenges, both in terms of data and models and in terms of strategic city-planning policies.
We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems, employing inference techniques and machine-learning enhancements. As a demonstrative application, we pursue the modeling of cathodic electrophoretic deposition, commonly known as e-coating. Our approach illustrates a systematic procedure for enhancing physical models by identifying their limitations through inference on experimental data and introducing adaptable model enhancements to address these shortcomings. We begin by tackling the issue of model parameter identifiability, which reveals aspects of the model that require improvement. To address generalizability, we introduce modifications, which also enhance identifiability. However, these modifications do not fully capture essential experimental behaviors. To overcome this limitation, we incorporate interpretable yet flexible augmentations into the baseline model. These augmentations are parameterized by simple fully-connected neural networks, and we leverage machine-learning tools, particularly neural ordinary differential equations, to learn these augmentations. Our simulations demonstrate that the machine-learning-augmented model more accurately captures observed behaviors and improves predictive accuracy. Nevertheless, we contend that while the model updates offer superior performance and capture the relevant physics, we can reduce off-line computational costs by eliminating certain dynamics without compromising accuracy or interpretability in downstream predictions of quantities of interest, particularly film thickness predictions. The entire process outlined here provides a structured approach to leverage data-driven methods by helping us comprehend the root causes of model inaccuracies and by offering a principled method for enhancing model performance.
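A minimal sketch of the augmentation idea, under our own assumptions (a toy baseline right-hand side, a plain explicit-Euler integrator instead of an adaptive neural-ODE solver, and illustrative layer sizes): the learned network adds a correction term to the physics-based dynamics, and gradients flow through the integration.

```python
# Sketch: known baseline dynamics plus a small fully-connected augmentation,
# integrated as a neural ODE (explicit Euler for brevity).
import torch
import torch.nn as nn

def baseline_rhs(t, y):
    # Placeholder for the physics-based right-hand side dy/dt = f(t, y).
    return -0.5 * y

class Augmentation(nn.Module):
    def __init__(self, state_dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, t, y):
        t_col = t.expand(y.shape[0], 1)
        return self.net(torch.cat([y, t_col], dim=-1))

def integrate(aug, y0, ts):
    """Euler-integrate dy/dt = f_baseline + f_aug; gradients reach aug."""
    ys, y = [y0], y0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dy = baseline_rhs(t0, y) + aug(t0, y)
        y = y + (t1 - t0) * dy
        ys.append(y)
    return torch.stack(ys)

aug = Augmentation()
y0 = torch.tensor([[1.0, 0.0]])
ts = torch.linspace(0.0, 1.0, 51)
traj = integrate(aug, y0, ts)   # (51, 1, 2); train by matching experiments
```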
The global number of individuals experiencing forced displacement has reached its highest level in the past decade. In this context, the provision of services for those in need requires timely and evidence-based approaches. How can mobile phone data (MPD) based analyses address the knowledge gap on mobility patterns and needs assessments in forced displacement settings? To answer this question, in this paper, we examine the capacity of MPD to function as a tool for anticipatory analysis, particularly in response to natural disasters and conflicts that lead to internal or cross-border displacement. The paper begins with a detailed review of the processes involved in acquiring, processing, and analyzing MPD in forced displacement settings. Following this, we critically assess the challenges associated with employing MPD in policy-making, with a specific focus on issues of user privacy and data ethics. The paper concludes by evaluating the potential benefits of MPD analysis for targeted and effective policy interventions and discusses future research avenues, drawing on recent studies and ongoing collaborations with mobile network operators.
For robot operations that require precision, the robot’s positioning accuracy, repeatability, and stiffness characteristics should be considered. If the mechanism has the desired repeatability performance, a kinematic calibration process can enhance the positioning accuracy. However, for robot operations where high accelerations are needed, the compliance characteristics of the mechanism adversely affect trajectory-tracking accuracy. In this paper, a novel approach is proposed to enhance the trajectory-tracking accuracy of a robot operating at high accelerations by predicting the compliant displacements when the robot has no physical contact with its environment. This case study also compares the trajectory-tracking characteristics of an over-constrained and a normal-constrained 2-degrees-of-freedom (DoF) planar parallel mechanism during high-acceleration operations of up to 5 g. In addition, the influence of the position of the end-effector’s center of mass (CoM) along the normal of the plane is investigated in terms of its effects on the proposed trajectory-enhancing algorithm.
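The abstract does not give the compensation law itself, so the following is only a hedged sketch of the general idea (the stiffness matrix, effective mass, and test trajectory are illustrative assumptions): predict the compliant displacement caused by inertial loads at high acceleration and pre-offset the commanded trajectory by that amount.

```python
# Sketch of contact-free compliance compensation for a planar mechanism.
import numpy as np

K = np.diag([2.0e5, 1.5e5])     # assumed task-space stiffness [N/m]
m_eff = 3.0                     # assumed effective moving mass [kg]
K_inv = np.linalg.inv(K)

def compensated_reference(x_des, a_des):
    """x_des, a_des: (N, 2) desired positions [m] and accelerations [m/s^2]."""
    F_inertial = m_eff * a_des              # inertial load on the structure
    delta = F_inertial @ K_inv.T            # predicted compliant displacement
    return x_des - delta                    # pre-offset commanded trajectory

t = np.linspace(0.0, 1.0, 500)
x_des = 0.05 * np.stack([np.sin(2 * np.pi * 5 * t),
                         np.cos(2 * np.pi * 5 * t)], axis=1)
a_des = np.gradient(np.gradient(x_des, t, axis=0), t, axis=0)
x_cmd = compensated_reference(x_des, a_des)  # peak |a| is roughly 5 g here
```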
In 2020, the COVID-19 pandemic resulted in a rapid response from governments and researchers worldwide, but information-sharing mechanisms were variable, and many early efforts were insufficient for the purpose. We conducted semi-structured interviews with fifteen data professionals located around the world who work with COVID-19-relevant data. The interviews covered both challenges and positive experiences with data in multiple domains and formats, including medical records, social deprivation, hospital bed capacity, and mobility data. We analyze this qualitative corpus of experiences for content and themes and identify four sequential barriers a researcher may encounter: (1) knowing data exists, (2) being able to access that data, (3) data quality, and (4) the ability to share data onwards. A fifth barrier, (5) human throughput capacity, is present throughout all four stages. Examples of these barriers range from challenges faced by single individuals, to non-existent records of historic mingling/social-distancing laws, up to systemic geopolitical data suppression. Finally, we recommend that governments and local authorities explicitly create machine-readable temporal “law as code” for changes in laws such as mobility/mingling laws and changes in geographical regions.
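To illustrate what machine-readable temporal "law as code" could look like, here is a hypothetical sketch; every field name and value below is our own illustration, not an established schema.

```python
# Hypothetical temporal "law as code" record for mingling/mobility rules.
from datetime import date

mingling_rules = [
    {
        "jurisdiction": "EXAMPLE-REGION",                     # placeholder code
        "valid_from": date(2020, 3, 23),
        "valid_to": date(2020, 5, 10),
        "max_gathering_size": 2,
        "household_mixing_allowed": False,
        "source_url": "https://example.org/laws/2020-03-23",  # placeholder
    },
]

def rule_in_force(rules, jurisdiction, on_date):
    """Return the rule applying in a jurisdiction on a given date, if any."""
    for r in rules:
        if (r["jurisdiction"] == jurisdiction
                and r["valid_from"] <= on_date <= r["valid_to"]):
            return r
    return None

print(rule_in_force(mingling_rules, "EXAMPLE-REGION", date(2020, 4, 1)))
```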
This paper proposes a recovery mechanism for low-cost mobile robots whose vision sensor fails during vSLAM. The approach takes advantage of the ROS architecture and adopts the Shannon–Nyquist sampling theorem to selectively sample path parameters that are used for back travel in case of vision sensor failure. As opposed to the point clouds normally used to store vSLAM data, this paper proposes to store and use lightweight variables, namely the distance between sampled points, the linear and angular velocity combination, the sampling period, and the yaw angle, to describe the robot path and reduce the memory space required. The study investigates low-cost robotic systems that typically use cameras aided by proprioceptive sensors such as an IMU during vSLAM activities. A demonstration is made of how the ROS architecture can be used in a scenario where vision sensing is adversely affected, resulting in mapping failure, and a recommendation is made to adopt the approach on vSLAM platforms implemented on both ROS1 and ROS2. Furthermore, a proposal is made to add a layer to vSLAM systems that is used exclusively for back travel in case of vision loss.
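A hedged sketch of such a lightweight path record and its reverse replay (the field names and the naive command mirroring are our assumptions, not the paper's implementation):

```python
# Store sampled (distance, v, w, dt, yaw) tuples instead of point clouds,
# and replay them in reverse to back-travel after a vision failure.
from dataclasses import dataclass
from typing import List

@dataclass
class PathSample:
    distance: float   # metres travelled since the previous sample
    linear_v: float   # m/s
    angular_w: float  # rad/s
    dt: float         # sampling period, s
    yaw: float        # rad

def back_travel_commands(path: List[PathSample]):
    """Yield (linear_v, angular_w, dt) commands that retrace the path."""
    for s in reversed(path):
        # Reverse the linear motion; mirror the turn to undo the heading change.
        yield (-s.linear_v, -s.angular_w, s.dt)
```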
Data assimilation is a core component of numerical weather prediction systems. The large quantity of data processed during assimilation requires the computation to be distributed across increasingly many compute nodes; yet, existing approaches suffer from synchronization overhead in this setting. In this article, we exploit the formulation of data assimilation as a Bayesian inference problem and apply a message-passing algorithm to solve the spatial inference problem. Since message passing is inherently based on local computations, this approach lends itself to parallel and distributed computation. In combination with a GPU-accelerated implementation, we can scale the algorithm to very large grid sizes while retaining good accuracy and modest compute and memory requirements.
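As a stand-in illustration of why locality matters here (this is a generic Jacobi-style local update for a Gaussian MAP smoothing problem, not the article's message-passing scheme), note how each grid cell reads only its four neighbours, so updates parallelize and distribute trivially:

```python
# Local-update illustration: Jacobi iteration for the MAP of
#   sum_i (x_i - y_i)^2 + lam * sum_{i~j} (x_i - x_j)^2
# on a grid with periodic wrap-around.
import numpy as np

def local_smooth(y, lam=1.0, n_iters=200):
    x = y.copy()
    for _ in range(n_iters):
        nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0)    # 4-neighbour sum
              + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x = (y + lam * nb) / (1.0 + 4.0 * lam)        # purely local update
    return x

field = local_smooth(np.random.randn(256, 256))
```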
Surrogate models of turbulent diffusive flames could play a strategic role in the design of liquid rocket engine combustion chambers. The present article introduces a method to obtain data-driven surrogate models for coaxial injectors by leveraging an inductive transfer learning strategy over a U-Net with available multifidelity Large Eddy Simulation (LES) data. The resulting models preserve reasonable accuracy while reducing the offline computational cost of data generation. First, a database of about 100 low-fidelity LES simulations of shear-coaxial injectors, operating with gaseous oxygen and gaseous methane as propellants, has been created. The design of experiments explores three variables: the chamber radius, the recess length of the oxidizer post, and the mixture ratio. Subsequently, U-Nets were trained on this dataset to provide reasonable approximations of the time-averaged two-dimensional flow field. Although neural networks are efficient non-linear data emulators, in purely data-driven approaches their quality is directly impacted by the precision of the data they are trained upon. Thus, a high-fidelity (HF) dataset has been created, comprising about 10 simulations at a much greater cost per sample. The amalgamation of low- and high-fidelity data during the transfer-learning process improves the surrogate model’s fidelity without excessive additional cost.
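A hedged sketch of the inductive transfer-learning recipe (loader names, learning rates, and epoch counts below are illustrative, not the article's setup): pretrain on the ~100 low-fidelity snapshots, then fine-tune the same weights on the ~10 high-fidelity ones at a lower learning rate.

```python
# Two-stage training: source task on low-fidelity data, transfer to HF data.
import torch
import torch.nn as nn

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for inputs, target_field in loader:   # (injector params, mean flow)
            opt.zero_grad()
            loss = loss_fn(model(inputs), target_field)
            loss.backward()
            opt.step()

# unet = AnyEncoderDecoderWithSkips()          # hypothetical U-Net model
# train(unet, low_fidelity_loader, lr=1e-3, epochs=100)  # source task (LF)
# train(unet, high_fidelity_loader, lr=1e-4, epochs=50)  # transfer step (HF)
```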
Currently, artificial intelligence (AI) is integrated across various segments of the public sector, in a scattered and fragmented manner, aiming to enhance the quality of people’s lives. While AI adoption has proven to have a great impact, several aspects still hamper its utilization in public administration. The AI community has therefore been proactively recommending a variety of initiatives aimed at promoting the adoption of reliable AI, with documentation as a key driver. While currently proposed AI documentation artifacts play a crucial role in increasing the transparency and accountability of various facts about AI systems, we propose the AI Product Card, a code-bound declarative documentation framework that aims to support the responsible deployment of AI-based solutions. Our proposed framework addresses the need to shift the focus from data and models considered in isolation to the reuse of AI solutions as a whole. By introducing a formalized approach to describing adaptation and optimization techniques, we aim to enhance existing documentation alternatives. Furthermore, its utilization in public administration aims to foster the rapid adoption of AI-based applications thanks to open access to common use cases in the public sector. We showcase our proposal with a public sector-specific use case, a legal text classification task, and demonstrate how the AI Product Card enables reuse through the interaction of the formal documentation specifications with modular code references.
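A hedged sketch of what a code-bound declarative entry could look like (every field name below is our illustration; the published AI Product Card schema may differ): each documented fact points to the code artefact that realises it.

```python
# Illustrative declarative documentation record binding facts to code.
ai_product_card = {
    "task": "legal-text-classification",
    "base_model": {"name": "example-legal-model", "ref": "models/base.py"},
    "adaptation": {                              # formalized adaptation step
        "technique": "fine-tuning",
        "ref": "train/finetune.py",
        "hyperparameters": {"lr": 2e-5, "epochs": 3},
    },
    "optimization": {                            # formalized optimization step
        "technique": "int8-quantization",
        "ref": "deploy/quantize.py",
    },
    "intended_use": "document triage in public administration",
    "data": {"datasheet_ref": "docs/datasheet.md"},
}
```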
Published in collaboration with The British Universities Industrial Relations Association (BUIRA), this book critically reviews the future of Industrial Relations (IR) in a changing work landscape and traces its historical evolution. Essential for academics, students and trade unions, it explores IR's significant changes over the past decade and its ongoing influence on our lives.
Wind speed at the sea surface is a key quantity for a variety of scientific applications and human activities. Given its importance, many observation techniques exist, ranging from in situ to satellite observations. However, none of these techniques can capture the full spatiotemporal variability of the phenomenon. Reanalysis products, obtained from data assimilation methods, represent the state of the art for sea-surface wind speed monitoring, but they may be biased by model errors, and their spatial resolution is not competitive with satellite products. In this work, we propose a scheme based on both data assimilation and deep learning concepts to process spatiotemporally heterogeneous input sources and reconstruct high-resolution time series of spatial wind speed fields. This method allows us to make the most of the complementary information conveyed by the different sources of sea-surface information typically available in operational settings. We use synthetic wind speed data to emulate satellite images, in situ time series, and reanalyzed wind fields. Starting from these pseudo-observations, we run extensive numerical simulations to assess the impact of each input source on the model’s reconstruction performance. We show that our proposed framework outperforms a deep learning–based inversion scheme and can successfully exploit the complementary spatiotemporal information of the different input sources. We also show that the model can learn the possible bias in reanalysis products and attenuate it in the output reconstructions.
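A minimal sketch of how heterogeneous sources can enter one reconstruction objective through per-source observation operators (the operators, shapes, and weights below are our assumptions, not the paper's architecture):

```python
# Multi-source fusion: each source k contributes a misfit H_k(x) vs. y_k.
import torch

def fusion_loss(x_rec, sources):
    """x_rec: reconstructed high-resolution field, shape (B, 1, H, W).
    sources: iterable of (obs_operator, observations, weight) triples."""
    loss = x_rec.new_zeros(())
    for H, y, w in sources:
        loss = loss + w * torch.mean((H(x_rec) - y) ** 2)
    return loss

# Example: a "satellite swath" seen as a masked view of the field.
x_rec = torch.rand(1, 1, 64, 64, requires_grad=True)
mask = torch.zeros(1, 1, 64, 64)
mask[..., ::4, :] = 1.0                       # every 4th row observed
sat_obs = torch.rand(1, 1, 64, 64) * mask
loss = fusion_loss(x_rec, [(lambda x: x * mask, sat_obs, 1.0)])
loss.backward()
```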
This article addresses a critical gap in international research concerning digital literacies and empowerment among adults who are English as an additional language (EAL) learners. In the Australian context, where digital communication and services are embedded in all aspects of life and work, proficiency in digital literacies, including advanced technologies like generative artificial intelligence (AI), is vital for working and living in Australia. Despite the increasing prevalence and significance of generative AI platforms such as ChatGPT, there is a notable absence of dedicated programs to assist EAL learners in understanding and utilising generative AI, potentially impacting their employability and everyday life. This article presents findings from a larger study conducted within training providers, spanning adult educational institutions nationwide. Through analysis of data gathered from surveys and focus groups, the article investigates the knowledge and attitudes of students, educators, and leaders regarding integrating generative AI into the learning program for adult EAL learners. The results reveal a hesitance among educators, particularly concerning beginning language learners, in incorporating generative AI into educational programs. Conversely, many adult learners demonstrate enthusiasm for learning about its potential benefits despite having limited understanding. These disparities underscore the pressing need for comprehensive professional development for educators and program leaders. The findings also highlight the need to develop the AI literacy of learners to foster their understanding and digital empowerment. The article concludes by advocating for a systemic approach to include generative AI as an important part of learning programs with students often from adult migrant and refugee backgrounds.
Drawing upon Darvin and Norton’s (2015) model of investment, this article examines how Xing and Jimmy (both pseudonyms) as two male Chinese English as a foreign language learners from rural migrant backgrounds negotiate their identities and assemble their social and cultural resources to invest in autonomous digital literacies for language learning and the assertion of a legitimate place in urban spaces. Employing a connective ethnographic design, this study collected data through interviews, reflexive journals, digital artifacts, and on-campus observations. Data were analyzed using an inductive thematic approach as well as within- and cross-case data analysis methods. The findings indicate that Xing and Jimmy experienced a profound sense of alienation and exclusion as they migrated from under-resourced rural spaces to the urban elite field. The unequal power relations in urban classrooms subjected them to marginalized and inadequate rural identities by denying them the right to speak and be heard. However, engaging with digital literacies in the wild allowed these migrant learners to access a wide range of linguistic, cultural, and symbolic resources, empowering them to reframe their identities as legitimate English speakers. The acquisition of such legitimacy enabled them to challenge the prevailing rural–urban exclusionary ideologies to claim the right to speak. This article closes by offering implications for empowering rural migrant students as socially competent members of the Chinese higher education system in the digital age.
This three-year longitudinal case study explored how trilingual Uyghur intranational migrant students utilized digital technologies to learn languages and negotiate their identities in Han-dominant environments during their internal migrations within China, a topic that has been scarcely researched before. Adopting a poststructuralist perspective of identity, the study traced four Uyghur students who migrated from underdeveloped southern Xinjiang to northern Xinjiang for junior high school education, and to more developed cities in eastern and southern China for senior high school education and higher education. A qualitative approach was adopted, utilizing semi-structured interviews, class and campus observations, daily conversations, WeChat conversations, participants’ reflections, and assignments. Findings reveal that Uyghur minority students utilized digital technologies to bridge the English proficiency gap with Han students, negotiate their marginalized identities, integrate into the mainstream education system, and extend the empowerment to other ethnic minority students. This was in sharp contrast to the significant challenges and identity crises they faced when they did not have access to digital technologies to learn Mandarin in boarding secondary schools. An unprecedented finding is that, with digital empowerment, Uyghur minority students could achieve accomplishments that were even difficult for Han students to attain and gain upward social mobility by finding employment in Han-dominant first-tier cities. The implications of utilizing digital technologies to support intranational migrant ethnic minority students’ language learning and identity development are discussed.
The growing demand for global wind power production, driven by the critical need for sustainable energy sources, requires reliable estimation of wind speed vertical profiles for accurate wind power prediction and comprehensive wind turbine performance assessment. Traditional methods relying on empirical equations or similarity theory face challenges due to their restricted applicability beyond the surface layer. Although recent studies have utilized various machine learning techniques to vertically extrapolate wind speeds, they often focus on single levels and lack a holistic approach to predicting entire wind profiles. As an alternative, this study introduces a proof-of-concept methodology utilizing TabNet, an attention-based sequential deep learning model, to estimate wind speed vertical profiles from coarse-resolution meteorological features extracted from a reanalysis dataset. To ensure that the methodology is applicable across diverse datasets, Chebyshev polynomial approximation is employed to model the wind profiles. Trained on the meteorological features as inputs and the Chebyshev coefficients as targets, the TabNet model predicts unseen wind profiles with reasonable accuracy across different wind conditions, such as high shear, low shear/well-mixed, low-level jet, and high wind. Additionally, this methodology quantifies the correlation of wind profiles with prevailing atmospheric conditions through a systematic feature importance assessment.
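A minimal sketch of the profile-parameterization step as we read it (the heights, the synthetic power-law profile, and the polynomial degree are our assumptions): fit Chebyshev coefficients to each profile, train the model on (features -> coefficients), and evaluate the polynomial to recover a full profile at any height.

```python
# Chebyshev parameterization of a wind speed vertical profile.
import numpy as np
from numpy.polynomial import chebyshev as C

z = np.linspace(10.0, 200.0, 40)      # measurement heights [m]
u = 8.0 * (z / 100.0) ** 0.2          # synthetic sheared wind profile [m/s]

# Map heights onto [-1, 1], the natural Chebyshev domain, then fit.
z_hat = 2.0 * (z - z.min()) / (z.max() - z.min()) - 1.0
coeffs = C.chebfit(z_hat, u, deg=5)   # targets a model would learn to predict

u_rec = C.chebval(z_hat, coeffs)      # evaluate to recover the full profile
print(np.max(np.abs(u_rec - u)))      # small approximation error
```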
The paper presents a novel control method aimed at enhancing the trajectory-tracking accuracy of two-link mechanical systems, particularly nonlinear systems that incorporate uncertainties such as time-varying parameters and external disturbances. Leveraging the Udwadia–Kalaba equation, the algorithm employs the desired system trajectory as a servo constraint. First, we use the system’s constraints to construct its dynamic equation and apply the generalized constraint forces derived from the constraint equation to the unconstrained system. Second, we design a robust approximate constraint-tracking controller for manipulator control and establish its stability using Lyapunov’s method. Finally, we numerically simulate and experimentally validate the controller on a collaborative platform using model-based design methods.
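For readers unfamiliar with it, here is the standard textbook form of the Udwadia–Kalaba equation the controller builds on; how the paper handles uncertainty on top of this is not shown here.

```latex
% Unconstrained dynamics M(q)\ddot q = Q give the free acceleration
% a = M^{-1} Q. Differentiating the (servo) constraints twice yields the
% Pfaffian form A(q,\dot q,t)\,\ddot q = b(q,\dot q,t). The constrained
% motion is then
\[
  \ddot q \;=\; a + M^{-1/2}\,\bigl(A M^{-1/2}\bigr)^{+}\,(b - A a),
\]
% where (\cdot)^{+} is the Moore-Penrose pseudoinverse and
% Q_c = M^{1/2} (A M^{-1/2})^{+} (b - A a) is the generalized constraint
% force applied to the unconstrained system.
```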