It is impossible to view the news at present without hearing talk of crisis: the economy, the climate, the pandemic. This book asks how these larger societal issues lead to a crisis of work, making it ever more precarious, unequal and intense. Experts diagnose the nature of the problem and offer a programme for transcending the crises.
This paper proposes to solve the vortex gust mitigation problem on a 2D, thin flat plate using onboard measurements. The objective is to solve the discrete-time optimal control problem of finding the pitch rate sequence that minimizes the lift perturbation, that is, a criterion based on the lift coefficient obtained by the unsteady vortex lattice method. The controller is modeled as an artificial neural network, and it is trained to minimize this criterion using deep reinforcement learning (DRL). To be optimal, we show that the controller must take as inputs the locations and circulations of the gust vortices, but these quantities are not directly observable from the onboard sensors. We therefore propose to use a Kalman particle filter (KPF) to estimate the gust vortices online from the onboard measurements. The reconstructed input is then used by the controller to calculate the appropriate pitch rate. We evaluate the performance of this method for gusts composed of one to five vortices. Our results show that (i) controllers deployed with full knowledge of the vortices are able to efficiently mitigate the lift disturbance induced by the gusts, (ii) the KPF performs well in reconstructing gusts composed of fewer than three vortices but gives more mixed results for gusts composed of more vortices, and (iii) adding a KPF to the controller recovers a significant part of the performance loss due to the gust vortices being unobservable.
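To make the closed-loop architecture concrete, a minimal sketch of one control step is given below: a generic particle filter (standing in for the KPF) updates the gust-vortex estimate from an onboard measurement, and the trained policy maps that estimate to a pitch-rate command. The functions `propagate`, `predict_measurement`, and `policy`, and all parameter values, are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the closed-loop architecture:
# a particle filter estimates the gust-vortex states from onboard measurements,
# and a trained neural-network controller maps that estimate to a pitch rate.
import numpy as np

def control_step(particles, weights, measurement, policy, propagate,
                 predict_measurement, meas_noise_std=0.05):
    """One discrete-time step: particle-filter update, then pitch-rate command."""
    # 1) Propagate each particle (vortex positions/circulations) through the flow model.
    particles = propagate(particles)

    # 2) Re-weight particles by the likelihood of the onboard measurement.
    predicted = predict_measurement(particles)              # e.g. predicted lift coefficient
    likelihood = np.exp(-0.5 * ((measurement - predicted) / meas_noise_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()

    # 3) Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))

    # 4) Feed the state estimate to the learned controller to get the pitch rate.
    state_estimate = np.average(particles, axis=0, weights=weights)
    pitch_rate = policy(state_estimate)
    return particles, weights, pitch_rate
```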
This paper presents a climbing robot (CR) designed for pipeline maintenance, with the capability to avoid the risks inherent in manual operations. In the design process, a three-degree-of-freedom (DOF) parallel mechanism coupled with a remote center of motion (RCM) linkage was designed to serve as the CR’s climbing mechanism, meeting the specific demands of climbing movements. The modified Kutzbach–Grübler formula and screw theory were applied to calculate the DOFs of the CR. Then, the inverse and forward position analyses for the CR were derived. Furthermore, velocity and acceleration analyses of the parallel mechanism were conducted and the Jacobian matrix was derived, through which the singularity of the parallel mechanism was analyzed. To evaluate the kinematic performance of the parallel mechanism, the motion/force transmission index (LTI) over the workspace was calculated, which guided the subsequent dimensional optimization process. Based on the optimization result, a prototype was constructed and a series of motion experiments were carried out to validate its climbing capability.
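For reference, the standard Grübler–Kutzbach mobility criterion from which the modified formula starts is

$$ M = d\,(n - g - 1) + \sum_{i=1}^{g} f_i, $$

where $n$ is the number of links, $g$ the number of joints, $f_i$ the number of freedoms of joint $i$, and $d$ the order of the task space ($d = 6$ for a spatial mechanism). The modified form applied here additionally corrects for redundant constraints identified via screw theory (commonly written as an extra $+\nu$ term); the exact correction follows the authors' screw-theoretic analysis.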
In practice, nondestructive testing (NDT) procedures tend to consider experiments (and their respective models) as distinct, conducted in isolation, and associated with independent data. In contrast, this work looks to capture the interdependencies between acoustic emission (AE) experiments (as meta-models) and then use the resulting functions to predict the model hyperparameters for previously unobserved systems. We utilize a Bayesian multilevel approach (similar to deep Gaussian Processes) where a higher-level meta-model captures the inter-task relationships. Our key contribution is how knowledge of the experimental campaign can be encoded between tasks as well as within tasks. We present an example of AE time-of-arrival mapping for source localization, to illustrate how multilevel models naturally lend themselves to representing aggregate systems in engineering. We constrain the meta-model based on domain knowledge, then use the inter-task functions for transfer learning, predicting hyperparameters for models of previously unobserved experiments (for a specific design).
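A minimal sketch of the two-level idea, using scikit-learn Gaussian processes on synthetic data rather than the paper's deep-GP-like formulation: task-level GPs are fitted per experiment, and a meta-model regresses their learned hyperparameters on a task descriptor so that hyperparameters can be predicted for an unobserved experiment. All data, descriptors, and kernel choices below are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def fit_task_gp(X, y):
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)
    return gp.kernel_.theta                      # learned log hyperparameters (scale, length-scale)

# Lower level: one GP per observed experiment (sensor coordinates -> time of arrival).
# Each experiment is synthetic here and indexed by a scalar design descriptor d.
descriptors = np.array([0.2, 0.5, 0.8])
tasks = []
for d in descriptors:
    X = rng.uniform(0, 1, size=(30, 2))                            # sensor coordinates
    y = np.sin(4 * d * X[:, 0]) + 0.05 * rng.standard_normal(30)   # time-of-arrival proxy
    tasks.append(fit_task_gp(X, y))

# Upper level (meta-model): regress the task hyperparameters on the descriptor.
meta_gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
meta_gp.fit(descriptors.reshape(-1, 1), np.vstack(tasks))

# Transfer: predicted (log) hyperparameters for a previously unobserved experiment.
theta_new = meta_gp.predict(np.array([[0.65]]))
print(theta_new)
```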
In this paper, we analyze a polling system on a circle. Random batches of customers arrive at a circle, where each customer, independently, obtains a location that is uniformly distributed on the circle. A single server cyclically traverses the circle to serve all customers. Using mean value analysis, we derive the expected number of waiting customers within a given distance of the server. We exploit this to obtain closed-form expressions for both the mean batch sojourn time and the mean time to delivery.
Gradual typing provides a model for when a legacy language with less precise types interacts with a newer language with more precise types. Casts mediate between types of different precision, allocating blame when a value fails to conform to a type. The blame theorem asserts that blame always falls on the less-precisely typed side of a cast. One instance of interest is when a legacy language (such as Java) permits null values at every type, while a newer language (such as Scala or Kotlin) explicitly indicates which types permit null values. Nieto et al. in 2020 introduced a gradually typed calculus for just this purpose. The calculus requires three distinct constructors for function types and a non-standard proof of the blame theorem; it can embed terms from the legacy language into the newer language (or vice versa) only when they are closed. Here, we define a simpler calculus that is more orthogonal, with one constructor for function types and one for possibly nullable types, and with an entirely standard proof of the blame theorem; it can embed terms from the legacy language into the newer language (and vice versa) even if they are open. All results in the paper have been mechanized in Coq.
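The following toy Python sketch (an analogy, not the paper's calculus) illustrates how a cast mediating between a possibly nullable legacy type and a non-nullable type can allocate blame to the less precisely typed side when a null value fails to conform.

```python
# Toy illustration of a cast from a possibly-nullable "legacy" value to a
# non-nullable type, allocating blame to the legacy side when it supplies null.
from typing import Optional, TypeVar

T = TypeVar("T")

class BlameError(Exception):
    def __init__(self, label: str):
        super().__init__(f"blame falls on: {label}")

def cast_nonnull(value: Optional[T], blame_label: str) -> T:
    """Mediates between Optional[T] (less precise) and T (more precise)."""
    if value is None:
        # The less precisely typed side supplied null: it gets the blame.
        raise BlameError(blame_label)
    return value

# A legacy (Java-like) API may return None at any type; the more precisely typed
# client casts the result, naming the legacy side as the blame label.
def legacy_lookup(key: str) -> Optional[str]:
    return {"a": "1"}.get(key)

try:
    value: str = cast_nonnull(legacy_lookup("b"), blame_label="legacy_lookup")
except BlameError as e:
    print(e)   # blame falls on: legacy_lookup
```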
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.
The generic multiverse was introduced in [74] and [81] to explicate the portion of mathematics which is immune to our independence techniques. It consists, roughly speaking, of all universes of sets obtainable from a given universe by forcing extension. Usuba recently showed that the generic multiverse contains a unique definable universe, assuming strong large cardinal hypotheses. On the basis of this theorem, a non-pluralist about set theory could dismiss the generic multiverse as irrelevant to what set theory is really about, namely that unique definable universe. Whatever one’s attitude towards the generic multiverse, we argue that certain impure proofs ensure its ongoing relevance to the foundations of set theory. The proofs use forcing-fragile theories and absoluteness to prove ${\mathrm {ZFC}}$ theorems about simple “concrete” objects.
This paper investigates a closed-loop visual servo control scheme for controlling the position of a fully constrained cable-driven parallel robot (CDPR) designed for functional rehabilitation tasks. The control system incorporates real-time position correction using an Intel RealSense camera. Our CDPR features four cables exiting from pulleys, driven by AC servomotors, to move the moving platform (MP). The focus of this work is the development of a control scheme for a closed-loop visual servoing system utilizing depth/RGB images. The developed algorithm uses this data to determine the actual Cartesian position of the MP, which is then compared to the desired position to calculate the required Cartesian displacement. This displacement is fed into the inverse kinematic model to generate the servomotor commands. Three types of trajectories (circular, square, and triangular) are used to test the controller’s position-tracking performance. Compared to the open-loop control of the robot, the new control system increases positional accuracy and effectively handles cable behavior, various perturbations, and modeling errors. The obtained results showed significant improvements in control performance, notably reduced root mean square and maximum position errors.
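A schematic sketch of one iteration of the visual servoing loop described above; `read_camera_pose`, `inverse_kinematics`, `send_to_servos`, and the gain are hypothetical placeholders rather than the authors' implementation.

```python
# One step of a camera-in-the-loop position correction: measure the MP pose,
# compute the Cartesian displacement toward the desired pose, and convert it to
# cable-length (servomotor) commands via the inverse kinematic model.
import numpy as np

def visual_servo_step(p_desired, read_camera_pose, inverse_kinematics, send_to_servos,
                      gain=0.5):
    p_actual = read_camera_pose()                 # MP position from depth/RGB images
    displacement = gain * (p_desired - p_actual)  # required Cartesian correction
    cable_lengths = inverse_kinematics(p_actual + displacement)
    send_to_servos(cable_lengths)                 # AC servomotor commands
    return np.linalg.norm(p_desired - p_actual)   # position error for RMSE logging
```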
The estimation of workspace for parallel kinematic machines (PKMs) typically relies on geometric considerations, which is suitable for PKMs operating under light load conditions. However, when subjected to heavy load, PKMs may experience significant deformation in certain postures, potentially compromising their stiffness. Additionally, heavy load conditions can impact motor loading performance, leading to inadequate motor loading in specific postures. Consequently, in addition to geometric constraints, the workspace of PKMs under heavy load is also constrained by mechanism deformation and motor loading performance.
This paper aims to develop a new heavy-load 6-PSS PKM for multi-degree-of-freedom forming processes. Additionally, it proposes a new method for estimating the workspace, which takes into account both mechanism deformation and motor loading performance. Initially, the geometric workspace of the machine is predicted based on its geometric configuration. Subsequently, the workspace is predicted while considering the effects of mechanism deformation and motor loading performance separately. Finally, the workspace is synthesized by simultaneously accounting for both mechanism deformation and motor loading performance, and a new index, the workspace utilization rate, is proposed. The results indicate that the synthesized workspace of the machine diminishes as the load magnitude and load arm increase. Specifically, under a heavy load magnitude of 6000 kN and a load arm of 200 mm, the synthesized workspace amounts to only 9.9% of the geometric workspace.
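A hedged sketch of the workspace-synthesis idea: candidate poses are filtered by geometric, deformation, and motor-loading checks, and the utilization rate is the fraction of the geometric workspace that survives all checks. The three check functions stand in for the paper's models and are not reproductions of them.

```python
# Sample candidate poses, keep a pose only if it passes the geometric,
# deformation, and motor-loading checks under the given load, and report the
# utilization rate of the synthesized workspace relative to the geometric one.
def workspace_utilization(poses, geometric_ok, deformation_ok, motor_loading_ok, load):
    geometric = [p for p in poses if geometric_ok(p)]
    if not geometric:
        return 0.0
    synthesized = [p for p in geometric
                   if deformation_ok(p, load) and motor_loading_ok(p, load)]
    return len(synthesized) / len(geometric)   # e.g. 9.9% at 6000 kN and a 200 mm load arm
```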
Data is the foundation of any scientific, industrial, or commercial process. Its journey flows from collection to transport, storage, and processing. While best practices and regulations guide its management and protection, recent events have underscored their vulnerabilities. Academic research and commercial data handling have been marred by scandals, revealing the brittleness of data management. Data is susceptible to undue disclosures, leaks, losses, manipulation, or fabrication. These incidents often occur without visibility or accountability, necessitating a systematic structure for safe, honest, and auditable data management. We introduce the concept of Honest Computing as the practice and approach that emphasizes transparency, integrity, and ethical behaviour within the realm of computing and technology. It ensures that computer systems and software operate honestly and reliably without hidden agendas, biases, or unethical practices. It enables privacy and confidentiality of data and code by design and default. We also introduce a reference framework to achieve demonstrable data lineage and provenance, contrasting it with Secure Computing, a related but differently orientated form of computing. At its core, Honest Computing leverages Trustless Computing, Confidential Computing, Distributed Computing, Cryptography, and AAA security concepts. Honest Computing opens new ways of creating technology-based processes and workflows which permit the migration of regulatory frameworks for data protection from principle-based approaches to rule-based ones. Addressing use cases in many fields, from AI model protection and ethical layering to digital currency formation for finance and banking, trading, and healthcare, this foundational layer approach can help define new standards for appropriate data custody and processing.
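As a loose illustration of what demonstrable data lineage can look like in practice (an assumption on our part, not the paper's reference framework), the sketch below hash-chains processing records so that later tampering with any step is detectable.

```python
# Append-only, hash-chained lineage log: each record commits to the previous
# one, so altering any earlier record breaks verification of the whole chain.
import hashlib, json, time

def append_record(chain, actor, action, data_digest):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "action": action, "data_digest": data_digest,
              "timestamp": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                       # record was tampered with
        if i and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False                       # chain was broken or reordered
    return True

chain = append_record([], "lab-A", "collect", data_digest="abc123")
chain = append_record(chain, "pipeline-B", "transform", data_digest="def456")
print(verify(chain))   # True until any record is modified
```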
The condition assessment of underground infrastructure (UI) is critical for maintaining the safety, functionality, and longevity of subsurface assets like tunnels and pipelines. This article reviews various data acquisition techniques, comparing their strengths and limitations in UI condition assessment. In collecting structured data, traditional methods like strain gauges can only obtain relatively low volumes of data due to low sampling frequency and manual data collection and transmission, whereas more advanced and automatic methods like distributed fiber optic sensing can gather relatively larger volumes of data due to automatic data collection, continuous sampling, or comprehensive monitoring. Upon comparison, unstructured data acquisition methods can provide more detailed visual information that complements structured data. Methods like closed-circuit television and unmanned aerial vehicles produce large volumes of data due to their continuous video recording and high-resolution imaging, posing great challenges to data storage, transmission, and processing, while ground-penetrating radar and infrared thermography produce smaller volumes of image data that are more manageable. The acquisition of large volumes of UI data is the first step in its condition assessment. To enable more efficient, accurate, and reliable assessment, it is recommended to (1) integrate data analytics like artificial intelligence to automate the analysis and interpretation of collected data, (2) develop robust big data management platforms capable of handling the storage, processing, and analysis of large volumes of data, (3) couple different data acquisition technologies to leverage the strengths of each technique, and (4) continuously improve data acquisition methods to ensure efficient and reliable data acquisition.
Due to their significant role in creative design ideation, databases of causal ontology-based models for biological and technical systems have been developed. However, creating structured database entries through system models using a causal ontology requires the time and effort of experts. Researchers have worked toward developing methods that can automatically generate representations of systems from documents using causal ontologies by leveraging machine learning (ML) techniques. However, these methods use limited, hand-annotated data for building the ML models and have manual touchpoints that are not documented. While opportunities exist to improve the accuracy of these ML models, it is more important to understand the complete process of generating structured representations using a causal ontology. This research proposes a new method and a set of rules to extract information relevant to the constructs of the SAPPhIRE model of causality from natural language descriptions of technical systems, and reports the performance of this process. The process aims to understand the information in the context of the entire description. The method starts by identifying the system interactions involving material, energy and information and then builds the causal description of each system interaction using the SAPPhIRE ontology. The method was developed iteratively, verifying the improvements through user trials in every cycle. User trials of the new method and rules with specialists and novice users of SAPPhIRE modeling showed that the method helps in accurately and consistently extracting the information relevant to the constructs of the SAPPhIRE model from a given natural language description.
The distinction between proofs that only certify the truth of their conclusion and those that also display the reasons why their conclusion holds has a long philosophical history. In the contemporary literature, the grounding relation—an objective, explanatory relation which is tightly connected with the notion of reason—is receiving considerable attention in several fields of philosophy. While much work is being devoted to characterising logical grounding in terms of deduction rules, no in-depth study focusing on the difference between grounding rules and logical rules exists. In this work, we analyse the relation between logical grounding and classical logic by focusing on the technical and conceptual differences that distinguish grounding rules from logical rules. The calculus employed to conduct the analysis moreover provides a strong confirmation of the fact that grounding derivations are logical derivations of a certain kind, without trivialising the distinction between grounding and logical rules, or between the explanatory and non-explanatory parts of a derivation. By a further formal analysis, we negatively answer the question of a possible correspondence between grounding rules and intuitionistic logical rules.
Structural convergence is a framework for the convergence of graphs by Nešetřil and Ossona de Mendez that unifies the dense (left) graph convergence and Benjamini-Schramm convergence. They posed a problem asking whether for a given sequence of graphs $(G_n)$ converging to a limit $L$ and a vertex $r$ of $L$, it is possible to find a sequence of vertices $(r_n)$ such that $L$ rooted at $r$ is the limit of the graphs $G_n$ rooted at $r_n$. A counterexample was found by Christofides and Král’, but they showed that the statement holds for almost all vertices $r$ of $L$. We offer another perspective on the original problem by considering the size of definable sets to which the root $r$ belongs. We prove that if $r$ is an algebraic vertex (i.e. belongs to a finite definable set), the sequence of roots $(r_n)$ always exists.
Designers rely on many methods and strategies to create innovative designs. However, design research often overlooks the personality and attitudinal factors influencing method utility and effectiveness. This article defines and operationalizes the construct of design mindset and introduces the Design Mindset Inventory (D-Mindset0.1), allowing us to measure it and leverage statistical analyses to advance our understanding of its role in design. The inventory’s validity and reliability are evaluated by analyzing a large sample of engineering students (N = 473). Using factor analysis, we identified four underlying factors of D-Mindset0.1 related to the theoretical concepts of Conversation with the Situation, Iteration, Co-Evolution of Problem–Solution and Imagination. The latter part of the article finds statistically and theoretically meaningful relationships between design mindset and the three design-related constructs of sensation-seeking, self-efficacy and ambiguity tolerance. Ambiguity tolerance and self-efficacy emerge as positively correlated with design mindset. Sensation-seeking, which is significantly correlated only with subconstructs of D-Mindset0.1, shows both negative and positive correlations. These relationships lend validity to D-Mindset0.1 and, by drawing on previously established relationships between the three personality traits and specific behaviors, facilitate further investigations of what its subconstructs capture.
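An illustrative sketch of the kind of analysis described (synthetic data and assumed item counts, not the study's pipeline): a four-factor solution is extracted from inventory responses and a factor score is correlated with another scale.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for the inventory data: 473 respondents, 20 Likert items
# driven by 4 latent factors (purely illustrative, not the study's data).
latent = rng.standard_normal((473, 4))
loadings_true = rng.uniform(0.3, 0.9, size=(4, 20))
responses = latent @ loadings_true + 0.5 * rng.standard_normal((473, 20))

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(responses)         # one column of factor scores per factor
loadings = fa.components_.T                  # item-by-factor loading matrix

# Correlate a factor score with another scale (here a synthetic ambiguity-tolerance score).
ambiguity_tolerance = 0.4 * scores[:, 0] + rng.standard_normal(473)
r = np.corrcoef(scores[:, 0], ambiguity_tolerance)[0, 1]
print(f"correlation between factor 1 and ambiguity tolerance: {r:.2f}")
```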
We consider the task completion time of a repairable server system in which a server experiences randomly occurring service interruptions during which it works slowly. Every service-state change preempts the task that is being processed. The server may then resume the interrupted task, replace the task with a different one, or restart the same task from the beginning under the new service-state. The total time that the server takes to complete a task of random size, including interruptions, is called the completion time. We study the completion time of a task under the last two cases as a function of the task size distribution, the service interruption frequency/severity, and the repair frequency. We derive closed-form expressions for the completion time distribution in the Laplace domain under the replace and restart recovery disciplines and present their asymptotic behavior. In general, heavy-tailed behavior of completion times arises from heavy-tailedness of the task time. However, under the preempt-restart discipline, even when the server still serves during interruptions, albeit at a slower rate, completion times may exhibit power-tail behavior for task time distributions with exponential tails. Furthermore, we present an $M/G/\infty$ queue with exponential service time and Markovian service interruptions. Our results reveal that the stationary first-order moments, that is, the expected system time and the expected number in the system, are insensitive to the way the service modulation affects the servers: system-wide modulation affecting every server simultaneously versus identical modulation affecting each server independently.
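The power-tail effect under the preempt-restart discipline can be illustrated with a small Monte Carlo sketch (assumed rates and parameters, not the paper's Laplace-domain analysis): the service state alternates at exponential epochs between a normal and a slower rate, and every state change restarts the task from scratch.

```python
# Preempt-restart completion time: the task finishes only if it completes within
# one uninterrupted service-state period; otherwise it restarts under the new state.
import random

def completion_time(task_size, rate_up=1.0, rate_down=0.3, switch_rate=0.5):
    elapsed, fast = 0.0, True
    while True:
        period = random.expovariate(switch_rate)         # time until the next state change
        rate = rate_up if fast else rate_down
        if task_size / rate <= period:                   # task finishes before the switch
            return elapsed + task_size / rate
        elapsed += period                                # restart under the new state
        fast = not fast

# Even with an exponentially distributed task size, the sampled completion times
# can show a much heavier tail than the task-size distribution itself.
samples = [completion_time(random.expovariate(1.0)) for _ in range(100_000)]
print(max(samples), sum(samples) / len(samples))
```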
In this work, we consider extensions of the dual risk model with proportional gains by introducing dependence structures among gain sizes and gain interarrival times. Among others, we consider the case where the proportionality parameter is randomly chosen, in particular when it is a uniform random variable, as well as the case where both upward and downward jumps may occur. Moreover, we consider the case with a causal dependence structure, as well as the case where the dependence is based on the generalized Farlie–Gumbel–Morgenstern copula. The ruin probability and the distribution of the time to ruin are investigated.
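A Monte Carlo sketch of the basic model (assumed dynamics and parameters, without the dependence structures studied here): capital decreases at unit rate, jumps upward by a proportion of the current level at Poisson epochs, and ruin occurs when the capital hits zero. The random or uniformly distributed proportionality parameter would simply replace the fixed `proportion` by a draw.

```python
# Dual risk model with proportional gains: linear downward drift at unit rate,
# proportional upward jumps at Poisson epochs, ruin when the capital reaches 0.
import random

def time_to_ruin(x0, gain_rate=1.0, proportion=0.4, horizon=1_000.0):
    t, x = 0.0, x0
    while t < horizon:
        inter = random.expovariate(gain_rate)    # time until the next gain
        if x <= inter:                           # capital is exhausted before the gain
            return t + x
        t += inter
        x -= inter
        x += proportion * x                      # proportional upward jump
    return float("inf")                          # treated as survival up to the horizon

ruin_times = [time_to_ruin(x0=1.0) for _ in range(50_000)]
ruin_prob = sum(rt < float("inf") for rt in ruin_times) / len(ruin_times)
print(f"estimated ruin probability: {ruin_prob:.3f}")
```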
Experiments in engineering are typically conducted in controlled environments where parameters can be set to any desired value. This assumes that the same holds in a real-world setting, which is often incorrect, as many experiments are influenced by uncontrollable environmental conditions such as temperature, humidity, and wind speed. When optimizing such experiments, the focus should be on finding optimal values conditionally on these uncontrollable variables. This article extends Bayesian optimization to the optimization of systems in changing environments that include controllable and uncontrollable parameters. The extension fits a global surrogate model over all controllable and environmental variables but optimizes only the controllable parameters, conditional on measurements of the uncontrollable variables. The method is validated on two synthetic test functions, and the effects of the noise level, the number of environmental parameters, the parameter fluctuation, the variability of the uncontrollable parameters, and the effective domain size are investigated. ENVBO, the algorithm proposed by this investigation, is applied to a wind farm simulator with eight controllable and one environmental parameter. In all but one case, ENVBO finds solutions for the entire domain of the environmental variable that outperform results from optimization algorithms focusing only on a fixed environmental value, while using a fraction of their evaluation budget. This makes the proposed approach very sample-efficient and cost-effective. An off-the-shelf open-source version of ENVBO is available via the NUBO Python package.
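A schematic sketch of the conditional-optimization step (generic scikit-learn surrogate and expected improvement, not the NUBO implementation of ENVBO): one global GP is fitted over the controllable and environmental inputs, but the acquisition function is optimized over the controllable inputs only, with the environmental input fixed at its currently measured value.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def next_controllable(X, y, env_now, bounds_ctrl):
    """X stacks [controllable | environmental] columns; y holds the observed outputs."""
    # Global surrogate over controllable and environmental variables.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    y_best = y.max()

    def neg_expected_improvement(x_ctrl):
        # Environmental input is fixed at its measured value; only x_ctrl varies.
        x = np.concatenate([x_ctrl, env_now]).reshape(1, -1)
        mu, sigma = gp.predict(x, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (mu - y_best) / sigma
        return -((mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)).item()

    x0 = np.array([np.random.uniform(lo, hi) for lo, hi in bounds_ctrl])
    res = minimize(neg_expected_improvement, x0, bounds=bounds_ctrl, method="L-BFGS-B")
    return res.x   # controllable settings to evaluate under the current environment
```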