The elbow, with its complex physiological structure, plays an important role in upper limb motion and can be assisted by an exoskeleton during rehabilitation. However, elbow stiffness changes during training, which reduces the comfort and effectiveness of rehabilitation. Moreover, the elbow's rotation axis shifts during motion, which can cause secondary injuries. In this paper, we design an elbow exoskeleton with a variable stiffness actuator and a deviation compensation unit to assist elbow rehabilitation. First, we design a variable stiffness actuator based on a symmetric actuation principle to adapt to changes in elbow stiffness. The parameters of the variable stiffness actuator are optimized by motion simulation. Next, we design a deviation compensation unit to follow rotation axis deviation outside the horizontal plane. Simulation shows that the compensation area covers the deviation. Finally, simulation and experiments demonstrate the performance of our elbow exoskeleton. The workspace meets the needs of daily elbow motion, and the variable stiffness actuator adjusts the exoskeleton stiffness as expected.
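The actuator mechanism itself is not detailed above, but the following minimal Python sketch illustrates how symmetric (antagonistic) actuation can yield a tunable joint stiffness, assuming two springs with quadratic torque–deflection characteristics; the torque law, parameter values, and function names are illustrative assumptions, not the design used in the paper.

```python
# Minimal sketch (not the paper's actual design): joint stiffness of an
# antagonistic, symmetrically actuated joint with two quadratic springs.
# Assumptions: each spring produces torque tau = a * deflection**2, and both
# motors apply a common pretension p plus/minus the joint angle theta.

def joint_torque(theta, pretension, a=1.0):
    """Net torque from two antagonistic quadratic springs."""
    agonist = a * (pretension + theta) ** 2
    antagonist = a * (pretension - theta) ** 2
    return agonist - antagonist          # = 4 * a * pretension * theta

def joint_stiffness(pretension, a=1.0):
    """d(tau)/d(theta): stiffness grows linearly with co-contraction."""
    return 4.0 * a * pretension

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.6):            # increasing co-contraction
        print(f"pretension={p:.1f}  stiffness={joint_stiffness(p):.2f} N*m/rad")
```

Under these assumptions the net torque is linear in the joint angle, so stiffness can be commanded through the co-contraction level independently of the equilibrium position.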
Offering theoretical frameworks from experts as well as practical examples to support women transitioning through menopause in the workplace, this is a go-to reference for academics and policy makers working in the field.
Emerging technologies eventually disappear into the atmosphere of everyday life - they become ordinary and enmeshed in ignored infrastructures and patterns of behaviour. This is how Mundania takes form. Based on original research, this book uses the concept of mundania to better understand our relationship with technology.
Early robust design (RD) can lead to significant cost savings in the later stages of product development. In order to design systems that are insensitive to various sources of deviation in the early stages, specific design knowledge (SDK) plays a crucial role. Different design situations result in higher or lower levels of derivable SDK, which leads to different activities to achieve the development goal. Due to the variety of design situations, it is difficult for design engineers to choose a more robust concept to avoid the costly iterations that occur in the later development stages. Existing RD methods often do not adequately support these differences in design situations. To address the problem, this paper outlines an adaptive modeling method using the Embodiment Function Relation and Tolerance model. The method is developed in two contrasting design situations, each with a high and low level of derivable SDK, and evaluated in another two corresponding case studies. It has a consistent structure with five stages and gates. At each stage, the derivable SDK is taken into account and the individual modeling steps are adapted. This method provides design engineers with concrete support for early robustness evaluation of a product concept in different development scenarios.
Buildings employ an ensemble of technical systems like those for heating and ventilation. Ontologies such as Brick, IFC, SSN/SOSA, and SAREF have been created to describe such technical systems in a machine-understandable manner. However, these focus on describing system topology, whereas several relevant use cases (e.g., automated fault detection and diagnostics (AFDD)) also need knowledge about the physical processes. While mathematical simulation can be used to model physical processes, such simulations are expensive to run in practice and are not integrated with mainstream technical systems ontologies today. We propose to describe the effect of component actuation on underlying physical mechanisms within component stereotypes. These stereotypes are linked to actual component instances in the technical system description, thereby accomplishing an integration of knowledge about system structure and physical processes. We contribute an ontology for such stereotypes and show that it covers 100% of Brick heating, ventilation, and air-conditioning (HVAC) components. We further show that the ontology enables automatically inferring relationships between components in a real-world building in most cases, except in two situations where component dependencies are underreported. This is due to missing component models for passive parts like splits and joins in ducts, and hence points at concrete future extensions of the Brick ontology. Finally, we demonstrate how AFDD applications can utilize the resulting knowledge graph to find expected consequences of an action, or conversely, to identify components that may be responsible for an observed state of the process.
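As a rough illustration of attaching physical-effect stereotypes to component instances, the following Python/rdflib sketch links a Brick Damper instance to a hypothetical stereotype stating what its actuation affects, and then queries that link; the stereo: namespace and property names are invented placeholders, not the paper's published ontology.

```python
# Minimal sketch, not the published ontology: linking a Brick component
# instance to a hypothetical "stereotype" describing the physical effect of
# its actuation, then querying which quantity an actuation influences.
from rdflib import Graph, Namespace, Literal, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
STEREO = Namespace("http://example.org/stereotype#")   # assumed namespace
BLDG = Namespace("http://example.org/building#")

g = Graph()
g.bind("brick", BRICK)
g.bind("stereo", STEREO)

# A concrete damper instance in the building model ...
g.add((BLDG.damper1, RDF.type, BRICK.Damper))
# ... linked to a stereotype stating that actuating it changes air flow.
g.add((BLDG.damper1, STEREO.hasStereotype, STEREO.DamperStereotype))
g.add((STEREO.DamperStereotype, STEREO.actuationAffects, Literal("air_flow_rate")))

query = """
SELECT ?component ?effect WHERE {
    ?component stereo:hasStereotype ?st .
    ?st stereo:actuationAffects ?effect .
}"""
for component, effect in g.query(query, initNs={"stereo": STEREO}):
    print(component, "actuation affects", effect)
```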
The basic elements of bridge engineering design drawings contain a large amount of important information, such as structural dimensions and material indexes, and detecting them is the basis for digitizing drawings. To address the low detection accuracy of existing methods for basic drawing elements, an improved basic element detection algorithm for bridge engineering design drawings based on YOLOv5 is proposed. Firstly, coordinate attention is introduced into the feature extraction network to enhance the feature extraction capability of the algorithm and alleviate the difficulty of recognizing texture features inside grayscale images. Then, to handle targets at different scales, the standard 3 × 3 convolution in the feature pyramid network is replaced with switchable atrous convolution, and the atrous rate is adaptively selected for the convolution computation to expand the receptive field. Finally, experiments are conducted on a basic element detection dataset of bridge engineering design drawings; the results show that at an Intersection over Union of 0.5, the proposed algorithm achieves a mean average precision of 93.6%, 3.4% higher than the original YOLOv5 algorithm, which satisfies the accuracy requirements of basic element detection for bridge engineering design drawings.
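Coordinate attention is an established building block (Hou et al., 2021); the following PyTorch sketch shows one common form of such a block that could be inserted into a YOLOv5 backbone. The reduction ratio, activation, and layer sizes are assumptions and may differ from the variant used in the paper.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal coordinate attention block (after Hou et al., 2021); the exact
    variant used in the paper's YOLOv5 backbone may differ."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (n, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w                      # reweight by position-aware maps

# Example: attend over a backbone feature map.
feat = torch.randn(1, 64, 80, 80)
print(CoordinateAttention(64)(feat).shape)        # torch.Size([1, 64, 80, 80])
```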
We prove a new lower bound for the almost 20-year-old problem of determining the smallest possible size of an essential cover of the $n$-dimensional hypercube $\{\pm 1\}^n$, that is, the smallest possible size of a collection of hyperplanes that forms a minimal cover of $\{\pm 1\}^n$ and such that, furthermore, every variable appears with a non-zero coefficient in at least one of the hyperplane equations. We show that such an essential cover must consist of at least $10^{-2}\cdot n^{2/3}/(\log n)^{2/3}$ hyperplanes, improving previous lower bounds of Linial–Radhakrishnan, of Yehuda–Yehudayoff, and of Araujo–Balogh–Mattos.
With the promise of greater efficiency and effectiveness, public authorities have increasingly turned to algorithmic systems to regulate and govern society. In Algorithmic Rule By Law, Nathalie Smuha examines this reliance on algorithmic regulation and shows how it can erode the rule of law. Drawing on extensive research and examples, Smuha argues that outsourcing important administrative decisions to algorithmic systems undermines core principles of democracy. Smuha further demonstrates that this risk is far from hypothetical or one that can be relegated to authoritarian regimes, as many of her examples are drawn from public authorities in liberal democracies that are already making use of algorithmic regulation. Focusing on the European Union, Smuha argues that the EU's digital agenda is misaligned with its aim to protect the rule of law. Novel and timely, this book should be read by anyone interested in the intersection of law, technology, and government. This title is also available as open access on Cambridge Core.
Introduction to Probability and Statistics for Data Science provides a solid course in the fundamental concepts, methods and theory of statistics for students in statistics, data science, biostatistics, engineering, and physical science programs. It teaches students to understand, use, and build on modern statistical techniques for complex problems. The authors develop the methods from both an intuitive and mathematical angle, illustrating with simple examples how and why the methods work. More complicated examples, many of which incorporate data and code in R, show how the method is used in practice. Through this guidance, students get the big picture about how statistics works and can be applied. This text covers modern topics such as regression trees, large-scale hypothesis testing, bootstrapping, MCMC, and time series, and places less emphasis on theoretical topics such as the Cramér–Rao lower bound and the Rao–Blackwell theorem. It features more than 250 high-quality figures, 180 of which involve actual data. Data and R code are available on our website so that students can reproduce the examples and do hands-on exercises.
The seminal Krajewski–Kotlarski–Lachlan theorem (1981) states that every countable recursively saturated model of $\mathsf {PA}$ (Peano arithmetic) carries a full satisfaction class. This result implies that the compositional theory of truth over $\mathsf {PA}$ commonly known as $\mathsf {CT}^{-}[\mathsf {PA}]$ is conservative over $\mathsf {PA}$. In contrast, Pakhomov and Enayat (2019) showed that the addition of the so-called axiom of disjunctive correctness (that asserts that a finite disjunction is true iff one of its disjuncts is true) to $\mathsf {CT}^{-}[\mathsf {PA}]$ axiomatizes the theory of truth $\mathsf {CT}_{0}[\mathsf {PA}]$ that was shown by Wcisło and Łełyk (2017) to be nonconservative over $\mathsf {PA}$. The main result of this paper (Theorem 3.12) provides a foil to the Pakhomov–Enayat theorem by constructing full satisfaction classes over arbitrary countable recursively saturated models of $\mathsf {PA}$ that satisfy arbitrarily large approximations of disjunctive correctness. This shows that in the Pakhomov–Enayat theorem the assumption of disjunctive correctness cannot be replaced with any of its approximations.
Soft robots have an advantage when performing tasks in complex environments because of their great flexibility and adaptability. However, soft robots undergo complex interactions and nonlinear deformation when in contact with soft and fluid materials. The underlying cause is free boundary interactions, that is, undetermined contact between soft materials, which for soft robot simulation involves nonlinear deformation in air and nonlinear interactions in fluid. We therefore propose a new approach based on the material point method (MPM), which can handle the free boundary interaction problem, to simulate soft robots in such environments. The proposed approach can autonomously predict the flexible and versatile behaviors of soft robots. We incorporate automatic differentiation into the MPM algorithm to simplify the computation and implement an efficient implicit time integration scheme. We perform two groups of experiments with an ordinary pneumatic soft finger under different free boundary interactions. The results indicate that soft robots with nonlinear interactions and deformation can be simulated, and that such environmental effects on soft robots can be reproduced.
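At the heart of MPM is the transfer of particle quantities to a background grid. The following numpy sketch shows a minimal particle-to-grid (P2G) mass and momentum transfer with quadratic B-spline weights; the paper's full method additionally uses automatic differentiation and implicit time integration, which are omitted here.

```python
import numpy as np

# Minimal sketch of the particle-to-grid (P2G) step at the core of MPM,
# using quadratic B-spline weights on a uniform 2-D grid.
n_grid, dx = 32, 1.0 / 32
inv_dx = 1.0 / dx
n_particles = 256
x = np.random.rand(n_particles, 2) * 0.4 + 0.3     # particle positions
v = np.zeros((n_particles, 2))                     # particle velocities
p_mass = 1.0

grid_m = np.zeros((n_grid, n_grid))                # grid mass
grid_mv = np.zeros((n_grid, n_grid, 2))            # grid momentum

for p in range(n_particles):
    base = (x[p] * inv_dx - 0.5).astype(int)       # lower-left node of 3x3 stencil
    fx = x[p] * inv_dx - base                      # fractional offset in [0.5, 1.5)
    # quadratic B-spline weights per dimension
    w = [0.5 * (1.5 - fx) ** 2,
         0.75 - (fx - 1.0) ** 2,
         0.5 * (fx - 0.5) ** 2]
    for i in range(3):
        for j in range(3):
            weight = w[i][0] * w[j][1]
            node = (base[0] + i, base[1] + j)
            grid_m[node] += weight * p_mass
            grid_mv[node] += weight * p_mass * v[p]

# Grid velocities are momentum / mass on nodes with non-zero mass.
print("active grid nodes:", (grid_m > 0).sum())
```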
With the widespread application of proton exchange membrane fuel cells (PEMFCs), ensuring their safe and reliable operation is becoming increasingly important. Timely diagnosis of fault types and the implementation of targeted interventions are crucial for addressing these challenges. In this study, a simulated PEMFC model is first built using Fluent, and its effectiveness is validated through experiments involving membrane dry faults, water flooding faults, normal states, and unknown states. Then, a data-driven deep learning convolutional neural network, YOLOv5-CG-AS, is developed, which employs the EfficientViT network as the backbone and incorporates lightweight improvements through the proposed CG-AS attention layer. The results demonstrate that YOLOv5-CG-AS can automatically extract fault features from images for offline fault diagnosis and can perform real-time online diagnosis on multiple parameter curves of PEMFCs. Moreover, the experimental results validate the feasibility and effectiveness of the proposed method and show that the mean Average Precision (mAP) of the trained model reaches 99.50%, superior to other conventional strategies. This has significant implications for advancing fault diagnosis methods, enhancing the reliability and durability of PEMFC systems, and promoting further development in the field.
The performance and confidence in fault detection and diagnostic systems can be undermined by data pipelines that feature multiple compounding sources of uncertainty. These issues further inhibit the deployment of data-based analytics in industry, where variable data quality and lack of confidence in model outputs are already barriers to their adoption. The methodology proposed in this paper supports trustworthy data pipeline design and transfers knowledge gained from one fully observed data pipeline to a similar, under-observed case. The transfer of uncertainties provides insight into uncertainty drivers without repeating the computational or cost overhead of fully redesigning the pipeline. A SHAP-based human-readable explainable AI (XAI) framework was used to rank and explain the impact of each choice in a data pipeline, allowing the decoupling of positive and negative performance drivers to facilitate the successful selection of highly performing pipelines. This empirical approach is demonstrated in bearing fault classification case studies using well-understood open-source data.
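As an illustration of the SHAP-based ranking step, the following Python sketch scores hypothetical pipeline design choices against a synthetic performance metric and ranks them by mean absolute SHAP value; the feature names, model, and data are placeholders, not the paper's case-study pipelines.

```python
# Illustrative sketch (not the paper's code): rank hypothetical pipeline design
# choices by mean absolute SHAP value against a pipeline-performance score.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row encodes one candidate pipeline; the design choices are assumed.
X = pd.DataFrame({
    "window_length": rng.integers(64, 4096, 500),
    "overlap":       rng.uniform(0.0, 0.9, 500),
    "n_features":    rng.integers(4, 64, 500),
    "denoise":       rng.integers(0, 2, 500),
})
# Synthetic "pipeline performance" standing in for fault-classification accuracy.
y = (0.6 * X["denoise"]
     + 0.2 * np.log(X["window_length"]) / np.log(4096)
     + 0.1 * rng.normal(size=500))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))   # most influential design choices first
```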
In our digitalized modern society where cyber-physical systems and internet-of-things (IoT) devices are increasingly commonplace, it is paramount that we are able to assure the cybersecurity of the systems that we rely on. As a fundamental policy, we join the advocates of multilayered cybersecurity measures, where resilience is built into IoT systems by relying on multiple defensive techniques. While existing legislation such as the General Data Protection Regulation (GDPR) also takes this stance, the technical implementation of these measures is left open. This invites research into the landscape of multilayered defensive measures, and within this problem space, we focus on two defensive measures: obfuscation and diversification. In this study, through a literature review, we situate these measures within the broader IoT cybersecurity landscape and show how they operate with other security measures built on the network and within IoT devices themselves. Our findings highlight that obfuscation and diversification show promise in contributing to a cost-effective robust cybersecurity ecosystem in today’s diverse cyber threat landscape.
Gas furnaces are the prevalent heating systems in Europe, but efforts to decarbonize the energy sector advocate for their replacement with heat pumps. However, this transition poses challenges for power grids due to increased electricity consumption. Estimating this consumption relies on the seasonal performance factor (SPF) of heat pumps, a metric that is complex to model and hard to measure accurately. We propose using an unpaired dataset of smart meter data at the building level to model the heat consumption and the SPF. We compare the distributions of the annual gas and heat pump electricity consumption by applying either the Jensen–Shannon Divergence or the Kolmogorov–Smirnov test. Through evaluation of a real-world dataset, we prove the ability of the methodology to predict the electricity consumption of future heat pumps replacing existing gas furnaces with a focus on single- and two-family buildings. Our results indicate anticipated SPFs ranging between 2.8 and 3.4, based on the Kolmogorov–Smirnov test. However, it is essential to note that the analysis reveals challenges associated with interpreting results when there are single-sided shifts in the input data, such as those induced by external factors like the European gas crisis in 2022. In summary, this extended version of a conference paper shows the viability of utilizing smart meter data to model heat consumption and seasonal performance factor for future retrofitted heat pumps.
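The distribution comparison can be sketched with standard scipy tools: the snippet below generates synthetic annual consumption values, converts gas use to heat demand with an assumed furnace efficiency, and searches for the SPF scaling that minimizes the two-sample Kolmogorov–Smirnov statistic. All numbers are illustrative placeholders, not the paper's dataset, and the same scan could be run with the Jensen–Shannon divergence instead.

```python
# Illustrative sketch with synthetic data: estimate a plausible SPF by finding
# the scaling that best aligns the distribution of heat-pump electricity use
# with the heat demand implied by annual gas consumption.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
FURNACE_EFFICIENCY = 0.9                                   # assumption
gas_kwh = rng.lognormal(mean=9.6, sigma=0.3, size=2000)    # annual gas per building
heat_kwh = gas_kwh * FURNACE_EFFICIENCY                    # implied heat demand

true_spf = 3.1                                             # only for the synthetic data
hp_elec_kwh = rng.lognormal(mean=9.6, sigma=0.3, size=1500) * FURNACE_EFFICIENCY / true_spf

candidates = np.arange(2.0, 4.5, 0.05)
ks_stats = [ks_2samp(heat_kwh, hp_elec_kwh * spf).statistic for spf in candidates]
best = candidates[int(np.argmin(ks_stats))]
print(f"estimated SPF ~ {best:.2f}")                       # close to 3.1 for this toy data
```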
In this paper we consider positional games where the winning sets are edge sets of tree-universal graphs. Specifically, we show that in the unbiased Maker-Breaker game on the edges of the complete graph $K_n$, Maker has a strategy to claim a graph which contains copies of all spanning trees with maximum degree at most $cn/\log (n)$, for a suitable constant $c$ and sufficiently large $n$. We also prove an analogous result for Waiter-Client games. Both of our results show that the building player can play at least as well as suggested by the random graph intuition. Moreover, they improve on a special case of earlier results by Johannsen, Krivelevich, and Samotij as well as Han and Yang for Maker-Breaker games.
We present a practical verification method for safety analysis of the autonomous driving system (ADS). The main idea is to build a surrogate model that quantitatively depicts the behavior of an ADS in the specified traffic scenario. The safety properties proved in the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Given the complexity of a traffic scenario in autonomous driving, our approach further partitions the parameter space of a traffic scenario for the ADS into safe sub-spaces with varying levels of guarantees and unsafe sub-spaces with confirmed counter-examples. Innovatively, the partitioning is based on a branching algorithm that features explainable AI methods. We demonstrate the utility of the proposed approach by evaluating safety properties on the state-of-the-art ADS Interfuser, with a variety of simulated traffic scenarios, and we show that our approach and existing ADS testing work complement each other. We certify five safe scenarios from the verification results and uncover three subtle behavioral discrepancies in Interfuser that can hardly be detected by safety testing approaches.
Transfer learning has been highlighted as a promising framework to increase the accuracy of the data-driven model in the case of data sparsity, specifically by transferring pretrained knowledge to the training of the target model. The objective of this study is to evaluate whether the number of requisite training samples can be reduced with the use of various transfer learning models for predicting, for example, the chemical source terms of the data-driven reduced-order modeling (ROM) that represents the homogeneous ignition of a hydrogen/air mixture. Principal component analysis is applied to reduce the dimensionality of the hydrogen/air mixture in composition space. Artificial neural networks (ANNs) are used to regress the reaction rates of principal components, and subsequently, a system of ordinary differential equations is solved. As the number of training samples decreases in the target task, the ROM fails to predict the ignition evolution of a hydrogen/air mixture. Three transfer learning strategies are then applied to the training of the ANN model with a sparse dataset. The performance of the ROM with a sparse dataset is remarkably enhanced if the training of the ANN model is restricted by a regularization term that controls the degree of knowledge transfer from source to target tasks. To this end, a novel transfer learning method is introduced, Parameter control via Partial Initialization and Regularization (PaPIR), whereby the amount of knowledge transferred is systematically adjusted in terms of the initialization and regularization schemes of the ANN model in the target task.
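The following PyTorch sketch shows the general idea of transfer with a regularization term that penalizes the target-task network for drifting away from source-task weights; it illustrates controlling transfer through initialization and regularization, but it is not the paper's exact PaPIR formulation, and all sizes and data are placeholders.

```python
# Minimal sketch of regularized knowledge transfer: the target-task ANN is
# initialized from, and penalized for drifting away from, a source-task model.
import copy
import torch
import torch.nn as nn

def make_mlp(n_in=4, n_out=4, width=32):
    return nn.Sequential(nn.Linear(n_in, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, n_out))

source_model = make_mlp()                       # assume this was trained on the source task
target_model = copy.deepcopy(source_model)      # initialization from the source model
source_params = [p.detach().clone() for p in source_model.parameters()]

lam = 1e-3                                      # controls the degree of knowledge transfer
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One sparse target-task batch (synthetic placeholders for PC reaction rates).
x = torch.randn(64, 4)
y = torch.randn(64, 4)

for _ in range(200):
    optimizer.zero_grad()
    pred = target_model(x)
    # Penalize deviation of target weights from the source weights.
    penalty = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(target_model.parameters(), source_params))
    loss = loss_fn(pred, y) + lam * penalty
    loss.backward()
    optimizer.step()
```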