In an era of globalized research endeavors, the interplay between government funding programs, funding decisions, and their influence on successful research collaborations and grant application success rates has emerged as a critical focus of inquiry. This study embarks on an in-depth analysis of cross-country funding dynamics over the past three decades, with a specific emphasis on support for academic-industry collaboration versus sole academic or industry funding. Drawing insights from comprehensive datasets and policy trends, our research illuminates the evolving landscape of research funding and collaboration policies. We examine funding by Innosuisse (Swiss Innovation Project Funding) and SBIR (US Small Business Innovation Research), exploring the rates of future grant success for both academic and industry partners. We find strong evidence of a rich-get-richer phenomenon in the Innosuisse program for both academic and industry partners in terms of winning future grants. For SBIR, we find weaker continuity of funding to the same partners, with most recipients obtaining at most a few grants. Given the increasing prevalence of academic-industry collaborations in both programs, it is worth considering additional efforts to ensure that novel ideas and new individuals and teams are supported.
This article interrogates three claims made about the use of data in relation to peace: that more data, faster data, and impartial data will lead to better policy and practice outcomes. Taken together, this data myth relies on a lack of curiosity about the provenance of data and the infrastructure that produces it and asserts its legitimacy. Our discussion is concerned with issues of power, inclusion, and exclusion, and particularly with how knowledge hierarchies shape the collection and use of data in conflict-affected contexts. We therefore question the axiomatic nature of these data myth claims and argue that the structure and dynamics of peacebuilding actors perpetuate the myth. We advocate a fuller reflection on the data wave that has overtaken us and echo calls for an ethics of numbers. In other words, this article is concerned with the evidence base for evidence-based peacebuilding. Mindful of the policy implications of our concerns, the article puts forward five tenets of good practice in relation to data and the peacebuilding sector. The concluding discussion further considers the policy implications of the data myth in relation to peace, and particularly the consequences of casting peace and conflict as technical issues that can be “solved” without recourse to human and political factors.
This analysis provides a critical account of AI governance in the modern “smart city” through a feminist lens. Evaluating the case of Sidewalk Labs’ Quayside project—a smart city development that was to be implemented in Toronto, Canada—it is argued that public–private partnerships can create harmful impacts when corporate actors seek to establish new “rules of the game” regarding data regulation. While the Quayside project was eventually abandoned in 2020, it offers key lessons for the state of urban algorithmic governance both within Canada and internationally. The analysis articulates the need for a revitalised and participatory smart city governance programme that prioritizes meaningful engagement in the form of transparency and accountability measures. Taking a feminist lens, it argues for a two-pronged approach to governance: integrating collective engagement from the outset of the design process and ensuring civilian data protection through a robust yet localized rights-based privacy regulation strategy. Engaging with feminist theories of intersectionality in relation to technology and data collection, this framework articulates the need to understand the broader histories of social marginalization when implementing governance strategies regarding artificial intelligence in cities.
Artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) are machine learning techniques that enable modeling and prediction of various properties in the milling process of alloy 2017A, including quality, cost, and energy consumption (QCE). To utilize ANNs or ANFIS for QCE prediction, researchers must gather a dataset of input–output pairs that establish the relationship between QCE and various input variables such as machining parameters, tool properties, and material characteristics. This dataset can then be employed to train a machine learning model using techniques such as backpropagation or gradient descent. Once the model has been trained, predictions can be made on new input data by providing the desired input variables, yielding predicted QCE values as output. This study comprehensively examines and identifies the contributions of strategies, machining sequences, and cutting parameters to surface quality, machining cost, and energy consumption using artificial intelligence (ANN and ANFIS). The findings indicate that the optimal neural architecture for ANNs, utilizing the Bayesian regularization (BR) algorithm, is a {3-10-3} architecture with an overall mean square error (MSE) of 2.74 × 10⁻³. Similarly, for ANFIS, the optimal structure yielding better error and correlation for the three output variables (Etot, Ctot, and Ra) is a {2, 2, 2} structure. The results demonstrate that using the BR algorithm with a multi-criteria output response yields favorable outcomes compared to ANFIS.
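The workflow described above (gather input–output pairs, fit a network, predict QCE for new inputs) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset or its Bayesian-regularization trainer; scikit-learn's L2 penalty (`alpha`) stands in for BR, while the {3-10-3} architecture mirrors the one reported.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical inputs: 3 machining parameters (e.g., cutting speed, feed, depth of cut)
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Hypothetical outputs: 3 responses standing in for (Etot, Ctot, Ra)
Y = np.column_stack([
    X @ np.array([0.5, 0.3, 0.2]),
    X @ np.array([0.2, 0.6, 0.2]),
    X @ np.array([0.1, 0.2, 0.7]),
]) + 0.01 * rng.standard_normal((200, 3))

# {3-10-3} architecture: 3 inputs, one hidden layer of 10 units, 3 outputs.
# The L2 weight penalty (alpha) is a simple stand-in for Bayesian regularization.
model = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-3,
                     max_iter=5000, random_state=0)
model.fit(X, Y)
mse = mean_squared_error(Y, model.predict(X))
print(f"training MSE: {mse:.4f}")
```

In practice the dataset would come from instrumented milling experiments, and the reported MSE would be measured on held-out data rather than the training set.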
The international community, and the UN in particular, is in urgent need of wise policies and a regulatory institution to put data-based systems, notably AI, to positive use and guard against their abuse. Digital transformation and “artificial intelligence (AI)”—which can more adequately be called “data-based systems (DS)”—present ethical opportunities and risks. Helping humans and the planet to flourish sustainably in peace, and guaranteeing globally that human dignity is respected not only offline but also online, in the digital sphere and the domain of DS, requires two policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA). The IDA should be established at the UN as a platform for cooperation in the field of digital transformation and DS, fostering human rights, security, and peaceful uses of DS.
Anticipating future migration trends is instrumental to the development of effective policies to manage the challenges and opportunities that arise from population movements. However, anticipation is challenging. Migration is a complex system, with multifaceted drivers, such as demographic structure, economic disparities, political instability, and climate change. Measurements encompass inherent uncertainties, and the majority of migration theories are either under-specified or hardly actionable. Moreover, approaches for forecasting generally target specific migration flows, and this poses challenges for generalisation.
In this paper, we present the results of a case study predicting Irregular Border Crossings (IBCs) through the Central Mediterranean Route and Asylum requests in Italy. We applied a set of Machine Learning techniques in combination with a suite of traditional data sources to forecast migration flows. We then applied an ensemble modelling approach that aggregates the results of the different Machine Learning models to improve predictive capacity.
Our results show the potential of this modelling architecture for producing forecasts of IBCs and Asylum requests over a 6-month horizon. The explained variance of our models on a validation set is as high as 80%. This study offers a robust basis for the construction of timely forecasts. In the discussion, we comment on how this approach could benefit migration management in the European Union at various levels of policy making.
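The ensemble step described above (fit several models, then aggregate their predictions and score explained variance on a validation set) can be sketched as follows. The data, features, and model choices here are synthetic stand-ins, not the paper's; averaging predictions is one simple aggregation strategy among several.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import explained_variance_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for monthly indicators predicting a migration flow.
X, y = make_regression(n_samples=300, n_features=8, n_informative=8,
                       noise=10.0, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a heterogeneous set of models, then average their predictions.
models = [Ridge(alpha=1.0),
          RandomForestRegressor(n_estimators=100, random_state=0),
          GradientBoostingRegressor(random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

ensemble_pred = np.mean([m.predict(X_va) for m in models], axis=0)
print(f"explained variance: {explained_variance_score(y_va, ensemble_pred):.2f}")
```

Weighted averaging or stacking (training a meta-model on the base predictions) are common refinements of this plain mean.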
Public procurement is a fundamental aspect of public administration. Its vast size makes its oversight and control very challenging, especially in countries where resources for these activities are limited. To support decisions and operations at public procurement oversight agencies, we developed and delivered VigIA, a data-based tool with two main components: (i) machine learning models to detect inefficiencies measured as cost overruns and delivery delays, and (ii) risk indices to detect irregularities in the procurement process. These two components cover complementary aspects of the procurement process, considering both active and passive waste, and help oversight agencies prioritize investigations and allocate resources. We show how the models developed shed light on the specific contract features to be considered and how their values signal red flags. We also highlight how these values change when the analysis focuses on specific contract types or on information available for early detection. Moreover, the models and indices developed only make use of open data and target variables generated by the procurement processes themselves, making them ideal for supporting continuous decisions at oversight agencies.
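The first component described above (a model that scores contracts for cost-overrun risk so investigations can be prioritized) can be sketched as follows. All features, the label rule, and the data are hypothetical illustrations, not VigIA's actual variables or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical open-data contract features:
# value, planned duration (days), number of bidders, number of amendments.
X = np.column_stack([
    rng.lognormal(10, 1, n),
    rng.integers(30, 720, n),
    rng.integers(1, 10, n),
    rng.integers(0, 5, n),
])
# Target generated by the process itself: 1 = cost overrun occurred.
# (Synthetic rule: few bidders or many amendments raise overrun risk.)
y = ((X[:, 2] <= 2) | (X[:, 3] >= 3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Predicted probabilities rank contracts for investigation priority.
risk = clf.predict_proba(X_te)[:, 1]
print(f"AUC: {roc_auc_score(y_te, risk):.2f}")
```

In a real deployment the label would come from observed overruns in historical procurement records, and feature importances would point to the contract features that signal red flags.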
We propose a physics-constrained convolutional neural network (PC-CNN) to solve two types of inverse problems in partial differential equations (PDEs), which are nonlinear and vary both in space and time. In the first inverse problem, we are given data that is offset by spatially varying systematic error (i.e., the bias, also known as the epistemic uncertainty). The task is to uncover the true state, which is the solution of the PDE, from the biased data. In the second inverse problem, we are given sparse information on the solution of a PDE. The task is to reconstruct the solution in space with high resolution. First, we present the PC-CNN, which constrains the PDE with a time-windowing scheme to handle sequential data. Second, we analyze the performance of the PC-CNN to uncover solutions from biased data. We analyze both linear and nonlinear convection-diffusion equations, and the Navier–Stokes equations, which govern the spatiotemporally chaotic dynamics of turbulent flows. We find that the PC-CNN correctly recovers the true solution for a variety of biases, which are parameterized as non-convex functions. Third, we analyze the performance of the PC-CNN for reconstructing solutions from sparse information for the turbulent flow. We reconstruct the spatiotemporal chaotic solution on a high-resolution grid from only 1% of the information contained in it. For both tasks, we further analyze the Navier–Stokes solutions. We find that the inferred solutions have a physical spectral energy content, whereas traditional methods, such as interpolation, do not. This work opens opportunities for solving inverse problems with partial differential equations.
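The core idea of a physics-constrained loss can be sketched for the 1D linear convection-diffusion equation u_t + c u_x = ν u_xx: a data-misfit term plus a penalty on the finite-difference PDE residual. This is an illustrative sketch only, not the authors' PC-CNN or its time-windowing scheme, and the grid, bias function, and coefficients are assumptions.

```python
import numpy as np

def pde_residual(u, dx, dt, c=1.0, nu=0.01):
    """Finite-difference residual of u_t + c*u_x - nu*u_xx on a (time, space) grid."""
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt               # forward difference in time
    u_x = (u[:-1, 2:] - u[:-1, :-2]) / (2 * dx)           # central difference in space
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    return u_t + c * u_x - nu * u_xx

def pc_loss(u_pred, u_data, dx, dt, weight=1.0):
    """Data misfit plus weighted physics-residual penalty."""
    data_term = np.mean((u_pred - u_data) ** 2)
    phys_term = np.mean(pde_residual(u_pred, dx, dt) ** 2)
    return data_term + weight * phys_term

# Toy grid: a smooth reference field vs. a copy offset by a
# spatially varying systematic error (the "bias").
nt, nx = 20, 64
dx, dt = 1.0 / nx, 1e-3
x = np.linspace(0, 1, nx)
t = np.arange(nt)[:, None] * dt
u_true = np.sin(2 * np.pi * (x - t))          # broadcasts to shape (nt, nx)
u_biased = u_true + 0.1 * np.sin(np.pi * x)   # spatially varying bias
print(f"loss(true field vs biased data): {pc_loss(u_true, u_biased, dx, dt):.4f}")
```

In the paper's setting the prediction comes from a CNN and the loss is minimized by gradient descent; the physics term is what lets the network reject the bias that the data term alone would fit.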
Urban communities rely on built utility infrastructures as critical lifelines that provide essential services such as water, gas, and power to sustain modern socioeconomic systems. These infrastructures consist of underground and surface-level assets that are operated and geo-distributed over large regions where continuous monitoring for anomalies is required but challenging to implement. This article addresses the problem of deploying heterogeneous Internet of Things sensors in these networks to support future decision-support tasks, for example, anomaly detection, source identification, and mitigation. We use stormwater as a driving use case; these systems are responsible for drainage and flood control but act as conduits that can carry contaminants to the receiving waters. Challenges to effective monitoring include the transient and random nature of pollution incidents, the scarcity of historical data, the complexity of the system, and technological limitations for real-time monitoring. We design a SemanTics-aware sEnsor Placement framework (STEP) to capture pollution incidents using structural, behavioral, and semantic aspects of the infrastructure. We leverage historical data to inform our system with new, credible instances of potential anomalies. Several key topological and empirical network properties are used to propose candidate deployments that optimize the balance between multiple objectives. We also explore the quality of anomaly representation in the network through new perspectives and provide techniques to enhance the realism of the anomalies considered. We evaluate STEP on six real-world stormwater networks in Southern California, USA, demonstrating its efficacy in monitoring areas of interest compared with baseline methods.
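One ingredient of proposing candidate deployments, coverage over the network topology, can be illustrated with a greedy placement sketch. The toy network and the single coverage objective are hypothetical simplifications of STEP's multi-objective optimization over structural, behavioral, and semantic properties.

```python
def greedy_sensor_placement(adj, k):
    """Greedy coverage: repeatedly pick the node that covers the most
    not-yet-covered nodes (itself plus its immediate neighbors)."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max((n for n in adj if n not in chosen),
                   key=lambda n: len(({n} | set(adj[n])) - covered))
        chosen.append(best)
        covered |= {best} | set(adj[best])
    return chosen, covered

# Hypothetical toy stormwater network: junction -> connected junctions.
adj = {
    "outfall": ["j1", "j2"],
    "j1": ["outfall", "a", "b"],
    "j2": ["outfall", "c", "d"],
    "a": ["j1"], "b": ["j1"], "c": ["j2"], "d": ["j2"],
}
sensors, covered = greedy_sensor_placement(adj, 2)
print(f"sensors at {sensors}, covering {len(covered)}/{len(adj)} junctions")
```

Greedy maximization of a coverage-style objective is a standard baseline for sensor placement; a multi-objective framework like STEP would additionally weigh incident likelihood and semantic importance of locations.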
Many documents are produced over the years of managing assets, particularly those with long lifespans. However, during this time, the assets may deviate from their original as-designed or as-built state. This presents a significant challenge for tasks that occur in later life phases but require precise knowledge of the asset, such as retrofit, where the assets are equipped with new components. For a third party who is neither the original manufacturer nor the operator, obtaining a comprehensive understanding of the asset can be a tedious process, as it requires going through all available but often fragmented information and documents. While common knowledge regarding the domain or general type of asset can be helpful, it is often based on the experience of engineers and is, therefore, only implicitly available. This article presents a graph-based information management system that complements traditional product lifecycle management (PLM) systems and helps connect fragmented information by utilizing generic information about assets. To achieve this, techniques from systems engineering and data science are used. The overarching management platform also includes geometric analyses and operations that can be performed with geometric and product information extracted from STEP files. The management approach is first described generically and then applied to cabin retrofit in aviation. A mock-up of an Airbus A320 serves as the case study to demonstrate how the platform can benefit the retrofit of such long-living assets.
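The idea of connecting fragmented documents to asset components through a graph can be sketched as follows. All node names, attributes, and the flat component hierarchy are illustrative assumptions, not the platform's actual data model.

```python
# Minimal asset-information graph: nodes are components and documents;
# document nodes record which components they describe.
graph = {
    "cabin": {"type": "component", "part_of": None},
    "seat_row_12": {"type": "component", "part_of": "cabin"},
    "galley": {"type": "component", "part_of": "cabin"},
    "doc_as_built.step": {"type": "document", "describes": ["cabin"]},
    "doc_seat_spec.pdf": {"type": "document", "describes": ["seat_row_12"]},
    "doc_retrofit_plan.pdf": {"type": "document",
                              "describes": ["seat_row_12", "galley"]},
}

def documents_for(graph, component):
    """Collect all documents describing a component or its direct subcomponents."""
    parts = {component} | {n for n, d in graph.items()
                           if d.get("part_of") == component}
    return sorted(n for n, d in graph.items()
                  if d["type"] == "document" and parts & set(d.get("describes", [])))

print(documents_for(graph, "cabin"))
```

A retrofit engineer querying a component would thus retrieve every fragment relevant to it in one traversal, rather than searching each document store separately; a production system would use a graph database and a recursive hierarchy rather than this flat dictionary.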
Learn with confidence with this hands-on undergraduate textbook for CS2 courses. Active-learning exercises and real-world projects underpin each chapter; the book briefly reviews programming fundamentals before progressing to core data structures and algorithms topics including recursion, lists, stacks, trees, graphs, sorting, and complexity analysis. Creative projects and applications put theoretical concepts into practice, helping students master the fundamentals. Dedicated project chapters supply further programming practice using real-world, interdisciplinary problems which students can showcase in their own online portfolios. Example Interview Questions sections prepare students for job applications. The pedagogy supports self-directed and skills-based learning with over 250 'Try It Yourself' boxes, many with solutions provided, and over 500 progressively challenging end-of-chapter questions. Written in a clear and engaging style, this textbook is a complete resource for teaching the fundamental skills that today's students need. Instructor resources are available online, including a test bank, solutions manual, and sample code.
Legal design is a rapidly growing field that seeks to improve the legal system's accessibility, usability, and effectiveness through human-centered design methods and principles. This book provides a comprehensive introduction to legal design, covering fundamental concepts, definitions, and theories. Chapters explore the role of legal design in promoting dignity, equity, and justice in the legal system. Contributors present a range of community-driven projects and method-focused case studies that demonstrate the potential of legal design to transform how people experience the law. This book is an essential resource for anyone interested in the future of law and the intersection of design and justice.
To the extent that internet cures are not online health misinformation, they resist the logic of intervention as problem-solving precisely because they provide resolution to problems that otherwise remain indeterminate: digital inscription of miracle cures as record-making and record-keeping, transnational networked sociality that emerges out of the increasingly datafied environment of executable text, the reconfiguration of downtime into connectedness and belonging, and the creation of an alternative miraculous space for therapy as a playful activity. The crowdsourcing of miracle cures happened organically via social media as an intermediary for matching community needs with community capacity. That the longevity of these online groups enables post hoc creation of datasets that can be explored computationally, and that the dynamic knowledge-making processes unfolding in these groups become fully open to view thanks to platform affordances, are secondary to the pre-digital social dynamics that drove these practices forward. These secondary utilities, however, came to solidify and legitimize these practices in an ecology of datafied behaviour; in this process, they also transformed expectations around what it means to engage with miracle cures. If seeking herbal cures for cancer, for example, used to mean coming to a lương y (‘doctor of good conscience’) for advice, or to a herbal store to purchase thuốc gia truyền (‘family transmission’) recipes, and coming away with instructions based on socially sanctioned expertise, increasingly people are taking to social media to work out the details of these bodies of knowledge, both in response to emergent health concerns and to enact the work of care.
That it became acceptable and even desirable to carry out this kind of work in such a digital context is a by-product of the historical continuation of practices that never quite ceased to exist in the first place – and also of emerging forms of sociality as compositions of meaning via digital platform affordances.
Miracle cures proliferate at the digital edge in ways that are very important to their survival: in languages that skirt the technical capabilities and political will of regimes of automated platform content moderation, as esoteric discourse that defies easy categorization, in formats that are prioritized for the imperative of platform profit, and at a temporality constantly recalibrated to accommodate self-time (Eigenzeit).
What kind of time do we experience when we wait? When we wait for someone to turn up to a video call, time feels like an interlude; when we wait for our Facebook News Feed to load, time feels like a clot in our throat as the buffering icon keeps on spinning. In waiting, we realize that there is a multitude of temporalities: time can be standardized so people can be in the same place at the same time, but time also dwells inside each of us – in the consciousness of our finite lifetime and in the rhythms of our body. Technology has been said to accelerate time and contract duration (Hassan, 2011; Wajcman, 2015); what, then, of waiting with technology? Technology has not made waiting redundant, but it seems to have transformed waiting substantially. When we reach for our phone as we wait, with or without a direction, waiting is given shape outside of our own body. When we wait on or with our phone, however, waiting re-emerges viscerally within.
Liveness captures some of these entangled dynamics of waiting in the presence of technology. Certain temporal arrangements must be made for liveness to be enacted: someone to ‘go live’, someone to ‘watch live’, something to be happening ‘live’, some technologies to faithfully carry out ‘the live’. Radio is thought of as a live medium (Vianello, 1985); broadcast television is live insofar as it competes with new viewing platforms and business models such as Netflix and Hulu (van Es, 2017); ‘digital liveness’ has been understood as ‘our conscious act of grasping virtual entities as live in response to the claims they make on us’ (Auslander, 2012, p. 10). The temporality of liveness is multiple and contingent on its medium; the ‘paradox of liveness’ lies in its apparent constructedness and its seeming claim to provide direct access to the event relayed (van Es, 2017). There is a similar paradox to the temporality of waiting: waiting is an enactment of particular bodily and extra-bodily temporalities, but waiting is also time temporalizing through the body.
In the sections that follow, I explore what it means when people engage in liveness as a way to overcome waiting – to recalibrate downtime. I begin by outlining the paradox of liveness as algorithmic and liveness as lived by reviewing current disparate literatures.
We are entering into a brave new world of quantum technologies. This quantum impetus on society will also cause a system reformation in the legal arena. Here, we dissect and analyze what this quantum leap of the legal sphere contains. Furthermore, we present a pragmatic road map to find a path through an uncharted legal design landscape toward a prosperous quantum future.
This chapter focuses on visualising the law, in the form of comics, as a specific way to understand the realm of legal design. Focusing on the case study of Lawtoons, we detail the existing definitional inconsistencies of legal design and advocate for clarity in appreciating the purview of this emerging discipline. The legal design community must have, at its very core, the ability to visualise law to make law available at scale. We also briefly lay the conceptual foundations of visualisation in law and argue that graphics and storytelling are an important way to promote dignity in legal awareness and education.
This book is an introduction to the new field of legal design and a primer on both the application and theory of legal design that has developed so far in a decade of exploration and experimentation. We have assembled case studies of pioneering efforts from around the world, collected examples of methods and perspectives just now coming into focus, and offered a handful of prescriptions for the future. Bookending those three subject areas are both individual and collective articulations of the editors’ frames of reference and influence in our work together—dignity, law, and radical imagination. Our collective frame for this volume is relentlessly optimistic. We believe that the new field of legal design provides a promising intervention for challenging the harmful systems, structures, methodologies, and outcomes that currently define legal systems, and for designing systems that actually embody and effectuate the full promise of the rule of law – a just, peaceful, and equitable world for everyone.