Design provides innovative solutions to problems in the medical field. Collaboration between design and medicine can be fostered in several ways; however, educational programs linking these two academic fields are limited, and their frameworks and effectiveness are unknown. Hence, we launched an educational project to address medical problems through design. The framework and creative outcomes are based on the results of two consecutive one-year programs. The research subjects were 35 participants from three departments. The majority (22/35, 63%) were master’s and doctoral students in design. Eight participants were doctoral students and researchers who volunteered from the surgery, oral surgery, neurology and nursing departments at the Graduate School of Medicine and Hospital. The impact of the program on creativity was evaluated by the quality of ideas and the participants’ assessments. In total, 424 problems were identified and 387 ideas were created. Nine prototypes with mock-ups and functional models of products, games or service designs were created and positively evaluated for novelty, workability and relevance. Participants benefitted from the collaboration and gained new perspectives. Career expectations increased after the class, whereas motivation and skills remained high. A framework for a continuing educational program was suggested.
Displacement continues to increase at a global scale and is increasingly happening in complex, multicrisis settings, leading to deeper and more complex humanitarian needs. Humanitarian needs are therefore increasingly outgrowing the available humanitarian funding. Thus, responding to vulnerabilities before disaster strikes is crucial, but anticipatory action is contingent on the ability to accurately forecast what will happen in the future. Forecasting and contingency planning are not new in the humanitarian sector, where scenario-building continues to be an exercise conducted in most humanitarian operations to strategically plan for coming events. However, the accuracy of these exercises remains limited. To address this challenge, and with the objective of providing the humanitarian sector with more accurate forecasts to enhance the protection of vulnerable groups, the Danish Refugee Council has already developed several machine learning models. The Anticipatory Humanitarian Action for Displacement model uses machine learning to forecast displacement in subdistricts in the Liptako-Gourma region of the Sahel, covering Burkina Faso, Mali, and Niger. The model is mainly built on data related to conflict, food insecurity, vegetation health, and the prevalence of underweight to forecast displacement. In this article, we detail how the model works, its accuracy and limitations, and how we are translating the forecasts into action by using them for anticipatory action in South Sudan and Burkina Faso, including concrete examples of activities that can be implemented ahead of displacement in the place of origin, along routes, and in the place of destination.
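The abstract above describes a model built mainly on conflict, food insecurity, vegetation health, and underweight-prevalence indicators. As a rough illustration of such a tabular forecasting setup (not the Danish Refugee Council's actual pipeline; the feature names, synthetic data, and gradient-boosting choice are assumptions), a minimal sketch might look like this:

```python
# Minimal sketch of a tabular displacement-forecasting setup.
# Feature names, data, and model choice are illustrative assumptions,
# not the Danish Refugee Council's actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500  # synthetic subdistrict-month observations
X = pd.DataFrame({
    "conflict_events": rng.poisson(3, n),            # e.g. event counts
    "food_insecurity_phase": rng.integers(1, 6, n),  # e.g. phase classification
    "vegetation_health_index": rng.uniform(0, 1, n),
    "underweight_prevalence": rng.uniform(0, 0.4, n),
})
# Synthetic target: displaced persons in the next period.
y = (200 * X["conflict_events"]
     + 150 * X["food_insecurity_phase"]
     - 300 * X["vegetation_health_index"]
     + rng.normal(0, 100, n)).clip(lower=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

In practice, data sources, feature engineering, and evaluation against held-out subdistricts would differ substantially from this toy.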
The pervasive use of media at current-day festivals thoroughly impacts how these live events are experienced, anticipated, and remembered. This empirical study examined eventgoers’ live media practices – taking photos, making videos, and in-the-moment sharing of content on social media platforms – at three large cultural events in the Netherlands. Taking a practice approach (Ahva 2017; Couldry 2004), the author studied online and offline event environments through extensive ethnographic fieldwork: online and offline observations, and interviews with 379 eventgoers. Analysis of this research material shows that through their live media practices eventgoers are continuously involved in mediated memory work (Lohmeier and Pentzold 2014; Van Dijck 2007), a form of live storytelling that revolves around how they want to remember the event. The article focuses on the impact of mediated memory work on the live experience in the present. It distinguishes two types of mediatised experience of live events: live as future memory and the experiential live. The author argues that memory is increasingly incorporated into the live experience in the present, so much so that, for many eventgoers, mediated memory-making is crucial to having a full live event experience. The article shows how empirical research in media studies can shed new light on key questions within memory studies.
In this paper, we explore the crucial role and challenges of computational reproducibility in geosciences, drawing insights from the Climate Informatics Reproducibility Challenge (CICR) in 2023. The competition aimed at (1) identifying common hurdles to reproducing computational climate science and (2) creating interactive reproducible publications for selected papers of the Environmental Data Science journal. Based on lessons learned from the challenge, we emphasize the significance of open research practices, mentorship, and transparency guidelines, as well as the use of technologies such as executable research objects for reproducing published geoscientific research. We propose a supportive framework of tools and infrastructure for evaluating reproducibility in geoscientific publications, with a case study for the climate informatics community. While the recommendations focus on future editions of the CICR, we expect them to benefit a wider umbrella of reproducibility initiatives in the geosciences.
Machine learning models have been used extensively in hydrology, but issues persist with regard to their transparency, and there is currently no identifiable best practice for forcing variables in streamflow or flood modeling. In this paper, using data from the Centre for Ecology & Hydrology’s National River Flow Archive and from the European Centre for Medium-Range Weather Forecasts, we present a study that focuses on the input variable set for a neural network streamflow model to demonstrate how certain variables can be internalized, leading to a compressed feature set. By highlighting this capability to learn effectively using proxy variables, we demonstrate a more transferable framework that minimizes sensing requirements and that enables a route toward generalizing models.
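To make the feature-compression idea concrete, the sketch below compares a small feed-forward model trained on a full forcing set against one trained on a reduced proxy set. The variable names, synthetic data, and network size are illustrative assumptions, not the CEH/ECMWF data or the architecture used in the paper.

```python
# Sketch: compare a full forcing set against a compressed (proxy) set
# for a feed-forward streamflow model. Variables and data are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
precip = rng.gamma(2.0, 2.0, n)
temp = rng.normal(10, 5, n)
radiation = 0.8 * temp + rng.normal(0, 1, n)          # correlated with temperature
evap = 0.05 * radiation + 0.02 * temp + rng.normal(0, 0.1, n)
flow = 0.6 * precip - 0.3 * evap + rng.normal(0, 0.2, n)

full = np.column_stack([precip, temp, radiation, evap])
compressed = np.column_stack([precip, temp])          # proxies stand in for the rest

for name, X in [("full", full), ("compressed", compressed)]:
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)
    score = cross_val_score(mlp, X, flow, cv=3, scoring="r2").mean()
    print(name, round(score, 3))
```

If the compressed set scores comparably, the remaining inputs have effectively been internalized via their correlated proxies, which is the behaviour the paper exploits.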
Parameter learning is a crucial task in the field of Statistical Relational Artificial Intelligence: given a probabilistic logic program and a set of observations in the form of interpretations, the goal is to learn the probabilities of the facts in the program such that the probabilities of the interpretations are maximized. In this paper, we propose two algorithms to solve such a task within the formalism of Probabilistic Answer Set Programming, both based on the extraction of symbolic equations representing the probabilities of the interpretations. The first solves the task using an off-the-shelf constrained optimization solver while the second is based on an implementation of the Expectation Maximization algorithm. Empirical results show that our proposals often outperform existing approaches based on projected answer set enumeration in terms of quality of the solution and in terms of execution time.
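The first of the two strategies described above can be illustrated with a toy example: write the probability of each observed interpretation as an expression in the unknown fact probabilities and hand the resulting negative log-likelihood to an off-the-shelf constrained optimizer. The two-fact program and its equations below are invented for illustration.

```python
# Sketch of the "symbolic equations + off-the-shelf optimizer" idea.
# The tiny program, its two interpretations, and the counts are assumptions.
import numpy as np
from scipy.optimize import minimize

# Probabilistic facts a and b with unknown probabilities p_a, p_b, and two
# observed interpretations with (assumed) symbolic probabilities:
#   P(I1) = p_a * (1 - p_b)          observed 30 times
#   P(I2) = p_a * p_b + (1 - p_a)    observed 70 times
counts = {"I1": 30, "I2": 70}

def neg_log_likelihood(p):
    pa, pb = p
    p_i1 = pa * (1 - pb)
    p_i2 = pa * pb + (1 - pa)
    return -(counts["I1"] * np.log(p_i1 + 1e-12)
             + counts["I2"] * np.log(p_i2 + 1e-12))

res = minimize(neg_log_likelihood, x0=[0.5, 0.5],
               bounds=[(1e-6, 1 - 1e-6)] * 2, method="L-BFGS-B")
print("learned probabilities:", res.x)
```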
This paper describes a semantics for pure Prolog programs with negation that provides meaning to metaprograms. Metaprograms are programs that construct and use data structures as programs. In Prolog a primary metaprogramming construct is the use of a variable as a literal in the body of a clause. The traditional Prolog 3-line metainterpreter is another example of a metaprogram. The account given here also supplies a meaning for clauses that have a variable as head, even though most Prolog systems do not support such clauses. This semantics naturally includes such programs, giving them their intuitive meaning. Ideas from Denecker and his colleagues form the basis of this approach. The key idea is to notice that if we give meanings to all propositional programs and treat Prolog rules with variables as the set of their ground instances, then we can give meanings to all programs. We must treat Prolog rules (which may be metarules) as templates for generating ground propositional rules, and not as first-order formulas, which they may not be. We use parameterized inductive definitions to give propositional models to Prolog programs, in which the propositions are expressions. Then the set of expressions of a propositional model determines a first-order Herbrand model, providing a first-order logical semantics for all (pure) Prolog programs, including metaprograms. We give examples to show the applicability of this theory. We also demonstrate how this theory makes proofs of some important properties of metaprograms very straightforward.
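The "rules as templates" reading can be illustrated with a few lines of code that expand a rule with variables into its ground instances over an assumed finite set of ground terms (the rule and constants below are invented):

```python
# Sketch: a rule with variables stands for the set of its ground instances
# over the ground terms of the program. Rule and universe are invented.
from itertools import product

terms = ["a", "b"]                  # assumed Herbrand universe
head, body = "p(X)", ["q(X,Y)"]     # a rule with variables X and Y
variables = ["X", "Y"]

def substitute(atom, binding):
    # Replace each (single-character) variable by its bound constant.
    return "".join(binding.get(ch, ch) for ch in atom)

for values in product(terms, repeat=len(variables)):
    binding = dict(zip(variables, values))
    print(substitute(head, binding), ":-",
          ", ".join(substitute(b, binding) for b in body))
```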
This paper proposes a new methodology for early validation of high-level requirements on cyber-physical systems, with the aim of improving their quality and, thus, lowering the chances of specification errors propagating into later stages of development, where it is much more expensive to fix them. The paper presents a transformation of a real-world requirements specification of a medical device, the Patient-Controlled Analgesia (PCA) Pump, into an Event Calculus model that is then evaluated using Answer Set Programming and the s(CASP) system. The evaluation under s(CASP) allowed deductive as well as abductive reasoning about the specified functionality of the PCA pump at the conceptual level, with minimal implementation- or design-dependent influences, and led to the fully automatic detection of nuanced violations of critical safety properties. Further, the paper discusses the scalability and non-termination challenges that had to be faced in the evaluation and the techniques proposed to (partially) solve them. Finally, ideas are presented for improving s(CASP) to overcome its remaining evaluation limitations and to increase its expressiveness.
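As a rough illustration of the kind of Event Calculus reasoning involved, the sketch below runs a toy discrete Event Calculus fragment through the clingo Python API rather than s(CASP) (a deliberately swapped-in solver); the event and fluent names are invented and are not part of the PCA pump specification.

```python
# Toy discrete Event Calculus fragment via the clingo Python API (not s(CASP)).
# Event and fluent names are invented, not from the PCA pump model.
import clingo

program = """
time(0..5).
happens(press_bolus, 1).
happens(stop_infusion, 3).
initiates(press_bolus, infusing, T) :- time(T).
terminates(stop_infusion, infusing, T) :- time(T).
holdsAt(F, T2) :- initiates(E, F, T1), happens(E, T1), time(T2), T1 < T2,
                  not clipped(T1, F, T2).
clipped(T1, F, T2) :- terminates(E, F, T), happens(E, T),
                      time(T1), time(T2), T1 <= T, T < T2.
#show holdsAt/2.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("fluents:", m.symbols(shown=True)))
```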
The dominating set reconfiguration problem is defined as determining, for a given dominating set problem and two of its feasible solutions, whether one is reachable from the other via a sequence of feasible solutions subject to a certain adjacency relation. This problem is PSPACE-complete in general. The concept of the dominating set is known to be quite useful for analyzing wireless networks, social networks, and sensor networks. We develop an approach to solving the dominating set reconfiguration problem based on answer set programming (ASP). Our declarative approach relies on a high-level ASP encoding, and both the grounding and solving tasks are delegated to an ASP-based combinatorial reconfiguration solver. To evaluate the effectiveness of our approach, we conduct experiments on a newly created benchmark set.
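For readers unfamiliar with ASP encodings of dominating sets, the sketch below shows a minimal (non-reconfiguration) encoding run through the clingo Python API on an invented four-node graph; it is not the encoding or solver pipeline used in the paper.

```python
# Minimal dominating-set encoding via the clingo Python API.
# Graph and encoding are invented; this is not the paper's reconfiguration setup.
import clingo

program = """
node(1..4).
edge(1,2). edge(2,3). edge(3,4).
edge(Y,X) :- edge(X,Y).           % make the graph undirected
{ in(X) : node(X) }.
dominated(X) :- in(X).
dominated(X) :- edge(X,Y), in(Y).
:- node(X), not dominated(X).     % every node must be dominated
#minimize { 1,X : in(X) }.        % prefer smaller dominating sets
#show in/1.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("candidate:", m.symbols(shown=True)))
```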
Recent efforts in interpreting convolutional neural networks (CNNs) focus on translating the activation of CNN filters into stratified Answer Set Programming (ASP) rule-sets. The CNN filters are known to capture high-level image concepts, so the predicates in the rule-set are mapped to the concepts that their corresponding filters represent. Hence, the rule-set exemplifies the decision-making process of the CNN w.r.t. the concepts that it learns for any image classification task. These rule-sets help understand the biases in CNNs, although correcting the biases remains a challenge. We introduce a neurosymbolic framework called NeSyBiCor for bias correction in a trained CNN. Given symbolic concepts, as ASP constraints, that the CNN is biased toward, we convert the concepts to their corresponding vector representations. Then, the CNN is retrained using our novel semantic similarity loss that pushes the filters away from (or toward) learning the undesired (or desired) concepts. The final ASP rule-set obtained after retraining satisfies the constraints to a high degree, thus showing the revision in the knowledge of the CNN. We demonstrate that our NeSyBiCor framework successfully corrects the biases of CNNs trained with subsets of classes from the Places dataset while sacrificing minimal accuracy and improving interpretability.
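A semantic-similarity penalty in the spirit described above can be sketched as follows; the pooling of filter activations, the concept vectors, and the weighting are illustrative assumptions rather than the paper's exact loss.

```python
# Sketch of a semantic-similarity penalty: push pooled filter activations away
# from an undesired concept vector and toward a desired one. The concept
# vectors, weights, and data are illustrative assumptions, not the paper's loss.
import torch
import torch.nn.functional as F

def semantic_similarity_loss(filter_activations, desired, undesired,
                             alpha=1.0, beta=1.0):
    """filter_activations: (batch, d) pooled activations of a filter group;
    desired / undesired: (d,) concept embedding vectors."""
    sim_desired = F.cosine_similarity(filter_activations, desired.unsqueeze(0), dim=1)
    sim_undesired = F.cosine_similarity(filter_activations, undesired.unsqueeze(0), dim=1)
    # Reward similarity with the desired concept, penalize the undesired one.
    return (beta * sim_undesired - alpha * sim_desired).mean()

# Usage: add the penalty to the ordinary classification loss during retraining.
acts = torch.randn(8, 512, requires_grad=True)
desired, undesired = torch.randn(512), torch.randn(512)
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logits, targets) + 0.1 * semantic_similarity_loss(acts, desired, undesired)
loss.backward()
```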
The development of large language models (LLMs), such as GPT, has enabled the construction of several socialbots, like ChatGPT, that are receiving a lot of attention for their ability to simulate a human conversation. However, the conversation is not guided by a goal and is hard to control. In addition, because LLMs rely more on pattern recognition than on deductive reasoning, they can give confusing answers and have difficulty integrating multiple topics into a cohesive response. These limitations often lead the LLM to deviate from the main topic to keep the conversation interesting. We propose AutoCompanion, a socialbot that uses an LLM to translate natural language into predicates (and vice versa) and employs commonsense reasoning based on answer set programming (ASP) to hold a social conversation with a human. In particular, we rely on s(CASP), a goal-directed implementation of ASP, as the backend. This paper presents the framework design and how an LLM is used to parse user messages and generate a response from the s(CASP) engine output. To validate our proposal, we describe (real) conversations in which the chatbot’s goal is to keep the user entertained by talking about movies and books, and s(CASP) ensures (i) correctness of answers, (ii) coherence (and precision) during the conversation, which it dynamically regulates to achieve its specific purpose, and (iii) no deviation from the main topic.
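The parse-reason-respond loop can be sketched at a very high level as below; llm_to_predicates, query_scasp, and llm_to_text are hypothetical placeholders (no real LLM or s(CASP) API is invoked), and the movie facts are invented.

```python
# High-level sketch of the parse -> reason -> respond loop described above.
# All three helpers are hypothetical stand-ins; nothing here calls a real
# LLM or the s(CASP) engine.
def llm_to_predicates(user_message: str) -> list[str]:
    # In the real system an LLM would translate free text into predicates.
    return ["likes(user, movies)", "topic(science_fiction)"]

def query_scasp(predicates: list[str]) -> list[str]:
    # Stand-in for invoking the reasoning engine with the conversation state;
    # here we just return a canned "next move".
    return ["recommend(blade_runner)", "ask(favourite_director)"]

def llm_to_text(predicates: list[str]) -> str:
    # An LLM would verbalise the engine's output into a chat reply.
    return "You might enjoy Blade Runner. Who is your favourite director?"

def chat_turn(user_message: str) -> str:
    facts = llm_to_predicates(user_message)
    moves = query_scasp(facts)
    return llm_to_text(moves)

print(chat_turn("I love sci-fi films"))
```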
When we want to compute the probability of a query from a probabilistic answer set program, some parts of the program may not influence the probability of the query, but they impact the size of the grounding. Identifying and removing them is crucial to speed up the computation. Algorithms for SLG resolution offer the possibility of returning the residual program, which can be used for computing answer sets of normal programs that do not have a total well-founded model. The residual program does not contain the parts of the program that do not influence the probability. In this paper, we propose to exploit the residual program for performing inference. Empirical results on graph datasets show that the approach leads to significantly faster inference. The paper has been accepted at the ICLP 2024 conference and is under consideration in Theory and Practice of Logic Programming (TPLP).
The highly digitalised nature of contemporary society has made digital literacy important for newly arrived migrants. However, for teachers, the use of information and communication technologies can be challenging. The aim of the present study is to gain a deeper understanding of how teachers perceive digital resources as useful for teaching migrants language and subject skills. The research question is: In what way do teachers at the language introduction programme for newly arrived migrants in Sweden articulate the use of digital resources in relation to language teaching and in relation to subject teaching? This qualitative study is based on observations of 28 lessons in different subjects in the language introduction programme, as well as interviews with the observed teachers. In analysing the material, we first used the TPACK in situ model (Pareto & Willermark, 2019) to organise the data on the use of digital resources, and thereafter discourse theory (Howarth, 2005) was used to analyse the data. The results show that the teachers limited their students’ use of digital resources during the lessons, which is apparent in two discourses: distrust and dichotomy. In the discourse of distrust, digital technology is seen as an obstacle to teaching, while the discourse of dichotomy concerns the opposition between the digital and the physical. Moreover, articulations were often expressed in terms of identity; the teachers talked about themselves in relation to digital resources, rather than talking about how they use digital resources in their teaching.
Variable sharing is a fundamental property in the static analysis of logic programs, since it is instrumental for ensuring correctness and increasing precision while inferring many useful program properties. Such properties include modes, determinacy, non-failure, cost, etc. This has motivated significant work on developing abstract domains to improve the precision and performance of sharing analyses. Much of this work has centered around the family of set-sharing domains, because of the high precision they offer. However, this comes at a price: their scalability to a wide set of realistic programs remains challenging, and this hinders their wider adoption. In this work, rather than defining new sharing abstract domains, we focus instead on developing techniques which can be incorporated into the analyzers to address aspects that are known to affect the efficiency of these domains, such as the number of variables, without affecting precision. These techniques are inspired by others used in the context of compiler optimizations, such as expression reassociation and variable trimming. We present several such techniques and provide an extensive experimental evaluation of over 1100 program modules taken from both production code and classical benchmarks. This includes the Spectector cache analyzer, the s(CASP) system, the libraries of the Ciao system, the LPdoc documenter, the PLAI analyzer itself, etc. The experimental results are quite encouraging: we have obtained significant speedups, and, more importantly, the number of modules that require a timeout was cut in half. As a result, many more programs can be analyzed precisely in reasonable times.
Minimal models of a Boolean formula play a pivotal role in various reasoning tasks. While previous research has primarily focused on qualitative analysis over minimal models, our study concentrates on the quantitative aspect, specifically the counting of minimal models. Exact counting of minimal models is strictly harder than $\#\mathsf{P}$, prompting our investigation into establishing a lower bound for their quantity, which is often useful in related applications. In this paper, we introduce two novel techniques for counting minimal models, leveraging the expressive power of answer set programming: the first technique employs methods from knowledge compilation, while the second draws on recent advancements in hashing-based approximate model counting. Through empirical evaluations, we demonstrate that our methods significantly improve the lower bound estimates of the number of minimal models, surpassing the performance of existing minimal model reasoning systems in terms of runtime.
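For contrast with the scalable techniques proposed in the paper, a naive baseline for counting minimal models simply enumerates all models of a small CNF and keeps the subset-minimal ones; the formula below is invented.

```python
# Naive baseline sketch: enumerate all models of a small CNF and keep the
# subset-minimal ones. This brute force is only for illustration; it is
# exactly what the ASP- and hashing-based techniques avoid.
from itertools import product

# CNF over variables 1..3: (x1 or x2) and (x2 or x3); clauses as literal lists.
clauses = [[1, 2], [2, 3]]
n_vars = 3

def satisfies(assignment, clauses):
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses)

models = [a for a in product([False, True], repeat=n_vars) if satisfies(a, clauses)]

def true_set(a):
    return {i + 1 for i, v in enumerate(a) if v}

minimal = [m for m in models
           if not any(true_set(o) < true_set(m) for o in models)]
print("minimal models:", [sorted(true_set(m)) for m in minimal],
      "count:", len(minimal))
```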
We are interested in automating reasoning with and about study regulations, catering to various stakeholders, ranging from administrators and faculty to students at different stages. Our work builds on an extensive analysis of various study programs at the University of Potsdam. The conceptualization of the underlying principles provides us with a formal account of study regulations. In particular, the formalization reveals the properties of admissible study plans. With these at hand, we propose an encoding of study regulations in Answer Set Programming that produces corresponding study plans. Finally, we show how this approach can be extended to a generic user interface for exploring study plans.
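A toy flavour of such an encoding, run via the clingo Python API, is sketched below; the modules, credits, and constraints are invented and are not the Potsdam regulations formalized in the paper.

```python
# Toy study-plan encoding via the clingo Python API: assign modules to
# semesters under credit constraints. All names and numbers are invented.
import clingo

program = """
module(logic, 6). module(databases, 6). module(ethics, 3). module(project, 12).
semester(1..2).
{ take(M, S) : semester(S) } 1 :- module(M, _).
credits(S, C) :- semester(S), C = #sum { P, M : take(M, S), module(M, P) }.
:- semester(S), credits(S, C), C > 15.              % per-semester credit cap
:- #sum { P, M : take(M, _), module(M, P) } < 18.   % minimum total credits
#show take/2.
"""

ctl = clingo.Control(["1"])
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("plan:", m.symbols(shown=True)))
```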
We propose a stable model semantics for higher-order logic programs. Our semantics is developed using Approximation Fixpoint Theory (AFT), a powerful formalism that has successfully been used to give meaning to diverse non-monotonic formalisms. The proposed semantics generalizes the classical two-valued stable model semantics of Gelfond and Lifschitz as well as the three-valued one of Przymusinski, retaining their desirable properties. Due to the use of AFT, we also get for free alternative semantics for higher-order logic programs, namely supported model, Kripke-Kleene, and well-founded. Additionally, we define a broad class of stratified higher-order logic programs and demonstrate that they have a unique two-valued higher-order stable model which coincides with the well-founded semantics of such programs. We provide a number of examples in different application domains, which demonstrate that higher-order logic programming under the stable model semantics is a powerful and versatile formalism, which can potentially form the basis of novel ASP systems.
Answer Set Programming with Quantifiers (ASP(Q)) has been introduced to provide a natural extension of ASP modeling to problems in the polynomial hierarchy (PH). However, ASP(Q) lacks a method for encoding, in an elegant and compact way, problems requiring a polynomial number of calls to an oracle in $\Sigma_n^p$ (that is, problems in $\Delta_{n+1}^p$). Such problems include, in particular, optimization problems. In this paper, we propose an extension of ASP(Q) in which component programs may contain weak constraints. Weak constraints can be used both for expressing local optimization within quantified component programs and for modeling global optimization criteria. We showcase the modeling capabilities of the new formalism through various application scenarios. Further, we study its computational properties, obtaining complexity results and unveiling non-obvious characteristics of ASP(Q) programs with weak constraints.
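For readers unfamiliar with weak constraints, the sketch below shows what one looks like in plain ASP (clingo syntax, via its Python API); the proposed formalism embeds such constraints inside quantified component programs, which standard ASP solvers do not evaluate, so the vertex-cover toy here is only a syntactic illustration.

```python
# What a weak constraint looks like in plain ASP (clingo syntax). The
# vertex-cover toy is invented; this is not an ASP(Q) solver.
import clingo

program = """
node(1..3). edge(1,2). edge(2,3).
{ in(X) : node(X) }.
:- edge(X, Y), not in(X), not in(Y).   % cover every edge
:~ in(X). [1@1, X]                     % weak constraint: minimise cover size
#show in/1.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True), "cost:", m.cost))
```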
Environmental data science for spatial extremes has traditionally relied heavily on max-stable processes. Even though the popularity of these models has perhaps peaked with statisticians, they are still regarded as the “state of the art” in many applied fields. However, while the asymptotic theory supporting the use of max-stable processes is mathematically rigorous and comprehensive, we think that it has also been overused, if not misused, in environmental applications, to the detriment of more purposeful and meticulously validated models. In this article, we review the main limitations of max-stable process models and strongly argue against their systematic use in environmental studies. Alternative solutions based on more flexible frameworks using the exceedances of variables above appropriately chosen high thresholds are discussed, and an outlook on future research is given. We consider the opportunities offered by hybridizing machine learning with extreme-value statistics, highlighting seven key recommendations moving forward.
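The peaks-over-threshold alternative mentioned above can be sketched in a few lines: pick a high threshold, fit a generalized Pareto distribution to the exceedances, and derive an approximate return level. The synthetic data and the 95% threshold choice are illustrative assumptions.

```python
# Sketch of a peaks-over-threshold analysis: fit a generalized Pareto
# distribution to exceedances over a high threshold. Data and threshold
# choice are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.gumbel(loc=20.0, scale=5.0, size=5000)  # stand-in environmental variable

threshold = np.quantile(data, 0.95)
exceedances = data[data > threshold] - threshold

# Fit the GPD to the exceedances (location fixed at 0).
shape, loc, scale = genpareto.fit(exceedances, floc=0)
print(f"threshold={threshold:.2f}, shape={shape:.3f}, scale={scale:.3f}")

# Approximate 100-observation return level: threshold plus the GPD quantile
# adjusted for the exceedance rate.
rate = exceedances.size / data.size
return_level = threshold + genpareto.ppf(1 - 1 / (100 * rate), shape, loc=0, scale=scale)
print(f"approx. 100-obs return level: {return_level:.2f}")
```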