This is a foundation for algebraic geometry, developed internally to the Zariski topos and building on the work of Kock and Blechschmidt (Kock (2006) [I.12], Blechschmidt (2017)). The Zariski topos consists of sheaves on the site given by the opposite of the category of finitely presented algebras over a fixed ring, equipped with the Zariski topology; that is, generating covers are given by localization maps at finitely many elements $f_1,\dots, f_n$ that generate the unit ideal $(1)=A\subseteq A$. We use homotopy type theory together with three axioms as the internal language of a (higher) Zariski topos. One of our main contributions is the use of higher types – in the homotopical sense – to define and reason about cohomology. Actually computing cohomology groups seems to require a principle along the lines of our “Zariski local choice” axiom, which we justify, along with the other axioms, using a cubical model of homotopy type theory.
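Concretely, the covering condition is a partition-of-unity statement: the basic opens $D(f_i)$ cover the spectrum exactly when the $f_i$ generate the unit ideal,

$$\operatorname{Spec}(A) = \bigcup_{i=1}^{n} D(f_i) \iff (f_1,\dots,f_n) = (1) \iff \exists\, g_1,\dots,g_n \in A:\ \sum_{i=1}^{n} g_i f_i = 1,$$

and the corresponding generating cover in the site is given by the localization maps $A \to A[f_i^{-1}]$.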
In this study, a novel kinematic modeling method for parallel mechanisms is proposed. It obtains the position and posture spaces simultaneously in a single model. Compared with the traditional method based only on inverse kinematics, the novel method significantly improves computational performance. An original evaluation metric $\mathfrak{R}$ is proposed to compare the performance of the two modeling methods. Three groups of experiments with different calculation times are carried out on the classical PPU-3RUS parallel mechanism and on the new RS-3UPRU parallel mechanism, demonstrating the effectiveness and wide applicability of the novel modeling method. The calculation time and output rate are recorded, and the respective $\mathfrak{R}$ values are then obtained by weighting. The results show that the novel modeling method has better performance.
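The abstract does not define $\mathfrak{R}$, so the following minimal Python sketch only illustrates the general pattern of weighting calculation time against output rate into a single score; the weights, normalization constants, and function name are hypothetical assumptions, not the paper's definition.

# Illustrative only: one plausible way to weight calculation time
# against output rate into a single score (higher is better).
def weighted_score(calc_time_s, output_rate_hz, w_time=0.5, w_rate=0.5,
                   t_ref=1.0, r_ref=100.0):
    """t_ref and r_ref are hypothetical normalization constants."""
    return w_rate * (output_rate_hz / r_ref) - w_time * (calc_time_s / t_ref)

# Comparing two methods on made-up numbers:
print(weighted_score(calc_time_s=0.8, output_rate_hz=120.0))  # faster method
print(weighted_score(calc_time_s=2.5, output_rate_hz=40.0))   # slower method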
Actor languages realize concurrency via message passing, which most of the time is easy to use. Empirical code inspection provides evidence, however, that on occasion, programmers wish to have an actor share some of its state with others. The dataspace model adds a tightly controlled state-exchange mechanism, dubbed dataspace, to the actor model for just this purpose. Experience with dataspaces suggests that this form of sharing calls for linguistic constructs that allow programmers to state temporal aspects of actor conversations. In response, this paper presents the facet notation: its theory, its type system, its behavioral type system, and some first experiences with an implementation.
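To convey the flavour of the dataspace idea, here is a toy Python sketch in which actors share state only through assertions placed in a common dataspace and pattern-based subscriptions; all names are invented for illustration, and this is not the paper's facet notation or type system.

# Toy sketch: actors exchange state only via a shared dataspace.
class Dataspace:
    def __init__(self):
        self.assertions = set()
        self.subscribers = []          # (predicate, callback) pairs

    def assert_(self, fact):
        """Publish a fact and notify every matching subscriber."""
        self.assertions.add(fact)
        for predicate, callback in self.subscribers:
            if predicate(fact):
                callback(fact)

    def subscribe(self, predicate, callback):
        self.subscribers.append((predicate, callback))

ds = Dataspace()
# One actor reacts to temperature readings published by another.
ds.subscribe(lambda f: f[0] == "temperature",
             lambda f: print("reader saw:", f))
ds.assert_(("temperature", 21.5))    # prints: reader saw: ('temperature', 21.5)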
A graph $G$ is $q$-Ramsey for another graph $H$ if in any $q$-edge-colouring of $G$ there is a monochromatic copy of $H$, and the classic Ramsey problem asks for the minimum number of vertices in such a graph. This was broadened in the seminal work of Burr, Erdős, and Lovász to the investigation of other extremal parameters of Ramsey graphs, including the minimum degree.
It is not hard to see that if $G$ is minimally $q$-Ramsey for $H$ we must have $\delta (G) \ge q(\delta (H) - 1) + 1$, and we say that a graph $H$ is $q$-Ramsey simple if this bound can be attained. Grinshpun showed that this is typical of rather sparse graphs, proving that the random graph $G(n,p)$ is almost surely $2$-Ramsey simple when $\frac{\log n}{n} \ll p \ll n^{-2/3}$. In this paper, we explore this question further, asking for which pairs $p = p(n)$ and $q = q(n,p)$ we can expect $G(n,p)$ to be $q$-Ramsey simple.
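The lower bound above has a short standard argument, sketched here for intuition. Suppose $G$ is minimally $q$-Ramsey for $H$ and some vertex $v$ satisfies $\deg_G(v) \le q(\delta(H) - 1)$. By minimality, $G - v$ admits a $q$-edge-colouring with no monochromatic copy of $H$. Extend it to $G$ by splitting the edges at $v$ into $q$ classes, each of size at most $\delta(H) - 1$, and giving the edges in class $i$ colour $i$. In every colour, $v$ now has degree less than $\delta(H)$, so $v$ cannot lie in a monochromatic copy of $H$; hence the extended colouring of $G$ contains no monochromatic $H$, contradicting the assumption that $G$ is $q$-Ramsey for $H$.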
We first extend Grinshpun’s result by showing that $G(n,p)$ is not just $2$-Ramsey simple, but is in fact $q$-Ramsey simple for any $q = q(n)$, provided $p \ll n^{-1}$ or $\frac{\log n}{n} \ll p \ll n^{-2/3}$. Next, when $p \gg \left ( \frac{\log n}{n} \right )^{1/2}$, we find that $G(n,p)$ is not $q$-Ramsey simple for any $q \ge 2$. Finally, we uncover some interesting behaviour for intermediate edge probabilities. When $n^{-2/3} \ll p \ll n^{-1/2}$, we find that there is some finite threshold $\tilde{q} = \tilde{q}(H)$, depending on the structure of the instance $H \sim G(n,p)$ of the random graph, such that $H$ is $q$-Ramsey simple if and only if $q \le \tilde{q}$. Aside from a couple of logarithmic factors, this resolves the qualitative nature of the Ramsey simplicity of the random graph over the full spectrum of edge probabilities.
Self-instructional media in education have the potential to address educational challenges such as accessibility, flexible and personalised learning, real-time assessment and resource efficiency. The objectives of this study are to (1) develop programmed instructions to teach design thinking concepts and (2) investigate their effects on secondary school students’ understanding of these concepts. A design thinking workshop was conducted with secondary school students; subsequently, their understanding of design thinking concepts gained through digital programmed instructions was evaluated. The study involved 33 novice secondary school students from grades 6 to 9 in India, who worked in teams to find and solve real-life, open-ended, complex problems during the workshop using the design thinking process. Data on (i) individual performance in understanding design thinking concepts and (ii) team performance in design problem finding and solving were collected using individual tests and teams’ outcome evaluations, respectively. Students’ perceptions of the effectiveness of the programmed instructions for supporting understanding of the concepts were also captured. Results show positive effects on students’ understanding of design thinking concepts as well as on their problem-finding and problem-solving skills. The results justify the use of programmed instructions in secondary school curricula to advance design thinking concepts. The current version of programmed instruction has limitations, including the absence of branching mechanisms, a detailed feedback system, multimodal content and backend functionalities. Future work will aim to address these shortcomings.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two remain relatively understudied, as do, more importantly, the conditions under which trustworthiness might reliably lead to the placing of ‘well-directed’ trust. In this paper, we argue that this challenge for the creation of ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and we suggest that its resolution can be informed by a multidisciplinary approach drawing on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling-theory paradigm offers an effective solution to the TRP, which we believe will be foundational to whether and how rapidly improving technologies are integrated in the healthcare space. We argue that solving the TRP will not be possible without an interdisciplinary approach, and we suggest further avenues of inquiry that we believe will be fruitful.
Generative artificial intelligence (GenAI) has gained significant popularity in recent years. It is being integrated into a variety of sectors for its abilities in content creation, design, research, and many other functionalities. The capacity of GenAI to create new content—ranging from realistic images and videos to text and even computer code—has caught the attention of both the industry and the general public. The rise of publicly available platforms that offer these services has also made GenAI systems widely accessible, contributing to their mainstream appeal and dissemination. This article delves into the transformative potential and inherent challenges of incorporating GenAI into the domain of judicial decision-making. The article provides a critical examination of the legal and ethical implications that arise when GenAI is used in judicial rulings and their underlying rationale. While the adoption of this technology holds the promise of increased efficiency in the courtroom and expanded access to justice, it also introduces concerns regarding bias, interpretability, and accountability, thereby potentially undermining judicial discretion, the rule of law, and the safeguarding of rights. Around the world, judiciaries in different jurisdictions are taking different approaches to the use of GenAI in the courtroom. Through case studies of GenAI use by judges in jurisdictions including Colombia, Mexico, Peru, and India, this article maps out the challenges presented by integrating the technology in judicial determinations, and the risks of embracing it without proper guidelines for mitigating potential harms. Finally, this article develops a framework that promotes a more responsible and equitable use of GenAI in the judiciary, ensuring that the technology serves as a tool to protect rights, reduce risks, and ultimately, augment judicial reasoning and access to justice.
Expert drivers possess the ability to execute high sideslip angle maneuvers, commonly known as drifting, during racing to navigate sharp corners and execute rapid turns. However, existing model-based controllers encounter challenges in handling the highly nonlinear dynamics associated with drifting along general paths. While reinforcement learning-based methods alleviate the reliance on explicit vehicle models, training a policy directly for autonomous drifting remains difficult due to the multiple competing objectives involved. In this paper, we propose a control framework for autonomous drifting in the general case, based on curriculum reinforcement learning. The framework empowers the vehicle to follow paths of varying curvature at high speed, executing drifting maneuvers through sharp corners. Specifically, we use the vehicle’s dynamics to decompose the overall task and employ curriculum learning to break the training process into three stages of increasing complexity. Additionally, to enhance the generalization ability of the learned policies, we introduce randomization into sensor observation noise, actuator action noise, and physical parameters. The proposed framework is validated using the CARLA simulator, encompassing various vehicle types and parameters. Experimental results demonstrate the effectiveness and efficiency of our framework in achieving autonomous drifting along general paths. The code is available at https://github.com/BIT-KaiYu/drifting.
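As an illustration of the randomization step, the following gymnasium-style Python wrapper sketches the common domain-randomization pattern of perturbing observations, actions, and per-episode physical parameters; the class name, noise scales, parameter ranges, and the set_physics hook are assumptions for illustration, not the authors' code (see the linked repository for that).

import numpy as np
import gymnasium as gym

class DriftRandomizationWrapper(gym.Wrapper):
    """Adds observation/action noise and resamples physical parameters
    each episode; a generic domain-randomization sketch (illustrative)."""
    def __init__(self, env, obs_sigma=0.01, act_sigma=0.02):
        super().__init__(env)
        self.obs_sigma = obs_sigma
        self.act_sigma = act_sigma

    def reset(self, **kwargs):
        # Hypothetical hook: resample e.g. tyre friction and mass scaling.
        if hasattr(self.env, "set_physics"):
            self.env.set_physics(
                friction=np.random.uniform(0.8, 1.2),
                mass=np.random.uniform(0.9, 1.1),
            )
        obs, info = self.env.reset(**kwargs)
        return obs + np.random.normal(0, self.obs_sigma, np.shape(obs)), info

    def step(self, action):
        noisy_action = action + np.random.normal(0, self.act_sigma, np.shape(action))
        obs, reward, terminated, truncated, info = self.env.step(noisy_action)
        noisy_obs = obs + np.random.normal(0, self.obs_sigma, np.shape(obs))
        return noisy_obs, reward, terminated, truncated, info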
In this article, I consider the moral issues that might arise from the possibility of creating more complex and sophisticated autonomous intelligent machines, or simply artificial intelligence (AI), with the human capacity for moral reasoning, judgment, and decision-making, and from the possibility of humans enhancing their moral capacities beyond what is considered normal for humanity. These two possibilities create an urgent need for ethical principles that can be used to analyze the moral consequences of the intersection of AI and transhumanism. In this article, I deploy personhood-based relational ethics grounded in Afro-communitarianism as an African ethical framework to evaluate some of the moral problems at the intersection of AI and transhumanism. In doing so, I propose some Afro-ethical principles for research and policy development in AI and transhumanism.
Loneliness and social isolation are prevalent concerns among older adults and can lead to negative health consequences and a reduced lifespan. New technologies are increasingly being developed to help address loneliness and social isolation in older adults, including monitoring systems, social networks, robots, companions, smart televisions, augmented reality (AR) and virtual reality (VR) applications. This systematic review maps human-centered design (HCD) and user-centered design (UCD) approaches, human needs, and contextual factors considered in current technological interventions designed to address the problems of loneliness and social isolation in older adults. We conducted a scoping review and in-depth examination of 98 papers through a qualitative content analysis. We found 12 studies applying either an HCD or UCD approach and observed strengths in continuous user involvement and implementation in field studies but limitations in participant inclusion criteria and methodological reporting. We also observed the consideration of important human needs and contextual factors. However, more research is needed on stakeholder perspectives, the functioning of applications in different housing environments, as well as studies that include diverse socio-economic groups.
Project-based learning (PBL) has gained widespread acceptance as a cutting-edge teaching approach in universities, particularly for imparting engineering design skills. PBL allows students to showcase their design skills and put into practice the theoretical concepts acquired through instruction. Throughout the various phases of the design process and project execution, students prepare design artifacts, which serve as tangible indicators of their design skills – an essential competency for engineers. The objective of this research is to evaluate and compare the application of engineering design skills among first-year and third-year engineering students as evidenced by their design artifacts. This comparative analysis aims to pinpoint areas of strength and opportunities for growth, thereby offering a holistic view of student development in design proficiency throughout their undergraduate education. Employing a standardized rubric to evaluate these artifacts allows for an unbiased assessment of the students’ design process acumen. The findings offer insights into the design skill proficiency of two student groups at different points in the design process. It is imperative for engineering educators to strategically highlight every aspect of the design process within PBL, ensuring the comprehensive development of design competencies.
Advances in artificial intelligence (AI) have great potential to help address societal challenges that are both collective in nature and present at national or transnational scale. Pressing challenges in healthcare, finance, infrastructure and sustainability, for instance, might all be productively addressed by leveraging and amplifying AI for national-scale collective intelligence. The development and deployment of this kind of AI faces distinctive challenges, both technical and socio-technical. Here, a research strategy for mobilising interdisciplinary research to address these challenges is detailed, and some of the key issues that must be faced are outlined.
Evaluation approaches are needed to ensure the development of effective design support. These approaches help developers ensure that their design support possesses the general design support characteristics necessary to enable designers to achieve their desired outcomes. Consequently, evaluating design support based on these characteristics ensures that the design support fulfils its intended purpose.
This work reviews design support definitions and identifies and describes 11 design support characteristics. The characteristics are applied to evaluate a proposed design support that uses additive manufacturing (AM) design artefacts (AMDAs) to explore design uncertainties. Product-specific design artefacts were designed and tested to investigate buildability limits and the relationship between surface roughness and fatigue performance of a design feature in a space industry component. The AMDA approach aided the investigation of design uncertainties, identified design solution constraints, and uncovered previously unknown uncertainties. However, the results provided by product-specific artefacts depend on how well the user frames their problem and understands their AM process and product; hence, iteration may be required. Based on the evaluation of the AMDA process, setting test evaluation criteria is recommended, and the AMDA method is proposed.
Extensible continuum robots (ECRs) offer distinct advantages over conventional continuum robots due to their ability to enhance workspace adaptability through length adjustments. This makes ECRs particularly promising for applications that require variable lengths and involve manipulating objects in challenging environments, such as risky, cluttered, or confined spaces. The development of ECRs necessitates careful consideration of mechanical structures, actuation methods, methods of stiffness variation, and control methods. Papers were selected for their relevance to ECRs and published between 2010 and 2023, as indexed in the Scopus and Web of Science databases. Distinguishing itself from other reviews, this paper delivers a comprehensive and critical discussion of the advantages and disadvantages of ECRs with respect to their mechanical structures, actuation methods, stiffness variability, and control methods. It is a beneficial resource for researchers and engineers interested in ECRs, providing essential insights to guide future developments in the field. Based on the literature, existing ECRs exhibit an inherent trade-off between flexibility and structural strength due to the absence of systematic design methods. Additionally, there is a lack of intelligent and effective controllers for achieving complex control performance and autonomous stiffness variability.
Incorporating contextual factors into engineering design processes is recommended to develop solutions that function appropriately in their intended use contexts. In global health settings, failing to tailor solutions to their broader context has led to many product failures. Since prior work has thus far not investigated the use of contextual factors in global health design practice, we conducted semi-structured interviews with 15 experienced global health design practitioners. Our analysis identified 351 instances of participants incorporating contextual factors in their previous design experiences, which we categorized into a taxonomy of contextual factors comprising 9 primary and 32 secondary classifications. We summarized and synthesized key patterns within all the identified contextual factor categories. Next, this study presents a descriptive model for incorporating contextual factors developed from our findings, which shows that participants actively sought contextual information and made conscious decisions to adjust their solutions, target markets and implementation plans to accommodate contextual factors iteratively throughout their design processes. Our findings highlight how participants sometimes conducted formal evaluations, while at other times they relied on their own experience, the experience of a team member or other stakeholder engagement strategies. The research findings can ultimately inform design practice and engineering pedagogy for global health applications.
Several real-world optimization problems are dynamic and involve a number of objectives. Various studies using evolutionary algorithms address these characteristics, but few works investigate problems that are both dynamic and many-objective. Although widely investigated in formulations with multiple objectives, evolutionary approaches are still challenged by dynamic multiobjective optimization problems, which define a relevant research topic. Some models have been proposed specifically to attack them, such as the well-known DNSGA-II and MS-MOEA algorithms, which have been extensively investigated on formulations with two or three objectives. Recently, the D-MEANDS algorithm was proposed for dynamic many-objective problems (DMaOPs). In a previous work, D-MEANDS was compared with DNSGA-II and MS-MOEA on dynamic many-objective scenarios of the knapsack problem: up to six objectives with five changes, or four objectives with ten changes. In this work, we evaluate the behavior of these algorithms on instances with up to eight objectives and twenty environmental changes. These experiments enabled us to better understand the weak points of D-MEANDS, which led us to propose D-MEANDS-MD. The proposal offers a better balance between memory and diversity. We also include a more recent MOEA in the comparison: DDIS-MOEA/D-DE. From the results obtained on 27 instances of the dynamic multiobjective knapsack problem, D-MEANDS-MD shows promise for solving discrete DMaOPs compared with the others.
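For readers unfamiliar with the benchmark, the following Python sketch shows one common way a dynamic multiobjective knapsack instance can be posed, with an objective vector per solution and periodic environmental changes perturbing the data; the sizes, change schedule, and data here are illustrative assumptions, not the instances used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n_items, n_objectives = 20, 4

weights = rng.integers(1, 10, size=n_items)
profits = rng.integers(1, 10, size=(n_objectives, n_items))  # one profit row per objective
capacity = int(0.5 * weights.sum())

def evaluate(solution):
    """Return the objective vector of a 0/1 solution, or None if infeasible."""
    if weights @ solution > capacity:
        return None
    return profits @ solution  # one profit value per objective

def environmental_change():
    """Perturb the profit matrix: the 'dynamic' part of the problem."""
    global profits
    profits = np.clip(profits + rng.integers(-2, 3, size=profits.shape), 1, None)

# A solver must re-approximate the Pareto front after each change:
for change in range(3):
    x = rng.integers(0, 2, size=n_items)
    print(f"change {change}: objectives = {evaluate(x)}")
    environmental_change()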
A detailed exploration is presented of the integration of human–machine collaboration in governance and policy decision-making, against the backdrop of increasing reliance on artificial intelligence (AI) and automation. This exploration focuses on the transformative potential of combining human cognitive strengths with machine computational capabilities, particularly emphasizing the varying levels of automation within this collaboration and their interaction with human cognitive biases. Central to the discussion is the concept of dual-process models, namely Type I and II thinking, and how these cognitive processes are influenced by the integration of AI systems in decision-making. An examination of the implications of these biases at different levels of automation is conducted, ranging from systems offering decision support to those operating fully autonomously. Challenges and opportunities presented by human–machine collaboration in governance are reviewed, with a focus on developing strategies to mitigate cognitive biases. Ultimately, a balanced approach to human–machine collaboration in governance is advocated, leveraging the strengths of both humans and machines while consciously addressing their respective limitations. This approach is vital for the development of governance systems that are both technologically advanced and cognitively attuned, leading to more informed and responsible decision-making.
Sound and new media arts appear to be both historical and contemporary means of investing in the notion of the more-than-human. Although the concept was formulated in the late 1990s (Abram 1996), certain related practices in artworks exploring machine or animal agency have existed since the 1960s, especially in new media arts using sound, video, and electronic and computational technologies.
This article maps out some of the relationships between performers and their instruments in live and improvised electronic music. In these practices, musical machines – be they computers, mechanical assemblages or combinations of different sound-makers and processors – act as generators of musical material and sources of unpredictability with which to improvise. As a lens through which to consider these practices, we examine a number of different roles these musical machines may take on during improvised performances. These include running, playing, surprising, evolving, malfunctioning, collaborating and learning. We explore the values of these different roles to the improvising musician, and contextualise them within some broad and historical trends of contemporary music. Finally, we consider how this taxonomy may make us more open to the vital materialism of musical instruments, and offer novel insights into the flows of agency and interaction possibilities in technologically mediated musical practices.