This paper introduces a quantitative approach to assessing the effectiveness of student–tutor crit interactions in fostering constructivist learner-centred education in design studios. Implementing the learner-centred approach is highly encouraged in constructivist education. However, methodological gaps in the quantitative assessment of its implementation restrict comparative studies and adoption across different settings. Based on the design practices learned and taught in studio crits, a quantitative measure of learner-centred education is developed. This measure accounts for the student’s cognitive engagement in introducing new design issues and active practice in developing these issues towards a design solution. The analysis techniques combine the syntactic tracking of the first occurrence of design issues, a coding of their types and the transitions between them using the Function–Behaviour–Structure ontology. A case study is presented to demonstrate the application of this learner-centred education metric in a natural experiment involving student–tutor interactions in six design critiques throughout a single semester. Critiques alternated between two different educational media. Results indicate that this approach can detect differences within interactions and across media. This demonstrates the potential of the approach for assessing the effectiveness of student–tutor interactions for implementing learner-centred education in studio-based education.
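The abstract does not give the metric's exact form; as a loose illustration of the kind of computation involved, the Python sketch below tracks who first introduces each design issue in a hypothetically coded crit transcript and derives a simple student-share ratio. The utterance data, the FBS labels attached to it, and the scoring rule are all invented for illustration.

```python
# Illustrative only: the paper's actual metric is not specified in the
# abstract. Utterances are assumed pre-coded with a speaker, a
# design-issue identifier, and an FBS type (F/B/S).

from collections import Counter

utterances = [                       # hypothetical coded crit transcript
    ("student", "seating_layout", "F"),
    ("tutor",   "seating_layout", "S"),
    ("student", "daylight",       "B"),
    ("tutor",   "circulation",    "F"),
    ("student", "circulation",    "S"),
]

def first_introducer(utts):
    """Map each design issue to the speaker who first raised it."""
    seen = {}
    for speaker, issue, _fbs in utts:
        seen.setdefault(issue, speaker)
    return seen

def student_issue_share(utts):
    """Share of new design issues first introduced by the student
    (a hypothetical stand-in for the paper's learner-centred metric)."""
    counts = Counter(first_introducer(utts).values())
    total = sum(counts.values())
    return counts["student"] / total if total else 0.0

print(student_issue_share(utterances))  # 2 of 3 issues -> 0.667
```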
This chapter argues that care must be taken when considering whether law reform is essential in light of new technologies and their applications. The application of each new technology raises its own issues, and not all of these will invariably require legal change – but some undoubtedly will because the issues raised are beyond the reach of existing laws. In line with this argument, a sketch is presented of a methodical approach for determining whether and how consumer protection law should be reformed in the face of technological developments. Focusing on the need to determine the precise challenges the new technology poses invites an open mind to the legal reform response, and it is important to test each option (tweaking existing rules, creating analogous rules attuned to digital and technological advances, or new models of regulation including solutions focused on technological applications rather than consumer rights) to find the best mix of responses, subject to the overriding requirement to ensure that consumer protection is not diluted. This approach is then tested in respect of two areas: the reform of the EU’s Product Liability regime, and the arrival of digital assistants that will enable algorithmically automated contracting.
The use of large language models (LLMs) has exploded since November 2022, but evidence regarding LLM use in health, medical, and research contexts remains sparse. We aimed to summarise current uses of and attitudes towards LLMs across our campus’s clinical, research, and teaching sites. In August–September 2023, we circulated a survey about LLM uses and attitudes amongst all staff and students across our three campus sites (approximately n = 7500), comprising a paediatric academic hospital, a research institute, and a paediatric university department. The survey asked about participants’ knowledge of LLMs, their current use of LLMs in professional or learning contexts, and their perspectives on possible future uses, opportunities, and risks of LLM use. We received 281 anonymous responses and conducted summary quantitative analysis and inductive qualitative analysis of free-text responses. Over 90% of respondents have heard of LLM tools and about two-thirds have used them in their work on our campus. Respondents reported using LLMs for various purposes, including generating or editing text and exploring ideas. Many, but not necessarily all, respondents seem aware of the limitations and potential risks of LLMs, including privacy and security risks. Various respondents expressed enthusiasm about the opportunities of LLM use, including increased efficiency. Our findings show LLM tools are already widely used on our campus; guidelines and governance are needed to keep pace with practice. Insights from this survey were used to develop recommendations for the use of LLMs on our campus.
Inference and prediction under partial knowledge of a physical system is challenging, particularly when multiple confounding sources influence the measured response. Explicitly accounting for these influences in physics-based models is often infeasible due to epistemic uncertainty, cost, or time constraints, resulting in models that fail to accurately describe the behavior of the system. On the other hand, data-driven machine learning models such as variational autoencoders are not guaranteed to identify a parsimonious representation. As a result, they can suffer from poor generalization performance and reconstruction accuracy in the regime of limited and noisy data. We propose a physics-informed variational autoencoder architecture that combines the interpretability of physics-based models with the flexibility of data-driven models. To promote disentanglement of the known physics and confounding influences, the latent space is partitioned into physically meaningful variables that parametrize a physics-based model, and data-driven variables that capture variability in the domain and class of the physical system. The encoder is coupled with a decoder that integrates physics-based and data-driven components, and constrained by an adversarial training objective that prevents the data-driven components from overriding the known physics, ensuring that the physics-grounded latent variables remain interpretable. We demonstrate that the model is able to disentangle features of the input signal and separate the known physics from confounding influences using supervision in the form of class and domain observables. The model is evaluated on a series of synthetic case studies relevant to engineering structures, demonstrating the feasibility of the proposed approach.
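As a rough sketch of the latent-space partitioning described above, the PyTorch snippet below splits the latent code into physics variables that drive a toy physics decoder (a damped sinusoid, an assumption standing in for the actual physics-based model) and data-driven variables fed to a neural decoder; the adversarial objective and supervision terms are omitted, and all dimensions and layer sizes are illustrative.

```python
# Minimal sketch of a physics-informed VAE with a partitioned latent
# space. The "physics" here (a damped sinusoid) is an invented stand-in
# for the paper's physics-based model.

import math
import torch
import torch.nn as nn

class PhysicsInformedVAE(nn.Module):
    def __init__(self, x_dim=128, z_phys_dim=2, z_data_dim=4, hidden=64):
        super().__init__()
        self.z_phys_dim, self.z_data_dim = z_phys_dim, z_data_dim
        z_dim = z_phys_dim + z_data_dim
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),   # means and log-variances
        )
        # Data-driven branch capturing confounding influences.
        self.nn_decoder = nn.Sequential(
            nn.Linear(z_data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )
        self.t = torch.linspace(0.0, 1.0, x_dim)  # toy time axis

    def physics_decoder(self, z_phys):
        # Physically meaningful latents parametrize frequency and damping.
        freq, damp = z_phys[:, :1], z_phys[:, 1:2]
        return torch.exp(-damp.abs() * self.t) * torch.sin(
            2 * math.pi * freq.abs() * self.t)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        z_phys, z_data = z.split([self.z_phys_dim, self.z_data_dim], dim=-1)
        # Additive composition of physics-based and data-driven parts;
        # the adversarial constraint on the data branch is not shown.
        x_hat = self.physics_decoder(z_phys) + self.nn_decoder(z_data)
        return x_hat, mu, logvar

model = PhysicsInformedVAE()
x_hat, mu, logvar = model(torch.randn(8, 128))
print(x_hat.shape)  # torch.Size([8, 128])
```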
Legal ‘regulatory escape’, ‘regulatory disconnection’ or ‘regulatory disruption’ on the part of particular regulatees or commercial practices has been observed across diverse regulatory environments, ranging from environmental protection to the provision of gambling services. Instances of legal regulatory escape appear particularly prevalent with the introduction of novel technology products and services. Evaluation of technology-related legal regulatory escapes provides examples of deliberate, even overt, evasion of legal constraints, as well as avoidance via practices such as regulatory mimicry or differentiation. This chapter identifies examples of recent legal technology-related ‘regulatory escape’, discusses key reasons why legal regulation may fail to cater effectively for complications arising from specific technology practices, products or classes of regulatee, and considers possible regulatory responses to address the risks, or capture the benefits, of technological advances.
This chapter explores the potential and limitations of AI in the legal field, with a focus on its application in legal research through tools like Lexis+ AI. It critically evaluates Lexis+ AI’s capability in case retrieval, a crucial function for legal professionals who rely on accurate and comprehensive legal sources to inform their work. The study provides an empirical analysis of Lexis+ AI’s performance on cryptocurrency-related legal queries, revealing that while the tool can generate accurate responses, it often falls short in terms of relevance and completeness. This chapter concludes by discussing the implications for legal professionals and legal tech companies, emphasizing the need for ongoing refinement of AI technologies, the importance of keeping legal professionals involved in decision-making processes, and the necessity of further collaboration between the legal and tech sectors.
The conventional design method for high-performance concrete (HPC) mixture proportioning requires extensive trial mixing to obtain the desired mixture proportion, consuming considerable manpower, materials, and time. In recent years, intelligent schemes for HPC mixture proportion design have been developed. To optimize HPC mixture proportions more effectively, this article proposes a novel intelligent design method. Firstly, it establishes a hybrid multi-objective optimization (MOO) framework for the HPC mixture proportion design problem, called CNN–NSDBO–EWTOPSIS, with three objective functions: the compressive strength (CS) of the concrete, cost, and carbon dioxide emissions. A convolutional neural network (CNN) regression model, taking the various components of the concrete as input, predicts CS, while cost and carbon dioxide emissions are calculated from two polynomials. The dung beetle optimizer (DBO) is used to optimize the hyperparameters of the CNN. The constructed CNN regression model and the two polynomials then serve as the three objective functions, and the resulting three-objective optimization problem is solved using a non-dominated sorting dung beetle optimizer (NSDBO). Finally, from the obtained Pareto front, a compromise solution is selected using the entropy-weighted technique for order preference by similarity to ideal solution (EWTOPSIS). The experimental results indicate that the proposed CNN–NSDBO–EWTOPSIS approach can effectively accomplish HPC mixture proportion design.
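The final selection step is the most self-contained part of this pipeline; the NumPy sketch below shows one standard formulation of entropy-weighted TOPSIS applied to a toy Pareto front. The front values and criterion directions (maximise strength, minimise cost and emissions) are assumptions, and the CNN and NSDBO stages are not reproduced.

```python
# Sketch of entropy-weighted TOPSIS (EWTOPSIS) picking one compromise
# solution from a toy Pareto front. Not the paper's exact implementation.

import numpy as np

# Rows: Pareto-optimal mixtures. Columns: [CS (MPa), cost, CO2 emissions].
front = np.array([
    [62.0,  95.0, 310.0],
    [55.0,  80.0, 280.0],
    [70.0, 120.0, 350.0],
])
benefit = np.array([True, False, False])   # maximise CS, minimise the rest

def entropy_weights(x):
    """Objective criterion weights from Shannon entropy of the columns."""
    p = x / x.sum(axis=0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(len(x))
    d = 1.0 - e                             # degree of diversification
    return d / d.sum()

def topsis(x, w, benefit):
    """Closeness of each alternative to the ideal solution."""
    v = w * x / np.linalg.norm(x, axis=0)   # weighted normalised matrix
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

w = entropy_weights(front)
scores = topsis(front, w, benefit)
print("chosen mixture:", scores.argmax(), "closeness:", scores.round(3))
```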
A three-valued logic is subclassical when it is defined by a single matrix having the classical two-element matrix as a subreduct. In this case, the language of such a logic can be expanded with special unary connectives, called external operators. The resulting logic is called the external version of the original one, a notion originally introduced by D. Bochvar in 1938 with respect to his weak Kleene logic. In this paper we study the semantic properties of the external version of a three-valued subclassical logic. We determine necessary and sufficient conditions to turn a model of the original logic into a model of its external version. Moreover, we establish some distinctive semantic properties of the external version.
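For a concrete instance of an external operator, the snippet below evaluates Bochvar-style external assertion ("a is true"), which returns a classical value on every input, collapsing the third value to falsity; the value encoding is illustrative.

```python
# Bochvar's external assertion operator on three truth values:
# 1 (true), 0 (false), and N (the non-classical third value).

N = "n"  # third truth value

def external_assertion(v):
    """'a is true': classically true only when v == 1."""
    return 1 if v == 1 else 0

for v in (1, N, 0):
    print(v, "->", external_assertion(v))
# 1 -> 1, n -> 0, 0 -> 0 : the output is always classical
```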
Being Human in the Digital World is a collection of essays by prominent scholars from various disciplines exploring the impact of digitization on culture, politics, health, work, and relationships. The volume raises important questions about the future of human existence in a world where machine readability and algorithmic prediction are increasingly prevalent and offers new conceptual frameworks and vocabularies to help readers understand and challenge emerging paradigms of what it means to be human. Being Human in the Digital World is an invaluable resource for readers interested in the cultural, economic, political, philosophical, and social conditions that are necessary for a good digital life. This title is also available as Open Access on Cambridge Core.
This study examines the status of mixed-methods research (MMR) in computer-assisted language learning (CALL). A total of 204 studies employing MMR were analyzed. Manual coding was carried out to reveal MMR purposes, designs, features, and rhetorical justifications. Findings indicate that CALL authors mostly adopt MMR for triangulation and complementarity purposes. Core designs are more favored in CALL MMR research articles than complex designs. Moderate-sized random sampling prevails in the data, with data sources sequentially collected and analyzed using parametric tests. Symptomatic argumentative schemes are found to be the most common justification for MMR. Based on the findings, it is evident that most CALL researchers employ conventional MMR designs. The study concludes with implications for CALL stakeholders and authors.
A meta-conjecture of Coulson, Keevash, Perarnau, and Yepremyan [12] states that above the extremal threshold for a given spanning structure in a (hyper-)graph, one can find a rainbow version of that spanning structure (one in which every edge receives a distinct colour) in any suitably bounded colouring of the host (hyper-)graph. We solve one of the most pertinent outstanding cases of this conjecture by showing that for any $1\leq j\leq k-1$, if $G$ is a $k$-uniform hypergraph above the $j$-degree threshold for a loose Hamilton cycle, then any globally bounded colouring of $G$ contains a rainbow loose Hamilton cycle.
Political polarization is a group phenomenon in which opposing factions, often of unequal size, exhibit asymmetrical influence and behavioral patterns. Within these groups, elites and masses operate under different motivations and levels of influence, challenging simplistic views of polarization. Yet, existing methods for measuring polarization in social networks typically reduce it to a single value, assuming homogeneity in polarization across the entire system. While such approaches confirm the rise of political polarization in many social contexts, they overlook structural complexities that could explain its underlying mechanisms. We propose a method that decomposes existing polarization and alignment measures into distinct components. These components separately capture polarization processes involving elites and masses from opposing groups. Applying this method to Twitter discussions surrounding the 2019 and 2023 Finnish parliamentary elections, we find that (1) opposing groups rarely contribute equally to observed polarization, and (2) while elites strongly contribute to structural polarization and consistently display greater alignment across various topics, the masses, too, have recently experienced a surge in alignment. Our method provides an improved analytical lens through which to view polarization, explicitly recognizing the complexity of elite-mass dynamics in polarized environments and the need to account for them.
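As an illustration of the general idea (not the authors' actual measures), the sketch below decomposes a simple two-group E-I polarization index into additive contributions from elite and mass nodes on each side; the toy network and role labels are invented.

```python
# Illustrative decomposition of an E-I polarization index into per
# (group, role) contributions. This is a generic example, not the
# decomposition proposed in the paper.

from collections import defaultdict

nodes = {                                   # node -> (group, role)
    "a": ("left", "elite"),  "b": ("left", "mass"),
    "c": ("right", "elite"), "d": ("right", "mass"),
    "e": ("left", "mass"),   "f": ("right", "mass"),
}
edges = [("a", "b"), ("a", "e"), ("c", "d"), ("c", "f"),
         ("a", "c"), ("b", "f")]

def ei_index_decomposed(nodes, edges):
    """E-I index = (external - internal) / total edges, with each edge's
    signed contribution split equally between its endpoints' classes."""
    contrib = defaultdict(float)
    total = len(edges)
    for u, v in edges:
        sign = 1 if nodes[u][0] != nodes[v][0] else -1   # external vs internal
        for endpoint in (u, v):
            contrib[nodes[endpoint]] += sign / (2 * total)
    return dict(contrib)

parts = ei_index_decomposed(nodes, edges)
print(parts)                                 # per (group, role) contribution
print("E-I index:", round(sum(parts.values()), 3))   # -0.333 here
```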