Within the broad context of design research, joint attention within co-creation represents a critical component, linking cognitive actors through dynamic interactions. This study introduces a novel approach employing deep learning algorithms to objectively quantify joint attention, offering a significant advancement over traditional subjective methods. We developed an optimized deep learning algorithm, YOLO-TP, to identify participants’ engagement in design workshops accurately. Our research methodology involved video recording of design workshops and subsequent analysis using the YOLO-TP algorithm to track and measure joint attention instances. Key findings demonstrate that the algorithm effectively quantifies joint attention with high reliability and correlates well with known measures of intersubjectivity and co-creation effectiveness. This approach not only provides a more objective measure of joint attention but also allows for the real-time analysis of collaborative interactions. The implications of this study are profound, suggesting that the integration of automated human activity recognition in co-creation can significantly enhance the understanding and facilitation of collaborative design processes, potentially leading to more effective design outcomes.
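As a rough illustration of the pipeline described in this abstract, the following sketch runs an off-the-shelf YOLO detector over workshop video and flags a sampled frame as a joint-attention instance whenever two or more detected participants' boxes overlap. The model file, sampling stride, and overlap heuristic are illustrative assumptions; they stand in for, and do not reproduce, the authors' YOLO-TP algorithm or its attention criteria.

```python
# Hypothetical sketch: counting joint-attention frames in workshop video with
# an off-the-shelf YOLO detector (a stand-in for the paper's YOLO-TP model).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumed generic person detector, not YOLO-TP

def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def count_joint_attention(video_path, stride=15):
    """Count sampled frames in which at least two detected people's boxes
    overlap -- a crude proxy for a shared attention target; the real method
    would rely on gaze and pose cues rather than box overlap."""
    cap = cv2.VideoCapture(video_path)
    joint_frames, frame_idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:
            result = model(frame, verbose=False)[0]
            people = [b.xyxy[0].tolist() for b in result.boxes
                      if result.names[int(b.cls)] == "person"]
            if any(boxes_overlap(p, q)
                   for i, p in enumerate(people) for q in people[i + 1:]):
                joint_frames += 1
        frame_idx += 1
    cap.release()
    return joint_frames
```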
The Erdős-Sós Conjecture states that every graph with average degree exceeding $k-1$ contains every tree with $k$ edges as a subgraph. We prove that there are $\delta \gt 0$ and $k_0\in \mathbb N$ such that the conjecture holds for every tree $T$ with $k \ge k_0$ edges and every graph $G$ with $|V(G)| \le (1+\delta )|V(T)|$.
Advances in generative artificial intelligence (AI) have driven a growing effort to create digital duplicates. These semi-autonomous recreations of living and dead people can serve many purposes, including tutoring, coping with grief, and attending business meetings. However, the normative implications of digital duplicates remain obscure, particularly given the possibility of their being applied to genocide memory and education. To address this gap, we examine the normative possibilities and risks associated with using more advanced forms of generative AI-enhanced duplicates to transmit Holocaust survivor testimonies. We first review the historical and contemporary uses of survivor testimonies. Then, we scrutinize the possible benefits of using digital duplicates in this context and apply the Minimally Viable Permissibility Principle (MVPP), an analytical framework for evaluating the risks of digital duplicates. It includes five core components: the need for authentic presence, consent, positive value, transparency, and harm-risk mitigation. Using the MVPP, we identify potential harms digital duplicates might pose to different actors, including survivors, users, and developers. We also propose technical and socio-technical mitigation strategies to address these harms.
Climate change will impact wind and, therefore, wind power generation, with largely unknown effects and magnitude. Climate models can provide insight and should be used for long-term power planning. In this work, we use Gaussian processes to predict power output given wind speeds from a global climate model. We validate the aggregated predictions from past climate model data against actual power generation, which supports using CMIP6 climate model data for multi-decadal wind power predictions and highlights the importance of being location-aware. We find that wind power projections for the two intermediate climate scenarios, SSP2–4.5 and SSP3–7.0, closely align with actual wind power generation between 2015 and 2023. Our location-aware future predictions up to 2050 reveal only minor changes in yearly wind power generation. Our analysis also reveals larger uncertainty associated with Germany’s coastal areas in the North than with Germany’s South, motivating wind power expansion in regions where the future wind is likely more reliable. Overall, our results indicate that wind energy will likely remain a reliable energy source.
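A minimal sketch of the modelling step described above, assuming wind speeds taken from a climate-model grid cell and matched observed power generation as training data; the synthetic data, kernel choice, and hyperparameters are illustrative and are not the paper's configuration.

```python
# Minimal sketch: Gaussian-process regression from climate-model wind speed to
# power output (illustrative configuration, not the paper's exact setup).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed training data: wind speeds (m/s) from a CMIP6 grid cell and the
# corresponding observed power generation (MW) at that location (synthetic here).
rng = np.random.default_rng(0)
wind_speed = rng.uniform(0, 25, size=(200, 1))
power = np.clip((wind_speed.ravel() - 3) ** 3, 0, 1500) + rng.normal(0, 30, 200)

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=100.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(wind_speed, power)

# Predict power (with uncertainty) for wind speeds from a future scenario run,
# e.g. SSP2-4.5; aggregating such predictions over time gives yearly totals.
future_wind = np.linspace(0, 25, 50).reshape(-1, 1)
mean_power, std_power = gp.predict(future_wind, return_std=True)
```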
Limited research has explored the delivery of sustainable design in higher education globally. Therefore, the aim of this paper is to investigate educational practices on the topic. Through an online survey, we investigated numerous aspects of units of study covering topics related to sustainable design, with a focus on contents, teaching methods, and educational objectives. The survey was accessed by almost 400 educators in the field of sustainable design. The data show that a variety of teaching methods are used, with a critical role played by project-based learning in addition to traditional lectures. Most respondents rated all investigated intended learning outcomes as relevant or very relevant. In terms of the contents and methods treated by the respondents, product eco-design and design for X are the most frequently taught. Educational approaches and teaching objectives are largely unaffected by the discipline of the degree in which units of study are taught. In terms of contents, design degrees include approaches to sustainable design at the spatio-social level more frequently than engineering degrees do.
The Erdős–Simonovits stability theorem is one of the most widely used theorems in extremal graph theory. We obtain an Erdős–Simonovits-type stability theorem in multi-partite graphs. Unlike the Erdős–Simonovits stability theorem, our stability theorem in multi-partite graphs says that if the number of edges of an $H$-free graph $G$ is close to that of the extremal graphs for $H$, then $G$ has a well-defined structure but may be far away from the extremal graphs for $H$. As applications, we strengthen a theorem of Bollobás, Erdős, and Straus and solve, in a stronger form, a conjecture posed by Han and Zhao concerning the maximum number of edges in multi-partite graphs that do not contain vertex-disjoint copies of a clique.
In this study, we introduce a real-time pose estimation method for a class of mobile robots with rectangular bodies (e.g., the common automatic guided vehicles), by integrating odometry and RGB-D images. First, a lightweight object detection model is designed based on the visual information. Then, a pose estimation algorithm is proposed based on the depth value variations within the target region, which exhibit specific patterns due to the robot’s three-dimensional geometry and the observation perspective (termed “differentiated depth information”). To improve the robustness of object detection and pose estimation, a Kalman filter is further constructed by incorporating odometry data. Finally, a series of simulations and experiments are conducted to demonstrate the method’s effectiveness. The experiments show that the proposed algorithm achieves a speed of over 20 frames per second (FPS) together with good estimation accuracy on a mobile robot equipped with an NVIDIA Jetson Nano Developer Kit.
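The fusion step can be pictured with a textbook Kalman filter over a planar pose state [x, y, θ], where odometry increments drive the prediction and the RGB-D-based pose estimate enters as the measurement. This is a generic sketch with assumed noise covariances, not the filter derived in the paper.

```python
# Generic sketch: Kalman filter fusing odometry with a vision-based pose
# measurement for a planar pose state [x, y, theta] (assumed covariances).
import numpy as np

class PoseKalmanFilter:
    def __init__(self):
        self.x = np.zeros(3)                     # state: [x, y, theta]
        self.P = np.eye(3)                       # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005])    # odometry (process) noise
        self.R = np.diag([0.05, 0.05, 0.02])     # vision (measurement) noise

    def predict(self, odom_delta):
        """Propagate the pose with an odometry increment [dx, dy, dtheta]."""
        self.x = self.x + np.asarray(odom_delta)
        self.P = self.P + self.Q

    def update(self, vision_pose):
        """Correct with a pose measured by the RGB-D detection pipeline."""
        z = np.asarray(vision_pose)
        y = z - self.x                                   # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi      # wrap the angle
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)                    # Kalman gain (H = I)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```

In use, `predict` would run at the odometry rate and `update` whenever the detection pipeline returns a pose, which is what makes the estimate robust to intermittent or noisy detections.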
We consider the hypergraph Turán problem of determining $ex(n, S^d)$, the maximum number of facets in a $d$-dimensional simplicial complex on $n$ vertices that does not contain a simplicial $d$-sphere (a homeomorph of $S^d$) as a subcomplex. We show that if there is an affirmative answer to a question of Gromov about sphere enumeration in high dimensions, then $ex(n, S^d) \geq \Omega (n^{d + 1 - (d + 1)/(2^{d + 1} - 2)})$. Furthermore, this lower bound holds unconditionally for 2-LC (locally constructible) spheres, which includes all shellable spheres and therefore all polytopes. We also prove an upper bound on $ex(n, S^d)$ of $O(n^{d + 1 - 1/2^{d - 1}})$ using a simple induction argument. We conjecture that the upper bound can be improved to match the conditional lower bound.
Kinematically redundant parallel mechanisms (PMs) have attracted extensive attention from researchers due to their advantages in avoiding singular configurations and expanding the reachable workspace. However, kinematic redundancy introduces multiple inverse kinematics solutions, leading to uncertainty in the mechanism’s motion state. Therefore, this article proposes a method to optimize the inverse kinematics solutions based on motion/force transmission performance. By dividing the kinematically redundant PM into hierarchical levels and decomposing the redundancy, the transmission wrench screw systems of general redundant limbs and closed-loop redundant limbs are obtained. Then, input, output, and local transmission indices are calculated to evaluate the motion/force transmission performance of such mechanisms. To address the problem of multiple inverse kinematics solutions, the local optimal transmission index is employed as a criterion to select the optimal motion/force transmission solution corresponding to a specific pose of the moving platform. A comparison of performance atlases before and after optimization demonstrates that the optimized inverse kinematics solutions enlarge the reachable workspace and significantly improve the motion/force transmission performance of the mechanism.
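Schematically, the selection criterion amounts to scoring each candidate inverse-kinematics solution by its local transmission index (LTI) at the commanded platform pose and keeping the maximiser. The `local_transmission_index` callable below is a hypothetical placeholder for the screw-theoretic computation developed in the article.

```python
# Schematic sketch: choosing among multiple inverse-kinematics solutions by the
# local transmission index (LTI). The index evaluator is a placeholder; the
# article derives it from the transmission wrench screw systems of the limbs.
from typing import Callable, Sequence

def select_best_ik_solution(
    ik_solutions: Sequence,              # candidate joint configurations
    pose,                                # target pose of the moving platform
    local_transmission_index: Callable,  # hypothetical LTI evaluator
):
    """Return the IK solution maximising motion/force transmission at `pose`."""
    scored = [(local_transmission_index(q, pose), q) for q in ik_solutions]
    best_lti, best_q = max(scored, key=lambda item: item[0])
    return best_q, best_lti
```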
QuickSelect (also known as Find), introduced by Hoare ((1961) Commun. ACM 4 321–322), is a randomized algorithm for selecting a specified order statistic from an input sequence of $n$ objects, or rather their identifying labels, usually known as keys. The keys can be numeric or symbol strings, or indeed any labels drawn from a given linearly ordered set. We discuss various ways in which the cost of comparing two keys can be measured, and we measure the efficiency of the algorithm by the total cost of such comparisons.
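For readers unfamiliar with Hoare's Find, here is a minimal QuickSelect that also tallies comparison cost, charging one unit per key partitioned against the pivot; the unit-cost rule and the three-way partition are illustrative simplifications, not the cost models analysed in the paper.

```python
# Minimal QuickSelect (Hoare's Find) that tallies comparison cost, charging one
# unit per key partitioned against the pivot (illustrative cost model only).
import random

def quickselect(keys, m, counter):
    """Return the m-th smallest key (0-indexed); counter[0] accumulates cost."""
    if len(keys) == 1:
        return keys[0]
    pivot = random.choice(keys)
    smaller, equal, larger = [], [], []
    for k in keys:
        counter[0] += 1            # one unit of cost for partitioning this key
        if k < pivot:
            smaller.append(k)
        elif k > pivot:
            larger.append(k)
        else:
            equal.append(k)
    if m < len(smaller):
        return quickselect(smaller, m, counter)
    if m < len(smaller) + len(equal):
        return pivot
    return quickselect(larger, m - len(smaller) - len(equal), counter)

cost = [0]
sample = [random.random() for _ in range(1000)]
minimum = quickselect(sample, 0, cost)   # selecting the minimum (QuickMin)
print(minimum, cost[0] / len(sample))    # total cost scaled by the sample size
```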
We define and discuss a closely related algorithm known as QuickVal and a natural probabilistic model for the input to this algorithm; QuickVal searches (almost surely unsuccessfully) for a specified population quantile $\alpha \in [0, 1]$ in an input sample of size $n$. Call the total cost of comparisons for this algorithm $S_n$. We discuss a natural way to define the random variables $S_1, S_2, \ldots$ on a common probability space. For a general class of cost functions, Fill and Nakama ((2013) Adv. Appl. Probab. 45 425–450) proved under mild assumptions that the scaled cost $S_n / n$ of QuickVal converges in $L^p$ and almost surely to a limit random variable $S$. For a general cost function, we consider what we term the QuickVal residual:
\begin{equation*} \rho_n \,{:\!=}\, \frac{S_n}{n} - S. \end{equation*}
The residual is of natural interest, especially in light of the previous analogous work on the sorting algorithm QuickSort (Bindjeme and Fill (2012) 23rd International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods for the Analysis of Algorithms (AofA'12), Discrete Mathematics and Theoretical Computer Science Proceedings, AQ, Association: Discrete Mathematics and Theoretical Computer Science, Nancy, pp. 339–348; Neininger (2015) Random Struct. Algorithms 46 346–361; Fuchs (2015) Random Struct. Algorithms 46 677–687; Grübel and Kabluchko (2016) Ann. Appl. Probab. 26 3659–3698; Sulzbach (2017) Random Struct. Algorithms 50 493–508). In the case $\alpha = 0$ of QuickMin with unit cost per key-comparison, we are able to calculate, à la Bindjeme and Fill for QuickSort (Bindjeme and Fill (2012) 23rd International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods for the Analysis of Algorithms (AofA'12), Discrete Mathematics and Theoretical Computer Science Proceedings, AQ, Association: Discrete Mathematics and Theoretical Computer Science, Nancy, pp. 339–348), the exact (and asymptotic) $L^2$-norm of the residual. We take the result as motivation for the scaling factor $\sqrt{n}$ for the QuickVal residual for general population quantiles and for general cost. We then prove in general (under mild conditions on the cost function) that $\sqrt{n}\,\rho_n$ converges in law to a scale mixture of centered Gaussians, and we also prove convergence of moments.
The rise of artificial intelligence is challenging the foundations of intellectual property. In AI versus IP: Rewriting Creativity, law professor Robin Feldman offers a balanced perspective as she explains how artificial intelligence (AI) threatens to erode all of intellectual property (IP) – patents, trademarks, copyrights, trade secrets, and rights of publicity. Using analogies to the Bridgerton fantasy series and the Good Housekeeping 'Seal of Approval,' Professor Feldman also offers solutions to ensure a peaceful coexistence between AI and IP. And if you've ever wanted to understand just how modern AI programs like ChatGPT, Claude, Gemini, Grok, Meta AI, and others work, AI versus IP: Rewriting Creativity explains it all in simple language, no math required. AI and IP can coexist, Feldman argues, but only if we fully understand them and only with considerable effort and forethought.
This handbook offers an important exploration of generative AI and its legal and regulatory implications from interdisciplinary perspectives. The volume is divided into four parts. Part I provides the necessary context and background to understand the topic, including its technical underpinnings and societal impacts. Part II probes the emerging regulatory and policy frameworks related to generative AI and AI more broadly across different jurisdictions. Part III analyses generative AI's impact on specific areas of law, from non-discrimination and data protection to intellectual property, corporate governance, criminal law and more. Part IV examines the various practical applications of generative AI in the legal sector and public administration. Overall, this volume provides a comprehensive resource for those seeking to understand and navigate the substantial and growing implications of generative AI for the law.
Data Rights in Transition maps the development of data rights that formed and reformed in response to the socio-technical transformations of the postwar twentieth century. The authors situate these rights, with their early pragmatic emphasis on fair information processing, as different from and less symbolically powerful than utopian human rights of older centuries. They argue that, if an essential role of human rights is 'to capture the world's imagination', the next generation of data rights needs to come closer to realising that vision – even while maintaining their pragmatic focus on effectiveness. After a brief introduction, the sections that follow focus on socio-technical transformations, emergence of the right to data protection, and new and emerging rights such as the right to be forgotten and the right not to be subject to automated decision-making, along with new mechanisms of governance and enforcement.
An original family of labelled sequent calculi $\mathsf {G3IL}^{\star }$ for classical interpretability logics is presented, modularly designed on the basis of Verbrugge semantics (a.k.a. generalised Veltman semantics) for those logics. We prove that each of our calculi enjoys excellent structural properties, namely admissibility of weakening, contraction and, more relevantly, cut. A complexity measure of the cut is defined by extending the notion of range previously introduced by Negri with respect to a labelled sequent calculus for Gödel–Löb provability logic, and a cut-elimination algorithm is discussed in detail. To our knowledge, this is the most extensive and structurally well-behaved class of analytic proof systems for modal logics of interpretability currently available in the literature.
Achieving Zero Hunger by 2030, a United Nations Sustainable Development Goal, requires resilient food systems capable of securely feeding billions. This article introduces the Food Systems Resilience Score (FSRS), a novel framework that adapts a proven resilience measurement approach to the context of food systems. The FSRS builds on the success of the Community Flood Resilience Measurement Tool, which has been used in over 110 communities, by applying its five capitals (natural, human, social, financial, and manufactured) and four qualities (robustness, redundancy, resourcefulness, and rapidity) framework to food systems. We define food system resilience as the capacity to ensure adequate, appropriate, and accessible food supply to all, despite various disturbances and unforeseen disruptions. The FSRS measures resilience across multiple dimensions using carefully selected existing indicators, ensuring broad applicability and comparability. Our methodology includes rigorous technical validation to ensure reliability, including optimal coverage analysis, stability checks, and sensitivity testing. By providing standardized metrics and a comprehensive assessment of food system resilience, this framework not only advances research but also equips policymakers with valuable tools for effective interventions. The FSRS enables comparative analysis between countries and temporal tracking of resilience changes, facilitating targeted strategies to build and maintain resilient national food systems. This work contributes to the global effort toward long-term food security and sustainability.
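The aggregation idea can be pictured with a deliberately simplified composite: indicator values (assumed already normalised to [0, 1]) are averaged within each of the five capitals and then across capitals. The indicator names, values, and equal weighting below are invented for illustration and are not the published FSRS methodology.

```python
# Illustrative sketch of a capitals-based composite score (simplified; not the
# published FSRS aggregation). Indicator names and values are hypothetical and
# assumed normalised to [0, 1].
from statistics import mean

indicators = {
    "natural":      {"arable_land_share": 0.62, "water_stress_inverse": 0.48},
    "human":        {"undernourishment_inverse": 0.81, "ag_labour_skills": 0.55},
    "social":       {"safety_net_coverage": 0.44},
    "financial":    {"ag_credit_access": 0.37, "insurance_penetration": 0.29},
    "manufactured": {"storage_capacity": 0.58, "road_density": 0.66},
}

capital_scores = {cap: mean(vals.values()) for cap, vals in indicators.items()}
composite_score = mean(capital_scores.values())  # equal weights across capitals
```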
This chapter delves into the concept of paradata with the dual aim of linking paradata’s notable complexity to its broad utility and of providing groundwork for exploring paradata in the subsequent chapters of this volume. The concept of paradata is investigated in several ways, including explaining the etymology of paradata and reviewing paradata definitions in survey research, archaeology, and heritage visualisation research – three domains where paradata use is most well established. The chapter then moves on to discuss metadata and provenance data, two key related terms that are used to further interrogate the concept of paradata. The chapter shows that the concept of paradata encompasses a range of meanings and definitions, and that these share several common characteristics and correspondences, but also notable differences. To conclude, the chapter outlines two approaches to grasping and utilising the concept of paradata. The many definitions of and approaches to paradata are discussed as being, in some respects, an obstacle to understanding and employing the concept. The chapter, however, also underlines the complexities of the concept of paradata as a useful resource when building connectivities across its many possible domains of use and application.
This chapter examines the transformative effects of generative AI (GenAI) on competition law, exploring how GenAI challenges traditional business models and antitrust regulations. The evolving digital economy, characterised by advances in deep learning and foundation models, presents unique regulatory challenges due to market power concentration and data control. This chapter analyses the approaches adopted by the European Union, United States, and United Kingdom to regulate the GenAI ecosystem, including recent legislation such as the EU Digital Markets Act, the AI Act, and the US Executive Order on AI. It also considers foundation models’ reliance on key resources, such as data, computing power, and human expertise, which shape competitive dynamics across the AI market. Challenges at different levels, including infrastructure, data, and applications, are investigated, with a focus on their implications for fair competition and market access. The chapter concludes by offering insights into the balance needed between fostering innovation and mitigating the risks of monopolisation, ensuring that GenAI contributes to a competitive and inclusive market environment.