The complexity involved in developing and deploying artificial intelligence (AI) systems in high-stakes scenarios may result in a “liability gap,” under which it becomes unclear who is responsible when things go awry. Scholarly and policy debates about the gap and its potential solutions have largely been theoretical, with little effort put into understanding the general public’s views on the subject. In this chapter, we present two empirical studies exploring laypeople’s perceptions of responsibility for AI-caused harm. First, we study the proposal to grant legal personhood to AI systems and show that it may conflict with laypeople’s policy preferences. Second, we investigate how people divide legal responsibility between users and developers of machines in a variety of situations and find that, while both are expected to pay legal damages, laypeople expect developers to bear the largest share of the liability in most cases. Our examples demonstrate how empirical research can help inform future AI regulation and provide novel lines of research to ensure that this transformative technology is regulated and deployed in a more democratic manner.
Various factors are considered when designing a floorplan layout, including the plan’s outer boundary, room shape and size, adjacency, privacy, and circulation space, among others. While graph-theoretic approaches have proven effective for floorplan generation, existing algorithms generally focus on defining the boundary of the plan or different room shapes, and have paid little attention to designing circulation space within a floorplan. Yet circulation design in architectural planning is a crucial factor affecting the functionality and efficiency of areas within a building. This paper presents a graph-theoretic approach for integrating circulation within a floorplan. In this study, we use plane graphs to represent floorplans and develop graph algorithms to incorporate various types of circulation within a floorplan, as follows:
i. The first phase generates a spanning circulation, that is, a corridor network reaching every room, using a circulation graph.
ii. Subsequently, using an approximation algorithm, the circulation space is minimized, that is, a minimum circulation space covering all the rooms is generated, thereby enhancing space utilization in the floorplan.
iii. Furthermore, customized circulations are generated to cater to user preferences, distinguishing between public and private spaces within the floorplan.
In addition to the theoretical framework, we have implemented our algorithms in Python and developed a user-friendly graphical user interface (GUI), enabling seamless integration of our algorithms into architectural design processes.
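To make the graph-theoretic framing concrete, the following minimal Python sketch (using networkx) treats rooms as nodes and shared walls as edges; the spanning tree and greedy dominating set below stand in for the paper's circulation-graph and approximation algorithms and are illustrative, not the authors' implementation.

```python
import networkx as nx

def spanning_circulation(adjacency: nx.Graph) -> nx.Graph:
    # A corridor network reaching every room: any spanning tree of the
    # room-adjacency graph touches each room at least once.
    return nx.minimum_spanning_tree(adjacency)

def minimized_circulation(adjacency: nx.Graph) -> set:
    # A small set of rooms beside which corridor space is placed so that
    # every room touches the circulation: a dominating set, found
    # greedily here (finding a true minimum is NP-hard in general).
    return nx.dominating_set(adjacency)

# Hypothetical four-room plan with its wall adjacencies.
rooms = nx.Graph([("living", "kitchen"), ("living", "bed"),
                  ("kitchen", "bath"), ("bed", "bath")])
print(sorted(spanning_circulation(rooms).edges()))
print(minimized_circulation(rooms))
```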
The digital age, characterized by the rapid development and ubiquitous nature of data analytics and machine learning algorithms, has ushered in new opportunities and challenges for businesses. As the digital evolution continues to reshape commerce, it has empowered firms with unparalleled access to in-depth consumer data, thereby enhancing the implementation of a variety of personalization strategies. These strategies utilize sophisticated machine learning algorithms capable of inferring personal preferences, which can better tailor products and services to individual consumers. Among these personalization strategies, the practice of personalized pricing, which hinges on leveraging customer-specific data, is coming to the forefront.
In the era of the digital economy, business operators often collect and utilize information such as consumers’ browsing history and past purchases to build user profiles and capture consumer needs. Based on such data, business operators are able to provide personalized search results for their consumers. Arguably, this mode of operation is a boon to consumers and operators alike. It provides convenience and increases efficiency for consumers, and their increased likelihood of purchasing in turn generates profits and commercial returns for the business operators. In fact, the potential for personalized services is arguably one of the reasons driving the success of e-commerce.
The generation of floor plan layouts has been extensively studied in recent years, driven by the need for efficient and functional architectural designs. Despite significant advancements, existing methods often face limitations when dealing with specific input adjacency graphs or room shapes and boundary layouts. When adjacency graphs contain separating triangles, the floor plan must include rectilinear rooms (non-rectangular rooms with concave corners). From a design perspective, minimizing corners or bends in rooms is crucial for functionality and aesthetics. In this article, we present a Python-based application called G-Drawer for automatically generating floor plans with a minimum number of bends. G-Drawer takes any plane triangulated graph (PTG) as input and outputs a floor plan layout with minimum bends. It prioritizes generating a rectangular floor plan (RFP); if an RFP is not feasible, it generates an orthogonal floor plan or an irregular floor plan. G-Drawer modifies orthogonal drawing techniques based on flow networks and applies them to the dual graph of a given PTG to generate the required floor plans. The results of this article demonstrate the efficacy of G-Drawer in creating efficient floor plans. In future work, we aim to generate dimensioned floor plans with non-rectangular rooms as well as non-rectangular boundaries. These enhancements will address both mathematical and architectural challenges, advancing the automated generation of floor plans toward more practical and versatile applications.
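The separating-triangle condition that forces bends can be checked directly. The sketch below is illustrative rather than G-Drawer's code: it enumerates triangles of a graph and reports those whose removal disconnects it, each of which rules out a purely rectangular plan.

```python
import itertools
import networkx as nx

def separating_triangles(G: nx.Graph):
    # Yield triangles whose vertex removal disconnects the graph; in a
    # plane triangulated graph each one forces a rectilinear room.
    for a, b, c in itertools.combinations(G.nodes, 3):
        if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
            H = G.copy()
            H.remove_nodes_from((a, b, c))
            if H.number_of_nodes() > 1 and not nx.is_connected(H):
                yield (a, b, c)

# Two tetrahedra glued along triangle (1, 2, 3): that triangle
# separates vertex 4 from vertex 5, so no rectangular plan exists.
G = nx.Graph([(1, 2), (2, 3), (1, 3),
              (1, 4), (2, 4), (3, 4),
              (1, 5), (2, 5), (3, 5)])
print(list(separating_triangles(G)))  # [(1, 2, 3)]
```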
We report the results of a field experiment designed to increase honest disclosure of claims at a U.S. state unemployment agency. Individuals filing claims were randomized to a message (‘nudge’) intervention, while an off-the-shelf machine learning algorithm calculated claimants’ risk of committing fraud (underreporting earnings). We study the causal effects of algorithmic targeting on the effectiveness of nudge messages: without algorithmic targeting, the average treatment effect of the messages was insignificant; in contrast, the use of algorithmic targeting revealed significant heterogeneous treatment effects across claimants. Claimants predicted by the algorithm to behave unethically were more likely to disclose earnings when receiving a message relative to a control condition, with those predicted most likely to behave unethically being almost twice as likely to disclose earnings when shown a message. In addition to providing a potential blueprint for targeting more costly interventions, our study offers a novel perspective on the use and efficiency of data science in the public sector without violating citizens’ agency. However, we caution that, while algorithms can enable tailored policy, their ethical use must be ensured at all times.
The intraclass correlation, ρ, is a parameter featured in much psychological research. Two commonly used estimators of ρ, the maximum likelihood and least squares estimators, are known to be negatively biased. Olkin and Pratt (1958) derived the minimum variance unbiased estimator of the intraclass correlation, but use of this estimator has apparently been impeded by the lack of a closed form solution. This note briefly reviews the unbiased estimator and gives a FORTRAN 77 subroutine to calculate it.
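For the ordinary product-moment correlation, the Olkin and Pratt estimator takes the form G(r) = r · ₂F₁(1/2, 1/2; (n − 2)/2; 1 − r²), and the intraclass version evaluated by the subroutine rests on an analogous hypergeometric series. As a sketch of the idea in modern terms (SciPy in place of FORTRAN 77), the product-moment case is:

```python
from scipy.special import hyp2f1

def olkin_pratt(r: float, n: int) -> float:
    # Minimum variance unbiased estimate of rho from sample correlation
    # r with n observations; the hypergeometric factor exceeds 1, so it
    # pushes the negatively biased r back up toward rho.
    return r * hyp2f1(0.5, 0.5, (n - 2) / 2, 1 - r**2)

print(olkin_pratt(0.5, 20))  # slightly above 0.5
```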
An algorithm for the Fisher-Yates exact test, using a shortcut due to Feldman and Klingler, is presented; the algorithm is quick, simple, and accurate.
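The flavour of such a shortcut can be seen in the multiplicative recurrence below: once the observed table's hypergeometric probability is computed, every more extreme table's probability follows from a simple ratio, with no further factorials. This is an illustrative sketch, not the published algorithm.

```python
from math import comb

def fisher_exact_upper(a: int, b: int, c: int, d: int) -> float:
    # One-tailed p-value for the 2x2 table [[a, b], [c, d]] with fixed
    # margins, summing toward larger a (stronger observed association).
    n = a + b + c + d
    p = comb(a + b, a) * comb(c + d, c) / comb(n, a + c)  # observed table
    total = p
    while b > 0 and c > 0:
        # next table (a+1, b-1, c-1, d+1): update p by a simple ratio
        # instead of recomputing factorials
        p *= b * c / ((a + 1) * (d + 1))
        a, b, c, d = a + 1, b - 1, c - 1, d + 1
        total += p
    return total

print(fisher_exact_upper(8, 2, 1, 5))
```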
A general solution for the weighted orthonormal Procrustes problem is offered in terms of the least squares criterion. For the two-dimensional case, this solution always gives the global minimum; for the general case, an algorithm is proposed that must converge, although not necessarily to the global minimum. In general, the algorithm yields a solution for the problem of how to fit one matrix to another under the condition that the dimensions of the latter matrix first are allowed to be transformed orthonormally and then weighted differentially, which is the task encountered in fitting analogues of the IDIOSCAL and INDSCAL models to a set of configurations.
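The unweighted step underlying such algorithms has a closed-form solution via the singular value decomposition. The sketch below shows only that classical building block, not the authors' weighted procedure, in which a diagonal weight matrix applied after the orthonormal transformation is updated iteratively.

```python
import numpy as np

def orthonormal_procrustes(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # Orthonormal T minimizing ||X - Y T|| in the least squares sense:
    # with SVD Y'X = U S V', the minimizer is T = U V'.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

rng = np.random.default_rng(0)
Y = rng.standard_normal((10, 3))
T_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random rotation
X = Y @ T_true
print(np.allclose(Y @ orthonormal_procrustes(X, Y), X))  # True
```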
In educational and psychological measurement, when short test forms are used, the asymptotic normality of the maximum likelihood estimator of the person parameter of item response models does not hold. As a result, hypothesis tests or confidence intervals of the person parameter based on the normal distribution are likely to be problematic. Inferences based on the exact distribution, on the other hand, do not suffer from this limitation. However, the computation involved in the exact distribution approach is often prohibitively expensive. In this paper, we propose a general framework for constructing hypothesis tests and confidence intervals for IRT models within the exponential family based on the exact distribution. In addition, an efficient branch and bound algorithm for calculating the exact p-value is introduced. The type-I error rate and statistical power of the proposed exact test as well as the coverage rate and the lengths of the associated confidence interval are examined through a simulation. We also demonstrate its practical use by analyzing three real data sets.
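For the simplest member of this family, the Rasch model, the exact approach is easy to state: the raw score is sufficient for the person parameter, and its exact distribution under H0: θ = θ0 is the convolution of the item Bernoullis. The sketch below covers only this base case; the branch and bound algorithm in the paper is what makes the general exponential-family setting tractable.

```python
import numpy as np

def score_pmf(theta: float, difficulties) -> np.ndarray:
    # Exact pmf of the raw score: convolve the Bernoulli pmfs of the
    # item responses, where P(correct) follows the Rasch model.
    pmf = np.array([1.0])
    for b in difficulties:
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def exact_p_value(theta0: float, difficulties, observed_score: int) -> float:
    # Upper-tail exact p-value of the observed raw score under theta0.
    return score_pmf(theta0, difficulties)[observed_score:].sum()

print(exact_p_value(0.0, [-1.0, -0.5, 0.0, 0.5, 1.0], 4))
```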
This chapter describes three main numerical methods to model hazards which cannot be simplified by analytical expressions (as covered in Chapter 2): cellular automata, agent-based models (ABMs), and system dynamics. Both cellular automata and ABMs are algorithmic approaches while system dynamics is a case of numerical integration. Energy dissipation during the hazard process is a dynamic process, that is, a process that evolves over time. Reanalysing all perils from a dynamic perspective is not always justified, since a static footprint (as defined in Chapter 2) often offers a reasonable approximation for the purpose of damage assessment. However, for some specific perils, the dynamics of the process must be considered for their proper characterization. A variety of dynamic models is presented here, for armed conflicts, blackouts, epidemics, floods, landslides, pest infestations, social unrest, stampedes, and wildfires. Their implementation in the standard catastrophe (CAT) model pipeline is also discussed.
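As a minimal illustration of the cellular-automaton idea, the toy wildfire model below spreads fire across a grid of flammable cells in synchronous steps; its grid size, tree density, and spread rule are illustrative choices, not a model from the chapter.

```python
import numpy as np

EMPTY, TREE, FIRE = 0, 1, 2

def step(grid: np.ndarray) -> np.ndarray:
    # One synchronous update: fire spreads to trees in the four
    # adjacent cells, and burning cells burn out.
    burning = grid == FIRE
    spread = np.zeros_like(burning)
    spread[1:, :] |= burning[:-1, :]   # shift the burning mask
    spread[:-1, :] |= burning[1:, :]   # in all four directions
    spread[:, 1:] |= burning[:, :-1]
    spread[:, :-1] |= burning[:, 1:]
    new = grid.copy()
    new[(grid == TREE) & spread] = FIRE
    new[burning] = EMPTY
    return new

rng = np.random.default_rng(1)
grid = np.where(rng.random((20, 20)) < 0.6, TREE, EMPTY)  # 60% tree density
grid[10, 10] = FIRE                                       # ignition point
for _ in range(15):
    grid = step(grid)
print(int((grid == FIRE).sum()), "cells burning after 15 steps")
```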
Dietary strategies for weight loss typically place an emphasis on achieving a prescribed energy intake. Depending on the approach taken, this may be achieved by restricting certain nutrients or food groups, which may lower overall diet quality. Various studies have shown that a higher quality diet is associated with better cardiovascular (CV) health outcomes (1). This study aimed to evaluate the effect of an energy restricted diet on diet quality, and associated changes in cardiovascular risk factors. One hundred and forty adults (42 M:98 F, 47.5 ± 10.8 years, BMI 30.7 ± 2.3 kg/m2) underwent an energy restricted diet (30% reduction) with dietary counselling for 3 months, followed by 6 months of weight maintenance. Four-day weighed food diaries captured dietary data at baseline, 3 and 9 months and were analysed using a novel algorithm to score diet quality (based on the Dietary Guideline Index, DGI) (2). Total DGI scores ranged from 0–120, with sub-scores for consumption of core (0–70) and non-core foods (0–50). For all scores, a higher score or increase reflects better diet quality. The CV risk factors assessed included blood pressure (SBP and DBP) and fasting lipids (total cholesterol (TC), high- and low-density lipoprotein cholesterol (HDL-C, LDL-C), and triglycerides (TAG)). Mixed model analyses were used to determine changes over time (reported as mean ± standard error), and Spearman rho (rs) evaluated associations between DGI score and CV risk factors. Dietary energy intake was significantly restricted at 3 months (−3222 ± 159 kJ, P < 0.001, n = 114) and 9 months (−2410 ± 167 kJ, P < 0.001, n = 100), resulting in significant weight loss (3 months −7.0 ± 0.4 kg, P < 0.001; 9 months −8.2 ± 0.4 kg, P < 0.001). Clinically meaningful weight loss (>5% body mass) was achieved by 81% of participants by 3 months. Diet quality scores were low at baseline (49.2 ± 1.5) but improved significantly by 3 months (74.7 ± 1.6, P < 0.001), primarily due to reductions in the consumption of non-core (i.e. discretionary) foods (Core sub-score +4.0 ± 0.7, Non-core sub-score +21.3 ± 1.6, both P < 0.001). These improvements were maintained at 9 months (Total score 71.6 ± 1.7, P < 0.001; Core sub-score +4.4 ± 0.7 from baseline, P < 0.001; Non-core sub-score +17.9 ± 1.6 from baseline, P < 0.001). There were significant inverse relationships between changes in Total DGI score and changes in DBP (rs = −0.268, P = 0.009), TC (rs = −0.298, P = 0.004), LDL-C (rs = −0.224, P = 0.032) and HDL-C (rs = −0.299, P = 0.004), but not SBP and TAG, at 3 months. These data emphasise the importance of including diet quality as a key component when planning energy restricted diets. Automated approaches will enable researchers to evaluate subtle changes in diet quality and their effect on health outcomes.
This Handbook brings together a global team of private law experts and computer scientists to examine the interface between private law and AI, addressing issues such as whether existing private law can meet the challenges of AI, and whether and how private law should be reformed to reduce the risks of AI while retaining its benefits.
Sun Tzu's Art of War is widely regarded as the most influential military and strategic classic of all time. Through 'reverse engineering' of the text structured around 14 Sun Tzu 'themes,' this rigorous analysis furnishes a thorough picture of what the text actually says, drawing on Chinese-language analyses; historical, philological, and archaeological sources; traditional commentaries; computational ideas; and strategic and logistics perspectives. Building on this anchoring, the book provides a unique roadmap of Sun Tzu's military and intelligence insights and their applications to strategic competitions in many times and places worldwide, from Warring States China to contemporary US/China strategic competition and other 21st-century competitions involving cyber warfare, computing, other high-tech conflict, espionage, and more. Simultaneously, the analysis offers a window into Sun Tzu's limitations and blind spots relevant to managing 21st-century strategic competitions with Sun-Tzu-inspired adversaries or rivals.
Through textually grounded "reverse engineering" of Sun Tzu’s ideas, this study challenges widely held assumptions. Sun Tzu is more straightforward, less "crafty," than often imagined. The concepts are more structural, less aphoristic. The fourteen themes approach provides a way of addressing Sun Tzu’s tendency to speak to multiple, often shifting, audiences at once ("multivocality"). It also sheds light on Sun Tzu’s limitations, including a pervasive zero-sum mentality, a focus mostly on conventional warfare, and a narrow view of human nature. Sun Tzu’s enduring value is best sought in the text’s extensive attention to warfare’s information aspects, where Sun Tzu made timeless contributions with implications for modern information warfare and especially its human aspects (e.g., algorithm sabotage by subverted insiders). The text points to opportunities for small, agile twenty-first-century strategic actors to exploit cover provided by modern equivalents of Sun Tzu’s "complex terrain" (digital systems, social networks, complex organizations, and complex statutes) to run circles around large, sluggish, established institutional actors, reaping great profit from applying Sun Tzu’s insights.
There are two Sun Tzu verses which, by Sun Tzu’s own affirmations, may be seen as summations of the active ingredient of his way of war. One is Theme #6’s centerpiece verse III.4 (Passage #6.1).
Among the rhetorical pleas that follow most instances of public dissatisfaction is the call for more or better accountability. Accountability is a lauded notion, a “golden concept” that is considered widely as critical to the success of democratic government. Such pleas, I will argue, are misplaced. Rather than starting from the premise of accountability as an idea that no one can be against, I consider the possibility that accountability undermines the very notion it ostensibly promotes: self-government. The concept of accountability in modern political theory is tied more closely to the emergence of an impersonal administrative state than it is to the hopeful horizon of a democratic one. In practice and in theory, it is a concept of irresponsibility, a technological approach to government that provides the comforts of impersonal rationality.