What is a counterrevolution? And how often do they occur? Chapter 2 is devoted to answering these foundational questions. According to this book, a counterrevolution is an irregular effort in the aftermath of a successful revolution to restore a version of the pre-revolutionary political regime. The chapter begins by explaining and contextualizing this definition. It reviews the various alternative understandings of counterrevolution that have been invoked by both scholars and activists. It then explains the decision to adopt a definition of counterrevolution as restoration and shows how this definition was operationalized in building the original dataset. The second half of the chapter lays out the main high-level findings from this dataset. About half of all revolutionary governments have faced a counterrevolutionary challenge of some type, and roughly one in five of these governments was successfully overturned. Moreover, these counterrevolutions have been distributed unevenly: the vast majority have toppled democratic revolutions, rather than ethnic or leftist ones. And counterrevolutions had for years been declining in frequency, until the last decade when this trend reversed. These descriptive findings provide the motivation for the theory developed in Chapter 3.
The nexus of artificial intelligence (AI) and memory is typically theorized as a ‘hybrid’ or ‘symbiosis’ between humans and machines. The dangers related to this nexus are subsequently imagined as tilting the power balance between its two components, such that humanity loses control over its perception of the past to the machines. In this article, I propose a new interpretation: AI, I posit, is not merely a non-human agency that changes mnemonic processes, but rather a window through which the past itself gains agency and extends into the present. This interpretation holds two advantages. First, it reveals the full scope of the AI–memory nexus. If AI is an interactive extension of the past, rather than a technology acting upon it, every application of it constitutes an act of memory. Second, rather than locating AI’s power along familiar axes – between humans and machines, or among competing social groups – it reveals a temporal axis of power: between the present and the past. In the article’s final section, I illustrate the utility of this approach by applying it to the legal system’s increasing dependence on machines, which, I claim, represents not just a technical but a mnemonic shift, where the present is increasingly falling under the dominion of the past – embodied by AI.
The advent of the digital age has brought about significant changes in how information is created, disseminated and consumed. Recent developments in the use of big data and artificial intelligence (AI) have brought all things digital into sharp focus. Big data and AI have played pivotal roles in shaping the digital landscape. The term ‘big data’ describes the vast amounts of structured and unstructured data generated every day. Advanced analytics on big data enable businesses and organisations to extract valuable insights, make informed decisions and enhance various processes. AI, on the other hand, has brought about a paradigm shift in how machines learn, reason and perform tasks traditionally associated with human intelligence. Machine-learning algorithms, a subset of AI, process vast datasets to identify patterns and make predictions. This has applications across diverse fields, including health care, finance, marketing and more. The combination of big data and AI has fuelled advancements in areas such as personalised recommendations, predictive analytics and automation in all aspects of our day-to-day lives.
This article is concerned with the history of eugenic sterilisation in Britain through the 1920s and 1930s. In this period, the Eugenics Society mounted an active but ultimately unsuccessful campaign to legalise the voluntary surgical sterilisation of various categories of people, including those deemed ‘mentally deficient’ or ‘defective’. We take as our explicit focus the propaganda produced and disseminated by the Eugenics Society as part of this campaign, and especially the various kinds of data mobilised therein. The parliamentary defeat of the Society’s Sterilisation Bill in July 1931 marks, we argue, a significant shift in the tactics of the campaign. Before this, the Eugenics Society framed sterilisation as a promising method for eradicating, or at least significantly reducing the incidence of, inherited ‘mental defect’. Subsequently, they came to emphasise the inequality of access to sterilisation between rich and poor, (re)positioning theirs as an egalitarian campaign aimed at extending a form of reproductive agency to the disadvantaged. These distinct phases of the campaign were each supported by different kinds of propaganda material, which in turn centred on very different types of data. As the campaign evolved, the numbers and quantitative rhetoric which typified earlier propaganda materials gave way to a more qualitative approach, which notably included the selective incorporation of the voices of people living with hereditary ‘defects’. In addition to exposing a rupture in the Eugenics Society’s propagandistic data practices, this episode underscores the need to further incorporate disabled dialogues and perspectives into our histories of eugenics.
Edited by
Rebecca Leslie, Royal United Hospitals NHS Foundation Trust, Bath; Emily Johnson, Worcester Acute Hospitals NHS Trust, Worcester; Alex Goodwin, Royal United Hospitals NHS Foundation Trust, Bath; Samuel Nava, Severn Deanery, Bristol
This chapter presents material on statistics, from the basic principles of the classification of data to the normal distribution. We go on to discuss the null hypothesis, types of error and the different methods of statistical analysis appropriate to the type of data set presented.
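The ideas the chapter covers, a null hypothesis, a test statistic and the risk of error, can be made concrete with a minimal sketch. The function and the two samples below are hypothetical illustrations, not taken from the chapter; the sketch computes Welch's t statistic for two independent samples, one common way of testing a null hypothesis of equal means when variances may differ.

```python
from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical measurements from two groups
a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [4.2, 4.4, 4.1, 4.3, 4.5]

t = welch_t(a, b)
print(round(t, 2))  # a large |t| casts doubt on the null hypothesis of equal means
```

In practice the statistic would be compared against a t distribution to obtain a p-value; rejecting a true null is a Type I error, failing to reject a false one a Type II error.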
As payment is increasingly becoming part of social media, it takes on the operating, governance, and revenue models of the Silicon Valley tech industry. At the same time, new platform payments “ride the rails” of long-standing infrastructures. These conditions create opportunities for surveillance and infrastructural power, as well as new unanticipated harms for users. As the future of money is imagined, it is wise to contemplate a payment ecosystem that is – like social media more broadly – increasingly private, siloed, and rife with scams.
The production of knowledge in public health involves a systematic approach that combines imagination, science, and social justice, based on context, rigorous data collection, analysis, and interpretation to improve health outcomes and save lives. Based on a comprehensive understanding of health trends and risk factors in populations, research priorities are established. Rigorous study design and analysis are critical to establish causal relationships, ensuring that robust evidence-based interventions guide beneficial health policies and practice. Communication through peer-reviewed publications, community outreach, and stakeholder engagement ensures that insights are co-owned by potential beneficiaries. Continuous monitoring and feedback loops are vital to adapt strategies based on emerging outcomes. This dynamic process advances public health knowledge and enables effective interventions. The process of addressing a complex challenge of preventing HIV infection in young women in sub-Saharan Africa, a demographic with the least social power but the highest HIV risk, highlights the importance of inclusion in knowledge generation, enabling social change through impactful science.
Data has become central in various activities during armed conflict, including the identification of deceased persons. While the use of data-based methods can significantly improve the efficiency of efforts to identify the dead and inform their families about their fate, data can equally enable harm. This article analyzes the obligations that arise for States regarding the processing of data related to the identification of deceased persons. Despite being drafted long before the “age of data”, several international humanitarian law (IHL) provisions can be considered to give rise to obligations which protect those whose data is used to identify the dead from certain data-based harms. However, some of these protections rest on a data protection-friendly interpretation of more general obligations, and many only apply in international armed conflict. Against this background, the article suggests that further analysis is desirable of how international human rights law and domestic or regional data protection law could strengthen the case for data protection where IHL does not contain specific duties to protect data.
A fundamental problem in descriptive epidemiology is how to make meaningful and robust comparisons between different populations, or within the same population over different periods. The problem has several dimensions. First, the data we have to work with (e.g. incident and prevalent cases, and deaths) is rarely usable in its raw form. We must therefore transform it in some way before undertaking the comparison itself. Second, our data usually tells us about fundamentally different attributes of the populations we are seeking to compare. If we are only ever interested in comparing any one of these attributes at a time (mortality, for example), then one of several simple and well-established transformations is all that is typically required. Increasingly, however, epidemiologists are being asked to bring these attributes together into more integrated and meaningful comparisons.
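One of the "simple and well-established transformations" alluded to above is direct standardisation, which reweights stratum-specific rates to a common standard population so that two populations with different structures can be compared on one attribute. The sketch below is a hypothetical illustration (the function name, age bands, rates and standard population are all invented for the example), not the article's own method.

```python
def directly_standardised_rate(stratum_rates, standard_weights):
    """Weighted average of stratum-specific rates, weighted by a standard population."""
    total = sum(standard_weights.values())
    return sum(stratum_rates[s] * standard_weights[s] / total for s in stratum_rates)

# Hypothetical death rates per 1,000 person-years by age band
pop_a = {"0-39": 1.0, "40-64": 5.0, "65+": 50.0}
pop_b = {"0-39": 1.2, "40-64": 6.0, "65+": 48.0}

# Hypothetical standard population (counts per age band)
standard = {"0-39": 500, "40-64": 300, "65+": 200}

print(directly_standardised_rate(pop_a, standard))
print(directly_standardised_rate(pop_b, standard))
```

Because both populations are reweighted to the same standard, the resulting rates are comparable even when the crude rates are not; comparing several attributes at once, as the passage notes, requires more integrated measures than this.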
In this chapter, we review approaches to modeling climate-related migration: the multiple goals of modeling efforts and why modeling climate-related migration is of interest to researchers, commonly used sources of climate and migration data and their associated challenges, and the various modeling methods in use. The chapter is not meant to be an exhaustive inventory of approaches to modeling climate-related migration, but rather is intended to present the reader with an overview of the most common approaches and possible pitfalls associated with those approaches. We end the chapter with a discussion of some of the future directions and opportunities for data and modeling of climate-related migration.
The fast-growing clinical research outsourcing over the course of the past decade has attracted the active involvement of private equity buyers to the “business” of Clinical Research Organizations (CROs). The fragmentation of pharmaceutical services through CROs offers an opportunity to lower the financial risk of drug development in a highly critical industry. Justified by the operational efficiency that integrating assets and services to decrease costs even further may represent, private equity has recently demonstrated an appetite for vertical integrations to expand the geographic, patient, and service reach of clinical research sites. Since private equity managers have the incentive to maximize financial returns, gaining greater access and control of patient identification, enrollment and retention, and quality protocols may pose significant equity-based risks to biopharmaceutical stakeholders, fundamentally, drug end users.
The proliferation of CROs and their plans to gain further operational control of clinical trials pose a potential tradeoff between drug development efficiency and equitable development of, and access to, drugs and research knowledge. This concern is most visible in the recent efficiency-based scholarship debate on shareholder and stakeholder governance. The discussion centers on whether new corporate leaders and investors might be paying too much attention to shareholder value and not enough to stakeholder value. A balance, the debate seems to suggest, is critical to generating long-term shareholder wealth, shifting the long-standing “shareholder value” corporate governance model toward an equity-based, “deliver value” type of commitment.
By observing the lay of the land of CROs in the United States and using as a theoretical framework the increasingly influential stakeholderism approach to corporate governance, this essay offers a critical view of the use of stakeholder factors in stewardship decisions made by CROs’ corporate leaders and private equity investors. Ultimately, it aims to identify key areas of organizational and governance concerns in medical research and the shortcomings they represent to equitable access to medical innovation.
This chapter explains how ubiquitous child sexual abuse really is, and outlines its consequences. It is intended to provide a context for the story, over four years, of one of the most successful hunting teams in Britain. As such, it is rich in numerical data.
Trust in the validity of published work is of fundamental importance to scientists. Confirmation of validity is more readily attained than addressing the question of whether fraud was involved. Suggestions are made for key stakeholders (institutions and companies, journals, and funders) as to how they might enhance trust in science, both by accelerating the assessment of data validity and by segregating that effort from investigation of allegations of fraud.
Most political systems consist of multiple layers. Yet datasets are predominantly situated at a single territorial tier, encouraging methodological nationalism, regionalism, and localism. We present three new integrated datasets that include electoral, institutional, ideological, and government composition data on the country and regional level (RD|CED, RED and RPSD). With this data, we cover 337 country elections on the regional level, 2,226 regional elections, and 2,825 regional cabinets in 365 regions of 21 countries from 1941 to 2019, accounting for 800 political parties and their ideological positions. Combined, these data complement and extend existing datasets and facilitate the study of political interaction across levels. Data are available at http://multi-level-cross-level-politics.eu/ or can be accessed through the Harvard Dataverse repository. We conclude with an agenda for future cross-level studies.
This conversation addresses the impact of artificial intelligence and sustainability aspects on corporate governance. The speakers explore how technological innovation and sustainability concerns will change the way companies and financial institutions are managed, controlled and regulated. By way of background, the discussion considers the past and recent history of crises, including financial crises and the more recent COVID-19 pandemic. Particular attention is given to the field of auditing, investigating the changing role of internal and external audits. This includes a discussion of the role of regulatory authorities and how their practices will be affected by technological change. Further attention is given to artificial intelligence in the context of businesses and company law. As regards digital transformation, five issues are considered: data, decentralisation, diversification, democratisation and disruption.
This conversation centres around innovation in the financial services sector and the related regulatory supervision. Three ‘Techs’ are especially relevant: FinTech, RegTech and SupTech. ‘FinTech’ combines the words ‘financial’ and ‘technology’ and refers to technological innovation in the delivery of financial services and products. ‘RegTech’ joins ‘regulatory’ and ‘technology’ and describes the use of technology by businesses to manage and comply with regulatory requirements. ‘SupTech’, finally, unites the words ‘supervisory’ and ‘technology’ to refer to the use of technology by supervisory authorities such as financial services authorities to perform their functions. Particular approaches presented in this session include regulatory sandboxes to promote innovative technology in the financial sector, automated data analysis, the collection and analysis of granular data, digital forensics and internet monitoring systems. The speakers also address collaboration between financial institutions and supervisory authorities, for example, in the creation of data collection formats and data sharing.
In his 2019 essay, Arthur Kleinman laments that medicine has become ever-competent at managing illness, yet caring for those who are ill is increasingly out of practice. He opines that the language of ‘the soul’ is helpful to those practicing medicine, as it provides an important counterbalance to medicine’s technical rationality that avoids the existential and spiritual domains of human life. His accusation that medicine has become soulless merits considering, yet we believe his is the wrong description of contemporary medicine. Where medicine is disciplined by technological and informational rationalities that risk coercing attention away from corporealities and toward an impersonal, digital order, the resulting practices expose medicine to becoming not soulless but excarnated. Here we engage Kleinman in conversation with Franco Berardi, Charles Taylor, and others to ask: Have we left behind the body for senseless purposes? Perhaps medicine is not proving itself to be soulless, but rather senseless, bodyless – the any-occupation of excarnated souls. If so, the dissension of excarnation and the recovery of touching purpose seems to us to be an apparent need within the contemporary and increasingly digitally managed and informationally ordered medical milieu.
This book explores the intersection of data sonification (the systematic translation of data into sound) and musical composition. Section 1 engages with existing discourse and offers an original model (the sonification continuum) which provides perspectives on the practice of sonification for composers, science communicators and those interested in this rapidly emerging field. Section 2 engages with the sonification process itself, exploring techniques, models of translation, data fidelity, analogic and symbolic data mapping, temporality and the listener experience. In Section 3 these concepts and techniques are all made concrete in the context of a selection of the author's projects (2004–2023). Finally, some reasons are offered on how sonification as a practice might enrich composition, communication, collaboration, and a sense of connection.
This chapter establishes what it means to do discourse analysis. This is done by defining discourse analysis and providing examples of discourse. The chapter offers a practical overview of how the discourse in discourse analysis fits within the research process. The examples of discourse that are introduced in this chapter are grammar, actions and practices, identities, places and spaces, stories, ideologies, and social structures. After reading the chapter, readers will know what discourse analysis is; understand that there are many types of discourse; know that discourse is an object of study; and understand how an object of study fits within a research project.
Chapter 5 addresses a major demographic puzzle concerning thousands of New York slaves who seem to have gone missing in the transition from slavery to freedom, and the chapter examines whether, and how, slaves were sold South. The keys to solving this puzzle include estimates of common death rates, census undercounting, changing gender ratios in the New York black population, and, most importantly, a proper interpretation of the 1799 emancipation law and its effects on how the children of slaves were counted in the census. Given an extensive analysis of census data, with various demographic techniques for understanding how populations change over time, I conclude that a large number of New York slaves (between 1,000 and 5,000) were sold South, but not likely as many as some previous historians have suggested. A disproportionate number of these sold slaves came from Long Island and Manhattan.