Chapter 1 introduces the main arguments, findings, and contributions of the book. Counterrevolution is a subject that has often been overlooked by scholars, even as counterrevolutions have been responsible for establishing some of history’s most brutal regimes, for cutting short experiments in democracy and radical change, and for perpetuating vicious cycles of conflict and instability. The chapter reveals some of the most important statistics from the book’s original dataset of counterrevolution worldwide. These statistics raise a number of puzzling questions, which motivate the theoretical argument about counterrevolutionary emergence and success. After previewing this argument, the chapter discusses the main contributions of the book, including to theories of revolution, democratization, and nonviolence; to ongoing debates about Egypt’s revolution and the failures of the 2011 Arab Spring; and to our understanding of the present-day resurgence of authoritarianism worldwide. It finishes by laying out the multi-method research strategy and providing an overview of the chapters to come.
The introduction presents the main arguments developed in the book and shows how the letters and petitions found in the military archive provide the basis for arguing that the military was an institution in the first half of the nineteenth century. The nearly one thousand case studies supply the information that makes it possible to understand the Peruvian armed forces. The chapter also engages the historiographical debate over the notion of caudillos: although most of the new republics have been seen as controlled by armed men on horseback, the military can be described as an institution that, while colonial in origin, was transformed throughout the wars of independence. The way in which individuals became members of the armed forces is analyzed in detail, showing that a social system of protection for those who belonged to it developed out of the colonial systems. Comparisons are made with the cases of the United States, France, Spain, and the rest of Latin America. The section ends with a description of the book’s structure and an overview of each chapter.
This article presents an innovative workflow for the acquisition and storage of archaeological data. The system is based on open-source software to enhance method replication and media accessibility. QGIS software is used as the central platform, connected to a spatial database developed in PostgreSQL and managed with the SQL and Python programming languages. The aim is to achieve an efficient, flexible, and reproducible digital method for data collection and management that can be applied to surface archaeological surveys. During the implementation and development of the method, we have recorded over 4,600 archaeological remains in two different structures with traces of Upper Paleolithic activity in the Lower Gallery of La Garma (Cantabria, Spain). After 18 months of continuous work, the results obtained demonstrate the usefulness and versatility of this procedure, which can be adapted to each context and to the specific needs of each researcher. Our goal is not simply to systematize archaeological documentation, as traditionally proposed, but to establish a simple and robust method for data collection and preservation, accessible to any user. Its fully open-source approach aims to promote a model that is nurtured by the use and contributions of the research community.
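To make the described setup concrete, the minimal sketch below shows how a point-located surface find might be registered from Python in a PostGIS-enabled PostgreSQL database and then displayed as a layer in QGIS. The database name, table schema, coordinates, and coordinate reference system are illustrative assumptions, not the authors’ actual configuration.

```python
# Minimal sketch (not the authors' actual schema): registering a surface find
# in a PostGIS-enabled PostgreSQL database of the kind the workflow describes.
# Connection details, table name, columns, and the SRID are assumptions.
import psycopg2

conn = psycopg2.connect(dbname="la_garma", user="archaeo",
                        password="secret", host="localhost")
cur = conn.cursor()

# Enable spatial support and create a simple table for point-located remains.
cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS finds (
        id          serial PRIMARY KEY,
        label       text,
        material    text,
        recorded_on date DEFAULT CURRENT_DATE,
        geom        geometry(Point, 25830)  -- ETRS89 / UTM 30N, assumed CRS
    );
""")

# Insert one record; QGIS can then load the 'finds' table directly as a
# vector layer alongside the other survey layers.
cur.execute(
    "INSERT INTO finds (label, material, geom) "
    "VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 25830));",
    ("LG-0001", "lithic", 447512.3, 4805231.8),
)

conn.commit()
cur.close()
conn.close()
```

Keeping the geometry in the database rather than in project files is what makes the workflow reproducible: any collaborator with database access sees the same layer, and SQL or Python scripts can query and validate the records independently of the GIS client.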
In recent years, ontological security studies (OSS) have developed an impressive breadth of empirical applications and depth of theoretical advancements. However, despite increasing disciplinary diversity, methodological differences in OSS and the resulting implications have not yet been discussed. Drawing on Jackson’s taxonomy of scientific methodologies, this article shows that OSS is characterized by considerable methodological diversity cutting across existing distinctions in the field. Greater focus on this diversity is important, as (tacit) underlying methodological assumptions have significant implications concerning the types of knowledge claims that can be advanced. Providing the first systematic discussion of methodological questions in OSS, this article outlines the contours of grounding OSS in neopositivist, critical realist, reflexivist, and analyticist methodologies and provides examples thereof. It then discusses the implications emerging from different methodologies in terms of (1) the production and evaluation of valid knowledge claims about ontological (in)security, (2) the perception of and dealing with ontological and epistemological challenges in the concept of ontological (in)security, and (3) the critical potential of OSS. While highlighting the potential of OSS grounded in analyticism, this article ultimately emphasizes the inherent value of methodological pluralism structured around a common vocabulary enabling meaningful conversations – both within OSS and with International Relations more broadly.
I raise two concerns about Bergmann’s philosophical methodology: the first is a parity problem for his intuition-based “autodidactic” approach; the second is a tension between that approach and the commonsense tradition in which he situates it. I then use his approach to reflect on the limits of rational argument and set it alongside an alternative that likewise emphasizes the personal nature of philosophical inquiry while remaining more neutral about the rational standing of competing intuitions.
UK Biobank (UKB) is a large-scale, prospective resource offering significant opportunities for mental health research. The data include genetic and biological measures, healthcare linkage, and mental health enhancements. Challenges arise from incomplete linkage of some sources and from incomplete coverage of the enhancements, which were also collected at different times post-baseline. We searched for publications using UKB for mental health research from 2016 to 2023 to describe existing use and inspire future use. Papers were classified by mental health topic, ‘additional’ aspects, and the data used to define the mental health topic. We identified 480 papers, with 338 focusing on mental health disorder topics (affective, anxiety, psychotic, multiple, and transdiagnostic). The most commonly studied disorder was depression (41%). The most common single method of ascertaining mental disorder status was the Mental Health Questionnaire (26%), with genetic risk, for example using polygenic risk scores, also frequent (21%). Common additional aspects included brain imaging, gene–environment interaction, and the relationship with physical health. The review demonstrates the value of UKB to mental health research. We explore its strengths and weaknesses, producing resources informed by the review. A strength is its flexibility: conventional epidemiological studies are present, but also genomics, imaging, and other tools for understanding mental health. A major weakness is selection effects. UKB continues to hold potential, especially with additional data continuing to become available.
The last decades have seen important progress in the economic analysis of institutions, with increasing concern about the need to ‘unbundle’ this concept and about the diversity of situations it covers. This is so because of the complexity of the systems the concept intends to capture and the ambiguity of definitions often perceived as catch-all ideas without a clear connection to a research strategy. This essay contributes to the literature emphasising that overcoming these difficulties requires a theoretical framework identifying and characterising distinct institutional layers. The content of this framework is substantiated through the analysis of the nature and role of the long-ignored intermediate layer of ‘meso-institutions’. Meso-institutions designate devices and transmission mechanisms linking general rules, norms and beliefs established at the macro-institutional level with their perception, adaptation, and implementation (or challenge) by the actors populating the micro-level. Operationalising this framework relies on a research strategy that proceeds from a ‘substantive theory’ of institutions to the collection and processing of ‘empirical evidence’ through the development of ‘auxiliary theories’ designed to capture specific institutional objects. References to several empirical studies support the relevance of this approach.
Chapter 3 probes the meaning of the word ‘equality’. It outlines a multidimensional, substantive conception of equality, as adopted by the UN Committee on the Rights of Persons with Disabilities. But it notes the Act’s lack of engagement with some aspects of this ideal. The Act’s scope is both more limited and more individualised than this substantive concept might demand. Making sense of what law might intend to contribute to meeting equality ideals is difficult but necessary, as it can provide a benchmark against which to evaluate the law. With this in mind, this chapter proposes five potential objectives, which are guided by the Act’s scope. These range from changing attitudes and shaping perceived social norms through to influencing behaviours or compensating victims of negative treatment. These potential objectives are used as a framework for assessment of law’s contribution throughout the rest of the book.
Authenticity plays key methodological and normative roles for early Heidegger: as he puts it, to ‘work out the question of Being adequately … we must make an entity – the inquirer – transparent in his own Being’. But the precise nature of those roles, and how Heidegger differs from other thinkers of authenticity, is much less clear. This chapter considers three possible interpretations of authenticity found in the contemporary literature. On a transcendental reading, authenticity is what allows us to first recognize reasons as such and act in light of norms at all. On a unity reading, authenticity unifies Dasein’s commitments, and thereby grants a special narrative or judgmental coherence to my life. Finally, on the structural reading, ultimately defended here, authenticity is an inchoate awareness of the structural features of normative space and of Dasein’s own way of being. It is only this interpretation, it is argued, that can make sense of Heidegger’s text and the centrality of authenticity within his early work.
In this chapter, I examine arguments that have been or might be used to establish or defend the distinction that Heidegger draws between entities (things that are) and the being of entities (that by virtue of which those things are). I find these arguments for the ontological difference to fail – due largely to the self-concealing nature of being, which makes it difficult to distinguish being from entities. At the same time, I see something positive in these troubles for the ontological difference, that is, they serve as prompts to question the meaning of being.
TOTs are inherently subjective experiences; only the experiencer can really know whether one is happening and what it feels like as it does. As such, methodologies and their nuances are extremely important. This chapter covers the various methods that have been employed to measure TOTs. The prospecting method – now the most widely used method of studying TOTs – was first developed by Brown and McNeill (1966) in their seminal paper. Present-day use of the method commonly employs word definitions, general-information questions, or faces of famous people. The method can also be adapted to new learning. It is also important to determine how accurate TOTs are at predicting later memory, and we discuss approaches to doing so. Another approach to studying TOTs involves diary studies, in which people are asked to record their naturally occurring TOTs and their qualities and characteristics over a set period of time. How TOT rates should be computed remains an important issue. Depending on one’s theoretical approach, it can make sense to divide the number of TOTs by the number of all unrecalled items, or it may be better to divide the number of TOTs by the number of TOTs plus correctly recalled items.
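As a concrete illustration of these two conventions, the short sketch below computes both rates from a single hypothetical prospecting session; the counts are invented for the example.

```python
# Illustrative sketch (hypothetical counts): the two TOT-rate conventions
# described above, computed for one participant's prospecting session.
n_tots       = 12  # trials on which a TOT was reported
n_recalled   = 30  # trials answered correctly
n_unrecalled = 48  # trials not answered correctly (includes the TOT trials)

# Convention 1: TOTs as a proportion of all unrecalled items.
rate_unrecalled = n_tots / n_unrecalled           # 12 / 48 = 0.25

# Convention 2: TOTs as a proportion of TOTs plus correctly recalled items.
rate_retrieval = n_tots / (n_tots + n_recalled)   # 12 / 42 ≈ 0.286

print(f"TOTs / unrecalled items:        {rate_unrecalled:.3f}")
print(f"TOTs / (TOTs + recalled items): {rate_retrieval:.3f}")
```

The two denominators answer different questions: the first treats a TOT as one possible outcome of retrieval failure, while the second treats it as one possible outcome of successful or near-successful access to the target.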
This chapter considers the shift in Holocaust historiography since the end of the Cold War to focus on eastern Europe and, increasingly, on neglected areas such as south-eastern Europe. Noteworthy too is the increasing diversity in methodologies, including digital humanities, gender history, family history, microhistory, transnational history, and spatial, geographical, and material approaches. Each of these strands approaches sources differently, meaning that what historians consider to be a usable source has also changed a great deal. This diversity reflects also the changing face of the historical profession itself and the world in which it operates. Our awareness has grown that the Holocaust was a continent-wide crime committed by willing participants everywhere. The historiography is moving in two opposite directions: towards more microhistories, but also towards greater understanding of the continent-wide scale of the Holocaust, with a particular focus on the hitherto unacknowledged extent of participation in the killings by non-Germans all across Europe. How to reconcile them and bring all of this research together is the challenge of our hyper-productive times.
Ethics guides for political science instruct researchers to avoid retraumatization of human subjects (for example, APSA 2022; Fujii 2012). Meanwhile, human subject research on sensitive topics, including violence and repression, has increased. This paper clarifies what is at stake when we talk about research participant distress and provides recommendations for handling concerns about trauma and retraumatization. It offers a new framework for trauma-informed political science research. This framework reflects the conclusions of the empirical literature on the risk of distress in different research settings as well as critical normative perspectives on consequentialist research ethics. In particular, it identifies two approaches for trauma-informed political science research: one for research in less vulnerable contexts and one for research in contexts that are vulnerable in terms of limited resources, ongoing suffering, and/or geopolitical instability. The framework details best practices for informed consent, debriefing, and more within each approach. The paper also addresses the special challenges of political violence research. While the literature suggests that retraumatization as such is rarely a major risk of research, the paper highlights that a narrowly defined concept of retraumatization can lead us to neglect other trauma-informed concerns.
We review experimental research on judicial decision-making with a focus on methodological issues. First, we argue that only experiments with relatively high realism, in particular real judges as study subjects, plausibly generalize to judicial decision-making in the real world. Most experimental evidence shows lay subjects to behave very differently from expert judges in specifically legal tasks. Second, we argue that studying the effects of non-law is not a substitute for studying the effects of law since large unexplained residuals could be attributed to either. Direct experimental studies of the law effect are few and find it to be puzzlingly weak. Third, we review the substantive findings of experiments with judges, distinguishing between studies investigating legal and nonlegal factors and paying close attention to the nature of the experimental task.
Despite a notable increase during recent decades in the application of anthropological approaches and archaeometric analyses in Neolithic and Bronze Age archaeology in China, studies relating to the post-Qin period of Chinese history (after 221 BC) continue to focus on social centres and elite tombs, and to rely on historical texts to validate archaeological discoveries. This article examines the extent to which archaeometric analyses might be applied more beneficially in post-Qin contexts and explores current barriers to the wider undertaking of these methods within Chinese archaeology.
Chapter 4 is devoted to the methodology applied in this book. First, I motivate the choice of the twenty-four CPs investigated in this study and explain how they were retrieved. Second, I ask how we can best operationalize the concepts of semantic scope and semantic specialization, which are relevant for testing the first and second hypotheses (Section 3.4). This presentation is followed by an introduction to the three different types of semantic specialization investigated in this study and an account of how they are operationalized. Here, I focus a) on the modifier slot (Section 4.3.1), b) on the determiner slot (Section 4.3.2) and c) on the wider assertive or non-assertive contexts in which the CPs occur. The chapter concludes with an outline of the corpora in use, the time periods investigated and the statistical tests applied (Section 4.4).
The introduction presents the aims, methodology, genre, audience, and content of the book. My focus is on Tolkien’s literary ‘theory’, as informed by his own self-exegesis, that is, by Tolkien’s “experiment” and “observation” of his own literary work and experience as a writer. This explains my methodology, which is primarily inductive and exegetical, grounded as it is on a series of close readings of passages from Tolkien’s works. At the same time, Tolkien’s ‘theory’, however idiosyncratic, cannot be detached from the literary discourses of his age, nor from the traditions in which it is anchored. Despite its scope and specialism, the book is not only addressed to Tolkien readers and scholars, but also to the educated reader of English literature, who might have strong biases about the literary merits of Tolkien’s enterprise. In the book I will thus illustrate the depth and complexity of Tolkien’s literature by focusing on its meta-literary sophistication, that is, its self-reflexive focus on the nature and purpose of literature.
The rise of empirical methods has had a polarising effect on legal studies in Europe. On the one hand, quantitative empiricists have frequently dismissed traditional doctrinal scholarship as unscientific and its insights as unreliable. On the other hand, many doctrinal scholars are apprehensive about the perceived displacement of domain expertise from legal research caused by the empirical turn. To bridge the gap between the two camps and address their respective concerns, we propose a wider adoption of expert coding as a methodology for legal research. Expert coding is a method for systematic parsing and representation of phenomena such as legal principles in a structured form, using researchers’ subject matter expertise. To facilitate the uptake of expert coding, we provide a step-by-step guide that addresses not only the coding process but also fundamental prerequisites such as conceptualisation, operationalisation and document selection. We argue that this methodological framework leverages legal scholars’ expertise in a more impactful way than traditional doctrinal analyses. We illustrate each step and methodological principle with examples from European Union law.
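As a purely hypothetical illustration of what expert coding might produce, the sketch below represents two coded judgments in structured form and writes them out for later analysis. The variables, ordinal scale, and case identifiers are invented for the example and are not drawn from the article’s coding scheme.

```python
# Hypothetical illustration only: one way an expert-coded observation might be
# represented in structured form. Variables, values, and identifiers are invented.
from dataclasses import dataclass, asdict
import csv

@dataclass
class CodedJudgment:
    case_id: str                  # document identifier from the selected corpus
    coder: str                    # the legal expert who applied the scheme
    proportionality_applied: bool # operationalised yes/no variable
    intensity_of_review: int      # ordinal scale, e.g. 1 (light) to 3 (strict)
    notes: str                    # free-text justification for the codes

coded = [
    CodedJudgment("case_001", "coder_A", True, 3,
                  "Explicit three-step proportionality test applied."),
    CodedJudgment("case_002", "coder_B", False, 1,
                  "Deference to national authorities; no test applied."),
]

# Writing the coded data to CSV keeps it reusable for later statistical analysis.
with open("expert_codes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(coded[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(row) for row in coded)
```

Recording a free-text justification alongside each coded value preserves the doctrinal reasoning behind the code, which is precisely the domain expertise the method is meant to leverage.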
If our aim is to pluralise the ‘subjects, methods, and aims’ of the academic study of international organisations, then one fairly obvious route to follow is that of historicisation. Historians of international organisations have taken a variety of avenues to relate the creation and design of international institutions. Building on their work, this chapter on the one hand offers another tool for the methodological palette the present volume offers; on the other hand, and more specifically, I want to reflect on what international organisations are from the point of view of their making. To do so, I zoom in on one important moment in the history of modern international organisations: the 1865 international telegraph commission in Paris that convened to determine the scope and purpose of the first formal and permanent international organisation, the International Telegraph Union. I approach this case through the lens of micro-politics, combining biographical and sociological methods. Methodologically, I study international organisations by means of biographical membership analysis; theoretically, I argue that international organisations cannot be fully understood in separation from the situated political motives of their makers.