Woolcock focuses on the utility of qualitative case studies for addressing the decision-maker’s perennial external validity concern: What works there may not work here. He asks how to generate the facts that are important in determining whether an intervention can be scaled and replicated in a given setting. He focuses our attention on three categories: 1) causal density, 2) implementation capability, and 3) reasoned expectations about what can be achieved by when. Experiments are helpful for sorting out causally simple outcomes like the impact of deworming, but they are less insightful when there are many causal pathways, feedback loops, and exogenous influences. Nor do they help sort out the effect of mandate, management capacity, and supply chains, or the way results will materialize – whether some will materialize before others or increase or dissipate over time. Analytic case studies, Woolcock argues, are the main method available for assessing the generalizability of any given intervention.
Achen aims to correct what he perceives as an imbalance in favor of randomized controlled trials – experiments – within contemporary social science. “The argument for experiments depends critically on emphasizing the central challenge of observational work – accounting for unobserved confounders – while ignoring entirely the central challenge of experimentation – achieving external validity,” he writes. Using the mathematics behind randomized controlled trials to make his point, he shows that once this imbalance is corrected, we are closer to Cartwright’s view (Chapter 2) than to the current belief that RCTs constitute the gold standard for good policy research. Achen concludes: “Causal inference of any kind is just plain hard. If the evidence is observational, patient consideration of plausible counterarguments, followed by the assembling of relevant evidence, can be, and often is, a painstaking process.” Well-structured qualitative case studies are one important tool; experiments, another.
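Achen's own derivation is not reproduced here, but a minimal sketch of the standard potential-outcomes identity he builds on makes the asymmetry concrete. Under random assignment, treatment status \(D_i\) is independent of the potential outcomes \((Y_i(1), Y_i(0))\), so the observed difference in group means identifies the average treatment effect, but only for the experimental sample:

\[
E[Y_i \mid D_i = 1] - E[Y_i \mid D_i = 0] = E[Y_i(1) - Y_i(0)] \equiv \tau_{\text{sample}}.
\]

Nothing in this identity guarantees that \(\tau_{\text{sample}}\) equals the effect in a different population or setting; that gap is precisely the external-validity problem Achen argues is too often ignored.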
Whereas quantitative researchers often share their research designs and data and encourage one another to rerun their analyses, qualitative researchers cannot as easily do so. They can, however, enhance reliability in other ways. Moravcsik introduces new practices designed to enhance three dimensions of research transparency: data transparency, which stipulates that researchers should publicize the data and evidence on which their research rests; analytic transparency, which stipulates that researchers should publicize how they interpret and analyze that evidence to generate descriptive and causal inferences; and production transparency, which stipulates that social scientists should publicize the broader set of design choices underlying the research. To meet these standards, Moravcsik couples technology with the practice of discursive footnotes common in law journals, discussing the rationale for a digitally enabled appendix of annotated source materials, known as Active Citation or Annotation for Transparent Inquiry.
“Process tracing and program evaluation, or contribution analysis, have much in common, as they both involve causal inference on alternative explanations for the outcome of a single case,” Bennett says. “Evaluators are often interested in whether one particular explanation – the implicit or explicit theory of change behind a program – accounts for the outcome. Yet they still need to consider whether exogenous non-program factors … account for the outcome, whether the program generated the outcome through some process other than the theory of change, and whether the program had additional or unintended consequences, either good or bad.” Bennett discusses how to develop a process-tracing case study to meet these demands and walks the reader through several key elements of this enterprise, including types of confounding explanations and the basics of Bayesian analysis.
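As a minimal illustration of the Bayesian logic involved (the numbers here are hypothetical, not Bennett's), suppose a process tracer holds a prior \(P(H) = 0.5\) that the program's theory of change \(H\) explains the outcome, and then uncovers a piece of evidence \(E\) that is likely if \(H\) is true, \(P(E \mid H) = 0.8\), but unlikely under rival explanations, \(P(E \mid \neg H) = 0.1\). Bayes' theorem updates the prior:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.1 \times 0.5} \approx 0.89.
\]

The more sharply the evidence discriminates between the theory of change and its rivals, the larger the update, which is why Bennett emphasizes cataloguing confounding explanations before weighing the evidence.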
Experiments are a central methodology in the social sciences. Scholars from every discipline regularly turn to them, and practitioners rely on experimental evidence in evaluating social programs, policies, and institutions. This book is about how to “think” about experiments. It argues that designing a good experiment is a slow-moving process, given the host of considerations involved, and that this runs counter to the fast-moving temptations now available in the social sciences. The book discusses the place of experiments in the social science process, the assumptions underlying different types of experiments, the validity of experiments, the application of different designs, how to arrive at experimental questions, the role of replications in experimental research, and the steps involved in designing and conducting “good” experiments. The goal is to ensure that social science research remains driven by important substantive questions and fully exploits the potential of experiments in a thoughtful manner.
Chapter 2 starts by placing experiments in the scientific process – experiments are useful only in the context of well-motivated questions, thoughtful theories, and falsifiable hypotheses. The author then turns to sampling and measurement, since careful attention to these topics, though often neglected by experimentalists, is imperative. The remainder of the chapter offers a detailed discussion of causal inference, which is used to motivate an inclusive definition of “experiments.” The author views this as more than a pedantic exercise: careful consideration of approaches to causal inference reveals the often implicit assumptions that underlie all experiments. The chapter concludes by touching on the different goals experiments may have and the basics of analysis. It serves as a reminder of the underlying logic of experimentation and the mindset one should bring to designing experiments. A central point concerns the importance of counterfactual thinking, which pushes experimentalists to think carefully about the precise comparisons needed to test a causal claim.
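A worked statement of that counterfactual logic, in the standard Neyman–Rubin notation (a sketch assumed here, not the chapter's own exposition), defines the unit-level causal effect as

\[
\tau_i = Y_i(1) - Y_i(0),
\]

where only one of \(Y_i(1)\) or \(Y_i(0)\) is ever observed for any unit \(i\). Every experimental design is, at bottom, a strategy for estimating the average of these unit-level effects despite this missing-data problem, which is why the precise comparison built into a design matters so much.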
Chapter 6 takes up designing “good” experiments. The primary point is that, regardless of changes in how experiments are conducted, the fundamentals of conducting a sound experiment remain the same. The chapter offers a list of steps that should be taken for any design.
Chapter 4 turns to experimental designs, focusing on three that have gained prominence in social science applications over the last decade: audit field experiments, conjoint survey experiments, and lab-in-the-field experiments. These also give readers examples of a field experiment, a survey experiment, and a lab experiment, respectively – the three conventional “types” of experiments employed in the social sciences. The chapter reviews the basics of each design and provides prominent examples. Importantly, it also discusses each design's limitations and challenges – put another way, how to think about these new designs. The chapter further includes a brief overview of “public policy experiments,” prompted by the recent rise in studies of political elites, whose behavior ultimately connects to policymaking and responsiveness. The chapter makes clear that substantive questions should drive experimental design choices, not vice versa.
Chapter 1 discusses the evolution of experiments, illustrating this development through the field of political science. The author argues that the discipline currently finds itself in a new era, parts of which apply to all of the social sciences. This new era began around 2010 and reflects the confluence of experiments achieving widespread acceptance in the discipline, technological advances, and the open science movement (the latter two dynamics have affected all of the social sciences). The era introduces many opportunities but also novel challenges; ironically, the ease of conducting experiments today has the potential to undermine their quality. The author concludes the chapter by discussing the motivation for the primer and reviewing the remainder of the book.
Chapter 5 delves into the steps that occur before, during, and after an experiment – including arriving at questions to explore with an experiment, documenting the steps in the process of conducting it, and considering whether to replicate one's findings afterward. This discussion touches on the themes of the aforementioned open science movement, offering a cautionary perspective in many instances.