Data can be produced through a variety of means – from experiments and interviews to corpus construction – and can take a wide range of forms, including numerical, text, sound, and image. The chapter opens with a review of the different roles data have in research – as reality, as construction, as disruption – and proposes to conceptualize data as a process, whereby records of a phenomenon (raw data) are elicited and then transformed (transformed data). Second, the chapter proposes a classification of existing data collection methods and data types. Third, it discusses how technology has profoundly impacted data production, both by making large datasets available and by facilitating the collection of particularly rich data (e.g., big qualitative data). This allows us to understand both the opportunities and the challenges that new forms of data present for mixed-methods research.
This chapter introduces pragmatism as a process philosophy that is grounded in human activity. Pragmatism reconceptualizes the subject–object dichotomy and provides a transdisciplinary framework for creating useful knowledge. Eight pragmatist propositions for methodology are outlined: (1) Truth is in its consequences; (2) theories are tools for action; (3) research is as much about creating questions as answering questions; (4) data as a process; (5) qualitative and quantitative methods are synergistic; (6) recursively restructure big qualitative data to enable both qualitative and quantitative analyses; (7) social research creates both power and responsibility; and (8) social research should aim to expand human possibilities.
A pragmatist approach to methodology can foster insightful and impactful research. It can create knowledge that is useful, provide a framework for overcoming the qualitative–quantitative dichotomy, and conceptualize how to make the most of big qualitative data.
Taking a pragmatist approach to methods and methodology that fosters meaningful, impactful, and ethical research, this book rises to the challenge of today's data revolution. It shows how pragmatism can turn challenges, such as the abundance and accumulation of big qualitative data, into opportunities. The authors summarize the pragmatist approach to different aspects of research, from epistemology, theory, and questions to ethics, as well as data collection and analysis. The chapters outline and document a new type of mixed-methods design called 'multi-resolution research', which serves to overcome old divides between quantitative and qualitative methods. It is the ideal resource for students and researchers within the social and behavioural sciences seeking new ways to analyze large sets of qualitative data. This book is also available as Open Access on Cambridge Core.
This chapter provides a primer on the logic of Bayesian updating and shows how it is used for answering causal queries. We illustrate with applications to correlational and process-tracing inferences.
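A minimal sketch of this updating logic, using illustrative numbers that are our own assumptions rather than values from the chapter: a within-case clue K shifts belief in the hypothesis H that X caused Y via Bayes' rule, and the probative value of K comes entirely from the gap between its likelihood under H and under not-H.

```python
# Illustrative Bayesian updating for a process-tracing query (numbers are assumptions).
# H: the hypothesis that X caused Y in this case; K: a within-case clue.

def update(prior_h: float, p_k_given_h: float, p_k_given_not_h: float) -> float:
    """Return P(H | K observed) via Bayes' rule."""
    joint_h = prior_h * p_k_given_h
    joint_not_h = (1 - prior_h) * p_k_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A clue twice as likely under H as under not-H modestly raises our confidence.
posterior = update(prior_h=0.5, p_k_given_h=0.6, p_k_given_not_h=0.3)
print(f"P(H | K) = {posterior:.2f}")  # 0.67
```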
We illustrate Bayesian mixed methods with causal models through a reexamination of the models of inequality and democratization and of institutions and growth introduced in Chapter 8. We show how to use updated population models to draw both population- and case-level inferences, demonstrate situations in which learning is minimal and in which it is more substantial, and illustrate how the probative value of case-level evidence can be empirically established through model updating.
We turn to the problem of choosing between going “wide” and going “deep”: between seeking a little information about a large number of cases and studying a smaller number of cases intensively. We outline a simulation-based approach to identifying the optimal mix of breadth and depth. Simulations suggest that going deep is especially valuable where confounding is a concern, for queries about causal pathways, and where models embed strong beliefs about causal effects. We also find that there are diminishing marginal returns to each strategy and that depth often provides the greatest gains when we have cross-case evidence on only a modest number of cases.
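The flavour of such simulations can be conveyed with a toy comparison (all parameters below are assumptions, not results from the chapter): a "wide" design with many cases but only X and Y observed remains biased by confounding, while a "deep" design with far fewer cases that also observes the confounder can adjust for it.

```python
# Toy wide-vs-deep comparison under confounding (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.5

def simulate(n):
    u = rng.binomial(1, 0.5, n)             # confounder
    x = rng.binomial(1, 0.2 + 0.6 * u)      # treatment uptake depends on U
    y = TRUE_EFFECT * x + u + rng.normal(0, 1, n)
    return u, x, y

def wide_estimate(n=1000):
    """Difference in means using X and Y only (U unobserved, so confounded)."""
    _, x, y = simulate(n)
    return y[x == 1].mean() - y[x == 0].mean()

def deep_estimate(n=100):
    """Difference in means within strata of the observed confounder U."""
    u, x, y = simulate(n)
    effects = []
    for value in (0, 1):
        treated = (u == value) & (x == 1)
        control = (u == value) & (x == 0)
        if treated.any() and control.any():
            effects.append(y[treated].mean() - y[control].mean())
    return float(np.mean(effects))

wide = np.mean([wide_estimate() for _ in range(300)])
deep = np.mean([deep_estimate() for _ in range(300)])
print(f"wide, n=1000 per study: bias = {wide - TRUE_EFFECT:+.2f}")  # roughly +0.6
print(f"deep, n=100 per study:  bias = {deep - TRUE_EFFECT:+.2f}")  # roughly  0.0
```

On these assumed numbers, the smaller but deeper design recovers the effect that the larger but shallower design cannot, which is the intuition behind the chapter's finding about confounding.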
This chapter shows how to use causal models to inform the selection of cases for intensive analysis. We outline a procedure in which we predict the inferences that will be made when future data are found and use these predictions to inform case-selection strategies. We ask: Given a set of cases on which we already have data on X and Y, which cases will it be most advantageous to choose for more in-depth investigation? We show that the optimal case-selection strategy depends jointly on the model we start with and the causal question we seek to answer, and we draw out the implication that researchers should be wary of generic case-selection principles.
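One way to picture the idea of predicting inferences before the data arrive (with hypothetical priors and clue probabilities, not values from the book) is to compute, for each candidate case, the expected reduction in posterior variance from observing a future within-case clue, and to favour cases where that expected learning is largest.

```python
# Hypothetical expected-learning calculation for case selection (all numbers assumed).
# For each candidate case: a prior that X caused Y there, and the probability of
# observing a clue K if it did (p_k_h) versus if it did not (p_k_not_h).

CANDIDATES = {
    "X=1, Y=1 case": dict(prior=0.6, p_k_h=0.8, p_k_not_h=0.3),
    "X=0, Y=1 case": dict(prior=0.2, p_k_h=0.5, p_k_not_h=0.4),
}

def posterior(prior, p_k_h, p_k_not_h, clue_seen):
    """P(X caused Y | clue present or absent)."""
    like_h = p_k_h if clue_seen else 1 - p_k_h
    like_not_h = p_k_not_h if clue_seen else 1 - p_k_not_h
    num = prior * like_h
    return num / (num + (1 - prior) * like_not_h)

def expected_posterior_variance(prior, p_k_h, p_k_not_h):
    """Average posterior variance over the two possible clue realizations."""
    p_clue = prior * p_k_h + (1 - prior) * p_k_not_h
    var = lambda p: p * (1 - p)
    return (p_clue * var(posterior(prior, p_k_h, p_k_not_h, True))
            + (1 - p_clue) * var(posterior(prior, p_k_h, p_k_not_h, False)))

for label, spec in CANDIDATES.items():
    prior_var = spec["prior"] * (1 - spec["prior"])
    gain = prior_var - expected_posterior_variance(**spec)
    print(f"{label}: expected reduction in posterior variance = {gain:.3f}")
```

On these assumed numbers, investigating the X=1, Y=1 case promises far more learning than the X=0, Y=1 case; with different priors or clue probabilities the ranking can flip, which is why generic case-selection rules are risky.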
This chapter addresses the question of whether causal models can themselves be justified. We outline strategies for justifying models on the basis of prior data and so empirically grounding beliefs about the probative value of clues.
In this concluding chapter, we summarize the key payoffs of the causal-model-based approach to causal inference, point to a set of important limitations of the approach, and sketch out what we see as next steps in strengthening model-based inference.
We describe strategies for figuring out whether a model is likely doing more harm than good and for comparing the performance of different models to one another.
We provide a lay-language primer on the counterfactual model of causal inference and the logic of causal models. Topics include the representation of causal models with causal graphs and the use of causal graphs to read off relations of conditional independence among variables in a causal domain.
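As a small assumed example (not taken from the chapter): in the chain graph X → M → Y, the graph implies that X and Y are associated unconditionally but independent once we condition on M, a pattern a quick simulation can confirm.

```python
# Checking a conditional independence implied by the chain graph X -> M -> Y
# (simulated data with assumed coefficients; the graph, not the numbers, is the point).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)   # M caused by X
y = 0.8 * m + rng.normal(size=n)   # Y caused by M only

def partial_corr(a, b, control):
    """Correlation of a and b after linearly regressing out the control variable."""
    residual = lambda v: v - np.polyval(np.polyfit(control, v, 1), control)
    return np.corrcoef(residual(a), residual(b))[0, 1]

print(f"corr(X, Y)     = {np.corrcoef(x, y)[0, 1]:.3f}")  # clearly nonzero (~0.45)
print(f"corr(X, Y | M) = {partial_corr(x, y, m):.3f}")    # approximately zero
```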
This chapter illustrates how we can express theoretical ideas in the form of a causal model by translating three arguments from published social science research into models. We illustrate using Paul Pierson’s (1994) work on welfare-state retrenchment, Elizabeth Saunders’ (2011) research on military intervention strategies, and Adam Przeworski and Fernando Limongi’s (1997) study of the relationship between national wealth and democracy.