Research in the natural sciences follows a cycle from exploratory work with tentative findings (descriptive and correlational studies) to more definitive causal claims. This chapter argues that the social sciences (particularly political science) would benefit from a wider application of this research-cycle model. Creating space in top journals for tentative (novel) conclusions, rather than only precise estimation of causal effects, could yield stronger causal explanations in the long run. This division of labor within the research cycle would require that a work’s contribution be evaluated according to its location within the cycle. Causal explanations remain the goal of research in this model, but they are facilitated by an openness to preliminary and tentative findings.
Our goal in this book has been to examine the production of knowledge in the social science disciplines with an eye to improvements that might be implemented, now or in the foreseeable future. To bring this matter into view, we adopted a systemic (macro-level) framework – as distinguished from the micro-level framework typical of social science methodology, which focuses primarily on the production and vetting of individual studies.
New methods and tools have emerged over the past decade to address pervasive problems of publication bias, p-hacking, and lack of reproducibility. This chapter reviews some of these advances, considering the strengths and shortcomings of each. Meta-analysis, study registration, pre-analysis plans, improved disclosure policies, and open data are all considered.
In recent years, methods of data collection in the social sciences have expanded in range and sophistication. New data sources (many of them hosted on the web) and data-harvesting techniques (e.g., web crawlers) have emerged, enabling big-data projects of a sort previously unimaginable (Steinert-Threlkeld 2018).
Social psychology has undergone a crisis in which concerns about replicability have cast a specter of radical doubt over widely reported findings in the field. This chapter uses the crisis in social psychology as a case study to articulate some of the challenges surrounding replication that bedevil efforts to improve replicability more broadly in the social sciences. It does so with an eye toward policy implications, but with the caveat that research heterogeneity means that there are no simple prescriptions applicable to all fields or research methods.
There remain several important hurdles to replication in the social sciences. While a number of recent initiatives have arisen to address these problems, such efforts still lack organization and coordination. This chapter lays out a proposal to coordinate all efforts that fall under the broad umbrella of ‘reappraisal’: a reappraisal institute, a central directory of reappraisals, a reappraisal scorecard, and additional supporting infrastructure.
Arbitrary word limits on journal articles constrain scholarly research, particularly as opportunities for publishing monographs are decreasing. This chapter argues that these limits should be relaxed or even eliminated. Removing arbitrary length limits would improve efficiency by allowing authors to spend less time worrying about the limits and strategizing ways to evade them, resulting in higher-quality articles. This claim is supported by observational evidence.
Despite broad advances for women across society, a persistent gender bias remains in the academy, and in the social sciences in particular. This chapter describes a series of interconnected institutional practices that prescribe gender roles in the academy and cement women’s inferior status. These practices extend from the nuts and bolts of teaching and mentorship to performance evaluation, compensation, and opportunities to engage in university leadership. A broad body of evidence supports these arguments. Addressing this issue is not only a question of justice, but also one of advancing knowledge.
Amidst rising concern about publication bias, pre-registration and results-blind review have grown rapidly in use. Yet discussion both of the problem of publication bias and of potential solutions has been remarkably narrow in scope: publication bias has been understood largely as a problem afflicting quantitative studies, while pre-registration and results-blind review have been applied almost exclusively to experimental or otherwise prospective research. This chapter examines the potential contributions of pre-registration and results-blind review to qualitative and quantitative retrospective research. First, the chapter provides an empirical assessment of the degree of publication bias in qualitative political science research. Second, it elaborates a general analytic framework for evaluating the feasibility and utility of pre-registration and results-blind review for confirmatory studies. Third, through a review of published studies, the chapter demonstrates that much observational—and, especially, qualitative—political science research displays features that would make for credible pre-registration. The chapter concludes that pre-registration and results-blind review have the potential to enhance the validity of confirmatory research across a range of empirical methods, while elevating exploratory work by making it harder to disguise discovery as testing.