Open science practices (OSP) are transforming the landscape of social science research, fostering greater transparency, inclusiveness, and reproducibility. OSP includes providing detailed, transparent, and open information about analytic and interpretive choices (e.g., data, variables, study design, and analysis steps); preregistration of hypotheses and analysis plans before data collection; and making data and replication materials public. The movement calls for transparency in all stages of the research process and for unbiased consideration of research findings, whether they provide evidence that confirms or disconfirms theories.
In a parallel movement, Democratic Innovations (DI) has become a large and growing subfield in political science. DI is an interdisciplinary field, dedicated to understanding democratic reform through “institutions that have been specifically designed to increase and deepen citizen participation in the political decision-making process” (Smith 2009, 1). DI research convenes political philosophers, political scientists, sociologists, and practitioners who use diverse research methods to produce knowledge about the effectiveness of institutionalized and noninstitutionalized forms of democratic enterprise. Research in this field has generated significant policy impact. Increasing numbers of governments work with scholars to implement high-profile citizens’ assemblies, participatory budgeting, innovative plebiscites, and other programs (see www.participedia.net). In 2021, the European Union (EU) launched an EU-level initiative encompassing a series of citizen-led discussions (i.e., citizens’ panels) on important policy issues facing European societies (e.g., climate change, health, and the economy), which allowed diverse contributors to shape the common European future (see https://futureu.europa.eu).
Yet, to fully understand the effectiveness of these institutions, it is important to cultivate research on DI through transparent, reproducible, and ethical standards, thereby increasing trust in the findings. Spada and Ryan (2017) found that most DI studies published in top journals in political science focused on best practices, with few studying processes that the authors themselves identify as failures, alluding to the existence of a potential publication bias within the field (footnote 1). Publication bias is one of the most common forms of questionable research practices (Ioannidis et al. 2014; John, Loewenstein, and Prelec 2012), in which only statistically significant, positive and confirmatory, or exciting and novel results are published, whereas non-statistically significant (null) and negative findings (from the perspective of supporters of favored theories) remain unpublished. This leads to “an overrepresentation of both significant findings and inflated effect sizes” (Dienlin et al. 2021, 6) and generates an inaccurate body of evidence (Cooper, DeNeve, and Charlton 1997; Fanelli 2012; Miguel et al. 2014). Findings that appear counterintuitive or even mundane in light of received wisdom should be considered significant in the wider sense of advancing theory and understanding if research designs are well conceived and delivered. Publication bias can have detrimental effects on research advancements within DI, with important implications for policy making and democratic experimentation—for example, by exaggerating the effectiveness of some DI for policy making and democratically shaping public opinion. OSP can serve as a preventive measure against publication bias (Chambers 2019).
Although the use of OSP recently has been increasing in the social sciences (Ferguson et al. 2023), uptake has varied. Aspects of the movement have been variously promoted and critiqued (Ansell and Samuels 2016; Ferguson et al. 2023; Jacobs et al. 2021; Rinke and Wuttke 2021).
How prevalent are these practices within the wide interdisciplinary field of DI? Because DI is an interventionist subdiscipline, applications aimed at improving democracy may sow doubt rather than confidence if advocates find that they do not meet expectations. Therefore, tools for eradicating or exposing our own biases and for developing strong trust across research communities are paramount. A distance between OSP and DI research practices would seem prima facie odd, given overlapping concerns for democratizing and expanding knowledge and for improving competences for collective analysis and decision making.
Of course, a narrow understanding of transparency and procedures adopted in a “one-size-fits-all” manner would be detrimental to the quality of important types of social research—for example, where confidentiality and anonymity of respondents or investigators require it (Jacobs et al. 2021). DI research is characterized by significant methodological plurality and much case-based research. Within this culture of pluralism, OSP would deliver on the original democratizing imperatives of science by guarding against the excesses of our natural biases and temptations and by disentangling conflated assumptions in research reporting. OSP should ensure transparency and accuracy of interpretation for a community of knowledge, including within the necessary discussion about how these practices are adapted for more plural application. This article describes existing practices accurately to provide a solid baseline on which to build a professional debate, as political scientists increasingly are asked to evidence the effects of the interventions they recommend in democratic life. We explored the prevalence of OSP within DI using the following three preregistered research questions:
- (RQ1): What is the prevalence of OSP within the DI field?
- (RQ2): What predicts the adoption of OSP?
- (RQ3): Has the adoption of OSP changed over time?
OPEN SCIENCE PRACTICES
Open science aims to make all stages of a research and knowledge-production process transparent, reproducible, and accessible, drawing its foundations from research ethics. OSP emphasizes open sharing, including validating claims through replication, allowing effective peer review, reducing barriers to publicly funded work, and avoiding duplication of scarce research resources (Chambers 2019; Nosek et al. 2018).
OSP is driven by concerns about publication bias (Gerber, Green, and Nickerson 2001) and about the undesirable incentives and norms that might explain the “replication crises” (Pashler and Wagenmakers 2012). More notoriously, genuine efforts to replicate studies have led to discoveries of research fraud and fabrication of data (Bhattacharjee 2013; Broockman, Kalla, and Aronow 2015). With vigilance to the pluralism of research approaches, OSP delivers on the original democratizing imperatives of science by guarding against the excesses of human biases and temptations, which requires that we disentangle conflated assumptions when reporting research. These practices ensure transparency and accuracy of interpretation for a community of knowledge. It is hoped that OSP can restore public trust in science (Anvari and Lakens 2018). Core OSP includes sharing replication materials, preregistration of studies, open-access publishing, and replication (Bakker et al. 2021; Dienlin et al. 2021; Ferguson et al. 2023; Miguel et al. 2014).
Data and Materials Sharing
Sharing replication materials entails making data and all steps in analytical procedures publicly available. This practice helps other researchers to reproduce findings; to better understand research design, instruments, and results; to uncover potential coding errors; and to assess the validity, verifiability, and rigor of the study. Data sharing also can be directly beneficial for researchers, for example, by increasing the number of citations (Christensen et al. 2019). Several journals in political science already have adopted data-sharing policies that require researchers to upload replication materials online; however, there is significant variation in the strength and enforcement of these policies. For example, Rainey and Roe (2024) revealed that 65% of 221 political science and international relations (IR) journals at least encourage researchers to make their replication materials available, but only 20% make it mandatory.
Preregistration and Pre-Analysis Plan
Preregistering a study involves registering research questions, hypotheses, design, measures of variables, power calculations, and analytical strategy before collecting data. By registering a study at a designated platform (e.g., aspredicted.org; osf.io), it receives a timestamp and is publicly discoverable. The objective of this practice is to deter intended and unintended questionable research practices—such as p-hacking (i.e., when analysis strategies are adapted to privilege statistically significant findings) and HARKing (i.e., hypothesizing after the results are known)—thereby distinguishing between exploratory and confirmatory research (Bakker et al. 2021; Dienlin et al. 2021).
Replication
Replication entails researchers who are independent of an original piece of research repeating that study by following the methodology applied (Chambers 2019). An underlying cause of the credibility crisis recognized in several scientific fields (psychology has had perhaps the most open reckoning with its practices) is that most studies do not replicate across contexts and time, which casts doubt on the credibility of major findings. The Reproducibility Project: Psychology systematically attempted to reproduce 97 psychological studies that reported positive findings and found that only 36% of them replicated (Open Science Collaboration 2015). Camerer et al. (2018) replicated 27 experimental studies in the social sciences published in Science and Nature from 2010 to 2015 and found significant effects in the same direction for only 62% of the original studies, with effect sizes approximately 50% of those originally reported. Replication, therefore, can provide important safeguards against overclaiming and different types of research errors. We need not assume that social scientific investigations often, if ever, lend themselves to invariant generalizations. However, providing more opportunities for close reproduction allows a better understanding of why some studies will or will not reproduce under similar circumstances.
Open-Access Publishing
Publishing scientific articles so that they are freely available to any member of society is now a practice endorsed and, in some cases, required by many major grant-awarding agencies. A culture of transparency can induce best practices and lead to the publication of higher-quality research as well as to the demystification of science.
METHODS AND DATA
We systematically studied the adoption of OSP within DI research (footnote 2) (figure 1) (Mestre et al. 2025). First, we identified the scholarship on DI through Web of Science (WoS) from 1970 (i.e., the first year of recording) to June 2021, when our study began. DI has emerged as a field over several years, and not all publications have used the term as it gradually became an organizing touchstone. We followed an inclusive adaptation of Elstub and Escobar’s (2019) typology of DI using the following keywords: “co-governance,” “collaborative governance,” “mini-public*,” “minipublic*,” “participatory budgeting,” “referendum*,” “referenda,” and “citizen* initiative*” (footnote 3). Our search produced 8,209 publications. WoS also classifies publications according to Document Type (footnote 4), ranging from “Article” or “Proceedings Paper” to “Letter” and “Meeting.” We omitted the publications that were classified as books or book series, thereby obtaining 7,286 publications. The majority that remained were in the “Article” and “Proceedings Paper” classifications, and we manually inspected the remainder to determine whether they could be included in our desired population of scholarly publications of a journal type. Finally, we retained only the publication types “Article,” “Proceedings Paper,” “Data Paper,” “Note,” and “Reprint,” thereby obtaining a total of 6,384 publications. We used this sample for the analysis of Open Access (OA) publishing because WoS returned this information for each publication (see figure 1). Moreover, we randomly subsampled 30% of those publications (N=1,915) to code for data-sharing, preregistration, and replication practices (see figure 1). After retaining only English-language publications of an empirical nature, we coded a sample of 1,099.
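The filtering and subsampling steps can be sketched as follows. This is an illustrative reconstruction rather than the pipeline used in the study; the file name and the column names (document_type, language, is_empirical) are hypothetical stand-ins for fields in a WoS export and for our manual screening decisions.

```python
import pandas as pd

# Illustrative sketch only: assumes a Web of Science export with hypothetical
# column names ("document_type", "language", "is_empirical") and file name.
wos = pd.read_csv("wos_di_export.csv")  # 8,209 records from the keyword search

# Keep journal-type publications; books and book series are dropped.
keep_types = {"Article", "Proceedings Paper", "Data Paper", "Note", "Reprint"}
journal_pubs = wos[wos["document_type"].isin(keep_types)]  # 6,384 records retained

# Random 30% subsample for manual coding of data sharing, preregistration, and replication.
subsample = journal_pubs.sample(frac=0.30, random_state=42)  # ~1,915 records

# Hypothetical flags recording the manual screening for English-language
# empirical studies (~1,099 publications coded).
coded = subsample[(subsample["language"] == "English") & subsample["is_empirical"]]
```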

Figure 1 Study Design
Coding Procedure
To code for replication, preregistration, and data availability, one of the authors reviewed and coded each article in our subsample (N=1,099). The first two variables were dichotomous (yes/no), and the third (i.e., data availability) had three categories. We coded for full and partial availability of replication materials. Full availability refers to studies for which the dataset and the replication code are available. Studies designated as partially available share some information for replication (e.g., data and supporting information, including extra analyses, figures, and tables) but are missing other information needed for replication (in most cases, data, code, or crucial information about how the data were processed). The other authors coded a subsample to test for reliability of coding (footnote 5). Sample-size calculations indicated that coding a random sample of 30% of the original sample would be sufficient to allow for inference within the population of interest. The research team met regularly to discuss the coding scheme and to consider and work through issues of interpretation. WoS already classifies publications by OA designation, which allowed us to obtain values for the entire population (N=6,384). We labeled articles “True” if they had any OA designation type (e.g., gold or bronze).
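The kind of sample-size calculation described above can be illustrated with a minimal sketch for estimating a proportion at roughly 95% confidence with a finite-population correction. The 3-point margin of error and conservative p = 0.5 below are assumed for illustration only; they are not the parameters reported in the study.

```python
import math

def sample_size_for_proportion(population, margin=0.03, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion within `margin` at ~95%
    confidence, using a conservative p = 0.5 and a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population requirement
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Illustrative: with N = 6,384 journal-type publications and a 3-point margin of
# error, roughly 915 coded publications would suffice, comfortably below the
# ~1,915 publications in the 30% subsample.
print(sample_size_for_proportion(6384))  # -> 915
```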
RESULTS
Figure 2a shows that OA publishing accounts for 31% of total publications. Replication and preregistration practices appear to be, at best, in a fledgling stage and are not popular in DI studies. In our sample, we did not encounter any study that applied preregistration, and the proportion of studies that were replications was very low. Figure 2b indicates that studies that share full information for replication (fully available; footnote 6) constituted approximately 4% of our sample. It is important to note that this percentage includes different quantitative and qualitative methodological approaches. For quantitative studies, replication materials may include availability of datasets and code, whereas for qualitative studies, publication of interview transcripts, observation notes, coding schemes, or other relevant documents can provide transparent information for different levels of replication. Requirements for sharing replication information must account for the diverse nature of research approaches. Following the coding approach outlined in the online appendix, we evaluated only whether authors claimed in their article to be making materials available and then whether those materials were in fact available. Publications with partially available replication information accounted for approximately 27%. Disaggregating this information by type of partial availability (see the online appendix) shows that the most common partial fulfilment of OSP is the provision of a link to secondary data without the information necessary to repeat the analysis (13% of total papers). Provision of some supplementary materials (e.g., robustness tests) with replication materials missing accounted for 8.83% of the papers. Approximately 5.5% of the sampled papers invited interested readers to request replication materials from the authors. Attempts to validate sharing resulted in many broken links, even for contemporary research, and supplementary material often consisted of cursory artefacts of analysis. Major general research projects, corpora, and data archives were often referred to by authors as source material, alluding to data transparency, but without any explanation of where the specific data could be found or how the data were analyzed.

Figure 2 Application of OSP in the DI Subfield
(2a) Percentage of publications that were fully OA, preregistered, or a replication of a previous study.
(2b) Percentage of publications that made their research materials fully available, partially available, or not available at all. Error bars indicate 95% confidence intervals calculated through a binomial (left) or multinomial (right) probability distribution. The OA estimate has no confidence interval because it was calculated from the full population, not a subsample.
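For readers who wish to reproduce error bars of this kind, a minimal sketch of a normal-approximation (Wald) binomial interval follows. The article does not state which interval formula was used, and the count of roughly 40 fully available publications out of 1,099 is an assumed illustration derived from the reported 4%, not an exact figure from the study.

```python
import math

def binomial_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Assumed illustration: roughly 40 of the 1,099 coded publications (about 4%)
# were coded as fully available; the exact count is not reported in this section.
lower, upper = binomial_ci(40, 1099)
print(round(lower, 3), round(upper, 3))  # -> approximately 0.025 0.047
```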
Figure 3 shows the frequency of articles using OSP from 1970 to 2021. Figure 3a reveals that OA was not popular in the last decades of the twentieth century, when technologies and capacities were not favorable to its large-scale adoption. Beginning in 2014, OA publishing saw significant growth, peaking at 48% of total publications between 2018 and mid-2021; the trend appears to have plateaued around this value. This increase aligns with the overall growth in DI publications.

Figure 3 Changes in the Adoption of (a) OA Publishing, and (b) Replication, Data Availability, and Preregistration Over Time
Shadowed areas correspond to 95% confidence intervals.
Figure 3b shows changes in the adoption of data availability, replication, and preregistration. Before 2000, there were few empirical publications as the subfield emerged, and it is difficult to draw conclusions for that period (i.e., only the period after 2000 is displayed). Between 2000 and 2021, preregistration (zero cases in the subsample) and replication remained rare. Full availability of replication materials appears to have increased over time to almost 10% of publications, but the confidence intervals remain too wide to draw firm conclusions.
Figure 4a plots the top 15 journals in terms of total number of publications in the population (from top to bottom). OA journals have grown to compete with traditional journals, which are more likely to retain subscription-based or hybrid models of access to research publications. Significant heterogeneity is observed in our data, not only in the age of journals (newer journals such as Sustainability, established in 2009, versus long-established journals such as The Political Quarterly, established in 1930) but also in target audience (e.g., general political journals versus cross-disciplinary journals). We also analyzed the prevalence of OA publishing in the 15 most common research fields, as labeled by WoS (ordered from top to bottom).

Figure 4 OA Practices for Top 15 Journals in the Study Population
(4a) Number of Publications by Journal
(4b) Frequency of OA Publications by Journal
DISCUSSION
This article presents the first assessment of OSP in the field of DI research from 1970 to 2021 (N=6,384). The only prevalent OSP that we found was the publication of results in OA journals (i.e., 31% of articles), increasing to 48% of journal articles in 2021. There is significant momentum toward OA publishing in the wider community; however, negotiations with major publishers and learned societies on appropriate funding models are sluggish, and the current system still largely benefits those with more existing resources. Preregistration or replication of other works is rare. Less than 1% of publications used any of these practices in our population.
Data sharing is still far from a norm in the field: only about 3.6% of publications during the 50-year period adhered to what are more widely recognized as good data-sharing practices. How does this finding align with data-availability practices in the broader political science community? Key (2016) reviewed all quantitative articles from six top political science and IR journals (2013–2014) and found that 58% of 586 articles shared data and replication code. However, data availability varies widely across journals, with mandatory policies being the strongest predictor. Expanding the sample to all English-language journals in the Social Science Citation Index’s political science and IR categories (N=224), Rainey et al. (2024) found that only 31% of articles published in 2022 had available replication materials. In a survey of scholars working in economics, political science, psychology, and sociology, Ferguson et al. (2023) found increases in the reported use of at least one OSP, particularly since 2017, with a majority of scholars posting data or code and a significant proportion engaging with preregistration. They also found that the rate of adoption differs by both field and methodology. The overall pattern, however, is clear: data sharing in the DI field is significantly less common than in the broader political and social science community.
It is important to note, however, that our sample differs significantly from those in some of the referenced studies. Unlike those studies, we did not code and analyze only quantitative articles; instead, our sample included qualitative, quantitative, and mixed-methods studies. Similarly, the DI field traditionally has been dominated by case-based research and qualitative approaches; scholars have begun to incorporate more quantitative analyses only in the past two decades. That said, the finding that only 3.6% of researchers make their data available remains a surprisingly low rate of compliance, considering the significant progress that the larger social science community has made in adopting more open and transparent data practices.
What is behind the reluctance to adopt OSP? We speculate that this resistance is driven by a lack of awareness regarding these practices as well as skepticism toward them. There continues to be a sizeable group of scholars within social sciences (and beyond) who oppose the OS movement. Concerns include privacy related to data sharing, especially with qualitative data (Gabriel and Wessel 2013); difficulty in implementing research of an exploratory nature once the hypotheses are preregistered (see Dirnagl 2020 for a review); and the costs of implementing OSP (Ansell and Samuels 2016, 1812).
DI researchers may argue that there are epistemological and ontological conflicts that advocates of OS underplay and that the search for a single replicable truth is futile. Transparency and openness are crucial goals for all researchers, but they may require rejecting procedures that sterilize research by pretending that the researcher can take the role of a disinterested observer. The DA-RT statement by political science journals was criticized for the way it treated qualitative research (Monroe 2018), and a series of public deliberations led to more bespoke sets of standards being developed for different approaches (Jacobs et al. 2021). Yet, all types of data and techniques are feasible within the OS paradigm. The belief that OSP is relevant only for quantitative data may be driven by ambiguity and uncertainty related to its implementation within qualitative research (Steinhardt, Mauermeister, and Schmidt 2023). There is a lack of good exemplars showing how OS use in qualitative approaches can support dialogue about where and how themes emerge from data (see Banks et al. 2019, 261).
We currently lack individual-level data to test our conjectures. We speculate that differences in national research cultures and cohort effects of novel training may explain some of the variance. Resources including time, awareness, access to training, and use of materials (e.g., programs that allow for easy linking and annotation of data) are not distributed equally. Some researchers fear being “scooped” by better-resourced peers. Nevertheless, DI scholars are exactly those within the profession best placed to show how clever institutional engineering can foster collective action that overcomes at least the more perverse incentives working against OSP. DI researchers have been most occupied with designing procedures to incentivize positive collective action, overcome information asymmetries, open up deliberation to justify outcomes and decisions, and increase and distribute capacities for engaging in complex collective tasks. We call on these colleagues to turn their attention to the task of engineering open political science.
CONCLUSION
As DI practices gain global acceptance, the integrity of the research that informs those practices is paramount. Nevertheless, confidence in applications may be short-lived if intentional manipulation of research is allowed and uncovered or if, perhaps riskier, a lack of vigilance to our own tendencies to serve problematic or conflicting incentives allows a general tolerance for questionable research practices. Observers are more likely to question democracy itself if researchers, and the advocates who increasingly rely on DI research, find that their interventions do not produce the results that the body of DI research claims. We need the courage of our convictions to apply the lessons of DI to the ecosystems in which we (re)produce research.
SUPPLEMENTARY MATERIAL
To view supplementary material for this article, please visit http://doi.org/10.1017/S1049096525101297.
ACKNOWLEDGMENTS
The authors thank those who attended the panel that we organized on “Advancing Open Science Practices Within the Democratic Innovations Field” at the 2021 European Consortium for Political Research General Conference for providing comments and suggestions on previous drafts, especially Amélie Godefroidt, who acted as discussant. We also thank the anonymous reviewers for their comments that improved the article. Finally, we thank UK Research and Innovation funding (Grant No. MR/S032711/1) and support from the University of Southampton.
DATA AVAILABILITY STATEMENT
Research documentation and data that support the findings of this study are openly available at the PS: Political Science & Politics Harvard Dataverse at https://doi.org/10.7910/DVN/N0831B.
CONFLICTS OF INTEREST
The authors declare that there are no ethical issues or conflicts of interest in this research.