
Trust in Scientific Expertise and the Varying Demands of Value Transparency

Published online by Cambridge University Press: 28 May 2025

Hanna Metzen*
Affiliation:
Department of Philosophy, Bielefeld University, Bielefeld, Germany

Abstract

Value transparency is thought to promote trust in scientific expertise. Yet, transparency is a complex concept. I will argue that transparency requirements come with a varying extent of engagement: merely disclosing information, providing information that is publicly accessible, or having additional mechanisms for criticism in place. It is often not clear in which sense transparency requirements are to be understood in the context of trust in expertise. However, each sense can backfire in different ways. Merely talking about transparency in a general sense hides these possible trade-offs. This furthermore shows that requiring transparency may carry greater regulatory force than is often assumed.

© The Author(s), 2025. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy, Inc.

1. Introduction

Trust in scientific expertise is important. At the same time, philosophers of science agree that this trust cannot be grounded in the idea that scientific expertise is autonomous and free from any social, political, or economic values and interests. The problem is then how to maintain or enhance trust while acknowledging that science is not value-free (Holman & Wilholt, 2022). Transparency is one proposed solution to this problem: Scientific experts need to be transparent about underlying values and interests in order to promote warranted trust in expertise.

Value transparency has been advocated by many philosophers of science (e.g., de Melo-Martín & Intemann, 2018; Douglas, 2008, 2009; Elliott, 2017; Elliott & Resnik, 2014; Resnik & Elliott, 2023). Yet others have pointed out dangers or difficulties of transparency when it comes to promoting trust (e.g., John, 2018; Schroeder, 2021). Recently, Intemann (2024) has argued that arguments against transparency point to ways in which the norm of transparency needs to be qualified but fail to provide sufficient reason to abandon it completely.

However, as Elliott (2022) in particular has demonstrated, transparency is a complex concept and can vary along different dimensions. In this article, I will argue that there is an important ambiguity in the concept of transparency that is overlooked not only in Elliott’s taxonomy, but also in the overall debate on transparency and trust in science.

Transparency requirements come with a varying extent of engagement. Transparency can be more or less demanding and interfering—from merely disclosing information to providing information that is publicly accessible, or even allowing for public criticism and sanctions. Based on accounts from political theory, I will distinguish three senses of transparency: mere transparency, publicity, and accountability.

The extent of engagement matters for the debate on value transparency and trust. I will show that different arguments for transparency as a tool for improving trust, in fact, work with different senses of transparency. However, mere transparency, publicity, and accountability may backfire in distinct ways. Just talking about transparency without paying attention to this ambiguity muddles the philosophical discussion. Arguments for and against transparency are often only directed at a specific form of transparency.

Furthermore, I show that transparency is not the minimally demanding and interfering tool that it appears to be. This is especially important when transparency is used as a regulatory tool, for instance, in guidelines for providing scientific advice or writing a policy-relevant paper. Depending on how transparency requirements are supposed to facilitate trust in scientific expertise, they come with a different regulatory force.

This article proceeds as follows. I will start by presenting Elliott’s taxonomy of transparency (Section 2) and then introduce the extent of engagement as an additional dimension of transparency (Section 3). I will show how this ambiguity is hidden in arguments for transparency and trust (Section 4), before working out the varying ways in which transparency may backfire (Section 5). Finally, I will deal with transparency’s regulatory force (Section 6).

2. The Complexity of Transparency

This article rests on the idea that transparency is an ambiguous concept. This has been pointed out by Elliott (2022, 347). In his taxonomy of transparency, Elliott identifies eight major dimensions along which different forms of transparency may vary. The first is the purpose dimension, which refers to the reason for which transparency is pursued or required, for example, to promote trustworthiness. The second is the audience dimension, which answers the question of who is receiving information, for instance, policymakers, journalists, or the general public. The third is the content dimension, addressing the question of what is made transparent, like data and materials, or value judgments and their implications. The following four dimensions address the question of how information should be provided: The fourth dimension, timeframes, specifies at which point something should be made transparent. The fifth dimension concerns the varying actors who are disclosing information, for example, scientists or government agencies. As a sixth dimension, the mechanisms for identifying and clarifying the information that needs to be made transparent may differ, like written discussions among scientists or collaborations with community members. The seventh dimension answers the how-question in terms of venues for transmitting the information, from communication by scientists to reports from government agencies. The final dimension concerns the dangers of transparency, which include things like violating privacy but also generating inappropriate skepticism or creating a false sense of trust.

Importantly, Elliott (2022, 350) points out that different combinations of variations lead to different forms of transparency. There are interdependencies between the dimensions: For example, if the purpose of transparency is to promote high-quality policymaking, then this already determines policymakers as the audience. Likewise, not every danger is relevant for every form of transparency.

In this article, I will focus on transparency of values and interests as a tool to improve public trust in scientific experts.Footnote 1 Thus, within Elliott’s taxonomy, I am only interested in a specific form of transparency. The purpose of this form is to promote public trust in scientific experts. The audience is usually the public, but it can at times be politicians and policymakers, and the information may be conveyed by science journalists and other mediators. Values and interests need to be made transparent, but if they are implicit, the content might have to be broader. The actors are different kinds of scientific experts.

I build on Elliott’s taxonomy, arguing that it is missing one important way in which transparency is an ambiguous concept. Transparency comes with what I call a varying extent of engagement:Footnote 2 It can be a more or less demanding or intrusive requirement. This will be particularly relevant for the dangers of transparency, as they are often relative to the extent of engagement. I will start by explaining this dimension in the next section.

3. A Varying Extent of Engagement

The ambiguity with regard to the demandingness of transparency is explicitly emphasized in political theory (e.g., Baume, 2018; Elster, 2015; Lindstedt & Naurin, 2010; Naurin, 2006; O’Neill, 2006, 2009). Based on these approaches, I will distinguish between mere transparency, publicity, and accountability. I take the general three-part distinction from Naurin (2006) but adapt it in some respects. Before explaining each sense in more detail, two things should be noted on the terminology. First, these terms originate from debates in political philosophy. This is especially relevant for publicity: As will become clear below, this use of the term is different from its marketing-related use in everyday language. Second, even in political theory, the three terms appear somewhat inconsistently and at points interchangeably. I merely take Naurin’s taxonomy to be helpful for the purposes of my article.

I will start with mere transparency. Transparency in this sense involves merely disclosing and disseminating information—the information has to be made available, but it does not have to be communicated in a way that a specific audience is able to understand. Or, as Naurin (2006, 91) puts it, mere transparency “literally means that it is possible to look into something, to see what is going on.”Footnote 3

Mere transparency appears in many areas of science, for example, in posting data or materials to repositories (Nosek et al., 2015) or disclosing conflicts of interest. However, it is also central to scientific expertise that is used in the political realm. For example, after being criticized for a lack of transparency at the beginning of the COVID-19 pandemic, the British government later disclosed the members and minutes of meetings with the Scientific Advisory Group for Emergencies (Jarman, Rozenblum, Falkenbach, Rockwell, & Greer, 2022).

Publicity, on the other hand, goes beyond availability. I will follow Elster’s (2015, 3) weak notion of publicity: Publicity means that the audience has access to the information, that is, they can acquire the information with little to no costs or difficulties. Note that there is also a stronger notion of publicity, which requires that the content of the information actually become known among the audience (e.g., Naurin, 2006). For scientific expertise, however, this notion is too strong.

Publicity requires communicating information, not just disclosing it. O’Neill’s (2009, 175) characterization of effective communication is especially helpful in this regard. She argues that in contrast to mere information handling, effective communication is sensitive to a specific audience, and it must seek to meet some basic standards. First, effective communication involves accessibility. This standard is understood in a more cognitive sense: The information has to be intelligible, that is, the audience needs to be able to follow what is being communicated, and it has to be relevant to the intended audience. Furthermore, communicative action should also be assessable, in the sense that the audience needs to be able to judge the status of that information, for example, whether certain claims are evidence-based or speculations. Transparency requirements, O’Neill (2009, 173) notes, often range over informational content, yet in fact, effective communication is what matters: “The activity by which information is made transparent places it in the public domain, but does not guarantee that anybody will find it, understand it or grasp its relevance.” Publicity, as I will use the term, captures this emphasis on communication as well as cognitive accessibility and assessability.Footnote 4

This makes publicity certainly more demanding than mere transparency. It is more difficult because the audience may not be invested in the issue, or because the audience lacks the capacity to access and process the information. For scientific expertise, it is especially the latter that becomes relevant. Publicity requires not only disclosing information but also explaining it. Experts have specialized knowledge and experience with respect to a particular area, which makes publicity at least time-consuming.

Finally, let us look at accountability. Accountability in the traditional sense involves more than exposing information. In cases of misconduct, there have to be some kinds of sanctions. Such sanctioning mechanisms can be formal, for example, losing one’s position or being summoned before a court. Yet they can also be informal and consist of embarrassment or social stigma (Naurin, 2006, 92). For scientific expertise, sanctioning mechanisms, in particular formal ones, are rare and much weaker (see, e.g., Langvatn & Holst, 2024). In contrast to political representatives, experts are not elected to their positions. They may be ousted from a position in an advisory committee, but even then, the accountability relation is more indirect. It is usually the politicians who decide not to appoint a certain expert anymore, perhaps because they fear sanctioning mechanisms from the public. One of the rare cases of legal expert accountability was the prosecution of six scientists after the L’Aquila earthquake in 2009 (Douglas, 2021). For scientific expertise outside of formal advisory committees and the like, sanctioning is even more difficult.

However, accountability, too, is an ambiguous concept (Brown, 2009; Mansbridge, 2009; Philp, 2009). There are at least two different senses: holding someone accountable and giving an account. Both describe a relationship between an actor or speaker and a forum or audience. However, only the first sense clearly involves monitoring and imposing sanctions. The second sense is sometimes called “deliberative accountability” (e.g., Brown, 2009, 216; Mansbridge, 2009, 384): The actor has to explain and justify her actions or claims, which allows others to criticize and object to them. As Mansbridge (2009, 384) notes, deliberative accountability rests on selecting actors who are likely to share the forum’s interests—instead of assuming that actors have entirely different interests and need to be sanctioned later. Importantly, when talking about accountability in the context of scientific expertise, philosophers usually have deliberative accountability in mind (e.g., Douglas, 2009, 153; Douglas, 2021, 74; Resnik & Elliott, 2023, 274).

Deliberative accountability has similarities with publicity; giving an account is related to making information publicly accessible. However, the extent of engagement for deliberative accountability is different in at least some respects. First, both forms of accountability become more alike—and thus distinct from publicity—when looking at informal sanctions. It is difficult to draw a sharp line between criticism and sanctions like embarrassment or a loss of reputation. Second, in contrast to publicity, accountability refers to a relationship in which the audience can require something from the speaker (Philp, 2009).Footnote 5 Even in the case of deliberative accountability, the audience is not a passive receiver of information but can actively engage with it.

To sum up, the difference between mere transparency, publicity, and accountability can be framed in terms of communication (Figure 1): Mere transparency is only about making information available. Publicity involves making information accessible and communicating it to a specific audience. Accountability describes a two-way communication between audience and speaker: The audience reacts to the information provided by the speaker, either by objecting or by imposing sanctions, and the speaker considers this reaction. The difference is often gradual: Information can be more or less accessible to or taken in by the public, and likewise, there can be weaker and stronger public reactions, from mere criticism to imposing formal sanctions.

Figure 1. The extent of engagement in transparency requirements. On the left is the speaker. On the right is the audience, which can be very broad or more specific. The arrows show the direction of information transmission. The key represents accessibility and assessability, while the flash symbolizes criticism or sanctioning.

Before showing how this relates to trust, let me make some additional remarks. The first concerns the dimensionality. I have grouped mere transparency, publicity, and accountability under one dimension, which I called extent of engagement. It should be noted, though, that publicity and accountability have a greater extent of engagement than mere transparency, but in slightly different ways. Publicity is more demanding than mere transparency because it involves making information accessible to a specific audience. Accountability demands that the speaker consider criticism or even sanctioning by the audience. In many cases, accountability builds on publicity, in particular when understood in the deliberative sense: Providing good criticism presupposes that the information is accessible and assessable in the first place. Yet in principle, one could also have accountability without publicity. Thus, while mere transparency, publicity, and accountability vary with respect to the extent of engagement, this does not necessarily imply that all three are placed on the same spectrum from low to high.

Second, I introduced the distinction in general terms, but it applies to values and interests in the context of scientific expertise. For example, imagine a scientist working in an expert committee. She might disclose possible conflicts of interest or any values that her decision to communicate a certain claim rests on. However, she could also make this information accessible and explain how exactly certain values and interests may have influenced her claims or which alternative values she could have considered. And finally, she can be accountable and consider how her audience views her use of values or the influence of interests on her claims.

Third, I am interested in the process of making something transparent. The extent of engagement is particularly relevant if transparency is a norm or requirement, as it highlights the demands transparency puts on the expert. This is distinct from transparency as a description of the state some information is in. However, of course, these things are linked; for information to be transparent, it has to be made transparent first. Note that the focus on making information transparent is also mirrored in debates in philosophy of science. Value transparency is discussed as something experts can or should pursue—for example, as a tool to handle illegitimate value judgments (Douglas, 2009; Elliott, 2017) or as a norm for science communication more generally (Intemann, 2024; John, 2018). The distinction between mere transparency, publicity, and accountability is not only missing in Elliott’s taxonomyFootnote 6 but also in these broader debates.

Now, a norm is not quite the same as a requirement. However, even if one does not require transparency but only believes that scientific experts should generally aspire to transparency, the extent of engagement is relevant.Footnote 7 Depending on which sense of transparency the expert strives for, this becomes a more or less demanding task.

Additionally, transparency is also a frequent institutional requirement. For example, in the OECD’s (2015) “checklist for science advice,” transparency appears as a tool for dealing with potential conflicts of interest as well as producing advice that is sound, unbiased, and legitimate (for similar guidelines, see, e.g., EASAC, 2014). Members of governmental expert bodies, like the Canadian COVID-19 vaccine task force, can even be formally required to disclose potential conflicts of interest (Government of Canada, 2024). Moreover, value transparency is advocated by journal editors, for example, for writing policy papers in conservation science (Game, Schwartz, & Knight, 2015). The extent of engagement is particularly important if transparency appears as an institutional requirement: Transparency requirements can be more or less demanding and interfering, and as such have a varying regulatory force. I will come back to this point in Section 6.

In the following section, I will show that the ambiguity with regard to the extent of engagement is hidden in arguments concerning trust and value transparency.

4. Transparency as a Tool for Improving Trust

This article investigates public epistemic trust, that is, the trust laypeople place in scientific experts making certain claims. Philosophers of science have pointed out different grounds for appropriate trust in scientists and scientific experts (for some categorization attempts, see, e.g., Furman, 2020; Metzen, 2024). I will not give a full account of trust in this article but instead highlight ways in which trust is connected to values and interests. The general idea is that approaches to trust must reflect the value-ladenness of science. Since scientific expertise is not free from non-epistemic values and interests, trust cannot be based on purely epistemic factors.

For instance, it has been argued that trust should be based on value alignment: Scientific experts are trustworthy if they take laypeople’s values and interests into account when making claims and recommendations, or at least if they do not act contrary to those values and interests (e.g., Goldenberg, 2023; Irzik & Kurtulmus, 2019; Wilholt, 2013).Footnote 8 Others stress that trust is not necessarily about value alignment, but more generally about a legitimate use of values and interests. Ensuring such legitimacy might entail very different procedures, like restricting the proper role of values, or having venues for criticism on the community level (e.g., Douglas, 2009; Holman & Wilholt, 2022; Oreskes, 2019). Furthermore, for expertise that is used in political contexts, some philosophers focus instead on the relationship between scientific expertise and politics. They argue that while trustworthy scientific advice cannot be value-free, experts can still refrain from prescribing values and interests to political decision-makers (e.g., Carrier, 2022; Gundersen, 2024). These views on trust need not be conflicting accounts, as there may be different factors that contribute to trust in scientific experts.

Calls for transparency usually follow from these general views on trust. I will argue that upon closer examination, these are calls for different senses of transparency. Depending on what one believes to be the relevant connection between trust and values or interests, publicity or accountability may be needed for establishing such trust, instead of mere transparency. This also relates to the purpose dimension in Elliott’s (2022) taxonomy: If we think of the purpose of promoting trust in more fine-grained terms, it becomes plausible that this might involve different forms of transparency. Yet because the ambiguity with respect to the extent of engagement is not adequately considered in arguments for transparency, it may wrongly seem that they all work with the same understanding of transparency.

We can distinguish at least four different arguments for why value transparency improves trust. First, transparency enables members of the public to place trust in those experts who do not act contrary to the members’ own values and interests. Second, transparency can demonstrate that values and interests legitimately influence scientific expertise. Third, if experts conditionalize their advice by making values and interests transparent, they remain more neutral and politically independent. Finally, if values and interests lead to unwanted distortions or biases, transparency enables criticism and rectification. I will explain each argument in more detail below.

Note that the distinction is somewhat artificial. The different arguments are often combined and can overlap. For example, Intemann (2024) stresses the first and fourth arguments. Douglas (2009) combines at least the second, third, and fourth arguments, although she uses the term “explicitness” (see also Douglas, 2018, 3). Additionally, philosophers advocating for transparency are aware that transparency by itself is not a sufficient tool for promoting trust; it needs to be accompanied by tools that promote diversity or lay participation (e.g., Intemann, 2024, 12). Transparency can also be merely an auxiliary tool for enabling participation. For example, in order to set up an expert panel that is diverse in terms of values, one needs to know the potential members’ values first.

The aim of my article is not to argue against the value of transparency, and I do not want to claim that other reasons for the importance of transparency do not matter. Instead, looking at existing arguments in an isolated manner helps to see that they involve a varying extent of engagement.

4.a. Demonstrating aligned values and interests

The first argument for transparency is related to the emphasis on value alignment. If one believes that members of the public place trust in experts who employ values and interests shared by the public—or at least do not act contrary to these values and interests—making the underlying values and interests explicit helps to assess the trustworthiness of experts and place one’s trust accordingly (e.g., Elliott, 2022, 344; Intemann, 2024, 6).

This argument rests mostly on mere transparency: For enabling trust that is based on an alignment of values and interests, it may be sufficient to disclose one’s values and interests. Take, for example, conflict of interest policies (Elliott, 2008; Elliott & Resnik, 2014): A standard approach adopted by scientific journals, universities, or funding agencies is to require researchers to disclose their financial conflicts when applying for grants or publishing a paper. This approach has also spilled over to other areas of expertise. Science Media Centers, for instance in the United Kingdom, require scientific experts to disclose conflicts of interest and publish such information alongside statements that they distribute to the media (SMC, 2022).

The worry is that financial and other conflicts of interest compromise the judgments of scientists (Elliott, 2008, 2). To what extent disclosure policies are effective in tackling this problem is a contested issue (e.g., de Melo-Martín & Intemann, 2009). Yet in a minimal version, disclosure at least allows others to recognize the possibility that a certain expert is following her own financial or other sectional interests, rather than the public interest or the interest of the respective audience. This helps laypeople place their trust: They could trust those experts with no conflict of interest but be more wary otherwise—especially if the interests are strong, or if the disclosed interests differ greatly from their own.

Disclosing values can, for example, be relevant for the trust policymakers place in scientific experts. Think of a city manager who has to help her city deal with the flooding risks associated with climate change. Disclosing value influences in climate modeling choices allows the city manager to make adaptation plans that align with the right values and provide the desired protection. If she thinks that the protection of citizens is important, she could, for instance, base her decisions on projections that are able to deal with worst-case flooding scenarios and not only with plausible scenarios (Elliott, 2022, 344; Parker & Lusk, 2019). If the city manager makes adaptation plans in accordance with the citizens’ values, this would also enhance public trust in the expertise.Footnote 9

Moreover, making values explicit can directly affect public trust. For example, one reason why vaccine skeptics may distrust medical experts is that these experts make their safety assessments based on public health values, while the skeptics are concerned with their individual health (Irzik & Kurtulmus, 2019). A way of improving trust would be to provide recommendations more explicitly aligned with individual values, for instance, in conversations between doctor and patient.

4.b. Demonstrating legitimate influence of values and interests

The second argument dealing with trust and transparency is related to the first one. Yet, here the focus is more on the proper procedures for dealing with values, not on the values themselves. Transparency can demonstrate that values and interests influence scientific expertise only in a legitimate way. For example, it can show that values were used only in a proper role (Douglas, 2009, 96; Bennett, 2020, 254) or that illegitimate value influences were handled through a collective critical evaluation by the scientific community (Oreskes, 2019, 68; Solomon, 2021).

However, in most cases, mere transparency is not enough for this argument to work. Whether a certain value judgment was made appropriately is not immediately obvious to laypeople because they do not possess the specialized knowledge and experience of experts. Instead, publicity is required. Take Douglas’ distinction between direct and indirect roles of values, where only the latter—that is, values as guidance in what counts as sufficient warrant—is appropriate for expert reasoning. Demonstrating that values were only used in such an indirect role requires “the honest assessments of evidence and uncertainty, of what counts as sufficient evidence and why” (Douglas, 2008, 13). Experts need to explain what evidence there is, where the evidentiary bar should be set, in what way values enter these considerations, and what possible evidence would convince them of a different claim.

Douglas (2008, 12) uses an example involving uncertainties in climate projections: In the United States, the Bush administration argued in the 2000s that uncertainties were too great to warrant climate action. However, the scientific experts who set a lower evidentiary bar and recommended mandatory climate action were, in fact, making legitimate value judgments, as they used values only in their indirect role. Merely disclosing the value judgments is not sufficient for maintaining trust in such cases; experts need to explain why they made the judgments in a certain way and acknowledge the implications this may have.

Kitcher’s (2011, 151) notion of “ideal transparency” can be interpreted similarly. For Kitcher, ideal transparency entails that the methodological standards and procedures used in accepting or rejecting scientific claims are decided based on publicly shared values. Additionally, however, the public needs to understand how values enter the certification process—in particular, that they were only used for judging whether the evidence is good enough (Kitcher, 2011, 163–164). This, again, requires publicity.

4.c. Demonstrating political neutrality

The third argument is specific to expertise that is used in political contexts, like scientific policy advice. Many philosophers have argued that scientific experts should conditionalize their advice. They should act as “honest brokers” (Pielke, 2007) and be transparent about the way values and interests may have influenced their claims. If possible, this includes discussing different policy pathways with different underlying value assumptions. Transparency allows experts to remain neutral and independent from politics, which enables public trust in scientific expertise (Carrier, 2022, 17; Douglas, 2009, 153; Elliott & Resnik, 2014, 649; Gundersen, 2024, 139).

In some cases, this argument works with mere transparency. Take the flooding risks example again and assume that the experts provide different projections based on different modeling choices, for instance, one that includes worst-case scenarios but might overestimate certain factors and thus lead to expensive yet potentially unnecessary measures, and one that only includes more plausible scenarios but supports cheaper adaptation plans. By disclosing underlying values, the experts demonstrate their neutrality to the public because this shows that they do not prescribe or choose any values themselves. Ultimately, it is the city manager who has to make the value judgment and opt for a certain policy.

However, in other cases, policymakers and politicians might try to manipulate scientific advice. They could, for example, downplay a certain policy option and claim that they act based on scientific constraints, or they could point to the overall scientific uncertainty and argue against implementing any policy, as in the Bush administration example mentioned above (Douglas, 2008). In such cases, scientific experts need to explain their advice, as well as the underlying values and interests, to the public if they want to remain neutral. When facing politicization, publicity may be required to enable public trust in expertise.

Furthermore, mere transparency might be insufficient if the underlying evidence is already shaped by commercial or political values and interests. For example, Fernàndez Pinto (2020) points out that commercial interests can lead to inappropriate consensus: If pharmaceutical companies predominantly publish studies with positive results, this can make a drug appear more effective than it actually is. Now, imagine an expert group that wants to provide conditionalized advice concerning this drug to policymakers. Demonstrating neutrality seems to require more in this case than just disclosing the existence of commercial interests—the experts would need to explain the mechanism through which pharmaceutical companies influenced the available evidence, or why other policy pathways are valid even though they depart from the existing consensus.

4.d. Enabling criticism and rectification

According to the fourth argument, transparency is a tool to enable criticism and thus rectification, which improves the trustworthiness of expertise. This argument can be geared toward situations in which scientists are aware of possible value influences, but it is not clear whether the value influence is legitimate. Making value judgments explicit allows public scrutiny and, if the experts are open to such criticism and adapt their claims and recommendations, produces more trustworthy expertise (e.g., Douglas, 2009, 172; Intemann, 2024, 5).

In addition, criticism is important for uncovering implicit values and interests. This argument is especially prominent for criticism within the scientific community: If there are venues for criticism, members of the community can point out problematic background assumptions or other illegitimate influences of values and interests (e.g., de Melo-Martín & Intemann, 2018, 122). However, others show that a scientific community should be open to external criticism as well, for example, by social movements (Bueter, 2017) or laypeople with local knowledge (Wylie, 2015). Note that the content of transparency has to be broader here: For uncovering value influences, parts of the data, code, or materials have to be open to scrutiny.

In most cases, mere transparency is not enough to enable public criticism. The public needs to be able to understand the influence of values and interests. For uncovering implicit values, the audience also needs to have some knowledge about the available evidence and methods. This is explicitly acknowledged: For example, Bueter (2017) points to the role the women’s health movement played in making problematic background assumptions in medical research and care explicit. From the beginning, however, researchers and medical practitioners played an important part in the women’s health movement, which later resulted in establishing the field of gender medicine. In other cases, the focus is on participatory research practices where laypeople work together with scientific experts (Wylie, 2015), or deliberative processes where “public voices need both to be heard and to be tutored” (Kitcher, 2011, 220).

However, the extent of engagement involved in this argument goes beyond publicity. It requires deliberative accountability: The experts need to explain their interests and values to the public and respond to public criticism. Sanctions play only a minor role. Criticism can, of course, affect the expert’s reputation and thus be a sanctioning mechanism, but this is not required for this argument to work. Experts can also be open to criticism without fearing a loss of reputation. Deliberative accountability is demanding because experts need to actively engage with the audience and react to their objections.

Some philosophers are explicit in calling this accountability (e.g., Douglas, 2009, 172; Douglas, 2021; Resnik & Elliott, 2023, 274), while others stick to the umbrella term of transparency. Intemann (2024, 5), for example, argues that “making value judgments transparent demonstrates an openness to having those assumptions evaluated and potentially corrected or revised in future research.” Yet, being open to criticism is essentially what I call deliberative accountability here.

Note that there is a related argument, which is prominent in political theory: Transparency may force authorities to act in the public interest instead of their own interests if they know that their decisions are subjected to public scrutiny (Kogelmann, 2021). Here, it is not the reaction to criticism that enhances trustworthiness but the anticipation of sanctions, like being voted out of office in the next election. The argument rests on accountability, yet sanctions play a more explicit role here: They drive the improvement of trustworthiness.

To the best of my knowledge, this argument has not explicitly been defended in philosophy of science.Footnote 10 In addition, of course, scientific experts differ from political representatives in important respects. However, it is important for experts to act in the public interest and produce claims and recommendations accordingly. It does not seem entirely implausible that the argument has some relevance for scientific expertise: For instance, knowing that the advice an expert gives publicly will be scrutinized by others—like other experts, journalists, or NGOs—might make the expert more careful in weighing the evidence and making appropriate value judgments.

5. How Transparency Can Backfire

We have seen that arguments for why transparency enhances trust work with different conceptions of transparency. Scientific experts who want to promote public trust thus ought to be transparent in different ways. However, transparency requirements can backfire: They can reduce public trust in scientific expertise instead of promoting it. Philosophers have pointed out different dangers of transparency (e.g., Elliott, 2008, 2022; Intemann, 2024; John, 2018; Nguyen, 2022; Schroeder, 2021). I will show that possible dangers are, in fact, relative to the extent of engagement of transparency requirements. Mere transparency, publicity, and accountability each come with different costs.

It is important to distinguish two ways in which trust can be reduced. Transparency requirements can affect the credibility of expertise, and they can affect the trustworthiness of expertise. As I use these terms here, trustworthiness is a property of the speaker, whereas credibility reflects how others perceive the speaker and her claims (Whyte & Crease, 2010). Public trust can fail because scientific experts lack actual trustworthiness, but it can also fail if trustworthy experts are not perceived as such.

Note that transparency may prompt problems that I will not deal with in this article. For example, transparency requirements can clash with other rights like the protection of personal data—these are general problems that are not directly related to trust or the extent of engagement.

Finally, whether or not transparency requirements actually backfire is of course an empirical question (for some attempts to provide empirical answers, see, e.g., Cologna, Baumberger, Knutti, Oreskes, & Berthold, 2022; Elliott, McCright, Allen, & Dietz, 2017; Hicks & Lobato, 2022). My point is not so much to demonstrate that these dangers are inevitable. I rather want to show that there are considerations that make them plausible, but that these considerations vary depending on the extent of engagement of transparency requirements. I also do not claim that the following list of dangers is exhaustive.

5.a. Mere transparency

Because mere transparency requires information only to be disclosed and disseminated but not actually made accessible to a specific audience, it can reduce the credibility of expertise. Disclosing values and interests without providing sufficient explanations can make experts appear compromised even when they are, in fact, trustworthy. As Schroeder (2021, 551) notes: “If the public can’t trace the impact of those values, transparency doesn’t amount to much more than a warning—offering, in many cases, a reason to distrust, rather than to trust.” A conflict of interest disclosure, for example, can alert laypeople that a result might be biased even when it is not, because they do not understand who a certain funder is or how the disclosed interests may affect the expertise (Intemann, 2024, 7; Schroeder, 2021, 550).

Relatedly, disclosing values and interests may be problematic if the audience holds a “false folk philosophy of science” (John, 2018, 81), that is, nonexperts can have false beliefs about the epistemic and social practices underlying scientific claims. In such nonideal environments, transparency can lead to the impression that scientists are not acting as they should. This is especially the case if laypeople think that science should be value-free. Disclosing that values did influence scientific claims then easily undermines trust, despite the fact that the influence was actually legitimate. This can even be used as a strategy to discount expertise: For example, climate skeptics often position themselves as objective while portraying mainstream scientists, such as experts working for the Intergovernmental Panel on Climate Change, as self-interested or biased (Kovaka, 2021, 2366).

Besides negatively impacting the credibility of scientific expertise, mere transparency may also reduce its trustworthiness. This has mostly been pointed out with regard to conflict of interest disclosures. As Elliott (2008, 9) notes, “disclosures may also cause the sources of information to be more biased than they would otherwise be.” Scholars from psychology and economics have discussed the phenomenon of “moral licensing” (Loewenstein, Sah, & Cain, 2012, 669): Disclosing their interests can make advisors feel more comfortable in providing biased information because they know that the audience has been warned. Likewise, mere transparency may involve “strategic exaggeration,” where advisors provide more biased information to counteract anticipated discounting. Both effects are plausible if the audience cannot assess how the disclosed values or interests actually influence the expertise.

5.b. Publicity

Publicity requires experts to meet standards of accessibility and assessability: to explain the use and influence of values and interests in a way that laypeople are able to grasp. This avoids the credibility problems mentioned in the previous section.

However, there are some important practical limitations. In many cases of scientific expertise, value choices are complex. Clearly explaining the importance of each individual choice and the possibility of alternative choices is not an easy task, and scaling this up to all value choices is even harder. What is more, even if experts were able to communicate very well, avoiding the credibility problem also requires a great deal of attention on the part of the public (Schroeder, 2021, 552). Thus, because scientific experts have specialized knowledge, publicity may fail or may not be sufficient—in which case it may still backfire and reduce the credibility of expertise. Similarly, it can be hard and time-consuming to overcome a false folk philosophy of science, which requires not only communicating how values and interests have influenced scientific claims but also what the influence ought to look like and why (John, 2018, 82).

Furthermore, publicity introduces new dangers if we look at the trustworthiness of scientific expertise. This has especially been emphasized by Nguyen (2022), who raises different objections against transparency. These are specific to publicity because they follow from increased demands on communicating expertise.Footnote 11 As Nguyen (2022, 9–10) writes: “In order for the assessment procedure to be made available to the public—in order for it to be auditable by outsiders to the discipline—it must be put in terms that are accessible to the public. However, insofar as the most appropriate terms of assessment require expertise to comprehend, then the demand for public accessibility can interfere with the application of expertise.”

First, publicity can change the experts’ recorded justification. If experts’ actual reasons are inaccessible to nonexperts, they might invent other reasons for public presentation, often making these reports uninformative or evasive. Second, publicity may be more epistemically intrusive and pressure experts to act only in ways that allow for public justification—they limit their actions to those for which they can find public justification, or seek or prefer public reasons in their own considerations (Nguyen, 2022, 2–3).

Nguyen’s argument is not about transparency of values or interests, but it can be adapted. Making value judgments and influences accessible and assessable to a lay audience is a difficult and time-consuming task. In some cases, it may be impossible. Requiring experts to do so despite these difficulties may pressure them to adjust their explanations.

5.c. Accountability

In contrast to publicity, accountability involves responding to public criticism and perhaps even sanctioning mechanisms. This introduces additional possible dangers for trust in expertise. While public scrutiny is often important, it can also negatively affect an expert’s claims and recommendations, as well as the public perception of the expertise. This is especially relevant in cases where the risk of politicization is high or where expertise involves the inclusion of vulnerable groups.

A lack of public accessibility, or practical problems in making expertise accessible, can create problems for accountability. Not only is it difficult for laypeople to criticize experts or hold them to account (Langvatn & Holst, 2024, 105), but it is also difficult to judge whether criticism by others is appropriate. This is most salient for accountability for the content of expertise (which is usually why scholars argue against this form of accountability, e.g., Douglas, 2021), yet if value choices are difficult to identify, trace, and explain in lay terms, these problems are not eliminated by requiring accountability only for values and interests.

Inappropriate criticism can then contribute to a loss of credibility. Take the “Climategate” affair: Based on leaked emails from the University of East Anglia, climate skeptics accused scientists of fraud and misconduct, which also influenced public perception of the trustworthiness of science through a broader media uptake. Of course, greater public accountability may in fact be a solution to the problem of Climategate (e.g., Beck, 2012).Footnote 12 This case nevertheless shows the danger of inappropriate criticism: It is difficult for laypeople to assess criticism directed at scientific expertise. As long as circumstances are nonideal, that is, if it is difficult to make expert claims and underlying values and interests accessible to the public, or if the public is easily influenced by powerful interest groups, public scrutiny can be weaponized.

Public scrutiny can also negatively affect the expertise itself—thus reducing its trustworthiness. This has been explicitly pointed out for deliberative and participatory settings in which scientific experts work closely together with local stakeholders, for example, policy dialogues in public health contexts. Mitchell, Reinap, Moat, and Kuchenmüller (2023, 8) argue that participants, especially nonscientists, may be afraid of saying something wrong: Opening up tentative deliberations to public scrutiny “risks changing their nature—if participants are concerned about saying the ‘wrong’ thing, revealing their ignorance, being held to account for something they casually suggest, or being criticized or humiliated for their opinions, they will be much more guarded about what they say.” Transparency and public scrutiny can backfire due to the vulnerability of the setting and participants.

The expertise itself may also be changed by criticism that is voiced or influenced by powerful interest groups. Biddle, Kidd, and Leuschner (2017) have argued that criticism can be “epistemically detrimental”: It can produce intimidation by creating an atmosphere in which scientists refrain from addressing certain topics or from arguing as forcefully as they believe is appropriate. Such criticism can even be epistemically corrupting: The reasonable anticipation of intimidation or threat can foster a disposition to preemptively understate epistemic claims.

Note that these latter discussions of epistemic intimidation are not framed in terms of transparency or accountability. However, they indicate that demanding that experts respond to public criticism can be dangerous if that criticism is inappropriate or turns into harassment. Again, examples of such effects often come from climate science (Biddle, Kidd, & Leuschner, 2017). Here, problematic dissenting views often spilled over into mainstream media and politics, where they led to different attempts to intimidate scientists, from open letters to congressional hearings. Similar instances of intimidation attempts have also been pointed out in the context of COVID-19 (e.g., Nogrady, 2021).

6. Transparency’s Regulatory Force

Value transparency is neither an easy solution nor an ineffective or dangerous mechanism for promoting trust. It is a tool that comes not only with benefits but also with costs—which might be inevitable or outweighed by the benefits. These trade-offs vary with the intended extent of engagement. Different senses of transparency are implicit in arguments for transparency, and these senses come with different benefits and costs.

Importantly, transparency can be a regulatory tool: As I have pointed out in Section 3, transparency is not only discussed as a rather abstract norm for science communication, but it also appears as a more specific, institutional requirement for scientific advisory bodies and for providing policy-relevant expertise. Because it is a demand placed on scientific experts, transparency restricts the autonomy of science to a certain extent and introduces some form of public oversight (e.g., Resnik, 2008).

The complexities of transparency matter in the context of regulation: It is important to think of the relevant sense of transparency that needs to be at work and the dangers that this sense might involve. Existing approaches like Elliott’s (2022) taxonomy are clearly helpful in this regard. However, the extent of engagement is crucial, too—not only because it explicates an additional ambiguity, but also because it directly addresses transparency’s regulatory force.

Transparency can seem to be something that is beneficial for several reasons, as well as less demanding and interfering compared to other regulatory tools. This makes it attractive for scientific expertise, whose autonomy is clearly valued. However, considering the extent of engagement reveals that transparency can be a more demanding regulatory tool than it appears to be. Depending on the benefits one wants to gain from implementing transparency requirements, publicity or accountability might be necessary—which come with higher demands for scientific experts and restrict their autonomy to a greater extent.

This parallels debates outside of science, where transparency is discussed as a mechanism for promoting sound markets or good government. For instance, Etzioni (2010) argues that transparency may appear as an alternative to regulating the actions of companies or politicians, yet in fact, transparency is a form of regulation.

The philosophical discussion regarding conflicts of interest can serve as a case in point here: While disclosure policies are very popular, they have been criticized for various reasons, as I noted in Section 5.a. This suggests that transparency requirements either need to involve a different level of engagement and thus reduce autonomy—for example, by establishing additional avenues for critical evaluation by peer researchers and regulators (de Melo-Martín & Intemann, 2009, 1641; Elliott, 2008, 21)—or one needs to resort to other regulatory instruments.

Again, this speaks neither for nor against transparency requirements. In many cases of scientific expertise, we want some public oversight, for example, because the expertise might be distorted, or because we fear biased political uptake. However, it underlines the importance of being aware of the extent of engagement. Transparency is a regulatory tool, as are other tools like diversity or participatory measures—and it is not necessarily preferable based on concerns over a lower regulatory force.

7. Conclusion

I have argued that transparency is an ambiguous requirement, even beyond the complexities that Elliott (2022) has pointed out. I distinguished between three senses of transparency: mere transparency, publicity, and accountability. Mere transparency means merely disclosing information, publicity involves making information publicly accessible and communicating it to a specific audience, and accountability describes a two-way communication that entails reacting to criticism or sanctions. Arguments for transparency of values and interests as a tool for facilitating trust in science work with different senses of transparency. However, this also comes with different costs, as requiring mere transparency, publicity, and accountability can backfire in different ways.

Acknowledging the ambiguity of transparency benefits the debate in philosophy of science. Whether transparency promotes trust depends on the respective argument and the form of transparency that is involved. Counterarguments like those from John (2018), Schroeder (2021), or Nguyen (2022) are directed at specific forms of transparency and thus only at specific arguments for transparency. Taking the extent of engagement into account allows for a more nuanced view of the relationship between transparency and trust. Furthermore, the extent of engagement matters when transparency is an institutional requirement: Requiring transparency amounts to different things and involves different trade-offs. Merely talking about transparency in a general sense hides the fact that it can be a more demanding and interfering regulatory tool than it appears to be.

Acknowledgments

The author would like to thank Torsten Wilholt for discussing various earlier versions of this article. The author also received detailed and helpful comments from Kevin Elliott, Robert Frühstückl, Hannah Hilligardt, Anton Killin, and Faik Kurtulmus. Furthermore, this article benefited from presentations at the PhilLiSci meeting at Bielefeld University, the SOCRATES seminar at Leibniz University Hannover, and the GRK 2073 colloquium. Finally, the author is grateful to the two anonymous reviewers for their comments and suggestions.

Funding statement

The research leading to these results received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project 254954344/GRK2073.

Competing interests

The author declares none.

Hanna Metzen is a postdoctoral researcher at Bielefeld University. Her research focuses on public trust in scientific expertise. Currently, she also works on the notion of scientific authority as well as on the role of science journalists in trust relationships.

Footnotes

1 This is similar to what Elliott and Resnik (2019) subsume under “socially relevant transparency,” yet with a stronger focus on values and interests as the content and the public as the audience.

2 I thank Torsten Wilholt and David Lambert for their help in coming up with the term.

3 Naurin (2006) calls this transparency; I added the “mere” to avoid confusion with transparency as the umbrella term.

4 Accessibility is also an ambiguously used term. For example, Naurin (2006, 91) uses accessibility interchangeably with availability and claims it to be a criterion of mere transparency. Especially in the context of the Open Science Movement, accessibility is sometimes mentioned as a feature separate from intelligibility: According to this understanding, accessible data are data that can readily be found—which is also what is captured by terms like “open access”—whereas intelligible data need to be communicated in a way that the audience is able to understand (Royal Society, 2012; Elliott & Resnik, 2019). In contrast, I follow O’Neill’s (2009, 175) richer understanding of the term, where accessibility is a feature of effective communication and includes intelligibility but differs from availability. Access is thus more about acquiring information with little cognitive cost or difficulty. Whether information can be found in the first place is an interesting question, but I believe it to be less relevant when looking at transparency in the context of trust and science communication.

5 Note that this is a different level of requirement than the requirements I focus on in this paper. One can require a speaker to be accountable to some audience. However, accountability also implies that the audience can require something from the speaker. I am interested in the former, that is, in requirements (or norms) about accountability.

6 In his content dimension, Elliott (2022, 347) distinguishes between “data, methods, code, materials” and “interpretations of the data, methods, and code for nonspecialists”, as well as between “value judgments of many sorts”, “values or factors that influence the judgments”, and “implications of value judgments”. Taken together with the audience dimension, this mirrors parts of the transparency–publicity distinction. However, it is helpful in my view to explicitly consider publicity and its emphasis on accessibility; otherwise, it is easy to overlook the different demands of transparency.

7 While aspirations cannot be sanctioned, deliberative accountability could still be something a speaker aspires to, for example, by trying to be open to criticism.

8 This does not necessarily mean that trustors and trustees hold the same values. Often, this argument is framed in terms of inductive risks: When weighing inductive risks, experts should take the values of the trustor into account (Irzik & Kurtulmus, 2019). Others argue more generally that trustworthy experts should “work in the public interest” (Goldenberg, 2023, 370). In both cases, I can trust an expert who generally holds other social or political values than I do, as long as she considers my values or interests when making a specific claim or recommendation.

9 Note that mere transparency is sufficient here as values are generally easy to understand. However, this may not always be the case. Additionally, values can be quite abstract, which may make it harder to assess value alignment.

10 The argument appears in debates on conflict of interest disclosure: Here, the question is whether disclosure policies prevent bias by discouraging researchers from entering into compromising financial relationships in the first place. However, this is usually dismissed on empirical grounds (de Melo-Martín & Intemann, 2009).

11 Nguyen (2022, 5) is quite explicit about the type of transparency he is considering: “where expert domains are made transparent for the purpose of assessment by the public.” He also distinguishes it from accountability tied to formal systems of sanctions: “Transparency and accountability are often deeply interrelated, but we can have transparency without formal systems of accountability. We can simply publicize information and then let the chips fall where they may.” The type of transparency Nguyen considers in his paper is what I call publicity here.

12 John (2018) discusses the Climategate affair as an example of the dangers of transparency. However, others attribute the affair’s damage to a lack of transparency and accountability instead (Beck, 2012).

References

Baume, S. (2018). Publicity and transparency: The itinerary of a subtle distinction. In Alloa, E. & Thomä, D. (Eds.), Transparency, society and subjectivity (pp. 203–224). Springer International Publishing. https://doi.org/10.1007/978-3-319-77161-8_10
Beck, S. (2012). Between tribalism and trust: The IPCC under the ‘Public Microscope’. Nature and Culture, 7(2), 151–173. https://doi.org/10.3167/nc.2012.070203
Bennett, M. (2020). Should I do as I’m told? Trust, experts, and Covid-19. Kennedy Institute of Ethics Journal, 30(3–4), 243–263. https://doi.org/10.1353/ken.2020.0014
Biddle, J. B., Kidd, I. J., & Leuschner, A. (2017). Epistemic corruption and manufactured doubt: The case of climate science. Public Affairs Quarterly, 31(3), 165–187. https://doi.org/10.2307/44732791
Brown, M. B. (2009). Science in democracy: Expertise, institutions, and representation. MIT Press. https://doi.org/10.7551/mitpress/9780262013246.001.0001
Bueter, A. (2017). Androcentrism, feminism, and pluralism in medicine. Topoi, 36(3), 521–530. https://doi.org/10.1007/s11245-015-9339-y
Carrier, M. (2022). What does good science-based advice to politics look like? Journal for General Philosophy of Science, 53(1), 5–21. https://doi.org/10.1007/s10838-021-09574-2
Cologna, V., Baumberger, C., Knutti, R., Oreskes, N., & Berthold, A. (2022). The communication of value judgements and its effects on climate scientists’ perceived trustworthiness. Environmental Communication, 16(8), 1094–1107. https://doi.org/10.1080/17524032.2022.2153896
de Melo-Martín, I., & Intemann, K. (2009). How do disclosure policies fail? Let us count the ways. The FASEB Journal, 23(6), 1638–1642. https://doi.org/10.1096/fj.08-125963
de Melo-Martín, I., & Intemann, K. (2018). The fight against doubt. Oxford University Press. https://doi.org/10.1093/oso/9780190869229.001.0001
Douglas, H. E. (2008). The role of values in expert reasoning. Public Affairs Quarterly, 22(1), 1–18.
Douglas, H. E. (2009). Science, policy, and the value-free ideal. University of Pittsburgh Press. https://doi.org/10.2307/j.ctt6wrc78
Douglas, H. E. (2018). From tapestry to loom: Broadening the perspective on values in science. Philosophy, Theory, and Practice in Biology, 10(8), 1–8. https://doi.org/10.3998/ptpbio.16039257.0010.008
Douglas, H. E. (2021). Squaring expertise with accountability. In Science, values, and democracy (pp. 67–96). Consortium for Science, Policy & Outcomes.
EASAC. (2014). Good practice in the dialogue between science academies and policy communities. European Academies Science Advisory Council. https://easac.eu/fileadmin/ppt/Science-Policy-Dialogue/Short_EASAC_Guidelines_PDF.pdf. Accessed 27.7.24.
Elliott, K. C. (2008). Scientific judgment and the limits of conflict-of-interest policies. Accountability in Research, 15(1), 1–29. https://doi.org/10.1080/08989620701783725
Elliott, K. C. (2017). A tapestry of values: An introduction to values in science. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190260804.001.0001
Elliott, K. C. (2022). A taxonomy of transparency in science. Canadian Journal of Philosophy, 52(3), 342–355. https://doi.org/10.1017/can.2020.21
Elliott, K. C., McCright, A. M., Allen, S., & Dietz, T. (2017). Values in environmental research: Citizens’ views of scientists who acknowledge values. PLoS One, 12(10), e0186049. https://doi.org/10.1371/journal.pone.0186049
Elliott, K. C., & Resnik, D. B. (2014). Science, policy, and the transparency of values. Environmental Health Perspectives, 122(7), 647–650. https://doi.org/10.1289/ehp.1408107
Elliott, K. C., & Resnik, D. B. (2019). Making open science work for science and society. Environmental Health Perspectives, 127(7), 075002. https://doi.org/10.1289/EHP4808
Elster, J. (2015). Introduction. In Elster, J. (Ed.), Secrecy and publicity in votes and debates (pp. 1–14). Cambridge University Press. https://doi.org/10.1017/CBO9781316015360
Etzioni, A. (2010). Is transparency the best disinfectant? Journal of Political Philosophy, 18(4), 389–404. https://doi.org/10.1111/j.1467-9760.2010.00366.x
Fernàndez Pinto, M. (2020). Commercial interests and the erosion of trust in science. Philosophy of Science, 87(5), 1003–1013. https://doi.org/10.1086/710521
Furman, K. (2020). Emotions and distrust in science. International Journal of Philosophical Studies, 28(5), 713–730. https://doi.org/10.1080/09672559.2020.1846281
Game, E. T., Schwartz, M. W., & Knight, A. T. (2015). Policy relevant conservation science. Conservation Letters, 8(5), 309–311. https://doi.org/10.1111/conl.12207
Goldenberg, M. J. (2023). Public trust in science. Interdisciplinary Science Reviews, 48(2), 366–378. https://doi.org/10.1080/03080188.2022.2152243
Government of Canada. (2024). Covid-19 vaccine task force. Government of Canada. https://ised-isde.canada.ca/site/biomanufacturing/en/covid-19-vaccine-task-force. Accessed 27.7.24.
Gundersen, T. (2024). Trustworthy science advice: The case of policy recommendations. Res Publica, 30, 125–143. https://doi.org/10.1007/s11158-023-09625-z
Hicks, D. J., & Lobato, E. J. C. (2022). Values disclosures and trust in science: A replication study. Frontiers in Communication, 7, 1017362. https://doi.org/10.3389/fcomm.2022.1017362
Holman, B., & Wilholt, T. (2022). The new demarcation problem. Studies in History and Philosophy of Science, 91, 211–220. https://doi.org/10.1016/j.shpsa.2021.11.011
Intemann, K. (2024). Value transparency and promoting warranted trust in science communication. Synthese, 203(2), 42. https://doi.org/10.1007/s11229-023-04471-1
Irzik, G., & Kurtulmus, F. (2019). What is epistemic public trust in science? The British Journal for the Philosophy of Science, 70(4), 1145–1166. https://doi.org/10.1093/bjps/axy007
Jarman, H., Rozenblum, S., Falkenbach, M., Rockwell, O., & Greer, S. L. (2022). Role of scientific advice in Covid-19 policy. BMJ, 378, e070572. https://doi.org/10.1136/bmj-2022-070572
John, S. (2018). Epistemic trust and the ethics of science communication: Against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75–87. https://doi.org/10.1080/02691728.2017.1410864
Kitcher, P. (2011). Science in a democratic society. Prometheus Books. https://doi.org/10.1163/9789401207355_003
Kogelmann, B. (2021). Secrecy and transparency in political philosophy. Philosophy Compass, 16(4), e12733. https://doi.org/10.1111/phc3.12733
Kovaka, K. (2021). Climate change denial and beliefs about science. Synthese, 198(3), 2355–2374. https://doi.org/10.1007/s11229-019-02210-z
Langvatn, S. A., & Holst, C. (2024). Expert accountability: What does it mean, why is it challenging – and is it what we need? Constellations, 31(1), 98–113. https://doi.org/10.1111/1467-8675.12649
Lindstedt, C., & Naurin, D. (2010). Transparency is not enough: Making transparency effective in reducing corruption. International Political Science Review, 31(3), 301–322. https://doi.org/10.1177/0192512110377602
Loewenstein, G., Sah, S., & Cain, D. M. (2012). The unintended consequences of conflict of interest disclosure. JAMA, 307(7), 669–670. https://doi.org/10.1001/jama.2012.154
Mansbridge, J. (2009). A ‘Selection Model’ of political representation. Journal of Political Philosophy, 17(4), 369–398. https://doi.org/10.1111/j.1467-9760.2009.00337.x
Metzen, H. (2024). Objectivity, shared values, and trust. Synthese, 203(2), 60. https://doi.org/10.1007/s11229-024-04493-3
Mitchell, P., Reinap, M., Moat, K., & Kuchenmüller, T. (2023). An ethical analysis of policy dialogues. Health Research Policy and Systems, 21(1), 13. https://doi.org/10.1186/s12961-023-00962-2
Naurin, D. (2006). Transparency, publicity, accountability – the missing links. Swiss Political Science Review, 12(3), 90–98.
Nguyen, C. T. (2022). Transparency is surveillance. Philosophy and Phenomenological Research, 105(2), 331–361. https://doi.org/10.1111/phpr.12823
Nogrady, B. (2021). ‘I hope you die’: How the COVID pandemic unleashed attacks on scientists. Nature, 598(7880), 250–253. https://doi.org/10.1038/d41586-021-02741-x
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., et al. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
OECD. (2015). Scientific advice for policy making: The role and responsibility of expert bodies and individual scientists. OECD Science, Technology and Industry Policy Papers No. 21. OECD.
O’Neill, O. (2006). Transparency and the ethics of communication. In Hood, C. & Heald, D. (Eds.), Transparency: The key to better governance? (pp. 74–90). Oxford University Press.
O’Neill, O. (2009). Ethics for communication? European Journal of Philosophy, 17(2), 167–180. https://doi.org/10.1111/j.1468-0378.2009.00346.x
Oreskes, N. (2019). Why trust science? Princeton University Press.
Parker, W. S., & Lusk, G. (2019). Incorporating user values into climate services. Bulletin of the American Meteorological Society, 100(9), 1643–1650. https://doi.org/10.1175/BAMS-D-17-0325.1
Philp, M. (2009). Delimiting democratic accountability. Political Studies, 57(1), 28–53. https://doi.org/10.1111/j.1467-9248.2008.00720.x
Pielke, R. A. (2007). The honest broker. Cambridge University Press. https://doi.org/10.1017/CBO9780511818110
Resnik, D. B. (2008). Scientific autonomy and public oversight. Episteme, 5(2), 220–238. https://doi.org/10.3366/E1742360008000336
Resnik, D. B., & Elliott, K. C. (2023). Science, values, and the new demarcation problem. Journal for General Philosophy of Science, 54(2), 259–286. https://doi.org/10.1007/s10838-022-09633-2
Royal Society. (2012). Science as an open enterprise. The Royal Society. https://royalsociety.org/news-resources/projects/science-public-enterprise/report/. Accessed 27.7.24.
Schroeder, S. A. (2021). Democratic values: A better foundation for public trust in science. The British Journal for the Philosophy of Science, 72(2), 545–562. https://doi.org/10.1093/bjps/axz023
SMC. (2022). Register of interests policy: Guidance note for scientists. Science Media Centre. https://www.sciencemediacentre.org/working-with-us/for-scientists/guidance-note-register-of-interests-policy/. Accessed 27.7.24.
Solomon, M. (2021). Trust: The need for public understanding of how science works. Hastings Center Report, 51(S1), S36–S39. https://doi.org/10.1002/hast.1227
Whyte, K., & Crease, R. (2010). Trust, expertise, and the philosophy of science. Synthese, 177(3), 411–425. https://doi.org/10.1007/s11229-010-9786-3
Wilholt, T. (2013). Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2), 233–253. https://doi.org/10.1093/bjps/axs007
Wylie, A. (2015). A plurality of pluralisms: Collaborative practice in archaeology. In Padovani, F., Richardson, A., & Tsou, J. Y. (Eds.), Objectivity in science: New perspectives from science and technology studies (pp. 189–210). Springer. https://doi.org/10.1007/978-3-319-14349-1_10

Figure 1. The extent of engagement in transparency requirements. On the left is the speaker; on the right is the audience, which can be very broad or more specific. The arrows show the direction of information transmission. The key symbol represents accessibility and assessability, while the flash symbol represents criticism or sanctioning.