
How confident are you in the ability of experts to provide reliable information? Evidence from a choice experiment on microplastics

Published online by Cambridge University Press:  20 August 2025

Peter King*
Affiliation:
Sustainability Research Institute, University of Leeds, Leeds, UK

Abstract

Policy making in areas of scientific uncertainty may be shaped by the public’s stated preferences (SP). SP surveys provide respondents with information about the scenario, typically from expert sources. Here, we tested whether respondents’ pre-existing confidence in the ability of experts in general to provide reliable information was associated with (a) status quo bias, (b) response certainty and (c) willingness to pay (WTP) estimates. Using 670 responses to a 2020 choice experiment on microplastic restrictions in the UK, we show that being ex ante more confident was significantly related to less frequent status quo choices and higher response certainty. However, we only observed differences in mean WTP for our ‘microplastics released’ attribute. Our findings suggest that confidence in expert-provided information shapes how respondents engage with SP surveys, particularly in contexts of scientific uncertainty. Future work to further understand determinants and consequences of perceived expert trustworthiness would be insightful.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

Where scientific uncertainty persists, public confidence in expert-provided information plays a crucial role in shaping attitudes towards policy issues (e.g., Salanié and Treich, Reference Salanié and Treich2009; Sundblad et al., Reference Sundblad, Biel and Gärling2009; Bennett, Reference Bennett2020). Whether the public trusts expert-provided information is an important aspect of many policy issues including vaccine uptake (c.f., Larson et al., Reference Larson, Clarke, Jarrett, Eckersberger, Levine, Schulz and Paterson2018; Sturgis et al., Reference Sturgis, Brunton-Smith and Jackson2021), climate change (e.g., Sundblad et al., Reference Sundblad, Biel and Gärling2009) and how much to invest in adaptation and mitigation (Salanié and Treich, Reference Salanié and Treich2009). Prior research has evaluated how respondent beliefs about the trustworthiness, that is, the credibility or reliability, of those providing information may affect stated preferences (SP) for policy. For example, Hoehn and Randall (Reference Hoehn and Randall2002) discuss how respondents’ values for injury prevention are influenced by confidence in new information on injury risks. While previous research has examined trust in specific organizations (Khachatryan et al., Reference Khachatryan, Rihn and Wei2021), personal knowledge (LaRiviere et al., Reference LaRiviere, Czajkowski, Hanley, Aanesen, Falk-Petersen and Tinch2014) or survey-provided information (Kataria et al., Reference Kataria, Bateman, Christensen, Dubgaard, Hasler, Hime, Ladenburg, Levin, Martinsen and Nissen2012), no study has directly examined general trust in experts. 
It is often asserted that perceived credibility is a crucial element of survey design (e.g., Johnston et al., Reference Johnston, Boyle, Adamowicz, Bennett, Brouwer, Cameron, Hanemann, Hanley, Ryan, Scarpa and Tourangeau2017; Welling et al., Reference Welling, Zawojska and Sagebiel2022), which can influence protest votes (c.f., Meyerhoff and Liebe, Reference Meyerhoff and Liebe2009; Chen and Hua, Reference Chen and Hua2015; Rakotonarivo et al., Reference Rakotonarivo, Schaafsma and Hockley2016; Makriyannis et al., Reference Makriyannis, Johnston and Zawojska2024) and welfare estimates (Oh and Hong, Reference Oh and Hong2012; Remoundou et al., Reference Remoundou, Kountouris and Koundouri2012). Yet, to the best of our knowledge, no previous study has (a) ascertained to what extent respondents trust that experts can credibly provide reliable information, and (b) examined how this affects status quo choices, stated certainty and ultimately welfare estimates. Our study, therefore, fills this gap and investigates how general trust in experts can influence behaviour in, and results from, SP surveys.

SP surveys provide respondents with information about the policy context and background to ensure that responses are well-informed (Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021) and that estimated values are internally valid (Rakotonarivo et al., Reference Rakotonarivo, Schaafsma and Hockley2016). Under uncertainty, public preferences and risk perception are a strong, but not sole, influence on whether a regulator overinvests in abatement measures (Salanié and Treich, Reference Salanié and Treich2009). When an SP survey is designed to elicit public preferences in a context that is unfamiliar to respondents or inherently uncertain, researchers must clearly communicate both the current state of scientific evidence and areas of uncertainty (Johnston et al., Reference Johnston, Boyle, Adamowicz, Bennett, Brouwer, Cameron, Hanemann, Hanley, Ryan, Scarpa and Tourangeau2017). We distinguish between risky attributes, where probabilities are known and explicitly stated, and uncertain attributes, where probabilities are unknown or imprecise (Shaw, Reference Shaw2016).

Our study focuses on microplastics, a topic with high public awareness (Janzik et al., Reference Janzik, Koch, Zamariola, Vrbos, White, Pahl and Berger2024) but significant scientific uncertainty regarding long-term human health effects (Catarino et al., Reference Catarino, Kramm, Völker, Henry and Everaert2021). The accumulation of microplastics in the marine environment increases the likelihood of harm, particularly through immune and inflammatory responses (e.g., Duis and Coors, Reference Duis and Coors2016; Burns and Boxall, Reference Burns and Boxall2018; Kosuth et al., Reference Kosuth, Mason and Wattenberg2018; De-la-Torre, Reference De-la-Torre2020; Prata et al., Reference Prata, da Costa, Lopes, Duarte and Rocha-Santos2020; Vethaak and Legler, Reference Vethaak and Legler2021; Thompson et al., Reference Thompson, Courtene-Jones, Boucher, Pahl, Raubenheimer and Koelmans2024). While plastic production and public awareness continue to increase (Lebreton et al., Reference Lebreton, Slat and Ferrari2018), qualitative research suggests that there remains a notable gap between subject-matter experts, such as toxicologists and environmental scientists, and public perceptions regarding microplastics’ potential health risks (Kramm et al., Reference Kramm, Steinhoff, Werschmöller, Völker and Völker2022). As respondents may not be aware of the true harmfulness of microplastics (Janzik et al., Reference Janzik, Koch, Zamariola, Vrbos, White, Pahl and Berger2024), expert assessments represent the primary source of information on potential health effects, though these assessments themselves remain uncertain. Such assessments, which indicate that no direct health effects have been observed (Catarino et al., Reference Catarino, Kramm, Völker, Henry and Everaert2021), may contrast with public concerns about the spread and potential risks of microplastics (Kramm et al., Reference Kramm, Steinhoff, Werschmöller, Völker and Völker2022).
Accurately and fairly communicating this mix of potential risks, while acknowledging that no direct health harms have been observed thus far, remains a challenge for both scientists (Catarino et al., Reference Catarino, Kramm, Völker, Henry and Everaert2021) and regulators interested in designing and justifying precautionary policies (c.f., King, Reference King2022).

Our study provides direct evidence that general confidence in experts influences choice behaviour in SP surveys. These findings extend prior research on trust in information sources (c.f., Kataria et al., Reference Kataria, Bateman, Christensen, Dubgaard, Hasler, Hime, Ladenburg, Levin, Martinsen and Nissen2012; LaRiviere et al., Reference LaRiviere, Czajkowski, Hanley, Aanesen, Falk-Petersen and Tinch2014; Chen and Hua, Reference Chen and Hua2015; Khachatryan et al., Reference Khachatryan, Rihn and Wei2021) by demonstrating that low pre-existing confidence in experts was associated with (a) a stronger bias towards choosing the status quo alternative in a choice experiment (CE; e.g., Meyerhoff and Liebe, Reference Meyerhoff and Liebe2009), (b) reduced response certainty (e.g., Dekker et al., Reference Dekker, Hess, Brouwer and Hofkes2016; Uggeldahl et al., Reference Uggeldahl, Jacobsen, Lundhede and Olsen2016; Dave et al., Reference Dave, Toner and Chen2023) and (c) differences in welfare estimates for uncertain attributes (e.g., Czajkowski et al., Reference Czajkowski, Hanley and LaRiviere2016). Although our design does not allow us to test whether the information itself in the survey affected stated confidence in experts, our results indicate that status quo bias, response certainty and welfare valuations are sensitive to perceived expert credibility in uncertain policy contexts.

Whereas state-of-the-art research has primarily examined how the presentation of information, rather than beliefs about its credibility, may affect SP (e.g., Jacobsen et al., Reference Jacobsen, Boiesen, Thorsen and Strange2008; Hasselström and Håkansson, Reference Hasselström and Håkansson2014; Czajkowski et al., Reference Czajkowski, Hanley and LaRiviere2016; Chen and Cho, Reference Chen and Cho2019; Welling et al., Reference Welling, Zawojska and Sagebiel2022, Reference Welling, Sagebiel and Rommel2023; Makriyannis et al., Reference Makriyannis, Johnston and Zawojska2024), we examined how pre-existing beliefs about the reliability of expert-provided information can ultimately shape survey responses. In our non-experimental approach, all respondents received the same explanatory information about microplastics, including their definition, prevalence, health risks and regulation, before completing the SP task. We then asked respondents about their confidence in the ability of experts to provide reliable information, which captured general trust in expert-provided information and did not refer to a specific expert, field or institution.

Trustworthiness is often measured using five-item Likert scales (c.f., Sturgis et al., Reference Sturgis, Brunton-Smith and Jackson2021), though some studies, like ours, use a single-question measure, while others construct multi-item trust scales (Larson et al., Reference Larson, Clarke, Jarrett, Eckersberger, Levine, Schulz and Paterson2018). Trust and trustworthiness can be shaped by social norms (Sturgis et al., Reference Sturgis, Brunton-Smith and Jackson2021), media influence (He et al., Reference He, Mol, Zhang and Lu2015) and perceptions of competency (Levi and Stoker, Reference Levi and Stoker2000). Consequently, the response of regulators and scientists to microplastic pollution today may have lasting effects for the public’s perceived trust in these institutions. The aim of our study was to demonstrate that pre-existing beliefs about the reliability of expert-provided information can affect SP survey behaviour and results. Our findings underscore the importance of public confidence in experts in shaping SPs for environmental policy in the context of uncertainty.

The following section describes our research hypotheses, the design of our CE and the experts question, the econometric framework, and our use of entropy-balancing weights in the choice models to eliminate the effect of confounding variables within our cross-sectional research design, which might otherwise affect identification of the effect of greater confidence in expert-provided information on willingness to pay (WTP).

2. Methods

The context for our study is the restriction on the use of microplastics across multiple sectors proposed by the European Chemicals Agency (ECHA, 2019), given microplastics’ extreme persistence in the environment (c.f., Duis and Coors, Reference Duis and Coors2016; Burns and Boxall, Reference Burns and Boxall2018). We used an SP survey to evaluate consumer preferences for this restriction on intentionally added microplastics in cosmetics, and thus our WTP estimates may be of particular relevance to similar regulators. We focused on cosmetics because they are a significant source of intentionally added microplastics, and consumers are more familiar with these products than with other applications. An SP survey was the appropriate method to investigate ex ante preferences for hypothetical trade-offs regarding attributes of cosmetic products. The UK was selected as the study site due to its large market for cosmetics and its potential subjection to the restrictions (ECHA, 2019). While there is limited SP evidence on preferences for microplastic restrictions (King, Reference King2022), prior work has demonstrated that respondents may be willing to pay to reduce concentrations of other marine pollutants (e.g., Logar et al., Reference Logar, Brouwer, Maurer and Ort2014; Choi and Lee, Reference Choi and Lee2018; Kim et al., Reference Kim, Lee and Yoo2019; Abate et al., Reference Abate, Borger, Aanesen, Falk-Andersson, Wyles and Beaumont2020).

2.1. Hypotheses

We formally tested three hypotheses about the influence of pre-existing confidence in expert-provided information on SP survey results. Hypothesis One ($H_0^1$) tested whether the frequency of choosing Option A (status quo) or B (ECHA restriction) varied across different levels of confidence in experts. Prior studies of status quo behaviour in CEs (e.g., Meyerhoff and Liebe, Reference Meyerhoff and Liebe2009) have shown that it may be related to respondents having information (e.g., Welling et al., Reference Welling, Zawojska and Sagebiel2022). Moreover, Czajkowski et al. (Reference Czajkowski, Hanley and LaRiviere2016) commented that status quo choices could be driven by strategic protest votes. In our study, we hypothesised that respondents who were less confident in the ability of experts to provide reliable information were more biased towards the status quo alternative, which proposed that no restrictions would be enacted. Hypothesis One (equation (1)) was tested using $\chi^2$ tests against the null hypothesis that there was no difference in the frequency of choices between confidence levels. Pairwise testing the frequency of status quo choices across each of the five confidence levels produced a matrix of pairwise results:

(1)\begin{equation}H_0^1:\; Choices_{{\text{Confidence}} = 1} = Choices_{{\text{Confidence}} \ne 1}.\end{equation}
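As an illustration of one such pairwise test, the sketch below (Python with SciPy, using invented counts rather than the paper's data) compares status quo choice frequencies between two confidence levels:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are two confidence-in-experts levels,
# columns are counts of status quo (Option A) vs. restriction (Option B) choices.
counts = np.array([[120,  80],   # confidence level 1
                   [ 60, 140]])  # confidence level 5
chi2, p, dof, expected = chi2_contingency(counts)
# Repeating this over every pair of the five confidence levels fills
# the matrix of pairwise test results reported in the paper.
```

The null of equal choice frequencies is rejected when `p` falls below the chosen significance level.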

Hypothesis Two ($H_0^2$) tested whether respondents’ response certainty varied with different levels of confidence in experts. There is a rich literature on the effect of stated choice certainty on SPs (Dekker et al., Reference Dekker, Hess, Brouwer and Hofkes2016; Dave et al., Reference Dave, Toner and Chen2023), especially in the contingent valuation literature (Brouwer et al., Reference Brouwer, Dekker, Rolfe and Windle2010). Although particular attention has been paid to the effect of certainty on hypothetical bias (e.g., Blomquist et al., Reference Blomquist, Blumenschein and Johannesson2009; Morrison and Brown, Reference Morrison and Brown2009; Loomis, Reference Loomis2014), there appears to be no consistent approach to measuring choice certainty. Dave et al. (Reference Dave, Toner and Chen2023) used two different scales (two and five points each), Dekker et al. (Reference Dekker, Hess, Brouwer and Hofkes2016) used a five-point scale, while others used scales with ten or more points (e.g., Morrison and Brown, Reference Morrison and Brown2009; Brouwer et al., Reference Brouwer, Dekker, Rolfe and Windle2010; Uggeldahl et al., Reference Uggeldahl, Jacobsen, Lundhede and Olsen2016). Blomquist et al. (Reference Blomquist, Blumenschein and Johannesson2009) even compared calibration results between a two-point and a ten-point certainty scale, finding that high scores reflected higher certainty. Common throughout the literature is the use of descriptive language (‘very unsure’ to ‘very sure’), which we echoed with our simple three-point descriptive scale (unsure, quite sure, very sure) to balance ease of interpretation and informative value. As asking respondents to indicate choice certainty is unlikely to affect follow-up behaviour (Brouwer et al., Reference Brouwer, Dekker, Rolfe and Windle2010), we asked respondents to indicate their response certainty immediately after our four choice tasks, and prior to the confidence in experts question.

Previous work has examined how certainty affects choices and WTP (Dekker et al., Reference Dekker, Hess, Brouwer and Hofkes2016; Uggeldahl et al., Reference Uggeldahl, Jacobsen, Lundhede and Olsen2016), while here we were interested in the effect of greater confidence in experts on stated choice certainty. We formally tested Hypothesis Two (equation (2)) with $\chi^2$ tests to determine whether the frequency of each level of choice certainty (unsure, quite sure, very sure) was the same between each level of confidence in experts. The null hypothesis was no difference in choice frequency between different confidence levels, and we again reported a 20-element matrix of pairwise comparisons between the five levels.

(2)\begin{equation}H_0^2:\; Certainty_{{\text{Confidence}} = 1} = Certainty_{{\text{Confidence}} \ne 1}.\end{equation}

Hypothesis Three ($H_0^3$) tested whether mean attribute WTP varied between levels of confidence. This test indicated how confidence affected the values used in welfare calculations. Previous work has demonstrated that WTP values may be sensitive to the presence of information (e.g., Munro and Hanley, Reference Munro, Hanley, Bateman and Willis2002; Czajkowski et al., Reference Czajkowski, Hanley and LaRiviere2016), but not, to the best of our knowledge, to respondents’ pre-existing beliefs about the reliability of expert-provided information. We applied the non-parametric Mann–Whitney test to evaluate differences in WTP (equation (3)). This test is better suited to evaluating whether mean WTP was statistically different at different levels of confidence in experts than the Poe et al. (Reference Poe, Giraud and Loomis2005) test of differences across the distribution of WTP (e.g., Hynes et al., Reference Hynes, Armstrong, Xuan, Ankamah-Yeboah, Simpson, Tinch and Ressurreição2021a, Reference Hynes, Ankamah-Yeboah, O'Neill, Needham, Xuan and Armstrong2021b).

(3)\begin{equation}H_0^3:\; WTP_{Attribute,\,{\text{Confidence}} = 1} = WTP_{Attribute,\,{\text{Confidence}} \ne 1}.\end{equation}
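A minimal sketch of this test, assuming SciPy and simulated (not the paper's) conditional WTP values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical conditional WTP draws (GBP) for the 'microplastics released'
# attribute; the shift between groups is illustrative only.
wtp_low_confidence  = rng.normal(1.0, 0.5, 200)  # low confidence in experts
wtp_high_confidence = rng.normal(1.6, 0.5, 200)  # high confidence in experts
stat, p = mannwhitneyu(wtp_low_confidence, wtp_high_confidence,
                       alternative="two-sided")
```

A small `p` indicates that WTP differs between the two confidence groups, as in equation (3).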

We tested Hypothesis Three using conditional WTP recovered from two random parameter Mixed Logit (MXL) models: one estimated as standard practice without weights, while the other included entropy-balancing preprocessing weights in the log-likelihood function to eliminate the effect of potential confounders. Both models were otherwise specified identically with covariate controls and interaction effects to identify the effect of greater confidence in experts on WTP.
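The entropy-balancing preprocessing step can be sketched as follows. This is an illustrative dual-form implementation in Python with simulated covariates, not the authors' code; the `entropy_balance` helper and the target moments are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_balance(X, target_means):
    """Solve for normalised weights w_i proportional to exp(z'x_i) whose
    weighted covariate means match target_means (the dual of the
    entropy-balancing problem)."""
    def dual(z):
        # Convex dual objective; its gradient is (weighted means - target).
        return logsumexp(X @ z) - z @ target_means
    res = minimize(dual, np.zeros(X.shape[1]), method="BFGS")
    s = X @ res.x
    return np.exp(s - logsumexp(s))  # weights sum to one

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))    # respondent covariates (e.g., age, income)
target = np.array([0.3, -0.2])   # covariate means of the reference group
w = entropy_balance(X, target)
```

Respondents resembling the reference group receive larger weights, so weighted covariate means coincide across confidence groups before the choice models are estimated.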

2.2. CE design

To evaluate consumer preferences, we included a CE. CEs are based on Lancaster’s (Reference Lancaster1966) characteristics theory of value, which posits that goods are considered to be a combination of their characteristics or attributes. CEs ask respondents to select their preferred alternative option, described by a series of attributes that vary by the levels they take (Hoyos, Reference Hoyos2010). Respondents are assumed to be utility-maximising and select an alternative with levels of attributes that maximise their utility (Train and Weeks, Reference Train, Weeks, Scarpa and Alberini2005). Where a price attribute is included, respondents’ attribute specific WTP for marginal changes in attribute levels can be recovered (Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021).

Our CE design (table 1) used a binary or pairwise design with two alternatives: an opt-out status quo (Option A) and a scenario with changes in attributes likely to arise from the proposed ECHA restriction (Option B). We included a status quo as (a) respondents’ utility may be highest in the status quo, (b) holding the status quo at zero permits identification of the other parameters and (c) it facilitates welfare calculations (Boyle and Özdemir, Reference Boyle and Özdemir2009; Meyerhoff and Liebe, Reference Meyerhoff and Liebe2009). Our design was purposefully simple, given the complexities of describing microplastic pollution. To minimise respondents’ cognitive burden, we thus chose to include only a single changing alternative, Option B. While having a binary choice may improve incentive compatibility, including three or more alternatives may allow for increased precision in the resulting estimates (Jacobsen et al., Reference Jacobsen, Boiesen, Thorsen and Strange2008; Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021). However, including fewer alternatives may also reduce status quo choices (c.f., Boyle and Özdemir, Reference Boyle and Özdemir2009; Oehlmann et al., Reference Oehlmann, Meyerhoff, Mariel and Weller2017), although this is debated and may also be related to the number of choice tasks and attributes (Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021).

Table 1. Example choice task given to respondents

Note: Explanatory text and attribute descriptions were provided following pre-testing.

The alternatives were described by three attributes: two non-monetary (percentage reduction in product performance and percentage reduction in microplastics released) and one monetary (product price). To generalise across cosmetic products, we described them only via price and product performance, which were the most salient in pre-testing, but future work to increase fidelity to market products and include more attributes may be insightful. The attribute description and the levels they took were chosen following a pre-testing process and were described in depth to respondents in the final survey before answering. Pilot versions of the survey experimented with an attribute described as changes in microplastics ingested. This attribute was ultimately removed as there was insufficient scientific evidence to quantify the relationship between microplastics released from cosmetics and the resulting levels of human ingestion, making it difficult for respondents to meaningfully evaluate this attribute.

The ‘product performance’ attribute had three levels (5, 10 and 50 per cent). We expected the mean attribute WTP to take a negative sign as respondents would not be willing to pay for reduced product performance. The reduction-in-release attribute, referred to as ‘microplastics released’, was similar to the ‘potential environmental risk’ attributes used in Logar et al. (Reference Logar, Brouwer, Maurer and Ort2014). The attribute had three levels (10, 40 and 90 per cent) and was expected to have a positive sign as respondents were expected to value improved environmental quality. The monetary ‘price’ attribute had four levels (£0.50, £1.00, £2.50, £5.00) representing an increase on the current price, although for generality, we did not mention a base comparison price. The CE used an orthogonal main-effects design generated in IBM SPSS Statistics 26. There were 16 different choice sets, and each respondent completed one randomly assigned block of four choices. Blocking of the sets was used to minimise task complexity and cognitive burden (Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021).
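The design logic above can be sketched as follows; the random 16-profile fraction below merely stands in for the SPSS orthogonal main-effects algorithm, which selects profiles so that attribute levels are uncorrelated.

```python
import itertools
import random

performance = [5, 10, 50]               # % reduction in product performance
released    = [10, 40, 90]              # % reduction in microplastics released
price       = [0.50, 1.00, 2.50, 5.00]  # GBP increase on the current price

# Full factorial of 3 x 3 x 4 = 36 profiles; the paper drew 16 choice sets
# from this space via an orthogonal main-effects design.
full_factorial = list(itertools.product(performance, released, price))
random.seed(1)
choice_sets = random.sample(full_factorial, 16)  # stand-in fraction only
blocks = [choice_sets[i::4] for i in range(4)]   # four blocks of four tasks each
```

Each respondent would then be randomly assigned one of the four blocks, as in the survey.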

2.2.1. Experts question

We evaluated respondents’ pre-existing beliefs with a five-level Likert scale (1 = Unconfident, 5 = Very confident) that asked, ‘How confident are you in the ability of experts to provide reliable information?’. Beliefs are thus general rather than domain or expert specific. For all respondents, the question was placed after the valuation exercises so as not to otherwise influence behaviour. Although this preserves the content validity of the CE, the placement means that we cannot determine whether confidence was affected by the survey instrument itself. As such, we assume that confidence in experts was exogenously determined by external factors and pre-existing attitudes. The context for the question was the information provided to respondents prior to the CE, explaining the restrictions on microplastics in cosmetics (figures A1–A4 in the online appendix). The information aimed to describe the scientific uncertainty surrounding the potential environmental and health effects of microplastics, but did not specify any one single expert, field or organisation. Therefore, we chose to ask the respondents about ‘experts’ in general and focused on respondents’ beliefs about the reliability of the information itself, rather than the type, presentation or processing of information (c.f., Czajkowski et al., Reference Czajkowski, Hanley and LaRiviere2016; Welling et al., Reference Welling, Zawojska and Sagebiel2022).

Our approach, therefore, specifically assessed how confidence in expert-provided information influences survey responses. We measured this with a single question on a five-item Likert scale, whereas Sturgis et al. (Reference Sturgis, Brunton-Smith and Jackson2021) used a five-item strongly agree–disagree Likert scale with seven questions to more fully describe public trust in different scientists and experts. Although Larson et al. (Reference Larson, Clarke, Jarrett, Eckersberger, Levine, Schulz and Paterson2018) found that the most common approach to measuring trust was a single question on a five-item Likert scale, they cautioned that a scale (i.e., multiple questions on aspects of trust) may provide greater insight. As such, future SP research may be enhanced by using a validated scale with several questions to construct a latent variable of ‘trust’, which a hybrid model could use to explain choices (c.f., Dekker et al., Reference Dekker, Hess, Brouwer and Hofkes2016; Faccioli et al., Reference Faccioli, Czajkowski, Glenk and Martin-Ortega2020; Buckell et al., Reference Buckell, Hensher and Hess2021).

We visualise the distribution of responses in figures A5 and A7 (online appendix), broken down by responses to the perceived risks of microplastics to the respondent and to the environment, and report three positive and significant correlation tests. These results show that respondents who perceived a greater threat from microplastics were typically also those who were more confident in the ability of experts to provide reliable information. Intuitively, this follows as respondents who are more confident in experts’ reliability may then perceive greater possible health risks from microplastics (c.f., Kramm et al., Reference Kramm, Steinhoff, Werschmöller, Völker and Völker2022). Future work with a validated scale with more comprehensive items may be beneficial in this context to fully elucidate how confident respondents are in the ability of experts to provide reliable information, especially about areas of uncertainty. Moreover, varying the placement of the question in the survey may reveal the extent to which survey information affects stated confidence in experts.
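The paper does not name the correlation test used; a Spearman rank correlation is the natural choice for two ordinal Likert items, sketched here on simulated data of the study's sample size (all values invented):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 670  # sample size matching the study
confidence = rng.integers(1, 6, n)  # 1-5: confidence in experts
# A perceived-risk item constructed to co-move with confidence, for illustration.
perceived_risk = np.clip(confidence + rng.integers(-1, 2, n), 1, 5)
rho, p = spearmanr(confidence, perceived_risk)
```

A positive `rho` with a small `p` corresponds to the reported pattern that higher confidence in experts accompanies higher perceived risk.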

2.3. Econometric framework

CE data were analysed using the Random Utility Model (Train and Weeks, Reference Train, Weeks, Scarpa and Alberini2005), under which participants are assumed to choose the utility-maximising alternative in a CE. We write the utility $U$ of respondent $n$ choosing option $i$ in equation (4) as a vector ${X_{in}}$ of the CE attributes (price, product performance, microplastics released), specific to individual $n$ and option $i$, with ${\beta _n}$ parameters that vary across respondents. The MXL approach provided insight into preference heterogeneity in the sample (Hynes et al., Reference Hynes, Armstrong, Xuan, Ankamah-Yeboah, Simpson, Tinch and Ressurreição2021a) as we recovered the mean and variance of the distribution of $\beta $ parameters, rather than assuming fixed coefficients. We used MXL to allow for random heterogeneity in the attributes and for more flexible substitution patterns. The MXL models were simulated using 2000 Sobol draws (Czajkowski and Budziński, Reference Czajkowski and Budziński2019). Equation (4) featured an error term ${\varepsilon _{in}}$ that is independently and identically distributed extreme value (Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021). Although we do not investigate the error variance (i.e., the scale parameter) in this research, given theoretical challenges (c.f., Hess and Rose, Reference Hess and Rose2012; Hess and Train, Reference Hess and Train2017), future research with a larger sample could follow the Swait and Louviere (Reference Swait and Louviere1993) method to investigate whether pre-existing confidence in the reliability of experts in general affected the error variance.

(4)\begin{equation}{U_{in}} = {\beta _n}{X_{in}} + {\varepsilon _{in}}.\end{equation}

The conditional probability of a respondent n choosing option i is the probability that the utility of option i is greater than the utility of any other available option $j$ in the set ${C_n}$ (equation (5)),

(5)\begin{equation}{P_{in}} = \Pr \left( {{U_{in}} > {U_{jn}},\;\forall j \ne i} \right) = \frac{{\exp \left( {{\beta _n}{X_{in}}} \right)}}{{\sum\nolimits_{j \in {C_n}} \exp \left( {{\beta _n}{X_{jn}}} \right)}}.\end{equation}
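Equation (5) amounts to a softmax over the deterministic utilities, which can be sketched as (the `choice_probabilities` helper is illustrative, not the authors' code):

```python
import numpy as np

def choice_probabilities(V):
    """Conditional logit probabilities from equation (5): V holds the
    deterministic utilities beta_n @ X_in, one entry per alternative in C_n."""
    expV = np.exp(V - V.max())  # subtract the max for numerical stability
    return expV / expV.sum()

# e.g., status quo utility 0 vs. Option B utility 0.8
p = choice_probabilities(np.array([0.0, 0.8]))
```

The probabilities sum to one over the choice set, and the alternative with higher deterministic utility receives the larger probability.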

The log-likelihood of the model can be written as equation (6). As we used the MXL, a simulated log-likelihood was used to solve the expression by integrating over draws from a random distribution; further elaboration is widely available in the literature (e.g., Train and Weeks, Reference Train, Weeks, Scarpa and Alberini2005). Hynes et al. (Reference Hynes, Armstrong, Xuan, Ankamah-Yeboah, Simpson, Tinch and Ressurreição2021a) demonstrated that entropy-balancing weights $\left( {{w_n}} \right)$ may be directly included in the log-likelihood function. We justify the use of entropy-balancing weights because our cross-sectional data meant that we could not randomly assign respondents to different levels of confidence in experts, raising the possibility that pre-existing differences between respondents with higher or lower confidence could confound our results. To mitigate this concern, we used entropy balancing to reweight our sample and achieve covariate balance. In an unweighted model, these weights equal one for all respondents.

(6)\begin{equation}LL = \mathop \sum \limits_{n = 1}^N {w_n}\ln {P_{in}}.\end{equation}
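A compact sketch of the weighted simulated log-likelihood in equation (6), for a stylised mixed logit with a single normally distributed coefficient. The `simulated_loglik` helper and all inputs are illustrative assumptions; the paper's models were richer and used 2000 Sobol draws rather than the pseudo-random draws here.

```python
import numpy as np

def simulated_loglik(beta_mu, beta_sd, X, chosen, w, n_draws=200, seed=0):
    """Weighted simulated log-likelihood for a stylised mixed logit.
    w holds the entropy-balancing weights; setting w to ones recovers
    the unweighted model."""
    rng = np.random.default_rng(seed)
    ll = 0.0
    for n in range(len(chosen)):
        betas = beta_mu + beta_sd * rng.standard_normal(n_draws)  # draws of beta_n
        V = betas[:, None] * X[n]                  # utility per draw, per alternative
        P = np.exp(V - V.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)          # logit probabilities per draw
        ll += w[n] * np.log(P[:, chosen[n]].mean())  # average over draws, then log
    return ll

# Two respondents, two alternatives, one attribute level per alternative
X = np.array([[0.0, 1.0], [0.0, -1.0]])
ll = simulated_loglik(0.5, 0.3, X, chosen=[1, 0], w=np.ones(2))
```

Maximising this expression over `beta_mu` and `beta_sd` would recover the mean and spread of the random coefficient; the weights simply rescale each respondent's contribution.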

Equation (7) specifies the indirect utility function for alternative B as coded in R. We specified one beta for each non-zero level of our non-monetary attributes to investigate non-linearity. A normal distribution was assumed for each non-monetary attribute to allow for both negative and positive preferences (Beharry-Borg and Scarpa, Reference Beharry-Borg and Scarpa2010; Faccioli et al., Reference Faccioli, Czajkowski, Glenk and Martin-Ortega2020). We specified the price parameter to be negative lognormally distributed to ensure the theoretically expected negative sign (e.g., Ghosh et al., Reference Ghosh, Maitra and Das2013; Czajkowski et al., Reference Czajkowski, Budziński, Campbell, Giergiczny and Hanley2017; Mariel et al., Reference Mariel, Hoyos, Meyerhoff, Czajkowski, Dekker, Glenk, Jacobsen, Liebe, Olsen, Sagebiel and Thiene2021). Further, using a lognormal for the monetary attribute and then estimating in WTP-space may also avoid validity issues with recovering WTP as the ratio of two random normal distributions (c.f., Train and Weeks, Reference Train, Weeks, Scarpa and Alberini2005; Daly et al., Reference Daly, Hess and Train2012; Sarrias, Reference Sarrias2020). Although a normally distributed price parameter may be behaviourally plausible in the context of luxury cosmetics good, our CE description listed several typically inexpensive personal care products: toothpaste, shampoo, shower gel, deodorant and most often SPF50 sunscreen. As such, our bid levels may represent a larger fraction of the price and pre-testing demonstrated that respondents were price sensitive at the levels presented in the CE. However, if respondents were considering luxury cosmetics, such as perfumes, instead, our bid levels would have been a smaller fraction of the more expensive price, thus reducing price-sensitivity in the estimated model. 
Future work repeating our CE with luxury cosmetics may offer insight into how important environmental effects are for this type of consumer.
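To illustrate why the negative lognormal guarantees the theoretically expected sign, a short Python sketch follows (the location and scale parameters are hypothetical; the paper's models were estimated in R):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = -1.0, 0.8                  # hypothetical parameters of log(-beta_price)

# negative lognormal: beta_price = -exp(mu + sigma * z) with z ~ N(0, 1),
# so every draw of the price coefficient is strictly negative
z = rng.standard_normal(100_000)
beta_price = -np.exp(mu + sigma * z)
print((beta_price < 0).all())          # True: the expected sign always holds

# By contrast, a normal price coefficient places mass near (and across) zero,
# so WTP computed as a ratio of coefficients in preference space can explode
beta_price_normal = rng.normal(-0.37, 0.35, 100_000)
print((beta_price_normal >= 0).mean()) # a non-trivial share has the wrong sign
```

The draws near zero in the normal case are what make the ratio-based WTP distribution ill-behaved, which is the motivation for estimating in WTP-space with a lognormal price coefficient.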

(7)\begin{align}V_{iB}&=ASC_{SQ}+\beta_{Price}\nonumber \\ & \quad \ast(Price_B+\beta_{ProductPerformance,Medium}\ast(ProductPerformance_B==10) \nonumber\\ & \quad +\beta_{ProductPerformance,High}\ast(ProductPerformance_B==50)\nonumber\\ & \quad +\beta_{MicroplasticsReleased,Medium}\ast(MicroplasticsReleased_B==40)\nonumber\\ & \quad +\beta_{MicroplasticsReleased,High}\ast(MicroplasticsReleased_B==90))+Interactions\end{align}

Equation (8) specifies the interaction term (Interactions), in which the confidence-in-experts question (coded as a continuous variable with values from 1 to 5) was interacted with the two non-zero levels of each non-monetary attribute to evaluate the multifaceted effect of confidence.

(8)\begin{align} Interactions & = {\beta _{Exp*P10}}*\left( {Experts*ProductPerformance_{10}} \right) \nonumber\\ & \quad + {\beta _{Exp*P50}}*\left( {Experts*ProductPerformance_{50}} \right) \nonumber\\ & \quad + {\beta _{Exp*MR40}}*\left( {Experts*MicroplasticsReleased_{40}} \right) \nonumber\\ & \quad + {\beta _{Exp*MR90}}*\left( {Experts*MicroplasticsReleased_{90}} \right). \end{align}

The specification of the indirect utility function for Alternative A included an alternative-specific constant ( $AS{C_{SQ}}$) in equation (9) to capture the respondent's degree of status quo bias (Brouwer et al., 2017). The ASC was interacted with survey debriefing questions to control for the effect of socioeconomic factors on respondents' choices.

(9)\begin{align} AS{C_{SQ}} & = ASC + \left( {{\beta _{Age}}*Ag{e_{dummy}}} \right) + \left( {{\beta _{BluePlanet}}*BluePlanet} \right) \nonumber\\ & \quad + \left( {{\beta _{Certainty}}*Certaint{y_{Q12}}} \right) + \left( {{\beta _{Charity}}*Charity} \right) \nonumber\\ & \quad + \left( {{\beta _{Concern,Q13}}*Concer{n_{Q13}}} \right) + \left( {{\beta _{Concern,Q14}}*Concer{n_{Q14}}} \right) \nonumber\\ & \quad + \left( {{\beta _{Concern,Q15}}*Concer{n_{Q15}}} \right) + \left( {{\beta _{Consequential}}*Consequential} \right) \nonumber\\ & \quad + \left( {{\beta _{Education}}*Education} \right) + \left( {{\beta _{Gender}}*Gender} \right) + \left( {{\beta _{Income}}*Incom{e_{dummy}}} \right) \nonumber\\ & \quad + \left( {{\beta _{Knowledge}}*Knowledg{e_{Q5}}} \right) + \left( {{\beta _{Order}}*Order} \right). \end{align}

2.4. Entropy balancing

While randomized experiments are the gold standard for causal inference, our study relies on cross-sectional survey data, where unobserved confounding may remain. One increasingly popular approach for observational data is to re-weight samples to achieve covariate balance (Vass et al., 2022). Re-weighting the sample with balancing weights is an important step because confounding variables, namely those we observe in the survey, may influence choices directly and indirectly (e.g., Hynes et al., 2021a, 2021b). If confidence levels differ systematically across key covariates, observed associations may be confounded. Entropy balancing allows us to adjust for these observed differences, improving internal validity when comparing groups. Observed confounders can be controlled for if weights can be estimated that balance the groups' observed characteristics. After weighting the sample to remove observed covariate differences, remaining differences in choices, certainty and WTP are more plausibly attributable to differences in respondents' confidence in experts.

Many weighting methods aim for high levels of covariate balance while maintaining sample size. Hainmueller (2012) proposed entropy balancing as an algorithmic solution that always achieves high levels of balance, and it has since been shown to be robust in a range of contexts (Zhao and Percival, 2016). In the CE literature, Hynes et al. (2021b) demonstrated how entropy-balancing weights could be used in log-likelihood functions to weight observations from different groups. While Hynes et al. (2021b) applied weighting to a binary [yes, no] question, we applied it to a five-item Likert scale, treating the scale as a continuous variable and estimating weights for each level. Entropy balancing for continuous treatments has been evaluated favourably by Tübbicke (2021).
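To make the mechanics concrete, entropy balancing can be sketched via its dual problem in Python (the paper used the WeightIt package in R; the covariates and target moments below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_balance(X, target_means):
    """Entropy-balancing weights for the rows of X.

    Solves the dual problem: weights w_i proportional to exp(lam @ x_i),
    with lam chosen so that the weighted covariate means equal target_means
    (the moments of the comparison group).
    """
    def dual(lam):
        return logsumexp(X @ lam) - lam @ target_means
    res = minimize(dual, x0=np.zeros(X.shape[1]), method="BFGS")
    w = np.exp(X @ res.x)
    return w / w.sum()

# hypothetical covariates: a standardised age variable and a gender dummy
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(0.0, 1.0, 300), rng.integers(0, 2, 300)])
target = np.array([0.3, 0.6])        # hypothetical comparison-group means
w = entropy_balance(X, target)
print(np.round(X.T @ w, 3))          # weighted means match the targets
```

The resulting weights are strictly positive, sum to one and reproduce the target covariate means exactly (up to the optimiser's tolerance), which is the "zero covariate differences" property exploited in the weighted MXL models.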

Table A1 (online appendix) reports the group means before and after balancing for each variable at each level of confidence in experts. We weighted the samples using 11 variables from our survey (age, Blue Planet viewing, charity involvement, coronavirus impact, distance, education, employment, gender, knowledge, income, question order) which a priori were expected to influence choices (Faccioli et al., 2020). Income, well known to influence WTP, was measured using brackets of gross monthly income, a measure with which respondents were most familiar, and was dummy coded as above/below sample mean income. A dummy for whether respondents' income was affected by the coronavirus pandemic (the survey was taken in April 2020) was also included for completeness. We controlled for self-reported distance from the coast (categories of kilometres from the coast), which may influence preferences for marine pollution (Hynes et al., 2021a), and survey question order (Day and Prades, 2010). Other variables were included as they were expected (Mihelj et al., 2022) to influence confidence in expert-provided information: respondent age, whether they had watched the Blue Planet TV programme (Hynes et al., 2021b), whether they were involved in or donated to charity, and self-reported prior knowledge of microplastic pollution elicited on a 1–5 scale, as prior attitudinal differences could explain differences in confidence levels. It was particularly important to control for prior knowledge, as higher knowledge could explain differences in confidence, choices and WTP.
Table A2 (online appendix) indicates that entropy balancing achieved a very high degree of balance (differences rounding to 0.000) between each level of pre-existing confidence. Weights were calculated for the population average treatment effect. We then incorporated the weights directly into the log-likelihood function of the weighted MXL models.

2.5. Data collection

The survey was designed between November 2019 and February 2020 with the Environment Agency, industry representatives and academic experts, following Johnston et al.'s (2017) guidance for SP surveys. It included a small pilot study (N = 50) to refine the CE description. The survey had five sections: (1) understanding the socioeconomic distribution of the sample, (2–3) SP tasks, (4) assessing environmental attitudes and perceived risks, and (5) debriefing questions including confidence in experts. Section 3 included two contingent valuation questions, discussed in King (2022). In April 2020, DJS Research Ltd collected 670 UK adult responses to the online survey, reflecting a response rate of 65 per cent. An average respondent took 7.5 minutes to complete the 25 questions. ${\chi ^2}$ tests indicate that the sample was broadly representative of the UK adult population along socioeconomic characteristics (online appendix table A3). Sample income was marginally lower than the UK population average, possibly owing to more female or student respondents in the sample. Full replication is possible through the publicly available exact survey design, estimation R code and output data at the author's GitHub (https://github.com/pmpk20/PhD_CEPaper). The work was undertaken during time at the University of Bath, and additional support was facilitated by specialist and high-performance computing at the University of Kent. The analysis was conducted in R (Version 4.2.0) using the WeightIt and Apollo packages (Hess and Palma, 2019; Greifer, 2022; R Core Team, 2022). Ethical approval for the data collection was granted by the University of Bath.

3. Results

3.1. Hypothesis One: Choices

The ${\chi ^2}$ tests showed that respondents' choices differed statistically depending on their pre-existing level of confidence in experts (figure 1 and table 2). We therefore rejected Hypothesis One; having greater confidence in experts reduced status quo bias. We can further delineate the relationship between confidence and choices by stated certainty (figure A6, online appendix). The status quo Option A was chosen most often by respondents with low confidence, while higher-confidence respondents increasingly chose Option B (figure 1). For example, 18 per cent of respondents (N = 127) always chose the status quo Option A; this proportion was higher amongst those with low confidence, <3/5 (26 per cent, N = 83), than those with higher confidence, >3/5 (12 per cent, N = 44). The exact frequencies are reported in table A4 (online appendix).

Figure 1. Percentage of choices for either Option A or B by self-reported confidence in experts. Horizontal dotted line drawn at 50 per cent of choices.

Table 2. ${\chi ^2}$ test statistics (p values in parentheses) against the null hypothesis that choice frequency did not differ between levels of self-reported confidence in experts

Note: All pairwise test combinations are reported in a triangular matrix.

These results indicate that respondents who were more confident in the information from experts that microplastics may later cause a health risk were much more likely to choose a reformulated cosmetic (Option B), which may mitigate potential health risks. Conversely, respondents who did not have confidence in experts were much more averse to changes from the status quo and more likely to choose Option A.

3.2. Hypothesis Two: Choice certainty

Figure 2. Choice certainty by levels of confidence in experts. Both the most (5/5) and least (1/5) confident respondents were much more likely to say 'very sure', while 'quite sure' was the most common response for all others.

The ${\chi ^2}$ tests found that respondents' self-reported certainty in their responses differed statistically depending on their level of confidence in experts (figure 2 and table 3). We rejected Hypothesis Two, as greater confidence affected respondents' certainty. Each level of confidence differed significantly from every other in the frequency of each level of choice certainty. Figure 2 shows that the distribution of confidence in experts across the five levels was roughly normal for those who were unsure about their choices but shifted rightwards among those who were highly sure about their choices. The shift was not caused by changes in the tails of the distributions – that is, respondents saying that they were 1/5 or 5/5 confident – but by the central mass moving rightwards. The implication is that highly sure respondents were also more confident in experts. As the skewness may reduce cell counts for the ${\chi ^2}$ tests, we simulated the p values to ensure robustness. Respondents with the lowest (1/5) and highest (5/5) levels of confidence had the highest proportions reporting that they were very sure about their choices, while others were less likely to be very sure. We further describe this relationship by choices in figure A6 (online appendix).
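Simulated p values of this kind can be reproduced with a Monte Carlo permutation scheme, analogous to R's `chisq.test(simulate.p.value = TRUE)`; a Python sketch with hypothetical counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

def simulated_chi2_pvalue(table, n_sims=2000, seed=7):
    """Monte Carlo chi-square p-value for a contingency table.

    Rebuilds individual-level (row, column) labels from the table, permutes
    the column labels to mimic independence while preserving both margins,
    and compares the permuted statistics with the observed one.
    """
    rng = np.random.default_rng(seed)
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.concatenate(
        [np.repeat(np.arange(table.shape[1]), row) for row in table]
    )
    obs_stat = chi2_contingency(table)[0]
    hits = 0
    for _ in range(n_sims):
        sim = np.zeros_like(table)
        np.add.at(sim, (rows, rng.permutation(cols)), 1)
        if chi2_contingency(sim)[0] >= obs_stat:
            hits += 1
    return (hits + 1) / (n_sims + 1)    # add-one correction for the observed table

# hypothetical confidence (rows) x certainty (columns) counts
table = np.array([[30, 10, 5],
                  [10, 25, 10],
                  [5, 10, 30]])
print(simulated_chi2_pvalue(table, n_sims=999))
```

The permutation approach keeps the test valid when skewed distributions leave some cells with low expected counts.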

Table 3. The ${\chi ^2}$ test statistic (p value in parentheses) against the null hypothesis that the frequency of respondents reporting each level of certainty was not statistically different between each level of confidence in experts

3.3. Hypothesis Three: WTP

The unweighted and weighted MXL models are presented in table 4, with further results in table A5 (online appendix). The resulting distributions of conditional attribute WTP are visualised in figure 3, with exact summary statistics in table A6 (online appendix). Both models report estimates that are comparable in sign, significance and magnitude, with price sensitivity (Price) negative as expected. We find evidence of status quo bias, indicated by the highly significant and positive ASC coefficient. Respondents were only willing to pay to avoid larger changes in product performance (Product Performance: high), whereas for the release of microplastics only the estimated mean parameter for smaller changes (Microplastics released: medium) was statistically different from zero. However, the standard deviations for each distribution of preferences were highly significant, suggesting a large degree of respondent heterogeneity.

Table 4. Selected estimated parameters from weighted and unweighted random-parameter mixed logit models

Notes: Results are the coefficient estimate and in parentheses are the p values against the null hypothesis that the true coefficient value is zero. In table A5 (online appendix), we detail all estimated results for our control variables (age, blue-planet viewership, respondent stated choice certainty, charity membership, environmental concern, distance from the coast, education levels, gender, income levels, self-reported knowledge of microplastics, survey order and self-reported survey understanding).

Figure 3. Box and whisker plots of the distributions of conditional mean WTP per confidence level (X-axis) per model. Price values are not truly WTP measures but instead the estimated price sensitivity. Y-axis scale varies across facets.

There was no significant effect of confidence on the product performance attributes (Experts * ProductPerformance), possibly because respondents already have well-defined knowledge and preferences about their cosmetic products, which obviates the need for information from expert sources. Conversely, confidence had a highly significant effect on the microplastics released attributes (Experts * MicroplasticsReleased). We argue that the microplastics released attribute was less familiar to respondents, especially compared to the product performance attribute, although we did not recover data on attribute-specific understanding or attendance. While respondents may be used to evaluating product price and performance, the microplastics released attribute requires them to make subjective estimates of how many microplastics are currently included, and of how reducing the number released influences microplastic pollution levels, a less common task. The positive sign on the microplastics released interaction terms indicates that more-confident respondents reported higher WTP. These interaction effects illustrate that the effect of confidence in expert sources only extended to information about which respondents may have been a priori unsure. Although there was very little observable difference in preferences for the two product performance attributes, both models indicate substantial variation in preferences for the price and microplastics released attributes. The price parameters for the lowest-confidence respondents had a much wider distribution, while the estimates for the higher-confidence respondents were much tighter around their mean value.

We tested WTP from the unweighted model, as the estimates from the model with balancing weights are similar by design. For the product performance attribute, we largely could not reject the null hypothesis that mean WTP did not differ statistically between respondents with different confidence levels. These results fit intuitively with figure 3, where only the means for the microplastics released attribute varied with confidence levels. There were therefore mixed results for Hypothesis Three, as greater confidence in experts may affect respondents' choices and resulting mean WTP only for uncertain and unfamiliar attributes (table A7, online appendix).
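As a sketch of the kind of between-group comparison involved (using simulated, hypothetical conditional WTP draws, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# hypothetical conditional mean WTP draws for two confidence groups
wtp_low_confidence = rng.normal(1.0, 0.5, 200)    # e.g., confidence < 3/5
wtp_high_confidence = rng.normal(1.4, 0.5, 200)   # e.g., confidence > 3/5

# Welch's t-test against the null of equal mean WTP between groups
t_stat, p_value = ttest_ind(wtp_low_confidence, wtp_high_confidence,
                            equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Welch's variant is a natural choice here because the conditional WTP distributions had visibly different spreads across confidence levels (figure 3).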

4. Discussion

In various contexts (e.g., Sjöberg, 1999; Cramer et al., 2009; Salanié and Treich, 2009; Sundblad et al., 2009; Bennett, 2020; Brennan, 2022; Mihelj et al., 2022), public confidence in experts can strongly affect behaviour. In our work, we examined how pre-existing confidence in experts affected support for precautionary measures (King, 2022) to restrict microplastics amidst public and expert uncertainty over their long-term effects (c.f., Janzik et al., 2024). Our contribution to the SP literature is to demonstrate that survey behaviours – particularly status quo bias, response certainty and welfare measures – are all sensitive to pre-existing confidence in the reliability of expert-provided information. Our results indicate that more comprehensive strategies to restore and ensure public confidence in experts may have a downstream effect on building support for expert-recommended policies, especially in contexts of uncertainty where the timely implementation of precautionary measures may be critical. Further improving our understanding of the determinants of public confidence in experts, and of how such differences can be influenced by scientific communication strategies, may be beneficial.

Our findings indicate that respondents with pre-existing higher confidence in experts were significantly more likely to choose reformulated cosmetics. In figure A7 (online appendix), we show that this higher confidence was positively correlated with higher perceived risks from microplastics (c.f., Kramm et al., 2022). Thus, respondents who were a priori more confident in the ability of experts to provide reliable information perceived greater health risks from microplastics, which we suggest in turn affected their choices and welfare estimates from our CE. Our results, therefore, suggest that public support for precautionary measures could be shaped by underlying trust in expertise rather than solely by the content of risk communication; building public trust in experts is thus crucial in shaping support for or against precautionary policies.

We found that lower-confidence respondents were more likely to vote for the status quo. While the overall percentage of respondents consistently choosing the status quo was in line with levels in the literature, notably the >50 per cent reported by Meyerhoff and Liebe (2009) and Carson et al. (2020) or the >20 per cent in Welling et al. (2022), the fact that this varied by level of confidence in experts is instructive. Status quo behaviour may arise from respondents being willing to pay something but not facing a combination of attributes they preferred across their four choices (Boxall et al., 2009), or from the design of the CE itself (e.g., Oehlmann and Meyerhoff, 2017; Oehlmann et al., 2017; Welling et al., 2022), although our binary choice design may result in fewer status quo choices (Boyle and Özdemir, 2009). Moreover, Haghani et al. (2021) suggested that task complexity could motivate status quo bias, although our design with three attributes and two alternatives is less likely to suffer from excess task complexity. Consistent choice of the status quo opt-out Option A may indicate strategic protest behaviour (c.f., Carson et al., 2020; Mariel et al., 2021).

A limitation of our survey was not recovering further information on whether status quo choices were explicit protest votes, for instance through debriefing or open-ended questions for those serially choosing the status quo (von Haefen et al., 2005). However, Brouwer and Martín-Ortega (2012) debated whether excluding protest votes biases WTP estimates. Future study may elucidate how protest voting is influenced by factors including pre-existing confidence in experts (c.f., Rakotonarivo et al., 2016). It may be that respondents who lacked confidence in expert-provided information were more likely to strategically protest against the proposed restriction, not believing it necessary (c.f., Meyerhoff and Liebe, 2009; Czajkowski et al., 2016). Indeed, the proportion of respondents who always chose Option B increased with higher levels of confidence, indicating that respondents who were more confident in the information from experts that microplastics may later cause a health risk were more likely to choose a reformulated cosmetic. This finding highlights the potential for low public trust in experts to limit support for precautionary policies, even when scientific evidence may support such measures. However, it also indicates that improving public trust in experts could lead to more informed choices and involvement in policy making (c.f., Salanié and Treich, 2009).

We found that both the most and least confident respondents reported being highly certain of their choices. The most confident respondents may have been highly certain because their choices were consistent with expert-provided information in the survey. Conversely, respondents with low confidence in experts may have been more certain due to a reinforcement of their existing bias towards the status quo alternatives (Meyerhoff and Liebe, 2009), potentially driven by distrust of expert information. This illustrates that preferences could become entrenched if experts are perceived to be untrustworthy, an important aspect of reactions to the coronavirus pandemic and climate change (Bennett, 2020; Brennan, 2022; Mihelj et al., 2022) and of policy communication (Chen and Cho, 2019).

Our results show that WTP is highly contextual, depending on respondent confidence in the reliability of expert-provided information, familiarity with the attribute being valued (c.f., Hasselström and Håkansson, 2014) and the size of the change. For our familiar attribute, product performance, respondents had positive WTP to avoid large reductions, regardless of their confidence in experts. This finding was expected, given the predictable reduction in consumer utility, and suggests that cosmetic manufacturers must be careful when substituting out microplastics to avoid large reductions in product performance. The WTP to avoid reduced product performance, in comparison with preferences for smaller reductions in microplastics released, suggests that for some respondents environmental concern may not be the primary driver of preferences. Respondents were only willing to pay for medium-sized reductions in microplastics released, while consistently prioritizing product performance, a pattern suggestive of 'greenwashing' (e.g., Volschenk et al., 2022; Kolcava, 2023). In this case, greenwashing reflects a preference for purchasing products marketed as environmentally friendly, provided there is no associated reduction in performance. This raises policy concerns, as firms may exploit such preferences to promote green-branded products without meaningful reductions in microplastic pollution. Stricter eco-labelling regulations (c.f., Nygaard, 2023) may be necessary to ensure that green claims are accompanied by meaningfully reduced releases of microplastics.

Importantly, this pattern does not apply to all respondents. Our significant standard deviation estimates, and interaction terms, indicate that preferences were highly heterogeneous, particularly for the microplastics released attribute, where WTP was influenced by pre-existing confidence in experts. Future research could further explore this heterogeneity by incorporating survey questions on greenwashing awareness and motivations or applying latent-class models to identify distinct consumer classes. Qualitative methods, such as respondent interviews, could also provide deeper insight into whether WTP reflects strategic greenwashing behaviour or genuine environmental concern.

Overall, comparing results for our two attributes provides insight into how confidence influenced welfare valuations. Attributes that respondents were already familiar with, such as product performance, were unaffected by pre-existing confidence in experts, whereas unfamiliar attributes, such as microplastics released, were more sensitive to beliefs about the experts providing information on them. Our results are thus consistent with Jacobsen et al. (2008), who found that WTP was affected by whether a species was familiar or not, and with Hasselström and Håkansson (2014), who found that WTP was unaffected by information only for attributes respondents were familiar with. When respondents are more confident in the reliability of experts, they are willing to pay for reductions in microplastics released, suggesting that expert credibility plays a key role in shaping SP for uncertain environmental attributes. Future work investigating whether confidence in the reliability of experts in general affects how respondents attend to or protest such attributes may provide further insight. Although mean differences in WTP are slight, the aggregate effect of lower confidence in experts could meaningfully influence cost–benefit ratios.

For robustness, we controlled for potential differences in confounders of confidence in experts using the entropy-balancing framework, following Hynes et al. (2021a). Controlling for these observed determinants of confidence ensured that our findings were not driven by observed confounding factors.

Despite this robustness, there are three limitations to the survey instrument. Firstly, while the CE method is appropriate for this ex ante valuation, the context of consumer decisions for cosmetics is suitable for follow-up work with a revealed preference measure; however, eliciting truthful comments on confidence in experts may be easier within an anonymous online survey. Secondly, we found that the distribution of confidence in experts across the five levels was right-skewed, notably among those who were highly certain in their choices. In practical terms, this suggests that future work with a continuous sliding scale may improve the precision of our measure. Open-ended qualitative questions to better understand how respondents processed this question may also be a valuable validity check. Thirdly, our cross-sectional data were collected during the height of the coronavirus pandemic restrictions, which could not be avoided given the time constraints and uncertainty over the pandemic's duration, but which unavoidably altered public perceptions of and confidence in experts (Mihelj et al., 2022). Future work, not just repeating the sample post-pandemic but dynamically exploring how confidence shifts over time, may provide valuable insights.

5. Conclusion

Our work illustrated the effect that low public trust in experts and science can have on behaviour in surveys designed to measure public preferences, and the potentially detrimental effect on support for precautionary policies, even in unrelated contexts. This highlights the urgency of addressing and improving public confidence in experts.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S1355770X25100077

Competing interests

The author declares none.

References

Abate, TG, Borger, T, Aanesen, M, Falk-Andersson, J, Wyles, KJ and Beaumont, N (2020) Valuation of marine plastic pollution in the European Arctic: Applying an integrated choice and latent variable model to contingent valuation. Ecological Economics 169. https://doi.org/10.1016/j.ecolecon.2019.106521.
Beharry-Borg, N and Scarpa, R (2010) Valuing quality changes in Caribbean coastal waters for heterogeneous beach visitors. Ecological Economics 69, 1124–1139. https://doi.org/10.1016/j.ecolecon.2009.12.007.
Bennett, M (2020) Should I do as I'm told? Trust, experts, and COVID-19. Kennedy Institute of Ethics Journal 30, 243–263. https://doi.org/10.1353/ken.2020.0014.
Blomquist, GC, Blumenschein, K and Johannesson, M (2009) Eliciting willingness to pay without bias using follow-up certainty statements: Comparisons between probably/definitely and a 10-point certainty scale. Environmental and Resource Economics 43, 473–502. https://doi.org/10.1007/s10640-008-9242-8.
Boxall, P, Adamowicz, WL and Moon, A (2009) Complexity in choice experiments: Choice of the status quo alternative and implications for welfare measurement. Australian Journal of Agricultural and Resource Economics 53, 503–519. https://doi.org/10.1111/j.1467-8489.2009.00469.x.
Boyle, KJ and Özdemir, S (2009) Convergent validity of attribute-based, choice questions in stated-preference studies. Environmental and Resource Economics 42, 247–264. https://doi.org/10.1007/s10640-008-9233-9.
Brennan, J (2022) Can novices trust themselves to choose trustworthy experts? Reasons for (reserved) optimism. In Baghramian, M and Martini, C (eds), Questioning Experts and Expertise. London: Routledge, 122–135. https://doi.org/10.4324/9781003161851.
Brouwer, R, Dekker, T, Rolfe, J and Windle, J (2010) Choice certainty and consistency in repeated choice experiments. Environmental and Resource Economics 46, 93–109. https://doi.org/10.1007/s10640-009-9337-x.
Brouwer, R, Hadzhiyska, D, Ioakeimidis, C and Ouderdorp, H (2017) The social costs of marine litter along European coasts. Ocean & Coastal Management 138, 38–49. https://doi.org/10.1016/j.ocecoaman.2017.01.011.
Brouwer, R and Martín-Ortega, J (2012) Modeling self-censoring of polluter pays protest votes in stated preference research to support resource damage estimations in environmental liability. Resource and Energy Economics 34, 151–166. https://doi.org/10.1016/j.reseneeco.2011.05.001.
Buckell, J, Hensher, DA and Hess, S (2021) Kicking the habit is hard: A hybrid choice model investigation into the role of addiction in smoking behavior. Health Economics 30, 3–19. https://doi.org/10.1002/hec.4173.
Burns, EE and Boxall, ABA (2018) Microplastics in the aquatic environment: Evidence for or against adverse impacts and major knowledge gaps. Environmental Toxicology and Chemistry 37, 2776–2796. https://doi.org/10.1002/etc.4268.
Carson, KS, Chilton, SM, Hutchinson, WG and Scarpa, R (2020) Public resource allocation, strategic behavior, and status quo bias in choice experiments. Public Choice 185, 1–19. https://doi.org/10.1007/s11127-019-00735-y.
Catarino, AI, Kramm, J, Völker, C, Henry, TB and Everaert, G (2021) Risk posed by microplastics: Scientific evidence and public perception. Current Opinion in Green and Sustainable Chemistry 29. https://doi.org/10.1016/j.cogsc.2021.100467.
Chen, WY and Cho, FHT (2019) Environmental information disclosure and societal preferences for urban river restoration: Latent class modelling of a discrete-choice experiment. Journal of Cleaner Production 231, 1294–1306. https://doi.org/10.1016/j.jclepro.2019.05.307.
Chen, WY and Hua, J (2015) Citizens' distrust of government and their protest responses in a contingent valuation study of urban heritage trees in Guangzhou, China. Journal of Environmental Management 155, 40–48. https://doi.org/10.1016/j.jenvman.2015.03.002.
Choi, EC and Lee, JS (2018) The willingness to pay for removing the microplastics in the ocean – The case of Seoul metropolitan area, South Korea. Marine Policy 93, 93–100. https://doi.org/10.1016/j.marpol.2018.03.015.
Cramer, RJ, Brodsky, SL and DeCoster, J (2009) Expert witness confidence and juror personality: Their impact on credibility and persuasion in the courtroom. Journal of the American Academy of Psychiatry and the Law Online 37, 63–74.
Czajkowski, M and Budziński, W (2019) Simulation error in maximum likelihood estimation of discrete choice models. Journal of Choice Modelling 31, 73–85. https://doi.org/10.1016/j.jocm.2019.04.003.
Czajkowski, M, Budziński, W, Campbell, D, Giergiczny, M and Hanley, N (2017) Spatial heterogeneity of willingness to pay for forest management. Environmental and Resource Economics 68, 705727. https://doi.org/10.1007/s10640-016-0044-0.CrossRefGoogle Scholar
Czajkowski, M, Hanley, N and LaRiviere, J (2016) Controlling for the effects of information in a public goods discrete choice model. Environmental and Resource Economics 63, 523544. https://doi.org/10.1007/s10640-014-9847-z.CrossRefGoogle Scholar
Daly, A, Hess, S and Train, K (2012) Assuring finite moments for willingness to pay in random coefficient models. Transportation 39, 1931. https://doi.org/10.1007/s11116-011-9331-3.CrossRefGoogle Scholar
Dave, K, Toner, J and Chen, H (2023) Accounting for respondent's preference uncertainty in choice experiments. Journal of Environmental Economics and Policy 12, 508523. https://doi.org/10.1080/21606544.2023.2182368.CrossRefGoogle Scholar
Day, B and Prades, J-L (2010) Ordering anomalies in choice experiments. Journal of Environmental Economics and Management 59, 271285. https://doi.org/10.1016/j.jeem.2010.03.001.CrossRefGoogle Scholar
Dekker, T, Hess, S, Brouwer, R and Hofkes, M (2016) Decision uncertainty in multi-attribute stated preference studies. Resource and Energy Economics 43, 5773. https://doi.org/10.1016/j.reseneeco.2015.11.002.CrossRefGoogle Scholar
De-la-Torre, GE (2020) Microplastics: An emerging threat to food security and human health. Journal of Food Science and Technology 57, 16011608. https://doi.org/10.1007/s13197-019-04138-1.CrossRefGoogle ScholarPubMed
Duis, K and Coors, A (2016) Microplastics in the aquatic and terrestrial environment: Sources (with a specific focus on personal care products), fate and effects. Environmental Sciences Europe 28, . https://doi.org/10.1186/s12302-015-0069-y.CrossRefGoogle ScholarPubMed
ECHA (2019) Annex XV restriction report proposal for a restriction. European Chemicals Agency. Available at https://echa.europa.eu/registry-of-restriction-intentions/-/dislist/details/0b0236e18244cd73.Google Scholar
Faccioli, M, Czajkowski, M, Glenk, K and Martin-Ortega, J (2020) Environmental attitudes and place identity as determinants of preferences for ecosystem services. Ecological Economics 174, . https://doi.org/10.1016/j.ecolecon.2020.106600.CrossRefGoogle Scholar
Ghosh, S, Maitra, B and Das, SS (2013) Effect of distributional assumption of random parameters of mixed logit model on willingness-to-pay values. Procedia-Social and Behavioral Sciences 104, 601610. https://doi.org/10.1016/j.sbspro.2013.11.154.CrossRefGoogle Scholar
Greifer, N (2022) WeightIt: Weighting for covariate balance in observational studies. Available at https://CRAN.R-project.org/package=WeightIt.Google Scholar
Haghani, M, Bliemer, MC, Rose, JM, Oppewal, H and Lancsar, E (2021) Hypothetical bias in stated choice experiments: Part II. Conceptualisation of external validity, sources and explanations of bias and effectiveness of mitigation methods. Journal of Choice Modelling 41, . https://doi.org/10.1016/j.jocm.2021.100322.CrossRefGoogle Scholar
Hainmueller, J (2012) Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis 20, 2546. https://doi.org/10.1093/pan/mpr025.CrossRefGoogle Scholar
Hasselström, L and Håkansson, C (2014) Detailed vs. fuzzy information in non-market valuation studies: The role of familiarity. Journal of Environmental Planning and Management 57, 123143. https://doi.org/10.1080/09640568.2012.736370.CrossRefGoogle Scholar
He, G, Mol, APJ, Zhang, L and Lu, Y (2015) Environmental risks of high-speed railway in China: Public participation, perception and trust. Environmental Development 14, 3752. https://doi.org/10.1016/j.envdev.2015.02.002.CrossRefGoogle Scholar
Hess, S and Palma, D (2019) Apollo: A flexible, powerful and customisable freeware package for choice model estimation and application. Journal of Choice Modelling 32, . https://doi.org/10.1016/j.jocm.2019.100170.CrossRefGoogle Scholar
Hess, S and Rose, JM (2012) Can scale and coefficient heterogeneity be separated in random coefficients models? Transportation 39, 12251239. https://doi.org/10.1007/s11116-012-9394-9.CrossRefGoogle Scholar
Hess, S, and Train, K (2017) Correlation and scale in mixed logit models. Journal of Choice Modelling 23, 18. https://doi.org/10.1016/j.jocm.2017.03.001.CrossRefGoogle Scholar
Hoehn, JP and Randall, A (2002) The effect of resource quality information on resource injury perceptions and contingent values. Resource and Energy Economics 24, 1331. https://doi.org/10.1016/S0928-7655(01)00051-3.CrossRefGoogle Scholar
Hoyos, D (2010) The state of the art of environmental valuation with discrete choice experiments. Ecological Economics 69, 15951603. https://doi.org/10.1016/j.ecolecon.2010.04.011.CrossRefGoogle Scholar
Hynes, S, Ankamah-Yeboah, I, O'Neill, S, Needham, K, Xuan, BB and Armstrong, C (2021b) The impact of nature documentaries on public environmental preferences and willingness to pay: Entropy balancing and the blue planet II effect. Journal of Environmental Planning and Management 64, 14281456. https://doi.org/10.1080/09640568.2020.1828840.CrossRefGoogle Scholar
Hynes, S, Armstrong, CW, Xuan, BB, Ankamah-Yeboah, I, Simpson, K, Tinch, R and Ressurreição, A (2021a) Have environmental preferences and willingness to pay remained stable before and during the global Covid-19 shock? Ecological Economics 189, . https://doi.org/10.1016/j.ecolecon.2021.107142.CrossRefGoogle Scholar
Jacobsen, JB, Boiesen, JH, Thorsen, BJ and Strange, N (2008) What's in a name? The use of quantitative measures versus ‘iconised’ species when valuing biodiversity. Environmental and Resource Economics 39, 247263. https://doi.org/10.1007/s10640-007-9107-6.CrossRefGoogle Scholar
Janzik, R, Koch, S, Zamariola, G, Vrbos, D, White, MP, Pahl, S and Berger, N (2024) Exploring public risk perceptions of microplastics: Findings from a cross-national qualitative interview study among German and Italian citizens. Risk Analysis 44, 521535. https://doi.org/10.1111/risa.14184.CrossRefGoogle ScholarPubMed
Johnston, R, Boyle, K, Adamowicz, W, Bennett, J, Brouwer, R, Cameron, T, Hanemann, W, Hanley, N, Ryan, M, Scarpa, R and Tourangeau, R (2017) Contemporary guidance for stated preference studies. Journal of the Association of Environmental and Resource Economists 4, 319405. https://doi.org/10.1086/691697.CrossRefGoogle Scholar
Kataria, M, Bateman, I, Christensen, T, Dubgaard, A, Hasler, B, Hime, S, Ladenburg, J, Levin, G, Martinsen, L and Nissen, C (2012) Scenario realism and welfare estimates in choice experiments – A non-market valuation study on the European water framework directive. Journal of Environmental Management 94, 2533. https://doi.org/10.1016/j.jenvman.2011.08.010.CrossRefGoogle Scholar
Khachatryan, H, Rihn, A and Wei, X (2021) Consumers’ preferences for eco-labels on plants: The influence of trust and consequentiality perceptions. Journal of Behavioral and Experimental Economics 91, . https://doi.org/10.1016/j.socec.2020.101659.CrossRefGoogle Scholar
Kim, H-J, Lee, H-J and Yoo, S-H (2019) Public willingness to pay for endocrine disrupting chemicals-free labelling policy in Korea. Applied Economics 51, 131140. https://doi.org/10.1080/00036846.2018.1494803.CrossRefGoogle Scholar
King, P (2022) Willingness-to-pay for precautionary control of microplastics, a comparison of hybrid choice models. Journal of Environmental Economics and Policy 12, 379402. https://doi.org/10.1080/21606544.2022.2146757.CrossRefGoogle Scholar
Kolcava, D (2023) Greenwashing and public demand for government regulation. Journal of Public Policy 43, 179198. https://doi.org/10.1017/S0143814X22000277.CrossRefGoogle Scholar
Kosuth, M, Mason, SA and Wattenberg, EV (2018) Anthropogenic contamination of tap water, beer, and sea salt. PloS One 13, . https://doi.org/10.1371/journal.pone.0194970.CrossRefGoogle ScholarPubMed
Kramm, J, Steinhoff, S, Werschmöller, S, Völker, B and Völker, C (2022) Explaining risk perception of microplastics: Results from a representative survey in Germany. Global Environmental Change 73, . https://doi.org/10.1016/j.gloenvcha.2022.102485.CrossRefGoogle Scholar
Lancaster, KJ (1966) A new approach to consumer theory.Journal of Political Economy 74, 132157. https://doi.org/10.1086/259131.CrossRefGoogle Scholar
LaRiviere, J, Czajkowski, M, Hanley, N, Aanesen, M, Falk-Petersen, J and Tinch, D (2014) The value of familiarity: Effects of knowledge and objective signals on willingness to pay for a public good. Journal of Environmental Economics and Management 68, 376389. https://doi.org/10.1016/j.jeem.2014.07.004.CrossRefGoogle Scholar
Larson, HJ, Clarke, RM, Jarrett, C, Eckersberger, E, Levine, Z, Schulz, WS and Paterson, P (2018) Measuring trust in vaccination: A systematic review. Human Vaccines and Immunotherapeutics 14, 15991609. https://doi.org/10.1080/21645515.2018.1459252.CrossRefGoogle ScholarPubMed
Lebreton, L, Slat, B and Ferrari, F (2018) Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic. Scientific Reports 8, . https://doi.org/10.1038/s41598-018-22939-w.CrossRefGoogle ScholarPubMed
Levi, M and Stoker, L (2000) Political trust and trustworthiness. Annual Review of Political Science 3, 475507. https://doi.org/10.1146/annurev.polisci.3.1.475.CrossRefGoogle Scholar
Logar, I, Brouwer, R, Maurer, M and Ort, C (2014) Cost-benefit analysis of the Swiss national policy on reducing micropollutants in treated wastewater. Environmental Science and Technology 48, 1250012508. https://doi.org/10.1021/es502338j.CrossRefGoogle ScholarPubMed
Loomis, JB (2014) 2013 WAEA keynote address: Strategies for overcoming hypothetical bias in stated preference surveys. Journal of Agricultural and Resource Economics 39, 3446. https://www.jstor.org/stable/44131313.Google Scholar
Makriyannis, C, Johnston, RJ and Zawojska, E (2024) Do numerical probabilities promote informed stated preference responses under inherent outcome uncertainty? Insight from a coastal adaptation choice experiment. International Journal of Disaster Risk Reduction 107, . https://doi.org/10.1016/j.ijdrr.2024.104481.CrossRefGoogle Scholar
Mariel, P, Hoyos, D, Meyerhoff, J, Czajkowski, M, Dekker, T, Glenk, K, Jacobsen, JB, Liebe, U, Olsen, SB, Sagebiel, J, and Thiene, M (2021) Environmental Valuation with Discrete Choice Experiments: Guidance on Design, Implementation and Data Analysis SpringerBriefs in Economics. Cham, Switzerland: Springer Nature. https://library.oapen.org/handle/20.500.12657/43295.10.1007/978-3-030-62669-3CrossRefGoogle Scholar
Meyerhoff, J and Liebe, U (2009) Status quo effect in choice experiments: Empirical evidence on attitudes and choice task complexity. Land Economics 85, 515528. https://doi.org/10.3368/le.85.3.515.CrossRefGoogle Scholar
Mihelj, S, Kondor, K and Štětka, V (2022) Establishing trust in experts during a crisis: Expert trustworthiness and media use during the COVID-19 pandemic. Science Communication 44, 292319. https://doi.org/10.1177/10755470221100558.CrossRefGoogle Scholar
Morrison, M and Brown, TC (2009) Testing the effectiveness of certainty scales, cheap talk, and dissonance-minimization in reducing hypothetical bias in contingent valuation studies. Environmental and Resource Economics 44, 307326. https://doi.org/10.1007/s10640-009-9287-3.CrossRefGoogle Scholar
Munro, A and Hanley, ND (2002) Information, uncertainty, and contingent valuation. In Bateman, IJ and Willis, KG ((eds)), Valuing Environmental Preferences. Oxford: Oxford University Press, 258279.Google Scholar
Nygaard, A (2023) Is sustainable certification's ability to combat greenwashing trustworthy? Frontiers in Sustainability 4, . https://doi.org/10.3389/frsus.2023.1188069.CrossRefGoogle Scholar
Oehlmann, M and Meyerhoff, J (2017) Stated preferences towards renewable energy alternatives in Germany – Do the consequentiality of the survey and trust in institutions matter? Journal of Environmental Economics and Policy 6, 116. https://doi.org/10.1080/21606544.2016.1139468.CrossRefGoogle Scholar
Oehlmann, M, Meyerhoff, J, Mariel, P and Weller, P (2017) Uncovering context-induced status quo effects in choice experiments. Journal of Environmental Economics and Management 81, 5973. https://doi.org/10.1016/j.jeem.2016.09.002.CrossRefGoogle Scholar
Oh, H and Hong, JH (2012) Citizens’ trust in government and their willingness-to-pay. Economics Letters 115, 345347. https://doi.org/10.1016/j.econlet.2011.12.010.CrossRefGoogle Scholar
Poe, GL, Giraud, KL and Loomis, JB (2005) Computational methods for measuring the difference of empirical distributions. American Journal of Agricultural Economics 87, 353365. https://doi.org/10.1111/j.1467-8276.2005.00727.x.CrossRefGoogle Scholar
Prata, JC, da Costa, JP, Lopes, I, Duarte, AC and Rocha-Santos, T (2020) Environmental exposure to microplastics: An overview on possible human health effects. Science of the Total Environment 702, . https://doi.org/10.1016/j.scitotenv.2019.134455.CrossRefGoogle ScholarPubMed
Rakotonarivo, OS, Schaafsma, M and Hockley, N (2016) A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods. Journal of Environmental Management 183, 98109. https://doi.org/10.1016/j.jenvman.2016.08.032.CrossRefGoogle ScholarPubMed
R Core Team (2022). R: A Language and Environment for Statistical Computing (Version 4.2.0) [Computer software].Google Scholar
Remoundou, K, Kountouris, Y and Koundouri, P (2012) Is the value of an environmental public good sensitive to the providing institution? Resource and Energy Economics 34, 381395. https://doi.org/10.1016/j.reseneeco.2012.03.002.CrossRefGoogle Scholar
Salanié, F and Treich, N (2009) Regulation in Happyville. The Economic Journal 119, 665679. https://doi.org/10.1111/j.1468-0297.2009.02221.x.CrossRefGoogle Scholar
Sarrias, M (2020) Individual-specific posterior distributions from mixed logit models: Properties, limitations and diagnostic checks. Journal of Choice Modelling 36, . https://doi.org/10.1016/j.jocm.2020.100224.CrossRefGoogle Scholar
Shaw, WD (2016) Environmental and natural resource economics decisions under risk and uncertainty: A survey. International Review of Environmental and Resource Economics 9, 1–130. https://doi.org/10.1561/101.00000074.
Sjöberg, L (1999) Risk perception by the public and by experts: A dilemma in risk management. Human Ecology Review 6, 1–9. https://www.jstor.org/stable/24707052.
Sturgis, P, Brunton-Smith, I and Jackson, J (2021) Trust in science, social consensus and vaccine confidence. Nature Human Behaviour 5, 1528–1534. https://doi.org/10.1038/s41562-021-01115-7.
Sundblad, E-L, Biel, A and Gärling, T (2009) Knowledge and confidence in knowledge about climate change among experts, journalists, politicians, and laypersons. Environment and Behavior 41, 281–302. https://doi.org/10.1177/0013916508314998.
Swait, J and Louviere, J (1993) The role of the scale parameter in the estimation and comparison of multinomial logit models. Journal of Marketing Research 30, 305–314. https://doi.org/10.1177/002224379303000303.
Thompson, RC, Courtene-Jones, W, Boucher, J, Pahl, S, Raubenheimer, K and Koelmans, AA (2024) Twenty years of microplastic pollution research—what have we learned? Science 386. https://doi.org/10.1126/science.adl2746.
Train, K and Weeks, M (2005) Discrete choice models in preference space and willingness-to-pay space. In Scarpa, R and Alberini, A (eds), Applications of Simulation Methods in Environmental and Resource Economics. Dordrecht, Netherlands: Springer, 1–16. https://doi.org/10.1007/1-4020-3684-1_1.
Tübbicke, S (2021) Entropy balancing for continuous treatments. Journal of Econometric Methods 11, 71–89. https://doi.org/10.1515/jem-2021-0002.
Uggeldahl, K, Jacobsen, C, Lundhede, TH and Olsen, SB (2016) Choice certainty in discrete choice experiments: Will eye tracking provide useful measures? Journal of Choice Modelling 20, 35–48. https://doi.org/10.1016/j.jocm.2016.09.002.
Vass, CM, Boeri, M, Poulos, C and Turner, AJ (2022) Matching and weighting in stated preferences for health care. Journal of Choice Modelling 44. https://doi.org/10.1016/j.jocm.2022.100367.
Vethaak, AD and Legler, J (2021) Microplastics and human health. Science 371, 672–674. https://doi.org/10.1126/science.abe5041.
Volschenk, J, Gerber, C and Santos, BA (2022) The (in)ability of consumers to perceive greenwashing and its influence on purchase intent and willingness to pay. South African Journal of Economic and Management Sciences 25. https://doi.org/10.4102/sajems.v25i1.4553.
von Haefen, RH, Massey, DM and Adamowicz, WL (2005) Serial nonparticipation in repeated discrete choice models. American Journal of Agricultural Economics 87, 1061–1076. https://doi.org/10.1111/j.1467-8276.2005.00794.x.
Welling, M, Sagebiel, J and Rommel, J (2023) Information processing in stated preference surveys: A case study on urban gardens. Journal of Environmental Economics and Management 119. https://doi.org/10.1016/j.jeem.2023.102798.
Welling, M, Zawojska, E and Sagebiel, J (2022) Information, consequentiality and credibility in stated preference surveys: A choice experiment on climate adaptation. Environmental and Resource Economics 82, 257–283. https://doi.org/10.1007/s10640-022-00675-0.
Zhao, Q and Percival, D (2016) Entropy balancing is doubly robust. Journal of Causal Inference 5. https://doi.org/10.1515/jci-2016-0010.
Table 1. Example choice task given to respondents

Figure 1. Percentage of choices for either Option A or B by self-reported confidence in experts. Horizontal dotted line drawn at 50 per cent of choices.

Table 2. $\chi^2$ test statistics (p values in parentheses) against the null hypothesis that choice frequency did not differ between levels of self-reported confidence in experts

Figure 2. Choice certainty by levels of confidence in experts. Both the most (5/5) and least (1/5) confident respondents were much more likely to say ‘very sure’, while ‘quite sure’ was the most common response for all others.

Table 3. $\chi^2$ test statistic (p value in parentheses) against the null hypothesis that the frequency of respondents reporting each level of certainty did not differ between levels of confidence in experts

Table 4. Selected estimated parameters from weighted and unweighted random-parameter mixed logit models

Figure 3. Box-and-whisker plots of the distributions of conditional mean WTP by confidence level (x-axis) for each model. Price values are not true WTP measures but the estimated price sensitivity. The y-axis scale varies across facets.

Supplementary material: King supplementary material (File, 837.1 KB)