A lot of what we think we know about public support for democracy comes from surveys where respondents are asked whether they think democracy is “preferable” to other regime types, how important they think it is to live in a “democracy,” or how “good” or “bad” they think different types of regimes are. Based on such measures, scholars have examined, for example, whether pro-democratic attitudes increase democratic resilience (Claassen, 2020) or whether successful democracies breed pro-democratic citizens (Acemoglu et al., 2021). These are particularly relevant questions in light of growing pressure on liberal democracies around the world (Bermeo, 2016; Waldner and Lust, 2018).
However, surveying democratic attitudes is challenging. One of the most frequently voiced concerns is the possibility that responses are artifacts that fail to reflect citizens’ sincere attitudes. Estimates of support for democracy are said to be “inflated by social desirability effects and instrumentally motivated support” (Inglehart and Welzel, 2005, 11) because direct questions about democracy invite “socially desirable, politically correct responses” (Svolik, 2019, 29). Consequently, survey respondents may be “democrats in name only” (Wuttke et al., 2022, 426), professing support for democracy and liberal-democratic institutions without genuinely holding such preferences. If so, scholars may not only overestimate the prevalence of pro-democratic attitudes but also reach flawed conclusions about their causes and consequences. Yet, despite such concerns, we have surprisingly few studies examining systematic misreporting of democratic attitudes.
In this article, we empirically examine the plausibility that various survey measures of democratic attitudes suffer from such a social desirability bias (SDB). We offer evidence from three unique studies. In the first two, we examine the relationship between survey mode—interviewer administration versus self-completion—and measures of democratic attitudes. We examine this possibility, first, by exploiting a switch caused by the COVID-19 pandemic from face-to-face interviewing to self-completion that occurred in six of the 24 countries that conducted rounds 6 and 10 of the European Social Survey’s (ESS) module on democracy. Second, we take advantage of a unique opportunity in Great Britain and Finland, in which face-to-face data collection was accompanied by a simultaneous parallel run that collected responses through self-completion using similar sampling procedures. In both cases, we seek to determine whether the often-found relationship between self-completed surveys and less socially desirable responses (Tourangeau and Yan, 2007; Kreuter et al., 2008; Bosnjak, 2017) also holds here, resulting in lower democratic (or higher autocratic) support than in interviewer-administered surveys.
However, survey modes “are really a bundle of features,” encompassing “a whole suite of measurement characteristics, including different forms of non-observation error or observation error” (Tourangeau, 2018, 395). This means that a significant relationship between survey mode and responses to questions about democracy would not by itself conclusively demonstrate that respondents bend their responses toward what they find to be socially acceptable. Therefore, in our third study, we shift to an alternative strategy to assess SDBs: among the same subjects and using the same mode, we compare the prevalence of agreement with an anti-democratic statement using both direct and indirect questioning, by means of a double-list experiment (DLE) embedded in a survey of Portuguese voters.
Across the three studies, we observe no evidence compatible with the notion that SDB inflates survey measures of democratic attitudes. The small and—especially—inconsistent differences in the propensity to support democracy, autocracy, or democratic principles between responses obtained through interviewer-administered and self-completed questionnaires are not compatible with an SDB explanation. Furthermore, agreement with an anti-democratic statement obtained directly is not distinguishable from the prevalence estimates obtained via indirect questioning. Taken together, our findings do not support the notion that survey respondents misrepresent their attitudes and beliefs when responding to surveys about democracy and liberal-democratic institutions in order to present them as more socially desirable according to perceived social norms. To be sure, this does not mean that existing items about democratic support are free from a variety of other sources of measurement error. However, the notion that SDB is the reason behind a possible inflation of democratic support in surveys is not supported by our evidence.
Our study thus contributes to research on democratic attitudes by applying the most systematic analysis to date (using three different research strategies, extensive data from as many as 24 European countries, and a host of measures of support for democracy and autocracy) to address concerns about SDB in the direct measurement of democratic attitudes.
1. Surveys, democratic support, and social desirability
The suspicion that survey measures of democratic support are affected by respondents’ motivation to present a positive image of themselves has been around for a long time. Dalton (1994, 479) suggested that many East Germans after reunification could be Fragebogendemokraten (“questionnaire democrats”) who “have simply learned how to be good respondents in Western-oriented public opinion surveys.” Similarly, Inglehart (2003, 52) questioned whether the overwhelming support for democracy obtained in direct questions was more than mere “lip service.” The argument was that, as democracy became a “worldwide value” (Schedler and Sarsfield, 2007, 638), measurement quality in responses to survey questions about it became threatened by the tendency to bias one’s answers in the direction of socially desirable responses (Sudman and Bradburn, 1974).
Research on democratic attitudes has taken different approaches to deal with this alleged SDB. By combining questions about overt support for democracy with questions about rejection of non-democratic forms of government, scholars have detected much greater variation in democratic support than previously appreciated (Inglehart, 2003; Claassen, 2019; 2020; Malka et al., 2022). Another approach has been to avoid altogether survey items that refer to regime types and instead focus on specific democratic institutions, rights, and principles (Gibson et al., 1992; Schedler and Sarsfield, 2007) or to gauge respondents’ conceptions of “democracy,” allowing such conceptions to include procedures and goals different from those of liberal democracy and, in some cases, genuinely illiberal understandings (Bertsou and Pastorella, 2017; König et al., 2022). Other scholars have turned to choice experiments to examine the importance that respondents assign to democratic features of hypothetical candidates or parties vis-à-vis other attributes, thus concealing the researchers’ interest in the former (e.g., Graham and Svolik, 2020). Broadly, all these approaches have generated results that are far from the almost universal support for liberal democracy that might be inferred from the earliest available data.
However, the issue of SDB in measures of democratic attitudes lingers. How much confidence can we have that, for example, respondents who profess their rejection of “a strong leader who does not have to bother with parliament and elections” or award high levels of importance to “free and fair elections” are not bending their responses toward what they find to be socially acceptable? Two additional strategies are typically employed to examine this kind of possibility. One is to estimate how the presence or absence of an interviewer affects responses to potentially sensitive questions. Respondent privacy in surveys does correlate with other potentially consequential dimensions, such as modes of information transmission, available communication channels, interactivity, and locus of control (De Leeuw and Hox, 2015), making it difficult to disentangle the effects of these dimensions. Still, the most robust meta-analytic finding in this literature is that, in comparison with interviewer-administered surveys, self-administered modes elicit less socially approved and more socially disapproved responses (Gnambs and Kaspar, 2015; Bosnjak, 2017; 2018; de Leeuw et al., 2019). If the same phenomenon is observed in responses to questions about democratic attitudes, this would be compatible with SDB affecting their measurement.
A different strategy consists of comparing results from direct questions with those of indirect methods of eliciting sensitive information and opinions, such as the randomized response or item count (or list experiment) techniques (Tourangeau and Yan, 2007). With regard to political attitudes and behaviors, important differences between direct and indirect measures have been found in this way, in domains such as support for leaders and parties under authoritarian regimes (Frye et al., 2017; Blair et al., 2020), vote buying (Kiewiet de Jonge, 2015), racial attitudes (Kuklinski et al., 1997), or support for the rights of sexual minorities (Aksoy, Carpenter and Sansone, 2024).
However, the deployment of these strategies to investigate a potential SDB in survey measures of democratic attitudes has been rare, and the results are broadly inconclusive. Ejaz and Thornton (2025)—to our knowledge the only study addressing this issue by looking at mode effects—found that U.S. respondents interviewed via the Internet expressed lower satisfaction with democracy (SWD) than those interviewed face-to-face. However, it is also well established that this variable has a rather different etiology from support for democracy as a regime or for its principles and captures neither (Linde and Ekman, 2003). Using a list experiment in Honduras, Kiewiet de Jonge (2016) found that, although the difference was substantively small, a preference for authoritarianism over democracy was more frequent under indirect questioning. Conversely, Kaftan (2024) found that while survey respondents in France, Germany, Italy, and the United Kingdom do have less demanding conceptualizations of democracy than often assumed (emphasizing “electoral” over “liberal” components), their agreement with the undemocratic statement “It would be better if [country] would not be a democracy” did not vary significantly between direct and indirect questioning through the item count technique. In sum, the evidence about SDB in responses to survey questions about democracy is limited and inconclusive. In the following sections, we employ the above-mentioned strategies to further examine this possibility.
2. Data and analyses
We present the results of three studies. The first uses data from the ESS rounds 6 and 10 ($N_{countries} = 24$ and $N_{respondents} = 93,210$).[1] We take advantage of the fact that, among the 24 countries that conducted both rounds, six shifted data collection mode from face-to-face interviews to self-completion due to the COVID-19 pandemic. The second study exploits a unique situation in Great Britain and Finland (N = 3,740 and 2,920, respectively), in which two representative samples were collected simultaneously in each country using very similar sampling methods but different survey modes. Again, we leverage this variation in survey mode to compare responses to questions about democracy. The third study uses a DLE conducted in Portugal (N = 379) to estimate the prevalence of support for an undemocratic statement: one negating judicial limits on executive power. We then compare this estimate to the results of a direct question using the same statement, obtaining an estimate of SDB. Table 1 provides an overview.
Table 1. Overview of studies
2.1. Study 1: Effect of shifting to self-completion
In rounds 6 (2012) and 10 (2020) of the ESS, respondents were asked three types of questions about democracy susceptible to eliciting socially desirable responses. First, respondents were asked about their SWD, yielding a measure of specific support for democracy (Linde and Ekman, 2003). A second question asked about the importance of living in a democracy, which provides a commonly used measure of diffuse support for democracy (Kokkonen and Linde, 2023). The final set of questions moves away from more abstract attitudes about regime types, aiming instead at capturing citizens’ conceptions of democracy by asking how important a range of aspects are for “democracy.” Among those aspects, we focus on the ones constituting a liberal-democratic conception (Kriesi et al., 2016): free and fair elections; clear and viable partisan alternatives; vertical accountability; media freedom; equality before the law; and minority rights. Including the latter category of items represents an important extension of previous research on SDB in democratic attitudes, which has primarily focused on specific or diffuse support for democracy. Recent work, however, suggests that even citizens in established democracies often hold significantly less demanding conceptualizations of democracy than scholars typically assume (Kiewiet de Jonge, 2016; Kaftan, 2024). This raises the possibility that SDB may be particularly prevalent concerning liberal-democratic conceptions. All responses are given on a scale from 0 to 10, with 0 indicating the least “democratic” option (see question wording in Table 2).
Table 2. Operationalization of democratic attitudes
Note: Survey items for outcome variables.
A total of 24 countries conducted both rounds of the ESS module on “Understandings and evaluations of democracy.”[2] However, due to the COVID-19 pandemic, six of them—Cyprus, Germany, Israel, Poland, Spain, and Sweden—were forced to switch from a face-to-face to a self-completion mode (web and paper). We leverage this fact by conceiving the switch as a treatment (1 if the country switched to self-completion in round 10, 0 otherwise) and estimate its effect on responses to survey measures of democratic attitudes. Specifically, we apply a generalized difference-in-differences (two-way fixed effects, TWFE) approach to the repeated cross-sectional data of ESS rounds 6 and 10, testing the hypothesis that, if SDB contaminates responses about democracy, the shift to self-completion should decrease respondents’ propensity to express pro-democratic views. Our identification strategy thus relies on two assumptions. First, we assume common trends, such that the difference in expressed democratic attitudes between rounds 6 and 10 in the untreated countries is a valid counterfactual for the difference in the countries that switched to self-completion due to COVID-19. Second, we assume that the repeated cross-sections are representative samples from the same underlying population. The supplementary material discusses the data collection, shows that the groups followed similar pre-treatment trends on SWD (the only measure also available in ESS round 9), and validates the treatment instrument by demonstrating that the change in survey mode is associated with expected changes in pro-immigration attitudes.[3]
Our regressions include country fixed effects (adjusting for unobserved factors that may make scores systematically different across countries) and ESS round fixed effects (adjusting for general Europe-wide trends in democratic attitudes between 2012 and 2020). Furthermore, we adjust for a set of covariates measured in each survey at the individual and country levels: gender, age, age squared, educational attainment, and income perception, as well as GDP per capita (in constant 2017 international dollars, purchasing power parity), control of corruption, and the Gini coefficient of equivalized disposable income.[4] Finally, to further address the possibility of systematic differences in democratic attitudes resulting from different types of respondents participating in the survey depending on the mode used, all analyses employ sampling weights, correcting for differential selection probabilities within each country.[5]
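Formally, in our notation, the estimating equation can be written as

$$Y_{ict} = \beta\, D_{ct} + \mathbf{X}_{ict}'\boldsymbol{\gamma} + \mathbf{Z}_{ct}'\boldsymbol{\delta} + \alpha_c + \lambda_t + \varepsilon_{ict},$$

where $Y_{ict}$ is the attitude reported by respondent $i$ in country $c$ and round $t$; $D_{ct}$ indicates that country $c$ administered round $t$ through self-completion; $\mathbf{X}_{ict}$ and $\mathbf{Z}_{ct}$ collect the individual- and country-level covariates listed above; and $\alpha_c$ and $\lambda_t$ are country and round fixed effects. Under the SDB hypothesis, $\beta < 0$ for pro-democratic outcomes.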
We estimate country cluster-corrected standard errors, allowing the disturbances for observations within each country sample to be correlated. Further, since we test multiple hypotheses, we estimate Romano–Wolf adjusted p-values, which control the family-wise error rate (the probability of rejecting at least one true null hypothesis) and account for the dependence structure of the test statistics (Clarke et al., 2020).[6]
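Because the procedure is less familiar than single-hypothesis corrections, the following minimal Python sketch (our illustration, not the code used for the analyses) shows the stepdown logic, assuming one has the observed t-statistics and a matrix of null-centered bootstrap t-statistics:

```python
import numpy as np

def romano_wolf(t_obs, t_boot):
    """Romano-Wolf stepdown adjusted p-values.

    t_obs  : (S,) observed t-statistics, one per hypothesis.
    t_boot : (B, S) bootstrap t-statistics re-centered at the null,
             e.g. (beta_b* - beta_hat) / se_b* for each replication b.
    """
    abs_obs, abs_boot = np.abs(t_obs), np.abs(t_boot)
    order = np.argsort(-abs_obs)           # most significant hypothesis first
    p_adj = np.empty(len(t_obs))
    for step, s in enumerate(order):
        # p-value from the max statistic over hypotheses not yet stepped down,
        # which is what accounts for the dependence across tests
        max_null = abs_boot[:, order[step:]].max(axis=1)
        p_adj[s] = (max_null >= abs_obs[s]).mean()
    for step in range(1, len(order)):      # enforce monotone adjusted p-values
        p_adj[order[step]] = max(p_adj[order[step]], p_adj[order[step - 1]])
    return p_adj

# Toy example: 8 outcomes (as in Study 1) and 250 bootstrap replications
rng = np.random.default_rng(1)
print(romano_wolf(rng.normal(size=8), rng.normal(size=(250, 8))))
```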
2.1.1. Results
Table 3 shows the results. Of the eight outcomes, we observe negative coefficients on all but one. However, only the effect of self-completion on conceptualizing “democracy” as encompassing the protection of minority rights is close to statistical significance at conventional levels.
Table 3. Effects of survey mode on democratic attitudes (Study 1)
Note: Unstandardized OLS regression estimates with country and ESS round fixed effects (TWFE). Clustered standard errors in parentheses. Romano–Wolf p-values (250 bootstrap replications). * p < .05, ** p < .01, *** p < .001
The estimated effect in that case is −0.79 ($s.e. = 0.12$, 95% CI [−1.06, −0.53]; Romano–Wolf p = .052). This effect is substantively small in magnitude when compared to previous findings on survey mode effects on SWD (Ejaz and Thornton, 2025, for example, find a 13 pp. gap in SWD). It is also worth noting that, for the variable arguably most prone to “lip service” to democracy—the “importance of living in a country that is governed democratically”—the coefficient is even positive. Overall, to the extent that self-completion offers greater privacy, which, in turn, decreases the expression of socially desirable attitudes, the results from Study 1 do not suggest SDB in a broad range of existing survey measures of democratic attitudes. Indeed, we only observe partial support for such a decrease when respondents are asked about the importance of protecting minority rights. While this is consistent with previously documented SDB in questions that concern minorities, especially ethnic or racial ones (Kuklinski et al., 1997; Janus, 2010; but see Blair et al., 2020), it falls short of evidence for systematic SDB in direct measures of support for democracy as a regime or for the constitutive principles of liberal democracy. Rather, the modest effect observed regarding the importance of protecting minority rights is more consistent with sensitivities related to racial/ethnic attitudes than with democratic attitudes.
2.2. Study 2: Parallel surveys analysis
In ESS round 10, aside from the main data collection employing a face-to-face mode, parallel data collections using nearly identical sampling procedures but a self-completion mode took place in Great Britain and Finland. As in Study 1, we exploit this variation, using self-completion as the treatment (D = 0 if interviewed face-to-face, D = 1 if self-completed), and estimate the differences in democratic attitudes between face-to-face and self-completed responses.
Note, however, that the sampling strategies for the two surveys were not entirely identical. The main differences were that the self-administered survey included only respondents aged 18 and older, did not cluster by address (used in the main ESS face-to-face sampling to reduce travel time), and, in the case of Great Britain, excluded Northern Ireland. To enhance comparability, we exclude respondents under 18 and those from Northern Ireland from the main ESS data (see Appendix B1 for more details on the data). Our analysis thus assumes that any remaining differences in sampling and sample composition are unrelated to potential outcomes—that is, they do not systematically account for differences in expressions of democratic attitudes.
We use the same measures of democratic attitudes as in Study 1 (see Table 2), adding an item that was only asked in round 10 and that taps into support for autocracy. This item measures support for the concentration of power in a single person or body, asking respondents “how acceptable for you would it be for [country] to have a strong leader who is above the law?” This allows us to estimate the effects of self-completion on the propensity to display anti-democratic attitudes, this time using an item where higher values represent greater support for autocracy. This is important: if self-completion facilitates the expression of attitudes seen as socially undesirable, and if anti-democratic attitudes are felt as undesirable by respondents, then we should expect self-completion to exert a positive effect on this variable.
We use OLS regressions to estimate the difference in means between survey modes, adjusting for the same individual-level covariates as in Study 1 as well as for a variable available only in round 10: frequency of internet use.[7] As the Finnish data have no weighting variable, we show results without sample weights; the supplementary material shows that the Great Britain findings are similar when weights are used. Again, we adjust for multiple comparisons using the Romano–Wolf method, as we test nine hypotheses simultaneously.
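For illustration, the difference-in-means regression for a single outcome can be sketched as follows in Python; the file name and column names are hypothetical placeholders, not the actual ESS variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and columns for the Finnish parallel-run data;
# 'self_completed' is 1 for the self-completion sample, 0 for face-to-face.
df = pd.read_csv("ess10_fi_parallel.csv")

# Difference in means between modes, adjusting for the Study 1 covariates
# plus frequency of internet use (asked only in round 10)
fit = smf.ols(
    "swd ~ self_completed + gender + age + I(age**2)"
    " + education + income_feeling + internet_use",
    data=df,
).fit(cov_type="HC2")  # heteroskedasticity-robust SEs (HC2 chosen for illustration)

diff = fit.params["self_completed"]
cohens_d = diff / df["swd"].std()  # standardized effect, as reported in Table 4
print(f"difference = {diff:.2f}, Cohen's d = {cohens_d:.2f}")
```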
2.2.1. Results
Table 4 shows the results. Looking first at the Finnish data, we observe only small and inconsistent effects of survey mode on the expression of democratic attitudes. Estimates of SWD under self-completion are slightly lower than estimates based on interviewer-administered data (difference = −.30; $s.e. = .08$; 95% CI [−.47, −.13]). This difference, however, corresponding to .17 of the standard deviation, would typically be considered a small effect (Gignac and Szodorai, 2016; Lovakov and Agadullina, 2021). Similarly, while we also observe a significant difference in the importance assigned to protecting minority rights (difference = −.51; $s.e. = .09$; 95% CI [−.68, −.33]), this effect—the largest consistent with an SDB expectation—corresponds to only .28 of the standard deviation.
Table 4. Effects of survey mode on attitudes toward democracy and autocracy (Study 2)
Note: Unstandardized OLS regression estimates. Robust standard errors in parentheses. Romano–Wolf p-values (250 bootstrap replications). Standardized effects are Cohen’s D (Average Treatment Effect on the Treated / Standard Deviation). * p < .05, ** p < .01, *** p < .001
We also find a slightly larger difference in acceptance of a strong leader above the law (difference = −1.03; $s.e. = 0.12$; 95% CI [−1.27, −0.78]), about 0.33 of the standard deviation. However, as noted above, the polarity of this item is opposite to that of the other outcomes. The effect of self-completion on support for autocracy, therefore, has the opposite sign to the expectation based on SDB: acceptance of “strong leaders” in self-completed surveys is lower—i.e., more “pro-democratic”—than in face-to-face interviews. We return to this in the discussion below. The analysis of all the remaining outcomes, also discussed in Study 1, yields statistically insignificant effects of self-completion.
Turning to Great Britain, we observe more consistently negative effects of self-completion on our outcome measures. We find an estimated effect on SWD of about 0.18 of the standard deviation (difference = −0.44; $s.e. = 0.09$; 95% CI [−0.63, −0.25]). Furthermore, in contrast to the Finnish sample, all differences in the items related both to diffuse support (importance of living in a democracy) and to “liberal-democratic conceptions” are negative and significant, implying that British respondents in interviewer-administered surveys give greater emphasis to these institutions for democracy than those who responded through self-completion. Yet, in all instances, the differences are substantively small (between −0.57 and −0.19 scale points; 0.27 and 0.09 of an SD). And crucially, we again find that the estimated effect of self-completion on support for autocracy is negative (difference = −0.27; $s.e. = 0.10$; 95% CI [−0.47, −0.06]).
Study 2 thus finds negative effects of self-completion (relative to face-to-face interviews) on a few measures in Finland and on all outcomes in Great Britain. The potential reasons for the differences between the two countries are outside the scope of this article. However, two important findings stand out. First, all effects range from about one-tenth to about one-third of a standard deviation of the outcome variable and can thus be considered substantively small (Gignac and Szodorai, 2016; Lovakov and Agadullina, 2021). Second, and most importantly, we observe in both cases that the effect on support for autocracy is oppositely signed to what we would expect under an SDB hypothesis: respondents in the self-completed surveys find having a strong leader above the law less acceptable than those in the interviewer-administered surveys. To the extent that the additional anonymity and privacy of self-completion should decrease social desirability pressures, we would have expected the opposite pattern if SDB were present.
This—perhaps surprising—finding again calls attention to the fact that the elicitation of more sincere responses is not the only consequence to be expected from moving from interviewer-administered to self-completion modes. Although work on mode effects has focused primarily on SDB (Bosnjak, 2017, 17), mode effects can also manifest themselves in other ways (Paulhus, 1991). For example, compared to interviewer-administered surveys, respondents in self-administered modes appear less likely to choose acquiescent, extreme, or positive response options, particularly in subjective questions (Ye et al., 2011; Liu et al., 2017; de Leeuw et al., 2019; Schork et al., 2021; Goodman et al., 2022; Hope et al., 2022).
Hence, while Study 2 points at most to small self-completion effects, the fact that these are negative concerning both pro-democratic and pro-autocratic attitudes challenges the notion that SDB is behind an overestimation of democratic attitudes in survey measures. Instead, it suggests that self-completion may favor a less positive or acquiescent response style to questions framed in terms of how “satisfied” one is with something, or how “important” or “acceptable” something is thought to be.
2.3. Study 3: Support for anti-democratic statements
Our final study offers evidence from a DLE used to measure indirectly the level of agreement with an anti-democratic statement, comparing that estimate with the one resulting from a direct question about the same statement. Both the DLE and the direct question were included in an online survey conducted in Portugal, Wave 6 of the CROss-National Online Survey (CRONOS) Panel.[8] The sub-sample was drawn from the ESS round 10 probabilistic sample, with a total of 379 respondents agreeing to answer CRONOS Wave 6.
Respondents were given a list of statements and asked how many (not which specific ones) they agreed with, giving them a protective layer of anonymity (Smith et al., 1974; Miller, 1984). In this DLE, subjects were randomly assigned to receive the short list (baseline) or the long list (treatment) first, allowing all respondents to be exposed to the sensitive item, increasing statistical power, and allowing us to test the robustness of findings to different baseline lists (Droitcour et al., 1991; Glynn, 2013).[9] The difference in mean item counts between the short and long lists yields an estimate of the share of the population who agree with the sensitive statement.
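In our notation, letting $\bar{Y}_{\text{long}}$ and $\bar{Y}_{\text{short}}$ denote the mean item counts on the long and short versions of a given list, the standard list-experiment estimator of the prevalence $\pi$ of agreement with the sensitive item is

$$\hat{\pi} = \bar{Y}_{\text{long}} - \bar{Y}_{\text{short}}.$$

In the double-list design, each respondent provides a long-list count for one list and a short-list count for the other, so two such estimates are available and can be averaged.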
We focused on citizens’ views about judicial constraints on the executive, a crucial aspect of enforcing liberal-democratic norms and practices. The anti-democratic item in both lists is “the government should be able to ignore court rulings that are regarded as politically biased,” previously used in research on democratic attitudes (e.g., Graham and Svolik, 2020; Simonovits et al., 2022; Carey et al., 2022). A key concern with list experiments is that the reduction in bias comes at the cost of increased variance (Blair et al., 2020). The double-list design, however, provides opportunities to increase power. To increase statistical precision, we included low-variance items with both high and low prevalence, included negatively correlated items within each baseline list, and made sure that the two baseline lists were positively correlated with each other (Glynn, 2013). Table 5 shows the two lists with the sensitive item in boldface.
Table 5. DLE design
Our DLE relies on three key assumptions (Blair and Imai, 2012). First, that randomization was successful. Second, that there were no design effects, such that respondents do not change their answers to the baseline items depending on the presence of the sensitive item. Finally, we assume no liars, that is, that respondents answer the sensitive item truthfully. The supplementary material shows evidence vindicating these assumptions. To estimate agreement with the anti-democratic statement, we follow standard practice and estimate linear difference-in-means regressions. The differences in means between the long and short versions of lists 1 and 2 are averaged, yielding a more precise estimate of the share of “agreers” (Glynn, 2013).
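To make the averaged estimator concrete, here is a minimal Python sketch (our illustration; the column names are hypothetical placeholders, not the survey’s actual variable names):

```python
import numpy as np
import pandas as pd

def dle_estimate(df: pd.DataFrame):
    """Double-list-experiment estimate of agreement with the sensitive item.

    Hypothetical columns:
      on_list_a -- 1 if the sensitive item was embedded in list A, else 0
      count_a   -- number of list A statements the respondent agreed with
      count_b   -- number of list B statements the respondent agreed with
    """
    on_a = df["on_list_a"].to_numpy() == 1
    # Per respondent: count on the list carrying the sensitive item
    # minus the count on that respondent's baseline list
    diff = np.where(on_a,
                    df["count_a"] - df["count_b"],
                    df["count_b"] - df["count_a"])
    # E[diff | on_a] = (mu_A + pi) - mu_B and E[diff | not on_a] = (mu_B + pi) - mu_A,
    # so averaging the two group means cancels the baselines and identifies pi
    g1, g0 = diff[on_a], diff[~on_a]
    est = (g1.mean() + g0.mean()) / 2
    se = 0.5 * np.sqrt(g1.var(ddof=1) / len(g1) + g0.var(ddof=1) / len(g0))
    return est, se
```

The direct-question benchmark is then simply the sample share of respondents agreeing with the statement, and the SDB estimate reported below is the difference between that share and the DLE estimate.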
As recommended in the literature (Aronow et al., 2015; Kramon and Weghorst, 2019; Blair and Imai, 2012), we also asked a direct question regarding agreement with the statement that the government should be able to ignore court rulings regarded as politically biased, part of a battery of 17 direct questions used in Claassen et al. (2025). The direct question provides a baseline estimate of the share of “agreers,” with which we compare the estimated prevalence from our DLE.[10]
A potential concern with including both direct and indirect items in the same survey is that respondents might change their answers to the list experiment after having answered the direct question. However, prior work has convincingly dismissed such demand effects (Mummolo and Peterson, 2019) and has demonstrated that asking identical questions before and after a treatment does not influence how respondents answer outcome measures (Clifford et al., 2021). Hence, while this is a potential limitation of our design in Study 3, existing evidence largely alleviates such concerns, and our approach follows recommendations in the literature.
2.3.1. Results
Figure 1 shows the results. Between the direct question and the DLE estimate, we observe no difference in the share of respondents who agree with the notion that the government should be able to ignore court rulings that are regarded as politically biased. Indeed, the difference in shares of “agreers” is −0.033, suggesting that the direct question—if anything—over-reports support for the anti-democratic statement. This difference, however, is not statistically distinguishable from zero (p = .278; $s.e. = 0.061$).[11] Hence, using a DLE to elicit democratic attitudes freer from SDB, we find no evidence that support for the anti-democratic statement is under-reported in direct questions.
Figure 1. No signs of SDB in support for undemocratic statements.
3. Conclusion
Research on public support for democracy and its causes and consequences has predominantly relied on survey measures. This literature has long been criticized for making a “sincere attitudes assumption,” ignoring the possibility that respondents provide socially desirable responses, falsifying their (non-)democratic preferences in survey interviews (Valentim, 2024; see also Inglehart, 2003; Inglehart and Welzel, 2005; Svolik, 2019). However, there is surprisingly little evidence supporting this widespread concern with SDB in measures of democratic attitudes. To address this gap, we presented evidence from three studies examining the plausibility that survey measures of democratic attitudes are systematically misreported due to SDB.
Overall, we found no evidence compatible with SDB in survey estimates of democratic attitudes. Study 1 found that the presence of an interviewer had no discernible impact on expressed democratic attitudes, with the possible exception of the importance attached to the protection of minority rights. Study 2 found a few significant mode effects. However, these were all substantively small and, crucially, respondents in self-completed surveys were less likely to select higher response options on questions gauging both support for democracy and support for autocracy. These results contradict the SDB expectation, suggesting that the small and inconsistent differences caused by interviewer presence may be better explained by other survey mode effects less examined by the literature, such as acquiescence or positivity biases. Finally, Study 3, using a DLE to test for misreporting of support for anti-democratic statements, found no significant difference between direct questions and the list experiment estimate. In sum, the notion that SDB causes an overestimation of democratic attitudes in survey measures is not borne out by our evidence.
Our results contribute to the existing literature by using three different designs (observational and experimental), extensive data (from 93,210 respondents in 24 countries to novel experiments in Portugal), and a wide range of democratic attitudes (covering specific and diffuse support as well as liberal-democratic conceptions) to show that there is very little evidence for one of the most frequently voiced concerns about survey measures of democratic support—SDB. In other words, while socially desirable misreporting is well documented in surveys dealing with a range of highly sensitive topics (Tourangeau and Yan, 2007; Ehler et al., 2021), our results suggest that the potential sensitivity of questions about “democracy” has been largely overestimated. In fact, as a handful of recent studies have shown, this also seems to be the case with other political attitudes, such as support for Donald Trump (Coppock, 2017; Brownback and Novotny, 2018), gender-based stereotypes (Holman, 2023), and even prejudice (Blair et al., 2020). This has positive implications, as it suggests that survey research on democratic attitudes is not doomed to obtain measures rendered invalid by misrepresentation. Scholars concerned with tracking aggregate levels of democratic attitudes need not, for example, be so worried that responses are survey artifacts that do not represent sincere attitudes.
Having said that, this does not mean that all such instruments are free from other types of measurement error. For example, questions that explicitly try to elicit responses about the notion of “democracy” have been shown to detract from cross-cultural comparability (Ariely and Davidov, 2011) and even to lead to over-reporting of democratic support (Kiewiet de Jonge, 2016). The reason may be related less to any propensity of respondents to engage in socially desirable misreporting when asked about “democracy” than to the idealized considerations that are prompted when respondents are confronted with such an abstract concept (Kiewiet de Jonge, 2016; Chapman et al., 2024). Furthermore, although our results suggest respondents’ sincerity when answering questions about regime types or democratic rights and principles, it remains perfectly possible that individuals who honestly support democracy nevertheless have a selective conception of it, or that their sincere democratic preferences yield to other, more salient or intense preferences, ultimately resulting in the endorsement of undemocratic practices and actors in concrete contexts (Grossman et al., 2022; Adserà et al., 2023; Kaftan and Gessler, 2025).
Given the nature of our findings, we encourage further research into the potential for SDB in survey measures of democratic attitudes. Although our findings are based on evidence from as many as 24 European countries, they cannot be extrapolated to other regions of the world without invoking further assumptions. Hence, a promising avenue for future research is to expand the scope of this research to different parts of the world. Similarly, scholars could further advance our understanding of democratic attitudes by expanding the range of attitudinal measures. While we focus on three sets of attitudes (specific support, diffuse support, and liberal-democratic conceptions), there are alternative ways of measuring democratic attitudes, which may be more prone to SDB. Finally, it is worth noting that while we employ two often-used methods for isolating SDB measurement error, other approaches exist, such as the randomized-response technique (Blair et al., 2015) or latent class modelling (Aichholzer, 2013). Using a host of different approaches would help increase confidence in existing survey measures of democratic attitudes.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2025.10044. To obtain replication material for this article, see https://doi.org/10.7910/DVN/LIGHFY.
Acknowledgements
We would like to thank Rory Fitzgerald and Tim Hanson from ESS-ERIC HQ and Ole-Petter Øvrebø from Sikt (Norwegian Agency for Shared Services in Education and Research) for providing access to the GB and FI parallel-run data, and Alice Ramos, National Coordinator of the ESS in Portugal, for including the DLE in Wave 6 of the CRONOS Panel in Portugal. We also thank Nicholas Haas, Claire Gothreau, and Matias Engdal Christensen for valuable comments and discussions on previous versions of this manuscript.
Ethics
This research was conducted in line with the ethical standards contained in the 1964 Declaration of Helsinki and its later amendments. Studies 1 and 2: in accordance with the ESS ERIC Statutes (Article 23.3), the ESS ERIC subscribes to the Declaration on Professional Ethics of the International Statistical Institute, and the ESS Rounds were reviewed by the ESS ERIC Research Ethics Board. Study 3: Approved by the Ethics Committee of the Institute of Social Sciences of the University of Lisbon (CE 2021-19). The authors declare no competing interests.