
Gender-of-Interviewer Effects on Support for Women’s Rights and Democracy in Africa

Published online by Cambridge University Press:  23 September 2025

Zack Zimbalist*
Affiliation:
Institute for International Political Economy, Vienna University of Economics and Business, Vienna, Austria; Southern Centre for Inequality Studies (SCIS), University of the Witwatersrand, Johannesburg, South Africa
Rights & Permissions [Opens in a new window]

Abstract

Public attitude surveys provide invaluable insights into societal views on women’s rights, democracy and other critical issues. However, many research studies do not account for biases introduced by the gender of the interviewer, which can distort estimates of public opinion and key relationships among covariates of interest. This article examines gender-of-interviewer effects on public support for women’s rights to work, own and inherit land, as well as support for democracy and feelings of closeness to opposition (versus ruling) parties, using Afrobarometer data from 34 African countries. In line with prevailing conservative social norms in Africa, the analysis reveals significant gender-of-interviewer effects, with respondents reporting more gender-unequal attitudes when interviewed by male interviewers. Additionally, gender-of-interviewer effects appear in responses to questions on support for democracy and feelings of closeness to opposition (versus ruling) parties, with respondents more likely to voice pro-democratic attitudes and close affiliation with opposition parties to male interviewers, regardless of their own gender. These findings highlight the importance of accounting for such biases to ensure the validity of public opinion research and analyses based on these political variables.

Information

Type: Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Government and Opposition Ltd.

Public opinion surveys are essential for understanding people’s values and attitudes across the world. By conducting these surveys repeatedly with random samples, we can track changes in public attitudes over time across different societies. Various stakeholders, including donors, policymakers, political candidates and civil society organizations, use these data to gauge public perceptions on pressing issues, from women’s rights to support for democracy and support for ruling (versus opposition) parties. However, data users are often unaware of, or do not account for, survey response bias, which can stem from a respondent’s disposition or from situational factors such as the salient identities of the interviewer. Though often neglected, these salient interviewer identities can introduce substantial response biases, thereby limiting the validity and precision of survey findings. For instance, studies in both developed and developing countries have uncovered biased responses based on the gender, race, religion and ethnicity of the interviewer, sometimes in relation to the respondent’s characteristics (Adida et al. Reference Adida, Ferree, Posner and Robinson2016; Beatty and Herrmann Reference Beatty, Herrmann, Dillman, Eltinge, Groves and Little2002; Blaydes and Gillum Reference Blaydes and Gillum2013; Dionne Reference Dionne2014; Groves Reference Groves2005; Groves et al. Reference Groves2009; Krumpal Reference Krumpal2013; Tourangeau and Yan Reference Tourangeau and Yan2007). In the African context, studies have also demonstrated interviewer effects based on the perceived survey sponsor (government versus non-government affiliated) and whether bystanders are present during the interview (Lau Reference Lau2018; Tannenberg Reference Tannenberg2022).

Research on biases resulting from the interviewer’s gender in particular (referred to as gender-of-interviewer effects) has tended to focus on survey items specific to marriage preferences and other sensitive issues related to women’s decision-making and behaviours (Liu and Stainback Reference Liu and Stainback2013). However, there is far less research examining gender-of-interviewer effects as they pertain to support for gender equality in other domains such as economic and political rights as well as attitudes towards democracy and partisanship. A few exceptions are a panel study in Germany (Zoch Reference Zoch2021), a cross-sectional survey in Morocco (Benstead Reference Benstead2014) and a cross-national study in Latin America (Guizzo Altube and Scartascini Reference Guizzo Altube and Scartascini2024).

Of most relevance to this study are three papers that have leveraged cross-national Afrobarometer data. Charles Lau (Reference Lau2018) analyses Afrobarometer Round 4 data from 2008, which covers 20 countries. He examines gender-of-interviewer effects for indicators of political engagement, support for democracy and democratic procedures, and support for authoritarianism. Similarly, Leila Demarest (Reference Demarest2017) analyses these effects for 12 countries across Afrobarometer Rounds 3 through 5 with regard to preference for democracy, whether women should be elected to political office and whether one feels close to any political party.Footnote 1 Finally, in line with Demarest (Reference Demarest2017), Aksel Sundström and Daniel Stockemer (Reference Sundström and Stockemer2022) use Afrobarometer Round 6 data comprising a larger sample of African countries to examine effects on attitudes towards the single question of whether women should be elected to political office.

This article builds on these studies in the following ways. First, this study leverages Round 7 of the Afrobarometer to demonstrate gender-of-interviewer effects for two previously unstudied gender-specific survey items: (1) support for women’s equal rights to owning and inheriting land; and (2) support for women’s equal rights to a job. Second, the article examines gender-of-interviewer effects for a broader set of survey items pertaining to support for democracy and feelings of closeness to opposition (versus ruling) parties with a larger sample of 34 countries across Africa. It further explores explanations for these effects and their implications for scholarship and policymaking. In doing so, this analysis builds on efforts to account for gender-of-interviewer effects and reduce biases in both measuring levels of support for women’s rights, democracy and close affiliation with ruling (versus opposition) parties and in drawing inferences about apparent relationships in survey data. With regard to the latter, the article highlights the risk of biased inferences in the many studies that use support for democracy or partisan identity as key explanatory, moderator or outcome variables without accounting for the interviewer’s gender. Many studies take this approach using Afrobarometer data (Bartels and Kramon Reference Bartels and Kramon2020; Burchard Reference Burchard2020; Fjelde and Olafsdottir Reference Fjelde and Olafsdottir2024; García-Peñalosa and Konte Reference García-Peñalosa and Konte2014; Kuenzi and Lambright Reference Kuenzi and Lambright2011; von Borzyskowski et al. Reference von Borzyskowski, Daxecker and Kuhn2022), Americasbarometer data (Singer Reference Singer2018), the World Values Survey (which includes a range of diverse countries) data (Flesken and Hartl Reference Flesken and Hartl2018) and survey data covering European countries (Baekgaard Reference Baekgaard2023; Mazepus and Toshkov Reference Mazepus and Toshkov2022). Moreover, several of these studies incorporate these indicators both as explanatory or moderating variables and as outcome variables. For example, two recent studies using Afrobarometer data examine support for democracy as an outcome while considering partisan identity as a key explanatory or moderating variable (Fjelde and Olafsdottir Reference Fjelde and Olafsdottir2024; von Borzyskowski et al. Reference von Borzyskowski, Daxecker and Kuhn2022). If both support for democracy (as the dependent variable) and partisan identity (as the independent variable) are influenced by the interviewer’s gender (as this article will show), failing to account for this factor in the model can introduce omitted variable bias, leading to distorted estimates.

The regressions in this study show that the interviewer’s gender influences responses to questions about support for democracy and feelings of closeness to opposition (versus ruling) parties, reflecting a bias towards more pro-democratic and pro-opposition attitudes in the presence of a male interviewer, regardless of the respondent’s gender. I speculate that this bias arises with male interviewers because both male and female respondents perceive men as more likely to hold political office, be engaged in politics and place greater value on democratic norms and competition. In contrast, when interviewed by women, respondents might view them as less politically engaged and more focused on basic needs, household welfare and social and political stability. As a result, respondents may align their answers with the salient social norm favouring democratic ideals and robust democratic competition in the presence of male interviewers, while adjusting their responses based on what they perceive a female interviewer would find more favourable or relevant – resulting in lower expressed support for democracy or opposition parties.

The regressions also show substantial gender-of-interviewer effects for the previously unstudied normative questions about women’s rights to land and to a job. In line with social desirability bias, respondents report more gender-unequal attitudes when interviewed by male interviewers and more gender-equal attitudes when interviewed by female interviewers. On the question of equal rights to owning and inheriting land, this bias is more pronounced among male respondents. Due to the high level of gender inequality in land rights, I posit that male respondents may experience stronger social desirability pressures to adhere to the conservative unequal norm, while female respondents may also feel pressure to conform, though to a lesser extent. The entrenched gender disparity in land rights may make women less concerned about facing social costs or negative evaluation for their responses. Additionally, male respondents may express less gender-unequal attitudes when speaking to female interviewers, possibly to gain social favour or to enhance their perceived status.

The structure of the rest of the article is as follows: the next section reviews the relevant literature and provides a framework to motivate the empirical analysis. The following part describes the research design and selection of the dependent and independent variables. Then I present the quantitative results for the models. The final section concludes, discussing study limitations and identifying avenues for future research.

Theoretical and empirical literature

Given the pervasiveness of gender inequality and gender-unequal norms across the African continent, it is likely that an interviewer’s gender will play an important role in shaping survey responses. According to the 2024 Human Development Report’s Gender Inequality Index, the region of ‘sub-Saharan Africa’ (as classified by the United Nations Development Programme (UNDP)) exhibits the highest gender inequality of any region, with an average score of 0.565 (Conceicao et al. Reference Conceicao2024: 296).Footnote 2 While there is substantial heterogeneity across African countries – with the index ranging from a low of 0.360 in Cabo Verde to a high of 0.679 in Nigeria (based on 2016 data corresponding to the Afrobarometer data used in this article) – even Cabo Verde remains highly unequal, exhibiting a 52.8 percentage point gap in the share of seats in parliament and an 11.5 percentage point gap in the labour force participation rate.Footnote 3

In this context, gender inequalities and unequal gender norms across political, economic and social realms likely heighten the salience of interviewer gender and associated norms, shaping respondents’ self-reported attitudes on key issues related to women’s rights and democracy. This is not inconsequential; policymakers and development actors, including domestic political decision-makers as well as multilateral and bilateral aid agencies such as the United States Agency for International Development (USAID), often use public opinion survey results as key indicators and inputs for policies and programmes.Footnote 4

In the literature on gender-of-interviewer effects, the existing studies theorize that response bias can arise due to social desirability pressures. Timothy Johnson and Fons van de Vijver (Reference Johnson, Van de Vijver, In Harkness, Van de Vijer and Mohler2003: 194) define social desirability as the ‘tendency of individuals to “manage” social interactions by projecting favourable images of themselves, thereby maximizing conformity to others and minimizing the danger of receiving negative evaluations from them’. In the case of interviewer characteristics, the literature suggests that respondents project these images to appease the interviewer.

Gundula Zoch (Reference Zoch2021) and Lindsay Benstead (Reference Benstead2014) emphasize two models of social desirability that are likely to operate in the context of gender-of-interviewer effects: the social attribution model and the conditional attribution model. Zoch (Reference Zoch2021: 628) writes: ‘The social attribution model argues that respondents attribute values and views to the interviewer that are based on social stereotypes, and solely linked to observable characteristics of the interviewer.’

In contexts where gender inequality is pervasive in social, economic and political realms, it is likely that women lean towards more egalitarian gender views than men. As a result, respondents are more likely to assert egalitarian views when interviewed by a woman and more traditional, conservative gender views when interviewed by a man, conforming to the perceived social norm among men (Zoch Reference Zoch2021: 628). Because this effect hinges solely on the gender of the interviewer, social attribution theory anticipates that both male and female respondents will express more egalitarian views to female interviewers and more inegalitarian views to male interviewers, constituting a direct effect.

The conditional attribution model (also known as the social distance model) extends the first model by positing that respondents adjust their responses based on the characteristics of both the interviewer and themselves, aiming to minimize the perceived relative social distance (Benstead Reference Benstead2014; Zoch Reference Zoch2021). Perceived social distance is smaller when respondent and interviewer share the same gender, so respondents feel more comfortable and adjust their responses less. When there is a gender mismatch, however, this model predicts that male respondents will offer more egalitarian responses to female interviewers while female respondents will provide more traditional, conservative responses to male interviewers to conform to the perceived social norm (Benstead Reference Benstead2014; Liu and Stainback Reference Liu and Stainback2013; Zoch Reference Zoch2021). To test this model, it is necessary to examine the interaction effect of the interviewer–respondent (mis)match.

While both of these models fall under the social desirability framework, it is crucial to examine the distinctive social psychological mechanisms that activate social desirability pressures in face-to-face interviews. In a review of the literature published between 1997 and 2002 on compliance and conformity, Robert Cialdini and Noah Goldstein (Reference Cialdini and Goldstein2004: 593) highlight research suggesting that ‘individuals avoid or alleviate feelings of shame and fear via public compliance’. Other mechanisms or normative conformity motivations include gaining social approval, building relationships with others or elevating self-esteem (Cialdini and Goldstein Reference Cialdini and Goldstein2004). Both injunctive social norms (what is usually approved or disapproved of) and descriptive norms (what is typically done) are likely to influence behaviour and survey responses, especially when uncertainty is high (Cialdini and Goldstein Reference Cialdini and Goldstein2004). In the context of gender-of-interviewer effects, patriarchal (or otherwise gender-unequal) norms are likely to influence many respondents to adjust their answers due to a combination of social conformity pressures, a desire for social approval or a goal of reducing negative evaluations from a male interviewer.

Although small, the empirical literature offers mixed support for both models. Brady West and Annelies Blom (Reference West and Blom2017: 187) conducted a research synthesis of previous studies of gender-of-interviewer effects, focusing on the quality of survey responses rather than response bias. They found that 10 out of 23 studies obtained null effects; nine suggested that female interviewers collect higher-quality responses, four found that male interviewers do so, and in three studies the gender effects were moderated by the respondent’s gender (in line with the conditional attribution model). In Morocco, Benstead (Reference Benstead2014: 369) examined survey items related to gender equality in politics and found support for the conditional attribution model for survey response bias: ‘males reported more egalitarian views to female interviewers’ and sought ‘to reduce social distance with females’. In parallel, Flores-Macias and Lawson (Reference Flores-Macias and Lawson2008) found that respondents in cosmopolitan Mexico City offered more egalitarian responses when interviewed by women, and that these effects were stronger for male respondents than for female respondents on gender-sensitive items; however, they found no differences in the rest of Mexico. Similarly, drawing on panel data from Germany, Zoch (Reference Zoch2021: 633) concluded that survey respondents reported ‘less traditional gender ideologies to female interviewers’, but these results were primarily driven by male respondents and related to questions about gender equality in work and technology.

Findings using the Afrobarometer survey are also mixed. Drawing on data from 20 countries, Lau (Reference Lau2018) found that respondents surveyed by women were less supportive of democratic procedures and more supportive of authoritarianism. Sundström and Stockemer (Reference Sundström and Stockemer2022) found strong support for the direct (social attribution model) effect and limited support for the conditional attribution model based on a single question of whether women should have an equal chance of being elected to political office.

Based on the theoretical literature and the mixed results in the empirical literature, I hypothesized that the presence of a male interviewer (relative to a female interviewer) would lead to a significant underestimation of public support for greater gender equality, as respondents of all genders aim to conform to the predominant gender-unequal norm and appease the male interviewer. Because of the high levels of gender inequality and unequal gender norms in the sample of African countries, I speculate that male interviewers often activate a psychological awareness of the situation and induce respondents towards a socially desirable response (i.e. what they think the average male would want to hear based on societal norms). At the same time, it is also possible that respondents are providing more egalitarian descriptive and normative views when they are interviewed by a woman (which may be in line with what they believe a woman interviewer would find socially desirable). In other words, both male and female interviewers likely generate pressures to conform socially and avoid negative evaluations (in line with Cialdini and Goldstein Reference Cialdini and Goldstein2004).

Hypothesis 1: Respondents are more likely to offer responses that align with gender-unequal norms (less support for women’s rights) to a male interviewer. In addition, respondents are more likely to offer socially desirable responses aligning with gender equality to a female interviewer.

In line with Sundström and Stockemer’s (Reference Sundström and Stockemer2022) finding for support for equal rights to political office, my conjecture was that these effects would apply to questions about equal rights to owning and inheriting land as well as equal rights to a job. In other words, I anticipated that the models would provide support for both the social attribution model (a direct effect) and the social distance model (an interaction effect), with the effect appearing stronger for male respondents.

My preliminary hypothesis for the stronger effect among male respondents is that they may experience greater social desirability pressure linked to their perception of how their response might influence their social approval (or negative evaluation) depending on whether they align (or clash) with conservative gender norms when speaking to a male interviewer. This pressure is particularly relevant when answering gender-specific questions about men’s rights to own and inherit land or receive preferential treatment in the labour market, as men may feel more compelled to provide responses that they believe align with their male counterpart’s expectations about the dominant unequal social norm. In contrast, while female respondents may also experience social desirability pressures to conform to the unequal gender norm when interviewed by men, the effect is likely weaker due to the absence of a shared gender identity. This hypothesis is a slight revision of the conditional attribution model, because I contend that men are more likely than women to adjust their responses based on both the characteristics of the interviewer and themselves.

Hypothesis 2: Male respondents are more likely than female respondents to offer responses that align with gender-unequal norms (less support for women’s rights) to a male interviewer, as they may be more motivated to appease their male counterpart by aligning with the dominant social norm.

In addition, the literature on interviewer effects has typically found that people with less education are more vulnerable to interviewer effects (Blaydes and Gillum Reference Blaydes and Gillum2013; Sundström and Stockemer Reference Sundström and Stockemer2022). Given the significance of traditional social norms surrounding gender inequality and the potential for these norms to diminish with higher education levels, I examine the interaction between the respondent’s education level and the interviewer’s gender (following Sundström and Stockemer Reference Sundström and Stockemer2022). I hypothesize that more-educated individuals will be less inclined to appease the interviewer by conforming to perceived social norms associated with the interviewer’s gender – expressing pro-male positions with male interviewers and pro-egalitarian views with female interviewers.

Hypothesis 3: Gender-of-interviewer effects on questions about women’s rights will be smaller where the respondent has higher levels of education.

Previous research on the gender-of-interviewer effect has mostly been limited to gender-specific questions. One exception is Lau (Reference Lau2018), who found that respondents were less supportive of democratic procedures and more supportive of authoritarianism in the presence of a female interviewer (a direct effect). Building on this study, I draw on a larger sample of 34 countries and also include previously unstudied political questions such as feeling close to opposition (versus ruling) parties. To construct this variable, I follow Fjelde and Olafsdottir’s (Reference Fjelde and Olafsdottir2024) approach, with a few slight revisions. I code the ‘ruling’ party based on which party held executive power at the time of the survey in each country. If the head of state was an independent (e.g. Patrice Talon in Benin), I coded the parties that supported that candidate as the ruling party.Footnote 5
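To make this coding rule concrete, the sketch below illustrates in Python how such a recoding might be implemented. The country and party names, the mapping RULING_PARTIES and the function close_to_opposition are placeholders for illustration only; the article’s actual coding follows the sources described in the accompanying footnote.

```python
# Hypothetical sketch of the ruling-versus-opposition recoding described above.
# RULING_PARTIES is a placeholder mapping, not the article's coding table: each
# country maps to the party (or all coalition parties) holding executive power
# at the time of fieldwork.
RULING_PARTIES = {
    "Country A": {"Ruling Party A"},                          # single ruling party
    "Country B": {"Coalition Party 1", "Coalition Party 2"},  # all coalition members coded as ruling
}

def close_to_opposition(country, party_close_to):
    """Return 1 if the named party is in opposition, 0 if it is the ruling party
    (or part of the ruling coalition), and None if the respondent does not feel
    close to any party (such respondents are excluded from this sample)."""
    if party_close_to is None:
        return None
    return 0 if party_close_to in RULING_PARTIES.get(country, set()) else 1
```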

Although theoretical explanations for gender-of-interviewer effects on responses to questions about democracy are limited, I anticipated that, in African countries – where significant gender disparities in political authority and political engagement exist, with men disproportionately occupying positions of power (Rwanda’s parliament being a notable exception) – the gender of the interviewer would significantly influence responses to political questions.Footnote 6 Specifically, I contend that male interviewers, rather than female interviewers, are more likely to activate social desirability pressures to align with pro-democratic norms, which are widely endorsed across the continent irrespective of the actual level of democracy in the country.Footnote 7 Survey data across African countries have shown that men tend to be more supportive of democracy than women (García-Peñalosa and Konte Reference García-Peñalosa and Konte2014) and have higher rates of political participation (Coffe and Bolzendahl Reference Coffe and Bolzendahl2011).

There are several reasons put forward to explain these gender gaps, including women’s ‘preference for social expenditures such as basic infrastructure (e.g. water supplies), health and education that impact the production of household goods, including children, on which women tend to specialize’ (García-Peñalosa and Konte Reference García-Peñalosa and Konte2014: 105). If women perceive a trade-off between social expenditures on essential public goods and funding for democratic institutions or multiple political parties, they may express lower support for democracy. Alternatively, if they believe that promoting democracy and pluralistic competition hinders their access to these services, they may also be less inclined to support democracy.

Relatedly, another theoretical reason offered is ‘women’s greater risk aversion’ and that women may be more likely to ‘accept traditional roles, prefer the order and security of authoritarian rule and are less willing to accept plurality’ (García-Peñalosa and Konte Reference García-Peñalosa and Konte2014: 105–106). Indeed, based on their regression analyses of the Afrobarometer data, Cecilia García-Peñalosa and Maty Konte (Reference García-Peñalosa and Konte2014: 115–116) conclude that women may perceive the ‘presence of many parties’ as a potential source of conflict, which could explain why women are less supportive of pluralistic democratic competition. In related research, Amanda Clayton and Pär Zetterberg (Reference Clayton and Zetterberg2021: 874) assert: ‘Women may be more risk adverse than men as they choose political careers and thus more likely to affiliate with more established and stable ruling parties.’ Taken together, this suggests that women interviewers may be perceived as less politically engaged, more likely to support stable ruling parties and more focused on household welfare and social stability, such as the absence of political conflict. As a result, both male and female respondents may adjust their answers based on what they assume female interviewers would find favourable, such as weaker support for pluralistic democratic competition or opposition parties.

Hypothesis 4: Respondents are more likely to offer responses that align with pro-democratic norms to a male interviewer, and to express less support for democracy and opposition parties to a female interviewer (direct effect).

Because these are not gender-specific questions about women’s rights, I hypothesize that respondents will be made less aware of their own gender and relative position in society vis-à-vis the interviewer’s gender. In other words, any social desirability pressure is less likely to be activated differently for male and female respondents.

In addition, based on the importance of traditional social norms and their likely erosion with greater education, I interact the respondent’s level of education with the gender of the interviewer (following Sundström and Stockemer Reference Sundström and Stockemer2022). I hypothesize that individuals with more education should be less likely to appease the interviewer by conforming to the socially desirable norm of supporting democracy in the presence of a male interviewer or by expressing lower support for democracy and pluralistic competition in the presence of a female interviewer.

Hypothesis 5: Gender-of-interviewer effects on survey items gauging pro-democratic values will be smaller at higher levels of respondent education.

Data and method

In this study, I draw on the Round 7 Afrobarometer dataset, which includes 45,823 households across 34 African countries. The survey took place between September 2016 and August 2018 (Afrobarometer Network Reference Afrobarometer Network2017). The countries included in the dataset are: Benin, Botswana, Burkina Faso, Cabo Verde, Cameroon, Côte d’Ivoire, Eswatini, Gabon, The Gambia, Ghana, Guinea, Kenya, Lesotho, Liberia, Madagascar, Malawi, Mali, Mauritius, Morocco, Mozambique, Namibia, Niger, Nigeria, São Tomé and Príncipe, Senegal, Sierra Leone, South Africa, Sudan, Tanzania, Togo, Tunisia, Uganda, Zambia and Zimbabwe.

In each wave, between 1,200 and 2,400 households are interviewed in each country. Within each of these households, only one adult household member is randomly selected to be interviewed. Afrobarometer employs a clustered, stratified, multistage, probability sample design that resulted in a sample composed of 50.06% women and 49.94% men. Afrobarometer’s sampling method adheres to a gender quota by explicitly alternating respondents by gender, ensuring a balanced sample. Round 7 is the first round in which the Afrobarometer Network decided to switch to computer-assisted personal interviewing (CAPI). The chosen male or female respondent was read questions from the screen of a handheld tablet (Afrobarometer Network Reference Afrobarometer Network2017).

One threat to the validity of estimates of gender-of-interviewer effects is that interviewers are not randomly assigned: male and female interviewers may differ in characteristics that also affect responses, biasing the estimated gender-of-interviewer effects. A second, related concern is the non-random assignment of interviewers to respondents, which could introduce bias if respondents’ characteristics differ systematically in ways that are correlated with the gender of the interviewer (for example, if female interviewers are assigned to more-educated respondents and male interviewers to less-educated respondents).

To address these concerns, Sundström and Stockemer (Reference Sundström and Stockemer2022: 676) conducted interviews with Afrobarometer and its national partner organizations, concluding that ‘gender of interviewer effects in the data do not stem from assignment bias or a systematic skewness in the assignment of male or female interviewers to particular areas or respondents.’ Additionally, they note that because the survey employs a gender quota in its sampling, ‘the assignment of interviewers to respondents should not create gendered distortions’ (Sundström and Stockemer Reference Sundström and Stockemer2022: 676).

I assess these differences in the Round 7 data and, like Sundström and Stockemer (Reference Sundström and Stockemer2022), find very small albeit statistically significant differences in interviewer characteristics (age, education and rural/urban residence) by interviewer gender (see Table A1 in the Appendix in the Supplementary Material). To address this particular threat, all models control for interviewer age, interviewer education and whether the interviewer comes from a rural or urban area. Similarly, Table A2 in the Appendix shows very small statistically significant differences for most of the respondent characteristics that are controlled for in the model.Footnote 8 For example, the last rows of the table show that female interviewers are slightly more likely than male interviewers to be assigned to urban areas. Given these small differences, the models also account for these potential confounders, including the respondent’s gender, age, education and urban/rural residence.
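As an illustration of the kind of balance check summarized in Appendix Tables A1 and A2, the sketch below compares covariates across male and female interviewers using Welch t-tests. The DataFrame df and its column names (interviewer_male, resp_age, resp_educ, resp_urban) are assumptions for illustration, not the Afrobarometer variable names.

```python
import pandas as pd
from scipy import stats

def balance_by_interviewer_gender(df, covariates):
    """Compare covariate means across interviews conducted by male and female
    interviewers; a small but significant difference signals a potential confounder."""
    rows = []
    for var in covariates:
        with_male = df.loc[df["interviewer_male"] == 1, var].dropna()
        with_female = df.loc[df["interviewer_male"] == 0, var].dropna()
        t_stat, p_val = stats.ttest_ind(with_male, with_female, equal_var=False)  # Welch's t-test
        rows.append({"covariate": var,
                     "mean_male_interviewer": with_male.mean(),
                     "mean_female_interviewer": with_female.mean(),
                     "difference": with_male.mean() - with_female.mean(),
                     "p_value": p_val})
    return pd.DataFrame(rows)

# Example: balance_by_interviewer_gender(df, ["resp_age", "resp_educ", "resp_urban"])
```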

Moreover, in line with other studies in the literature, there are strong theoretical and empirical reasons linking these factors to the outcomes of interest in this article such as support for women’s rights, support for democracy and partisan identity (Flores-Macias and Lawson Reference Flores-Macias and Lawson2008; Lau Reference Lau2018; Lau et al. Reference Lau, Baker, Fiore, Greene, Lieskovsky, Matu and Peytcheva2017). In addition, the models include covariates for whether one’s spouse is present during the interview (following Flores-Macias and Lawson Reference Flores-Macias and Lawson2008; Lau et al. Reference Lau, Baker, Fiore, Greene, Lieskovsky, Matu and Peytcheva2017; Zimbalist Reference Zimbalist2022) and for whether the respondent believes the interviewer was sent by the government (as opposed to being sent by a non-governmental institution) (following Lau Reference Lau2018; Tannenberg Reference Tannenberg2022; Zimbalist Reference Zimbalist2018).Footnote 9 These covariates have been shown to be associated with biased responses for a range of survey questions that overlap with the ones used in this study. Of most importance to this article, these studies have found that the perception of a government interviewer is associated with less support for democracy and greater support for the ruling party, possibly due to fears of repercussions from the perceived government interviewer. (For further description of these independent variables as well as the outcome variables, see Table A8 in the Appendix.) All models include country-fixed effects to control for unobserved confounding factors that vary across countries (e.g. relatively constant or slowly changing national cultural attitudes that may affect gender norms and the outcome variables).Footnote 10

Table 1 provides descriptive statistics for respondent and interviewer characteristics included in the model and for the outcome variables.Footnote 11 Respondents are 37 years old on average, evenly split between males and females, and possess between primary school completed (= 3) and some secondary education (= 4). Respondents are more likely to be rural residents (55%). Furthermore, respondents believe that interviewers are state representatives nearly 40% of the time. A respondent’s spouse is present in 6% of interviews. With regard to interviewer traits, averaged across the 45,823 interviews, interviewers are slightly younger than respondents (30 years old on average), more likely to be urban residents (56% of interviews were conducted by an urban interviewer), possess between secondary school completed (= 5) and post-secondary qualifications (= 6) and are slightly more likely to be male (52% of interviews were conducted by a male interviewer).Footnote 12

Table 1. Descriptive Statistics

Notes: The data are weighted.

Following Sundström and Stockemer (Reference Sundström and Stockemer2022), the analysis proceeds in multiple steps. I first report cross-tabular statistics for the gender-specific questions and use a chi-squared test of independence to examine whether the differences are statistically significant. Since the outcome variables are ordered – except for support for democracy, which is dichotomized due to the lack of a natural category ordering, and closeness to opposition parties, which is dichotomized as closeness to ruling versus opposition parties – I use ordered logistic regression models (and binary logit for the two dichotomized outcomes) to assess gender-of-interviewer effects.Footnote 13 In addition to the tables, I present marginal effects plots to display graphically the gender-of-interviewer effects for the interaction terms.
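A minimal sketch of these two steps is given below, assuming a pandas DataFrame df with hypothetical column names (the actual Afrobarometer variable names differ). Country dummies stand in for the country-fixed effects, and statsmodels’ OrderedModel provides the ordered logit; this is an illustration of the estimation strategy, not the article’s replication code.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Step 1: cross-tabulation and chi-squared test of independence between
# interviewer gender and the ordered response (column names are assumptions).
xtab = pd.crosstab(df["interviewer_male"], df["equal_rights_land"])
chi2, p_value, dof, expected = chi2_contingency(xtab)

# Step 2: ordered logit with the respondent and interviewer controls described
# above plus country dummies as fixed effects (OrderedModel estimates thresholds,
# so no constant is added).
controls = ["resp_male", "resp_age", "resp_educ", "resp_urban", "spouse_present",
            "perceived_govt", "int_age", "int_educ", "int_urban"]
exog = pd.get_dummies(df[["interviewer_male"] + controls + ["country"]],
                      columns=["country"], drop_first=True).astype(float)
endog = df["equal_rights_land"].astype(pd.CategoricalDtype(ordered=True))
result = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())
```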

Findings

Tables 2 and 3 show the cross-tabular results for two previously unstudied questions in the Afrobarometer survey that ask about normative support for women’s equal rights to land and job opportunities. Women respondents are, unsurprisingly, more supportive of equal rights across the board. More importantly, the chi-squared tests of independence show clear gender-of-interviewer effects. In line with Hypothesis 1, respondents are more likely to support gender equality when interviewed by women and less likely to do so when interviewed by men. Table 2 shows substantively smaller, though still statistically significant, row-level differences for female respondents than for male respondents in support for equal rights to land, which aligns with the multivariate interaction effects observed in the subsequent regression models. By contrast, the magnitudes of the differences for male and female respondents in support for equal rights to a job are comparable, consistent with the statistically insignificant multivariate interaction effects in those models.

Table 2. Agreement with Equal Rights to Own and Inherit Land

Table 3. Agreement with Equal Rights to Job

The multivariate regression models presented in Table 4, which include an appropriate set of controls, provide stronger support for Hypothesis 1 (an unconditional direct effect) for the outcome of equal rights to a job. There is a large, negative and statistically significant association between having a male interviewer and support for equal rights (ranging from −0.33 to −0.44 depending on the specification). Based on the smallest coefficient of −0.33, the odds of expressing weaker support for equal rights to a job are 1.39 times as high on average when the interviewer is a man rather than a woman.Footnote 14
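The 1.39 figure is simply the standard odds-ratio transformation of an ordered-logit coefficient:

\[ \mathrm{OR} = e^{|\beta|} = e^{0.33} \approx 1.39, \]

that is, the odds of giving a less supportive answer are roughly 39% higher with a male interviewer (equivalently, e^{-0.33} ≈ 0.72 for the odds of a more supportive answer). The same transformation underlies the odds ratios reported for the democracy and opposition-party outcomes below.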

Table 4. Logit Regressions for Support for Equal Rights to a Job

Notes: ** p < 0.05, *** p < 0.01.

The models are ordered logit models with individuals as the unit of analysis. All models include country-fixed effects. The data are unweighted. Standard errors in parentheses.

For the binary predictors, only being a male respondent has a larger association (ranging from −0.53 to −0.56) with the outcome variable. By contrast, column II shows that there is no support for Hypothesis 2 (a conditional interaction effect): there are no statistically significant differences in the male-interviewer effect conditional on the gender of the respondent. This can be seen graphically in Figure 1, where the gaps for female and male respondents are roughly similar in size (comparing lines of the same colour across the two panels). The figure shows that, with all other factors held constant in the model, women have a roughly seven-percentage-point higher chance of strongly agreeing with the statement that women should have equal rights to a job as men when the interviewer is female rather than male (a change from 41% to 34%). Meanwhile, men exhibit a statistically similar six-percentage-point lower probability of strongly agreeing with the same statement when interviewed by a man rather than a woman (the predicted probability falls from 29% with a female interviewer to 23% with a male interviewer, as shown in Figure 1).
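The predicted probabilities behind Figure 1 can be reproduced, in outline, by counterfactual prediction from the fitted ordered logit: set the interviewer-gender indicator to each value for every observation, predict the category probabilities and average the difference. The sketch below reuses the hypothetical result and exog objects from the estimation sketch above and, for simplicity, omits the respondent-gender interaction that generates the figure’s two panels.

```python
import numpy as np

# Average marginal effect of interviewer gender on P(strongly agree),
# computed by counterfactual prediction from the ordered logit fitted above.
exog_female = exog.copy()
exog_female["interviewer_male"] = 0.0
exog_male = exog.copy()
exog_male["interviewer_male"] = 1.0

probs_female = np.asarray(result.predict(exog_female))  # n x K category probabilities
probs_male = np.asarray(result.predict(exog_male))

# Assume the last (highest) category corresponds to 'strongly agree'.
ame = (probs_male[:, -1] - probs_female[:, -1]).mean()
print(f"Average marginal effect of a male interviewer on P(strongly agree): {ame:.3f}")
```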

Figure 1. Predicted Average Marginal Effect of Interviewer Gender, by Respondent Gender

Finally, to evaluate Hypothesis 3, I interact interviewer gender with the respondent’s level of education and find a positive, statistically significant association in column III of Table 4 (the results are displayed graphically in Figure 2). In other words, Figure 2 shows that, at higher levels of education, respondents are more likely to express support for equal rights to both male and female interviewers, especially for the ‘strongly agree’ response, shown in maroon and yellow (the predicted probabilities rise substantially as education increases). Similarly, the predicted probabilities of ‘strongly disagreeing’ with equal rights fall dramatically for both male and female interviewers as education rises (depicted in purple and dark blue). The bars around the point estimates display the confidence intervals, showing that the predicted probabilities are statistically different from one another across most education levels for the ‘strongly agree’ and ‘strongly disagree’ responses.

Figure 2. Predicted Average Marginal Effect of Interviewer Gender, by Respondent’s Education Level

The findings for the second gender-specific outcome of interest, agreement with women’s equal rights to own and inherit land, follow a similar but not identical pattern. The results in Table 5 provide support for Hypothesis 1 (an unconditional direct effect) and, in contrast to the first outcome, for Hypothesis 2 (a conditional attribution effect). There is a negative, statistically significant association between having a male interviewer and support for equal rights (ranging from −0.10 to −0.30 depending on the specification). For the binary predictors, only being a male respondent typically has a larger association (ranging from −0.35 to −0.52) with the outcome variable. Columns II and III show that there is support for Hypothesis 2 (a conditional interaction effect). This can be seen graphically in Figure 3, where the slopes of the lines are steeper for male respondents for all response options except ‘agree’. The figure shows that, with all other factors held constant in the model, men exhibit a roughly five-percentage-point lower probability of strongly agreeing with the statement (shown in yellow) that women should have equal rights to own and inherit land as men when interviewed by a man rather than a woman (the predicted probability falls from 34% with a female interviewer to 29% with a male interviewer, as shown in the right-hand panel of Figure 3). By contrast, there is a statistically smaller, roughly three-percentage-point higher chance of women strongly agreeing with the same statement when the interviewer is female rather than male (44% versus 41%, as shown in the left-hand panel of Figure 3).

Figure 3. Predicted Average Marginal Effect of Interviewer Gender, by Respondent Gender

Table 5. Logit Regressions for Support for Equal Rights to Own and Inherit Land

Notes: ** p < 0.05, *** p < 0.01

The models are ordered logit models with individuals as the unit of analysis. All models include country-fixed effects. The data are unweighted. Standard errors in parentheses.

Because the interaction between interviewer gender and respondent gender was statistically significant in column II, I also ran a triple interaction between these terms and respondent education (presented in column III). While the triple interaction coefficient is not statistically significant, the two-way interaction between interviewer gender and respondent gender remains statistically significant and doubles in magnitude (in line with Hypothesis 2), and the interaction between interviewer gender and education is statistically significant and positive (in line with Hypothesis 3). This latter association is greater in magnitude in column IV, when the triple interaction term is omitted. I present the results of this latter two-way interaction graphically in Figure 4. The figure demonstrates that, at higher levels of education, all respondents are more likely to ‘strongly agree’ with support for equal rights in the presence of both male and female interviewers. The predicted probabilities for the ‘strongly agree’ response (represented in maroon and yellow) increase significantly as education levels rise. Meanwhile, as respondent education increases, the predicted probabilities of disagreeing with equal rights decline for both male and female interviewers.
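For readers who want the specification spelled out, the column III model can be written – in my notation, and assuming (as is standard) that all constituent two-way interactions are included – as a latent-variable ordered logit:

\[ y_i^* = \beta_1 M_i + \beta_2 R_i + \beta_3 E_i + \beta_4 M_i R_i + \beta_5 M_i E_i + \beta_6 R_i E_i + \beta_7 M_i R_i E_i + \mathbf{X}_i'\boldsymbol{\gamma} + \alpha_{c(i)} + \varepsilon_i, \]

where M_i indicates a male interviewer, R_i a male respondent, E_i the respondent’s education level, X_i the remaining controls, \alpha_{c(i)} the country-fixed effects and \varepsilon_i a logistic error; the observed ordinal response is determined by the estimated thresholds that y_i^* falls between. Hypothesis 2 concerns \beta_4 and Hypothesis 3 concerns \beta_5.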

Figure 4. Predicted Average Marginal Effect of Interviewer Gender, by Respondent Education Level

To summarize, the analyses supported a direct unconditional effect (Hypothesis 1) and a declining gender-of-interviewer effect with increasing respondent education (Hypothesis 3). There was support for the conditional attribution effect (Hypothesis 2) for the equal rights to owning and inheriting land outcome, but not for the equal rights to a job outcome. I interpret the divergent results on the conditional effects hypothesis as follows: due to the significant gender inequality regarding women’s rights to own and inherit land,Footnote 15 male respondents may perceive that they have more at stake when answering this question in the presence of a male interviewer. This dynamic could lead them to respond in ways that align more closely with traditional gender norms, especially if they believe their responses might weaken or reinforce their standing within those norms.

In contrast, for the question regarding equal rights to a job, male and female respondents do not appear to be influenced differently by the gender of the interviewer. I speculate that this outcome reflects the relatively balanced labour force participation rates between men and women, particularly in sub-Saharan Africa, which has one of the highest rates of female labour force participation worldwide. According to UNDP data, the female labour force participation rate (ages 15 and older) in 2016 – the year of the Afrobarometer survey – was 60.8% in sub-Saharan Africa, surpassing all other regions identified by the UNDP, including the ‘Very High Human Development’ category, which had a rate of 53.6%.Footnote 16 Still, there are major gender disparities in types and quality of work, with women more likely to be employed in lower wage, informal, or self-employment categories (Woldemichael Reference Woldemichael2020). Because this outcome exhibits relative parity – making the descriptive norm more gender-equal in stark contrast to land rights – male respondents may feel less pressure than in the land rights question to align their answers with what they believe their male interviewer counterparts find socially desirable. As a result, their responses do not differ significantly from those of female respondents, as both groups state more gender-unequal attitudes with male interviewers and more gender-equal attitudes with female interviewers.

Table 6 presents the results for five questions about support for democracy, one question about whether the respondent feels close to opposition parties (versus ruling parties) and a question about whether people may disobey the laws of a government they did not vote for. The gender-of-interviewer effect was statistically significant for all outcomes except one: rejection of one-man rule. In line with Hypothesis 4, these effects reveal that, in the presence of a male interviewer, respondents consistently overstate their support for democracy, their rejection of military or one-party rule, and their support for freedom of movement as opposed to government restrictions on movement in the interest of public security. Similarly, there were statistically significant gender-of-interviewer effects for the two previously unstudied questions about feeling close to an opposition party and about whether one must obey government laws even if one did not vote for that government (presented in Table 6).

Table 6. Logit Regressions for Support for Democracy and Opposition Parties

Notes: ** p < 0.05, *** p < 0.01

The models are logit (when binary dependent variable) or ordered logit models (when ordinal dependent variable) with individuals as the unit of analysis. All models include country-fixed effects. The data are unweighted. Standard errors in parentheses.

In the presence of a male interviewer, respondents asserted higher levels of closeness to opposition parties (versus the ruling party) and were more likely to agree that one need not obey the laws of a government one did not vote for. It is important to highlight that all of these gender-of-interviewer effects operate in the opposite direction to the perceived-government-interviewer effects. Specifically, male interviewers elicit responses biased towards support for democracy and close affiliation with opposition parties, while perceived state interviewers prompt biases favouring non-democratic governance and close ties to ruling parties.Footnote 17 These results lend some support to the notion that respondents seek to align with the injunctive norm favouring democracy and pluralistic competition, which male interviewers – who are more likely to be involved in politics and are perceived to value democratic competition – may make more salient. Meanwhile, respondents interviewed by women may instead adjust their answers to align with what they perceive to be socially desirable to female interviewers – prioritizing social and political stability, even if it means expressing less support for democracy or opposition parties.

The magnitudes of these effects are also non-trivial. The odds of expressing stronger support for democracy are 1.38 times as high on average when the interviewer is a man rather than a woman.Footnote 18 With regard to feeling close to an opposition party, the odds that an individual states feeling close to an opposition party are 1.21 times as high on average when interviewed by a man rather than by a woman.Footnote 19

While there are direct, unconditional effects associated with interviewer gender, no statistically significant differences were found between male and female respondents when the interaction between respondent and interviewer gender was included. The inclusion of these interaction terms does not alter the direct, unconditional effects. I present those results in Appendix Table A4.

Finally, there is mixed support for Hypothesis 5. The interaction term between interviewer gender and respondent education is statistically significant for less than half of the outcomes (three out of seven), suggesting that more-educated respondents do not consistently show less susceptibility to social desirability pressures to align with prevailing norms in favour of democracy and support for opposition parties when interviewed by men (see Table 6).

Discussion

This article has examined gender-of-interviewer effects for a broader set of survey items pertaining to gender equality, democracy and feelings of closeness to opposition (versus ruling) parties in a larger sample of 34 countries across Africa. The regression models uncovered substantial gender-of-interviewer effects for two previously unstudied survey questions on women’s rights to land and to a job, irrespective of the respondent’s gender, supporting the (unconditional) social attribution model. Consistent with social desirability bias theory, the direction of these unconditional effects aligns with respondents reporting less gender-equal attitudes to male interviewers and more gender-equal attitudes to female interviewers. In contrast, there was mixed support for the conditional attribution model, as the estimated effects diverged for male and female respondents for only one of the two women’s rights outcomes – a normative question about equal rights to owning and inheriting land. While all respondents are less supportive of equal rights to land when interviewed by men, the negative effect is stronger for male respondents.

Another important contribution of this study was to demonstrate statistically significant unconditional gender-of-interviewer effects for a wider range of questions about support for democracy and partisan identity in a larger sample of countries. I find that male interviewers elicit responses that show greater support for democracy and feelings of closeness to opposition parties rather than ruling parties. I suggest that these effects, which are observed irrespective of the respondent’s gender, may stem from men’s disproportionate participation and influence in politics and a perception that they value democratic competition, which raises the salience of pro-democratic norms. Conversely, respondents may perceive female interviewers as more focused on satisfying basic needs and preserving social and political stability (e.g. avoiding conflict), leading them to adjust their responses accordingly. However, further research is needed to better understand whether and how these posited mechanisms operate.

From a practical perspective, this article’s findings underscore the importance of accounting for and assessing gender-of-interviewer effects. Many studies on attitudes towards women’s rights, support for democracy and partisan identity rely on Afrobarometer data and other regional barometer data without considering interviewer gender. This omission may lead to biased estimates of support levels (e.g. the proportion of the population that supports democracy) and inaccurate inferences about relationships involving these variables in regression analyses (e.g. the relationship between partisan identities and support for democracy). Moreover, many widely used surveys do not record the interviewer’s gender; they should include this information and make it accessible to data users, allowing researchers and other stakeholders to evaluate and possibly adjust for any resulting biases. Without these data, such analyses remain impossible for numerous existing surveys.

This study’s findings also have implications for improving survey research across developing countries. One option is self-administration in high-literacy contexts, which eliminates the biases caused by the gender of the interviewer. Research has shown that respondents are more willing to answer sensitive questions when they are self-administered (Hochstim 1967, cited in Tourangeau and Yan Reference Tourangeau and Yan2007: 863; see Krumpal Reference Krumpal2013 for a review). In addition, indirect modes of administration such as endorsement experiments, list experiments (also known as the item count or unmatched count technique) (Glynn Reference Glynn2013) or randomized response techniques (Coutts and Jann Reference Coutts and Jann2011; Rosenfeld et al. Reference Rosenfeld, Imai and Shapiro2015) could be tested. Despite their limitations, indirect methods may improve data collection on questions that are likely to be susceptible to interviewer gender bias, and they are more feasible than self-administration in low-literacy contexts. Further research would help to establish whether, and to what extent, these methods produce estimates that are less affected by interviewer gender across different contexts. Future experiments could rigorously test bias by randomly assigning interviewers of different genders or, if financially and logistically feasible, using mixed-gender interviewer pairs. Another potential approach would be to experiment with matched interviewer–respondent gender pairs or to allow respondents to select the gender of their interviewer. Previous research has yielded mixed results on this approach, warranting further study (Catania et al. Reference Catania, Binson, Canchola, Pollack, Hauck and Coates1996; Davis et al. Reference Davis, Couper, Janz, Caldwell and Resnicow2010).

Overall, these results provide new evidence of substantial gender-of-interviewer effects across a range of survey questions about women’s rights as well as questions about democracy and opposition support in a large sample of African countries. Large gender inequalities and unequal gender norms and stereotypes make it likely that an interviewer’s gender generates response bias for many questions. In light of this, it is important to experiment with new ways of accurately measuring public opinion on these critical issues, which have policy and programmatic consequences. Domestic political leaders, opposition parties, civil society organizations and external development actors often draw on public opinion results to make decisions or lobby for women’s rights and democracy policies and programmes. Regarding women’s rights, it is likely that support for gender equality is underestimated in many African countries because respondents adjust their responses to cater to the gender-unequal norm when interviewed by a male. Alternative survey techniques could be deployed to ensure that respondents feel free from pressure, enabling them to provide their honest beliefs and assessments on important gender-related questions, as well as other topics such as democracy and partisan identity. Still, further research is needed to better understand the specific underlying social psychological mechanisms underpinning gender-of-interviewer effects for these questions, which will offer valuable insights for researchers and policymakers.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/gov.2025.10017.

Acknowledgements

Thanks are due to Ashley Blair Simpson, Jonas Bunte, Teresa Hübel, Dewa Gede Sidan Raeskyesa and Matthias Kourek for their suggestions. Although I wrote the entire original text independently, I used ChatGPT to help refine certain sentences that I found unwieldy. My process was to input my original sentence and ask ChatGPT to revise it. I then chose whether to accept the suggestion or parts of it. All errors remain my own.

Footnotes

1 This study did not analyse self-reported closeness to ruling versus opposition parties.

2 For comparison, the average score is 0.194 in OECD (Organisation for Economic Co-operation and Development) countries. The index encompasses reproductive health, empowerment and economic status.

4 The author previously worked at USAID, where high-level policymakers frequently referred to public opinion data, including the regional barometers, to measure levels of, and attitudes toward, democracy.

5 For consistency, I used the elections data available at www.electionguide.org/elections/id/3215/ and back-checked those results with additional sources available online. Similarly, in countries without presidential systems, I coded the largest party in government as the ruling party based on the most recent parliamentary elections (e.g. Lesotho). If there was a ruling party coalition at the national level (e.g. the Mauritian Alliance in Mauritius), then all parties included in that coalition were coded as ruling. Survey respondents are first asked if they ‘feel close to any particular political party’. Respondents who reply ‘yes’ are then asked a follow-up question: ‘Which party is that?’ Thus, the sample for the follow-up question is restricted only to those who express closeness to a particular party, which is 20,816 respondents or 45.43% of the full Afrobarometer sample. I do not create a ‘non-partisans’ category for those who answer ‘no’ to the first question because this group likely includes both genuine non-partisans and individuals who prefer not to disclose their partisan identity. Combining them would conflate these distinctions. See Appendix Table A8 in the Supplementary Material for full details of the survey questions.
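A minimal sketch of this coding rule, with placeholder party names, is given below; it illustrates the procedure described in this note rather than reproducing the article’s actual coding files, and the mapping from country to ruling parties is a hypothetical structure assumed for the example.

```python
# Sketch of the coding rule described above, with placeholder party names.
# 'ruling_parties' is a hypothetical mapping from country to the set of parties
# coded as ruling (the largest governing party, or every member of a ruling coalition).
import math
import pandas as pd

ruling_parties = {
    "CountryA": {"PARTY_A"},             # single ruling party
    "CountryB": {"PARTY_B", "PARTY_C"},  # ruling coalition: all members coded as ruling
}

def code_party_closeness(row):
    # Only respondents who say they feel close to a party answer the follow-up;
    # everyone else is left uncoded (no 'non-partisan' category is constructed).
    if row["feels_close"] != "yes":
        return math.nan
    ruling = ruling_parties.get(row["country"], set())
    return "ruling" if row["party_named"] in ruling else "opposition"

df = pd.DataFrame({
    "country": ["CountryA", "CountryA", "CountryB"],
    "feels_close": ["yes", "no", "yes"],
    "party_named": ["PARTY_A", None, "PARTY_D"],
})
df["party_closeness"] = df.apply(code_party_closeness, axis=1)
print(df)
```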

6 According to a recent report surveying women’s political participation across the continent, the International Institute for Democracy and Electoral Assistance (2024) found that women’s representation in African parliaments had risen by only one percentage point from 25% in 2021 to 26% in 2024. The authors projected that it would take until 2100 for African countries to reach gender parity in parliament.

7 This is supported by data even in Afrobarometer countries rated as highly undemocratic by expert-coded assessments such as those of the Varieties of Democracy (V-Dem) Institute and Freedom House. For example, Uganda and Zimbabwe are far from beacons of democracy, yet the percentages of respondents in Round 7 who assert that ‘Democracy is preferable to any other kind of government’ are 87% and 81%, respectively.

8 There are no statistical differences for spouse present or perception of a government interviewer.

9 At the end of the survey, respondents are asked who they believe sent the interviewer. I code responses indicating a political party, politician or the government as perceiving the interviewer to be affiliated with the government. All other responses – including Afrobarometer, private companies and universities – are coded as non-government.

10 In Appendix Tables A5, A6 and A7, I demonstrate that the results of the main models are robust when replicated using region fixed effects instead of country fixed effects.

11 The data are weighted per the Afrobarometer survey manual (Afrobarometer Network 2017).

12 Interviewer traits are self-reported by the interviewer. It is important to note that some countries face challenges in hiring equal numbers of women and men, likely due to gender inequities in access to higher education, which is beneficial for enumerator positions.

13 The models present the ordered log-odds (logit) regression coefficients, the standard errors and the significance levels. The log-pseudolikelihood of the model and a likelihood ratio (LR) chi-squared test demonstrate that all models are significant. Per the Afrobarometer survey manual, the regression models are run on unweighted data (Afrobarometer Network 2017: 107). The models are run in Stata 18.
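For readers less familiar with this model family, the sketch below shows roughly what such an ordered logit looks like. It uses hypothetical column names and the statsmodels OrderedModel class rather than the Stata specification actually used, so it should be read as an illustration of the model family, not a replication.

```python
# Illustrative ordered logit with hypothetical column names, analogous in spirit
# to the article's Stata models (country fixed effects omitted here for brevity).
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("afrobarometer_extract.csv")  # hypothetical analyst-prepared extract

# Ordered outcome, e.g. agreement with equal rights to a job on a 1-5 scale.
endog = df["equal_job_rights"].astype(
    pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True)
)
exog = df[["male_interviewer", "female_respondent", "age", "education", "urban"]]

result = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())  # ordered log-odds coefficients, standard errors, p-values
```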

14 This obtains from exponentiating the coefficient on male interviewer (−0.329) in column II of Table 4, which provides an odds ratio of 0.720. 1 divided by the odds ratio gives: 1/0.720 = 1.39.
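Written generally (the same conversion underlies notes 18 and 19): for a logit coefficient $\beta$ on the male-interviewer indicator,

$$\mathrm{OR}_{\text{male}} = e^{\beta}, \qquad \mathrm{OR}_{\text{female}} = \frac{1}{e^{\beta}} = e^{-\beta},$$

so with $\beta = -0.329$, $e^{-0.329} \approx 0.720$ and $e^{0.329} \approx 1.39$.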

15 For example, according to SDG Indicator 5.a.1, which measures the proportion of the agricultural population with ownership or secure rights over agricultural land, the majority of African countries with available data are classified as either ‘Very far from target’ or ‘Far from target’. Data available from www.fao.org/sustainable-development-goals-data-portal/data/indicators/5a1-women-ownership-of-agricultural-land/en.

17 As a robustness check, I included interactions between interviewer gender and perceived survey sponsor (government versus non-government), which are reported in Appendix Table A3. The gender-of-interviewer effects were robust for all nine outcomes, while the interaction effects were statistically significant for only three out of nine outcomes.

18 This obtains from exponentiating the coefficient on male interviewer (0.323) in column I of Table 6, which provides an odds ratio of 1.38.

19 This obtains from exponentiating the coefficient on male interviewer (0.190) in column II of Table 6, which provides an odds ratio of 1.21.

References

Adida, CL, Ferree, KE, Posner, DN and Robinson, AL (2016) Who’s Asking? Interviewer Coethnicity Effects in African Survey Data. Comparative Political Studies 49(12), 1630–1660. https://doi.org/10.1177/0010414016633487.
Afrobarometer Network (2017) Round 7 Survey Manual. Technical Report, July. https://afrobarometer.org/wp-content/uploads/migrated/files/survey_manuals/ab_r7_survey_manual_en1.pdf.
Baekgaard, M (2023) Own-party Bias: How Voters Evaluate Electoral Outcomes. Government and Opposition: An International Journal of Comparative Politics 58(3), 556–575. https://doi.org/10.1017/gov.2021.55.
Bartels, BL and Kramon, E (2020) Does Public Support for Judicial Power Depend on Who Is in Political Power? Testing a Theory of Partisan Alignment in Africa. American Political Science Review 114(1), 144–163. https://doi.org/10.1017/S0003055419000704.
Beatty, P and Herrmann, D (2002) To Answer or Not to Answer: Decision Processes Related to Survey Item Nonresponse. In Dillman, DA, Eltinge, JL, Groves, RM and Little, RJ (eds), Survey Nonresponse. Hoboken, NJ: John Wiley and Sons, pp. 71–86.
Benstead, LJ (2014) Effects of Interviewer–Respondent Gender Interaction on Attitudes toward Women and Politics: Findings from Morocco. International Journal of Public Opinion Research 26(3), 369–383. https://doi.org/10.1093/ijpor/edt024.
Blaydes, L and Gillum, RM (2013) Religiosity-of-Interviewer Effects: Assessing the Impact of Veiled Enumerators on Survey Response in Egypt. Politics and Religion 6(3), 459–482. https://doi.org/10.1017/S1755048312000557.
Burchard, SM (2020) Get Out the Vote or Else: The Impact of Fear of Election Violence on Voters. Democratization 27(4), 588–604. https://doi.org/10.1080/13510347.2019.1710490.
Catania, JA, Binson, D, Canchola, J, Pollack, LM, Hauck, W and Coates, TJ (1996) Effects of Interviewer Gender, Interviewer Choice, and Item Wording on Responses to Questions Concerning Sexual Behavior. Public Opinion Quarterly 60(3), 345–375. https://doi.org/10.1086/297758.
Cialdini, RB and Goldstein, NJ (2004) Social Influence: Compliance and Conformity. Annual Review of Psychology 55, 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015.
Clayton, A and Zetterberg, P (2021) Gender and Party Discipline: Evidence from Africa’s Emerging Party Systems. American Political Science Review 115(3), 869–884. https://doi.org/10.1017/S0003055421000368.
Coffe, H and Bolzendahl, C (2011) Gender Gaps in Political Participation across Sub-Saharan African Nations. Social Indicators Research 102, 245–264. https://doi.org/10.1007/s11205-010-9676-6.
Conceicao, P et al. (2024) Human Development Report 2023/24. Breaking the Gridlock: Reimagining Cooperation in a Polarized World. Technical Report, United Nations Development Programme. https://hdr.undp.org/system/files/documents/global-report-document/hdr2023-24reporten.pdf.
Coutts, E and Jann, B (2011) Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT). Sociological Methods & Research 40(1), 169–193. https://doi.org/10.1177/0049124110390768.
Davis, RE, Couper, MP, Janz, NK, Caldwell, CH and Resnicow, K (2010) Interviewer Effects in Public Health Surveys. Health Education Research 25(1), 14–26. https://doi.org/10.1093/her/cyp046.
Demarest, L (2017) An Assessment of Interviewer Error in the Afrobarometer Project. Centre for Peace and Development Working Paper 53, Leuven, CRPD. https://soc.kuleuven.be/crpd/files/working-papers/WP53-Afrobarometer-Interviewer-error.pdf.
Dionne, K (2014) The Politics of Local Research Production: Surveying in a Context of Ethnic Competition. Politics, Groups, and Identities 2(3), 459–480. https://doi.org/10.1080/21565503.2014.930691.
Fjelde, H and Olafsdottir, G (2024) Viewing Violence through a Partisan Lens: How Electoral Violence Shapes Citizens’ Support for Democracy. Government and Opposition: An International Journal of Comparative Politics 60(2), 313–334. https://doi.org/10.1017/gov.2024.17.
Flesken, A and Hartl, J (2018) Party Support, Values, and Perceptions of Electoral Integrity. Political Psychology 39(3), 707–724. https://doi.org/10.1111/pops.12431.
Flores-Macias, F and Lawson, C (2008) Effects of Interviewer Gender on Survey Responses: Findings from a Household Survey in Mexico. International Journal of Public Opinion Research 20(1), 100–110. https://doi.org/10.1093/ijpor/edn007.
García-Peñalosa, C and Konte, M (2014) Why are Women Less Democratic than Men? Evidence from Sub-Saharan African Countries. World Development 59, 104–119. https://doi.org/10.1016/j.worlddev.2014.01.005.
Glynn, AN (2013) What Can We Learn with Statistical Truth Serum? Design and Analysis of the List Experiment. Public Opinion Quarterly 77(S1), 159–172. https://doi.org/10.1093/poq/nfs070.
Groves, RM (2005) Survey Errors and Survey Costs. Hoboken, NJ: John Wiley and Sons.
Groves, RM et al. (2009) Survey Methodology, Volume 561. Hoboken, NJ: John Wiley and Sons.
Guizzo Altube, M and Scartascini, C (2024) Gender-Based Research and Interviewer Effects: Evidence for Latin America and the Caribbean. Working Paper IDB-WP-1596. www.econstor.eu/bitstream/10419/299406/1/1888947039.pdf. https://doi.org/10.18235/0012886.
International Institute for Democracy and Electoral Assistance (2024) Women’s Political Participation: Africa Barometer 2024. Technical Report, International Institute for Democracy and Electoral Assistance, Stockholm. www.idea.int/sites/default/files/2024-07/womens-political-participation-africa-barometer-2024.pdf.
Johnson, TP and Van de Vijver, FJ (2003) Social Desirability in Cross-cultural Research. In Harkness, J, Van de Vijver, F and Mohler, P (eds), Cross-Cultural Survey Methods, Vol. 325. Hoboken, NJ: Wiley, pp. 195–206.
Krumpal, I (2013) Determinants of Social Desirability Bias in Sensitive Surveys: A Literature Review. Quality and Quantity 47(4), 2025–2047. https://doi.org/10.1007/s11135-011-9640-9.
Kuenzi, M and Lambright, GM (2011) Who Votes in Africa? An Examination of Electoral Participation in 10 African Countries. Party Politics 17(6), 767–799. https://doi.org/10.1177/1354068810376779.
Lau, CQ (2018) The Influence of Interviewer Characteristics on Support for Democracy and Political Engagement in Sub-Saharan Africa. International Journal of Social Research Methodology 21(4), 467–486. https://doi.org/10.1080/13645579.2017.1407087.
Lau, CQ, Baker, M, Fiore, A, Greene, D, Lieskovsky, M, Matu, K and Peytcheva, E (2017) Bystanders, Noise, and Distractions in Face-to-Face Surveys in Africa and Latin America. International Journal of Social Research Methodology 20(5), 469–483. https://doi.org/10.1080/13645579.2016.1208959.
Liu, M and Stainback, K (2013) Interviewer Gender Effects on Survey Responses to Marriage-related Questions. Public Opinion Quarterly 77(2), 606–618. https://doi.org/10.1093/poq/nft019.
Mazepus, H and Toshkov, D (2022) Standing up for Democracy? Explaining Citizens’ Support for Democratic Checks and Balances. Comparative Political Studies 55(8), 1271–1297. https://doi.org/10.1177/00104140211060285.
Rosenfeld, B, Imai, K and Shapiro, JN (2015) An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions. American Journal of Political Science 60(3), 783–802. https://doi.org/10.1111/ajps.12205.
Singer, M (2018) Delegating Away Democracy: How Good Representation and Policy Successes Can Undermine Democratic Legitimacy. Comparative Political Studies 51(13), 1754–1788. https://doi.org/10.1177/0010414018784054.
Sundström, A and Stockemer, D (2022) Measuring Support for Women’s Political Leadership: Gender of Interviewer Effects among African Survey Respondents. Public Opinion Quarterly 86(3), 668–696. https://doi.org/10.1093/poq/nfac031.
Tannenberg, M (2022) The Autocratic Bias: Self-Censorship of Regime Support. Democratization 29(4), 591–610. https://doi.org/10.1080/13510347.2021.1981867.
Tourangeau, R and Yan, T (2007) Sensitive Questions in Surveys. Psychological Bulletin 133(5), 859–883. https://doi.org/10.1037/0033-2909.133.5.859.
von Borzyskowski, I, Daxecker, U and Kuhn, PM (2022) Fear of Campaign Violence and Support for Democracy and Autocracy. Conflict Management and Peace Science 39(5), 542–564. https://doi.org/10.1177/07388942211026319.
West, BT and Blom, AG (2017) Explaining Interviewer Effects: A Research Synthesis. Journal of Survey Statistics and Methodology 5(2), 175–211. https://doi.org/10.1093/jssam/smw024.
Woldemichael, A (2020) Closing the Gender Gap in African Labor Markets Is Good Economics. Brookings Institution, 23 January. www.brookings.edu/articles/closing-the-gender-gap-in-african-labor-markets-is-good-economics/.
Zimbalist, Z (2018) ‘Fear-of-the-State Bias’ in Survey Data. International Journal of Public Opinion Research 30(4), 631–651. https://doi.org/10.1093/ijpor/edx020.
Zimbalist, Z (2022) Bystanders and Response Bias in Face-to-Face Surveys in Africa. International Journal of Social Research Methodology 25(3), 361–377. https://doi.org/10.1080/13645579.2021.1886397.
Zoch, G (2021) Gender-of-Interviewer Effects in Self-Reported Gender Ideologies: Evidence Based on Interviewer Change in a Panel Survey. International Journal of Public Opinion Research 33(3), 626–636. https://doi.org/10.1093/ijpor/edaa017.