
Measuring Gender in Comparative Survey Research

Published online by Cambridge University Press:  09 June 2025

Oscar Castorena, Vanderbilt University, USA
Eli G. Rau, Tecnologico de Monterrey, Mexico
Valerie Schweizer-Robinson, Vanderbilt University, USA
Elizabeth J. Zechmeister, Vanderbilt University, USA

Abstract

As societal conceptions of gender have evolved, so too have survey-based approaches to the measurement of gender. Yet, most research innovations and insights regarding the measurement of gender come from online or phone surveys in the Global North. We focus on face-to-face surveys in the Global South, specifically in the Latin America and Caribbean (LAC) region. Through in-person interviews, an online experiment, and survey experiments, we identify and assess an open-ended approach to incorporating respondent-provided gender identity in face-to-face interviews. Our results affirm that the measure is comparatively effective in minimizing discomfort and does not have substantial consequences for data quality across a diverse set of LAC countries. We discuss the potential traveling capacity of our approach and identify paths for further research on best practices in recording interviewee gender in face-to-face surveys in the LAC region and beyond.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

Nearly every political science survey collects data on gender. As societal understandings of gender have evolved, researchers are giving consideration to new measurement approaches (e.g., Albaugh et al. 2023; Padgett et al. 2024). Scholars in and outside political science have produced innovations, as well as lessons in what does not work. Yet, most research has focused on online or phone surveys in North America and Western Europe.

We shift the focus to face-to-face surveys, which present unique challenges for incorporating gender questions, and to the Latin America and Caribbean (LAC) region. Through in-person cognitive interviews, an online experiment, and a survey experiment, we identify and assess an effective approach to incorporating an expansive gender question in these contexts. In so doing, we consider potential trade-offs. Expansive gender questions can yield more accurate data on gender minorities, but asking people to self-identify their gender in face-to-face surveys—rather than having interviewers record their own inference about respondent gender—could have the following downsides:

  • Causing discomfort (among cis and/or trans respondents and interviewers)

  • Introducing missingness in a key variable (gender)

  • Increasing attrition and item nonresponse elsewhere in the survey

  • Influencing responses to other questions (e.g., by provoking backlash against LGBTQ+ inclusivity)

We show that most of these concerns can be mitigated by using a carefully designed question. We present an expansive gender question that minimizes (though does not eliminate) discomfort, and we show that this question does not increase attrition or item nonresponse. It does introduce some missingness into the gender variable, but the missingness rate is low (1.4%), especially when weighed against the rate of error in interviewer-inferred gender (up to 0.6%). We find some evidence that the gender question has small effects on responses to questions about LGBTQ+ rights among certain subgroups. In assessing these trade-offs, we identify issues and approaches for researchers to consider as they extend efforts to ask expansive gender questions across place and time.


Expansive Gender Questions

Many opinion scholars, among others, are interested in moving beyond the gender binary, which requires that surveys adapt traditional instruments (Albaugh et al. 2024; Medeiros, Forest, and Öhberg 2020). Asking expansive gender questions can reduce measurement error (Albaugh et al. 2023). Even when individual samples contain too few gender-minority respondents for analysis, it may be possible to pool multiple surveys to achieve leverage for valid statistical analyses (Lagos and Compton 2021). Some also consider that the inclusion of expansive gender questions has legitimizing power (Browne 2016). Surveys serve both instrumental (disseminating facts) and symbolic (signaling public values) roles (Herbst 1993). As such, expansive gender questions can normalize and validate gender minorities. Although we focus on gender in this project, these arguments apply to measures of other LGBTQ+ identities and to comprehensive considerations of the LGBTQ+ community.

In developing expansive gender questions, researchers must consider context-specific understandings, lexicons, and norms. The terminology people use to describe their gender identity varies across subgroups (e.g., by age), time, and place (Puckett et al. 2020). Individuals differ in how much they see sex and gender as distinct constructs (Reisner et al. 2014). And contexts vary in terms of norms and laws that stigmatize and create risk for gender minorities (Holzberg et al. 2019).


To date, survey innovations related to respondent-provided gender identity have centered on industrialized Western countries, with many recommendations focused on phone or online surveys. Experts have recently shifted to approaches that decouple questions about gender identity from those about sex or transgender experience (NASEM 2022). Multi-item modules provide the benefit of identifying transgender respondents, and recent work has provided guidance on how to word such questions (e.g., Albaugh et al. 2023; see also Puckett et al. 2020).

Scholars simultaneously have made strides in identifying best practices for gender questions. For example, Albaugh et al. (2023) use data from the online Canadian Election Studies (CES) to show that response options matter: Including an “other” category that mentions “trans, nonbinary, two-spirit, etc.” lowers the proportion of individuals identifying as men or women because that wording nudges man- and woman-identifying transgender individuals into that catch-all category.

Research in Western contexts finds that asking about gender typically does not evoke substantial discomfort, backlash, or incomprehension (Bauer et al. 2017; Holzberg et al. 2019; Lagos and Compton 2021; Medeiros, Forest, and Öhberg 2020). Yet, researchers recognize potential traveling limitations for scholarship that focuses on comparatively progressive, advanced-industrialized democracies (Bauer et al. 2017; NASEM 2022).

Looking across regions, European countries are among the most progressive on policies toward same-sex couples and gender recognition, and clusters of African and Asian countries are among the most conservative (footnote 1). As a region, the Americas occupies a shifting and heterogeneous middle ground on LGBTQ+ legal frameworks and public opinion (de Abreu Maia, Chiu, and Desposato 2023). As such, discomfort and confusion may be comparatively prevalent among Latin American respondents (see, e.g., Reisner et al. 2014). By discomfort, we refer to a range of emotional unease that is associated with sensitive survey questions (McNeeley 2012).

Another important consideration is the face-to-face mode, which is common among surveys in less developed contexts (Lupu and Michelitch 2018). Until recently, most face-to-face surveys instructed interviewers to observe and record the respondent’s apparent gender, asking for clarification only if needed (Westbrook and Saperstein 2015). Relying on interviewers to code various traits is not uncommon, yet interviewer-assessed gender may produce errors (Lagos and Compton 2021; Westbrook and Saperstein 2015).

To extend the state of the field, we developed an expansive gender question for inclusion on face-to-face surveys. We focus first on its use in the LAC region and later discuss its broader applicability. As a subregion, Latin America leans socially conservative, but attitudes toward LGBTQ+ individuals have trended more positive in recent decades (Daby and Rau 2024), and several countries have recently passed progressive LGBTQ+ rights legislation. The contemporary region is highly diverse, with Central American countries (and Paraguay) comparatively anti-LGBTQ+ and the Southern Cone countries (Brazil, Argentina, and Chile, among others) more progressive legally and attitudinally (de Abreu Maia, Chiu, and Desposato 2023).

In assessing expansive gender questions, we focus primarily on discomfort. Discomfort is not uncommon in surveys that include “sensitive” questions: items that provoke worry (e.g., about repercussions from honest answers), motivate censoring, and/or result in emotional unease (McNeeley 2012). Notably, discomfort can have negative effects, including response bias, misreporting, increased satisficing, increased mode effects, and interviewer effects (Andreenkova and Javeline 2018; Tourangeau and Yan 2007). We define discomfort as a feeling of unease that, in theory, may surface among interviewees or interviewers and—in the case of gender questions—may affect gender minorities, cisgender individuals, or both. We further consider that including expansive gender questions may prime individuals on LGBTQ+ issues, whether positively or negatively, in ways that shape the top-of-the-head answers they provide to downstream survey questions (see Medeiros, Forest, and Öhberg 2020).

Research Design

Our goal was to design an expansive gender question for face-to-face surveys that would be as unobtrusive as possible and then to assess whether it has more than minimal consequences for survey quality. We used a multistage, multimethod approach. Stage 1 (question design) comprised cognitive interviews with small convenience samples drawn from the local population by a survey firm. The objective was to assess two variants of an open-ended gender self-identification question with a diverse group, although there was no selection on LGBTQ+ identity in forming this group. Stage 2 (online experiment) compared the variant that worked better in the cognitive interviews with a two-question sex and gender module in an online experiment in Guatemala. Stage 3 (comparative study) included the gender self-identification question in face-to-face surveys in 22 LAC countries, alongside a question order experiment in three of these countries (Argentina, Brazil, and Chile) to test whether inclusion of the self-identification question affected attrition and responses to other questions; we then extended this experiment to two more socially conservative countries (El Salvador and Honduras).

Stage 1: Question Design

We measured gender self-identity using an open-ended question (coded in the field by the interviewer using a handheld device). We tested two variants:

  1. “For statistical purposes, could you please tell me your gender?”

  2. “For statistical purposes, could you please confirm your gender?”

The phrase “statistical purposes” is common in discussions around confidentiality (anonymity) and related privacy issues (e.g., NASEM 2022, 49). We adopted this language to emphasize that the data would not be used in an identifiable form. The phrase “confirm your gender” was used based on advice from experienced practitioners on ways to assure respondents that their gender was not being questioned.

Round 1 of cognitive interview pretesting centered on the first variant. Evidence indicated that it elicited both interviewee and interviewer discomfort. In Guatemala pretesting in 2022, cognitive interviews were conducted with 20 individuals—13 women and 7 men—ages 19 to 64. Eight respondents reacted to the gender question, unprompted, with discomfort, laughter, or surprise, or with remarks about how they thought others would feel about the question. Some interviewees became visibly uncomfortable. Others voiced concerns either for themselves (e.g., saying it was “strange” to be asked) or for others (e.g., saying that others would be “offended” or “upset”). Notably, interviewers also reported discomfort. This is relevant because requiring interviewers to ask sensitive questions can incentivize data fabrication (Logan et al. 2020).

Round 2 of pretesting centered on the second variant and provided some evidence of improvement. A second Guatemala pretest in 2022 included 10 new participants, 8 women and 2 men, ages 19 to 58. Three respondents showed discomfort with the gender identity question, a milder pattern of reactions than in round 1. When probed, some noted surprise (e.g., laughter) or discomfort (e.g., that one should be able to tell “just by looking”). Yet, both interviewees and interviewers appeared more willing to cooperate when the question asked respondents “to confirm” their gender.

Stage 2: Online Experiment

The cognitive interviews identified a gender self-identification question that appeared to reduce, but not remove, discomfort. Our next step was to assess whether the remaining discomfort was sufficiently large to decrease respondent cooperation or compromise data quality. Given the aforementioned benefits of each approach, as well as the dearth of research on expansive gender questions in the LAC region, we focused this stage on both the single gender question and a two-step instrument. We conducted an online experiment in Guatemala in 2022 that compared the “best” option from the field (“For statistical purposes, could you please confirm your gender?”) to a two-step sex + gender module recommended by NASEM (2022). Our regional expertise and reading of the literature suggested that a two-step question would provoke greater discomfort in a LAC context. We designed and programmed a survey that was implemented via Offerwise to 2,634 respondents sampled using quotas from the firm’s opt-in panel. The experiment had two treatment conditions: at the start of the survey, respondents were asked either the field-tested single-item measure (treatment 1) or a two-question module about their sex at birth and their current gender (treatment 2; footnote 2).

Post-treatment questions indicated that respondent levels of comfort with the gender questions were high, albeit significantly different from each other: on a 0–10 scale, averages were 9.11 for the single-item measure and 8.93 for the two-question module (footnote 3). We did not find significant differences across treatments in overall survey satisfaction, overall comfort, or reported confusion with the gender question. In brief, respondents evidenced little difficulty comprehending the single-item gender question and no major concerns regarding overall satisfaction and comfort (see online appendix table A4). However, there are three limitations: the sample skews toward higher levels of socioeconomic status; the study was conducted online, a format that reduces discomfort around sensitive questions; and we cannot assess data-quality issues such as breakoffs or nonresponse rates, given that online panel respondents are incentivized to complete the questionnaire.
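To illustrate the comparison behind these figures, the sketch below runs a two-tailed difference-in-means test on simulated 0–10 comfort ratings. The data, variable names, and the choice of a Welch test are assumptions for illustration (the article reports only the group means and a two-tailed p-value of 0.03); the actual analysis is available in the replication materials.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in data: 0-10 comfort ratings for each treatment arm.
# The paper reports means of 9.11 (single item) and 8.93 (two-step module)
# among 2,634 respondents; these draws only illustrate the test mechanics.
comfort_single = np.clip(rng.normal(9.1, 1.6, 1317), 0, 10)
comfort_twostep = np.clip(rng.normal(8.9, 1.6, 1317), 0, 10)

# Two-tailed two-sample t-test (Welch's version, not assuming equal
# variances); the paper does not specify the exact test variant.
t_stat, p_value = stats.ttest_ind(comfort_single, comfort_twostep, equal_var=False)
print(f"diff = {comfort_single.mean() - comfort_twostep.mean():.2f}, p = {p_value:.3f}")
```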

Stage 3: Comparative Survey and Question Order Experiment

We next inserted the single-item self-identification question into the 2023 AmericasBarometer by LAPOP Lab. Interviewers coded open-ended responses into an e-device, where the options were woman, man, neither, don’t know, or no response. Before the self-identification question, interviewers also coded their inference regarding the interviewee’s gender. This allows us to compare responses across interviewer-inferred and respondent-provided measures.

We also conducted a question order experiment. As noted earlier, asking a gender self-identification question in face-to-face surveys provokes some level of discomfort and other negative responses. Firms working on the 2023 AmericasBarometer reported that pretests in countries as varied as Chile, Guatemala, Jamaica, and Peru revealed that individuals felt uncomfortable because it was obvious by looking at them (Chile), it was disrespectful (Guatemala), it was offensive (Jamaica), or it was strange (Peru). In theory, this could cause individuals to refuse to answer the question or to terminate (break off) the survey early. Asking for gender self-identification might also signal that a project is progressive on LGBTQ+ issues. One pretest participant in Jamaica reacted by referring to the survey as part of a “woke agenda.” For these reasons, we designed the experiment to consider the following potential consequences for data quality: item nonresponse, survey attrition (breakoff), and shifts in attitudes toward LGBTQ+ rights.

Comparative Survey

For the face-to-face AmericasBarometer surveys (22 countries, N = 34,459; footnote 4), 37 individuals (0.11%) self-identified in a way that was coded “neither male/man or female/woman.” Of those who self-identified outside the binary, 19 were coded as men, 17 were coded as women, and 1 was coded as “don’t know” by the interviewer. Ten respondents were coded as “don’t know” by the interviewer but self-identified as either a man or woman. An additional 152 individuals who self-identified as women were initially coded by the interviewer as men, or vice versa (footnote 5). In total, 200 respondents (0.6% of the sample) were either misclassified or marked as “don’t know” by the interviewer. A further 488 respondents (1.4%) did not respond to the self-identification question. On the interviewer-assessed measure, the “don’t know” option was used only 12 times (in one of these cases, the respondent self-identified as nonbinary). These data yield two conclusions: The number of individuals self-identifying outside the conventional gender binary in LAC-region surveys is low, and some individuals appear to be miscoded by interviewer-determined measures of gender. We also see some nonresponse to the gender self-identification question, although not at rates higher than for other sensitive questions (footnote 6).
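As a sketch of how such rates can be computed, the snippet below cross-checks an interviewer-inferred column against a self-identification column. The column names and toy data are ours, not the AmericasBarometer codebook; the published figures come from the full dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical columns standing in for the two measures: the interviewer's
# pre-question inference and the coded open-ended self-identification.
df = pd.DataFrame({
    "gender_interviewer": ["man", "woman", "man",  "dont_know", "woman",   "man"],
    "gender_self":        ["man", "man",   np.nan, "woman",     "neither", "man"],
})

n = len(df)
answered = df["gender_self"].notna()

# Missingness on the self-identification item (1.4% in the pooled sample).
missing_rate = (~answered).mean()

# Cases where the interviewer's inference disagrees with self-identification
# or was "don't know" (200 respondents, or 0.6%, in the pooled sample).
mismatch_rate = ((df["gender_interviewer"] != df["gender_self"]) & answered).sum() / n

print(f"missing: {missing_rate:.1%}, mismatch/don't know: {mismatch_rate:.1%}")
```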

Question Order Experiment

To systematically test for unintended effects of the gender question, we ran a preregistered experiment in three countries: Argentina, Brazil, and Chile (footnote 7). We subsequently extended the study to national surveys in El Salvador and Honduras (footnote 8). Individuals were randomly assigned to one of two conditions: In the treatment condition, the survey asked about gender at the beginning alongside other basic demographic information, whereas in the control condition, respondents received the gender question at the end of the survey.

We examined whether the gender self-identification question reduced engagement with the survey—via attrition and item nonresponse rates—and whether (via a priming effect) it affected expressed attitudes about LGBTQ+ rights. For this latter test, we analyzed response patterns on six questions (of which each respondent received four). These questions came after the gender question in the treatment group and before it in the control group. Each respondent was asked their approval or disapproval (on a 0–10 scale) of gay individuals’ right to run for office and of same-sex marriage. Respondents were asked two additional questions about equal rights and adoption, with random assignment of whether those questions asked about sexual minorities (LGB) or gender minorities (T; footnote 9).
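The sketch below illustrates the two layers of random assignment just described: the question order treatment and the LGB versus T wording of the two additional items. The function and seeding scheme are hypothetical; they do not reproduce LAPOP’s fieldwork software.

```python
import random

def assign_conditions(respondent_id: int, seed: int = 2023) -> dict:
    """Illustrative per-respondent assignment for the two randomizations
    described above: question order (gender item at the start vs. the end)
    and LGB vs. T wording of the equal-rights and adoption items.
    A sketch only, not LAPOP's fieldwork implementation."""
    rng = random.Random(seed * 1_000_003 + respondent_id)  # reproducible draw
    return {
        "gender_item_position": rng.choice(["start", "end"]),  # treatment / control
        "rights_adoption_wording": rng.choice(["LGB", "T"]),   # outcome-item variant
    }

# Example: deterministic assignment for respondent 101.
print(assign_conditions(101))
```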

The data from the Southern Cone countries are from the 2023 AmericasBarometer. These countries—Argentina, Brazil, and Chile—have some of the most progressive laws and attitudes on LGBTQ+ issues in the LAC region. In the 2023 AmericasBarometer, support for equal rights for gender minorities was 64% in Argentina, 63% in Chile, and 60% in Brazil; only three other LAC countries registered majority support (Lupu et al. 2023). We might expect comparatively lower risk of backlash and attrition in such contexts. In 2024, we implemented the same experimental design in face-to-face national surveys conducted by LAPOP in El Salvador and Honduras. Central Americans hold more conservative attitudes on LGBTQ+ rights: Support for same-sex marriage in El Salvador and Honduras is only 14% and 16%, respectively, compared to 49% to 61% support in Argentina, Brazil, and Chile (Lupu et al. 2023; see also de Abreu Maia, Chiu, and Desposato 2023). The 2024 surveys in Honduras and El Salvador did not include questions on LGBTQ+ rights, but they do enable us to test for attrition and nonresponse bias.

Attrition and Nonresponse

Survey attrition (also known as breakoff, or early termination of the survey) and item nonresponse reflect survey disengagement and can create survey bias (Roßmann, Blumenstiel, and Steinbrecher 2015). Figure 1 shows the estimated treatment effect on attrition and on overall nonresponse rates across all questions in the survey. The treatment did not increase attrition or item nonresponse in any of the five countries (footnote 10).

Figure 1 Attrition and Overall Nonresponse

Treatment effects are estimated from linear regressions, controlling for interviewer ID. The estimated effects are presented with 95% confidence intervals.
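A minimal sketch of this specification, assuming a respondent-level data frame with hypothetical column names (attrited, treated, interviewer_id), might look as follows: it estimates the treatment effect on attrition in a linear model with interviewer fixed effects and reports a 95% confidence interval.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for one country's survey; column names are hypothetical.
rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "attrited": rng.binomial(1, 0.03, n),      # 1 = broke off before finishing
    "treated": rng.integers(0, 2, n),          # gender item asked at the start
    "interviewer_id": rng.integers(1, 41, n),  # 40 interviewers
})

# Linear probability model with interviewer fixed effects, mirroring the
# figure note ("linear regressions, controlling for interviewer ID").
fit = smf.ols("attrited ~ treated + C(interviewer_id)", data=df).fit()
lo, hi = fit.conf_int().loc["treated"]  # 95% confidence interval by default
print(f"effect on attrition: {fit.params['treated']:.4f} [{lo:.4f}, {hi:.4f}]")
```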

Attitudes toward LGBTQ+ Rights

We next consider whether the treatment affected attitudes toward LGBTQ+ rights. Figure 2 shows the estimated average treatment effect (ATE) for each outcome question in the Southern Cone countries. In only one instance does treatment affect aggregate opinion: It has a positive effect on Adoption (LGB) for Brazil.

Figure 2 Average Treatment Effects

Treatment effects are estimated from linear regressions, controlling for age, urban/rural residence, interviewer ID, and whether any other individuals were present at the time of interview. The estimated effects are presented with 95% confidence intervals.

We also preregistered analysis of heterogeneous effects by age and education. Whereas we do not find evidence that our gender identity question shifts aggregate opinion on most LGBTQ+ rights questions, we do find evidence that it affects some individual-level responses. We observe heterogeneous effects by age on four questions in Argentina and on one question each in Brazil and Chile. Across all LGBTQ+ rights questions, younger respondents are more supportive than older respondents. The treatment (receiving the gender identity question at the start of the survey) widens this gap: It leads younger respondents to express even more support for LGBTQ+ rights and older respondents to express even less support (footnote 11).
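A heterogeneous-effects test of this kind can be run with a treatment-by-age interaction. The sketch below uses simulated data constructed to display the pattern described above; variable names and magnitudes are illustrative, not estimates from the surveys.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data built to show the described age pattern: the treatment
# raises expressed support among younger respondents and lowers it among
# older ones. All values are illustrative only.
rng = np.random.default_rng(2)
n = 1500
age = rng.integers(18, 85, n)
treated = rng.integers(0, 2, n)
support = np.clip(
    8 - 0.05 * (age - 18)                    # baseline: support declines with age
    + treated * (0.4 - 0.015 * (age - 18))   # treatment effect flips sign with age
    + rng.normal(0, 2, n),
    0, 10,
)
df = pd.DataFrame({"support": support, "treated": treated, "age": age})

# The treated:age coefficient captures the heterogeneous effect: negative
# here, meaning the treatment widens the young-old gap in expressed support.
fit = smf.ols("support ~ treated * age", data=df).fit()
print(fit.params[["treated", "treated:age"]])
```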

Conclusion

We set out to identify an approach to measuring gender in face-to-face surveys, with a focus on the LAC region and with the goals of (1) allowing respondents to self-identify, rather than relying on interviewer perceptions; (2) minimizing discomfort; and (3) not introducing data-quality problems. Through a multistage process, we developed question wording that improves the measurement of gender (some respondents identified with a gender different from that perceived by the interviewer), comparatively minimizes discomfort, and does not introduce major problems for attrition or nonresponse bias across five experiments. Although there is some (but comparatively low) missingness on the self-identified gender question in some countries, the absence of attrition and response-rate effects holds in both liberal and conservative countries. This suggests that our approach may be useful across a range of contexts, including face-to-face surveys in advanced industrialized settings. We caution, however, against generalizing to much more conservative settings, especially to contexts where disclosing certain gender identities would pose legal or other risks.

We do not identify substantive effects of the question on aggregate opinion toward LGBTQ+ rights—an important finding for survey projects that want to improve their measurement of gender while maintaining the ability to track shifts over time in public opinion on LGBTQ+ issues. We do find evidence that the gender question affects some individual responses on the LGBTQ+ rights questions, with heterogeneous effects by age on certain questions (footnote 12).

Our question evokes comparatively low levels of discomfort, but it does still make some respondents uncomfortable. Discomfort is not necessarily a reason to abandon important questions, but it should be consciously considered and weighed in the context of the survey’s goals (Amaya, Vogels, and Brown 2020; NASEM 2022). Some pretest reports of discomfort appear to be instances in which the question challenged socially conservative gender norms. At the same time, we also saw evidence that some interviewees became unsettled about their own gender presentation or sexuality. In the 2023 AmericasBarometer pretests, one Peruvian respondent asked whether she had been asked the gender identification question because the interviewer thought she was gay. Given that conventional surveys have asked gender only “if not obvious” (Westbrook and Saperstein 2015), it is not unreasonable for a respondent to think their gender is uniquely being questioned. Future research might consider ways to further minimize these various types of discomfort.

Another direction for future research is to drill more deeply into heterogeneous effects by interviewee or interviewer traits. Still another pathway is to design and test alternative ways of collecting data on gender and related topics. One approach could be to ask respondents to fill out a brief demographic form (on paper or via an e-device) that includes gender alongside other questions. Compared to asking directly, this approach ought to reduce discomfort, and it would avoid interviewers assigning a gender based on their perceptions. Filling out a demographic sheet is a familiar activity for many people; yet, it may not always be feasible in face-to-face surveys because of illiteracy or logistical issues in the field.

Although we focused on gender, we hope that our study contributes to discussions on methods for measuring various LGBTQ+ identifications. Even though measures of trans identity and sexual orientation are not used as widely across surveys as measures of gender, a growing number of projects have started to collect data on LGBTQ+ identification (see, e.g., Padgett et al. 2024). Our gender question can be combined with additional questions about LGBTQ+ identity, including whether the respondent identifies as transgender (see Albaugh et al. 2023 and Puckett et al. 2020 on question wording). As with gender identity, extant research often focuses on online surveys in advanced industrialized democracies; our study’s multistage approach offers one template for designing and assessing LGBTQ+ self-identification questions across various contexts.

In addition to offering a recommendation for how to ask a gender self-identification question in a face-to-face survey that “works” across a range of sociolegal contexts, we also raise a set of considerations and approaches relevant to testing and improving measurement strategies. Within the confines of this study, what we can state with confidence is that (1) interviewers sometimes make mistakes in describing respondents’ gender, which can be corrected by allowing the respondent to self-identify and (2) even if some respondents express discomfort, we find no evidence (in the contexts we studied) that this gender self-identification question poses a considerable threat to data quality.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit http://doi.org/10.1017/S1049096525000381.

ACKNOWLEDGEMENTS

The authors thank the LAPOP Lab research team for their extensive and expert efforts in designing and implementing the AmericasBarometer and other surveys analyzed in this article.

DATA AVAILABILITY STATEMENT

Code and data that reproduce the findings of this study are openly available at the Harvard Dataverse at https://doi.org/10.7910/DVN/V0P7CN (Castorena et al. 2025).

CONFLICT OF INTEREST

The authors declare no ethical issues or conflicts of interest in this research.

Footnotes

2. See appendix section I for question wording.

3. Two-tailed p-value for the difference in means: 0.03.

4. Appendix table A6 contains country-level descriptives; full technical information is at https://www.vanderbilt.edu/lapop/studies-country.php. We exclude the two AmericasBarometer surveys conducted by phone (Haiti, Nicaragua).

5. We assume that the interviewer’s determination was inaccurate.

6. The highest rate of nonresponse on the gender self-identification question was 4.72% in Paraguay, and the lowest was 0% in El Salvador. Appendix table A6 has more information on missingness by country. For comparison, item nonresponse for the 2023 AmericasBarometer income question in the same countries is 12.95% (ranging from 3.7% to 29% at the country level). For education, a less sensitive question, item nonresponse was 0.67% (ranging from 0% to 2.6%).

7. Preregistration available at https://osf.io/cy5rp/.

8. See appendix section 2 for survey details.

9. See appendix table A5 for question wording.

10. For analysis of nonresponse for LGBTQ+-related questions, see appendix figures A1 and A2.

11. See appendix figures A5–A10 for full details.

12. These heterogeneous effects do not necessarily imply data quality problems. The opinions expressed in the treatment condition could be more sincere than those in the control condition (e.g., younger respondents might hold more strongly positive views than they express in the control condition and take the expansive gender question as a signal that they can comfortably express those views to the interviewer).

References

Albaugh, Quinn, Harell, Allison, Loewen, Peter John, Rubenson, Daniel, and Stephenson, Laura B. 2023. “Measuring Transgender and Non-Binary Identities in Online Surveys: Evidence from Two National Election Studies.” APSA Preprints. doi: 10.33774/apsa-2023-5tr5q.
Albaugh, Quinn, Harell, Allison, Loewen, Peter John, Rubenson, Daniel, and Stephenson, Laura B. 2024. “From Gender Gap to Gender Gaps: Bringing Nonbinary People into Political Behavior Research.” Perspectives on Politics 1–19. doi: 10.1017/S1537592724000975.
Amaya, Ashley, Vogels, Emily A., and Brown, Anna. 2020. “Adapting How We Ask About the Gender of Our Survey Respondents.” Pew Research Center: Decoded. https://pewrsr.ch/3W2GGCN.
Andreenkova, Anna V., and Javeline, Debra. 2018. “Sensitive Questions in Comparative Surveys.” In Advances in Comparative Survey Methods: Multinational, Multiregional, and Multicultural Contexts (3MC), eds. Johnson, Timothy P., Pennell, Beth-Ellen, Stoop, Ineke A. L., and Dorer, Brita, 139–60. John Wiley & Sons. doi: 10.1002/9781118884997.ch7.
Bauer, Greta R., Braimoh, Jessica, Scheim, Ayden I., and Dharma, Christoffer. 2017. “Transgender-Inclusive Measures of Sex/Gender for Population Surveys: Mixed-Methods Evaluation and Recommendations.” PLOS ONE 12(5): e0178043. doi: 10.1371/journal.pone.0178043.
Browne, Kath. 2016. “Queer Quantification or Queer(y)ing Quantification: Creating Lesbian, Gay, Bisexual or Heterosexual Citizens Through Governmental Social Research.” In Queer Methods and Methodologies: Intersecting Queer Theories and Social Science Research, eds. Browne, Kath and Nash, Catherine J., chap. 14. Routledge. doi: 10.4324/9781315603223.
Castorena, Oscar, Rau, Eli G., Schweizer-Robinson, Valerie, and Zechmeister, Elizabeth J. 2025. “Replication Data for: Measuring Gender in Comparative Survey Research.” PS: Political Science & Politics. doi: 10.7910/DVN/V0P7CN.
Daby, Mariela, and Rau, Eli. 2024. “Towards Tolerance and Acceptance: Public Opinion and LGBTQ+ Politics in Latin America.” Paper presented at the annual meeting of the American Political Science Association, Philadelphia.
de Abreu Maia, Lucas, Chiu, Albert, and Desposato, Scott. 2023. “No Evidence of Backlash: LGBT Rights in Latin America.” Journal of Politics 85(1): 49–63. doi: 10.1086/720940.
Herbst, Susan. 1993. Numbered Voices: How Opinion Polling Has Shaped American Politics. University of Chicago Press.
Holzberg, Jessica, Ellis, Renee, Kaplan, Robin, Virgile, Matt, and Edgar, Jennifer. 2019. “Can They and Will They? Exploring Proxy Response of Sexual Orientation and Gender Identity in the Current Population Survey.” Journal of Official Statistics 35(4): 885–911. doi: 10.2478/jos-2019-003.
Lagos, Danya, and Compton, D’Lane. 2021. “Evaluating the Use of a Two-Step Gender Identity Measure in the 2018 General Social Survey.” Demography 58(2): 763–72. doi: 10.1215/00703370-8976151.
LAPOP Lab. AmericasBarometer 2023, v1.0. Center for Global Democracy. https://www.vanderbilt.edu/lapop.
Logan, Carolyn, Parás, Pablo, Robbins, Michael, and Zechmeister, Elizabeth J. 2020. “Improving Data Quality in Face-to-Face Survey Research.” PS: Political Science & Politics 53(1): 46–50. doi: 10.1017/S1049096519001161.
Lupu, Noam, and Michelitch, Kristin. 2018. “Advances in Survey Methods for the Developing World.” Annual Review of Political Science 21: 195–214. doi: 10.1146/annurev-polisci-052115-021432.
Lupu, Noam, Rodríguez, Mariana, Wilson, Carole J., and Zechmeister, Elizabeth J., eds. 2023. Pulse of Democracy. Vanderbilt University: LAPOP.
McNeeley, Susan. 2012. “Sensitive Issues in Surveys: Reducing Refusals While Increasing Reliability and Quality of Responses to Sensitive Survey Items.” In Handbook of Survey Methodology for the Social Sciences, ed. Gideon, Lior, chap. 2. Springer. doi: 10.1007/978-1-4614-3876-2_22.
Medeiros, Mike, Forest, Benjamin, and Öhberg, Patrik. 2020. “The Case for Non-Binary Gender Questions in Surveys.” PS: Political Science & Politics 53(1): 128–35. doi: 10.1017/S1049096519001203.
NASEM (National Academies of Sciences, Engineering, and Medicine). 2022. Measuring Sex, Gender Identity, and Sexual Orientation. National Academies Press. doi: 10.17226/26424.
Padgett, Zoe, Gutierrez, Sam, Wronski, Laura, and Barari, Soubhik. 2024. “Measuring the Growth of Gender-Inclusive Surveys Around the World.” Survey Practice 17. doi: 10.29115/SP-2023-0019.
Puckett, Jae, Brown, Nina C., Dunn, Terra, Mustanski, Brian, and Newcomb, Michael E. 2020. “Perspectives from Transgender and Gender Diverse People on How to Ask About Gender.” LGBT Health 7(6): 305–11. doi: 10.1089/lgbt.2019.0295.
Reisner, Sari L., Biello, Katie, Rosenberger, Joshua G., Austin, S. Bryn, Haneuse, Sebastien, Perez-Brumer, Amaya, Novak, David S., and Mimiaga, Matthew J. 2014. “Using a Two-Step Method to Measure Transgender Identity in Latin America/the Caribbean, Portugal, and Spain.” Archives of Sexual Behavior 43: 1503–14. doi: 10.1007/s10508-014-0314-2.
Roßmann, Joss, Blumenstiel, Jan Eric, and Steinbrecher, Markus. 2015. “Why Do Respondents Break Off Web Surveys and Does It Matter? Results From Four Follow-Up Surveys.” International Journal of Public Opinion Research 27(2): 289–302. doi: 10.1093/ijpor/edu025.
Tourangeau, Roger, and Yan, Ting. 2007. “Sensitive Questions in Surveys.” Psychological Bulletin 133(5): 859–83. doi: 10.1037/0033-2909.133.5.859.
Westbrook, Laurel, and Saperstein, Aliya. 2015. “New Categories Are Not Enough: Rethinking the Measurement of Sex and Gender in Social Surveys.” Gender & Society 29(4): 534–60. doi: 10.1177/0891243215584758.